Orange is my favorite color

Mark Greene from Cars Yeah interviewed me last week on business, cars and lessons learned. We talked about what got me into cars in the first place, the one car I would buy if money was no object and what has me excited about the future of

Brian Ghidinelli interview on Cars Yeah

It was a lot of fun! Mark has interviewed hundreds of interesting business and motorsport personalities and to be included on the list is an honor. Be careful – you might spend all day listening!

I love researching things. Learning about something new and finding the best way to approach a problem feeds the engineering part of my background. I satisfy that by reading a ton and because of my online businesses, I read a lot of SaaS-related online marketing and sales material. But sometimes I read more than I act. As a recent example, in setting up the best possible go-to-market strategy for our new live timing app, RaceHero, I ran out of time before vacation and lost weeks out of my marketing strategy by failing to kick it off before I left town. That same “cobbler’s shoes” fate had befallen my email lists for the past 9 years.

This year I’ve been working to do more A/B testing. A/B tests are mathematically backed competitions between two or more options, scored by the actions of the users. They may also be the single highest-ROI tactic for software companies. These tests can range from a simple color change on a button (it’s shocking how much that can matter) all the way to a completely different web page. I was first exposed to the technique back in 2001 at Yahoo! when I was helping redesign their search results, but it wasn’t until the last 18 months that I ran the first A/B test on
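For the curious, here’s a minimal sketch of how variant assignment in a test like this often works. The experiment and variant names are invented for illustration, not our actual setup: hashing the user ID with the experiment name gives a deterministic, even split, so a returning user always sees the same version.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an A/B test variant.

    Hashing user_id together with the experiment name means the same
    user always lands in the same bucket, and different experiments
    split users independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# A returning user always sees the same variant:
assert assign_variant("user-42", "signup-checkboxes") == \
       assign_variant("user-42", "signup-checkboxes")
```

You then record which bucket each user saw alongside the conversion event, and compare rates between buckets once the sample is large enough.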

Which brings me to how I grew one of my mailing lists by 53% in 45 days. In the history of this 9-year-old list, the last two months would make a hockey stick look like a rolling foothill. This is like the face of a glacier where it meets the ocean: a vertical line that reads like an error.

The technique is perhaps too simple to be valuable by itself: I put the signup form where many more people would see it in their routine use of our platform. Previously, we had a few CTAs in our transactional emails and on our event calendar but the actual signup took place on a mailing lists screen under “My Account”. Few people went there looking to add more email to their inbox so list growth was steady but slow. Now, when users create a new account or reconfirm their details, they see checkboxes at the bottom of their personal information letting them opt-in to the lists. Thousands of people see these screens every month and because the lists are valuable, thousands of them are signing on.

“It was just a test”

The obvious question is: why didn’t we do this sooner? The answer is that I wasn’t sure these transactional flows were appropriate places to insert a list signup. I had concerns that it might negatively impact registration conversion. So I hemmed. And I hawed. And I periodically looked at the issue in our bug tracker but never quite got over the “ewwwww” feeling enough to put it in place.

That’s the beauty of A/B testing. A feature request in our bug tracker is a stone being added to a wall with mortar: it becomes a seemingly permanent piece of your architecture, of your user interface and of your responsibility. But an A/B test, why, that’s just a test! We have made no commitments to keeping it around. We’re not even sure we like it! Implementation only took a few hours so we could rip it out at any time for any reason and respond, “It was just a test.” Simply doing it, rather than talking about it, is rule #2 of A/B testing fight club.

Since it was just a test, we hoped for a nice bump to justify keeping it around. We were blown away. At the current rate, we will reach 25,000 subscribers (333% growth) by the end of 2015. And because we have a large percentage of first-time participants come through our service, we should see a permanently scaled growth rate. As an additional bonus, this list drives participation for our events so we’re creating a positive feedback loop for our event organizers and our bottom line too.

All because of a feature I was skeptical of “implementing” but happy to “test”.

Remember: making the call is making progress. Doing is better than planning. Execution is more valuable than ideas. Look at your to-do list and find one or two things you’ve been putting off. What can you do to “test” it (whatever that might mean in your case) to move it ahead and gain confidence in your choice?

I’m automating online marketing for event organizers at by synchronizing event listings with third party calendars. I’m working with a typical API which is modeled such that you create a venue, they return an ID and then you use that ID in your event creation call. Pretty standard stuff.

The challenge comes when I want to update or synchronize the listings later. Search makes it difficult to tell an event or venue I’ve created in the system from one someone else has created (which may have different attributes), or to know whether I’m matching the right item at all (do we trust a string comparison on the name?). The venues in this API do support an arbitrary properties collection where I could stash an ID from our system, but you can’t search on those properties. That leaves searching, then looping to see if a property matches our original request and, if nothing matches, creating a new venue.
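In code, that workaround looks roughly like this. Everything here is hypothetical, including the in-memory stand-in for the third-party API; it exists only to show the search-then-loop shape.

```python
class FakeVenueApi:
    """In-memory stand-in for the third-party API (names invented)."""

    def __init__(self):
        self._venues = []

    def search_venues(self, name):
        # The real API only lets us search by attributes like name.
        return [v for v in self._venues if v["name"] == name]

    def create_venue(self, name, properties):
        venue = {"id": len(self._venues) + 1,
                 "name": name, "properties": properties}
        self._venues.append(venue)
        return venue


def find_or_create_venue(api, our_id, name):
    """Search by name, then loop hoping a result carries our stashed ID."""
    for venue in api.search_venues(name=name):
        if venue.get("properties", {}).get("external_id") == our_id:
            return venue
    # No match found: create a new venue with our ID stashed in the
    # unsearchable properties bag, risking duplicates along the way.
    return api.create_venue(name=name, properties={"external_id": our_id})
```

Every sync pass has to repeat this dance for every record, and a renamed venue silently breaks the name-based search entirely.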

It doesn’t have to be that tedious. As a rule of thumb, any API that allows the creation of a logical object (people, events, places, etc.) where the record of authority may originate in another system should accept a foreign identifier as a standard property and allow searching against that identifier.
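A sketch of what that rule asks for, again with invented names: the API stores the caller’s identifier as a first-class, indexed field, so synchronizing becomes a single idempotent upsert keyed on it.

```python
class VenueApiWithForeignId:
    """Hypothetical API that treats a foreign identifier as searchable."""

    def __init__(self):
        self._by_external_id = {}  # indexed lookup on the caller's ID

    def upsert_venue(self, external_id, name):
        """One call: create if we've never seen this ID, update otherwise."""
        venue = self._by_external_id.get(external_id)
        if venue is None:
            venue = {"id": len(self._by_external_id) + 1,
                     "external_id": external_id,
                     "name": name}
            self._by_external_id[external_id] = venue
        else:
            venue["name"] = name
        return venue
```

The caller never loops over search results or risks duplicates; renames and re-syncs just work, because identity lives in the ID, not the name.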

I’m sure there are APIs that follow this practice, but it should be a standard. Simplify my integration efforts so that I can easily send and synchronize data with you. Keep us loosely coupled. It’s one more field on your end; without it, it’s a ton of code on my end and a more fragile relationship. Don’t make me maintain my own database table of ID mappings.

Fantastic race weekend coming from 7th to best a field of 45 of the best racers on the west coast:

Spec Miata came in its traditional form, with six cars up front in the lead pack through the early stages. Kyle Kaiser’s metallic blue No. 88 Miata was the first of the lead group to fall back when he spun, and with two to go it was down to Charlie Hayes, Tyler Vance, Brian Ghidinelli and Joey Jordan.

Hayes, in the No. 22 TFB/AIM Tires/RM Autosports Miata, and Vance, in the No. 85 TMG/RM Autosports/Sparco Miata, were racing hard to hold the top spot when they went side by side, and then off the track, in turn nine on the way to the one-to-go board. Both Ghidinelli and Jordan took advantage of the off to move into the lead and begin the final lap.

Ghidinelli kept Jordan behind him for the final circuit, and entered the last turn conscientiously reminding himself not to overcook the corner. With that in mind, Ghidinelli’s No. 12 Mazda Miata probably slowed too much, and Jordan got a run up the front straight to the finish. It was too little, too late for Jordan’s No. 47 Miata, and Ghidinelli crossed the stripe in front by less than a car length.

Next up, Laguna Seca… and then onwards to the SCCA Runoffs in October.