Wednesday, June 04, 2014

Marketing Automation Buyer Survey: Many Myths Busted but Planning is Still Key to Success

The marketing automation user survey I mentioned last March has finally been published on the VentureBeat site (you can order it here). At more than 50 pages and with dozens of graphs and charts, it’s not light reading. But it’s still fascinating because the findings challenge much of the industry’s conventional wisdom.

For example, industry deep thinkers often say that deployment failure has more to do with bad users than bad software. The underlying logic runs like this: all major marketing automation systems have similar features, and they certainly share a core set that is more than adequate for most marketing organizations. So failure must be the result of poor implementation, not of choosing the wrong tools.

But, as I reported in my March post, it turns out that 25% of users cited “missing features” as a major obstacle – indicating that the system they bought wasn’t adequate after all. My analysis since then found that people who cited “missing features” were among the least satisfied of all users, so it really did matter that those features were missing. The contrast is with obstacles such as creating enough content, which were cited by people who were highly satisfied, suggesting those obstacles were ultimately overcome.*

[Table: average satisfaction of users citing each obstacle, shown as deviation from the 3.21 overall average]

We also found that people who evaluated on “breadth of features” were far more satisfied than people who evaluated on price, ease of learning, or integration. This is independent confirmation of the same point: people who took care to find the features they needed were happy with the result; those who didn’t were not.

[Chart: average satisfaction by primary selection criterion]

But the lesson isn’t just that features matter. Other answers revealed that satisfaction also depended on taking enough time for a thorough vendor search, on evaluating multiple systems, and (less strongly) on using multiple features from the start. These findings all point to the conclusion that the primary driver of marketing automation success is careful preparation, which means defining in advance the types of programs you’ll run and how you’ll use marketing automation. Buying the right system is just one result of a solid preparation process; it doesn’t cause success by itself. So it’s correct that results ultimately depend on users rather than technology, but not in the simplistic way this is often presented.

I’d love to go through the survey results in more detail because I think they provide important insights about organization, integration, training, outside resources, project goals, and other issues. But then I’d end up rewriting the entire report. At the very least, take a look at the executive summary, available on the VentureBeat site for free. And if you really care about marketing automation success, tilt the odds in your favor by buying the full report.

__________________________________________________________________________
* I really struggled to find the best way to present this data. There are two dimensions: how often each obstacle was cited, and the average satisfaction score (on a scale of 1 to 5) of the people who cited that obstacle. The table in the body of the post just shows the deviation of the satisfaction scores from the overall average of 3.21, highlighting the “impact” of each obstacle (with the caveat that “impact” implies causality, which isn’t really proven by the correlation). The more standard way to show two dimensions is a scatter chart like the one below, but I find it difficult to read and it doesn’t communicate any message clearly.

[Chart: scatter of obstacle citation frequency vs. average satisfaction]
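
For anyone who wants to experiment with this presentation, here is a minimal sketch of the scatter approach in Python with matplotlib. The obstacle names and values are hypothetical placeholders, not the survey’s actual figures; only the 3.21 overall average comes from the data described above.

    # Scatter presentation: x = how often each obstacle was cited,
    # y = average satisfaction (1-5 scale) of the people who cited it.
    # Obstacle names and values below are hypothetical placeholders.
    import matplotlib.pyplot as plt

    obstacles = {
        "Missing features": (0.25, 2.9),   # placeholder satisfaction values
        "Creating content": (0.40, 3.5),
        "Integration":      (0.30, 3.2),
        "Training":         (0.20, 3.3),
    }
    overall_avg = 3.21  # overall average satisfaction from the survey

    fig, ax = plt.subplots()
    for name, (freq, sat) in obstacles.items():
        ax.scatter(freq, sat)
        ax.annotate(name, (freq, sat), textcoords="offset points", xytext=(5, 5))
    ax.axhline(overall_avg, linestyle="--", label="Overall average (3.21)")
    ax.set_xlabel("Share of respondents citing obstacle")
    ax.set_ylabel("Average satisfaction of those citing it (1-5)")
    ax.legend()
    plt.show()
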
Another option I tried was a bar graph showing the frequency of each obstacle, with color coding to show the satisfaction level. This does show both pieces of information, but you have to look closely to see the red and green bars: the image is dominated by frequency, which is not the primary message. If anyone has a better solution, I’m all ears.

[Chart: obstacle citation frequency, color-coded by satisfaction]

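And here is one way to sketch that color-coded bar alternative, again with the same hypothetical placeholder numbers: each bar’s height is the citation frequency, and its color marks whether satisfaction was above (green) or below (red) the 3.21 average.

    # Bar alternative: height = citation frequency, color = satisfaction
    # relative to the overall average. Same hypothetical placeholders.
    import matplotlib.pyplot as plt

    obstacles = {
        "Missing features": (0.25, 2.9),
        "Creating content": (0.40, 3.5),
        "Integration":      (0.30, 3.2),
        "Training":         (0.20, 3.3),
    }
    overall_avg = 3.21

    names = list(obstacles)
    freqs = [obstacles[n][0] for n in names]
    colors = ["green" if obstacles[n][1] >= overall_avg else "red" for n in names]

    fig, ax = plt.subplots()
    ax.bar(names, freqs, color=colors)
    ax.set_ylabel("Share of respondents citing obstacle")
    ax.set_title("Obstacle frequency, colored by satisfaction vs. 3.21 average")
    plt.show()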