Testing and Enterprise Software

A few weeks ago I asked a question on Twitter, looking for feedback about a certain workflow in the product I help to manage. In part because the product in question (Adobe Analytics) caters to an audience that thinks about data-driven optimization, a well-meaning user of the product suggested that we A/B test the workflow with users, and then promote the “winner” so that it becomes the workflow for everyone.

A/B Testing Is Almost Always a Great Idea

Generally, I am a huge fan of A/B (or “split”) testing. In fact, my company sells an industry-leading testing and optimization solution (Adobe Target) which helps companies all over the world improve web sites and mobile apps for their customers. While I myself am not an expert on testing theory, the basics are easy to grasp: If you’re not sure how to improve your customer experience (and who is ever really sure?), run a test and let the results speak for themselves. It’s an awesome concept, the returns on investment are huge, and almost everyone who has a digital presence should be doing it.

Almost.

But Probably Not for Enterprise Software

The amazing Blair Reeves has written an excellent piece on the difference between product management for enterprise software and product management for consumer software. I highly recommend reading his post, because it gives a lot of context for my opinion on this matter, and I’m not going to restate it all here. However, I will quote one particularly salient point:

When big companies pay you millions of dollars for software, the last thing they want is major, unannounced and unexpected changes to the product . . . This is even more true for business-critical applications like the ones I’ve spent my entire technology career working on, like digital analytics, marketing intelligence and ecommerce platforms. If one of those goes down, it’s not just annoying — it’s lost business. The stakes of failure are potentially huge, and not just because customers expect the product they paid for to always be available.

The reason A/B testing is great for most web sites but usually terrible within enterprise software products is that enterprise users (“pay[ing] you millions of dollars for software”) do not like to be jerked around when it comes to their user experience. I mean, nobody likes when things suddenly change on them, but the whole relationship is different when your customers are massive organizations rather than individual consumers. Many of the customers that I personally work with insist on months of advance notice before user experience updates. Your customers learn your product, and they train their (often very large) organizations on how to use it. When you change it on them in the name of testing, they are often left hampered in their ability to navigate to the key workflows that provide value.

From a testing perspective, the reason this doesn’t work is that test participants aren’t supposed to know that they are in a test. (More on that later.) Let’s consider a new user of your tool. We’ll call him Charles, and he’s just gotten access to your product. You want Charles to have a positive experience, since he is part of a company paying a lot of money for your software, and the tasks he completes with your product may well contribute to the customer’s view of their return on investment with your company. To get familiar, Charles watches a few training videos recorded by your education team. He reads a bit of documentation as well.

Now Charles logs in. But wait! This doesn’t look quite like the videos. Where is the feature he read about in the documentation? This doesn’t look right! Charles is confused and frustrated. To make matters weirder, Charles asks a friend at a different company, Amanda, why the product doesn’t match the documentation or videos. But Amanda sends Charles a screenshot: to her, the product looks exactly like what is documented.

What happened here? Charles ended up in a certain test group, getting a different variant of the user experience than other users. Amanda was in the control group, which got the traditional user experience. There could be multiple test groups, each with a different experience. Some of these experiences may indeed be better than the control, and users may be able to find their way painlessly. But many of them won’t be, and frustration will mount. Nobody is actually going to quit Facebook because their News Feed looks different than someone else’s News Feed (and even if a few people did, it wouldn’t put a dent in Facebook’s overall growth), but they often will abandon your software, putting revenue at risk for you.
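To see why Charles and Amanda can have permanently different experiences, it helps to know that split-testing tools typically bucket users deterministically: the same user ID always hashes into the same group. Here is a minimal sketch of that mechanic (the variant names and the 50/25/25 split are hypothetical, not how any particular tool divides traffic):

```js
// Deterministic bucketing sketch: the same user ID always hashes to the
// same bucket, so Charles stays in his variant while Amanda stays in the
// control. Variant names and split percentages are hypothetical.
function assignVariant(userId) {
  var hash = 0;
  for (var i = 0; i < userId.length; i++) {
    hash = (hash * 31 + userId.charCodeAt(i)) >>> 0; // simple string hash
  }
  var bucket = hash % 100;
  if (bucket < 50) return "control";   // the traditional experience
  if (bucket < 75) return "variant-a"; // one redesigned workflow
  return "variant-b";                  // another redesigned workflow
}

assignVariant("charles@example.com"); // same result on every login
```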

Why not simply create separate training, documentation, etc. for each test variant? This certainly would help, but show me the software company that has enough resources and coordination to support creating multiple versions of all supporting materials and I’ll show you the Lost City of Atlantis.

I haven’t even touched on what may be the biggest reason to avoid A/B testing inside of enterprise software products: the duplication of effort for your user experience design and product development teams. Instead of building each feature once, they have to build it three or four times. The opportunity cost of approaching workflow changes this way is huge, and it puts a ton of burden on your most valuable resources.

(Another potential issue here is sales. The last thing you want during a demo to a prospect is an unfamiliar experience that your sales team doesn’t know how to navigate. This can be mitigated by excluding demo accounts or demo environments from tests, but is still risky at best; when the customer buys, what they see may not match what was demoed.)

How Should Enterprise Software Perform Tests? 

I’m a bit worried that this comes across as if I’m saying that product managers should simply take their best guess at a user experience. Far from it. The answer, however, is prototyping and user testing.

(None of this, by the way, is intended to say that enterprise software developers can’t programmatically test the size of a button or the content of a hero banner. My warning is to avoid A/B testing in production on key user workflows.)

Prototyping involves user experience designers and/or developers creating a light version of the proposed change. This can be in a development or beta environment, or can even (in the case of UI updates on the web) be a JavaScript bookmarklet that appears to change the layout and CSS in your product for a user, without actually making any underlying code changes.
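As a rough illustration of the bookmarklet approach, here is a minimal sketch. The CSS selectors are hypothetical placeholders, and a real bookmarklet would be collapsed onto a single line before being saved to the bookmarks bar:

```js
// A minimal prototyping bookmarklet: injects CSS overrides into the live
// page so a proposed layout can be previewed without any product code
// changes. The selectors below are hypothetical.
javascript:(function () {
  var style = document.createElement("style");
  style.textContent =
    ".legacy-toolbar { display: none; }" + // hide the old toolbar
    ".report-sidebar { width: 320px; }";   // widen the proposed sidebar
  document.head.appendChild(style);
})();
```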

User testing involves working with a small group of users, typically of varying experience levels, and possibly in different industries, to see how well they understand the prototype. These tests should likely be run by your user experience design team, with product management and development supporting. The test administrator should give the subject a series of tasks to complete given the proposed new experience. Their ability (or inability) to adapt and intuit their way through the new experience, while talking you through their thought processes, will yield valuable insights.

In some cases, you may even be able to “A/B test an experience in production” by explicitly inviting users to join a test. This would drop them into a variant of their normal experience, and they could provide feedback via a form.
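A minimal sketch of that opt-in flow, assuming the choice is stored client-side (the storage key, variant name, and prompt wording are made up for illustration):

```js
// Opt-in testing sketch: the user only enters the variant after accepting
// an explicit invitation, and the choice is persisted so it can be
// reversed. The storage key and variant name are hypothetical.
function inviteToVariant() {
  var choice = localStorage.getItem("workflow-variant");
  if (choice === null) {
    var joined = window.confirm(
      "Want to try the redesigned workflow? You can switch back any time."
    );
    choice = joined ? "new-workflow" : "control";
    localStorage.setItem("workflow-variant", choice);
  }
  return choice;
}
```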

Prototyping and user testing early in the software development process will not only ensure that the experience you’re building is easily understood, but will also allow all of the supporting materials (documentation, training, etc.) to be created around a polished final product, so that there is harmony across all of the “channels” through which users master your product.

So my advice is to test away, but do it in the right way for the enterprise. Involve your customers in testing prior to (and during) product development. But don’t risk alienating your users, whose experiences pave the road to your future earnings. Don’t give them unexpected variants on an experience that their business relies on.