Archive for Ben

Testing and Enterprise Software

A few weeks ago I asked a question on Twitter, looking for feedback about a certain workflow in the product I help to manage. In part because the product in question (Adobe Analytics) caters to an audience that thinks about data-driven optimization, a well-meaning user of the product suggested that we A/B test the workflow with users, and then promote the “winner” so that it becomes the workflow for everyone.

A/B Testing Is Almost Always a Great Idea

Generally, I am a huge fan of A/B (or “split”) testing. In fact, my company sells an industry-leading testing and optimization solution (Adobe Target) which helps companies all over the world improve web sites and mobile apps for their customers. While I myself am not an expert on testing theory, the basics are easy to grasp: If you’re not sure how to improve your customer experience (and who is ever really sure?), run a test and let the results speak for themselves. It’s an awesome concept, the returns on investment are huge, and almost everyone who has a digital presence should be doing it.


But Probably Not for Enterprise Software

The amazing Blair Reeves has written an excellent piece on the difference between product management for enterprise software and product management for consumer software. I highly suggest that you read his post, because it gives a lot of context for my opinion on this matter, and I’m not going to restate it all here. However, I will quote one particularly salient point:

When big companies pay you millions of dollars for software, the last thing they want is major, unannounced and unexpected changes to the product . . . This is even more true for business-critical applications like the ones I’ve spent my entire technology career working on, like digital analytics, marketing intelligence and ecommerce platforms. If one of those goes down, it’s not just annoying — it’s lost business. The stakes of failure are potentially huge, and not just because customers expect the product they paid for to always be available.

The reason A/B testing is great for most web sites but usually terrible within enterprise software products is that enterprise users (“pay[ing] you millions of dollars for software”) do not like to be jerked around when it comes to their user experience. I mean, nobody likes when things suddenly change on them, but the whole relationship is different when your customers are massive organizations rather than individual consumers. Many of the customers that I personally work with insist on months of advance notice when there are user experience updates. Your customers learn, and they train their (often very large) organizations on how to use your tool. When you change it on them in the name of testing, they are often left hampered in their ability to navigate to the key workflows that provide value.

From a testing perspective, the reason this doesn’t work is that test participants aren’t supposed to know that they are in a test. (More on that later.) Let’s consider a new user of your tool. We’ll call him Charles, and he’s just gotten access to your product. You want Charles to have a positive experience, since he is part of a company paying a lot of money for your software, and the tasks he completes with your product may well contribute to the customer’s view of their return on investment with your company. To get familiar, Charles watches a few training videos recorded by your education team. He reads a bit of documentation as well.

Now Charles logs in. But wait! This doesn’t look quite like the videos. Where is the feature he read about in the documentation? This doesn’t look right! Charles is confused, and frustrated. To make matters weirder, Charles asks a friend at a different company, Amanda, why the product doesn’t match the documentation or videos. But Amanda sends Charles a screen shot: to her, the product looks exactly like what is documented.

What happened here? Charles ended up in a certain test group, getting a different variant of the user experience than other users. Amanda was in the control group, which got the traditional user experience. There could be multiple test groups, each with a different experience. Some of these experiences may indeed be better than the control, and users may be able to find their way painlessly. But many of them won’t be, and frustration will mount. Nobody is actually going to quit Facebook because their News Feed looks different than someone else’s News Feed (and even if a few people did, it wouldn’t put a dent in Facebook’s overall growth), but they often will abandon your software, putting revenue at risk for you.

Why not simply create separate training, documentation, etc. for each test variant? This certainly would help, but show me the software company that has enough resources and coordination to support creating multiple versions of all supporting materials and I’ll show you the Lost City of Atlantis.

I may not have even touched on the biggest reason to avoid A/B testing inside of enterprise software products: the duplication of effort for your user experience design and product development teams. Instead of building each feature once, they have to build it 3-4 times. The opportunity cost of approaching workflow changes this way is huge, and puts a ton of burden on your most valuable resources.

(Another potential issue here is sales. The last thing you want during a demo to a prospect is an unfamiliar experience that your sales team doesn’t know how to navigate. This can be mitigated by excluding demo accounts or demo environments from tests, but is still risky at best; when the customer buys, what they see may not match what was demoed.)

How Should Enterprise Software Perform Tests? 

I’m a bit worried that this comes across as if I am saying that product managers should simply take their best guess on a user experience. Far from it. The answer, however, is prototyping and user testing.

(None of this, by the way, is intended to say that enterprise software developers can’t programmatically test the size of a button or the content of a hero banner. My warning is to avoid A/B testing in production on key user workflows.)

Prototyping involves user experience designers and/or developers creating a light version of the proposed change. This can be in a development or beta environment, or can even (in the case of UI updates on the web) be a JavaScript bookmarklet that appears to change the layout and CSS in your product for a user, without actually making any underlying code changes.
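
To make the bookmarklet idea concrete, here is a minimal sketch of how such a prototype might be built. The selectors and styles are entirely hypothetical placeholders, not references to any real product; the idea is simply that a stylesheet of overrides gets injected on top of the live UI without any underlying code changes.

```javascript
// Hypothetical prototype helper for a CSS-override bookmarklet: builds the
// stylesheet text that the bookmarklet would inject via a <style> tag.
// The selectors and styles below are placeholders, not real product selectors.
function buildOverrideCss(overrides) {
  return Object.keys(overrides)
    .map(function (selector) {
      return selector + ' { ' + overrides[selector] + ' }';
    })
    .join('\n');
}

// In the browser, the bookmarklet wrapper would look roughly like:
// javascript:(function(){var s=document.createElement('style');
//   s.textContent=OVERRIDE_CSS;document.head.appendChild(s);})();

var overrideCss = buildOverrideCss({
  '.left-nav': 'display: none;',     // hide the current navigation
  '.top-bar': 'background: #2c2c2c;' // preview a darker header
});
```

Because the overrides live entirely in a bookmarklet, a designer can hand the prototype to a test subject without touching the production codebase at all.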

User testing involves working with a small group of users, typically of varying experience levels, and possibly in different industries, to see how well they understand the prototype. These tests should likely be run by your user experience design team, with product management and development supporting. The test administrator should give the subject a series of tasks to complete given the proposed new experience. Their ability (or inability) to adapt and intuit their way through the new experience, while talking you through their thought processes, will yield valuable insights.

In some cases, you may even be able to “A/B test an experience in production” by explicitly inviting users to join a test. This would drop them into a variant on their normal experience, and they can provide feedback via a form.
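
A rough sketch of what that opt-in gating might look like, with hypothetical function and field names (this is an illustration of the pattern, not any particular product's implementation):

```javascript
// Hypothetical opt-in gate: a user only ever sees a test variant if they
// have explicitly joined the test; everyone else keeps the standard UI.
function resolveExperience(user, abTest) {
  // Users who never opted in are untouched -- no surprise variants.
  if (!user.optedInTests || user.optedInTests.indexOf(abTest.id) === -1) {
    return 'control';
  }
  // Opted-in users are assigned deterministically (simple hash of
  // user id + test id) so their experience is stable across sessions.
  var key = user.id + ':' + abTest.id;
  var hash = 0;
  for (var i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) % 1000;
  }
  return abTest.variants[hash % abTest.variants.length];
}

var workflowTest = { id: 'new-workflow', variants: ['variant-a', 'variant-b'] };
var outsider = { id: 'u1', optedInTests: [] };
var participant = { id: 'u2', optedInTests: ['new-workflow'] };
```

The key property is that `resolveExperience(outsider, workflowTest)` always returns `'control'`: nobody who hasn't volunteered ever lands in a variant.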

Prototyping and user testing early in the software development process will not only ensure that the experience you’re building is easily understood, but it will allow all of the supporting materials (documentation, training, etc.) to be created around a polished final product, so that there is harmony across all of the “channels” through which users master your product.

So my advice is to test away, but do it in the right way for the enterprise. Involve your customers in testing prior to (and during) product development. But don’t risk alienating your users, whose experiences pave the road to your future earnings. Don’t give them unexpected variants on an experience that their business relies on.

On my way to SSAC 2016

Every year, during the tremendous stress of Adobe Summit preparation, I get a two-day respite. It is called the Sloan Sports Analytics Conference (SSAC), and it is held during two glorious days at the end of February/beginning of March in Boston (at the massive Boston Convention and Exhibition Center the past two years). We’re now two days away from the start of the conference (11-12 March 2016), and I’m getting—to borrow a lyric from Pete Carroll—pumped and jacked.

SSAC 2016

In the past, Bill Simmons (an early participant and evangelist for the event) dubbed it “Dorkapalooza,” which is a fairly accurate, if pejorative, moniker. The audience of several thousand attendees, which is overwhelmingly male and skews hard toward the 18-to-25 age range, is made up mostly of the following groups:

  • College and Ph.D. students who are there looking to network with professional teams’ front offices or sports-related startups in the hopes of getting a job. These kids wear suits and as such are easily identifiable. They are young, smart, and hungry, but supply of talent far outpaces demand, and I wish they would all go get digital analytics jobs.
  • Front office and team representatives, who are there, presumably, to get acquainted with some of the latest research in the world of sports. It is awesome when you are in a session and you realize you are sitting next to John Hollinger and he is chatting with Kevin Pritchard, and you get to eavesdrop.
  • Journalists/bloggers. I remember going and saying hi to Kevin Pelton in 2013; he seemed genuinely taken aback that someone had recognized him and liked his work, which I appreciated about him. It’s a great opportunity to chat with the people who bring us the sports we love.
  • Random sports fans like me, who have no real reason to be there other than to soak in the sports dorkiness and hear from their favorite sports luminaries. They are lucky we don’t show up in sweatpants and ruin the whole vibe.

The content at this conference is wide-ranging and entirely unlike any other conference I’ve ever attended. One of the things I like about it is that they serve each of the above-mentioned audiences really well. For the job-seekers, there are special workshops where they can do practice interviews and get résumé advice. For those already in the world of sports, there are research papers and talks by their peers on best practices and pitfalls. And for me, there are panels.

Panels are the lifeblood of the SSAC for me. They will put four or five people on a stage and have them just shoot the breeze for an hour based on some lightly prepared topics. These sessions are less analytical; a question might pay lip service to data (“Shane Battier, how did you actually use analytics to prepare for games?”) but most of the panelists are former players/coaches/general managers/owners, and they’re certainly not data scientists by any stretch. They tell stories, debate about what is going on in their world today, share strategies that have helped them in their roles, and make predictions about the future. The conference does a great job combining different personalities and types of experiences; for example, a panel on contract negotiation might have a former player, a current general manager, an agent, and a negotiation expert. Thus, we get to hear a perspective on negotiation from every angle. Sometimes they even combine two people who have a history, such as when Daryl Morey shares the stage with Masai Ujiri and they talk with surprising specificity about failed trade talks.

Panel at SSAC 2013 featuring (left to right) Michael Lewis, Mark Cuban, Nate Silver, Daryl Morey, and Paraag Marathe.

While panel topics vary and include different sports as well as topics which are common across sports, I’m mostly interested in the NBA. I think it is the most fertile ground for interesting application of data and analytics right now; it was baseball a decade ago, and perhaps it will be football, soccer, golf, or something else entirely a decade from now. Here are a few of the basketball panels I’m looking forward to attending this year:

  • Analytics in Action. This one combines two professional basketball players (Shane Battier and Sue Bird), one former coach (Stan Van Gundy), and an analyst (Dean Oliver), with ESPN’s Kevin Arnovitz moderating. They’ll talk informally about how data and analysis have changed the game on the court. Battier and Bird did a great job last year and I relish the opportunity just to hear them tell stories about life in the NBA (or WNBA) and the process they go through as players. Again, these stories usually aren’t strictly about analytics, but they give great insight into the minds of two of the more cerebral basketball players in recent memory. Van Gundy is always tremendous, with his dry, acerbic sense of humor and willingness to say exactly what he is thinking.
  • Basketball Analytics: Hack-a-Stat. The inimitable Zach Lowe will be moderating this one, with Brian Kopp (who helped create and market SportVU), Mike Zarren (Danny Ainge’s right-hand man with the Celtics), former player Brian Scalabrine, and former (future?) coach Tom Thibodeau. This one is always a must-attend. Obviously, Thibs has never been to Sloan before, because he has always been in the midst of an NBA season. I am dying to hear the perspective he brings. This looks to me like a great group to interpret some of the recent changes to the game: the unprecedented success of the Golden State Warriors, the blurring of lines between positions, etc.
  • Future of the Front Office. This one actually spans sports, with Nick Caserio (director of player personnel, New England Patriots) and Jeff Luhnow (GM, Houston Astros) joining Daryl Morey (GM, Houston Rockets, and a co-founder of the conference) and Bob Myers (GM, Golden State Warriors), with Jackie MacMullan moderating. Bob Myers is the architect of this historic, transformational Golden State Warriors team, and I’m excited to hear his take on how his strategy and tactics are changing, and get his insider take on how that team came together. I’m also a huge New England Patriots fan (and former employee), so any insight into what the Patriots are doing differently to keep their edge will be extremely welcome.

When I’m not attending panels, there are a handful of other sessions worth shouting out.

  • One of my fondest memories of SSAC comes from my first conference, in 2012, when a young visiting cartography professor at Harvard named Kirk Goldsberry burst onto the basketball analytics scene by giving a groundbreaking presentation on “CourtVision,” where he used shot location data to visualize players’ tendencies and strengths/weaknesses in a way nobody had seen before. Spencer Hall and I were enthralled, immediately declaring Goldsberry the “winner” of the conference. We struck up a relationship with him which (at least for me) continues to this day; Kirk always manages to find time to chat with me for a few minutes at SSAC. I don’t know whether that will continue now that he has been hired by the San Antonio Spurs (and is probably in greater demand at SSAC than ever before), but I will be in attendance when he gives his latest talk, “The Curry Landscape,” on how the communication of data is lagging behind the production and analysis of data. It’s a topic near and dear to my heart in my work at Adobe and, as with much of the content at SSAC, applies to industries other than sports.
  • My product, Adobe Analytics, always manages to get a few shout-outs at various panels or talks dealing with the business of sports, since we’re the dominant digital/customer intelligence solution for the enterprise, which includes leagues, ticket companies, etc. This year, our customer Aidan Lyons, VP of Fan-Centric Marketing for the NFL, is speaking in the “Competitive Advantage” track on “Using Data, Analytics and Technology to Bring NFL fans Closer to the Game.” Whether he mentions my product by name or not, he’ll definitely be talking about our data and how it helps the NFL have a one-on-one relationship with fans.

These are just five of the 16 panels, talks, sessions, or workshops I am hoping to take part in at SSAC 2016; you can read the whole agenda on the SSAC web site. There is far too much for me to cover in a single preview blog post. However, I am going to try to blog after each day of the conference giving my thoughts and reporting on some of the more interesting nuggets that I took away from the firehose of insight that is SSAC.

My only regret is that I don’t have more friends attending the conference with me, so if you’re interested in learning at the feet of some of the more progressive and innovative minds in every aspect of the world of sports, I strongly suggest signing up for the SSAC email list so that you’ll be notified when 2017 tickets go on sale. This sort of thing is more fun with someone next to you so you can discuss the latest crazy thing Jeff Van Gundy said, so hopefully I’ll see you in Boston in 2017.

Friday Lunch with Takashi

It’s a running joke with one of my colleagues that I often force him to drive with me up to Takashi in Salt Lake City before it opens in case there is a line around the block. Usually there are only five or six people waiting in front when we pull in; once in a rare while, there are several dozen, even at 5:00 (the restaurant opens at 5:30 most nights) and we’re lucky to get a table. 

Jason, Tim, and I did the same for lunch this past Friday, arriving a few minutes before they opened at 11:30, but this time was different: We were there to interview Takashi Gibo himself, at his eponymous restaurant. The premise for the meeting was two-fold:

  1. Eat delicious sushi with friends, obviously. 
  2. My session at Adobe Summit is going to use an analogy comparing analysts and sushi chefs, and getting Takashi’s perspective on his practice will inform the story I tell. 

Our group with Takashi

I had sat in front of Takashi before, at his sushi bar, while he prepared rolls and fish for me, but I had not taken the opportunity at that time to introduce myself. I wish I had. Today we found Mr. Gibo to be soft-spoken, humble, and patient. We stumbled through hastily prepared interview questions that probably seemed rambling and incongruous, but he thoughtfully, slowly answered each one, talking at length about how he got into sushi, where his inspiration for his menu comes from, and why he is in Salt Lake City of all places. 

Jason and I were both a bit intimidated, since Takashi is something of a local legend as well as the owner and proprietor of our favorite restaurant in the world, but I think Takashi was also intimidated in his own way, and behind his teal-framed glasses his eyes were as furtive as mine. 

He speaks with an accent blended from the three countries where he has lived: Japan, Peru, and the United States. I did not expect to hear a Japanese chef speaking English with a Hispanic flavor, but this also explains his menu, which features ingredients that, in his words, “could get me arrested if I tried this in Japan.” Spicy peppers are a mainstay. Ceviche, a Peruvian dish, is featured in a few places. His Strawberry Fields roll, which he mentioned several times, was inspired not by the Beatles song which lends the roll its name (although Takashi mentioned loving the Beatles), but by a moment when Takashi gazed on a strawberry cake and got the idea to combine strawberries, escolar, and almonds, with Thai chili peppers rounding out the unorthodox combination. 

This is a man whose life, going back to his earliest memories, has been about delighting the taste buds. Takashi’s parents first owned a bakery, and then got into the restaurant business themselves, first in Lima when Takashi was a small boy, and then in Okinawa beginning when he was 12 years old. While there was sushi in Peru, it was not until his brother took him to a proper sushi bar in Okinawa that Takashi fell in love with the craft. He is, I got the sense, the only sushi chef at his restaurant who has been formally trained in the ancient Japanese art of sushi preparation; he mentioned that he has several times hired chefs with zero experience making sushi. In fact, he prefers them; they are often better learners. I am amazed at how well he is able to teach them, as I have never had a bad sushi experience in however many dozen pilgrimages I have made to his sushi mecca. 

In fact, the flavors that he has crafted over the years are so delightful that I was taken aback when he said that the one he believes represents his restaurant best is ponzu sauce, the citrus soy-based sauce that his chefs use on many of their dishes. While it is a common addition, and they use it well, there are so many others that I would have gone with. Their lemon pesto adds a tart zing wherever it goes. He mentioned that they use garlic quite a bit (again, flying in the face of the Japanese tradition in which Takashi trained). Or their spicy mayonnaise, which they always use sparingly, but which is also much more intricate than the lazy Sriracha-and-mayo blend that most American sushi joints use. I happen to know from experience (attempting to recreate it) that it is a blend of mayonnaise, tobiko, togarashi, Sriracha, and sesame oil. The fact that they have their own unique take on spicy mayo is really what Takashi is all about, but it is not the flavor that he thinks best defines his restaurant. On the other hand, ponzu sauce may be the perfect epitome of Takashi’s sushi, because it is an ingredient that anyone can use—they sell it at every grocery store—and yet nobody uses it as well as he does. While he certainly has his exotic ingredients too, he is a master of taking fairly common ingredients and using them better than anyone else can. 

Takashi’s wife, Tamara, is the general manager at the restaurant and brokered the interview for me. We sat at a table for four, and talked while I ate a Black Magic Woman roll and a Buddha roll, the latter of which was invented, it turns out, not by Takashi, but by one of his chefs, who needed something to serve to a vegetarian friend. It is a fairly pedestrian roll if you simply list the components: tempura vegetables, rolled in rice and seaweed, and drizzled with a bit of eel sauce (but, unlike most other sushi restaurants, not to the point that the roll itself tastes sweet). To Takashi’s point about ponzu sauce, the quality of these very common ingredients makes the roll quite a different, almost purer experience than a similar roll from another restaurant. 

Perhaps the best thing I can say about Takashi comes from his answers to my final questions. I asked him why he still serves from behind the sushi bar. He certainly does not have to. He has a large staff, and many capable chefs. “I love it,” he said, as softly as ever. These are the people Seth Godin wrote about in Linchpin: people who take genuine pleasure in sharing their art with you. Takashi sincerely wants to dazzle you with sushi, to have you love what you just ate, and the pleasure he gets from providing you with that experience is enough to keep him on his feet for five hours a night most nights of the week. Lastly, I asked him whether, with waiting times for tables commonly reaching two hours even on weeknights, he has ever considered expanding, perhaps opening another restaurant in the area. He smiled, but firmly answered no. Why not? “My inventory list is very long,” he said. His concern comes down to quality. He would not be able to ensure quality in two places at once, and he does not want to have to hop between both restaurants. He believes, I got the sense, that it would degrade the experience in both locations. He is choosing quality over money. Fortunately, as long as Takashi is in charge at his own restaurant, he probably does not need to fear losing either one.

Tips and Tricks for Tips and Tricks

It has become a tradition of sorts that I get the opportunity to lead an hour-long session at Adobe Summit every year in the U.S. and London on “tips and tricks” to help users get value out of the product I work on, Adobe Analytics. When we say “tips and tricks,” what we really mean is practical, hands-on advice specific to a product or toolset, as opposed to the theoretical thought leadership content that makes up much of the rest of conferences (including Adobe Summit).

I’m certainly not the world’s most accomplished speaker, but I’ve been doing this for a few years now and—truly for reasons unrelated to my presentation skills—they’ve been really well received.

Every year I seem to get questions from a few colleagues or presenters at other conferences asking me what my “keys to success” are. Seriously, it’s mostly serendipity, but here are a few tips and tricks I’ve learned about doing “tips and tricks” sessions. (Some of this may also apply to sessions that aren’t tips and tricks for a software product. Or it may not. I don’t know.)

1. Make sure your tips and tricks are tips and tricks
The goal with a session of this type is to get practical recommendations into the hands of users of the product. Don’t wax too philosophical or spend a lot of time on industry/market trends. There is a time and place for those, but this isn’t it. If it can’t be demoed live or shown in screen shots of your product, it probably doesn’t fit in this session.

2. Create a cheat sheet that attendees can take away with them
You’re likely going to throw a lot of information at attendees; there aren’t 2-3 big ideas that you are leading the audience toward, there are probably 6-10 minimally related product tips, and it’s a lot for audiences to consume. A handout summarizing the tips/tricks you’re sharing makes it easier for them to remember everything, and giving attendees something to take away (even if 70% throw it in the garbage) shows that you really want them to take value away from the session. It allows them to focus on learning/watching without feeling the need to document everything as you’re talking.

3. Where possible, spend 80% of your time on things users can do today, and 20% teasing things that are coming soon
If you have to, it’s better to spend 100% of your time on things users can do today. Again, you want your session to be practical. But I have found that a couple of “coming soon” tips gets people excited about what’s coming soon and gives you an opportunity to boost usage of a couple of neat features that they might miss if they aren’t looking for them. NOTE: Make sure none of your sneak peeks are part of larger main-stage announcements, and make sure they are coming reasonably soon (i.e., within the next 2-3 months).

4. Make sure you know the intended audience for your tips and tricks
If your session is positioned for advanced users, make sure you are sharing advanced tips; if positioned for novice users, share basic tips. I like to be really clear about the intended audience in my session description so that the audience is not disappointed. For what it’s worth, I have never gotten survey feedback that the tips/tricks I shared were too advanced, but every year I get a few comments that they were too basic.

5. Walk people through the tips in detail
I normally give a short introduction to each tip, explaining why what I am about to share is interesting/valuable. Then I take the audience through the tip, step by step. (I use screen shots because Internet access at conferences is notoriously spotty and it avoids having to wait for pages to load, but if you’re up for it, you may want to do this in the product.) I probably go into more detail than I need to, but I want to emphasize how approachable each of the tips really is.

6. Have really solid use cases
Many attendees won’t automatically make connections between their business problems and the tips you’re sharing, so good use cases or stories are critical to making your tips relatable. They are the difference between “That was interesting!” and “Wow, that will solve the exact problem I’ve been having.” I try to keep them generic so that they are as easy to understand as possible, even though this means talking about (in my case) really basic metrics like Page Views and Revenue.

7. Even if your tips don’t have a common thread, try to come up with a good overarching theme.
You need something to frame the session in the audience members’ minds, even if the framing device doesn’t have anything to do with the tips themselves. For example, two years ago my session was focused on tips related to recently released features, so my theme was “we’ve been busy,” and I used the image of bees in a beehive. Last year I talked about these tips being the difference between a “good” user of the product and a “hall-of-fame level” user of the product, and I made an analogy with Tony Parker of the San Antonio Spurs improving his jump shot a few years ago to go from “very good point guard” to “hall-of-fame point guard.” (So, in this analogy, the tips are like his shooting stroke.) If nothing else, a good theme will give you a way to introduce your session and warm up your audience.

I’m sure there is other advice I’m forgetting, but these are the big ones. Most of the other advice I would give applies to any presentation you might give at a conference: prepare early, rehearse often, and use humor. Tips and tricks sessions are great at vendor-specific conferences, and if you’ve been invited to give one, you should feel fortunate. They usually score highly with attendees, and they’re highly rewarding because you know you’re giving people something they can take back to their offices and use first-thing Monday morning. My last piece of advice, therefore, is to have a great time with the experience and enjoy knocking the socks off of your attendees with some amazing content!

I got 99 problems . . . but are they customer problems?

A lot has been written about “user versus customer” as an abstract concept ever since (and probably long before) Jack Dorsey asked Twitter to stop calling its users users and start calling them customers. I’ve also seen product managers and UX designers write about how “product owns the customer; UX owns the user.” I think most of this discussion is useful and relevant in defining roles and responsibilities in a product organization, and I’ve referred to various aspects of it many times in my own work with other product managers, UX designers, and developers.

That said, while I don’t spend a lot of time on product management blogs, I haven’t seen a good, concise explanation of how to think about user problems versus customer problems when defining product requirements for design and engineering. I’m writing this post mostly to remind myself to think a little differently in the new year.

I often find myself sitting down to write up a PRD and starting from what I call the “customer problem.” But is it really the customer problem? For example:

  • The customer needs to better understand retention and churn in their own customer base.
  • The customer needs a better way to visualize the overlap between marketing segments.
  • The customer needs to be able to upload massive amounts of customer data for analysis in our tool.

No. No. NO. These statements do represent problems, and they are probably even worth documenting and elucidating as we present our ideas and try to win advocates and acolytes to our cause, but they are not customer problems. They are user problems. They describe what a user needs to be able to do once they are already in the product.

Okay, then; what is a customer problem?

A customer problem—which should be at the beginning or near the beginning of every PRD and concept deck we create—should describe how the customer (a business) is failing to grow. Both customer problems and user problems describe inefficiencies, but customer problems come with a clear indication of the actual (monetary) value that a solution would provide to the businesses we serve, while user problems may only hint at that value.

As such, customer problems are more likely to describe something that would keep the CEO or CMO or CIO up at night than they are to describe something that keeps the day-to-day user of the product up at night. That’s a good thing, because it can’t hurt to start speaking the C-suite’s language.

Here are a handful of examples of what I mean:

  • The customer has reached a point of diminishing returns from their acquisition budget but still needs to hit a top-line growth target of 20% YoY. The most efficient way for them to do this is by personalizing the on-site experience.
  • The customer is facing intense downward pricing pressure from a smaller competitor who is trying to steal market share, and they are looking for new ways to engage with their own customers to keep them from attriting.
  • The customer knows that segmentation is everything in marketing, but beyond their legacy market segments they have been going after for years, they do not know how to find the right groupings of individuals so that they can reach them in the right way at the right time.
  • The customer has been unable to consolidate customer data stores across the enterprise and, as such, has not been able to empower various marketing teams with insights at the brand level.
  • The customer has millions of dollars flowing into marketing campaigns across various channels, but has only a very limited view of how those dollars influence not just short-term revenue, but long-term customer value.

As I’m sure you can see, user problems can and should flow from customer problems. But we could solve any of the user problems a few paragraphs up from here without actually ensuring that we’re solving the real underlying customer problem that will help the customers of your product grow their business.

As I’m sure you can also see, the customer problems I’ve listed here are composites, more indicative of a market problem than anything any one particular customer is facing uniquely. (Unless you’re building a product for an individual customer, you will want to pick someone from across your customer base to represent the customer problem, but it should reflect broader trends, obviously.)

All of this might seem obvious, in which case maybe you haven’t made it to this point in the post. In a way, it is obvious; we all probably try to focus on customer problems, but we often settle for user problems even though we know better. Why is that? As a colleague pointed out, it’s probably because we spend a lot of time documenting problems, use cases, and solutions for developers, who do solve user problems with their work; thus, we naturally tend to put things in terms that developers are most likely to understand and relate to. However, those user problems should still be grounded in customer problems. This will enable engineering teams to understand the deeper problems as well, giving them the best chance to innovate in ways you could never have envisioned.

What if I don’t know what the customer problem is?

As a product manager you are probably fairly close to your users. Your users sit within the organizations that pay for your product, and many of them likely understand the factors that are limiting growth of the business. If you ask them what challenges the business as a whole is facing, they may or may not get right to the heart of the matter, but this is why (no pun intended) I love the “Five Whys” method of interviewing.

The theory is that within (roughly) five iterations of asking a person “why?” you can get to the root cause behind the initial problem described. You ask what problems are facing the business, and they give an answer. You ask why that’s a problem. They answer. You ask why that’s a problem. They answer. Lather, rinse, repeat. By the fifth “why,” you should be at a root problem that motivates the users you’re interviewing to action in a big way. That should either be the customer problem or very close to it.

You may need to ask to interview the user’s manager, or his/her manager’s manager, to get the level of detail you need to truly understand the customer problem, but the “Five Whys” method will help you get there.

A final huge benefit to thinking about customer problems

One of the most underrated skills in product management, in my limited experience, is the ability to convince people. You have a vision for your area of product ownership, and you’re likely competing for resources (development, UX, etc.) with other products or other features. You will get nowhere with that vision unless you are able to convince key stakeholders that what you want to do is supremely valuable to your customers (and therefore to your own business).

There are many factors that play into how convincing a product proposal is, but in my (again, limited) experience, proposals grounded in rock-solid knowledge of the problems your customers at large are facing, where the product manager can speak knowledgeably and authoritatively about the business problems preventing customers from growing, tend to convince more people more completely. These projects tend to get funded because stakeholders feel the customer’s frustration much more acutely than if the problem presented is purely a user-level problem.

Think customer

One of my goals for 2016 and beyond is to do a better job in interviews with users getting to the root of the customer problem and being deliberate about making that the central focus of my product requirement/design work. (Incidentally, I think I’ve been reasonably successful at this, but more by chance than by choice.) I have no doubt that it will produce better products and features; I just wish I had come to these realizations much sooner.

Ah, well; I never said I was a fast learner.

Thoughts on Breaking Bad

One of the advantages of having consumed Breaking Bad entirely via Netflix (with the exception of the final eight episodes, which AMC aired as part of a marathon on 12/30 and 12/31) is that I missed out entirely on all of the reviews and the commentary that must have happened immediately following (and during) each episode as it aired originally. This means that, as far as I know, all of the rambling, unrelated thoughts that follow in this post are original and have never been published by anyone else.

(Of course I know that any such pretense is delusional; nothing you will read here is unique, or even terribly thought-provoking. But, after that finale—which I watched, alone, last night at around 1:30 AM—and that series, like many of you, I need to say something, even just for my own edification.)

So, here I present my assorted thoughts and reactions and would love to engage in some “therapy” via comments if you agree/disagree, or just want someone to hold you digitally after seeing what went down in Albuquerque.

* * *

My text, as it were, is Alan Sepinwall’s entertaining (if not exactly life-changing) The Revolution Was Televised, which chronicles the rise of television as the preferred medium for crafting and consuming drama over (primarily) the past 15 years. Of the 10 shows Sepinwall discusses, I have heavily invested in only three: Friday Night Lights, LOST, and now Breaking Bad. Those other two shows serve, for me, as points of comparison in judging Breaking Bad. I am going to add a fourth point of comparison, which Sepinwall mentions as having been so close to making his list, but ultimately left on the cutting-room floor: The West Wing. These four shows represent my four favorite TV dramas of all-time, by far.

I won’t bore you with an exhaustive tale-of-the-tape, but in my mind I have to refer in particular to LOST and West Wing when I try to put Walter White into some context.

* * *

Perhaps the most striking thing about Breaking Bad to me was its complete and utter lack of preachiness. It had no objective—no real lesson to teach us. And why should it? Once you decide that your show is going to lay wisdom on the audience, aren’t you sort of boxed in? Plot has to develop in a certain way; characters do and say things according to a code that you have established.

Breaking Bad was total anarchy. Was there ever a point at which you were sure of how the plot was going to break? Or how a character was going to react to a stimulus? Well, okay, maybe back when you still believed Walt’s claim that he was doing everything for his family. But even that turned out to be a falsehood.

Contrast this with, say, LOST. It was far from preachy, but it was far more heavy-handed in the way that it warned about the consequences of actions, usually via flashbacks. With the benefit of hindsight, we could easily see how characters’ poor choices had made them unhappy. In Breaking Bad, even though we see Walt’s steady transformation, we are never given a clear window for reflection.

The West Wing, far more than either of the other two shows just mentioned, had not just a political agenda but also a moral agenda. It did not just want you to believe in a platform, it wanted you to be honest and make good choices and be patient and weigh both sides of issues. It wanted to teach you how to live. Personally, I found its approach not to be holier-than-thou, and so it didn’t bother me that President Bartlet, and really Matt Santos after him, seemed to be a little too perfect. Even LOST, though, could not help philosophizing on its way out the door, with Christian Shephard explaining how people need each other.

It’s not just that Walt was incapable of reflection by the end of the series—I’m sure the writers could have come up with an elegant way for him to say something about his purpose that would have made us all stop and say, “Ah, wisdom!”—it is that they had the wherewithal not to take the bait.

And that is both refreshing and disorienting. We are used to TV shows (again, even the good ones) trying to bestow wisdom. Instead, Breaking Bad said, “Here is some stuff that happened. Any lessons that you want to draw, you can draw, but you do so only from the events you observed, and not because we were trying to convey something.” And we loved it.

* * *

There is this rumor, now apparently confirmed, that Sir Anthony Hopkins wrote a letter to Bryan Cranston in which Hopkins said, “Your performance as Walter White was the best acting I have seen—ever.” And later: “If you ever get a chance to – would you pass on my admiration to everyone—Anna Gunn, Dean Norris, Aaron Paul, Betsy Brandt, R.J. Mitte, Bob Odenkirk, Jonathan Banks, Steven Michael Quezada—everyone—everyone gave master classes of performance . . . The list is endless.”

At first, I was skeptical of the authenticity of the letter—the tone seemed a little too ebullient and complimentary—but I also couldn’t argue. TV is so good at plucking little-known actors seemingly out of thin air and turning them into Emmy winners. With the exception of Cranston and Odenkirk (well, and the episode of Parks & Recreation where Jonathan Banks plays Ben Wyatt’s dad), I had never seen any of these actors or actresses before. But Hopkins is right; from episode one it was a set of performances unlike anything I have seen before.

But there is one casting choice that stands out—at least to me personally—as particularly dissonant, and I hope it was intended that way: Jesse Plemons as Todd, the nephew of the neo-Nazi crew that Walt begins hiring to do his dirty work in Season Five. Why dissonant? Because Plemons is best known as the nerdy, kind Landry Clarke from Friday Night Lights. Unlike with the dad from Malcolm in the Middle, we don’t see a lengthy transition in Todd. It is true that we do not know the depths of his depravity from the start, but we only know him for 13 episodes. And, by the end, it is clear that he is a different sort of psychopath than Walt: incapable of empathy, happy to kill innocent people, and lusting for misery (typified by the way he gives ice cream to the captive Jesse and then cheerfully executes Andrea seemingly within a matter of minutes). Even though he has the same manner of speech, he is the polar opposite of Landry Clarke. There are two kinds of “bad guys,” and Breaking Bad used both kinds perfectly: those who look the part, and those who look anything but. Turning Landry into Todd was almost better than the slow revelation of Gus’ depravity, or Walt’s. It was a fascinating, almost manipulative contrast from the very first frame, and the sort of thing that Breaking Bad was all about.

* * *

As I sat alone last night watching the last six episodes, I came to the point where Walt leaves the house with Holly while Skyler falls to her knees in the street. As a father of three young kids, I found this the most panic-inducing scene of television I have ever watched. I almost had to turn it off, but I knew that I would not be able to settle down if I did so.

I’m rarely “affected” by media. I can point to a handful of scenes across a number of shows and films that drew a real emotional reaction from me, but never anything quite like Holly’s kidnapping. I am told that this was generally regarded as the best episode of the series, and on the basis of IMDb ratings it might be the best episode of television ever; unless I am reading this wrong, it scored a perfect 10/10 based on almost 44,000 ratings.

There is no way I can argue with that score. It was a 10.

* * *

So, where does it rank on my list? I have been struggling with this question since last night. I have decided that I can’t rank Walter White and Jesse Pinkman against Josh Lyman and Leo McGarry, or against Jack Shephard and Kate Austen, or against Eric Taylor and Matt Saracen. Breaking Bad is too different. Perhaps all four of these shows are too different to be ranked against one another.

Come to think of it, I am not sure why we are so obsessed with ranking things. Does the fact that Breaking Bad is so tremendous somehow disparage Friday Night Lights?

Of course there is the “if you were stuck on a desert island, which would you take?” test, and (amusingly, given the premise of that scenario) I would have to take LOST, because it spurs the imagination in a way that the other three shows don’t.

Fortunately, I am not stuck on a desert island, and the ability of Breaking Bad to suck you into this world of relative morals and the perfect anti-hero puts it right up there. I’m going with a three-way tie. A Mexican standoff, if you will—a place where Walter Hartwell White would have been very much at home.

2013 Robert Horry Memorial Playoffs All-“Rising Stock” Team

The 2013 NBA playoffs are over, and it’s time to reflect. In my opinion, one of the most fascinating lenses through which to view any NBA playoffs is in terms of players whose stock rose the most with the spotlight on them.

It felt like this playoffs featured more big names turning into stars, and more marginal guys turning into big names, than the past few years did. And, as often happens, every time one surprising star got eliminated, another seemed to emerge in the next round.

(I could have named this team after any of the dozens of players who have made names for themselves in the postseason, but I chose Robert Horry because I couldn’t think of anyone whose success was more closely tied to the playoffs than Big Shot Bob. And yes, I know Horry is not dead.)

So here is my 2013 Robert Horry Memorial* Playoffs All-“Rising Stock” team:

Stephen Curry – Guard, Golden State Warriors

Do you remember the first round? It was a while ago. In the first round, Curry was THE guy in the NBA. Nobody was more exciting for the first six games of the playoffs (plus that ridiculous 18-35 performance in 58 minutes in game one against the Spurs before he got worn out and/or injured). Curry averaged almost four three-pointers a game, shot 44% from beyond the arc and 47% from the floor, did not miss a free throw in the Denver series, and dished out almost 10 assists per game.

Mike Conley – Guard, Memphis Grizzlies

This was the playoffs where everyone realized that Memphis may not have wildly overpaid when they signed Conley to a five-year, $40 million extension a couple of years ago. We came to appreciate his defense and the way he works the pick-and-roll. He didn’t set the world on fire with his shooting in the postseason, but he also didn’t turn the ball over, distributed nicely, and rebounded. Fans of franchises without an all-star PG came out of the playoffs dreaming of this guy.

Danny Green – Guard, San Antonio Spurs

You’re sick of the Danny Green story by now: cut numerous times, played in the D-League, etc. But for five glorious games in the NBA Finals, Green was the best outside shooter in the NBA. At one point, he was 25-38 from three in the series. We knew Green was dangerous, but trending-on-Twitter-multiple-times dangerous? No chance. The only question is: do you now claim to have known Green had this in him before the series started, or are you honest?

Nate Robinson – Guard, Chicago Bulls

It was a playoffs full of amazing individual games, and until game six of the finals, perhaps none was more amazing than Chicago 142, Brooklyn 134 in triple overtime. This was “The Nate Robinson Game.” 34 points on 14-23 FG and a completely silly flying bank shot. Robinson, along with Joakim Noah, epitomized the underdog Bulls. 17 points on 51% shooting in round one meant that we all wanted to see what highlight-reel play this diminutive point guard would make next.

Kawhi Leonard – Forward, San Antonio Spurs

By the end of one of the best NBA Finals ever, was Kawhi Leonard the most dependable player on a dependable team? I say yes, despite the infamous missed free throw in game six. He averaged a double-double while guarding LeBron James for much of the series, and seemed to deliver one big shot or key defensive play after another. A second-year SF who shot 55% from the floor and 40% from three in the playoffs, plus 8.7 RPG? Yes please.

Paul George – Forward, Indiana Pacers

It’s possible that nobody’s stock rose higher in this playoffs than George’s. Even though George was an all-star this past season, playing in Indiana limited his exposure. But in taking the Heat to seven games, George was, in a word, ridiculous. He hit from anywhere and everywhere, going for 48% FG and 44% 3P in the Heat series (twisting the knife for many Jazz fans). It’s the best when a young guy already on a star trajectory makes “the leap” on the biggest stage.

Chris “Birdman” Andersen – Forward, Miami Heat

I know this is the choice you all are going to hate. Birdman is the oldest guy on this list by far, and it isn’t so much that his stock rose as an asset as that he became something of a household name. In part, this is because everyone on the Heat is a celebrity (especially those with tattoo turtlenecks), and a bunch of casual fans now know this guy. But it’s also because Andersen at one point made 17 straight field goals. Miami definitely doesn’t win game one against Indiana without his physical play at both ends.

Roy Hibbert – Center, Indiana Pacers

Hibbert is another guy who, like Conley, engendered skepticism when he signed a four-year, $58 million extension last July. It didn’t help matters that Hibbert was then lackluster for much of the regular season. But he upshifted when it mattered most, averaging more than three blocks per game against the Knicks, then turning in a 22.1/10.4 line against the Heat. His block on Carmelo Anthony late in game six against New York and the charge he drew by going straight up against LeBron in game six of the ECF were things of beauty. With Dwight Howard falling apart physically and mentally, is Hibbert the best defensive center in the NBA?

Marc Gasol – Center, Memphis Grizzlies

It was a great postseason for Memphis overall, but perhaps nobody rose in prominence more than the large Spaniard. The fact that “Did Memphis end up with the better Gasol brother after all?” was a legitimate question among NBA fans by the end of the second round should tell you everything you need to know. 17.2/8.5 for a big man with range who can block shots is a solid postseason effort.

Honorable mention: David West (IND; great playoffs but overshadowed by George and Hibbert); Klay Thompson (GSW; overshadowed by Curry); Chandler Parsons (HOU; had a good series, but didn’t stick out the way that others on the list did); Kirk Hinrich (CHI; because people realized that the Bulls don’t win without him); Chris Copeland (NYK; ridiculous from three, and a fan favorite, but his team lost the only series where he played significant minutes); Boris Diaw (SAS; against all odds, had a decent NBA Finals and avenged himself for all of those “Fat Boris Diaw” jokes).

So that’s the team. Did I miss anyone? Let me know in the comments.


Staying Home

It happened again. Another loudmouth on Twitter proclaimed himself the Commissioner of Being a Good Sports Fan and declared, essentially, that none of us has any right to complain about our teams if we are not going to the games. Because, of course, “true fans” go to all the games.

True fans do this, true fans do that.

It’s not quite the dumbest thing I’ve ever heard, but it’s close. And people keep saying it.

First of all, I am not aware of a cabinet-level position that gives anyone the right to decide what makes a true fan. So please do us all a favor and stop pretending that what you say matters (although, to your credit, it did inspire this blog post, so you’ve got that going for you).

But, second, let me tell you all the reasons I typically avoid LaVell Edwards Stadium like the plague despite being, I think, a pretty huge BYU football fan:

  • At home, I have an HD TV.
  • At home, I have a remote control so I don’t have to watch freshmen make fools of themselves trying to kick field goals during commercial breaks.
  • At home, I have free food that is better than anything you pay $45 for at the stadium.
  • At home, I have a couch, which is extremely comfortable and inviting, unlike the metal benches (with the roughly 36 square inches of space they allow you).
  • At home, I do not have to be surrounded by idiots yelling at the coaches, players, and officials at the tops of their lungs despite the fact that nobody who can hear them cares what they have to say.
  • At home, I can show up for the game whenever I want with no traffic, and when the game is over I don’t have to sit in the mass exodus for an hour.
  • At home, I actually get cell service so I can talk to a theoretically unlimited number of friends about the game while I am watching it.
  • At home, I have every stat in the world at my fingertips; I can analyze the game from every angle that you can in the stadium, and then some.

So what’s the argument against staying home? The “thrill” of being surrounded by 74,000 of your closest friends? Yeah . . . no thanks. Not only does that not do it for me, but it also has nothing whatsoever to do with being a “true fan” (if there is such a thing). But hey, knock yourself out, Mr. Commissioner.

Arbitrariness. Such a confusing thing.

Good GM, Bad GM: Late Bloomers and Draft Prowess

I don’t expect to receive an answer to this question, since I know there are only three of you out there reading my blog, but I’m going to ask it anyway.

Let’s say you’re evaluating an NBA GM’s drafting/scouting ability. Should he get credit for picks who ultimately turned into solid players, but did so only after leaving the team that drafted them? Take, for example, Kris Humphries. I know you think he’s overpaid, but don’t forget that after his two seasons with the Jazz, everyone believed he was a total bust. He notched just 0.1 total Win Shares during his first two seasons in the NBA. But fast forward a few seasons and Humphries has tallied a totally respectable 10.7 Win Shares while averaging a double-double over his past two seasons with the Nets. So should Kevin O’Connor get credit for drafting Humphries, a serviceable NBA starter, even though the Utah Jazz never benefited directly from that pick?

This is vaguely similar to questions digital marketers face around multi-touch attribution. If a user arrives at your site by clicking a paid search link in Google but does not purchase, and then a month later arrives at your site by typing your address into his browser and this time he does purchase, should that original paid search click-through get credit? If so, how much? It’s a little different because most NBA players would have been drafted eventually anyway; if Kevin O’Connor hadn’t picked Humphries, someone else would have, and we’d be wondering whether that person deserves credit.
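
The attribution analogy can be made concrete with a toy model. Below is a hedged Python sketch (the channel names and the $100 of revenue are invented for illustration; real analytics tools support far more sophisticated models than these three) showing how first-touch, last-touch, and linear attribution split credit for the same purchase:

```python
# A toy sketch of multi-touch attribution, for illustration only.
# The channel names and the $100 of revenue are invented.

def attribute(touchpoints, value, model="linear"):
    """Split `value` (e.g., revenue from one purchase) across the ordered
    list of `touchpoints` that preceded it, according to `model`."""
    n = len(touchpoints)
    if model == "first":     # all credit to the first touch
        credit = [value if i == 0 else 0.0 for i in range(n)]
    elif model == "last":    # all credit to the final touch before purchase
        credit = [value if i == n - 1 else 0.0 for i in range(n)]
    elif model == "linear":  # split credit evenly across every touch
        credit = [value / n] * n
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(zip(touchpoints, credit))

# The scenario from the text: a paid search click, then (a month later)
# a direct visit that converts.
journey = ["paid_search", "direct"]
print(attribute(journey, 100.0, model="first"))   # paid search gets all the credit
print(attribute(journey, 100.0, model="last"))    # direct gets all the credit
print(attribute(journey, 100.0, model="linear"))  # credit is split 50/50
```

In GM terms: under “first,” the GM who drafted the late bloomer gets full credit; under “last,” the team where he blossomed does; “linear” splits the difference.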

I can see arguments both ways. A GM who picks a player who only pans out later in his career might have correctly read the player’s potential, and we should reward that GM for his vision. But a GM’s job is to deliver concrete wins to his team via the draft, and a late bloomer does not help his cause. In case anyone is out there reading, leave me a comment: what do you think?

Pickup Basketball Purism

I tweeted about this last night, but 140 characters just wasn’t enough for me to state my case regarding the scoring in pickup basketball. (I only tackle the really important issues on this blog.)

I love pickup basketball. In fact, the widespread availability of pickup basketball is one of the best reasons to live in Utah. Not only do we have YMCA-like fitness centers in every town, but on any given weeknight or weekday morning there is an 87.9% chance that there are four churches where guys are playing ball within a one-mile radius of any given location along the Wasatch Front. I love that every Monday, Wednesday, and Friday morning at 6:00 AM I drive for two minutes and I’m at basketball. Same thing on Thursday nights. Oh, and sometimes I play during lunch at work. (Despite all of this, I’m pretty terrible.)

What I don’t love is keeping score by 1s and 2s. You know, what would normally be a two-point field goal in high school, college, NBA, or really any organized form of basketball becomes a one-pointer, and a three-pointer counts for two.

Here’s my argument:

  1. Basketball—real basketball—has what I consider to be a fairly simple scoring system. If it were, say, pickup figure skating, or even pickup tennis, I could see wanting to simplify the score-keeping. But honestly, how hard is it to credit each team with two points for any basket inside the three-point line, and three points for any basket outside it? Am I missing something here?
  2. More importantly, counting by 1s and 2s fundamentally changes the game. By making a three-pointer worth twice as much as a two, instead of 1.5x, you’re possibly incenting people to play outside; you’re giving them a good reason to play bad (i.e., not very fun) basketball. When a three is a three and a two is a two, the upside of jacking up a bunch of threes probably doesn’t outweigh the value of good ball movement and working for a decent shot inside. But when you’re counting by 1s and 2s, suddenly it might make more sense to play three or four guys around the arc and hoist up three-point tries all game. Three-pointer after three-pointer is great for the shooter(s) when he’s hitting . . . and completely annoying for everyone else. Everyone hates the guy who brings the ball up the floor and then calls his own number by pulling up for a three. I’m not saying people consciously decide to play differently when counting by 1s and 2s, but the possibility is there (and it doesn’t need to be; see point #1).
  3. Along these same lines, remember, there are no free throws in pickup basketball, so even if you’re counting by 2s and 3s in a pickup game, the incentive to shoot a lot of threes is already higher than it is in organized basketball. Let’s say I’m an NBA player who shoots 50% from inside the three-point arc and 40% outside of it. Some fans look at this and say: 40% * 10 three-point tries * 3 points = 12 expected points, while 50% * 10 two-point tries * 2 points = 10 expected points, so shouldn’t you always take the three? The answer is no, primarily because this faulty analysis ignores the fact that in organized basketball you are far more likely to get fouled and produce valuable free throws when shooting inside the three-point line (driving to the basket or helping to create shots for teammates), so your two-point tries are more valuable than they seem on the face of it. The possibility of creating free throws does not exist in pickup basketball, whether you’re counting by 1s and 2s or by 2s and 3s, so you’re already more incentivized to play outside than you normally would be; why make things even worse by increasing the value of a three-pointer unnecessarily?
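
The arithmetic in point #3, and the incentive shift described in point #2, can be double-checked with a few lines of Python; the shooting percentages and shot counts are the hypothetical ones from the text:

```python
# Verifying the expected-points arithmetic for a hypothetical shooter:
# 50% on two-point tries, 40% on three-point tries, 10 attempts of each.

def expected_points(attempts, pct, value):
    """Expected points from `attempts` shots made at rate `pct`, each worth `value`."""
    return attempts * pct * value

# Standard scoring (2s and 3s): a three is worth 1.5x a two.
threes_std = expected_points(10, 0.40, 3)  # 12.0 expected points
twos_std = expected_points(10, 0.50, 2)    # 10.0 expected points

# Pickup scoring by 1s and 2s: the "three" is now worth 2x the "two".
threes_pickup = expected_points(10, 0.40, 2)  # 8.0 expected points
twos_pickup = expected_points(10, 0.50, 1)    # 5.0 expected points

print(threes_std / twos_std)        # 1.2 -> the three edges out the two by 20%
print(threes_pickup / twos_pickup)  # 1.6 -> under 1s and 2s, the gap triples to 60%
```

The same shooter goes from a modest 20% edge for the three under real scoring to a 60% edge under 1s and 2s, which is exactly the distortion argued above.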

As you can tell, I’ve given this some thought. And maybe that’s because I’m too much of a purist; the NBA has been counting 2s and 3s since 1979, college ball followed in the 1980s, and the ABA had the three-pointer even earlier. It just seems silly to change a scoring system that works so well.

So now I am counting on you, all three of my blog readers (hi mom!), to tell me what I’m missing. Who invented counting by 1s and 2s and why did they do it? Do you have a preference and why? Did I miss something important?