Category: design

Dec 09

Five UX Research Pitfalls

I wrote this for UX Mag a while ago and it remains one of my all-time favorite writing projects. However, I never posted it in its entirety on my blog so here it is now. Enjoy!

More and more organizations view UX as a key contributor to successful products, connecting teams with end-users and guiding product innovation within the organization. Though it’s fantastic to see this transition happen, there are growing pains associated with becoming a user-driven organization. These are the pitfalls that I see organizations grappling with most often.

Pitfall 1: It’s easier to evaluate a completed, pixel-perfect product so new products don’t get vetted or tested until they’re nearly out the door.

Months into a development cycle and just days before the release date, you realize that the UI has serious flaws or missing logic. If you’re lucky, there is enough flexibility in the schedule to allow grumbling engineers to re-architect the product. More likely, though, the PM will push to meet the original deadline with the intent to fix the UI issues later. However, “later” rarely happens. Regardless, everyone wonders: how could these issues have been caught earlier?

The UI is typically built after the essential architectural elements are in place and it can be hard to test unreleased products with users until the very last moment. However, you can gather feedback early in the process:

  • Don’t describe the product and ask users if they would use it. In this case, you are more likely testing your sales pitch rather than the idea itself. If you ask users if they want a new feature, 90% of the time they’ll say yes.

  • Test with the users you want, not the users you already have. If you want to grow your audience with a new product, you should recruit users outside your current community.
  • Validate that the problem you are solving actually exists. Early in the design cycle, find your future users and research whether your product will solve their real-world problems. Look for places where users are overcoming a problem via work-around solutions (e.g., emailing links to themselves to keep an archive of favorite sites) or other ineffective practices (e.g., storing credentials in a text file because they can’t remember their online usernames and passwords).
  • Verify your mental models. Make sure that the way you think about the product is the same as your user. For instance, if you’ve been pitching your product idea to your coworkers as “conversational email” but your actual users are teenagers who primarily use text messaging, then your email metaphor probably won’t translate to your younger users. Even if you don’t intend to say “conversational email” in your product, you will unconsciously make subtle design choices that will limit your product’s success until you find a mental model that fits that of your users, not of your coworkers.
  • Prototype early. Create a Flash or patched-together prototype internally as soon as possible. Even if your prototype doesn’t resemble a finished product, you’ll uncover the major issues to wrestle down in the design process and build confidence in your direction. You’ll also have an easier time spotting the areas of the product that need animations or on-the-fly suggestions, which often go unscoped when the product is only explored in wireframes and design specs but can require significant engineering time.
  • Plan through v2. If you intend to launch a product with minimal vetting or testing, make sure you’ve written down and talked about what you intend for the subsequent version. One of the downsides of the “release early, release often” philosophy is that it’s easy to get distracted or discouraged if your beta product doesn’t immediately succeed. Upon launch, you might find your users pulling you in a direction you hadn’t intended because the product wasn’t fully fleshed out, or find yourself mired in weeks of bug-fixing and losing sight of the big picture. Once the first version is out the door, keep your team focused on the big picture and dedicated to that second version.

Pitfall 2: Users click on things that are different, not always things they like. Curious trial users skew the usage statistics for a new feature.

Upon adding a “Join now!” button to your site, you cheer when you see an unprecedented 35% click-through rate. Weeks later, registration rates are abysmal and you have to reset expectations with crestfallen teams. So you experiment with the appearance of your “Join now!” button by changing its color from orange to green, and your click rates shoot up again. But a few days later, your green button is again performing at an all-time low.

It’s easy for an initial number spike to obscure a serious issue. Launching a new feature into an existing product is especially nerve-wracking because you only have one chance to make a good first impression. If your users don’t like it the first time, they likely won’t try it again and you’ve squandered your best opportunity. Continuously making changes to artificially boost numbers leads to feature-blindness and distrustful users. Given all of this, how and when can you determine if a product is successful?

  • Instrument the entire product flow. Don’t log just one number. If you’re adding a new feature, you most likely want to know at least three stats: 1) what percentage of your users click on the feature, 2) what percentage complete the action, and 3) what percentage repeat the action again on a different day. By logging the smaller steps in your product flow, you can trace the usage statistics within all of these points to look for significant drop-offs.

  • Test in sub-communities. If you are launching a significant new feature, launch the feature in another country or in a small bucket and monitor your stats before launching more widely.
  • Dark-launch features. If you are worried that your feature could impact site performance, launch the feature silently without any visible UI and look for changes in uniques, visit times, or reports of users complaining about a slow site. You’ll minimize the number of issues you have to debug at the actual launch.
  • Anticipate a rest period. Don’t promise statistics the day after a release. You’ll most likely want to see a week of usage before your numbers begin leveling.
  • Test the discoverability of your real estate. Most pieces of your UI will have certain natural discoverability rates. For instance, consider temporarily adding a new link to your menu header for a very small percentage of your users just to understand the discoverability rates for different parts of your UI. You can use these numbers as a baseline for evaluating future features.
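To make the “instrument the entire product flow” point concrete, here’s a minimal funnel-logging sketch in Python. The event names, log format, and numbers are hypothetical — the point is simply that logging every step lets you see where the drop-offs happen, not just the headline click rate:

```python
from collections import defaultdict

# Hypothetical funnel steps for a new feature, in order.
FUNNEL = ["saw", "clicked", "completed", "repeated"]

def funnel_report(events):
    """Count unique users at each step and each step's share of the first step."""
    users_at = defaultdict(set)
    for user, step in events:
        users_at[step].add(user)
    base = len(users_at[FUNNEL[0]]) or 1  # avoid dividing by zero
    return {step: (len(users_at[step]), len(users_at[step]) / base)
            for step in FUNNEL}

# Hypothetical event log: (user_id, step) pairs.
events = [
    ("u1", "saw"), ("u1", "clicked"), ("u1", "completed"), ("u1", "repeated"),
    ("u2", "saw"), ("u2", "clicked"),
    ("u3", "saw"),
]
print(funnel_report(events))
# 3 users saw the feature, 2 clicked, but only 1 completed and repeated --
# the drop-off between "clicked" and "completed" is where to investigate.
```

In a real system the events would come from your analytics pipeline rather than an in-memory list, but the shape of the report is the same: one number per step, so a drop-off can’t hide behind a single healthy click-through rate.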

Pitfall 3: Users give conflicting feedback.

You are running a usability study and evaluating whether users prefer to delete album pictures using a delete keystroke, a remove button, a drag-to-trash gesture, or a right-click context menu. After testing a dozen participants, your results are split among all four potential solutions. Maybe you should just recommend implementing all of them?

It’s unrealistic to expect users to understand the full context of our design decisions. A user might suggest adding “Apply” and “Save” buttons to a font preference dialog. However, you might know that an instant-effect dialog where the settings are applied immediately without clicking a button or dismissing the dialog allows the user to preview their font changes immediately and saves the user from opening up the dialog repeatedly to make small font style tweaks. With user research, it’s temptingly easy to create surveys or design our experiments so study participants simply vote on what they perceive as the right solution. However, the user is giving you data, not an expert opinion. If you interpret user feedback at face value, you typically end up with a split vote and little data to make an informed decision.

  • Ask why. Asking users for their preference is not nearly as informative as asking users why they have a preference. Perhaps they are basing their opinion upon a real-world situation that you don’t think is applicable to the majority of your users (e.g., “I like this new mouse preference option because I live next to a train track and my mouse shakes and wakes up my screen saver”).

  • Develop your organization’s sense of UI values. Know which UI paradigms (e.g., Mac vs. Windows, Web vs. desktop) and UI values (e.g., strong defaults vs. heavy customization, transparency vs. progressive disclosure) your team prioritizes. When you need to decipher conflicting data, you’ll have this list for guidance.
  • Make a judgment call. It’s not often helpful to users to have multiple forms of the same UI. In most cases it adds ambiguity or compensates for a poorly designed UI. When the user feedback is conflicting, you have to make a judgment call based upon what you know about the product and what you think makes sense for the user. Only in rare cases will all users have the same feedback or opinion in a research study. Making intelligent recommendations based upon conflicting data is what you are paid to do.
  • Don’t aim for the middle ground. If you have a legitimate case for building multiple implementations of the same UI (e.g., language differences, accessibility, corporate vs. consumer backgrounds, etc.), don’t fabricate a hodgepodge persona (“Everyone speaks a little bit of English!”). Instead, do your best to dynamically detect the type of user situation upfront, automate your UI for that user, and offer your user an easy way to switch.

Pitfall 4: Any data is better than no data, right?

You are debating whether to put a search box at the top or the bottom of a content section. While talking about the issue over lunch, your BD buddy suggests that you try making the top search box “Search across the Web” and the bottom search box “Search this article” to compare the results between the two. You can’t seem to put your finger on why this idea seems fishy, though you can see why this would be more efficient than getting your rusty A/B testing system up and running again. Sensing your skepticism, your teammate adds, “I know it’s not perfect, but we’ll learn something about search boxes, right? I don’t see a reason not to put it in the next release if it’s easy.”

The human mind’s ability to fabricate stories to fill in the gaps in one’s knowledge is absolutely astounding. Given two or three data points, our minds can construct an alternate reality in which all of those data points make flawless sense. Whether it’s an A/B test, a usability study, or a survey, if your exploration provides limited or skewed results, you’ll most likely end up in a meeting room discussing everyone’s different interpretations of the data. This meeting won’t be productive and you’ll either agree with the most persuasive viewpoint or you’ll realize that you need a follow-up study to reconcile the potential interpretations of your study.

  • Push for requirements. When talking with your colleagues, try to figure out what you are trying to learn. What is the success metric you’re looking for? What will the numbers actually tell you? What are the different scenarios? This will help you determine the study you should run while also anticipating future interpretations of the data before running the study (e.g., if the top search bar performs better, did you learn that the top placement is better or just that users look for site search in the upper left area of a page?).

  • Recognize when a proposed solution is actually a problem statement. Sometimes someone will propose an idea that doesn’t seem to make sense. While your initial reaction may be to be defensive or to point out the flaws in the proposed A/B study, you should consider that your buddy is responding to something outside your view and that you don’t have all of the data. In this scenario, perhaps your teammate is proposing running the search box study because he has a meeting early next week and needs to work on a quicker timeline. From his perspective, he’s being polite by leading with a suggestion without realizing that you don’t have the context for his suggestion. However, after pushing him for what problem the above study will resolve, you can also help him think through alternative ways of getting the data he needs faster.
  • Avoid using UX to resolve debates. UX might seem like a fantastic way to avoid personal confrontation (especially with managers and execs!). After all, it’s far easier to debate UX results than personal viewpoints. However, data is rarely as definitive as we’d like. Conducting needless studies risks slowing down your execution speed and leaving deeper philosophical issues unresolved that will probably resurface. Sometimes we agree to a study because we aren’t thinking fast enough to weigh the pros and cons of the approach, and it seems easier to simply agree. However, you do have the option of occasionally saying, “You’ve raised some really good points. I’d like to spend a few hours researching this issue more before we commit to this study. Can we talk in a few hours?” And when you do ask for this time, be absolutely certain to proactively follow up with some alternative proposals or questions, not just reasons why you think it won’t work. You should approach your next conversation with, “I think we can apply previous research to this problem,” or “Thinking about this more, I realized I didn’t understand why it was strategically important to focus on this branding element. Can you walk me through your thinking?” or “After today’s conversation, I realized that we were both trying to decrease churn but in different ways. If we do this study, I think we’re going to be overlooking the more serious issue, which is…”
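When a study does move forward, it helps to agree up front on what difference would count as meaningful. As a sanity check for a click-through comparison like the top-vs-bottom search box above, a simple two-proportion z-test can be sketched in Python (all counts here are hypothetical):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: the top search box used by 120 of 2,000 visitors,
# the bottom search box by 90 of 2,000.
z, p = two_proportion_z(120, 2000, 90, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that a significant z-score only tells you the two placements performed differently for these users — it doesn’t tell you *why*, which is exactly the interpretation gap the “push for requirements” bullet warns about.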

Pitfall 5: By human nature, you trust the numbers going in the right direction and distrust the numbers going in the wrong direction.

Hours after a release, you hear the PM shout, “Look! Our error rates just decreased from .5% to .0001%. Way to go engineering team! Huh, but our registration numbers are down. Are we sure we’re logging that right?”

Even with well-maintained scripts, the most talented stats team, and the best intentions, your usage statistics will never be 100% accurate. Because double-checking every number is unrealistic, you naturally tend to optimize along two paths: 1) distrust the numbers that are going in the wrong direction and, more dangerously, 2) trust the numbers that are heading in the right direction. To make matters worse, data logging is amazingly error-prone. If you spot a significant change in a newly introduced user activity metric, 9 times out of 10 it’s due to a bug rather than a meaningful behavior. As a result, five minutes of logging can result in five days of data analyzing, fixing, and verifying.

  • Hold off on the champagne. Everyone wants to be the first to relay good news so it’s hard to resist saying, “We’re still verifying things and it’s really early, but I think registration numbers went up ten-fold in the last release!” Train yourself to be skeptical and to sanity-check the good news and the bad news.

  • QA your logging numbers. Data logging typically gets inserted when the code is about to be frozen. Since data logging shouldn’t interfere with the user experience, it tends not to be tested. Write test cases for your important data logging numbers and include testing them in the QA process.
  • Establish a crisp data vocabulary. Engagement, activity, and session can mean entirely different things between teams. Make sure that your data gatekeeper has made it clear how numbers are calculated on your dashboards to help avoid false alarms or overlooked issues.
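The “QA your logging numbers” point can be made concrete with a unit test. Here’s a sketch in Python, using a hypothetical in-memory `EventLogger` and a hypothetical registration flow — the names are invented for illustration, but the idea (assert that each user action emits exactly the events you expect, with the fields you expect) carries over to any analytics pipeline:

```python
# Hypothetical in-memory logger standing in for an analytics pipeline.
class EventLogger:
    def __init__(self):
        self.events = []

    def log_event(self, name, **fields):
        # A real implementation would ship this downstream; here we just record it.
        self.events.append({"name": name, **fields})

def register_user(logger, email):
    # Hypothetical registration flow that must emit exactly one event.
    user_id = hash(email) % 10_000
    logger.log_event("registration_completed", user_id=user_id)
    return user_id

def test_registration_logs_one_event():
    logger = EventLogger()
    register_user(logger, "todd@example.com")
    completed = [e for e in logger.events if e["name"] == "registration_completed"]
    assert len(completed) == 1, "registration must log exactly one event"
    assert "user_id" in completed[0], "event must carry a user_id field"

test_registration_logs_one_event()
print("logging QA test passed")
```

A test like this catches the classic failure modes — the event that fires twice, never fires, or drops a field after a refactor — before they become a week of data archaeology.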

One of the main tenets of user research is to constantly test the assumptions that we develop from working on a product day to day. It takes time to develop the skills to know how to apply our UX techniques, when our professional expertise should trump the user’s voice, or when to distrust user data. As a researcher, you are trained to keep an open mind and to keep asking questions until you understand the user’s entire mental picture. However, it’s that same open-mindedness and willingness to understand the user’s perspective that makes it easy to assume that because a user’s perspective makes sense, it should also justify changes to our product design. Or, because we are so comfortable with a particular type of UX research, we tend to over-apply it to our team’s questions.

While by no means a complete list, I hope these five pitfalls from my personal experience will be relevant to your professional lives and perhaps provide some food for thought as we all strive to become better researchers and designers.

 

Mar 08

why the symphony needs a progress bar

Progress bar at the symphony

(photo courtesy of Santa Barbara Choral Society)

About three years ago, my work-life balance started to improve – start-up sleep deprivation was no longer a constant norm. I didn’t have enough time to restart violin lessons but season tickets to the San Francisco Symphony? Yup, I could swing that.

I bought tickets for myself and my husband, Todd, a relatively new concert-goer. But after a few shaky experiences, I was worried that Todd would back out of a subsequent season subscription. I started doing anything I could to avoid the, “Oh my god – is this only the first movement?” mid-concert terror. Seeing the experience from a newbie’s perspective, my UX instincts kicked in and I started jotting down the, “If only the symphony had…” moments. Three years later, here’s my list:


If Only the Symphony Had…


1. A Progress Bar

Even the most devout classical music listener has, “OMG is this over yet?” moments. When you’re not responding to a performance, the experience becomes torturous if you don’t know whether you’ve endured 5% or 95% of the piece. A progress bar would make a world of difference. Nearly every other performance genre has accompanying scoreboards, screens, tickers, or subtitles to track the event’s progress. A JumboTron might be inappropriate but a few progress lights on the conductor’s podium would really help.


MTT Talks

2. People Who Talk

Half of the fun of following a sports team is getting to know the players. At the symphony, you regularly have a two-hour experience with over a hundred performers with absolutely no words exchanged. I love encores because the artist announces the piece they are about to play and I can suddenly match a voice to a performer. Then they become real. I’d love for the conductor or soloist to provide a 3-4 sentence introduction, “Thank you for joining us this evening. Tonight we will be performing…” It’s only natural that the audience feels more engaged when they hear a performer’s voice. In the three years I’ve attended the San Francisco Symphony, I’ve never heard Michael Tilson Thomas talk!


quiet candy

3. Quiet Candy

The symphony season is almost perfectly aligned with head cold season – fall through spring. No one wants to cough during a performance but when that annoying tickle happens, you can only hold your breath and writhe in agony. I’m sure Ms. Stewart would endorse a hospitable offering of wax paper-wrapped candy in the entryway as both a welcoming gesture and a potential quick-fix to hold you over until you can make a mad dash to the water fountain.


4. A tl;dr opener

My typical symphony experience started with leaving Meebo a little early without dinner and finding myself starving in a 101-N traffic jam with a spouse who is thinking, “Wait a second, if we miss the symphony, we can skip the concert and get pizza instead!” We have never missed a performance, but we sprinted from the parking lot on a few occasions. With seconds to spare, I’d see Todd crack open his program to find a dense Ph.D. thesis on the first piece. Two or three sentences in, the lights would dim and suddenly Todd was grasping his dark, useless program notes with no idea of what he was listening to.

Here’s the San Francisco Symphony program note written for Messiaen’s Oiseaux Exotiques (the full version runs 11 pages):

In all of the 2,000 words, the title, “Exotic Birds”, is never translated! Assuming Todd made it through the first paragraph before the music began, he’d know the commissioner, dedication, and all of the locations and conductors who have played this piece of work since 1956. This is not helpful information for someone who is going to listen to Messiaen for the first time!

The first paragraph needs to be oriented to a 30-second, the-lights-are-dimming panic scan. Here’s what I wish preceded the lengthy write-up:

Oiseaux Exotiques (“Exotic birds”), 1956
Duration: 16 minutes (no movements)
Composer: Olivier Messiaen (1908-1992), France
Period: 20th century
Influences: Roman Catholicism, birds, colors, Japanese music, landscapes
Instruments: Piano and small orchestra
Listening notes: Forty-eight birdsongs are played throughout this piece. Messiaen was not familiar with American birds so many of the birdsongs such as the Cardinal, Wood Thrush, Prairie Chicken, Oriole, and Finch were exotic to his ear.


concert notes

5. Program notes on the fold

While I’m harping about program notes, I’ll also mention a personal pet peeve. I dread the moment when I accidentally close my program and realize that I’ve lost my place in the concert notes. I’ll need to carefully open and flip through pages to locate the notes again without squeaking a chair or elbowing my neighbor. I know that it might make economic sense to bury the program notes amid diamond cocktail ring advertisements, but I’d really appreciate a program that naturally falls open to the concert details. If the advertising dollars can’t be missed, then offer a lightweight $.99 iPhone app with white-on-black text (to avoid glowing screens) that can be flicked through in the dark.


sing along

6. Programming for beginners

When you launch a new product, you inevitably have a few crazy, very vocal early adopters (why don’t you support Opera’s browser yet?) that you have to selectively ignore if you want a product that appeals to a wider audience. The symphony is the same. About half of the audience attends for a pleasant symphony-going experience. A small minority will be hard-core educated symphony folks who needle, “Why haven’t we heard more atonal music by post-Janáček Slavic composers this season?” The remainder are the musically tepid spouses and children who have been dragged to the hall and are just trying to stay awake and to clap at the right times.

To sustain the symphony, there needs to be beginner programming at every concert – even if it’s just a 3-minute warm-up to perk up newbie ears with an, “Oooh – I’ve heard of this!” moment. Pre-concert talks are fantastic, but I’m battling hectic schedules and a seatmate who (though he’d graciously never admit it) probably wants to spend less, not more, time at the symphony. However, it’s these seatmates who determine whether I repurchase symphony season tickets and who will probably determine whether the symphony thrives long term.


I can imagine that in two hundred years people will attend rock concerts performed by historical cover bands and wonder, “Why do they require that we stand for the entire concert?” Or, “If the concert really begins at 11pm, why do they print 10pm on the tickets?” The symphony was intended for entertainment and our rigid adherence to its nineteenth century form has made it increasingly difficult to appreciate. A progress bar is long overdue!

Feb 16

armed and dangerous in silicon valley

Last weekend I took an Adobe InDesign course at BAVC and was surrounded by Sales & Marketing start-up folks taking classes so they didn’t have to bother their busy design and engineering teams with small requests. I had to restrain myself from recruiting every single one of them (especially the one who brought donuts in the morning).

Becoming armed and dangerous in Silicon Valley is easier than most people realize. There are amazing tech classes in the Bay Area that don’t require technical degrees or taking a sabbatical – they are just a little hard to find:

  1. BAVC – offers an exhaustive selection of video production courses as well as Adobe, HTML, CSS, JavaScript, Flash, color, and typography workshops. If you’re a start-up, you might even qualify to take classes for free.
  2. TechShop – offers electronics, machining, and other workshop classes. Right now, Autodesk provides Autodesk Inventor workshops for free (for members). Or prototype that electric gizmo you’ve been dreaming about with the Arduino series. The TechShop’s laser cutting and etching course is far and away their most popular course. The TechShop also has three locations in San Francisco, Menlo Park, and Mountain View.
  3. Stanford Continuing Studies – offers nearly everything from language courses, liberal arts, writing workshops, and lecture series to professional development. The Personal & Professional Development Series offers financing, leadership, PHP, entrepreneurial, public speaking, and Web design courses. I took a public speaking course with Matt Abrahams. The writing workshops are also highly regarded. It’s also worth mentioning that many of the Art and Archaeology instructors offer international trips to excavate or study art in person.
  4. UC Berkeley Extension – offers back-end computer science courses such as System Administration and Networking as well as front-end classes like Web development and graphic design. Some classes are available online.
  5. California College of the Arts – offers Web and graphic design classes such as Adobe Creative Suite bootcamps, plus other hard-to-find courses such as creating interactive ePubs for the iPad, Cocoa Touch programming, and how to use a Wacom tablet.
  6. BioCurious – this up-and-coming Biotech workspace in Sunnyvale offers a complete working laboratory. Learn how to do genome sequencing and cloning with their weekend workshops and then start your own genomic experiment – no prerequisite experience necessary!
  7. SFSU Extension – SFSU’s quarterly programming and design classes include jQuery, HTML5, Mobile UI design, ActionScript, and WordPress. Many of the classes are available in weekend workshops.
  8. Digital Media Playground – teaches digital photography and video production so you’ll no longer feel guilty about carrying around a camera you don’t know how to use. It’s also one of the few places that regularly teaches food photography.
  9. The Crucible – prep for Burning Man in no time. These friendly folks offer every Industrial Arts class you can dream of including welding, hula hooping with fire, neon sign making, blacksmithing, and electronics.

If someone’s snickered at your purple Comic Sans e-mail signature, consider a typography class. If you are a Project or Product Manager who isn’t totally fluent in geekspeak, look at the Berkeley, SFSU, and Stanford computer science courses. If you are a Sales or Marketing professional who wants to tweak brochures for conferences or start a company blog, take a WordPress, HTML, Photoshop, or InDesign class. And if you’re a hardcore computer geek, maybe you crave working with something tangible – you’ll love the TechShop and Crucible.

Enjoy!
-Elaine

Jan 01

News Round-Up

Todd and I had two holiday goals that couldn’t be more different. After a month of late nights at the chocolate factory, Todd wanted to zone out on a beach. However, beach chairs make me twitchy. We ended up relaxing on Vieques Island during the day and then kayaking around the BioBay at night – perfect. I charged my mobile batteries to 100% before donning flip flops and then headed to the beach, where Todd plugged in his headphones and got to his sunny meditation. Occasionally, the beach surf sounds were interrupted by Todd’s iPhone buzz. He’d pull his iPhone out of his pocket only to say, “Wait a second… this email is from you!” as I’d just sent him all of the news articles that I thought he was missing. Needless to say, I had a lot of reading time this week. Here are some of the interesting articles I wanted to share. Happy reading!

1. The Touchy-Feely Future of Technology (NPR)

If you haven’t heard of Bill Buxton, read the above. Then head over to see all of Buxton’s papers and research here: Bill Buxton Papers. Buxton is one of the most prolific HCI researchers out there and has been a pivotal figure since the ’80s. Looking through his research is like looking into the future and waiting for technology to catch up.

2. What Does Your Brand Say About You (Washington Post)

A brand is more than Marketing veneer. It’s felt throughout the entire culture and operation.

  • Long lines = “They don’t care about my time”
  • Rush off the phone = “They rush product dev too”
  • Strict policies = “Inflexible”
  • Outdated website = “Outdated ideas”
  • Unexciting messaging = “Boring product”

3. Volkswagen Silences Work Email After Hours (Washington Post)

To help employees maintain a better work/life balance, Volkswagen and others have agreed to stop sending company emails outside work hours.

I love this. There are definitely people who handle their email best at midnight or 5am, which means it’s inevitable that some unlucky recipient is going to feel stressed before falling asleep or while getting ready to head out the door. Most of the time an email isn’t even that stressful in the long run, but receiving it in a setting where you can’t do anything immediately makes it worse. There are always exceptions, but I love the idea of preventing email after normal work hours so team members can officially decompress out of the office.

4. Online Shopping: Better for the Environment? (LA Times)

Whew – I feel a tinge better about ordering my recent fix of gummy bears via Amazon prime now. Just make sure you recycle the box.

5. Outsourcing Resolutions (WSJ)

“Having someone you love tell you how to become a better person could be terrifying… Who better to tell us how to improve than someone who knows us well?”

Years ago, Todd floored me when November 1st rolled around and he said, “It’s November? I only have 60 more days to complete my resolutions!” I’ve kept New Year’s resolutions ever since. This year, inspired by this article, we wrote each other’s New Year’s resolutions to share on December 31st. Then, I decided to jot down what I would have said for my own resolutions to compare with Todd’s. It resulted in good dialogue and further goal refinement.

In the end, I realized that this is how performance reviews and personal annual goals should feel. A boss/mentor/trusted peer thinks about their three goals for you based upon their perspective, you come up with your three, and then there’s a conversation to reconcile and brainstorm together. Which leads me to the next article…

6. Everything That’s Wrong with Performance Reviews (Washington Post)

Performance reviews fail because they are heavy-handed, bureaucratic, and a “dysfunctional pretense” that is an obstacle to having a real conversation. (Also see WSJ’s Get Rid of the Performance Review from 2008). By pairing performance reviews with pay, the employee thinks their review determines their pay when it is likely governed more by the market and internal budgets. In addition, performance reviews reinforce the manager and subordinate relationship and focus on past mistakes instead of planning for performance in the future.

7. How To Have a Tough Conversation (Chicago Tribune)

Just a few good tips on having hard conversations: reverse your thinking, help the conversation feel safe, and define goals for the conversation. It’s intended for the professional setting, but I probably need it most for coping with phone chains. AT&T and airline customer service bring out the very worst in me. If they can’t locate my lost luggage or understand my issue within five minutes, oh! my blood boils!

8. Haters Are Going To Hate This Story (NPR)

Quick rundown of haters online and in music including, “if you have haters, you’re doing something right” and advocating for a “don’t like” button.

9. Creating Magic Moments for Customers (Washington Post)

Craft the story you want users to tell that differentiates you from your competitors. Unexpected + delight = magic.