How to efficiently work through survey feedback

By Ryan Walker
June 6, 2022

My survey feedback analysis process was born from the fact that many of the Startmate products I work on are still in the MVP stage, and I will have aggressively acquired a variety of users via every organic/DTC method in the book. (I’ve got everyone from my aunt and uncle to my postie and his dog using the product.) This raises the question: whose feedback should I prioritise? Who is actually best suited to my product? Which market is my product the best fit for?

The problem is exacerbated when feedback processes fall victim to information overload. We want as much feedback as possible, but once it has come in, we don’t know where to start unpacking it, and even worse, when it comes to sharing it, we overshare with no overarching narrative to tie the information together.

With all this context in mind, the following method is how I efficiently work through survey feedback and craft a story that highlights the key information.

  1. Trace the lines I want to cut. I.e. Asking the right questions.
  2. Cut to find the outliers. I.e. Who loves our product and why?
  3. Have an opinion. I.e. Articulating our observations.
  4. Tell your story, own the narrative. I.e. Effectively communicating back to stakeholders.


Trace the lines I want to cut

No survey will be perfect or give you better insight than talking to customers will, but I believe this method can be used to refine how we approach feedback analysis.

The most important step I can take before sending out a feedback survey is to note the specific knowledge it should provide: the purpose of the product, what success metrics or hypotheses I have, and what I need to quantify to understand whether I’ve achieved them.

The most common metric we use at Startmate is NPS (Net Promoter Score).* In short, the purpose of NPS is to determine the likelihood that an individual would be a promoter of a program or product. For context, at Startmate we aim for an NPS of 75+ once a program is beyond the MVP stage.

NPS by itself is only really good for external stakeholders and leaders to get a snapshot of how a program or product is tracking. The problem is that it doesn’t tell me what is going on or why, which means we need to add additional questions to help refine our analysis process.

Measure twice, cut once

Every survey should have one to three categorical questions that allow us to cut and segment the information to help decipher the feedback. The questions used should inform the answers to our success criteria.

Disclaimer: these categorical questions are additional to the typical feedback questions we would ask in our survey.**

One categorical question should relate to the user’s product experience. NPS is a good example of a categorical product-experience question: although we ask people to respond on a scale of 0-10, there are really only three categories, which gives us a good place to start segmenting the data later on.

  1. 😍 Promoters (9-10)
  2. 😐 Passives (7-8)
  3. 😕 Detractors (0-6)
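To make the segmentation concrete, here is a minimal sketch in plain Python (the example scores are made up) of how a 0-10 answer maps to the three categories:

```python
# A minimal sketch: map a 0-10 NPS answer to its category.
def nps_bucket(score: int) -> str:
    if score >= 9:
        return "promoter"   # 😍 9-10
    if score >= 7:
        return "passive"    # 😐 7-8
    return "detractor"      # 😕 0-6

scores = [10, 9, 7, 4, 8, 10, 6]          # made-up example responses
buckets = [nps_bucket(s) for s in scores]
# ['promoter', 'promoter', 'passive', 'detractor', 'passive', 'promoter', 'detractor']
```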

Inspired by Superhuman founder Rahul Vohra, another question we experimented with for the MVP of the Startmate Founder's Fellowship was: “How would you feel if you could no longer use the product?”

  • Very Disappointed
  • Somewhat Disappointed
  • Not Disappointed

The reason I like this question is the framing. Instead of a push question, such as NPS, where we ask the user whether they would promote our product (a question that is fundamentally idealistic and would fail the Mom Test***), this method frames the question as a pull: it metaphorically pulls the rug out from under the user to see how they would feel, and gauges that response.

“It’s not anyone else’s responsibility to show us the truth. It’s our responsibility to find it. We do that by asking good questions,” says Rob Fitzpatrick in his book The Mom Test.


Cut to find the outliers

“Segment to find your supporters and paint a picture of your high-expectation customers,” writes Rahul Vohra in his First Round Review article.

We’ve traced our lines, the survey has been sent out, and the feedback has come in. Time to start cutting! The first place to start is with those categorical questions above: group users by their shared experience score (i.e. promoters, passives, and detractors).

Start by measuring the percentage of users who were promoters (gave an NPS of 9-10) or, if you’re using Rahul’s method, those who selected ‘very disappointed’. Rahul suggests that companies that struggled to find growth almost always had less than 40% of users categorised as promoters or responding ‘very disappointed’. If all has gone well with the user test, we’ll see a healthy 40%+ of users in the promoter group, and we can shift our focus to figuring out how to get the rest of the fence-sitters (passives) into the promoter camp. This is where qualitative feedback becomes important.
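Continuing the earlier sketch, the 40% check is quick to automate (the numbers are illustrative, not a real benchmark implementation):

```python
from collections import Counter

def promoter_share(buckets: list[str]) -> float:
    """Fraction of respondents who landed in the promoter bucket."""
    return Counter(buckets)["promoter"] / len(buckets)

share = promoter_share(buckets)   # `buckets` from the previous sketch
print(f"{share:.0%} promoters")   # 43% here, above the 40% benchmark
```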

What type(s) of people love the product?

Let’s have a more thorough look into our promoter group. What type(s) of people love the product? This should be relatively straightforward, but here are some good starting questions (see the sketch after this list).

  • What occupation do they work in?
  • What industry is their company in?
  • What was their usage of the product/platform? (Assuming you have relevant data for this.)
  • What stage was the user at before and after using the product? (This can be an indicator of a group of users who experienced what is called the Hero’s Journey whilst using the product.)
  • Age, gender, etc. There are countless examples.
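If the responses live in a spreadsheet export, these cuts are quick to script. A sketch, assuming each promoter’s response is a dict whose field names (occupation, industry, usage_days) are hypothetical stand-ins for whatever your survey actually captured:

```python
from collections import Counter

# Hypothetical promoter records; the field names are stand-ins for
# whatever categorical data your survey or onboarding actually captured.
promoters = [
    {"occupation": "founder",  "industry": "fintech", "usage_days": 14},
    {"occupation": "engineer", "industry": "fintech", "usage_days": 3},
    {"occupation": "founder",  "industry": "health",  "usage_days": 21},
]

# Count each attribute value within the promoter group to surface clusters.
for field in ("occupation", "industry"):
    print(field, Counter(r[field] for r in promoters).most_common())
```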

If there are significant groups in this segment, make a note of these outliers, as they will come in handy later. There's also a good chance there won't be any significant groups. Don't worry: this is something we can optimise for in future product tests and surveys.

And, why do they love the product?

We now want to look through that promoter group and read the qualitative responses to our questions (as mentioned above, these will differ from product to product). The overarching goal is to understand why promoters love the product. This can be one of the more manual parts of the process, so to avoid being overwhelmed I like to use a tally system.

I put the survey question being answered at the top of the page and then read each response. As I’m reading, I bullet-point the themes that show up, and each time a later response relates to an existing theme, I add a tally.

For example:

“What would make the product better for you?”

  • Faster support communication  IIII I
  • Clarity of onboarding instructions  II

Quite often I’ll read a response and be unsure whether it fits an existing theme. In that case, I make a new theme for the point; later, when more context is available, I can merge it into another theme or make it a sub-theme under an existing one.

Once I’ve completed this process, I add up my tallies and rank them from most to least. But before drawing any conclusions, I need to go through the passive group and repeat the process.
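The paper tally digitises cleanly too. A minimal sketch, assuming you hand-code one theme tag per response as you read (the tags here mirror the example above):

```python
from collections import Counter

# One hand-coded theme tag per qualitative response (tags are illustrative).
theme_tags = [
    "faster support communication",
    "clarity of onboarding instructions",
    "faster support communication",
    "faster support communication",
]

# Rank themes from most to least mentioned, mirroring the tally sheet.
for theme, count in Counter(theme_tags).most_common():
    print(f"{theme}: {count}")
```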

Passive users give better insight into our product than promoters.

This may be a controversial opinion but I strongly believe that without the contrast of haters and fence-sitters to our promoters and evangelists, we will never truly understand what makes our product great. In the wise words of Ted Mosby in How I Met Your Mother: “Every night can’t be legendary. If all nights are legendary, no nights are legendary.”

By going through our promoter group’s feedback, we have already benchmarked what users love about the product. Now we follow the same process of analysing the what and why for our passive user group, comparing the data against the promoter group to see what made them rate their experience differently. Ideally, this leads us to our best product wins.
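A sketch of that comparison, normalising tallies to group shares so unequal group sizes compare fairly (the theme names are hypothetical):

```python
from collections import Counter

def theme_shares(tags: list[str]) -> dict[str, float]:
    """Theme frequency as a share of a group's responses."""
    return {t: c / len(tags) for t, c in Counter(tags).items()}

promoter_tags = ["founder connection", "explorer group", "founder connection"]
passive_tags  = ["founder connection", "onboarding confusion", "onboarding confusion"]

p, q = theme_shares(promoter_tags), theme_shares(passive_tags)
for theme in sorted(set(p) | set(q)):
    print(f"{theme}: promoters {p.get(theme, 0):.0%} vs passives {q.get(theme, 0):.0%}")
```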

Don’t be alarmed if, for the most part, we don’t see anything too different. Most people will give the same feedback, but they may have a stronger disposition towards pessimism in the way they respond to questions (which isn’t helped if you’re prone to a negativity bias). What we want to look out for are the outliers:**** groupings of feedback that are significantly different from our promoter group.

Significant differences are subjective and will vary from product to product, but someone who’s spent enough time on a product should know when they see themes emerging that contrast with the promoter group. I suggest working through the questions below to get the ol’ noggin working.

  1. In what areas did the passive group have a significantly different experience from the promoter group?
  2. Why did they have a different experience?
  3. What would push someone off the fence (passive) and into the promoter camp?

Hopefully, by this point, conclusions and theories are starting to form. Make note of them under each group of feedback to deal with later. We want to work through this efficiently and keep going through each category.

By the end, if we haven’t found any significant experience differences between the two groups, I suggest grouping the passives and promoters differently to see if something significant emerges. If that fails, you may not have enough user feedback, or you may need to change the persona definitions of your users, which usually requires more information from them.

What do I do with my detractor user group?

Some will advise that you politely disregard those who are detractors of your product; they are so far from loving you that they are essentially a lost cause. I agree with this in principle, but I prefer to put this group to the side until after I've finished my analysis. I don’t want to be ignorant of a potentially poor user experience, especially when I can’t observe the user in the digital world.


Have an opinion

We’ve cut our feedback into groups and understood which users we have and why they love our product. We probably have a few theories at this point, based on the significant statistical differences we’ve observed between our promoter and passive user groups. The next step is to turn these into informed opinions and actions that we want to implement to improve the product.

Start by articulating your observation as a statement. For instance:

‘The level of founder connection and strength of their explorer group was the most important deciding factor to a fellow rating their experience highly.’

Now, using your data observations from the last step, provide evidence to back up the statement, for example percentage comparisons or differences in the number of users who reported an issue. This becomes:

‘The level of founder connection and strength of their explorer group was the most important deciding factor to a fellow rating their experience highly.’

  • 17/21 promoters said the connection to other Fellows was their favourite part of the program, with only 6/21 saying they had an average or worse experience with Explorer groups.
  • In comparison, 21/25 passives said connecting with founders was the best part of the fellowship, but a much higher proportion, 15/25, said they had an average or worse experience with Explorer groups.

And lastly, to drive the point home, use quotes to reinforce the reader’s belief in the theory.

"My Explorer group has been a huge part of why I enjoyed the SFF1. Being surrounded by people in the same stage and mindset is very helpful." — Promoter respondent


Tell your story, own the narrative

In my last blog, I mentioned that when working in a decentralised team, as I do at Startmate, effectively communicating with team members and leaders is a necessary skill. Unfortunately, a bunch of opinions in a Google Doc won’t get you anywhere. Rather, purposefully owning the narrative and telling the story you want to tell is how we give magic to digital words and lead.

Effectively communicating your feedback analysis is a lot easier when we remember that, in most instances, the reader will not have had first-hand experience working with this set of users, so it’s best to craft your report around these questions:

  • What impression do we want to convey after reading the report?
  • If a reader can only take one thing away from the report, what would we want it to be?
  • What actions (if any) will we make from this report?
  • Optional: How will we keep ourselves accountable?

At the end of the day, a fancy report with a bunch of stats and pull-out quotes won’t convey the message by itself. It’s only when we stand up and deliver a compelling story that we have a chance at exciting our team and successfully sharing our learnings.


Some additional context

*To calculate your Net Promoter Score, ask your users, on a scale of 0-10, ‘How likely is it that you would recommend [product] to a friend or colleague?’

Respondents are grouped as follows:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, in turn fuelling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.

In a new column beside the NPS responses, score each user as follows:

  • If user’s NPS is 0-6 = -100
  • If user’s NPS is 7-8 = 0
  • If user’s NPS is 9-10 = 100

Average the column to get your overall NPS.
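As a quick sanity check, the same calculation in a few lines of Python (the example scores are made up):

```python
def nps(scores: list[int]) -> float:
    """Score each 0-10 answer as -100 / 0 / +100 and average, per the formula above."""
    def points(s: int) -> int:
        if s >= 9:
            return 100    # promoter
        if s >= 7:
            return 0      # passive
        return -100       # detractor
    return sum(points(s) for s in scores) / len(scores)

print(nps([10, 9, 9, 7, 8, 4, 10, 6, 9, 9]))  # 40.0
```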

**Ideally I would ask the users categorical questions in their onboarding that I can tag to their feedback to give me more context, and as an added benefit, let me ask fewer questions in the final feedback survey. This should result in a better response conversion rate and enable me to focus on questions that let me benchmark a user’s progress over a timeframe.

***You shouldn’t ask your mom whether your business idea is a good one. Your mom will lie to you (just ‘cuz she loves you). It’s a bad question. Even the question of NPS falls in this trap as it invites everyone to lie to you at least a little. It’s not anyone else’s responsibility to show us the truth. It’s our responsibility to find it. We do that by asking good questions.

****“An outlier is an observation that lies an abnormal distance from other values in a random sample from a population.” — NIST/SEMATECH

