
Gauging Customer and Player Input On Games


I originally posted this on my personal blog, but I thought the information might be useful to others. If anyone has any questions or comments, please leave them. I would be interested to see what others have to say about gauging player input.

NPS is a simple tool that can be turned into a very powerful metric for measuring a game’s performance. That’s a broad statement, but it’s true. The best part is that we already have the tools to do this: star reviews on the mobile markets are already gathering the information for us. It seems like a very ‘duh’ thing to say that star ratings mean something, but I would argue they aren’t really being utilized properly.

First, let me start by explaining NPS.

“Net Promoter Score” is a customer loyalty metric developed by (and a registered trademark of) Fred Reichheld, Bain & Company, and Satmetrix. It was introduced by Reichheld in his 2003 Harvard Business Review article “The One Number You Need to Grow”. NPS can be as low as −100 (everybody is a detractor) or as high as +100 (everybody is a promoter). An NPS that is positive (i.e., higher than zero) is felt to be good, and an NPS of +50 is excellent.

NPS is such an important metric because it’s one of the first that quantifies the customer experience and, more importantly, the value of word-of-mouth exposure. Businesses have always known that word-of-mouth advertising is the best advertising anyone can get, but we’ve never been able to easily place a dollar value on it. We’ve also had very little insight into how negative word of mouth can impact a product or business image. In short, NPS lets us quantify the customer experience and how it affects our bottom line. That’s a really powerful metric when it’s put into perspective, and we really should be paying attention to it.

Did you know that it typically takes five good responses to negate a single negative one?

NPS is typically measured on a 0-10 rating scale, and in some cases (depending on the business) a 1-5 scale. On the typical 0-10 scale, a 9 or 10 is a promoter, a 7 or 8 is a passive, and anything below that is a detractor. Promoters will (obviously) promote the product and the business, and they tend not to go anywhere. Detractors will bad-mouth the product or business and jump ship as quickly as possible. Both promoters and detractors tend to be vocal and can offer great feedback. Passives are harder to gauge. They tend not to be vocal, and there is a bit of an art to raising their allegiance to a product or business. They are also opportunists: they will stick with a product or business because they are comfortable, but if something better comes along they will jump ship.

As the quote above states, NPS can vary from −100 (all detractors) to +100 (all promoters). The metric isn’t really a percentage, although some companies display it as one to make it easier to conceptualize. The score subtracts the percentage of detractors from the percentage of promoters. So, if a product receives 5 promoters and 3 detractors out of 8 responses, the overall score would be (5 − 3) / 8, which works out to +25. Some companies will simply say “2 out of 8” (or a percentage of promoters, or a happiness rating, out of 8 total responses) to make the metric easier to grasp. I typically measure the metric this way as well.

NPS is typically measured with two or three questions:

  1. How likely are you to recommend this product to friends/family/colleagues?
  2. (Optional) How likely are you to recommend this company?
  3. Why?

The first and third questions are the important ones; the second is typically thrown in for good measure to rate the company as a whole. The first question boils everything down to something easy and intuitive: it measures the whole package of customer happiness. It’s unique because it offers very specific insight while being easy enough to answer to garner more responses. The more responses, the more accurate the results (which is what NPS is designed to offer). The third question completes the feedback loop and gives the customer a chance to explain why they are happy or upset. Anything above a score of 0 isn’t too shabby, anything above 50 is typically considered great, and anything below 0 needs to be examined closely and fixed.
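To make the arithmetic concrete, here is a minimal Python sketch of the scoring described above, assuming survey answers on the standard 0-10 scale. The function name and the sample responses are purely illustrative, not from any real survey.

```python
def nps_from_survey(responses):
    """Compute NPS from 0-10 "How likely are you to recommend?" answers.

    9-10 counts as a promoter, 7-8 as a passive, and 0-6 as a detractor.
    The result ranges from -100 (all detractors) to +100 (all promoters).
    """
    if not responses:
        return 0.0
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# The example from above: 5 promoters and 3 detractors out of 8 responses.
print(nps_from_survey([10, 9, 9, 10, 9, 2, 4, 6]))  # 25.0
```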

How does that relate to star reviews though?

Game developers obviously aren’t going to send customers a questionnaire, and very few games have a system in place to do this (mostly MMOs or social games). Still, the process has to be friction-free for the customer to participate, and that’s where we have it easy. App stores already engage the customer and ask that magical question for us. The very act of having a star rating system is basically asking, “Would you recommend this app?” The stars ask the question while the comments close the feedback loop. As application developers, we have the luck of having some of the most vocal and responsive customers; compare app reviews to just about any other product and the response rate is typically much higher.

So how do we boil those star ratings down to an NPS? That’s easy enough. Businesses already use an established system for 1-5 scale ratings, and it translates directly to 5-star reviews: 5 stars is a promoter, 4 stars is a passive, and 3 stars or lower is a detractor. The comments close the feedback loop and explain why the customer rated the app the way they did.
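Here is what that star-to-NPS translation could look like in Python, as a rough sketch assuming you already have the per-star review counts from a store dashboard (the function and the dictionary layout are my own illustration, not anything the stores provide):

```python
def nps_from_stars(star_counts):
    """Turn 1-5 star review counts into an NPS-style score.

    Mapping used in this post: 5 stars = promoter, 4 stars = passive,
    3 stars and below = detractor.
    """
    total = sum(star_counts.values())
    if total == 0:
        return 0.0
    promoters = star_counts.get(5, 0)
    detractors = sum(star_counts.get(s, 0) for s in (1, 2, 3))
    return 100.0 * (promoters - detractors) / total
```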

Let’s use Jetpack Joyride as an example, mostly because it’s one of my favorite games. Specifically, I’m using the Android version, though an accurate analysis would use all versions of the app across all ecosystems (segmented markets like the Apple App Store, Google Play, and the Microsoft Store sometimes require independent gauging). At the time of writing, Jetpack Joyride has an average rating of four and a half stars (out of five), broken down as:

  • 321,905 five star reviews: Promoters
  • 40,234 four star reviews: Passives
  • 18,862 three star reviews: Detractors
  • 7,119 two star reviews: Detractors
  • 23,915 one star reviews: Detractors

That means Jetpack Joyride has 321,905 promoters and 49,896 detractors (you can’t please everyone). That leaves it with roughly a 66% NPS, which is considered really great! Keep in mind, anything above 0% is trending in the right direction.

Let’s compare that to an app called Flight Track 5. I specifically picked this app because its ratings are a bit deceiving. At the time of writing it has a 3-star average with a total of 64 responses. Those responses break down to:

  • 26 five star reviews: Promoters
  • 3 four star reviews: Passives
  • 3 three star reviews: Detractors
  • 5 two star reviews: Detractors
  • 27 one star reviews: Detractors

That means Flight Track has 26 total promoters and 35 detractors. Yikes! That gives Flight Track a promoter score of -14%. That is negative fourteen. Remember, NPS scores swing from -100 to +100; the percent sign is added just to make things easier to conceptualize and doesn’t mean much. Looking at this data, Flight Track has a major reputation problem, and I’m sure the developers could read the comments to find out why. Those comments complete the feedback loop and offer great insight into what the developers of Flight Track need to improve. While every customer response may not offer specific insight, if a lot of people complain about the same thing, improving that one thing would likely drastically change customer perception. The comments are a good place to start.
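As a quick sanity check, feeding the review counts quoted above through the nps_from_stars() sketch from earlier reproduces both scores:

```python
# Reusing nps_from_stars() from the sketch above, with the counts quoted in this post.
jetpack_joyride = {5: 321905, 4: 40234, 3: 18862, 2: 7119, 1: 23915}
flight_track_5 = {5: 26, 4: 3, 3: 3, 2: 5, 1: 27}

print(round(nps_from_stars(jetpack_joyride)))  # roughly +66
print(round(nps_from_stars(flight_track_5)))   # roughly -14
```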

An Added Bonus

The cool thing about using NPS this way is that customers can go back into the app stores and change their responses. The data is dynamic. Developers can look at segmented time scales (perhaps before and after changes were implemented) as well as the entire life of the app. Think about how powerful that is for a moment: it gives the iterative life of an app, and its potential revenue, much greater upside.

During my travels around the interwebs, I haven’t read much of anything relating to gauging and quantifying customer reaction and experience. Perhaps this is an easy and cost-efficient way (the data already exists) for developers to do exactly that.
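As a sketch of that kind of segmented analysis, assuming you can export individual reviews with a timestamp and star rating (not every store makes this easy), you could bucket them around a release date and compare the score on each side. The data and field layout below are hypothetical.

```python
from datetime import datetime

def nps_before_after(reviews, release_date):
    """Split (timestamp, stars) reviews around a release date and score each side.

    Uses the same 5 = promoter / 4 = passive / 3-and-below = detractor mapping.
    """
    def score(subset):
        if not subset:
            return 0.0
        promoters = sum(1 for _, stars in subset if stars == 5)
        detractors = sum(1 for _, stars in subset if stars <= 3)
        return 100.0 * (promoters - detractors) / len(subset)

    before = [r for r in reviews if r[0] < release_date]
    after = [r for r in reviews if r[0] >= release_date]
    return score(before), score(after)

# Hypothetical data around a patch that shipped on 2013-06-01.
reviews = [
    (datetime(2013, 5, 20), 2),
    (datetime(2013, 5, 25), 3),
    (datetime(2013, 6, 3), 5),
    (datetime(2013, 6, 10), 4),
    (datetime(2013, 6, 12), 5),
]
print(nps_before_after(reviews, datetime(2013, 6, 1)))  # (-100.0, 66.66...)
```

Even a rough split like this would show whether an update actually moved the needle with players.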
