As part of a series of thought leadership articles by the team of ninjas here at Big Blue Bubble, we welcome the first, by Andrew Kope. As a member of the company’s leadership team, Andrew leads a team of programmers and analysts, designing and managing analyses of gameplay data to improve user monetization and retention.



I hear the phrase “A/B testing” on an almost daily basis. It’s often touted as a cure-all for game design decision-making: remove personal bias from the equation and make data-driven decisions, because “the numbers don’t lie”. Now, I’m not saying that A/B testing can’t work, or can’t be effective… but as with a lot of things that cross my desk, the devil is in the details.

Consider the following: we have published an F2P racing game where users earn soft currency by completing races, and new cars and upgrades cost soft currency to purchase. Users can enter only 10 races per day, each costing one ‘action point’, with the option to buy more action points or more soft currency via IAP. User retention is good, but UA is a little pricey given the game’s relatively narrow target audience, so the execs are looking for a way to improve ARPU.

During a design meeting, someone suggests changing the UI so that the upgrade screen appears ahead of the currently prominent race screen in the main menu… but after some discussion, the team is divided. One side thinks this is a great idea: it will improve ARPU by increasing the visibility of the upgrade screen, a sink for in-game currency. The other side disagrees: downgrading the visibility of the race screen will lead users to run fewer races and therefore spend fewer of their action points, another important sink for IGC.
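If the team settles the argument with an A/B test, the question ultimately reduces to: does the variant cohort’s ARPU differ from the control’s by more than chance alone would explain? A minimal sketch of that comparison, using entirely hypothetical per-user revenue numbers and Welch’s t statistic computed by hand with Python’s standard library (a real analysis would also need a p-value and a pre-registered sample size):

```python
import random
import statistics

def arpu(revenues):
    """Average revenue per user: total revenue divided by cohort size."""
    return sum(revenues) / len(revenues)

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

random.seed(42)
# Hypothetical cohorts: most users spend nothing; a few buy IAPs.
control = [random.choice([0, 0, 0, 0, 0.99, 4.99]) for _ in range(5000)]
variant = [random.choice([0, 0, 0, 0, 0.99, 4.99, 4.99]) for _ in range(5000)]

print(f"control ARPU: ${arpu(control):.2f}")
print(f"variant ARPU: ${arpu(variant):.2f}")
print(f"Welch's t:    {welch_t(variant, control):.2f}")
```

Note how skewed the revenue data is (mostly zeros, a few whales): that skew is exactly why a naive eyeball comparison of two ARPU numbers can mislead, and why the details of the test matter.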