Reflections on Sifteo Playtesting Part 2 — Collaborating with Marketing

While I was at Sifteo I managed the games team’s playtesting service, dubbed the office PlayLab by our marketing VP, Laurie Peterson. I ran PlayLab’s games user research studies while the marketing team ran studies focused on product, branding and packaging. We often piggy-backed off each other’s recruited participants and even helped each other observe the various studies, which worked out really well. For example, if I had a large homeschooling group coming in to play Sandwich Kingdom or Low Rollr, I would reach out to the marketing team to let them know that a group of five or so 8-10 year old boys was coming in. Marketing would do the same for me, which drastically improved our chances of getting anywhere close to the right sample size.

UX/UR purists might argue that sharing recruits for gathering market and user data could lead to negative outcomes. The goals of marketing and user research are quite different. In user research, specifically games user research, the participant plays through a game or sections of a game to tease out level pacing, go through first run experiences, and aid in difficulty tuning. In short, sessions measure whether or not a game’s current functionality and feature set meet participants’ expectations. In market research, participants rank visual assets or discuss their emotional connections to mock-ups. Emotion is one of the most important parts of game design; however, during games user research participants are never directly asked to place a value judgement on their emergent emotions. If a participant does so, that data is certainly reported, but it is not central to the thesis of the report. Market research, by contrast, expects value judgements. With these differences in mind, wouldn’t mixing and matching participants in studies with varying goals be problematic?

At Sifteo, where resources (and I don’t mean humans) were limited, bootstraps were plentiful. Neither marketing nor the games team had a dedicated recruiting expert, so study leaders normally did most of the scheduling. That meant producers, marketing leads, VPs and game developers were all chipping in to the process, and furthermore, we couldn’t be picky. If our target audience and potential customers were in the house, we ALL wanted to spend time with them. Recruits and their time slots were shared; however, we made an effort to set and follow some rules that worked for us:

Playtest data stayed with the game being playtested. No value judgements, volunteered or otherwise, drove decisions at reviews, pitches or greenlight meetings. And vice versa: if a marketing research study required play of a game within its script, data collected from that study would not be used to drive game design decisions.

This rule wasn’t written on the wall or ever really voiced, but reflecting on the process, this is what I believe worked for the two teams. Following these rules not only allowed the teams to stay close to the Sifteo participants from all angles, it also helped the games and marketing teams see where their goals and needs aligned. By keeping our data collection truly user-centered, both the marketing and games teams were able to take on design challenges more holistically.
