I ran a single playtest for Echoes of Lalibela with a single, in-person playtester. The tester is a recent college graduate who has playtested several of my projects over the years, which puts them on the outer edge of the target audience. Ideally, I would test with someone I’m not as close to, but my connections with people around the target audience are minimal, particularly on short notice. They have enough playtesting experience and design knowledge to provide meaningful feedback on such an early prototype.
The playtest ran for around fifteen minutes, during which the tester played through the prototype in its entirety and filled out the post-test survey. Given the minimal state of the prototype, I believed that a broad playtest was the best approach, so I did not have them focus on any one attribute. The only technical issue we encountered was that the rules and instructions, which had been added in-engine, did not show up in the build for unknown reasons.
I have run numerous playtests before, almost entirely in informal settings such as this. The few more formal settings were part of larger, collaborative events with amenities provided, so I’ve never had to plan for refreshments. That said, the survey was nice to have in addition to a notebook for my own observations.
My playtest session relied heavily on the other sessions run by the rest of the team. I knew that I wouldn’t be able to recruit an adequate number of testers on my own, so I didn’t stress over it. For the record, though, a single playtester on their own is not enough, just as a single person cannot be an effective statistical sample. Arranging an event for a group to play and have refreshments of some kind would be more effective, and multiple events would be even better.
My playtester was particularly fond of exploring the landscape, sparse as it was. While the prototype does not strongly engage exploration, this is a key goal in our broader design. The themes and setting we’re going for came through well; they simply need refinement. Our central map-building task is a good starting point, but it is somewhat flimsy. The delivery of instructions and feedback is unclear, and the end state does not feel rewarding yet. Addressing these would go a long way in polishing this prototype.
Based on the feedback I received, I would:
- Keep NPC conversations in the same layout so that opening a conversation does not reset in-progress tasks.
- Trim or separate dialogue so players can more easily parse instructions alongside narrative.
- Split the drag-and-drop task into three phases, one for each type of item. This would address the tester’s difficulty differentiating between the item types.
- Add short, corrective written feedback to building placement, just enough that players do not have to rely so heavily on trial and error.
- Add validation to the final dialogue so it cannot be reached without completing the drag-and-drop task.
- Add animation or other celebratory effects to the final dialogue to make it more clearly rewarding.
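As a rough illustration of the last few changes, here is a minimal sketch of how the task completion check and the dialogue gate could fit together. This is hypothetical Python, not the prototype’s actual in-engine logic; the class, method, and item names are all invented for the example.

```python
class MapTask:
    """Tracks a drag-and-drop map-building task (hypothetical sketch)."""

    def __init__(self, required_items):
        self.required_items = set(required_items)
        self.placed_items = set()

    def place(self, item):
        # Only count items the task actually asks for.
        if item in self.required_items:
            self.placed_items.add(item)

    @property
    def complete(self):
        return self.placed_items == self.required_items


def try_start_final_dialogue(task):
    """Gate the final dialogue on task completion, with corrective feedback."""
    if not task.complete:
        missing = task.required_items - task.placed_items
        return f"Still missing: {', '.join(sorted(missing))}"
    return "FINAL_DIALOGUE"


task = MapTask(["church", "house", "market"])
task.place("church")
print(try_start_final_dialogue(task))   # corrective feedback while incomplete
task.place("house")
task.place("market")
print(try_start_final_dialogue(task))   # gate opens once the task is done
```

The key design choice is that the gate and the corrective feedback share one source of truth (the task’s completion state), so the final dialogue can never be reached in an inconsistent state.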
I will admit some uncertainty as to how this feedback can shift the overarching design. Most of what my playtester expressed was a desire to see the concept fleshed out further, which the broader design already accounts for. In general, I believe we could be more granular in our descriptions, as we struggled to fully articulate what we wanted the prototype to be. Beyond that, we simply need to develop more of our design into playable form. This was an MVP prototype; the next step would be a vertical slice.