There is a common practice in our company to perform Developer Exploratory Testing (DET) sessions, explained by my colleague Davor here. The cool thing is that this way of performing higher-level testing has actually become accepted by our developers, and they really enjoy it.
In my current work of developing our organization-wide practices for quality, I have taken a deep dive into how DET is carried out on a regular basis. What I have seen is that DET is accepted and acknowledged as a valuable practice; however, it is not carried out to its full potential. There are many details and aspects of it to work on, especially regarding reporting and follow-up.
The other day I was asked to help one of our teams with a DET session. As they are familiar with the approach, I wanted to expand their view on its potential through lightweight note taking, and showed them an example of a very basic session template I like to use. I also explained the intention of keeping bugs and issues separated, as well as how I handle test notes and the summary. We decided to use a whiteboard instead and put stickies with issues and bugs on it for visualization. We spoke about the mission and decided on two different areas to focus on. The app under test is an iOS app where a new component is used in the app's core functionality for iOS 5.
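To make the idea concrete, here is a sketch of what such a basic session sheet might look like. The exact fields are my own assumption of a minimal layout; the point is simply that mission, notes, bugs, and issues each get their own place:

```
MISSION:     What we are exploring, and what risks we are looking for
TESTERS:     Who paired up, and for how long
ENVIRONMENT: Device model and iOS version (tag findings with this too)
TEST NOTES:  Which areas were covered, how, and any observations
BUGS:        Problems we likely want to fix
ISSUES:      Questions, obstacles, or concerns needing follow-up
SUMMARY:     Overall impression and suggested next steps
```

Separating bugs from issues keeps the debrief focused: bugs can go straight into the backlog, while issues need a discussion or a decision first.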
The four of us paired into two teams with different iOS devices and tested for about 45 minutes. We found quite a few issues and bugs that were put on the whiteboard, but not many notes about which areas we had covered. Of course, by looking at the issues you might get a hint of what areas have been covered, but how thoroughly? We also quickly noted that we had to tag each issue with the test environment (iOS version and device), since the results differed quite a lot between them. Most of the things found were considered bugs to fix, which is not always the case in all projects or settings. It usually depends on the customer and the relationship with them, as well as on having a decision maker present during testing. In this project, the scrum master/tech lead, who was present, knows the customer well enough to make those judgements.
But what about the learnings?
After the session debrief I got into a meta-debrief, discussing the outcome compared to my introduction about reporting. At first there was not much understanding of why having test notes is valuable, but this shifted a little during our discussion. "When our smaller projects change this rapidly, even storing test notes might be waste" is a motivation I will take with me. I explained a common scenario of being asked what was tested and with what configuration. It could also be valuable for future test sessions to know which parts of a functionality were covered and when they were last tested. I also like to emphasize the possibility of explaining current functionality that might not be explicitly stated in the requirements.
The team was happy with the experience, and I got some more input on how to improve the value of our DET sessions. I am not going to abandon the reporting, but I need to find a way of combining it with the fun, collaborative interaction between pairs that the whiteboard setting provides. That setting actually helped draw attention to the issues in the debrief.
And then the developers actually came up with their own ideas of what a session note-taking tool should look like to suit their needs; this is how they sketched it out after the session. I would explore some existing tools, for example RapidReporter or SessionWeb, before building our own, but it is really cool that the meta-discussion could trigger further thinking about the problem. I wrote about other aspects of the collective knowledge transfer problem here.