
Wednesday, 13 December 2017

Pairing for skill vs. Pairing for confidence

I went to a WeTest leadership breakfast this morning. We run in a Lean Coffee format and today we had a conversation about how to build confidence in people who have learned basic automation skills but seem fearful of applying those skills in their work.

I was fortunate to be sitting in a group with Vicki Hann, a Test Automation Coach, who had a lot of practical suggestions. To build confidence she suggested asking people to:
  • Explain a coding concept to a non-technical team mate
  • Be involved in regular code reviews
  • Practice the same type of coding challenge repeatedly

Then she talked about how she buddies these people within her testing team.

Traditionally when you have someone who is learning you would buddy them with someone who is experienced. You create an environment where the experienced person can transfer their knowledge or skill to the other.

In a situation where the person who is learning has established some basic knowledge and skills, their requirements for a buddy diversify. The types of activities that build confidence can be different to those that teach the material.

Confidence comes from repetition and experimentation in a safe environment. The experienced buddy might not be able to create that space, or the person who is learning may have their own inhibitions about making mistakes in front of their teacher.

Vicki talked about two people in her organisation who are both learning to code. Rather than pairing each person with someone experienced, she paired them with each other. Not day-to-day in the same delivery team, but they regularly work together to build confidence in their newly acquired automation skills.

In their buddy session, each person explains a piece of code that they’ve written to the other. Without an experienced person in the pair, both operate on a level footing. Each person has strengths and weaknesses in their knowledge and skills. They feel safe to make mistakes, correct each other, and explore together when neither know the answer.

I hadn’t considered that there would be a difference in pairing for skill vs. pairing for confidence. In the past, I have attempted to address both learning opportunities in a single pairing by putting the cautious learner with an exuberant mentor. I thought that confidence might be contagious. Sometimes this approach has worked well, and other times it hasn't.

Vicki gave me a new approach to this problem, switching my thinking about confidence from something that is contagious to something that is constructed. I can imagine situations where I’ll want to pair two people who are learning, so that they can build their confidence together. Each person developing a belief in their ability alongside a peer who is going through the same process.


Sunday, 4 September 2016

The end of the pairing experiment

I have spoken and written about the pairing experiment for sharing knowledge between agile teams that I facilitated for the testers in my organisation. After 12 months of pairing, in which we saw many benefits, I asked the testers whether they would like to continue. The result was overwhelming:

Survey Results

I had asked this same question regularly through the experiment, but this was the first time that a majority of respondents had asked to stop pairing. As a result, we no longer do structured, rostered, cross-team pairing.

Why?

The first and most obvious reason is above. If you ask people for their opinion on an activity that they're being instructed to undertake, and they overwhelmingly don't want to do it, then there's questionable value in insisting that it happens regardless. Listen to what you are being told.

But, behind the survey results is a reason that opinion has changed. This result told me that the testers believed we didn't need the experiment anymore, which meant they collectively recognised that the original reason for its existence had disappeared.

The pairing experiment was put in place to address a specific need. In mid-2015 the testers told me that they felt siloed from their peers who worked in different agile teams. The pairing experiment was primarily focused on breaking down these perceived barriers by sharing ideas and creating new connections.

After 12 months of rostered pairing the testers had formed links with multiple colleagues in different product areas. The opportunity to work alongside more people from the same products offered diminishing returns. Each tester already had visibility of, and connection to, other teams.

Additionally, our pairing experiment wasn't happening in isolation. Alongside it, the testers within particular product areas started to interact more frequently in regular team meetings and online chat channels. We also started meeting as an entire testing competency once a week for afternoon tea.

The increased collaboration between testers has shifted our testing culture. The testers no longer feel that they are disconnected from their colleagues. Instead there's a strong network of people who they can call on for ideas, advice and assistance.

The pairing experiment achieved its objective. I'm proud of this positive outcome. I'm also proud that we're all ready to let the experiment go. I think it's important to be willing to change our approach - not just by introducing new ideas, but also by retiring those that have fulfilled their purpose.

Now that we've stopped pairing, there's time available for the next experiment. I'm still thinking about what that might be, so that our testing continues to evolve.

Monday, 6 June 2016

Benefits of cross-team pair testing in agile

One year ago, in June 2015, I launched a pairing experiment in my organisation. The primary purpose of this experiment was to share knowledge between testers who were working in different agile teams, testing different applications and platforms.

I shared the results of our experiment in my talk at TestBash in Brighton earlier in the year. For those who missed this presentation, this is a short written summary of the four main benefits that we observed from cross-team pair testing.

Visibility of other teams

Before we began the experiment I had received feedback from the testers that they felt siloed from their testing peers. At that stage we had 20 testers spread across 15 different agile teams, which meant that many were working as the only specialist tester in a cross-functional delivery team. 

This isolation was starting to seed imposter syndrome. The testers were beginning to doubt their own abilities and feel uncertain about whether they were doing things right.

Happily, one of the strongest themes in the feedback about cross-team pairing was that it increased visibility of what was happening in other teams. The opportunity to understand how another team operated was described as interesting, eye opening and awesome. Seeing other practices and processes gave a degree of comfort to each tester that their own approach was appropriate.

Broader Scope

One of the challenges in being the only test specialist in a team is in generating testing ideas. It can be difficult to consider different perspectives when brainstorming as an individual.

Through pairing, the testers were able to see their application through fresh eyes by exploring with a tester from outside of their product. A different mindset helped them to identify gaps in the application and think of creative ideas to explore functionality. The opportunity to have deep discussions about testing led to the discovery of interesting problems on unexpected pathways.

The broader thinking demonstrated within a pairing session was then carried into future testing as each individual tester started to augment their own planning with ideas they had seen demonstrated by their peers.

Improve communication

Make fewer assumptions. Ask more questions. These are two central tenets of testing that most testers believe they follow. When compared to other disciplines, it's often true that testers ask more questions. Pairing highlighted situations where testers had started to relax these instincts.

The tester who was hosting a session would often make incorrect assumptions about the depth of their visitor's knowledge. Their "simple" explanations were difficult for someone from outside of their delivery team to understand.

The presence of an outsider exposed the amount of assumed institutional knowledge in the business stories, test planning and informal communication of a team. The tester who was visiting a peer would have to ask a lot of questions in order to understand the application and how it would be tested.

Pairing caused the testers to question their own expectations of knowledge in the team. They started to make fewer assumptions about what had been left unstated in team documentation. By increasing the number of questions they asked, the testers began to interrogate whether there was truly shared understanding or instead shallow agreement.

New Approach

Every person will have a unique way of working. Not just in their thinking, but in the particular combination of tools that they use to capture, process and report information. 

Pairing gave the testers the opportunity to observe and experience the work environment of a colleague. In many cases this first-hand experience led to the discovery of a new tool that could be adopted by the visiting tester in their own work. Through pairing we saw Chrome extensions, Excel macros and screenshot tools propagate across the department.

The proliferation of these tools meant that the testers were more productive. They were able to reduce the repetitive tasks in their workflow and use appropriate tools to support their test approach.





Friday, 13 November 2015

Using strong-style pairing and a coding dojo for test automation training

At work we're implementing a brand new automation suite for one of our internet banking applications. This is the first framework that I've introduced from a coaching perspective as opposed to being the tester implementing automation day-to-day within a delivery team.

Aside from choosing tools and developing a strategy for automation, I've discovered that a large proportion of the coaching work required is to train the testers within the teams in how to install, use and extend the new suite.

I've done a lot of classroom training and workshops before, but I felt that these formats weren't well suited to teaching automation. Instead I've used two practices that are traditionally associated with software development rather than testing: strong-style pairing and a coding dojo.

I've been surprised at how well these practices have worked for our test automation training and thought I would share my experience.

Strong-style pairing

After a series of introductory meetings to explain the intent of the new suite and give a high-level overview of its architecture, each tester worked independently using the instructions on our organisation wiki to get the tests running on their local environment.

As the testers were completing their installations, I worked in parallel to create skeleton tests with simple assertions in different areas of the application, one area per tester. To keep the training as simple as possible I wanted to split out distinct areas of focus for individual learning and reduce the potential for merge conflicts of our source code.

As they were ready, I introduced an area to each tester via individual one hour pairing sessions using strong-style pairing. The golden rule of strong-style pairing is:

"for an idea to go from your head into the computer it MUST go through someone else's hands"

For these sessions I acted as the navigator and the tester who I was training acted as the driver. As the testers were completely unfamiliar with the new automation suite, strong-style pairing was a relatively comfortable format. I did a lot of talking, while the testers themselves worked hands-on, and together we expanded the tests in their particular area of the application.

As the navigator, I prepared for each pairing session by thinking up a series of objectives at varying degrees of difficulty to accommodate different levels of skill. My overarching goal was to finish the hour with a commit back to the repository that included some small change to the suite, which was achieved in two-thirds of the sessions.

As a coach, I found these sessions really useful to judge how much support the testers will require as we progress from a prototype stage and attempt to fulfill the vision for this suite. I now have a much more granular view of where people have strengths and where they may require some help.

I had a lot of positive feedback from the testers themselves. For me the success was that many were able to continue independently immediately following the session and make updates to the tests on their own.

Coding Dojo

At this point everyone had installed the suite individually, then had their pairing session to get a basic understanding of how to extend an existing test. The next step was to learn how to implement a new test within the framework.

I felt that a second round of individual pairing would involve a lot of needless repetition on my part, explaining the same things over and over again. Ultimately I wanted the testers in the team to start pairing with each other to learn collaboratively as part of our long-running pairing experiment.

I found a "how do you put on a coding dojo?" video and decided to try it out.

I planned the dojo as a two hour session for six testers. I decided to allow 90 minutes for coding, with 15 minutes on each side for introduction and closing activities. Within the 90 minutes, each of the six testers would have 15 minutes in the navigator/co-pilot role, and 15 minutes at the keyboard in the driver/pilot role.

I thought carefully about the order in which to ask people to act in these roles. I wanted to start with a confident pilot who would put us on the right course. I also wanted the testers to work in the pairs that they would work in immediately following the session to tackle their next task. So I created a small timetable. To illustrate with fictitious testers:
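The original timetable was shared as an image, but the rotation idea can be sketched in code. In this illustrative helper (the function, names, and times are my own, not from the post), the navigator of each slot takes the keyboard in the next slot, so each of the six testers gets one 15-minute turn in each role across the 90-minute coding window:

```python
from datetime import datetime, timedelta

def dojo_timetable(testers, start="13:15", slot_minutes=15):
    """Build a rotation where the navigator of each slot becomes
    the driver of the next, so each person spends one slot in each role."""
    t = datetime.strptime(start, "%H:%M")
    slots = []
    n = len(testers)
    for i, driver in enumerate(testers):
        navigator = testers[(i + 1) % n]  # next person on the roster navigates
        end = t + timedelta(minutes=slot_minutes)
        slots.append((t.strftime("%H:%M"), end.strftime("%H:%M"), driver, navigator))
        t = end
    return slots

# Fictitious testers, as in the original post
for slot in dojo_timetable(["Ana", "Ben", "Cat", "Dev", "Eli", "Fay"]):
    print(slot)
```

With a 1pm start and a 15-minute introduction, the six slots run from 13:15 to 14:45, and consecutive driver/navigator pairs match the pairs who will work together after the session.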



On the morning of the session I sent an email out to all the participants that reiterated our objective, shared the timetable, and explained that they would not require their own laptops to participate.

We started the session at 1pm. I had my laptop prepared, with only the relevant applications open and all forms of communication with the outside world (email, instant messaging, etc.) switched off. The laptop was connected to a projector and we had a large flipchart with markers to use as a shared notes space.

I reiterated the content of the morning email and shared our three rules:

  • The facilitator asks questions and doesn't give answers
  • Everyone must participate in the code being written
  • Everyone must take a turn at the keyboard

Then I sat back and watched the team work together to create a new test!

Though I found it quite challenging to keep quiet at times, I could see that the absence of a single authority was getting the group to work together. It was really interesting to see the approach taken, which differed from how I thought they might tackle the problem. I also learned a lot more about the personalities and social dynamics within the team by watching the way they interacted.

It took almost exactly 90 minutes to write a new test that executed successfully and commit it back to the repository. Each tester had the opportunity to contribute and there was a nice moment when the test passed for the first time and the room collectively celebrated!

I felt that the session achieved the broader objective of teaching all the testers how to implement a new test, and provided enough training so that they can now work in their own pairs to repeat the exercise for another area of the application.

I intend to continue to use both strong-style pairing and coding dojos to teach test automation.







Wednesday, 24 June 2015

A pairing experiment for sharing knowledge between agile teams

Over the past month I've started running a pairing experiment in my organisation. The primary purpose of this experiment is to share knowledge between testers who are working in different agile teams, testing different applications and platforms.

The Experiment Framework

After researching pair testing, I decided to create a structured framework for experimenting with pairing. I felt there was a need to set clear expectations in order for my 20+ testers to have a consistent and valuable pairing experience.

This did feel a little dictatorial, so I made a point of emphasising the individual responsibility of each tester to arrange their own sessions and control what happened within them. There has been no policing or enforcement of the framework, though most people appear to have embraced the opportunity to learn beyond the boundaries of their own agile team.

I decided that our experiment would run for three one-month iterations. Within each month, each pair will work together for one hour per week, alternating each week between the project team of each person in the pair. As an example, imagine Sandi in Project A is paired with Danny in Project B. In the first week of the iteration they will pair test Project A at Sandi's desk, then in the second week they will pair test Project B at Danny's desk, and so on. At the end of the monthly iteration each pair should have completed four sessions, two in each project environment.
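The alternating roster can be sketched as a small helper. This is purely illustrative (the function and data shapes are mine; only the Sandi/Danny example comes from the post): over a four-week iteration, the host alternates so each tester's project is visited twice.

```python
def iteration_schedule(pair, weeks=4):
    """One monthly iteration: weekly sessions alternating between
    each tester's project, so both testers host twice."""
    (tester_a, project_a), (tester_b, project_b) = pair
    schedule = []
    for week in range(1, weeks + 1):
        # Odd weeks at the first tester's desk, even weeks at the second's
        host, project = ((tester_a, project_a) if week % 2 == 1
                         else (tester_b, project_b))
        schedule.append((week, host, project))
    return schedule

for week, host, project in iteration_schedule(
        [("Sandi", "Project A"), ("Danny", "Project B")]):
    print(f"Week {week}: pair at {host}'s desk, testing {project}")
```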

In between iterations, the team will offer their feedback on the experiment itself and the pairing sessions that they have completed. As we are yet to complete a full iteration I'm looking forward to receiving this first round of feedback shortly. I intend to adapt the parameters of the experiment before switching the assigned pairs and starting the second iteration.

At the end of the three months I hope that each person will have a rounded opinion about the value of pairing in our organisation and how we might continue to apply some form of pairing for knowledge sharing in future. At the end of the experiment, we're going to have an in-depth retrospective to determine what we, as a team, want to do next.


An example of how one tester might experience the pairing experiment

A Sample Session

In our pair testing experiment, both the participants are testers. To avoid confusion when describing a session, we refer to the testers involved as a native and a visitor.

The native hosts the session at their work station, selects a single testing task for the session, and holds accountability for the work being completed. The native may do some preparation, but pairing will be more successful if there is flexibility. A simple checklist or set of test ideas is likely to be a good starting point.

The visitor joins the native to learn as much as possible, while contributing their own ideas and perspective to the task.

During a pairing session there is an expectation that the testers should talk at least as much as they test so that there is shared understanding of what they're doing and, more importantly, why they are doing it.

When we pair, a one hour session may be broken into the following broad sections:

10 minutes – Discuss the context, the story and the task for the session.

The native will introduce the visitor to the task and share any test ideas or high-level planning they have prepared. The visitor will ask a lot of questions to be sure that they understand what the task is and how they will test it.

20 minutes – Native testing, visitor suggesting ideas, asking questions and taking notes.

The native will be more familiar with the application and will start the testing session at the keyboard. The native should talk about what they are doing as they test. The visitor will make sure that they understand every action taken, ask as many questions as they have, and note down anything of interest in what the native does including heuristics and bugs.

20 minutes – Visitor testing, native providing support, asking questions and taking notes.

The visitor will take the keyboard and continue testing. The visitor should also talk about what they are doing as they test. The native will stay nearby to verbally assist the visitor if they get confused or lost. Progress may be slower, but the visitor will retain control of the work station through this period for hands-on learning.

10 minutes – Debrief to collate bug reports, reflect on heuristics, update documentation.

After testing is complete it’s time to share notes. Be sure that both testers understand and agree on any issues discovered. Collate the bugs found by the native with those found by the visitor and document according to the traditions of the native team (post-it, Rally, etc.). Agree on what test documentation to update and what should be captured in it. Discuss the heuristics listed by each tester, add any to the list that were missed.

After the session the visitor will return to their workstation and the pair can update documentation and the wiki independently.

To support this sample structure and emphasise the importance of communication, the following graphic, which included potential questions to ask in each phase, was also given to every tester:

Questions to ask when pair testing

I can see possibilities for this experiment to work for other disciplines - developers, business analysts, etc. I'm looking forward to seeing how the pairing experiment evolves over the coming months as it molds to better fit the needs of our team.

Thursday, 28 May 2015

Dominos to illustrate communication in pair testing

I recently ran a one hour workshop to introduce pair testing to my team. I wanted to make the session interactive rather than theoretical; however, having done the research, I struggled to find any practical tips for training people in how to pair effectively. Having created something original to suit my purpose, I thought I would share my approach in case it is useful for others.

I coach a large team of 20 agile testers who are spread across several different teams, testing different applications and platforms. Though I wanted the workshop to be hands-on, the logistics of 10 pairs performing software testing against our real systems was simply too challenging. I needed to go low-tech, while still emulating the essence of what happens in a pair testing session.

So, what is the essence of pair testing? I spent several days thinking on this and, in the end, it wasn't until I bounced ideas around with a colleague that I realised. Communication.

Most people understand the theory of pairing immediately. Two people, one machine, sharing ideas and tackling a single task together. It's not a difficult concept. But the success of pairing hinges on the ability of those who are paired to communicate effectively with one another. How we speak to each other impacts both our enjoyment and our output.

With this goal in mind I started to research communication exercises, and found this:

Dominos

One of the listening skills activities that I do is that you have people get in groups of 2, you give one of them a pack of 8 dominos and the other a shape diagram of rectangles (dominos) in a random pattern. Only the person without the dominos should see the pattern. They sit back to back on the floor or the one with the dominos at a table and the other in a chair back to back. The one with the diagram instructs the other on placing the dominos to match the diagram. The one with the dominos cannot speak. They get 2 min. I usually do this in a big group where they are all working in pairs at once.
Then they switch roles, get a new pattern and do the exercise again, this time the person with the dominos is allowed to speak. 2 min. usually successful.
Then we debrief looking at challenges, jargon words used, analyze how they provided instructions without being able to watch the person, tone, questions asked, etc. ( I have this all in a document if you want it) It is quite fun and enlightening for those who are training to be able to be in a support role with technology.


Though it wasn't quite right for my workshop, this was an exercise for pairs that was interactive, communication focused, and involved toys. I decided to adapt it for my purpose and use dominos to illustrate two different types of knowledge sharing -- "follow me" and "flashlight" -- that I hoped to see occur in our real-life pair testing sessions.

Follow Me

The workshop participants were placed in pairs. One person in the pair was given a packet of dominos and a diagram of 8 dominos in a pattern. They were given 2 minutes to arrange their dominos to match the diagram while their partner observed.

I asked each pair to push all their dominos back into a pile. The person who had arranged the dominos was asked to pick up the instruction diagram and hold it out of view of their partner. The person without the instructions was then given 2 minutes to repeat the same domino arrangement with limited assistance from their partner who was forbidden from touching the dominos!

Though the person with the dominos had seen the puzzle completed and knew its broad shape, it was clear that they would need to talk to their partner and ask a lot of questions about the diagram in order to repeat the arrangement precisely. It was interesting to observe the different approaches; not every pair successfully completed the second part of this exercise within the 2 minute time frame.

After the exercise we had a short debrief. The participants noticed that:

  • pairs who talked more were able to complete the task quicker,
  • there were advantages to using non-verbal communication, particularly pointing and nodding, to help the person arranging the dominos, 
  • though it seemed easy when observing the task, attempting to repeat the same steps without the diagram was more challenging than people expected, 
  • it was frustrating for the person with the instructions to be unable to touch the dominos, and
  • keeping an encouraging tone when giving instructions helped to focus people on the task rather than feel stressed by the short deadline.


I felt that there were clear parallels between this activity and a pair testing scenario in which a tester is exploring a completely unfamiliar domain with guidance from a domain expert. I emphasised the importance of being honest when help is required, and keeping up a constant dialog where people are uncertain.

Flashlight

In the same pairs, one person was given a diagram of 8 dominos while the other was given a partial diagram that included only four. The person with access to only the smaller diagram was given 2 minutes to arrange the full set of 8 dominos.

Example of a full map of 8 dominos (left) next to a corresponding partial map of 4 dominos (right)

In this iteration the person who was arranging the dominos was given some understanding of what was required, but still needed to ask their partner for assistance to complete the entire puzzle. As previously, the person with the complete picture was not permitted to touch the dominos and kept their instructions hidden from their partner.

Again we had a short debrief. The participants felt that this exercise was much easier than the first. Because the person arranging the dominos was bringing their own knowledge to the task it meant that almost every pair completed the arrangement within the 2 minutes.

As a facilitator I noticed that this little bit of extra knowledge changed the communication dynamics of some pairs quite dramatically. Instead of talking throughout, the observers remained silent as their partner completed the arrangement of the first four dominos. Only once the person with the dominos had completed the task to the extent of their abilities did they ask their pair for input.

The pairs who worked in this way were ultimately slower than their colleagues who kept talking to one another. One way that talking made things quicker was in eliminating double-handling of dominos -- "You'll need that one later".

Having shared this reflection, the two people switched roles and, with new diagrams, repeated the activity. With the expectation set that communication should remain continuous, it seemed that the pairs worked quicker together. The second iteration was certainly noisier!

I felt that there were clear parallels between this activity and one in which a tester is exploring a domain where they have some familiarity but are not an expert. It's important to remember that there is always something to learn, or opportunities to discover the ways in which the maps of others differ to our own. This exercise illustrated how important it is to continue communicating even when we feel comfortable in our own skills.

I was happy with how the dominos activities highlighted some important communication concepts for effective pair testing. If you'd like to repeat this workshop in your own workplace I would be happy to share my domino diagrams to save you some time, please get in touch.

Friday, 15 May 2015

Pair Testing

I'm currently working on defining a pair testing experiment to share testing knowledge across the agile teams within my organisation. What follows is my aggregated research on pair testing, which may be useful to others who are looking to implement pairing in their workplace.

Approach to pairing

Pair testing is a way of approaching a test design process by having two people test the same thing at the same time and place, continuously exchanging ideas. [1]

When paired, two people use a single machine or device. One has the keyboard, though it may pass back and forth in a session, while the other suggests ideas or tests, pays attention and takes notes, listens, asks questions, grabs reference material, etc. [2]

The pair should tackle a single testing task, so that they have a shared and specific goal in mind. Though the pair will work together, one person must own the responsibility for getting the task done. The person with ownership of the task may do some preparation, but pairing will be more successful if there is flexibility. A simple checklist or set of test ideas is likely to be a good starting point. [3]

During a pairing session the testers should talk at least as much as they test so that there is shared understanding of what they're doing and, more importantly, why they are doing it. [4]

Benefits of pairing

These benefits have been taken from the listed references and grouped into three themes:

High creativity

Working in a pair forces each person to explain their ideas and react to the ideas of others. The simple process of phrasing ideas seems to bring them into better focus and naturally triggers more ideas. 

Applying the information and insight of two people to a problem can lead to the discovery of how easily a person working alone can be a victim of tunnel vision.

Pairing brings people into close enough contact to learn about each other and practice communicating and resolving problems.

The camaraderie and the running commentary about the process, necessarily maintained by the pair in order to coordinate their efforts, tends to increase the positive energy in the process. 

High productivity

Each person must stay focused on the task or risk letting their partner down. 

Pairing allows the person at the keyboard to follow their train of thought without pausing to take notes or locate reference information. It encourages dogged pursuit of insights.

Two people working together limits the willingness of others to interrupt them.

Training technique

A strong pairing groups people so that each person's strengths complement the other's weaknesses. This presents an opportunity for both to learn from one another.

Pairing is a good way for novices to keep learning by testing with others. It's also useful for experienced testers who are new to a domain, helping them quickly pick up business knowledge.

Experience Reports

A selection of referenced extracts from other blogs about pair testing that I found useful:


How do I know if I'm pairing or doing a demo? This is an important distinction to be aware of. If you're sitting with someone, and one of you is controlling all the conversation for the whole session, then you are not pairing.

Pairing is an interactive partnership. There is a certain level of inquiry and challenge, without a feeling of threat or accusation. This is important: another party is essentially reviewing your testing as you perform it, and it's important to respond to their comments. Some of this feedback may suggest other tests, which you should attempt to include where feasible. If your session has a goal and the suggestion would take you off topic, you might want to leave it until the end or schedule another session.



Remember to narrate as you code. What are you thinking? Are you hunting for a file? What’s the test you’re writing now? Why that test? As I was coding, I was often silent. I knew what I was trying to do, but since the code was unfamiliar, I was spending a lot of time hunting. What I discovered was that my partner was feeling a bit useless because he felt he couldn’t contribute. As soon as he told me this, I started describing what I was trying to do and he was immediately able to start pointing me to sections of the code that he had fresh in his mind. 

As a tester, be sure to ask questions. It can be hard to ask questions that you think are dumb – especially when starting out. When I first started pairing as a tester, I felt reluctant to speak up because I didn’t want the programmer to feel like I was telling them how to do their job. I also didn’t want them to think I was stupid. I’ve not had any of the programmers I’ve worked with get defensive or treat me like an idiot. In fact, many things that I thought were stupid questions led to a discussion where we decided to use a different strategy than the one the programmer initially chose. 



If one or the other goes in with the idea that it is a one-way learning experience, the experience will fail. Pair testing is only effective in an environment of mutual respect and trust. 

Whoever is “driving” during pair testing must ensure that the other party is actively participating and understands what is going on. Encourage thinking and talking aloud, keeping the other person informed on the motivation behind your actions.



You have to trust them to light the way and they have to trust you to send them a signal the moment you are aware that you are over your head. A good pair will tell you that it’s ok and you will get back on track together. 

Trust, vulnerability and communication in this moment is the bedrock of pairing. It is also the bedrock of building great software. 
The Moment Marlena Compton



I do think there are some times when it does make sense to pair test. For example, if you have a new hire who just doesn't know the system or how to test it, you might have him ride along to learn the system by testing it with a buddy. Likewise, if you are coaching someone new to testing, (or teaching an old dog new tricks), it might make sense to sit down and do real, serious, mission-important test work with two people at one keyboard for an extended period of time, say over an hour. Third, if you notice that you and a peer are finding different kinds of bugs, you might pair test just to learn about each other's testing styles -- to see how the other works, and to gain the skills to put on the 'hat' that finds that other category of defect.

Notice all of these situations are about learning.



... it removes so much fear of failure, which removes a blame culture ... If something gets missed, it’s not one person’s fault.



... testing together you will hit issues neither of you have hit [alone]. This is because you are both different testers and will test differently and so together you will try things neither have thought of. Also you will be able to track down more detail since you both have different ways to figure out the issue.
Pair Testing QA Hipster



Increasingly, organizations are bringing people with visual challenges or other disabilities into their accessibility test effort, but these testers still work in silos. Pairing testers with disabilities with non-disabled testers yields valuable results.



Do you know of any other resources that might be useful to add to this list?