In 2019 when Will Christopherson and I first asked athletes and selectors about their past team selection experiences, we heard it all:
…the good: “I felt like the focus was on development, learning and bonding. I learned so much to take back and felt valued 99% of the time. While it was super long, I didn't mind because of the environment and it was certainly very thorough.”
…the bad: “If we'd had more than one weekend for selections, 4 of the 5 teams would have been different. We had to rely a lot on what we thought we knew about players, even if that wasn't right.”
…and the ugly: “Don't worry we don't need a selector from that city, it's just going to be Sydney and Melbourne players on the team anyway.”
The close community vibe is one reason we love the sport, but when it comes to selections, this adds a shade of grey. It means that selectors often come in with existing perceptions of players, which makes it more difficult to be objective.
How do cognitive biases affect selections?
Cognitive biases are systematic flaws in our thinking, judgement, and decision making. We are all susceptible to them, and they can significantly affect selection outcomes.
These are examples of cognitive biases and their effect on selections:
Confirmation bias: Disproportionately focusing on a small number of examples that confirm pre-existing perceptions about a player. Ultimate is particularly susceptible to this type of bias due to the size of the community.
Ostrich effect: Ignoring examples which contradict existing beliefs about a player. This leads to poor outcomes if that player’s skills, fitness, or commitment have changed significantly between campaigns.
Affinity bias: Favouring players because they remind us of ourselves or a player we rate. These players generally receive more attention, feedback, and sponsorship, which results in teams that lack diversity in backgrounds, experiences, locations, and playing styles.
False cause: Assuming that one thing is the cause of another. This can be problematic when a player’s performance is affected by contextual factors that are not acknowledged. For example, when playing outside their role or network, or when guarded by a strong defender.
Authority bias: Placing disproportionate weight on the perspectives of senior selectors, head coaches, and respected community members due to their position. This is closely associated with conformity bias (going with the group) and can result in a lack of robust discussion.
Anchoring bias: The first piece of critical information we hear influences our judgement of everything that follows. During selector discussions the first comment made about a player can impact their outcome, particularly if it is a heated one.
This list offers us useful questions to reflect on: Was there a time in my playing career that my selection outcome was affected by one of these biases? As a coach or a selector, was there a time when one of these biases influenced the outcome of a player we were considering?
The reality is that every single player in our sport will have experienced one of these biases over their playing career. These biases are exacerbated by time pressures, long selection days, and a closely connected community. However, just because our decision making is fallible doesn’t mean we can’t take steps towards reducing the impact of bias on selection processes. It’s in our interest, as players, selectors, and coaches, to actively reduce the impact of these biases, so that we can select the best teams possible.
What can we learn from corporate recruitment?
The business world has invested heavily in improving the quality and fairness of recruitment processes over the last two decades. While corporate Australia still has a way to go, there are several recruitment design principles that can help reduce the impact of bias in sport selections:
Appoint diverse selectors: Make sure that selector groups are large and diverse enough to bring different perspectives, support robust discussions, and encourage constructive challenge. While especially applicable for representative teams, it's also important for selections at all levels—for a club team, a selecting group made up of established players from different universities, age-based state programs, and club leadership will give you a wider pool of knowledge to draw from.
Run unconscious bias training: Research has shown that training selectors on unconscious bias and discussing bias at the start of each selection process can improve decision making. HBR has published some great research on unconscious bias training. A simple way to start these discussions is to review the list of unconscious biases as a selector group and have a 30-minute discussion about personal experiences with these biases.
Use multiple stages: Design selections to include multiple stages. This allows selectors to form a fair view of players based on consistency of quality and it allows players to showcase their strengths and act upon feedback over time. Stages could include:
Measuring non-field-specific selection criteria e.g. attendance, attitude.
Repeatable baseline athletic testing, capturing objective measures of speed and agility.
Skills-based drills that isolate or showcase specific selection criteria e.g. defensive footwork, break throws.
Game scenarios that test specific focuses, skills, or criteria e.g. no breaks.
Score activities against selection criteria: Explicitly design drills, games, and activities against selection criteria, based on the above stages. Assess each selection criterion more than once. Score players in real time for each activity and agree consistent scoring approaches before starting. For example, to assess “consistently break the mark with a low turnover rate”, run a straightforward break-mark drill and assign players a score out of 10.
Use formalised assessment systems to capture feedback: Use a central system to capture game and drill scores, player feedback, and other benchmarking data in real time. For example, setting up a simple Google Sheet listing each selection stage and activity, with space to input scores and player feedback. Update these during selections, not retrospectively.
Capture specific examples: When note taking, capture specific examples of player performance within games and drills, rather than generic comments about overall offensive or defensive playing style.
Rotate player groups: Often players go through selection events in similar groups to make logistics simpler. Teammates and drill partners have a clear impact on a player’s performance, so it’s important to regularly shuffle teams and players around to offset factors like existing connections, established roles, and uneven matchups.
Assign players to selectors: Ask each selector to watch, score, and take notes on specific players during each activity, ensuring that all players are covered. Switch assignments regularly throughout the process so that all selectors watch all players.
Give players feedback during the process: Provide clear and specific feedback throughout the selection process. Have selectors agree on feedback before providing it, so that players do not receive contradictory messages. Make sure that concerns are shared early, honestly, and sensitively with players, giving them a genuine opportunity to demonstrate improvement.
Use a third-party moderator: Use an independent facilitator during final selector discussions. They can play a crucial role in challenging selector comments, asking for specific examples, and ensuring that detailed feedback is agreed for every player. In a club or university team environment, this is a great opportunity to bring in past players or trusted interstate connections.
Make decisions using well-rounded data: Actively refer to selection footage, game statistics, criteria scoring, fitness benchmarking, and notes during the final selector discussions. For club teams, making more use of filming during selections can improve decision making objectivity.
Invest in the player experience: If in doubt, take steps that leave players feeling confident that they had a chance to showcase their strengths, were genuinely considered for the team, and received clear and specific feedback along the way.
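To make the scoring approach above concrete, here is a minimal sketch of how real-time criterion scores could be captured and averaged, in the spirit of the simple spreadsheet described earlier. The player names, criteria, and scores are hypothetical examples, and this is one possible structure, not a prescribed tool:

```python
from collections import defaultdict
from statistics import mean

# Each (player, criterion) pair accumulates several 0-10 scores,
# because every criterion should be assessed more than once.
scores = defaultdict(list)

def record(player, criterion, score):
    """Log one real-time activity score (0-10) for a player."""
    scores[(player, criterion)].append(score)

def summary(player, criteria):
    """Average each criterion across all activities that assessed it."""
    return {c: round(mean(scores[(player, c)]), 1) for c in criteria}

# Hypothetical example: the same break-mark criterion scored in two activities.
record("Alex", "break mark", 7)
record("Alex", "break mark", 8)
record("Alex", "defensive footwork", 6)

print(summary("Alex", ["break mark", "defensive footwork"]))
# {'break mark': 7.5, 'defensive footwork': 6.0}
```

A shared spreadsheet achieves the same thing; the point is simply that scores are recorded per activity as they happen, then averaged per criterion for the final discussion.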
I've made many mistakes as a player, selector, candidate, and recruiter – we’re all human. Selecting high performing representative teams requires us to reflect on our own biases and continue to improve the way we approach selections. Ultimately, this matters if we want to grow the participation, viewership, and sponsorship levels of our sport.
One of the themes of this article is the use of objective measurements (e.g., speed) to determine if a player should be selected. While I agree that objective measurements have strong benefits, I think they also have weaknesses.
The positives include a justifiable reason why one player is picked over another. If one receiver races another and finishes 10 metres ahead in a 40 metre sprint, then I think that everyone would agree that the faster player is a better selection based solely on that objective measurement. If drops are recorded, players who drop fewer passes will be preferred over players who drop passes more often and so on.
The annual NFL combine (how prospective players are evaluated by teams prior to the gridiron draft) places significant emphasis on metrics. However, the results are very much hit or miss. Players who are ranked highly and drafted early routinely flame out and are out of the league in a couple of years.
The most famous case of misleading objective measurements is a quarterback who finished near the bottom in speed, arm strength, and other combine metrics. As a result of the poor metrics, the player received a poor evaluation and was the 199th player chosen. Six quarterbacks were taken ahead of him. Fortunately, Tom Brady’s leadership skills were much more important than his speed and arm strength, and he became arguably the most successful player in NFL history.
My objective is not to denigrate objective measurements. I think that they are very useful in many circumstances. However, I don’t think they’re an end in themselves. Leadership, willingness to work within a team structure, openness to learning and so on cannot be measured and can be difficult to evaluate in a single weekend.
All that said, there would be understandable confusion if a person with worse objective measurements is chosen because of better soft skills. There is an onus on selectors to be open and honest about why players were selected or cut. I think we’re in broad agreement on this point. The question is how to achieve it.
P.S. I definitely support the idea of taking videos during the selections. This is a low-cost solution which is easy to implement, and I think it will be very helpful during post-selection discussions about why players were and weren’t chosen for the team.
Hi Laura - Thanks for this very informative article. In your opinion, for national team selection, what is the ideal number of selection events, and what is the ideal number of participants, to select a team of 20? Cheers, Woodie