What video game fans can teach us about inclusive AI 

The popularity of chatbots has once again put artificial intelligence (AI) in the spotlight, not least because of the many challenges the technology poses for organisations. One aspect that is too often overlooked is the role AI can play in inclusion – or exclusion. How do we ensure that AI doesn’t replicate or intensify barriers that already exist for disabled people? 

A possible answer comes from far outside the world of policymaking and business – from an online community of video game fans, in fact.

This surprising case study shows how AI, when used well, can transform user experiences. But it also shows how important human involvement and co-design are to getting it right.

From retro to retro-fitted 

AI in this case was used by fans of an older video game involving lots of dialogue. Frustratingly for many players, most dialogue had to be read as text. Indeed, the game involved a lot of reading in general, presenting a barrier to many players – especially given that text size, colour and font often weren’t adjustable. 

What the fans did next was really quite impressive. They used AI to add a text-to-speech feature to the game: all dialogue and in-game text was now ‘spoken’ by an AI-generated voice. What’s more, the voice was carefully tuned so that it sounded expressive and natural.

Now, suffice it to say, this was not done for profit, just for fans’ enjoyment – a workaround for not having voice actors (or the budget for them) to hand. With that in mind, we should look at what this story can teach organisations about the effective use of AI.

The importance of co-design 

The first lesson here is that AI works best when it is designed with accessibility and usability for different users in mind. Though the work of this game’s fans wasn’t strictly about accessibility, the result gave users a far wider range of ways to enjoy the game. The game’s story could now be enjoyed visually or by audio – a gap effectively filled by AI, a problem solved.

Secondly, and even more critically, this AI solution was designed around the needs of users – because it was designed by the users themselves. It grew out of an online community of fans discussing ways to improve their favourite game and solve problems together. In this way, many of the formal aspects of a well-designed business process, like scoping, user testing and seeking feedback, happened organically.

This story shows how well an AI solution can work, but it also highlights steps that many organisations miss. One such step is taking the diversity of users, including disabled users, into account. An inclusive design process, one that involves disabled people from the start, is key to identifying barriers and ensuring the solution removes them rather than creating new ones.

When inclusive design doesn’t happen 

Where this doesn’t happen, barriers become more entrenched. For example, online job portals might automatically sift out applicants who have a gap in their CV or who don’t have the requisite years of experience. Many talented disabled candidates with ‘non-standard’ CVs are therefore frozen out of jobs by an AI process that reflects the misconceptions of those who designed it. Similarly, AI systems using facial and voice recognition sadly carry very human prejudices: people who struggle to make eye contact, who stammer or who have a facial disfigurement may not be “recognised” by AI, or may be marked down as less trustworthy, reliable or fluent at interview.

To come back to our gaming example, using AI to add an option to ‘hear’ the game’s dialogue did not mean removing the option to ‘read’ the dialogue. AI was used to add to players’ experiences and provide extra ways of enjoying the game, not to define the ‘only’ way of doing so. 

The co-design approach of the video game fans is harder to replicate in a large organisation, and it requires different teams to work together. Getting AI use right cannot sit solely with diversity, equity and inclusion teams; it must cut across the whole organisation – from procurement to IT to recruitment – to make sure that biases are not designed in but actively designed out.

But this story is hopefully a good antidote to the many examples of AI implementations that have created barriers. How helpful AI is depends as much on the very real people building it, and on the diversity (or not) of their perspectives, as on the software itself. Ultimately, for all the popular fears about AI, it does what we tell it to do. We just have to make sure that what we tell it to do is informed by what everyone actually needs.
