Artificial intelligence and new technologies

This resource explores some of the ways in which businesses should engage with new and emerging technologies in order to obtain the benefits of innovation without sacrificing disability inclusion.

Last Modified: 10 May 2024



Members of the Technology Taskforce are often asked what businesses should do with new and emerging technologies. Are they a force for good or for harm when it comes to disability inclusion? 

The answer, as with all products of human societies, is that it depends on how they are used. 


Should businesses welcome new and emerging technologies?

Yes – cautiously. Throughout history, businesses have benefited hugely from developments in technology. New technologies have also often helped reduce or remove barriers to disabled people’s participation in society. 

However, new technologies – where implemented carelessly – have also led to disability discrimination and un-inclusive practices. 

Businesses need to implement new and emerging technologies in ways that do not lead to unfavourable outcomes for disabled people. 

Key considerations 

  • What do disabled people think about this technology? Consult with disabled people who will use or be affected by it, as well as expert groups within your organisation such as disability networks and ERGs. Our resources, ‘Staff consultation methods’ and ‘Customer consultation methods’ have more information. 
  • How will the technology be operated? Think about how all types of disability might affect how a person might use it. 
  • What outputs will it generate? Can disabled people access them? Will these have a disproportionately negative impact on disabled people? 
  • How will it interact with the rest of the business? Are other technologies dependent on it? If so, what impact will the new technology have on the operation and outputs of these technologies? 
  • How can you test its accessibility? It may not be immediately clear exactly how disabled users will interact with it, so testing for new and emerging technologies should be particularly extensive. It is vital to test the accessibility of all technologies you use, both before introducing them and regularly throughout their lifecycle. A sketch of one automated approach follows this list. Our resource, ‘Accessibility testing,’ has more information. 
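
Where the technology being assessed has a web interface, automated checks can form one part of this testing. Below is a minimal sketch using the open-source axe-core engine via Playwright (TypeScript); the URL is hypothetical, and automated checks catch only a subset of accessibility issues, so they complement rather than replace testing with disabled users.

```typescript
// Automated accessibility scan of a web page using Playwright and axe-core.
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

async function auditPage(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Run the axe-core rule set (covering many WCAG criteria) against the page.
  const results = await new AxeBuilder({ page }).analyze();

  // Report each violation so it can be triaged before the technology is rolled out.
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.description} (impact: ${violation.impact})`);
  }
  console.log(`${results.violations.length} potential accessibility issues found.`);

  await browser.close();
}

// Hypothetical internal URL, for illustration only.
auditPage('https://intranet.example.com/new-tool');
```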

Example – Artificial intelligence

What is AI? 

AI has made headlines in recent years due to advances in its complexity and ability to mimic human intelligence. 

Many of us are familiar with AI chatbots such as OpenAI’s ChatGPT. AI chatbots are given prompts by a user and produce text, images, video or other outputs that can seem convincingly human. 

Similar underlying technologies are also being applied to other areas of life – such as sifting job applications and discovering new pharmaceuticals. AI “minds” (neural networks) can be asked to analyse sets of data, look for patterns and suggest actions based on those patterns. 

AI can be put to many uses that help disabled people at work and at home, as the examples below show. 

How AI could help disabled people

  • AI can remove communication barriers. Automatic closed captions are already commonplace in online meeting platforms and video sharing sites. But there’s much more – AI-powered Braille translators are opening up far more written content to Braille readers, and Signapse has created an AI-generated sign language interpretation system that is already providing real-time travel information at train stations and airports. 
  • AI can automate tasks that some disabled people can find difficult or impossible. For example, someone with a learning disability could ask an AI chatbot to summarise a long email that they might otherwise struggle to understand (see the sketch after this list).  
  • Automated accessibility testing can help make content more accessible. AI-based tools can check websites, apps and other content for compatibility with the Web Content Accessibility Guidelines (WCAG) and other accessibility standards. The results still need to be reviewed by a person, but this can make the process easier. 
  • Advice on tone can help neurodivergent people understand written communication. For example, they could ask an AI tool to check the tone of an email they received. This can help remove uncertainty. It can also help people with mental health conditions like anxiety. 
  • AI can help disabled people find employment. For example, an AI chatbot could help someone write a job application that is tailored to the specifics of the job. It could also look at a job advert and suggest skills and experience to highlight in the application. This is important because disabled people on average have to apply for 60 per cent more jobs before finding employment.
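
As an illustration of the email summarisation example above, here is a minimal sketch using OpenAI’s Node SDK in TypeScript. The vendor, model name and prompt are illustrative assumptions – any comparable chatbot service could be substituted – and, as discussed later in this resource, AI output should always be checked before it is relied on. Changing the instruction to ‘Describe the tone of this email’ turns the same sketch into the tone check mentioned above.

```typescript
// Minimal sketch: asking an AI chatbot to summarise a long email in plain language.
// Uses OpenAI's Node SDK as one example; the model name is illustrative.
import OpenAI from 'openai';

const client = new OpenAI(); // Reads the OPENAI_API_KEY environment variable.

async function summariseEmail(emailText: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini', // Illustrative model choice.
    messages: [
      {
        role: 'system',
        content: 'Summarise this email in three short bullet points of plain English.',
      },
      { role: 'user', content: emailText },
    ],
  });
  return completion.choices[0].message.content ?? '';
}
```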

Potential harms of AI

AI programmes can ‘hallucinate’  

This is the term used when AI programmes produce convincing-sounding nonsense. IBM has produced a more detailed explanation.  

For example, in 2023 a US lawyer used ChatGPT to write legal filings, and at least six of the cases it cited as precedents were made up.  

This could harm disabled people in a number of ways: 

  • Customers with disabilities could be told incorrect information by an AI chatbot they use to contact your organisation. They may be told that the building is accessible when it is not, for example – or that a product is compatible with their assistive technology, when it isn’t. 
  • Errors may be introduced into disabled employees’ work if they use an AI tool at work. This could lead to lost productivity, performance problems, and potentially performance management concerns. 
  • An AI programme used in recruitment or performance management may identify non-existent problems with a disabled person’s suitability for a job. This could lead to unfavourable treatment, and potentially discrimination. 

Any organisation that relies on outputs generated by AI programmes must have measures in place to catch such ‘hallucinations.’ It is no excuse that the output was generated by an AI tool. Organisations are responsible for the outputs they use. 
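
One such measure, sketched below in TypeScript, is to let a customer-facing chatbot state accessibility information only when it can be drawn from records verified by a person, and to hand over to a member of staff otherwise. The data structure and venue details are hypothetical; the point is that accessibility claims are grounded in checked facts rather than generated freely.

```typescript
// One possible 'hallucination' safeguard: accessibility claims given to customers
// come only from human-verified records. Names and data here are hypothetical.
interface VenueAccessibility {
  stepFreeAccess: boolean;
  hearingLoop: boolean;
  accessibleToilets: boolean;
}

// Facts confirmed by the facilities team, not generated by the AI.
const verifiedVenueFacts: Record<string, VenueAccessibility> = {
  'head-office': { stepFreeAccess: true, hearingLoop: true, accessibleToilets: true },
};

function answerAccessibilityQuery(venueId: string): string {
  const facts = verifiedVenueFacts[venueId];
  if (!facts) {
    // No verified record: escalate to a person rather than let the AI guess.
    return 'We need to check that for you – a member of staff will confirm shortly.';
  }
  return [
    `Step-free access: ${facts.stepFreeAccess ? 'yes' : 'no'}`,
    `Hearing loop: ${facts.hearingLoop ? 'yes' : 'no'}`,
    `Accessible toilets: ${facts.accessibleToilets ? 'yes' : 'no'}`,
  ].join('\n');
}
```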

AI programmes can replicate biases against disabled people  

AI programmes are trained on existing datasets. These are generated by specific groups and activities, and captured in specific ways. This can lead to biases in any of those areas being replicated by the AI.  

Example – Amazon and AI 

In 2018, Amazon stopped using an AI recruitment tool that automatically placed a lower value on applications that contained the word “woman.” 

This situation occurred because the tool was ‘trained’ on Amazon’s historic hiring practices. During that period, more men had been hired than women. As a result, the tool associated applications from women with a lower chance of success. This led to a lower rating. 

The AI tool was not inherently discriminatory. The discrimination arose because it replicated the historic biases in the dataset used to ‘train’ the tool. It devalued applications from women because applications from women had been given lower values previously. It was only by applying its recommendations without correcting for that bias that Amazon risked disadvantaging women.  

In this way, AI programmes can replicate biases against marginalised groups, including disabled people. 

It is not hard to imagine a similar scenario in which disabled applicants are turned away by an AI tool because their applications have features associated with having a disability. For example, people with disabilities are more likely to have gaps in their employment history, due to illness or treatment. Using this as a reason not to interview someone could easily be discriminatory under law. However, an AI tool may not know to discount this. Instead, it may identify a historic pattern that people with gaps in their CVs were not interviewed, and apply that rule. 
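
One way a business could check for this kind of bias is to compare outcomes for different groups in the tool’s output. The sketch below (TypeScript) compares shortlisting rates for disabled and non-disabled applicants and flags a possible adverse impact when the ratio falls below 0.8 – an illustrative threshold borrowed from the widely cited ‘four-fifths’ rule of thumb. The data and field names are hypothetical, and a statistical flag is a prompt for human investigation, not a verdict.

```typescript
// Minimal sketch: auditing an AI screening tool's outputs for adverse impact
// on disabled applicants. Figures and field names are hypothetical.
interface ScreeningOutcome {
  disabled: boolean;
  shortlisted: boolean;
}

function selectionRate(outcomes: ScreeningOutcome[], disabled: boolean): number {
  const group = outcomes.filter(o => o.disabled === disabled);
  return group.length === 0 ? NaN : group.filter(o => o.shortlisted).length / group.length;
}

function flagAdverseImpact(outcomes: ScreeningOutcome[]): void {
  const disabledRate = selectionRate(outcomes, true);
  const nonDisabledRate = selectionRate(outcomes, false);

  console.log(`Disabled applicants shortlisted: ${(disabledRate * 100).toFixed(1)}%`);
  console.log(`Non-disabled applicants shortlisted: ${(nonDisabledRate * 100).toFixed(1)}%`);

  // Illustrative 'four-fifths' threshold: a large gap warrants human investigation.
  if (disabledRate / nonDisabledRate < 0.8) {
    console.log('Warning: possible adverse impact – review the tool before relying on it.');
  }
}
```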

So, should businesses use AI?

AI is currently a topic of significant debate among businesses and disability inclusion practitioners.  

  • On one hand, many disabled people welcome the automation of tasks that they find difficult or impossible.  
  • Some also claim that human biases could be reduced or eliminated by AI tools that have been ‘trained’ not to discriminate in the ways humans can. 
  • On the other hand, automating business or government functions leaves some disabled people concerned that their needs will not be factored into how the AI operates. 

As with all technologies, we recommend that businesses remember that no tool is inherently accessible or discriminatory. How the tool is used determines whether it helps or harms. 

Businesses need to look at how AI tools operate. They may decide that the risk of discrimination is too great and too complex to mitigate. Existing systems and ways of working may actually provide more robust protections against discrimination. 

If businesses do use AI tools, they must have oversight from humans with expertise in disability inclusion. AI tools ‘trained’ on historic data sets will replicate historic biases, which must be identified and removed. 


If you require this content in a different format, contact enquiries@businessdisabilityforum.org.uk.

© This resource and the information contained therein are subject to copyright and remain the property of the Business Disability Forum. They are for reference only and must not be copied or distributed without prior permission.

