
How your charity can use AI ethically

Author: Dawn Kofie; Reading time: 8 minutes
This is an open resource. You're welcome to copy and adapt it. Read the terms.

Artificial Intelligence (AI) is embedded in our everyday lives. And over a quarter of charities use it in their day-to-day work. AI has the potential for social good and efficiency, but there are ethical issues around using it. This article focuses on how your charity can make the most of AI-powered tools in an ethical way.

What AI is

The Information Commissioner’s Office definition of AI is: 

“…an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. Decisions made using AI are either fully automated, or with a ‘human in the loop’.” 

Opportunities for using AI in your organisation

AI tools can save your charity time and money. 

Tools like ChatGPT, Jasper AI and Copy.ai can help you to create content. Alzheimer's UK uses them to:

  • start the process of coming up with ideas
  • help create social media content
  • convert reports or research into video scripts. They also use AI video tools, like pictory.ai, to generate video to go with the script (see the sketch after this list).
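
If someone on your team is comfortable with a little code, the sketch below shows what this kind of text generation can look like. It's a minimal sketch, not a recommendation: it assumes the official OpenAI Python client and an API key, the model name is illustrative, and the report text is a placeholder.

```python
# A minimal sketch of turning report text into a draft video script.
# Assumptions: the official OpenAI Python client is installed
# ("pip install openai") and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

report_extract = "Key findings from our annual impact report go here."  # placeholder

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not a recommendation
    messages=[
        {"role": "system",
         "content": "You write short, plain-English video scripts for a charity audience."},
        {"role": "user",
         "content": f"Turn this report extract into a 60-second video script:\n\n{report_extract}"},
    ],
)

# Treat the output as a first draft only - a human should always review it.
print(response.choices[0].message.content)
```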

But as Tom Pratt outlines in his blog on AI for charities, AI doesn’t begin and end with content creation tools. You can use it to:

  • create images and audio, as well as video
  • fundraise more efficiently 
  • improve supporter engagement
  • make your marketing campaigns more effective 
  • track your progress and measure your impact
  • streamline processes and improve workflow.

Torchbox have produced a nifty introduction to AI. They’ve also put together some examples of how AI is being used to help nonprofits create impact.

Ethical issues with how AI is developed

Ethical issues with technology development are not new. For example, laptop and mobile phone supply chains involve cobalt mining, which has been linked to human rights abuses in the Democratic Republic of the Congo.

Some would argue that the way AI is developed also makes it unethical. The companies that create AI are secretive about the data sets they use to train their software. These data sets consist of huge amounts of images, text, audio, video and data points from the internet. But there’s little or no attention paid to copyright, licensing, ownership or consent. 

As with cobalt mining, there are also ethical issues around the supply chains of human labour involved in developing AI. Companies employ workers in Africa and the Philippines to review the content that powers their AI systems. These workers deal with content containing graphic sexual abuse, hate speech and violence. They're badly paid, often traumatised by the content they moderate, and can be sacked at short notice.

Risks and limitations of AI

There are other drawbacks that you need to be aware of if your charity wants to use AI ethically.

Data security and privacy issues

AI systems can collect publicly available personal data from things like social media profiles and Companies House. They can also collect it unintentionally, for example through facial recognition. And AI chatbots can guess sensitive information – for example your race, location and occupation – using what you type into them. How AI systems use this personal data is not always transparent. Not all AI companies are open about how long, where, and why they store people’s personal data. And not all AI models have security measures in place to protect personal data.

Entrenching and worsening bias and discrimination

The data sets that AI companies use often come from the internet, which is full of inaccurate and biased content. These data sets are often incomplete, inaccurate and not diverse. This can lead, and has led, to discriminatory decisions.

False content

AI tools like ChatGPT can generate inaccurate information and present it as if it's true. And AI can be used to clone voices and create fake images, video and audio. This makes it a tool that, like many others, is open to abuse and can be used to cause harm.

Lack of explainability

Explainability (also referred to as “interpretability”) is the concept that an AI system and its output can be explained in a way that “makes sense” to a human being at an acceptable level. But it’s not always clear how the algorithms used in AI systems make their decisions. So it’s difficult for people to understand how their data is being used to make decisions that affect them.

Reputational risk

There can be data leaks if staff input organisational information into tools like ChatGPT. Legal problems may arise if AI generates inaccurate or discriminatory content. And privacy concerns may mean a loss of trust from people who use your services, and donors who support your work.

Environmental impact

The vast amount of energy it takes to train an AI model is usually generated by fossil fuels. This leads to greenhouse gas emissions. Developing AI models also creates electronic waste that contains hazardous chemicals. These can pollute water supplies and soil, harm our health and damage the environment. 

Negative societal impact

AI and automation have the potential to change the way we work. This may lead to job losses if people can't retrain or find other employment.

AI can be used to build profiles of people without their consent. And because companies like Google, Apple and Microsoft are powerful, they’ll shape the direction AI takes. 

AI regulation in the UK

The UK government's white paper (March 2023) sets out its approach to regulating AI. Unlike in the EU, there will be no new comprehensive set of AI laws. And no new AI regulator. Instead, regulators, including the UK Information Commissioner's Office and the Financial Conduct Authority, will oversee how their industries use AI. A new part of central government will support this work.

The white paper includes 5 principles that regulators should follow:

  • safety, security and robustness – AI systems should be safe, secure and fit for purpose
  • transparency and explainability – companies that create and use AI should be able to explain when and how it's used, how their systems work, and how those systems make decisions
  • fairness – AI systems should not discriminate against individuals or organisations, violate their rights or create unfair outcomes
  • accountability and governance – measures should be in place to oversee AI systems, with clear lines of responsibility throughout their use
  • contestability and redress – there must be clear ways to challenge harmful outcomes and decisions made by AI

Unfortunately, there are still lots of unresolved issues with AI regulation. And because tech companies move so fast, governments are playing catch-up.

How to use AI in an ethical way

AI tools are undoubtedly useful. But to use them ethically you need to be open, inclusive and accountable. 

Questions to consider

Things to ask yourself are:

  • do you understand how AI might affect your charity? (consider data analysis and content generation processes as a starting point)
  • how might AI affect the people you support? 
  • do AI tools align with your mission and values?
  • how might they help you to achieve your goals?
  • what’s your organisation’s level of digital maturity?
  • how does using AI tools fit with your wider digital goals?
  • where might you use them? Service delivery? Comms and marketing? Fundraising?
  • what’s the potential harm for your charity, staff and end users?

The Civic AI Handbook suggests that you should: 

  • be open about how and when AI is used
  • always have a human review AI content
  • not share private or sensitive information with third-party AI services without checking their privacy policies first (one way to screen text before sending it is sketched below)
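
As one illustration of that last point, here's a minimal sketch of screening text for obvious personal data before it goes anywhere near a third-party tool. It uses only Python's standard library. The patterns are illustrative, and a simple check like this is a safety net, not a substitute for a proper data protection review.

```python
# A minimal sketch of screening text for obvious personal data before
# sending it to a third-party AI service. Illustrative patterns only.
import re

# Illustrative patterns: email addresses and UK-style phone numbers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+44\s?|0)\d{3,4}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

message = "Contact Sam on sam@example.org or 0161 496 0000 about her grant."
print(redact(message))
# -> Contact Sam on [email removed] or [phone removed] about her grant.
```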

You also need to know:

  • what data you’ll be putting into tools like ChatGPT
  • how you’ll make sure you’re checking the accuracy of what’s created
  • how you’ll make sure there’s a human in the loop
  • the possible impact on job roles in your charity
  • when you should develop workflows that depend on AI
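
As a sketch of what a human in the loop can mean in practice, the snippet below gates anything AI-generated behind an explicit human decision. The generate_draft and publish functions are hypothetical placeholders for whatever tools your charity actually uses.

```python
# A minimal sketch of a human-in-the-loop step: nothing AI-generated is
# published until a named person has read it and explicitly approved it.

def generate_draft(prompt: str) -> str:
    """Hypothetical placeholder for a call to your AI tool of choice."""
    return f"Draft text responding to: {prompt}"

def publish(text: str) -> None:
    """Hypothetical placeholder for posting to your website, newsletter, etc."""
    print(f"Published:\n{text}")

def human_approved(draft: str) -> bool:
    """Show the draft to a person and record their decision."""
    print("--- Please review this AI-generated draft ---")
    print(draft)
    answer = input("Approve for publication? (yes/no): ").strip().lower()
    return answer == "yes"

draft = generate_draft("Write a thank-you message for our volunteers")
if human_approved(draft):
    publish(draft)
else:
    print("Draft rejected - revise it or write it by hand.")
```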

Actions to take

You can:

  • increase your understanding of what AI is and its societal impact. Allied Media’s People’s guide to AI is a good place to start, and the Scottish AI Alliance runs a Living with AI course. It covers how AI is changing how we live and work.
  • consider how you’re going to make sure the way you use AI is data protection compliant. The Information Commissioner’s Office has produced an AI and data protection risk toolkit.

Credits: Thanks to Ed Baldry, Director of Innovation at Torchbox and Tom Pratt, Director at Albert Road Consulting for taking part in interviews for this article.

Commissioned by Catalyst