Safe AI: is it possible?

28 Apr 2023

The EdSAFE AI Alliance is a global grouping of education organisations, institutions, research and standards bodies whose mission is to ensure a safer, more equitable and more trusted education ecosystem of AI innovation. Managing Director Beth Havinga explains the importance of the global pledge launched by the alliance, and why educators and policy makers should sign.

“Like everyone, I’m excited by the possibilities of AI – we just need to ensure it’s safe,” Beth told Bett. “We’ve got some of the best minds in EdTech working on this, but the technology is developing at a dizzying pace. Since November, we’ve been about one and a half weeks ahead of what was happening: identifying threats just before they became real.”

The alliance has outlined ten concerns, which signatories pledge to address. These include:

1. System transparency

Solution providers need to explain how their AI systems learn or are trained. That’s the only way AI tools can be used safely.

2. Data usage

As well as understanding how AI tools are made, solution providers should also be open about how the tools use data and what that data is used for.

3. Informed consent

Data usage agreements must be in place. Any related consent processes, especially those involving minors, must be accessible and clear.

4. Privacy

Because of the way AI learns, you can’t ever erase what you’ve told it (which makes it very difficult to comply with GDPR). In this context, privacy is a key concern, as are appropriate measures and processes to ensure a user’s right to rectification.

5. Safety and security

Security should be a top consideration, no matter how benign the AI tool might appear. To ensure the confidentiality of personal data, the system needs to be able to resist external threats.

6. Bias and learning limitations

Solution providers and users need to be vigilant in identifying, assessing and mitigating bias or limitations (unintentional or otherwise) affecting any specific user types or groups. Processes are needed that focus on inclusiveness by design, non-discriminatory practices, fairness and equity to avoid abuses.

7. Accountability

Clear lines of responsibility and accountability are needed so that when breaches occur, there is a clear path to remediation.

8. Human-in-the-loop

We can’t leave the machines in charge. Human oversight of key decisions and actions is essential when AI is used within education and learning environments to ensure ethical and human-centred use.

9. Verifiability

Human oversight is also needed to review the interactions between the AI system and users (particularly minors), including internal AI metrics.

10. Stakeholder support

To build a deeper understanding of the opportunities and challenges of AI in learning environments, educators, learners and the broader education community need to be involved in identifying education problems and formulating solutions.

That may sound like a lot of things to be worried about – and it is. But there’s no getting around it if we’re to reap the vast benefits AI offers safely. Beth says in some places there might be an argument for hitting the pause button to allow regulators to catch up, but not for banning the technology outright, as some jurisdictions have chosen to do.

The EdSAFE AI Alliance is urging all stakeholders – including policy makers, educators, providers of learning tools and systems, and researchers – to sign the pledge, which can be accessed here. The considerations will help national, regional and international authorities set general guidelines, but they will also help individual institutions and teachers understand what they need to think about in order to take responsibility for their own environments.

For more insights on the world of EdTech, and to keep up to date with all things Bett, subscribe to our newsletter here.
