European AI Fund aims to help shape the future of artificial intelligence

The rapidly evolving use of artificial intelligence will have major implications for the future of civil society. Illustration: Craig Kelley, Jr.

Everywhere we look these days, it seems as if machines are taking over. More and more people find themselves interacting with artificial intelligence in all facets of daily life — from chatbots that act as customer service agents to AI security systems that decide who can enter a building or even cross a border.

While AI has the potential to improve things like health care, workplace efficiency and security, it also can pose profound risks to people and organizations across issues as varied as education, the environment, criminal justice and community development. The way it has permeated our lives has raised big questions and concerns about digital rights and the need to ensure public policy protects the people whose lives are most impacted by this rapid shift towards a digital-first society.

Recognizing the need to help strengthen the voice of civil society in shaping the future of AI, the Charles Stewart Mott Foundation joined with several other foundations to support the creation of the European AI Fund, a pooled fund at the Network of European Foundations. The collaborative effort brings together an array of funders who work on different issues to address how digital rights and technologies are impacting the work and role of nonprofits.

“We’re at a critical moment where the direction of future technologies is being decided, and there’s a very loud voice coming from private industry and governments, but civil society feels a lot quieter,” said Catherine Miller, director of the fund. “So, we want to amplify the volume of civil society’s work by supporting organizations that are already working on advocacy and policy, as well as supporting others who represent communities that are most affected by the impact of AI to engage in the conversation.”

Miller stressed that one of the most concerning aspects of AI technology is that its use can amplify existing inequalities and racial discrimination.

“If you look at predictive policing, for example, sci-fi films can give you a sense that there is just some kind of robo-cop, totally inhuman technology that decides everything,” she said. “But the problematic elements of it are the human elements. The data that’s going into creating the models is biased, because it’s data that humans have chosen based on the way that policing has historically played out.”


Sarah Chander, a senior policy advisor at the European Digital Rights Network, echoed Miller’s concern about AI exacerbating inequalities and discrimination and added that the increased use of AI systems should concern anyone interested in questions of privacy, freedom of expression or freedom to be in public spaces. She also pointed out the power imbalance created when machine-driven surveillance is used to shape decision making.

“What we have tried to do better is think about the question of surveillance in general and how the use of new technologies by states and companies that hold a lot of power impacts people, especially members of marginalized communities,” Chander said. “From that perspective, we see the use of artificial intelligence and surveillance technologies not just as a promising endeavor, as we are often told, but also as an exercise of power that could exacerbate and increase the harms that marginalized communities are already facing.”

EDRi, a network of civil rights organizations, experts, advocates and academics based in 19 countries, was one of the initial organizations to receive a grant from the European AI Fund. Throughout the past year, EDRi has deepened a connection with the Platform for International Cooperation on Undocumented Migrants, another early grantee of the Fund that works to protect human rights for migrants across Europe.

Chander and PICUM Deputy Director Alyna Smith recognized how collaborating could strengthen the advocacy work of both organizations. Smith said groups like PICUM bring an important perspective to digital rights conversations, even though they are not technical experts.

“Four or five years ago, PICUM wasn’t working on digital rights at all,” said Smith. “But we’re in a completely different place now because the more we’ve engaged, the more we see how embedded AI is in many of our issues, particularly where state authorities are taking a very strong enforcement approach centered around control and surveillance — instead of what could be a benefit to the people.”

As AI technology steadily replaces human decision-making in all facets of daily life, many civil society groups are raising questions about ethics and accountability.

“We recognized that you can’t view digital rights in isolation to other social justice issues and that we as a movement also have a lot to learn from people with years of expertise about certain manifestations of unfair criminalization,” added Chander. “We wanted to make sure that what we are advocating for is also representative of racial justice, migrant justice, gender justice, workers’ rights and many other points of view.”

Initially, EDRi and PICUM worked together with other partners on a mapping project to get a clearer picture of how AI is implemented in the context of migration.

“There are cases in which software was used to assess whether people’s claims about their region of origin match their dialect,” Smith said. “You can imagine how problematic that can be.”

That collaboration has evolved into a coalition of organizations representing a cross-section of civil society, advocating as a united voice on the impacts of technology on real people. These groups are focusing their advocacy efforts on calling for the proposed European AI Act to prioritize protecting fundamental rights.

If passed, the AI Act would be the first of its kind from a major governing body to set regulatory standards for the use of artificial intelligence. Though it would regulate the AI landscape only in Europe, the act has the potential to become a precedent-setting global standard.

Miller said there are currently more than 3,000 amendments to the act, so there is still a long road ahead. But she said those amendments include nearly all the points that civil society organizations wanted to see addressed — thanks to the advocacy efforts of groups like EDRi and PICUM.

“There have been some real wins in the process,” Miller said. “It’s certainly encouraging to see that civil society organizations have come together to collaborate on this and also that they’re now recognized as a valid voice by policymakers.”