THE RIGHTFUL AI

AUTHOR: Beatriz Valero de Urquía

The rise of artificial intelligence has set off ethical alarms. Jan Kleijssen talks to Tech For Good about the Council of Europe’s mission to regulate the design and development of these technologies to protect human rights worldwide

Half a century before the European Union was born, there was already a Council of Europe. Set up in the wake of World War II, the Council’s mission is to promote peace, stability and the protection of human rights for more than 830 million people. Now, it is adapting those rights to the digital era. Rather than issuing political declarations, the Council of Europe protects democracy and the rule of law through legal cooperation and the creation of more than 160 international agreements, treaties and conventions. Some of these, such as the 1950 European Convention on Human Rights and the 1981 Data Protection Convention, have set the standard for human rights legislation worldwide. The latter is often considered the “grandmother of the GDPR”, and it was the first step in the Council’s journey towards ensuring that the digital revolution does not come at the cost of human rights.

As the Council’s Director of Information Society, Jan Kleijssen oversees standard-setting, monitoring and cooperation activities on a wide variety of issues, including freedom of expression, data protection, money laundering, cybercrime, and corruption. Recently, he has set his sights on the new challenge that Europe faces: the ethical implications of the deployment of artificial intelligence (AI). “The use of new technologies by governments really poses challenges to human rights,” Kleijssen says. “AI, which was a theoretical possibility for many years, has now become a practical reality.” Kleijssen participated in the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI), which assessed the need for an international legal framework for the ethical development, design and application of AI. The committee brought together representatives from international organisations, national policy experts, IT companies, civil society and academia to examine AI governance and discuss issues such as the possible long-term societal effects of AI and the sustainable development of AI applications.

The use of new technologies by governments really poses challenges to human rights”

Jan Kleijssen

“Of course, artificial intelligence creates fantastic opportunities,” Kleijssen says. “But the hasty deployment of automated systems with blind faith in their objectivity and impartiality has been proven to lead to absolutely disastrous results. Governments can mitigate risks by creating regulations and imposing standards to ensure that the use of the technology is as safe as possible.” The unregulated use of AI has already had terrible consequences. One prominent example came in 2020, when the UK’s Office of Qualifications and Examinations Regulation used an algorithm to decide the grades of final-year secondary school students, who couldn’t sit their A-level exams due to the COVID-related lockdowns. A major scandal ensued. The system was soon shown to be highly biased: it limited how many pupils could achieve certain grades and based its outputs on schools’ prior performance, downgrading around 40% of predicted results. In response, the UK government decided to void all algorithm-generated results and replace them with teacher-assessed grades. A second, more disastrous instance of biased AI came to a head in Kleijssen’s homeland, the Netherlands, in early 2021. There, the tax authorities had developed an AI system to detect tax fraud and applied it to people receiving child and family benefits. The AI identified over 20,000 families that had allegedly defrauded the system, and as many as 10,000 families were forced to repay tens of thousands of euros at very short notice. When the algorithm was found to have been biased, the Dutch government was forced to resign.
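The grading mechanism described above can be made concrete with a toy model. The sketch below is hypothetical and deliberately simplified (the function and the data are invented for illustration, not Ofqual’s actual standardisation model): it ranks pupils by their teacher-predicted grade, then overwrites those predictions with the school’s historical grade distribution, so a strong pupil at a historically weaker school is downgraded regardless of individual merit.

```python
# Illustrative sketch only: a simplified model of how anchoring exam
# results to a school's historical grade distribution downgrades
# individual pupils. Names and data are hypothetical.

def assign_grades(predicted, school_history):
    """Rank pupils by teacher-predicted grade, then replace those
    predictions with the school's historical grade distribution
    (sorted best-to-worst), position by position."""
    ranked = sorted(range(len(predicted)), key=lambda i: -predicted[i])
    assigned = [None] * len(predicted)
    for rank, pupil in enumerate(ranked):
        assigned[pupil] = school_history[rank]
    return assigned

# A historically mid-performing school: past cohorts never earned the
# top grade (6 = A*, 5 = A, 4 = B, 3 = C, 2 = D).
school_history = [5, 4, 4, 3, 3, 2]

# Teachers predicted two pupils would achieve the top grade...
predicted = [6, 6, 5, 4, 3, 3]

# ...but the algorithm caps everyone at the school's historical ceiling.
print(assign_grades(predicted, school_history))  # [5, 4, 4, 3, 3, 2]
```

In this toy run, five of the six pupils are downgraded from their teacher predictions, purely because of how previous cohorts at their school performed.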

The cabinet’s resignation came after a damning parliamentary report that concluded that the system discriminated against people with non-Dutch family names, to the point of considering spelling mistakes in the tax allowance forms as deliberate attempts to commit tax fraud. However, when people tried to seek justice, the courts rejected the cases because they assumed that an automatic system couldn’t make mistakes. The name of the report, Unprecedented Injustice, reflected the scale of the scandal. “The lack of AI governance was one of the factors that greatly contributed to this tragedy - because there's no other word for it,” Kleijssen says. “None of the boxes were ticked. The system was not robust; it had not been tested, there was no human oversight, and biases were not recognised. And it led to very dramatic results. “It meant that a single mother with three children who received €1,000 per month had to pay back €100,000 within six months. Many lives were lost, marriages broke up and more than 1,000 children were forcefully taken away and placed in care because the parents were considered to have criminally defrauded the welfare state. It was a disaster.”

These examples illustrate the need for governments to regulate the use of AI, instead of placing blind faith in the technology. Now, two years after CAHAI’s first meeting, the committee has supported this belief by expressing the need for a legally binding framework on the use of AI and outlining some of the elements that would be vital to its success. Instead of laying down detailed technical parameters for the design, development and application of AI systems, the new treaty would establish basic principles that regulate the use of these systems in accordance with human rights law. To achieve this, CAHAI proposed establishing a methodology for the risk classification of AI systems, with categories such as “low risk”, “high risk”, and “unacceptable risk”. These rules would apply to the development, design, and application of AI systems by both public and private actors, and be open to accession by non-member states. The goal is for this legislation to transcend Europe and become a global standard. “Our governments are quite ambitious, and have set less than two years for the negotiation of what would be the world's first AI treaty,” Kleijssen says. “Governments will now start negotiating, and every word and every comma will be discussed because a lot of this will be binding. We sincerely hope that our observer states will actively continue to participate.”

Our governments have set less than two years for the negotiation of what would be the world's first AI treaty.”

Although the Council is a European organisation, its legislation has been adopted by non-members or ‘observer states’, namely Canada, the Holy See, Japan, Mexico, and the US. The government of Israel requested to be included in the discussions regarding a new law against cybercrime. The Council has also established a framework through which non-governmental organisations, non-profits and private companies can sit at the table and make their positions known. Although this multi-stakeholder approach is valuable for ensuring all sectors of society are committed to developing ethical technologies, it also creates challenging situations when interests collide. “In most of our member states, there was not yet - and there still isn’t - a governance position on AI issues,” Kleijssen says. “Even within the same country, AI is often delivered by a variety of ministries who have competing agendas. Whereas the Justice ministries lean very much towards setting common rules, a number of ministries of Economic Affairs were initially afraid that this would hamper innovation in the race with the US and China.”

However, regulations don’t necessarily hold back innovation. Highly regulated industries such as the medical and financial sectors have flourished in Europe despite very tight restrictions, with the former developing one of the main vaccines against COVID-19 in record time. Eventually, the members of CAHAI were able to recognise this and identify the need to impose one common, legally binding framework for the use of AI across Europe - and hopefully the world. The European Union does have its own proposed AI legislation, but it is only applicable to EU member states. In the rest of the world, companies adhere to voluntary standards that advise on the ethical use of technologies. For Kleijssen, this isn’t enough. Ethical standards vary widely from company to company and country to country, and organisations have no legal obligation to adhere to them. “Ethical standards draw the attention to important principles, but they're just principles,” Kleijssen says. “If you and I have a problem, we cannot go to court and invoke an ethical standard, because it cannot apply. It's absolutely not a remedy. It doesn't put an obligation on governments. In the case of the Netherlands, the tax authorities perhaps had an ethical standard, but since it was not binding, it could not be enforced by the courts and individual citizens had nothing to rely on.

“Ethical frameworks are useful and we take inspiration from them when we try to identify what should be regulated by law. They are important, but they’re not themselves sufficient.” Although AI is its most recent focus, the Council of Europe is no stranger to turning ethical standards into law. Recently, Kleijssen participated in the celebration of the 20th anniversary of the Cybercrime Convention, the first international treaty on crimes committed via the Internet and other computer networks. The document covers a wide range of digital crimes, including copyright infringements, computer-related fraud, child pornography and violations of network security. Today, the Convention is still the world's only legal instrument to fight against cybercrime, and it has evolved to keep up with technological advances.

Ethical standards are important, but they’re not themselves sufficient.”

“These treaties are living instruments, they’re not set in stone,” Kleijssen says. “Last year, the Council obtained an agreement from all 66 countries - which was not easy - and the European Union on a new protocol to the Cybercrime Convention. “The traditional way of cooperation between governments or against crime is what is called a ‘mutual legal assistance agreement’, which takes six months. In the good old days, this was a reasonable time. But now, of course, everything's in the cloud, and six months is a ridiculously long time to find evidence because, by the time the request reaches the other country, the evidence has already been transferred or deleted. As a result, impunity at the moment is extremely wide when it comes to cybercrime.”

Enter the new protocol. This legislation makes it possible for governments to directly address private companies in advance so that they can react quickly when a crime has been committed, bypassing the current cumbersome bureaucracy. At the moment, this is only possible in emergency situations, such as during the Charlie Hebdo and Bataclan terrorist attacks in France. In those cases, when lives were on the line, Microsoft reacted immediately to help identify and track down the perpetrators. “At the moment, that is an exceptional situation, but the idea is for this collaboration to become much more generalised,” Kleijssen says. As was the case with the Cybercrime Convention, any treaties that regulate new technologies such as AI will have to be revisited periodically and adapted to the new threats that arise. Although public policy can only follow innovation, this surge of concern regarding the ethical implications of digital tools might drive tech firms to adopt a framework of ‘privacy by design’ and ensure that the right safeguards are in place from the very start.

Whatever technology you use, you cannot violate human rights”

After AI, what comes next? Kleijssen believes that the Council will sit down and discuss the metaverse. “I've already launched within the Council of Europe a proposal that we start looking very seriously at the metaverse,” Kleijssen says. “This space will have fantastic opportunities, for instance in online education, but I think - it's an educated guess - that organised crime will very quickly install itself in the metaverse. Issues like child abuse, child pornography, bullying and violence against women will also happen in the metaverse. “Whatever technology you use, you cannot violate human rights. It's therefore very important not to wait until this technology has been rolled out and is used by millions of people worldwide. I would very strongly plead for human rights and the rule of law to be included by design, from the very outset. I'm under no illusion that we will be able to stop every abuse, but we can certainly mitigate it and provide remedies when things go wrong.” Human rights have not changed in the digital age. The right to privacy, the right to be protected against arbitrary government actions and the right to be treated without discrimination, among others, are more relevant than they have ever been, albeit implemented in different ways. As society moves towards a more digitally-enhanced future, it’s governments’ responsibility to ensure that human rights are respected both online and offline.