The EU Artificial Intelligence Act is a regulation that categorises AI systems by level of risk and governs their development and use within the EU.
The act's primary aim is to ensure AI systems are safe and secure and to promote trustworthy AI development – but is the desired outcome likely to become reality?
We spoke to four experts across the technology sector to hear their thoughts on the new legislation and how effective they expect it to be, as they weigh up the pros and cons of the act.
Heightening security through reduced risk
One of the more prominent aspects of the new regulation is its classification of AI systems by level of risk. This clear categorisation and identification of security risks should make it easier for businesses to assess and demonstrate the trustworthiness of their systems.
For Martin Davies, Audit Alliance Manager at Drata: “The EU AI Act has a clear common purpose to reduce the risk to end users. By prohibiting a range of high-risk applications of AI techniques, the risk of unethical surveillance and other means of misuse is certainly mitigated.
“Even in circumstances where high-risk biometric AI applications are still permitted for the purposes of law enforcement, there is still a limitation on the purpose and location for such applications, which prevents their misuse (intentional or otherwise) in this sector.”
As AI becomes a standard component of more and more modern technology, it grows ever more important to build secure systems that perform as intended.
Ilona Cohen, Chief Legal and Policy Officer at HackerOne, highlights the importance of this strong stance on security, saying: “We are pleased that the Final Draft of the General-Purpose AI Code of Practice retains measures crucial to testing and protecting AI systems, including frequent active red-teaming, secure communication channels for third parties to report security issues, competitive bug bounty programs, and internal whistleblower protection policies.
“We also support the commitment to AI model evaluation using a range of methodologies to address systemic risk, including security concerns and unintended outcomes.”
Drata’s Davies continues: “Furthermore, those high-impact AI systems that remain permitted under the EU AI Act will still need impact assessments. This will require organisations that use them to understand and articulate the full spectrum of potential consequences.
“This is a step in the right direction, and the proposed penalties will mean that the developers of such high-impact AI applications are rendered accountable for their outcomes.
“The positive impact this Act could have on creating a safe and trustworthy AI ecosystem within the EU will lead to an even wider adoption of the technology. To that extent, this regulation will encourage innovation within defined parameters, which will only benefit the AI industry at large.”
However, not everyone thinks the regulation will have such an immediate positive impact.
Global preparation for AI regulation
Around the world, governments are turning their focus to compliance and regulation of artificial intelligence development. But this is no mean feat.
According to Hugh Scantlebury, CEO and Founder of Aqilla: “Companies, individuals and governments around the world are working on an almost unimaginable range of AI-related projects. So, trying to regulate the technology right now is like trying to control the high seas or bring law and order to the Wild West.
“If we did attempt to introduce regulation, it would have to be global – and such an agreement seems unlikely any time soon. Otherwise, if one region, such as the EU, or one country, such as the UK, attempts to regulate AI and establish a ‘safe framework,’ developers will simply move to another jurisdiction to continue their work. And that’s before we consider those already based outside the EU or the UK. Would a global agreement stop state-sponsored or independent developers in countries like Russia, China, Iran, and North Korea?”
A global consensus is important to ensure a level playing field that promotes AI development while prioritising security. This regulatory cohesion will not be easy to attain, as Darren Thomson, Field CTO EMEAI at Commvault, explains: “The EU AI Act is a comprehensive, legally binding framework that clearly prioritises regulation of AI, transparency, and prevention of harm.
“Following suit to some degree, the UK is maintaining a lighter touch on governance. Its AI Action Plan sets out a commendable vision for the future, but, arguably, with insufficient regulatory oversight. Meanwhile, the recently announced US AI Action Plan aims to brush regulatory hurdles under the carpet and push forward to win the global AI race.
“But rather than being a positive sign of progress, this regulatory divergence is creating a complex landscape for organisations building and implementing AI systems. The lack of cohesion makes for an uneven playing field and, conceivably, a riskier AI-powered future. Organisations will need to determine a way forward that balances innovation with risk mitigation, adopting robust cybersecurity measures and adapting them specifically for the emerging demands of AI.”
For Aqilla’s Scantlebury: “The birth of AI is second only to the foundation of the Internet in terms of its power to fundamentally alter our lives – and some people even compare it to the discovery of fire.
“Hyperbole aside, AI is still in its infancy, and we have only scratched the surface of what it could achieve. So, right now, no one is in a position to legislate – and even if they were, AI is developing at such a pace that the legislation wouldn’t keep up.”
So, is the EU AI Act going to bring peace and harmony, or signal the beginning of greater complexity and fragmentation? Only time will tell.