Working to regulate artificial intelligence
Published: 1 Feb 2024

An international BSI poll reveals that 62% of the public want tighter controls around AI, but governments around the world are working to differing standards and timescales. Can the quality sector step in to provide some much-needed confidence?

Artificial intelligence (AI) has made the headlines for all manner of reasons over the past couple of years: some positive; others more alarming. The pace of innovation has surged with the development of generative AI platforms – that is, systems that create new content, such as text or images. Platforms such as ChatGPT have become well known among the public and have shown huge promise for automating complex creative and technical tasks.

But concern and controversy over how AI will be used in the near future have developed just as quickly as the technology itself. A recent BSI poll of 10,000 people across nine countries found that 62% want global guidelines for the development of AI – evidence of what BSI calls the 'AI confidence gap'.

The survey also found that nearly two-fifths (38%) of respondents use AI every day in their work, and 62% expect their industry to do so by 2030.

The public's concern is multifaceted: workers in industries strongly affected by AI fear their jobs could be taken over by a computer. Other commentators have voiced ethical concerns over using AI in the military, taking life-and-death decisions out of the hands of humans. Furthermore, serious mistakes in news articles written by AI have led several leading media companies to ban the use of the technology altogether.

A spokesperson at BSI tells us that while AI has the potential to be a transformative force for good, it also threatens privacy and security if not correctly implemented.

"Having the right guardrails in place to ensure its responsible use is key," they say. "Today, AI technologies are typically driven by data using machine-learning technologies. This allows… increased automation, scaling and human support for many activities, but can also introduce new risks around privacy and security if not protected using proper cybersecurity controls.

"This technology is also available to malicious actors who may use it to scale up and automate adversarial activities. Specific applications introduce specific threats."

Another, more existential, concern is that the technology could become so intelligent that it 'rises up' against its makers, posing a threat to human existence itself.

If that sounds a bit far-fetched, consider the myriad warnings from big names in the tech sector who've called for restraint and a slower, more considered approach to AI development. Leaders such as Bill Gates and Warren Buffett have even likened AI to nuclear technology, "both in terms of potential and danger".

"Agreed standards and principles of best practice that can evolve alongside the technology and its applications can absolutely support legislation globally. Standards like ISO/IEC 42001, which was developed with input from 38 countries, offer an opportunity to harmonise international best practice."

British Standards Institution (BSI)

Clearly what’s needed is a framework – rules and regulations for the AI sector to live by that allow it to thrive while minimising social risks. Yet the development of legislation to govern AI is perhaps the only part of the sector that’s not proceeding apace.

As we explored in the Winter 2023 issue of Quality World, different countries are bringing in their own forms of AI legislation in different ways and on different timescales. And until that legislation arrives – and is consistent – there is a massive guideline-shaped hole in the world of AI.

Or is there? Schemes are fast emerging from the quality industry that will help organisations develop AI products while minimising risk. For example, BSI has a portfolio of AI tools and services designed to help developers build trust with clients and the public.

These range from training courses, which help staff navigate the quality landscape around AI, to algorithm-testing services – crucial for demonstrating accuracy, efficiency and trustworthiness.

There is now even an international standard for AI. ISO/IEC 42001:2023 Information technology – Artificial intelligence – Management system is designed to help organisations consider things such as: non-transparent automatic decision-making; the utilisation of machine learning instead of human-coded logic for system design; and continuous learning.

So can standards like this help to close the public's confidence gap, filling the void left by the delay in legislation? And how will they support legislation when it gets here?

"This is where internationally agreed guidance can help, says the BSI spokesperson. “Agreed standards and principles of best practice that can evolve alongside the technology and its applications can absolutely support legislation globally. Standards like ISO/IEC 42001, which was developed with input from 38 countries, offer an opportunity to harmonise international best practice."

Indeed, it's easy to see why the quality sector is stepping in to provide a consistent approach. It can act quickly and internationally, unifying organisations from different countries, whereas governments seem to have differing priorities, timescales and approaches.

