Is it possible to regulate artificial intelligence?

By Consultants Review Team, Thursday, 07 December 2023

Government scrutiny of AI technologies intensified worldwide in 2023, building regulatory momentum. As the pace of AI innovation continues to accelerate, AI-focused rules and laws are maturing, and a series of significant international announcements has set the tone. Consider the G7 countries' statement on the Hiroshima AI Process, which calls for the development of a set of international Guiding Principles and a Code of Conduct for organisations building advanced AI systems. Then came US President Biden's Executive Order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" on October 30, 2023. The Bletchley Declaration on AI Safety followed at the AI Safety Summit hosted by British Prime Minister Rishi Sunak at Bletchley Park on November 1-2, 2023, and the UK Artificial Intelligence (Regulation) Bill was introduced in Parliament on November 22, 2023.

Will these declarations be of assistance?

They all sound significant, but in reality, except in the EU and China, they indicate that governments have taken a largely hands-off approach, wrestling with how to confront the serious concerns AI poses while encouraging firms to self-regulate.

What about businesses?

In anticipation of government moves to regulate AI development and deployment, four companies (Anthropic, Google, Microsoft, and OpenAI) established the Frontier Model Forum in July 2023, and the White House secured voluntary commitments from 15 of the world's largest AI companies to mitigate the risks posed by AI.

What are the dangers?

While AI's transformative potential is recognised, legitimate concerns have been raised about its significant risks, ranging from the more alarmist claim that AI poses an existential threat to humanity to real societal harms such as bias (computational and statistical source bias as well as human and systemic bias), data privacy violations, discrimination, disinformation, interference in democratic processes such as elections, fraud, deepfakes, and worker displacement.

But the big question is whether AI can be regulated at all. To address this, we must first define AI. There is no single definition, although it is widely accepted that AI is the use of computer systems to perform tasks that would otherwise require human intelligence.

Much of the discussion about AI revolves around the AI model: the data and logic (the algorithm), which can operate with varying degrees of autonomy and produces probabilities, forecasts, or judgements as outputs.

While all hazards arise in a variety of ways and can be identified and mitigated, the dangers posed by AI systems are distinct. The algorithms can 'think' and improve through repeated exposure to large volumes of data, and once that data is internalised, they can make judgements on their own. Even the people who design these systems can be perplexed by how a particular decision was reached. This is known as AI's 'black box' problem (Yavar Bathaee, Harvard Journal of Law and Technology, Vol. 31, No. 2, Spring 2018).

However, from a legal and regulatory standpoint, it is necessary to focus on all the phases before and after the model, where many of the dangers arise. It is critical to analyse the human and social systems that surround the models, because how well those systems function dictates how effectively the models and the technology actually work and what impact they have in practice.

The nascent and growing AI regulatory framework distinguishes between responsible-use issues, which involve humans and their duties, and trustworthy technology, which concerns the features and characteristics of the technology itself. It calls for human-centric design processes, controls, and risk management throughout the AI model lifecycle, with the goals of fairness; enhanced transparency; identifying and mitigating risks in AI design, development, and use; accountability in algorithmic decision-making, including for bias; and improved transparency in platform work.

Laws grounded in legal theories, notably intent and causation, focus on human behaviour and can therefore be applied to human-driven decision-making processes. The White House AI Pledge emphasises three values that must be fundamental to the development of AI, namely safety, security, and trust, and represents an important step towards responsible AI.

Companies have pledged to advance ongoing AI safety research, including on the interpretability of AI systems' decision-making processes, to strengthen the resilience of AI systems against misuse, and to publicly disclose their red-teaming and safety procedures in their transparency reports.

Companies are now developing next-generation AI systems that are more powerful and complex than the current industry frontier, such as GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2. Many of these could have far-reaching implications for national security and fundamental human rights.

The draconian EU AI Act, adopted by the European Parliament in June 2023 (but yet to become law), takes a risk-based approach, categorising AI technologies according to the level of risk they pose. For AI systems with unacceptable levels of risk, the Act sets out a list of 'prohibited AI practices', which includes, among other things, the use of facial recognition technology in public spaces and AI that may influence political campaigns.

Before entering the market, creators of 'Foundation Models' must register the product in an EU database. Creators of generative AI systems must provide transparency to end users and publicly disclose details of the copyrighted data used to train their systems. One of the transparency obligations is to disclose when content is AI-generated.

Unlike the EU AI Act, President Biden's Executive Order and the preceding "Blueprint for an AI Bill of Rights" take a rights-based regulatory approach and lack prohibitions on AI deployment and enforcement measures. The new Order does, however, require developers of advanced AI systems to share their safety test results and other information with the US government before releasing those systems to the public. It also invokes the Defense Production Act, requiring companies developing foundation models that pose a serious risk to national security to notify the government and submit the findings of safety tests.

Furthermore, the National Institute of Standards and Technology (NIST) has been tasked with developing rigorous standards and methods for testing, analysing, verifying, and certifying AI systems before they are made public. Notably, the Order directs agencies to combat algorithmic discrimination in the criminal justice system and calls on Congress to pass data privacy legislation.

What about in India?

There is no specific AI policy or law in India. Interestingly, in Christian Louboutin SAS v. M/S The Shoe Boutique (CS (COMM) 583/2023), the Delhi High Court stepped in and held that ChatGPT responses cannot be used as the basis for legal or factual adjudication of disputes in a court of law.



 
