Evolving Legal Norms for Artificial Intelligence in the European Union and the United States

Introduction

Artificial Intelligence (AI) has been a prominent topic for the last two to three years among politicians, technologists, and civil society worldwide. The technology offers obvious benefits for increasing the productivity and value produced by businesses and organizations, along with dangers from misuse, such as deepfake propaganda and serious security risks. Two recent efforts to develop legislation addressing AI technology offer an opportunity to compare and contrast the differing approaches of the European Union (EU) and the United States (US).

Summary of the EU AI Act 

On January 26, 2024, the Council of the European Union issued a “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts” (EU AI Act). That proposal has been adopted by the EU Parliament and will likely become EU law before the end of 2024. (See https://assets.ey.com/content/dam/ey-sites/ey-com/en_gl/topics/ai/ey-eu-ai-act-political-agreement-overview-february-2024.pdf; see also Maria Villegas Bravo, “Summary: What Does the European Union Artificial Intelligence Act Actually Say?,” EPIC, February 23, 2024, https://epic.org/summary-what-does-the-european-union-artificial-intelligence-act-actually-say/).

The EU AI Act establishes a risk categorization scheme that prohibits some types of AI systems and classifies the rest either as high-risk, requiring significant regulation and management, or as general-purpose AI (GPAI) systems, which can be deployed if their providers voluntarily adhere to Codes of Practice outlined in the Act. The EU describes the purposes of the Act in multiple ways within the proposed legislation, but a succinct statement of the intent follows:

A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, including democracy, rule of law and environmental protection as recognised and protected by Union law. (EU AI Act, Recital 5).

Prohibited AI systems are identified in Title II, Article 5 of the EU AI Act, which addresses a variety of unacceptable use cases for AI. These include AI systems that may distort human behavior through subliminal, manipulative, or deceptive techniques, or by exploiting vulnerabilities related to age, disability, or socio-economic circumstances. Other prohibitions cover systems that categorize individuals based on biometric data, conduct social scoring, assess the likelihood of future criminal behavior, scrape facial recognition data, or engage in related manipulative or socially distortive activities.

High-risk AI systems are described in Title III, Chapter 1, Article 6 by reference to EU directives (Annex II) or through specific use cases described in Annex III. Those use cases include biometrics, critical infrastructure, education and vocational training, employment, use in certain public systems, some law enforcement uses, border control and immigration, and some uses by judicial authorities. The requirements for high-risk AI systems are dealt with in Title III, Chapter 2, Article 8. Compliance will require such actions as establishing a risk management system, conducting data governance, preparing technical documentation, using automated record keeping, providing instructions for use, ensuring human oversight, meeting high levels of cybersecurity, and establishing a quality management system. (See https://artificialintelligenceact.eu/high-level-summary/). The EU will also establish harmonized standards, conformity assessments, registration, and related requirements for providers of AI models and systems. (See Title III, Chapter 5).

General-purpose AI (GPAI) models are classified and regulated by Title VIIIA, Chapter 2, Article 52c. All GPAI model providers must prepare technical documentation and instructions for use, establish a policy with respect to the EU Copyright Directive, and publish a summary of the content used for training. For some GPAI models, providers must also meet the transparency obligations found in Title IV, Article 52. However, providers of GPAI models under free and open licenses need only comply with the Copyright Directive and publish the training data summary, unless their models present a systemic risk. To simplify compliance with the harmonized standards, GPAI model providers may rely upon codes of practice. If a GPAI model presents systemic risk, more rigorous requirements are imposed, including model evaluations, adversarial testing, tracking and reporting of serious incidents, and cybersecurity protections, although these are not as stringent as the requirements for high-risk systems. (EU AI Act, Title VIIIA, Chapter 3).

Summary of the U.S. Executive Order on AI 

On October 30, 2023, the President issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO) (See https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/).  The goal of the AI EO is to establish a government-wide effort to develop and govern AI using eight guiding principles.  These are:

  • Artificial Intelligence must be safe and secure. 
  • Governance must promote responsible innovation, competition, and collaboration.
  • The development and use of AI must support American workers.
  • Artificial Intelligence policies must be consistent with advancing equity and civil rights. 
  • The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
  • Americans’ privacy and civil liberties must be protected.
  • The Federal Government must manage the risks from its own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI.
  • The Federal Government should lead the way to global societal, economic, and technological progress.

The regulatory effort will be built upon existing resources, including the AI Risk Management Framework (NIST AI 100-1), the Secure Software Development Framework, and several cybersecurity laws and executive orders. There is a clear emphasis on identifying and managing risks from specific types of systems and activities, such as synthetic content, dual-use foundation models, cybersecurity and biosecurity threats, and the malicious use of federal data for AI training. The White House will establish an AI Council to help coordinate the activities of federal agencies and act as a central repository for federal guidelines, policies, and communications. The AI EO establishes numerous deadlines for agency reports and guidelines to provide the comprehensive overview needed for responsible legislation and regulation in alignment with the guiding principles.

Comparison of Similarities and Differences

The EU AI Act addresses many of the same concerns as those being reviewed by the United States in its AI EO. However, the EU has many directives already in place that will shape the EU AI Act. The guiding principles behind its legislative effort differ as well, due to the multinational composition of the EU and the varied concerns of its member states. The U.S. effort is somewhat less constrained by such multinational considerations, but prior legislation and the regulatory demands of various agencies will inevitably shape the proposals coming to the White House from the multitude of agencies involved. In a similar effort to develop legislation and regulations for digital assets, overwhelming enforcement requirements crowded out the innovation and other positive benefits the policy was meant to promote. The same danger exists for AI legislation.

Technical Analysis

Introduction to Generative AI

The broadest takeaway from recent developments in AI, and in particular generative AI, is that intelligence is now available as a service. By analogy, generative AI takes the benefits the industrial revolution brought to factory work and applies them to knowledge work. As noted in the Executive Orders concerning both AI and digital assets, AI offers the potential for great benefits with correspondingly great risks. AI acts as an accelerant for work requiring human intelligence, speeding up routine and repetitive tasks that require intelligent human action. Applications like OpenAI’s ChatGPT, Google’s Bard, and Anthropic’s Claude are built on large language models (LLMs) and have been the focus of much of the attention around the exponential growth of the AI sector. These applications primarily rely on applying algorithms to massive amounts of data to generate content, process data, and synthesize insights in response to prompts written in natural language. Because generative AI LLMs are trained on human-created data, refined through human reinforcement learning, and applied to tasks that would typically require human reasoning, they can dramatically decrease the time it takes to accomplish tasks that formerly required human intelligence. Automating these routine tasks in content creation, software development, customer service, and sales frees up human intelligence for other work and has the potential to dramatically increase the productivity of workers who use generative AI in their workflows.
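
To make “intelligence as a service” concrete, the following minimal Python sketch shows how an application might send a routine drafting task to an LLM over a vendor API. It assumes the OpenAI Python SDK and an illustrative model name; it is an example of the general pattern, not a description of how any particular product is built.

    # Minimal sketch: a routine knowledge-work task sent to an LLM.
    # Assumes the OpenAI Python SDK (pip install openai) and an API key
    # in the OPENAI_API_KEY environment variable. Model name illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system", "content": "You are a concise business-writing assistant."},
            {"role": "user", "content": "Draft a two-sentence status update on the Q3 compliance review."},
        ],
    )

    print(response.choices[0].message.content)  # the generated draft

The entire “program” for the task is a sentence of natural language; the same few lines of code serve any request the worker can describe.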

Evolution and Current State

Generative AI arrived slowly, and then all at once. Built on the foundations of deep learning, generative AI can be broadly defined as applications built on LLMs. Deep learning has powered many of the advances in generative AI, but the most recent models, such as OpenAI’s GPT-4, Google’s Bard, and Anthropic’s Claude, differ in that they can process extremely large datasets with varying types of data, perform more than one task, and generate novel synthetic content from the underlying dataset. Trained on broad ranges of data including text, images, video, audio, and computer code, the generative AI applications in use today are capable of generalized functions that can be adapted to flexible use cases, depending on the needs of the user. These use cases include editing, summarizing, and answering questions, drafting new content, generating computer code, and performing data analysis, to name a few. To appreciate the exponential growth the field is undergoing, consider that Anthropic’s Claude could process roughly 9,000 tokens of text when it was introduced in March 2023; within months its context window had expanded to 100,000 tokens, roughly the length of an average novel. (See “The Economic Potential of Generative AI,” McKinsey & Company, June 2023, page 5). Generative AI also requires significant investment in hardware infrastructure and resources, reflected in the $8 billion raised by generative AI companies from 2020 to 2022, “accounting for 75% of total investments in such companies during that period.”
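
For readers unfamiliar with the unit, a “token” is the chunk of text an LLM actually reads and writes, typically a word fragment of roughly four characters of English. The short, hedged Python sketch below uses OpenAI’s open-source tiktoken library to show how a sentence breaks into tokens; at this rate, a 100,000-token context window corresponds to roughly 75,000 words.

    # Sketch: counting tokens with OpenAI's tiktoken library
    # (pip install tiktoken). The encoding name matches recent OpenAI models.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    sentence = "Generative AI models measure text in tokens, not words."
    tokens = enc.encode(sentence)

    print(len(tokens))         # number of tokens (around a dozen here)
    print(enc.decode(tokens))  # decoding restores the original sentence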

Machine Learning Foundations

AI is not new; it is based on machine learning methodologies that have been around for decades. Machine learning is technology that uses supervised or unsupervised learning to analyze data and make predictions based on the analysis. In supervised learning, the data comes with labels; in unsupervised learning, it does not. Rather, unsupervised learning seeks to find patterns without being told what to look for. With generative AI, humans are involved in fine-tuning the results through what is known as Reinforcement Learning from Human Feedback (RLHF). The human feedback piece is important, as it can help mitigate some of the risks associated with the technology. The data utilized and generated by generative AI models is not purely text, but includes images, video, and speech. There is also translation between the different types of data: written prompts can generate images and video, and, as one example, an Excel chart can be pasted into a model like ChatGPT to receive an analysis based on the image input. Generative AI models are deep learning models whose architecture is loosely inspired by the human brain and how it learns, which is what allows intelligence to be offered on demand as a service. These machine learning foundations give LLMs the ability to learn and adapt, giving rise to many of the emergent properties that are predicted to add so much value in the form of enhanced productivity for companies and organizations.
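
The supervised/unsupervised distinction is easiest to see in code. The hedged Python sketch below uses scikit-learn, a standard machine learning library, to run both approaches on the same small dataset: a classifier trained on labeled examples, and a clustering algorithm that must find structure with no labels at all.

    # Sketch: supervised vs. unsupervised learning with scikit-learn
    # (pip install scikit-learn), using the built-in iris dataset.
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)  # X: flower measurements, y: species labels

    # Supervised: the model sees data paired with labels and learns
    # to predict the label for new examples.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict(X[:3]))  # predicted species for the first three flowers

    # Unsupervised: the model sees only the data, with no labels,
    # and must discover structure (here, three clusters) on its own.
    km = KMeans(n_clusters=3, n_init=10).fit(X)
    print(km.labels_[:3])  # cluster assignments found without labels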

Capabilities and Applications

Generative AI could add trillions of dollars of value to the global economy. The benefits will likely come first in customer operations, marketing and sales, software engineering, and R&D. The low-hanging fruit for generative AI applications is content creation and the processing of data and information that would normally take human intelligence time to parse, without an equivalent addition of value for the work. Automating and speeding up low-level tasks that require human intelligence frees time for more creative, impactful, and innovative work. The key gains lie in the ability of generative AI models to take task instructions in natural language, lowering the barrier for workers to apply intelligent content generation in flexible situations. Generative AI applications do not stop there, however: researchers and developers have found that LLMs develop emergent capabilities that surprise even the models’ designers, including roleplaying, teaching, writing code, drafting strategy, giving legal and medical advice, and coaching. This is where much of the value of working with LLMs has the potential to be realized. Over time, generative AI models will likely transform how workers produce value in our economy, a change equivalent to the power loom in industrial revolution era mills, applied to the realm of knowledge work. Of course, such revolutionary change is not without significant challenges.
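
To illustrate how one model can be re-purposed across such roles with nothing more than a change of instructions, here is another hedged Python sketch, again assuming the OpenAI SDK and an illustrative model name. The point is that no retraining is involved: the role lives entirely in the natural-language prompt.

    # Sketch: the same model adapted to different roles by prompt alone.
    # Assumes the OpenAI Python SDK; role text and model name illustrative.
    from openai import OpenAI

    client = OpenAI()

    def ask(role: str, task: str) -> str:
        """Send one task to the model under a given role description."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[
                {"role": "system", "content": role},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    # The same underlying model, playing two very different roles:
    print(ask("You are a patient coding tutor.",
              "Explain recursion in two sentences."))
    print(ask("You are a contracts paralegal.",
              "List three clauses to check in an NDA."))

In both calls the model’s weights are identical; only the instructions differ, which is why a single general-purpose model can serve so many use cases.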

Challenges

Many of the technological revolutions in human history, including agriculture, the printing press, and the telegraph, were slow revolutions. AI is a technology expanding at an exponential rate, making it a fast revolution. There is tremendous potential to transform how we work, especially knowledge work, given generative AI’s capabilities, but AI also carries significant risk, which is the spirit behind the EU AI Act and the AI Executive Order. We simply do not yet know what the worldwide impact of accelerating access to the speed, accuracy, and creativity of intelligence on demand will be. The US and EU have put frameworks in place to stay ahead of these developments and to chart a middle path between two basic responses to change: denial and panic. Faced with the rapid changes generative AI represents, it is easy either to go into denial, thinking AI will not impact one’s job, or to panic, thinking AI will displace everyone and take all jobs. A middle path is helpful here; a positive viewpoint is that AI will make individuals and teams productive on a level never seen before. Regardless of the productivity benefits, there are issues concerning data security, compliance, and the use of LLMs for actions like building weapons or planning terror attacks. These issues must be taken into account as AI continues to develop. Sensible regulation to guide and shape growth can help ensure that users’ data is not used without their consent or remuneration, that models comply with US and EU frameworks, and that guardrails are in place to prevent models from helping design weapons or plan actions that can harm others. One thing is certain: regulators will need to take an active role in handling the exponential growth we are currently witnessing in the generative AI space.

Read more articles by this author:
https://www.braumillerlaw.com/author/james-holbein/