Regulating Artificial Intelligence: Preventing Economic Disruption and Ensuring Social Stability

Date: November 13, 2023

Author: Milan Kordestani

Entrepreneur, writer, and founder of 3 purpose-driven companies oriented toward giving individuals control over their own discourse and creation. Milan works to produce socially positive externalities through a mindset of social architecture.

Executive Summary

The rise of artificial intelligence (AI) is one of the most widely discussed topics of the 2020s, debated by scientists, journalists, industry leaders, and ordinary individuals alike. While AI tools provide unique benefits for a variety of stakeholders, their use is accompanied by a set of social and economic risks. In particular, the available evidence provides a compelling reason to believe that the integration of AI into different industries can disrupt the economy, displace jobs, and increase social inequality. Many employees in the public and private sectors are at risk of job loss, including factory workers, content creators, administrators, drivers, and many others. Scientists and practitioners continue to argue over the best way to approach the threats of AI in a manner that prevents economic disruption and ensures social stability without hampering AI research. 

The current white paper offers a critical analysis of ways to regulate AI tools in order to address the social and economic risks of this technology. The main goal of the research is to offer a set of practical recommendations for regulating AI so that Generation Z can benefit from the technology in terms of social stability and economic prosperity. The study sought to provide a detailed overview of the economic and social risks associated with AI; analyze the disruptive impact of AI on the manufacturing, healthcare, education, financial services, transportation, TV/film, publishing, and content creation sectors; review the current corporate recommendations for guiding AI regulation and development frameworks; and offer potential solutions and regulatory proposals for the chosen industries to control the disruptive effects of AI while still allowing for innovation and growth.

It was found that AI has already become a vital concern for stakeholders across many industries. Policymakers, businesses, non-government organizations, and citizens all agree that AI must be regulated to prevent a plethora of risks associated with the technology. At the same time, there is currently no agreement among them regarding the exact strategy that should be chosen. Some stakeholders, such as Microsoft and OpenAI, advocate a strict regulatory approach that would establish a separate agency under the Department of Commerce to license and audit all large language models, while others, such as the European Commission and Alphabet, suggest a risk-based framework that focuses on developing customized solutions for specific high-risk industries. A detailed analysis of relevant regulatory interventions, such as the Sherman Antitrust Act, the Trade Adjustment Assistance Act, the American Recovery and Reinvestment Act, the Sarbanes-Oxley Act, the Fair Labor Standards Act, Joe Biden’s Build Back Better Plan, and a series of initiatives supporting the transition from fossil fuels to renewable energy, leads to the conclusion that policymakers may choose suitable strategies from a variety of options. The choice of specific measures depends mainly on the perceived significance of AI risks. In particular, stakeholders who advocate a risk-based framework believe that the growing reliance on AI algorithms does not pose a threat to the economy and society critical enough to justify a separate agency and strict licensing requirements. Policymakers and industry leaders who believe in the threat of an “AI uprising”, by contrast, are proponents of strict regulatory measures. 

The current white paper proposes a two-layer framework for regulating AI. The first layer consists of provisions applying to all industries: a comprehensive AI act containing specific measures aimed at protecting the economy and society from the uncontrolled integration of AI into different spheres of life, such as:

  • AI adjustment assistance programs, 
  • the mandatory appointment of AI officers at large corporations, 
  • mandatory transition periods and severance payments for displaced workers of large corporations with a valuation exceeding $5 billion, 
  • investment into AI education campaigns, 
  • the support of large-scale AI projects to stimulate the transition of the workforce to AI-dominated jobs, 
  • and the establishment of a separate agency under the Department of Commerce that issues licenses for all large language models and monitors their functionality. 

The white paper also suggests a set of industry-specific recommendations, such as mandatory training sessions for “at-risk” employees in the manufacturing sector. The proposed regulatory framework can serve as a solid basis for further research on the problem under investigation. It can also be considered by policymakers of different countries when creating their own versions of an AI regulatory policy. 

Chapter 1. Introduction

1.1. Research Background

The intensification of scientific and technological progress creates unprecedented challenges and opportunities for humanity. Disruptive technologies, such as the Internet and smartphones, have transformed various industries and become an inalienable part of life. The pace of scientific and technological progress continues to increase, translating into the continuous introduction of revolutionary solutions in various industries. Blockchain, virtual reality, artificial intelligence, and machine learning are examples of technologies that are becoming increasingly popular in different sectors. They offer value to diverse stakeholder groups and transform the ways in which organizations operate. The exact impact of most of these technologies is impossible to quantify, but there is no limit to the variety of possible scenarios in which they shape the course of human events, for better or worse. 

Artificial intelligence is one of the technologies that is often discussed in academic research as a driver of major changes in numerous industries. For example, AI-based automation solutions increase operational efficiency and simplify the completion of routine tasks, healthcare platforms leveraging AI offer personalized treatment plans and improve healthcare outcomes, adaptive learning platforms run by AI customize educational programs, and smart city solutions powered by AI optimize infrastructure, improve traffic management, and reduce energy consumption. The benefits provided by AI span myriad industries and potentially extend to nearly any aspect of human life. Therefore, it is natural that the number of studies dedicated to the advantages of AI has been growing. 

AI is a tech sector that has been attracting increased attention from investors. According to Statista, corporate investment in this niche reached $91.9 billion in 2022. Between 2017 and 2022, the share of companies using AI increased from around 25% to 60%. Roser argues that the AI industry entered a new era in 2021, when private investment and mergers and acquisitions related to the technology skyrocketed, injecting substantial resources into AI development. The idea that AI is likely to drive social and economic progress is not new. Some authors predicted such a scenario in the 2000s; furthermore, a number of solutions were implemented in the 2010s to support the specific needs of different industries and businesses. Nowadays, however, AI developers find themselves in a unique situation in which their product development activities can be supported by unprecedented amounts of funding, which facilitates the rapid design, development, deployment, and improvement of AI products. 

The emergence of ChatGPT served as an important milestone in the evolution of AI. As stated above, there were many products powered by artificial intelligence in the past. However, ChatGPT became a disruptive technology that not only introduced unique features and delivered value to users, but also dramatically increased public awareness of artificial intelligence and machine learning. The platform attracted a significant amount of attention and quickly became popular among users from different regions of the globe. The latest data show that the platform recorded 1.6 billion visits in June 2023. The rising popularity of ChatGPT and its impressive features have been discussed in detail in numerous scholarly and non-scholarly sources. The platform’s success became an important event that drew the attention of different stakeholders to AI and triggered intense debate over its implications. 

One of the topics that emerged as a result of the rise of ChatGPT is the threat of artificial intelligence. The introduction of this disruptive platform spurred further research and development in the industry and encouraged stakeholders to design new AI solutions, such as Bing Chat and Google Bard. But the process was not without concern. For example, the public was stunned when GPT-4 circumvented a CAPTCHA by recruiting a TaskRabbit worker under the false pretense that the task was needed to help a blind person. As a result, a number of entrepreneurs, journalists, and scientists expressed their concerns about the future of AI. A letter signed by Elon Musk and thousands of experts advocated for a pause in AI research, claiming that “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” The risks associated with AI remain a topical research area and a popular theme in scholarly and non-scholarly sources. 

A number of the concerns pertaining to the risks of AI could be categorized as speculation over dystopian scenarios of AI uprisings. Such concerns are to a large extent inspired by popular films and TV shows, such as Black Mirror, Westworld, Terminator, and the Matrix quartet. Other concerns, meanwhile, focus on empirical evidence about the ways in which AI technology could cause real disruption. The technology can result in job loss, trigger skill shifts, contribute to income inequality, raise additional privacy concerns, translate into overdependence on AI, and cause various ethical dilemmas. Such issues have been widely discussed in academic research in order to find optimal solutions for creating and implementing a consistent strategy to guide the introduction of new AI platforms, the improvement of their features, and the integration of novel AI solutions into different sectors in a way that prioritizes human well-being, fairness, and sustainable growth. Developing effective regulations to ensure that current and future generations benefit from AI and leverage it to achieve economic growth, prosperity, and a high quality of living is a timely research topic and a critical task for scientists, politicians, and practitioners alike.  

1.2. Problem Statement 

The current study is dedicated to a critical investigation of ways to regulate artificial intelligence to ensure social stability and prevent economic disruption for Generation Z. Individuals belonging to Generation Z were born between the late 1990s and early 2010s. They are the first generation born with access to the Internet and are thus sometimes called “digital natives”. This generation is characterized by a set of distinctive traits, such as a relatively slow pace of life, dependence on social media, and purchasing behavior that is less predictable than that of Millennials. Reliance on digital tools is arguably the most important feature of Generation Z: this demographic quickly embraces new technologies and uses them to improve their lives. For these reasons, Generation Z is currently the most active adopter of AI tools, including ChatGPT. It seems justified to assume that this technology will be a major part of their lives, strongly influencing the labor market, economic growth, and a number of other economic and social issues. Therefore, it is in this generation’s best interests to develop a comprehensive and consistent set of policies to regulate the use of AI. 

As demonstrated, the risks of AI are not a new area of scholarly inquiry. Yet, to the best of the author’s knowledge, the overwhelming majority of proposed solutions to mitigate these risks come from non-scholarly articles, white papers, or blog posts representing the subjective opinions of individuals who might not possess the knowledge and competence needed to discuss such a controversial topic. 

Indeed, the number of scholarly sources addressing the broad risks of AI is surprisingly limited. Helberger and Diakopoulos recently argued that ChatGPT differs from previous large language models in its scale of use and dynamic context, and recommended that the European Commission amend the AI Act to account for these issues. Hacker et al. argue that disruptive platforms such as ChatGPT must be regulated on four levels: direct regulation, content moderation, data protection, and policy proposals. While some attempts have been made to develop practical recommendations for regulating AI, these attempts have been fragmentary and inconsistent. Many offer excessively broad guidelines, while others, in contrast, focus on narrow segments of particular industries. As a result, there is currently no common vision among scientists and practitioners of what AI regulations should be developed to prevent economic disruption and ensure social stability for Generation Z. 

1.3. Research Goal and Objectives 

The author of this study seeks to offer a set of practical recommendations to regulate AI so that Generation Z can see benefits ranging from social stability to economic prosperity. This white paper thus asks: how can artificial intelligence be regulated to prevent economic disruption and ensure social stability for Generation Z? 

The study will complete the following research objectives:

  1. To provide a detailed overview of the economic and social risks associated with AI;
  2. To analyze the disruptive impact of AI on the manufacturing, healthcare, education, financial services, transportation, publishing, and television/film industries, as well as the creator economy;
  3. To review the current corporate recommendations for guiding AI regulation and for development frameworks that ensure new AI solutions are implemented in an ethical manner;
  4. To offer potential solutions and regulatory proposals for the creator economy and the aforementioned sectors, to control the disruptive effects of AI while still allowing for innovation and growth. 

1.4. Structure of the White Paper 

This white paper comprises seven chapters. After this introduction, the following chapter focuses on the social and economic risks related to AI technology. It mainly discusses the economic risks associated with AI and the threat of job loss that further progress in the AI sector can cause. The main goal of the second chapter is to present a landscape of negative economic and social implications of AI if humanity adopts a laissez-faire approach to regulating this technology. 

The third chapter offers a comprehensive industry analysis of AI. It demonstrates the current impact of AI on the creator economy and the television/film, publishing, manufacturing, healthcare, education, financial services, and transportation industries and provides valuable insights into the potential ramifications of insufficient regulation. 

Detailed information about the relevant corporate recommendations and strategies provided by leading tech companies can be found in the fourth chapter of the white paper. The knowledge of such recommendations is important to understand how tech companies developing AI solutions approach the issue of ethical guidelines and regulatory frameworks related to artificial intelligence and machine learning. 

The fifth chapter introduces the historical context of the problem under investigation. It seeks to draw parallels between the current AI regulation needs and historical instances of regulatory interventions in non-tech industries. The rationale behind such an analysis is based on the author’s attempt to determine what regulatory actions can be potentially effective in ensuring that Generation Z uses AI to its benefit. 

The sixth chapter of the white paper offers detailed regulatory proposals for the chosen industries that provide tangible measures to control the disruptive effects of AI, while still allowing for innovation and growth. The proposals balance the interests of employees, industry, countries, and other relevant stakeholders. 

Finally, the last section puts forward a set of recommendations for further research, summarizes the main regulatory proposals of the study, and reflects on the main limitations of this white paper.  

Chapter 2. Overview of AI Landscape and Risks Associated with a Laissez-Faire Approach to AI Regulation  

2.1. The Current State of AI

The term “artificial intelligence” was introduced in 1956 by a team of scientists led by John McCarthy, who conducted a study “on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. Six years earlier, computing genius Alan Turing had examined the mathematical possibility of artificial intelligence. Turing made the powerful assumption that all activities performed by humans when using available information to solve practical problems and make decisions could be duplicated by machines, provided that they have access to full information. The ideas of both these individuals were rooted in theoretical constructions and utopian philosophies about humanity’s ability to create computers with infinite memory and advanced processing capabilities; these thinkers did not describe any practical constructs for building such an intelligence. At the same time, they pointed to an intriguing research area that was later pursued by scientists whose empirical models and mathematical justifications allowed for formulating the first theories of AI. 

Between 1957 and 1974, AI became an increasingly popular research field. Some early solutions, such as the General Problem Solver and ELIZA, provided insights into the problem-solving capabilities of computers. Governments and private stakeholders started investing in AI research, which resulted in multiple research papers on the topic. Most of these studies, however, revealed that despite the promising features of computers, they could not exhibit human intelligence in their then-current form owing to a lack of storage capacity and low processing speed. 

The introduction of expert systems and deep learning techniques in the 1980s enabled scientists to continue examining AI-related research problems and to develop AI-powered programs, even though AI research did not enjoy significant funding at the time. The introduction of speech recognition software, a chess-playing computer program that defeated the world chess champion, and a robot that could recognize emotions showed that AI research and development had achieved significant progress. AI and machine learning turned into popular tools utilized in multiple industries to achieve diverse goals. 

In the 21st century, AI has attracted an unprecedented amount of attention from numerous stakeholders. AI investments constantly hit record numbers. Data from NetBase Quid via the AI Index report indicate that AI companies received unprecedented funding in 2020, 2021, and 2022. In 2020, they attracted $153.63 billion in investment, which mostly came from private investment ($64.50 billion), minority investment ($48.22 billion), and merger and acquisition transactions ($27.28 billion). In 2021, these numbers were even higher. AI became an attractive target for private investors, who recognized the unique potential of this technology. At the same time, the growing number of mergers and acquisitions involving AI companies illustrates the value that corporate investors attach to AI. While the amount of AI investment declined in 2022, the growing focus on ChatGPT and other AI technologies in 2023 points to a renewed increase in investment this year. 

One of the most important arguments illustrating the growing importance of AI is that the number of organizations using AI solutions has been rapidly growing. Only 20% of companies utilized AI in 2017; however, according to a report by McKinsey, this number increased to 56% in 2021. A recent Forbes article shows a similar figure. The most popular areas for AI use include customer service (56%), cybersecurity (51%), digital personal assistants (47%), customer relationship management (46%), inventory management (40%), and content production (35%). Many companies use different AI solutions simultaneously to solve various problems. The number of AI capabilities embedded in organizations increased from 1.9 in 2018 to 3.9 in 2021. A recent survey of 250 executives reveals that AI can improve the functions and performance of products (51%), optimize internal business operations (36%), free up employees by automating tasks (36%), improve decision-making (30%), and optimize external processes (30%). 

Academic research shows that AI is highly effective in solving multiple tasks in different sectors. Research by Johnson et al. reveals that public and private organizations often use AI solutions to fuel research and development activities, but most of them focus on exploration rather than exploitation. Interestingly, the scientists point out that the majority of these solutions seek to augment human activities instead of replacing them. A recent Chinese study on the application of AI in the healthcare industry indicates that AI tools are already widely used to assist with detection, diagnosis, and treatment in neurology, oncology, and cardiology. The authors show that the full potential of AI in this sector is yet to be realized. Waltersmann et al. document the impressive benefits delivered by AI to manufacturing companies in such fields as production planning, quality assurance, predictive quality, predictive maintenance, and energy efficiency. At times it feels as though every industry could benefit from the unique capabilities of AI, which explains its use by an increasing number of organizations. 

AI usage patterns and investments in AI research and development are not the only indicators of the technology’s momentum. The annual number of scholarly publications on AI-related topics increased from 162,444 in 2010 to 334,497 in 2021. Only 8,466 individuals attended the thirteen major AI conferences in 2010, but attendance reached 76,453 in 2021. Data from the Center for Security and Emerging Technology show that the annual number of global patent filings for AI technologies grew from 2,560 to 141,241 over the same period. New research papers and patents dedicated to AI deepen knowledge of the technology and give stakeholders an opportunity to better understand the disruptive impact of artificial intelligence. 

Whereas many scientists have recently expressed interest in AI, it is important to emphasize that the current progress in this field is mainly driven by industry. Of the 281 AI Ph.D. graduates in North America, 195 took jobs in industry, and only 84 went to academia. Whereas many scholars have only started examining the implications of AI, organizations from various industries have already launched highly effective solutions to increase efficiency, improve decision-making, enhance customer service, and simplify the completion of routine tasks. Industry leaders are the most important stakeholders in this field since they develop and deploy AI solutions, leaving scientists to analyze specific cases of AI implementation rather than lead exploratory work. The prevalence of practitioners among the stakeholders driving progress in AI development is a crucial aspect of the problem under investigation. It implies that scientists and policymakers are currently unable to provide the necessary support and guidance for the dynamic AI sector on their own. 

Although AI regulatory frameworks have developed slowly, the number of laws addressing AI-related issues has grown significantly in recent years. The number of AI-related bills passed into law in 127 surveyed countries increased from just one in 2016 to 37 in 2022. The dangers of AI and the integration of the technology into different sectors are encouraging policymakers to develop new laws to regulate the use of AI and protect society from its risks. Most of these laws, however, currently exist only as rough drafts.

The process of developing comprehensive AI laws is ongoing, although some countries have already introduced laws that offer interesting insights into how AI could be regulated. Members of the European Union rely on the risk-based approach described in the Artificial Intelligence Act. This approach distinguishes between unacceptable, high-risk, and not explicitly banned AI applications and allows for the use of AI in all sectors besides critical services that might threaten livelihoods or encourage destructive behaviors. The text of the document is not yet finalized. Nevertheless, the main contributions of this legislation are its detailed definitions of AI-related matters and its comprehensive risk evaluation and management approach. Furthermore, it prohibits the use of automated decision-making tools in those critical services. The regulation calls for a universal approach to evaluating the risks of each AI application to make sure that its implementation does not hurt public interests. 
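
To make the tiered logic concrete, the sketch below shows how a risk-based gate might classify AI applications. It is a minimal Python illustration using invented use-case names and a simplified three-tier mapping; the Act's actual annexes enumerate categories in far greater detail.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to conformity assessment"
    MINIMAL = "not explicitly banned; light transparency duties"

# Illustrative mapping only; the real annexes are far more granular.
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "medical_diagnosis_support": RiskTier.HIGH,
    "spam_filtering": RiskTier.MINIMAL,
}

def gate(use_case: str) -> RiskTier:
    """Return the regulatory treatment for a proposed AI use case."""
    return TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)

for case in TIER_BY_USE_CASE:
    print(f"{case}: {gate(case).value}")
```

The design point is that the regulatory burden scales with the assigned tier rather than applying uniformly to every application.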

Canada also has a risk-based approach in its proposed regulatory framework, though this is not yet law. Once passed, the law will require developers to design specific mitigation plans to reduce risks associated with their products. The proposed law is similar to the draft of the EU Artificial Intelligence Act. However, unlike the document proposed by the European Parliament, the draft law introduced by the Canadian Parliament shifts the responsibility for risk management to developers. 

Similar to Canada, the United States currently does not have federal AI legislation. The National Institute of Standards and Technology has designed a set of broad AI guidelines. The Artificial Intelligence Risk Management Framework (AI RMF) calls for a four-stage approach to assessing and managing AI risks that involves testing, evaluating, verifying, and validating each application. It also calls for dividing AI risks into groups so that each group can be separately governed, mapped, measured, and managed. The AI RMF is currently available only as an early draft and thus is not a final framework. The document is also not legally binding. 
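
The grouping idea can be pictured as a small pipeline. The sketch below, a rough Python illustration built around a hypothetical risk register, mirrors the framework's logic of mapping risks into groups and then measuring and managing each group under an overall governance loop; it is not an implementation of the AI RMF itself.

```python
from collections import defaultdict

# Hypothetical risk register; names and groupings are illustrative only.
RISK_REGISTER = [
    {"name": "training-data bias", "group": "fairness"},
    {"name": "model drift", "group": "robustness"},
    {"name": "re-identification of individuals", "group": "privacy"},
]

def map_risks(register):
    """Map: sort risks into groups so each can be handled separately."""
    groups = defaultdict(list)
    for risk in register:
        groups[risk["group"]].append(risk["name"])
    return groups

def measure(risks):
    """Measure: stand-in for testing, evaluating, verifying, validating."""
    return {name: "assessed" for name in risks}

def manage(assessments):
    """Manage: record a mitigation decision for each assessed risk."""
    return {name: "mitigation planned" for name in assessments}

def govern(register):
    """Govern: run map -> measure -> manage across every risk group."""
    return {group: manage(measure(risks))
            for group, risks in map_risks(register).items()}

print(govern(RISK_REGISTER))
```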

The same conclusion can be made in relation to the AI Bill of Rights, which summarizes the main risks associated with AI and encourages stakeholders to protect data privacy, prevent algorithmic discrimination, prioritize safety, and maximize the effectiveness of AI tools. Neither of these two documents offers specific policies to prevent AI risks. Regulations introduced at the state and local levels, however, have begun to address this gap. In particular, the Automated Employment Decision Tool Law adopted in New York City requires all employers who use automated employment decision tools to conduct annual audits of these technologies to make sure the tools have no biases in their algorithms. 

China recently adopted a series of laws at the national and provincial levels that address AI-related issues. The national law requires all entities that utilize AI for marketing purposes to inform their consumers of this fact and prohibits the use of data collected with the help of AI to advertise the same products at different prices. The Shanghai Regulations on Promoting the Development of the AI Industry offer an avenue for local companies to develop AI products in line with AI regulations from other countries. Considering China’s interest in AI, it seems justified to assume that the number of AI laws adopted in China will continue to increase in the near future.

Table 1. Summary Assessment of International Policies Regarding AI


European Union

  • Developing the Artificial Intelligence Act, a yet-to-be-adopted regulation that distinguishes between unacceptable, high-risk, and not explicitly banned AI applications
  • Allows for the use of AI in all sectors besides critical services that might threaten livelihoods or encourage destructive behaviors

Canada

  • Developing a law similar to the EU Act, which will require developers to design specific mitigation plans to reduce risks associated with their products
  • Shifts the responsibility for risk management to developers

US

  • Developing the Artificial Intelligence Risk Management Framework (AI RMF), a yet-to-be-adopted framework that calls for testing, evaluating, verifying, and validating each AI application for risk
  • Dividing AI risks into a series of groups so that each group could be separately governed, mapped, measured, and managed
  • AI Bill of Rights, a declaration summarizing risks associated with AI and encouraging stakeholders to protect data privacy, prevent algorithmic discrimination, prioritize safety, and maximize the effectiveness of AI tools
  • Does not offer any policies for actually enforcing regulation or management; entirely advisory

China

  • Introduced a national law that requires all entities using AI for marketing to inform their consumers and prohibits using AI-collected data to advertise the same products at different prices
  • Shanghai Regulations on Promoting the Development of the AI Industry supports entrepreneurs to keep China’s tech scene competitive in AI

While many countries have managed to introduce draft AI legislation, most have not yet adopted final laws. Moreover, even the drafts mostly introduce broad strategies and guiding principles rather than specific solutions. Some state and local regulatory bodies have adopted legislative acts that protect society from the most critical risks of AI. At the same time, these documents focus on narrow aspects of AI use, such as the utilization of artificial intelligence in recruitment. The majority of AI laws, therefore, are either broad documents defining AI and offering basic principles of AI risk management, or specific laws addressing narrow domains of AI use. Neither the international community nor its individual constituent countries has a consistent vision of an AI regulatory framework. Many have introduced rough drafts of laws proposing a risk-based approach, but the specific details of this approach have yet to be worked out. 

2.2. AI and the Risk of Job Loss 

AI has the potential to disrupt the labor market. The technology is capable of jeopardizing employment opportunities and causing skill shifts that result in fundamentally new employment patterns. One obvious aspect of this impact is the automation of tasks. AI threatens job security through automation because AI-powered machines and algorithms perform routine and repetitive tasks effectively and efficiently, reducing the demand for human labor in industries such as manufacturing, customer service, and many others. Jobs that used to be performed by humans might soon be performed by AI solutions. 

From this perspective, the advancement of AI might be compared to the Industrial Revolution, which also saw a shift in labor patterns due to the introduction of efficient machine technologies. Levy’s study predicts that a modest number of jobs will be lost to autonomous long-distance trucks, industrial robotics, and automated customer service responses over the next seven years. He further argues that AI will create visible threats to job security in a number of sectors, providing politicians with an opportunity to drive a populist agenda against AI-powered automation. Tschang and Almirall assert that AI-driven automation has been replacing routine and low-skilled jobs, and predict that it will also start to replace nonroutine jobs in the near future. Wajcman believes that the scale of the “automation wave” brought about by AI is unprecedented and thus likely to result in massive job losses. In general, there is a consensus among scientists that AI-driven automation is a technological advancement whose impact on the job market goes beyond the completion of a limited number of routine tasks. Potentially, the technology can replace the majority of low-skilled jobs, including even those that require a certain creativity. 

One of the critical factors underpinning the risks that AI poses to job security is its ability to store and analyze data. In addition to completing routine tasks, which was a popular application of AI tools in the past, modern AI solutions display impressive performance on cognitive tasks that used to be the sole responsibility of highly skilled specialists. For instance, AI is highly effective at distinguishing between different risk groups among customers of financial institutions. Compared to customer service specialists, AI tools deliver reliable and accurate results swiftly and efficiently. Recent evidence from the healthcare industry demonstrates that research on neural networks and biomarker development has led to novel AI programs with wide applications for diagnosis in oncology. The use of AI to assist with drafting legal documents, processing data, building arguments based on case law, and even making final decisions in court reveals the potential of this technology to revolutionize the legal industry and accelerate legal proceedings. The utilization of AI in knowledge-intensive jobs is not without its flaws. Disruptive AI tools, such as ChatGPT, are prone to “AI hallucination”, in which the AI presents data that never existed. For this reason, most AI tools in their current form are not ready to be implemented in knowledge-intensive industries without human supervision. Nevertheless, it seems justified to assume that further progress in the field will amplify job security risks for knowledge-intensive workers. 
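
To illustrate the kind of cognitive task being delegated, the sketch below trains a toy credit-risk classifier in Python. All applicant data are synthetic and the three features are invented for illustration; a production scoring model would use far richer data and validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic applicants: [annual income (k$), debt-to-income ratio, late payments]
X = np.array([
    [85, 0.15, 0],
    [42, 0.55, 3],
    [60, 0.30, 1],
    [30, 0.70, 5],
    [95, 0.10, 0],
    [50, 0.45, 2],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = previously defaulted

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new applicant: the model returns a default probability,
# which a lender could threshold into risk groups.
applicant = np.array([[55, 0.40, 1]])
print("default probability:", round(model.predict_proba(applicant)[0, 1], 2))
```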

Another issue pertaining to the impact of AI on employment is connected to efficiency. While the risk of automation refers to a scenario in which an AI tool performs routine tasks that used to be performed by employees, the risk of efficiency-related job loss refers to a situation in which the superior efficiency delivered by AI makes many jobs redundant. Compared to human workers, AI is more efficient and productive; as a result, it can perform more tasks. In the transportation sector, for example, revolutionary AI solutions can replace a plethora of jobs, including those of drivers, warehouse workers, quality assurance specialists, and others. In customer service, AI chatbots can replace almost all employees, from frontline staff to specialists in narrow areas. The field of content creation faces an unprecedented threat from AI because the technology can make the jobs of SEO copywriters, SMM managers, and editors redundant. Given AI’s large storage capacity and its ability to process volumes of information that are insurmountable for humans, without suffering from common cognitive limitations, AI is capable not only of replacing some jobs but also of making entire lines of work obsolete. 

The impact of AI on employment is a controversial research area. Many scientists believe that AI poses a threat to job security and warn humanity against blindly embracing the technology. Makridakis argues that any industry witnessing the rise of AI will suffer a temporary increase in unemployment rates. As a disruptive technological advancement, AI changes common employment patterns and leaves many people unable to adapt to a new reality. Wang and Siau, drawing on the reports of multiple credible organizations, express concerns that 30% to 60% of all jobs could be replaced by AI in the next decade. A study by Korinek and Stiglitz, who applied several different theories to explore the economic and social effects of artificial intelligence and robotics, came to the conclusion that AI would inevitably increase unemployment rates unless humanity launched interventions to prevent the negative effects of worker-replacing technological change. 

The impact of AI on job security is a controversial issue that might be hard to address on a global scale. The pace of AI integration differs greatly across professional fields, countries, and industries. Thus, whereas its effects on employment can be complementary in one case, another case may, in contrast, exhibit heightened job security concerns. Nevertheless, there is already some evidence pointing to the potential of AI to replace around a third of jobs. Approximately 37% of people working in the United Kingdom claim that their jobs are meaningless and mostly comprise a set of routine tasks; the majority of these tasks can be completed by AI-driven machines. Foxconn recently replaced 60,000 factory workers with robots and plans even more significant job cuts in the future in line with its strategy of “harnessing automation in manufacturing operations”. For this company, adopting AI and robotics to cover most factory operations has already become cheaper than hiring a substantial number of low-skilled employees working at minimum wage. Specialists from McKinsey share alarming figures showing that 45% of all paid activities can be performed by AI, while 60% of all occupations could see approximately 30% of their constituent activities automated. While scientists and analysts do not agree on the exact scope of the job loss threat associated with AI, it is undeniable that AI has already started replacing jobs in various industries. 

Some scientists, however, believe that AI research and development does not always result in unemployment. Research by Mutascu, which analyzes historical data on the relationship between AI and unemployment between 1998 and 2016, shows that the impact of AI on jobs is nonlinear and concludes with the surprising remark that accelerated use of AI actually reduces unemployment rates. It should be noted, however, that this study was carried out before the recent introduction of disruptive platforms like ChatGPT. Fleming advises society not to be excessively concerned about AI-driven job losses. He puts forward the assumption that the integration of AI into most professions will be guided by the concept of “bounded automation”; therefore, only low-skilled jobs will be threatened by the technology in the near future. Naude points out that while it might seem that novel AI solutions can easily replace many jobs, their deployment is expensive and difficult, which explains why many companies have not yet adopted any AI tools in their operations. Unlike many journalists and other authors of non-scholarly sources on the impact of AI on unemployment, Naude adopts a more cautious approach to forecasting the future of AI and encourages stakeholders to carefully weigh the advantages and disadvantages of the technology in terms of disrupting the labor market. 

2.3. Economic Risks of AI  

Even if workers do not lose their jobs to AI, there is a possibility that the rise of AI technology will reduce their well-being by widening the wealth gap and contributing to inequality. The study by Korinek and Stiglitz states that “in the absence of such intervention [against adverse effects of innovation], worker-replacing technological change may not only lead to workers getting a diminishing fraction of national income, but may actually make them worse off in absolute terms”. The potential of AI to replace human jobs creates a situation in which people who own AI technologies and lead the process of AI integration accumulate unprecedented wealth, whereas workers face the threat of job loss and might accept lower salaries in exchange for keeping their jobs.

The redistribution of income is a critical problem associated with AI. One approach to this problem is to launch a set of policies that maximize the likelihood that AI integration leads to Pareto improvements. Korinek and Stiglitz believe that changes in factor prices brought about by AI would translate into gains on complementary factors, making it possible to uplift the poor without lessening the quality of life of others. According to the agreed position of 41 economists from top universities in the USA, the “rising use of robots and artificial intelligence in advanced countries is likely to create benefits large enough that they could be used to compensate those workers who are substantially negatively affected for their lost wages.” AI threatens job security, but it can also offer an avenue for shared prosperity if utilized properly. The available evidence suggests that the way the income produced by AI is distributed in society is the key issue in shaping the future of AI and addressing the social and economic risks associated with this technology. Depending on the specific policies adopted to redistribute AI-driven profits, humanity can either benefit from artificial intelligence or, in contrast, enter a new era of an unprecedentedly large inequality gap.
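
A stylized two-group arithmetic makes the Pareto-improvement argument concrete. The numbers below are invented: AI grows total income but shifts it toward capital owners, and because the surplus exceeds the workers' loss, a transfer funded from AI gains can leave both groups no worse off.

```python
# Stylized economy, before and after AI adoption (all figures illustrative).
workers_before, owners_before = 60.0, 40.0   # total income = 100
workers_after, owners_after = 45.0, 85.0     # AI grows the total to 130

surplus = (workers_after + owners_after) - (workers_before + owners_before)
worker_loss = workers_before - workers_after  # workers lose 15 absent policy

transfer = worker_loss  # fund compensation out of the AI surplus
print("workers after transfer:", workers_after + transfer)  # back to 60
print("owners after transfer:", owners_after - transfer)    # 70, still above 40
print("Pareto improvement feasible:", worker_loss < surplus)
```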

The influence of AI on inequality is a critical issue underpinning the economic risks associated with the technology. Several factors play a major role in driving this impact. The unchecked progress of AI can lead to labor market polarization: AI-driven automation can simultaneously create demand for high-skilled, high-paid jobs related to different segments of AI use and eliminate numerous low-skilled jobs through automation. Data from McKinsey show that many organizations have already started hiring for AI-related roles, including software engineers (39%), data engineers (35%), AI data scientists (33%), machine learning engineers (30%), data architects (28%), AI product managers (22%), design specialists (22%), data visualization specialists (21%), and translators (8%). Most of these professions require significant skills and knowledge. Furthermore, as stated above, the education sector is not yet ready to meet the demand for such specialists since progress in the AI field is driven by industry rather than academia. Therefore, the majority of specialists hired for AI positions are experienced workers rather than holders of advanced degrees. This scenario illustrates that labor market polarization can widen an inequality gap that is inherently linked to the skills gap. 

The magnitude of the skills gap can be especially significant at the global rather than the national level. Ironically, while many scientists warn society about the dangers of AI as a technology that can permeate all parts of human life, approximately 2.9 billion people have still never used the Internet. Even the technologies that exist at the moment create a significant digital divide that contributes to inequality gaps. Further development of AI is likely to widen the global gap even more. Many communities, or even entire countries, might find themselves deprived of the opportunity to benefit from AI. In South Sudan, Somalia, Burundi, the Central African Republic, and Ethiopia, around 92.8%, 90%, 89.7%, 89.7%, and 83.3% of the population, respectively, do not have access to the Internet. People living in these states are highly unlikely to take advantage of AI since they lack the means to access the technology. Considering that the international community has still not found a way to significantly increase Internet penetration in the Global South, finding a way to expand the benefits of AI to the global population is a critical challenge. 

Data bias is another pertinent dimension of the income inequality threat. AI algorithms use systems of rules developed by their creators; thus, the algorithms might inherit the biases of their developers. Some scholars are concerned that AI algorithms might amplify biases in decision-making, potentially producing discriminatory outcomes and magnifying adverse effects on marginalized communities, widening existing gaps and contributing to new inequalities. A number of stakeholders point to this risk as one of the main factors determining whether AI produces adverse economic outcomes. Furthermore, as described above, New York City adopted a local law requiring companies that use AI algorithms in hiring to conduct annual audits of these tools in order to prevent discriminatory practices. The full spectrum of this problem is not entirely understood because many biases are implicit and thus hard to detect. Further research is needed to develop effective systems for identifying all pertinent bias threats in AI algorithms. 
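
As one concrete example of what such an audit might compute, bias audits of hiring tools commonly report selection-rate impact ratios by demographic group. The sketch below uses invented selection rates and the conventional four-fifths benchmark; actual audit requirements are more detailed.

```python
def impact_ratios(selection_rates):
    """Ratio of each group's selection rate to the most-selected group's."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Hypothetical data: share of applicants the tool advances, by group.
rates = {"group_a": 0.42, "group_b": 0.30, "group_c": 0.40}
for group, ratio in impact_ratios(rates).items():
    flag = "  <- below 0.8, review for bias" if ratio < 0.8 else ""
    print(f"{group}: {ratio:.2f}{flag}")
```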

Another vital aspect of the potential impact of AI on inequality pertains to privacy concerns. There is a popular belief that AI can establish a culture of surveillance in which both government organizations and businesses regularly collect detailed data on individuals for various purposes. Some communities could be disproportionately targeted by such surveillance practices, which would further deteriorate their well-being and widen the inequalities existing in society. Power imbalances that already exist could grow even larger if communities traditionally deemed “risky” are specifically targeted by government agencies and private companies with access to novel AI solutions. 

Possible economic losses associated with unemployment, wealth concentration, and inequality are not the only economic risks brought about by AI. The available evidence demonstrates that AI operationalization can lead to market monopolization. Companies possessing the resources to develop their AI capabilities can use advanced algorithms and personalized features that create network effects and favor dominant market players. As a result, companies that already hold significant market shares can increase them even further, while also reducing the bargaining power of customers and introducing substantial barriers to entry. In this situation, other market players might struggle to check the monopolistic position of such large corporations as Google or Microsoft. Finding ways to prevent market monopolization is an important priority for stakeholders and is likely to play a major role in shaping the economic impact of AI. 

The uncontrolled rise of AI can cause economic disruption in traditional sectors. Entire industries might become redundant owing to the introduction of revolutionary AI solutions. As a result, stakeholders may expect economic turbulence as both workers and employers are forced to adapt to new market conditions. Large sectors employing thousands or even millions of people could disappear, producing substantial economic risks tied to changes in purchasing power, inflation rates, and investment. Entry-level programming and data analysis, proofreading, translation, graphic design, accounting, postal service, data entry, bank telling, and administrative support are examples of jobs that could disappear in the near future owing to AI. It might be too early to speculate on the economic implications of such processes, but they seem likely to pose significant economic risks. 

Chapter 3. Industry Analysis 

3.1. The Integration of AI into the Manufacturing Industry

The manufacturing industry is one of the sectors that has witnessed the rapid rise of AI. In 2022, companies operating in North America ordered approximately 44,000 robots at an estimated price of $2.4 billion, an 11% increase over orders placed in 2021. Apparently, the rising gap between the pace of revenue growth and the pace of wage growth, together with the pandemic (which forced companies to comply with quarantine measures, support the expansion of remote work, deploy sanitation stations, and maintain social distancing), provided factory owners with both the means and the motivation to adopt AI tools. AI solutions might be expensive and hard to deploy, but they can continue functioning even during times of uncertainty, while requiring no healthcare coverage, bonuses, or additional expenses. Therefore, it is not surprising that many large enterprises have accelerated their shift to AI-powered robotics in the last several years.

An analysis of recent information available online shows that multiple companies have introduced revolutionary technologies tailored to the manufacturing sector that use AI algorithms and data elements. Hyundai Motor developed the Stretch robot, which grabs packages from shipping containers; Emerson Electric launched new projects in industrial automation with a focus on the semiconductor market; and GE launched an autonomous robotics system. NVIDIA recently deployed IGX Orin, an AI platform with industrial inspection capabilities; Siemens adopted Microsoft’s Azure OpenAI Service; and Rockwell Automation designed a Smart Manufacturing solution that uses AI to predict manufacturing process problems. The number of AI applications in manufacturing continues to grow.

Cobots, additive manufacturing, generative design, predictive maintenance, and smart factory solutions are among the most well-known cases illustrating the utilization of AI in the manufacturing industry. Cobots are a new type of AI-powered robot that does not require dedicated space and can function alongside humans. They can perform a plethora of functions, from simple operations like polishing or screwing to complex quality assurance inspections enabled by specialized cameras. Automotive manufacturers such as Ford and BMW are already widely using cobots to perform different tasks and report high efficiencies brought about by this technology. Streamlining manufacturing processes with cobots is thus one of the principal use cases of AI in the manufacturing industry. 

Generative design is a popular smart manufacturing solution. It uses input data on the weight, size, and other requirements of products to generate designs that meet those requirements. Generative design incorporates not only data on products and materials but also data on costs; in other words, one can set limits on the amount of money that can be spent to produce specific items, thus maintaining target efficiency levels. Generative design produces blueprints and instructions that can then be used in the manufacturing process. It works effectively with additive manufacturing since it can conceptualize products created using 3D printing tools. Many companies have already started using generative design solutions to produce cheap, light components and enhance the quality of their products. Such a process is instrumental in product design and development, but it can lead to the loss of numerous jobs in the manufacturing industry. 
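
The underlying mechanism is constrained optimization: the system searches for a geometry that minimizes some objective (here weight) while respecting performance and cost limits. The toy sketch below, with invented constants and a deliberately simplified beam model, illustrates the idea.

```python
import numpy as np
from scipy.optimize import minimize

DENSITY, PRICE_PER_KG, LENGTH = 2.7, 4.0, 100.0  # illustrative constants

def weight(x):
    width, height = x
    return DENSITY * width * height * LENGTH / 1000.0  # kg

def stiffness(x):
    width, height = x
    return width * height ** 3  # proportional for a rectangular section

constraints = [
    {"type": "ineq", "fun": lambda x: stiffness(x) - 2000.0},            # stiff enough
    {"type": "ineq", "fun": lambda x: 60.0 - PRICE_PER_KG * weight(x)},  # within budget
]
result = minimize(weight, x0=[5.0, 10.0], bounds=[(1, 20), (1, 20)],
                  constraints=constraints)
print("width, height (cm):", result.x.round(2), "weight (kg):", round(result.fun, 2))
```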

Predictive maintenance is another field in which manufacturers widely apply AI. It helps anticipate servicing needs and prevents the loss of financial and time resources. Both premature and late machine maintenance result in financial losses; moreover, the latter can lead to significant safety hazards. Effective predictive maintenance systems forecast companies’ needs for specific replacement parts, which assists in planning. The system framework introduced by Li et al. comprises maintenance implementation, data acquisition, sensor selection, data processing, data mining, and decision support modules, all powered by AI. Aivaliotis et al. present a methodology that leverages AI to build a digital twin of a factory and apply simulations to calculate the remaining useful life of machinery equipment. The implementation of AI in predictive maintenance is closely connected to the Internet of Things, since manufacturing data are retrieved via sensors reporting vibration, thermal images, the presence of liquids, and efficiency. The deployment of AI allows possible equipment problems to be predicted in advance. 
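
At its simplest, the prediction step extrapolates a degradation signal toward a failure threshold. The sketch below fits a linear trend to hypothetical daily vibration readings (all values invented) and estimates when maintenance should be scheduled; real systems use far richer models.

```python
import numpy as np

# Hypothetical vibration amplitude sampled daily from one machine.
days = np.arange(10)
vibration = np.array([2.1, 2.2, 2.4, 2.5, 2.7, 2.9, 3.0, 3.2, 3.4, 3.5])
FAILURE_THRESHOLD = 5.0  # level at which servicing is required

# Fit a linear trend and extrapolate to the threshold crossing.
slope, intercept = np.polyfit(days, vibration, 1)
days_remaining = (FAILURE_THRESHOLD - vibration[-1]) / slope
print(f"trend: +{slope:.2f}/day; schedule maintenance in ~{days_remaining:.0f} days")
```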

Smart factory solutions are another well-known example of the use of AI in the manufacturing industry. The concept of a smart factory implies operating a facility independently of humans. FANUC has been operating a production line without humans since 2001, as its facility can run unsupervised for up to a month. Philips, meanwhile, has a razor-producing factory where only nine human employees are required to be present at any time; in other words, the entire factory operates with nine supervisors, as opposed to manufacturing facilities employing hundreds of people. Smart factories deliver benefits such as increased efficiency, higher productivity, cost savings, quality improvement, flexibility, enhanced safety, and supply chain optimization. Thus, one may expect an increasing number of organizations to deploy smart factory solutions in the near future. 

The rise of AI in the manufacturing industry poses a set of risks. Unemployment is among the most obvious issues in this field. The manufacturing industry employs a significant number of people; AI-driven disruption could therefore result in the loss of millions of jobs across different regions of the globe. It is crucial to emphasize that many of these jobs are low-skilled and low-paid, which makes them an optimal target for AI job replacement projects. Seseni and Mbohwa examined the case of Soweto furniture-manufacturing small and medium enterprises and found that computerization and automation can simultaneously cause large-scale job losses and lead to essential improvements in the quality of products and services. The loss of jobs can lead to the multiple adverse outcomes discussed in the previous chapter of this white paper.

Many countries rely heavily on manufacturing as a way to improve the quality of living of their low-income residents. For example, Bangladesh leveraged its fast fashion manufacturing sector to reduce poverty rates and become a middle-income country. Manufacturing is also a viable strategy for increasing the quality of living in marginalized communities. The loss of manufacturing jobs to AI, therefore, is a critical challenge that could cause substantial economic and social losses, especially in the least-developed countries. Large enterprises might close their factories in the Global South, or at least reduce the scale of their manufacturing operations there. Such a process could reduce GDP growth rates in these states and affect global foreign direct investment flows. 

Changes in entry barriers are another vital socioeconomic risk related to the integration of AI into the manufacturing industry. Manufacturing already is one of the sectors in which companies that lack large resources struggle to compete or succeed. The rise of AI can raise entry barriers even further, since companies will need to invest additional money in deploying AI infrastructure, purchasing and installing software, and conducting employee training. SMEs are likely to find themselves in a position where their resources are insufficient for effectively utilizing AI in their factories. As a result, they might produce limited gains from AI, while large enterprises invest in the large-scale implementation of novel AI solutions that allow them to take advantage of economies of scale and scope.

3.2. The Integration of AI into the Healthcare Sector

The healthcare industry has also witnessed an unprecedented number of disruptive AI applications. The AI healthcare market is expected to grow from $14.6 billion in 2023 to $102.7 billion in 2028, exhibiting a compound annual growth rate of around 47.6%. Specialists expect this growth to be driven mainly by the growing demand for cost reduction initiatives and the availability of big data. Large healthcare datasets comprising patient data require analytical solutions. While many healthcare organizations already utilize electronic health record systems, the demand for AI solutions operating on big data is likely to increase even further along with growing investment and frequent government interventions.
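
Figures like these can be sanity-checked directly, since a compound annual growth rate is fully determined by the endpoint values and the number of years. The short Python snippet below recomputes the implied rate from the cited 2023 and 2028 figures.

```python
# Recompute a compound annual growth rate (CAGR) from endpoint values.
def cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1 / years) - 1

# AI healthcare market: $14.6B (2023) -> $102.7B (2028), i.e. 5 years.
rate = cagr(14.6, 102.7, 2028 - 2023)
print(f"Implied CAGR: {rate:.1%}")  # ~47.7%, matching the cited ~47.6%
```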

Many new AI solutions tailored to different segments of the healthcare industry appear on a monthly basis. Around 42% and 40% of healthcare providers from the recent Healthcare AI Survey use healthcare-specific models and algorithms, and production-ready codebases, respectively. The technologies are used in medical imaging, diagnostics, drug discovery and development, personalized medicine, customer communication, data analytics, and predictive analytics. The number of specific AI healthcare solutions might be hard to calculate. Haleem et al. published a study focusing on AI applications in orthopedics that mentions diagnostic, surgical training, treatment, surgical, administrative, and problem-solving tools, which could be further divided into 15 subgroups based on their functions. Each group, in turn, comprises numerous specific solutions, including those designed for particular facilities. This paper shows that each clinical field has already embraced numerous AI solutions. Administrative processes are also widely affected by AI. For example, the research by Stanfill and Marc cites automated medical coding, healthcare data management and governance, health information management workforce training, and patient privacy and confidentiality as four areas in which AI improves health information management. Almost all aspects of healthcare facilities’ operations can benefit from AI integration.

The use of AI in the healthcare industry delivers multiple benefits. It enhances diagnostics and treatment, increases efficiency, reduces costs, assists with personalizing care, improves resource allocation, and expands telemedicine applications. A survey that was carried out in 2022 reveals that a significant number of healthcare specialists in the United States report that artificial intelligence and machine learning tools improve clinical outcomes (59%), operational performance (58%), the efficiency of the health system (53%), administrative performance (46%), financial outcomes (47%), and consumer engagement (46%). These numbers illustrate that many healthcare organizations might use AI solutions for different purposes because of the impressive benefits brought about by these tools. 

AI integration in the healthcare industry can unify multiple systems and processes. AI can be used as a component of a new system incorporating biochemical assays, electronic health record systems, and data retrieved from wearable devices. If used consistently as part of such a system, AI can enable targeted diagnostics and personalized care. It can tailor recommendations and prescriptions to the unique needs of patients and adjust them in real time based on the results of vital signs measurements.

Healthcare specialists are enthusiastic about the future of AI healthcare. They are willing to embrace AI technologies provided that these tools exhibit high accuracy, protect data privacy, and offer extensive training opportunities. Physicians and nurses recognize the benefits that AI can deliver. Patients likewise report few concerns about the technology. The study by Fritsch et al., which was carried out among 452 German patients, showed that only 4.77% of them had negative or very negative attitudes toward the use of AI in the healthcare sector. The patients from this study agreed that the technology had to be controlled by a physician but denied having significant concerns about AI. It is important to note, however, that this study was conducted in Germany; research from a Global South country might have delivered different results. In general, it seems justified to state that both healthcare professionals and patients currently are enthusiastic about AI integration in the healthcare industry.

Unlike manufacturing, the healthcare sector is unlikely to witness a significant volume of job losses because of AI. Davenport and Kalakota claim that even though AI can result in the loss of at least 5% of jobs in the near future, there is no evidence that the healthcare sector will be significantly affected by this technological advancement. As stated above, the use of most AI solutions in this industry requires the supervision of healthcare professionals. Moreover, most healthcare jobs are high-skilled and cannot be easily replaced. Administrators, accountants, and other individuals who work in the healthcare industry in non-medical roles are the most likely to face job security risks because of the rise of artificial intelligence. For the majority of healthcare professionals, however, the risk of losing a job to AI is negligible.

The healthcare industry is not a sector exhibiting high unemployment rates. In contrast, many healthcare organizations struggle with attracting employees, especially nurses. The use of AI can help these organizations at least partially cover their staffing needs. The use of AI solutions is expected to increase the productivity of the existing medical workers, as they will be able to perform more operations using AI. As a result, healthcare organizations might receive an opportunity to increase salaries for the existing employees, thus improving their motivation. The arguments laid out above illustrate that AI might contribute to solving the problem of the shortage of medical staff. 

The integration of AI into the healthcare sector can have a wide range of positive social and economic implications. In addition to assisting with solving the staffing problem, AI solutions can increase the operational efficiency of healthcare organizations by eliminating waste and reducing costs. Such a result can be obtained by reducing medical errors, optimizing resource allocation, and streamlining administrative processes. Another crucial benefit is increased access to healthcare. Telemedicine and wearables can provide residents of remote regions with access to high-quality healthcare services. Owing to the reduced number of errors, increased data processing speed, and a number of other advantages of AI, novel artificial intelligence and machine learning instruments can substantially improve healthcare outcomes. In turn, such an outcome is likely to translate into a plethora of positive results for communities and entire countries, such as enhanced quality of life, increased life expectancy, reduced healthcare costs, increased productivity, and increased economic growth rates. 

At the same time, the embrace of healthcare AI should not be described as a utopian scenario. AI integration in this industry is associated with a number of risks. In particular, owing to difficulties with accessing this technology, its use in the healthcare industry can widen inequalities. People who still do not have access to the Internet are highly unlikely to enjoy the benefits of AI. Therefore, ironically, the rise of AI healthcare applications can simultaneously close and widen gaps in accessing high-quality healthcare services. Residents of rural areas who normally struggle with accessing healthcare facilities would welcome the use of AI, while people from low-income neighborhoods who lack access to the Internet would be unable to take advantage of the technology. 

There are some other social and economic risks related to AI-driven healthcare. The wide application of AI tools could potentially result in the erosion of human expertise and the loss of valuable patient-provider relationships. For many patients, especially the elderly, relationships with particular healthcare specialists are a strong driver of the effectiveness of treatment. The introduction of AI algorithms that make all decisions regarding treatment can cause the loss of empathy and trust built between patients and healthcare providers. Such a scenario is dangerous since it can create substantial threats to humanity originating from overreliance on technology. Medical errors, discriminatory biases, and flawed algorithms embedded in AI can have catastrophic implications. For this reason, it is of paramount importance to establish effective regulations to ensure the accuracy and reliability of all AI healthcare solutions and to make sure that their use is supervised by competent healthcare professionals at all times.

3.3. The Integration of AI into the Financial Services Industry

Unfortunately, there is currently no information about the total size of the AI market in the financial services sector. Nevertheless, indirect evidence shows that large cost savings encourage companies operating in this industry to experiment with AI solutions. According to an expert at Nvidia, around 36% of U.S. financial services companies have already deployed AI solutions and achieved at least a 10% reduction in operational costs. A report from Business Insider Intelligence shows that 80% of banks are aware of the benefits of AI, while 75% of large banks are already implementing at least some AI strategies. The cumulative cost savings opportunities of AI in banking reportedly reach $199 billion, $217 billion, and $31 billion in conversational banking, anti-fraud and risk management, and credit underwriting, respectively. Such impressive cost savings are triggering the unprecedented integration of AI solutions into the financial services industry.

Progress in the AI-driven disruption of financial services depends on several factors, such as the explosion of big data, the availability of infrastructure, regulatory requirements, and competition. The explosion of big data is an important factor from the perspective of AI disruption. Clients interact with financial institutions through a substantial amount of information in the digital domain, including transaction reports, emails, images, and many other types of data. Both financial institutions and clients need to retrieve and process this data as quickly as possible to improve the customer experience. The use of AI can streamline data handling processes, simultaneously increasing the accuracy of operations and reducing costs. Big data powered by AI can also inform decision-making by providing financial institutions with detailed customer data. In general, the growth of big data is among the key drivers of AI adoption in the financial services industry.

The availability of infrastructure is another major factor in this field. The use of AI requires significant computational resources and a number of complementary technologies, such as cloud platforms capable of storing large amounts of data. Many financial institutions currently are unable to achieve meaningful progress in deploying the infrastructure that can support novel AI solutions. Moreover, as stated above, such infrastructure is unavailable in many countries, especially the ones that have low Internet penetration rates. 

Regulatory requirements are a relevant factor shaping the integration of AI into the sector of financial services. Compared to other industries, this sector is heavily regulated. Financial institutions face a set of regulatory requirements addressing various aspects of their operations. On the one hand, AI can be beneficial from this perspective since AI solutions can assist with ensuring regulatory compliance by standardizing documents and making sure that an organization meets all the requirements. On the other hand, AI itself can become a regulated issue subject to a set of restrictions. The ways in which new laws and policies focusing on the industry of financial services will address AI will to a large extent predetermine the pace of its integration into the industry. 

Finally, the last relevant factor in this field is competition. Financial services providers face significant competition from direct rivals and from companies offering substitute products and services. Technology is a well-known differentiator in this space since it can become the basis of a sustainable competitive advantage. The fact that AI reportedly increases operational efficiency, improves client experiences, and reduces costs encourages financial institutions to accelerate the exploration of AI solutions and facilitate their deployment.

The specific AI applications in the financial services sector are abundant. Chatbots powered by natural language processing have already become an inalienable part of bank-client communication. Whereas they usually cannot solve particular problems that require the attention of human specialists, these solutions can help clients find answers to standard questions and solve common problems. Chatbots are highly effective in assisting with new account creation, budget allocation, and a number of other operations. Most of them, however, often misunderstand the nuances of human dialogues (59%), misunderstand requests (59%), execute incorrect commands (30%), struggle with understanding accents (29%), fail to distinguish the owner’s voice (23%), and provide inaccurate information (14%), according to respondents from the study by Suhel et al. At the moment, AI chatbots are not ready to replace human specialists in financial institutions. Only 19% of the respondents from the research paper by Ris et al., which focused on the UK banking industry, admitted to preferring AI chatbots to human customer support specialists. At the same time, before the recent emergence of ChatGPT, the idea of replacing humans with chatbots in the customer support departments of financial institutions seemed unrealistic. Nowadays, it has already become evident that the majority of tasks completed by bank tellers and other employees interacting with clients can be performed by AI-powered tools.

AI is widely used for fraud detection and prevention purposes. The deployment of AI solutions can increase the likelihood of spotting abnormal transactions that may be indicative of fraud. Awotunde et al. presented a model that leveraged an artificial neural network to detect loan fraud and assist with bank loan management in order to prevent manipulations during loan applications. The model managed to predict fraud risks with 98% accuracy. AI algorithms are capable of utilizing a set of rules and instructions to spot behavioral indicators of suspicious activities that can translate into high risks.
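
The exact architecture used by Awotunde et al. is not reproduced here, but the general technique, training a small neural network to separate fraudulent from legitimate applications, can be sketched as follows. The synthetic features, labeling rule, and network shape are all illustrative assumptions.

```python
# Illustrative sketch: train a small neural network to flag loan fraud.
# Synthetic features stand in for real application data (e.g., income,
# loan amount, credit history length, number of recent applications).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
# Toy labeling rule: "fraud" correlates with extreme feature values.
y = (np.abs(X).sum(axis=1) > 5.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Scale inputs, then fit a small feed-forward network.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2%}")
```

A real deployment would use labeled historical applications rather than a synthetic rule, and would evaluate precision and recall on the rare fraud class instead of raw accuracy.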

One of the most important aspects of AI use in the financial services industry is customization. From this perspective, the role of AI in this industry could be compared to its role in the healthcare sector. Financial services organizations that take advantage of AI capabilities can leverage the technology to provide clients with personalized recommendations that consider all the relevant input data. Unlike human specialists, AI tools rarely make mistakes in data retrieval and processing, except in cases of AI hallucination. Therefore, personalized recommendations focusing on particular customers can be more accurate, reliable, and valuable.

AI is used in many different segments of the financial services industry. Its integration into trading is one of the popular research topics. At the moment, however, it is too early to draw final conclusions about its effectiveness in this segment. AI tools that enable algorithmic trading are, in principle, capable of identifying trends and executing trades at high speed. At the same time, their outcomes strongly depend on their assumptions and input data. AI-powered trading systems facilitate the process of trading, reduce the likelihood of careless mistakes, and ensure the consistency of a strategy implemented by particular traders. However, their effectiveness is constrained by the limitations of the algorithms guiding their operations. Thus, the role of such systems in trading mainly boils down to reducing mistakes and making the entire process more convenient.
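
To make the trend-identification idea concrete, the sketch below implements a deliberately simple moving-average crossover rule on a synthetic price series. Real AI trading systems are far more sophisticated; every parameter here is an assumption chosen for illustration, and the point is only that the rule executes consistently, without manual slips.

```python
# Deliberately simple trend-following rule: a moving-average crossover.
# This only illustrates how algorithmic rules remove manual, error-prone
# steps from trading; it is not a viable trading strategy.
import numpy as np

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0.05, 1.0, size=500))  # synthetic series

def moving_average(series: np.ndarray, window: int) -> np.ndarray:
    return np.convolve(series, np.ones(window) / window, mode="valid")

fast = moving_average(prices, 10)
slow = moving_average(prices, 50)

# Align the two averages on the same trailing dates.
fast = fast[-len(slow):]
signal = np.where(fast > slow, 1, -1)  # 1 = long, -1 = flat/short

flips = int(np.count_nonzero(np.diff(signal)))
print(f"Strategy changed position {flips} times over the sample")
```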

The rise of AI poses a significant threat to banking jobs. Banks from different areas of the globe, especially in wealthy countries, are embracing digitalization and cutting costs, which often implies closing branches and firing staff members. In 2017, the Royal Bank of Scotland announced the closure of 259 branches and eliminated approximately 680 jobs. AI and other digital technologies are likely to make many jobs redundant, such as data entry clerks, risk analysts, loan underwriters, and customer support representatives. The technology can replace employees involved in both front-office and back-office operations. Whereas some high-skilled individuals are expected to remain in order to supervise AI algorithms, low-skilled workers will struggle to deliver value to banks that AI cannot. Research by Wells Fargo predicts that robots will eliminate more than 200,000 jobs in the banking industry within the next decade. Considering that many jobs in this sector involve the completion of repetitive tasks, such as answering standard questions and helping clients sign documents, the potential for AI replacement projects in this field is significant. Therefore, it is of paramount importance to introduce strict regulations to make sure that the advancement of AI does not result in a rapid decline in the quality of life caused by the loss of millions of jobs.

The use of this technology in the financial services sector can amplify a set of economic risks. In addition to the threats that were already discussed above, such as overreliance on technology and the adverse outcomes of job losses, there are some unique risks specific to this industry. In particular, the integration of AI algorithms into trading can result in increased market volatility and a high risk of flash crashes. Since trading systems respond quickly to various events, they can amplify price swings and cause market disruptions so rapidly that specialists find it much harder to predict and prevent them. Furthermore, the fact that different financial institutions use the same or similar algorithms contributes to systemic risk, since any malfunction or disruption of widely used AI algorithms could lead to catastrophic consequences across the entire industry, such as a cascade of selling. The interconnectedness of financial institutions relying on AI algorithms, thus, might become a valid concern destabilizing the financial system.

3.4. The Integration of AI into the Transportation Industry

Stakeholders in the transportation industry are enthusiastically embracing the power of AI. According to a recent report published by the International Association of Public Transport, 86% of public transport stakeholders are currently engaged in partnerships focusing on the development and adoption of artificial intelligence. The report from Precedence Research estimates the size of the global AI transportation market at $2.3 billion in 2021. The specialists forecast that the market will continue to grow at a compound annual growth rate of around 22.97% and will reach $14.79 billion by the end of 2030. According to Iyer, the transportation sector is one of the industries displaying especially impressive progress in capturing the value produced by AI and adopting AI solutions to overcome the obstacles faced by the sector. Therefore, investments in AI transportation solutions are expected to continue to grow in the near future.

Transportation stakeholders use AI for a variety of tasks. A recent survey shows that they mostly prefer utilizing AI in such fields as real-time operations management (25%), customer analytics (25%), intelligent ticketing systems (21%), predictive maintenance (17%), scheduling and timetabling (17%), multimodal journey planning (17%), fraud detection (13%), safety management (10%), and route design (10%). The implementation of AI tools delivers benefits to drivers and other stakeholders alike: they can enjoy increased safety, enhanced efficiency, and improved accessibility as a result of the use of AI in traffic flow optimization, infrastructure planning, and intelligent transportation systems. To illustrate the advantages of this technology, it seems justified to discuss the specific benefits that AI provides to different stakeholders.

Enhanced safety is one of the most important benefits of AI in the transportation industry. Several effective AI technologies, such as computer vision and sensors, enable vehicles to detect possible threats and respond to them in real time. The most important advantage of AI in this sphere is that the technology can reduce the number of accidents by reducing the impact of human error. The role of AI in this field can be broad, covering not only various tools available within the paradigm of “bounded automation” but also the construction of autonomous vehicles that can function independently of humans. If AI applications are used consistently as part of the Internet of Vehicles and alongside the concepts of autonomous driving, intersection management, and predictive planning, stakeholders can implement the changes necessary to reduce the number of accidents by decreasing congestion, optimizing routes, and eliminating the main drivers of accidents. There is currently no detailed information about the role of AI in improving safety in the transportation industry, but there is a consensus among specialists that such an outcome is one of the high-priority goals for the stakeholders who develop novel AI applications for the sector.

Efficiency is another relevant advantage of AI. The conference paper by Olayode et al. introduces an artificial neural network capable of supporting efficient traffic management by predicting congestion. The model was found to predict traffic flow based on data pertaining to a road intersection, vehicle classes, and speed. According to the authors, it could decrease road congestion significantly if supplied with a large dataset of historical and real-time data. AI approaches also display impressive performance in increasing the efficiency of air traffic management. The study by Tang et al. provides empirical evidence from multiple studies pointing to the ability of different applications to improve the efficiency of airspace management, air traffic services, flight operations, and air traffic flow management. Guo et al. believe that the increased efficiency of transportation can be enabled by a combination of an operator network, a big data platform, and an operator optimization implementation platform. The big data platform would be closely connected with AI platforms responsible for model training, prediction, and machine learning. According to these scientists, the role of AI in the transportation sector mainly boils down to supporting the analysis of big data.
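
A minimal version of such congestion prediction can be sketched as a regression problem: predict traffic volume at an intersection from time of day and recent flow. The sketch below uses synthetic data and a small scikit-learn network; it is not the model trained by Olayode et al., and all features and parameters are assumptions.

```python
# Minimal sketch of congestion prediction: regress traffic volume at an
# intersection on time-of-day and recent flow. Synthetic data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 3000
hours = rng.uniform(0, 24, size=n)
recent_flow = rng.uniform(50, 800, size=n)  # vehicles in prior interval

# Toy ground truth: morning/evening peaks plus persistence of recent flow.
volume = (
    300 * np.exp(-((hours - 8.5) ** 2) / 4)     # morning rush
    + 350 * np.exp(-((hours - 17.5) ** 2) / 4)  # evening rush
    + 0.5 * recent_flow
    + rng.normal(0, 20, size=n)
)

X = np.column_stack([hours, recent_flow])
X_train, X_test, y_train, y_test = train_test_split(X, volume, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```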

Many scientists believe that AI transportation solutions are mandatory components of smart infrastructure and city planning. AI tools can be instrumental in analyzing information about public transit usage, weather conditions, traffic patterns, and other relevant issues that are important in maintaining smart cities. Nikitas et al. put an emphasis on connected vehicles, autonomous vehicles, autonomous personal and unmanned aerial vehicles, the Internet of Things, and the physical Internet as topics that are often discussed in relation to the use of AI in the transportation industry within the context of smart cities. The study by Agarwal et al. concludes that intelligent traffic management and control, smart parking management, emergency transportation systems, safe mobility, and smart traveler information systems are among the main applications of AI in the transportation sector that are of paramount importance for developing Indian smart cities. While the idea of an AI-powered future remains a utopian scenario in many sectors, stakeholders in the transportation industry have already achieved significant progress in integrating this technology into various operations.

The uncontrolled expansion of AI transportation applications, however, might be associated with different risks. In particular, autonomous and semi-autonomous vehicles run by AI could cause unpredictable situations and accidents. It is imperative to design an effective system of regulations that would prevent such situations by requiring audits and safety inspections for vehicles and introducing a set of guidelines for drivers on how to ensure the safety of their cars. Countries also have to develop regulations on how to address accidents involving autonomous vehicles. The research by Abdullah and Manap illustrates that the Malaysian legislation currently is unable to determine tortious liability in those road accidents that involve autonomous vehicles. The scientists argue that shifting the liability to manufacturers or making users liable for accidents are two viable options and assert that it is crucial to ensure consistency in this field. This article shows that the legislative system of many countries currently is not ready for dealing with a number of legal issues arising from the implementation of AI in the transportation sector. 

The majority of social and economic risks related to the uncontrolled development of AI that were mentioned in the previous sections also apply to AI transportation applications. Simultaneously, it seems crucial to highlight the significance of cybersecurity vulnerabilities in this sector. While cybersecurity is a relevant issue in many sectors witnessing the rise of artificial intelligence, the transportation sector is especially important from this perspective given the high price of most vehicles and the safety risks associated with possible cyberattacks. The number of cyberattacks against not only particular vehicles but also entire public transport systems has been growing, causing substantial losses. In this situation, AI algorithms simultaneously are a desirable target for hackers and a powerful mechanism for protecting the sector from cybersecurity threats. 

Possible job losses in the transportation industry currently represent a highly uncertain scenario. A recent report by PwC published on the official website of the UK government predicts that the country will lose approximately 550,000 jobs in the transportation and logistics sectors to AI over the next 20 years. At the same time, its authors admit that this impact is likely to manifest itself only in the long term. Delivery drones and autonomous vehicles have not yet become an inalienable part of the sector. Furthermore, it is possible that their development and integration will be slowed by safety and security concerns. Given such uncertainty, predicting the exact number of jobs that will be lost to AI in the transportation sector is challenging.

3.5. The Integration of AI into the Education Industry

The education sector has witnessed numerous AI applications. Intelligent tutoring systems, smart content creation platforms, education chatbots, automated grading tools, translation services, and virtual classrooms are prime examples of technologies that are already used in many educational institutions to improve academic outcomes, enhance the student experience, support educators, and increase efficiency. The size of the AI education market reached $1.82 billion in 2021 and is expected to continue growing at a compound annual growth rate of 36% over the next 8 years. The growing reliance on AI tools in the sector can to a large extent be explained by the outbreak of COVID-19. Owing to the rapid growth of online education platforms during the pandemic, many stakeholders who barely used novel technologies before 2020 were forced to adopt them. As a result, approximately 51% of educators in the United States are now more confident about online education than they were before the pandemic. Such enthusiasm for education technologies encourages educational institutions and instructors to experiment with novel solutions, including those based on artificial intelligence.

The capabilities of AI in the education sector are abundant. The framework introduced by Hwang and Chien states that AI can assume the roles of teacher, student, and peer within the new AI-powered metaverse. All these roles are important in supporting stakeholders and facilitating the teaching and learning processes. Furthermore, AI is also expected to play a major role in arbitration, simulation, and decision-making by creating and maintaining a learning environment. The overwhelming majority of contemporary AI applications used in the industry focus on providing learning support, although a few are also deployed to assess students’ learning. Chiu and Chai showed in their study that AI-powered curriculum design is a powerful instrument that can provide multiple benefits for teachers, reduce costs, and even enhance learning experiences for students. Surprisingly, however, this important avenue for improvement has not yet been recognized by a significant number of educational institutions. It seems that the education industry currently uses AI to improve the products and services delivered to learners, while the utilization of AI to increase efficiency remains a less popular approach. Such a pattern has profound implications for the economic and social risks associated with the rise of AI in the education sector.

The impact of AI on job loss in the education sector currently remains a controversial area. The aforementioned report by PwC predicts that the integration of AI in the education sector will produce approximately 300,000 job gains in the UK over the next 20 years. In other words, many more jobs will be gained than lost in the UK education sector because of AI. Such a prediction can be explained by the argument that most stakeholders in the education sector currently view AI as a mechanism to support the learning process rather than as a way to increase operational efficiency and reduce costs. Many journalists and stakeholders in the education industry have expressed concerns about possible job losses in the education sector along with the expansion of online education that utilizes AI-powered tools instead of humans. However, at the moment, there is no evidence that such concerns are justified. Given the current trends in the education sector, AI is more likely to produce job gains than job losses. Despite such a prediction, the sector still needs to develop comprehensive regulations addressing the risks associated with AI to make sure that all stakeholders benefit from the technology and leverage it to improve the learning process. The industry also needs to be prepared for unexpected scenarios that could translate into the sudden amplification of various social and economic risks related to AI.

3.6. The Integration of AI into the Film/Television/Streaming Industries

The idea of utilizing AI to improve films and television shows is not new. Filmmakers have been using AI tools since the mid-1990s. The Matrix and The Lord of the Rings are prime examples of movies that relied on AI to add special effects and create computer-generated images. However, recent evidence suggests that the role of artificial intelligence in the sector has dramatically increased over the last decade. Unfortunately, there is currently no information about the size of the film/television AI market because of the unique nature of this industry. Nonetheless, one can infer from indirect evidence that its size has been growing.

An analysis of different sources leads to the conclusion that AI tools are used in the film/TV industry in such areas as scriptwriting and storytelling, video editing, actor selection, animation and visual effects, and marketing. A combination of AI and ML tools is capable of writing a unique script based on examples of previous films and TV shows, thus maximizing the likelihood of success for a new product. D.A.N. is an example of a film that was created using AI. Whereas Jon Finger still wrote the script for this film, he heavily relied on Gen-2 from Runway to produce individual shots. This case shows that AI has the potential to assist with brainstorming when writing scripts and even with producing specific shots that build up a scene. From the perspective of scriptwriting, the impact of AI on the film/television sector is similar to its effect on the copywriting sector.

AI tools are already widely used in video editing. Producers used the IBM Watson program to create a trailer for the film “Morgan”, as the instrument is capable of detecting high-action scenes in a movie and combining them into a short video. A video upscaling tool by Topaz Labs reportedly sharpens fuzzy footage and assists with motion interpolation. In general, it is clear that although AI tools currently are unable to replace video editors, they can significantly increase the productivity of these employees. Actor selection is another interesting application of AI in the film/television industry. As stated above, many companies use AI algorithms to select employees; thus, some film directors also apply AI for this purpose. Moreover, some AI solutions can enable automatic auditions by using actors’ textual and audio data to show how particular individuals would perform certain scenes. Another crucial application of AI is the creation of digital actors. At the moment, this process mainly occurs in the case of non-human characters, such as Thanos from Avengers: Infinity War. At the same time, there is a premise to believe that the idea of using existing data on actors, especially background actors, to create artificial characters in films and TV shows will become appealing in the near future.

Various AI solutions are currently utilized to assist with enhancing visual effects (VFX) and animation. The main benefit of AI in this area pertains to reduced costs and time needed to create animations and VFX. The AI tools developed by Runway are an example of a novel solution that can be applied to facilitate the processes of masking and rotoscoping. 

Finally, the last area in which AI is often applied in the sector is marketing. Data analysis tools embedded into many AI solutions allow for a detailed analysis of the information about the target audience. The results of such an analysis can inform decision-making, support the production of appealing marketing materials, guide the implementation of marketing strategies, and increase customer engagement. The arguments laid out above illustrate that AI tools can now be used in almost all the operations involved in the film/television industry. 

There is currently not enough information about the economic and social risks of AI in this sector. Nonetheless, the 2023 Writers Guild of America strike and the concomitant SAG-AFTRA strike of film and television actors illustrate the growing significance of such threats. Writers are striking for higher wages and seeking to prevent the overuse of AI-generated storylines and dialogue that would be categorized as “literary material” or “source material”. One demand of screenwriters is to make sure that humans remain the only authors of “literary” and “source” material for awards consideration. They claim that “only a person can be considered a writer, and AI-generated material would not be eligible for writing credit.”

Writers are not the only stakeholder group concerned about the threat of AI. Apparently, many actors believe that the integration of AI in the sector will lead to the loss of transparency, a further decrease in wages, and the eventual replacement of background actors with artificially created characters. There is a popular opinion that production companies will force all actors to sign consent forms for the use of their visual and audio data, which will eventually result in their replacement by so-called “synthetic actors” created by AI algorithms. Such a risk seems to have triggered public discourse over the tools to control the spread of AI and the creation of ethical safeguards to make sure that the use of artificial intelligence tools is beneficial for all parties. The full spectrum of social and economic risks that AI poses to the visual arts industry is not known yet, but it seems justified to assert that writers and background actors are the most vulnerable groups in the sector in terms of the risk of being replaced by AI.

There have been no formal regulatory proposals yet in regard to film/television AI tools. At the same time, there are several points that are likely to become the basis for such proposals in the future. In particular, the special agreement for background actors published by the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) currently does not include any provisions regarding AI. While many tech companies have adopted self-regulation frameworks that guide the use of AI tools, the film/television industry does not currently have any mandatory or voluntary rules guiding the application of artificial intelligence and machine learning. The Alliance of Motion Picture and Television Producers reportedly made a “groundbreaking” proposal to protect actors’ “digital likeness” by requiring their informed consent for the use of digital replicas or alterations of their performance. However, the actors’ union rejected this proposal, claiming that production companies would only pay actors for one day of work on the condition that they sign an informed consent form. Stakeholders are still in the process of designing the initial proposals concerning the best ways to regulate AI in the film/television sector.

3.7. The Integration of AI into the Publishing Industry

The publishing industry is in the process of embracing different AI tools in its quest to improve efficiency and performance. The majority of publishing firms already employ different automated technologies to facilitate the production process; however, AI has the potential to cover the existing gaps in this area by reducing inefficiencies related to the overreliance on human activities. There are also many other fields in which AI can provide significant benefits to publishing companies. AI tools can facilitate research, synthesize certain types of content, conduct initial checks of content, and customize content to ensure consistency. A recent study based on the data retrieved from various publishing companies depicts AI as a valuable tool that writers and other professionals use. Thus, as of this moment, the majority of authors cooperating with publishing companies are likely to benefit from AI rather than face job security threats.

The process of editing is time-consuming both for authors and for publishers. Reliable AI tools can significantly accelerate this process by offering valuable features that not only identify careless mistakes but also detect minor stylistic inaccuracies and other issues that might undermine the quality of content. Hypothetically, AI can facilitate all the organizational processes of a typical publishing company that operate between the stages of content creation and consumption. The available evidence indicates that AI can ensure the swift implementation of text analysis, formatting, proofreading, and translation processes, while also supporting the preparation of final products. An additional application of artificial intelligence in the publishing sector is connected with the ability of the technology to assist with creating and enhancing graphic images, which often are an integral part of a final piece of content. For instance, for a publication like The Washington Post, the presence of a relevant image that captivates readers’ attention is often just as important as the quality of content because appealing titles and images draw readers’ attention to an article. Thus, AI can be considered a revolutionary technology that simplifies and accelerates many organizational processes carried out by publishing companies.

One of the most important applications of AI in the industry is to ensure the alignment of articles, books, and other publishing materials with the expectations of readers. The technology can analyze reader data and provide customized recommendations for individuals on the best articles or books that they would enjoy reading, whereas trend analysis might help writers create engaging content that is appealing to a particular target audience. Based on such trend analysis, publishers can ensure that their value offering is suitable for a particular niche. 

The benefits of AI in the publishing sector are evident, but the use of the technology is also accompanied by a set of significant risks. In particular, the introduction of new content creation tools may eventually lead to the job displacement of some authors. Despite the popular concern about possible job losses in the publishing industry, many specialists do not agree that this risk is critical. The results of a recent survey conducted among around 300 industry professionals revealed that AI is more likely to strengthen core business functions than to replace writers. Moreover, the cases of The Washington Post and Axel Springer surprisingly show that AI tools not only positively influence sales and readership statistics but also enhance job stability for staff. Another popular risk related to AI concerns quality. Given the tendency of many AI tools to “hallucinate”, there is a risk that many books and articles will contain inaccurate information, which, in turn, can translate into adverse outcomes for the entire society. For instance, an article offering AI-generated content on medical topics can give wrong advice, resulting in injury or even the death of a reader. Misinformation, copyright issues, and quality concerns are currently among the main topics raised in regard to the advancement of AI in the publishing industry.

Another important benefit of AI in the publishing industry is the ability of AI algorithms to convert a source article into versions tailored to different social groups, which results in improved reach and increased profit margins. Apple News, Google News, Instagram, YouTube, Spotify, and many other platforms already widely use AI algorithms for these purposes, creating customized product offers without increasing the size of their marketing and content editing teams. Meta, for instance, uses AI algorithms to predict how valuable a particular piece of content might be to specific users based on their demographic characteristics and previous account history. The information published by the company’s Transparency Center provides detailed information about the drivers of these AI predictions. For example, predictions of the likelihood of clicking on an author’s profile on Instagram depend on the number of times the user viewed the profile of this author, the number of times other users clicked on the author’s profile, and the number of times the author’s followers clicked on the profile. Google News has long used AI algorithms to customize news to viewers’ preferences, which explains the wide criticism of this platform for allegedly adjusting to viewers’ biases. Stakeholders of the publishing industry employ AI algorithms for different purposes, and the risk of AI bias currently seems to be the most disturbing AI-related threat facing the sector.
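
The disclosed signals can be read as features of a probabilistic click model. The sketch below is a purely hypothetical illustration of how such signals might be combined: the feature names follow the Transparency Center description above, but the weights and the logistic form are invented and do not reflect Meta’s actual models.

```python
# Hypothetical sketch of a content-ranking prediction: estimate the
# probability a user clicks an author's profile from engagement signals.
# Weights and the logistic form are invented purely for illustration.
import math

def p_profile_click(
    user_views_of_author: int,   # times this user viewed the profile
    other_users_clicks: int,     # times other users clicked the profile
    follower_clicks: int,        # times the author's followers clicked it
) -> float:
    # Log-scale counts so heavy-tailed engagement doesn't dominate.
    score = (
        -3.0                                        # baseline (rare event)
        + 0.50 * math.log1p(user_views_of_author)
        + 0.10 * math.log1p(other_users_clicks)
        + 0.15 * math.log1p(follower_clicks)
    )
    return 1 / (1 + math.exp(-score))  # logistic link

print(f"{p_profile_click(5, 1200, 300):.1%}")  # e.g. a fairly engaged user
```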

There is currently no comprehensive proposal for regulating the rise of AI in the publishing sector. A number of authors recently signed a petition seeking to protect the authorship of their content. They claim that large language models must be forbidden from using copyrighted material without the explicit consent of its authors. According to the president of the Authors Guild, “AI regurgitates what it takes in, which is the work of human writers… [therefore], it’s only fair that authors be compensated for having ‘fed’ AI and continuing to inform its evolution.” Copyright law is currently one of the most controversial issues related to AI. Moreover, it is addressed in a number of draft AI laws, including the AI Act. It seems justified to expect that the use of copyrighted content by large language models will be one of the most heavily regulated aspects of AI functionality. It is important to emphasize that authors are among low-paid professionals, as their median salary in the United States is only around $23,330. In this situation, it seems justified to state that AI can further contribute to the reduction of authors’ wages, which are already lower than the average wage. Such a process could encourage many authors to change their profession, which could lead to the erosion of democracy over the long term, something that is already often described as a critical threat of AI. Therefore, the use of copyrighted content created by authors currently seems to be a critical social threat that must be addressed by stakeholders.

3.8. The Integration of AI into the Creator Economy

The last application of AI discussed in this white paper pertains to the creator economy. The term “creator economy” refers to content creators of all types. It is reasonable to group all creators together for analysis, even if they create very different content, because of the many shared experiences they have relating to AI.

The available evidence provides a compelling reason to believe that the creator economy has already embraced artificial intelligence. Influencers and content creators widely apply AI for creative ideation and content enhancement. The majority of AI instruments utilized in this field can be divided into text-focused and visual-focused tools. Text-focused generative AI tools play a more important role in this sector since content creators apply them for brainstorming, content creation, editing, and proofreading purposes. Visual-focused tools, according to popular video blogger Jordan Harrod, help edit content and enhance its quality. In particular, she admits to using AI tools to convert long videos from YouTube into short clips that can be posted on TikTok. From this perspective, the application of AI in the creator economy can be compared to the use of this technology in the film/television industry, since artificial intelligence simplifies the process of video editing and enhancement, reducing the time spent on preparing a final product.

Interestingly, the majority of journalists, authors, scientists, and influencers believe that AI tools will never replace human creativity. Hill-Yardin et al., who conducted a study on the use of ChatGPT in scientific writing, note that this software “is a useful tool to get us started, but, like running an immunohistochemistry experiment using an antibody of questionable specificity, without probing, integrating, and non-linear minds to assess the content, the outputs might be of little value”. Harrod believes that using AI to replace content creators in the creator economy is unjustified: “if you're going into the creative space, you're doing it because you have ideas and you want to create them, and you're looking for tools that can help you do that, not because you're trying to just generate as much content as physically possible”. A similar opinion was also expressed by Dave Wiskus, the CEO of Nebula: “Autotune didn’t destroy singing – it just made people with a different kind of creativity more likely to find a good way to express that creativity; Photoshop didn’t ruin photography – it just did it so more people could get into photography; it made photography more accessible so we could get better art from more artists… I think AI, at least as we understand these tools today, will give the same effect.” The arguments laid out above provide a compelling reason to assert that AI is highly unlikely to fully replace content creators and influencers in the creator economy.

Despite the optimistic predictions of most specialists highlighted in the previous paragraph, some recent cases show that AI-driven influencers can be popular among readers and viewers. The case of AI influencer Rozy shows the impressive potential of AI to revolutionize the way content is created and shared. Rozy’s Instagram account, which currently has around 156,000 followers, is a prime example of a successful artificial influencer. The account has participated in more than 100 advertising campaigns and is forecast to earn around $1,000,000 next year. Lil Miquela, who currently has 2.7 million followers, not only succeeded in promoting many brands, including Vogue and Prada, but also managed to take part in activist campaigns, such as the #MeToo movement. The example of Lil Miquela shows that an influencer created by AI can not only create content that is interesting to followers but also successfully take social action that might influence public opinion. Such a development is simultaneously impressive and disturbing given the high risk of misinformation related to AI.

At the moment, the role of AI in the creator economy mostly boils down to providing a set of tools that assist with tasks that used to be performed manually. At the same time, several important issues should be mentioned in relation to the potential risks associated with the technology. First, AI is likely to result in job disruption and displacement for low-skilled content creators. Those content creators whose work is not directly connected with creativity are likely to be the most vulnerable group in this field. Authors of SEO-optimized articles exemplify such stakeholders. These individuals create texts that are supposed to include a certain number of keywords so that a website is better indexed by search engines. In many situations, the meaning of these texts and their value for customers are not crucial since these pieces of content are primarily created for search robots. Modern AI tools, such as Google Bard or ChatGPT, are capable of producing such articles by meeting the given requirements for the frequency of certain keywords. At the same time, corporations that are concerned about their image and reputation are likely to invest in creating high-quality SEO texts so that these articles simultaneously include the required keywords and appeal to potential visitors. Such a task might be difficult for the majority of modern large language models, thus justifying the decision to hire competent SEO copywriters. The example of SEO copywriting illustrates that whereas low-skilled content creators are at risk of losing their jobs, competent, high-skilled professionals are likely to maintain a steady flow of orders.
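
The mechanical part of this kind of work, verifying that a draft meets a keyword-frequency brief, is exactly what machines handle well, which is why it is so exposed to automation. The sketch below illustrates such a check; the keywords, thresholds, and sample draft are invented for illustration.

```python
# Small sketch of an SEO-brief check: does a draft contain each required
# keyword at least a minimum number of times? Thresholds are invented.
import re

def keyword_counts(text: str, keywords: list[str]) -> dict[str, int]:
    lowered = text.lower()
    return {
        kw: len(re.findall(r"\b" + re.escape(kw.lower()) + r"\b", lowered))
        for kw in keywords
    }

draft = (
    "Our predictive maintenance platform helps factories plan repairs. "
    "Predictive maintenance reduces downtime and cuts maintenance costs."
)
requirements = {"predictive maintenance": 2, "downtime": 1}

counts = keyword_counts(draft, list(requirements))
for kw, minimum in requirements.items():
    status = "OK" if counts[kw] >= minimum else "MISSING"
    print(f"{kw!r}: {counts[kw]}/{minimum} {status}")
```

What the checker cannot measure, whether the text actually persuades a human reader, is precisely the part that still favors skilled copywriters.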

One of the most important issues related to the risk described above is that many content creators do not have formal employment status since they work as freelancers. According to different estimates, there were approximately 50-60 million freelancers in the United States alone before the COVID-19 pandemic. This figure has likely increased in recent years. While most freelancers do not rely on freelance work as the main source of their income, many of them consider freelancing an important constituent of their earnings. People living in many parts of the Global South, such as India and Kenya, rely heavily on freelancing since they can successfully compete with rivals from other nations owing to their low fees. The advancement of large language models can negatively affect the job security of these individuals, especially considering that their employment rights are often not protected by employment contracts and applicable laws. Furthermore, given that many of them do not have official employment status, it is difficult to identify these individuals and target them with social programs to help them adjust to the AI-driven labor market.

The second risk of AI in the creator economy is the redundancy of content. The use of content created by authors to train large language models might result in situations when content creators unknowingly plagiarize the work of other authors. Such a risk is critical because it could erode creativity, reduce the value of new content, and adversely affect the development of the creator economy. It also might hurt content creators whose ideas can be used without proper compensation. This risk is already being addressed by policymakers who are trying to find an optimal way to regulate the use of content for pre-training large language models. 

Another threat related to the rise of AI in the creator economy is inequality. As stated above, the development of large language models occurs in a way that puts various individuals in fundamentally different conditions. Those individuals who have access to normal Internet infrastructure can find themselves in an advantageous position, whereas content creators working in so-called Internet cafes are likely to struggle with taking full advantage of AI models and algorithms. Besides the risks highlighted above, the integration of large language models and AI algorithms into the creator economy also is accompanied by a set of threats that were already discussed above in relation to other industries, such as quality dilution, the loss of authenticity, data privacy concerns, and ethical dilemmas. 

The case of the rise of AI in the creator economy is different from most other sectors discussed in this white paper. For example, AI can offer promising solutions in the healthcare and education industries, but stakeholders hardly discuss scenarios in which AI could provide complete products in these two sectors without any human intervention. In contrast, even the current AI tools demonstrated their ability to create content that can be confused with the content written by humans. Therefore, many social and economic risks related to the integration of AI into the creator economy have already become pressing. 

Chapter 4. Review of Corporate Recommendations 

4.1. Introduction

As previously demonstrated, progress in the field of AI is mainly driven by industry rather than academia. Large corporations that introduce revolutionary AI products are responsible for setting new principles, rules, and guidelines in the AI industry. While neither policymakers nor scientists are currently able to keep pace with rapid AI development, these companies shoulder the responsibility for regulating the rise of artificial intelligence so that it occurs in a sustainable manner. Considering that there is currently no consistent regulatory landscape at the international, national, and local levels regarding the use of AI technology, internal recommendations and principles adopted by leading technology companies currently play a critical role in shaping AI progress and mitigating the risks associated with this technology. This chapter offers a detailed discussion of the corporate recommendations and strategies in relation to AI regulation provided by leading tech companies, including Google, OpenAI, Microsoft, Apple, Meta, and IBM.

It is worth noting that all corporate recommendations include some degree of risk assessment and risk-based regulation. This universal point could serve as a foundation for building a shared framework. On issues of self-regulation and government licensing, however, there are significant differences of perspective across AI stakeholders. 

4.2. OpenAI

Unlike other companies discussed below, OpenAI does not have a universal ethical framework guiding all of its actions in relation to AI research and development. Nevertheless, the firm’s official website describes a set of broad principles adopted by the corporation, including broadly distributed benefits, long-term safety, technical leadership, and cooperative orientation. The need to distribute benefits broadly is a critical pillar of OpenAI’s stance on AI development since the company seeks to minimize any cases of harm induced by AI and to make sure that as many humans as possible can unlock the value of the technology. Like other leading tech companies, OpenAI recognizes the risks posed by AI tools to equality and encourages stakeholders to distribute the benefits delivered by the technology broadly. Whereas the goal of making new AI tools accessible worldwide seems utopian owing to low Internet penetration rates and other constraints that cannot be overcome swiftly, creating a system for ensuring the fair distribution of AI benefits seems to be a viable option for addressing the social and economic risks amplified by artificial intelligence. 

The principle of long-term safety is a unique feature of OpenAI’s approach to AI ethics. The company’s officials have expressed concerns about the “competitive race” in AI research. Furthermore, the firm announced that if another company came close to creating artificial general intelligence, OpenAI would stop competing with that project and instead start assisting with its further development. This approach marks a fundamental difference from the competitive stance adopted by the other companies in this analysis.

The enterprise agrees to assume broad responsibility for managing the societal impact of artificial intelligence. The company’s officials agree not only to engage in safety and policy advocacy but also to “strive to lead in those areas that are directly aligned with our mission and expertise”. As part of this responsibility, the enterprise regularly publishes its AI research in line with the principle of cooperative orientation and intends to share policy, standards, and safety research with industry stakeholders and policymakers in the future to make sure that the industry as a whole manages AI risks. 

One of the regulatory measures supported by OpenAI is the introduction of licensing and testing requirements. The company also supports the establishment of a monitoring agency for pre- and post-deployment review of AI products and the funding of AI safety research. In general, OpenAI supports a universal approach to AI regulation that implies the creation of an agency requiring a license for any AI product whose capabilities exceed certain thresholds. This way, it will be easier to ensure compliance with safety standards.

Sam Altman’s recent testimony before the U.S. Congress provides important information about the company’s stance on AI risks. It shows that OpenAI recognizes that a scenario in which AI creators cause “significant harm to the world” is possible. In particular, Altman specifically emphasized the risk of “one-on-one interactive disinformation”, which, in his opinion, must be prevented through AI regulation. One of the most interesting insights to be gained from Altman’s testimony is that while he is not sure about the specific scenarios through which the integration of AI algorithms into various spheres of life could disrupt the economy and society, he is confident that such scenarios are possible: “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that, we want to work with the government to prevent that from happening”. In general, Altman’s testimony illustrates that OpenAI advocates a proactive approach toward AI regulation so that the government would address AI risks even before they become visible. 

OpenAI has also adopted a cautious approach to mitigating the risks associated with its own AI products. The company reportedly spent six months carrying out external red teaming and testing of GPT-4 before releasing it to the public. Moreover, the corporation has recently launched a dedicated alignment division whose main goal is to reduce the risks of AGI. The division will use approximately 20% of the firm’s compute capacity to build a human-level automated alignment researcher capable of ensuring that new AI algorithms comply with safety standards. Therefore, whereas the company advocates for the adoption of strict regulations, OpenAI also works on enhancing its own self-regulation approach. 

The companies that follow were selected because they all belong to the Partnership on AI, a research consortium whose members include Google, IBM, Microsoft, Apple, Meta, and Amazon. 

4.3. Google (Alphabet)

Google is ready to embrace AI. The company’s officials recognize the revolutionary role of this technology and its potential to improve human life. The website of Google AI states that “while we are optimistic about the potential of AI, we recognize that advanced technologies can raise important challenges that must be addressed clearly, thoughtfully, and affirmatively”. The company claims to adhere to seven principles that guide its AI research. First, the company promises to take into account relevant economic and social factors associated with the development of AI technologies. It is important to emphasize that the enterprise’s specialists are optimistic about the future of AI and claim that “the overall likely benefits substantially exceed the foreseeable risks and downsides”. Second, the firm claims to take the necessary measures to avoid creating and reinforcing unfair bias in AI algorithms. Third, the corporation develops and tests its products in a constrained environment that helps ensure sufficient levels of safety. Fourth, the firm seeks to ensure that all systems and products designed by Google are accountable to humans. This way, the enterprise plans to address a set of popular concerns related to the lack of control over large language models. Fifth, in addition to safety, the design of AI solutions also incorporates privacy principles. Sixth, the company claims to rely on rigorous scientific research and extensive scientific evidence in developing and upgrading its AI products. Finally, the last AI principle to which Google adheres is to conduct regular audits of its solutions to minimize the cases of harmful uses. 

In addition to this set of AI principles, Google has stated that it will not engage in the development of certain AI products. First, it will not support or participate in the development of large language models that mainly focus on causing or facilitating injury to people. In other words, Google plans to refrain from participating in the production of AI-based military products. Second, the firm announced its commitment not to develop technologies that gather data in ways that violate international norms. Third, Google will not develop products whose purpose conflicts with international law and human rights. Finally, it also should be noted that the company claims not to develop products that “are likely to cause overall harm”. Interestingly, the firm reserves the right to develop AI algorithms that cause harm provided that the benefits of these solutions are expected to outweigh the risks; in this situation, Google’s own evaluations and forecasts are the only criteria used to weigh benefits against risks. To make sure that the company complies with its principles and refrains from working on applications that are incompatible with its ethical norms, the corporation has a dedicated team that conducts ethical reviews of all new solutions implemented by the company through the stages of intake, analysis, adjustment, and decision. The official website of Google, however, does not provide detailed information about the composition of this team and the ways in which it operates. 

One of the most important features of Google’s stance on AI regulation is that the company’s officials believe that self-regulation is not sufficient to protect humanity against the risks of AI. Even before the recent acceleration in AI research triggered by the introduction of ChatGPT, the company insisted on the need to regulate AI. According to Google’s CEO, “Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways… sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.” The document called Recommendations for Regulating AI, which was recently issued by Google, calls for regulating AI by taking a sectoral approach built on existing legislation, adopting a risk-based framework, promoting interoperability in AI standards, ensuring parity between AI and non-AI systems, and recognizing the role of transparency. In general, the firm encourages policymakers to avoid generalizations and instead adopt a pragmatic approach that balances the interests of relevant stakeholders and implements a customized risk-based model in each industry. 

The leadership of Google is engaged in discussions with policymakers regarding AI regulation. The head of the enterprise’s cloud division recently announced that the company’s leaders were having “productive conversations” with EU regulators regarding the EU AI Act. In particular, the company’s representatives are reportedly discussing ways to protect creative industries and enforce copyright laws. In a recent recommendation to the U.S. government, Google advised policymakers to issue detailed guidance to all agencies on tackling AI risks. From this perspective, the company’s recommendations differ from those of many other corporations calling for the establishment of a separate agency responsible for addressing AI risks. In a recent proposal sent to Australian policymakers, the company recommended that the Australian government ensure clarity on liability for the misuse and abuse of AI systems, establish a copyright system to enable the use of copyright-protected content, refrain from introducing strict privacy laws so that data could flow across borders, facilitate cross-sector collaboration, and adopt a framework for the use of data for training AI algorithms. In general, Google’s approach to AI regulation is cautious. The company calls for balancing the interests of relevant stakeholders and avoiding overregulation. Instead of adopting universal policies, the company encourages policymakers to apply a risk-based approach that would focus on protecting high-risk sectors and designing customized interventions rather than “one-size-fits-all” policies. 

Despite the consistent approach taken by Google in relation to the issue of AI regulation, the company has been heavily criticized for failing, until recently, to recognize the full spectrum of risks posed by unregulated AI. Google’s employees reportedly labeled the launch of Bard as “rushed,” “botched” and “un-Googley” in a series of internal messages. Given the firm’s attempt to introduce a new product as soon as possible, it is natural that certain ethical and regulatory aspects of the new AI algorithms and large language models launched by the corporation might have been overlooked. Timnit Gebru, who is one of the leaders in AI ethics research and an ex-member of Google’s ethical AI team, believes that Google did not take sufficient measures to address the large social and economic risks associated with AI, such as high model training costs, large environmental emissions, bias risks, and misinformation. Furthermore, Dr. Geoffrey Hinton left Google because, as he believes, the launch of ChatGPT led Google to adopt an aggressive strategy of accelerating work on its own large language models without taking the necessary precautions. In light of recent publications, it seems justified to state that Google’s stance on AI ethics and regulation is heavily criticized because of the company’s rushed introduction of Bard.

4.4. IBM

IBM has a consistent AI ethics framework. The company states that the goal of its AI products is to augment human intelligence. The firm does not seek to create large language models that would replace humans and instead pursues the concept of “bounded intelligence” in its projects. The company also claims to comply with copyright law by ensuring that clients’ data and insights belong to their creators. This principle is important since it addresses one of the most prominent controversies related to AI products. Finally, the corporation also tries to make its technology explainable and transparent. IBM promises to reveal data on the training of all AI algorithms and the basis of its algorithms’ recommendations. These three principles are not unique, but they illustrate the stance that IBM has taken on AI ethics. Contrary to many other firms, IBM believes in the potential of self-regulation to protect humanity from AI risks. In addition to these principles, the company’s AI research is also guided by five pillars: privacy, robustness, transparency, explainability, and fairness. All these principles are standard and can be found in the ethics statements of many other technology companies. 

The corporation advocates for the so-called “precision regulation” approach, which is similar to the strategy recommended by Google. The approach comprises four pillars that were announced by Christina Montgomery, IBM’s chief privacy and trust officer. First, the firm calls for adopting the strongest regulation exclusively in the fields with the highest risks, in line with the risk-based framework. Second, the company believes that policymakers must define risks and provide guidance on AI uses in high-risk areas to reduce uncertainty. Third, IBM supports the call for transparency as a mandatory principle of any AI algorithm. Finally, the last pillar involves requiring companies to test their AI products and conduct impact assessments. In general, IBM warns policymakers against overregulating the field and instead suggests the application of a risk-based approach that would not interfere with research and development activities. 

The company is one of the most active supporters of the risk-based approach to AI regulation. Following the recent EU AI Act vote in the European Parliament, IBM made an announcement praising the institution for “preserving the Commission’s risk-based approach to artificial intelligence in line with our consistent calls globally for precision regulation on AI, which we believe is the best way to protect people and values while promoting innovation”. According to the enterprise, broad regulation of AI technology as a whole would be harmful to society and innovation. A study conducted by Morning Consult on behalf of the corporation showed that around 70% of Europeans and 62% of Americans agree on the need to adopt a precision regulation approach. The arguments laid out above illustrate that both IBM and Google advocate for similar regulatory measures based on the risk-based framework, whereas OpenAI supports a broader approach. 

4.5. Microsoft 

Microsoft is one of the companies involved in the “competitive race” in the AI sector. The company’s official website identifies six principles of AI research: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. While these principles are common and hardly differ from the pillars announced by other leading tech companies, such as Google or IBM, it is important to emphasize that Microsoft operationalizes them through the Responsible AI Standard, which includes concrete criteria for achieving goals under each of the six principles. Compared to most other companies involved in AI research and development, the operationalization of AI ethics principles at Microsoft is very detailed. The firm has a clear framework aimed at ensuring that each AI product presented by the corporation meets strict requirements.

Until recently, the company also had an AI Ethics Team of 30 individuals that was supposed to guide the company in developing new AI products and implementing the assessments mentioned in the Responsible AI Standard. According to a member of this group, “People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies’… our job was to show them and to create rules in areas where there were none.” However, the firm eliminated this team as part of a 2023 cost-cutting initiative. The corporation still maintains the Office of Responsible AI, which creates rules and principles for guiding the firm’s AI projects, but it no longer has a team that would enforce these rules and help employees comply with them. According to company officials, the decision was part of a broader reorganization, and actual investment in responsible AI allegedly increased. 

Microsoft’s position regarding AI regulation resembles the one expressed by OpenAI’s officials, which is natural given the companies’ close relationship (Microsoft has been a major investor in OpenAI). Microsoft supports a system that would require all AI developers to notify the government about testing new products and apply for a license before deploying any systems. Moreover, the company calls for ensuring that all AI algorithms involved in critical infrastructure can be slowed down or even turned off immediately. The firm supports the establishment of a government agency in the United States that would enforce safety standards, monitor AI testing, and license all large language models before they can be deployed. The company’s CEO also advocates for an executive order that would promote voluntary compliance with the guidelines included in the NIST framework. It is crucial to emphasize that Microsoft agrees that AI developers sometimes have to bear legal responsibility for complying with security regulations, something that certain other tech companies try to avoid. In general, in comparison with most other corporations, Microsoft supports a strict AI regulatory framework that has a relatively broad scope but, at the same time, allows for the implementation of customized solutions in line with the risk-based approach. 

4.6. Apple

Apple is one of the leading companies involved in AI and ML research and development, although it does not yet have an AI product that could compete with Bard or ChatGPT. Nonetheless, the firm has adopted a set of ethical principles and joined discussions regarding AI regulation. Given the corporation’s limited progress in the field of AI, however, its stance on the economic and social risks of AI has not received significant attention. Moreover, the firm currently does not have a separate ethics framework for AI research and development.

It is all but certain that the corporation’s leadership is aware of the dangers of artificial intelligence. Apple recently restricted the use of ChatGPT among its employees in response to concerns about possible data leaks and other security issues. Little is known about the company’s stance on AI regulation. One could infer from a recent interview with Tim Cook that Apple pursues a balanced approach in which AI legislation is supposed to be supplemented by companies’ self-regulation frameworks. Presumably, the company will announce its position on AI regulation in a press release in the near future in order to join the popular discourse. 

4.7. Meta

Meta has a Responsible AI platform that briefly summarizes the firm’s principles of AI responsibility. The principle of privacy & security is operationalized at the company through the unification of AI products and systems into a single ecosystem so that it is easier to manage risks. The company plans to interact closely with users to help them protect their own data while providing feedback that can be instrumental in improving AI products. The principle of fairness & inclusion is enforced through the Fairness Flow tools, which allow for detecting and eliminating algorithmic bias. The Responsible AI team was supposed to play a major role in monitoring such biases and offering solutions to eliminate them, though it was disbanded in 2022. The AI Red Team works closely with partners to test AI algorithms and check their resilience against various threats. The company tries to ensure transparency in its AI research and development by providing users with recent data on the ways in which its AI products work. From this perspective, Meta’s practices are similar to the actions of other tech companies engaged in AI research and development. It also is important to emphasize that the company is open to the idea of collaborating with other companies on the future of Responsible AI, which is evident in the firm’s engagement in a number of international projects addressing AI ethics. 
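The internals of Fairness Flow are not public, but the general kind of diagnostic such tooling computes can be sketched. The example below measures the demographic parity gap, i.e., the spread in positive-prediction rates across groups; the data and function names are purely hypothetical and do not reproduce Meta’s actual implementation.

```python
# Minimal sketch of one common algorithmic-bias diagnostic: the demographic
# parity gap (spread in positive-prediction rates across groups). This is
# NOT Meta's Fairness Flow; it only illustrates the general kind of check
# such tooling performs.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each group label."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-prediction rates across groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: a model's binary decisions for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.6 - 0.4 = 0.2 here
```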

Leadership at Meta recognizes the importance of AI regulation. The company supports the development of uniform standards, which is evident in its participation in the AI Observatory project launched by the OECD. Facebook also supported the Technical University of Munich in establishing the Independent TUM Institute for Ethics in Artificial Intelligence. Insights from Facebook AI Research (FAIR) are regularly published online so that various stakeholders can access them. 

Despite these efforts, Facebook’s own AI algorithms are often criticized for spreading misinformation. The scandal involving Cambridge Analytica illustrates the inherent weaknesses of the AI algorithms deployed by Facebook. At the moment, the company’s AI tools are too weak to protect social media from misinformation and fake news. The scholarly paper by Schonau shows that Meta’s predictive recommendation system is flawed and exhibits significant biases caused by the constraints of AI algorithms. Apparently, the firm’s self-regulation framework is currently unable to address the risks of AI products designed by the corporation. 

The firm has not announced an official position on AI regulation. Nevertheless, it seems that Meta supports a risk-based approach and warns policymakers against regulating the technology as a whole. The company is ready to be transparent and explain all aspects of the work on its AI algorithms. Apparently, the corporation does not support the establishment of a separate federal agency that would license all large language models and instead advocates the “precision regulation” approach, which also was supported by IBM and Google. Contrary to Microsoft and OpenAI, Meta advocates for regulating the uses of AI rather than the technology itself. Such proposals might seem unconvincing given that the flaws in Facebook’s AI algorithms are one of the main arguments in favor of establishing an AI federal agency. Interestingly, there is no evidence that the company’s position on AI ethics has changed recently, which is surprising given that the rise of ChatGPT marked the advancement of a new era in AI research and development. 

4.8. Amazon

Amazon is one of the leading companies in machine learning (ML) research. It has a set of principles underpinning the responsible use of AI and ML. In particular, the document Responsible Use of Machine Learning provides an exhaustive list of criteria and practices that have to be used during the design and development, deployment, and ongoing use of AI and ML applications. For example, during the design and development stage, developers are supposed to evaluate use cases, understand the capabilities and limitations of each application, build and train diverse teams, collect data, train and test models, monitor biases, explain ML systems, ensure auditability, and comply with relevant regulations. The company has a set of resources that help with compliance with these ethical guidelines, such as Amazon SageMaker Clarify, Augmented AI, SageMaker Model Monitor, and SageMaker Data Wrangler. The enterprise, therefore, not only has a set of general principles guiding AI and ML development but also offers practical solutions for operationalizing these principles. 

Three broad principles apply to Amazon’s AI and ML research. First, all of its systems are subject to monitoring and human review in order to prevent inaccurate predictions and ensure the reliability and accuracy of all models. Second, the company reports data explaining the predictions of all its models, so users and other stakeholders can understand the models’ behavior. Finally, a set of tools developed by the company allows for detecting biases and disparities. The use of these tools helps teams follow the guidelines described above. 
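As an illustration of the monitoring principle, the sketch below compares a live feature distribution against its training-time baseline using the Population Stability Index, a common drift statistic. It does not reproduce the actual SageMaker Model Monitor API; the bucket count and the 0.2 alert threshold are conventional but arbitrary choices.

```python
# Generic sketch of the kind of check a model-monitoring tool performs:
# comparing a live feature distribution against the training baseline with
# the Population Stability Index (PSI). Bucket count and the 0.2 alert
# threshold are conventional but arbitrary assumptions.
import math

def psi(baseline: list[float], live: list[float], buckets: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / buckets or 1.0  # guard against a constant feature
    def bucket_shares(sample):
        counts = [0] * buckets
        for x in sample:
            i = min(int((x - lo) / width), buckets - 1)
            i = max(i, 0)  # clamp live values below the baseline minimum
            counts[i] += 1
        # Small epsilon avoids log(0) for empty buckets.
        return [(c + 1e-6) / len(sample) for c in counts]
    b, l = bucket_shares(baseline), bucket_shares(live)
    return sum((lb - bb) * math.log(lb / bb) for bb, lb in zip(b, l))

baseline = [0.1 * i for i in range(100)]    # training-time feature values
live = [0.1 * i + 3.0 for i in range(100)]  # shifted production values
if psi(baseline, live) > 0.2:               # >0.2 is a common alert level
    print("Feature drift detected; model may need review or retraining")
```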

Amazon’s position on AI regulation is currently unclear. In 2020, Amazon and Meta joined forces in opposing facial recognition bills because the proposed legislation was “overly broad” and could hamper technology development. At the same time, the firm’s recent press release covering AI regulation does not provide any specific information besides repeating the principles outlined in the aforementioned Responsible Use of Machine Learning guide. One could infer from indirect evidence that the firm’s position on AI regulation resembles the one expressed by IBM, Meta, and Google, who all oppose broad regulation and instead believe that a narrow risk-based approach is better equipped to deal with the threats of AI. 

Table 2. Summary Assessment of Corporate Positions on AI

Company

Position

OpenAI

  • does not identify a universal ethical framework with regard to AI
  • recognizes the risks posed by AI tools to equality and encourages stakeholders to distribute the benefits delivered by the technology broadly
  • supports a universal approach to AI regulation that implies the creation of an overseeing agency that requires a license for all AI products whose capabilities exceed certain limits
  • uniquely states that if another company came close to creating artificial general intelligence, it would stop competing with that project and instead start assisting with its further development

Google (Alphabet)

  • adheres to seven principles that guide its AI research
  • argues that self-regulation is not sufficient for protecting humanity against the risks of AI
  • calls for regulating AI by taking a sectoral approach on the basis of the existing legislation, adopting a risk-based framework, promoting interoperability in AI standards, ensuring parity between AI and non-AI systems, and recognizing the role of transparency
  • does not encourage licensing or extensive governmental regulation 

IBM

  • maintains a consistent AI ethics framework
  • supports only a so-called “precision regulation” approach, making it one of the most active supporters of the risk-based approach to AI regulation

Microsoft

  • distinguishes between six guiding principles of AI research, and the operationalization of AI ethics principles at Microsoft is very detailed
  • supports a strict AI regulatory framework that balances a relatively broad scope with the implementation of customized solutions
  • supports a system that would require all AI developers to notify the government about testing new products and apply for a license before deploying any systems

Apple

  • adopted a set of ethical principles and joined discussions regarding AI regulation
  • not a major player in the AI industry right now, so has not yet needed to self-regulate or articulate extensive stances

Meta

  • publishes lists of the firm’s principles of AI responsibility  
  • focuses on the development of uniform standards through the AI Observatory project
  • supports a risk-based approach only and warns policymakers against regulating the technology as a whole

Amazon

  • operates under an exhaustive list of criteria and practices that have to be used in the stages of design and development, deployment, and ongoing use of AI and ML applications
  • opposes broad regulation and instead advocates for a very narrow risk-based approach and self-regulation
  • one of the leading companies in machine learning (ML) research

Chapter 5. Historical Context and Lessons 

5.1. Introduction 

The available evidence provides a compelling reason to believe that the current debates over AI regulation can rely on the examples of multiple regulatory interventions in non-tech industries. The Sherman Antitrust Act, the Trade Adjustment Assistance Act, Joe Biden’s Build Back Better Plan, the American Recovery and Reinvestment Act, the Sarbanes-Oxley Act, the Fair Labor Standards Act, and a series of initiatives to support the transition from fossil fuels to renewable energy are examples of regulatory interventions that sought to provide meaningful benefits to society and protect stakeholders from a number of critical economic and social risks. These laws succeeded in achieving their goals to a certain extent, although they have been heavily criticized by different stakeholders. It seems justified to discuss these interventions in the current chapter to put the issue of AI regulation into a relevant context and show what historical lessons can help policymakers adopt effective AI laws. 

5.2. The Sherman Antitrust Act

The Sherman Antitrust Act of 1890 is an important law that marked a crucial milestone in the history of the United States. In the most general terms, it can be defined as a law that prohibits monopolistic practices and protects free competition. The document was supposed to preserve a competitive marketplace and protect customers from abuses. According to the Supreme Court in Spectrum Sports, Inc. v. McQuillan, “the purpose of the Sherman Act is… to protect the public from the failure of the market… the law directs itself… against conduct which unfairly tends to destroy competition itself.” The law was introduced to prevent legal persons from interfering with the competitive environment because “every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a felony.” The document offers a definition of anticompetitive conduct, emphasizes that any actions aimed at monopolizing trade or commerce are illegal, and highlights that its provisions apply in all U.S. territories. Under the law, the U.S. Department of Justice began enforcing the Act in the federal courts. 

The Sherman Antitrust Act is relevant to the problem under investigation because it was the first legislation in the United States to address the issue of monopolies, which was an increasingly important threat to U.S. society. Trusts such as Standard Oil were believed to have monopolized certain sectors and raised the prices of goods and services as a result of their dominant position (the Act was even invoked against labor organizations such as the American Railway Union). The Sherman Antitrust Act became a natural response to a growing threat that was recognized not only by policymakers but also by journalists, scientists, and regular citizens. A similar scenario can be observed nowadays, when multiple stakeholders are becoming increasingly concerned about the dangers of AI. 

One of the important parallels between the Sherman Antitrust Act and current regulatory efforts regarding AI legislation, such as the EU AI Act, is that the Sherman Antitrust Act was the first law of its kind to address an issue that had largely escaped policymakers’ attention in the past. As a result, the document was very broad and had many gaps and loopholes, such as the lack of clear definitions of “monopolies” and “trusts” and the absence of consistent tests for “unreasonable” restraints of trade. These issues were later addressed in the Clayton Antitrust Act. In a similar way, the EU AI Act, as well as drafts of similar laws in other countries, can become the basis on which future AI laws are built. Despite the limitations of the Act, the federal government used the Sherman Antitrust Act to conduct its “trust-busting” campaign, which is evident in the cases against Northern Securities Co., the Standard Oil Company, and the American Tobacco Company. Thus, despite its broad scope and lack of strict definitions and criteria, the Sherman Antitrust Act became a critical milestone in the development of antitrust laws in the United States. Policymakers can consider the ambitious approach embodied in the Sherman Antitrust Act when weighing the best ways to regulate AI. 

5.3. Trade Adjustment Assistance Acts

While the Sherman Antitrust Act is relevant to the topic under discussion because it was the first law to address the growing threat of monopolistic practices to American society, the Trade Adjustment Assistance Act of 1974 is pertinent to the problem under investigation because of its focus on providing reemployment assistance. At that time, the rapid development of global trade provided U.S. manufacturers with an opportunity to reduce costs by shifting production to foreign countries, outsourcing services, or increasing imports of articles or services. Trade Adjustment Assistance for Workers was a program established to help displaced employees find a new job with minimal losses. Eligible workers could receive payments for up to 117 weeks, participate in full-time training aimed at obtaining new skills, and apply for a wage supplement provided that they were over 50 years old. Whereas the issue of financial support during the transition period had been a popular idea even before the 1970s, the Trade Adjustment Assistance Act of 1974 introduced the concept of training, which was a revolutionary approach to facilitating reemployment. The training component of the law illustrated that the government recognized rapid changes in the economy and the labor market and was ready to provide support for employees that went beyond financial assistance. 

Similar measures can be required to help employees whose jobs were replaced by AI adapt to changes in the labor market. According to the report published by PwC, AI is likely to produce a substantial number of new jobs in the United Kingdom in the next 20 years. The education sector alone can gain as many as 300,000 new jobs. However, to take advantage of such job openings, potential candidates will need to possess new skill sets and have deep knowledge of AI-related issues. Thus, educating such people on the nature of AI and training them on the use of AI tools in different areas could be an essential part of the reemployment initiatives suggested in the new AI legislation. 

In 2002, policymakers changed the legislation by introducing wage insurance and health credits as new constituents of trade adjustment assistance. Moreover, the new version of the act broadened the definition of eligible workers to incorporate primary, upstream, and downstream workers displaced by increased imports or plant relocation programs, as well as farmers who suffered from price declines as a result of imports. This change shows how important it is for policymakers to ensure that the scope of assistance programs covers a sufficient number of affected employees. 

Unfortunately, it is hardly possible to quantify the benefits of the trade adjustment assistance programs introduced in the United States. According to Aho and Bayard, assessing the benefits of the program created by the Trade Adjustment Assistance Act of 1974 was difficult because the majority of benefits took the form of improved perceptions of the U.S. government and the potential foreign policy gains associated with such positive perceptions. The sole fact that a country supports displaced employees allegedly improves its international standing, which might indirectly influence foreign direct investment and other relevant indicators. Because of the difficulties with quantifying the results of trade adjustment assistance for a country, it is currently hard to determine whether such programs provided significant benefits for the United States.

Some evidence illustrates that the programs have been ineffective in supporting workers. In testimony focusing on the trade adjustment assistance program based on the Act of 1974, Louis Jacobson stated that the gains per person were too small compared to the magnitude of losses. Furthermore, many employees did not return to a similar job even two years after applying for the program; as a result, the financial assistance did not cover the entire transition period and did not help them find a new job. To address this problem, Jacobson argued for the introduction of a training component into the program. However, even after the program adopted such a component, the situation hardly improved. Reynolds and Palatucci argue that participation in the program “has no discernible impact on the employment outcomes of the participants.” Similar results were reported by the Office of Management and Budget and the Department of Labor. Researchers mainly attribute such outcomes to flaws in the training programs and the lack of adequate training options in various regions. 

The trade adjustment assistance programs introduced in the acts adopted in 1974 and 2002 are relevant for discussing AI regulation because they address the threat of job loss. These laws introduced a set of different measures as part of assistance programs, including financial assistance, training, health insurance, and specific measures targeting older individuals. However, the effectiveness of the programs has been limited. One of the important lessons from these programs is that supporting displaced workers is a challenging endeavor that must incorporate measures beyond financial assistance. It is necessary to prepare displaced workers for a new job by helping them acquire new skills and find a job with adequate compensation, something that many employees are unable to do by themselves. Training is arguably the most important component of such programs. 

Another crucial lesson is that programs should engage not only employees but also employers so that individuals who successfully pass the training programs could find a job using effective channels. Furthermore, the program also showed that local governments must closely monitor the effectiveness of training sessions that are available for displaced workers to make sure that these activities are valuable for employees and meet the demands of the labor market. 

5.4. The American Recovery and Reinvestment Act 

The global financial crisis of 2008 became a critical challenge for the U.S. economy and society. The American Recovery and Reinvestment Act sought to alleviate the negative implications of the Great Recession by providing assistance to small businesses, reducing the tax burden, and creating new jobs. According to the document’s authors, the law was supposed to facilitate government spending so that it could compensate for a decline in private investment. The law focused on infrastructure, education, and healthcare, which were believed to be suffering from the crisis. From this perspective, the Act could be compared to current drafts of AI regulation that offer a risk-based approach to prioritize the regulation in high-risk industries. 

Many initiatives within the American Recovery and Reinvestment Act directly targeted the threat of job loss. The government encouraged the implementation of local projects with an $85 billion stimulus package, extended medical insurance to displaced workers, funded the computerization of patient records at healthcare facilities, helped school districts and states underwrite staff salaries and educational programs, and funded preschool and special education programs. According to the Office of the President, the government expected to save or create around 0.7 million, 3 million, 2.5 million, and 0.7 million jobs in 2009, 2010, 2011, and 2012, respectively. Blinder and Zandi praised the U.S. response to the Great Recession, including financial measures, the fiscal stimulus, and job creation initiatives. According to their projections, the unemployment rate would have reached 15% if the country had not adopted a radical approach to protecting jobs. 

Recent evidence suggests that the law indeed made a substantial impact on unemployment rates. According to a more recent study by Blinder and Zandi, published in 2015, the country would have lost 17 million jobs instead of 8 million, and the unemployment rate would have reached 16% instead of 10%, if the government had not adopted the American Recovery and Reinvestment Act. Furthermore, the country also would have witnessed much more adverse trends in GDP decline and the budget deficit. Between late 2009 and mid-2011, the law alone was responsible for a 2-2.5% increase in the country’s gross domestic product. 

Not all scientists depict the law as a highly effective instrument for saving jobs. Conley and Dupor argue that the act contributed to job gains in the public sector but failed to make the desired impact on private sector jobs. A working paper published in the working paper series of the Federal Reserve Bank of San Francisco shows that the fiscal spending initiative had reportedly produced only 0.8 million jobs by October 2010, mostly in the construction industry. Simultaneously, in some other industries, such as the healthcare sector, the positive effects of the program were hardly visible at that time. 

The American Recovery and Reinvestment Act provides valuable historical lessons about the potential of a radical public spending program to prevent large-scale job losses during a crisis. Unlike the trade adjustment assistance programs discussed earlier, it mainly focuses on creating new jobs instead of providing training for displaced workers. The available evidence provides a compelling reason to claim that this Act has been highly successful in helping the United States survive the uncertain times during and after the Great Recession, mitigating the economic and social implications of the crisis. Nonetheless, it should be noted that the law was adopted in response to a crisis rather than a set of gradual changes, such as the ones related to AI integration. 

5.5. The Sarbanes-Oxley Act of 2002

The Sarbanes-Oxley Act was a federal law that introduced mandatory financial record-keeping and reporting practices. The law emerged in response to a set of accounting scandals, such as those involving WorldCom and Enron. These scandals reportedly cost investors billions of dollars and demonstrated the inability of the U.S. regulatory framework to protect the economy and society from corporate fraud and the destruction of evidence. The act was supposed to restore public confidence in U.S. securities markets and prevent corporate and accounting scandals. The law is relevant from the perspective of the problem under investigation because it introduced mandatory practices to ensure transparency and accountability in a field that was previously regulated by internal organizational policies. Instead of encouraging companies to adopt comprehensive ethical frameworks and strict internal guidelines, the act introduced mandatory practices, thus acknowledging the inability of corporations to prevent corporate and accounting scandals through self-regulation. 

Another important parallel between the Sarbanes-Oxley Act and the current debates concerning AI regulation is that the law created the Public Company Accounting Oversight Board, which was supposed to monitor, regulate, and inspect accounting firms that performed audits of public companies. Such a body resembles the federal agency that might be created in the United States to oversee AI developers, which is currently recommended not only by policymakers and scientists but also by some tech companies, including OpenAI and Microsoft. The act includes a set of provisions pertaining to auditor independence, corporate responsibility, analyst conflicts of interest, enhanced financial disclosures, studies and reports, commission authority, corporate tax returns, fraud accountability, and white-collar crime penalty enhancement. From the perspective of the current study, however, the most important feature of the Sarbanes-Oxley Act is that it turned accounting into a heavily regulated area and introduced the mandatory organizational practices that must be conducted by all public companies to promote transparency and accountability.  

The Sarbanes-Oxley Act became a crucial milestone with large effects on the U.S. economy and society. Moreover, it encouraged many other countries, such as Germany, Canada, India, Japan, and Italy, to adopt similar laws. At the same time, the implications of this act remain a controversial topic. The results of the 2014 Sarbanes-Oxley Compliance Survey indicate that the costs of compliance with the regulations increased by 20%, putting a manageable yet considerable financial burden on corporations. The survey published by Foley & Lardner LLP reveals that average compliance costs for companies with revenue below $1 billion increased from $1.7 million to $2.8 million. Moreover, in the opinion of around 70% of respondents, public companies whose revenues are below $251 million should be exempt from Section 404 of the Sarbanes-Oxley Act. There is also a popular concern that the bill might have reduced the competitive edge of the United States by introducing unnecessary bureaucratic barriers and creating a complex regulatory environment that discouraged investors. In particular, a study commissioned by New York officials revealed that “the flawed implementation of the 2002 Sarbanes-Oxley Act (SOX), which produced far heavier costs than expected, has only aggravated the situation, as has the continued requirement that foreign companies conform to U.S. accounting standards rather than the widely accepted – many would say superior – international standards”. Whether the law was successful remains a controversial issue. 

5.6. The Fair Labor Standards Act

The Fair Labor Standards Act was not concerned with the issue of unemployment. However, it is relevant to the problem under investigation because it was an initiative that introduced fundamentally new instruments to protect workers. Before 1938, United States legislation did not use such terms as “the minimum wage”, “overtime pay”, and “the maximum workweek”. Some state laws sought to protect employees’ rights; however, this process was challenging. In particular, the Supreme Court voided a law adopted in the District of Columbia that set minimum wages for women. Progress in the field of worker protection was inconsistent and fragmentary. While some states tried to set minimum standards to regulate child labor and minimum wages, others continued to support the “free market” as a universal force that would supposedly encourage employers to implement such changes through the mechanism of self-regulation. Indeed, some industry stakeholders took decisive measures in this direction. In particular, the Cotton Textile Code abolished child labor, introduced a maximum workweek of 40 hours, and set minimum weekly wages of $13 in the North and $12 in the South. Despite such industry initiatives, many stakeholders advocated for the introduction of strict legislation ensuring consistency and clarity in worker protection. From this perspective, the situation prior to the adoption of the Fair Labor Standards Act resembles the current situation in the AI sector. Many tech companies have set voluntary restrictions and adopted comprehensive ethics codes; moreover, several firms encourage the government to focus on managing only high-risk areas and to let the industry manage the other risks of AI technology. However, the majority of stakeholders believe that self-regulation might be insufficient for addressing all the risks of AI and call for strict regulation. 

The Fair Labor Standards Act is sometimes depicted as a revolutionary piece of legislation. Other scholars, however, believe that it was a weak law that covered only approximately 20% of the U.S. labor force. Furthermore, it left many loopholes, such as those pertaining to the employment of independent contractors instead of employees. Nevertheless, it contributed to consistency and uniformity in labor laws and prevented the recurrence of unpopular legal cases that shielded employers from liability for violating workers’ rights. Several years before the law’s adoption, the Supreme Court had made a controversial decision in Morehead v. Tipaldo, in which it essentially voided the New York law on minimum wages as a violation of the liberty of contract. The Fair Labor Standards Act made such cases impossible. Whereas the measures required by the Act could be regarded as insufficient, they still created a basis upon which further developments in worker protection could be built. 

5.7. Joe Biden’s Build Back Better Plan

The so-called Build Back Better Plan refers to a legislative framework introduced by Joe Biden. It was initially conceived as the most ambitious public investment project since the 1930s, comprising the American Rescue, American Jobs, and American Families plans, each focused on different fields. Not all of them were eventually adopted as laws. The American Rescue Plan was signed into law in 2021, while the two other parts of the framework were significantly changed before being adopted. In particular, the Infrastructure Investment and Jobs Act incorporated only some of the infrastructural goals of the American Jobs Plan. Furthermore, the Inflation Reduction Act of 2022 encompasses many essential parts of the American Families Plan and American Jobs Plan, although it does not include the safety net proposals. Considering that this part of the white paper is concerned with the examination of various regulatory options rather than final bills, it seems justified to review the main aspects of the initial Build Back Better Plan as a revolutionary instrument that was expected to provide relief to American society, introduce the concept of family leave, reduce U.S. contributions to climate change, and offer a number of other initiatives that are relevant to the current research.

The American Rescue Plan included a number of cash payments and supplements to stakeholders affected by the COVID-19 pandemic. The American Rescue Plan Act of 2021 extended the $300 weekly supplement to unemployment benefits, removed the tax on the first $10,200 in unemployment benefits, provided direct stimulus payments ($1,400) to eligible individuals, provided emergency paid leave, offered tax credits, increased food stamp benefits, expanded the child tax, child and dependent care, and earned income tax credits, increased taxes on large corporations and wealthy individuals, and offered grants to small businesses. Additional stimulus was provided to education, state and local government aid, housing, COVID-19 provisions, transportation, healthcare, and cybersecurity. The law was ambitious in its focus on a significant part of the U.S. population. According to the study by Jorda et al., the Act and other fiscal measures of the U.S. government raised inflation rates by around 3%. At the same time, the researchers believe that the absence of such measures would have resulted in deflation and slow economic growth. The case of the American Rescue Plan Act of 2021 shows how a series of decisive measures can provide relief to a substantial part of the population by making direct cash payments to individuals and businesses, while also using fiscal tools to shift the fiscal burden from low-income and middle-income households onto high-income individuals and large corporations. 

The American Jobs Plan was an ambitious strategy to create numerous jobs, strengthen employment protection measures, and reduce the negative effects of climate change. Following the successful example of the American Recovery and Reinvestment Act, the Biden administration aimed to make large investments in infrastructure, planning to invest approximately $4 trillion in physical infrastructure, public housing, research and development, and the “care economy”. The plan envisaged increasing corporate tax rates, thus partially reversing the Tax Cuts and Jobs Act of 2017, while also eliminating subsidies for fossil fuel corporations and utilizing the instrument of deficit spending. The American Jobs Plan used instruments similar to those of the American Recovery and Reinvestment Act, but it had a broader scope since it targeted a number of policy areas that were not traditionally associated with infrastructure. 

The American Families Plan included a set of provisions that were supposed to provide quantifiable benefits to households. In particular, the Biden administration planned to use $1 trillion and $800 billion in spending and tax credits, respectively, to increase spending on childcare, ensure the availability of free pre-kindergarten services, allocate money to paid family and medical leave, ensure the availability of free community colleges, and provide health insurance subsidies. The initiatives were supposed to be funded using fiscal measures, such as increasing tax rates for wealthy individuals.  

The Build Back Better Plan was a highly ambitious strategy, which explains why many of its provisions have not been adopted in their original form. Nonetheless, it provides examples of many interesting instruments that can be used to provide relief to individuals and households affected by crises, such as the creation of new jobs through investment in infrastructure, the extension of social safety nets in the form of paid leave, unemployment benefits, and direct cash payments, and the introduction of free healthcare and education services. Most of the measures, especially those related to the COVID-19 pandemic, cannot be directly applied to the case of AI because they were a direct response to the large crisis and, therefore, targeted the short-term perspective. At the same time, those instruments that were included in the American Families Plan and the American Jobs Plan were supposed to become the mechanisms for driving long-term changes. Therefore, some of them may be considered when analyzing possible regulatory frameworks for addressing the risks of AI. 

5.8. Initiatives to Support the Transition from Fossil Fuels to Renewable Energy in the United States

The COVID-19 pandemic contributed to the loss of thousands of jobs in the energy industry. The country’s energy sector lost 840,000 jobs in 2020 as a result of the crisis, which accounted for approximately 10% of its workforce. Interestingly, the natural gas and petroleum sectors lost approximately 21% of their employees, while the wind and solar sectors, in contrast, grew by 1.8%. Given the large environmental impact of non-renewable energy sectors, finding a way to facilitate a transition from fossil fuels to renewable energy could be an environmentally sound program that was also likely to produce significant job gains. The Biden administration launched many initiatives in this area. They are worth examining in the current study because they offer avenues for supporting a workforce transition to new sectors, something that might be useful when addressing the risks of AI. 

The record sale of six wind leases is one of the most well-known initiatives in this area. According to the Interior Secretary, “the investments we are seeing today will play an important role in delivering on the Biden-Harris administration’s commitment to tackle the climate crisis and create thousands of good-paying, union jobs across the nation”. One of the advantages of the project is that it is expected to provide significant benefits to local communities, including underprivileged households, by adding thousands of jobs in different sectors and powering more than 2 million homes. A number of other initiatives support the promotion of wind and solar energy. For instance, the Department of Transportation made port investments to facilitate the development of the areas in which offshore wind turbine components will be built and staged. A recently announced pilot program to support the deployment of clean energy in underserved rural communities was launched as part of the American Rescue Plan. It involves not only supporting regional coalitions and encouraging the development of industry clusters but also providing job training so that more people can work on renewable energy projects. All these initiatives are expected to support a gradual transition of the energy workforce from fossil fuels to renewables. 

The initiatives discussed in this section represent a set of separate measures that pursue similar goals. They seek to support the development of renewable energy projects in a way that provides maximum benefits to underserved communities and low-income individuals. These initiatives follow the logic of the American Recovery and Reinvestment Act of 2009, although they focus exclusively on renewable energy programs. The emphasis on training, however, illustrates the long-term perspective of these measures. Unlike the Recovery and Reinvestment Act of 2009, these initiatives aim to support long-term changes in the U.S. economy. Therefore, their example might be useful when discussing ways to address the risks of AI, which certainly has been causing long-term transformations in the economy and society. 

Chapter 6. Potential Solutions for Regulation

6.1. Introduction

The previous chapters of this white paper showed that AI is a complex technological advancement that has already had a disruptive impact on a number of industries as well as on humanity as a whole. Therefore, the question of how to address the risks of AI remains topical. The current chapter reviews a set of broad solutions and recommendations that are frequently proposed in public discourse to address the threats of artificial intelligence. Unlike Chapter 7, which focuses on specific strategies and recommendations to mitigate the risks of AI in particular industries, this chapter offers a broad overview of the potential solutions that are often discussed as possible instruments to protect stakeholders from the threats of AI. It analyzes the concept of universal basic income, reviews the main approaches to regulating AI, and elaborates on other solutions that can be instrumental in the context of AI. 

6.2. Universal Basic Income 

It was established in the previous parts of this white paper that artificial intelligence can lead to adverse social and economic implications. Two of them are especially important. First, the adoption of AI in different industries, such as manufacturing and financial services sectors, can trigger the loss of millions of jobs globally. Thus, it is necessary to find a way to protect these jobs or at least make sure that people who lose their jobs can support themselves during the transition period. From this perspective, stakeholders should consider some measures in the spirit of the tools introduced in the Trade Adjustment Assistance Acts. 

Second, the distribution of AI benefits is currently one of the key problems associated with the technology. Around 2.7 billion people still have never used the Internet. There is no reason to believe that the benefits of the Internet are distributed fairly in today’s globalized world since low Internet penetration rates prevent many people from taking advantage of the unique opportunities provided by the Internet. Ensuring the fair distribution of AI benefits currently seems to be an even more challenging task. To make sure that the further adoption of AI does not widen inequality gaps and does not provide disproportionate benefits to the rich, it is of paramount importance to create a set of instruments to extend the benefits of AI to all people, including those who have never used the Internet and AI in their lives. 

The concept of universal basic income (UBI) has been proposed as capable of meeting both these requirements. In the most general view, universal basic income can be defined as a system in which all citizens regularly receive a certain guaranteed income. It is crucial to emphasize that all citizens are eligible for UBI regardless of their demographic characteristics or employment status. In other words, those people who work a full-time job are supposed to receive the same amount of monthly payment as those who are unemployed. The feature of unconditional payment is the most important characteristic of UBI that makes it fundamentally different from the majority of other initiatives involving the provision of cash payments to citizens. Citizens who receive UBI would not face exhaustive bureaucratic barriers since they would not need to prove their entitlement to the program. Another important feature of the concept is its focus on cash payments. Whereas many countries use taxpayers’ money to invest in education, healthcare, or other areas, proponents of UBI believe that taxpayers are in the best position to decide on the best ways to spend their money. Thus, allocating a certain part of the budget to make such direct cash payments to citizens would benefit the economy. Moreover, payments are supposed to be regular and predictable, thus reducing citizens’ stress and helping them plan their future. 

The concept of universal basic income has a set of advantages and disadvantages. The first benefit of UBI is poverty reduction. According to recent estimates, 659 million people live on less than $2.15 per day. The overwhelming majority of Global South countries, including those with large gross domestic products, struggle to find the best ways to support marginalized communities and impoverished individuals. The idea of offering a cash payment to all citizens, therefore, is promising since it could result in poverty reduction. Some specialists, however, are skeptical regarding the ability of UBI to eliminate poverty. Hanna and Olken, who investigated the experience of Peru and Indonesia, came to the conclusion that targeted programs are much more potent than universal programs in transferring welfare to impoverished individuals on a per-beneficiary basis. Some individuals might be excluded from targeted programs, but such programs still reportedly perform better than universal projects covering the entire population. Zon is of the opinion that UBI is a weak measure incapable of reducing poverty unless the amount of a universal basic income equals the minimum wage, which is implausible given the cost of such a proposal. In theory, the introduction of a universal basic income linked to the minimum income standard level would eradicate poverty, as demonstrated by Connolly et al. However, a UBI program offering cash payments large enough to eradicate poverty would place a significant financial burden on taxpayers.
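
To illustrate the scale of this cost-benefit tradeoff, the back-of-envelope sketch below computes the annual cost of a hypothetical U.S. UBI pegged to the federal minimum wage. The population figure and the choice of benchmark are illustrative assumptions made for this sketch, not findings cited in this paper.

    # Back-of-envelope cost of a hypothetical U.S. UBI pegged to the federal
    # minimum wage. All figures are illustrative assumptions: roughly 258
    # million adults, $7.25/hour, and 2,080 full-time working hours per year.
    ADULT_POPULATION = 258_000_000
    MINIMUM_WAGE_PER_HOUR = 7.25
    FULL_TIME_HOURS_PER_YEAR = 2_080  # 40 hours/week * 52 weeks

    annual_payment = MINIMUM_WAGE_PER_HOUR * FULL_TIME_HOURS_PER_YEAR
    total_cost = ADULT_POPULATION * annual_payment

    print(f"Annual payment per adult: ${annual_payment:,.0f}")      # $15,080
    print(f"Total annual cost: ${total_cost / 1e12:.1f} trillion")  # $3.9 trillion

Under these assumptions, the program would cost roughly $3.9 trillion per year, a figure comparable to total annual U.S. federal revenue, which illustrates why minimum-wage-level UBI proposals are widely regarded as implausible.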

The second positive effect of UBI is connected with physical and mental health. According to the study by Painter, a universal basic income has the potential to enhance both the physical and mental health of individuals. Painter argues that the government must proceed with testing this promising instrument in order not to miss a unique opportunity that has presented itself to humanity. Ruckert et al. point out that UBI can be highly effective in reducing health inequalities because it can provide all individuals with the means to afford healthy living. In general, it is important to emphasize that whereas the impact of UBI on poverty is often depicted as a controversial issue, its impact on the population's health is hardly an object of debate. Specialists sometimes disagree on the price of UBI, but there is a consensus among them that the majority of UBI projects are highly likely to improve physical and mental health.

The third advantage of UBI is that providing citizens with cash payments can boost their consumption. From this perspective, the rationale behind the introduction of a universal basic income is similar to the rationale behind quantitative easing. During the Great Recession as well as the post-crisis period, central banks of developed countries followed the Federal Reserve System's example and injected substantial amounts of money into the economy to stimulate consumption. By using the instrument of open market operations, central banks provided financial institutions with substantial amounts of money, which were supposed to encourage lending and, by extension, consumption. In a similar manner, the provision of cash payments to individuals also can be seen as an attempt to encourage consumption, which can positively influence the economy. MacNeil and Vibert admit that there are currently no studies that specifically examine the impact of UBI on consumption. Therefore, as these scholars explain, any assumptions concerning the relationship between UBI and consumption patterns are made on the basis of similar cases involving targeted or universal programs. The conclusions of studies covering other programs might not always be transferable to the case of UBI because of the unique nature of this concept. In light of this uncertainty, the exact influence of UBI on consumption remains an open question. 

The fourth benefit that UBI might provide is improved social security. This concept can supposedly provide financial security during times of uncertainty. When employees lose their jobs, the provision of cash payments can help them meet basic needs and avoid the most negative implications of job displacement. Straubhaar is enthusiastic about the future of UBI. He describes this instrument as a "liberal", "contemporary", and "just" mechanism that "offers the best social-political prerequisite for 'prosperity for all' in the 21st century". The scholar argues that UBI is a result of decades of research in social security that offers a set of unprecedented benefits. Koistinen and Perkio point out that social security is the essence of any universal basic income proposal and even argue that there have been many UBI-like social security proposals, thus equating UBI with a set of other initiatives implemented by governments in different regions of the globe. As in the case of poverty, the positive influence of UBI on social security boils down to a cost-benefit tradeoff. Large cash payments can help workers maintain a high quality of living after losing a job, but funding such a program could become an unbearable financial burden for any government, especially considering that even unemployment programs are often criticized for their inefficiency. 

Finally, the last factor that is sometimes presented as a positive aspect of UBI is the flowering of creativity and innovation. People who do not need to worry about satisfying their basic needs might have both the time and the willingness to generate new ideas. One might argue that not all people are in a position to produce innovations; however, such an argument can be countered by noting that even a person without creative abilities can engage in meaningful social innovations, such as time banking projects. The rationale behind this assumption is rooted in Maslow's theory of needs, which claims that people who satisfy so-called "lower-order" needs move on to "higher-order" needs. Yun et al. assert that a combination of UBI and some other factors, such as capital fluidity and a sharing economy, can stimulate open innovation dynamics and contribute to the transformation of a country into an entrepreneurial state. In the long term, UBI can allegedly address the risks of capitalism. The positive impact of UBI on creativity is currently hard to demonstrate because of the lack of similar initiatives and the fact that creativity is much harder to assess than poverty or health. 

In addition to the benefits of UBI, the concept also is characterized by a set of drawbacks and risks. First, any social security initiative that includes UBI-like elements is hard to fund because of its large scope, especially if considered on a global scale. As stated above, a small-scale program providing a small amount of money to citizens is unlikely to ensure social security and reduce poverty rates; therefore, the amount of monthly payments should at least equal the minimum wage level. The study by Rincon et al. states that the overwhelming majority of individuals would not support the idea of UBI if it were funded by reducing existing benefits. Most of them, at the same time, are enthusiastic about the idea of increasing taxes for large corporations and individuals whose income significantly exceeds the median level. Monnier and Vercellone put forward a proposal to use the money supply to fund a part of UBI payments. Such a proposal seems questionable given the high inflationary risks associated with the regular use of money creation as a means of providing direct cash payments to the population. Such a risk would be especially critical for Global South countries, which sometimes struggle to maintain low inflation rates even in normal times. After a detailed discussion of the possible options, Monnier and Vercellone conclude that reforms of tax systems are the most promising mechanism for funding a universal basic income. Ghatak and Jaravel also believe that taxes are the only viable option for funding UBI. The scientists point out that increasing the tax rate for rich individuals and large corporations, which is the most popular proposal among the proponents of the UBI concept, would be insufficient for covering a universal basic income. Thus, an increase in tax rates for middle-income and low-income individuals also would be necessary to cover the cost of the universal income program. 

Second, one might argue that the construct of UBI is unfair. Whereas proponents of this concept argue that UBI is a fair instrument that can reduce poverty and promote equality, the very idea of a universal basic income is that all citizens of a country, both the rich and the poor, would be entitled to the same cash payments. In other words, shareholders of multinational corporations and individuals who hold high-paid jobs would receive the same cash payments as people who live in poverty. Such an approach is heavily criticized by some specialists as unfair since it arguably defeats its own purpose. From this perspective, targeted programs seem to be a better alternative since they provide payments only to those people who actually need financial assistance. 

Third, the argument that UBI would stimulate individuals to be creative and innovative is controversial. There is currently no evidence to suggest that UBI can encourage the satisfaction of higher-order needs in accordance with the principles of Maslow's hierarchy of needs. As stated above, creativity is hard to conceptualize and assess. There are apparently numerous moderating and mediating variables that shape the impact of UBI on this phenomenon, making it hard to forecast the implications of a large-scale UBI program. 

One of the popular arguments against UBI is that a universal basic income would inevitably make a part of the population lazy, discouraging people from making an effort to work. Discouraged from working, many people would not have the motivation to innovate. From this perspective, UBI could be compared to a laissez-faire leadership style, which implies providing subordinates with full control over the completion of their tasks and allowing them to use any means they see fit to fulfill their job responsibilities. The available evidence provides a compelling reason to assert that some people, mostly competent specialists with a solid educational background, might benefit from a manager's laissez-faire leadership style since it would provide them with more flexibility, while others, in contrast, would perform poorly in an environment without external pressure. In a similar manner, UBI could discourage many people from working since it would remove the external pressure that forces many individuals to put in extra effort to succeed in their jobs. Other individuals, at the same time, might thrive in such a new environment and produce impressive innovations. 

The concept of UBI is often discussed in relation to AI. Potentially, the integration of artificial intelligence into various industries would dramatically increase productivity and performance, leading to an increase in both revenues and profits. As a result, both large businesses and governments would have additional money that could be used to fund UBI. Bruun and Duka recently made a detailed proposal for using UBI to mitigate the negative social effects of AI. The initialization stage of the proposal involves setting up funding for UBI think tanks and discussing the potential effects of UBI programs with relevant stakeholders, whereas the testing stage involves conducting a nationwide experiment, gathering evidence, and publishing an extensive research paper. Finally, the last stages are supposed to entail creating a government agency dedicated to UBI, changing school curricula accordingly, introducing the UBI law, and winding down the government agencies that are currently responsible for various social welfare programs. It is crucial to emphasize that Bruun and Duka advocate for industry-specific funding so that those sectors that rely heavily on AI would fund UBI. 

Wright and Przegalinska go as far as saying that humanity will be doomed if a universal basic income is not introduced. They believe that AI-based robots and other tools will eventually replace the majority of jobs, while the remaining employees will receive low wages. In this situation, a universal basic income is presented by these scholars as one of the few ways to “save humanity”. Korinek and Stiglitz do not use such strong words, but they also agree that a universal basic income is one of the obvious mechanisms to mitigate the social risks of AI. Regardless of whether one agrees with the potential of UBI to prevent the threats of artificial intelligence, the significance of UBI within the context of AI is worth discussing. 

6.3. Regulation of AI

At the moment, the regulation of AI is a highly controversial issue. Stakeholders do not agree even on the basic approach to regulating artificial intelligence. Some of them support a centralized approach that treats AI as a unique technological advancement. Such a position, in particular, is supported by OpenAI and Microsoft, which advocate for the establishment of a separate federal agency in the United States that would license all AI products. Other stakeholders believe that adopting such a radical stance would hamper innovation and reduce investment in AI, thus slowing down scientific and technological progress as well as economic growth. Google and IBM both insist on the need for a risk-based framework that focuses on the particular industries most likely to suffer from the adverse implications of AI and adopts separate measures to mitigate the threats of AI without disrupting research and development activities involving this technology. Proponents of this approach usually dislike the idea of a separate agency that would be responsible for licensing AI. 

In most cases, the establishment of a separate agency and the use of a risk-based framework are presented as mutually exclusive strategies. However, a detailed analysis of most proposals leads to the conclusion that companies advocating for a strict licensing approach administered by a separate agency also mostly support a risk-based framework. Therefore, the idea of using a risk-based approach is not controversial since it is supported by the majority of stakeholders. The establishment of a new agency responsible for licensing AI products, however, remains an object of intense debate. 

The available evidence provides a compelling reason to believe that the question of whether a government must license and certify all AI products can be resolved after examining the risks of the technology. In other words, those stakeholders who believe that the benefits of AI significantly exceed its risks, which is one of the main pillars of Google’s AI ethics framework, discourage policymakers from creating a separate agency responsible for monitoring AI products. Apparently, such stakeholders perceive AI as a new technology that can be compared to many previous inventions, such as the Internet of Things or big data. Using the logic of such stakeholders, one can ask why policymakers do not license new IoT devices or big data solutions yet are supposed to license AI tools. From this perspective, the only valid argument in favor of establishing a separate agency responsible for AI is that artificial intelligence poses unprecedented risks to humanity that significantly exceed those posed by big data, IoT, and other disruptive technologies that were introduced recently. For those who do not agree with such a statement, the idea of establishing a separate agency responsible for AI might seem ungrounded. 

The idea of establishing a separate agency responsible for AI and licensing all AI products is not in line with previous regulatory trends. However, dramatically changing the regulatory landscape is not an unprecedented endeavor. The Sherman Antitrust, Trade Adjustment Assistance, Sarbanes-Oxley, and Fair Labor Standards Acts all are examples of unique pieces of legislation that introduced standards with no precedent in U.S. law. The Fair Labor Standards Act set minimum wage levels, established the maximum workweek, and banned child labor in certain industries, measures that, in the opinion of some policymakers, journalists, economists, and businesspersons, could negatively influence the competitiveness of the U.S. economy. The Trade Adjustment Assistance Acts created monetary benefits and training programs for employees who lost their jobs due to imports, outsourcing, and overseas manufacturing. Such an initiative was unconventional. The Sarbanes-Oxley Act disrupted the existing organizational frameworks responsible for monitoring the transparency, accuracy, reliability, and accountability of accounting processes by introducing a series of new tools. As stated above, many specialists still are of the opinion that this piece of legislation created unnecessary regulatory barriers for U.S. companies and reduced the attractiveness of the U.S. economy for foreign investors. Finally, the Sherman Antitrust Act banned trusts and monopolies, thus taking an unconventional step to protect the interests of society. 

All of these laws are similar to each other in addressing an important challenge that could not be dealt with using the existing means. For instance, the Sherman Antitrust Act banned trusts because the growing threats of monopolies and trusts could not be prevented using the existing instruments to support healthy competition. If one agrees with the premise that AI is a critical challenge for society rather than one of many technologies that are introduced by tech companies, the idea of licensing AI tools and creating a separate agency to monitor the performance of AI tools might seem justified. 

Another important issue pertaining to the regulation of AI is connected with liability. As stated above, contemporary regulatory environments are not ready for AI. In December 2019, an individual driving a new Tesla car with an AI navigation system killed two people in an accident and faced twelve years in prison, even though the accident was caused by the AI rather than the driver. The contemporary criminal justice system mostly uses the construct of human liability to prosecute cases involving AI, which might be erroneous given the unique circumstances of accidents caused by artificial intelligence. Deciding who will be liable for AI-related damages is an important challenge. At the moment, users of AI products that cause injuries to other people are likely to be sued for negligence, and courts might find them liable for breaching the duty of care. However, in the case of sophisticated AI algorithms, it is much more difficult to establish liability. The most well-known arguments related to this matter revolve around insurance, testing, and user negligence. While some specialists are of the opinion that insurance companies should shoulder the responsibility for AI-caused accidents, others insist that the most important issue in this field is to strictly monitor the testing of all AI products to minimize the risk of such cases. 

6.4. Other Solutions

Some unconventional solutions besides the ones discussed above also can be put forward as possible instruments to mitigate the risks of AI. In particular, one way to address this problem is to follow the example of the American Recovery and Reinvestment Act in creating new jobs. Such an initiative might seem radical; however, if implemented on a small scale, it would not require amounts of money as substantial as those needed for a universal basic income program. The government could consider creating new jobs in a series of sectors along with intensive training programs to help displaced employees adapt to the situation and secure a stable income. Such a proposal is based on the premise that AI will not cause a substantial number of job losses, but rather will dramatically overhaul the labor market. In such a situation, job-creating initiatives in the spirit of the American Recovery and Reinvestment Act could help employees adjust to the new conditions of the labor market, acquire new skills, and contribute to the completion of important projects. 

Another important solution is investment in AI research. This white paper demonstrated that progress in the AI sector is mainly driven by industry, whereas both academia and policymakers lag behind industry leaders. In this situation, the full spectrum of benefits and risks of AI products is not entirely understood, which creates a dangerous situation in which companies introduce new AI tools to the market without realizing the possible implications of their products for the economy and society. To address this issue, stakeholders must increase investment in AI research to ensure awareness of AI risks and support decision-making among policymakers. Governments, international organizations, and large corporations should consider increasing investment in AI research to produce extensive evidence on the positive and negative implications of various AI algorithms and large language models. The factor of collaboration plays an especially important role here. Some specialists advocate for the establishment of intergovernmental organizations and partnerships involving various stakeholders to accelerate the pace of AI research and stimulate the exchange of knowledge. Investment in AI research is an important measure that would be beneficial for all stakeholders, including governments, tech companies, and society. 

Cooperation between tech companies involved in the development of AI products is currently one of the most important strategies to protect humanity against AI risks. One of the most important points of the AI ethics framework used by OpenAI is that the company promises that if some other company comes close to creating artificial general intelligence, OpenAI will stop competing with this project and instead will start assisting with its further development. The adoption of similar principles by other tech companies would be highly desirable to make sure that humanity can address the risks produced by uncontrolled progress in the field of artificial intelligence. Close cooperation between the leading tech companies would stimulate the exchange of knowledge and best practices, eventually contributing to the effectiveness of other measures controlling AI progress. Therefore, regardless of the specific regulatory frameworks and self-regulation methods that are used to address AI risks, the collaboration of industry stakeholders is mandatory to manage the development of AI products. 

Self-regulation is undoubtedly a relevant measure in relation to the problem under investigation. Companies can adopt internal organizational policies and practices to make sure that the AI products they produce are safe, accurate, and reliable. As a parallel, many companies have recently adopted comprehensive corporate social responsibility strategies in order to improve their reputation and gain a sustainable competitive advantage. Recycling programs, energy efficiency projects, and other green initiatives have reduced harmful emissions produced by large corporations. At the same time, one might argue that the magnitude of these reductions was insufficient and that radical regulatory measures would have made much more impressive progress in protecting the environment from harmful activities. The examples of the Sherman Antitrust Act, Trade Adjustment Assistance Act, and Sarbanes-Oxley Act illustrate that self-regulation might be incapable of addressing significant social problems: while some companies willingly change their practices to respond to the expectations of stakeholders, others continue to focus on reducing costs, which involves creating trusts, reducing the bargaining power of customers, and making employees work overtime. In this situation, the argument that self-regulation is insufficient for addressing the risks of AI seems convincing.

Finally, the last measure that should be discussed is the integration of AI topics into university, and even K-12, curricula. If AI is to become an inalienable part of human life, all individuals must be aware of at least the basic principles of AI use and the main benefits and risks associated with this technology. AI competence is likely to become a critical prerequisite for many employees entering the labor market. Therefore, teaching young people the basic aspects of AI is a natural change that should accompany the further integration of AI into various industries. The proposed measure involves introducing a subject covering the basic principles of AI algorithms, the specifics of large language models, and the ethics of AI. Higher awareness of AI, in such a situation, will translate into higher awareness of AI risks. As a result of such changes in the education system, users are likely to be better equipped to deal with AI risks and better positioned to ensure equitable access to the prosperity generated by AI development. 

6.5. Recommendations

The previous sections of this chapter offer a review of a set of options that are often discussed as possible instruments to manage AI risks. This section, at the same time, introduces the author’s recommendations based on a detailed analysis of these options. It should be noted, however, that this analysis is not concerned with industry-specific factors. A discussion of industry-specific recommendations can be found in the next chapter of the white paper. 

The evidence presented in the study provides a compelling reason to believe that a universal basic income should not be used to manage AI risks. In theory, the concept of UBI might look like a promising solution capable of reducing poverty, providing social security, boosting consumption, enhancing the population's health, and promoting creativity. However, in practice, most of the positive scenarios involving the use of UBI to achieve such outcomes are utopian. To the best of the author's knowledge, there is currently no study that convincingly proves that UBI would promote creativity rather than reduce people's incentive to work. The use of UBI could be an effective method of alleviating poverty and providing social security, but it is currently unclear how a large-scale universal basic income program could be funded other than by increasing tax rates for the entire population. Equitable application of a UBI across the Global South seems especially difficult to imagine. Offering small cash payments, at the same time, would defeat the purpose of the program by failing to achieve any meaningful progress in improving health, reducing poverty rates, and increasing social security.

The disadvantages of UBI currently seem insurmountable. As stated above, the majority of proponents of UBI believe that "the rich" must fund such a program. However, simulations conducted by Ghatak and Jaravel demonstrate that further increasing tax rates for high-income households and large corporations would be insufficient for covering the immense cost of a large-scale UBI program. Therefore, there is currently no solution to the problem of UBI funding. 

The argument that UBI could stimulate consumption is questionable. Quantitative easing, which involves injecting large amounts of money into the economy, provides financial institutions with money that is used to support businesses and the financial system. However, there is no reason to believe that individuals who receive cash payments from the state would use them in a way that supports the domestic economy. People might use regular cash payments to travel abroad or invest in property in another country. As a result, a UBI could take money from taxpayers without providing any visible benefit to the economy and the financial system. 

The arguments laid out above provide a compelling reason to state that UBI is not an optimal solution for managing the risks of AI. Creating a program that provides financial assistance to displaced employees and other individuals affected by the adverse implications of AI is a promising idea. However, there are currently no convincing arguments demonstrating why such a goal should be pursued through a universal basic income rather than a targeted program. Drawing on past experience and the successful examples of various pieces of legislation, such as the Trade Adjustment Assistance Act, it seems justified to claim that governments should consider a customized approach focusing on affected social groups rather than utopian universal programs that are unprecedentedly expensive yet offer no proven benefits. 

A targeted program focusing on affected individuals, especially displaced employees, would be capable of addressing the social risks of AI without creating an unnecessary financial burden for governments. A gradual increase in corporate taxes, in such a situation, would be sufficient to cover the program's funding. Given the large increase in productivity and performance delivered by AI, corporations are in a position to fund such a financial assistance program. Following the example of the trade adjustment assistance projects, policymakers should make sure that the programs offer not only regular cash payments over a prolonged period of time but also comprehensive training programs that would help individuals succeed in a work environment dominated by AI and ML. 

A large-scale targeted program focusing on displaced employees is not the only recommendation for mitigating the risks of AI. The available evidence indicates that job creation initiatives, investment in AI research, and changes in the education system also are important measures that must be applied to protect humanity against the threats of AI. The American Recovery and Reinvestment Act demonstrated that job creation initiatives can exhibit impressive performance in reducing unemployment rates in times of uncertainty. Governments can follow the successful example of this piece of legislation and launch a series of job creation initiatives in those sectors that are most affected by AI. These projects would complement targeted assistance programs for displaced employees, shielding affected social groups from the risks of AI. Investment in AI research also is a mandatory element of a strategy to ensure the safety of artificial intelligence. Wealthy governments must allocate budgets to fund AI research in the form of grants and programs focusing on specific risks related to this technology. Finally, changes in the education system are required to increase public awareness of AI, prepare future employees for working in an AI-dominated environment, meet employers' demand for workers who are competent in AI, and make sure that people are fully aware of the dangers of artificial intelligence and the mitigation methods they can use to reduce the likelihood and potential impact of AI threats. 

Cooperation and self-regulation are relevant factors that can significantly reduce the social and economic risks associated with AI. However, policymakers currently have no instrument for making companies intensify their efforts in developing AI ethics frameworks and deepening cooperation with other stakeholders other than setting strict requirements for specific organizational policies and practices related to AI research and development. One might assume that if governments succeed in integrating AI into education curricula and raising public awareness of AI, tech companies involved in AI research would be interested in adopting responsible AI frameworks, engaging in AI research, and cooperating with other tech firms to mitigate AI risks. Such a scenario could be compared to the growing importance of corporate social responsibility initiatives that are implemented by companies from different regions of the globe in an attempt to improve their public image. If stakeholders start expecting tech companies to have detailed ethics frameworks that address all the risks of AI in a responsible manner that differs from the "competitive race" currently pursued by many corporations, such as Google, companies will be forced to respond to this expectation. This way, both self-regulation and cooperation will be leveraged to mitigate the risks of AI.

From the perspective of regulatory measures that address AI, this paper recommends that policymakers adopt unconventional methods to address the risks of this technology because of the unprecedented nature of the challenges it poses. The evidence examined in the study leads to the conclusion that AI is capable of disrupting a number of industries, dramatically increasing unemployment rates, making humanity overly reliant on technology, and creating a number of other economic and social risks. Many technologies, such as the Internet of Things and big data, also created certain risks, but the magnitude of those threats is not comparable to the dystopian scenarios that might unfold if humanity does not take immediate measures to rein in the uncontrolled growth of AI. Therefore, the critical risks related to AI call for the use of unconventional methods. In a similar manner, the rise of monopolies and trusts, the abuse of workers' rights, and the failure of public companies to ensure the accuracy, reliability, and transparency of accounting processes encouraged policymakers to take the radical measures embedded in the Sherman Antitrust Act, the Fair Labor Standards Act, and the Sarbanes-Oxley Act, respectively. The arguments laid out above show that the rise of AI constitutes a disruptive event accompanied by a set of serious threats that call for unconventional and radical measures.

The establishment of a separate agency under the Department of Commerce that licenses all large language models is a necessary measure. Whereas it will inevitably slow down progress in the field of AI, such a measure will reduce the threats of AI and allow academia and policymakers to keep pace with industry stakeholders engaged in the development of AI tools. A thorough certification process using a set of standard measures will promote the uniformity of AI standards, stimulate the development of standardized instruments to address the most important AI risks, and encourage companies to adopt more robust frameworks for testing their AI products. The licensing process must involve detailed requirements regarding the features of AI tools, such as explainability, accountability to humans, transparency, and fairness. It also is supposed to ensure that the training of large language models does not violate copyright laws. The performance of large language models should be carefully monitored so that AI hallucinations and potential errors are documented and explained in detail, thus providing users with an opportunity to use AI products knowing the full spectrum of their strengths and limitations. In addition to the establishment of a separate agency, policymakers also are recommended to adopt a risk-based framework to develop customized solutions for high-risk industries so that each industry would be protected from the risks of AI. 
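
To make the proposed licensing criteria more concrete, the sketch below encodes them as a hypothetical application checklist. The field names mirror the requirements named above (explainability, accountability to humans, transparency, fairness, copyright compliance, and documented hallucination behavior); the class itself and its pass/fail logic are illustrative assumptions rather than an existing regulatory schema.

    from dataclasses import dataclass

    # Hypothetical checklist for a large language model license application.
    # The criteria mirror those proposed in this chapter; the schema is an
    # illustrative assumption, not an existing agency's form.
    @dataclass
    class LLMLicenseApplication:
        model_name: str
        explainability_documented: bool    # model behavior can be explained
        accountable_to_humans: bool        # human oversight and override exist
        training_data_transparent: bool    # data sources are disclosed
        fairness_audited: bool             # tested for algorithmic bias
        copyright_compliant: bool          # no unlicensed copyrighted data
        hallucinations_documented: bool    # known errors and limits published

        def license_granted(self) -> bool:
            """A license is issued only if every criterion is satisfied."""
            return all([
                self.explainability_documented,
                self.accountable_to_humans,
                self.training_data_transparent,
                self.fairness_audited,
                self.copyright_compliant,
                self.hallucinations_documented,
            ])

    application = LLMLicenseApplication(
        model_name="example-llm",
        explainability_documented=True,
        accountable_to_humans=True,
        training_data_transparent=True,
        fairness_audited=True,
        copyright_compliant=False,  # unlicensed copyrighted training data
        hallucinations_documented=True,
    )
    print(application.license_granted())  # False -> license denied

Under this all-or-nothing design, a single failed criterion, such as unlicensed copyrighted training data, is enough to block the license.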

Chapter 7. Recommended Regulatory Proposals 

7.1. Introduction of the Two-Layer Framework

The current white paper suggests the use of a two-layer framework to regulate the risks associated with artificial intelligence. The first layer of the framework includes a set of measures that control the development of the technology and the risks that are associated with it. These measures are universal and apply to all sectors of the economy. The second layer, simultaneously, includes a set of customized measures that are to be applied in particular industries to address the unique AI risks that they face, and is described in the next section. 

Thus, the first layer of the proposed framework will be based on the EU AI Act, which includes a set of provisions for addressing the risks of AI. First, it will regulate the use of data in pre-training AI models. In particular, AI developers should be prohibited from using copyrighted content without the explicit consent of its rights holder. Industries might consider different instruments to simplify the provision of such consent, such as industry-wide agreements. At the same time, the prohibition of the use of copyrighted content without the explicit consent of rights holders or adequate compensation must be included in the AI Act because this issue is one of the few matters on which the majority of specialists, scientists, and practitioners agree. 
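
As a purely illustrative sketch of how such an industry-wide consent mechanism might be recorded, the snippet below models one registry entry per copyrighted work. Every field name here is a hypothetical assumption; the provision itself specifies only that explicit consent or adequate compensation is required.

    from dataclasses import dataclass

    # Hypothetical registry entry for one copyrighted work used in
    # pre-training. Field names are illustrative assumptions.
    @dataclass
    class TrainingDataConsent:
        work_identifier: str       # e.g., an ISBN, registry ID, or URL
        rights_holder: str
        consent_granted: bool      # explicit consent from the rights holder
        compensation_agreed: bool  # adequate compensation agreed instead

    def usable_for_pretraining(record: TrainingDataConsent) -> bool:
        """Copyrighted content may enter a pre-training corpus only with
        explicit consent or an adequate-compensation agreement."""
        return record.consent_granted or record.compensation_agreed

    record = TrainingDataConsent("ISBN-978-0-00-000000-0", "Example Press",
                                 False, True)
    print(usable_for_pretraining(record))  # True -- compensation agreed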

Second, the government must create an independent federal agency under the Department of Commerce responsible for the licensing and auditing of AI companies. This agency would be headed by a five-member Commission whose members would be appointed by the President and confirmed by the Senate. The agency would issue licenses for all large language models meeting a set of criteria and conduct regular audits of these models. The magnitude of the AI risks faced by the economy and society is significant, justifying the use of radical measures, such as the ones prescribed in the Sarbanes-Oxley Act and the Fair Labor Standards Act. Therefore, the introduction of the AI Act establishes the basic rights and responsibilities related to the use of AI and creates a separate agency that issues licenses for large language models and monitors their use through regular audits. 

In addition to the establishment of the agency, the AI Act also is supposed to include provisions related to the allocation of money for AI adjustment programs, AI education campaigns, and AI projects, as well as requirements related to self-regulation in the area of AI and the provision of severance and transition periods for employees who are let go because of AI. The requirement related to self-regulation can be compared to the introduction of new requirements under the Sarbanes-Oxley Act of 2002. There is currently no reason to believe that companies can shoulder the prevention of AI risks by introducing effective self-regulation instruments on their own. Therefore, it is necessary to set mandatory measures that will guide companies' self-regulation. Those firms that use AI algorithms and meet size requirements will be obligated to appoint an AI officer responsible for monitoring the use of AI tools, preventing AI risks, and ensuring the company's compliance with AI guidelines and the AI Act. Companies that do not meet the size requirements need not appoint such an officer, but those that use large language models still need to maintain clear documentation of the ethics principles governing their use. 
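
The self-regulation rule described above reduces to a simple decision procedure, sketched below. The 500-employee threshold is a placeholder assumption, since the exact size requirement is left open in this proposal.

    EMPLOYEE_THRESHOLD = 500  # placeholder; the exact size requirement is not specified here

    def ai_officer_required(uses_ai: bool, employee_count: int) -> bool:
        """Firms that use AI algorithms and meet the size requirement
        must appoint an AI officer."""
        return uses_ai and employee_count >= EMPLOYEE_THRESHOLD

    def ethics_documentation_required(uses_llms: bool) -> bool:
        """Firms below the threshold are exempt from the officer requirement
        but must still document the ethics governing their use of LLMs."""
        return uses_llms

    print(ai_officer_required(uses_ai=True, employee_count=1200))  # True
    print(ai_officer_required(uses_ai=True, employee_count=80))    # False
    print(ethics_documentation_required(uses_llms=True))           # True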

Another provision of the AI Act pertains to AI adjustment assistance programs. As will be shown below, AI adjustment assistance programs will be modeled on the trade adjustment assistance programs introduced by the Trade Adjustment Assistance Act. They will include the provision of regular financial assistance to affected employees as well as training sessions that can help these individuals acquire new skills and knowledge. Employees from any industry will be able to apply for participation in AI adjustment assistance programs, but the eligibility criteria will be established separately for each sector. 

All companies whose valuation exceeds $5 billion will be required to provide severance and transition periods to employees who are considered at risk of being displaced by AI. A specific discussion of such periods and requirements is provided below. At the same time, it is crucial to emphasize that all large corporations will be required to be transparent regarding the integration of AI into their operations, regularly identify "at-risk" workers in terms of the likelihood of being replaced by AI, provide the mandatory transition periods for all employees who are laid off due to AI, and pay severance to all these individuals. The government also will establish mandatory training programs for all individuals in the "at-risk" category to better prepare them for a possible job shift. 

The United States already invests significant amounts of money in infrastructure projects that can help reduce the adverse effects of job displacement for many communities. At the same time, investment in AI education companies is recommended as a promising solution to facilitate the re-skilling of the labor force and help workers complete a successful career shift. By creating public-private partnerships with companies offering AI education services, the government will empower employees to acquire new skills and knowledge. Many displaced workers also will be able to take advantage of the current initiatives aimed at investing in infrastructure projects across the country. Finally, it also is important to create a separate program to support the introduction of AI projects in areas populated by low-income individuals and underserved communities. This proposal is in line with the set of initiatives related to the transition from fossil fuels to renewable energy discussed in the previous section. By stimulating the creation of new AI projects across the country, such as autonomous vehicle development plants, healthcare AI development projects, smart city initiatives, agricultural automation programs, and virtual reality startups, governments could stimulate the creation of new jobs, contribute to the acquisition of AI knowledge and skills by the workforce, and support the growth of the AI sector while addressing its social and economic risks. 

The second layer of the proposed framework applies to each sector, with industry-specific requirements to address industry-specific risks. 

7.2. Recommendations for the Manufacturing and Transportation Sectors

The manufacturing and transportation sectors are those industries that are likely to lose a significant number of jobs as a result of the rise of AI. Many employees of the manufacturing sector and logistics facilities are low-skilled and generally have a low level of income. Therefore, they must be protected to make sure that the development of AI technology does not dramatically decrease their social security. To protect workers from the negative effects of AI, stakeholders should consider the following measures:

  • To offer extensive AI adjustment assistance programs to all displaced employees of the manufacturing and transportation sectors, involving regular financial assistance for at least 3 years and retraining to help these people pursue new, AI-related job positions;
  • To offer mandatory training programs to those people who are at risk of displacement by AI;
  • To invest in AI education campaigns to facilitate a career switch for many employees;
  • To obligate manufacturing and transportation companies to replace employees with AI in a gradual manner, using long warning and transition periods.

A detailed description of the proposed measures can be found in the table below.

Table 3. Recommended Solutions for the Manufacturing and Transportation Industries

Measure: AI adjustment assistance programs

  • All employees who have lost their manufacturing jobs to AI are entitled to monthly financial assistance for 3 years or until they find a new job with adequate compensation;
  • All eligible employees can attend training courses that will help them acquire new skills in AI-related fields or other sectors.

Measure: Investment in AI education campaigns

  • Local governments will allocate funding to support AI education campaigns by creating public-private partnerships with companies specializing in AI education;
  • AI education campaigns will provide employees of different industries with an opportunity to acquire new skills and knowledge;
  • AI education campaigns will have a practical focus since they seek to enable a successful career shift.

Measure: Warning and transition periods

  • All manufacturing, transportation, and logistics companies must be transparent regarding their plans to integrate AI into organizational processes and the number of job losses associated with such plans;
  • All manufacturing, transportation, and logistics companies must identify at-risk categories of employees who might be displaced by AI in the next year;
  • A firm cannot dismiss an employee displaced by AI unless he or she was previously placed on this at-risk list;
  • A firm must offer a three-month transition period to all employees displaced by AI;
  • Being on the "at-risk" list is a mandatory condition for receiving financial assistance under the AI adjustment assistance programs.

Measure: Mandatory training programs

  • All "at-risk" employees and those who have begun their three-month transition period must undergo a series of training courses to acquire new skills and prepare for a possible job shift.

As the table above illustrates, the study proposes making the job replacement process as gradual as possible, using the "at-risk" designation as a warning sign that helps employees start preparing to re-enter the labor market in a new job position. Mandatory training programs will start the preparation process while employees are still working in their previous jobs, while the AI adjustment assistance program will help these workers acquire new skills and eventually find a new job. Large-scale infrastructure projects that are currently funded by the government are expected to serve as an additional measure helping affected communities withstand large job losses in the manufacturing, transportation, and logistics industries. 
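
The eligibility logic implied by the table can be sketched as a short decision procedure. The data shape and dates are illustrative assumptions; the 90-day constant corresponds to the mandated three-month transition period.

    from datetime import date, timedelta

    # Sketch of the eligibility logic implied by Table 3: a worker displaced
    # by AI qualifies for adjustment assistance only if the employer placed
    # them on the "at-risk" list and granted the full transition period.
    TRANSITION_PERIOD = timedelta(days=90)  # the mandated three-month transition

    def assistance_eligible(on_at_risk_list: bool,
                            transition_start: date,
                            termination_date: date) -> bool:
        """Returns True if the displacement followed the mandated process."""
        served_full_transition = (
            termination_date - transition_start >= TRANSITION_PERIOD
        )
        return on_at_risk_list and served_full_transition

    # Example: a worker listed as at-risk, notified on March 1 and released
    # on June 1, would qualify for monthly assistance and retraining.
    print(assistance_eligible(True, date(2025, 3, 1), date(2025, 6, 1)))  # True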

7.3. Recommendations for the Healthcare Industry

The healthcare industry currently does not require such radical measures. As stated above, specialists are under the impression that AI will produce job gains rather than job losses in this sector. Therefore, the most important task of policymakers is to make sure that AI is being implemented in the industry in an adequate manner, using all the safety precautions and testing all products before their introduction. From this perspective, the proposed regulatory framework involving the mandatory testing of all AI products and compliance with a set of requirements discussed above seems to be the best solution for addressing the risks of AI in the healthcare sector. For this industry, the risks of AI are connected with the responsible use of the technology rather than the risk of job losses. 

One of the most important recommendations for the healthcare industry from the perspective of the problem under investigation is to safeguard patient information. Therefore, all AI models that are used in the sector must comply with strict requirements related to the anonymization and encryption of data as well as secure data storage. All healthcare organizations should be required to conduct annual audits of their AI algorithms to make sure that all AI models and algorithms used by these entities fully comply with relevant data protection regulations, such as the General Data Protection Regulation in the EU or the Health Insurance Portability and Accountability Act in the USA. It also is important to make sure that these regular audits address the risk of algorithmic bias, such as in the case of the AI recruitment tools used in New York. Each healthcare facility must have a separate specialist responsible for the safety, transparency, reliability, accuracy, and accountability of AI algorithms. 
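
The anonymization step that such audits would verify can be illustrated with a minimal sketch in which direct identifiers are dropped or replaced with salted hashes before records reach an AI model. The fields and the facility-specific salt are assumptions, and real HIPAA or GDPR de-identification is considerably more involved than this illustration.

    import hashlib

    # Minimal sketch of the anonymization step an audit would check for:
    # direct identifiers are dropped or replaced with salted hashes before
    # records reach an AI model. Illustrative only; real de-identification
    # under HIPAA or the GDPR is considerably more involved.
    SALT = b"facility-specific-secret"  # assumed per-facility secret

    def pseudonymize(record: dict) -> dict:
        """Replace the patient identifier with a salted hash and drop
        direct identifiers such as the patient's name."""
        token = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()
        return {
            "patient_token": token,
            # only clinically relevant, non-identifying fields are retained
            "age_band": record["age_band"],
            "diagnosis_codes": record["diagnosis_codes"],
        }

    record = {"patient_id": "MRN-10442", "name": "Jane Doe",
              "age_band": "40-49", "diagnosis_codes": ["E11.9"]}
    print(pseudonymize(record))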

Given the significance of AI algorithms in the healthcare sector, it is necessary to set a specific size requirement: all healthcare organizations that employ at least 30 individuals and use AI algorithms must have dedicated AI specialists and AI usage frameworks. In general, specific recommendations for the healthcare industry in relation to addressing the risks of AI mainly revolve around safeguarding patient data, conducting regular audits of AI algorithms, and strengthening self-regulation mechanisms by appointing AI officers and creating comprehensive AI usage frameworks.  

7.4. Recommendations for the Education Sector

The education sector experiences a number of threats related to AI. In addition to those measures that were already discussed above, policymakers and industry stakeholders should consider the following measures to minimize the negative impact of AI on the education sector:

  • To incorporate a new AI subject into school curricula;
  • To require educators to supervise all AI-based education systems;
  • To set strict data privacy regulations to safeguard student data and prevent the misuse of sensitive information by AI tools;
  • To introduce large-scale training programs for educators on the use of AI;
  • To leverage AI to create collaborative learning environments;
  • To prioritize customization and personalization as the main avenues for AI integration in the education system;
  • To conduct regular audits of AI education tools to check them for bias and errors;
  • To encourage the development of open-source education tools to facilitate the exchange of knowledge and best practices;
  • To fund large-scale programs supporting the integration of AI in the education system;
  • To utilize AI adjustment assistance programs to support displaced employees of the education sector and help them find a job.

In general, the most important tasks faced by the education sector from the perspective of the problem under investigation are to equip teachers with the tools to use AI in the classroom, leverage the power of AI to customize and enhance learning experiences, ensure the safety of AI, protect the privacy of data, and support teachers and other employees who have lost their jobs to AI. There is no reason to believe that the number of educators displaced by AI will be large. Nonetheless, stakeholders must be prepared to retrain these individuals and provide them with the means to maintain a sufficient quality of living until they find a new job. 

The role of AI in the education industry in the near future will mainly revolve around creating customized learning experiences. Therefore, the most important recommendation for the sector is to ensure that customized education experiences comply with uniform standards. There is no need to conduct specific audits of each AI algorithm used in the education industry, but it seems necessary to carry out annual audits of the use of AI models and algorithms by each educational organization to make sure that these instruments are used in a way that is compatible with broad education goals and policies. The threats of AI to the labor market in the education sector should be addressed by the standard set of measures, such as AI adjustment assistance programs and transition periods. At the same time, considering the unique nature of the education industry, the instrument of AI education campaigns can be especially effective in this sector. Large education institutions can serve as a basis for such campaigns, whereas educators and administrators can become an important part of their target audience. 

7.5. Recommendations for the Industry of Financial Services

The financial services sector is likely to be significantly affected by AI. The main measures that should be used to reduce the risks of AI in this industry are as follows:

  • To create strict regulations for AI tools in the industry of financial services that would ensure the absence of algorithmic biases and regulatory compliance;
  • To leverage AI to detect fraud;
  • To ensure the uniformity of standards for AI algorithms that are used in everyday operations of financial institutions;
  • To make sure that all AI tools used in the industry of financial services are monitored and controlled by humans;
  • To set strict requirements regarding cybersecurity policies, such as requiring a dedicated employee responsible for protecting AI models and algorithms from cybersecurity risks;
  • To foster collaboration between the public and private sectors to develop and implement responsible AI practices;
  • To introduce a requirement for all companies operating in the industry to provide transparent reports on AI usage;
  • To use AI adjustment assistance programs to assist those employees who have lost their jobs to AI. 

The majority of these recommendations resemble the ones formulated for other sectors, but several issues deserve to be highlighted. In particular, all companies operating in the financial services industry should include separate sections on the use of AI algorithms in their annual reports. In addition, annual audits of AI algorithms must specifically focus on preventing the risk of algorithmic bias in decisions regarding clients, an issue that is irrelevant for most other industries. It is also crucial to mandate strict cybersecurity policies because of the exposure of financial services organizations to cybersecurity risks. Considering the unique nature of the financial services sector, companies should be prohibited from using in-house AI algorithms that have not undergone certification. Therefore, unlike firms from most other industries, companies operating in the financial services sector must either use approved AI algorithms or submit their own AI algorithms to the agency under the Department of Commerce for certification. 
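To illustrate in concrete terms what such a bias-focused audit might measure, consider the following minimal sketch. It assumes a hypothetical export of lending decisions (a file named lending_decisions.csv with "group" and "approved" columns, both invented for this example) and uses the informal "four-fifths rule" as an illustrative flagging threshold; neither the file format nor the threshold is prescribed by this paper.

```python
# Minimal sketch of a disparate-impact check that an AI audit might include.
# Assumes a hypothetical CSV of lending decisions with columns:
# "group" (protected attribute) and "approved" (0/1 model decision).
import csv
from collections import defaultdict

def approval_rates(path: str) -> dict[str, float]:
    """Compute the approval rate of the AI model for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["group"]][0] += int(row["approved"])
            counts[row["group"]][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    # Ratio of the lowest to the highest group approval rate; values
    # below ~0.8 are often flagged for further review under the
    # informal "four-fifths rule".
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    rates = approval_rates("lending_decisions.csv")
    ratio = disparate_impact_ratio(rates)
    print(f"Approval rates by group: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Potential disparate impact; escalate for manual review.")
```

An actual audit would complement such a screening metric with deeper statistical testing, but even this simple ratio shows how the audit requirement can be made concrete and verifiable in practice.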

7.6. Recommendations for the Film/Television Industry

The threat of AI still seems distant for many of the sectors discussed in this study, but the film/television industry has already faced a number of challenges related to the rise of AI. AI tools are already widely used to cut costs, create appealing visual effects, and align product features with viewers’ expectations. At the same time, the strikes initiated by both actors and screenwriters show that stakeholders are becoming increasingly concerned about the risk of being replaced by AI tools, losing their digital “likeness”, and losing their right to credits. 

Most risks facing the film/television industry can be addressed in the AI Act, which must be adopted to counter the main threats associated with the technology. In particular, as stated above, it is of paramount importance to regulate the use of copyrighted data for pre-training AI models, a critical issue for screenwriters. 

The use of the digital “likeness” of actors is another vital issue that should be addressed in the legislation. The AI Act should clearly require that the images and voices of individuals be used in pre-training AI models exclusively upon receiving their informed consent. At the moment, actor unions claim that such a clause would be insufficient because production companies would force all background actors to provide such consent, which could then be used to create “synthetic actors”. However, all further issues regarding the creation of “synthetic actors” and the use of actors’ data must be addressed by actor unions and production companies in the form of memorandums of agreement. The music industry has faced similar challenges, with concerns over artists signing over their masters for life. The music industry has addressed, and continues to address, such concerns in numerous ways: small labels offering alternatives, increased education around signing rights, and so on. Similarly, the film and television industry will go through such a growth process, with unions ensuring that artists’ rights are protected. While policymakers and AI developers should support such union efforts, it is not the role of governments to regulate to a degree that slows down the speed of content production or stifles economic growth. In other words, the current paper suggests adopting the informed consent clause regarding the use of copyrighted data and the images and voices of actors as an important provision in the AI Act, alongside support for unions as they work within the industry to protect workers. Any other disagreements between stakeholders of the television/film industry in relation to the use of AI must then be resolved through the instrument of self-regulation. 

The film/television industry is unique in terms of employment since many individuals do not have full-time jobs and are employed on a project basis. In this situation, protecting background actors with warning and transition periods and severance payments would not be feasible. At the same time, individuals who were employed in at least one film/television project in the last year should be eligible for AI adjustment assistance programs, which will help them acquire new skills in AI-related fields and other sectors. Given the uncommon nature of the film/television industry, the measures available to governments are limited. Therefore, actor and writer unions should play a major role in protecting the sector from AI risks via the instrument of self-regulation.

7.7. Recommendations for the Publishing Industry

The rise of AI in the publishing industry is likely to result in job losses among editors as well as some low-skilled workers in other segments of the sector. The magnitude of such losses, however, is unlikely to be large. Those individuals who might be displaced by AI can take advantage of AI adjustment assistance programs, which can help them acquire new skills and eventually find new jobs. The use of copyrighted content in pre-training AI models should be addressed in the AI Act. Most other concerns of the employees who might be affected by the integration of AI into the publishing industry, however, should be addressed through the mechanism of self-regulation. In particular, publishing companies should adopt a set of ethical guidelines and explicit standards regulating the use of artificial intelligence in different segments of the sector. 

As stated above, algorithmic bias currently seems to be the most important risk related to the integration of AI algorithms into the publishing industry. Many platforms try to leverage AI algorithms to tailor news to the expectations of their target audience. Such a pattern can result in a situation in which AI algorithms exhibit not only the bias stemming from poor pre-training but also bias absorbed directly from readers. Therefore, all AI algorithms used by companies operating in the sector must be submitted to the federal agency under the Department of Commerce. Publishing companies meeting a set of criteria in terms of size and market capitalization must be prohibited from using any AI algorithms except those approved by the agency. 

7.8. Recommendations for the Creator Economy

The integration of AI into the creator economy does not require any specific measures besides the protection of copyrighted content with respect to its use in pre-training AI models. Unlike the manufacturing or transportation industries, the creator economy does not exist as a separate sector; it combines multiple jobs from different industries that involve the creation of diverse pieces of content. The concerns of these workers mainly revolve around the use of their work in pre-training AI models without their informed consent. Therefore, as stated above, it is crucial to address this issue by making informed consent a mandatory requirement for the use of copyrighted content by large language models. If content creators are affected by AI and have an employment status, they can apply for participation in AI adjustment assistance programs, which can support them during the transition period. The specific requirements for participating in AI adjustment assistance programs will be set separately for each sector. 

The creator economy is unlikely to experience adverse implications of AI in the near future; if anything, most creators will thrive. Therefore, one of the few regulatory measures needed in this field is to require informed consent for the use of copyrighted content when pre-training large language models. Companies that cooperate with content creators, such as publishing firms, are recommended to utilize AI detection tools, such as GPTZero. Such tools can help stakeholders prevent a decline in the quality of content and deter the use of large language models to create new content without human intervention. 
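As an illustration of how such a detection tool might be embedded in an editorial intake workflow, consider the minimal sketch below. The function detect_ai_probability is a hypothetical placeholder for whatever licensed detector (GPTZero or a comparable service) a company integrates, and the 0.7 threshold is an arbitrary illustrative value, not a vendor recommendation.

```python
# Minimal sketch of a content-intake screen built around an AI-detection
# tool. detect_ai_probability is a hypothetical stand-in for a licensed
# detector's client (e.g., a GPTZero-style service); the 0.7 threshold
# is an illustrative assumption, not a vendor recommendation.

def detect_ai_probability(text: str) -> float:
    """Placeholder: return the detector's estimated probability that
    `text` was machine-generated. Replace with a real API call."""
    return 0.0  # dummy value so the sketch runs end to end

def screen_submission(text: str, threshold: float = 0.7) -> str:
    """Route a submission based on the detector's score."""
    score = detect_ai_probability(text)
    if score >= threshold:
        # Flag for human editorial review rather than auto-rejecting:
        # detectors produce false positives, so a person decides.
        return "flagged_for_review"
    return "accepted"

if __name__ == "__main__":
    print(screen_submission("Sample article text submitted by a creator."))
```

The key design choice here is that a high detection score triggers human review rather than automatic rejection, which keeps editors in the loop and accounts for the known false-positive rates of current detectors.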

Chapter 8. Conclusion and Future Perspectives 

The study showed that AI is an important technological advancement that has created significant benefits and challenges for humanity. The technology offers impressive opportunities for people and entire industries, but it also poses critical social and economic risks. In particular, the white paper showed that the adoption of AI can result in high unemployment rates, large inequalities, skills gaps, data biases, data leaks, disruptions in traditional sectors, and economic losses, and the ramifications will be worse if AI is left to develop unchecked and unregulated. There is a consensus among policymakers, industry leaders, and scientists that AI must be regulated in a proactive manner to make sure that humanity is protected from its adverse implications. However, at the moment, progress in the field of AI is driven by industry, whereas both academia and policymakers are incapable of keeping up with the pace of new developments in AI research. The study shows that the uncontrolled growth of AI presents a set of substantial threats that must be addressed in order to avoid a dystopian scenario in which a small number of individuals who control AI enjoy the benefits of this technology while the rest of the population lives in poverty. 

The white paper shows that AI regulation is a pressing issue that must be addressed as soon as possible. The available evidence provides a compelling reason to believe that governments should create separate agencies responsible for licensing all large language models and monitoring their use. These agencies should develop a set of specific standards for large language models covering testing, the use of data for training, compliance with copyright laws, AI hallucination, liability, and other relevant issues. Companies must not be allowed to introduce any AI products to the market except those licensed by these agencies. Although such a process could slow down AI research and development, it would dramatically reduce the risks associated with AI and give academia time to intensify AI research so that further progress in this field is supported by extensive evidence. 

Other recommendations to mitigate the risks of AI involve launching AI adjustment assistance programs for displaced employees, integrating AI into school curricula, utilizing information campaigns to raise public awareness of AI, investing in AI research, encouraging collaboration between stakeholders, requiring mandatory transition periods and severance payments for “at-risk” employees at companies whose valuation exceeds $5 billion, investing in AI projects across the country, and launching job creation programs in those areas that have been especially affected by AI. In general, the study advocates a radical approach to controlling the rise of AI by establishing a specific agency that licenses all large language models. At the same time, policymakers should also adopt a risk-based framework to address the risks of AI in specific industries. Most of the proposals introduced in this white paper are aligned with the recommendations put forward by Microsoft in relation to AI regulation. 

Those industries that are more likely than others to witness significant job displacement as a result of AI should be prioritized by policymakers. The manufacturing and transportation sectors are expected to lose millions of jobs in the next several decades. To help affected individuals and communities, policymakers should introduce extensive AI adjustment assistance programs that would provide monthly financial assistance to displaced employees for at least three years and offer various training programs for these people. Moreover, companies from these sectors must replace employees with AI in an orderly manner, first putting them on an “at-risk” list for a year and then providing them with a three-month warning period if they decide to proceed with the decision. At-risk employees will use this time to attend mandatory training programs and acquire new skills. In addition, public-private partnerships established by the government are expected to help numerous individuals learn about new AI professions and complete a successful career shift into AI fields. 

The study showed that the healthcare industry does not require such radical measures since it is unlikely to witness large-scale job losses because of AI. Thus, compliance with the requirements of a new agency should be sufficient to address the threats of AI in this sector. The education sector should respond to the rise of AI by leveraging the technology to customize and enhance learning experiences, ensuring the safety of AI, and protecting the privacy of data. Those educators who lose their jobs to AI should be allowed to take advantage of AI adjustment assistance programs. The financial services sector would benefit from strict regulations controlling not only the algorithms and training data of each large language model but also the ways in which it is used. Furthermore, it is also important to set additional cybersecurity requirements and conduct regular audits of AI algorithms to minimize AI risks in this sector. 

The rise of AI in the publishing industry and the creator economy should be addressed by introducing the necessary requirement of informed consent for the use of copyrighted content and personal information, including “digital likeness”, in pre-training AI models. The remaining issues have to be addressed through the instrument of self-regulation. Similar recommendations can also be made to stakeholders of the television/film industry. Actor and writer guilds are recommended to sign memorandums of understanding with production companies to regulate the use of AI, including the creation of “synthetic actors” and the eligibility of AI models for writing credit. At the same time, there are no grounds for suggesting regulatory initiatives besides prohibiting the use of personal information and copyrighted content without the explicit consent of individuals and copyright holders, respectively.

Further research is needed to obtain a more comprehensive understanding of AI regulation in different contexts. First, scientists should consider conducting more research focusing on the integration of AI into specific sectors besides the ones discussed in this white paper. For instance, they might examine the use of AI in the tourism and restaurant industries. A critical analysis of the social and economic risks of AI in various industries would enrich the understanding of the problem under investigation. 

Second, scholars are encouraged to investigate the problem of AI regulation by distinguishing between the cases of wealthy countries and the Global South. This research was mostly conducted from the perspective of the United States, although certain examples from the Global South were considered to illustrate various points. Given that countries of various wealth levels have fundamentally different financial resources and AI research capabilities, a “one-size-fits-all” approach is hardly capable of addressing the risks of AI in all modern countries. Thus, a series of studies focusing on the cases of specific nations would help better illustrate the research problem in meaningful contexts. 

Finally, the last recommendation for further research is to scrutinize specific conditions for AI adjustment assistance programs for displaced employees. This study did not offer a detailed description of such conditions because of the broad scope of the white paper. Therefore, this issue should be addressed in further research.

The findings of the white paper are subject to four limitations. First, the scope of the investigation was broad. As a result, many conclusions and recommendations are general and cannot be applied in their current form owing to the lack of detail. Nonetheless, they can still serve as a solid basis for specific regulatory proposals and industry initiatives. Second, the study focused on only eight industries, even though the effects of AI are expected to be felt across the entire economy. Third, the white paper relied exclusively on secondary data. Finally, the author was forced to use many non-scholarly sources since the number of scholarly studies covering recent developments in AI research and development is still limited.