Keywords
Artificial intelligence, Latin America, AI, LATAM, AI regulations, AI and human rights, AI and technological advancement
Today’s approaches to regulating AI diverge across countries and regions. For instance, Japan’s framework emphasizes agile governance, relying on soft law, corporate cooperation, and a deregulatory approach. In contrast, the EU adopts a hard law model with a risk-based approach centered on protecting human rights. Discussing the benefits and downsides of each is valuable for LATAM’s journey toward regulating AI.
The authors identified guidelines and laws from Japan and the European Union by searching official websites and Google Scholar through 2023 and 2024. We divided the search between both researchers, one responsible for the Japanese documents and the other for the EU and Latin American ones. Documents were selected when they contained soft laws, hard laws, or independent reports about AI. The authors then conducted a document analysis of the current situation in the LATAM region as well as a comparison between the Japanese and EU approaches to AI. Through this process, the logic behind each approach was deduced and then evaluated for its suitability for the LATAM region.
LATAM faces unique challenges in regulating AI. Limited investment in innovation and research, along with insufficient venture capital, places the region below global averages in technological development. Overly restrictive regulations risk worsening this disparity. A Japanese-style approach appears more suitable for the region than a European one. In this realm, it is crucial to avoid a binary logic that positions technological growth and human rights protection as opposing objectives.
The study proposes five principles for future AI regulation in LATAM: fostering innovation, balancing risks and benefits, tailoring frameworks to regional needs, avoiding restrictive measures, and incorporating both Western and Asian perspectives. Ultimately, this study underscores the importance of context-specific, balanced regulations that address LATAM’s unique challenges and opportunities in AI governance.
The rapid and uncertain evolution of artificial intelligence (AI) has brought both promising technological advancements and significant concerns related to security, privacy, and crime. This dichotomy is often referred to as the dual nature of AI.1 In response, governments across Asia, North America, and Europe have taken steps to regulate AI, seeking to harness its benefits while mitigating risks to human rights. However, in the global pursuit to govern and legislate AI, many developing countries in the Global South, especially in Latin America (LATAM), are still taking their initial steps.
Failing to properly leverage the benefits of AI threatens to exacerbate the developmental and technological disparities between the LATAM region and the leading economies. Consequently, a thoughtful examination of a potential LATAM approach to AI becomes imperative for ensuring the continued development of these countries in the era of cutting-edge digital technology.
Although it is necessary to explore how the LATAM region should deal with AI, such exploration is still neglected in the academic literature. Research has compared ongoing regulatory efforts in different LATAM countries, but these studies lack a comparison with Asian and European policies, specifically the approaches of Japan and the European Union (EU).2 A comparison along these lines could provide valuable benefits by elucidating different motivations, objectives, and concerns, which could then be leveraged by LATAM policymakers.
Based on the above, the article investigates the most suitable approach to AI regulation in Latin American countries considering the region’s particular needs. The paper employs a qualitative methodology through a legal comparison method. Because it is meaningful to refer to the position of the EU and Japan, this article analyzes and compares the policies and regulations of the EU and Japan to elucidate the advantages and pitfalls of each framework.3 The article elaborates on the relationship between technology and human rights, providing a nuanced understanding of LATAM’s developmental needs to determine a suitable regulation type for the region.
The article first reviews the current situation of AI regulation in LATAM, discussing the development and technology gap in the region. Next, it addresses the EU’s approach to AI, which relies on hard law to protect human rights, and the Japanese framework, which implements soft law to foster the promotion of technology. This elicits a discussion of the relationship between technology and human rights, drawing attention to the dual nature of technology and its potential to both harm and enhance human rights. Finally, reflecting on the previous sections, the paper concludes by suggesting the best approach to regulating AI in the LATAM region.
The authors identified guidelines and laws from Japan and the European Union by searching official websites and Google Scholar through 2023 and 2024. We divided the search between both researchers, one responsible for the Japanese documents and the other for the EU and Latin American ones. Regarding the EU AI Act, continuous review of updates was required, given that the Act was still in its approval process at the time of writing. Documents were selected when they contained soft laws (guidelines), hard laws, or independent reports about AI. The authors then conducted a document analysis of the current situation in the LATAM region as well as a comparison between the Japanese and EU approaches to AI. Through this process, the logic behind each approach was deduced and then evaluated for its suitability for the LATAM region. This includes a discussion of both the benefits and downsides of frameworks that legislate the relationship between human rights protection and technology promotion.
The comparison of the Japanese and EU approaches suggests that a flexible, “soft law” framework might be better for the LATAM region. However, this approach must avoid a binary logic that frames technology promotion and human rights protection as conflicting objectives. The region must adopt a nuanced, context-sensitive framework that incorporates perspectives from both Western and Asian models. To this end, the study proposes five principles for regulating AI in LATAM: prioritizing innovation, balancing risks with benefits, addressing regional disparities, ensuring equitable technological growth, and protecting human rights.
In LATAM, efforts toward the governance of AI are still at the initial stages, with progress varying considerably from country to country. A key development, however, came in 2023 at the Forum for Ethics of Artificial Intelligence in LATAM and the Caribbean.4 Here, twenty LATAM and Caribbean countries collectively discussed their initiatives concerning AI regulation, leading to the signing of the Santiago Declaration. This declaration emphasizes the need for countries in these neighboring regions to work cooperatively to harness the benefits and mitigate the risks of AI technologies. It recognizes the importance of integrating the particularities of the LATAM and Caribbean regions when creating and utilizing AI technologies, emphasizing respect for human rights when evaluating AI policies. Paragraph 6 recognizes the instrumental value of technologies for the full enjoyment of human rights, especially among vulnerable groups.5
At the national level, Chile has emerged as a frontrunner in AI regulation, showcasing significant accomplishments. Colombia has also established a robust ethical framework for AI, while Mexico concentrates on bolstering privacy rights. In 2023, Argentina approved the “Recommendations for a Trustworthy AI.”6 Notably, a recent LATAM feminist forum on AI highlighted diverse perspectives; there, the Delegate of Uruguay shared ongoing concerns about regulating AI so as to balance innovation with the protection of human rights.7 Peru has enacted legislation emphasizing the importance of individuals and human rights in AI use.8 Meanwhile, Venezuela is in the process of drafting a bill addressing data protection.9
Despite such efforts, data from Oxford Insights illustrate that the LATAM region still lags far behind the leading countries. The Oxford AI Readiness Index 2023 measures the readiness of 193 countries to implement AI in the delivery of public services. It employs 39 indicators to analyze the government, technology sector, and data infrastructure of a given country.10 National results for LATAM and Caribbean countries demonstrated a substantial gap of close to forty points relative to the United States of America (USA), the Index’s leader. Regionally, LATAM ranks sixth out of nine regions, behind North America, Europe, East Asia, the Middle East, and North Africa.
While “four of the five” leading LATAM countries—Brazil, Chile, Uruguay, and Colombia—were “within the global top 40” in governmental capacity, the Index found that overall, the region “seems to be lagging in the Innovation Capacity dimension … where we find a gap of almost 10 points between the regional and global average.”11 Here, “Innovation Capacity” indicates whether a country’s technology sector has the conditions required to support innovation. It comprises five indicators measuring the time spent dealing with government regulations, venture capital (VC) availability, research and development (R&D) spending, company investment in emerging technology, and published papers on AI research. As pointed out by Oxford Insights, when the technology sector lacks strength, countries might begin to rely on foreign AI systems.12 This may deter the development of domestic technology sectors and ignore the particularities of LATAM countries as such AI systems are trained with foreign data.
The results from the Index thus demonstrate the gap between LATAM’s technology sector and those of the major world economies. They suggest that AI regulations in the region should pay more attention to this gap and focus on strengthening domestic technological investment and research while considering the potential for innovation and human development. Moreover, failing to leverage the benefits of AI could further widen the development gap between LATAM and high-income countries. In fact, before COVID-19, the region was already experiencing growing poverty rates, and the downturn of the region’s major economies (such as Peru’s) was becoming an increasingly significant issue. The recent pandemic precipitated even more serious struggles in LATAM, which in one year lost the equivalent of 30% of its progress since 1990 and became the region most damaged by the pandemic.13
This fact is of extreme importance considering the ongoing development gap between LATAM and high-income countries. According to the World Inequality Report (2022), the world is marked by a high level of income inequality and an extreme level of wealth inequality, both within and between countries. Inequalities between countries have reached their peak: in 2021, LATAM owned only 51% of the average global wealth, compared to 142% in East Asia, 230% in Europe, and 390% in North America. Additionally, the top 10% of LATAM’s population owns 55% of the national income (compared to 36% in Europe), while the bottom 50% earns 27 times less than the top 10%. Conversely, in Europe, the bottom 50% earns only 9 times less. For instance, in Brazil, one of the LATAM countries with better prospects, the bottom 50% earns 29 times less than the top 10%, while in France the difference is only seven times.
Leaders in LATAM have already recognized the need to regulate AI, considering the potential of these technologies for both enhancing and damaging human rights. The Santiago Declaration and national efforts are commendable. However, the region still needs to move from declarations to effective policies. To this end, it is crucial to consider the need to reduce the development and technology gap between LATAM and other major world economies. AI technologies could play a key role in this regard. To this end, the frameworks established by the EU and Japan provide useful models for potential ways forward.
The EU approach to AI is characterized as “holistic and hard-law-based.”14 EU efforts to regulate AI can be traced to its resolutions about robotics in 2017. The Civil Law Rules on Robotics is one of the first adopted resolutions addressing AI.15 While it certainly acknowledges the importance of technology to promote innovation, it focuses more on major safety concerns and the protection of human rights. For instance, its introduction states the need for AI development to “preserve the dignity, autonomy, and self-determination of the individual.”16 Its articles further address the social, ethical, legal, and economic concerns about AI. Article 10, for example, mentions some of the stakes at risk when using robotics, including health, freedom, privacy, integrity, dignity, self-determination, and non-discrimination. Article 13 reviews the relevant principles and values set forth by Article 2 of the Treaty of the European Union, the Charter of Fundamental Rights, and the Union Law, pointing out that all future policies should comply with these instruments. In particular, it emphasizes human dignity, equality, justice, equity, transparency, and individual and social responsibility. Article 19 adds the principles of necessity and proportionality, and Articles 20 and 43 emphasize the necessity of addressing robotics-related job loss. Overall, the resolution suggests that its focus on human rights is in part to protect the EU’s aging population, showing concerns about the potential harmful uses of technology, such as job replacement and the impact of algorithms on people’s choices.17 This is a crucial early position of the EU’s approach to AI regulation, as its influence on the later European Union AI Act (hereafter called the EU AI Act) is obvious.
The EU AI Act highlights the same rights and principles as the Civil Law Rules on Robotics and elaborates on the harmful uses of AI systems. The act was first proposed in 2021.18 It was finally approved and published in the European Official Journal on July 12, 2024, entering into force on August 2.19 It faced several debates and heavy lobbying from major tech companies that significantly delayed its approval, which had originally been expected by 2023.
The EU AI Act presents a risk-based approach with different levels of compliance burdens on AI producers and users, in line with EU values and fundamental rights. Categorizing the threats as minimal risk, limited risk, high risk, and unacceptable risk, the approach aims to mitigate the threats to fundamental rights and guarantee the safety of AI systems.20 Regarding unacceptable risk, Article 5 in Chapter II of the EU AI Act explicitly prohibits AI systems that constitute a direct and unacceptable threat to individuals.21 Here, “unacceptable risk” includes cognitive behavioral manipulation targeting people or vulnerable groups, such as voice-activated toys that encourage dangerous behavior in children. Additionally, social scoring methods that classify individuals based on behavior, socioeconomic status, or personal characteristics are deemed unacceptable. The prohibition extends to using biometric identification for the categorization of individuals, as well as to real-time and remote biometric identification systems like facial recognition.
Furthermore, the high-risk category comprises AI systems that have a negative impact on safety or fundamental rights. These threats are classified into two groups: first, those integrated into products falling under the EU’s product safety legislation, covering diverse items like toys, aviation, cars, medical devices, and elevators. The second group includes AI systems that fall within specific critical areas including the management and operation of critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, and legal interpretation.22
This risk-based approach responds to four objectives, as explained in the proposal’s explanatory memorandum:
▪ ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
▪ ensure legal certainty to facilitate investment and innovation in AI;
▪ enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
▪ facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.23
To achieve these four objectives, the EU AI Act was proposed as the proportional, necessary, and most effective policy instrument.24 It covers every AI system and user within the EU market (Article 2: Scope). It also provides explicit enforcement measures (Section 3) and penalties (Chapter XII). Moreover, it is a generalist regulation that does not limit its scope to a specific sector, but rather covers every application, user, and provider of an AI system.25
Notably, the concern to secure fundamental rights lies at the core of the risk-based approach. In fact, the proposal’s principal text mentions the phrase “fundamental rights” 81 times. Furthermore, two of the four above objectives emphasize the respect and effective enforcement of the existing law on “fundamental rights.”26 Additionally, the proposal stresses the improvement of fundamental rights protection when explaining the reasons for adopting a regulation in the form of a legal instrument.27 In particular, it expressly lists rights referring to the EU Charter of Fundamental Rights. These rights include the right to human dignity (Article 1), privacy and the protection of personal data (Articles 7 and 8), non-discrimination (Article 21), equality between women and men (Article 23), as well as an effective remedy, a fair trial, defense, and the presumption of innocence (Articles 47 and 48). Additionally, the proposal expresses its intention to prevent a so-called “chilling effect” on the rights to the freedom of expression (Article 11) and the freedom of assembly (Article 12).28 The proposal also covers workers’ rights to fair and just working conditions (Article 31), consumer protection (Article 28), the rights of the child (Article 24), and the integration of persons with disabilities (Article 26). Finally, it covers the right to a high level of environmental protection and the improvement of the environment’s quality (Article 37) in relation to health and safety obligations.
This comprehensive overview illustrates the EU’s focus on human rights when regulating AI. The proposal names this approach “Responsible innovation,” alleging that restrictions to the rights of freedom to conduct business (Article 16) and the freedom of art and science (Article 13) are proportionate and restricted to the minimum necessary to tackle high-risk AI technology development and use.29
It is important to note, however, that nowhere does the EU AI Act proposal fully explain why it considers the risk-based approach proportionate and necessary. Only Section 2.3 argues that a “risk-based approach” is “proportional and necessary.”30 Additionally, according to Section 3.3, different policy options were weighed against their economic and societal impacts while keeping the overall focus on potential impacts on fundamental human rights. However, the risk-based approach was chosen as the most effective means of achieving the four objectives.31 Nevertheless, some legal scholars have already voiced uncertainties about the approach.32
Some questions remain. First, why can’t alternative policy options achieve the objectives outlined in the proposal? Put differently, why were other policy choices deemed to be likely to fail to protect human rights or be unable to protect them as effectively as a hard law, risk-based approach? Second, is there any hierarchy between the four objectives of the risk-based approach?33 This is a crucial matter given that a focus on safeguarding fundamental rights explains the strictness of the risk-based approach. More precision on these aspects could have clarified whether the EU AI Act was truly the optimal policy choice.
The Japanese approach is regarded as “sector-specific and soft-law-based.”34 One feature that distinguishes it from the European approach is the attempt to ensure that technology, including AI, has a positive impact on society. As of February 2024, there is no statute that holistically regulates the development and use of AI in Japan. The Japanese stance is indicated in the report prepared by the Expert Group on How AI Principles Should be Implemented, which was initiated by the Ministry of Economy, Trade and Industry (METI) in 2021. This report notes that AI innovation and development are so rapid and complex that laws and regulations struggle to keep up. As a result, rule-based regulation that attempts to strictly control the actions of innovators can hinder innovation. In such circumstances, rule-based governance is not appropriate, and a goal-based approach “that can guide entities such as companies to the value to be attained” is preferable.35 Accordingly, the report claims that instead of regulating on the basis of legislation, “an intermediate rule like a guideline with multi-stakeholders” should be set.36
One important relevant concept in the Japanese stance is that of agile governance, which was mentioned in the report of the Study Group on New Governance Models in Society 5.0, established by the METI in 2021. In Japan, AI governance is a part of the governance of “‘Society 5.0,’ which is a human-centered society where high integration of cyberspace and physical space can promote economic development and solve social issues.”37 Changes in Society 5.0 are rapid and difficult to predict.38 In such circumstances, traditional approaches to laws and regulations, which emphasize the government’s role in regulating private actors through various enforcement mechanisms, face considerable challenges.39 In this context, agile governance is proposed as a more appropriate alternative. Agile governance has multi-layered features, and both governments and corporations have roles to play.40 Thus, the government does not make laws and regulations based on a traditional rule-based approach, and the corporation’s voluntary activities are emphasized.41
Instead, agile governance provides principles and guidelines that are not legally binding. Along these lines, the Cabinet Office issued the Social Principles of Human-Centric AI in 2019, which highlighted the following principles: human-centric, education/literacy, privacy protection, ensuring security, fair competition, fairness, accountability, transparency, and innovation.42 These are in line with the basic principles guiding the current Japanese government. Meanwhile, the Governance Guidelines for Implementation of AI Principles were published in 2022. These general guidelines state that corporations that develop and operate AI systems are expected to set the goals of their AI governance.43 More specific regulations in Japan include the Digital Platform Transparency Act, the Financial Instruments and Exchange Act, the Act on the Protection of Personal Information, the Road Traffic Act, the Road Transport Vehicle Act, the Installment Sales Act, the High Pressure Gas Safety Act, the Copyright Act, and the Unfair Competition Prevention Act.44
Based on the above, some corporations have developed their own guidelines. For instance, Sony created the Sony Group AI Ethics Guidelines in 2018 and updated them in 2021. These guidelines are based on eight principles: supporting creative lifestyles and building a better society, stakeholder engagement, the provision of trusted products and services, privacy protection, respect for fairness, the pursuit of transparency, the evolution of AI, and ongoing education.45 Similarly, NEC Corporation enacted the NEC Group AI and Human Rights Principles in 2019. These principles include fairness, privacy, transparency, the responsibility to explain, proper utilization, AI and talent development, and dialogue with multiple stakeholders.46 Fujitsu Limited also developed the Fujitsu Group AI Commitment in the same year, which strives to provide value to customers and society with AI, create human-centric AI, build a sustainable society with AI, and develop AI that respects and supports people’s decision-making, emphasizing transparency and accountability in AI implementation as part of its corporate responsibility.47
The above corporate principles and guidelines were all prepared before the Governance Guidelines for Implementation of AI Principles were published in 2022. Other corporations have since established their own guidelines and principles in response. In 2022, for example, Panasonic established the Panasonic Group AI Ethics Principles, which include creating a better life and society, prioritizing safety, respecting human rights and fairness, transparency and accountability, and protecting customers’ privacy.48 In 2023, Epson prepared the Epson Group AI Ethical Principles, which promote the coexistence of humans and AI, creating new value through collaboration, accountability, safe and secure data distribution, and responsible development.49
Many of these principles discuss the relationship between humans and AI. This indicates that companies that develop and operate AI do not only focus on the technological development of AI but also emphasize its relationship with humans. Many of them also mention social values such as fairness, accountability, and sustainability. It is also interesting that each corporation has its own particular foci: some mention the importance of collaborating with other stakeholders, others discuss education, and some call for the spread of responsible AI. This indicates an overall consistency among the corporations’ principles alongside the unique foci of each corporation.
There are some issues worth noting in the Japanese approach. The first is the context. In Japan, the governance of AI is discussed in the context of science and technology policymaking, and this influences the nature of the Japanese stance toward AI governance. As noted above, Japanese AI governance is dictated by Society 5.0, a concept proposed by the Japanese government’s 5th Science and Technology Basic Plan, which places AI governance within the science and technology sector. Furthermore, the METI, which deals with Japanese industry, covers the issues of AI. So long as AI governance is considered solely within the context of science and technology, it is likely that the Japanese approach will continue to focus on the promotion of technology. If, for example, the Japanese Human Rights Bureau were to initiate the creation of AI guidelines, its approach could be different.50
Another issue with the current system, from a practical perspective, is the possibility of limited commitment by corporations. Even if it is true that multiple actors need to seriously consider their approaches to AI, the current Japanese stance pursuant to the Governance Guidelines for Implementation of AI Principles is that corporations need to develop goal-based principles only if necessary, and may otherwise merely follow the Social Principles of Human-Centric AI prepared by the Cabinet Office. In this case, corporations do not have to work on AI governance if they are not strongly committed to it. This could create future problems.
The Japanese approach may change with time. For instance, some Diet members from the Liberal Democratic Party, the current ruling party, claim that the guidelines on AI need to be legislated.51 The Minister of State for Science and Technology Policy, Takaichi Sanae, has stated that the Cabinet Office needs to further research the legislation of other states on the subject.52 With the EU’s enactment of the AI Act, there is a possibility that the necessity of legislation will be discussed in Japan.
In AI governance, it is essential to understand and be aware of the issues related to both technology and human rights. Technology can be used to violate human rights, but also to promote them. Hence, balance and contextual limitations are necessary. An overprotective framework may stifle innovation, preventing countries from achieving potential growth, while an extremely permissive one may pose intolerable risks to human rights.
As stated by the Human Development Report, technological advances within proper risk management can lead to an increase in people’s capabilities and agency.53 Indeed, technological progress can enable access to education, healthcare, and financial services, among others, increasing choices and potentially improving quality of life. From a utilitarian perspective, technology plays a crucial role in facilitating the promotion of human rights. AI can assist with image recognition and gathering data on rights abuses. For instance, this might include performing analyses of geospatial images to detect mass human rights violations in remote regions or contamination analysis of soil and water to determine the human rights impacts of mining activities.54 Additionally, nations can utilize satellite information to monitor displaced populations, and forensic technology supports law enforcement agencies in reconstructing crime scenes and ensuring accountability for perpetrators.
However, while emerging technologies offer myriad benefits for human rights, they also harbor potential risks. Authoritarian regimes employ surveillance tools to monitor dissidents and vulnerable populations, and the proliferation of “deepfakes” threatens democratic processes and women’s rights. Moreover, as Land and Arison point out, using technology to protect human rights requires evaluating the equitable distribution of a given technology, as underlying conditions play a crucial role in the accessibility of technology to every individual.55 For instance, prepaid water meter technology in South Africa has faced serious criticism for preventing access to water for the poor, suggesting that technology can produce more harm if applied without considering social imbalances.56 Hence, a policy that promotes innovation in the use of technology for enhancing human rights simultaneously requires awareness of the risks that technology poses to human rights and efforts toward equal access.
The European Union AI Act, as stated earlier, is considered an instrument that places strict restrictions on AI, limiting the freedoms to conduct business and of art and science in what it calls responsible innovation. Nevertheless, the act faced serious delays due to concerns about unnecessary restrictions on technological progress, suggesting that it might not be as balanced as alleged. Indeed, representatives from various companies in the EU addressed an open letter to the European Commission, the European Council, and the European Parliament in July 2023. Signed by representatives from 150 businesses, including Siemens in Germany and Airbus in France, the letter criticized the ineffectiveness and bureaucratic approach of the EU AI Act. The letter called for a broad, principled, and risk-based approach rather than a rigid one, raising concerns about the act’s potential to jeopardize Europe’s competitiveness and technological sovereignty. According to the letter’s reasoning, an eventual approval of the act would impose significant compliance costs on European companies, forcing them to move their activities outside the Union.57 The European Tech Alliance takes a similar position. In its November 2023 statement, this organization of 30 leading European tech companies stated that current EU regulations could demand up to 30% of EU tech companies’ resources for compliance.58
Furthermore, although the European Union presents its proposal as a balance between innovation, technology, and human rights, this is only one side of the coin. While the proposal ensures the protection of human rights from harmful uses of technology, it might not comprehensively consider the potential of AI technologies to further the enjoyment of human rights. The risk-based approach in the act reflects a stronger concern about AI’s threats to human rights than about its possibility of enhancing them. In this sense, the act pays excessive attention to AI’s negative effects and fails to properly consider its positive potential. An excessive focus on such negative aspects fosters a negative view of technology as a whole, which could delay the use of cutting-edge technology to enhance human rights.
Like the European position, the Japanese approach also places humanity at the core of its regulations. The key difference lies in how each chooses to leverage the benefits of AI for human rights. While the Japanese regulations can be regarded as weak, AI moves in a fast-paced environment, and massive changes can occur within a year or even just a few months. Hard law regulations in this realm may be inadequate to keep up with these changes, and the Japanese concept of agile governance might be more effective in such a context.
Flexibility is thus a key feature of the current Japanese stance. Because the current regulations are not based on binding law, theirs is a soft approach rather than a hard one. This is the biggest difference between the European and Japanese approaches to technology. One assumption in the European approach is a binary understanding of technology and human rights, in which the former is a threat to the latter. The Japanese stance, however, transcends this binary. To those who embrace the binary, the Japanese approach may appear weak; but if we grant technology equally positive aspects, the Japanese stance becomes more appropriate. Indeed, focusing on the benefits of AI rather than on its threats allows us to leverage the positive aspects of this technology. Here, AI is considered not a threat but a useful instrument for humans. Its full impact is nonetheless still unknown, and more research is needed on how effectively the Japanese approach counters the potential risks AI systems pose to human rights.
This section returns to the situation of LATAM, a region characterized by economic inequalities. The gap between countries in this region and richer countries lies not only in wealth but also in lagging development across various sectors, technology among them. The objective is then clear: the LATAM region must aim to reduce this development gap, which implies improving key sectors such as technology.
AI plays a crucial role in this matter. As discussed above, this technology has the potential to promote innovation, which could enable the greater enjoyment of human rights. Hence, the region should avoid a binary logic that places technology and human rights in opposition. Instead, the LATAM region should prioritize regulations that recognize the essential role of technology in improving the condition of human rights. Rather than binaries or negative perspectives that emphasize AI’s threats to human rights, the focus should be on closing LATAM’s technology gap. This cannot be achieved by focusing solely on human rights protections; rather, it requires placing technology in a key cooperative role in the enjoyment of rights.
Furthermore, it is crucial to adopt an approach that focuses on LATAM’s particularities and unique cultures rather than one that simply copies foreign frameworks. A balanced approach is needed: one that both provides safeguards against immediate threats to human rights and secures the immediate and long-term benefits of AI for their enjoyment. In this context, a risk-based policy with a strong emphasis on AI’s threats to human rights restrains the potential of using AI to promote their enjoyment. Policy should instead move beyond protecting human rights to promoting them. This approach would better address LATAM’s objective of reducing its technology and development gap. It does not leave rights without protection, but shifts the focus from protection alone to the widespread promotion of human rights, going beyond harm reduction or avoidance toward the active promotion of benefits. It is within this logic that AI policy should be considered. Strengthening the technology sector in LATAM must be seen as a path to fostering human rights, not its opposite.
The Japanese approach on this matter thus seems a more advantageous model than the EU AI Act. By relying on previously established regulations to counter AI threats, under the competence of science and technology policymaking, it can focus on developing a strong technology sector. This paves the way for new technologies that may extend the enjoyment of human rights, ultimately improving human development. The LATAM region could achieve its technology and development objectives by following a similar strategy.
Measures to protect human rights must be designed in light of the region’s objectives. When certain AI technologies pose a threat to individuals or national security, they can be subject to prohibition and special sanctions. However, AI regulations in LATAM should avoid costly compliance measures that may negatively impact innovation. In other words, technology can contribute to the promotion of human rights. By following a comprehensive strategy like Japan’s and avoiding binarism, the region can strengthen its technology sector. Combined with a strategy for distributing the benefits of AI to all individuals, this would allow the LATAM region to leverage emerging technologies and narrow its development gap.
This study has analyzed both the Japanese and EU approaches to AI regulation to propose a model for the LATAM region. The Japanese approach relies on soft law and corporate cooperation through the concept of agile governance. Japan adopted a deregulatory approach of guidelines and principles, acknowledging regulations’ inability to evolve as fast as technology. However, it is important to bear in mind that the Japanese approach operates solely in the context of science and technology policy, which facilitates framing AI as a tool for innovation and technological development.
Conversely, the EU AI Act is not restricted to the technology sector. Perhaps for this reason, the EU AI Act proposal places greater weight on human rights protection, adopting a hard law, risk-based approach focused on the risks AI poses to human rights. The act prohibits specific AI systems as unacceptable risks and stipulates special procedures for high-risk AI, in addition to specific sanctions if the regulation is infringed. In this sense, the EU approach emphasizes the negative potential of AI more than the Japanese approach does.
Based on a comparison of both approaches, this study has drawn attention to the dual nature of technological progress: AI can be used both to violate human rights and to enhance them. With this dual nature in mind, and using data from Oxford Insights, the study has also noted the technology gap in the LATAM region. Shortfalls in innovation, scientific research, and venture capital place LATAM below the global average in technological development. In this regard, this paper argues that overly restrictive regulations will not reduce the technology gap between LATAM and the major global economies.
On the contrary, the Japanese concept of agile governance stands out as a useful approach for the LATAM region’s needs in technology promotion. This does not, however, imply ignoring the risks AI poses to human rights. Meanwhile, an approach like the EU AI Act should be evaluated with the impact of such regulations on the technology sector in mind. High compliance costs could exacerbate the existing weakness of LATAM’s technology sector. Consequently, regulations in the region should avoid a binary logic that places excessive focus on AI’s risks. In other words, when balancing interests, technology promotion should not be seen as opposed to human rights. Future regulations in LATAM must be mindful that the equitable distribution of the benefits of emerging technologies can enhance human rights.
There are some limitations to this study. Because it was undertaken while the EU AI Act was still in the legislative process, it has focused on the initial proposal. While the final act remains substantially consistent with the core of that proposal (namely, a risk-based approach focused on human rights protections), some minor divergence between the initial proposal and the final act is to be expected.
Future research should discuss the final act and analyze its various impacts on innovation and technology promotion. That said, only time will reveal the full impact of the EU AI Act on both human rights protection and innovation. Additionally, future research should explore how the Japanese approach deals with AI risks to human rights, which is crucial for assessing the suitability of laxer regulations.
Overall, this study stresses the need to consider LATAM-specific requirements regarding AI regulation. It illustrates the need for a comprehensive consideration of AI risks and benefits and context-specific regulation, alongside a need to broaden our view to include Asian perspectives.
As a result of this study, the following suggestions can be proposed for regulating AI in LATAM. These are meant to serve as a supplement to existing general guidelines and principles from international organizations:
1. Latin America should adopt a regulatory approach that promotes innovation and technological advancement similar to the Japanese agile governance model. This model acknowledges the inability of regulations to evolve as quickly as technology and emphasizes cooperation with companies rather than restrictive measures on them.
2. Regulations should balance AI’s potential risks to human rights with its benefits, recognizing its dual nature. The focus should not solely be on the risks but also on how AI can enhance human rights.
3. Regulatory frameworks should be tailored to LATAM’s specific needs, particularly its technology gap and economic conditions. A restrictive approach could hinder innovation and worsen the existing gap in technological development.
4. Technological development and human rights protections should not be seen as opposing objectives. Future regulations must avoid a binary logic and ensure that the benefits of AI are equitably distributed, securing both technological growth and human rights protection.
5. Latin American countries should consider not only Western models but also Asian perspectives when developing a holistic, balanced, and future-proof AI regulatory framework.
All data utilized in this study were sourced from publicly available materials, including academic articles accessed via Google Scholar and official documents from government websites. The search used keywords such as “LATAM AI”, “Japan AI”, and “EU AI Act”. Selection criteria included relevance to the research question, publication recency, and authoritativeness of the source (government reports and official policy documents).
The documents employed in chapters 3 and 4 can be accessed through the website of the Japanese Ministry of Economy, Trade and Industry (METI), and the European Union legislation website (EUR-Lex). The specific sources and corresponding links are detailed in the reference section of the manuscript. These resources are freely accessible, and readers can obtain the same data by visiting the provided links. No additional permissions are required, and no dataset was used in this study.
Links to the analyzed publications:
European Union:
Japan:
▪ https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20210709_8.pdf
▪ https://www.meti.go.jp/press/2021/07/20210730005/20210730005-2.pdf
▪ https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf
▪ https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20220128_2.pdf
1 Brundage, M., et al., The malicious use of artificial intelligence: Forecasting, prevention, and mitigation (2018), https://arxiv.org/abs/1802.07228.
2 For a comparison of policies in LATAM, see Filgueiras, F. Designing artificial intelligence policy: Comparing design spaces in Latin America (2023). https://onlinelibrary.wiley.com/doi/abs/10.1111/lamp.12282
3 Most of this paper was written during the ongoing discussions about the EU AI Act, and the final text was not yet approved. Hence, much of this paper refers to the proposed text and not the approved version. This article therefore uses the nomenclature “EU AI Act” to refer to the approved act and “EU AI Act proposal” to refer to the original proposal.
4 Foro Internacional de la Alianza Latinoamericana para la Inteligencia Artificial (Foro IALAC, 2023), https://foroialac.org/vivo/.
5 Declaración de Santiago (Ministerio de Ciencia, Tecnología, Conocimiento e Innovación de Chile, 2023, p. 2), https://minciencia.gob.cl/uploads/filer_public/40/2a/402a35a0-1222-4dab-b090-5c81bbf34237/declaracion_de_santiago.pdf.
6 Recomendaciones para una inteligencia artificial fiable (Gobierno de Argentina, 2023), https://www.argentina.gob.ar/sites/default/files/2023/06/recomendaciones_para_una_inteligencia_artificial_fiable.pdf.
7 Segunda Cumbre Ministerial sobre la Ética de la Inteligencia Artificial en América Latina y El Caribe (2024), https://foroialac.org.
8 Ley N° 31814-Ley que promueve el uso de la inteligencia artificial en favor del desarrollo económico y social del país (2023), https://cdn.www.gob.pe/uploads/document/file/5038703/ley-que-promueve-el-uso-de-la-inteligencia-artificial-en-fav-ley-n-31814.pdf?v=1692895308
9 Segunda Cumbre Ministerial sobre la Ética de la Inteligencia Artificial en América Latina y El Caribe (2024), https://foroialac.org.
10 Oxford Insights. (2023, December). Government AI readiness index 2023 (p. 9). https://oxfordinsights.com/wp-content/uploads/2023/12/2023-Government-AI-Readiness-Index-2.pdf
13 United Nations Development Program. (2022). Human development report 2021/2022: Uncertain times, unsettled lives: Shaping our future in a transforming world (ISBN: 9789211264517). United Nations Development Program. https://hdr.undp.org/system/files/documents/global-report-document/hdr2021-22reportenglish_0.pdf
14 Japan’s approach to AI regulation and its impact on the 2023 G7 presidency (Center for Strategic and International Studies [CSIS], 2023), https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency.
15 European Parliament. (2017). European Parliament resolution of 16 February 2017 on civil law rules on robotics (2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html
18 European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM/2021/206 final). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
19 Initially, the agreement (which was expected by mid-2023) faced several obstacles due to industry leaders’ concerns about the impact on business and the proposal’s slowness to adapt to rapid AI development. Principal disagreement lay in the rules for foundation models following a tiered approach, introducing tighter rules for the most powerful ones that were bound to have more impact on society. For instance, Cédric O, France’s former state secretary for digital and a cofounder of the AI startup Mistral, was lobbying for the company, arguing that the AI Act could “kill the company.” EU’s AI Act negotiations hit the brakes over foundation models (Euractiv, 2023), https://www.euractiv.com/section/artificial-intelligence/news/eus-ai-act-negotiations-hit-the-brakes-over-foundation-models/.
21 European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) N° 300/2008, (EU) N°167/2013, (EU) N° 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689&qid=1725175363474
22 EU AI Act: First regulation on artificial intelligence (European Parliament, 2023, June 1), https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. The threat level of unacceptable and high risk demands prohibition for the former and registration in an EU database for the latter.
25 Hard law and soft law regulations of artificial intelligence in investment management (Cambridge Yearbook of European Legal Studies, 2023, p. 271), https://www.cambridge.org/core/journals/cambridge-yearbook-of-european-legal-studies/article/hard-law-and-soft-law-regulations-of-artificial-intelligence-in-investment-management/94A747407D4CA9226C6CCAE3E3E6616E.
29 Ibid. & European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence. Recital 138.
32 For example, see Ebers, M., Truly risk-based regulation of artificial intelligence: How to implement the EU’s AI Act (2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4870387.
34 Habuka notes that the United Kingdom also uses this approach, and places the USA between the UK’s approach and the “holistic and hard-law-based” approach of the EU (Center for Strategic and International Studies [CSIS], 2023), https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency.
35 AI Governance in Japan Ver. 1.1 (Ministry of Economy, Trade and Industry [METI], 2021, p. 20), https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20210709_8.pdf.
37 Governance Innovation Ver. 2 (METI, 2021, p. 2), https://www.meti.go.jp/press/2021/07/20210730005/20210730005-2.pdf.
38 Governance Innovation Ver. 2 (METI, 2021, p. 59), https://www.meti.go.jp/press/2021/07/20210730005/20210730005-2.pdf
42 Human-Centric AI (Cabinet Secretariat of Japan), pp. 7–11, https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf.
43 Governance Guidelines for Implementation of AI Principles (METI, 2022), pp. 17–18, https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20220128_2.pdf.
48 Panasonic and NTT to establish a new joint venture company (Panasonic, 2022, August 29), https://news.panasonic.com/global/press/en220829-2
49 AI ethical principles (Epson, n. d.), https://corporate.epson/en/philosophy/epson-way/principle/ai-ethical-principles.html
50 In Japan, the Ministry of Justice deals with human rights issues. See Article 2, Items 26-29, Act for Establishment of the Ministry of Justice. In the EU, the European Commission is the primary body responsible for AI-related issues.
51 政策提言 [Policy Proposal]. (2023, December 22). Liberal Democratic Party of Japan. https://www.jimin.jp/news/policy/207268.html
52 Takaichi, S. (2023, December 22). 会見 [Press conference]. Cabinet Office of Japan. https://www.cao.go.jp/minister/2309_s_takaichi/kaiken/20231222kaiken.html
53 United Nations Development Program. (2022). Human development report 2021/2022: Uncertain times, unsettled lives: Shaping our future in a transforming world. Pg. 160,161. https://hdr.undp.org/system/files/documents/global-report-document/hdr2021-22reportenglish_0.pdf
54 Wang, X., Liu, J., Zhang, Y., & Zhao, W. (2023). Evaluating the impact of industrial pollution on water quality using machine learning techniques. Water, Air, & Soil Pollution, 234(8), Article 694. doi:10.1007/s11270-023-06694-x; The Graduate Institute of International and Development Studies. (2023). Artificial intelligence and human rights: Final report. https://www.graduateinstitute.ch/sites/internet/files/2023-09/AI%26HR%20Final%20Report%20-%20Publication.pdf
55 New technologies for human rights: Law and practice (Cambridge University Press, 2018), https://www.cambridge.org/core/books/new-technologies-for-human-rights-law-and-practice/A6473E8A4F6A9ED12675E54A03318802.
56 Prepaid meters are obstacles to accessing water in Africa (Truthout, 2023), https://truthout.org/articles/prepaid-meters-are-obstacles-to-accessing-water-in-africa/.
57 Various authors. Open letter to the representatives of the European Commission, the European Council and the European Parliament. https://drive.google.com/file/d/1wrtxfvcD9FwfNfWGDL37Q6Nd8wBKXCkn/view
58 European tech companies face an overwhelming amount of rules harming their ability to grow and compete (EUTech Alliance, 2023), http://eutechalliance.eu/european-tech-companies-face-an-overwhelming-amount-of-rules-harming-their-ability-to-grow-and-compete/.