Hiroshima Process: New International Rules for the Development of Advanced AI Systems

On October 30, the G7, the group that brings together seven of the world's major economies (the United States, the United Kingdom, Germany, Canada, France, Japan and Italy), approved a set of International Guiding Principles on Artificial Intelligence (hereinafter, AI) and a voluntary Code of Conduct for AI system developers.

The pact, called the “Hiroshima AI Process” (named after the city that hosted the G7 summit at which the process was launched), emerged from a regulatory proposal that Japan put on the table in May 2023. It has now been consolidated into two documents that aim, first, to promote safe, secure and trustworthy AI around the world and, second, to provide guidance to organizations that develop and use advanced AI systems, such as generative AI systems.

The Japanese proposal ultimately prevailed over the non-interventionist vision of the United States and the more restrictive EU policy toward AI developers. The Japanese government proposed regulation that is looser overall but firm on problems such as copyright and the exposure of personal data, which finally persuaded the members of the G7 to agree on regulatory principles for AI systems. To this end, the global framework rests on four essential pillars:

  • Analysis of the priority risks, challenges and opportunities posed by generative AI;
  • International Guiding Principles of the Hiroshima Process for organizations that develop advanced AI systems;
  • Hiroshima Process International Code of Conduct for organizations that develop advanced AI systems;
  • Project-based cooperation to support the development of responsible AI tools and best practices.

Since the International Guiding Principles and the Code of Conduct are the elements most recently defined, this analysis focuses on those two texts.

International Guiding Principles

The European Commission describes the document as a non-exhaustive list of principles that builds on the existing AI principles of the Organisation for Economic Co-operation and Development (OECD). It also indicates that the content is addressed to all AI actors, where appropriate, to cover the design, development, deployment and use of advanced AI systems.

The heading of the document indicates that the objective is to promote safe and trustworthy AI worldwide, and to guide organizations that develop and use the most advanced AI systems, with special emphasis on foundation models and generative AI systems. Alongside this objective, organizations must respect the rule of law, due process, fairness and non-discrimination, democracy, human-centricity, human rights and diversity in the design, development and deployment of advanced AI systems.

Accordingly, advanced AI systems that undermine democratic values should not be developed or placed on the market, whether because they are especially harmful to individuals or communities, because they facilitate terrorism, or because they enable misuse for criminal purposes or pose substantial risks to human rights or security.

Depending on the risks that AI developers may generate, the text sets out the following eleven principles for organizations:

  1. Take appropriate measures during the development of advanced AI systems, before and during their deployment and commercialization, to identify, evaluate and mitigate the risks that AI can cause during its lifecycle.
  2. Identify and mitigate vulnerabilities and patterns of misuse of AI systems after they are deployed and placed on the market.
  3. Publicly report on the capabilities, limitations, and appropriate and inappropriate fields of use of advanced AI systems, to increase accountability and ensure sufficient transparency.
  4. Work toward responsible information sharing and incident reporting among organizations that develop advanced AI systems. Relevant actors include governments, industry, civil society and academia.
  5. Develop, disclose and implement AI governance and risk management policies, grounded in a risk-based approach. These will take into account, among other things, privacy policies and mitigation measures.
  6. Invest in and apply robust security controls, including physical security, cybersecurity and safeguards against insider threats, throughout the AI lifecycle.
  7. Develop and implement reliable content authentication and provenance mechanisms, such as watermarking or other techniques that allow users to identify AI-generated content. These mechanisms will be implemented insofar as they are technically feasible.
  8. Prioritize research to reduce social and security risks, and encourage investment in effective mitigation measures.
  9. Give priority to the development of advanced AI systems to address the world's biggest problems, including the climate crisis, global health and education.
  10. Promote the development and adoption of international technical standards.
  11. Implement appropriate data input measures and protections for personal data and intellectual property.

The above list is not meant to be exhaustive; rather, it is a “living” document that can be modified and expanded according to the needs, challenges and risks of advanced AI systems.

Since this is not an international convention to which states subsequently accede, the members of the G7 have asked organizations that develop advanced AI systems to commit to applying the guiding principles in pursuit of safer and more trustworthy design, development, deployment and use of advanced AI systems.

Code of Conduct

In addition, a Code of Conduct has been developed that essentially builds on the International Guiding Principles described above. Its main objectives are to:

  • Identify, evaluate and mitigate risks before and during the deployment of the AI system.
  • Mitigate vulnerabilities and patterns of misuse.
  • Generate transparency about the limitations and inappropriate use of AI systems.
  • Share information responsibly with other organizations.
  • Implement security measures that protect personal data and intellectual property.

The European Commission indicates that, as with the International Guiding Principles, the document will be reviewed and updated as necessary through multilateral consultations, so that it remains up to date and fit for purpose. Likewise, to apply what was established in the Hiroshima Process, different jurisdictions may adopt the texts through their own approaches. Although these guidelines are not as exhaustive and strict as the European Artificial Intelligence Act, many of the ideas they seek to convey point in the same direction as the European standard.

Although this initiative appears, a priori, to benefit AI users, it has one major weakness: it is not a binding standard, and its application is therefore voluntary. Consequently, it is up to large corporations to decide whether to apply the guidelines to their developments or, on the contrary, to comply only with laws whose observance is unavoidable.

Alejandro Daga Godoy, Legal Counsel, Legal Army
