Secretary-General Antonio Guterres said the United Nations is "the ideal place" to adopt global standards to maximize AI's benefits and mitigate its risks.

Generative AI could have very serious consequences

He warned the council that the advent of generative AI could have very serious consequences for international peace and security, pointing to its potential use by terrorists, criminals and governments causing "horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale."

Set up an AI framework for development and governance

As a first step to bringing nations together, Guterres said he is appointing a high-level Advisory Board for Artificial Intelligence that will report back on options for global AI governance by the end of the year.

Humans must maintain control of all weapons systems

The U.N. chief also said he welcomed calls from some countries for the creation of a new United Nations body to support global efforts to govern AI, "inspired by such models as the International Atomic Energy Agency, the International Civil Aviation Organization, or the Intergovernmental Panel on Climate Change."

In a video briefing to the U.N.'s most powerful body, Jack Clark, co-founder of the AI company Anthropic, also expressed hope that global action will succeed. He said he's encouraged to see many countries emphasize the importance of safety testing and evaluation in their AI proposals, including the European Union, China and the United States.

Right now, however, there are no standards or even best practices on "how to test these frontier systems for things like discrimination, misuse or safety," which makes it hard for governments to create policies and lets the private sector enjoy an information advantage, he said.

Run the risk of regulatory capture compromising global security

"Any sensible approach to regulation will start with having the ability to evaluate an AI system for a given capability or flaw," Clark said. "And any failed approach will start with grand policy ideas that are not supported by effective measurements and evaluations."

With robust and reliable evaluation of AI systems, he said, "governments can keep companies accountable, and companies can earn the trust of the world that they want to deploy their AI systems into." But if there is no robust evaluation, he said, "we run the risk of regulatory capture compromising global security and handing over the future to a narrow set of private sector actors."

Other AI executives such as OpenAI's CEO, Sam Altman, have also called for regulation. But skeptics say regulation could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft as smaller players are elbowed out by the high cost of making their large language models adhere to regulatory strictures.

Professor Zeng Yi, director of the Chinese Academy of Sciences Brain-inspired Cognitive Intelligence Lab, told the council "the United Nations must play a central role to set up a framework on AI for development and governance to ensure global peace and security."

Zeng, who also co-directs the China-UK Research Center for AI Ethics and Governance, suggested that the Security Council consider establishing a working group to consider near-term and long-term challenges AI poses to international peace and security.