Fostering a secure and trustworthy environment for Artificial Intelligence (AI) is crucial. To that end, the United States National Institute of Standards and Technology (NIST), under the Department of Commerce, has unveiled the Artificial Intelligence Safety Institute Consortium. Announced on November 2, 2023, this initiative represents a collaborative effort to establish a new measurement science focused on identifying proven, scalable techniques and metrics for the responsible development and use of AI, especially advanced AI systems.
Consortium’s Core Objective and Collaborative Focus
The primary objective of the Consortium is to navigate the intricate landscape of risks posed by AI technologies while safeguarding the public and encouraging continued innovation. NIST aims to harness the diverse interests and capabilities of the broader community to identify proven, scalable, and interoperable measurements and methodologies essential for the responsible development and use of trustworthy AI.
Key activities outlined for the Consortium include collaborative Research and Development (R&D), shared projects, and the evaluation of test systems and prototypes. This collective effort is a direct response to the Executive Order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued on October 30, 2023, which set out a comprehensive set of priorities for AI safety and trust.
Call for Participation and Cooperation
To achieve these ambitious objectives, NIST has invited interested organizations to contribute technical expertise, products, data, and/or models in support of the AI Risk Management Framework (AI RMF). This invitation, part of NIST’s effort to collaborate with non-profit organizations, universities, government agencies, and technology companies, is a proactive step toward a diverse and inclusive approach to the challenges posed by AI. Collaborative activities within the Consortium are slated to commence no earlier than December 4, 2023, contingent upon receipt of a sufficient number of completed and signed letters of interest. Participation is open to all organizations capable of contributing to the Consortium’s activities; selected participants will be required to enter into a Consortium Cooperative Research and Development Agreement (CRADA) with NIST.
Addressing AI Safety Challenges and Regulatory Milestones
The establishment of the Consortium marks a significant advancement for the United States in catching up with other developed nations in crafting regulations governing AI development. This is particularly pertinent in areas such as user and citizen privacy, security, and the mitigation of unintended consequences. The move is reflective of a milestone under President Joe Biden’s administration, showcasing a commitment to adopting specific policies to manage AI in the United States.
The Consortium will be instrumental in developing new guidelines, tools, methods, and best practices to facilitate the evolution of industry standards for developing and deploying AI in a safe, secure, and trustworthy manner. It arrives at a pivotal time, not only for AI technologists but for society as a whole, helping ensure that AI aligns with societal norms and values while fostering innovation. As the Consortium takes shape, it is set to become a cornerstone of the broader AI development landscape, promoting responsible practices and shaping the trajectory of AI technology in the years to come.
Personal Note From MEXC Team
Check out our MEXC trading page and find out what we have to offer! There are also a ton of interesting articles to get you up to speed with the crypto world. Lastly, join our MEXC Creators project and share your opinion about everything crypto! Happy trading!