In response to Executive Order 14110, the National Institute of Standards and Technology (NIST) has recently released four draft publications that aim to establish frameworks and guidelines for the use and development of Artificial Intelligence (AI) technologies.
The call for public commentary on the four drafts, open until June 2, 2024, represents an opportunity for the AI community to help shape effective and secure AI practices. Falx has provided commentary and feedback, with a key focus on AI 600-1 and SP 800-218A, together with further recommendations.
Additionally, NIST has introduced the NIST GenAI Challenge, a new initiative designed to rigorously evaluate generative AI technologies. This challenge will help identify and understand the capabilities and limitations of AI technologies, particularly in distinguishing between content created by humans and machines. Quoting NIST, the objectives of the NIST GenAI evaluation include but are not limited to:
- Evolving benchmark dataset creation,
- Facilitating the development of content authenticity detection technologies for different modalities (text, audio, image, video, code),
- Conducting a comparative analysis using relevant metrics, and
- Promoting the development of technologies for identifying the source of fake or misleading information.
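To make the second objective concrete, here is a toy sketch of what a text-modality authenticity check might look like. This is purely illustrative and is not drawn from any NIST material: it assumes a single hand-picked statistic (sentence-length "burstiness", which is often more uniform in machine-generated text) and a hypothetical threshold, whereas real detection technologies rely on trained models evaluated against benchmark datasets.

```python
# Toy illustration of text-modality content authenticity detection.
# Assumption (not from NIST): machine-generated text tends to have more
# uniform sentence lengths than human writing. A single heuristic like
# this is far too weak for real use -- it only sketches the idea.
import re
import statistics


def sentence_length_burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0


def looks_machine_generated(text: str, threshold: float = 0.3) -> bool:
    """Flag text whose sentence lengths are suspiciously uniform.

    The 0.3 threshold is an arbitrary placeholder for illustration.
    """
    return sentence_length_burstiness(text) < threshold
```

A benchmark-driven evaluation like NIST GenAI would instead score detectors of this kind against large, evolving datasets of labelled human and machine content, across all of the modalities listed above.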
For those interested in contributing to the future of AI, more information and details on how to engage with these initiatives can be found on the NIST website and within the respective published documents. As we continue to explore the vast potentials of AI, initiatives like these are invaluable in ensuring that the technology develops in a way that aligns with global safety and security standards.
- Artificial Intelligence Risk Management Framework: Generative AI Profile (NIST AI 600-1)
- A Plan for Global Engagement on AI Standards (NIST AI 100-5)
Falx has created tooling that makes use of documents, frameworks, and standards such as these recently released draft publications. AI 600-1 has already been incorporated into our wider controls assessment, a sub-section of the overall tool. We help businesses navigate the security intricacies of AI technologies from both a technical and a business-risk perspective, driving engineering changes and strategy decisions.