A look at NIST's Latest AI Draft Publications

Introducing the Latest NIST Draft Publications

In response to Executive Order 14110, the National Institute of Standards and Technology (NIST) has recently released four draft publications that aim to establish frameworks and guidelines for the use and development of Artificial Intelligence (AI) technologies.

  • AI 600-1 and SP 800-218A provide guidance on managing risks associated with generative AI technologies, such as chatbots and text-based generative tools. They serve as supplemental resources to the established AI Risk Management Framework (AI RMF) and the Secure Software Development Framework (SSDF).
  • AI 100-4 advocates methods for ensuring transparency in AI-generated or AI-altered digital content, known as synthetic content.
  • AI 100-5 outlines a strategy for international collaboration in the development of AI standards.

The call for public commentary on the four drafts by 2 June 2024 represents an opportunity for the AI community to contribute to the shaping of effective and secure AI practices. Falx has provided commentary and feedback, focusing primarily on AI 600-1 and SP 800-218A, along with further recommendations.

Additionally, NIST has introduced the NIST GenAI Challenge, a new initiative designed to rigorously evaluate generative AI technologies. This challenge will help identify and understand the capabilities and limitations of AI technologies, particularly in distinguishing between content created by humans and machines. Quoting NIST, the objectives of the NIST GenAI evaluation include but are not limited to:

  • Evolving benchmark dataset creation,
  • Facilitating the development of content authenticity detection technologies for different modalities (text, audio, image, video, code),
  • Conducting a comparative analysis using relevant metrics, and
  • Promoting the development of technologies for identifying the source of fake or misleading information.
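As a rough illustration of the comparative-analysis objective above, the sketch below scores two hypothetical content-authenticity detectors against a small, made-up benchmark using common metrics (AUROC and accuracy via scikit-learn). The detector names, scores, and labels are illustrative assumptions only, not NIST GenAI data, metrics, or tooling.

```python
# Minimal sketch: comparing two hypothetical detectors that try to tell
# machine-generated content from human-written content. All numbers here
# are invented stand-ins, not NIST GenAI benchmark data.
from sklearn.metrics import roc_auc_score, accuracy_score

# Ground truth for a tiny benchmark: 1 = machine-generated, 0 = human-written.
labels = [1, 0, 1, 1, 0, 0, 1, 0]

# Hypothetical probability scores ("how likely is this machine-generated?")
# from two candidate detectors.
detector_scores = {
    "detector_a": [0.91, 0.12, 0.78, 0.65, 0.30, 0.08, 0.85, 0.40],
    "detector_b": [0.60, 0.55, 0.70, 0.52, 0.48, 0.45, 0.66, 0.50],
}

for name, scores in detector_scores.items():
    # Threshold the scores at 0.5 to get hard predictions for accuracy.
    predictions = [1 if s >= 0.5 else 0 for s in scores]
    print(
        f"{name}: AUROC={roc_auc_score(labels, scores):.2f}, "
        f"accuracy={accuracy_score(labels, predictions):.2f}"
    )
```

In practice, an evaluation of this kind would run over curated benchmark datasets spanning multiple modalities (text, audio, image, video, code) rather than a handful of hand-written scores, but the basic shape, labeled data, detector outputs, and agreed metrics, is the same.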

For those interested in contributing to the future of AI, more information on how to engage with these initiatives can be found on the NIST website and within the respective published documents. As we continue to explore the vast potential of AI, initiatives like these are invaluable in ensuring that the technology develops in a way that aligns with global safety and security standards.

View the NIST AI Governance Documents

  1. Artificial Intelligence Risk Management Framework: Generative AI Profile (NIST AI 600-1)

  2. Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST SP 800-218A)

  3. Reducing Risks Posed by Synthetic Content (NIST AI 100-4)

  4. A Plan for Global Engagement on AI Standards (NIST AI 100-5)

Falx’s AI Security Assessments

Falx has created tooling that operationalizes documents, frameworks, and standards such as these recently released draft publications. AI 600-1 has already been incorporated into our wider controls assessment, a sub-section of the overall tool. We help businesses navigate the security intricacies of AI technologies from a technical and business-risk perspective, driving engineering changes and strategy decisions.

Contact Falx for Advanced AI Cyber Security Solutions