Pryon’s Guide to Responsible AI and the RAI SHIELD Assessment

Explore how the DoD defines Responsible AI, learn about the RAI Toolkit, and discover how Pryon adheres to each step of the RAI SHIELD Assessment.

Federal government agencies and departments are exploring AI’s potential to enhance their operations, streamline information delivery, and improve citizen services. The many federal organizations under the Department of Defense umbrella are certainly no different — the DoD recognizes how AI-enabled capabilities provide “powerful new possibilities for prediction, automation, content generation, and accelerating the speed of decision.”

While expressing excitement at the possibilities AI presents, the DoD is also taking a measured approach when it comes to adopting AI technologies. In particular, the CDAO (Chief Digital and Artificial Intelligence Office) wing of the DoD has adopted a series of best practices regarding Responsible AI, enshrining this guidance in the RAI Toolkit. The RAI Toolkit helps DoD agencies evaluate whether an AI project adheres to the RAI Framework and is thus suitable for deployment.

This blog post is intended to serve as a guide to Responsible AI and the RAI Toolkit for a federal government audience. You’ll learn:

  • How the DoD defines Responsible AI (RAI)
  • Why RAI matters
  • What’s included in the RAI SHIELD Assessment
  • How Pryon adheres to the tenets of RAI SHIELD

What is Responsible AI?

Responsible Artificial Intelligence (RAI) is a dynamic approach to the design, development, deployment, and use of trustworthy AI capabilities. Before initiating any AI projects, DoD agencies must ensure these projects adhere to the RAI framework, which integrates DoD AI Ethical Principles while emphasizing the necessity for technical maturity to build effective, resilient, robust, reliable, and explainable AI.

The following DoD AI Ethical Principles help undergird the RAI Framework:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Learn more about the RAI Toolkit here.

Why does RAI matter?

The DoD places a great deal of emphasis on Responsible AI, arguing that the way AI is designed, developed, and used will determine whether its effects are good or bad. The Department says implementing AI responsibly can lead to fast innovation and greater effectiveness, which will enable further deployment of AI.

The right AI technologies, according to the DoD, can be trusted, serve as a vehicle for our national values, integrate deterrence, contribute to efficient and cost-saving innovation, and ensure the United States is ready for the future.

Responsible AI is the place where cutting-edge tech meets timeless values.

What’s the SHIELD Assessment and what’s included in it?

The RAI Toolkit contains an assessment, the SHIELD Assessment, which ensures each project identifies and addresses RAI-related risks and opportunities at all stages of the product lifecycle. SHIELD breaks these activities down into six sequential categories:

[Diagram: the six SHIELD activity categories. © 2023 Chief Digital and Artificial Intelligence Office (CDAO)]

  • Set Foundations
  • Hone Operationalizations
  • Improve & Innovate
  • Evaluate Progress
  • Log for Traceability
  • Detect via Continuous Monitoring
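To make the assessment concrete, the six categories can be treated as an ordered checklist of RAI activities tracked through a project. The sketch below is illustrative — only the category names come from the CDAO toolkit; the tracker class and item names are assumptions, not part of the official toolkit:

```python
from dataclasses import dataclass, field

# The six SHIELD categories, in order (names from the CDAO RAI Toolkit).
SHIELD_CATEGORIES = [
    "Set Foundations",
    "Hone Operationalizations",
    "Improve & Innovate",
    "Evaluate Progress",
    "Log for Traceability",
    "Detect via Continuous Monitoring",
]

@dataclass
class ShieldTracker:
    """Hypothetical tracker: one list of RAI activity items per category."""
    items: dict = field(default_factory=lambda: {c: [] for c in SHIELD_CATEGORIES})

    def add_item(self, category: str, description: str) -> None:
        if category not in self.items:
            raise ValueError(f"Unknown SHIELD category: {category}")
        self.items[category].append({"description": description, "done": False})

    def complete(self, category: str, index: int) -> None:
        self.items[category][index]["done"] = True

    def open_items(self) -> list:
        """Return (category, description) pairs still outstanding."""
        return [
            (cat, item["description"])
            for cat in SHIELD_CATEGORIES
            for item in self.items[cat]
            if not item["done"]
        ]

tracker = ShieldTracker()
tracker.add_item("Set Foundations", "Confirm AI use case with stakeholders")
tracker.add_item("Log for Traceability", "Record data sources and lineage")
tracker.complete("Set Foundations", 0)
print(tracker.open_items())
# → [('Log for Traceability', 'Record data sources and lineage')]
```

Keeping items keyed to the six categories makes it easy to show, at any review point, which SHIELD activities remain open.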

How does Pryon’s deployment methodology adhere to RAI SHIELD?

The SHIELD framework delineates four stages of the lifecycle of an AI project: Design, Develop, Deploy, and Use. As shown in the diagram above, these four stages can be divided into a total of seven phases. Here’s how Pryon’s deployment methodology aligns with these seven phases.

[Diagram: the four SHIELD lifecycle stages and their seven phases. © 2023 Chief Digital and Artificial Intelligence Office (CDAO)]

Intake

The Intake phase typically occurs during the final negotiations of a Pryon RAG Suite procurement. Having identified and engaged stakeholders, the Pryon team confirms and finalizes the AI use case(s) against the stated requirements (i.e., workflow mapping, impact goals, and AI’s role in the solution) for which Pryon’s AI solution will be implemented.

Upon finalization of procurement, the Pryon solutions team, in partnership with client stakeholders, decides whether to proceed to the Ideation phase and agrees on how the joint team will engage and track RAI-related issues. This decision builds on the feasibility and data/content availability analyses conducted during Intake.

Ideation

The Ideation phase occurs upon procurement of Pryon software and starts with a refinement of requirements as preliminarily understood during Intake, including any necessary translation to functional requirements and design specifications. During this phase, any risks or statements of concern, by either the Pryon solutions team or client stakeholders, are identified and incorporated into the ultimate delivery plan to ensure they are tracked and managed throughout the project lifecycle.

Assessment

During the Assessment phase, the Pryon team ensures all requirements and statements of concern have success/failure criteria that have been mutually agreed upon with stakeholders (3.1 HONE). Moreover, all applicable content and data to be included in the solution are collected and labeled or tagged to establish ground truth and mitigate bias (3.2 HONE). Prior to the Assessment stage, the Pryon team will have confirmed that AI is suitable for the solution (3.3 HONE) given the explicit requirements. These elements of the Assessment phase are captured by updating existing project documents.

Acquisition/Development

Having developed requirements in accordance with DoD AI Ethical Principles, the Pryon team partners with customers to ensure the following are adhered to: documentation requirements and procedures, permissions, data access, roles and responsibilities for the system, traceability, continuous monitoring, and stakeholder engagement. Moreover, the Pryon team partners with customers to develop a user acceptance testing plan and an ongoing maintenance and system evolution plan, including agreed-upon processes for updating the system, adding new content, and/or making additional configurations.

TEVV (Test & Evaluation, Validation & Verification)

Pryon RAG Suite is engineered to avoid the inherent risks of Generative AI seen in other products. Prior to the TEVV phase, the Pryon development team will already have tested the Pryon solution to mitigate the risk of adversarial attacks or model drift. During the TEVV phase, the Pryon solutions team partners with customers to test for performance, security, and adherence to requirements. As the system evolves through initial deployment, user acceptance testing, and ongoing updates driven by user requirements and content changes, documentation is updated accordingly.
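Model drift monitoring of the kind mentioned above is often grounded in a simple statistical comparison between a baseline test run and recent production behavior. The following is an illustrative sketch under assumed inputs, not Pryon’s actual implementation; real TEVV pipelines use richer tests such as the population stability index or Kolmogorov–Smirnov tests:

```python
import statistics

def drift_score(baseline: list[float], recent: list[float]) -> float:
    """Crude drift signal: absolute difference of means, scaled by
    the baseline standard deviation (a z-like statistic).
    Illustrative only -- not a substitute for a proper drift test."""
    mu_b = statistics.mean(baseline)
    sd_b = statistics.stdev(baseline) or 1e-9  # guard against zero variance
    mu_r = statistics.mean(recent)
    return abs(mu_r - mu_b) / sd_b

# Hypothetical retrieval-confidence scores: baseline test run vs. production.
baseline = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92]
recent   = [0.71, 0.68, 0.74, 0.70, 0.69, 0.73]

score = drift_score(baseline, recent)
if score > 3.0:  # the alert threshold here is an assumption
    print(f"Possible model drift detected (score={score:.1f})")
```

In practice a check like this would run on a schedule against live telemetry, with alerts feeding back into the documentation and review loop described above.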

Integration and Deployment

Pryon partners with customers to thoroughly test the deployed solution, train users, conduct user acceptance testing, educate customers on how to report support incidents, and update documentation as necessary.

Use

Once a Pryon AI solution has been deployed, the Pryon team conducts quarterly reviews with clients. These include reviewing usage reports and metrics to determine how the system may be (re)configured and what content should be updated or removed. Through this partnership, customers can evolve their Pryon solution to match the usage patterns and requirements of the user community, which often become clearer with real-world use. Moreover, if additional use cases are identified, the Pryon team can facilitate the necessary updates, collection creation, and/or configurations to support them.
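A quarterly usage review like the one described above typically starts from a few simple aggregates over the query log. Here is a hypothetical sketch — the field names (“answered”, “collection”) and metrics are assumptions for illustration, not Pryon’s actual schema:

```python
from collections import Counter

def usage_summary(query_log: list[dict]) -> dict:
    """Summarize a quarter of query records into review-ready metrics.
    Field names are hypothetical, chosen for this sketch."""
    total = len(query_log)
    answered = sum(1 for q in query_log if q["answered"])
    by_collection = Counter(q["collection"] for q in query_log)
    return {
        "total_queries": total,
        "answer_rate": answered / total if total else 0.0,
        "top_collections": by_collection.most_common(3),
    }

# Toy quarter of query records
log = [
    {"query": "travel policy", "answered": True, "collection": "HR"},
    {"query": "export controls", "answered": False, "collection": "Legal"},
    {"query": "leave accrual", "answered": True, "collection": "HR"},
]
print(usage_summary(log))
```

Metrics like answer rate and most-queried collections give the joint team concrete evidence for deciding which content to refresh and where new collections or configurations are warranted.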

Conclusion

The Responsible AI Toolkit, DoD AI Ethical Principles, and SHIELD framework provide DoD agencies with a critical foundation for understanding and assessing how to implement and deploy AI in a government context. Pryon stands ready to help DoD agencies complete the SHIELD Assessment and achieve rapid results with trustworthy AI.

Download the two-pager for a quick-hit summary of how Pryon adheres to RAI SHIELD. To learn more about the benefits of Pryon RAG Suite for federal agencies, visit our Defense and Intelligence page.
