




















The Roadmap outlines five key lines of effort that form the foundation of CISA's strategy. These lines of effort provide a framework for AI adoption, ensuring that it is not only technologically sound but also ethically responsible and secure.
CISA intends to use AI-enabled software tools to strengthen cyber defense and support the critical infrastructure mission. The agency is committed to adopting AI responsibly, ensuring ethical and safe use consistent with the Constitution and all applicable laws and policies.
CISA will assess and assist in the secure design of AI-based software across a diverse array of stakeholders. The goal is to develop best practices and guidance for secure and resilient AI software development and implementation.
The agency plans to assess and recommend mitigation strategies for AI threats facing U.S. critical infrastructure. This work will be done in partnership with other government agencies and with industry partners that develop, test, and evaluate AI tools.
CISA will collaborate on AI-enabled software efforts with interagency and international partners as well as the public. This includes coordinating with international partners to advance global AI security best practices and principles.
CISA is committed to educating its workforce on AI software systems and techniques. The agency will actively recruit interns, fellows, and future employees with AI expertise. CISA will ensure that internal training reflects—and new recruits understand—the legal, ethical, and policy aspects of AI-based software systems, in addition to the technical aspects.
The release of CISA's Roadmap for AI Adoption is another milestone on the journey towards harnessing the power of AI securely and ethically. It is a first step towards a future in which AI plays a pivotal role in securing the United States' digital infrastructure.
As I opined in "Biden's Executive Order on AI and the Implications for Industrial Organizations," industrial organizations should maintain their focus on leveraging proven AI tools and technologies to drive sustainable business outcomes. There's no holding back The Industrial AI (R)Evolution, but there will be much to navigate from ethical and legal perspectives as the full impact of Generative AI sends ripples (or tsunamis?) through every industry sector and our daily lives. We'll share more on what leading industrial organizations are adopting in AI in our ongoing research. For organizations seeking guidance on navigating the evolving AI landscape, we offer ARC's Industrial AI Impact Assessment Model as a robust framework for harnessing the full potential of AI in their operations.
ARC's Industrial AI Impact Assessment Model takes into consideration a wide range of AI techniques, including machine learning, neural networks, computer vision, and natural language processing. It also factors in emerging AI techniques such as Generative AI, Causal AI, Explainable AI, and NeuroSymbolic AI. This comprehensive approach ensures that industrial organizations can effectively assess the impact of various AI technologies on their operations and make informed decisions.
For more information or to contribute to Industrial AI research, please contact Colin Masson at cmasson@arcweb.com.