On January 7, 2025, the Department of Homeland Security (DHS) released its first “Playbook for Public Sector Generative Artificial Intelligence Deployment” (the “Playbook”) to serve as a comprehensive guide for DHS and other public sector organizations seeking to responsibly integrate GenAI into their operations. The Playbook emphasizes the importance of deploying AI technologies in a manner that is responsible, trustworthy, and effective. It builds on the work of DHS’s AI Task Force, launched in 2023 and charged with identifying and exploring the potential of AI in DHS missions while emphasizing the importance of privacy, civil rights, and civil liberties.
Drawing on lessons learned from its own case studies, DHS proposes actionable steps that government officials can take to advance their organizations’ use of GenAI.
Although the Playbook is designed for public sector organizations, it sets out an instructive framework with which private organizations – including those contracting with the Federal government – can align as they develop and/or deploy their own GenAI programs.
- Mission-Enhancing GenAI Use Cases
The Playbook encourages public sector organizations to ensure that GenAI deployments align with their mission.
- Aligning Deployments with Mission. Organizations should identify and design narrowly scoped GenAI pilot programs that address specific mission-enhancing processes before fully integrating GenAI into mission-critical operations.
- Executive Sponsorship. Enlist an executive sponsor early in the process whose organization can benefit from the GenAI pilot and who can provide support and endorsement for pilot efforts.
- Resource Assessment. Pilot plans should assess resource needs – including funding, staff, data, and technology – and define criteria for evaluating success prior to deployment.
- Coalition Building and Effective Governance
Organizations should seek sponsorship from senior executives and build cross-organizational coalitions to oversee GenAI deployments.
- Stakeholder Engagement. Solicit buy-in from key internal stakeholders, including risk management, compliance, and oversight leads.
- Governance Structures. Evaluate and adapt current governance structures to address AI governance, ensuring representation from a wide range of stakeholders.
- Tools and Infrastructure
Organizations must assess existing tools and infrastructure to support GenAI deployments.
- Model Selection. Consider commercial, open-source, or open-weight AI models based on organizational needs and resources.
- Security and Integration. Adapt existing processes for security compliance, scalability, and integration with existing IT infrastructure.
- Responsible Use and Trustworthiness Considerations
Organizations should prioritize responsible and trustworthy AI use early when deploying GenAI models. This includes:
- Risk Identification. Organizations should identify potential risks, such as inaccuracies, discrimination, privacy impacts, and data bias.
- Organizational Guidance. Organizations should develop clear guidance and principles for responsible and trustworthy GenAI use.
- Human Review. GenAI outputs should not be used as the sole basis for critical decisions; human review should be incorporated where necessary.
- Collaboration with Experts. GenAI pilot teams should collaborate with legal, privacy, civil rights, and cybersecurity experts to address responsible use considerations throughout development.
- Measurement and Monitoring
Organizations should develop qualitative and quantitative metrics to assess GenAI pilots. This includes:
- Metric Development. Identify or develop metrics that reflect the goals of GenAI pilots, covering mission impact, value, and oversight objectives.
- Monitoring Infrastructure. Implement infrastructure to monitor metrics and communicate progress to stakeholders.
- Iterative Improvements. Use metrics to inform iterative improvements or updates based on pilot performance.
- Training and Talent Acquisition
Organizations should invest in training and talent acquisition to support GenAI development.
- GenAI Literacy Training. Train staff on GenAI capabilities, limitations, and risks to create a common understanding across the organization.
- Skill Assessment. Identify necessary technical skills and offer upskilling or cross-training opportunities for current employees.
- Hiring Skilled Employees. Consider hiring skilled employees or partnering with external organizations to support GenAI development.
- Usability Testing and Other Feedback Mechanisms
Organizations should engage relevant users in regular usability testing and communicate with stakeholders to gather feedback.
- User Engagement. Identify relevant users and iteratively test the product with them throughout the development lifecycle.
- Stakeholder Communication. Share pilot updates with stakeholders to keep them engaged and gather informed feedback.
- Feedback Incorporation. Provide opportunities for stakeholders to share feedback and incorporate it as appropriate to improve GenAI applications.