Securing Generative AI Workflows

Hi, I’m Evan — an engineer with a passion for information security, automation, and building resilient cloud infrastructure. I spend a lot of time in the weeds solving real-world problems, whether that’s through client work or experiments in my homelab. This blog is where I document those lessons, not just to keep track of what I’ve learned, but to share practical insights that others in the field can apply too. My focus is on bridging the gap between security best practices and operational efficiency. Whether you’re planning your infrastructure, hardening environments, or just learning the ropes, I hope these posts give you something useful to take with you. Thanks for stopping by — let’s keep learning and building together.
Generative AI is changing how businesses work, but it also brings new security, privacy, and compliance challenges. Think of it like adding a powerful new team member—you want to make sure it’s doing the right things safely. Securing these AI workflows means taking a thoughtful, structured approach: discovering where AI is used, assessing risks, putting the right controls in place, and keeping an eye on it over time. Let’s walk through a useful strategy to use generative AI confidently while keeping your organization protected.
Discover
The first step in securing generative AI workflows is to discover all instances of AI agents across the environment. Without a clear picture of where these tools are being used, businesses cannot effectively manage risk.

Discovery starts with building a comprehensive inventory of systems, applications, and binaries where AI models, libraries, or agent frameworks may be deployed. Vulnerability scanning should extend beyond just network-level scans. Effective discovery for generative AI requires file system–level inspection to capture details about individual binaries, dependencies, and libraries.
Specialized tools that generate a Software Bill of Materials (SBOM)—such as the open-source tool Syft—can play a critical role in this process. An SBOM provides visibility into every component that makes up an application, making it easier to identify outdated or vulnerable dependencies. While SBOM tools are excellent for software composition analysis, discovery should also leverage more traditional endpoint detection and response (EDR) and vulnerability management tools. These solutions perform deep scans of endpoints and cloud VMs, and while they may not natively generate SBOMs, many support exporting their generated results in open standards like CycloneDX and SPDX. These outputs can then be fed back into SBOM tools for further enrichment and analysis.
Technical scanning, however, only tells part of the story. Many employees use generative AI in ways that may not be captured by endpoint scans or SBOM analysis—for example, through their web browser or approved software that interacts with an LLM model via API calls. For some businesses, especially those with heavy SaaS (software-as-a-service) adoption, this may actually be the most common way generative AI is used day to day. These activities often remain invisible, which is where we get the term “shadow AI”.
To fill this gap, organizations should supplement technical discovery with employee surveys or interviews. By asking staff how they use AI in their daily work, security teams can uncover shadow AI practices and gain insight into business-driven use cases that technical scans alone would miss.
Assess
Once discovery is complete, the next step is to assess the AI security posture (AI-SPM) of your environment.

This means evaluating the overall state of an organization’s security configuration, readiness, and risk exposure, and then establishing a systematic way to continuously monitor, assess, and improve that state. Think of it as a “health check” or ongoing “fitness plan” for your environment, including generative AI workflows.
Assessing posture requires a comprehensive approach to maintaining the integrity, confidentiality, and availability (CIA) of generative artificial intelligence and machine learning systems. It involves building a strategy for continuous monitoring and improvement of the security posture of the AI models, services, and workflows found in the infrastructure. AI-SPM focuses on spotting and fixing weak spots, misconfigurations, and risks associated with AI usage while also ensuring and/or maintaining compliance with privacy, security, and ethical standards for the organization’s industry.
Several frameworks and tools can help structure this process:
NIST AI Risk Management Framework (AI RMF) – provides a foundation for identifying and managing AI-specific risks.
OWASP AI Maturity Assessment (AIMA) – a useful resource for measuring the maturity of AI security practices.
SBOM scanning and remediation reviews – ensures that vulnerabilities found in AI-related components are addressed, tracked, and prioritized. Automation is certainly encouraged here wherever possible to make assessments repeatable and auditable.
Assessment should not stop at static analysis. Grey-box penetration testing provides an additional layer of assurance by simulating real-world attack scenarios. A well-scoped pentest can help uncover vulnerabilities that are specific to the environment, validate whether the remediation efforts were effective, and probe the resilience of AI systems against bad actors trying to manipulate the system or execute model-specific exploits.
Ultimately, assessment acts as the bridge between discovery and control. This step transforms raw visibility into actionable security intelligence.
Control
After discovery and assessment, the next step is to apply security controls to generative AI workflows.

Controls should incorporate prompt-level security—intercepting user inputs and analyzing them for prompt injection or jailbreak attempts.
An AI prompt is the input given (by the user) to a large language model via a generative AI platform.
Prompt injection is an attempt to manipulate an LLM model to make it do something harmful. This is the number one method bad actors use to manipulate generative AI according to the OWASP Top Ten. An attacker might use social engineering techniques to guide generative AI, giving instructions to override its original commands. This could lead to unintended actions, such as leaking sensitive internal information or executing unauthorized operations. By exploiting vulnerabilities in the prompt handling, attackers can potentially bypass security measures altogether.
Tools like Cloudflare’s WAF can act a proxy, inspecting layer 7 traffic and enforce gen-AI specific rulesets, blocking malicious prompts before they reach in-house or third-party models. This same visibility also helps address shadow AI, where unapproved AI tools may bypass traditional defenses.
Equally important are privacy safeguards. Generative AI must be configured to protect sensitive data, such as PII or PHI, and ensure compliance with the framework adopted during the assessment phase.
Finally, organizations should enforce outbound controls to monitor and restrict what data leaves the environment, reducing the risk of unintentional disclosure through AI services.
In short, controls provide the guardrails that keep AI secure, compliant, and aligned with organizational risk objectives.
Reporting
The final step in securing generative AI workflows is risk management and compliance reporting.

Organizations should create dashboards that visualize and prioritize AI-related risks, making it easier to track vulnerabilities, remediation efforts, and overall security posture.
For industries with regular compliance requirements, these reports are especially valuable, simplifying audit preparation and demonstrating adherence to regulatory standards. A structured reporting process ensures that stakeholders have clear visibility into both the current state of AI security and ongoing mitigation efforts.



