AI Security Glossary
Key terms and definitions for AI security, governance, and compliance.
Shadow AI
The use of AI tools by employees without organizational knowledge, approval, or oversight. Shadow AI typically involves free or consumer AI tools such as ChatGPT, Claude, DeepSeek, or Perplexity being used on corporate data without enterprise agreements, creating data leakage, compliance, and governance risks that traditional security tools cannot detect.
Learn more →Secure AI Gateway
A governed platform that provides employees access to multiple AI models (ChatGPT, Claude, Gemini, and more) through a single interface, while guaranteeing that data never trains public models. It replaces expensive per-seat AI licenses with pay-per-use pricing and provides full audit trails, policy enforcement, and usage analytics.
Learn more →AI Firewall
A security layer that inspects every AI interaction in real time, providing protection against prompt injection attacks, jailbreak attempts, and data exfiltration. It enforces policies on what data can be sent to AI models and blocks unauthorized or risky interactions automatically.
Learn more →AI Usage Analytics
Deep visibility into how AI is used across an organization, going beyond simple traffic logs. AI Usage Analytics reveals which teams use which models, what data is being shared, what use cases drive the most value, and where security risks exist. This data enables informed AI governance decisions.
Learn more →Multi-LLM
Access to multiple Large Language Models through a single governed interface. Instead of managing separate subscriptions and licenses for ChatGPT, Claude, Gemini, Mistral, Perplexity, and others, a Multi-LLM platform lets employees choose the best model for each task while maintaining consistent security and governance policies.
Learn more →Zero Data Training
A guarantee that all AI interactions route through enterprise channels where model training is disabled by default. This means that prompts, code, documents, and other data shared with AI models are never used to train or improve public AI models. The data stays private permanently.
Learn more →Model Isolation
A security architecture where all AI interactions are routed through the Secure AI Gateway with contractual guarantees that data is never used for model training. Model isolation ensures that each organization’s data remains completely separate from public model training pipelines.
Learn more →Data Sovereignty
The principle that an organization retains full control over its data, including how it is processed, where it resides, and whether it is used by external systems. In the context of AI, data sovereignty means ensuring that employee prompts and company data never train external AI models.
Learn more →AI Red Teaming
The practice of systematically testing AI systems for security vulnerabilities by simulating real-world attacks. This includes prompt injection, jailbreak attempts, data exfiltration, and business logic abuse. AI red teaming helps organizations find and fix weaknesses before they are exploited.
Learn more →Prompt Injection
An attack technique where malicious inputs are inserted into AI prompts to manipulate the model’s behavior, bypass its safety guardrails, or extract sensitive information. Prompt injection is one of the primary attack vectors that AI Firewalls are designed to detect and block.
Jailbreak
An attempt to bypass the safety restrictions or guardrails of an AI system, causing it to perform unintended actions, generate prohibited content, or reveal restricted information. Jailbreaks exploit weaknesses in how AI models interpret instructions.
DLP (Data Loss Prevention)
Security technology that detects and prevents sensitive data from being shared with unauthorized systems. In the context of AI, DLP scans prompts in real time to catch and redact PII, credentials, API keys, source code, and financial data before they reach any AI model.
Learn more →Shelfware
Software licenses that are purchased but go largely unused. In enterprise AI, per-seat licenses (such as Copilot at $30/user/month) often result in shelfware when only 20–30% of licensed users actively engage with the tool, driving the effective cost per active user to $100–$200.
Learn more →Per-Seat Licensing
A pricing model where organizations pay a fixed cost per user per month regardless of actual usage. For AI tools, this model creates budget pressure because most employees require AI access for data privacy, but only a fraction use the tools heavily enough to justify the cost.
Learn more →Pay-Per-Use Pricing
A consumption-based pricing model where organizations pay only for actual AI usage. Costs scale based on tokens consumed rather than seats provisioned, allowing organizations to provide AI access to every employee at a fraction of per-seat licensing costs.
Learn more →CASB (Cloud Access Security Broker)
A security tool that monitors and controls cloud application usage. While CASBs can track that someone accessed an AI tool, most cannot inspect the content of AI conversations. This means you know someone used ChatGPT, but not what data they shared.
Learn more →SSO (Single Sign-On)
An enterprise authentication mechanism that allows users to access multiple applications, including AI tools, with a single set of credentials. SSO integration ensures that AI access is tied to corporate identity and can be managed centrally.
RBAC (Role-Based Access Control)
A permission management system that controls access to AI models and features based on a user’s role, team, or department. RBAC ensures that different groups within an organization have appropriate levels of AI access and policy enforcement.
GDPR (General Data Protection Regulation)
The European Union regulation governing data protection and privacy. GDPR requires organizations to demonstrate how personal data is processed, including when it is shared with AI tools. Unsanctioned AI usage can create GDPR violations if employee prompts contain personal data that trains external models.
DORA (Digital Operational Resilience Act)
An EU regulation requiring financial institutions to maintain operational resilience across their ICT systems. DORA has specific implications for AI governance, requiring organizations to document, monitor, and control how AI tools interact with sensitive financial data.
EU AI Act
European Union legislation establishing a comprehensive regulatory framework for artificial intelligence. It classifies AI systems by risk level and imposes requirements for transparency, governance, and compliance that affect how enterprises deploy and manage AI tools.
HIPAA (Health Insurance Portability and Accountability Act)
US federal law that protects sensitive patient health information. Organizations in healthcare must ensure that AI tools do not process or store protected health information (PHI) in ways that violate HIPAA requirements, making governed AI access critical.
SOC 2 (Service Organization Control 2)
A compliance framework that evaluates an organization’s controls for security, availability, processing integrity, confidentiality, and privacy. AI platforms handling enterprise data should demonstrate SOC 2 compliance to assure customers that their data is properly protected.