Author: Daniel

  • Executive Summary: Policy-Reliant DLP Fails Against Shadow AI


    Data Loss Prevention (DLP) based on policies was built for a world that no longer exists.

    For decades, security teams have relied on policies that looked sound on paper. In practice, translating them into DLP rule sets has meant a constant struggle between false positives that disrupt business and missed incidents that expose sensitive data.

    As the use of generative AI tools rapidly increases, so does the threat of data loss. Employees paste proprietary code into chatbots, upload customer data into AI-powered marketing tools, and rely on copilots embedded into everyday SaaS platforms – often without security review.

    This new reality, often called Shadow AI, refers to the unsanctioned or unmanaged use of AI tools by employees – without formal security review, governance, or contractual safeguards. It is not necessarily malicious – in fact, it is often driven by productivity. But when sensitive enterprise data is introduced into AI systems that the organization does not control, visibility is lost, and traditional DLP controls become ineffective.

    Shadow AI exposes a critical weakness: policy-reliant DLP cannot keep pace with AI-driven workflows.

This article is a condensed version of a white paper we recently published in collaboration with CISO Tradecraft. The full version is available here.

    Past: How We Got Here

    Traditional DLP began with simple keyword filtering – static “dirty word lists” designed to catch sensitive terms moving across network chokepoints. In an era of mostly unencrypted traffic and predictable data flows, basic string matching and file hashes were often enough.

    As regulatory pressure increased in the early 2000s, DLP evolved into content-aware inspection. Regex-based detection enabled teams to identify structured data such as credit card and Social Security numbers. While more sophisticated, these systems still relied on predefined patterns – generating high false positives and struggling with context.
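To illustrate the regex era (a generic sketch, not any vendor's actual rule set), the snippet below shows a pattern-based detector for card and Social Security numbers, plus the Luhn checksum commonly used to trim card-number false positives. It also shows the weakness the text describes: a tracking code shaped like an SSN still matches, because the pattern has no context.

```python
import re

# Illustrative regex-era detectors (a sketch, not any vendor's actual rules).
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, commonly used to cut card-number false positives."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

text = "Order 4111 1111 1111 1111 shipped; tracking 123-45-6789."
print(bool(CARD_RE.search(text)))   # matches a Luhn-valid test card number
print(bool(SSN_RE.search(text)))    # also matches a tracking code shaped like an SSN
```

The second match is the context problem in miniature: the pattern fires on anything with the right shape, regardless of what the data actually is.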

    Then came widespread encryption after the Snowden disclosures. HTTPS became the norm, forcing organizations to deploy proxy-based inspection tools just to regain visibility. Meanwhile, insider risks and endpoint data movement remained largely unaddressed.

    With the migration to the cloud, CASB solutions emerged to enforce policy across SaaS platforms. But agent gaps, BYOD environments, and expanding cloud ecosystems made enforcement inconsistent and incomplete.

    At every stage, DLP adapted incrementally, but always remained policy-driven, perimeter-focused, and dependent on predicting how data might leave the organization.

    That model was already strained, and then AI arrived.

    Present: The State of DLP, and the Age of AI

    Traditional DLP was already showing cracks before AI came into the picture. Modern enterprises generate vast amounts of structured and unstructured data across endpoints, SaaS platforms, cloud storage, and collaboration tools. Manually translating all possible data flows, users, and destinations into static policies is no longer scalable. The result is familiar to every security team: overwhelming false positives, missed incidents, and rising operational costs.

AI made the problem even more complex.

    Tools like Microsoft Copilot, ChatGPT, Claude, and Gemini are being embedded into everyday workflows – sometimes enabled by default. These systems don’t just move data; they interpret, synthesize, and recombine it. They operate through dynamic prompts, contextual conversations, and encrypted channels that bypass traditional inspection methods.

    Sensitive data is no longer exfiltrated only through file attachments or obvious pattern matches. It is pasted into conversational prompts, indexed by AI copilots, and transmitted to systems outside the enterprise boundary. Once submitted to an external AI platform, often without contractual safeguards, organizations lose visibility and control over how that data is retained, processed, or reused.

    Blocking AI outright is rarely effective. Employees find workarounds, productivity suffers, and security teams lose what little visibility they had. Yet allowing unrestricted use exposes organizations to regulatory violations, intellectual property loss, and reputational damage. Compliance frameworks do not distinguish between malicious exfiltration and accidental disclosure.

    Monitoring traffic alone is no longer enough. By the time a violation is detected, the data may already be gone.

    This is the core problem: policy-based DLP was designed to match patterns and enforce rules. AI-driven workflows require understanding context, intent, and behavior – in real time.

    Future: Using AI to Contain AI

    AI-driven workflows are here to stay, and our security models must evolve accordingly.

    The answer is not more policies (no matter how polished or automated), but rather a shift from static enforcement to context-aware protection. Modern approaches leverage AI to understand data flows across endpoints, SaaS platforms, email, storage, and web interactions – automatically classifying both structured and unstructured data by sensitivity, not just format.

    Instead of trying to predict every possible exfiltration scenario in advance, these systems learn what normal business behavior looks like. They evaluate who is sharing data, what is being shared, where it’s going, and whether that action aligns with legitimate business intent – in real time.

    This allows organizations to enable AI adoption safely rather than attempting to block it outright. The goal is not to slow innovation, but to ensure that sensitive data moves responsibly within AI-powered environments.

    For leadership, the imperative is clear: legacy DLP and passive monitoring cannot protect a modern enterprise. Compliance requirements are tightening, financial penalties are rising, and AI usage is accelerating. The only sustainable path forward is to deploy intelligent, adaptive controls capable of operating at the speed and scale of AI.

    Shadow AI is not a theoretical risk – it is already embedded in everyday business operations. The organizations that adapt their security models now will be the ones that innovate confidently, without sacrificing control.

    The question is no longer whether AI will be used inside your organization, but whether your security architecture is prepared for it.

    If you’re evaluating how to modernize DLP for the AI era, we encourage you to read the full white paper and explore what context-aware, AI-driven data protection looks like in practice.

  • Thinking Beyond Policies: AI‑Ready Data Protection


Read the original post on Sentra’s blog

    AI assistants, SaaS, and hybrid work have made data easier than ever to discover, share, and reuse. Tools like Gemini for Google Workspace and Microsoft 365 Copilot can search across drives, mailboxes, chats, and documents in seconds – surfacing information that used to be buried in obscure folders and old snapshots.

    That’s great for productivity – but dangerous for data security.

    Traditional, policy‑based DLP wasn’t designed to handle this level of complexity. At the same time, many organizations now use DSPM tools to understand where their sensitive data lives – but still lack real‑time control over how that data moves on endpoints, in browsers, and across SaaS.

    Together, Sentra and ORION close this gap: Sentra brings next‑gen, context-driven DSPM; ORION brings next‑gen, behavior‑driven DLP. The result is end‑to‑end, AI‑ready data protection from data store to last‑mile usage, creating a learning, self‑improving posture rather than a static set of controls.

    Why DSPM or DLP Alone Isn’t Enough

Modern data environments require two distinct capabilities: deep data intelligence and real-time enforcement grounded in business context.

    DSPM solutions provide a data-centric view of risk. They continuously discover and classify sensitive data across cloud, SaaS, and on-prem environments. They map exposure, detect shadow data, and surface over-permissioned access. This gives security teams a clear understanding of what sensitive data exists, where it resides, who can access it, and how exposed it is.

    DLP solutions operate where data moves – on endpoints, in browsers, across SaaS, and in email. They enforce policies and prevent exfiltration as it happens. 

    Without rich data context like accurate sensitivity classification, exposure mapping, and identity-to-data relationships, DLP solutions often rely on predefined rules or limited signals to decide what to block, allow, or escalate.

    DLP can be enforced, but its precision depends on the quality of the data intelligence behind it.

    In AI-enabled, multi-cloud environments, visibility without enforcement is insufficient – and enforcement without deep data understanding lacks precision. To protect sensitive data from discovery by AI assistants, misuse across SaaS, or exfiltration from endpoints, organizations need accurate, continuously updated data intelligence, real-time, context-aware enforcement, and feedback between the two layers. 

    That is where Sentra and ORION complement each other.

    Sentra: Data‑Centric Intelligence for AI and SaaS

    Sentra provides the data foundation: a continuous, accurate understanding of what you’re protecting and how exposed it is.

    Deep Discovery and Classification

    Sentra continuously discovers and classifies sensitive data across cloud‑native platforms, SaaS, and on‑prem data stores, including Google Workspace, Microsoft 365, databases, and object storage. Under the hood, Sentra uses AI/ML, OCR, and transcription to analyze both structured and unstructured data, and leverages rich data class libraries to identify PII, PHI, PCI, IP, credentials, HR data, legal content, and more, with configurable sensitivity levels.

    This creates a live, contextual map of sensitive data: what it is, where it resides, and how important it is.

    Reducing Shadow Data and Exposure

    Sentra helps teams clean up the environment before AI and users can misuse it. 

    It uncovers shadow data and obsolete assets that still carry sensitive content, highlights redundant or orphaned data that increases exposure (without adding business value), and supports collaborative workflows for remediation for security, data, and app owners.

    Access Governance and Labeling for AI and DLP

    Sentra turns visibility into governance signals. It maps which identities have access to which sensitive data classes and data stores, exposing overpermissioning and risky external access, and driving least‑privilege by aligning access rights with sensitivity and business needs.

    To achieve this, Sentra automatically applies and enforces:

• Google Labels across Google Drive, powering Gemini controls and DLP for Drive.
• Microsoft Purview Information Protection (MPIP) labels across Microsoft 365, powering Copilot and DLP policies.

    These labels become the policy fabric downstream AI and DLP engines use to decide what can be searched, summarized, or shared.

    ORION: Behavior‑Driven DLP That Thinks Beyond Policies

ORION replaces policy reliance with a set of intelligent, context-aware, proprietary AI agents.

    AI Agents That Understand Context

ORION’s agents collect rich context about data, identity, environment, and business relationships.

    This includes mapping data lineage and movement patterns from source to destination, a contextual understanding of identities (role, department, tenure, and more), environmental context (geography, network zone, working hours), external business relationships (vendor/customer status), Sentra’s data classification, and more. 

Based on this rich, business-aware context, ORION’s agents detect indicators of data loss and stop potential exfiltrations before they become incidents. That means a full alignment between DLP and how your business actually operates, rather than how it was imagined in static policies.

    Unified Coverage Where Data Moves

    ORION is designed as a unified DLP solution, covering: 

    • Endpoints
    • SaaS applications
    • Web and cloud
    • Email
    • On‑prem and storage, including channels like print

    From initial deployment, ORION quickly provides meaningful detections grounded in real behavior, not just pattern hits. Security teams then get trusted, high‑quality alerts.

    Better Together: End‑to‑End, AI‑Ready Protection

    Individually, Sentra and ORION address critical yet distinct challenges. Together, they create a closed loop:

    Sentra → ORION: Smarter Detections

    Sentra gives ORION high‑quality context:

    • Which assets are truly sensitive, and at what level.
    • Where they live, how widely they’re exposed, and which identities can reach them.
    • Which documents and stores carry labels or policies that demand stricter treatment.

    ORION uses this information to prioritize and enrich detections, focusing on events involving genuinely high‑risk data. It can then adapt behavior models to each user and data class, improving precision over time.

    ORION → Sentra: Real‑World Feedback

ORION’s view into actual data movement feeds back into Sentra, exposing data stores that repeatedly appear in risky behaviors and serve as prime candidates for cleanup or stricter access governance. It also highlights identities whose actions don’t align with their expected access profile, feeding Sentra’s least‑privilege workflows. This turns data protection into a self‑improving system instead of a set of static controls.

    What this means for Security and Risk Teams

    With Sentra and ORION together, organizations can:

    • Securely adopt AI assistants like Gemini and Copilot, with Sentra controlling what they can see and ORION controlling how data is actually used on endpoints and SaaS.
    • Eliminate shadow data as an exfil path by first mapping and reducing it with Sentra, then guarding remaining high‑risk assets with ORION until they’re remediated.
    • Make least‑privilege real, with Sentra defining who should have access to what and ORION enforcing that principle in everyday behavior.
    • Provide auditors and boards with evidence that sensitive data is discovered, governed, and protected from exfiltration across both data platforms and endpoints.

    Instead of choosing between “see everything but act slowly” (DSPM‑only) and “act without deep context” (DLP‑only), Sentra and ORION let you do both well – with one data‑centric brain and one behavior‑aware nervous system.

    Ready to See Sentra + ORION in Action?

    If you’re looking to secure AI adoption, reduce data loss risk, and retire legacy DLP noise, the combination of Sentra DSPM and ORION’s DLP offers a practical, modern path forward.

    See how a unified, AI‑ready data protection architecture can look in your environment by mapping your most critical data and exposures with Sentra, and letting ORION protect that data as it moves across endpoints, SaaS, and web in real time.

    Request a joint demo to explore how Sentra and ORION together can help you think beyond policies and build a data protection program designed for the AI era.

  • ORION Closes $32 Million in Funding – Building DLP Beyond Policies


    We are thrilled to announce that ORION has just closed $32 million in funding led by Norwest and joined by IBM and existing investors PICO Venture Partners, Lama Partners, Underscore VC, and others.

    As demand for ORION’s AI-powered alternative to traditional DLP grows, this round comes less than a year after our seed funding, bringing total capital raised to $38 million.

    Already protecting customers with tens of thousands of employees across finance, healthcare, and technology sectors, we are proud to be pioneering a new approach to data security that eliminates reliance on DLP policies and minimizes manual intervention.

    The funding will enable us to accelerate development of our proprietary LLMs and specialized AI agents while expanding go-to-market operations to meet growing enterprise demand for our autonomous DLP.

“This funding is a powerful validation of what we’ve believed from day one: better policies are not the solution for DLP,” said Nitay Milner, CEO and co-founder of ORION.

“Traditional DLP solutions often add more policies, invest hours in improving them, or perhaps refine them with AI, but data loss incidents are more widespread than ever. By moving beyond policy-based DLP and using AI to gain true contextual understanding, we’re giving enterprises a way to accurately distinguish between legitimate workflows and malicious activity.”

    Thinking Beyond Policies: A New Path for Preventing Data Loss

For more than a decade, enterprises have relied on traditional, notoriously inefficient DLP tools. These tools, based on thousands of human-authored policies, require constant tuning and generate a steady stream of false positives, yet still fail to stop data exfiltration.

    Built on the assumption that more policies equal stronger protection, these tools cannot keep pace with modern risks posed by AI-driven workflows, uncontrolled SaaS adoption, and distributed workforces.

    Because policies only protect against known threats, legacy DLP leaves enterprises exposed to unpredictable, rapidly emerging patterns of data loss that are becoming increasingly common.

    ORION replaces traditional policy-centric models with automated, context-driven detection based on data-loss indicator analysis.

    Powered by specialized AI agents and ORION’s proprietary LLM, the platform continuously detects and analyzes data loss indicators in real time, capturing the full context behind every movement, including content sensitivity, data lineage, user identity, behavioral intent, and environmental purpose.

    By enabling customers to understand why data is moving, ORION prevents exfiltration before it occurs, dramatically reducing false positives while capturing incidents that existing DLP tools routinely miss. This approach significantly reduces maintenance costs and empowers enterprises to protect sensitive information without the inefficiencies of legacy systems.

    Organizations using ORION have reported a massive reduction in DLP maintenance and tuning, accurate prevention of data movement beyond anticipated scenarios, and a near-zero false-positive rate.

    Policies still play a role in ORION, but only where they are most effective: deterministic, predictable scenarios. Everything else is handled autonomously by ORION’s analysis agent.

    “ORION is rewriting the rules of data security, eliminating the rigid policy structures that have held DLP back for decades,” said Dave Zilberman, General Partner at Norwest. “With a fully autonomous, context-driven approach, ORION isn’t just building a better product; it’s redefining how enterprises safeguard their most critical asset: data.”

    We’re Just Getting Started

    This raise is an exciting milestone for us, but it’s only the beginning.

    Data security is undergoing one of the fastest transformations in decades. Organizations are shifting to AI-driven workflows, distributed teams are the new standard, and sensitive information moves faster and through more systems than ever before. Traditional DLP wasn’t built for this reality.

    Over the coming year, you’ll see ORION broaden coverage across even more environments, provide security teams with deeper insights into data movement, and unlock levels of accuracy and autonomy that weren’t possible until now.

    Most importantly, we’re committed to giving every organization the ability to protect both humans and AI systems without slowing them down.

    We want to extend our deepest thanks to our customers, investors, and partners who believe in our mission and push us to build better every day. Your trust, feedback, and collaboration have been instrumental in shaping ORION into what it is – and what it’s becoming.

    To our team – none of this happens without your focus, hard work, creativity, and commitment. This would not be possible without you. We’re grateful for everything you’ve built and everything still ahead.

About ORION

    ORION Security prevents data exfiltration by replacing policy-based enforcement with real-time contextual intelligence that leverages proprietary LLMs and specialized AI agents to autonomously detect and prevent data loss. Based in New York City, with offices in Tel Aviv, Israel, the company was founded in 2024 by CEO Nitay Milner, a former product leader at Cisco-acquired Epsagon, and CTO Jonathan Kreiner, a former application security leader at WalkMe. The company is backed by Norwest, PICO Venture Partners, Lama Partners, IBM, and others.

  • Indicators of Data Loss: The Path to Fully Automated DLP


    For the past decade, Data Loss Prevention (DLP) tools and platforms have been built on thousands of human-authored policies intended to cover every possible way sensitive data might leave the organization.

Traditional DLP implementation starts with defining policies. The issue is that it also usually ends there – often without ever leaving “monitor-only” mode. Implementation demands countless hours from the security team, only to result in a flood of false positives while data still leaks.

    Maintenance and false positives are just symptoms of the real problem: Policy reliance.

    In this blog, we will explain how ORION goes beyond policies through automated data-loss indicator analysis, helping you achieve true, efficient DLP coverage.

    DLP Policies are not Enough

    Policies, by nature, focus on what is happening rather than why.

    This means they protect against threats you already know to expect – they cannot detect or prevent new or evolving forms of data loss.

    The ability of policies to cover broad, deterministic use cases makes them well-suited to meet compliance standards that often require all-encompassing, coarse-grained rules.

    But when policies are your entire DLP strategy, they consistently fail to cover every edge case. They’re bound to be too granular or too broad, resulting in a constant flood of false positives and missed exfiltration incidents.

    As your organization continues to grow, the destinations for data exfiltration grow exponentially in both number and complexity. This is especially true with the rise of AI-driven workflows, as new, unprecedented ways for sensitive data to leave the organization unnoticed emerge every day.

    Modern DLP vendors try to fix this by adding more policies or using AI to generate new ones – but more rules don’t solve the core issue: policies alone will never provide complete DLP coverage.

    Data Loss Indicators

    To overcome the limitations of policies, ORION takes a different approach to DLP through automated Data Loss Indicator analysis.

    A data loss indicator is a contextual signal, such as unusual timing, unexpected destinations, identity mismatches, or abnormal data volume, that suggests a user’s action may be inappropriate or risky, even if it technically matches allowed behavior.

    Instead of asking what is happening (“a file was uploaded”), indicators help answer why it is happening and whether the action aligns with normal patterns.

To put it simply, if anti-virus versus EDR represents the leap from signatures to behavior, then traditional DLP versus data loss indicator analysis represents the leap from rules to reasoning.
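To make the concept concrete, here is a minimal, hypothetical sketch of how contextual signals around a single data movement might surface as indicators. The signal names, fields, and thresholds are illustrative only – they are not ORION’s actual model:

```python
from dataclasses import dataclass, field

# Hypothetical contextual signals for one data-movement event.
# Field names and thresholds are illustrative, not ORION's actual model.
@dataclass
class MovementEvent:
    user_role: str
    destination: str
    bytes_moved: int
    hour: int                              # 0-23, local time
    typical_destinations: set = field(default_factory=set)
    typical_daily_bytes: int = 0

def data_loss_indicators(ev: MovementEvent) -> list[str]:
    """Return the contextual signals that make this event look risky."""
    indicators = []
    if ev.destination not in ev.typical_destinations:
        indicators.append("unexpected_destination")
    if ev.typical_daily_bytes and ev.bytes_moved > 10 * ev.typical_daily_bytes:
        indicators.append("abnormal_volume")
    if ev.hour < 6 or ev.hour > 22:
        indicators.append("unusual_timing")
    return indicators

event = MovementEvent(
    user_role="finance_analyst",
    destination="personal-drive.example.com",
    bytes_moved=2_000_000_000,
    hour=2,
    typical_destinations={"erp.internal", "sharepoint.internal"},
    typical_daily_bytes=50_000_000,
)
print(data_loss_indicators(event))
# → ['unexpected_destination', 'abnormal_volume', 'unusual_timing']
```

The point of the sketch is the shape of the reasoning: no single field is a rule violation, but the combination of destination, volume, and timing relative to the user’s own baseline is what suggests risk.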

    Unlike traditional DLP alerts triggered by predefined policies, data loss indicators are uncovered by ORION’s set of specialized AI agents through continuous observation of how data moves across your environment.

    These specialized agents collect and analyze a wide range of contextual information for every data trace in your organization and analyze whether the behavior aligns with expected data movement patterns. From this foundation, ORION can detect subtle signals that static policies could never capture.

    The shift from activity-based monitoring to intent-aware detection enables security teams to distinguish between normal business operations and actions that may indicate exfiltration, whether purposeful or accidental.

    Security teams don’t need to manually define data loss indicators; the system learns them dynamically. Teams can enrich the database when needed, but the heavy lifting is automated.

    To understand how ORION forms these indicators, it’s helpful to look at the agents that collect and interpret the underlying signals.

    ORION’s Agents

    ORION comprises six specialized agents. Five agents collect core contextual signals, and a sixth agent analyzes them to detect abnormal or risky behavior. Together, they enable a holistic and intent-aware approach to DLP.

    A detailed overview of use cases our agents know how to cover can be seen here.

    Below is a breakdown of each agent’s role:

    Data Classification Agent

    The Data Classification Agent analyzes both structured and unstructured data, classifies and tags its content, assesses its sensitivity level, and generates a concise summary.

Tagging often includes labels such as PCI, PII, HIPAA, Secrets, Code, Product, Marketing Materials, and more.

    ORION also supports custom classifications via simple prompts, allowing teams to tailor classification to their organization’s unique data landscape.

    This ensures that every indicator accounts not only for how data moved but also for the type of data at risk.

    Data Lineage Agent

    The Data Lineage Agent collects and maps all data movement traces within the organization, including:

    • The source of data (Storage, codebases, cloud, etc.)
    • The type of action performed (Download, copy/paste, zip, encrypt, rename, send, and more)
    • The destination (AI tools, personal communications channels, devices, browsers, etc.)

    A detailed list of potential sources, destinations, and actions can be found on the Use Cases page.

    Combined with the Identity Agent, lineage determines whether an action deviates from typical usage patterns.

    This agent provides the behavioral context needed to understand whether movement is routine, anomalous, or suspicious.

    Identity Agent

    The Identity Agent integrates with IDP and HR systems to extract identity attributes such as title, department, seniority, tenure, and potential departure status. Indicators incorporate not just what happened but who did it and whether that person’s role makes the action reasonable.

    Environment Agent

    The Environment Agent collects environmental signals such as geography, site location, network zone, and working hours. These signals help determine whether the timing and location of data movement align with legitimate work patterns.

External Relations Agent

    The External Relations Agent connects to CRM platforms and extracts customer and vendor information, including BAAs, contracts, and permitted data-sharing levels. This ensures ORION knows whether a destination is legitimate, expected, or unapproved.

    Analysis Agent

    The Analysis Agent aggregates all collected signals, transforms them into data loss indicators, and detects deviations from expected behavior.

    Examples include:

    • Scope creep in data access: A developer who normally works in one repository suddenly starts pulling large amounts of data from systems outside their typical scope.
• Unusual destinations for sensitive files: An employee transfers sensitive files to a personal email, an unknown domain, or a storage provider not commonly used by the organization.
    • Outlier behavior within a role: A member of the finance team downloads significantly more customer data than peers in the same role, despite no change in responsibilities.
    • Suspicious timing and location: A late-night upload from a country where the employee doesn’t typically work, combined with an unusual spike in data access, makes the action high-risk.
    • Slow-drip exfiltration: A user moves small volumes of data in ways that seem harmless individually but, when combined, form a clear pattern of data siphoning.
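The slow-drip case above is a good example of why single-event thresholds miss exfiltration: each transfer is harmless on its own. A hypothetical sliding-window aggregation shows the idea (the window length and threshold are made-up values, not ORION’s settings):

```python
from collections import deque

# Illustrative sliding-window check for slow-drip exfiltration.
# Window length and threshold are hypothetical, not ORION's settings.
class DripDetector:
    def __init__(self, window_hours=72, threshold_bytes=500_000_000):
        self.window_hours = window_hours
        self.threshold_bytes = threshold_bytes
        self.events = deque()              # (hour_timestamp, bytes) pairs

    def record(self, hour: int, nbytes: int) -> bool:
        """Record a transfer; return True if the windowed total looks risky."""
        self.events.append((hour, nbytes))
        # Drop events that fell out of the window.
        while self.events and self.events[0][0] <= hour - self.window_hours:
            self.events.popleft()
        total = sum(b for _, b in self.events)
        return total > self.threshold_bytes

det = DripDetector()
# 25 MB every 3 hours: each transfer is small, but the pattern accumulates.
alerts = [det.record(h, 25_000_000) for h in range(0, 72, 3)]
print(alerts[0])    # a single small transfer raises no alarm
print(alerts[-1])   # the cumulative pattern crosses the threshold
```

A real system would track this per user and per destination and weigh it against the other contextual signals, but the core mechanism is the same: judge the pattern, not the individual event.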

    Once an indicator suggests risk, ORION can automatically block, prompt, or notify based on predefined sensitivity settings.

    Policy Support – Without Policy Reliance

    The shift away from policies doesn’t mean abandoning the concept completely. Policies remain valuable for compliance and deterministic scenarios. ORION’s Policy Engine stores predefined manual policy definitions, reviews them, and suggests enhancements and new policies when needed.

    The Benefit of Data Loss Indicators

    By analyzing data loss indicators rather than relying solely on static rules, ORION understands the intent behind user behavior in the context of identity, data sensitivity, environment, and organizational norms.

    This enables ORION to provide an automated DLP solution at a much larger scale, with far better accuracy, drastically reduced false positives, and, most importantly, the ability to continuously detect and prevent new forms of data exfiltration – not just the ones you anticipate.

ORION’s agents work alongside security analysts, covering every data movement in the organization and helping teams do a better, more effective job at unprecedented scale, while constantly learning and evolving their reasoning over time.

    It’s Time to Move On

    The shift from relying on static, human-authored policies to data loss indicator analysis represents the next natural step in DLP – one that brings context, intent, and real-time reasoning into the heart of data protection.

    By focusing on why actions occur, ORION delivers the visibility and precision needed to stop both accidental leaks and sophisticated exfiltration attempts at scale, closing the blind spots that policies have never been able to cover.