Author: Harel Panker

  • Deepfake North Korean Remote Workers: a New Data Loss Threat

    Deepfake remote workers from North Korea have been making cybersecurity headlines for a while now. Using fabricated identities, AI-generated résumés, and even deepfaked video calls, these malicious agents secure jobs at Western companies. Once inside, they operate like trusted employees while carrying out their true mission: stealing sensitive intellectual property and exfiltrating customer data.

    What makes this new threat so dangerous is that it blurs the line between insider risk and external attack. On the surface, these individuals appear to be legitimate staff members with proper access, yet behind this facade are highly skilled agents with malicious intentions.

    Preventing data exfiltration across conventional data loss vectors is hard enough as it is; the hybrid nature of this threat makes it even harder to stop before the damage is done – especially with traditional security tools.

    In this blog, we’ll explore the rise of this threat, explain why it poses a unique challenge to security teams, and outline how context-aware Data Loss Prevention (DLP) tools can help you stay ahead of it.

    The Rise of Deepfake Remote Workers

    Over the past several years, U.S. and European authorities have issued repeated warnings about North Korean operatives posing as remote tech workers. Their goal is twofold: earn foreign currency for the regime despite sanctions, and gain access to sensitive corporate systems that can be exploited for espionage or sold on criminal markets.

    The scale and impact of this threat are no longer theoretical. CrowdStrike found over 320 incidents in the past 12 months where North Korean operatives obtained remote jobs at Western companies using deceptive identities and AI tools. That’s a 220% increase from the previous year.

    In May 2025, U.S. officials revealed that North Korean operatives had compromised dozens of Fortune 500 companies, using fraudulent remote hires to siphon sensitive corporate data and intellectual property back to Pyongyang. This was not a one-off tactic, but a systematic infiltration campaign targeting some of the world’s largest enterprises.

    In one notable example from 2024, KnowBe4’s HR team hired a software engineer who turned out to be a North Korean fake IT worker. As soon as he received his Mac workstation, he manipulated session history files, transferred potentially harmful files, and executed unauthorized software – but he was caught before serious damage was done.

    On the left – the original stock photo. On the right – the AI-generated fake submitted to KnowBe4’s HR.

    In another case from June 2025, a group posing as blockchain developers infiltrated a crypto platform (Favrr) through 31 fake identities, advanced identity deception techniques, and stealthy remote access tools, pulling off a $680K crypto heist.

    It’s clear that North Korea is actively using remote employment as a vector for data exfiltration. Unlike conventional phishing or malware campaigns, this strategy weaponizes corporate trust, embedding adversaries directly into the workforce. For security teams, the challenge is no longer just about keeping attackers out of the perimeter; it’s about recognizing when the attacker is already “inside,” disguised as a colleague.

    Why This Threat is Unique

    In our previous article on the three data loss hazards, we described the classic categories security teams face: human error, insider risk, and external attackers. Deepfake remote workers are a unique case in this regard – they begin as external adversaries but, once hired, become indistinguishable from insiders.

    This hybrid nature is precisely what makes the problem so difficult:

    Unlike a careless employee or even a disgruntled insider, these operatives are mission-driven from day one – every credential, system, and dataset they obtain is used for exploitation.

    Unlike smash-and-grab attackers, North Korean operatives are willing to play the long game – working quietly for months, building trust, and extracting value without tripping obvious alarms.

    From the defender’s side, traditional tools are designed to detect either insiders or outsiders, not both at once. A fake employee who is actually a foreign adversary doesn’t fit neatly into existing categories, creating a blind spot that static rules, basic anomaly detection, and perimeter defenses cannot cover.

    Let’s look into that a little deeper, specifically in the context of DLP tools –

    The DLP Challenge

    Data Loss Prevention (DLP) tools were never built for adversaries who look like employees. Most solutions are tuned for one of three scenarios: stopping accidental leaks from well-meaning staff, preventing data exfiltration by ‘amateur’ insiders (e.g., stealing leads before leaving the company), or blocking clear signs of an external exfiltration attack. In practice, most tools aren’t equipped to handle even these scenarios effectively, let alone this new threat.

    Static DLP policies like “block uploads over 50MB” or “alert on large downloads” don’t help when the operative’s role legitimately involves handling large volumes of sensitive data. Similarly, keyword- or pattern-based detection fails because the data movement appears to be business-related. By the time a spike in activity is noticed, the data may already be gone.
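
    A static policy of this kind can be reduced to a threshold check. The sketch below (illustrative only; the event fields like `size_mb` and `direction` are hypothetical, not any real product's schema) makes the blind spot concrete: the rule evaluates only the *what*, never the *why*, so legitimate-looking transfers sail through.

    ```python
    # Minimal sketch of a static, threshold-based DLP rule.
    # Field names ("direction", "size_mb") are hypothetical.

    def static_dlp_check(event: dict) -> str:
        """Classic rules: block uploads over 50MB, alert on very large downloads."""
        if event["direction"] == "upload" and event["size_mb"] > 50:
            return "block"
        if event["direction"] == "download" and event["size_mb"] > 500:
            return "alert"
        return "allow"

    # An operative whose role legitimately involves large data transfers
    # stays under the radar: a 45MB upload of customer data looks exactly
    # like a 45MB upload of build artifacts.
    print(static_dlp_check({"direction": "upload", "size_mb": 45}))  # → allow
    ```

    The rule fires only on size and direction; nothing about the data’s sensitivity, destination, or the user’s normal behavior enters the decision.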

    Trying to prevent something like this from happening usually results in either a flood of false positives or missed detections. What’s missing is a true understanding of context.

    Automated, Contextual DLP

    Traditional DLP relies on static rules and policies that treat all data movement the same. In reality, not every file transfer or database query carries the same risk. The difference between a legitimate business action and malicious exfiltration often lies in the why, not the what.

    A different, more suitable approach to solving this issue is through automated, contextual DLP.

    At ORION, we use a set of AI agents that learn the natural flow of data across your organization – which teams normally access which systems, how files are typically shared, and where data is expected to go. Understanding both the source of the data and the context of the action can make sense of behavior in ways that manually defining policies never could.

    This allows us to detect when:

    • A developer suddenly starts pulling data from repositories outside their usual scope.
    • An employee transfers sensitive files to an unfamiliar domain or to themselves.
    • A team member’s data usage sharply diverges from peers in the same role.
    • A seemingly normal upload becomes suspicious when combined with time, location, or unusual access patterns.
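
    As a rough illustration of the third signal above – a team member’s data usage diverging sharply from peers – here is a minimal sketch (not ORION’s actual implementation; the baseline numbers are made up) that scores a user’s daily data volume as a z-score against a peer-group baseline:

    ```python
    # Illustrative peer-divergence signal: how many standard deviations a
    # user's daily data volume sits from peers in the same role.
    from statistics import mean, stdev

    def peer_divergence(user_mb: float, peer_daily_mb: list[float]) -> float:
        """Z-score of a user's data volume against the peer-group baseline."""
        mu, sigma = mean(peer_daily_mb), stdev(peer_daily_mb)
        return (user_mb - mu) / sigma if sigma else 0.0

    peers = [120, 95, 140, 110, 130, 105]  # typical daily MB for this role
    score = peer_divergence(2400, peers)   # this user suddenly moves 2.4GB/day
    if score > 3:
        print(f"flag for review: divergence z-score {score:.1f}")
    ```

    A single signal like this is noisy on its own; the point of a contextual approach is to combine it with the source of the data, the destination, and timing before acting.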

    These data loss indicators collected by ORION’s agents are context-based signals that suggest data may be leaving the organization inappropriately. By focusing on intent rather than policies and thresholds, ORION flags and prevents dangerous actions in real time, while minimizing false positives that frustrate employees and overload analysts.

    This contextual, adaptive approach enables countermeasures against threats like deepfake remote workers. When the attacker is an employee, only tools that understand the bigger picture of behavior and intent can distinguish between normal business operations and malicious exfiltration.

    Context Is the Core of Modern Data Defense

    Deepfake remote workers are just one of an ever-growing list of new challenges facing security teams. They blur the line between insider risk and external attack, embedding adversaries directly into the workforce under the guise of legitimate employees. Traditional DLP tools designed for a simpler world of accidental leaks or ‘amateur’ insiders are not equipped to address this hybrid threat.

    The only way forward is to embrace intent-aware, contextual defenses. By understanding how data normally flows through an organization and detecting the subtle deviations that signal something’s wrong, security teams can finally close the blind spot exploited by North Korea’s deepfake operatives and others like them.

    This was exactly what led us to build ORION in a way that reduces noise, prevents exfiltration in real time, and gives security teams the advantage they need against attackers who now look just like employees.

    The question to ask yourself is this: if a fake remote worker slipped through your hiring process tomorrow, would you be able to spot them before sensitive data walked out the door?

  • From Data Visibility to Real-Time Protection: WIZ – ORION Partnership Announcement

    Company data is scattered across countless databases, storage buckets, and applications, spanning multiple clouds and on-premises systems. That’s the reality of every modern organization, whether we like it or not.

    In such a complex system, having visibility into where your data resides is more important than ever – but it’s just one part of the challenge. Once you have it mapped out, how do you actually protect it?

    Today, we are thrilled to announce a new integration between ORION, a fully automated Data Loss Prevention (DLP) and data security platform, and Wiz, the market-leading DSPM giant.

    While Wiz gives organizations deep visibility into where sensitive information resides in the cloud, identifying PII, PCI, and other regulated data at rest, ORION builds on that foundation by tracking and protecting that data in motion across the entire enterprise.

    To put it simply: Wiz can tell you, “Your sensitive data lives in this S3 bucket.” ORION can then tell you, “A developer just downloaded that data, encrypted it, and uploaded it to a personal Dropbox – and we blocked it.”

    With this integration, you no longer just have visibility into where your data is, but also the ability to ensure it stays safe.

    DSPM vs DLP

    To understand why this integration matters, let’s break down the two sides of the equation:

    DSPM (Data Security Posture Management) tools like Wiz focus on data at rest. They scan your cloud environments, map out every database, storage bucket, and service, giving you a static snapshot of your data landscape.

    ORION focuses on data in motion. It monitors the movement of that data across your organization – whether in the cloud, on endpoints, via email, on a USB drive, in a SaaS app, or in a ChatGPT conversation. Then, after analyzing context via AI, when something looks risky, ORION can step in and stop it from happening.

    Wiz shows you where your data is; ORION ensures it stays where it’s supposed to be. Together, these tools give you both the complete picture and the operational tools to act on it.

    Why You Need Both

    Full Data Lineage Across Your Enterprise

    Wiz provides cloud data lineage, showing how your data flows between cloud services. ORION extends that visibility to the rest of your environment.

    Now, you can see not just that an S3 bucket writes to another cloud database, but also that, from there, a developer pulled the data to their laptop, encrypted it, and tried to upload it to a personal Dropbox account. With this integration, the entire chain is visible and controllable.

    From Visibility to Active Protection

    While Wiz is the expert at mapping your sensitive data, ORION stops it from leaking out of the organization. Together, they close the gap between knowing and doing.

    ORION prevents actions in real time – using Wiz’s classification as the trigger.

    Flexible, AI-Powered Classification

    Already using Wiz for classification? Perfect — ORION can work directly with Wiz’s findings.

    If you aren’t, you can use our own AI-driven classification engine instead. What matters to us is that you have a way to protect your data exactly the way you want, whether you’re starting fresh with ORION or already invested in Wiz.

    WIZ <> ORION: How It Works

    Here’s the high-level flow of how these two platforms work together:

    1. Wiz scans your cloud, discovers and classifies sensitive data at rest across your cloud environments, and maps out exactly where regulated information like PII or PCI is stored.
    2. Wiz sends classification insights to ORION. We ingest this context from Wiz, instantly expanding our knowledge of what data needs the highest level of protection.
    3. ORION monitors data in motion, from endpoints to SaaS tools, email to removable media, tracking every movement across the entire enterprise.
    4. ORION enforces protection policies. If unusual or unsafe activity is detected, ORION automatically blocks the action and alerts security teams.
    5. You see the full lineage, from discovery to movement to prevention, giving you a complete, end-to-end story of your data’s lifecycle, with both visibility and control in a single view.
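
    The five steps above can be sketched as a small event loop. Everything here is hypothetical – the finding format, the asset IDs, and the destination types are illustrative stand-ins, not the real Wiz or ORION APIs – but it shows the shape of the flow: classification context comes in at rest, and enforcement happens in motion.

    ```python
    # Hypothetical sketch of the Wiz -> ORION flow. Field names, asset IDs,
    # and destination types are illustrative only.

    SENSITIVE_ASSETS: dict[str, set[str]] = {}  # asset id -> classifications

    def ingest_dspm_finding(finding: dict) -> None:
        """Step 2: store DSPM-style classification context (e.g., PII in an S3 bucket)."""
        SENSITIVE_ASSETS.setdefault(finding["asset_id"], set()).update(
            finding["classifications"])

    def evaluate_movement(event: dict) -> str:
        """Steps 3-4: block risky movement of data whose source asset is classified."""
        labels = SENSITIVE_ASSETS.get(event["source_asset"], set())
        if labels and event["destination_type"] == "personal_cloud_storage":
            return f"block: {','.join(sorted(labels))} headed to unmanaged destination"
        return "allow"

    ingest_dspm_finding({"asset_id": "s3://customer-data", "classifications": ["PII"]})
    print(evaluate_movement({"source_asset": "s3://customer-data",
                             "destination_type": "personal_cloud_storage"}))
    ```

    The design point is the division of labor: the DSPM side answers “which assets matter,” so the DLP side only has to answer “is this particular movement of a mattering asset safe.”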

    Getting Started

    Enabling the ORION–Wiz integration is straightforward.

    If you’re already a Wiz customer, simply connect your Wiz account to ORION using our built-in integration settings. ORION will immediately begin ingesting Wiz’s classification results, enabling you to detect indicators of data loss in real time for your most sensitive assets.

    For new ORION customers, our team can walk you through setup in minutes – from connecting to your existing DSPM tools to deploying active data protection across your endpoints, SaaS apps, and cloud services.

    With ORION and Wiz working together, you don’t have to choose between visibility and protection. You get both in one connected workflow.

    See where your sensitive data lives. Track how it moves. And stop it from leaving your control.

    Contact us to schedule a demo and see the ORION–Wiz integration in action.

  • Finding Intent: The Three DLP Hazards Every Security Team Must Know

    Most DLP tools fail for a simple reason: they’re built to look at a single aspect of data loss. Sensitive data leaks out of organizations for three main reasons: human error, insider risk, and external attackers. Each behaves differently, requires a different approach to recognize and solve, and has a different frequency vs. impact curve.

    A DLP policy tuned for accidental sharing might miss slow insider exfiltration, while a strict policy for handling external attackers might frustrate and limit employees. The only durable path is to detect intent – why the action is happening, then respond accordingly.

    This article will break down the three hazards of data loss, map them on the frequency/impact axis, share real-world examples, and highlight the contextual intent signals (who’s acting, what’s the data, where it’s going, and how behavior deviates from normal) that separate harmless mistakes from real threats.

    Human Error: Common but Manageable

    Human error sits at the “frequent but lower impact” end of our frequency vs. impact curve.

    It’s by far the most common cause of data loss – employees accidentally emailing the wrong file, sharing a Google Drive folder with the wrong people, or pasting sensitive details into ChatGPT. Most of these incidents can be detected quickly and cause less damage than insider abuse or targeted attacks.

    That said, these incidents are extremely common, and the potential damage they cause can vary widely.

    All it takes is a sales manager accidentally sharing a Google Drive folder with “Anyone with the link” instead of restricting it to the client team, and sensitive pricing data is exposed. Such a mistake will rarely be maliciously exploited, but when it is, the consequences can range from mild to disastrous.

    In July 2025, TalentHook, a recruitment software firm, inadvertently left an Azure Blob storage container misconfigured, exposing nearly 26 million resumes. The breach revealed sensitive personal information, including names, email addresses, phone numbers, and educational and employment histories.

    Traditional DLP policies are often meant to block or flag such mistakes, but they do so bluntly by either flooding security teams with false positives or interrupting legitimate workflows.

    A modern approach should therefore infer user intent: analyze context (Was this file ever shared externally before? Does this domain look like a partner or not?) and nudge the user in real time, resulting in fewer breaches and fewer frustrated employees.
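
    A minimal sketch of that idea, assuming a known allow-list of partner domains (the domain list and return labels are hypothetical): check the share target’s context first, and when the signal is ambiguous, nudge the user in real time rather than hard-blocking.

    ```python
    # Illustrative context check for an external share. The partner list
    # and policy labels are assumptions for the sketch.

    PARTNER_DOMAINS = {"client-team.example.com", "lawfirm.example.org"}

    def review_share(file_shared_externally_before: bool, target: str) -> str:
        domain = target.split("@")[-1]
        if domain in PARTNER_DOMAINS:
            return "allow"
        if not file_shared_externally_before:
            # First external exposure of this file to an unknown domain:
            # ask the user to confirm instead of silently blocking.
            return "nudge: confirm you meant to share outside the partner list"
        return "allow_with_audit"

    print(review_share(False, "alex@gmail.com"))
    ```

    The nudge path is the key design choice: it converts a likely mistake into a one-click correction for the employee instead of another alert in the analyst queue.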

    AI Agents are Changing the Picture

    Another thing to consider here is AI agents. Just like human employees (if not more so), they can make mistakes – but at far greater speed and scale. An AI assistant given broad access to corporate systems might accidentally share the wrong files, misroute sensitive information, or expose data in the process of answering a seemingly benign request. What makes this especially dangerous is volume: where a human might misconfigure a single folder, an AI could replicate the same mistake across thousands of records in seconds.

    Insider Risk: Less Frequent, More Damaging

    Insider risk sits in the middle of the frequency vs. impact curve – less common than human error, but far more damaging when it happens. Unlike accidents, these incidents often involve intent: an employee misusing legitimate access to steal, leak, or sabotage sensitive data.

    In late 2024, a new staffer at Toronto-Dominion Bank hired to detect money laundering used her access to leak sensitive customer data to criminals via Telegram. Prosecutors reported that her phone contained images of 255 customer checks and personal information for 70 additional clients, and what began as trusted data access inside a bank turned into a direct pipeline for fraud.

    This case shows how difficult it is to contain insider risk. On the surface, the employee’s activity of accessing customer records was within her role. Traditional DLP rules like “large downloads = suspicious” or “sensitive files sent externally = block” wouldn’t have caught this, because the behavior didn’t break those patterns until it was too late.

    A real solution to the problem can only be found by analyzing indicators of data loss and learning what “normal” looks like for each role and user. Context matters:

    • Is this employee suddenly accessing 10× more customer data than usual?
    • Is data being copied at odd hours or just before the employee leaves the company?
    • Are peers in the same department performing the same operations?
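
    The first two context questions above can be sketched as simple baseline checks. This is illustrative only – the 10× threshold, the off-hours window, and the field names are assumptions, and a real system would weigh many more signals together:

    ```python
    # Illustrative insider-risk signals: compare today's record access to the
    # user's own baseline, and flag off-hours copying. Thresholds are assumed.

    def insider_signals(records_today: int, baseline_daily: float, hour: int) -> list[str]:
        signals = []
        if baseline_daily and records_today >= 10 * baseline_daily:
            signals.append("volume: >=10x normal customer-data access")
        if hour < 6 or hour >= 22:
            signals.append("timing: data copied at odd hours")
        return signals

    # A teller who normally touches ~300 records pulls 3,200 at 2 a.m.
    print(insider_signals(records_today=3200, baseline_daily=300, hour=2))
    ```

    Neither signal alone proves intent; it is the combination – volume, timing, and divergence from peers in the same role – that separates a busy quarter-end from exfiltration.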

    Malicious Actors: Rare, Catastrophic

    At the far end of the frequency vs. impact curve sit external attackers – the least common hazard, but by far the most destructive. These are the ransomware groups, cybercriminals, and state-sponsored attackers whose goal is to extract maximum value from your most sensitive data. Unlike human error or insider misuse, their tactics are deliberate, well-resourced, and often devastating.

    One such case was the attack on Change Healthcare in February 2024, when ransomware operators disrupted services across U.S. hospitals for weeks, demanding a $22 million ransom. The breach not only exposed data but also crippled operations, disrupted patient care, and inflicted massive reputational damage.

    Traditional DLP policies are rarely effective in these scenarios. Once attackers penetrate the perimeter, they often blend into legitimate traffic, moving data in ways that mimic normal business processes.

    The only viable defense in these cases from a DLP standpoint is real-time, intent-aware detection: spotting unusual data flows, recognizing when files are headed to suspicious destinations, and correlating this with attacker-like behaviors (use of admin accounts at odd hours or from odd locations, mass compression of files, or lateral movement across systems). In other words, security teams need the same behavioral context used for insider risk – but tuned for external actors who adapt quickly.

    What Can We Do? Unified and Intelligent DLP

    The real problem with most DLP tools isn’t that they’re useless. It’s that they’re incomplete.

    One solution is designed to prevent accidental sharing. Another tries to catch insider misuse. A third focuses on malware and exfiltration. Each may work in isolation, but together they leave significant blind spots – and both attackers and well-meaning employees slip through those gaps every day.

    The only durable way forward is a unified approach that understands intent. That means:

    • Contextual intelligence that knows the difference between an employee sending a customer contract to the wrong inbox vs. an engineer siphoning code to a personal repo.
    • Real-time prevention that can stop an exfiltration attempt in the moment, not weeks later in an audit log.
    • Adaptive, AI-driven learning that continuously tunes itself to normal behavior, supporting employees instead of blocking them with rigid policies.

    We must move from a one-dimensional filter to a living system that interprets why something is happening, not just what.

    Ask yourself this: Does our current DLP strategy actually cover all three hazards – human error, insider risk, and malicious actors – or are we betting everything on just one?

    An organization that answers this question honestly and builds for intent will be the one that prevents tomorrow’s data losses instead of merely reacting to them.