When the pandemic pushed businesses to embrace remote work, it inadvertently opened the door to this new breed of insider threat. As one FBI cyber official noted, the surging demand for IT talent and the normalization of remote hiring created an ideal cover for North Korean operatives.
Companies eager to fill tech roles and accustomed to interviewing candidates solely via Zoom or Skype have proven disturbingly easy to deceive. These shadow workers typically pose as non-North Korean nationals, claiming, for instance, to be South Korean, Chinese, Eastern European, or even U.S. citizens. They enter the interview process armed with fabricated resumes and fake identities. In one case, investigators discovered a candidate who “didn’t exist” at all – a synthetic persona created with deepfake technology.
Behind the scenes, North Korean operatives meticulously train for these interviews. According to threat intelligence analysts, this is an organized, state-backed operation – not just a few rogue actors. The job applicants are often highly skilled IT professionals handpicked by the regime. Many operate from safe locations in China or Russia, where they have better internet access, while maintaining the ruse that they’re based in the U.S. or elsewhere.
To avoid detection, they deploy a gamut of evasion tactics: paying willing Americans to rent out their home internet connections so login IP addresses appear U.S.-based, using VPNs and remote desktops, and collaborating with local enablers who handle any on-the-ground requirements.
Inside the North Korean remote work scheme
So how exactly does this scheme play out? It usually looks something like this: A shadow IT worker applies for a contract developer position at a U.S. company, pretending to be a U.S. resident. They might use a stolen identity or a fictitious one backed by fake documents. In some cases, criminal intermediaries in the U.S. help “validate” the applicant’s credentials – setting up sham LLCs or websites that pretend to be past employers or educational institutions.
Once the candidate is hired, a U.S.-based accomplice receives the company-issued laptop at their address (to avoid international shipping). That accomplice runs a so-called “laptop farm”, keeping the device connected to U.S. internet and power, while the real worker in North Korea (or China/Russia) remotely controls it.
A recently unsealed case provided a startling window into this setup. A 50-year-old woman in Arizona pleaded guilty to running a clandestine “laptop farm” out of her home on behalf of North Korean operatives. Over three years, she hosted computers for North Korean tech workers posing as Americans, making it appear they were working locally. She also shipped at least 49 company laptops overseas and maintained more than 90 devices connected in her house – each labeled with the name of the U.S. company it belonged to. The operation helped North Koreans infiltrate 309 U.S. companies (including an aerospace/defense contractor, a major TV network, a Silicon Valley tech giant, and others) and funnel more than $17 million in total earnings to the regime.
North Korean IT workers often coordinate in teams, effectively turning one remote “employee” into the output of multiple people. The goal for the regime is to maximize billable work hours and income, so these shadow workers are highly motivated – and unusually productive. This overachieving work ethic comes from the reality that each “worker” is actually a team back in North Korea, essentially running a relay race to extend working hours and quickly solve tasks. (It’s much easier to appear like a coding superstar when you have a bench of colleagues taking shifts behind one online identity.)
This productivity comes at a steep price for employers. The North Korean hires are not just earning salary – they are gathering intelligence and sometimes sabotaging from within. U.S. authorities say that in some instances, these workers have used their access to steal sensitive data and extort their employers. For example, when one North Korean operative’s cover was blown, he retaliated by hacking into his startup employer’s cryptocurrency wallet and stealing roughly $900,000 in crypto before fleeing.
Why companies fail to spot the imposters
It’s easy to wonder: How do these fakes get past savvy tech hiring teams? The uncomfortable truth is that traditional hiring practices weren’t designed to catch highly coordinated fraud. Many HR departments and team managers focus on verifying a candidate’s skills via coding tests, portfolio reviews, and technical questions. If those check out, they take the rest of the resume at face value.
The North Korean operatives excel at technical interviews (many are genuinely skilled developers), which lowers suspicion. Any minor oddities – camera off due to “connection issues,” reluctance to meet in person – might be shrugged off in today’s remote-first work culture.
Moreover, the fraudsters have continually escalated their deception game. One alarming trend is the use of real-time deepfake video technology. In these cases, the person on the video call isn’t showing their true face at all – it’s an AI-generated face puppeted in real-time to match the imposter’s lip movements.
Security researchers at Palo Alto Networks’ Unit 42 noted this “logical evolution” of the scheme: a North Korean operative can now interview multiple times for the same job under different fake personas by simply switching deepfake avatars.
Business and security repercussions
For businesses, the idea that a foreign adversary could be lurking within their DevOps team or IT department is chilling. The immediate risk is legal and financial: paying a North Korean agent is a violation of U.S. sanctions and laws, even if done unwittingly.
Companies caught up in these schemes may face government scrutiny, reputational damage, and the headache of unwinding years’ worth of work done by the imposter. If code was written by a sanctioned entity, does it need to be audited or thrown out? If sensitive data was accessed, do breach disclosure rules apply? These are thorny questions some firms have had to tackle after discovering a shadow worker on payroll.
The security implications are equally dire. By hiring an unknown outsider under false pretenses, an organization essentially grants system credentials and trust to a potentially hostile actor. North Korean state-backed workers have been linked not only to revenue generation but also to cyber espionage. Google’s threat intelligence team observed that these operatives have recently shifted from just earning paychecks to using their privileged access to steal data and facilitate cyberattacks against their employers.
In some cases, they pivot to become insider threats, exfiltrating proprietary information, customer data, or financial secrets. There’s also the extortion angle: as noted, some have attempted to hold company data hostage for ransom when their cover was blown.
Consider also the broader supply chain risk. If a fake developer managed to insert backdoors into software at a major tech company, that vulnerability could trickle out to thousands of downstream users or clients. North Korea’s cyber operatives are known for playing the long game. They might quietly place a foothold that isn’t exploited until years later. Thus, the presence of even one such mole could compromise not just the direct employer but potentially partner networks and customer systems.
Spotting the red flags during hiring
Given the stakes, what can companies do to weed out these imposters before they get inside? It starts with retooling the hiring and vetting process with security in mind. Hiring managers and security teams need to work hand-in-hand when screening remote candidates.
Some red flags are easier to catch than others:
- The profile picture on resumes or LinkedIn. If it looks oddly perfect or slightly unreal, it could be AI-generated (consider running a reverse image search or using AI-detection tools).
- The employment history. Fake workers often claim experience at companies or on projects that no one can quite verify. A thorough reference check can unravel these lies – call the supposed previous employer directly and confirm that the person actually worked there.
- Skills that don’t add up chronologically. A 22-year-old claiming 8 years of senior engineering experience should trigger scrutiny.
- Insistence on fully remote interaction, with excuses for why they can’t ever meet in person. In certain roles, this can be a warning sign on its own.
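The chronology check in particular lends itself to automation during resume screening. Below is a minimal sketch of that idea; the function names, field layout, and 18-year minimum career-start age are illustrative assumptions, not part of any vendor tool or the schemes described above:

```python
def plausible_experience(birth_year: int, claimed_years: int,
                         current_year: int, min_start_age: int = 18) -> bool:
    """Return True if the claimed years of experience are possible,
    assuming a career cannot begin before min_start_age."""
    age = current_year - birth_year
    return claimed_years <= max(0, age - min_start_age)


def chronology_flags(stints):
    """Flag inconsistencies in an employment history given as
    (start_year, end_year) tuples in resume order (oldest first)."""
    flags = []
    for start, end in stints:
        if end < start:
            flags.append(f"stint ends before it starts: {start}-{end}")
    for (_, prev_end), (next_start, _) in zip(stints, stints[1:]):
        if next_start < prev_end:
            flags.append(f"overlapping stints around {prev_end}")
    return flags


# The 22-year-old claiming 8 years of senior experience fails the check:
print(plausible_experience(birth_year=2003, claimed_years=8, current_year=2025))  # False
```

Checks like these won’t catch a well-crafted synthetic identity on their own, but they cheaply surface the resumes that deserve a human second look.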
During interviews, companies should not shy away from requiring candidates to show their face on video and present ID early in the process. The FBI explicitly recommends video interviews as a baseline for remote hires.
But don’t stop at just a casual FaceTime: consider a more deliberate identity verification step. Document authenticity checks and facial matching can catch a fraudster using someone else’s ID before they’re hired.
How document and identity proofing with ID Dataweb reduces shadow worker risk
Shadow workers thrive where identity proofing and risk mitigation are weak. They exploit the gap between a résumé and a real person. ID Dataweb closes that gap with a multilayered approach to identity proofing:
- Instant document authentication. Candidates photograph a government-issued ID, which is scanned for tampering, expiration, and country-specific security features in real time.
- Biometric face-to-document matching. BioGovID prompts the applicant to take a short selfie video. Liveness detection rules out deepfakes; facial comparison links the live image to the ID photo and flags any mismatch across 120+ countries.
- Device and phone possession checks. MobileMatch ties the asserted identity to a real mobile-carrier record and verifies one-time passcode delivery to the same device, proving the person controls the phone number.
- Risk-adaptive orchestration. ID Dataweb’s policy engine watches for TOR traffic, unusual IP shifts, or remote-desktop artifacts. When something feels off—say, a new laptop suddenly logs in from Moscow at 3 a.m.—the workflow steps up to an extra proofing challenge automatically.
Continuous reverification can be scheduled around key risk moments like first code commit, privilege escalation, or invoice approval so that a stolen session token doesn’t turn into six months of undetected access.
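To make the step-up idea concrete, a simplified version of that risk-adaptive logic can be sketched as a rule-based scorer. The signal names, weights, and threshold here are illustrative assumptions, not ID Dataweb’s actual policy engine or API:

```python
def stepup_required(session: dict, home_countries=frozenset({"US"})) -> bool:
    """Decide whether a session should be escalated to an extra
    identity-proofing challenge, based on simple risk signals.
    All field names and weights are illustrative, not a real API."""
    score = 0
    if session.get("tor_exit"):                       # traffic via a TOR exit node
        score += 3
    if session.get("country") not in home_countries:  # unexpected geolocation
        score += 2
    if session.get("remote_desktop"):                 # RDP/VNC artifacts on the endpoint
        score += 2
    if session.get("ip_changed_mid_session"):         # sudden IP shift
        score += 1
    return score >= 3  # threshold: one strong signal or two weaker ones


# The "new laptop suddenly logs in from Moscow at 3 a.m." case trips the check:
print(stepup_required({"country": "RU", "remote_desktop": True}))  # True
```

Real policy engines weigh far richer telemetry, but the design point is the same: no single signal blocks a legitimate hire, while combinations that match known shadow-worker tradecraft trigger reverification automatically.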
The result: recruiters can still hire fast, but only real people with real identities make it through security steps.