Louis Zezeran
5 March 2026
Human risk is the hardest “attack surface” to measure—until you watch real behavior
Security teams have become very good at spotting attackers. We can detect malware, exploit chains, suspicious logins, and unusual movement across systems. But there’s a category of incidents that doesn’t begin with “someone hacked in.”
It begins with a normal user.
A user with valid access. A user who knows the business. A user who can open the documents. A user who is allowed to do what they’re doing—until the pattern shifts just enough to become dangerous.
That’s the heart of this Cybercast conversation. Recorded during NEVERHACK Estonia’s Client Day 2026 in Tallinn, the episode sees Louis Zezeran sit down with Sander van den Nieuwenhuijzen, Incydr Sales Specialist for Northern Europe at Mimecast, to discuss Human Risk Management and the practical realities of insider threat detection.
Sander frames the challenge in a way that instantly makes sense: most organizations have invested heavily in awareness programs and simulated phishing tests. That’s useful—but it’s simulated risk. The moment people return to their real work—under time pressure, juggling tasks, sharing files, printing documents, using new tools like AI—the real risk shows up.
And critically: those risks don’t always look malicious at first glance.
The “8% causes 80%” reality: focus matters
Sander points to a pattern many SOC teams have felt in practice: a small portion of users can account for a large portion of incidents—an “8% drives 80%” kind of distribution. This isn’t about blaming staff. It’s about recognizing that risk is uneven, and the job of Human Risk Management is to identify where behavior is trending toward harm, before that harm becomes a reportable incident, a breach, or a reputational crisis.
This is especially relevant in a world where, as Louis notes, credential-based attacks and session theft make “legitimate activity” a bigger and bigger part of the threat landscape. If attackers can walk in the front door using valid identity signals, the line between “user” and “threat actor” can blur—and detecting unsafe behavior becomes essential.
What Incydr is actually doing: building a holistic risk picture
Louis summarizes Incydr as an advanced correlation engine for user behavior—and that’s a useful mental model. Incydr’s approach is not simply “one alert equals one incident.” It’s about observing multiple signals and shaping them into a risk narrative.
Sander breaks down the detection model into four layers:
- Source (where the activity originates)
- File/content (what the data actually is, and how sensitive it may be)
- User behavior (what’s normal vs anomalous for this person/role)
- Destination (where the data is going—apps, repositories, printers, etc.)
Those signals correlate into a risk score from 0 to 10, creating a consistent language for severity and response.
This matters because insider incidents often hide in plain sight. When each event is evaluated in isolation, it’s easy to say: “they’re allowed.” But when several “allowed” events stack together, the pattern becomes suspicious.
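To make the correlation idea concrete, here is a minimal sketch of how signals from those four layers could stack into a single 0-to-10 score. The `RiskEvent` structure, layer weights, signal descriptions, and severity values are illustrative assumptions, not Incydr’s actual model.

```python
from dataclasses import dataclass, field

# Illustrative weights per detection layer (assumed values, not Incydr's).
LAYER_WEIGHTS = {"source": 1.5, "file": 3.0, "behavior": 3.0, "destination": 2.5}

@dataclass
class RiskEvent:
    user: str
    # Each signal is (layer, description, severity in the range 0..1).
    signals: list = field(default_factory=list)

    def add_signal(self, layer: str, description: str, severity: float) -> None:
        self.signals.append((layer, description, severity))

    def score(self) -> float:
        """Stack weighted layer severities into a 0-10 score.

        One "allowed" signal stays low; several together climb fast.
        """
        total = sum(LAYER_WEIGHTS[layer] * severity
                    for layer, _, severity in self.signals)
        return round(min(total, 10.0), 1)

event = RiskEvent(user="jdoe")
event.add_signal("destination", "upload to personal cloud storage", 0.7)
print(event.score())  # 1.8: one "allowed" action in isolation stays low
event.add_signal("file", "matches client-contract classification", 0.8)
event.add_signal("behavior", "10x this user's normal upload volume", 0.9)
print(event.score())  # 6.9: stacked signals cross into review territory
```

The point of the toy model is the shape, not the numbers: evaluated one at a time, each signal stays below any sensible alert threshold; correlated, they tell a different story.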
Why “blocking” isn’t the default—and why that’s smarter than it sounds
One of the most practical insights in this episode is the strategy of influencing behavior before enforcing hard controls.
Sander argues that default blocking can backfire—particularly with modern tools like AI. If you block one AI service, people may simply move to another. If you block a behavior without context, you also risk harming productivity, triggering shadow IT, or creating hostility toward security.
Incydr leans into a graded model:
- Visibility and baseline understanding of behavior
- Nudges and guidance when risk rises (educate, steer toward sanctioned tools)
- Escalation when signals are ignored
- Control actions when risk crosses a critical threshold (often via integrated systems)
The nudges are especially interesting because they create two effects at once:
- They guide users toward safer choices in the moment
- They make users conscious that security controls are present and watching patterns—not just checking boxes
Louis highlights one nudge concept where, at a higher risk level, a user may be asked to write a justification for the action they’re attempting. That creates an audit trail, but it also triggers a human pause: “Should I really do this if I have to explain it?”
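Expressed in the same toy terms, the graded model is just a mapping from risk score to response tier. The thresholds and tier names below are assumptions for illustration; in practice they would be tuned per organization and wired into real controls.

```python
def respond(score: float) -> str:
    """Map a 0-10 risk score to a graded response tier.

    Thresholds are illustrative; real deployments tune them and
    trigger control actions through integrated systems (EDR, IAM).
    """
    if score < 4.0:
        return "observe"   # visibility only: keep building the baseline
    if score < 6.0:
        return "nudge"     # in-the-moment guidance toward sanctioned tools
    if score < 8.0:
        return "justify"   # ask the user to explain the action: an audit
                           # trail plus the deliberate pause Louis describes
    return "contain"       # critical: block, isolate, escalate to the SOC
```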
The law firm printer story: a perfect example of “allowed” becoming risky
Sander shares a story that captures the value of correlation.
A lawyer at the firm printed documents late in the evening, around 10:30 pm. In many environments, printing at night might be unusual but not inherently malicious. The lawyer also had legitimate permissions to access and print those documents. A typical compliance tool might shrug and say: “Authorized.”
But Incydr observed more than authorization:
- Time anomaly: late-night printing
- Destination anomaly: printing to a different printer than the user normally used—chosen to avoid attention
- Content sensitivity: through content inspection, the printed material contained PII
None of those signals proves wrongdoing on its own. Together, they create a pattern that deserves attention. Sander explains that the user likely selected a printer on another floor to avoid being noticed, then physically removed the printed PII from the office to sell it to a competitor: an insider scenario that standard controls can miss because it doesn’t look like an “attack.”
The takeaway is simple and powerful:
Permissions tell you what’s allowed. Patterns tell you what’s risky.
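Run the printer story through the toy scorer and tier mapping from the earlier sketches and the same lesson falls out: each signal is individually explainable, but together they land in the range where the graded model demands a justification. The severity values are invented for illustration.

```python
printing = RiskEvent(user="lawyer_42")
printing.add_signal("behavior", "printing at 22:30, outside baseline hours", 0.7)
printing.add_signal("destination", "printer on another floor, never used before", 0.8)
printing.add_signal("file", "content inspection: documents contain PII", 0.9)

print(printing.score())           # 6.8: no single signal proves intent
print(respond(printing.score()))  # 'justify': the pattern earns attention
```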
Identity context: the “departing employee” lens
Another strong section of the discussion is about how insider risk changes when HR context changes.
Sander explains a common reality: employees who plan to leave often show behavioral signals weeks or months before it becomes official. Once HR/IAM signals indicate someone is departing, Incydr can place them in a higher-risk group and apply a different analytical lens—potentially raising risk scoring and reviewing prior activity with the new context.
This is exactly the kind of situation where static policies fail. You can’t write a perfect policy for “what a departing employee might do,” because the behavior varies and the most valuable detections are often the ones you didn’t anticipate. But a system that can adapt context—role, case sensitivity, group membership, departure signals—can uncover patterns that otherwise blend into normal noise.
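One way to picture that “different analytical lens” is a multiplier attached to identity context, applied to new events once the HR/IAM signal arrives (and, in a real system, to a retroactive review of prior activity under the new context). The group names and multiplier values below are assumptions layered on the earlier sketch.

```python
# Illustrative multipliers per identity context (assumed values).
GROUP_MULTIPLIERS = {"default": 1.0, "elevated_access": 1.2, "departing": 1.5}

def contextual_score(event: RiskEvent, group: str = "default") -> float:
    """Re-weight an event's risk score by the user's identity context."""
    return round(min(event.score() * GROUP_MULTIPLIERS[group], 10.0), 1)

quiet = RiskEvent(user="dev_07")
quiet.add_signal("destination", "small archive copied to a USB drive", 0.4)
print(contextual_score(quiet))               # 1.0: blends into normal noise
print(contextual_score(quiet, "departing"))  # 1.5: same act, more scrutiny
```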
Integrations: from visibility to action (SOC, SIEM, EDR, controls)
A big practical question Louis asks is: How does this fit into an organization that already has a SIEM, SOAR, or Open XDR?
Sander’s answer is operationally grounded:
- Alerts can be forwarded using standard methods (e.g., syslog) into external SIEM/XDR stacks
- Analysts can receive not just an alert, but the context required to investigate efficiently
- Integrations with ecosystem partners and EDR tools allow containment actions (isolate endpoint, isolate identity) when needed
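Because syslog forwarding is a standard mechanism, a minimal sketch of shipping a correlated alert, with its context, to a SIEM’s syslog listener might look like the snippet below. The endpoint, field names, and `forward_alert` helper are placeholders for illustration; Incydr’s actual export format and vendor connectors are not shown here.

```python
import json
import logging
from logging.handlers import SysLogHandler

# Placeholder SIEM collector endpoint; substitute your own listener.
handler = SysLogHandler(address=("siem.example.internal", 514))
logger = logging.getLogger("insider-risk-forwarder")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def forward_alert(user: str, score: float, signals: list[str]) -> None:
    """Ship a correlated alert to the SIEM over syslog.

    Sending the signal list alongside the score gives the analyst
    the story, not just a number.
    """
    payload = json.dumps({
        "source": "insider-risk",
        "user": user,
        "risk_score": score,
        "signals": signals,
    })
    logger.warning(payload)

forward_alert("lawyer_42", 6.8,
              ["after-hours printing", "unfamiliar printer", "PII content"])
```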
This is where the “NEVERHACK angle” becomes important. Sander notes that when risk reaches high/critical, organizations need a clear path to escalation into a capable team—one that can investigate, validate, and respond. He references an operational benefit: up to 180 days of lookback to support incident response, so investigators aren’t stuck trying to reconstruct an insider timeline without sufficient data.
In other words: it’s not just “we flagged something.” It’s “here is the story of what happened, and here’s enough evidence to take action with confidence.”
Why this episode matters right now
The episode lands at an important moment. AI tools are accelerating productivity, but they’re also creating new exfiltration paths: copying sensitive content into prompts, moving data into unsanctioned tools, sharing files in new ways, and creating a “speed vs security” tension that many organizations feel daily. Sander calls out AI as a dual-use force—helpful on one hand, risk-amplifying on the other.
The episode matters because it reframes insider risk away from “catch the bad employee” and toward a more realistic mission:
- Identify risk patterns early
- Influence behavior to reduce incidents
- Escalate only when needed
- Give SOC teams the context to respond fast
Key takeaways you can apply
If you’re a security leader, IT manager, or business owner, here’s what you can take from this conversation:
- Awareness training is necessary—but it’s not enough. Real risk happens in real workflows.
- Policies catch what you already know. The incidents that hurt most are often the ones you didn’t predict.
- Context beats permission checks. “Allowed” activity can still be risky when patterns shift.
- Influence first, enforce later. Nudges and friction can reduce risk without breaking productivity.
- SOC outcomes improve when alerts include evidence. Lookback + correlated signals shorten investigation and reduce false positives.
Call to action
If you want to understand how modern insider risk detection works—especially in a world of AI tools, credential theft, and “legitimate” activity—this episode is for you.
Listen now to the NEVERHACK Estonia Cybercast episode with Sander van den Nieuwenhuijzen (Mimecast).
For more episodes, resources, and security insights, visit NEVERHACK Estonia, and subscribe so you don’t miss what’s next.