Louis Zezeran
January 22, 2026
Phishing in 2026 isn’t “an email problem” anymore
Phishing has quietly changed category. What used to be an annoying, easy-to-spot email has become a multi-channel social engineering ecosystem—powered by AI, accelerated by deepfake media, and backed by an underground market where attackers can buy capabilities “as a service.” In this CyberCast episode, Louis Zezeran sits down with Urmo Keskel from Phishbite to map the phishing landscape for 2026 and, more importantly, explain what real organizations can do about it.
Urmo brings a rare blend of experience: a long career in Estonia’s identity ecosystem (including years at SK ID Solutions) and hands-on exposure to how real people behave when they’re busy, stressed, distracted—and suddenly presented with a believable message that demands action. That human context matters, because the thread running through the entire discussion is simple: phishing works not because users are “stupid,” but because users are human.
What is phishing, really?
Urmo defines phishing in plain language: a scenario where someone tries to trick you by pretending to be someone else—often a trusted brand—so you’ll click, enter data, confirm a transaction, or otherwise do something that benefits the attacker. And crucially, it’s not limited to email. It can arrive via SMS, WhatsApp, Facebook Messenger, LinkedIn, phone calls, or any channel where trust and urgency can be manufactured.
That channel expansion is one of the biggest shifts for 2026. Email still matters, but attackers increasingly mix channels, moving victims from one platform to another to increase credibility and bypass defenses.
Why awareness training has to feel “real”
A standout theme is realism. Louis draws an analogy from training environments: people don’t learn effectively by memorizing theory. They learn when training mirrors real conditions—pressure, distractions, imperfect information, and emotional triggers. Phishing works the same way. It’s easy to spot a scam in a classroom. It’s harder at 3 p.m. on a Friday when your brain is already thinking about the weekend.
Phishbite’s core approach started with phishing simulations and expanded into broader “cyber hygiene” education (password habits, public networks, software updating, and more). But the big value is repeated exposure: giving staff the lived experience of being targeted and learning how quickly mistakes happen—even among people who consider themselves careful. Urmo notes that across continuously tested employees, a large portion still make mistakes, and the point isn’t to shame anyone—it’s to change behavior over time.
The AI shift: from generic scams to hyper-personalized lures
Historically, one reason Estonians could dismiss many scams was language quality: broken Estonian often gave attackers away. That advantage is fading. Urmo explains that AI tools make it easy to generate believable local-language content, and attackers can also tailor messages using public information—your role, your company, your industry, even your interests.
The scary part isn’t just “better grammar.” It’s personalization at scale. If an attacker knows your title and your sector, they can craft invitations, invoices, HR messages, and partner requests that sound exactly like the kind of thing you’d receive on a normal workday. And as Louis observes, legitimate marketing emails are often not very personalized—so when a phishing email is personalized, it can feel even more convincing than the real thing.
Deepfakes are moving from novelty to weapon
One of the most memorable parts of the episode is Urmo’s personal story: he received a deepfake video in the context of a scam involving an impersonated Estonian public figure. Even with strong awareness and a professional background, the timing and emotional context mattered—he accepted a connection request that he later recognized as suspicious. The lesson isn’t “gotcha.” It’s that attackers aim for moments when you’re least guarded: weekends, family situations, busy periods, and personal interests.
The scam also reveals an operational detail many people miss: attackers often use “money mules” or compromised accounts to move funds, creating distance between the scammer and the transaction. That complexity makes investigations harder and increases the chance a victim blames the wrong person.
Hybrid phishing: one attack, many channels
Urmo outlines how phishing increasingly jumps platforms—email to phone, LinkedIn to Telegram, message to QR code, comment thread to credential capture page. He shares a classic example: fake “LinkedIn Security” accounts posting warnings like “your account violates the rules—verify here.” It’s designed to trigger fear and immediate action, not careful thinking.
The practical takeaway: “Check the channel” is no longer enough. We have to check the context and the requested action. If something pushes you to act fast, to move to a different platform, or to bypass normal process, treat it as a red flag.
ClickFix: phishing that tricks you into infecting yourself
If there’s one tactic in this episode that deserves a big warning label for 2026, it’s ClickFix.
Urmo explains how traditional phishing often tries to steal credentials. ClickFix is different: it gets you onto a page that looks legitimate, then instructs you to “verify” or “fix” something by copying and executing a command—often via PowerShell or similar tools. In other words, the user becomes the delivery mechanism for malware or an infostealer.
The reason it’s effective is psychological. Many users don’t fully understand what they’re doing; they’re simply following “steps” that appear necessary. And on Windows, the path from “copy” to “run” can be streamlined enough that people never pause to question what’s happening.
Urmo also shares a very actionable mitigation: if typical end users don’t need PowerShell, restrict or disable it where possible. Reducing available tools reduces attack surface.
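For teams that want to check where they stand, the relevant settings are just registry values that Group Policy (or Set-ExecutionPolicy) writes. The Python sketch below is a minimal, Windows-only illustration, not anything from the episode or from Phishbite's tooling: it reads two commonly used locations to report the machine-wide execution policy and whether script block logging is enabled.

```python
"""Quick audit of PowerShell hardening on a Windows endpoint.

Minimal sketch: reads two well-known registry locations and reports
whether script execution is restricted and script-block logging is on.
Adapt the paths and expectations to your own baseline before relying on it.
"""
import winreg

CHECKS = [
    # (hive, key path, value name, human-readable description)
    (winreg.HKEY_LOCAL_MACHINE,
     r"SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell",
     "ExecutionPolicy",
     "Machine-wide PowerShell execution policy"),
    (winreg.HKEY_LOCAL_MACHINE,
     r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging",
     "EnableScriptBlockLogging",
     "Script block logging (Group Policy)"),
]

def read_value(hive, path, name):
    """Return the registry value, or None if the key/value is absent."""
    try:
        with winreg.OpenKey(hive, path) as key:
            value, _type = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

if __name__ == "__main__":
    for hive, path, name, label in CHECKS:
        value = read_value(hive, path, name)
        print(f"{label}: {value if value is not None else 'not configured'}")
```

Actual restriction is still a Group Policy, AppLocker, or Constrained Language Mode decision; a check like this only tells you which endpoints have been left wide open.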
Session hijacking “as a service”
Another shift Louis calls out is broader than phishing: many incidents are no longer “break in through a vulnerability,” but “log in through the front door” using stolen credentials, active sessions, or cookies. Urmo connects ClickFix and credential phishing to a bigger underground economy: sessions can be bought in bulk and sold cheaply, with pricing rising based on role and access.
This matters because it changes the threat model. You don’t need a highly skilled attacker targeting you personally; you can become collateral damage in a marketplace where access is commoditized.
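On the defensive side, anything that shortens the useful life of a stolen session lowers its market value. As an illustration only (Flask is an arbitrary example framework, not something discussed in the episode), a web app can set strict cookie flags and a short session lifetime:

```python
"""Hardened session-cookie settings for a Flask app.

Minimal sketch: none of this stops an infostealer on the endpoint, but
strict cookie flags and short lifetimes make stolen sessions expire
faster and travel less well once they reach a marketplace.
"""
from datetime import timedelta
from flask import Flask, session

app = Flask(__name__)
app.config.update(
    SECRET_KEY="change-me",                         # placeholder; load from a secret store
    SESSION_COOKIE_SECURE=True,                     # only sent over HTTPS
    SESSION_COOKIE_HTTPONLY=True,                   # not readable from JavaScript
    SESSION_COOKIE_SAMESITE="Lax",                  # limit cross-site sending
    PERMANENT_SESSION_LIFETIME=timedelta(hours=2),  # short re-authentication window
)

@app.route("/login", methods=["POST"])
def login():
    # ...verify credentials and MFA here...
    session.permanent = True      # apply PERMANENT_SESSION_LIFETIME
    session["user"] = "alice"     # hypothetical user id
    return "ok"
```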
Hiring attacks: weaponizing recruitment workflows
A particularly nasty variant targets IT and technical staff: “hiring attacks.” The attacker poses as a recruiter or company and, as part of the assessment, asks candidates to download code, compile something, or run a “test task.” Because the activity resembles legitimate development work, traditional endpoint protection may not catch it—especially if the payload is embedded in a cloned repository or obfuscated inside a build process.
Urmo notes these scams often offer above-market salaries to override skepticism. The emotional hook is real: excitement, curiosity, career opportunity—and suddenly caution drops.
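A practical habit for anyone handed a "test task": triage the repository before building or running it. The sketch below is a minimal, illustrative pass, not a real detector; the heuristics (npm lifecycle scripts and long base64-like blobs) are common hiding places, and a clean result proves nothing.

```python
"""Quick triage of a 'recruiter test task' repo before you build or run it.

Minimal sketch with illustrative heuristics only: flags npm lifecycle
scripts (a common malware hook) and unusually long base64-like blobs.
Failing this check is a reason to stop; passing it is not a green light.
"""
import json
import re
import sys
from pathlib import Path

BASE64_BLOB = re.compile(r"[A-Za-z0-9+/=]{200,}")   # arbitrary length threshold
SUSPICIOUS_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def scan(repo: Path) -> list[str]:
    findings = []
    # 1) npm lifecycle scripts run automatically on install
    for pkg in repo.rglob("package.json"):
        try:
            scripts = json.loads(pkg.read_text(errors="ignore")).get("scripts", {})
        except json.JSONDecodeError:
            continue
        for hook in SUSPICIOUS_HOOKS & scripts.keys():
            findings.append(f"{pkg}: runs '{scripts[hook]}' on '{hook}'")
    # 2) long encoded blobs in source files often hide a second-stage payload
    for path in repo.rglob("*"):
        if path.is_file() and path.suffix in {".js", ".ts", ".py"}:
            if BASE64_BLOB.search(path.read_text(errors="ignore")):
                findings.append(f"{path}: contains a long base64-like blob")
    return findings

if __name__ == "__main__":
    repo = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for finding in scan(repo) or ["no obvious red flags (which proves nothing)"]:
        print(finding)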
Supply-chain and invoice fraud: attack the weakest link
Phishing isn’t only about stealing passwords. It’s also about manipulating business processes—like payments.
Urmo and Louis discuss the logic of supply-chain attacks: larger organizations may have stronger controls, so attackers compromise smaller partners first, sit quietly to learn processes, then step in at the right moment—often with a business email compromise (BEC) style invoice redirect. In this model, a small company may not even be the “real” target, but it becomes the entry point.
That’s why this episode matters for SMEs: your security posture affects your customers, your partners, and your reputation.
QR phishing: the link you can’t see
QR codes are “just links,” but they come with two modern twists:
- QR codes in emails can move victims from a more protected environment (work PC) to a less protected one (mobile phone), increasing success rates.
- Physical QR codes can be replaced with stickers—restaurants, parking meters, posters—redirecting you to scam payment pages or credential traps. Urmo mentions cases in neighboring countries where restaurant QR codes were tampered with.
The simple habit shift: treat QR codes like URLs. Preview where they go. Be suspicious of shortened links. And in physical spaces, consider whether a sticker overlay looks “off.”
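On a desktop, "preview where they go" can even be scripted. The sketch below assumes the third-party pyzbar, Pillow, and requests packages: it decodes a QR image and expands any redirect chain with a HEAD request (note that this does contact the destination server) so you can read the final URL before anyone scans the code on a phone.

```python
"""Preview where a QR code really points before opening it on a phone.

Minimal sketch assuming the third-party pyzbar, Pillow and requests
packages. It decodes the QR payload from an image and, for http(s)
links, follows redirects with a HEAD request to reveal the final URL.
"""
import sys
from PIL import Image                # pip install pillow
from pyzbar.pyzbar import decode     # pip install pyzbar (needs the zbar library)
import requests                      # pip install requests

def preview(image_path: str) -> None:
    for symbol in decode(Image.open(image_path)):
        payload = symbol.data.decode("utf-8", errors="replace")
        print(f"QR payload: {payload}")
        if payload.lower().startswith(("http://", "https://")):
            # Expand shorteners and redirect chains without rendering the page.
            # Note: this still makes a network request to the link.
            resp = requests.head(payload, allow_redirects=True, timeout=10)
            print(f"Final destination: {resp.url}")

if __name__ == "__main__":
    preview(sys.argv[1])   # e.g. python qr_preview.py poster.png
```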
Measuring what works: behavior change over time
One of the strongest credibility moments in the episode is when Urmo shares outcome data from continuous training: new customers may start with a high click rate, but after sustained testing, that rate can drop significantly. That’s the point—behavior change, not “catching people out.”
They also discuss an insight leaders often miss: click rates can spike during busy seasons. That’s valuable signal. It shows that risk increases under pressure, and it gives teams a chance to adjust—slow down processes, add approvals, increase reminders, or schedule awareness nudges during high-stress periods.
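That kind of seasonal signal is easy to surface once simulation results are logged somewhere. The numbers in the sketch below are invented for illustration; the idea is simply to compute a monthly click rate and flag months that jump well above the trailing average.

```python
"""Flag months where phishing-simulation click rates spike above baseline.

Minimal sketch with invented numbers; real inputs would come from your
simulation platform's export. A 'spike' here is more than 1.5x the
trailing three-month average, an arbitrary threshold to adjust locally.
"""
# month -> (emails clicked, emails delivered) from simulations
results = {
    "2025-09": (31, 400),
    "2025-10": (27, 410),
    "2025-11": (24, 395),
    "2025-12": (52, 405),   # year-end rush
    "2026-01": (26, 420),
}

months = sorted(results)
rates = {m: clicked / delivered for m, (clicked, delivered) in results.items()}

for i, month in enumerate(months):
    rate = rates[month]
    window = [rates[m] for m in months[max(0, i - 3):i]]   # trailing three months
    baseline = sum(window) / len(window) if window else None
    spike = baseline is not None and rate > 1.5 * baseline
    flag = "  <- spike: add reminders/approvals" if spike else ""
    print(f"{month}: {rate:.1%} click rate{flag}")
```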
The mindset takeaway: slow down and train continuously
Urmo’s closing message is refreshingly human: everyone makes mistakes. The goal is awareness, practice, and slowing down before clicking, entering data, or confirming transactions.
For listeners, the immediate action steps are clear:
- Assume phishing can arrive via any channel.
- Treat urgency as a red flag.
- Be especially cautious when asked to run commands or “fix” something via copy/paste.
- Reduce unnecessary tools and privileges on endpoints.
- Run continuous, realistic simulations—not one-off training.
- Use metrics (click rate, repeat offenders, seasonal spikes) to guide improvements without blame.
Listen now to hear the full breakdown of 2026’s phishing tactics, and how to build real-world resistance across your organization. For more episodes, subscribe to CyberCast and follow Louis Zezeran and Urmo Keskel on LinkedIn.