Louis Zezeran
February 19, 2026
The future of identity isn’t just “more secure”—it’s more delegated
AI is rapidly moving from “answering questions” to “doing work.” That shift is exciting—until you ask a simple question: when an AI agent acts for you, whose identity is it using? And if something goes wrong, who is responsible? In this episode of the NEVERHACK Estonia Cybercast, host Louis Zezeran sits down with Mihkel Tammsalu, Head of eID & Trust Services at SK ID Solutions, to explore the next chapter of digital trust: identity in an AI-driven world.
Estonia is a perfect place to ask these questions. For years, Estonians have lived with a remarkably mature digital identity system—ID cards, Mobile-ID, Smart-ID, digital signatures, and widespread online services that make “trusting the internet” feel normal. But AI introduces a new actor into that trust relationship. The internet was largely built for humans interacting with services. Now we’re entering an era of agents interacting with services—and that requires new rules, new architectures, and very likely new legal thinking.
What “trust” means when machines are involved
Early in the conversation, Louis goes back to fundamentals: before digital signatures, there were physical signatures—and before that, trust was personal. You believed a signature meant something because society collectively agreed it represented the individual behind it. Mihkel offers a useful reframing: trust between humans is social and psychological, but digital trust is different. When machines are on both ends, trust becomes less about intuition and more about verification—a structured process of checking claims and ensuring rules were followed.
This distinction matters because AI doesn’t “earn trust” the way humans do. It doesn’t have a reputation in the human sense. It doesn’t feel accountable. So if we want AI agents to safely perform meaningful actions, we need systems that can verify (see the sketch after this list):
- what the agent is allowed to do,
- under which conditions,
- for how long,
- and with what level of oversight.
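To make that concrete, here is a minimal sketch of what such a verification step could look like. The `DelegationGrant` shape and `verifyAction` function are hypothetical illustrations, not an existing standard or an SK ID Solutions format; a real system would use signed tokens and a proper policy engine.

```typescript
// Hypothetical shape of a delegated authorization; every field name
// here is an assumption for illustration, not an existing standard.
interface DelegationGrant {
  agentId: string;          // which agent is acting
  principalId: string;      // the human the agent acts for
  allowedActions: string[]; // what the agent is allowed to do
  conditions: string[];     // under which conditions (a real policy engine would evaluate these)
  expiresAt: Date;          // for how long the grant is valid
  oversight: "autonomous" | "notify" | "require-approval"; // level of oversight
}

// Verification is a structured check, not intuition: every claim in the
// grant is tested before the requested action is allowed to proceed.
function verifyAction(
  grant: DelegationGrant,
  action: string,
  now: Date = new Date(),
): { allowed: boolean; reason: string } {
  if (now > grant.expiresAt) {
    return { allowed: false, reason: "grant expired" };
  }
  if (!grant.allowedActions.includes(action)) {
    return { allowed: false, reason: `action "${action}" was never delegated` };
  }
  if (grant.oversight === "require-approval") {
    return { allowed: false, reason: "needs human approval first" };
  }
  return { allowed: true, reason: "all checks passed" };
}
```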
Identity as “a set of claims” (and why signatures aren’t the whole story)
Mihkel breaks identity down in a way that immediately clarifies why AI complicates everything: identity isn’t a single thing—it’s a set of claims. A signature can be one claim. A certificate can be another. Your attributes, roles, and permissions can be additional claims. Identity is essentially what other parties are willing to accept as true about you, based on evidence and verification.
And that leads naturally to certificates and trust service providers. A certificate on its own is an assertion—something that claims to be true. The real power comes from verification: confirming that the certificate was issued by a trustworthy authority operating under a known policy framework. Mihkel describes SK ID Solutions’ role bluntly and memorably: their business is to sell trust—to be the third party that anchors verification so others can rely on it.
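A tiny sketch of that idea: a claim only becomes part of an accepted identity once it traces back to an issuer the relying party already trusts. The `Claim` shape and the trust-anchor check below are simplified assumptions; real verification also involves cryptographic signatures, revocation checks, and validity periods.

```typescript
// A claim asserts something; it only becomes trustworthy once verified
// against a party you already trust. The shapes here are illustrative.
interface Claim {
  subject: string;   // who the claim is about
  attribute: string; // e.g. "age-over-18", "authorized-signatory"
  issuer: string;    // who vouches for it
}

// Trust anchors: issuers the relying party has decided to rely on,
// e.g. a qualified trust service provider.
const trustedIssuers = new Set(["SK ID Solutions"]);

// An identity is just the set of claims a relying party accepts as true.
function acceptedIdentity(claims: Claim[]): Claim[] {
  // Reduced here to a single question: was the issuer trusted?
  return claims.filter((c) => trustedIssuers.has(c.issuer));
}
```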
The AI problem: “acting on my behalf” with my credentials
Here’s where the episode gets practical—and slightly unsettling. Today, when an AI agent does something for you inside a service, it usually does it using access that ultimately belongs to you. That means that—legally and operationally—responsibility flows back to the human. If the AI does something helpful, we celebrate productivity. If it does something harmful, we immediately ask: “Why did you let it do that?”
Louis repeatedly brings the conversation back to that accountability question because it’s the one that will shape regulation, product design, and user adoption. And the conclusion is clear: full delegation is too dangerous. The future needs something more nuanced.
Delegation is the missing layer
A major idea in this episode is that we need a formal mechanism to delegate limited rights to AI agents—rather than giving them full access. Think of it as the difference between:
- handing someone your house keys, bank card, and passport, versus
- giving them a one-time door code that only works for the next hour and only opens one door.
In the episode, the example is updating a government registry—like changing an address. You might want an agent to do it for you, but you wouldn’t want that same agent to have broad, ongoing access to everything you can do in e-services.
The delegation concept naturally introduces guardrails: scope, time limits, and approval steps. And it leads to a model many people already understand: “human in the loop.” Low-risk tasks can be done autonomously (“handle these emails, show me what you did”). Higher-risk tasks require confirmation (“draft the purchase, but come back before you pay”). Over time, trust can be earned through consistent performance—just like with people.
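Put in code, the door-code analogy might look like the sketch below. The `ScopedGrant` shape, the scope string, and the one-hour window are all invented for illustration; the point is that the grant carries its own limits, and a risky step stays blocked until a human confirms.

```typescript
// A sketch of delegation with guardrails, in the spirit of the one-hour
// door code: narrow scope, short lifetime, single use, and a mandatory
// check-in for risky steps. Names and numbers are illustrative only.
interface ScopedGrant {
  scope: string;              // exactly one door, not the whole keyring
  validUntil: Date;           // e.g. one hour from issuance
  usesRemaining: number;      // one-time codes start at 1
  approvalRequired: boolean;  // "come back before you pay"
}

function issueOneTimeGrant(scope: string, minutes: number): ScopedGrant {
  return {
    scope,
    validUntil: new Date(Date.now() + minutes * 60_000),
    usesRemaining: 1,
    approvalRequired: false,
  };
}

// The agent may use the grant once, for one scope, inside the window;
// if approval is still pending, the action is held until a human confirms.
function useGrant(grant: ScopedGrant, scope: string): boolean {
  const ok =
    grant.usesRemaining > 0 &&
    scope === grant.scope &&
    new Date() <= grant.validUntil &&
    !grant.approvalRequired;
  if (ok) grant.usesRemaining -= 1;
  return ok;
}

// Example: delegate a single address change, valid for one hour.
const grant = issueOneTimeGrant("registry:update-address", 60);
useGrant(grant, "registry:update-address"); // true once, then the grant is spent
```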
A surprisingly useful analogy: managing AI like a junior employee
One of the most memorable moments in the episode is when Louis compares AI delegation to leadership. If you’re a manager, you don’t delegate a critical, high-stakes task to an unpredictable person with no supervision. You delegate based on proven reliability, clear instructions, and appropriate oversight. The same approach applies to AI agents.
That framing is valuable because it turns “AI safety” into something concrete (see the sketch after this list):
- define boundaries,
- specify success criteria,
- require approvals where risk is high,
- and gradually increase autonomy only when justified.
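One hedged sketch of how “earned autonomy” could be encoded: the agent’s permitted level of independence rises only with a clean track record, exactly as you would expand a junior employee’s responsibilities. The thresholds below are arbitrary placeholders, not a recommendation.

```typescript
// Track record of an agent's delegated work; field names are assumptions.
interface TrackRecord {
  completed: number; // tasks finished without issue
  incidents: number; // tasks that went wrong
}

type Autonomy = "supervised" | "notify-after" | "autonomous";

// Autonomy increases only when justified by demonstrated reliability;
// any incident drops the agent back to full supervision.
function autonomyLevel(record: TrackRecord): Autonomy {
  if (record.incidents > 0 || record.completed < 10) return "supervised";
  if (record.completed < 50) return "notify-after";
  return "autonomous";
}
```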
It also highlights a hidden challenge: the mental overhead of managing delegation. Even if an agent could save you time, it might not feel worth it if you have to configure permissions, time windows, and constraints every time. So the systems that win will likely be those that make delegation intuitive—perhaps even presenting daily summaries and “pending approvals” in a simple, human-friendly way.
Beyond individuals: identity already exists for companies and machines
A key insight is that we’re not starting from zero. We already use identity concepts for multiple categories:
- natural persons,
- legal entities (companies),
- and machines (like IoT devices that have certificates and participate in trust chains).
AI agents don’t neatly fit into only one category, but the existence of these parallel identity models suggests the future may involve multiple delegation schemes—different rules for different contexts, and different levels of traceability depending on the service.
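As a rough sketch, the existing categories could extend to agents along these lines. Every field here is speculative, since no such scheme is standardized yet; the one structural idea worth showing is that an agent’s identity keeps a mandatory link back to the principal it acts for.

```typescript
// Subject kinds that identity systems already handle today.
type SubjectKind = "natural-person" | "legal-entity" | "machine";

interface Subject {
  kind: SubjectKind;
  id: string;
}

// One possible agent model (hypothetical): an identity of its own, plus
// a pointer to the principal it acts for, so actions stay traceable.
interface AgentIdentity {
  agentId: string;
  actsFor: Subject;                       // delegation always traces to a principal
  traceability: "full" | "pseudonymous";  // may differ per service and context
}
```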
What changes next: APIs, machine-to-machine services, and business model disruption
Mihkel’s outlook on Estonia’s architecture is pragmatic: we likely don’t need to rebuild everything. Human-to-service interaction will continue for a long time. But services—especially e-services—will increasingly need machine-to-machine interfaces so agents can act reliably without screen-scraping or “pretending to be a human clicking buttons.”
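Here is a hedged sketch of what that machine-to-machine interaction could look like: the agent calls a documented endpoint and presents its delegated authorization explicitly, instead of imitating a human in a browser. The URL, header scheme, and payload are entirely hypothetical; no Estonian e-service exposes this exact API.

```typescript
// Hypothetical agent-to-service call; endpoint and header are invented.
async function updateAddress(delegationToken: string, newAddress: string) {
  const res = await fetch("https://example-registry.test/api/v1/address", {
    method: "PUT",
    headers: {
      // The agent presents its delegated authorization explicitly,
      // rather than pretending to be a human clicking buttons.
      "Authorization": `Delegation ${delegationToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ address: newAddress }),
  });
  if (!res.ok) {
    throw new Error(`registry rejected the request: ${res.status}`);
  }
  return res.json();
}
```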
Louis then takes the idea one step further: if services become clean APIs for agents, many business models built on human attention—ads, upsells, dark patterns, and interface manipulation—could be disrupted. If your AI can book a flight via an API, the entire experience of being “marketed to” changes. That doesn’t mean advertising disappears—it means the battleground shifts. The episode even toys with the idea of user-controlled “marketing tolerance” for agents: how open should your agent be to suggestions, upsells, and alternative offers?
Why this episode matters
This isn’t just a technical conversation. It’s a preview of how society may adapt when “acting online” no longer requires a human hand on the keyboard. The trust systems we built for people will be tested by autonomous, scalable agents. The winners—countries, platforms, and companies—will be those that solve delegation cleanly: enabling productivity without handing over unlimited power.
If you work in cybersecurity, identity, compliance, government digital services, or product design, this episode gives you a sharp mental model: trust becomes verification; identity becomes claims; and AI forces delegation to become explicit.
Call to action: Listen to the episode now, visit NEVERHACK Estonia online, and subscribe for more conversations that connect real-world security challenges with what’s coming next.