88% of companies have experienced AI agent security incidents, but only 22% manage their agents as distinct identities.


Okta CEO Todd McKinnon appeared on The Verge and mentioned something that caught my attention:
AI agents shouldn't just be tools; they should have their own identities. Log in, authenticate, and leave logs just like employees do.
Here's the background.
AI agents are increasingly deployed inside enterprises, able to access databases, call APIs, and send emails. Yet most companies still run agents under the account permissions of whoever created them.
What does this mean? If an agent causes an incident, there is no way to tell who authorized it, what it did, or when it did it.
McKinnon's logic is: agents need independent identities, independent permissions, independent logs, and a kill switch. If an agent behaves abnormally, it can be shut down with one click.
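The design McKinnon describes can be sketched in a few lines. This is a minimal illustration, not any real Okta API; all names here (`AgentIdentity`, the `"db:read"` permission strings, etc.) are hypothetical:

```python
import time

class AgentIdentity:
    """Hypothetical sketch: an agent with its own identity,
    scoped permissions, an audit log, and a kill switch."""

    def __init__(self, agent_id, permissions):
        self.agent_id = agent_id
        self.permissions = set(permissions)  # e.g. {"db:read", "email:send"}
        self.audit_log = []                  # who/what/when trail
        self.active = True                   # kill switch state

    def act(self, permission, action):
        """Run an action only if this identity is alive and authorized."""
        if not self.active:
            raise PermissionError(f"{self.agent_id} has been shut down")
        if permission not in self.permissions:
            self.audit_log.append((time.time(), permission, "DENIED"))
            raise PermissionError(f"{self.agent_id} lacks {permission}")
        self.audit_log.append((time.time(), permission, "OK"))
        return action()

    def kill(self):
        """One-click shutdown: all future actions are refused."""
        self.active = False

agent = AgentIdentity("report-bot", {"db:read"})
agent.act("db:read", lambda: "rows fetched")  # allowed and logged
agent.kill()                                  # agent can no longer act
```

The point of the sketch is that authorization, auditing, and the kill switch all attach to the agent's own identity, not to a human creator's account.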
I believe that agent identity will become a core topic for enterprise AI in the second half of 2026.
Whoever gets this infrastructure right first will own the tollbooth for the next wave of AI infrastructure.