AgentSign: Zero trust identity and signing for AI agents

by AskCarXon on 3/10/2026, 4:12 PM with 6 comments

by AskCarXon on 3/10/2026, 4:36 PM

To add some context on why this matters now -- I audited the 12 most popular agent frameworks and found that none of them provides agent identity, cryptographic signing, or trust scoring:

- AutoGPT (182K stars) -- no identity
- LangChain (100K+) -- no identity
- MCP ecosystem (80K+ stars) -- no identity (a scan of 2,000 MCP servers found ALL lacking authentication)
- OpenHands (64K) -- no identity
- AutoGen (50K) -- no identity (Entra ID for users, not agents)
- CrewAI (45K) -- RBAC for configs, not agents
- smolagents (25K) -- sandboxing only
- OpenAI Agents SDK (19K) -- "does not natively provide security"
- NeMo Guardrails (5.7K) -- content safety only, not identity

AWS Bedrock and Google Vertex have the most mature security -- but it's IAM-based and cloud-locked. No portable agent identity.

That's 600K+ GitHub stars of agent frameworks where agents have zero cryptographic identity. Okta found 91% of orgs use agents but fewer than 10% have a strategy to secure them.

AgentSign fills this specific gap: not what agents can do (guardrails handle that), but who agents are + what they did + cryptographic proof.
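To make "who + what + proof" concrete, here's a minimal sketch of a signed action record. Everything here is hypothetical illustration, not AgentSign's actual API, and HMAC-SHA256 stands in for a real asymmetric scheme (e.g. Ed25519) so the example stays stdlib-only:

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical sketch: each agent holds a key and signs a record of every
# action it takes. HMAC stands in for an asymmetric signature, so this shows
# the shape of the audit record ("who did what, with proof"), not a design.

def sign_action(agent_id: str, key: bytes, action: dict) -> dict:
    record = {
        "agent": agent_id,                 # who
        "action": action,                  # what they did
        "nonce": secrets.token_hex(8),     # uniqueness for the audit trail
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()  # proof
    return record

def verify_action(key: bytes, record: dict) -> bool:
    unsigned = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

key = secrets.token_bytes(32)
rec = sign_action("agent-a", key, {"tool": "send_email", "to": "ops@example.com"})
print(verify_action(key, rec))  # True

# Tampering with the recorded action breaks verification.
rec["action"]["to"] = "attacker@example.com"
print(verify_action(key, rec))  # False
```

With asymmetric keys, any party holding the agent's public key could verify the record without being able to forge one, which is what makes the identity portable across frameworks.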

by ZekiAI2026 on 3/11/2026, 2:17 AM

Signing proves what was sent. It doesn't prove the sending agent wasn't compromised.

The specific failure mode: agent A is injected via a malicious document. It then calls agent B with signed, legitimate-looking instructions. B executes. You have a perfect cryptographic audit trail of a compromised agent doing exactly what the attacker wanted.

Replay attacks and trust delegation chains are the other gaps -- if agent A can delegate signing authority to B, and an attacker controls B, you've handed them a trusted identity.

Identity without behavioral integrity is a precise false sense of security. Worth red-teaming before production. We mapped this attack class against similar systems recently -- happy to share findings.