SkillSeal: Mind the Gap

Published on: 2026-02-20

By: Ian McCutcheon

There's a gap. Everyone building agentic AI knows it's there. The agents run skills — Markdown files, plugin configs, hook scripts — and those artifacts function as installers. They have access to the filesystem, the shell, the network. And right now, there is no standard way to know who wrote them or whether someone changed them after the fact.

That's not a hypothetical. Audits of major skill repositories have found that 12 to 20 percent of published skills are actively malicious. Not theoretically dangerous. Actively delivering malware.

We all know this is a gap. We all know no single tool solves everything. But a distributed web of trust around agentic AI — wherever it can be applied without effort — is a win for everyone.

That's SkillSeal.




The Conversation

Imagine you build agent platforms for a living. You work at one of the big AI companies. Your marketplace has thousands of skills, and you're responsible for making sure they don't eat your users alive. What do you do today? You run them through some internal review. Maybe static analysis. Maybe a human looks at them. And then you publish them, and the user trusts the marketplace.

But what's the artifact of that review? Where's the receipt? If your security team spent three days vetting a skill, what does the user's agent have to show for it? A listing on the marketplace page? A badge in the UI?

That's not verifiable. That's a promise backed by a web page.

SkillSeal makes the review a cryptographic fact. When your team vets a skill and decides it's safe, they sign it. Not metaphorically. With GPG, with SSH — real keys tied to real identities. The signature travels with the skill. Any agent, anywhere, can verify it without calling home to your API. The trust is in the artifact, not the platform.


What It Actually Is

SkillSeal is a lightweight cryptographic signing framework for LLM agent skills and plugins. Authors sign their work. Reviewers can independently attest to it. A local trust store lets agents make deterministic trust decisions — no user intervention for known authors, hard blocks for anything unsigned or tampered with.

It's a CLI tool. No servers, no daemons, no databases. Signing takes milliseconds. Verification is a single command. If the author already has SSH keys for git — and they do — they can start signing today with zero setup.
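SkillSeal's own command surface is documented in the whitepaper; as a sketch of the underlying primitive, here is artifact signing with the SSH keys an author already has for git, using OpenSSH's `ssh-keygen -Y`. The file names and the `skillseal` namespace are illustrative, not the tool's actual interface:

```shell
# Generate a throwaway Ed25519 key (in practice: the author's existing
# git signing key).
ssh-keygen -t ed25519 -N '' -f ./demo_key -q

# A toy skill artifact.
echo 'name: hello-skill' > SKILL.md

# Sign it. This produces a detached signature, SKILL.md.sig, that
# travels alongside the artifact.
ssh-keygen -Y sign -f ./demo_key -n skillseal SKILL.md

# Verifiers map identities to public keys in an allowed-signers file.
printf 'author@example.com %s\n' "$(cat demo_key.pub)" > allowed_signers

# Verification is local and offline: no API call, no phone-home.
ssh-keygen -Y verify -f allowed_signers -I author@example.com \
  -n skillseal -s SKILL.md.sig < SKILL.md
```

Signing is one command, verification is one command, and the only state involved is a pair of files on disk.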

The whitepaper covers the full architecture, threat model, and protocol. If you want the deep technical detail:

📄 SkillSeal Whitepaper (PDF)


Why the Big Companies Should Care

Here's the part that matters if you're building marketplaces.

You already review skills before they go live. That process exists. SkillSeal just gives the process an output that means something. When your security team finishes reviewing a skill and decides to publish it, your team attests to it — with your own keys, your own identity. Not the author's keys. Yours. Your security team's GPG signature on that specific version says "we reviewed this, and we're putting our name on it."

This is an important distinction. You're not vouching for the author. Joe might write great code today and go rogue tomorrow. You're vouching for this artifact, this version, right now. The author's signature may also be on the code — that's their choice, and users can learn to trust individual authors independently over time. But the marketplace's attestation is what carries weight at scale. Different products or divisions within your organization can sign with different keys. Your AI coding assistant marketplace and your agent skills marketplace don't need to share a security identity.

If the author pushes an update, your attestation goes stale. Your team reviews again, attests again. Or doesn't — and the user's agent sees the staleness and acts accordingly. If your team discovers a problem with a skill they previously attested, they publish a destatement — a negative attestation that blocks execution for every user who trusts your organization. You revoke what you endorsed. You stay in control.
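The staleness mechanic falls out of the cryptography: a signature binds to exact bytes, so any update invalidates the previous attestation automatically. A minimal demonstration with `ssh-keygen -Y` (the key and identity names are placeholders, not SkillSeal's format):

```shell
# A reviewer attests version 1 of a skill. The demo key stands in for
# the organization's reviewer key.
ssh-keygen -t ed25519 -N '' -f ./reviewer_key -q
echo 'version: 1' > SKILL.md
ssh-keygen -Y sign -f ./reviewer_key -n skillseal SKILL.md
printf 'reviews@example.org %s\n' "$(cat reviewer_key.pub)" > allowed_signers

# The author ships an update. The old attestation no longer verifies,
# so the agent can see the review is stale without any server telling it.
echo 'version: 2' > SKILL.md
if ssh-keygen -Y verify -f allowed_signers -I reviews@example.org \
    -n skillseal -s SKILL.md.sig < SKILL.md >/dev/null 2>&1; then
  echo 'attestation current'
else
  echo 'attestation stale: re-review before trusting'   # this branch runs
fi
```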

Now scale that. You publish a trust bundle — a signed collection of your organization's reviewer keys. Your users subscribe to that bundle. Every skill your team has attested is automatically trusted on every user's machine. Every destatement you publish is automatically enforced. No phone-home, no API dependency, no single point of failure. Decentralized trust distribution with centralized curation. And crucially — the keys in that bundle are your keys, not a list of third-party authors you're betting on.
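The distribution model can be sketched as a trust store that is just a directory of key fragments, one per subscribed organization. The layout, filenames, and identities below are hypothetical; the actual bundle format is defined by the protocol:

```shell
# Two organizations each publish their reviewer keys; the user
# "subscribes" by dropping each org's fragment into a trust directory.
mkdir -p trust.d
ssh-keygen -t ed25519 -N '' -f org_a -q
ssh-keygen -t ed25519 -N '' -f org_b -q
printf 'reviews@org-a.example %s\n' "$(cat org_a.pub)" > trust.d/org-a.signers
printf 'reviews@org-b.example %s\n' "$(cat org_b.pub)" > trust.d/org-b.signers

# Org B attests a skill.
echo 'name: demo-skill' > SKILL.md
ssh-keygen -Y sign -f org_b -n skillseal SKILL.md

# The agent verifies against the merged store: every skill attested by a
# subscribed org is trusted, with no network call at decision time.
cat trust.d/*.signers > merged.signers
ssh-keygen -Y verify -f merged.signers -I reviews@org-b.example \
  -n skillseal -s SKILL.md.sig < SKILL.md
```

Revoking trust in an entire organization is deleting one file; adding one is dropping a file in. That is the "decentralized distribution, centralized curation" shape in miniature.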

That's the play. You're not adopting someone's tool. You're adopting a protocol — one that makes your existing review process verifiable and portable, with your organization's identity at the center of the trust chain.


The Self-Fulfilling Prophecy

Here's where it gets interesting, and maybe a little philosophical.

SkillSeal works today. I sign my skills with it. Verification hooks enforce it in Claude Code. But a signing standard only reaches its potential when multiple parties participate. Authors sign. Reviewers attest. Users configure trust. The web grows.
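As a sketch of what enforcement looks like, here is a generic pre-execution check: allow a skill only if its detached signature verifies against the local trust store, hard-block otherwise. This is the shape of the hook, not Claude Code's actual hook API; the function and file names are illustrative:

```shell
# Decide whether a skill may run.
check_skill() {  # usage: check_skill SKILL_FILE SIGNER_ID
  if ssh-keygen -Y verify -f allowed_signers -I "$2" \
      -n skillseal -s "$1.sig" < "$1" >/dev/null 2>&1; then
    echo 'allow'
  else
    echo 'block'   # unsigned or tampered: hard block, no prompt
  fi
}

# A signed skill from a trusted author is allowed through.
ssh-keygen -t ed25519 -N '' -f ./dev_key -q
echo 'name: good-skill' > good.md
ssh-keygen -Y sign -f ./dev_key -n skillseal good.md
printf 'dev@example.com %s\n' "$(cat dev_key.pub)" > allowed_signers
check_skill good.md dev@example.com       # prints "allow"

# An unsigned skill is rejected outright -- no signature, no execution.
echo 'name: rogue-skill' > rogue.md
check_skill rogue.md unknown@example.com  # prints "block"
```

The decision is deterministic and entirely local, which is what lets known authors pass without user intervention while anything unsigned stops cold.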

Someone has to plant the seed. That's what this is.

The protocol is open. The code is on GitHub. A company that wants to move first can fork it, extend it, feed structural improvements back — or just take it and run. The architecture is designed to be composable. It sits at the artifact layer and stays out of the way of whatever transport, gateway, or isolation you're already using.

The first major platform that adopts artifact-level signing for their skill marketplace changes the conversation for everyone. Not because they chose this tool specifically — but because they established that skills should be signed at all. That provenance should be verifiable. That "we reviewed it" should mean more than a badge on a web page.

Once one platform does it, the others have to answer the question: why don't you?


What SkillSeal Doesn't Do

It doesn't prove code is safe. It proves who wrote it and that it hasn't been tampered with. That's provenance and integrity — not functional correctness. A signed skill from a trusted author can still have bugs. A reviewed skill can still have edge cases the reviewer missed.

But you can't have accountability without identity. And you can't have identity without signing. Everything else — the reviews, the audits, the trust decisions — builds on top of knowing who you're dealing with and whether the artifact is the one they published.

It also doesn't replace container isolation, OAuth, TLS, or any of the transport-layer protections that already exist. It composes with them. SkillSeal answers "what is this and who made it?" Your gateway answers "should this be allowed to run here?" Those are different questions, and they deserve different tools.


The Gap, Revisited

The gap is real. Agents execute instructions from strangers, and the ecosystem's security model is mostly "trust the marketplace." That worked for a while. It won't work much longer — not at the scale agentic AI is heading toward.

SkillSeal is one answer to one layer of that problem. It's not the whole solution. But it's the layer that nobody else is building — portable, self-bootstrapping, artifact-level signing that works for an individual author today and scales to an enterprise marketplace tomorrow.

The code is open. The spec is documented. The whitepaper is peer-reviewable.

Someone has to plant the seed.

🔗 github.com/mcyork/skillseal