Frequently Asked Questions
How Layr8 Relates to What You Already Use
We already use OIDC, OAuth, REST APIs, etc. Why do we need this?
Those tools solve authentication and authorization within a trust boundary you control. Layr8 solves coordination across trust boundaries—between your organization and external parties who run their own infrastructure.
OAuth can grant a third party access to your API, but it requires you to issue and manage credentials for them. Layr8 inverts this: the other party proves their identity using credentials they control, and you decide whether to accept them. No shared secrets. No credential provisioning on your side.
If all your integrations are internal or with parties you fully control, you may not need Layr8. If you coordinate with external organizations and want to stop managing API keys, webhook secrets, and VPN tunnels for each one, Layr8 provides a cleaner pattern.
And when those external parties change—new endpoints, rotated credentials, updated schemas—you don’t have to change with them. Layr8 is designed for environments where change is constant.
```
                  Your Org                              Partner Org
             ┌─────────────────┐                   ┌─────────────────┐
             │                 │                   │                 │
 OAuth/OIDC  │  Users ↔ APIs   │                   │  Users ↔ APIs   │
 (within)    │  (your tokens,  │                   │  (their tokens, │
             │   your rules)   │                   │   their rules)  │
             │                 │                   │                 │
             └────────┬────────┘                   └────────┬────────┘
                      │                                     │
                      │     API keys? Webhooks? VPNs?       │
                      │     (fragile, secret-dependent)     │
                      │◄───────────────────────────────────►│
                      │                                     │
             ┌────────┴────────┐                   ┌────────┴────────┐
 Layr8       │      Agent      │      DIDComm      │      Agent      │
 (across)    │   (your DID,    │◄─────────────────►│   (their DID,   │
             │    your keys)   │     no shared     │   their keys)   │
             │                 │      secrets      │                 │
             └─────────────────┘                   └─────────────────┘
```
Isn’t this just GPG? A way of distributing public keys?
GPG solves a piece of the problem—exchanging keys and verifying signatures. But once you’ve verified a signature, you still have questions:
- Who is this? You have a public key, but what does it represent?
- What are they allowed to do? Signature verification doesn’t tell you authorization.
- What just happened? Where’s the audit trail?
These questions get answered ad hoc—scattered across application code, config files, and manual processes. Each integration reinvents the wheel.
Layr8 provides the full coordination stack: identity that’s globally resolvable, credentials that prove claims without calling the issuer, policies that enforce authorization automatically, and audit logs that both parties can verify. You’re not just exchanging keys—you’re establishing a coordination pattern that survives change.
```
      GPG                            Layr8
┌──────────────┐        ┌───────────────────────────┐
│              │        │                           │
│  Key         │        │  Identity                 │
│  Exchange    │ ←───→  │  DIDs: globally resolvable│
│              │        │  "Who is this?" ✓         │
│  Signature   │        │                           │
│  Verification│        │  Authorization            │
│              │        │  Policies & credentials   │
│  (that's it) │        │  "What can they do?" ✓    │
│              │        │                           │
└──────────────┘        │  Audit                    │
                        │  Hash-linked chains       │
You still need:         │  "What happened?" ✓       │
✗ Identity mapping      │                           │
✗ Authorization logic   │  Encryption               │
✗ Audit trail           │  Node-to-node, automatic  │
✗ Ad-hoc glue code      │                           │
                        │  Coordination             │
                        │  Survives change          │
                        └───────────────────────────┘
```
See the reference documentation for technical details on DIDs, DIDComm, and verifiable credentials.
The learning curve seems steep.
Learning how Layr8 works — DIDs, DIDComm, verifiable credentials — takes time if you want to understand the internals. Learning how to use it takes minutes.
Your agent registers handlers for message types it cares about, then sends and receives plaintext JSON. That’s it. The node handles encryption, authentication, authorization, and audit invisibly. From your agent’s perspective, it feels like passing JSON directly to another agent — the complexity is there, but you don’t see it.
```go
// Register a handler
client.Handle("https://acme.com/protocols/orders/1.0/query",
	func(msg *layr8.Message) (*layr8.Message, error) {
		// msg.Body is your plaintext JSON — process it
		return nil, nil // or return a reply message
	})

// Send a message
client.Send(ctx, layr8.Message{
	Type: "https://acme.com/protocols/orders/1.0/query",
	To:   "did:web:widgets.layr8.io:catalog",
	Body: map[string]any{"status": "open"},
})
```
Most developers start by running a demo, then modify it for their use case. You don’t need to understand elliptic curve cryptography to send a message. See How Layr8 Works for the full picture.
Operational Questions
What happens if Layr8 goes down?
It depends on your hosting model:
Layr8 Cloud (managed). Your node runs on Kubernetes with automatic restarts, rolling deployments, and health checks. The node is built on Elixir/OTP — a runtime designed for telecom-grade reliability with process isolation, supervision trees, and graceful degradation. The Layr8 ops team monitors your node via OpenTelemetry metrics (message throughput, policy latency, queue depth, error rates) and resolves issues proactively. If the node restarts, agents reconnect automatically and queued messages resume delivery.
On-prem (enterprise license). You operate the node in your own infrastructure. The same Kubernetes manifests and OpenTelemetry instrumentation are available to you. Availability is determined by your deployment — the node is stateless except for its PostgreSQL database, so horizontal scaling and failover follow standard patterns.
In both cases: if your node is down, your agents can’t send or receive messages — the same as any other service.
If a recipient’s node is unavailable, delivery fails and your agent receives a problem report. Retry is your agent’s responsibility — the node does not automatically retry outbound delivery. On the receiving side, if a message arrives but your agent is temporarily disconnected, the node stores it and delivers it when your agent reconnects. For use cases that require robust offline delivery, a Mediator Agent is on the roadmap.
What happens if a message fails?
Messages don’t get stuck — they either deliver or fail. When delivery fails, your agent receives a problem report explaining why: the recipient’s node was unreachable (e.p.xfer.*), the DID couldn’t be resolved (e.p.did.*), authentication failed (e.p.trust.*), or the recipient’s node hit an internal error (e.p.me.*).
On the receiving side, if a message arrives but is rejected by policy, the audit chain records which policy failed, what credentials were missing, and the decision. The sender receives a trust problem report.
Debugging starts with problem reports on the sender side and the audit chain on the receiver side — not guesswork. See Problem Reports for the complete reference.
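The dispatch logic in an agent can stay simple. Here is a minimal sketch in Go: the four code prefixes come from the problem report taxonomy above, but the `retryable` and `describe` helpers (and the specific code suffixes) are illustrative, not part of the SDK.

```go
package main

import (
	"fmt"
	"strings"
)

// retryable reports whether a problem code suggests a transient failure
// worth retrying. Transport errors (e.p.xfer.*) are typically transient;
// unresolvable DIDs, trust failures, and remote internal errors usually
// need investigation rather than a blind retry.
func retryable(code string) bool {
	return strings.HasPrefix(code, "e.p.xfer.")
}

// describe maps a problem code prefix to a human-readable cause.
func describe(code string) string {
	switch {
	case strings.HasPrefix(code, "e.p.xfer."):
		return "recipient node unreachable"
	case strings.HasPrefix(code, "e.p.did."):
		return "DID could not be resolved"
	case strings.HasPrefix(code, "e.p.trust."):
		return "authentication or policy failure"
	case strings.HasPrefix(code, "e.p.me."):
		return "recipient node internal error"
	default:
		return "unknown problem code"
	}
}

func main() {
	for _, c := range []string{"e.p.xfer.cant-deliver", "e.p.trust.crypto"} {
		fmt.Printf("%s: %s (retry: %v)\n", c, describe(c), retryable(c))
	}
}
```

Your agent would call something like this from its problem report handler, retrying transport failures with backoff and surfacing the rest to operators.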
What if Layr8 can’t keep up with my message volume?
DIDComm messaging handles thousands of messages per second, scaling to tens or hundreds of thousands depending on policy complexity. For most coordination use cases — API access, cross-org queries, credential exchange — throughput is not the bottleneck.
For high-throughput or latency-sensitive workloads, Layr8 supports QUIC Channels — direct QUIC streams between agents that tunnel raw TCP transparently. Authorization is negotiated over DIDComm (your node enforces policies as usual), then a QUIC channel is established with built-in TLS 1.3 encryption. Raw bytes flow directly between agents with no serialization overhead, no message-level processing, and no node in the data path. This works with any TCP protocol — PostgreSQL, REST APIs, Redis, custom protocols — without modification.
If you need to go further, you can also use DIDComm to negotiate credentials for a dedicated high-throughput channel (Kafka, raw TCP, etc.) and move bulk traffic there entirely.
What happens if Layr8 (the company) goes out of business?
Layr8 is built on open standards: W3C DIDs, W3C Verifiable Credentials, and DIF DIDComm. Your DIDs, credentials, and message formats are not proprietary.
The Layr8 Node software is available under a Business Source License for enterprises that require source code access. If Protocol Technologies disappeared tomorrow, you could continue operating your existing deployment, fork the codebase, or migrate to another DIDComm-compatible implementation (Hyperledger Aries, Credo-TS, etc.).
Vendor lock-in is a valid concern. We’ve designed for portability precisely because enterprises ask this question.
Security and Trust
What if a private key is compromised?
Private keys are stored by the node, encrypted at rest (AES-256-GCM) and decrypted into a secure in-memory store at runtime — agents never touch keys directly. If a node’s key storage is compromised, an attacker can impersonate the agents hosted on that node until you rotate the keys. This is similar to any cryptographic system — the difference is what happens next.
With API keys, you rotate the key and then update every system that uses it. With Layr8, you rotate the key in one place (the DID Document), and the change propagates automatically. Every party that communicates with you resolves your DID Document fresh — no coordinated secret distribution.
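This works because DID resolution is deterministic. For `did:web` identifiers (the kind shown in the examples above), the W3C did:web method defines the mapping from DID to DID Document URL, so any party can fetch your current keys without you distributing anything. A simplified sketch of that mapping (`didWebURL` is a hypothetical helper, not an SDK function):

```go
package main

import (
	"fmt"
	"strings"
)

// didWebURL maps a did:web identifier to the HTTPS URL of its DID
// Document, per the did:web method spec: a bare domain resolves to
// /.well-known/did.json, and further colon-separated segments become
// path components ending in /did.json. (A percent-encoded port such as
// %3A8443 is decoded back into the authority.)
func didWebURL(did string) (string, error) {
	const prefix = "did:web:"
	if !strings.HasPrefix(did, prefix) {
		return "", fmt.Errorf("not a did:web identifier: %s", did)
	}
	parts := strings.Split(strings.TrimPrefix(did, prefix), ":")
	host := strings.ReplaceAll(parts[0], "%3A", ":")
	if len(parts) == 1 {
		return "https://" + host + "/.well-known/did.json", nil
	}
	return "https://" + host + "/" + strings.Join(parts[1:], "/") + "/did.json", nil
}

func main() {
	u, _ := didWebURL("did:web:widgets.layr8.io:catalog")
	fmt.Println(u) // https://widgets.layr8.io/catalog/did.json
}
```

Rotation then reduces to republishing `did.json` with the new key material; every verifier picks it up on its next resolution.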
For high-security deployments, HSMs, secure enclaves, or other mechanisms can further protect private key material. Contact the team to discuss your specific requirements.
How is this actually more secure than API keys?
API keys are shared secrets. You generate one, send it to a partner, and both of you store it. Now the secret exists in at least two places—your system and theirs—plus anywhere it traveled in between (email, Slack, ticket systems). Each copy is an attack surface.
Layr8 eliminates shared secrets entirely. Your private keys never leave your node. Partners verify your identity by resolving your public DID Document — there’s nothing to leak, intercept, or rotate across organizational boundaries.
The security model shifts from “protect the secret everywhere it exists” to “protect the secret in one place you control.”
What happens if the other organization’s security is compromised?
If a partner’s node is compromised, the attacker can send messages as that partner — but only that partner. They cannot impersonate you, access your keys, or escalate to other relationships. Each DID’s keys are cryptographically isolated.
Your audit chain records exactly what the compromised agent did while compromised. When the partner rotates their keys, you immediately see the new identity on subsequent messages.
Compare this to API keys: if a partner is breached and your shared API key is stolen, the attacker has your credential. With Layr8, they have their credential—your security boundary is unaffected.
Can I revoke access instantly?
Yes. There are multiple layers of access control, and any of them can cut off access immediately:
- Allow/deny lists. Remove a DID from the allow list (or add it to the deny list), and the node rejects all messages from that identity on the next request.
- Grant revocation. Revoke a specific grant, and the holder loses that particular permission without affecting other access they may have.
- Credential revocation and suspension. Credentials can be permanently revoked or temporarily suspended using W3C Bitstring Status Lists — published lists that verifiers check during policy evaluation. Revoke a credential, and any policy that depends on it stops passing. Suspend one, and it stops passing until you unsuspend it. The node publishes status list credentials as signed JWTs that any verifier can check independently.
In every case, the change takes effect on your side immediately. For status lists, the verifier controls whether and how long to cache status list lookups — if you need instant revocation, configure a short TTL or disable caching entirely. There’s no propagation delay you don’t control, and no coordinated rotation.
This is one of the sharpest differences from API keys. With traditional approaches, revoking access means rotating secrets and coordinating the change across systems. With Layr8, revocation is a policy change on your side — the other party doesn’t need to do anything, and you don’t need to coordinate timing.
I would never trust a security product without seeing the code.
We offer a Business Source License option for enterprises that need to review source code. This gives you full visibility into how cryptographic operations, policy evaluation, and audit logging work.
For organizations that require code audits, penetration testing, or third-party security review, we support those processes. Security through obscurity is not our model.
Operational Concerns
How do I debug a cross-organization issue?
Start with your audit chain. Every message—sent and received—is logged with the sender’s DID, the action attempted, the policy decision, and the outcome. You can see exactly what happened on your side without needing access to the other party’s systems.
If a message was rejected, the audit entry records why: which policy failed, what credentials were missing, what the error was. If a message was accepted but the response wasn’t what you expected, you have cryptographic proof of what was sent and received.
For issues that span both sides, each party can share relevant audit entries. Because entries are hash-linked, you can verify they haven’t been tampered with. Debugging becomes “compare the chains” rather than “get on a call and try to reconstruct what happened.”
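Hash-linking is what makes "compare the chains" meaningful: each entry commits to the hash of the entry before it, so editing any record breaks every subsequent link. A simplified illustration of the mechanism (the `Entry` layout here is invented for the example; Layr8's actual audit schema differs):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Entry is an illustrative audit record. The real schema differs, but the
// tamper-evidence mechanism is the same: each entry commits to the hash
// of the one before it.
type Entry struct {
	PrevHash string // hex SHA-256 of the previous entry
	Payload  string // e.g. sender DID, action, policy decision
}

func hashEntry(e Entry) string {
	h := sha256.Sum256([]byte(e.PrevHash + "|" + e.Payload))
	return hex.EncodeToString(h[:])
}

// verifyChain recomputes every link. Any edited entry invalidates all
// subsequent links, so tampering is detectable from the chain alone.
func verifyChain(entries []Entry) bool {
	prev := ""
	for _, e := range entries {
		if e.PrevHash != prev {
			return false
		}
		prev = hashEntry(e)
	}
	return true
}

func main() {
	a := Entry{PrevHash: "", Payload: "did:web:partner.example accepted orders/query"}
	b := Entry{PrevHash: hashEntry(a), Payload: "did:web:partner.example denied orders/delete"}
	chain := []Entry{a, b}
	fmt.Println(verifyChain(chain)) // true

	chain[0].Payload = "tampered"
	fmt.Println(verifyChain(chain)) // false
}
```

This is why disagreements between parties become detectable rather than debatable: neither side can quietly rewrite its history after the fact.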
What does monitoring look like?
Layr8 Cloud (managed). The Layr8 ops team monitors your node for you. We track message throughput, policy evaluation latency, delivery success rates, queue depth, and error rates — and resolve issues proactively. You don’t need to set up dashboards or alerting infrastructure. A console dashboard for self-service visibility is on the roadmap.
On-prem (enterprise license). Layr8 nodes are instrumented with OpenTelemetry, so you can export metrics, traces, and logs to whatever observability stack you already run — Datadog, Grafana, Prometheus, Splunk, New Relic, or any other OpenTelemetry-compatible backend.
In both cases, the audit chain itself is a monitoring tool. You can query it for patterns: which agents are most active, which policies reject the most requests, which message types are growing fastest.
What’s the blast radius if something goes wrong?
Failures are isolated at every level — across organizations, within the node, and within individual processes.
Across organizations. If your node goes down, your agents can’t communicate — but other organizations’ nodes are unaffected. There’s no central authority whose failure cascades everywhere. Each organization operates its own node, makes its own policy decisions, and maintains its own audit chain. Coordination happens peer-to-peer, not through a shared control plane.
Within a node. If a policy misconfiguration blocks legitimate traffic, only traffic matching that policy is affected. If a single agent’s keys are compromised, only that agent’s identity is at risk.
Within the runtime. Layr8 nodes are built on Elixir/OTP — the Erlang runtime originally designed for telecom systems requiring 99.999% uptime. Every agent connection, message pipeline, and background task runs in its own lightweight process. If one process crashes, only that process is affected — the supervision hierarchy restarts it automatically while everything else continues running. A bug in one agent’s message handler can’t take down the node or affect other agents. This is fundamentally different from thread-based runtimes where an unhandled exception can crash the entire service.
How do I handle an incident involving a partner?
Your audit chain is your source of truth. Pull the records for the time window in question—you have cryptographic proof of every message received from that partner and every policy decision made.
If you need to cut off access immediately, revoke their grants. Access stops on the next request.
If you need to investigate what they accessed, query your audit chain for all actions by their DID. You’ll see exactly what they requested, what was allowed, and what was denied—timestamped and tamper-evident.
You don’t need the partner’s cooperation to understand what happened on your side. If you need to reconcile stories, compare audit chains—disagreements become mathematically detectable.
Migration and Adoption
Can I run Layr8 alongside my existing API key integrations?
Yes. Layr8 doesn’t require you to migrate everything at once—or ever.
Pick one integration with one partner. Stand up an agent, establish the connection, and run both the old API key path and the new Layr8 path in parallel. Compare behavior, validate the audit trail, gain confidence. When you’re ready, deprecate the API key path. If you’re not ready, keep both running indefinitely.
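One way to structure the parallel phase is shadow traffic: keep serving from the API-key path while invoking the Layr8 path on the same request and flagging any divergence. A sketch of that pattern (the `Result` type and both path functions are stand-ins for your own clients, not anything the SDK provides):

```go
package main

import "fmt"

// Result stands in for whatever your integration returns.
type Result struct {
	Status string
	Body   string
}

// shadowCall serves from the trusted old path while also calling the
// candidate path and reporting whether the two agree. Divergence is
// logged, not served, so the new path carries no production risk.
func shadowCall(oldPath, newPath func() (Result, error)) (Result, bool, error) {
	primary, err := oldPath()
	if err != nil {
		return Result{}, false, err
	}
	shadow, shadowErr := newPath()
	match := shadowErr == nil && shadow == primary
	return primary, match, nil
}

func main() {
	oldPath := func() (Result, error) { return Result{"ok", `{"orders":2}`}, nil }
	newPath := func() (Result, error) { return Result{"ok", `{"orders":2}`}, nil }
	r, match, _ := shadowCall(oldPath, newPath)
	fmt.Println(r.Status, match) // ok true
}
```

Once the match rate holds steady, cutover is a one-line change: promote the Layr8 path to primary and demote the API-key path to shadow (or retire it).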
Your existing integrations don’t know Layr8 exists. The Layr8 agent sits alongside them, handling only the traffic you route to it.
How do I roll back if this doesn’t work?
The same way you rolled in: routing.
If you’re running Layr8 in parallel with existing integrations, rolling back means routing traffic back to the old path. Your API keys still work. Your existing systems haven’t changed. You’ve added a capability, not replaced one.
If you’ve fully migrated an integration and need to roll back, reissue API keys and reconfigure. It’s the same process you’d follow today if you needed to rebuild an integration from scratch—Layr8 hasn’t burned any bridges.
What’s the minimum viable adoption?
One agent, one integration, one partner.
You don’t need to convince your entire organization. You don’t need to migrate all your partners. You don’t need a company-wide initiative.
Find one integration where API key management is painful or where cross-org data access would be valuable. Pilot Layr8 there. Learn what works, what doesn’t, and whether it earns expansion.
Working with Partners
How do I convince a partner to use Layr8?
Lead with what’s in it for them.
For the partner providing data (like Bob in the Postgres demo): “You get fine-grained control over exactly what I can access, instant revocation, and a tamper-evident audit trail of everything I did. You don’t have to provision credentials for me or manage my access lifecycle.”
For the partner consuming data: “You don’t have to store our API keys, rotate them, or worry about them leaking. Your developers authenticate as themselves—no shared secrets to manage.”
The pitch isn’t “adopt this new technology.” It’s “stop managing shared secrets and get better auditability in exchange.”
What if my partner doesn’t want to run Layr8?
They don’t have to run Layr8 specifically. Layr8 is built on open standards — W3C DIDs, W3C Verifiable Credentials, and DIF DIDComm. Any compatible implementation works.
DIDComm is an open protocol — anyone can implement it. What Layr8 does is make DIDComm practical: it handles the cryptography, key management, policy enforcement, and audit infrastructure so your developers can focus on business logic instead of protocol plumbing. But if your partner already runs Hyperledger Aries, Credo-TS, or another DIDComm-compatible stack, they can connect without adopting Layr8. If they want to build their own DIDComm endpoint, the specs are open.
The harder case: a partner who won’t adopt any DIDComm implementation. For those integrations, you’re back to API keys — at least until their security team gets tired of managing them. Layr8 doesn’t force the issue; it just makes the alternative available.
I’m a partner asked to integrate — what’s involved?
Provision a free node on portal.layr8.io (takes about 5 minutes), install an SDK (Go, Node.js, or Python), and write a simple agent. A basic handler is ~30 lines of code. The Getting Started guide walks you through the whole process.
You keep full control of your implementation, infrastructure, and release schedule. See For Partners for the full picture.
What if this doesn’t work out for us as a partner?
Route traffic back to your existing integration. Layr8 is additive — your existing APIs, auth systems, and infrastructure remain untouched throughout. There’s nothing to unwind.
Layr8 uses open standards (W3C DIDs, DIF DIDComm). If you want to keep the protocol but switch implementations, any DIDComm-compatible stack works.