Frequently Asked Questions

Understanding DaemonCore

What is DaemonCore?

DaemonCore is a governance layer for multi-agent AI systems. It sits beneath orchestration frameworks and defines what agents are allowed to do. Think of it as the "operating system layer" for AI — enforcing boundaries, managing permissions, and coordinating agent interactions.

Is DaemonCore an AI model?

No. DaemonCore is not an AI model, and it does not generate text, reason about problems, or produce outputs.

DaemonCore provides the governance layer that exists within an agentic session. It defines the rules, constraints, and continuity under which models like Claude, GPT, or Gemini operate during that session — without being the thing that thinks or speaks.

In other words, DaemonCore doesn’t replace AI models, and it doesn’t sit alongside them as a tool or wrapper. It shapes the environment they are operating in.

Orchestration tools tell models what to do.
DaemonCore defines the rules of the world they’re doing it in.

Patent pending.

Is it an orchestrator or agent framework?

No. Orchestrators like LangChain and CrewAI coordinate agent workflows. DaemonCore operates beneath orchestrators, enforcing rules that orchestrators cannot override. It's complementary, not competitive.

How is this different from prompt engineering?

Prompt engineering operates per-request with advisory guidelines. DaemonCore operates at the environment level with structural enforcement:

  • Boundaries persist across sessions
  • Constraints cannot be bypassed by clever prompting
  • Safety is architectural, not advisory

Prompts still exist — they execute within the governed environment.

Hard Questions

Isn't this just prompts with extra steps?

A fair question. Here's the difference:

Prompts are instructions to the model. DaemonCore defines what happens to model outputs before they become actions.

  • Prompts: "Don't access files outside this directory"
  • DaemonCore: Output must be protocol-shaped. Scope field validated. Access blocked if outside permitted paths.

The model still receives prompts. But its outputs pass through protocol validation before any action occurs. A prompt can be ignored or misinterpreted. A schema mismatch is rejected.

V1: Protocol validation, schema enforcement, scope checking. The model's reasoning is still its own—we govern what that reasoning can do.
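To make "protocol-shaped" concrete, here is a minimal sketch of output validation. All names are illustrative assumptions, not DaemonCore's actual API; the point is that a malformed output is rejected mechanically, regardless of what the prompt said.

```python
from dataclasses import dataclass

# Hypothetical message schema: every field is required and typed.
REQUIRED_FIELDS = {"action": str, "scope": str, "payload": str}

@dataclass
class ValidationResult:
    accepted: bool
    reason: str = ""

def validate_output(output: dict) -> ValidationResult:
    """Reject any model output that is not protocol-shaped."""
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in output:
            return ValidationResult(False, f"missing field: {field}")
        if not isinstance(output[field], expected_type):
            return ValidationResult(False, f"bad type for field: {field}")
    return ValidationResult(True)

# A prompt can be ignored or misinterpreted; a schema mismatch cannot.
ok = validate_output({"action": "read", "scope": "src/auth/", "payload": "login.py"})
bad = validate_output({"action": "read"})  # missing scope and payload: rejected
```

The design choice this illustrates: the check runs on the output, after the model has spoken, so nothing the model "decides" can skip it.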

Can prompt injection override DaemonCore?

Honest answer: In V1, prompt injection can influence the model's reasoning. It cannot bypass the governance layer.

Here's what that means:

  • Injection can: Manipulate the model into producing malicious output, changing its reasoning, or attempting actions outside scope.
  • Injection cannot: Make that output bypass protocol validation, send messages that don't match schema, or access resources outside permitted scope.

If a manipulated model tries to access src/payments/ when only src/auth/ is permitted, the access is blocked. The violation is logged. The model was influenced; the action was prevented.
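The scope check described above can be sketched in a few lines. This is a hypothetical illustration under assumed names (`check_access`, `violations`), not the shipped implementation: the requested path must fall under a permitted root, otherwise the action is blocked and the violation recorded.

```python
from pathlib import PurePosixPath

def in_scope(requested: str, permitted: list[str]) -> bool:
    """Return True only if the requested path falls under a permitted root."""
    req = PurePosixPath(requested)
    for root in permitted:
        try:
            req.relative_to(PurePosixPath(root))
            return True
        except ValueError:
            continue
    return False

violations = []  # stand-in for an audit log

def check_access(path: str, permitted: list[str]) -> bool:
    if in_scope(path, permitted):
        return True
    violations.append(path)  # the violation is logged...
    return False             # ...and the action is blocked

allowed = check_access("src/auth/login.py", ["src/auth/"])
blocked = check_access("src/payments/charge.py", ["src/auth/"])
```

Note that the model's intent never enters the check: only the requested path and the permitted scope do.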

V1: Detection and constraint. We log anomalous outputs and block out-of-scope actions.

V1.5 (planned): Cryptographic action signing, hardware attestation. Stronger guarantees that actions trace to legitimate requests.

We don't claim to prevent prompt injection. We limit what a compromised model can achieve.

Don't orchestration tools already do this?

Orchestrators coordinate. They decide what agents do, when, and in what order. Some include safety features.

The problem: governance at the orchestration layer can be bypassed by the orchestrator. If the orchestrator decides to skip a check, nothing prevents it.

DaemonCore sits beneath orchestration. The orchestrator cannot override governance because the orchestrator operates above the governance layer. This is the same principle as OS-level memory protection: applications cannot decide to bypass it because enforcement happens at a lower level.

Orchestrators + DaemonCore: The orchestrator coordinates workflows. DaemonCore ensures those workflows operate within defined boundaries regardless of what the orchestrator requests.

What's actually enforced in V1 vs V1.5?

V1 (Current)

  • Protocol validation: Outputs must match expected structure
  • Schema enforcement: Messages validated against typed schemas
  • Scope boundaries: File/resource access checked against permissions
  • Template constraints: Operations follow predefined templates
  • Deterministic boot: Agents initialise with consistent context
  • Audit logging: Actions and violations recorded

V1.5 (Planned)

  • Cryptographic signing: Actions carry verifiable signatures
  • Hardware attestation: Trusted execution environment support
  • Capability delegation: Cross-agent permission transfer with audit
  • Real-time verification: Continuous constraint checking during execution

V1 provides structural enforcement through validation and rejection. V1.5 adds cryptographic guarantees. We ship what's working; we don't promise what's not built.
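As one example of the template constraints in the V1 list, operations could be matched against predefined shapes. The template names and fields below are invented for illustration; the mechanism is the point: an operation that matches no template, or carries extra parameters, is rejected.

```python
# Hypothetical operation templates: an agent may only emit operations
# matching one of these shapes; anything else is rejected outright.
TEMPLATES = {
    "read_file":  {"path"},
    "run_tests":  {"suite"},
    "write_file": {"path", "content"},
}

def matches_template(op: dict) -> bool:
    name = op.get("op")
    if name not in TEMPLATES:
        return False
    # Parameters must match the template exactly: no extras, none missing.
    return set(op) - {"op"} == TEMPLATES[name]

ok = matches_template({"op": "read_file", "path": "src/auth/login.py"})
rejected = matches_template({"op": "shell", "cmd": "rm -rf /"})  # no such template
```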

Technical Questions

Which AI providers are supported?

DaemonCore is vendor-agnostic by design. The governance layer works with any model provider — Claude, GPT, Gemini, open-source LLMs. Different providers operate under unified governance with strict isolation between them.

How does safety work?

Safety in DaemonCore is architectural, not advisory. Agents operate within defined capability envelopes — boundaries they cannot exceed regardless of prompting. The system enforces permissions based on environment trust level, with lower-trust environments receiving stricter constraints by default.

Can multiple agents work together?

Yes. Multi-agent coordination is core functionality. DaemonCore manages:

  • Context handoff between agents
  • Scope isolation to prevent conflicts
  • Shared state through defined protocols
  • Permission inheritance across agent hierarchies
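Permission inheritance across a hierarchy can be sketched as a subset rule, assuming (hypothetically) that scopes are sets of permitted roots: a child agent receives at most what its parent holds, so delegation can narrow scope but never widen it.

```python
def delegate(parent_scope: set[str], requested: set[str]) -> set[str]:
    """A child agent inherits at most its parent's permissions."""
    return parent_scope & requested

parent = {"src/auth/", "docs/"}
child = delegate(parent, {"src/auth/", "src/payments/"})
# src/payments/ is excluded: it was never in the parent's scope
```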

Does this require API changes to use existing models?

No. DaemonCore operates as a governance layer above model APIs. Your existing integrations with OpenAI, Anthropic, or other providers remain unchanged. The governance layer wraps these interactions with enforcement.

Availability & Access

Is DaemonCore open source?

The kernel specification is public and available for review. The full system includes additional components in various stages of development. See the GitHub repository for current public materials.

Can I use it today?

The kernel specification is available for review and experimentation. Production deployment capabilities are in active development. Follow the blog for updates on availability.

What is Stux, and how does it relate to DaemonCore?

Stux is the company building DaemonCore; the relationship is that of a company to its core product.

Stux OS is the broader operating environment we're building for multi-agent systems. DaemonCore is the kernel—the governance layer at the foundation. Other components (tooling, interfaces, deployment systems) build on top of the kernel.

When you see "DaemonCore," you're looking at the core governance specification. When you see "Stux," you're looking at the organisation and broader ecosystem.

How can I follow development?

Public materials and specifications are available via the GitHub repository. Development updates are posted to the blog.

Strategic Questions

Why does this matter now?

AI systems are being deployed at scale without governance infrastructure. As multi-agent systems become more capable, the gap between what agents can do and what they should do becomes critical. DaemonCore provides the foundation for trustworthy agent deployment.

Can't orchestrators handle safety themselves?

Governance at the orchestration layer can be bypassed by the orchestrator. For enforcement to be reliable, it must sit beneath orchestration — at a layer that orchestrators cannot override. This is why DaemonCore exists as a separate governance layer, not as a feature of existing frameworks.

What about existing enterprise AI platforms?

Large technology companies have built sophisticated internal systems. DaemonCore provides a public specification that enables similar governance capabilities without requiring the massive engineering investment of building everything from scratch.