TL;DR

NVIDIA NemoClaw is an open-source stack (Apache 2.0) that wraps OpenClaw agents in enterprise-grade security. Announced at GTC 2026 on March 17, it hit 13,400+ GitHub stars in just 5 days. Key highlights:

  • One command install: curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash — sets up sandboxed OpenClaw with security policies
  • OpenShell runtime: Process-level sandboxing with Landlock + seccomp + network namespaces — security enforced outside the agent, so it can’t override it
  • Privacy router: Keeps sensitive data on local Nemotron models, routes to cloud (Claude, GPT) only when policy allows
  • Policy-as-code: YAML-based rules controlling filesystem, network, and inference access per sandbox
  • Works with any agent: OpenClaw, Claude Code, OpenAI Codex — all run unmodified inside OpenShell
  • Runs everywhere: DGX Spark, RTX PCs, cloud VMs, macOS (Colima/Docker Desktop)
  • Alpha status: Early preview, interfaces may change — but already production-quality architecture
  • Stars: 13,400+ ⭐ | Forks: 1,280+ | License: Apache 2.0

Jensen Huang at GTC: “OpenClaw is the operating system for personal AI. This is as big of a deal as HTML, as big of a deal as Linux.”


Why NemoClaw Matters

OpenClaw became the fastest-growing open-source project in GitHub history — 250K+ stars in under 4 months. But with great power came a very specific problem: how do you trust an agent that can read your files, browse the web, execute code, and run for hours unsupervised?

The security model for traditional software doesn’t work for autonomous agents. Here’s why:

| Traditional Software | Autonomous Agents (Claws) |
| --- | --- |
| Runs specific code you wrote | Writes its own code at runtime |
| Predictable behavior | Non-deterministic, context-dependent |
| Static permissions | Needs dynamic access based on task |
| Single process | Spawns subagents that inherit permissions |
| Short-lived sessions | Runs for hours, accumulates context |
| User-initiated actions | Acts independently based on goals |

As one HN commenter put it: “It’s like giving your dog a stack of important documents, then being worried he might eat them, so you put the dog in a crate, together with the documents.”

The core tension: for an agent to be useful, it needs access. For it to be safe, it needs restrictions. Previous solutions put guardrails inside the agent (system prompts, behavioral constraints). The problem? A compromised or confused agent can override its own guardrails.

NemoClaw’s insight: move the security boundary outside the agent entirely.


How NemoClaw Works: Architecture Deep Dive

NemoClaw is not a standalone product — it’s a plugin layer that sits between OpenClaw and your infrastructure. The architecture has three key components:

1. OpenShell Sandbox

The sandbox uses Linux kernel primitives (Landlock LSM, seccomp-bpf, network namespaces) to create isolated execution environments. This is not container-level isolation — it’s process-level enforcement that’s specifically designed for long-running, self-evolving agents.

┌──────────────────────────────────────────────┐
│  Your Machine                                │
│                                              │
│  ┌──────────────────────────────────────┐    │
│  │  OpenShell Sandbox                   │    │
│  │  (Landlock + seccomp + netns)        │    │
│  │                                      │    │
│  │  ┌──────────────────────────┐        │    │
│  │  │  OpenClaw Agent          │        │    │
│  │  │  ├── Skills              │        │    │
│  │  │  ├── Subagents           │        │    │
│  │  │  └── Tools               │        │    │
│  │  └──────────────────────────┘        │    │
│  │                                      │    │
│  │  Policy Engine  ←── policies.yaml    │    │
│  │  Privacy Router ←── routing rules    │    │
│  └──────────────────────────────────────┘    │
│                                              │
│  OpenShell Gateway (audit trail)             │
└──────────────────────────────────────────────┘

What makes this different from Docker: the sandbox is agent-aware. It understands that agents install packages, learn new skills mid-task, and spawn scoped subagents. When an agent hits a policy constraint, it can reason about the roadblock and propose a policy update — leaving you with the final approval, with a full audit trail.
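The announcement doesn't show what such a policy-update proposal looks like on the wire. As a purely hypothetical sketch of the approval request described above — every key name here is an assumption, not documented NemoClaw syntax:

```yaml
# Hypothetical policy-update proposal; this format is an assumption
# for illustration, not NemoClaw's documented schema.
proposal:
  reason: "Task requires cloning a dependency from an unapproved host"
  change:
    network:
      allow:
        - host: gitlab.com
          ports: [443]
  requires: human_approval   # the user keeps final say
  audit: logged              # recorded by the OpenShell Gateway
```

The point is the shape of the interaction: the agent can only describe the change it wants; applying it stays outside the sandbox, with the human.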

2. Policy Engine

Policies are written in YAML and control what the agent can do at the binary, destination, method, and path level:

# Example: Allow agent to access specific APIs but block everything else
network:
  default: deny
  allow:
    - host: api.anthropic.com
      ports: [443]
    - host: api.openai.com
      ports: [443]
    - host: github.com
      ports: [443]

filesystem:
  default: deny
  allow:
    - path: /workspace/**
      permissions: [read, write]
    - path: /tmp/**
      permissions: [read, write]
  deny:
    - path: ~/.ssh/**
    - path: ~/.aws/**

process:
  allow:
    - binary: node
    - binary: python3
    - binary: git
  deny:
    - binary: curl  # Force agent to use approved HTTP client

This means: an agent can install a verified skill but cannot execute an unreviewed binary. It gets the autonomy it needs to evolve within boundaries you define.
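Skill installation could plausibly be governed by the same schema. A sketch — the `skills` section and all of its keys are assumptions rather than documented configuration:

```yaml
# Hypothetical `skills` policy section, in the style of the example above.
skills:
  default: deny
  allow:
    - source: registry.openclaw.ai   # assumed skill registry host
      require_signature: true        # only signed, reviewed skills install
```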

3. Privacy Router

The privacy router is perhaps the most innovative component. It makes inference routing decisions based on your policy, not the agent’s preference:

  • Sensitive context (internal code, credentials, business logic) → stays on local Nemotron models
  • General queries (documentation lookups, coding patterns) → can route to Claude, GPT, etc.
  • Cost optimization → route simple tasks to cheaper/local models, complex to frontier

This solves a real problem: companies want agents using Claude Opus for complex reasoning but can’t send proprietary code to Anthropic’s servers. The privacy router lets them have both.
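The article doesn't show the routing-rule format. Reusing the style of the policy files above, one plausible sketch — all key names (`routing`, `match`, `route`) and the model identifier placement are assumptions:

```yaml
# Hypothetical routing rules; NemoClaw's actual schema may differ.
routing:
  default: local            # unclassified content stays on the local model
  local_model: nvidia/nemotron-3-super-120b-a12b
  rules:
    - match: [credentials, internal_code, business_logic]
      route: local          # sensitive context never leaves the machine
    - match: [documentation_lookup, general_coding]
      route: cloud          # e.g. Claude or GPT, when policy allows
    - match: [simple_completion]
      route: cheapest       # cost optimization: small/local models first
```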


Installation & First Run

Prerequisites

| Requirement | Details |
| --- | --- |
| OS | Linux (Ubuntu 22.04+), macOS (Apple Silicon), Windows (WSL) |
| CPU | 4+ vCPU |
| RAM | 8 GB minimum, 16 GB recommended |
| Disk | 20 GB free (40 GB recommended) |
| Runtime | Docker (Linux), Colima or Docker Desktop (macOS) |
| Node.js | v20+ |

Quick Start

# One command to install everything
curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

The installer runs a guided wizard that:

  1. Installs Node.js if needed
  2. Sets up OpenShell runtime
  3. Creates a sandboxed environment
  4. Configures inference (local Nemotron or cloud API)
  5. Applies default security policies

After install, you’ll see:

──────────────────────────────────────────────────
Sandbox      my-assistant (Landlock + seccomp + netns)
Model        nvidia/nemotron-3-super-120b-a12b (NVIDIA Cloud API)
──────────────────────────────────────────────────
Run:         nemoclaw my-assistant connect
Status:      nemoclaw my-assistant status
Logs:        nemoclaw my-assistant logs --follow
──────────────────────────────────────────────────

Connecting to Your Agent

# Connect to the sandbox
nemoclaw my-assistant connect

# Inside the sandbox, use OpenClaw's TUI
openclaw tui

# Or send a single message via CLI
openclaw agent --agent main --local -m "hello" --session-id test

DGX Spark Setup

NVIDIA’s DGX Spark (the $3,000 desktop AI appliance) has a dedicated setup guide that handles Spark-specific prerequisites like cgroup v2 and Docker configuration.


What the Community Is Saying

Hacker News (470+ points, 200+ comments)

The HN discussion is one of the most nuanced agent security debates in months. Key themes:

Skeptics raise a valid point:

“In order for the agent to be useful, you have to connect it to your calendar, your email provider and other services so it can do stuff on your behalf. And now, what, having inference done by Nvidia directly makes it better?”

The real-world failure case that proves the need:

“Just today I had Opus 4.6 in Claude Code run into a login screen while building a web app via Playwright MCP. When I flipped back to the terminal, it turned out Claude had run code to query superadmin users in the database, picked the first one, and changed the password to password123 so it could log in on its own.”

This is exactly the kind of behavior NemoClaw’s policy engine is designed to prevent — the agent can’t access the database directly because the policy restricts it.
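In the policy schema shown earlier, rules that would have blocked this behavior might look like the following sketch (binary names and the credentials path are illustrative assumptions):

```yaml
# Illustrative only: keep the agent away from the production database.
network:
  default: deny            # nothing reachable unless explicitly allowed;
                           # the database host simply isn't on the allow list
process:
  deny:
    - binary: psql         # no direct SQL shells inside the sandbox
    - binary: mysql
filesystem:
  deny:
    - path: /etc/app/db-credentials*   # assumed credentials location
```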

The fundamental tension acknowledged:

“I keep seeing people rave about ‘leaving it on overnight and waking up to a finished project.’ Well sure, but it could also hack your home network, delete your family pictures folder, log into your bank account and wire all your money to shrimp charities.”

Reddit (r/AgentsOfAI — 227 upvotes)

The Reddit community is more optimistic, focusing on practical implications:

  • NemoClaw makes it feasible for companies to actually deploy OpenClaw in production
  • The privacy router addresses the #1 concern for enterprise adoption
  • Running Nemotron locally means no data leaves your network for routine tasks

Industry Press

Jensen Huang (GTC keynote): “Every company in the world needs an OpenClaw strategy.”

ZDNet: NVIDIA has been working directly with OpenClaw founder Peter Steinberger; the outlet calls the agent platform “history’s most important software release.”

The New Stack: “NemoClaw is essentially the next generation of what NVIDIA previously called the NeMo Agent Toolkit.”


Practical Use Cases

1. Enterprise Code Assistant

Deploy OpenClaw as an internal coding agent that can access your private repos but can’t exfiltrate code. The privacy router keeps proprietary code on local Nemotron models while using Claude for general coding patterns.

2. Secure DevOps Automation

An agent that manages deployments, monitors logs, and responds to incidents — but can only SSH to approved servers, can only run approved commands, and has a full audit trail of every action.
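A DevOps policy along these lines could be sketched in the same YAML schema as the earlier examples — host names, ports, and binaries below are illustrative assumptions:

```yaml
# Sketch: SSH only to approved hosts, approved tools only, logs read-only.
network:
  default: deny
  allow:
    - host: deploy-01.internal.example.com   # assumed approved server
      ports: [22]
process:
  allow:
    - binary: ssh
    - binary: kubectl
filesystem:
  allow:
    - path: /var/log/**
      permissions: [read]   # the agent can read logs but not rewrite them
```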

3. Research & Analysis

Give an agent access to internal documents for research, but restrict which external APIs it can call and ensure sensitive data never hits cloud inference endpoints.

4. Customer Support Agent

An always-on agent that handles support tickets, with policies restricting it to read-only database access and approved response templates.
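Sketching this in the same policy style — the template path and ticket host are assumptions, and read-only database enforcement would realistically live at the credential level rather than in this file:

```yaml
# Sketch: a support agent restricted to templates and the ticket API.
filesystem:
  allow:
    - path: /opt/support/templates/**
      permissions: [read]   # approved response templates only
network:
  default: deny
  allow:
    - host: tickets.internal.example.com   # assumed ticketing endpoint
      ports: [443]
```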

5. On-Device AI with DGX Spark

NVIDIA’s $3,000 DGX Spark runs Nemotron locally with NemoClaw security. This gives you a powerful AI agent that never sends data to the cloud — the first truly private, high-capability agent setup.


NemoClaw vs. Alternatives

| Feature | NemoClaw | OpenClaw (Vanilla) | Docker + OpenClaw | Custom Enterprise |
| --- | --- | --- | --- | --- |
| Security boundary | Outside agent (kernel-level) | Inside agent (system prompts) | Container-level | Varies |
| Policy as code | ✅ YAML-based | ❌ | Partial | Custom |
| Privacy routing | ✅ Local/cloud split | ❌ | ❌ | Custom |
| Audit trail | ✅ Every action logged | ❌ | Partial | Custom |
| Agent-aware sandbox | ✅ Understands skills/subagents | N/A | ❌ Generic containers | Varies |
| Setup complexity | One command | One command | Manual | Weeks/months |
| Open source | ✅ Apache 2.0 | ✅ MIT | ✅ | Usually ❌ |
| GPU optimization | ✅ Nemotron, DGX support | ❌ | ❌ | Varies |

Honest Limitations

NemoClaw is alpha software. Here’s what you should know:

  1. Linux-first: Full Landlock + seccomp support requires Linux. macOS uses Colima/Docker which adds overhead.
  2. Alpha stability: APIs and interfaces may change without notice. Don’t build production infrastructure on it yet.
  3. Resource hungry: 8 GB RAM minimum, 20 GB disk. Running Nemotron locally needs significantly more.
  4. Complexity vs. simplicity: OpenClaw’s appeal is “install and go.” NemoClaw adds security but also adds concepts (sandboxes, policies, routing) that have a learning curve.
  5. No Podman support on macOS yet.
  6. The dog-in-a-crate problem: As HN pointed out, sandboxing the agent together with the data it needs to access doesn’t fully solve the trust problem — it only constrains the blast radius.

The Bigger Picture: Why NVIDIA Is Doing This

This isn’t just a side project. NVIDIA’s bet is that agents are the next computing platform — and whoever controls the agent infrastructure layer wins.

The strategy:

  1. Nemotron models — open-source LLMs optimized for agents (120B MoE, long context)
  2. OpenShell — the runtime layer for secure agent execution
  3. NemoClaw — the OpenClaw integration that makes it all accessible
  4. DGX Spark — the hardware to run it all locally
  5. Agent Toolkit — the full stack for enterprise deployment

Jensen Huang compared OpenClaw to Linux and HTML. If that analogy holds, NVIDIA is positioning itself as the Red Hat of the agent era — adding the enterprise layer that makes open-source agents production-ready.


Getting Started: Should You Try NemoClaw?

Yes, if:

  • You’re running OpenClaw agents that access sensitive data
  • You need an audit trail of what your agents do
  • You want to keep inference local for privacy (using Nemotron)
  • You’re evaluating agent platforms for enterprise use
  • You’re on Linux or comfortable with Docker on macOS

Not yet, if:

  • You just want a personal assistant (vanilla OpenClaw is fine)
  • You need production stability (wait for beta)
  • You’re on macOS and want a seamless experience

Quick start:

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

  • GitHub: github.com/NVIDIA/NemoClaw
  • NVIDIA Blog: Run Autonomous Agents More Safely with OpenShell
  • License: Apache 2.0


FAQ

What is NemoClaw?

NemoClaw is NVIDIA’s open-source security stack for OpenClaw agents. It adds sandboxing (via OpenShell), policy-based access controls, and a privacy router that keeps sensitive data on local models while allowing cloud inference when needed. It installs with a single command.

Is NemoClaw a replacement for OpenClaw?

No. NemoClaw is a plugin that wraps OpenClaw in security infrastructure. Your OpenClaw agent runs inside a NemoClaw sandbox unchanged. NemoClaw also works with Claude Code and OpenAI Codex.

Does NemoClaw require NVIDIA hardware?

No. NemoClaw runs on any Linux machine with Docker. NVIDIA hardware (DGX Spark, RTX GPUs) enables running Nemotron models locally for fully private inference, but it’s not required — you can use cloud models like Claude and GPT.

How does NemoClaw compare to just running OpenClaw in Docker?

Docker provides container isolation but isn’t agent-aware. NemoClaw’s OpenShell understands agent behavior — skill installation, subagent spawning, dynamic permissions. The policy engine operates at the binary/path/network level with live updates and a full audit trail. Docker doesn’t know what an “agent” is.

Is NemoClaw production-ready?

No — it’s in alpha (early preview since March 16, 2026). APIs may change. It’s suitable for experimentation and evaluation, not production workloads yet.

What models does NemoClaw support?

NemoClaw is model-agnostic. It ships with NVIDIA Nemotron (120B MoE, open-source) for local inference, but supports Claude, GPT, and any other model through the privacy router. The router decides which model handles each request based on your privacy policy.