Learn how SHARED_CORE enforces security and consistency across Sovereign AI projects while automating setup with standardized scaffolding.

How to Bootstrap New Sovereign AI Projects with SHARED_CORE


Every new Sovereign AI project starts by importing the same core components. You don’t rebuild privacy routing, injection protection, or probability gates in each project. Instead, you pull them from SHARED_CORE, a private library that enforces consistency across your Sovereign Grid. This isn’t about code reuse; it’s about preventing drift in security posture and operational behavior. A single misconfigured agent can leak data or violate privacy guarantees, so the core exists to make that failure mode impossible by design. Watch out: if you bypass SHARED_CORE’s built-in health checks during development, you risk deploying agents that silently fail to route traffic through Tor, exposing your system to eavesdropping. Always validate Tor connectivity with curl --socks5-hostname localhost:9050 https://check.torproject.org before proceeding (don’t also wrap that command in torsocks: the --socks5-hostname flag already routes through Tor’s SOCKS port, and torsocks intercepting the connection to localhost:9050 can break the check).

Quick Take

  • SHARED_CORE provides the foundational privacy and routing logic every new project needs
  • Projects fail fast if Tor or the LLM endpoint isn’t reachable
  • Secrets stay isolated in per-project directories with strict permissions
  • Standardized scaffolding scripts cut new project setup from minutes to seconds
  • Gotcha: The PYTHONPATH export appended to venv/bin/activate only applies in shells where you source that file; cron jobs, systemd units, and direct venv/bin/python invocations never see it. Add the export to your shell’s startup file (e.g., .bashrc) or use a .pth file (see the sketch after this list), or you’ll hit frustrating import errors during debugging.
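
A .pth file makes /data/projects/shared importable for every invocation of the venv’s interpreter, activated shell or not. A minimal sketch, assuming Python 3.11 (adjust the site-packages path to your interpreter version):

# Any absolute path listed in a .pth file under site-packages is
# appended to sys.path at interpreter startup
echo "/data/projects/shared" > venv/lib/python3.11/site-packages/shared.pth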

Setting up a new project begins with directory hygiene. You create a project-specific workspace under /data/projects, lock down its secrets directory to 700 permissions, and initialize a Python virtual environment. Warning: Skipping the chmod 700 on /data/secrets/$PROJECT_NAME exposes your configuration files to other users on the system, potentially leaking API keys or LLM endpoints. The critical step is injecting SHARED_CORE into the Python path so imports resolve correctly. Without this, you’ll chase missing module errors while debugging unrelated issues.

PROJECT_NAME="trading_bot_v2"
mkdir -p /data/projects/$PROJECT_NAME/{src,tests,data,logs}
mkdir -p /data/secrets/$PROJECT_NAME
chmod 700 /data/secrets/$PROJECT_NAME

cd /data/projects/$PROJECT_NAME
python3 -m venv venv
source venv/bin/activate

echo "export PYTHONPATH=/data/projects/shared:\$PYTHONPATH" >> venv/bin/activate

torsocks pip install \
    langgraph langchain langchain-openai \
    requests python-dotenv pydantic

Caveat: The torsocks wrapper only works if Tor is running locally (systemctl status tor). If your LLM endpoint requires Tor but the service isn’t active, the install will hang indefinitely. Always verify Tor’s status first with curl --socks5-hostname localhost:9050 https://check.torproject.org/api/ip before running the pip install command; a guard like the one below automates that check.
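
A minimal pre-install guard, assuming Tor runs as a systemd service (check.torproject.org’s API returns JSON indicating whether your request exited through Tor):

# Abort early if the Tor daemon isn't up
if ! systemctl is-active --quiet tor; then
    echo "Tor is not running; start it with: sudo systemctl start tor" >&2
    exit 1
fi

# Confirm traffic actually leaves through Tor before installing anything
curl --fail --socks5-hostname localhost:9050 https://check.torproject.org/api/ip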

The project’s main.py imports from SHARED_CORE’s privacy router, health checks, and secret loader. If Tor isn’t reachable or the LLM endpoint fails, the agent exits immediately rather than making unprotected calls. The privacy router intercepts all outbound requests, adding Tor routing and injection checks transparently. Critical limitation: The health check’s require_tor=True flag enforces Tor usage for all outbound traffic, but this can break integrations with non-Tor endpoints (e.g., local LLMs on localhost). Use require_tor=False in development and only enable it for production deployments.

import sys
from core import load_secrets, HealthCheck, PrivacyRouter, create_agent_llm

PROJECT_NAME = "trading_bot_v2"

def main():
    # Fail fast if the config is missing keys the agent depends on
    config = load_secrets(
        f'/data/secrets/{PROJECT_NAME}/config.env',
        required_keys=['LLM_BASE_URL']
    )

    # Exit before making a single unprotected call
    if not HealthCheck.check_all(config=config, require_tor=True, require_llm=True):
        print("Health check failed: Tor or LLM unreachable")
        sys.exit(1)

    # All outbound requests go through the privacy router
    response = PrivacyRouter.get("https://alti.amsterdam/bootstrapping-self-sovereign-identity/", op_type="interactive")
    llm = create_agent_llm("general", config)

if __name__ == "__main__":
    main()
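
A minimal config.env that satisfies the required_keys check might look like the following; the endpoint value is a placeholder for your own LLM gateway:

cat > /data/secrets/trading_bot_v2/config.env << 'EOF'
LLM_BASE_URL=http://localhost:8001/v1
EOF
chmod 600 /data/secrets/trading_bot_v2/config.env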

For faster iteration, a scaffolding script automates the repetitive parts. It creates directory structures, sets permissions, and drops a minimal main.py template. Watch out: the generated main.py hardcodes local paths (/data/secrets/${NAME}/config.env). If your deployment mounts secrets Kubernetes-style (e.g., /run/secrets/${NAME}_config), update the path in the template or the agent will fail its health check at startup. The script’s output is predictable, so you spend less time fixing typos and more time building.

cat > /data/scripts/new_agent.sh << 'SCRIPT'
#!/bin/bash
NAME=$1
if [ -z "$NAME" ]; then
    echo "Usage: new_agent.sh <project_name>"
    exit 1
fi

mkdir -p /data/projects/$NAME/{src/agents,tests,data,logs}
mkdir -p /data/secrets/$NAME
chmod 700 /data/secrets/$NAME

# Unquoted EOF so ${NAME} expands to the real project name at scaffold time
cat > /data/projects/$NAME/src/main.py << EOF
import sys
from core import load_secrets, HealthCheck, PrivacyRouter, create_agent_llm

config = load_secrets('/data/secrets/${NAME}/config.env')
if not HealthCheck.check_all(config=config):
    sys.exit(1)
EOF

echo "Project $NAME created at /data/projects/$NAME"
echo "Secrets: /data/secrets/$NAME/config.env"
SCRIPT
chmod +x /data/scripts/new_agent.sh
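
Scaffolding a new project is then a single call; the argument names both the project and its secrets directory (the name here is just an example):

/data/scripts/new_agent.sh research_agent
# Project research_agent created at /data/projects/research_agent
# Secrets: /data/secrets/research_agent/config.env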

New features, bug fixes, and security audits all follow the same workflow pattern. OpenHands provides a consistent interface whether you’re working in the browser or terminal. Limitation: OpenHands’ autonomous audits may flag false positives for “unprotected API calls” if your project uses non-standard endpoints (e.g., WebSockets). Review the generated report carefully and adjust the exclusion rules in .openhands/audit.yml to avoid blocking legitimate traffic. The tooling enforces structure: every task starts with context, ends with tests, and integrates into the shared codebase without manual coordination.

In browser mode, OpenHands spins up a task with a context sandwich: project scope, dependencies, and integration points. It generates the new file, writes tests, and commits changes while you review the diff. No manual file creation, no forgotten steps. Gotcha: If your project uses a custom Python interpreter (e.g., PyPy), OpenHands may generate code incompatible with your runtime. Always verify the generated files against your project’s requirements before committing. In terminal mode, Aider becomes your pair programmer, guiding you through fixes with precise instructions. For security work, OpenHands runs an autonomous audit, flagging injection risks, unprotected API calls, and missing validations. The output is a markdown report with actionable fixes, not a wall of false positives.

The static site workflow mirrors this pattern. OpenHands generates a Cypherpunk-styled podcast website using Eleventy, pulling episode data from a local JSON file and embedding IPFS-hosted audio. Warning: the static site generator assumes your audio files are already pinned to IPFS. If you haven’t pre-pinned them, the build will fail with Error: File not found in IPFS. Always run ipfs add audio.mp3 before generating the site, as sketched below. No external CDNs, no telemetry, just a static site built entirely from local sources. The build command is a single line, executed over Tor to avoid fingerprinting.
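
A sketch of that flow, assuming the Eleventy project lives in the current directory and audio.mp3 stands in for your episode file:

# Pin the episode audio first so the generated site can reference its CID
ipfs add audio.mp3

# Fetch dependencies and run the Eleventy build over Tor
torsocks npm install
torsocks npx @11ty/eleventy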


Code-server runs in a container, giving you VS Code in the browser with access to your project directories. Critical limitation: the PASSWORD and SUDO_PASSWORD environment variables in the docker-compose.yml are stored in plaintext in your config files. If an attacker gains access to your host machine, they can extract these credentials and compromise your code-server instance. Use Docker secrets or a secrets manager to store these values securely; a minimal sketch follows the compose file below. The Continue extension configures two backends: SGLang for chat (optimized for repository context caching) and vLLM for autocomplete (prioritizing low latency). This split-backend approach isn’t theoretical: it’s the difference between a responsive editor and a laggy one when working with large codebases.

  code-server:
    image: lscr.io/linuxserver/code-server:latest
    platform: linux/arm64
    container_name: code-server
    environment:
      - PUID=1000
      - PGID=1000
      - PASSWORD=strong-local-password
      - SUDO_PASSWORD=strong-sudo-password
      - DEFAULT_WORKSPACE=/data/projects
    volumes:
      - /data/code-server-config:/config
      - /data/projects:/data/projects
      - /data/projects/shared:/shared:ro
    ports:
      - "8443:8443"
    restart: unless-stopped
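
One way to keep those credentials out of the compose file, sketched on the assumption that your image supports the linuxserver FILE__<VAR> convention (verify against the docs for your image tag):

# Generate a password file only the owner can read
install -d -m 700 /data/secrets/code-server
openssl rand -base64 24 > /data/secrets/code-server/password
chmod 600 /data/secrets/code-server/password

# Mount the file read-only into the container, then in docker-compose.yml
# replace PASSWORD=... with a file-backed variable:
#   - FILE__PASSWORD=/config/secrets/password
# linuxserver images read FILE__-prefixed variables from the named file.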

The Continue configuration maps models to specific endpoints, ensuring chat and autocomplete use the right backend for the job. SGLang’s RadixAttention caches repository context across multiple questions, while vLLM’s minimal time-to-first-token keeps autocomplete snappy. Watch out: if you switch between models mid-session, Continue may cache stale context from the previous model, leading to inconsistent suggestions. Restart your editor session after changing models to avoid this issue. This isn’t a theoretical optimization: it’s a practical necessity when your AI stack runs on local hardware.

{
  "models": [
    {
      "title": "Qwen3-Coder (SGLang: Chat)",
      "provider": "openai",
      "model": "Intel/Qwen3-Coder-Next-int4-AutoRound",
      "apiBase": "http://localhost:8001/v1",
      "apiKey": "not-needed"
    },
    {
      "title": "Qwen3-Fast (vLLM: Fast Queries)",
      "provider": "openai",
      "model": "qwen2.5:32b",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "not-needed"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen3-Fast (vLLM: Autocomplete)",
    "provider": "openai",
    "model": "qwen2.5:7b",
    "apiBase": "http://localhost:8000/v1",
    "apiKey": "not-needed"
  }
}
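
Before pointing Continue at these endpoints, confirm both backends answer. vLLM’s and SGLang’s OpenAI-compatible servers expose a model listing, so a quick smoke test (assuming the ports in the config above) looks like this:

# Both backends speak the OpenAI API; /v1/models should return JSON
curl -s http://localhost:8001/v1/models   # SGLang (chat)
curl -s http://localhost:8000/v1/models   # vLLM (autocomplete + fast queries)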

What I Actually Use

  • OpenHands: The only way I let AI write code in my Sovereign Grid. It enforces structure and writes tests I’d otherwise skip. Warning: OpenHands may generate code that assumes your project uses a specific Python version. If you’re running Python 3.11 but OpenHands targets 3.10, you’ll need to manually adjust the generated files.
  • code-server with Continue: Replaces my local VS Code instance without sacrificing extensions or workflow. Gotcha: The Continue extension’s autocomplete may slow down significantly if your project has >10k files. Disable it for large repositories or use a dedicated autocomplete model.
  • SHARED_CORE: The reason I don’t wake up at 3 AM debugging a privacy leak I introduced myself. Critical limitation: SHARED_CORE’s privacy router doesn’t support HTTP/3 yet. If your project requires HTTP/3 for performance, you’ll need to extend the router or use a custom solution.