Use Cases

Deploy Anywhere. Do Everything.

The XE Genesis Node combines always-on AI (XClaw + Gemma 4), a full Docker app marketplace, and self-hosting into one compact device. Here's what you can do with it.

🧠

Always-On AI Companion

XClaw with Gemma 4 running 24/7. Natural language control, intelligent system management, and agentic AI workflows – all local, all private. The AI-Accelerated model unlocks full 26B/31B performance.

👪

Family AI Hub

Child Accounts provide safe, filtered AI for the whole family. Smaller Gemma 4 models deliver age-appropriate responses, with content filtering, parental controls, and activity logging. Adults get full-power mode.

📺

Media Server

Run Jellyfin or Plex via the Docker marketplace. Stream to every device in your home. The AI-Accelerated model adds GPU-accelerated 4K transcoding for smoother streaming.

₿

Bitcoin Node

One-click Bitcoin Core deployment via Docker. Run a full node, contribute to the network, and verify your own transactions. 1 TB NVMe for the full blockchain with room to grow.
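Under the hood, a one-click Docker deployment like this boils down to a small Compose service. A minimal sketch, assuming the public `bitcoin/bitcoin` image and an NVMe-backed data path — both illustrative placeholders, not the marketplace's actual template:

```yaml
# Illustrative sketch only — image name and volume path are assumptions,
# not the Genesis Node marketplace's actual one-click template.
services:
  bitcoind:
    image: bitcoin/bitcoin:latest
    restart: unless-stopped
    ports:
      - "8333:8333"            # P2P port so peers can reach your node
    volumes:
      - /mnt/nvme/bitcoin:/home/bitcoin/.bitcoin   # full chain on the 1 TB NVMe
```

The marketplace handles the equivalent of `docker compose up -d` for you; the key design point is the persistent volume, which keeps the synced blockchain on the NVMe drive across container restarts and upgrades.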

☁️

Private Cloud

Self-host Nextcloud, ownCloud, or any cloud platform. Your files, your infrastructure. No third-party storage providers. Expandable storage for growing libraries.

🏠

Smart Home Hub

Run Home Assistant for IoT device management and automation. Combine with XClaw for natural language control of your smart home – all processed locally.

🔒

Network Security

Network-wide ad blocking (Pi-hole), private VPN server, and DNS management. Your own network infrastructure with no third-party dependencies.
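The Pi-hole half of this is another straightforward Compose service. A minimal sketch using the official `pihole/pihole` image — the timezone, password variable, and host paths below are illustrative assumptions, not the marketplace's actual configuration:

```yaml
# Illustrative sketch only — environment values and paths are placeholders.
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - "53:53/tcp"            # DNS
      - "53:53/udp"
      - "80:80/tcp"            # admin web interface
    environment:
      TZ: "Europe/London"      # set to your timezone
      WEBPASSWORD: "changeme"  # admin password — change before deploying
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
```

Point your router's DNS at the node's IP and every device on the network gets ad blocking with no per-device setup.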

💻

Development Server

16 cores, 32 threads, 32 GB RAM – a serious development machine. Run containers, test deployments, and iterate on projects with a local server always available.

⛓️

XE Network Node

Participate in the XE block-lattice network. Run a full node, lease compute as a provider, and earn XE emission rewards. Genesis Edition includes a pre-loaded XE stake.

🌐

Edge AI Deployment

Compact enough for edge locations. Run inference at the point of need – retail, healthcare, manufacturing. The NVIDIA A2000 handles heavier models without cloud latency.

🎓

Research & Education

Affordable, self-contained AI research platform. Run Gemma 4 models locally for experimentation, learning, and prototyping without cloud costs or data exposure.

📡

Nostr & Social

Run your own Nostr relay, RSS aggregator, or decentralised social infrastructure. Sovereign, always connected, and backed by XClaw for intelligent content management.

Ready to Deploy?

Choose Base or AI-Accelerated. Same powerful Ryzen 9 platform, same COOJ chassis. Add a GPU when you need more.

Order Genesis Node

Compare Models