The Mac Mini is the best hardware you can buy for running a personal AI server. I run my entire business on one. 13 agents, 24/7, for about $3/month in electricity. Here's exactly how to set yours up.
Why the Mac Mini Is the Best AI Server Hardware
Most people overthink their AI server setup. They look at Linux boxes, Raspberry Pis, old gaming PCs, cloud VPS instances. I tried several options before landing on the Mac Mini. Nothing comes close.
Three reasons.
Power efficiency. The M4 Mac Mini idles at 3-4 watts. That's not a typo. Jeff Geerling measured this independently. Under moderate AI workloads, you're looking at 10-15 watts. A typical desktop PC doing the same work pulls 150-300 watts. Over a year of 24/7 operation, that difference is hundreds of dollars in electricity.
Unified memory. Apple Silicon shares memory between CPU and GPU. When you run local AI models, the entire 16GB (or more) is available to the model. No separate VRAM budget. No bottleneck copying data between system RAM and a graphics card.
It just works. macOS doesn't need babysitting. No driver issues, no kernel panics from GPU passthrough, no Docker networking headaches. You plug it in, configure it once, and forget about it.
This is exactly what I do. My Mac Mini sits on a shelf in my apartment in Bali. It runs OpenClaw with 13 agents that handle my podcast, newsletter, social media, sponsorship outreach, and analytics. I check in from my laptop over Tailscale. The Mini just runs.
Which Mac Mini to Buy for AI
Apple sells the M4 Mac Mini starting at $499 on sale (retail $599). Here's what actually matters for AI workloads.
| Config | Price | RAM | Best For |
|---|---|---|---|
| M4 Base | $499-$599 | 16GB | Cloud API agents (OpenClaw + Claude/GPT) |
| M4 24GB | $699-$799 | 24GB | Cloud APIs + occasional local 7B-8B models |
| M4 Pro | $1,399+ | 24-48GB | Heavy local model usage, multiple concurrent models |
My recommendation: If you're using cloud APIs like Claude or GPT (which is what OpenClaw does by default), the base 16GB M4 is more than enough. The AI processing happens on Anthropic's or OpenAI's servers. Your Mac Mini just needs to run the agent framework, manage memory files, and handle tool calls.
If you want to run local models through Ollama alongside your cloud agents, go for 24GB. The extra 8GB lets you keep a 7B parameter model loaded while OpenClaw runs comfortably.
Don't overspend. A $499 Mac Mini running OpenClaw with Claude API calls will outperform a $3,000 Linux box running local models for most business automation tasks. Cloud models are smarter. The Mac Mini just needs to orchestrate.
How to Configure macOS for Always-On Operation
Out of the box, macOS will put your Mac Mini to sleep. That kills your AI server. Here's the exact configuration to prevent that.
Step 1: Disable sleep.
Open System Settings > Energy. Set "Turn display off after" to Never. Enable "Prevent automatic sleeping when the display is off." Enable "Start up automatically after a power failure."
Step 2: Terminal hardening.
Run these commands to make absolutely sure it stays awake:
- `sudo pmset -a sleep 0` disables system sleep.
- `sudo pmset -a disksleep 0` disables disk sleep.
- `sudo pmset -a displaysleep 0` prevents display sleep (even without a monitor).
- `sudo pmset -a autorestart 1` auto-restarts after power loss.
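As one copy-paste block (same commands as Step 2, with `pmset -g custom` at the end so you can confirm the settings took):

```shell
# Disable every form of sleep and enable auto-restart after power loss
sudo pmset -a sleep 0
sudo pmset -a disksleep 0
sudo pmset -a displaysleep 0
sudo pmset -a autorestart 1

# Print the custom power settings to verify
pmset -g custom
```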
Step 3: Enable SSH.
System Settings > General > Sharing > Remote Login. Turn it on. This lets you manage the Mini remotely without needing a monitor connected.
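If you're configuring the Mini headless from another machine, the same toggle is available from the command line (a sketch; `systemsetup` requires sudo and, on recent macOS versions, may ask for Full Disk Access):

```shell
# Enable Remote Login (SSH) without opening System Settings
sudo systemsetup -setremotelogin on

# Confirm: should print "Remote Login: On"
sudo systemsetup -getremotelogin
```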
Step 4: Auto-login.
System Settings > Users & Groups > Automatic Login. Select your user. After a power outage, the Mini boots right back into your user session and OpenClaw starts automatically.
Installing OpenClaw on Your Mac Mini
This is the fastest part. OpenClaw installs in under 5 minutes.
Head to installopenclawnow.com and follow the one-line installer. It handles Node.js, the OpenClaw package, and initial configuration.
Once installed, you configure your API keys (Anthropic, OpenAI, or both), connect a messaging channel (Telegram is the easiest), and start the gateway daemon. That's your always-on AI agent.
To make OpenClaw start automatically on boot, add it as a Login Item in System Settings > General > Login Items. Or create a launchd plist that starts the gateway daemon on system startup. The OpenClaw docs cover both methods.
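For the launchd route, a minimal LaunchAgent might look like the sketch below. The label, binary path, and log path are placeholders, not OpenClaw's actual names — take the real ones from the OpenClaw docs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label and program path are placeholders; point them at your install -->
    <key>Label</key>
    <string>com.example.openclaw-gateway</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/openclaw</string>
        <string>gateway</string>
    </array>
    <!-- Start at login and restart the daemon if it exits -->
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/openclaw-gateway.log</string>
</dict>
</plist>
```

Save it in `~/Library/LaunchAgents/` and load it with `launchctl load` (or `launchctl bootstrap gui/$(id -u) …` on newer macOS).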
I walk through the full Mac Mini setup in this video. Hardware, macOS configuration, OpenClaw install, and connecting Telegram so you can talk to your AI agent from your phone.
Running Local AI Models with Ollama
Cloud APIs give you the smartest models (Claude Opus, GPT-4). But sometimes you want local models for privacy, cost savings, or offline access. Ollama makes this dead simple on Mac.
Install Ollama from ollama.com. One download, drag to Applications, done. Then pull a model:
`ollama pull llama3.2` grabs Meta's Llama 3.2 model (3B parameters by default; run `ollama pull llama3.1` if you want an 8B model). On the M4 16GB, expect around 28-35 tokens per second from an 8B model. That's fast enough for real-time conversation.
OpenClaw connects to Ollama natively. Point your config to http://localhost:11434 and you can use local models for any agent or task. I use cloud APIs for complex work (research, writing, analysis) and local models for simple automation tasks to keep costs down.
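You can sanity-check the endpoint OpenClaw will talk to with plain curl; these are Ollama's standard REST routes (the Ollama server must be running locally):

```shell
# List the models Ollama has pulled
curl -s http://localhost:11434/api/tags

# One-shot, non-streaming generation against a local model
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Summarize why unified memory helps local inference.",
  "stream": false
}'
```

If the first call returns JSON with your model list, OpenClaw's config will work against the same URL.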
With 16GB unified memory, you can comfortably run 7B-8B parameter models. With 24GB, you can push to 13B. The M4 Pro with 48GB can handle 34B+ models. But honestly, for most business use cases, 8B local + cloud API for heavy lifting is the sweet spot.
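Those tiers follow from simple arithmetic: a 4-bit quantized model needs roughly half a byte per parameter, plus overhead for the KV cache and runtime (the 20% overhead figure below is a rough assumption, not a measurement):

```shell
# RAM estimate: params (billions) * 0.5 bytes (4-bit) * 1.2 overhead
awk 'BEGIN {
  split("8 13 34", sizes, " ")
  for (i = 1; i <= 3; i++) {
    gb = sizes[i] * 0.5 * 1.2   # billions of params -> GB
    printf "%sB @ 4-bit: ~%.1f GB\n", sizes[i], gb
  }
}'
```

That lines up with the tiers above: ~4.8 GB fits alongside macOS in 16GB, ~7.8 GB wants 24GB, and ~20.4 GB needs the M4 Pro with 48GB.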
Read more about this in our complete guide to OpenClaw with Ollama and local models.
Remote Access from Anywhere
Your Mac Mini sits at home (or wherever you leave it). You need to access it from anywhere. Three options.
Option 1: Tailscale (recommended).
Tailscale creates a private network between your devices. Install it on the Mac Mini and on your laptop/phone. You get a private IP address that works from anywhere in the world. No port forwarding, no dynamic DNS, no security risks from exposing SSH to the internet.
The free tier covers up to 100 devices. For a personal AI server, that's more than enough.
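On the Mini itself, setup is a couple of commands if you use Homebrew (a sketch; you can also install from tailscale.com or the App Store, and depending on the install method the `tailscale` CLI may live inside the app bundle rather than on your PATH):

```shell
# Install the Tailscale app and join your tailnet
brew install --cask tailscale
tailscale up        # opens a browser login the first time

# Print the Mini's private tailnet IPv4 address; SSH to it from any device
tailscale ip -4
```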
Option 2: SSH over the internet.
If you want direct SSH access, you need to set up port forwarding on your router and use a dynamic DNS service. This works but exposes your Mac Mini to the internet. Use key-based authentication only. Disable password login.
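On macOS the relevant lines live in `/etc/ssh/sshd_config`; after editing (with sudo), toggle Remote Login off and on for the change to take effect:

```
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
```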
Option 3: Just use Telegram.
Here's the thing most people miss. If you set up OpenClaw with Telegram bot integration, you don't need SSH for daily use at all. You message your AI agent on Telegram, and it responds. The Mac Mini is just the engine running in the background. You interact with it through chat, not through a terminal.
My setup: Tailscale for maintenance and debugging. Telegram for daily interaction. I rarely SSH into the Mini directly unless I'm updating OpenClaw or checking logs.
Real Costs: Power, API, and Hardware
Let's do the actual math.
| Cost | Amount | Notes |
|---|---|---|
| Mac Mini M4 (one-time) | $499-$599 | Base model, 16GB |
| Electricity (monthly) | $1-4 | ~10W idle to ~40W under load, at $0.12/kWh |
| Claude API (monthly) | $20-100 | Depends on usage. Most solopreneurs spend $30-50. |
| Tailscale | $0 | Free tier |
| Ollama | $0 | Open source |
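The electricity line is easy to verify yourself; plug in your local rate ($0.12/kWh assumed here) and the average wattage you expect:

```shell
# Monthly cost = watts / 1000 * 720 hours * rate ($/kWh)
awk -v rate=0.12 'BEGIN {
  split("10 40", watts, " ")
  for (i = 1; i <= 2; i++)
    printf "%d W continuous: $%.2f/month\n", watts[i], watts[i] / 1000 * 720 * rate
}'
```

At the Mini's measured 3-15W idle-to-moderate range you land near the bottom of that; sustained local inference pushes it toward the top.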
Compare that to a VPS. A cloud server with similar specs costs $50-150/month. Over two years, that's $1,200-$3,600. At those rates, a Mac Mini pays for itself in 4-12 months versus renting cloud compute.
Compare that to a virtual assistant. Even a part-time VA costs $500-2,000/month. An AI agent on a Mac Mini handles scheduling, email, social media, research, and content creation for a fraction of that. Check out our full comparison of AI agents vs. virtual assistants.
The Mac Mini is the cheapest always-on compute you can get. Period. Read our guide to self-hosted AI assistants for more deployment options.
OpenClaw Lab is the #1 community for founders building AI agent systems. I share the exact playbooks, skill files, and workflows inside. Weekly lives, expert AMAs, and 265+ members building real systems.
Join OpenClaw Lab →