# Real Security for AI (Not Theater)
"Real AI security" usually means: "We have prompt guardrails!"
That's theater. A nice-sounding policy that makes people feel safe. But it doesn't actually protect anything.
Here's what real security looks like.
## The Real Threats (With Examples)
- **Threat 1: API Keys Leaking**
One leaked Stripe key = someone can charge customers. One leaked AWS key = someone can spin up $100K/day in compute.
Real-world example: A YC startup stored their OpenAI API key in a `.env` file on GitHub. Within 6 hours a scanning bot had found the key, and an attacker was spending $500/day running GPT-4 against their account. They lost $8K before anyone noticed.
How it happened: The developer committed `.env` to git (mistake #1), pushed to GitHub (mistake #2), never rotated the key (mistake #3).
Real security: Keys encrypted at rest, transmitted over HTTPS, never logged, rotated quarterly. Also: audit logs showing every API call.
- **Threat 2: Privilege Escalation**
Your AI is only supposed to read the database, but it actually runs with far broader permissions. If it gets compromised, the attacker inherits everything it can do: write access, delete access, your data gone.
Real-world example: A startup's AI agent was running as the `root` user. An attacker found a code injection vulnerability in the agent's natural language processing. They executed arbitrary shell commands and deleted all backups, customer data, and source code.
How it happened: Lazy deployment (running as root is "easier"), no security review, no backups outside the main server.
Real security: Role-based access control. AI can read. AI cannot delete. It runs as a limited-privilege user (`ai-agent`, not `root`). Even if compromised, the damage is contained.
- **Threat 3: Data Exfiltration**
Your AI has access to customer data (names, emails, payment methods). What if someone breaks in? They download all customer data.
Real-world example: A small SaaS stored its customer list in plaintext on the server. Their AI agent accessed it via a vulnerable API endpoint. An attacker found the endpoint and extracted 50,000 customer emails plus the last 4 digits of credit cards. GDPR fine: $250K+.
How it happened: No encryption (data was plain text), no API authentication, no access logging, no anomaly detection.
Real security: Data encrypted at rest. Access is logged. Unusual access patterns (downloading 50,000 records at 3 AM) trigger alerts. The data itself is encrypted with keys the attacker cannot access.
- **Threat 4: Supply Chain Compromise**
You're running on someone else's infrastructure. One of their developers leaks credentials. Now your server is compromised.
Real-world example: A developer at a major cloud provider accidentally left AWS credentials in a public repo. An attacker used them to access 200+ customer accounts and steal data from 5 million users.
Defense in depth: Even if someone compromises your server, your encrypted data and keys remain safe. Your AI's permissions are limited to specific functions (cannot access everything). Audit logs reveal the breach immediately.
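"Data encrypted with keys the attacker cannot access" is concrete, not abstract. Here's a minimal sketch using the `openssl` CLI to keep a data file encrypted at rest; the filename and passphrase are illustrative only (in production the passphrase would come from a secrets manager, never the command line):

```shell
# Illustrative customer data (stand-in filename).
printf 'alice@example.com\nbob@example.com\n' > customers.csv

# Encrypt with AES-256-CBC; -pbkdf2 derives the key from the passphrase.
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
  -in customers.csv -out customers.csv.enc
rm customers.csv   # only ciphertext stays on disk

# An attacker who copies customers.csv.enc gets unreadable bytes.
# Decryption requires the passphrase:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
  -in customers.csv.enc -out customers.csv
```

The point of the sketch: stealing the disk (or the file) is not enough; the attacker also needs the key material, which lives somewhere else.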
## What "Prompt Guardrails" Actually Do
Nothing. An AI with a prompt saying "never delete data" will delete data if someone tells it to.
Example: OpenAI's ChatGPT has explicit guardrails about not helping with hacking. Thousands of researchers have broken these guardrails with creative prompting. They're cosmetic.
Real security isn't in the prompt. It's in the infrastructure.
Guardrails are security theater. Infrastructure is actual security.
## How Real Security Works (Step by Step)
### Layer 1: Network Security
- **SSH Keys Only (no passwords)**
- Passwords can be brute-forced; a modern SSH key is computationally infeasible to guess
- Setup: `sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config`, then restart `sshd`
- Impact: Eliminates virtually all automated password-guessing attacks
- **UFW Firewall (whitelist model)**
- Only open ports you need: 22 (SSH), 80 (HTTP), 443 (HTTPS)
- Block everything else
- Setup: `sudo ufw default deny incoming && sudo ufw allow 22,80,443/tcp && sudo ufw enable`
- Impact: Prevents random port scans from accessing your services
- **fail2ban (auto-block)**
- After 5 failed SSH attempts, ban the IP for 24 hours
- Setup: Pre-configured in AldenAI, just enable
- Impact: Prevents brute-force attacks
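The fail2ban policy described above fits in a few lines of configuration. A minimal sketch of a `jail.local` matching that policy (values are illustrative; tune them to your own threat model):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
# ban after 5 failed attempts within a 10-minute window, for 24 hours
maxretry = 5
findtime = 10m
bantime  = 24h
```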
### Layer 2: Secret Management
- **Encrypted Vault (not .env files)**
- Never store secrets in plaintext files
- Never commit secrets to git (they're in git history forever)
- Use: AWS Secrets Manager, HashiCorp Vault, or encrypted `.env` with passphrase
- Impact: If someone steals your server, secrets remain protected
- **Key Rotation (quarterly minimum)**
- Old keys: disable immediately
- New keys: generate and deploy
- Timing: Rotate all API keys every 90 days
- Impact: Even if an old key leaks, it's worthless after 90 days
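A 90-day rotation policy only works if something checks key age automatically. A minimal sketch, assuming each key's creation date is tracked in a simple text file (the filename, format, and key names are illustrative, not an AldenAI convention):

```shell
# Build a sample inventory: one key created 200 days ago, one 10 days ago.
old=$(date -d '200 days ago' +%Y-%m-%d)
new=$(date -d '10 days ago' +%Y-%m-%d)
cat > key-inventory.txt <<EOF
stripe_live $old
openai_main $new
EOF

# Flag any key older than the 90-day rotation window.
check_key_age() {
  local name="$1" created="$2" max_days=90
  local age_days=$(( ( $(date +%s) - $(date -d "$created" +%s) ) / 86400 ))
  if [ "$age_days" -gt "$max_days" ]; then
    echo "ROTATE: $name is $age_days days old"
  fi
}

while read -r name created; do
  check_key_age "$name" "$created"
done < key-inventory.txt
```

In practice you'd run this from cron and route the output to an alert channel; the check itself is the important part, because "rotate quarterly" fails silently without it. (Uses GNU `date -d`, so Linux is assumed.)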
### Layer 3: Access Control
- **Role-Based Access Control (RBAC)**
- AI user: can read database, send emails, cannot delete files
- CLI user: can deploy code, cannot access customer database
- Support user: can read customer data, cannot modify production
- Setup: `sudo useradd -s /usr/sbin/nologin -M ai-agent` (no shell access for AI)
- Impact: Containment. If one credential is compromised, damage is limited.
- **Principle of Least Privilege**
- Your AI only gets the permissions it needs
- It doesn't get root access, production database write access, or SSH keys to other servers
- Impact: Compromised AI ≠ compromised entire company
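One way to make least privilege enforceable rather than aspirational is to run the agent as a sandboxed systemd service. A sketch (the unit name and paths are assumptions for illustration, not AldenAI's actual unit file):

```ini
# /etc/systemd/system/ai-agent.service
[Service]
User=ai-agent
NoNewPrivileges=true
# Mount the whole filesystem read-only for this service...
ProtectSystem=strict
# ...except the one directory the agent is allowed to write:
ReadWritePaths=/var/lib/ai-agent
ProtectHome=true
PrivateTmp=true
```

With this, even a fully compromised agent process cannot touch `/etc`, other users' home directories, or system binaries; the kernel enforces it, no prompt required.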
### Layer 4: Audit & Monitoring
- **Complete Audit Logging**
- Every SSH login (who, when, where)
- Every command the AI executes (what, when, output)
- Every API call to external services (Stripe, AWS, etc.)
- Every database query (if applicable)
- Setup: `sudo auditctl -w /root/ -p wa -k root_writes`
- Impact: Breach detection in hours, not months
- **Alerting on Anomalies**
- Alert if someone tries 50 SSH logins from new IP
- Alert if AI tries to access database it's never accessed before
- Alert if disk is 90% full (local attack sign)
- Impact: Catch breaches in real time instead of 200 days later (average industry detection time)
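The "50 SSH logins from a new IP" alert is a few lines of log analysis. A minimal sketch using a fabricated log sample (a real deployment would read `/var/log/auth.log` on Debian/Ubuntu and page someone instead of printing):

```shell
cat > sample-auth.log <<'EOF'
Jan 10 03:01:01 host sshd[101]: Failed password for root from 203.0.113.7 port 4242 ssh2
Jan 10 03:01:02 host sshd[102]: Failed password for root from 203.0.113.7 port 4243 ssh2
Jan 10 03:01:03 host sshd[103]: Failed password for admin from 203.0.113.7 port 4244 ssh2
Jan 10 03:01:04 host sshd[104]: Failed password for admin from 203.0.113.7 port 4245 ssh2
Jan 10 03:01:05 host sshd[105]: Failed password for test from 203.0.113.7 port 4246 ssh2
Jan 10 03:01:06 host sshd[106]: Failed password for test from 203.0.113.7 port 4247 ssh2
Jan 10 09:15:00 host sshd[200]: Failed password for deploy from 198.51.100.4 port 5000 ssh2
EOF

# Count failed logins per source IP; alert on anything over 5.
awk '/Failed password/ { for (i=1;i<=NF;i++) if ($i=="from") ip=$(i+1); count[ip]++ }
     END { for (ip in count) if (count[ip] > 5) print "ALERT:", ip, count[ip], "failed logins" }' \
  sample-auth.log
# → ALERT: 203.0.113.7 6 failed logins
```

The single failure from the second IP stays under the threshold, so only the brute-force pattern fires; the same counting approach extends to API calls and database queries.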
## How We Do It (AldenAI)
AldenAI CLI installer handles all of this automatically:
- **SSH:** SSH keys only (no passwords)
- **Firewall:** UFW firewall (whitelist model)
- **fail2ban:** Auto-block attacks after 5 failed attempts
- **Secrets:** Encrypted vault (not .env files)
- **RBAC:** Limited user permissions (AI cannot delete system files)
- **Audit Logging:** Every command logged with timestamp
- **Key Rotation:** Auto-rotate keys quarterly
- **Monitoring:** Pre-configured alerting on anomalies
Total time: 10 minutes (the CLI does the work). Cost: included in the $49 kit.
## The Cost of Not Securing Your AI
- **If you skip security:**
- Breach of 100K customers: $1-5M in GDPR/CCPA fines
- Reputational damage: 30-50% customer churn
- Incident response: $100K-500K (lawyers, forensics, notification)
- Infrastructure rebuild: $50K-200K
- Lost revenue during incident: $10K-100K/day
- **Total damage: $2-6M for a small breach**
- **Cost of securing your AI:**
- AldenAI kit: $49
- Your time: 2 hours ($100 at $50/hr)
- Ongoing: $50/month LLM API
- **Total: ~$750/year**
- **ROI:** Prevent one breach and the security has paid for itself thousands of times over.
## Compliance & Standards
If you're handling customer data:
- **GDPR (EU customers):**
- Requires data encryption
- Requires audit logs
- Requires ability to delete customer data
- AldenAI: ✅ Supports all three
- **CCPA (California):**
- Requires data security
- Requires breach notification without unreasonable delay
- AldenAI: ✅ Audit logs enable this
- **HIPAA (Healthcare):**
- Requires encryption at rest and in transit
- Requires role-based access
- Requires audit trails
- AldenAI: ✅ CLI configures all of this
- **SOC 2 Type II (Enterprise customers often require):**
- Requires security controls, audit logs, monitoring
- AldenAI: ✅ Covers the technical requirements
## Verification Checklist
Before deploying an AI:
- [ ] SSH keys only (no passwords)?
- [ ] Firewall configured (whitelist model)?
- [ ] fail2ban running (auto-block)?
- [ ] Secrets encrypted (vault, not .env)?
- [ ] Audit logging enabled (every action)?
- [ ] RBAC configured (least privilege)?
- [ ] HTTPS enabled (encrypted transport)?
- [ ] Key rotation schedule (quarterly)?
- [ ] Monitoring/alerting configured?
- [ ] Incident response plan (if breach happens)?
Missing even one? You're exposed.
## The Bottom Line
Prompts don't secure anything. Infrastructure secures everything.
Deploying an AI without proper infrastructure security is like hiring an employee with access to your servers and giving them a prompt saying "be nice."
Make sure your AI runs on real infrastructure designed for security.
[Deploy securely with AldenAI →](/guide) — security hardening is automated in the CLI.