“Your server is on fire.”
Not literally, of course, but when your CPU usage spikes to 65% per process and your PHP workers are dying every 10 seconds, it might as well be.
3 June 2025. 11:22 AM. I’m checking on our dedicated server in Zurich when I notice something wrong. ThePinnacleList.com—one of several WordPress sites hosted on our Solespire Switzerland Server (including Marcus.Blog)—is consuming resources like it’s on a mission to self-destruct by implosion.
Here’s the thing: I’m not a server administrator. I’m not a Linux expert. I’m just a web developer who decided to save thousands per year by managing his own dedicated server, rather than keep paying for high-priced managed hosting.
My secret weapon? Claude.
What followed was a 15-minute masterclass in human-AI collaboration. Here’s how Claude and I turned a potential disaster into a textbook server defence.
Why Switzerland? Why Self-Managed?
Let me back up. Six months ago, I was paying premium prices for managed dedicated hosting. The promise was simple: “We handle the technical stuff, you focus on your business.”
But then I discovered three revolutionary things:
- Claude could be my entire technical team.
- GTHost in Canada provides the best tech globally for the best prices.
- Switzerland offers unmatched advantages for hosting.
So, I went with a powerful GTHost server in Zurich. It wasn’t just about cost savings; I could have chosen somewhere like Ashburn in the United States. Switzerland is different. It’s a world-class location, providing:
- World-class privacy laws: Our data is protected by some of the strictest privacy regulations on Earth.
- Commitment to clean energy: Our data centre operates solar panels on-site, implements energy-efficient cooling, and reuses waste heat for heating. It’s part of the Swiss Energy Agency’s sustainability programme, and Switzerland’s grid itself is predominantly hydroelectric.
- Political neutrality: We don’t have to worry about any geopolitical disputes affecting our hosting.
- Premium infrastructure: Swiss precision isn’t just for watches.
Combine all of that with Claude as my technical co-pilot, and I have a winning formula. That’s why I cancelled managed hosting, and until today, it had been smooth sailing.
The Crisis Unfolds: A Technical Deep Dive
When I spotted abnormal CPU usage this morning, my first instinct wasn’t to panic or open a support ticket. It was to share my screen with Claude.
“Multiple PHP-FPM processes at 65% CPU each,” I told Claude. “This isn’t normal.”
Claude immediately guided me through a systematic investigation in WHM Terminal:
Step 1: Process Analysis
# Check top processes sorted by CPU usage
ps aux | grep [username] | sort -k3 -nr | head -10
# Monitor in real-time
top -u [username]
The output was shocking:
[username]+ 2612564 57.5 0.0 140124 81772 ? R 11:22 0:04 php-fpm: pool [website url]
[username]+ 2612822 49.7 0.0 140060 81756 ? R 11:22 0:01 php-fpm: pool [website url]
[username]+ 2612944 58.0 0.0 133668 76804 ? R 11:22 0:01 php-fpm: pool [website url]
Step 2: PHP-FPM Log Analysis
Claude knew exactly where to look:
# Check PHP-FPM error logs
tail -50 /opt/cpanel/ea-php82/root/var/log/php-fpm/error.log
The logs revealed a disturbing pattern:
[03-Jun-2025 11:25:40] child 2627393 exited with code 0 after 11.041587 seconds
[03-Jun-2025 11:25:41] child 2627303 exited with code 0 after 13.610437 seconds
[03-Jun-2025 11:25:46] child 2628101 exited with code 0 after 10.737167 seconds
Claude explained: “This rapid cycling—processes dying every 10-15 seconds—is catastrophic. Each death spawns a new process that consumes maximum CPU before dying. It’s a death spiral.”
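If you want to quantify churn like this yourself, a one-liner over the same error log counts child exits per minute (my own sketch, using the log path from above):
# Count PHP-FPM child exits per minute (timestamp field is "[DD-Mon-YYYY HH:MM:SS]")
grep "exited with code" /opt/cpanel/ea-php82/root/var/log/php-fpm/error.log \
  | awk '{print $2}' | cut -d: -f1-2 | sort | uniq -c | tail -5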
Step 3: Access Log Investigation
# Check recent access logs
tail -30 /home/[username]/access-logs/[website url]-ssl_log
# Analyse attack patterns
awk '{print $1}' /home/[username]/access-logs/[website url]-ssl_log | tail -1000 | sort | uniq -c | sort -nr | head -10
# Check specific pages being targeted
awk '{print $7}' /home/[username]/access-logs/[website url]-ssl_log | tail -1000 | sort | uniq -c | sort -nr | head -10
The access logs revealed the truth. Of the last 1,000 requests, three IPs accounted for the bulk of the traffic:
409 34.171.xx.xxx
199 89.144.xxx.xxx
107 206.84.xxx.xx
The smoking gun? The user agent: “Scrapy/2.11.2 (+https://scrapy.org)”
Scrapy is a web scraping framework. Someone was systematically harvesting every real estate listing, every article, every image from ThePinnacleList.com. They weren’t browsing—they were pillaging.
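The user agent itself is easy to surface. Assuming the standard combined log format, where the agent is the sixth quote-delimited field, a pipeline like this lists the top agents in the last 1,000 requests:
# Top user agents in the last 1,000 requests
tail -1000 /home/[username]/access-logs/[website url]-ssl_log \
  | awk -F'"' '{print $6}' | sort | uniq -c | sort -nr | head -5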
The Counterattack: Precision Defence
With the enemy identified, I moved fast as Claude provided a multi-layered defence strategy:
Layer 1: Immediate IP Blocking
# Block attacking IPs with ConfigServer Firewall
csf -d 34.171.xx.xxx "Aggressive Scrapy bot scraping site"
csf -d 89.144.xxx.xxx "Excessive image downloading"
csf -d 206.84.xxx.xx "Heavy traffic user"
# Verify blocks are in place
csf -g 34.171.xx.xxx
iptables -L -n | grep 34.171.xx.xxx
ConfigServer Firewall (CSF) instantly dropped all traffic from those IPs. The effect was immediate.
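If you’re nervous about blocking the wrong address, note that CSF blocks are just as easy to reverse; these are standard CSF flags:
# Remove a deny entry if an IP turns out to be legitimate
csf -dr 34.171.xx.xxx
# Reload the firewall rules after manual changes
csf -r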
Layer 2: User Agent Blocking
Claude also helped craft precise .htaccess rules:
# Block bad bots and scrapers
SetEnvIfNoCase User-Agent "Scrapy|DotBot|MJ12bot" bad_bot
SetEnvIfNoCase User-Agent "DataForSeoBot|aspiegelbot|PetalBot" bad_bot
SetEnvIfNoCase User-Agent "Bytespider" bad_bot
<RequireAll>
Require all granted
Require not env bad_bot
</RequireAll>
# Protect sensitive WordPress files
<FilesMatch "^(wp-login\.php|xmlrpc\.php)$">
# RequireAny allows a match on either address; RequireAll here would deny everyone
<RequireAny>
Require ip 204.136.xx.xx
Require ip ::1
</RequireAny>
</FilesMatch>
# Rate limiting for remaining traffic
<IfModule mod_ratelimit.c>
SetOutputFilter RATE_LIMIT
SetEnv rate-limit 400
</IfModule>
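A quick sanity check I’d recommend after deploying rules like these: impersonate the bot with curl from another machine and confirm the response codes differ:
# The scraper's user agent should now get 403 Forbidden
curl -I -A "Scrapy/2.11.2 (+https://scrapy.org)" https://[website url]/
# A normal browser user agent should still get 200 OK
curl -I -A "Mozilla/5.0" https://[website url]/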
Layer 3: PHP-FPM Optimisation
Claude identified that the pool configuration needed adjustment:
# Check current pool configuration
grep -E "pm\.|max_execution_time" /opt/cpanel/ea-php82/root/etc/php-fpm.d/[website url].conf
# Monitor pool status
systemctl status ea-php82-php-fpm
Recommended changes:
; Reduce max children to prevent resource exhaustion
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
; Add request termination timeout
request_terminate_timeout = 30s
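To apply the pool changes, PHP-FPM needs a reload (or restart); the service name is the same one from the status check above. One caveat worth knowing: cPanel can regenerate these pool files from its own templates, so confirm manual edits persist after the next rebuild.
# Reload PHP-FPM so the new pool limits take effect
systemctl reload ea-php82-php-fpm
# Confirm the values are live in the pool file
grep -E "pm\.max_children|request_terminate_timeout" /opt/cpanel/ea-php82/root/etc/php-fpm.d/[website url].conf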
Layer 4: Real-time Monitoring
Claude set up continuous monitoring:
# Watch PHP-FPM processes
watch -n 1 'ps aux | grep -E "php-fpm.*[username]" | grep -v grep'
# Monitor load average recovery
watch -n 5 'uptime; echo; ps aux | grep php-fpm | wc -l'
# Track access patterns
tail -f /home/[username]/access-logs/[website url]-ssl_log | grep -v "bot\|crawler"
The Recovery: Watching It Work
Within minutes of implementing Claude’s strategy:
- Immediate relief: CPU usage plummeted from 65% to under 5%
- Process stability: PHP-FPM processes began lasting 30+ seconds instead of 10
- Load normalisation: System load began dropping from 6.41 towards normal
- Clean traffic: Only legitimate users and approved bots remained
Claude tracked every metric:
# Before blocking (from our logs):
top - 11:22:57 up 50 days, load average: 5.96, 6.41, 6.00
%Cpu(s): 12.9 us, 1.4 sy, 0.0 ni, 85.6 id
# After blocking:
top - 11:32:43 up 50 days, load average: 6.41, 6.10, 6.01
%Cpu(s): 2.4 us, 1.0 sy, 0.0 ni, 96.5 id
Long-term Protection: The Full Arsenal
Claude didn’t stop at emergency response. We implemented comprehensive protection:
Advanced Bot Detection
# Install mod_evasive for DDoS protection
yum install ea-apache24-mod_evasive -y
# Configure mod_evasive
cat > /etc/apache2/conf.d/mod_evasive.conf << 'EOF'
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 10
DOSLogDir /var/log/apache2/mod_evasive
DOSWhitelist 127.0.xx.xx
DOSWhitelist 204.136.xx.xx
</IfModule>
EOF
systemctl restart httpd
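Two quick checks worth adding here: make sure the DOSLogDir configured above actually exists, and confirm Apache loaded the module:
# The DOSLogDir configured above must exist for blocks to be recorded
mkdir -p /var/log/apache2/mod_evasive
# Confirm the module is loaded
httpd -M | grep -i evasive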
Automated Threat Detection
# Create a script to auto-block aggressive IPs
cat > /root/scripts/block_aggressive_bots.sh << 'EOF'
#!/bin/bash
# Analyse logs and block IPs exceeding thresholds
THRESHOLD=100
LOGFILE="/home/[username]/access-logs/[website url]-ssl_log"
# Get IPs with excessive requests in the recent log tail (last 10,000 lines)
tail -10000 "$LOGFILE" | awk '{print $1}' | sort | uniq -c | sort -nr | \
while read count ip; do
if [ "$count" -gt "$THRESHOLD" ]; then
# Check if already blocked
if ! csf -g "$ip" | grep -q "csf.deny"; then
echo "Blocking $ip with $count requests"
csf -d "$ip" "Automated block: $count requests/hour"
fi
fi
done
EOF
chmod +x /root/scripts/block_aggressive_bots.sh
# Add to cron
echo "*/10 * * * * /root/scripts/block_aggressive_bots.sh" >> /var/spool/cron/root
Performance Monitoring
# Set up alerts for high CPU usage
cat > /root/scripts/cpu_alert.sh << 'EOF'
#!/bin/bash
CPU_THRESHOLD=80
LOAD_THRESHOLD=10
# Check CPU usage
CPU_USAGE=$(top -b -n1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
LOAD=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | cut -d',' -f1)
if (( $(echo "$CPU_USAGE > $CPU_THRESHOLD" | bc -l) )); then
echo "High CPU Alert: $CPU_USAGE%" | mail -s "Server Alert" admin@example.com
fi
EOF
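Like the bot-blocking script, this one still needs to be made executable and scheduled; the five-minute interval below is my suggestion, not part of the original setup:
chmod +x /root/scripts/cpu_alert.sh
# Run the check every five minutes
echo "*/5 * * * * /root/scripts/cpu_alert.sh" >> /var/spool/cron/root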
The New Reality: Why I Don’t Need Managed Hosting
This incident crystallised something I’d been realising for months: Claude isn’t just a tool—it’s my entire server management team.
Traditional managed hosting gives you:
- 24/7 support (that you reach via tickets)
- Server monitoring (that alerts you after problems start)
- Security updates (on their schedule)
- Expert assistance (when they get to your ticket)
Claude gives me:
- Instant support (no tickets, no queues)
- Real-time analysis (as I’m looking at the problem)
- Security guidance (exactly when I need it)
- Expert assistance (that explains everything as we go)
The cost difference? Thousands of dollars per year.
What This Partnership Looks Like
Managing servers with Claude isn’t about memorising commands or becoming a Linux expert. It’s about knowing how to collaborate:
I provide:
- Eyes on the server
- Hands to execute commands
- Business context about what matters
- The decision to act
Claude provides:
- Technical expertise
- Diagnostic strategies
- Exact commands and configurations
- Real-time analysis and explanation
Together, we’re more effective than any managed hosting service I’ve used.
The Truth About “Doing It Yourself”
Now when I tell people I manage my own dedicated server, they assume I spent years learning Linux, studying Apache configurations, and mastering server security.
The truth? I learned what I needed to know, when I needed to know it, with Claude as my teacher and partner. When I tell this to seasoned server technicians, they scoff, saying I’m playing a dangerous game, but I don’t see it that way.
This bot attack? Claude and I handled it in 15 minutes. A managed hosting provider would have taken hours just to respond to my ticket.
Your AI-Powered Future
Here’s what changes when you partner with Claude for server management:
Financial Freedom:
- No more $200-500/month managed hosting fees
- Dedicated servers cost a fraction of managed solutions
- The savings compound monthly and can be spent elsewhere
Technical Empowerment:
- You’re never stuck waiting for support
- You understand what’s happening on your server
- You can act immediately when problems arise
Learning Acceleration:
- Every issue teaches you something new
- Claude explains the why, not just the what
- You build real expertise through practice
Complete Control:
- Your server, your rules
- No arbitrary limits or restrictions
- Scale exactly as you need
The Partnership Protocol
If you’re considering this path, here’s how to make it work:
- Start with Claude: Before you panic, before you Google, ask Claude.
- Share everything: Logs, errors, symptoms—Claude needs the full picture.
- Execute precisely: Follow Claude’s commands exactly.
- Learn as you go: Ask Claude to explain what each step does.
- Build your playbook: Document solutions for future reference.
The Swiss Advantage in Action
This incident also highlighted why hosting in Switzerland was the right choice:
- Privacy: The scrapers were likely harvesting data for resale. Swiss privacy laws mean I have stronger legal recourse. Couple that with Canadian and Italian (with EU overlay) laws, and Solespire is covered by a trifecta of powerful privacy regimes in every jurisdiction where it has a stake.
- Reliability: Despite the attack, the Swiss infrastructure never faltered. The server stayed online throughout. We’ve had zero downtime since migrating from our servers in Los Angeles to Zurich in April 2025.
- Environmental responsibility: Defending against attacks uses energy. At least our data centre generates solar power on-site and implements extensive energy efficiency measures.
- Speed: GTHost’s Zurich location provides excellent connectivity to both European and global users.
The New Reality: Your Guide to AI-Powered Server Management
This incident proved something crucial: With Claude as your partner, you can handle anything a managed hosting provider can—and respond faster.
Here’s your complete playbook:
Initial Setup with Claude:
- Choose quality hosting: Dedicated servers in privacy-respecting countries.
- Install essential tools: CSF, monitoring software, backup systems (see the install sketch after this list).
- Document everything: Keep notes on your configuration. I have a Claude Project setup with all of my notes stored there. In this way, Claude always has a complete picture of my server—able to respond with full context.
- Practice scenarios: Run through common issues before they’re critical.
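For the CSF piece specifically, here’s a minimal install sketch using the official ConfigServer tarball; the bundled csftest.pl script verifies the server meets the firewall’s prerequisites:
# Download and install ConfigServer Security & Firewall
cd /usr/src
wget https://download.configserver.com/csf.tgz
tar -xzf csf.tgz
cd csf && sh install.sh
# Verify the server meets CSF's requirements
perl /usr/local/csf/bin/csftest.pl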
During Incidents:
- Don’t panic: Share everything with Claude immediately.
- Gather data: Run diagnostic commands Claude suggests.
- Act decisively: Execute solutions without second-guessing. When it doesn’t work, tell Claude, and an alternative solution will arise.
- Monitor results: Watch metrics to confirm resolution.
Post-Incident:
- Document solutions: Save all commands and configurations.
- Implement prevention: Add long-term protections.
- Set up monitoring: Automate detection of similar issues.
- Share knowledge: Help others facing similar challenges, such as what I am doing here.
The Partnership That Changes Everything
That Scrapy bot didn’t just attack ThePinnacleList.com; it demonstrated the future of technical work.
We’re entering an era where one person with AI can outperform entire teams. Where expertise isn’t about memorising commands, but about collaborating effectively with AI. Where “managed hosting” becomes obsolete because you have something better: an AI partner that never sleeps, never takes holidays, and always explains everything clearly.
I couldn’t have saved ThePinnacleList.com without Claude. More importantly, I couldn’t run any of my servers without this partnership with AI.
The next time someone tells you that you need managed hosting, share Marcus.Blog Post #10 to prove them wrong.
The future is no longer about choosing between doing it yourself or paying someone else to do it for you.
It’s now about finding the right AI partner.
And I found mine: Claude by Anthropic.
Are you still paying for managed hosting? What’s stopping you from partnering with AI to take control of your infrastructure?