
Operations

Ease of Day-to-Day Management

This documentation focuses on the practical, day-to-day operational aspects of managing applications, comparing the ease of use of a Monolith Server vs. Gitea + K3s across 3 main areas:

  1. Service Management - managing the service lifecycle (start, stop, restart, scale)
  2. Deployment Management - deployment and rollback processes
  3. Monitoring & Observability - troubleshooting and system observation

1. Service Management

1.1 Restart Service

Use Case: A service has a memory leak or needs a restart after a config change.

Environment A: Monolith Server

# Step 1: SSH into the server
ssh user@dev-server.example.com

# Step 2: Navigate to the application directory
cd /opt/applications

# Step 3: Restart the services using PM2
pm2 restart users-service
pm2 restart products-service
pm2 restart orders-service

# Step 4: Verify the services are running
pm2 status

# Step 5: Check the logs for errors
pm2 logs users-service --lines 50

# Step 6: Log out
exit

# Repeat for the staging & production servers

Complexity:

  • ❌ 3 servers × 5 steps = 15 manual operations
  • ❌ Requires SSH access to every server
  • ❌ Requires remembering PM2 commands
  • ❌ ~5-10 seconds of downtime per service
  • ❌ Manual verification required

Estimated Time: 5-10 minutes for 3 environments


Environment B: Gitea + K3s

# Restart the service in every environment
kubectl rollout restart deployment/users -n development
kubectl rollout restart deployment/users -n staging
kubectl rollout restart deployment/users -n production

# Verify rollout status
kubectl rollout status deployment/users -n production

# Check that the pods are running
kubectl get pods -n production

Complexity:

  • ✅ 3 commands for 3 environments
  • ✅ No SSH to any server
  • ✅ Consistent syntax (kubectl)
  • ✅ Zero downtime (rolling update)
  • ✅ Built-in automatic health checks

Estimated Time: 1-2 minutes for 3 environments
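
Because the command syntax is identical in every environment, the three restarts collapse naturally into a single shell loop — a minimal sketch using the namespaces from this document:

# Restart and wait for each environment in turn
for ns in development staging production; do
  kubectl rollout restart deployment/users -n "$ns"
  kubectl rollout status deployment/users -n "$ns"   # blocks until the new pods are Ready
done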


1.2 Scale Service (Horizontal Scaling)

Use Case: A traffic spike requires scaling a service to handle the load.

Environment A: Monolith Server

# High complexity - this requires:

# 1. Setting up a load balancer (nginx/haproxy)
# 2. Modifying the PM2 ecosystem file
pm2 start ecosystem.config.js --instances 4

# 3. Configuring port bindings for the multiple instances
# 4. Setting up health checks on the load balancer
# 5. Testing the load distribution

# Problems:
# - Single server = limited scaling
# - Changing the instance count requires a restart
# - Manual load balancer config
# - Shared resources (CPU/memory contention)

Complexity:

  • ❌ Requires load balancer setup
  • ❌ Limited by single-server capacity
  • ❌ Manual configuration across multiple files
  • ❌ Restart required
  • ❌ Resource contention between services

Estimated Time: 30-60 minutes (initial setup)


Environment B: Gitea + K3s

# Scale service to 5 replicas
kubectl scale deployment/orders --replicas=5 -n production

# Verify scaling
kubectl get pods -n production -l app=orders

# Watch real-time scaling
kubectl get pods -n production -w

# Auto load-balanced by Kubernetes Service
# No additional configuration needed!

Complexity:

  • ✅ 1 command to scale
  • ✅ Built-in auto load-balancing
  • ✅ Instant scaling (seconds)
  • ✅ Resource isolation per pod
  • ✅ Can scale beyond a single server (multi-node)

Estimated Time: 10-30 seconds
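
Beyond manual scaling, a HorizontalPodAutoscaler can absorb traffic spikes on its own. A minimal sketch — it assumes the deployment declares CPU requests, and it relies on metrics-server, which K3s bundles by default:

# Scale orders between 2 and 10 replicas, targeting 80% CPU utilization
kubectl autoscale deployment/orders -n production \
  --min=2 --max=10 --cpu-percent=80

# Watch the autoscaler react to load
kubectl get hpa -n production -w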


1.3 Stop/Start Service

Use Case: A maintenance window requires temporarily stopping a service.

Environment A: Monolith Server

# Stop service
ssh user@server
pm2 stop users-service
pm2 save

# Start service
pm2 start users-service
pm2 save

# Problem: the other services keep running on the same shared resources

Downtime: Immediate stop, 5-10 seconds to start


Environment B: Gitea + K3s

# Stop service (scale to 0)
kubectl scale deployment/users --replicas=0 -n staging

# Start service (scale back up)
kubectl scale deployment/users --replicas=2 -n staging

# Benefit: resources are freed up, and the service stays isolated

Downtime: Graceful shutdown, controlled startup


1.4 Update Environment Variables / Config

Use Case: Update an API key, database connection string, or other configuration.

Environment A: Monolith Server

# Step 1: SSH into the server
ssh user@production-server

# Step 2: Edit the .env file or ecosystem.config.js
nano /opt/app/.env
# or
nano /opt/app/ecosystem.config.js

# Step 3: Restart all services to apply the changes
pm2 restart all

# Step 4: Verify the config loaded
pm2 logs --lines 20

# Problems:
# - Config lives in files; a restart is needed to apply changes
# - Risk of typos during manual edits
# - No version control for config changes
# - Shared .env file (security risk)

Estimated Time: 5-10 minutes, with downtime


Environment B: Gitea + K3s

# Step 1: Update the ConfigMap or Secret
kubectl create configmap app-config \
  --from-literal=API_KEY=new-value \
  -n production \
  --dry-run=client -o yaml | kubectl apply -f -

# Step 2: Rollout restart to apply the changes
kubectl rollout restart deployment/users -n production

# Step 3: Verify
kubectl get configmap app-config -n production -o yaml

# Benefits:
# - Config versioned in Git (GitOps)
# - Secrets encrypted at rest
# - Audit trail of who changed what
# - Rolling update (zero downtime)

Estimated Time: 2-3 minutes, zero downtime
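
Sensitive values follow the same pattern via Secrets. A minimal sketch — the secret name and key below are hypothetical. Since kubectl set env modifies the pod template, the rolling update happens by itself, with no separate restart step:

# Create/update the secret (app-secrets and DB_PASSWORD are example names)
kubectl create secret generic app-secrets \
  --from-literal=DB_PASSWORD='s3cr3t' \
  -n production --dry-run=client -o yaml | kubectl apply -f -

# Expose every key of the secret as env vars on the pods
kubectl set env deployment/users --from=secret/app-secrets -n production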


2. Deployment Management

2.1 Deploy New Version

Use Case: Deploy a new feature or bug fix.

Environment A: Monolith Server

# Manual deployment workflow:

# 1. SSH into the development server
ssh user@dev-server

# 2. Navigate to the app directory
cd /opt/applications/users-service

# 3. Back up the current version (safety)
cp -r /opt/applications/users-service /opt/backups/users-service-$(date +%Y%m%d)

# 4. Pull the latest code
git pull origin main

# 5. Install dependencies (if they changed)
npm install

# 6. Run the build (if there is one)
npm run build

# 7. Restart the service
pm2 restart users-service

# 8. Check the logs for errors
pm2 logs users-service --lines 50

# 9. Test the endpoint
curl http://localhost:3001/health

# 10. Repeat steps 1-9 for products-service & orders-service
# 11. Repeat ALL steps for the staging server
# 12. Repeat ALL steps for the production server

# Risk points:
# - Forgetting to git pull one of the services ❌
# - Dependency conflicts ❌
# - Port already in use ❌
# - Forgetting to restart a service ❌
# - Human error while copy-pasting commands ❌

Complexity:

  • ❌ 10 steps × 3 services × 3 environments = 90 manual operations
  • ❌ ~5-10 minutes per environment
  • ❌ High risk of human error
  • ❌ Downtime during restarts (~30 seconds)
  • ❌ No rollback mechanism

Total Time: 15-30 minutes (with anxiety 😰)


Environment B: Gitea + K3s

# Automated deployment workflow:

# 1. Developer pushes code
git add .
git commit -m "feat: add new payment method"
git push origin development

# 2. Gitea Runner automatically:
# - Builds the Docker image
# - Pushes it to the registry
# - Deploys to the development namespace
# - Rolling update (zero downtime)
#
# Time: 45-60 seconds

# 3. Verify deployment
kubectl get pods -n development
kubectl logs -f deployment/users -n development

# 4. Promote to staging (via Pull Request)
# - Open PR: development → staging
# - Review & approve
# - Merge → auto deploy to staging
#
# Time: ~1 minute (after approval)

# 5. Promote to production (via Pull Request)
# - Open PR: staging → production
# - Final approval
# - Merge → auto deploy to production
# - Rolling update ensures zero downtime
#
# Time: ~1 minute (after approval)

Complexity:

  • ✅ 1 git push → auto deploy (workflow sketched below)
  • ✅ ~45-60 seconds per environment (automated)
  • ✅ Zero downtime (rolling update)
  • ✅ Consistent process across all environments
  • ✅ Automatic rollback on failure
  • ✅ Full audit trail (Git history + CI logs)

Total Time: 3-5 minutes (mostly waiting for approval; the actual deployment takes ~3 minutes)

Time Savings: 80-85% faster! 🚀
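
The pipeline behind step 2 could look roughly like the sketch below — a Gitea Actions workflow (GitHub-Actions-compatible syntax). The registry URL, image name, and container name are assumptions, and the runner is assumed to have docker and a kubeconfig available:

# Write a hypothetical workflow file (paths and names are examples)
mkdir -p .gitea/workflows
cat > .gitea/workflows/deploy.yaml <<'EOF'
name: deploy-development
on:
  push:
    branches: [development]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the image
        run: |
          docker build -t registry.example.com/users:${{ github.sha }} .
          docker push registry.example.com/users:${{ github.sha }}
      - name: Rolling update in the development namespace
        run: |
          kubectl set image deployment/users \
            users=registry.example.com/users:${{ github.sha }} -n development
EOF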


2.2 Rollback to Previous Version

Use Case: A bug is found in production; a fast rollback is needed.

Environment A: Monolith Server

# Manual rollback workflow:

# 1. SSH into the production server
ssh user@production-server

# 2. Stop the service
pm2 stop users-service

# 3. Git revert, or check out the previous commit
git log --oneline -5  # find the previous commit
git checkout abc1234  # check out the previous version

# 4. Rebuild dependencies (if needed)
npm install
npm run build

# 5. Restart the service
pm2 start users-service

# 6. Pray it works 🙏
pm2 logs users-service

# 7. Repeat for staging & development (to keep them in sync)

# Problems:
# - Manual process, error-prone
# - ~2-5 minutes of downtime
# - Requires remembering/finding the previous commit hash
# - Risk: wrong commit, forgotten dependencies

Downtime: 2-5 minutes
Risk: High (manual, stressful)


Environment B: Gitea + K3s

# Simple rollback workflow:

# Method 1: Kubernetes native rollback
kubectl rollout undo deployment/users -n production

# Verify rollback
kubectl rollout status deployment/users -n production

# Time: 10-30 seconds
# Downtime: ZERO (rolling update)

# Method 2: Rollback specific revision
kubectl rollout history deployment/users -n production
kubectl rollout undo deployment/users --to-revision=3 -n production

# Method 3: Git-based rollback (to keep the source in sync)
git revert HEAD
git push origin production
# Auto trigger CI/CD → deploy previous version

Downtime: ZERO (rolling update)
Risk: Low (automated, tested)
Time: 10-30 seconds

Benefits:

  • ✅ 1-command rollback
  • ✅ Zero downtime
  • ✅ Automatic health checks
  • ✅ Can roll back to any previous revision
  • ✅ No SSH needed

2.3 Deploy Hotfix

Use Case: A critical bug in production requires deploying a fix immediately.

Environment A: Monolith Server

# Urgent hotfix workflow:

# 1. Fix the bug locally
# 2. Test locally
# 3. Commit & push
git commit -m "hotfix: critical payment bug"
git push origin main

# 4. SSH into production
ssh user@production-server

# 5. Pull & deploy manually (same 10 steps as a normal deploy)
cd /opt/app
git pull
npm install
npm run build
pm2 restart all

# 6. Monitor the logs frantically
pm2 logs --lines 100

# 7. Hope no new issues appear
# 8. Manually update staging & dev (if you remember)

# Issues:
# - High-pressure situation
# - Manual steps, prone to error
# - Downtime during the restart
# - Might forget to update the other environments

Time: 10-15 minutes (with stress level 📈)
Downtime: 30-60 seconds


Environment B: Gitea + K3s

# Efficient hotfix workflow:

# 1. Create hotfix branch from production
git checkout production
git checkout -b hotfix/payment-bug

# 2. Fix bug
# 3. Test locally
# 4. Push hotfix
git push origin hotfix/payment-bug

# 5. Merge to production
git checkout production
git merge hotfix/payment-bug
git push origin production

# 6. Gitea Runner automatically:
# - Build image
# - Deploy to production (rolling update)
# - Auto-merges downstream (production → staging → development)
#
# Time: 2-3 minutes
# Downtime: ZERO

# 7. Verify
kubectl get pods -n production
kubectl logs -f deployment/app -n production

Time: 3-5 minutes (mostly build time)
Downtime: ZERO
Auto-sync: The hotfix automatically flows down to staging & development

Benefits:

  • ✅ Fast deployment (automated)
  • ✅ Zero downtime
  • ✅ Auto-sync across all environments (sketched below)
  • ✅ Full audit trail
  • ✅ Easy rollback if needed
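
One way the downstream auto-merge from step 6 could be scripted inside CI — a sketch only, using the branch names from this document; the real pipeline logic is not shown here:

# Merge each branch into the one below it, so fixes flow downstream
git fetch origin
for pair in production:staging staging:development; do
  src=${pair%%:*}; dst=${pair##*:}
  git checkout "$dst"
  git merge --no-edit "origin/$src"
  git push origin "$dst"
done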

3. Monitoring & Observability

3.1 Check Application Logs

Use Case: Debug an error or monitor application behavior.

Environment A: Monolith Server

# Viewing logs:

# 1. SSH into the server
ssh user@production-server

# 2. Navigate, or use PM2 directly
pm2 logs users-service --lines 100

# 3. Grep for errors
pm2 logs users-service | grep ERROR

# 4. Check the system logs (if needed)
tail -f /var/log/syslog

# 5. Repeat for the other services & environments

# Problems:
# - Logs scattered across servers
# - No centralized logging
# - Requires SSH access
# - Hard to correlate logs across services
# - No retention policy (log rotation is manual)

Complexity:

  • ❌ SSH into multiple servers
  • ❌ Logs not centralized
  • ❌ Hard to search/filter
  • ❌ No log aggregation

Environment B: Gitea + K3s

# Viewing logs:

# Real-time logs
kubectl logs -f deployment/users -n production

# Last 100 lines
kubectl logs deployment/users -n production --tail=100

# Logs from all pods (multiple replicas)
kubectl logs -l app=users -n production --all-containers=true

# Grep for errors
kubectl logs deployment/users -n production | grep ERROR

# Logs from specific time range
kubectl logs deployment/users -n production --since=1h

# Logs from the previous container (if it crashed)
kubectl logs deployment/users -n production --previous

# Benefits:
# - Centralized access via kubectl
# - No SSH needed
# - Easy filtering
# - Works across all pods/replicas
# - Can integrate with ELK/Loki for long-term storage

Complexity:

  • ✅ Centralized access
  • ✅ No SSH needed
  • ✅ Easy filtering & searching
  • ✅ Works with multiple replicas (see the sketch below)
  • ✅ Previous container logs accessible
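
With several replicas behind one Deployment, the --prefix flag tags every line with its pod name, which makes interleaved output much easier to attribute:

# Label-selector logs, prefixed with the originating pod
kubectl logs -l app=users -n production --prefix --tail=20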

3.2 Check Resource Usage (CPU/Memory)

Use Case: Monitor resource consumption to troubleshoot performance issues.

Environment A: Monolith Server

# Checking resource usage:

# 1. SSH into the server
ssh user@production-server

# 2. Use top/htop
top

# 3. Find the specific process
ps aux | grep node

# 4. Interpret manually:
# - Which PID is which service?
# - Total memory across all services?
# - Individual service consumption?

# 5. PM2 monitoring (limited)
pm2 monit

# Problems:
# - No per-service breakdown (shared server)
# - Manual calculation needed
# - No historical data
# - No alerting

Visibility: Limited, manual interpretation


Environment B: Gitea + K3s

# Checking resource usage:

# Top pods (real-time CPU/memory)
kubectl top pods -n production

# Top nodes
kubectl top nodes

# A specific deployment
kubectl top pods -n production -l app=users

# Describe a pod for detailed info
kubectl describe pod users-xyz -n production

# Example output:
# NAME              CPU    MEMORY
# users-abc123      50m    128Mi
# products-def456   30m    64Mi
# orders-ghi789     120m   256Mi

# Benefits:
# - Per-pod resource visibility
# - Real-time metrics
# - Comparable against limits/requests
# - Easy to identify resource hogs
# - Can integrate with Prometheus/Grafana

Visibility: Excellent, per-service breakdown
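
The limits/requests that kubectl top readings are compared against can themselves be set from the CLI. A sketch with hypothetical values — changing them edits the pod template, so it triggers a rolling update:

# Set per-container requests (scheduling guarantee) and limits (hard cap)
kubectl set resources deployment/users -n production \
  --requests=cpu=100m,memory=128Mi \
  --limits=cpu=500m,memory=256Mi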


3.3 Check Service Health Status

Use Case: Quickly check whether all services are running properly.

Environment A: Monolith Server

# Checking health status:

# 1. SSH into the server
ssh user@production-server

# 2. Check PM2 status
pm2 status

# Output:
# ┌─────┬───────────────┬─────────┬─────────┐
# │ id  │ name          │ status  │ restart │
# ├─────┼───────────────┼─────────┼─────────┤
# │ 0   │ users         │ online  │ 5       │
# │ 1   │ products      │ errored │ 12      │ ← Problem!
# │ 2   │ orders        │ online  │ 3       │
# └─────┴───────────────┴─────────┴─────────┘

# 3. Manually curl each endpoint to test it
curl http://localhost:3001/health
curl http://localhost:3002/health
curl http://localhost:3003/health

# 4. Repeat for staging & dev

# Problems:
# - Manual checks per server
# - PM2 "online" ≠ healthy (could be in a crash loop)
# - No automatic health checks

Environment B: Gitea + K3s

# Checking health status:

# Get all pods across environments
kubectl get pods -n production
kubectl get pods -n staging
kubectl get pods -n development

# Example output:
# NAME              READY   STATUS             RESTARTS   AGE
# users-abc123      1/1     Running            0          2d
# products-def456   1/1     Running            0          2d
# orders-ghi789     0/1     CrashLoopBackOff   5          10m  ← Problem!

# READY 1/1 = healthy ✅
# READY 0/1 = problem ❌

# Automatic health checks are built in:
# - Liveness probe (restarts the container if unhealthy)
# - Readiness probe (removes the pod from the load balancer)

# Check events for troubleshooting
kubectl get events -n production --sort-by='.lastTimestamp'

# Benefits:
# - Visual status (READY column)
# - Automatic health checks
# - Unhealthy pods auto-restarted
# - No manual curl needed

Visibility: Excellent, automatic health monitoring
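
The READY column is driven by the probes mentioned above. A minimal sketch of how they could be declared, applied as a patch — the container name is an assumption, and the port matches the /health endpoint curled earlier:

kubectl patch deployment/users -n production --patch '
spec:
  template:
    spec:
      containers:
        - name: users                    # must match the container name in the pod spec
          livenessProbe:                 # failing probe -> kubelet restarts the container
            httpGet: {path: /health, port: 3001}
            periodSeconds: 10
          readinessProbe:                # failing probe -> pod removed from Service endpoints
            httpGet: {path: /health, port: 3001}
            periodSeconds: 5
'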


3.4 Troubleshooting Failed Deployment

Use Case: A deployment failed and needs debugging.

Environment A: Monolith Server

# Troubleshooting workflow:

# 1. SSH into the server
# 2. Check the PM2 logs
pm2 logs users-service --err

# 3. Check the system logs
tail -f /var/log/syslog

# 4. Check the application logs
tail -f /opt/app/logs/error.log

# 5. Check the git status
git status
git log

# 6. Investigate manually:
# - Dependency issue?
# - Wrong config?
# - Port conflict?
# - Permission issue?

# No structured approach; just manual digging

Time to Resolution: Varies, 15-60 minutes


Environment B: Gitea + K3s

# Structured troubleshooting:

# 1. Check deployment status
kubectl rollout status deployment/users -n production

# 2. Check pod status
kubectl get pods -n production

# 3. Describe the pod for detailed events
kubectl describe pod users-abc123 -n production

# Output shows:
# - Image pull issues
# - Health check failures
# - Resource constraints
# - Config errors

# 4. Check logs
kubectl logs users-abc123 -n production

# 5. Check the previous container (if it crashed)
kubectl logs users-abc123 -n production --previous

# 6. Check events
kubectl get events -n production --field-selector involvedObject.name=users-abc123

# Benefits:
# - Structured troubleshooting flow
# - Clear error messages
# - All info in kubectl (no SSH)
# - Events show what happened

Time to Resolution: 5-15 minutes (structured approach)
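
The whole flow above is easy to script. A small convenience sketch — not part of kubectl itself — that runs the checks in order for one pod:

# Usage: troubleshoot <pod-name> <namespace>
troubleshoot() {
  local pod=$1 ns=$2
  kubectl get pod "$pod" -n "$ns"
  kubectl describe pod "$pod" -n "$ns" | tail -n 20              # recent events sit at the bottom
  kubectl logs "$pod" -n "$ns" --tail=50
  kubectl logs "$pod" -n "$ns" --previous --tail=50 2>/dev/null  # only exists after a crash
}

troubleshoot users-abc123 production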


4. Operational Comparison Summary

4.1 Time Savings

| Task                      | Monolith Time | K3s Time  | Savings |
|---------------------------|---------------|-----------|---------|
| Restart service (3 envs)  | 5-10 min      | 1-2 min   | 70-80%  |
| Scale service             | 30-60 min     | 10-30 sec | 99%     |
| Deploy new version        | 15-30 min     | 3-5 min   | 80%     |
| Rollback                  | 2-5 min       | 10-30 sec | 90%     |
| Hotfix deploy             | 10-15 min     | 3-5 min   | 70%     |
| Check logs                | 5 min         | 30 sec    | 90%     |
| Check health status       | 3 min         | 10 sec    | 95%     |
| Troubleshoot issue        | 15-60 min     | 5-15 min  | 70%     |

Average Time Savings: 75-85% 🚀


4.2 Ease of Use Scoring

| Aspect             | Monolith       | K3s                 | Winner   |
|--------------------|----------------|---------------------|----------|
| Learning Curve     | ⭐⭐⭐⭐⭐ Easy | ⭐⭐⭐ Medium       | Monolith |
| Daily Operations   | ⭐⭐ Hard       | ⭐⭐⭐⭐⭐ Easy     | K3s      |
| Multi-Environment  | ⭐ Very Hard    | ⭐⭐⭐⭐⭐ Easy     | K3s      |
| Troubleshooting    | ⭐⭐ Hard       | ⭐⭐⭐⭐ Easy       | K3s      |
| Deployment Speed   | ⭐⭐ Slow       | ⭐⭐⭐⭐⭐ Fast     | K3s      |
| Rollback Speed     | ⭐⭐ Slow       | ⭐⭐⭐⭐⭐ Instant  | K3s      |
| Monitoring         | ⭐⭐ Limited    | ⭐⭐⭐⭐⭐ Excellent | K3s     |
| Team Collaboration | ⭐⭐ Hard       | ⭐⭐⭐⭐⭐ Easy     | K3s      |

4.3 Key Insights

Monolith Server:

  • Pros:

    • Familiar technology (SSH, PM2)
    • Low initial learning curve
    • Quick to set up for simple apps
  • Cons:

    • Manual operations everywhere
    • High human error risk
    • Downtime during updates
    • Hard to manage multiple environments
    • No built-in health checks
    • Scattered logs and monitoring

Gitea + K3s:

  • Pros:

    • Automated operations (75-85% time savings)
    • Zero downtime deployments
    • Easy multi-environment management
    • Built-in health checks & auto-healing
    • Centralized logging & monitoring
    • One-command rollback
    • Declarative configuration
    • Full audit trail
  • Cons:

    • Initial learning curve (kubectl basics)
    • More complex initial setup

5. Conclusion

Although the Monolith Server is easier to learn at first, Gitea + K3s is far easier for day-to-day operations:

For Service Management:

  • 🚀 Same commands across environments
  • 🚀 No SSH needed
  • 🚀 Zero downtime operations
  • 🚀 Auto health checks

For Deployment:

  • 🚀 1 command vs 10+ manual steps
  • 🚀 Automated & consistent
  • 🚀 Git-based workflow (audit trail)
  • 🚀 One-command rollback

For Monitoring:

  • 🚀 Centralized logs & metrics
  • 🚀 Real-time visibility
  • 🚀 Structured troubleshooting
  • 🚀 Built-in observability

Trade-off: It requires an investment in learning kubectl/K8s basics (~1-2 weeks), but this pays off with significant long-term operational excellence.

ROI: 1-2 weeks of learning effort = 75-85% time savings on daily operations! 💰


Recommendation: For a serious production environment, Gitea + K3s is the far superior choice in terms of operational ease, reliability, and maintainability.