
Real Case

1. Introduction to the Real Case

1.1 Why Does the Real Case Matter?

In this research, the real case is not just a dummy application or a simple proof of concept. It was chosen to:

  1. Represent Real Conditions: simulate a production application with realistic complexity
  2. Ensure Test Validity: make sure the test results transfer to real-world cases
  3. Demonstrate Practicality: prove that the solution can be implemented in industry

1.2 Real Case Selection Criteria

The real-case application was selected based on the following criteria:

Moderate Complexity: not too simple (1 service), yet not too complex (10+ services)

Inter-service Communication: services communicate with each other, so network reliability can be tested

Replicability: easy to set up and replicate for validating results

Representativeness: reflects common patterns of modern business-logic applications


2. Real-Case Application Architecture

2.1 System Overview

The application represents a simple e-commerce ordering system with three main business domains:

Domain     Main Function                       Complexity
Users      Customer data management            Low
Products   Product catalog and inventory       Low
Orders     Transactions and order processing   High

2.2 Microservices Architecture Diagram

┌──────────────────────────────────────────────────────────────┐
│                     CLIENT / API GATEWAY                     │
└──────────┬──────────────────────┬─────────────────────┬──────┘
           │                      │                     │
┌──────────▼───────┐   ┌──────────▼───────┐   ┌─────────▼────────┐
│  Users Service   │   │ Products Service │   │  Orders Service  │
│  Port: 3001      │   │ Port: 3002       │   │  Port: 3003      │
│                  │   │                  │   │                  │
│  GET  /users     │   │ GET  /products   │   │  POST /orders    │
│  POST /users     │   │ POST /products   │   │  GET  /orders    │
│  GET  /health    │   │ GET  /health     │   │  GET  /health    │
└──────────▲───────┘   └──────────▲───────┘   └─────────┬────────┘
           │                      │                     │
           │  HTTP: GET /users/:id│  HTTP: GET          │
           │                      │  /products/:id      │
           └──────────────────────┴─────────────────────┘

Diagram notes:

  • Each service is independent and runs on its own port
  • The Orders service depends on Users and Products
  • Inter-service communication uses HTTP REST APIs
  • Every service exposes a health check endpoint

2.3 Technical Characteristics of the Application

Tech Stack:

  • Runtime: Node.js v18.x
  • Framework: Express.js v4.x
  • Architecture: RESTful API
  • Data Storage: In-memory (to keep testing simple)
  • Containerization: Docker

2.4 Service Detail: Users Service

Business Function: the Users service manages customer data and serves as the foundation service of the system.

Technical Characteristics:

Aspect         Detail
Port           3001
Dependencies   None (independent service)
Data Model     { id, name, email, createdAt }
Startup Time   ~2 seconds
Memory Usage   ~50-70 MB

API Endpoints:

POST   /users          // Create new user
GET    /users          // Get all users
GET    /users/:id      // Get user by ID
GET    /health         // Health check endpoint

Example Request/Response:

// POST /users
Request: {
  "name": "John Doe",
  "email": "john@example.com"
}

Response: {
  "id": "usr_123",
  "name": "John Doe",
  "email": "john@example.com",
  "createdAt": "2024-01-15T10:30:00Z"
}
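The create/read behaviour above can be sketched with an in-memory store. This is our illustrative sketch, not the thesis source: the "usr_<n>" id format mirrors the example response, and the validation rule is an assumption.

```javascript
// In-memory user store matching the data model { id, name, email, createdAt }.
// Validation and the "usr_<n>" id format are assumptions for illustration.
const users = new Map();
let seq = 0;

function createUser({ name, email } = {}) {
  if (!name || !email) {
    throw new Error('name and email are required'); // would map to HTTP 400
  }
  const user = {
    id: `usr_${++seq}`,
    name,
    email,
    createdAt: new Date().toISOString(),
  };
  users.set(user.id, user);
  return user;
}

function getUser(id) {
  return users.get(id) || null; // null would map to HTTP 404
}

function listUsers() {
  return [...users.values()];
}
```

Because storage is in-memory, a restart wipes all data, which keeps the recovery tests later in this document focused purely on process behaviour.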

Purpose in Testing:

  • ✅ Baseline for measuring deployment time of a dependency-free service
  • ✅ Observing simple startup and recovery behaviour
  • ✅ Benchmarking minimal resource consumption

2.5 Service Detail: Products Service

Business Function: the Products service manages the product catalog and inventory.

Technical Characteristics:

Aspect         Detail
Port           3002
Dependencies   None (independent service)
Data Model     { id, name, price, stock, createdAt }
Startup Time   ~2 seconds
Memory Usage   ~50-70 MB

API Endpoints:

POST   /products       // Create new product
GET    /products       // Get all products
GET    /products/:id   // Get product by ID
GET    /health         // Health check endpoint

Example Request/Response:

// POST /products
Request: {
  "name": "Laptop ASUS ROG",
  "price": 15000000,
  "stock": 10
}

Response: {
  "id": "prd_456",
  "name": "Laptop ASUS ROG",
  "price": 15000000,
  "stock": 10,
  "createdAt": "2024-01-15T10:35:00Z"
}

Purpose in Testing:

  • ✅ Testing resource isolation between independent services
  • ✅ Validating that services run in parallel without interference
  • ✅ Observing CPU and memory usage during concurrent deployment

2.6 Service Detail: Orders Service (Complex)

Business Function: the Orders service manages order transactions and is the most critical service, because it integrates Users and Products.

Technical Characteristics:

Aspect          Detail
Port            3003
Dependencies    ✅ Users Service, ✅ Products Service
Data Model      { id, userId, productId, quantity, total, createdAt }
Startup Time    ~3 seconds (waits for dependency services)
Memory Usage    ~60-80 MB
Network Calls   2 HTTP requests per order creation

API Endpoints:

POST   /orders         // Create new order (calls Users & Products)
GET    /orders         // Get all orders
GET    /orders/:id     // Get order by ID
GET    /health         // Health check endpoint

Business Logic Flow:

POST /orders
├──► 1. Validate request body
├──► 2. HTTP GET /users/:userId
│        └─► Verify user exists
├──► 3. HTTP GET /products/:productId
│        └─► Verify product exists & stock available
├──► 4. Calculate total = price × quantity
└──► 5. Create order & return response
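The five steps above can be sketched as a single handler function. This is our sketch, not the thesis source: fetchUser and fetchProduct stand in for the HTTP GETs to the Users and Products services and are injected so the flow can be shown without a network.

```javascript
// Order-creation flow following the five steps above. Dependencies are
// injected: fetchUser/fetchProduct represent GET /users/:id and
// GET /products/:id against the other two services.
async function createOrder({ fetchUser, fetchProduct }, { userId, productId, quantity }) {
  // 1. Validate request body
  if (!userId || !productId || !(quantity > 0)) {
    throw new Error('invalid request body'); // would map to HTTP 400
  }
  // 2. HTTP GET /users/:userId → verify the user exists
  const user = await fetchUser(userId);
  if (!user) throw new Error(`user ${userId} not found`);
  // 3. HTTP GET /products/:productId → verify product exists & stock suffices
  const product = await fetchProduct(productId);
  if (!product) throw new Error(`product ${productId} not found`);
  if (product.stock < quantity) throw new Error('insufficient stock');
  // 4. Calculate total = price × quantity
  const total = product.price * quantity;
  // 5. Create order & return the response
  return {
    id: `ord_${Date.now()}`,
    userId,
    productId,
    quantity,
    total,
    status: 'pending',
    createdAt: new Date().toISOString(),
  };
}
```

If either dependency call fails, the order is rejected, which is exactly the cascading-failure behaviour the test scenarios later provoke.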

Example Request/Response:

// POST /orders
Request: {
  "userId": "usr_123",
  "productId": "prd_456",
  "quantity": 2
}

Response: {
  "id": "ord_789",
  "userId": "usr_123",
  "productId": "prd_456",
  "quantity": 2,
  "total": 30000000,
  "status": "pending",
  "createdAt": "2024-01-15T10:40:00Z"
}

Purpose in Testing:

  • Critical Point: testing the stability of inter-service communication
  • Failure Scenario: observing the impact of a failing dependency service
  • Recovery Testing: validating the environment's auto-healing capability
  • Network Latency: measuring the overhead of internal HTTP communication
  • Cascading Failure: testing whether a failure in one service affects the others

2.7 End-to-End Business Process Flow

Business use-case scenario:

┌──────────────────────────────────────────────────────────────┐
│ PHASE 1: Setup Master Data                                   │
└──────────────────────────────────────────────────────────────┘

Step 1: Register User
POST /users
├─► Input:  { name: "Ahmad", email: "ahmad@mail.com" }
└─► Output: { id: "usr_001", ... }

Step 2: Add Product
POST /products
├─► Input:  { name: "Laptop", price: 15000000, stock: 5 }
└─► Output: { id: "prd_001", ... }


┌──────────────────────────────────────────────────────────────┐
│ PHASE 2: Transaction (Critical Path)                         │
└──────────────────────────────────────────────────────────────┘

Step 3: Create Order
POST /orders
├─► Input: { userId: "usr_001", productId: "prd_001", qty: 2 }
├─► Internal: GET /users/usr_001 ──► Validate user
├─► Internal: GET /products/prd_001 ──► Validate product & stock
└─► Output: { id: "ord_001", total: 30000000, status: "pending" }


┌──────────────────────────────────────────────────────────────┐
│ TEST SCENARIOS DERIVED FROM THIS FLOW                        │
└──────────────────────────────────────────────────────────────┘

1. Normal Flow Testing
   ✅ All services online → order created successfully

2. Partial Failure Testing
   ❌ Users service down → order creation fails
   ❌ Products service down → order creation fails

3. Recovery Testing
   🔄 Kill the Orders pod/process → auto-restart & self-healing

4. Network Latency Testing
   📊 Measure response time with inter-service communication

Testing points from the business flow:

Test Case             Environment A (Monolith)   Environment B (K3s)
Deploy all services   Manual, sequential         Automated, parallel
Restart on failure    Manual intervention        Auto self-healing
Resource isolation    None                       Namespace + limits
Rollback on error     Hard & manual              Easy kubectl rollout

2.8 Justification for the Real Case

Academic reasons:

  1. Measured Complexity

    • Not too simple (hello world)
    • Not too complex (enterprise-level)
    • A sweet spot for a proof-of-concept demonstration
  2. Research Validity

    • Represents common business application patterns (CRUD + business logic)
    • Has inter-service dependencies (a real-world scenario)
    • Can be reproduced by other researchers
  3. Focus on the Environment, Not the Application

    • The application is simple enough not to distract from the research focus
    • Performance differences clearly come from the environment, not code complexity
    • Easy to maintain throughout the research

Practical reasons:

  1. Replicability

    • Easy setup (< 30 minutes)
    • No external database required
    • Fully documented
  2. Observability

    • Easy to monitor and debug
    • Clear, structured logs
    • Health checks available on every service
  3. Research Scalability

    • Can be extended with additional services
    • A database can be added for follow-up research
    • The architecture supports future enhancement

With this real case:

  • ✅ The deployment comparison is objective and measurable
  • ✅ The research results are applicable to industry
  • ✅ The research can be replicated and validated

3. Deployment Strategy & Multi-Environment Management

A crucial aspect of a production-ready application is the ability to manage multiple environments (development, staging, production) consistently and efficiently. This section explains how the deployment strategy is implemented and compares the management complexity of the two environments.


3.1 Branching Strategy & Environment Mapping

This research uses a simplified Git Flow with a branch-based deployment approach:

┌─────────────────────────────────────────────────────────────────┐
│ Git Branching Strategy                                          │
└─────────────────────────────────────────────────────────────────┘

Branch           K8s Namespace       Environment
────────────────────────────────────────────────
development   →  development    →    Dev/Testing
staging       →  staging        →    UAT/Pre-Prod
production    →  production     →    Production
hotfix/*      →  production     →    Urgent Fixes

Characteristics:

  • 1 Branch = 1 Environment - a clear, easy-to-understand mapping
  • Namespace Isolation - each environment is isolated in its own K8s namespace
  • Immutable Tags - the Git SHA is used as the image tag (traceability)
  • Single Manifest Template - one template for all environments
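The branch-to-namespace rule in the table above can be captured in a small helper. This is a sketch of the rule for clarity, not code taken from the pipeline.

```javascript
// Maps a Git branch to the K8s namespace it deploys to, following the
// mapping table above. hotfix/* branches target production directly;
// any other branch (e.g. feature/*) does not deploy on its own.
function namespaceForBranch(branch) {
  if (branch.startsWith('hotfix/')) return 'production';
  if (['development', 'staging', 'production'].includes(branch)) return branch;
  return null; // feature branches only deploy after being merged
}
```

The CI/CD pipeline later in this section applies the same rule implicitly, by substituting the branch name into the manifest's namespace field.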

3.2 Workflow Deployment: Normal Release

Normal flow from development to production:

┌─────────────────────────────────────────────────────────────────┐
│ PHASE 1: Development │
└─────────────────────────────────────────────────────────────────┘

Developer Feature Branch (feature/new-order-api)

├─► git commit & push

└─► Pull Request → development branch

├─► Code Review
├─► CI Checks (lint, test, build)

└─► Merge (approved)


┌───────────────────────┐
│ AUTO TRIGGER CI/CD │
└───────────────────────┘

├─► Build Docker Image (tag: git-sha)
├─► Push to Registry
├─► Deploy to namespace: development

└─► ✅ Deployed to Dev Environment


┌─────────────────────────────────────────────────────────────────┐
│ PHASE 2: Staging (QA Testing) │
└─────────────────────────────────────────────────────────────────┘

Development Branch (tested & stable)

└─► Pull Request → staging branch

├─► QA Approval
├─► All tests passed

└─► Merge


┌───────────────────────┐
│ AUTO TRIGGER CI/CD │
└───────────────────────┘

├─► Build Docker Image (tag: git-sha)
├─► Push to Registry
├─► Deploy to namespace: staging

└─► ✅ Deployed to Staging Environment

├─► UAT Testing
├─► Performance Testing
└─► Security Scanning


┌─────────────────────────────────────────────────────────────────┐
│ PHASE 3: Production (Go Live) │
└─────────────────────────────────────────────────────────────────┘

Staging Branch (UAT passed)

└─► Pull Request → production branch

├─► Final Review & Approval
├─► All staging tests passed
├─► Change Management Ticket

└─► Merge


┌───────────────────────┐
│ AUTO TRIGGER CI/CD │
└───────────────────────┘

├─► Build Docker Image (tag: git-sha)
├─► Push to Registry
├─► Deploy to namespace: production
├─► Rolling Update (Zero Downtime)

└─► ✅ Deployed to Production

└─► Auto Merge Downstream ⬇️

├─► production → staging
└─► staging → development

(Ensure consistency across envs)

Total Time: ~5-10 minutes per environment (automated)


3.3 Workflow Deployment: Hotfix

Hotfix flow for a critical bug in production:

┌─────────────────────────────────────────────────────────────────┐
│ HOTFIX Workflow (Urgent Production Fix) │
└─────────────────────────────────────────────────────────────────┘

Production Branch (current state)

├─► Create Hotfix Branch (hotfix/critical-payment-bug)
│ │
│ ├─► Fix bug
│ ├─► Test locally
│ └─► git push

└─► Merge Hotfix → Production


┌───────────────────────┐
│ AUTO TRIGGER CI/CD │
└───────────────────────┘

├─► Build Docker Image
├─► Push to Registry
├─► Deploy to production namespace
├─► Rolling Update (minimal downtime)

└─► ✅ Hotfix Deployed

└─► Auto Merge Downstream ⬇️

├─► production → staging
└─► staging → development

(Sync fix to all environments)

Total Time: ~3-5 minutes (automated, no manual SSH)


3.4 Auto-Merge Downstream Strategy

Auto-merge mechanism for environment consistency:

Workflow .gitea/workflows/automerge.yml:

name: Auto Merge Downstream Branches

on:
  push:
    branches:
      - production
      - staging

jobs:
  auto-merge:
    runs-on: k8s-runner-02
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # If push to production, merge to staging
      - name: Merge production → staging
        if: gitea.ref == 'refs/heads/production'
        run: |
          git checkout staging
          git merge origin/production --no-edit
          git push origin staging

      # If push to staging, merge to development
      - name: Merge staging → development
        if: gitea.ref == 'refs/heads/staging'
        run: |
          git checkout development
          git merge origin/staging --no-edit
          git push origin development

Benefits:

  • Prevent Environment Drift - all environments stay in sync
  • Hotfix Propagation - fixes automatically flow down to staging & dev
  • Zero Manual Intervention - no manual cherry-picking needed

3.5 CI/CD Pipeline Implementation

Workflow .gitea/workflows/cicd.yml:

name: Build, Push, and Deploy

on:
  push:
    branches:
      - production
      - staging
      - development

jobs:
  build:
    runs-on: k8s-runner-02
    env:
      REGISTRY: registry.staging
      IMAGE_NAME: ${{ gitea.repository }}
      IMAGE_TAG: ${{ gitea.sha }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Buildx builder
        run: |
          docker buildx create --use --name builder

      - name: Build & Push Docker
        run: |
          docker buildx build \
            -t $REGISTRY/$IMAGE_NAME:$IMAGE_TAG \
            --push .

    outputs:
      image: registry.bigdata.pens.ac.id/$IMAGE_NAME:$IMAGE_TAG

  deploy:
    needs: build
    runs-on: k8s-runner-02
    env:
      APP_NAME: ${{ gitea.repository }}
      DIGEST_IMAGE: ${{ needs.build.outputs.image }}
      BRANCH: ${{ gitea.ref_name }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set name, image and environment
        run: |
          NAME=$(echo "$APP_NAME" | sed 's/\//-/g')
          sed -i "s|IMAGE_NAME|$DIGEST_IMAGE|g" k8s/deployment.yml
          sed -i "s|APP_NAME|$NAME|g" k8s/deployment.yml
          sed -i "s|ENVIRONMENT|$BRANCH|g" k8s/deployment.yml

      - name: Deploy to Cluster
        run: kubectl apply -f k8s

Key Features:

  • Dynamic Environment Selection - branch name → namespace
  • Immutable Image Tags - Git SHA for traceability
  • Self-Hosted Registry - no dependency on external services
  • Idempotent Deployment - kubectl apply is safe to re-run

3.6 Kubernetes Manifest Templates

Template k8s/deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
name: APP_NAME
namespace: ENVIRONMENT # ← Dynamic: development/staging/production
spec:
replicas: 1
selector:
matchLabels:
app: APP_NAME
template:
metadata:
labels:
app: APP_NAME
spec:
containers:
- name: APP_NAME
image: IMAGE_NAME # ← Dynamic: registry/repo:git-sha
ports:
- containerPort: 3000
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 250m
memory: 128Mi

Placeholders replaced at deployment time:

  • APP_NAME → application name from the repository
  • IMAGE_NAME → full image path with the Git SHA tag
  • ENVIRONMENT → branch name (development/staging/production)

Benefits:

  • Single Source of Truth - one template for three environments
  • Resource Limits - prevent resource exhaustion
  • Namespace Isolation - environments do not interfere with each other

3.7 Comparison: Multi-Environment Management

Comparison of managing multiple environments:

Aspect                 Monolith Server (Manual)                       Gitea + K3s (Automated)                  Impact
Setup environment      3 separate servers, or 3 paths on 1 server     3 namespaces in 1 cluster                🚀 Unified management
Deploy to dev          ssh dev-server → cd /app → git pull →          Push to development branch → auto!       ⚡ 90% faster
                       npm install → pm2 restart
Deploy to staging      Repeat the manual SSH steps on staging-server  PR development → staging → auto!         🎯 Consistent process
Deploy to production   Repeat the manual SSH steps (with anxiety 😰)   PR staging → production → auto!          ✅ Reduced risk
Hotfix deploy          SSH production → manual changes → pray 🙏       Push to production → auto build+deploy   ⚡ 5 min vs 30 min
Rollback               Manual: git reset → re-deploy every service    kubectl rollout undo -n production       🔄 1 command vs 10+
Environment parity     Drift inevitable (configs/versions differ)     Identical (only the namespace differs)   🎯 "Works on my machine" solved
Audit trail            Who deployed when? Check SSH logs/chat? 😅      Git history + PR + Actions logs          📊 Full traceability
Approval gate          Manual communication (email/chat/meeting)      Pull request + required approvals        🔒 Enforced governance
Concurrent deploys     Conflict risk (two people deploying at once)   Git merge conflict prevention            🛡️ Safe collaboration
Config management      Env variables scattered/hardcoded              ConfigMap/Secrets per namespace          🔐 Secure & centralized
Dependency updates     Update 3 times (dev, staging, prod)            Update once → auto propagate             ⏱️ 3x effort reduction

3.8 Real-World Scenario: Day-to-Day Operations

Scenario 1: Restart a service due to a memory leak

# ❌ Monolith way:
ssh dev-server "pm2 restart users-service"
ssh staging-server "pm2 restart users-service"
ssh prod-server "pm2 restart users-service"
# Total: 3 manual SSH commands, prone to typos

# ✅ K3s way:
kubectl rollout restart deployment/users -n development
kubectl rollout restart deployment/users -n staging
kubectl rollout restart deployment/users -n production
# Total: 3 commands, same syntax, scriptable

Scenario 2: Check logs to debug an error

# ❌ Monolith way:
ssh dev-server
cd /var/log
tail -f app.log | grep ERROR
# Repeat for staging & production

# ✅ K3s way:
kubectl logs -f deployment/users -n development | grep ERROR
kubectl logs -f deployment/users -n staging | grep ERROR
kubectl logs -f deployment/users -n production | grep ERROR
# No SSH needed, everything from one terminal

Scenario 3: Scale a service to handle a traffic spike

# ❌ Monolith way:
# 1. Set up load balancer config
# 2. Set up multiple PM2 instances
# 3. Manual coordination
# Time: 30-60 minutes

# ✅ K3s way:
kubectl scale deployment/orders --replicas=5 -n production
# Time: 10 seconds, auto load-balanced

Scenario 4: Check the health status of all services

# ❌ Monolith way:
ssh dev-server "pm2 status"
ssh staging-server "pm2 status"
ssh prod-server "pm2 status"
# Manual interpretation needed

# ✅ K3s way:
kubectl get pods -n development
kubectl get pods -n staging
kubectl get pods -n production
# Visual: Ready 1/1 = healthy, 0/1 = problem

3.9 Advantages for Research & Production

From an ease-of-use perspective:

Category            Monolith                     Gitea + K3s                Winner
Manage service      SSH hell, manual commands    Declarative, kubectl       ✅ K3s
Deployment          Manual, error-prone          Automated, consistent      ✅ K3s
Monitoring          Scattered logs, manual       Centralized, real-time     ✅ K3s
Rollback            Manual git revert+redeploy   One-command rollback       ✅ K3s
Multi-environment   3x manual effort             1 template, 3 namespaces   ✅ K3s
Learning curve      Low (familiar SSH/PM2)       Medium (learn kubectl)     ⚠️ Monolith

Conclusion: although the Monolith is easier to learn initially, Gitea + K3s is significantly easier for:

  • Day-to-day operations (manage, deploy, monitor)
  • Multi-environment management (consistency & efficiency)
  • Team collaboration (git-based workflow, audit trail)
  • Production reliability (auto-healing, rollback, isolation)

The initial learning-curve trade-off pays off as long-term operational excellence.


4. Comparison of the Two Deployment Environments

4.1 Environment A – Monolith Server (Baseline/Control)

Representation: this environment represents the status quo still widely used in industry, especially in small-to-medium companies.

Deployment characteristics:

Aspect               Detail
Deployment Method    Manual (SSH + Git commands)
Process Management   PM2 / systemd
Isolation            None (shared resources)
Port Management      Manual configuration
Startup Script       npm start per service
Monitoring           Manual log checking

Manual deployment workflow:

┌────────────────────────────────────────────────────────────┐
│ 1. SSH into the server                                     │
│    $ ssh user@server.com                                   │
└────────────┬───────────────────────────────────────────────┘
             │
┌────────────▼───────────────────────────────────────────────┐
│ 2. Pull the latest code from Git                           │
│    $ cd /app/users-service && git pull origin main         │
│    $ cd /app/products-service && git pull origin main      │
│    $ cd /app/orders-service && git pull origin main        │
└────────────┬───────────────────────────────────────────────┘
             │
┌────────────▼───────────────────────────────────────────────┐
│ 3. Install dependencies (if anything changed)              │
│    $ npm install                                           │
│    (repeat for all 3 services)                             │
└────────────┬───────────────────────────────────────────────┘
             │
┌────────────▼───────────────────────────────────────────────┐
│ 4. Restart the services one by one                         │
│    $ pm2 restart users-service                             │
│    $ pm2 restart products-service                          │
│    $ pm2 restart orders-service                            │
└────────────┬───────────────────────────────────────────────┘
             │
┌────────────▼───────────────────────────────────────────────┐
│ 5. Manual verification                                     │
│    $ curl http://localhost:3001/health                     │
│    $ curl http://localhost:3002/health                     │
│    $ curl http://localhost:3003/health                     │
└────────────────────────────────────────────────────────────┘

Estimated time: ~3-5 minutes per deployment

Potential error points:

  • ❌ Forgetting to git pull one of the services
  • ❌ Dependency version mismatch
  • ❌ Port already in use
  • ❌ Environment variables not set
  • ❌ Forgetting to restart a service
  • ❌ Human error when copy-pasting commands

4.2 Environment B – Cloud-Native K3s (Proposed/Treatment)

Representation: this environment is the modern solution proposed in this research, applying cloud-native best practices with open-source tools.

Deployment characteristics:

Aspect              Detail
Deployment Method   Automated CI/CD pipeline
Container Runtime   Docker + containerd
Orchestration       K3s (lightweight Kubernetes)
Isolation           Namespace + resource limits
Service Discovery   Kubernetes DNS (ClusterIP)
Load Balancing      Built-in Kubernetes Service
Self-Healing        Kubernetes Deployment controller
Monitoring          Built-in kubectl logs & describe

Component architecture:

┌──────────────────────────────────────────────────────────────┐
│                          DEVELOPER                           │
│                                                              │
│                    git push origin main                      │
└─────────────────────────────┬────────────────────────────────┘
                              │
┌─────────────────────────────▼────────────────────────────────┐
│                         GITEA (SCM)                          │
│  - Source code repository                                    │
│  - Webhook trigger to Gitea Runner                           │
└─────────────────────────────┬────────────────────────────────┘
                              │ Webhook event
┌─────────────────────────────▼────────────────────────────────┐
│                     GITEA RUNNER (CI/CD)                     │
│  Step 1: Checkout code                                       │
│  Step 2: Build Docker image                                  │
│  Step 3: Push to container registry                          │
│  Step 4: Deploy to K3s (kubectl apply)                       │
└─────────────────────────────┬────────────────────────────────┘
                              │ kubectl apply -f deployment.yaml
┌─────────────────────────────▼────────────────────────────────┐
│                         K3s CLUSTER                          │
│                                                              │
│  Namespace: default                                          │
│    ┌──────────┐   ┌──────────┐   ┌──────────┐                │
│    │   Pod    │   │   Pod    │   │   Pod    │                │
│    │  Users   │   │ Products │   │  Orders  │                │
│    └────┬─────┘   └────┬─────┘   └────┬─────┘                │
│         └──────────────┼──────────────┘                      │
│         Kubernetes Service (ClusterIP)                       │
│         - DNS: users-svc.default.svc                         │
│         - DNS: products-svc.default.svc                      │
│         - DNS: orders-svc.default.svc                        │
│                                                              │
│  Self-healing: if a pod crashes → auto restart               │
│  Rolling update: zero-downtime deployment                    │
└──────────────────────────────────────────────────────────────┘

Automated deployment workflow:

Developer: git push
   │
   ▼
Pipeline triggered (< 5 seconds)
   │
   ▼
Build image (15-30 seconds)
   │
   ▼
Push to registry (5-10 seconds)
   │
   ▼
Deploy to K3s (10-20 seconds)
   │
   ▼
Rolling update (auto, zero downtime)
   │
   ▼
Health check passed ✅

Estimated time: ~45-60 seconds (AUTOMATED)

Technical advantages:

  • ✅ Zero human intervention
  • ✅ Consistent on every deployment
  • ✅ Auto rollback if the health check fails
  • ✅ Resource isolation per pod
  • ✅ Automatic service discovery
  • ✅ Self-healing without manual restarts

4.3 Side-by-Side Comparison

Criterion            Monolith Manual              Cloud-Native K3s             Winner
Deployment time      3-5 minutes                  45-60 seconds                ✅ K3s
Human intervention   High (SSH, commands)         None (push only)             ✅ K3s
Consistency          High variance                Consistent                   ✅ K3s
Rollback             Manual (git revert+deploy)   kubectl rollout undo         ✅ K3s
Recovery             Manual restart               Auto self-healing            ✅ K3s
Error rate           ~15-20% (human error)        < 5% (automated)             ✅ K3s
Downtime             30-60 seconds (restart all)  0 seconds (rolling update)   ✅ K3s
Isolation            None (shared process)        Namespace + cgroup limits    ✅ K3s
Learning curve       Low                          Medium                       ⚠️ Monolith
Initial setup        Fast (~30 minutes)           Medium (~2 hours)            ⚠️ Monolith

Comparison conclusion: although Environment A (Monolith) is easier to set up initially, Environment B (K3s) is significantly superior in operational efficiency, consistency, and reliability, which are the crucial factors in a production environment.


5. Comparative Testing Plan

The tests are designed to prove the environments' advantages in a measurable, objective way, not based on opinion.

5.1 Deployment Time Testing

Objective: measure the time it takes for the application to become reachable after a code change.

Method:

  • Make a small code change
  • Measure the time from git push until the endpoint is reachable

Metric:

  • Deployment time (seconds)
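The push-to-reachable interval can be measured by polling the endpoint until it answers. A sketch with an injected probe follows; the function name, parameters, and defaults are our assumptions, not part of the thesis tooling.

```javascript
// Polls `check` (an async probe, e.g. an HTTP GET to /health that resolves
// to true on 200 OK) until it succeeds, and returns the elapsed time in ms.
async function waitUntilHealthy(check, { timeoutMs = 300000, intervalMs = 1000 } = {}) {
  const start = Date.now();
  for (;;) {
    if (await check()) return Date.now() - start;
    if (Date.now() - start >= timeoutMs) {
      throw new Error('service did not become healthy within the timeout');
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Started at the moment of git push, the returned value divided by 1000 gives the deployment time in seconds for either environment.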

5.2 Deployment Consistency

Objective: assess the stability of the deployment process.

Method:

  • Deploy repeatedly (at least 3 times)
  • Record each run as a success or a failure

Metric:

  • Deployment success rate (%)

5.3 Recovery Testing

Objective: measure the system's ability to handle failures.

Method:

  • Forcefully kill the application process or pod
  • Observe the recovery process

Metrics:

  • Downtime (seconds)
  • Automatic recovery (yes/no)
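Downtime can be derived from a series of timestamped health probes taken while the process or pod is killed and recovers. The sample shape below is our assumption for illustration.

```javascript
// Sums the gaps that follow unhealthy probes. Each sample is
// { t: <ms timestamp>, healthy: <boolean> }, assumed ordered by t.
function computeDowntimeMs(samples) {
  let downMs = 0;
  for (let i = 0; i < samples.length - 1; i++) {
    if (!samples[i].healthy) downMs += samples[i + 1].t - samples[i].t;
  }
  return downMs;
}
```

For probes at 0 s (up), 1 s (down), 2 s (down), and 3 s (up), this reports 2000 ms of downtime; a recovery run with no unhealthy probe reports 0.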

5.4 Resource Isolation Testing

Objective: assess resource isolation between services.

Method:

  • Put a heavy load on one service (e.g. the Orders service)
  • Observe the impact on the other services

Metrics:

  • CPU and memory usage
  • Impact on the other services
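The load step can be sketched as firing N concurrent requests through an injected send function and summarising latencies. The names below are illustrative assumptions, not the thesis load tool.

```javascript
// Fires `total` concurrent requests via `send` (e.g. a POST /orders call)
// and returns simple latency statistics for the whole batch.
async function generateLoad(send, total) {
  const latencies = await Promise.all(
    Array.from({ length: total }, async () => {
      const start = Date.now();
      await send();
      return Date.now() - start;
    })
  );
  return {
    count: latencies.length,
    maxMs: Math.max(...latencies),
    avgMs: latencies.reduce((sum, ms) => sum + ms, 0) / latencies.length,
  };
}
```

While this runs against Orders, the CPU and memory of Users and Products are observed (e.g. via kubectl top in Environment B or pm2 monit in Environment A) to judge isolation.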

5.5 Reproducibility Testing

Objective: measure how easily the environment can be replicated.

Method:

  • Record the setup steps from scratch
  • Compare the complexity of the two processes

Metrics:

  • Number of setup steps
  • Setup time
  • Potential for human error

6. Expected Results

Based on the testing plan, the proposed environment is expected to show the following advantages:

  • Faster, more consistent deployment
  • Lower risk of manual mistakes
  • Automatic recovery capability
  • Better resource isolation
  • An environment that is easier to replicate

7. Comparison Summary

Aspect        Monolith Server   Gitea + Runner + K3s
Deployment    Manual            Automated
Consistency   Low               High
Recovery      Manual            Automatic
Isolation     None              Kubernetes namespaces
Replication   Hard              Easy

8. Closing

This guide book is intended to help supervisors and examiners understand:

  • The reasons behind the architecture choices
  • The testing methods used
  • The evidence for the proposed environment's advantages

The documentation is also designed so that other engineers can understand and replicate the environment that was built.