Deploy: SLA monitor & dispute indexer (VPS + k3s, 1 pod each)

In short: k3s on a VPS, a single namespace, one Deployment per service, managed Postgres in the cloud (separate databases). Nothing is stored locally; all state lives in the databases and metrics in Prometheus.

Prerequisites

  • A VPS with Ubuntu 22.04+, 2 vCPU / 4 GB RAM or more.
  • k3s without Traefik (use the built-in Service/Ingress as you prefer).
  • Two managed Postgres databases (sla_db, dispute_db); the security group must allow access from the VPS (see the connectivity check below).
  • RPC access via the rpc-gateway-rotator URL.
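
Before deploying, it is worth confirming the VPS can actually reach both databases. A quick sanity check (assumes psql is installed on the VPS; use the same DSNs as in the secrets below):

psql "postgres://user:pass@host:5432/sla_db?sslmode=require" -c "SELECT 1"
psql "postgres://user:pass@host:5432/dispute_db?sslmode=require" -c "SELECT 1"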

Install k3s (example)

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
kubectl create ns escrow-obs
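
A quick check that the node is Ready and the namespace exists before going further:

kubectl get nodes          # node STATUS should be Ready
kubectl get ns escrow-obs  # namespace created above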

Secrets (substitute your own URIs)

kubectl -n escrow-obs create secret generic sla-secrets \
  --from-literal=DB_DSN=postgres://user:pass@host:5432/sla_db?sslmode=require \
  --from-literal=RPC_URL=https://rpc-gw.internal \
  --from-literal=WS_URL=wss://rpc-gw.internal/ws

kubectl -n escrow-obs create secret generic dispute-secrets \
  --from-literal=DB_DSN=postgres://user:pass@host:5432/dispute_db?sslmode=require \
  --from-literal=RPC_URL=https://rpc-gw.internal \
  --from-literal=WS_URL=wss://rpc-gw.internal/ws
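
To verify both secrets landed with the expected keys (only key names are listed, values stay encoded):

kubectl -n escrow-obs get secret sla-secrets dispute-secrets
kubectl -n escrow-obs describe secret sla-secrets | grep -E 'DB_DSN|RPC_URL|WS_URL'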

Minimal manifest (both services)

apiVersion: apps/v1
kind: Deployment
metadata: {name: sla-monitor, namespace: escrow-obs}
spec:
  replicas: 1
  selector: {matchLabels: {app: sla-monitor}}
  template:
    metadata: {labels: {app: sla-monitor}}
    spec:
      containers:
        - name: app
          image: registry/sla-monitor:TAG
          envFrom: [{secretRef: {name: sla-secrets}}]
          ports: [{containerPort: 8080, name: http}]
          readinessProbe: {httpGet: {path: /ready, port: http}, periodSeconds: 10}
          livenessProbe: {httpGet: {path: /health, port: http}, periodSeconds: 20}
          resources: {requests: {cpu: "200m", memory: "256Mi"}, limits: {cpu: "500m", memory: "512Mi"}}
---
apiVersion: v1
kind: Service
metadata: {name: sla-monitor, namespace: escrow-obs}
spec:
  selector: {app: sla-monitor}
  ports: [{port: 80, targetPort: http}]
---
apiVersion: apps/v1
kind: Deployment
metadata: {name: dispute-indexer, namespace: escrow-obs}
spec:
  replicas: 1
  selector: {matchLabels: {app: dispute-indexer}}
  template:
    metadata: {labels: {app: dispute-indexer}}
    spec:
      containers:
        - name: app
          image: registry/dispute-indexer:TAG
          envFrom: [{secretRef: {name: dispute-secrets}}]
          ports: [{containerPort: 8080, name: http}]
          readinessProbe: {httpGet: {path: /ready, port: http}, periodSeconds: 10}
          livenessProbe: {httpGet: {path: /health, port: http}, periodSeconds: 20}
          resources: {requests: {cpu: "200m", memory: "256Mi"}, limits: {cpu: "500m", memory: "512Mi"}}
---
apiVersion: v1
kind: Service
metadata: {name: dispute-indexer, namespace: escrow-obs}
spec:
  selector: {app: dispute-indexer}
  ports: [{port: 80, targetPort: http}]
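
Assuming the manifest above is saved as deploy.yaml (the filename is arbitrary), apply it and wait for both pods to become Ready:

kubectl apply -f deploy.yaml
kubectl -n escrow-obs rollout status deploy/sla-monitor
kubectl -n escrow-obs rollout status deploy/dispute-indexer
kubectl -n escrow-obs get pods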

Exposing the services

  • For private access, kubectl port-forward or an internal LB is enough.
  • For external access: k3s ingress + TLS (Caddy/NGINX) → the Services above (see the example manifest below).
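
A sketch of the external option, assuming ingress-nginx is installed (Traefik is disabled) and that sla.example.com / disputes.example.com are placeholder hostnames; TLS termination (Caddy/NGINX/cert-manager) is set up separately:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata: {name: escrow-obs, namespace: escrow-obs}
spec:
  ingressClassName: nginx
  rules:
    - host: sla.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend: {service: {name: sla-monitor, port: {number: 80}}}
    - host: disputes.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend: {service: {name: dispute-indexer, port: {number: 80}}}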

Observability

  • Scrape /metrics with Prometheus; alert on ingest_lag_blocks, alert_latency_seconds, rpc_errors_total.
  • /recent-violations and /events must respond after startup (smoke test; see the sketch below).
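
A minimal smoke test via port-forward, assuming /recent-violations is served by sla-monitor and /events by dispute-indexer:

kubectl -n escrow-obs port-forward svc/sla-monitor 18080:80 &
kubectl -n escrow-obs port-forward svc/dispute-indexer 18081:80 &
sleep 2
curl -sf http://localhost:18080/recent-violations >/dev/null && echo "sla-monitor OK"
curl -sf http://localhost:18081/events >/dev/null && echo "dispute-indexer OK"
curl -sf http://localhost:18080/metrics | grep -E 'ingest_lag_blocks|rpc_errors_total'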

Updating

kubectl -n escrow-obs set image deploy/sla-monitor app=registry/sla-monitor:NEW
kubectl -n escrow-obs set image deploy/dispute-indexer app=registry/dispute-indexer:NEW
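
To watch the rollout and roll back if the new image misbehaves:

kubectl -n escrow-obs rollout status deploy/sla-monitor
kubectl -n escrow-obs rollout status deploy/dispute-indexer
kubectl -n escrow-obs rollout undo deploy/sla-monitor   # roll back if needed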