Kubernetes Deployment Guide
Complete guide for deploying Cannabrands CRM to Kubernetes across dev, staging, and production environments.
Table of Contents
- Architecture Overview
- Prerequisites
- Initial Setup
- Deploying to Development
- Deploying to Staging
- Deploying to Production
- Managing Secrets
- Database Management
- Troubleshooting
- Maintenance Tasks
Architecture Overview
Directory Structure
k8s/
├── base/                          # Base Kubernetes manifests
│   ├── namespace.yaml             # Namespace definitions
│   ├── postgres-statefulset.yaml  # PostgreSQL database
│   ├── app-deployment.yaml        # Laravel application
│   ├── ingress.yaml               # Ingress routing
│   ├── configmap.yaml             # Configuration
│   ├── secrets.yaml.example       # Secret templates
│   └── kustomization.yaml         # Base kustomization
│
└── overlays/                      # Environment-specific configs
    ├── dev/                       # Development overrides
    │   ├── kustomization.yaml
    │   ├── deployment-patch.yaml
    │   ├── ingress-patch.yaml
    │   └── postgres-patch.yaml
    │
    ├── staging/                   # Staging overrides
    │   ├── kustomization.yaml
    │   ├── deployment-patch.yaml
    │   ├── ingress-patch.yaml
    │   └── postgres-patch.yaml
    │
    └── production/                # Production overrides
        ├── kustomization.yaml
        ├── deployment-patch.yaml
        ├── ingress-patch.yaml
        └── postgres-patch.yaml
Environment Comparison
| Resource | Development | Staging | Production |
|---|---|---|---|
| Namespace | development | staging | production |
| Domain | dev.cannabrands.app | staging.cannabrands.app | cannabrands.app |
| App Replicas | 1 | 2 | 3 |
| DB Replicas | 1 | 1 | 2 (HA) |
| DB Storage | 10Gi | 50Gi | 100Gi (SSD) |
| Docker Image | :dev | :staging | :latest |
| Debug Mode | Enabled | Disabled | Disabled |
| Log Level | debug | info | warning |
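The per-environment differences in this table are applied through Kustomize overlays. As a rough sketch of how that works (the actual files under k8s/overlays/ may differ in detail), a dev overlay kustomization could look like:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: development        # forces all base resources into the dev namespace
resources:
  - ../../base                # pulls in the shared base manifests
patches:
  - path: deployment-patch.yaml   # e.g. replicas: 1, APP_DEBUG=true
  - path: ingress-patch.yaml      # e.g. host: dev.cannabrands.app
  - path: postgres-patch.yaml     # e.g. 10Gi storage
images:
  - name: code.cannabrands.app/cannabrands/hub
    newTag: dev               # the :dev image from the table above
```

Each overlay swaps in its own namespace, patches, and image tag while reusing the same base.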
Prerequisites
Required Tools
# kubectl
brew install kubectl
# kustomize (built into kubectl 1.14+)
kubectl version --client
# Verify cluster access
kubectl get nodes
Required Cluster Components
Before deploying, ensure your Kubernetes cluster has:
- Ingress Controller (NGINX): kubectl get pods -n ingress-nginx
- Cert-Manager (for TLS certificates): kubectl get pods -n cert-manager
- Storage Class (for persistent volumes): kubectl get storageclass
If missing, install these first (see Cluster Setup below).
Initial Setup
1. Create Namespaces
kubectl apply -f k8s/base/namespace.yaml
Verify:
kubectl get namespaces | grep -E 'development|staging|production'
2. Configure Gitea Registry Access
Create Docker registry pull secrets for each environment:
# Development
kubectl create secret docker-registry gitea-registry \
--docker-server=code.cannabrands.app \
--docker-username=YOUR_USERNAME \
--docker-password=YOUR_GITEA_TOKEN \
--docker-email=your@email.com \
-n development
# Staging
kubectl create secret docker-registry gitea-registry \
--docker-server=code.cannabrands.app \
--docker-username=YOUR_USERNAME \
--docker-password=YOUR_GITEA_TOKEN \
--docker-email=your@email.com \
-n staging
# Production
kubectl create secret docker-registry gitea-registry \
--docker-server=code.cannabrands.app \
--docker-username=YOUR_USERNAME \
--docker-password=YOUR_GITEA_TOKEN \
--docker-email=your@email.com \
-n production
Get Gitea token:
- Login to https://code.cannabrands.app
- Settings → Applications → Generate New Token
- Scope: read:packages
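The three near-identical commands above can be collapsed into one loop. Below is a sketch with a DRY_RUN switch so the generated commands can be reviewed before they touch the cluster; the username, token, and email arguments are placeholders you supply:

```shell
#!/usr/bin/env bash
set -euo pipefail

# run: execute a command, or just print it when DRY_RUN=1.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# create_pull_secrets USERNAME TOKEN EMAIL
# Creates the same gitea-registry pull secret in all three namespaces.
create_pull_secrets() {
  local user="$1" token="$2" email="$3" ns
  for ns in development staging production; do
    run kubectl create secret docker-registry gitea-registry \
      --docker-server=code.cannabrands.app \
      --docker-username="$user" \
      --docker-password="$token" \
      --docker-email="$email" \
      -n "$ns"
  done
}
```

With DRY_RUN=1 the function only prints the kubectl commands, which is handy for a review pass before running it for real.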
3. Generate Application Secrets
For each environment, create secrets:
# Generate APP_KEY
php artisan key:generate --show
# Output: base64:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Generate strong database password
openssl rand -base64 32
Create secrets (example for development):
kubectl create secret generic postgres-secret \
--from-literal=username=cannabrands_dev \
--from-literal=password=$(openssl rand -base64 32) \
-n development
kubectl create secret generic app-secrets \
--from-literal=APP_KEY="base64:YOUR_GENERATED_KEY" \
--from-literal=DB_USERNAME=cannabrands_dev \
--from-literal=DB_PASSWORD="SAME_AS_POSTGRES_SECRET" \
--from-literal=MAIL_HOST=smtp.example.com \
--from-literal=MAIL_PORT=587 \
--from-literal=MAIL_USERNAME="" \
--from-literal=MAIL_PASSWORD="" \
--from-literal=MAIL_ENCRYPTION=tls \
-n development
Repeat for staging and production namespaces with different credentials.
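Note that the commands above generate the postgres password inline, which leaves DB_PASSWORD in app-secrets to be filled in by hand. A sketch that generates the password once and writes it to both secrets (secret and key names match those used above; /dev/urandom stands in for openssl here):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Generate a strong random password (44 base64 characters).
gen_password() { head -c 32 /dev/urandom | base64 | tr -d '\n'; }

# make_env_secrets NAMESPACE DB_USER APP_KEY
# Creates postgres-secret and app-secrets with ONE shared database
# password, so the app and the database always agree.
make_env_secrets() {
  local ns="$1" db_user="$2" app_key="$3" db_pass
  db_pass="$(gen_password)"

  kubectl create secret generic postgres-secret \
    --from-literal=username="$db_user" \
    --from-literal=password="$db_pass" \
    -n "$ns"

  kubectl create secret generic app-secrets \
    --from-literal=APP_KEY="$app_key" \
    --from-literal=DB_USERNAME="$db_user" \
    --from-literal=DB_PASSWORD="$db_pass" \
    -n "$ns"
}

# Example: make_env_secrets development cannabrands_dev "base64:YOUR_GENERATED_KEY"
```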
Deploying to Development
1. Deploy Development Environment
kubectl apply -k k8s/overlays/dev
2. Verify Deployment
# Check all resources
kubectl get all -n development
# Check pods are running
kubectl get pods -n development
# Expected output:
# NAME READY STATUS RESTARTS AGE
# app-xxxxxxxxxx-xxxxx 1/1 Running 0 2m
# postgres-0 1/1 Running 0 2m
# Check services
kubectl get svc -n development
# Check ingress
kubectl get ingress -n development
3. Check Application Logs
# Application logs
kubectl logs -f deployment/app -n development
# Database logs
kubectl logs -f statefulset/postgres -n development
4. Run Database Migrations
The init container runs migrations automatically, but if you need to run them manually:
kubectl exec -it deployment/app -n development -- php artisan migrate --force
5. Seed Development Database
kubectl exec -it deployment/app -n development -- php artisan db:seed --force
6. Access Development Site
Point DNS for dev.cannabrands.app to your cluster's ingress IP:
# Get ingress IP
kubectl get ingress -n development
Visit: https://dev.cannabrands.app
Deploying to Staging
1. Deploy Staging Environment
kubectl apply -k k8s/overlays/staging
2. Verify Deployment
kubectl get all -n staging
kubectl get pods -n staging
3. Restore Sanitized Production Data
Follow the database strategy guide in docs/DATABASE_STRATEGY.md:
# Run sanitization script (see docs/DATABASE_STRATEGY.md)
./scripts/sanitize-production-for-staging.sh
4. Run Migrations
kubectl exec -it deployment/app -n staging -- php artisan migrate --force
5. Access Staging Site
Point DNS for staging.cannabrands.app to cluster ingress IP.
Visit: https://staging.cannabrands.app
Deploying to Production
1. Pre-Deployment Checklist
- Staging is stable and tested
- All tests passing in CI
- Production secrets created and verified
- Database backup completed (if upgrading)
- Team notified of deployment
- Rollback plan prepared
2. Deploy Production Environment
kubectl apply -k k8s/overlays/production
3. Verify Deployment
kubectl get all -n production
kubectl get pods -n production
# Expected:
# app-xxxxxxxxxx-xxxxx 1/1 Running 0 2m (3 replicas)
# postgres-0 1/1 Running 0 2m
# postgres-1 1/1 Running 0 2m (HA replica)
4. Run Migrations (Carefully!)
ALWAYS backup first:
# Backup production database
# (use -i, not -it: a TTY would corrupt the piped dump output;
#  postgres-0 is the pod name — exec cannot target statefulset/postgres-0)
kubectl exec -i postgres-0 -n production -- \
  pg_dump -U cannabrands_prod cannabrands_production | \
  gzip > prod-backup-$(date +%Y%m%d-%H%M%S).sql.gz
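A small wrapper makes this step harder to get wrong: it timestamps the file and refuses to continue if the dump came out empty. This is a sketch; the pod name and credentials match the command above:

```shell
#!/usr/bin/env bash
set -euo pipefail

# backup_name PREFIX -> timestamped filename, e.g. prod-backup-20251023-142501.sql.gz
backup_name() { echo "$1-$(date +%Y%m%d-%H%M%S).sql.gz"; }

# Dump, compress, then sanity-check the file is non-empty before
# anyone proceeds to run migrations against this database.
backup_production() {
  local file
  file="$(backup_name prod-backup)"
  kubectl exec -i postgres-0 -n production -- \
    pg_dump -U cannabrands_prod cannabrands_production \
    | gzip > "$file"
  [ -s "$file" ] || { echo "backup failed: $file is empty" >&2; return 1; }
  echo "$file"
}
```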
Run migrations:
# Enable maintenance mode
kubectl exec -it deployment/app -n production -- php artisan down
# Run migrations
kubectl exec -it deployment/app -n production -- php artisan migrate --force
# Disable maintenance mode
kubectl exec -it deployment/app -n production -- php artisan up
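If the migration fails between artisan down and artisan up, the site is left stuck in maintenance mode. A sketch of a wrapper that always re-enables the app, with a DRY_RUN switch for previewing the commands first:

```shell
#!/usr/bin/env bash
set -euo pipefail

# run: execute a command, or just print it when DRY_RUN=1.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# Enable maintenance mode, migrate, then ALWAYS bring the app back up —
# even when the migration step fails — and propagate the failure code.
migrate_production() {
  run kubectl exec deployment/app -n production -- php artisan down
  local status=0
  run kubectl exec deployment/app -n production -- php artisan migrate --force || status=$?
  run kubectl exec deployment/app -n production -- php artisan up
  return "$status"
}
```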
5. Monitor Production
# Watch pod status
kubectl get pods -n production -w
# Check application logs
kubectl logs -f deployment/app -n production
# Check for errors
kubectl logs deployment/app -n production | grep -i error
6. Access Production Site
Point DNS for cannabrands.app to cluster ingress IP.
Visit: https://cannabrands.app
Managing Secrets
Viewing Secrets
# List secrets
kubectl get secrets -n development
# View secret (base64 encoded)
kubectl get secret app-secrets -n development -o yaml
# Decode secret value
kubectl get secret app-secrets -n development -o jsonpath='{.data.APP_KEY}' | base64 -d
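To decode every key of a secret at once rather than one jsonpath query at a time, kubectl's go-template output format provides a built-in base64decode function. A sketch:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Decode a single base64 value as stored in a Secret's .data map.
decode_secret_value() { printf '%s' "$1" | base64 -d; }

# dump_secret NAME NAMESPACE
# Print every key of a secret as decoded KEY=value lines.
dump_secret() {
  local name="$1" ns="$2"
  kubectl get secret "$name" -n "$ns" \
    -o go-template='{{range $k, $v := .data}}{{$k}}={{$v | base64decode}}{{"\n"}}{{end}}'
}

# Example: dump_secret app-secrets development
```

Treat the output as sensitive — it prints live credentials to the terminal.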
Updating Secrets
# Delete old secret
kubectl delete secret app-secrets -n development
# Create new secret
kubectl create secret generic app-secrets \
--from-literal=APP_KEY="new-key" \
--from-literal=DB_PASSWORD="new-password" \
-n development
# Restart pods to pick up new secrets
kubectl rollout restart deployment/app -n development
Rotating Database Passwords
- Create a new secret with the new password
- Update the PostgreSQL password:

kubectl exec -it postgres-0 -n development -- \
  psql -U postgres -c "ALTER USER cannabrands_dev PASSWORD 'new-password';"

- Restart application pods so they pick up the new secret
Database Management
Accessing PostgreSQL
# Development (postgres-0 is the pod name; exec cannot target statefulset/postgres-0)
kubectl exec -it postgres-0 -n development -- psql -U cannabrands_dev cannabrands_dev
# Production
kubectl exec -it postgres-0 -n production -- psql -U cannabrands_prod cannabrands_production
Database Backups
Manual backup:
kubectl exec -i postgres-0 -n production -- \
  pg_dump -U cannabrands_prod cannabrands_production | \
  gzip > backup-$(date +%Y%m%d).sql.gz
Restore from backup:
gunzip -c backup-20251023.sql.gz | \
  kubectl exec -i postgres-0 -n staging -- \
  psql -U cannabrands_staging cannabrands_staging
Resetting Development Database
kubectl exec -it deployment/app -n development -- \
php artisan migrate:fresh --seed --force
Troubleshooting
Pods Won't Start
# Check pod status
kubectl describe pod POD_NAME -n NAMESPACE
# Common issues:
# 1. Image pull errors → Check gitea-registry secret
# 2. CrashLoopBackOff → Check application logs
# 3. Pending → Check storage class exists
Database Connection Errors
# Verify database is running
kubectl get pods -n development | grep postgres
# Check database logs
kubectl logs postgres-0 -n development
# Verify secrets match
kubectl get secret postgres-secret -n development -o yaml
kubectl get secret app-secrets -n development -o yaml
Ingress Not Working
# Check ingress status
kubectl describe ingress app-ingress -n development
# Verify cert-manager created certificate
kubectl get certificate -n development
# Check NGINX ingress logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller
Application Errors
# View recent logs
kubectl logs deployment/app -n development --tail=100
# Follow logs in real-time
kubectl logs -f deployment/app -n development
# Get shell access to debug
kubectl exec -it deployment/app -n development -- bash
Maintenance Tasks
Update Application Image
# Development (automatic via CI)
# CI pushes code.cannabrands.app/cannabrands/hub:dev
kubectl rollout restart deployment/app -n development
# Production (manual)
# Tag new release: git tag 2025.10.1
# CI builds code.cannabrands.app/cannabrands/hub:latest
kubectl rollout restart deployment/app -n production
# Monitor rollout
kubectl rollout status deployment/app -n production
Scale Application
# Scale up
kubectl scale deployment app --replicas=5 -n production
# Scale down
kubectl scale deployment app --replicas=2 -n production
# Autoscaling (optional)
kubectl autoscale deployment app --cpu-percent=70 --min=2 --max=10 -n production
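The kubectl autoscale one-liner creates a HorizontalPodAutoscaler imperatively; the same configuration can be kept in Git as a manifest. A sketch of the equivalent object using the autoscaling/v2 schema:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # matches --cpu-percent=70 above
```

Note that the HPA requires the metrics server (kubectl top must work) and that manual kubectl scale changes will be overridden while it is active.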
View Resource Usage
# Pod resource usage
kubectl top pods -n production
# Node resource usage
kubectl top nodes
Clean Up Old Resources
# Delete completed jobs
kubectl delete jobs --field-selector status.successful=1 -n development
# Delete old replica sets
# (note: --all also removes the active ones; the Deployment recreates
#  them, restarting pods)
kubectl delete replicaset --all -n development
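A safer alternative to --all is to treat only replica sets with zero desired replicas as deletion candidates. A sketch that filters plain kubectl get rs output (column 2 is DESIRED):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Read `kubectl get rs --no-headers` output on stdin and print the
# names of replica sets whose DESIRED count is 0, i.e. superseded
# revisions that are safe to delete.
old_replicasets() { awk '$2 == 0 { print $1 }'; }

# Example:
#   kubectl get rs -n development --no-headers \
#     | old_replicasets \
#     | xargs -r kubectl delete rs -n development
```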
Cluster Setup
If you need to install required cluster components:
Install NGINX Ingress Controller
# For production, pin a released controller version instead of main
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
Install Cert-Manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
# Create Let's Encrypt cluster issuer
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: admin@cannabrands.com
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
class: nginx
EOF
Next Steps
- Set up monitoring: Install Prometheus + Grafana for metrics
- Set up logging: Install ELK stack or Loki for centralized logs
- Automate backups: Create CronJobs for daily database backups
- Configure alerts: Set up alerts for pod failures, high resource usage
- Implement GitOps: Consider ArgoCD or FluxCD for automated deployments
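For the backup automation item, a nightly CronJob is the usual shape. This is a sketch only — the image tag, schedule, database service name (postgres), and PVC name are assumptions to adapt; in practice the dump would usually be shipped to object storage rather than a local volume:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: production
spec:
  schedule: "0 3 * * *"            # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: postgres:16   # illustrative tag — match your server version
              env:
                - name: PGUSER     # pg_dump reads PGUSER/PGPASSWORD from env
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secret
                      key: username
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: postgres-secret
                      key: password
              command:
                - sh
                - -c
                - pg_dump -h postgres cannabrands_production | gzip > /backups/backup-$(date +%Y%m%d).sql.gz
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: postgres-backups   # assumed pre-existing PVC
```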
Last Updated: October 2025
Maintained By: DevOps Team
For questions or issues, refer to docs/DATABASE_STRATEGY.md and docs/DEPLOYMENT_WORKFLOW.md.