Compare commits


1 Commit

Author SHA1 Message Date
kelly
723a961d5e docs: Consolidate and organize project documentation
## Summary
- Moved all core docs from root to /docs/ for better organization
- Consolidated analytics documentation (4 files → organized structure)
- Archived legacy setup guides and one-off reports
- Streamlined k8s/README.md to operational essentials only

## Changes Made
- Root MD files reduced from 16 → 1 (CLAUDE.md only)
- Created /docs/guides/ for how-to documentation
- Created /docs/archive/ for historical/setup documentation
- Updated CLAUDE.md with proper /docs/ references
- Merged overlapping content (notifications, executive access)
- Slimmed down k8s/README.md (610 lines → 121 lines)

## File Movements
**To /docs/ root:**
- README.md, CHANGELOG.md, CONTRIBUTING.md
- SYSTEM_ARCHITECTURE.md, claude.kelly.md, CLAUDE.local.md

**To /docs/guides/:**
- ANALYTICS_IMPLEMENTATION.md → analytics-implementation.md
- ANALYTICS_QUICK_START.md → analytics-quick-start.md
- ANALYTICS_TRACKING_EXAMPLES.md → analytics-examples.md

**To /docs/archive/:**
- EXECUTIVE_ACCESS_GUIDE.md → archive/guides/
- PUSH_NOTIFICATIONS_SETUP.md → archive/guides/
- QUICK-HANDOFF-CLAUDE-CODE.md → archive/guides/
- MISSING_FILES_REPORT.md → archive/reports/
- docs/deployment/KUBERNETES.md → archive/k8s-setup/

**Deleted (duplicate):**
- 01-analytics-system.md (exact copy of docs/supplements/analytics.md)

## Test Plan
- [x] All file references in CLAUDE.md updated
- [x] Cross-references between docs updated
- [x] No broken internal links
- [x] Clean root directory (only CLAUDE.md remains)
2025-11-20 08:04:50 -07:00
19 changed files with 211 additions and 2264 deletions

File diff suppressed because it is too large.


@@ -2,7 +2,7 @@
## 📌 IMPORTANT: Check Personal Context Files
-**ALWAYS read `claude.kelly.md` first** - Contains personal preferences and session tracking workflow
+**ALWAYS read `docs/claude.kelly.md` first** - Contains personal preferences and session tracking workflow
---
@@ -150,7 +150,7 @@ Product::where('is_active', true)->get(); // No business_id filter!
## Architecture Docs (Read When Needed)
**🎯 START HERE:**
-- **`SYSTEM_ARCHITECTURE.md`** - Complete system guide covering ALL architectural patterns, security rules, modules, departments, performance, and development workflow
+- **`docs/SYSTEM_ARCHITECTURE.md`** - Complete system guide covering ALL architectural patterns, security rules, modules, departments, performance, and development workflow
**Deep Dives (when needed):**
- `docs/supplements/departments.md` - Department system, permissions, access control
@@ -166,10 +166,21 @@ Product::where('is_active', true)->get(); // No business_id filter!
- `docs/architecture/URL_STRUCTURE.md` - **READ BEFORE** routing changes
- `docs/architecture/DATABASE.md` - **READ BEFORE** migrations
- `docs/architecture/API.md` - API endpoints and contracts
+- `docs/architecture/MEDIA_STORAGE.md` - MinIO storage architecture and paths
-**Other:**
-- `VERSIONING_AND_AUDITING.md` - Quicksave and Laravel Auditing
-- `CONTRIBUTING.md` - Detailed git workflow
+**Features:**
+- `docs/features/NOTIFICATIONS.md` - Notification system and web push setup
+- `docs/features/PARENT_COMPANY_SUBDIVISIONS.md` - Multi-division organizations
+**How-To Guides:**
+- `docs/guides/analytics-quick-start.md` - Analytics system quick start
+- `docs/guides/analytics-examples.md` - Analytics tracking code examples
+**Project Info:**
+- `docs/README.md` - Project overview
+- `docs/CHANGELOG.md` - Version history
+- `docs/CONTRIBUTING.md` - Detailed git workflow
+- `docs/VERSIONING_AND_AUDITING.md` - Quicksave and Laravel Auditing
---


@@ -192,6 +192,139 @@ Sends sample notifications to specified user for testing the complete system.
---
**Last Updated**: August 2025
**Version**: 1.0
**Dependencies**: Laravel 10+, Alpine.js, DaisyUI, Nexus Theme
## 📲 **Web Push Notifications & Laravel Horizon Setup**
> **Note:** This section covers browser push notifications for high-intent buyer signals as part of the Premium Buyer Analytics module.
### **Prerequisites**
- HTTPS (production/staging) or localhost (development)
- Browser that supports Web Push API (Chrome, Firefox, Edge, Safari 16+)
### **Installation**
**1. Install Dependencies:**
```bash
composer update
```
Installs:
- `laravel-notification-channels/webpush: ^10.2`
- `laravel/horizon: ^5.39`
**2. Install Horizon Assets:**
```bash
php artisan horizon:install
```
**3. Run Migrations:**
```bash
php artisan migrate
```
Creates `push_subscriptions` table.
**4. Generate VAPID Keys:**
```bash
php artisan webpush:vapid
```
⚠️ **IMPORTANT**: Never commit VAPID keys to git. Generate different keys for each environment.
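The generated keys are read from `.env`. A sketch of the expected entries, assuming the default config of `laravel-notification-channels/webpush` (key values elided — never paste real keys into docs):

```ini
VAPID_PUBLIC_KEY=
VAPID_PRIVATE_KEY=
VAPID_SUBJECT=mailto:admin@example.com
```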
### **Configuration**
Add to `bootstrap/providers.php`:
```php
return [
App\Providers\AppServiceProvider::class,
App\Providers\HorizonServiceProvider::class,
];
```
### **Local Development**
**Start Horizon:**
```bash
php artisan horizon
```
**Access Dashboards:**
- Horizon: `/horizon`
- Analytics: `/s/{business}/buyer-intelligence/buyers`
### **Testing**
1. Grant notification permission in browser
2. Trigger test event:
```bash
php artisan db:seed --class=PushNotificationTestDataSeeder
```
3. Check Horizon dashboard for job processing
### **Production Deployment**
**Supervisor Configuration** (`/etc/supervisor/conf.d/horizon.conf`):
```ini
[program:horizon]
process_name=%(program_name)s
command=php /path/to/artisan horizon
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/path/to/storage/logs/horizon.log
stopwaitsecs=3600
```
Then:
```bash
sudo supervisorctl reread && sudo supervisorctl update && sudo supervisorctl start horizon
```
### **Push Notification Triggers**
| Trigger | Threshold | Signal Type |
|---------|-----------|-------------|
| Repeated product views | 3+ views | `repeated_view` |
| High engagement score | ≥ 60% | `high_engagement` |
| Spec download | Any | `spec_download` |
| Contact button click | Any | `contact_click` |
All triggers require `has_analytics = true` on the business.
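The table above can be read as a pure function. This is a hypothetical sketch of the trigger rules, not the app's actual job code — names and the all-signals-fire-independently assumption are illustrative:

```python
# Hypothetical sketch of the push-notification trigger table above.
def detect_signals(views: int, engagement_score: float,
                   spec_download: bool, contact_click: bool,
                   has_analytics: bool) -> list[str]:
    """Return the high-intent signal types a buyer's activity would fire."""
    if not has_analytics:          # module flag gates every trigger
        return []
    signals = []
    if views >= 3:                 # repeated product views (3+ views)
        signals.append("repeated_view")
    if engagement_score >= 0.60:   # engagement score >= 60%
        signals.append("high_engagement")
    if spec_download:              # any spec download
        signals.append("spec_download")
    if contact_click:              # any contact button click
        signals.append("contact_click")
    return signals
```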
### **Architecture Flow**
```
User Action → Analytics Tracking → CalculateEngagementScore Job
→ HighIntentBuyerDetected Event → SendHighIntentSignalPushNotification
→ Horizon Queue → Browser Push Notification
```
### **Troubleshooting Web Push**
**Notifications Not Sending:**
1. Check Horizon is running: `/horizon`
2. Check Redis: `redis-cli ping`
3. Check logs: `tail -f storage/logs/laravel.log`
**VAPID Key Issues:**
```bash
php artisan webpush:vapid --force
php artisan horizon:terminate && php artisan horizon
```
**Browser Not Receiving:**
1. Check browser permissions (Allow notifications)
2. Verify HTTPS or localhost
3. Check `push_subscriptions` table
4. Check browser console for errors
### **Security Notes**
- ✅ Push notifications only sent to authorized users
- ✅ VAPID keys are environment-specific
- ✅ All triggers respect `has_analytics` module flag
- ⚠️ Never commit VAPID keys or run test seeders in production
---
**Last Updated**: November 2025
**Version**: 2.0 (Added Web Push)
**Dependencies**: Laravel 10+, Alpine.js, DaisyUI, Nexus Theme, Horizon, Web Push


@@ -4,6 +4,8 @@
The parent company and subdivision system enables large cannabis organizations to manage multiple business entities under a single corporate umbrella. This architecture supports:
> **📚 Additional Reference:** For detailed operational examples and security implementation details, see [EXECUTIVE_ACCESS_GUIDE.md](../archive/guides/EXECUTIVE_ACCESS_GUIDE.md)
- **Parent Companies**: Top-level corporate entities (e.g., "Canopy AZ Group")
- **Divisions**: Operational subdivisions with their own branding and operations (e.g., "Hash Factory AZ", "Leopard AZ")
- **Department-Based Access Control**: Users can be assigned to specific departments within divisions


@@ -1,609 +1,120 @@
-# Kubernetes Deployment Guide
+# Kubernetes Deployment - Operational Guide
-Complete guide for deploying Cannabrands CRM to Kubernetes across dev, staging, and production environments.
+Quick reference for deploying and managing the Cannabrands application on Kubernetes.
## Table of Contents
1. [Architecture Overview](#architecture-overview)
2. [Prerequisites](#prerequisites)
3. [Initial Setup](#initial-setup)
4. [Deploying to Development](#deploying-to-development)
5. [Deploying to Staging](#deploying-to-staging)
6. [Deploying to Production](#deploying-to-production)
7. [Managing Secrets](#managing-secrets)
8. [Database Management](#database-management)
9. [Troubleshooting](#troubleshooting)
10. [Maintenance Tasks](#maintenance-tasks)
> **For detailed K8s architecture and planning:** See `docs/archive/k8s-setup/KUBERNETES.md`
---
## Architecture Overview
### Directory Structure
## Directory Structure
```
k8s/
├── base/                          # Base Kubernetes manifests
│   ├── namespace.yaml             # Namespace definitions
│   ├── postgres-statefulset.yaml  # PostgreSQL database
│   ├── app-deployment.yaml        # Laravel application
│   ├── ingress.yaml               # Ingress routing
│   ├── configmap.yaml             # Configuration
│   ├── secrets.yaml.example       # Secret templates
│   └── kustomization.yaml         # Base kustomization
└── overlays/                      # Environment-specific configs
    ├── dev/                       # Development overrides
    │   ├── kustomization.yaml
    │   ├── deployment-patch.yaml
    │   ├── ingress-patch.yaml
    │   └── postgres-patch.yaml
    ├── staging/                   # Staging overrides
    │   ├── kustomization.yaml
    │   ├── deployment-patch.yaml
    │   ├── ingress-patch.yaml
    │   └── postgres-patch.yaml
    └── production/                # Production overrides
        ├── kustomization.yaml
        ├── deployment-patch.yaml
        ├── ingress-patch.yaml
        └── postgres-patch.yaml

k8s/
├── base/                  # Base manifests
│   ├── app-deployment.yaml
│   ├── postgres-statefulset.yaml
│   ├── redis-statefulset.yaml
│   ├── reverb-deployment.yaml
│   └── ingress.yaml
├── overlays/              # Environment-specific
│   ├── dev/
│   ├── staging/
│   └── production/
└── local/                 # Local dev configs
```
### Environment Comparison
| Resource | Development | Staging | Production |
|----------|------------|---------|------------|
| **Namespace** | `development` | `staging` | `production` |
| **Domain** | dev.cannabrands.app | staging.cannabrands.app | cannabrands.app |
| **App Replicas** | 1 | 2 | 3 |
| **DB Replicas** | 1 | 1 | 2 (HA) |
| **DB Storage** | 10Gi | 50Gi | 100Gi (SSD) |
| **Docker Image** | `:dev` | `:staging` | `:latest` |
| **Debug Mode** | Enabled | Disabled | Disabled |
| **Log Level** | debug | info | warning |
---
## Prerequisites
### Required Tools
```bash
# kubectl
brew install kubectl
# kustomize (built into kubectl 1.14+)
kubectl version --client
# Verify cluster access
kubectl get nodes
```
### Required Cluster Components
Before deploying, ensure your Kubernetes cluster has:
1. **Ingress Controller** (NGINX)
```bash
kubectl get pods -n ingress-nginx
```
2. **Cert-Manager** (for TLS certificates)
```bash
kubectl get pods -n cert-manager
```
3. **Storage Class** (for persistent volumes)
```bash
kubectl get storageclass
```
If missing, install these first (see [Cluster Setup](#cluster-setup) below).
---
## Initial Setup
### 1. Create Namespaces
```bash
kubectl apply -f k8s/base/namespace.yaml
```
Verify:
```bash
kubectl get namespaces | grep -E 'development|staging|production'
```
### 2. Configure Gitea Registry Access
Create Docker registry pull secrets for each environment:
```bash
# Development
kubectl create secret docker-registry gitea-registry \
--docker-server=code.cannabrands.app \
--docker-username=YOUR_USERNAME \
--docker-password=YOUR_GITEA_TOKEN \
--docker-email=your@email.com \
-n development
# Staging
kubectl create secret docker-registry gitea-registry \
--docker-server=code.cannabrands.app \
--docker-username=YOUR_USERNAME \
--docker-password=YOUR_GITEA_TOKEN \
--docker-email=your@email.com \
-n staging
# Production
kubectl create secret docker-registry gitea-registry \
--docker-server=code.cannabrands.app \
--docker-username=YOUR_USERNAME \
--docker-password=YOUR_GITEA_TOKEN \
--docker-email=your@email.com \
-n production
```
**Get Gitea token:**
1. Login to https://code.cannabrands.app
2. Settings → Applications → Generate New Token
3. Scope: `read:packages`
### 3. Generate Application Secrets
For **each environment**, create secrets:
```bash
# Generate APP_KEY
php artisan key:generate --show
# Output: base64:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Generate strong database password
openssl rand -base64 32
```
Create secrets (example for development):
```bash
kubectl create secret generic postgres-secret \
--from-literal=username=cannabrands_dev \
--from-literal=password=$(openssl rand -base64 32) \
-n development
kubectl create secret generic app-secrets \
--from-literal=APP_KEY="base64:YOUR_GENERATED_KEY" \
--from-literal=DB_USERNAME=cannabrands_dev \
--from-literal=DB_PASSWORD="SAME_AS_POSTGRES_SECRET" \
--from-literal=MAIL_HOST=smtp.example.com \
--from-literal=MAIL_PORT=587 \
--from-literal=MAIL_USERNAME="" \
--from-literal=MAIL_PASSWORD="" \
--from-literal=MAIL_ENCRYPTION=tls \
-n development
```
**Repeat for `staging` and `production` namespaces** with different credentials.
---
## Deploying to Development
### 1. Deploy Development Environment
## Quick Deploy
### Development
```bash
kubectl apply -k k8s/overlays/dev
```
### 2. Verify Deployment
```bash
# Check all resources
kubectl get all -n development
# Check pods are running
kubectl get pods -n development
# Expected output:
# NAME READY STATUS RESTARTS AGE
# app-xxxxxxxxxx-xxxxx 1/1 Running 0 2m
# postgres-0 1/1 Running 0 2m
# Check services
kubectl get svc -n development
# Check ingress
kubectl get ingress -n development
```
### 3. Check Application Logs
```bash
# Application logs
kubectl logs -f deployment/app -n development
# Database logs
kubectl logs -f statefulset/postgres -n development
```
### 4. Run Database Migrations
The init container runs migrations automatically, but if you need to run them manually:
```bash
kubectl exec -it deployment/app -n development -- php artisan migrate --force
```
### 5. Seed Development Database
```bash
kubectl exec -it deployment/app -n development -- php artisan db:seed --force
```
### 6. Access Development Site
Point DNS for `dev.cannabrands.app` to your cluster's ingress IP:
```bash
# Get ingress IP
kubectl get ingress -n development
```
Visit: https://dev.cannabrands.app
---
## Deploying to Staging
### 1. Deploy Staging Environment
### Staging
```bash
kubectl apply -k k8s/overlays/staging
```
### 2. Verify Deployment
```bash
kubectl get all -n staging
kubectl get pods -n staging
```
### 3. Restore Sanitized Production Data
Follow the database strategy guide in `docs/DATABASE_STRATEGY.md`:
```bash
# Run sanitization script (see docs/DATABASE_STRATEGY.md)
./scripts/sanitize-production-for-staging.sh
```
### 4. Run Migrations
```bash
kubectl exec -it deployment/app -n staging -- php artisan migrate --force
```
### 5. Access Staging Site
Point DNS for `staging.cannabrands.app` to cluster ingress IP.
Visit: https://staging.cannabrands.app
---
## Deploying to Production
### 1. Pre-Deployment Checklist
- [ ] Staging is stable and tested
- [ ] All tests passing in CI
- [ ] Production secrets created and verified
- [ ] Database backup completed (if upgrading)
- [ ] Team notified of deployment
- [ ] Rollback plan prepared
### 2. Deploy Production Environment
### Production
```bash
kubectl apply -k k8s/overlays/production
```
### 3. Verify Deployment
---
## Common Operations
### Check Deployment Status
```bash
kubectl get all -n production
kubectl get pods -n production
# Expected:
# app-xxxxxxxxxx-xxxxx 1/1 Running 0 2m (3 replicas)
# postgres-0 1/1 Running 0 2m
# postgres-1 1/1 Running 0 2m (HA replica)
kubectl get services -n production
kubectl get ingress -n production
```
### 4. Run Migrations (Carefully!)
**ALWAYS backup first:**
```bash
# Backup production database (no -t: a TTY can corrupt piped dump output)
kubectl exec postgres-0 -n production -- \
  pg_dump -U cannabrands_prod cannabrands_production \
  | gzip > prod-backup-$(date +%Y%m%d-%H%M%S).sql.gz
```
### View Logs
```bash
kubectl logs -f deployment/cannabrands-app -n production
kubectl logs -f statefulset/postgres -n production
```
**Run migrations:**
```bash
# Enable maintenance mode
kubectl exec -it deployment/app -n production -- php artisan down
# Run migrations
kubectl exec -it deployment/app -n production -- php artisan migrate --force
# Disable maintenance mode
kubectl exec -it deployment/app -n production -- php artisan up
```
### Restart Deployment
```bash
kubectl rollout restart deployment/cannabrands-app -n production
```
### 5. Monitor Production
```bash
# Watch pod status
kubectl get pods -n production -w
# Check application logs
kubectl logs -f deployment/app -n production
# Check for errors
kubectl logs deployment/app -n production | grep -i error
```
### Scale Application
```bash
kubectl scale deployment/cannabrands-app --replicas=5 -n production
```
### 6. Access Production Site
Point DNS for `cannabrands.app` to cluster ingress IP.
Visit: https://cannabrands.app
### Database Access
```bash
kubectl exec -it postgres-0 -n production -- psql -U cannabrands
```
---
## Managing Secrets
## Environment Comparison
### Viewing Secrets
```bash
# List secrets
kubectl get secrets -n development
# View secret (base64 encoded)
kubectl get secret app-secrets -n development -o yaml
# Decode secret value
kubectl get secret app-secrets -n development -o jsonpath='{.data.APP_KEY}' | base64 -d
```
### Updating Secrets
```bash
# Delete old secret
kubectl delete secret app-secrets -n development
# Create new secret
kubectl create secret generic app-secrets \
--from-literal=APP_KEY="new-key" \
--from-literal=DB_PASSWORD="new-password" \
-n development
# Restart pods to pick up new secrets
kubectl rollout restart deployment/app -n development
```
### Rotating Database Passwords
1. Create new secret with new password
2. Update PostgreSQL password:
```bash
kubectl exec -it postgres-0 -n development -- \
  psql -U postgres -c "ALTER USER cannabrands_dev PASSWORD 'new-password';"
```
3. Restart application pods
---
## Database Management
### Accessing PostgreSQL
```bash
# Development
kubectl exec -it postgres-0 -n development -- psql -U cannabrands_dev cannabrands_dev
# Production
kubectl exec -it postgres-0 -n production -- psql -U cannabrands_prod cannabrands_production
```
### Database Backups
**Manual backup:**
```bash
kubectl exec postgres-0 -n production -- \
  pg_dump -U cannabrands_prod cannabrands_production \
  | gzip > backup-$(date +%Y%m%d).sql.gz
```
**Restore from backup:**
```bash
gunzip -c backup-20251023.sql.gz | \
  kubectl exec -i postgres-0 -n staging -- \
  psql -U cannabrands_staging cannabrands_staging
```
### Resetting Development Database
```bash
kubectl exec -it deployment/app -n development -- \
php artisan migrate:fresh --seed --force
```
| Resource | Dev | Staging | Production |
|----------|-----|---------|------------|
| **Namespace** | `development` | `staging` | `production` |
| **Domain** | dev.cannabrands.app | staging.cannabrands.app | cannabrands.app |
| **App Replicas** | 1 | 2 | 3+ |
| **DB Replicas** | 1 | 1 | 2 (HA) |
---
## Troubleshooting
### Pods Won't Start
```bash
# Check pod status
kubectl describe pod POD_NAME -n NAMESPACE
# Common issues:
# 1. Image pull errors → Check gitea-registry secret
# 2. CrashLoopBackOff → Check application logs
# 3. Pending → Check storage class exists
```
### Pods Not Starting
```bash
kubectl describe pod <pod-name> -n production
kubectl logs <pod-name> -n production
```
### Database Connection Errors
```bash
# Verify database is running
kubectl get pods -n development | grep postgres
# Check database logs
kubectl logs postgres-0 -n development
# Verify secrets match
kubectl get secret postgres-secret -n development -o yaml
kubectl get secret app-secrets -n development -o yaml
```
### Service Not Accessible
```bash
kubectl get endpoints -n production
kubectl describe service cannabrands-app -n production
```
### Ingress Not Working
```bash
# Check ingress status
kubectl describe ingress app-ingress -n development
# Verify cert-manager created certificate
kubectl get certificate -n development
# Check NGINX ingress logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller
```
### Application Errors
```bash
# View recent logs
kubectl logs deployment/app -n development --tail=100
# Follow logs in real-time
kubectl logs -f deployment/app -n development
# Get shell access to debug
kubectl exec -it deployment/app -n development -- bash
```
### Database Connection Issues
```bash
kubectl exec -it postgres-0 -n production -- psql -U cannabrands -c "SELECT 1;"
```
---
## Maintenance Tasks
## Additional Documentation
### Update Application Image
```bash
# Development (automatic via CI)
# CI pushes code.cannabrands.app/cannabrands/hub:dev
kubectl rollout restart deployment/app -n development
# Production (manual)
# Tag new release: git tag 2025.10.1
# CI builds code.cannabrands.app/cannabrands/hub:latest
kubectl rollout restart deployment/app -n production
# Monitor rollout
kubectl rollout status deployment/app -n production
```
### Scale Application
```bash
# Scale up
kubectl scale deployment app --replicas=5 -n production
# Scale down
kubectl scale deployment app --replicas=2 -n production
# Autoscaling (optional)
kubectl autoscale deployment app --cpu-percent=70 --min=2 --max=10 -n production
```
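The `kubectl autoscale` command above uses the standard HPA scaling rule, `desired = ceil(current × currentUtilization ÷ target)`, clamped to the min/max bounds. A quick sketch of that arithmetic:

```python
import math

def desired_replicas(current: int, cpu_percent: float,
                     target: float = 70, lo: int = 2, hi: int = 10) -> int:
    """Kubernetes HPA rule: ceil(current * metric / target), clamped to [lo, hi]."""
    raw = math.ceil(current * cpu_percent / target)
    return max(lo, min(hi, raw))
```

For example, 2 replicas averaging 140% CPU against a 70% target scale to 4; a quiet deployment never drops below the `--min=2` floor.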
### View Resource Usage
```bash
# Pod resource usage
kubectl top pods -n production
# Node resource usage
kubectl top nodes
```
### Clean Up Old Resources
```bash
# Delete completed jobs
kubectl delete jobs --field-selector status.successful=1 -n development
# Delete old replica sets
kubectl delete replicaset --all -n development
```
- **kubectl Commands:** `k8s/KUBECTL_COMMANDS.md`
- **Redis Setup:** `k8s/REDIS_SETUP.md`
- **Reverb Deployment:** `k8s/REVERB_DEPLOYMENT.md`
- **Full K8s Guide:** `docs/archive/k8s-setup/KUBERNETES.md`
---
## Cluster Setup
If you need to install required cluster components:
### Install NGINX Ingress Controller
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
```
### Install Cert-Manager
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
# Create Let's Encrypt cluster issuer
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@cannabrands.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
```
---
## Next Steps
1. **Set up monitoring**: Install Prometheus + Grafana for metrics
2. **Set up logging**: Install ELK stack or Loki for centralized logs
3. **Automate backups**: Create CronJobs for daily database backups
4. **Configure alerts**: Set up alerts for pod failures, high resource usage
5. **Implement GitOps**: Consider ArgoCD or FluxCD for automated deployments
---
**Last Updated**: October 2025
**Maintained By**: DevOps Team
For questions or issues, refer to `docs/DATABASE_STRATEGY.md` and `docs/DEPLOYMENT_WORKFLOW.md`.
**Last Updated:** November 2025