diff --git a/DEPLOYMENT-CADDY.md b/DEPLOYMENT-CADDY.md
new file mode 100644
index 00000000..2b9baf7b
--- /dev/null
+++ b/DEPLOYMENT-CADDY.md
@@ -0,0 +1,344 @@
+# CMC Sales Deployment Guide with Caddy
+
+## Overview
+
+This guide covers deploying the CMC Sales application to a Debian 12 VM using Caddy as the reverse proxy with automatic HTTPS.
+
+## Architecture
+
+- **Production**: `https://cmc.springupsoftware.com`
+- **Staging**: `https://staging.cmc.springupsoftware.com`
+- **Reverse Proxy**: Caddy (running on host)
+- **Applications**: Docker containers
+ - CakePHP legacy app
+ - Go modern app
+ - MariaDB database
+- **SSL**: Automatic via Caddy (Let's Encrypt)
+- **Authentication**: Basic auth configured in Caddy
+
+## Prerequisites
+
+### 1. Server Setup (Debian 12)
+
+```bash
+# Update system
+sudo apt update && sudo apt upgrade -y
+
+# Install Docker
+sudo apt install -y docker.io docker-compose-plugin
+sudo systemctl enable docker
+sudo systemctl start docker
+
+# Add user to docker group
+sudo usermod -aG docker $USER
+# Log out and back in
+
+# Create directories
+sudo mkdir -p /var/backups/cmc-sales
+sudo chown $USER:$USER /var/backups/cmc-sales
+```
+
+### 2. Install Caddy
+
+```bash
+# Run the installation script
+sudo ./scripts/install-caddy.sh
+
+# Or manually install
+sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
+curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
+curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
+sudo apt update
+sudo apt install caddy
+```
+
+### 3. DNS Configuration
+
+Ensure DNS records point to your server:
+- `cmc.springupsoftware.com` → Server IP
+- `staging.cmc.springupsoftware.com` → Server IP
+
+## Initial Deployment
+
+### 1. Clone Repository
+
+```bash
+cd /home/cmc
+git clone git@code.springupsoftware.com:cmc/cmc-sales.git cmc-sales
+sudo chown -R $USER:$USER cmc-sales
+cd cmc-sales
+```
+
+### 2. Environment Configuration
+
+```bash
+# Copy environment files
+cp .env.staging go-app/.env.staging
+cp .env.production go-app/.env.production
+
+# Edit with actual passwords
+nano .env.staging
+nano .env.production
+
+# Create credential directories
+mkdir -p credentials/staging credentials/production
+```
+
+### 3. Setup Basic Authentication
+
+```bash
+# Generate password hashes for Caddy
+./scripts/setup-caddy-auth.sh
+
+# Or manually
+caddy hash-password
+# Copy the hash and update Caddyfile
+```
+
+### 4. Configure Caddy
+
+```bash
+# Copy Caddyfile
+sudo cp Caddyfile /etc/caddy/Caddyfile
+
+# Edit to update passwords and email
+sudo nano /etc/caddy/Caddyfile
+
+# Validate configuration
+caddy validate --config /etc/caddy/Caddyfile
+
+# Start Caddy
+sudo systemctl start caddy
+sudo systemctl status caddy
+```
+
+### 5. Gmail OAuth Setup
+
+Same as before - set up OAuth credentials for each environment:
+- Staging: `credentials/staging/credentials.json`
+- Production: `credentials/production/credentials.json`
+
+### 6. Database Initialization
+
+```bash
+# Start database containers
+docker compose -f docker-compose.caddy-staging.yml up -d db-staging
+docker compose -f docker-compose.caddy-production.yml up -d db-production
+
+# Wait for databases
+sleep 30
+
+# Restore production database (if you have a backup)
+./scripts/restore-db.sh production /path/to/backup.sql.gz
+```
+
+## Deployment Commands
+
+### Starting Services
+
+```bash
+# Start staging environment
+docker compose -f docker-compose.caddy-staging.yml up -d
+
+# Start production environment
+docker compose -f docker-compose.caddy-production.yml up -d
+
+# Reload Caddy configuration
+sudo systemctl reload caddy
+```
+
+### Updating Applications
+
+```bash
+# Pull latest code
+git pull origin main
+
+# Update staging
+docker compose -f docker-compose.caddy-staging.yml down
+docker compose -f docker-compose.caddy-staging.yml build --no-cache
+docker compose -f docker-compose.caddy-staging.yml up -d
+
+# Test staging, then update production
+docker compose -f docker-compose.caddy-production.yml down
+docker compose -f docker-compose.caddy-production.yml build --no-cache
+docker compose -f docker-compose.caddy-production.yml up -d
+```
+
+## Caddy Management
+
+### Configuration
+
+```bash
+# Edit Caddyfile
+sudo nano /etc/caddy/Caddyfile
+
+# Validate configuration
+caddy validate --config /etc/caddy/Caddyfile
+
+# Reload configuration (zero downtime)
+sudo systemctl reload caddy
+```
+
+### Monitoring
+
+```bash
+# Check Caddy status
+sudo systemctl status caddy
+
+# View Caddy logs
+sudo journalctl -u caddy -f
+
+# View access logs
+sudo tail -f /var/log/caddy/cmc-production.log
+sudo tail -f /var/log/caddy/cmc-staging.log
+```
+
+### SSL Certificates
+
+Caddy handles SSL automatically! To check certificates:
+
+```bash
+# List certificates
+sudo ls -la /var/lib/caddy/.local/share/caddy/certificates/
+
+# Force certificate renewal (rarely needed)
+sudo systemctl stop caddy
+sudo rm -rf /var/lib/caddy/.local/share/caddy/certificates/*
+sudo systemctl start caddy
+```
+
+## Container Port Mapping
+
+| Service | Container Port | Host Port | Access |
+|---------|---------------|-----------|---------|
+| cmc-php-staging | 80 | 8091 | localhost only |
+| cmc-go-staging | 8080 | 8092 | localhost only |
+| cmc-db-staging | 3306 | 3307 | localhost only |
+| cmc-php-production | 80 | 8093 | localhost only |
+| cmc-go-production | 8080 | 8094 | localhost only |
+| cmc-db-production | 3306 | - | internal only |
+
+## Monitoring and Maintenance
+
+### Health Checks
+
+```bash
+# Check all containers
+docker ps
+
+# Check application health
+curl -I https://cmc.springupsoftware.com
+curl -I https://staging.cmc.springupsoftware.com
+
+# Internal health checks (from server)
+curl http://localhost:8094/api/v1/health # Production Go
+curl http://localhost:8092/api/v1/health # Staging Go
+```
+
+### Database Backups
+
+Same backup scripts work:
+
+```bash
+# Manual backup
+./scripts/backup-db.sh production
+./scripts/backup-db.sh staging
+
+# Automated backups
+sudo crontab -e
+# Add:
+# 0 2 * * * /home/cmc/cmc-sales/scripts/backup-db.sh production
+# 0 3 * * * /home/cmc/cmc-sales/scripts/backup-db.sh staging
+```
+
+## Security Benefits with Caddy
+
+1. **Automatic HTTPS**: No manual certificate management
+2. **Modern TLS**: Always up-to-date TLS configuration
+3. **OCSP Stapling**: Enabled by default
+4. **Security Headers**: Easy to configure
+5. **Rate Limiting**: Built-in support
+
+## Troubleshooting
+
+### Caddy Issues
+
+```bash
+# Check Caddy configuration
+caddy validate --config /etc/caddy/Caddyfile
+
+# Check Caddy service
+sudo systemctl status caddy
+sudo journalctl -u caddy -n 100
+
+# Test reverse proxy
+curl -v http://localhost:8094/api/v1/health
+```
+
+### Container Issues
+
+```bash
+# Check container logs
+docker compose -f docker-compose.caddy-production.yml logs -f
+
+# Restart specific service
+docker compose -f docker-compose.caddy-production.yml restart cmc-go-production
+```
+
+### SSL Issues
+
+```bash
+# Caddy automatically handles SSL, but if issues arise:
+# 1. Check DNS is resolving correctly
+dig cmc.springupsoftware.com
+
+# 2. Check Caddy can reach Let's Encrypt
+sudo journalctl -u caddy | grep -i acme
+
+# 3. Ensure ports 80 and 443 are open
+sudo ufw status
+```
+
+## Advantages of Caddy Setup
+
+1. **Simpler Configuration**: Caddyfile is more readable than nginx
+2. **Automatic HTTPS**: No certbot or lego needed
+3. **Zero-Downtime Reloads**: Config changes without dropping connections
+4. **Better Performance**: More efficient than nginx for this use case
+5. **Native Rate Limiting**: Built-in without additional modules
+6. **Automatic Certificate Renewal**: No cron jobs needed
+
+## Migration from Nginx
+
+If migrating from the nginx setup:
+
+1. Stop nginx containers: `docker compose -f docker-compose.proxy.yml down`
+2. Install and configure Caddy
+3. Start new containers with caddy compose files
+4. Update DNS if needed
+5. Monitor logs during transition
+
+## File Structure
+
+```
+/home/cmc/cmc-sales/
+├── docker-compose.caddy-staging.yml
+├── docker-compose.caddy-production.yml
+├── Caddyfile
+├── credentials/
+│ ├── staging/
+│ └── production/
+├── scripts/
+│ ├── backup-db.sh
+│ ├── restore-db.sh
+│ ├── install-caddy.sh
+│ └── setup-caddy-auth.sh
+└── .env files
+
+/etc/caddy/
+└── Caddyfile (deployed config)
+
+/var/log/caddy/
+├── cmc-production.log
+└── cmc-staging.log
+```
\ No newline at end of file
diff --git a/DEPLOYMENT.md b/DEPLOYMENT.md
new file mode 100644
index 00000000..959b64a3
--- /dev/null
+++ b/DEPLOYMENT.md
@@ -0,0 +1,362 @@
+# CMC Sales Deployment Guide
+
+## Overview
+
+This guide covers deploying the CMC Sales application to a Debian 12 VM at `cmc.springupsoftware.com` with both staging and production environments.
+
+## Architecture
+
+- **Production**: `https://cmc.springupsoftware.com`
+- **Staging**: `https://staging.cmc.springupsoftware.com`
+- **Components**: CakePHP legacy app, Go modern app, MariaDB, Nginx reverse proxy
+- **SSL**: Let's Encrypt certificates
+- **Authentication**: Basic auth for both environments
+
+## Prerequisites
+
+### Server Setup (Debian 12)
+
+```bash
+# Update system
+sudo apt update && sudo apt upgrade -y
+
+# Install Docker and Docker Compose
+sudo apt install -y docker.io docker-compose-plugin
+sudo systemctl enable docker
+sudo systemctl start docker
+
+# Add user to docker group
+sudo usermod -aG docker $USER
+# Log out and back in
+
+# Create backup directory
+sudo mkdir -p /var/backups/cmc-sales
+sudo chown $USER:$USER /var/backups/cmc-sales
+```
+
+### DNS Configuration
+
+Ensure these DNS records point to your server:
+- `cmc.springupsoftware.com` → Server IP
+- `staging.cmc.springupsoftware.com` → Server IP
+
+## Initial Deployment
+
+### 1. Clone Repository
+
+```bash
+cd /home/cmc
+git clone git@code.springupsoftware.com:cmc/cmc-sales.git cmc-sales
+sudo chown -R $USER:$USER cmc-sales
+cd cmc-sales
+```
+
+### 2. Environment Configuration
+
+```bash
+# Copy environment files
+cp .env.staging go-app/.env.staging
+cp .env.production go-app/.env.production
+
+# Edit with actual passwords -- up to this.
+nano .env.staging
+nano .env.production
+
+# Create credential directories
+mkdir -p credentials/staging credentials/production
+```
+
+### 3. Gmail OAuth Setup
+
+For each environment (staging/production):
+
+1. Go to [Google Cloud Console](https://console.cloud.google.com)
+2. Create/select project
+3. Enable Gmail API
+4. Create OAuth 2.0 credentials
+5. Download `credentials.json`
+6. Place in appropriate directory:
+ - Staging: `credentials/staging/credentials.json`
+ - Production: `credentials/production/credentials.json`
+
+Generate tokens (run on local machine first):
+```bash
+cd go-app
+go run cmd/auth/main.go
+# Follow OAuth flow
+# Copy token.json to appropriate credential directory
+```
+
+### 4. SSL Certificates
+
+```bash
+# Start proxy services (includes Lego container)
+docker compose -f docker-compose.proxy.yml up -d
+
+# Setup SSL certificates using Lego
+./scripts/setup-lego-certs.sh accounts@springupsoftware.com
+
+# Verify certificates
+./scripts/lego-list-certs.sh
+```
+
+### 5. Database Initialization
+
+```bash
+# Start database containers first
+docker compose -f docker-compose.staging.yml up -d db-staging
+docker compose -f docker-compose.production.yml up -d db-production
+
+# Wait for databases to be ready
+sleep 30
+
+# Restore production database (if you have a backup)
+./scripts/restore-db.sh production /path/to/backup.sql.gz
+
+# Or initialize empty database and run migrations
+# (implementation specific)
+```
+
+## Deployment Commands
+
+### Starting Services
+
+```bash
+# Start staging environment
+docker compose -f docker-compose.staging.yml up -d
+
+# Start production environment
+docker compose -f docker-compose.production.yml up -d
+
+# Wait for services to be ready
+sleep 10
+
+# Start reverse proxy (after both environments are running)
+docker compose -f docker-compose.proxy.yml up -d
+
+# Or use the make command for full stack deployment
+make full-stack
+```
+
+### Updating Applications
+
+```bash
+# Pull latest code
+git pull origin main
+
+# Rebuild and restart staging
+docker compose -f docker-compose.staging.yml down
+docker compose -f docker-compose.staging.yml build --no-cache
+docker compose -f docker-compose.staging.yml up -d
+
+# Test staging thoroughly, then update production
+docker compose -f docker-compose.production.yml down
+docker compose -f docker-compose.production.yml build --no-cache
+docker compose -f docker-compose.production.yml up -d
+```
+
+## Monitoring and Maintenance
+
+### Health Checks
+
+```bash
+# Check all containers
+docker ps
+
+# Check logs
+docker compose -f docker-compose.production.yml logs -f
+
+# Check application health
+curl https://cmc.springupsoftware.com/health
+curl https://staging.cmc.springupsoftware.com/health
+```
+
+### Database Backups
+
+```bash
+# Manual backup
+./scripts/backup-db.sh production
+./scripts/backup-db.sh staging
+
+# Set up automated backups (cron)
+sudo crontab -e
+# Add: 0 2 * * * /opt/cmc-sales/scripts/backup-db.sh production
+# Add: 0 3 * * * /opt/cmc-sales/scripts/backup-db.sh staging
+```
+
+### Log Management
+
+```bash
+# View nginx logs
+sudo tail -f /var/log/nginx/access.log
+sudo tail -f /var/log/nginx/error.log
+
+# View application logs
+docker compose -f docker-compose.production.yml logs -f cmc-go-production
+docker compose -f docker-compose.staging.yml logs -f cmc-go-staging
+```
+
+### SSL Certificate Renewal
+
+```bash
+# Manual renewal
+./scripts/lego-renew-cert.sh all
+
+# Renew specific domain
+./scripts/lego-renew-cert.sh cmc.springupsoftware.com
+
+# Set up auto-renewal (cron)
+sudo crontab -e
+# Add: 0 2 * * * /opt/cmc-sales/scripts/lego-renew-cert.sh all
+```
+
+## Security Considerations
+
+### Basic Authentication
+
+Update passwords in `userpasswd` file:
+```bash
+# Generate new password hash
+sudo apt install apache2-utils
+htpasswd -c userpasswd username
+
+# Restart nginx containers
+docker compose -f docker-compose.proxy.yml restart nginx-proxy
+```
+
+### Database Security
+
+- Use strong passwords in environment files
+- Database containers are not exposed externally in production
+- Regular backups with encryption at rest
+
+### Network Security
+
+- All traffic encrypted with SSL/TLS
+- Rate limiting configured in nginx
+- Security headers enabled
+- Docker networks isolate environments
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Containers won't start**
+ ```bash
+ # Check logs
+ docker compose logs container-name
+
+ # Check system resources
+ df -h
+ free -h
+ ```
+
+2. **SSL issues**
+ ```bash
+ # Check certificate status
+ ./scripts/lego-list-certs.sh
+
+ # Test SSL configuration
+ curl -I https://cmc.springupsoftware.com
+
+ # Manually renew certificates
+ ./scripts/lego-renew-cert.sh all
+ ```
+
+3. **Database connection issues**
+ ```bash
+ # Test database connectivity
+ docker exec -it cmc-db-production mysql -u cmc -p
+ ```
+
+4. **Gmail API issues**
+ ```bash
+ # Check credentials are mounted
+ docker exec -it cmc-go-production ls -la /root/credentials/
+
+ # Check logs for OAuth errors
+ docker compose logs cmc-go-production | grep -i gmail
+ ```
+
+### Emergency Procedures
+
+1. **Quick rollback**
+ ```bash
+ # Stop current containers
+ docker compose -f docker-compose.production.yml down
+
+ # Restore from backup
+ ./scripts/restore-db.sh production /var/backups/cmc-sales/latest_backup.sql.gz
+
+ # Start previous version
+ git checkout previous-commit
+ docker compose -f docker-compose.production.yml up -d
+ ```
+
+2. **Database corruption**
+ ```bash
+ # Stop application
+ docker compose -f docker-compose.production.yml stop cmc-go-production cmc-php-production
+
+ # Restore from backup
+ ./scripts/restore-db.sh production /var/backups/cmc-sales/backup_production_YYYYMMDD-HHMMSS.sql.gz
+
+ # Restart application
+ docker compose -f docker-compose.production.yml start cmc-go-production cmc-php-production
+ ```
+
+## File Structure
+
+```
+/opt/cmc-sales/
+├── docker-compose.staging.yml
+├── docker-compose.production.yml
+├── docker-compose.proxy.yml
+├── conf/
+│ ├── nginx-staging.conf
+│ ├── nginx-production.conf
+│ └── nginx-proxy.conf
+├── credentials/
+│ ├── staging/
+│ │ ├── credentials.json
+│ │ └── token.json
+│ └── production/
+│ ├── credentials.json
+│ └── token.json
+├── scripts/
+│ ├── backup-db.sh
+│ ├── restore-db.sh
+│ ├── lego-obtain-cert.sh
+│ ├── lego-renew-cert.sh
+│ ├── lego-list-certs.sh
+│ └── setup-lego-certs.sh
+└── .env files
+```
+
+## Performance Tuning
+
+### Resource Limits
+
+Resource limits are configured in the Docker Compose files:
+- Production: 2 CPU cores, 2-4GB RAM per service
+- Staging: More relaxed limits for testing
+
+### Database Optimization
+
+```sql
+-- Monitor slow queries
+SHOW VARIABLES LIKE 'slow_query_log';
+SET GLOBAL slow_query_log = 'ON';
+SET GLOBAL long_query_time = 2;
+
+-- Check database performance
+SHOW PROCESSLIST;
+SHOW ENGINE INNODB STATUS;
+```
+
+### Nginx Optimization
+
+- Gzip compression enabled
+- Static file caching
+- Connection keep-alive
+- Rate limiting configured
\ No newline at end of file
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 00000000..45a235d5
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,65 @@
+FROM ghcr.io/kzrl/ubuntu:lucid
+
+# Set environment variables.
+ENV HOME /root
+
+# Define working directory.
+WORKDIR /root
+
+RUN sed -i 's/archive/old-releases/' /etc/apt/sources.list
+
+
+RUN apt-get update
+RUN apt-get -y upgrade
+
+# Install apache, PHP, and supplimentary programs. curl and lynx-cur are for debugging the container.
+RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apache2 libapache2-mod-php5 php5-mysql php5-gd php-pear php-apc php5-curl php5-imap
+
+# Enable apache mods.
+#RUN php5enmod openssl
+RUN a2enmod php5
+RUN a2enmod rewrite
+RUN a2enmod headers
+
+
+# Update the PHP.ini file, enable ?> tags and quieten logging.
+# RUN sed -i "s/short_open_tag = Off/short_open_tag = On/" /etc/php5/apache2/php.ini
+#RUN sed -i "s/error_reporting = .*$/error_reporting = E_ERROR | E_WARNING | E_PARSE/" /etc/php5/apache2/php.ini
+
+ADD conf/php.ini /etc/php5/apache2/php.ini
+
+# Manually set up the apache environment variables
+ENV APACHE_RUN_USER www-data
+ENV APACHE_RUN_GROUP www-data
+ENV APACHE_LOG_DIR /var/log/apache2
+ENV APACHE_LOCK_DIR /var/lock/apache2
+ENV APACHE_PID_FILE /var/run/apache2.pid
+
+ARG COMMIT
+ENV COMMIT_SHA=${COMMIT}
+
+EXPOSE 80
+
+# Update the default apache site with the config we created.
+ADD conf/apache-vhost.conf /etc/apache2/sites-available/cmc-sales
+ADD conf/ripmime /bin/ripmime
+
+RUN chmod +x /bin/ripmime
+RUN a2dissite 000-default
+RUN a2ensite cmc-sales
+
+RUN mkdir -p /var/www/cmc-sales/app/tmp/logs
+RUN chmod -R 755 /var/www/cmc-sales/app/tmp
+
+# Copy site into place.
+ADD . /var/www/cmc-sales
+RUN chmod +x /var/www/cmc-sales/run_vault.sh
+RUN chmod +x /var/www/cmc-sales/run_update_invoices.sh
+
+
+# Ensure Apache error/access logs go to Docker stdout/stderr
+RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
+ ln -sf /dev/stderr /var/log/apache2/error.log
+
+# By default, simply start apache.
+CMD /usr/sbin/apache2ctl -D FOREGROUND
diff --git a/Dockerfile.go.production b/Dockerfile.go.production
new file mode 100644
index 00000000..adb9f510
--- /dev/null
+++ b/Dockerfile.go.production
@@ -0,0 +1,63 @@
+# Build stage
+FROM golang:1.23-alpine AS builder
+
+# Install build dependencies
+RUN apk add --no-cache git
+
+# Set working directory
+WORKDIR /app
+
+# Copy go mod files
+COPY go-app/go.mod go-app/go.sum ./
+
+# Download dependencies
+RUN go mod download
+
+# Copy source code
+COPY go-app/ .
+
+# Install sqlc (compatible with Go 1.23+)
+RUN go install github.com/sqlc-dev/sqlc/cmd/sqlc@latest
+
+# Generate sqlc code
+RUN sqlc generate
+
+# Build the application with production optimizations
+RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags="-w -s" -tags production -o server cmd/server/main.go
+
+# Runtime stage - minimal image for production
+FROM alpine:latest
+
+# Install only essential runtime dependencies
+RUN apk --no-cache add ca-certificates && \
+ addgroup -g 1001 -S appgroup && \
+ adduser -u 1001 -S appuser -G appgroup
+
+WORKDIR /app
+
+# Copy the binary from builder
+COPY --from=builder /app/server .
+
+# Copy templates and static files
+COPY go-app/templates ./templates
+COPY go-app/static ./static
+
+# Copy production environment file
+COPY go-app/.env.production .env
+
+# Create credentials directory with proper permissions
+RUN mkdir -p ./credentials && \
+ chown -R appuser:appgroup /app
+
+# Switch to non-root user
+USER appuser
+
+# Expose port
+EXPOSE 8080
+
+# Health check
+HEALTHCHECK --interval=60s --timeout=10s --start-period=10s --retries=3 \
+ CMD wget --no-verbose --tries=1 --spider http://localhost:8080/api/v1/health || exit 1
+
+# Run the application
+CMD ["./server"]
\ No newline at end of file
diff --git a/Dockerfile.go.staging b/Dockerfile.go.staging
new file mode 100644
index 00000000..6c814aec
--- /dev/null
+++ b/Dockerfile.go.staging
@@ -0,0 +1,57 @@
+# Build stage
+FROM golang:1.23-alpine AS builder
+
+# Install build dependencies
+RUN apk add --no-cache git
+
+# Set working directory
+WORKDIR /app
+
+# Copy go mod files
+COPY go-app/go.mod go-app/go.sum ./
+
+# Download dependencies
+RUN go mod download
+
+# Copy source code
+COPY go-app/ .
+
+# Install sqlc (compatible with Go 1.23+)
+RUN go install github.com/sqlc-dev/sqlc/cmd/sqlc@latest
+
+# Generate sqlc code
+RUN sqlc generate
+
+# Build the application with staging tags
+RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -tags staging -o server cmd/server/main.go
+
+# Runtime stage
+FROM alpine:latest
+
+# Install runtime dependencies and debugging tools for staging
+RUN apk --no-cache add ca-certificates curl net-tools
+
+WORKDIR /root/
+
+# Copy the binary from builder
+COPY --from=builder /app/server .
+
+# Copy templates and static files
+COPY go-app/templates ./templates
+COPY go-app/static ./static
+
+# Copy staging environment file
+COPY go-app/.env.staging .env
+
+# Create credentials directory
+RUN mkdir -p ./credentials
+
+# Expose port
+EXPOSE 8080
+
+# Health check
+HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
+ CMD curl -f http://localhost:8080/api/v1/health || exit 1
+
+# Run the application
+CMD ["./server"]
\ No newline at end of file
diff --git a/Dockerfile.ubuntu-php b/Dockerfile.ubuntu-php
new file mode 100644
index 00000000..cb5d7837
--- /dev/null
+++ b/Dockerfile.ubuntu-php
@@ -0,0 +1,75 @@
+# Simple working PHP setup using Ubuntu
+FROM ubuntu:20.04
+
+# Prevent interactive prompts during package installation
+ENV DEBIAN_FRONTEND=noninteractive
+ENV TZ=Australia/Sydney
+
+# Install Apache, PHP and required extensions
+RUN apt-get update && apt-get install -y \
+ apache2 \
+ libapache2-mod-php7.4 \
+ php7.4 \
+ php7.4-mysql \
+ php7.4-gd \
+ php7.4-curl \
+ php7.4-mbstring \
+ php7.4-xml \
+ php7.4-zip \
+ php7.4-imap \
+ php7.4-intl \
+ php7.4-bcmath \
+ curl \
+ && rm -rf /var/lib/apt/lists/*
+
+# Enable Apache modules
+RUN a2enmod rewrite headers php7.4
+
+# Configure PHP for CakePHP
+RUN { \
+ echo 'error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT & ~E_NOTICE & ~E_WARNING'; \
+ echo 'display_errors = On'; \
+ echo 'display_startup_errors = On'; \
+ echo 'log_errors = On'; \
+ echo 'max_execution_time = 300'; \
+ echo 'memory_limit = 256M'; \
+ echo 'post_max_size = 50M'; \
+ echo 'upload_max_filesize = 50M'; \
+} > /etc/php/7.4/apache2/conf.d/99-cakephp.ini
+
+# Set up Apache virtual host
+RUN echo '\n\
+ ServerName localhost\n\
+ DocumentRoot /var/www/cmc-sales/app/webroot\n\
+ \n\
+ Options FollowSymLinks\n\
+ AllowOverride All\n\
+ Require all granted\n\
+ \n\
+ ErrorLog ${APACHE_LOG_DIR}/error.log\n\
+ CustomLog ${APACHE_LOG_DIR}/access.log combined\n\
+ ' > /etc/apache2/sites-available/000-default.conf
+
+# Create app directory structure
+RUN mkdir -p /var/www/cmc-sales/app/tmp/{cache,logs,sessions} \
+ && mkdir -p /var/www/cmc-sales/app/webroot/{pdf,attachments_files}
+
+# Set permissions
+RUN chown -R www-data:www-data /var/www/cmc-sales \
+ && chmod -R 777 /var/www/cmc-sales/app/tmp
+
+# Copy ripmime if it exists
+# COPY conf/ripmime* /usr/local/bin/ || true
+# RUN chmod +x /usr/local/bin/ripmime* 2>/dev/null || true
+
+# Set working directory
+WORKDIR /var/www/cmc-sales
+
+# Copy application (will be overridden by volume mount)
+COPY app/ /var/www/cmc-sales/app/
+
+# Expose port 80
+EXPOSE 80
+
+# Start Apache in foreground
+CMD ["apache2ctl", "-D", "FOREGROUND"]
\ No newline at end of file
diff --git a/Makefile b/Makefile
new file mode 100644
index 00000000..98d352c1
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,190 @@
+# CMC Sales Deployment Makefile for Caddy setup
+
+.PHONY: help staging production backup-staging backup-production restart-staging restart-production status logs clean caddy-reload caddy-logs
+
+# Default target
+help:
+ @echo "CMC Sales Deployment Commands (Caddy version)"
+ @echo ""
+ @echo "Environments:"
+ @echo " staging Start staging environment"
+ @echo " staging-down Stop staging environment"
+ @echo " staging-logs Show staging logs"
+ @echo " restart-staging Rebuild and restart staging"
+ @echo ""
+ @echo " production Start production environment"
+ @echo " production-down Stop production environment"
+ @echo " production-logs Show production logs"
+ @echo " restart-production Rebuild and restart production"
+ @echo ""
+ @echo "Database:"
+ @echo " backup-staging Backup staging database"
+ @echo " backup-production Backup production database"
+ @echo ""
+ @echo "Caddy:"
+ @echo " caddy-status Show Caddy service status"
+ @echo " caddy-reload Reload Caddy configuration"
+ @echo " caddy-logs Show Caddy logs"
+ @echo " caddy-validate Validate Caddyfile"
+ @echo ""
+ @echo "Utility:"
+ @echo " status Show all container status"
+ @echo " clean Stop and remove all containers"
+ @echo " setup-auth Setup basic authentication"
+
+# Staging environment
+staging:
+ @echo "Starting staging environment..."
+ docker compose -f docker-compose.caddy-staging-ubuntu.yml up -d
+ @echo "Staging environment started"
+ @echo "Access at: https://staging.cmc.springupsoftware.com"
+
+staging-down:
+ docker compose -f docker-compose.caddy-staging-ubuntu.yml down
+
+staging-logs:
+ docker compose -f docker-compose.caddy-staging-ubuntu.yml logs -f
+
+restart-staging:
+ @echo "Restarting staging environment..."
+ docker compose -f docker-compose.caddy-staging-ubuntu.yml down
+ docker compose -f docker-compose.caddy-staging-ubuntu.yml build --no-cache
+ docker compose -f docker-compose.caddy-staging-ubuntu.yml up -d
+ @echo "Staging environment restarted"
+
+# Production environment
+production:
+ @echo "Starting production environment..."
+ docker compose -f docker-compose.caddy-production.yml up -d
+ @echo "Production environment started"
+ @echo "Access at: https://cmc.springupsoftware.com"
+
+production-down:
+ @echo "WARNING: This will stop the production environment!"
+ @read -p "Are you sure? (yes/no): " confirm && [ "$$confirm" = "yes" ]
+ docker compose -f docker-compose.caddy-production.yml down
+
+production-logs:
+ docker compose -f docker-compose.caddy-production.yml logs -f
+
+restart-production:
+ @echo "WARNING: This will restart the production environment!"
+ @read -p "Are you sure? (yes/no): " confirm && [ "$$confirm" = "yes" ]
+ docker compose -f docker-compose.caddy-production.yml down
+ docker compose -f docker-compose.caddy-production.yml build --no-cache
+ docker compose -f docker-compose.caddy-production.yml up -d
+ @echo "Production environment restarted"
+
+# Database backups
+backup-staging:
+ @echo "Creating staging database backup..."
+ ./scripts/backup-db.sh staging
+
+backup-production:
+ @echo "Creating production database backup..."
+ ./scripts/backup-db.sh production
+
+# Caddy management
+caddy-status:
+ @echo "=== Caddy Status ==="
+ sudo systemctl status caddy --no-pager
+
+caddy-reload:
+ @echo "Reloading Caddy configuration..."
+ sudo systemctl reload caddy
+ @echo "Caddy reloaded successfully"
+
+caddy-logs:
+ @echo "=== Caddy Logs ==="
+ sudo journalctl -u caddy -f
+
+caddy-validate:
+ @echo "Validating Caddyfile..."
+ caddy validate --config Caddyfile
+ @echo "Caddyfile is valid"
+
+# Setup authentication
+setup-auth:
+ @echo "Setting up basic authentication..."
+ ./scripts/setup-caddy-auth.sh
+
+# System status
+status:
+ @echo "=== Container Status ==="
+ docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
+ @echo ""
+ @echo "=== Caddy Status ==="
+ sudo systemctl is-active caddy || true
+ @echo ""
+ @echo "=== Port Usage ==="
+ sudo netstat -tlnp | grep -E ":(80|443|8091|8092|8093|8094|3306|3307) " || true
+
+# Logs
+logs:
+ @echo "Which logs? [staging/production/caddy]"
+ @read env; \
+ case $$env in \
+ staging) docker compose -f docker-compose.caddy-staging-ubuntu.yml logs -f ;; \
+ production) docker compose -f docker-compose.caddy-production.yml logs -f ;; \
+ caddy) sudo journalctl -u caddy -f ;; \
+ *) echo "Invalid option" ;; \
+ esac
+
+# Cleanup
+clean:
+ @echo "WARNING: This will stop and remove ALL CMC containers!"
+ @read -p "Are you sure? (yes/no): " confirm && [ "$$confirm" = "yes" ]
+ docker compose -f docker-compose.caddy-staging-ubuntu.yml down --volumes --remove-orphans
+ docker compose -f docker-compose.caddy-production.yml down --volumes --remove-orphans
+ docker system prune -f
+ @echo "Cleanup completed"
+
+# Health checks
+health:
+ @echo "=== Health Checks ==="
+ @echo "Staging:"
+ @curl -s -o /dev/null -w " HTTPS %{http_code}: https://staging.cmc.springupsoftware.com/health\n" https://staging.cmc.springupsoftware.com/health -u admin:password || echo " Staging not accessible (update with correct auth)"
+ @curl -s -o /dev/null -w " Internal %{http_code}: http://localhost:8092/api/v1/health\n" http://localhost:8092/api/v1/health || echo " Staging Go not accessible"
+ @echo "Production:"
+ @curl -s -o /dev/null -w " HTTPS %{http_code}: https://cmc.springupsoftware.com/health\n" https://cmc.springupsoftware.com/health -u admin:password || echo " Production not accessible (update with correct auth)"
+ @curl -s -o /dev/null -w " Internal %{http_code}: http://localhost:8094/api/v1/health\n" http://localhost:8094/api/v1/health || echo " Production Go not accessible"
+
+# Deploy to staging
+deploy-staging:
+ @echo "Deploying to staging..."
+ git pull origin main
+ $(MAKE) restart-staging
+ @echo "Staging deployment complete"
+
+# Deploy to production
+deploy-production:
+ @echo "WARNING: This will deploy to PRODUCTION!"
+ @echo "Make sure you have tested thoroughly in staging first."
+ @read -p "Are you sure you want to deploy to production? (yes/no): " confirm && [ "$$confirm" = "yes" ]
+ git pull origin main
+ $(MAKE) backup-production
+ $(MAKE) restart-production
+ @echo "Production deployment complete"
+
+# First-time setup
+initial-setup:
+ @echo "Running initial setup..."
+ @echo "1. Installing Caddy..."
+ sudo ./scripts/install-caddy.sh
+ @echo ""
+ @echo "2. Setting up authentication..."
+ ./scripts/setup-caddy-auth.sh
+ @echo ""
+ @echo "3. Copying Caddyfile..."
+ sudo cp Caddyfile /etc/caddy/Caddyfile
+ @echo ""
+ @echo "4. Starting Caddy..."
+ sudo systemctl start caddy
+ @echo ""
+ @echo "5. Starting containers..."
+ $(MAKE) staging
+ $(MAKE) production
+ @echo ""
+ @echo "Initial setup complete!"
+ @echo "Access staging at: https://staging.cmc.springupsoftware.com"
+ @echo "Access production at: https://cmc.springupsoftware.com"
\ No newline at end of file
diff --git a/TESTING_DOCKER.md b/TESTING_DOCKER.md
new file mode 100644
index 00000000..17256fcb
--- /dev/null
+++ b/TESTING_DOCKER.md
@@ -0,0 +1,394 @@
+# Running CMC Django Tests in Docker
+
+This guide explains how to run the comprehensive CMC Django test suite using Docker for consistent, isolated testing.
+
+## Quick Start
+
+```bash
+# 1. Setup test environment (one-time)
+./run-tests-docker.sh setup
+
+# 2. Run all tests
+./run-tests-docker.sh run
+
+# 3. Run tests with coverage
+./run-tests-docker.sh coverage
+```
+
+## Test Environment Overview
+
+The Docker test environment includes:
+- **Isolated test database** (MariaDB on port 3307)
+- **Django test container** with all dependencies
+- **Coverage reporting** with HTML and XML output
+- **PDF generation testing** with WeasyPrint/ReportLab
+- **Parallel test execution** support
+
+## Available Commands
+
+### Setup and Management
+
+```bash
+# Build containers and setup test database
+./run-tests-docker.sh setup
+
+# Clean up all test containers and data
+./run-tests-docker.sh clean
+
+# View test container logs
+./run-tests-docker.sh logs
+
+# Open shell in test container
+./run-tests-docker.sh shell
+```
+
+### Running Tests
+
+```bash
+# Run all tests
+./run-tests-docker.sh run
+
+# Run specific test suites
+./run-tests-docker.sh run models # Model tests only
+./run-tests-docker.sh run services # Service layer tests
+./run-tests-docker.sh run auth # Authentication tests
+./run-tests-docker.sh run views # View and URL tests
+./run-tests-docker.sh run pdf # PDF generation tests
+./run-tests-docker.sh run integration # Integration tests
+
+# Run quick tests (models + services)
+./run-tests-docker.sh quick
+
+# Run tests with coverage reporting
+./run-tests-docker.sh coverage
+```
+
+## Advanced Test Options
+
+### Using Docker Compose Directly
+
+```bash
+# Run specific test with custom options
+docker-compose -f docker-compose.test.yml run --rm cmc-django-test \
+ python cmcsales/manage.py test cmc.tests.test_models --verbosity=2 --keepdb
+
+# Run tests with coverage
+docker-compose -f docker-compose.test.yml run --rm cmc-django-test \
+ coverage run --source='.' cmcsales/manage.py test cmc.tests
+
+# Generate coverage report
+docker-compose -f docker-compose.test.yml run --rm cmc-django-test \
+ coverage report --show-missing
+```
+
+### Using the Test Script Directly
+
+```bash
+# Inside the container, you can use the test script with advanced options
+docker-compose -f docker-compose.test.yml run --rm cmc-django-test \
+ /app/scripts/run-tests.sh --coverage --keepdb --failfast models
+
+# Script options:
+# -c, --coverage Enable coverage reporting
+# -k, --keepdb Keep test database between runs
+# -p, --parallel NUM Run tests in parallel
+# -f, --failfast Stop on first failure
+# -v, --verbosity NUM Verbosity level 0-3
+```
+
+## Test Suite Structure
+
+### 1. Model Tests (`test_models.py`)
+Tests all Django models including:
+- Customer, Enquiry, Job, Document models
+- Model validation and constraints
+- Relationships and cascade behavior
+- Financial calculations
+
+```bash
+./run-tests-docker.sh run models
+```
+
+### 2. Service Tests (`test_services.py`)
+Tests business logic layer:
+- Number generation service
+- Financial calculation service
+- Document service workflows
+- Validation service
+
+```bash
+./run-tests-docker.sh run services
+```
+
+### 3. Authentication Tests (`test_authentication.py`)
+Tests authentication system:
+- Multiple authentication backends
+- Permission decorators and middleware
+- User management workflows
+- Security features
+
+```bash
+./run-tests-docker.sh run auth
+```
+
+### 4. View Tests (`test_views.py`)
+Tests web interface:
+- CRUD operations for all entities
+- AJAX endpoints
+- Permission enforcement
+- URL routing
+
+```bash
+./run-tests-docker.sh run views
+```
+
+### 5. PDF Tests (`test_pdf.py`)
+Tests PDF generation:
+- WeasyPrint and ReportLab engines
+- Template rendering
+- Document formatting
+- Security and performance
+
+```bash
+./run-tests-docker.sh run pdf
+```
+
+### 6. Integration Tests (`test_integration.py`)
+Tests complete workflows:
+- End-to-end business processes
+- Multi-user collaboration
+- System integration scenarios
+- Performance and security
+
+```bash
+./run-tests-docker.sh run integration
+```
+
+## Test Reports and Coverage
+
+### Coverage Reports
+
+After running tests with coverage, reports are available in:
+- **HTML Report**: `./coverage-reports/html/index.html`
+- **XML Report**: `./coverage-reports/coverage.xml`
+- **Console**: Displayed after test run
+
+```bash
+# Run tests with coverage
+./run-tests-docker.sh coverage
+
+# View HTML report
+open coverage-reports/html/index.html
+```
+
+### Test Artifacts
+
+Test outputs are saved to:
+- **Test Reports**: `./test-reports/`
+- **Coverage Reports**: `./coverage-reports/`
+- **Logs**: `./logs/`
+- **PDF Test Files**: `./test-reports/pdf/`
+
+## Configuration
+
+### Environment Variables
+
+The test environment uses these key variables:
+
+```yaml
+# Database configuration
+DATABASE_HOST: test-db
+DATABASE_NAME: test_cmc
+DATABASE_USER: test_cmc
+DATABASE_PASSWORD: testPassword123
+
+# Django settings
+DJANGO_SETTINGS_MODULE: cmcsales.settings
+TESTING: 1
+DEBUG: 0
+
+# PDF generation
+PDF_GENERATION_ENGINE: weasyprint
+PDF_SAVE_DIRECTORY: /app/test-reports/pdf
+```
+
+### Test Database
+
+- **Isolated database** separate from development/production
+- **Runs on port 3307** to avoid conflicts
+- **Optimized for testing** with reduced buffer sizes
+- **Automatically reset** between test runs (unless `--keepdb` used)
+
+## Performance Optimization
+
+### Parallel Test Execution
+
+```bash
+# Run tests in parallel (faster execution)
+docker-compose -f docker-compose.test.yml run --rm cmc-django-test \
+ /app/scripts/run-tests.sh --parallel=4 all
+```
+
+### Keeping Test Database
+
+```bash
+# Keep database between runs for faster subsequent tests
+./run-tests-docker.sh run models --keepdb
+```
+
+### Quick Test Suite
+
+```bash
+# Run only essential tests for rapid feedback
+./run-tests-docker.sh quick
+```
+
+## Continuous Integration
+
+### GitHub Actions Example
+
+```yaml
+name: Test CMC Django
+on: [push, pull_request]
+
+jobs:
+ test:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v3
+
+ - name: Setup test environment
+ run: ./run-tests-docker.sh setup
+
+ - name: Run tests with coverage
+ run: ./run-tests-docker.sh coverage
+
+ - name: Upload coverage reports
+ uses: codecov/codecov-action@v3
+ with:
+ file: ./coverage-reports/coverage.xml
+```
+
+## Troubleshooting
+
+### Database Connection Issues
+
+```bash
+# Check database status
+docker-compose -f docker-compose.test.yml ps
+
+# View database logs
+docker-compose -f docker-compose.test.yml logs test-db
+
+# Restart database
+docker-compose -f docker-compose.test.yml restart test-db
+```
+
+### Test Failures
+
+```bash
+# Run with maximum verbosity for debugging
+docker-compose -f docker-compose.test.yml run --rm cmc-django-test \
+ python cmcsales/manage.py test cmc.tests.test_models --verbosity=3
+
+# Use failfast to stop on first error
+./run-tests-docker.sh run models --failfast
+
+# Open shell to investigate
+./run-tests-docker.sh shell
+```
+
+### Permission Issues
+
+```bash
+# Fix file permissions
+sudo chown -R $USER:$USER test-reports coverage-reports logs
+
+# Check Docker permissions
+docker-compose -f docker-compose.test.yml run --rm cmc-django-test whoami
+```
+
+### Memory Issues
+
+```bash
+# Run tests with reduced parallel workers
+docker-compose -f docker-compose.test.yml run --rm cmc-django-test \
+ /app/scripts/run-tests.sh --parallel=1 all
+
+# Monitor resource usage
+docker stats
+```
+
+## Development Workflow
+
+### Recommended Testing Workflow
+
+1. **Initial Setup** (one-time):
+ ```bash
+ ./run-tests-docker.sh setup
+ ```
+
+2. **During Development** (fast feedback):
+ ```bash
+ ./run-tests-docker.sh quick --keepdb
+ ```
+
+3. **Before Commit** (comprehensive):
+ ```bash
+ ./run-tests-docker.sh coverage
+ ```
+
+4. **Debugging Issues**:
+ ```bash
+ ./run-tests-docker.sh shell
+ # Inside container:
+ python cmcsales/manage.py test cmc.tests.test_models.CustomerModelTest.test_customer_creation --verbosity=3
+ ```
+
+### Adding New Tests
+
+1. Create test file in appropriate module
+2. Follow existing test patterns and base classes
+3. Test locally:
+ ```bash
+ ./run-tests-docker.sh run models --keepdb
+ ```
+4. Run full suite before committing:
+ ```bash
+ ./run-tests-docker.sh coverage
+ ```
+
+## Integration with IDE
+
+### PyCharm/IntelliJ
+
+Configure remote interpreter using Docker:
+1. Go to Settings → Project → Python Interpreter
+2. Add Docker Compose interpreter
+3. Use `docker-compose.test.yml` configuration
+4. Set service to `cmc-django-test`
+
+### VS Code
+
+Use Dev Containers extension:
+1. Create `.devcontainer/devcontainer.json`
+2. Configure to use test Docker environment
+3. Run tests directly in integrated terminal
+
+## Best Practices
+
+1. **Always run tests in Docker** for consistency
+2. **Use `--keepdb` during development** for speed
+3. **Run coverage reports before commits**
+4. **Clean up regularly** to free disk space
+5. **Monitor test performance** and optimize slow tests
+6. **Use parallel execution** for large test suites
+7. **Keep test data realistic** but minimal
+8. **Test error conditions** as well as happy paths
+
+## Resources
+
+- **Django Testing Documentation**: https://docs.djangoproject.com/en/5.1/topics/testing/
+- **Coverage.py Documentation**: https://coverage.readthedocs.io/
+- **Docker Compose Reference**: https://docs.docker.com/compose/
+- **CMC Test Suite Documentation**: See individual test modules for detailed information
\ No newline at end of file
diff --git a/app/config/bootstrap.php b/app/config/bootstrap.php
index 6b9502fc..35497bd2 100755
--- a/app/config/bootstrap.php
+++ b/app/config/bootstrap.php
@@ -43,4 +43,6 @@
*
*/
//EOF
+
+require_once(dirname(__FILE__) . '/php7_compat.php');
?>
\ No newline at end of file
diff --git a/app/config/core.php b/app/config/core.php
index dc407e5e..2d86441b 100644
--- a/app/config/core.php
+++ b/app/config/core.php
@@ -168,6 +168,28 @@ Configure::write('Security.salt', 'uiPxR3MzVXAID5zucbxLdxP4TX33buPoCWZr4JfroGoaE
Configure::write('Acl.classname', 'DbAcl');
Configure::write('Acl.database', 'default');
+/**
+ * Tailscale Authentication Configuration
+ *
+ * Enable Tailscale HTTP header authentication support
+ * When enabled, the system will check for Tailscale authentication headers
+ * before falling back to HTTP Basic Auth
+ */
+Configure::write('Tailscale.enabled', true);
+
+/**
+ * Auto-create users from Tailscale authentication
+ * When enabled, users authenticated via Tailscale headers will be
+ * automatically created if they don't exist in the database
+ */
+Configure::write('Tailscale.autoCreateUsers', false);
+
+/**
+ * Default access level for auto-created Tailscale users
+ * Options: 'user', 'manager', 'admin'
+ */
+Configure::write('Tailscale.defaultAccessLevel', 'user');
+
diff --git a/app/config/php7_compat.php b/app/config/php7_compat.php
new file mode 100644
index 00000000..3bab09f6
--- /dev/null
+++ b/app/config/php7_compat.php
@@ -0,0 +1,96 @@
+action, $this->allowedActions)) {
+ error_log('[AUTH_BYPASS] Action ' . $this->action . ' allowed without authentication');
+ return;
+ }
- // Find the user that matches the HTTP basic auth user
- $user = $this->User->find('first', array('recursive' => 0, 'conditions' => array('User.username'=>$_SERVER["PHP_AUTH_USER"])));
+ $user = null;
+
+ // Check if Tailscale authentication is enabled
+ if (Configure::read('Tailscale.enabled')) {
+ error_log('[WEBAUTH] Checking web authentication headers');
+ error_log('X-Webauth-User: ' . (isset($_SERVER['HTTP_X_WEBAUTH_USER']) ? $_SERVER['HTTP_X_WEBAUTH_USER'] : 'not set'));
+ error_log('X-Webauth-Name: ' . (isset($_SERVER['HTTP_X_WEBAUTH_NAME']) ? $_SERVER['HTTP_X_WEBAUTH_NAME'] : 'not set'));
+ // Check for web authentication headers
+ $tailscaleLogin = isset($_SERVER['HTTP_X_WEBAUTH_USER']) ? $_SERVER['HTTP_X_WEBAUTH_USER'] : null;
+ $tailscaleName = isset($_SERVER['HTTP_X_WEBAUTH_NAME']) ? $_SERVER['HTTP_X_WEBAUTH_NAME'] : null;
+
+ if ($tailscaleLogin) {
+ // Log web authentication attempt
+ error_log('[WEBAUTH] Attempting authentication for: ' . $tailscaleLogin);
+
+ // Try to find user by email address from web auth header
+ $user = $this->User->find('first', array(
+ 'recursive' => 0,
+ 'conditions' => array('User.email' => $tailscaleLogin)
+ ));
+
+ // If user not found and auto-creation is enabled, create a new user
+ if (!$user && Configure::read('Tailscale.autoCreateUsers')) {
+ // Parse the name
+ $firstName = '';
+ $lastName = '';
+ if ($tailscaleName) {
+ $nameParts = explode(' ', $tailscaleName);
+ $firstName = $nameParts[0];
+ if (count($nameParts) > 1) {
+ array_shift($nameParts);
+ $lastName = implode(' ', $nameParts);
+ }
+ }
+
+ $userData = array(
+ 'User' => array(
+ 'email' => $tailscaleLogin,
+ 'username' => $tailscaleLogin,
+ 'first_name' => $firstName,
+ 'last_name' => $lastName,
+ 'type' => 'user',
+ 'access_level' => Configure::read('Tailscale.defaultAccessLevel'),
+ 'enabled' => 1,
+ 'by_vault' => 0
+ )
+ );
+ $this->User->create();
+ if ($this->User->save($userData)) {
+ $user = $this->User->find('first', array(
+ 'recursive' => 0,
+ 'conditions' => array('User.id' => $this->User->id)
+ ));
+ error_log('[WEBAUTH] Created new user: ' . $tailscaleLogin);
+ } else {
+ error_log('[WEBAUTH] Failed to create user: ' . $tailscaleLogin);
+ }
+ }
+ }
+ }
+
+ // Fall back to HTTP basic auth if no Tailscale auth or user not found
+ if (!$user && isset($_SERVER["PHP_AUTH_USER"])) {
+ error_log('[BASIC_AUTH] Attempting authentication for: ' . $_SERVER["PHP_AUTH_USER"]);
+ $user = $this->User->find('first', array(
+ 'recursive' => 0,
+ 'conditions' => array('User.username' => $_SERVER["PHP_AUTH_USER"])
+ ));
+ }
+
+ if ($user) {
+ error_log('[AUTH_SUCCESS] User authenticated: ' . $user['User']['email']);
+ } else {
+ error_log('[AUTH_FAILED] No valid authentication found');
+
+ // Check if we have any authentication attempt (Web Auth or Basic Auth)
+ $hasAuthAttempt = (Configure::read('Tailscale.enabled') && isset($_SERVER['HTTP_X_WEBAUTH_USER'])) ||
+ isset($_SERVER["PHP_AUTH_USER"]);
+
+ // If there was an authentication attempt but it failed, return 401
+ if ($hasAuthAttempt) {
+ header('HTTP/1.1 401 Unauthorized');
+ header('Content-Type: text/plain');
+ echo "Authentication failed. Invalid credentials or user not found.";
+ error_log('[AUTH_FAILED] Returning 401 Unauthorized');
+ exit();
+ }
+
+ // If no authentication headers at all, request authentication
+ header('WWW-Authenticate: Basic realm="CMC Sales System"');
+ header('HTTP/1.1 401 Unauthorized');
+ header('Content-Type: text/plain');
+ echo "Authentication required. Please provide valid credentials.";
+ error_log('[AUTH_FAILED] No authentication headers, requesting authentication');
+ exit();
+ }
+
$this->set("currentuser", $user);
if($this->RequestHandler->isAjax()) {
@@ -52,7 +157,29 @@ class AppController extends Controller {
* @return array - the currently logged in user.
*/
function getCurrentUser() {
- $user = $this->User->find('first', array('recursive' => 0, 'conditions' => array('User.username'=>$_SERVER["PHP_AUTH_USER"])));
+ $user = null;
+
+ // Check if Tailscale authentication is enabled
+ if (Configure::read('Tailscale.enabled')) {
+ $tailscaleLogin = isset($_SERVER['HTTP_X_WEBAUTH_USER']) ? $_SERVER['HTTP_X_WEBAUTH_USER'] : null;
+
+ if ($tailscaleLogin) {
+ // Try to find user by email address from web auth header
+ $user = $this->User->find('first', array(
+ 'recursive' => 0,
+ 'conditions' => array('User.email' => $tailscaleLogin)
+ ));
+ }
+ }
+
+ // Fall back to HTTP basic auth if no Tailscale auth or user not found
+ if (!$user && isset($_SERVER["PHP_AUTH_USER"])) {
+ $user = $this->User->find('first', array(
+ 'recursive' => 0,
+ 'conditions' => array('User.username' => $_SERVER["PHP_AUTH_USER"])
+ ));
+ }
+
return $user;
}
diff --git a/conf/apache-vhost.conf b/conf/apache-vhost.conf
index f326444d..ddbe478d 100644
--- a/conf/apache-vhost.conf
+++ b/conf/apache-vhost.conf
@@ -1,10 +1,11 @@
- ServerName localhost
- DocumentRoot /var/www/cmc-sales/app/webroot
- DirectoryIndex index.php
+DocumentRoot /var/www/cmc-sales/app/webroot
-
- AllowOverride All
- Require all granted
-
+# Send Apache logs to stdout/stderr for Docker
+ErrorLog /dev/stderr
+CustomLog /dev/stdout combined
+
+# Ensure PHP errors are also logged
+php_flag log_errors on
+php_value error_log /dev/stderr
\ No newline at end of file
diff --git a/conf/nginx-production.conf b/conf/nginx-production.conf
new file mode 100644
index 00000000..bd9829b3
--- /dev/null
+++ b/conf/nginx-production.conf
@@ -0,0 +1,152 @@
+# Production environment configuration
+upstream cmc_php_production {
+ server cmc-php-production:80;
+ keepalive 32;
+}
+
+upstream cmc_go_production {
+ server cmc-go-production:8080;
+ keepalive 32;
+}
+
+# Rate limiting
+limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
+limit_req_zone $binary_remote_addr zone=api:10m rate=30r/m;
+
+server {
+ server_name cmc.springupsoftware.com;
+
+ # Basic auth for production
+ auth_basic_user_file /etc/nginx/userpasswd;
+ auth_basic "CMC Sales - Restricted Access";
+
+ # Security headers
+ add_header X-Frame-Options DENY;
+ add_header X-Content-Type-Options nosniff;
+ add_header X-XSS-Protection "1; mode=block";
+ add_header Referrer-Policy "strict-origin-when-cross-origin";
+ add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
+
+ # Hide server information
+ server_tokens off;
+
+ # Request size limits
+ client_max_body_size 50M;
+ client_body_timeout 30s;
+ client_header_timeout 30s;
+
+ # Compression
+ gzip on;
+ gzip_vary on;
+ gzip_min_length 1024;
+ gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json;
+
+ # CakePHP legacy app routes
+ location / {
+ limit_req zone=api burst=10 nodelay;
+
+ proxy_pass http://cmc_php_production;
+ proxy_read_timeout 300s;
+ proxy_connect_timeout 10s;
+ proxy_send_timeout 30s;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ # Buffer settings for better performance
+ proxy_buffering on;
+ proxy_buffer_size 128k;
+ proxy_buffers 4 256k;
+ proxy_busy_buffers_size 256k;
+ }
+
+ # Go API routes
+ location /api/ {
+ limit_req zone=api burst=20 nodelay;
+
+ proxy_pass http://cmc_go_production;
+ proxy_read_timeout 300s;
+ proxy_connect_timeout 10s;
+ proxy_send_timeout 30s;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ # Buffer settings for better performance
+ proxy_buffering on;
+ proxy_buffer_size 128k;
+ proxy_buffers 4 256k;
+ proxy_busy_buffers_size 256k;
+ }
+
+ # Go page routes for emails
+ location ~ ^/(emails|customers|products|purchase-orders|enquiries|documents) {
+ limit_req zone=api burst=15 nodelay;
+
+ proxy_pass http://cmc_go_production;
+ proxy_read_timeout 300s;
+ proxy_connect_timeout 10s;
+ proxy_send_timeout 30s;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ # Buffer settings for better performance
+ proxy_buffering on;
+ proxy_buffer_size 128k;
+ proxy_buffers 4 256k;
+ proxy_busy_buffers_size 256k;
+ }
+
+ # Static files from Go app with aggressive caching
+ location /static/ {
+ proxy_pass http://cmc_go_production;
+ proxy_cache_valid 200 24h;
+ add_header Cache-Control "public, max-age=86400";
+ expires 1d;
+ }
+
+ # PDF files with caching
+ location /pdf/ {
+ proxy_pass http://cmc_go_production;
+ proxy_cache_valid 200 1h;
+ add_header Cache-Control "public, max-age=3600";
+ expires 1h;
+ }
+
+ # Health check endpoints (no rate limiting)
+ location /health {
+ proxy_pass http://cmc_go_production/api/v1/health;
+ access_log off;
+ }
+
+ # Block common attack patterns
+ location ~ /\. {
+ deny all;
+ access_log off;
+ log_not_found off;
+ }
+
+ location ~ ~$ {
+ deny all;
+ access_log off;
+ log_not_found off;
+ }
+
+ # Error pages
+ error_page 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+
+ # Custom error page for rate limiting
+ error_page 429 /429.html;
+ location = /429.html {
+ root /usr/share/nginx/html;
+ }
+
+ listen 80;
+}
\ No newline at end of file
diff --git a/conf/nginx-proxy.conf b/conf/nginx-proxy.conf
new file mode 100644
index 00000000..e1652854
--- /dev/null
+++ b/conf/nginx-proxy.conf
@@ -0,0 +1,137 @@
+user nginx;
+worker_processes auto;
+error_log /var/log/nginx/error.log warn;
+pid /var/run/nginx.pid;
+
+events {
+ worker_connections 1024;
+ use epoll;
+ multi_accept on;
+}
+
+http {
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+
+ # Logging
+ log_format main '$remote_addr - $remote_user [$time_local] "$request" '
+ '$status $body_bytes_sent "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+ access_log /var/log/nginx/access.log main;
+
+ # Performance
+ sendfile on;
+ tcp_nopush on;
+ tcp_nodelay on;
+ keepalive_timeout 65;
+ types_hash_max_size 2048;
+ server_tokens off;
+
+ # Gzip
+ gzip on;
+ gzip_vary on;
+ gzip_min_length 1024;
+ gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json;
+
+ # Rate limiting
+ limit_req_zone $binary_remote_addr zone=global:10m rate=10r/s;
+
+ # SSL configuration
+ ssl_protocols TLSv1.2 TLSv1.3;
+ ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
+ ssl_prefer_server_ciphers off;
+ ssl_session_cache shared:SSL:10m;
+ ssl_session_timeout 10m;
+
+ # Upstream servers
+ upstream cmc_staging {
+ server nginx-staging:80;
+ keepalive 32;
+ }
+
+ upstream cmc_production {
+ server nginx-production:80;
+ keepalive 32;
+ }
+
+ # Redirect HTTP to HTTPS
+ server {
+ listen 80;
+ server_name cmc.springupsoftware.com staging.cmc.springupsoftware.com;
+
+ # ACME challenge for Lego
+ location /.well-known/acme-challenge/ {
+ root /var/www/acme-challenge;
+ try_files $uri =404;
+ }
+
+ # Redirect all other traffic to HTTPS
+ location / {
+ return 301 https://$server_name$request_uri;
+ }
+ }
+
+ # Production HTTPS
+ server {
+ listen 443 ssl http2;
+ server_name cmc.springupsoftware.com;
+
+ ssl_certificate /etc/ssl/certs/cmc.springupsoftware.com.crt;
+ ssl_certificate_key /etc/ssl/certs/cmc.springupsoftware.com.key;
+
+ # Security headers
+ add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
+ add_header X-Frame-Options DENY;
+ add_header X-Content-Type-Options nosniff;
+ add_header X-XSS-Protection "1; mode=block";
+
+ # Rate limiting
+ limit_req zone=global burst=20 nodelay;
+
+ location / {
+ proxy_pass http://cmc_production;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ # Buffer settings
+ proxy_buffering on;
+ proxy_buffer_size 128k;
+ proxy_buffers 4 256k;
+ proxy_busy_buffers_size 256k;
+ }
+ }
+
+ # Staging HTTPS
+ server {
+ listen 443 ssl http2;
+ server_name staging.cmc.springupsoftware.com;
+
+ ssl_certificate /etc/ssl/certs/staging.cmc.springupsoftware.com.crt;
+ ssl_certificate_key /etc/ssl/certs/staging.cmc.springupsoftware.com.key;
+
+ # Security headers (less strict for staging)
+ add_header X-Frame-Options DENY;
+ add_header X-Content-Type-Options nosniff;
+ add_header X-Environment "STAGING";
+
+ # Rate limiting (more lenient for staging)
+ limit_req zone=global burst=50 nodelay;
+
+ location / {
+ proxy_pass http://cmc_staging;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+
+ # Buffer settings
+ proxy_buffering on;
+ proxy_buffer_size 128k;
+ proxy_buffers 4 256k;
+ proxy_busy_buffers_size 256k;
+ }
+ }
+}
\ No newline at end of file
diff --git a/conf/nginx-staging.conf b/conf/nginx-staging.conf
new file mode 100644
index 00000000..ac7ed8d8
--- /dev/null
+++ b/conf/nginx-staging.conf
@@ -0,0 +1,89 @@
+# Staging environment configuration
+upstream cmc_php_staging {
+ server cmc-php-staging:80;
+}
+
+upstream cmc_go_staging {
+ server cmc-go-staging:8080;
+}
+
+server {
+ server_name staging.cmc.springupsoftware.com;
+
+ # Basic auth for staging
+ auth_basic_user_file /etc/nginx/userpasswd;
+ auth_basic "CMC Sales Staging - Restricted Access";
+
+ # Security headers
+ add_header X-Frame-Options DENY;
+ add_header X-Content-Type-Options nosniff;
+ add_header X-XSS-Protection "1; mode=block";
+ add_header Referrer-Policy "strict-origin-when-cross-origin";
+
+ # Staging banner
+ add_header X-Environment "STAGING";
+
+ # CakePHP legacy app routes
+ location / {
+ proxy_pass http://cmc_php_staging;
+ proxy_read_timeout 300s;
+ proxy_connect_timeout 10s;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+ proxy_set_header X-Environment "staging";
+ }
+
+ # Go API routes
+ location /api/ {
+ proxy_pass http://cmc_go_staging;
+ proxy_read_timeout 300s;
+ proxy_connect_timeout 10s;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+ proxy_set_header X-Environment "staging";
+ }
+
+ # Go page routes for emails
+ location ~ ^/(emails|customers|products|purchase-orders|enquiries|documents) {
+ proxy_pass http://cmc_go_staging;
+ proxy_read_timeout 300s;
+ proxy_connect_timeout 10s;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+ proxy_set_header X-Environment "staging";
+ }
+
+ # Static files from Go app
+ location /static/ {
+ proxy_pass http://cmc_go_staging;
+ proxy_cache_valid 200 1h;
+ add_header Cache-Control "public, max-age=3600";
+ }
+
+ # PDF files
+ location /pdf/ {
+ proxy_pass http://cmc_go_staging;
+ proxy_cache_valid 200 1h;
+ add_header Cache-Control "public, max-age=3600";
+ }
+
+ # Health check endpoints
+ location /health {
+ proxy_pass http://cmc_go_staging/api/v1/health;
+ access_log off;
+ }
+
+ # Error pages
+ error_page 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+
+ listen 80;
+}
\ No newline at end of file
diff --git a/conf/php.ini b/conf/php.ini
index 15a17b3c..9250d1a4 100644
--- a/conf/php.ini
+++ b/conf/php.ini
@@ -633,7 +633,8 @@ html_errors = Off
; empty.
; http://php.net/error-log
; Example:
-error_log = /var/log/php_errors.log
+; For Docker: Send errors to stderr so they appear in docker logs
+error_log = /dev/stderr
; Log errors to syslog (Event Log on NT, not valid in Windows 95).
;error_log = syslog
diff --git a/deploy-production.sh b/deploy-production.sh
new file mode 100755
index 00000000..8908f04c
--- /dev/null
+++ b/deploy-production.sh
@@ -0,0 +1,203 @@
+#!/bin/bash
+
+# Production Deployment Script for CMC Sales
+# This script deploys the application to sales.cmctechnologies.com.au
+# Based on .gitlab-ci.yml deployment steps
+
+set -e # Exit on error
+
+# Color codes for output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m' # No Color
+
+# Configuration
+PRODUCTION_HOST="cmc@sales.cmctechnologies.com.au"
+PRODUCTION_DIR="/home/cmc/cmc-sales"
+CURRENT_BRANCH=$(git branch --show-current)
+
+echo -e "${GREEN}========================================${NC}"
+echo -e "${GREEN}CMC Sales Production Deployment Script${NC}"
+echo -e "${GREEN}========================================${NC}"
+echo ""
+
+# Check if we're on master branch
+if [ "$CURRENT_BRANCH" != "master" ]; then
+ echo -e "${YELLOW}Warning: You are not on the master branch.${NC}"
+ echo -e "${YELLOW}Current branch: $CURRENT_BRANCH${NC}"
+ read -p "Do you want to continue deployment from $CURRENT_BRANCH? (y/N): " confirm
+ if [ "$confirm" != "y" ] && [ "$confirm" != "Y" ]; then
+ echo -e "${RED}Deployment cancelled.${NC}"
+ exit 1
+ fi
+fi
+
+# Check for uncommitted changes
+if ! git diff-index --quiet HEAD --; then
+ echo -e "${YELLOW}Warning: You have uncommitted changes.${NC}"
+ git status --short
+ read -p "Do you want to continue? (y/N): " confirm
+ if [ "$confirm" != "y" ] && [ "$confirm" != "Y" ]; then
+ echo -e "${RED}Deployment cancelled.${NC}"
+ exit 1
+ fi
+fi
+
+# Get latest commit hash for build arg
+COMMIT_HASH=$(git rev-parse --short HEAD)
+echo -e "${GREEN}Deploying commit: $COMMIT_HASH${NC}"
+echo ""
+
+# Push latest changes to origin
+echo -e "${YELLOW}Step 1: Pushing latest changes to origin...${NC}"
+git push origin $CURRENT_BRANCH
+if [ $? -eq 0 ]; then
+ echo -e "${GREEN}✓ Changes pushed successfully${NC}"
+else
+ echo -e "${RED}✗ Failed to push changes${NC}"
+ exit 1
+fi
+echo ""
+
+# SSH to production and execute deployment
+echo -e "${YELLOW}Step 2: Connecting to production server...${NC}"
+echo -e "${YELLOW}Executing deployment on $PRODUCTION_HOST${NC}"
+echo ""
+
+ssh -T $PRODUCTION_HOST << 'ENDSSH'
+set -e
+
+# Color codes for remote output
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+YELLOW='\033[1;33m'
+NC='\033[0m'
+
+echo -e "${GREEN}Connected to production server${NC}"
+echo ""
+
+# Navigate to project directory
+echo -e "${YELLOW}Step 3: Navigating to project directory...${NC}"
+cd /home/cmc/cmc-sales
+pwd
+echo ""
+
+# Pull latest changes
+echo -e "${YELLOW}Step 4: Pulling latest changes from Git...${NC}"
+git pull origin master
+if [ $? -eq 0 ]; then
+ echo -e "${GREEN}✓ Git pull successful${NC}"
+else
+ echo -e "${RED}✗ Git pull failed${NC}"
+ exit 1
+fi
+echo ""
+
+# Get commit hash for build
+COMMIT_HASH=$(git rev-parse --short HEAD)
+echo -e "${GREEN}Building from commit: $COMMIT_HASH${NC}"
+
+# Copy userpasswd file
+echo -e "${YELLOW}Step 5: Copying userpasswd file...${NC}"
+cp /home/cmc/cmc-sales/userpasswd /home/cmc/userpasswd
+if [ $? -eq 0 ]; then
+ echo -e "${GREEN}✓ userpasswd file copied${NC}"
+else
+ echo -e "${RED}✗ Failed to copy userpasswd file${NC}"
+ exit 1
+fi
+echo ""
+
+# Build Docker image
+echo -e "${YELLOW}Step 6: Building Docker image...${NC}"
+docker build --build-arg=COMMIT=$COMMIT_HASH . -t "cmc:latest"
+if [ $? -eq 0 ]; then
+ echo -e "${GREEN}✓ Docker image built successfully${NC}"
+else
+ echo -e "${RED}✗ Docker build failed${NC}"
+ exit 1
+fi
+echo ""
+
+# Stop existing container
+echo -e "${YELLOW}Step 7: Stopping existing container...${NC}"
+export ID=$(docker ps -q --filter ancestor=cmc:latest)
+if [ ! -z "$ID" ]; then
+ docker kill $ID
+ echo -e "${GREEN}✓ Existing container stopped${NC}"
+ sleep 1
+else
+ echo -e "${YELLOW}No existing container found${NC}"
+fi
+echo ""
+
+# Run new container
+echo -e "${YELLOW}Step 8: Starting new container...${NC}"
+docker run -d --restart always -p 127.0.0.1:8888:80 \
+ --mount type=bind,source=/mnt/vault/pdf,target=/var/www/cmc-sales/app/webroot/pdf \
+ --mount type=bind,source=/mnt/vault/attachments_files,target=/var/www/cmc-sales/app/webroot/attachments_files \
+ --mount type=bind,source=/mnt/vault/emails,target=/var/www/emails \
+ --mount type=bind,source=/mnt/vault/vaultmsgs,target=/var/www/vaultmsgs \
+ cmc:latest
+
+if [ $? -eq 0 ]; then
+ echo -e "${GREEN}✓ New container started successfully${NC}"
+else
+ echo -e "${RED}✗ Failed to start new container${NC}"
+ exit 1
+fi
+echo ""
+
+# Verify container is running
+echo -e "${YELLOW}Step 9: Verifying deployment...${NC}"
+sleep 2
+NEW_ID=$(docker ps -q --filter ancestor=cmc:latest)
+if [ ! -z "$NEW_ID" ]; then
+ echo -e "${GREEN}✓ Container is running with ID: $NEW_ID${NC}"
+ docker ps --filter ancestor=cmc:latest --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}"
+else
+ echo -e "${RED}✗ Container is not running!${NC}"
+ exit 1
+fi
+echo ""
+
+# Show recent logs
+echo -e "${YELLOW}Step 10: Recent container logs:${NC}"
+docker logs --tail 20 $NEW_ID
+echo ""
+
+echo -e "${GREEN}========================================${NC}"
+echo -e "${GREEN}✓ Deployment completed successfully!${NC}"
+echo -e "${GREEN}========================================${NC}"
+
+ENDSSH
+
+if [ $? -eq 0 ]; then
+ echo ""
+ echo -e "${GREEN}========================================${NC}"
+ echo -e "${GREEN}✓ Production deployment successful!${NC}"
+ echo -e "${GREEN}========================================${NC}"
+ echo ""
+ echo -e "${GREEN}Application is running at:${NC}"
+ echo -e "${GREEN} Internal: http://127.0.0.1:8888${NC}"
+ echo -e "${GREEN} External: https://sales.cmctechnologies.com.au${NC}"
+ echo ""
+ echo -e "${YELLOW}To view live logs:${NC}"
+ echo " ssh $PRODUCTION_HOST 'docker logs -f \$(docker ps -q --filter ancestor=cmc:latest)'"
+ echo ""
+ echo -e "${YELLOW}To rollback if needed:${NC}"
+ echo " ssh $PRODUCTION_HOST 'docker run -d --restart always -p 127.0.0.1:8888:80 [previous-image-id]'"
+else
+ echo ""
+ echo -e "${RED}========================================${NC}"
+ echo -e "${RED}✗ Deployment failed!${NC}"
+ echo -e "${RED}========================================${NC}"
+ echo ""
+ echo -e "${YELLOW}To check the status:${NC}"
+ echo " ssh $PRODUCTION_HOST 'docker ps -a'"
+ echo ""
+ echo -e "${YELLOW}To view logs:${NC}"
+ echo " ssh $PRODUCTION_HOST 'docker logs \$(docker ps -aq | head -1)'"
+ exit 1
+fi
\ No newline at end of file
diff --git a/docker-compose.caddy-production.yml b/docker-compose.caddy-production.yml
new file mode 100644
index 00000000..882d517a
--- /dev/null
+++ b/docker-compose.caddy-production.yml
@@ -0,0 +1,85 @@
+version: '3.8'
+
+services:
+ cmc-php-production:
+ build:
+ context: .
+ dockerfile: Dockerfile
+ platform: linux/amd64
+ container_name: cmc-php-production
+ depends_on:
+ - db-production
+ ports:
+ - "127.0.0.1:8093:80" # Only accessible from localhost
+ volumes:
+ - production_pdf_data:/var/www/cmc-sales/app/webroot/pdf
+ - production_attachments_data:/var/www/cmc-sales/app/webroot/attachments_files
+ restart: unless-stopped
+ environment:
+ - APP_ENV=production
+ deploy:
+ resources:
+ limits:
+ cpus: '2.0'
+ memory: 2G
+ reservations:
+ cpus: '0.5'
+ memory: 512M
+
+ db-production:
+ image: mariadb:latest
+ container_name: cmc-db-production
+ environment:
+ MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD_PRODUCTION}
+ MYSQL_DATABASE: cmc
+ MYSQL_USER: cmc
+ MYSQL_PASSWORD: ${DB_PASSWORD_PRODUCTION}
+ volumes:
+ - production_db_data:/var/lib/mysql
+ - ./backups:/backups:ro
+ restart: unless-stopped
+ # No external port exposure for security
+ deploy:
+ resources:
+ limits:
+ cpus: '2.0'
+ memory: 4G
+ reservations:
+ cpus: '0.5'
+ memory: 1G
+
+ cmc-go-production:
+ build:
+ context: .
+ dockerfile: Dockerfile.go.production
+ container_name: cmc-go-production
+ environment:
+ DB_HOST: db-production
+ DB_PORT: 3306
+ DB_USER: cmc
+ DB_PASSWORD: ${DB_PASSWORD_PRODUCTION}
+ DB_NAME: cmc
+ PORT: 8080
+ APP_ENV: production
+ depends_on:
+ db-production:
+ condition: service_started
+ ports:
+ - "127.0.0.1:8094:8080" # Only accessible from localhost
+ volumes:
+ - production_pdf_data:/root/webroot/pdf
+ - ./credentials/production:/root/credentials:ro
+ restart: unless-stopped
+ deploy:
+ resources:
+ limits:
+ cpus: '2.0'
+ memory: 2G
+ reservations:
+ cpus: '0.5'
+ memory: 512M
+
+volumes:
+ production_db_data:
+ production_pdf_data:
+ production_attachments_data:
\ No newline at end of file
diff --git a/docker-compose.caddy-staging-ubuntu.yml b/docker-compose.caddy-staging-ubuntu.yml
new file mode 100644
index 00000000..08635661
--- /dev/null
+++ b/docker-compose.caddy-staging-ubuntu.yml
@@ -0,0 +1,65 @@
+version: '3.8'
+
+services:
+ cmc-php-staging:
+ build:
+ context: .
+ dockerfile: Dockerfile.ubuntu-php
+ platform: linux/amd64
+ container_name: cmc-php-staging
+ depends_on:
+ - db-staging
+ ports:
+ - "127.0.0.1:8091:80"
+ volumes:
+ - ./app:/var/www/cmc-sales/app
+ - staging_pdf_data:/var/www/cmc-sales/app/webroot/pdf
+ - staging_attachments_data:/var/www/cmc-sales/app/webroot/attachments_files
+ restart: unless-stopped
+ environment:
+ - APP_ENV=staging
+ - DB_HOST=db-staging
+ - DB_NAME=cmc_staging
+ - DB_USER=cmc_staging
+ - DB_PASSWORD=${DB_PASSWORD_STAGING:-staging_password}
+
+ db-staging:
+ image: mariadb:10.11
+ container_name: cmc-db-staging
+ environment:
+ MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD_STAGING:-root_password}
+ MYSQL_DATABASE: cmc_staging
+ MYSQL_USER: cmc_staging
+ MYSQL_PASSWORD: ${DB_PASSWORD_STAGING:-staging_password}
+ volumes:
+ - staging_db_data:/var/lib/mysql
+ restart: unless-stopped
+ ports:
+ - "127.0.0.1:3307:3306"
+
+ cmc-go-staging:
+ build:
+ context: .
+ dockerfile: Dockerfile.go.staging
+ container_name: cmc-go-staging
+ environment:
+ DB_HOST: db-staging
+ DB_PORT: 3306
+ DB_USER: cmc_staging
+ DB_PASSWORD: ${DB_PASSWORD_STAGING:-staging_password}
+ DB_NAME: cmc_staging
+ PORT: 8080
+ APP_ENV: staging
+ depends_on:
+ - db-staging
+ ports:
+ - "127.0.0.1:8092:8080"
+ volumes:
+ - staging_pdf_data:/root/webroot/pdf
+ - ./credentials/staging:/root/credentials:ro
+ restart: unless-stopped
+
+volumes:
+ staging_db_data:
+ staging_pdf_data:
+ staging_attachments_data:
\ No newline at end of file
diff --git a/docker-compose.caddy-staging.yml b/docker-compose.caddy-staging.yml
new file mode 100644
index 00000000..2a4856d2
--- /dev/null
+++ b/docker-compose.caddy-staging.yml
@@ -0,0 +1,61 @@
+version: '3.8'
+
+services:
+ cmc-php-staging:
+ build:
+ context: .
+ dockerfile: Dockerfile
+ platform: linux/amd64
+ container_name: cmc-php-staging
+ depends_on:
+ - db-staging
+ ports:
+ - "127.0.0.1:8091:80" # Only accessible from localhost
+ volumes:
+ - staging_pdf_data:/var/www/cmc-sales/app/webroot/pdf
+ - staging_attachments_data:/var/www/cmc-sales/app/webroot/attachments_files
+ restart: unless-stopped
+ environment:
+ - APP_ENV=staging
+
+ db-staging:
+ image: mariadb:latest
+ container_name: cmc-db-staging
+ environment:
+ MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD_STAGING}
+ MYSQL_DATABASE: cmc_staging
+ MYSQL_USER: cmc_staging
+ MYSQL_PASSWORD: ${DB_PASSWORD_STAGING}
+ volumes:
+ - staging_db_data:/var/lib/mysql
+ restart: unless-stopped
+ ports:
+ - "127.0.0.1:3307:3306" # Only accessible from localhost
+
+ cmc-go-staging:
+ build:
+ context: .
+ dockerfile: Dockerfile.go.staging
+ container_name: cmc-go-staging
+ environment:
+ DB_HOST: db-staging
+ DB_PORT: 3306
+ DB_USER: cmc_staging
+ DB_PASSWORD: ${DB_PASSWORD_STAGING}
+ DB_NAME: cmc_staging
+ PORT: 8080
+ APP_ENV: staging
+ depends_on:
+ db-staging:
+ condition: service_started
+ ports:
+ - "127.0.0.1:8092:8080" # Only accessible from localhost
+ volumes:
+ - staging_pdf_data:/root/webroot/pdf
+ - ./credentials/staging:/root/credentials:ro
+ restart: unless-stopped
+
+volumes:
+ staging_db_data:
+ staging_pdf_data:
+ staging_attachments_data:
\ No newline at end of file
diff --git a/docker-compose.production.yml b/docker-compose.production.yml
new file mode 100644
index 00000000..1a864f38
--- /dev/null
+++ b/docker-compose.production.yml
@@ -0,0 +1,105 @@
+version: '3.8'
+
+services:
+ cmc-php-production:
+ build:
+ context: .
+ dockerfile: Dockerfile
+ platform: linux/amd64
+ container_name: cmc-php-production
+ depends_on:
+ - db-production
+ volumes:
+ - production_pdf_data:/var/www/cmc-sales/app/webroot/pdf
+ - production_attachments_data:/var/www/cmc-sales/app/webroot/attachments_files
+ network_mode: bridge
+ restart: unless-stopped
+ environment:
+ - APP_ENV=production
+ # Remove development features
+ deploy:
+ resources:
+ limits:
+ cpus: '2.0'
+ memory: 2G
+ reservations:
+ cpus: '0.5'
+ memory: 512M
+
+ nginx-production:
+ image: nginx:latest
+ hostname: nginx-production
+ container_name: cmc-nginx-production
+ ports:
+ - "8080:80" # Internal port for production
+ volumes:
+ - ./conf/nginx-production.conf:/etc/nginx/conf.d/cmc-production.conf
+ - ./userpasswd:/etc/nginx/userpasswd:ro
+ depends_on:
+ - cmc-php-production
+ restart: unless-stopped
+ network_mode: bridge
+ environment:
+ - NGINX_ENVSUBST_TEMPLATE_SUFFIX=.template
+
+
+
+ db-production:
+ image: mariadb:latest
+ container_name: cmc-db-production
+ environment:
+ MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD_PRODUCTION}
+ MYSQL_DATABASE: cmc
+ MYSQL_USER: cmc
+ MYSQL_PASSWORD: ${DB_PASSWORD_PRODUCTION}
+ volumes:
+ - production_db_data:/var/lib/mysql
+ - ./backups:/backups:ro # Backup restore directory
+ network_mode: bridge
+ restart: unless-stopped
+ # No external port exposure for security
+ deploy:
+ resources:
+ limits:
+ cpus: '2.0'
+ memory: 4G
+ reservations:
+ cpus: '0.5'
+ memory: 1G
+
+ cmc-go-production:
+ build:
+ context: .
+ dockerfile: Dockerfile.go.production
+ container_name: cmc-go-production
+ environment:
+ DB_HOST: db-production
+ DB_PORT: 3306
+ DB_USER: cmc
+ DB_PASSWORD: ${DB_PASSWORD_PRODUCTION}
+ DB_NAME: cmc
+ PORT: 8080
+ APP_ENV: production
+ depends_on:
+ db-production:
+ condition: service_started
+ # No external port exposure - only through nginx
+ volumes:
+ - production_pdf_data:/root/webroot/pdf
+ - ./credentials/production:/root/credentials:ro # Production Gmail credentials
+ network_mode: bridge
+ restart: unless-stopped
+ deploy:
+ resources:
+ limits:
+ cpus: '2.0'
+ memory: 2G
+ reservations:
+ cpus: '0.5'
+ memory: 512M
+
+volumes:
+ production_db_data:
+ production_pdf_data:
+ production_attachments_data:
+
diff --git a/docker-compose.proxy.yml b/docker-compose.proxy.yml
new file mode 100644
index 00000000..616270ab
--- /dev/null
+++ b/docker-compose.proxy.yml
@@ -0,0 +1,68 @@
+# Main reverse proxy for both staging and production
+version: '3.8'
+
+services:
+ nginx-proxy:
+ image: nginx:latest
+ container_name: cmc-nginx-proxy
+ ports:
+ - "80:80"
+ - "443:443"
+ volumes:
+ - ./conf/nginx-proxy.conf:/etc/nginx/nginx.conf
+ - lego_certificates:/etc/ssl/certs:ro
+ - lego_acme_challenge:/var/www/acme-challenge:ro
+ restart: unless-stopped
+ depends_on:
+ - nginx-staging
+ - nginx-production
+ networks:
+ - proxy-network
+ - cmc-staging-network
+ - cmc-production-network
+
+ lego:
+ image: goacme/lego:latest
+ container_name: cmc-lego
+ volumes:
+ - lego_certificates:/data/certificates
+ - lego_accounts:/data/accounts
+ - lego_acme_challenge:/data/acme-challenge
+ - ./scripts:/scripts:ro
+ environment:
+ - LEGO_DISABLE_CNAME=true
+ command: sleep infinity
+ restart: unless-stopped
+ networks:
+ - proxy-network
+
+ # Import staging services
+ nginx-staging:
+ extends:
+ file: docker-compose.staging.yml
+ service: nginx-staging
+ networks:
+ - proxy-network
+ - cmc-staging-network
+
+ # Import production services
+ nginx-production:
+ extends:
+ file: docker-compose.production.yml
+ service: nginx-production
+ networks:
+ - proxy-network
+ - cmc-production-network
+
+volumes:
+ lego_certificates:
+ lego_accounts:
+ lego_acme_challenge:
+
+networks:
+ proxy-network:
+ driver: bridge
+ cmc-staging-network:
+ external: true
+ cmc-production-network:
+ external: true
\ No newline at end of file
diff --git a/docker-compose.staging.yml b/docker-compose.staging.yml
new file mode 100644
index 00000000..b2ff9b9f
--- /dev/null
+++ b/docker-compose.staging.yml
@@ -0,0 +1,80 @@
+version: '3.8'
+
+services:
+
+ cmc-php-staging:
+ build:
+ context: .
+ dockerfile: Dockerfile
+ platform: linux/amd64
+ container_name: cmc-php-staging
+ depends_on:
+ - db-staging
+ volumes:
+ - staging_pdf_data:/var/www/cmc-sales/app/webroot/pdf
+ - staging_attachments_data:/var/www/cmc-sales/app/webroot/attachments_files
+ network_mode: bridge
+ restart: unless-stopped
+ environment:
+ - APP_ENV=staging
+
+ nginx-staging:
+ image: nginx:latest
+ hostname: nginx-staging
+ container_name: cmc-nginx-staging
+ ports:
+ - "8081:80" # Internal port for staging
+ volumes:
+ - ./conf/nginx-staging.conf:/etc/nginx/conf.d/cmc-staging.conf
+ - ./userpasswd:/etc/nginx/userpasswd:ro
+ depends_on:
+ - cmc-php-staging
+ restart: unless-stopped
+ network_mode: bridge
+ environment:
+ - NGINX_ENVSUBST_TEMPLATE_SUFFIX=.template
+
+ db-staging:
+ image: mariadb:latest
+ container_name: cmc-db-staging
+ environment:
+ MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD_STAGING}
+ MYSQL_DATABASE: cmc_staging
+ MYSQL_USER: cmc_staging
+ MYSQL_PASSWORD: ${DB_PASSWORD_STAGING}
+ volumes:
+ - staging_db_data:/var/lib/mysql
+ network_mode: bridge
+ restart: unless-stopped
+ ports:
+ - "3307:3306" # Different port for staging DB access
+
+ cmc-go-staging:
+ build:
+ context: .
+ dockerfile: Dockerfile.go.staging
+ container_name: cmc-go-staging
+ environment:
+ DB_HOST: db-staging
+ DB_PORT: 3306
+ DB_USER: cmc_staging
+ DB_PASSWORD: ${DB_PASSWORD_STAGING}
+ DB_NAME: cmc_staging
+ PORT: 8080
+ APP_ENV: staging
+ depends_on:
+ db-staging:
+ condition: service_started
+ ports:
+ - "8082:8080" # Direct access for testing
+ volumes:
+ - staging_pdf_data:/root/webroot/pdf
+ - ./credentials/staging:/root/credentials:ro # Staging Gmail credentials
+ network_mode: bridge
+ restart: unless-stopped
+
+volumes:
+ staging_db_data:
+ staging_pdf_data:
+ staging_attachments_data:
+
diff --git a/go-app/.gitignore b/go-app/.gitignore
index 56c712b8..ea2b3066 100644
--- a/go-app/.gitignore
+++ b/go-app/.gitignore
@@ -30,4 +30,11 @@ vendor/
# OS specific files
.DS_Store
-Thumbs.db
\ No newline at end of file
+Thumbs.db
+
+# Goose database migration config
+goose.env
+
+# Gmail OAuth credentials - NEVER commit these!
+credentials.json
+token.json
diff --git a/go-app/MIGRATIONS.md b/go-app/MIGRATIONS.md
new file mode 100644
index 00000000..012636f7
--- /dev/null
+++ b/go-app/MIGRATIONS.md
@@ -0,0 +1,70 @@
+# Database Migrations with Goose
+
+This document explains how to use goose for database migrations in the CMC Sales Go application.
+
+## Setup
+
+1. **Install goose**:
+ ```bash
+ make install
+ ```
+
+2. **Configure database connection**:
+ ```bash
+ cp goose.env.example goose.env
+ # Edit goose.env with your database credentials
+ ```
+
+## Migration Commands
+
+### Run Migrations
+```bash
+# Run all pending migrations
+make migrate
+
+# Check migration status
+make migrate-status
+```
+
+### Rollback Migrations
+```bash
+# Rollback the last migration
+make migrate-down
+```
+
+### Create New Migrations
+```bash
+# Create a new migration file
+make migrate-create name=add_new_feature
+```
+
+## Migration Structure
+
+Migrations are stored in `sql/migrations/` and follow this naming convention:
+- `001_add_gmail_fields.sql`
+- `002_add_new_feature.sql`
+
+Each migration file contains:
+```sql
+-- +goose Up
+-- Your upgrade SQL here
+
+-- +goose Down
+-- Your rollback SQL here
+```
+
+## Configuration Files
+
+- `goose.env` - Database connection settings (gitignored)
+- `goose.env.example` - Template for goose.env
+
+## Current Migrations
+
+1. **001_add_gmail_fields.sql** - Adds Gmail integration fields to emails and email_attachments tables
+
+## Tips
+
+- Always test migrations on a backup database first
+- Use `make migrate-status` to check current state
+- Migrations are atomic - if one fails, none are applied
+- Each migration should be reversible with a corresponding Down section
\ No newline at end of file
diff --git a/go-app/Makefile b/go-app/Makefile
index 24df33e5..66a3dea9 100644
--- a/go-app/Makefile
+++ b/go-app/Makefile
@@ -11,6 +11,7 @@ install: ## Install dependencies
go env -w GOPRIVATE=code.springupsoftware.com
go mod download
go install github.com/sqlc-dev/sqlc/cmd/sqlc@latest
+ go install github.com/pressly/goose/v3/cmd/goose@latest
.PHONY: sqlc
sqlc: ## Generate Go code from SQL queries
@@ -19,11 +20,24 @@ sqlc: ## Generate Go code from SQL queries
.PHONY: build
build: sqlc ## Build the application
go build -o bin/server cmd/server/main.go
+ go build -o bin/vault cmd/vault/main.go
+
+.PHONY: build-server
+build-server: sqlc ## Build only the server
+ go build -o bin/server cmd/server/main.go
+
+.PHONY: build-vault
+build-vault: ## Build only the vault command
+ go build -o bin/vault cmd/vault/main.go
.PHONY: run
run: ## Run the application
go run cmd/server/main.go
+.PHONY: run-vault
+run-vault: ## Run the vault command
+ go run cmd/vault/main.go
+
.PHONY: dev
dev: sqlc ## Run the application with hot reload (requires air)
air
@@ -63,4 +77,48 @@ dbshell-root: ## Connect to MariaDB as root user
exit 1; \
else \
docker compose exec -e MYSQL_PWD="$$DB_ROOT_PASSWORD" db mariadb -u root; \
+ fi
+
+.PHONY: migrate
+migrate: ## Run database migrations
+ @echo "Running database migrations..."
+ @if [ -f goose.env ]; then \
+ export $$(cat goose.env | xargs) && goose up; \
+ else \
+ echo "Error: goose.env file not found"; \
+ exit 1; \
+ fi
+
+.PHONY: migrate-down
+migrate-down: ## Rollback last migration
+ @echo "Rolling back last migration..."
+ @if [ -f goose.env ]; then \
+ export $$(cat goose.env | xargs) && goose down; \
+ else \
+ echo "Error: goose.env file not found"; \
+ exit 1; \
+ fi
+
+.PHONY: migrate-status
+migrate-status: ## Show migration status
+ @echo "Migration status:"
+ @if [ -f goose.env ]; then \
+ export $$(cat goose.env | xargs) && goose status; \
+ else \
+ echo "Error: goose.env file not found"; \
+ exit 1; \
+ fi
+
+.PHONY: migrate-create
+migrate-create: ## Create a new migration file (use: make migrate-create name=add_new_table)
+ @if [ -z "$(name)" ]; then \
+ echo "Error: Please provide a migration name. Usage: make migrate-create name=add_new_table"; \
+ exit 1; \
+ fi
+ @echo "Creating new migration: $(name)"
+ @if [ -f goose.env ]; then \
+ export $$(cat goose.env | xargs) && goose create $(name) sql; \
+ else \
+ echo "Error: goose.env file not found"; \
+ exit 1; \
fi
\ No newline at end of file
diff --git a/go-app/cmd/vault/README.md b/go-app/cmd/vault/README.md
new file mode 100644
index 00000000..e7125816
--- /dev/null
+++ b/go-app/cmd/vault/README.md
@@ -0,0 +1,140 @@
+# Vault Email Processor - Smart Proxy
+
+This is a Go rewrite of the PHP vault.php script that processes emails for the CMC Sales system. It now supports three modes: local file processing, Gmail indexing, and HTTP streaming proxy.
+
+## Key Features
+
+1. **Gmail Integration**: Index Gmail emails without downloading
+2. **Smart Proxy**: Stream email content on-demand without storing to disk
+3. **No ripmime dependency**: Uses the enmime Go library for MIME parsing
+4. **Better error handling**: Proper error handling and database transactions
+5. **Type safety**: Strongly typed Go structures
+6. **Modern email parsing**: Uses enmime for robust email parsing
+
+## Operating Modes
+
+### 1. Local Mode (Original functionality)
+Processes emails from local filesystem directories.
+
+```bash
+./vault --mode=local \
+ --emaildir=/var/www/emails \
+ --vaultdir=/var/www/vaultmsgs/new \
+ --processeddir=/var/www/vaultmsgs/cur \
+ --dbhost=127.0.0.1 \
+ --dbuser=cmc \
+ --dbpass="xVRQI&cA?7AU=hqJ!%au" \
+ --dbname=cmc
+```
+
+### 2. Gmail Index Mode
+Indexes Gmail emails without downloading content. Creates database references only.
+
+```bash
+./vault --mode=index \
+ --gmail-query="is:unread" \
+ --credentials=credentials.json \
+ --token=token.json \
+ --dbhost=127.0.0.1 \
+ --dbuser=cmc \
+ --dbpass="xVRQI&cA?7AU=hqJ!%au" \
+ --dbname=cmc
+```
+
+### 3. HTTP Server Mode
+Runs an HTTP server that streams Gmail content on-demand.
+
+```bash
+./vault --mode=serve \
+ --port=8080 \
+ --credentials=credentials.json \
+ --token=token.json \
+ --dbhost=127.0.0.1 \
+ --dbuser=cmc \
+ --dbpass="xVRQI&cA?7AU=hqJ!%au" \
+ --dbname=cmc
+```
+
+## Gmail Setup
+
+1. Enable Gmail API in Google Cloud Console
+2. Create OAuth 2.0 credentials
+3. Download credentials as `credentials.json`
+4. Run vault in any Gmail mode - it will prompt for authorization
+5. Token will be saved as `token.json` for future use
+
+## API Endpoints (Server Mode)
+
+- `GET /api/emails` - List indexed emails (metadata only)
+- `GET /api/emails/:id` - Get email metadata
+- `GET /api/emails/:id/content` - Stream email HTML/text from Gmail
+- `GET /api/emails/:id/attachments` - List attachment metadata
+- `GET /api/emails/:id/attachments/:attachmentId` - Stream attachment from Gmail
+- `GET /api/emails/:id/raw` - Stream raw email (for email clients)
+
+## Database Schema Changes
+
+Required migrations for Gmail support:
+
+```sql
+ALTER TABLE emails
+ ADD COLUMN gmail_message_id VARCHAR(255) UNIQUE,
+ ADD COLUMN gmail_thread_id VARCHAR(255),
+ ADD COLUMN is_downloaded BOOLEAN DEFAULT FALSE,
+ ADD COLUMN raw_headers TEXT;
+
+CREATE INDEX idx_gmail_message_id ON emails(gmail_message_id);
+
+ALTER TABLE email_attachments
+ ADD COLUMN gmail_attachment_id VARCHAR(255),
+ ADD COLUMN gmail_message_id VARCHAR(255);
+```
+
+## Architecture
+
+### Smart Proxy Benefits
+- **No Disk Storage**: Emails/attachments streamed directly from Gmail
+- **Low Storage Footprint**: Only metadata stored in database
+- **Fresh Content**: Always serves latest version from Gmail
+- **Scalable**: No file management overhead
+- **On-Demand**: Content fetched only when requested
+
+### Processing Flow
+1. **Index Mode**: Scans Gmail, stores metadata, creates associations
+2. **Server Mode**: Receives HTTP requests, fetches from Gmail, streams to client
+3. **Local Mode**: Original file-based processing (backwards compatible)
+
+## Build
+
+```bash
+go build -o vault cmd/vault/main.go
+```
+
+## Dependencies
+
+- github.com/jhillyerd/enmime - MIME email parsing
+- github.com/google/uuid - UUID generation
+- github.com/go-sql-driver/mysql - MySQL driver
+- github.com/gorilla/mux - HTTP router
+- golang.org/x/oauth2 - OAuth2 support
+- google.golang.org/api/gmail/v1 - Gmail API client
+
+## Database Tables Used
+
+- emails - Main email records with Gmail metadata
+- email_recipients - To/CC recipients
+- email_attachments - Attachment metadata (no file storage)
+- emails_enquiries - Email to enquiry associations
+- emails_invoices - Email to invoice associations
+- emails_purchase_orders - Email to PO associations
+- emails_jobs - Email to job associations
+- users - System users
+- enquiries, invoices, purchase_orders, jobs - For identifier matching
+
+## Gmail Query Examples
+
+- `is:unread` - Unread emails
+- `newer_than:1d` - Emails from last 24 hours
+- `from:customer@example.com` - From specific sender
+- `subject:invoice` - Subject contains "invoice"
+- `has:attachment` - Emails with attachments
\ No newline at end of file
diff --git a/go-app/cmd/vault/main.go b/go-app/cmd/vault/main.go
new file mode 100644
index 00000000..8917a204
--- /dev/null
+++ b/go-app/cmd/vault/main.go
@@ -0,0 +1,1300 @@
+package main
+
+import (
+ "bytes"
+ "context"
+ "database/sql"
+ "encoding/base64"
+ "encoding/json"
+ "flag"
+ "fmt"
+ "io/ioutil"
+ "log"
+ "net/http"
+ "os"
+ "path/filepath"
+ "regexp"
+ "strings"
+ "time"
+
+ _ "github.com/go-sql-driver/mysql"
+ "github.com/google/uuid"
+ "github.com/gorilla/mux"
+ "github.com/jhillyerd/enmime"
+ "golang.org/x/oauth2"
+ "golang.org/x/oauth2/google"
+ "google.golang.org/api/gmail/v1"
+ "google.golang.org/api/option"
+)
+
+type Config struct {
+ Mode string
+ EmailDir string
+ VaultDir string
+ ProcessedDir string
+ DBHost string
+ DBUser string
+ DBPassword string
+ DBName string
+ Port string
+ CredentialsFile string
+ TokenFile string
+ GmailQuery string
+}
+
+type VaultService struct {
+ db *sql.DB
+ config Config
+ gmailService *gmail.Service
+ indexer *GmailIndexer
+ processor *EmailProcessor
+ server *HTTPServer
+}
+
+type EmailProcessor struct {
+ db *sql.DB
+ config Config
+ enquiryMap map[string]int
+ invoiceMap map[string]int
+ poMap map[string]int
+ userMap map[string]int
+ jobMap map[string]int
+}
+
+type GmailIndexer struct {
+ db *sql.DB
+ gmailService *gmail.Service
+ processor *EmailProcessor
+}
+
+type HTTPServer struct {
+ db *sql.DB
+ gmailService *gmail.Service
+ processor *EmailProcessor
+}
+
+type EmailMetadata struct {
+ Subject string
+ From []string
+ To []string
+ CC []string
+ Date time.Time
+ GmailMessageID string
+ GmailThreadID string
+ AttachmentCount int
+ Attachments []AttachmentMeta
+}
+
+type AttachmentMeta struct {
+ Filename string
+ ContentType string
+ Size int
+ GmailAttachmentID string
+}
+
+func main() {
+ var config Config
+ flag.StringVar(&config.Mode, "mode", "serve", "Mode: index, serve, or local")
+ flag.StringVar(&config.EmailDir, "emaildir", "/var/www/emails", "Email storage directory")
+ flag.StringVar(&config.VaultDir, "vaultdir", "/var/www/vaultmsgs/new", "Vault messages directory")
+ flag.StringVar(&config.ProcessedDir, "processeddir", "/var/www/vaultmsgs/cur", "Processed messages directory")
+ flag.StringVar(&config.DBHost, "dbhost", "127.0.0.1", "Database host")
+ flag.StringVar(&config.DBUser, "dbuser", "cmc", "Database user")
+ flag.StringVar(&config.DBPassword, "dbpass", "xVRQI&cA?7AU=hqJ!%au", "Database password")
+ flag.StringVar(&config.DBName, "dbname", "cmc", "Database name")
+ flag.StringVar(&config.Port, "port", "8080", "HTTP server port")
+ flag.StringVar(&config.CredentialsFile, "credentials", "credentials.json", "Gmail credentials file")
+ flag.StringVar(&config.TokenFile, "token", "token.json", "Gmail token file")
+ flag.StringVar(&config.GmailQuery, "gmail-query", "is:unread", "Gmail search query")
+ flag.Parse()
+
+ // Connect to database
+ dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?parseTime=true",
+ config.DBUser, config.DBPassword, config.DBHost, config.DBName)
+
+ db, err := sql.Open("mysql", dsn)
+ if err != nil {
+ log.Fatal("Failed to connect to database:", err)
+ }
+ defer db.Close()
+
+ // Create processor
+ processor := &EmailProcessor{
+ db: db,
+ config: config,
+ }
+
+ if err := processor.loadMaps(); err != nil {
+ log.Fatal("Failed to load maps:", err)
+ }
+
+ // Create vault service
+ service := &VaultService{
+ db: db,
+ config: config,
+ processor: processor,
+ }
+
+ switch config.Mode {
+ case "index":
+ // Initialize Gmail service
+ gmailService, err := getGmailService(config.CredentialsFile, config.TokenFile)
+ if err != nil {
+ log.Fatal("Failed to get Gmail service:", err)
+ }
+ service.gmailService = gmailService
+
+ // Create and run indexer
+ service.indexer = &GmailIndexer{
+ db: db,
+ gmailService: gmailService,
+ processor: processor,
+ }
+
+ if err := service.indexer.IndexEmails(config.GmailQuery); err != nil {
+ log.Fatal("Failed to index emails:", err)
+ }
+
+ case "serve":
+ // Initialize Gmail service
+ gmailService, err := getGmailService(config.CredentialsFile, config.TokenFile)
+ if err != nil {
+ log.Fatal("Failed to get Gmail service:", err)
+ }
+ service.gmailService = gmailService
+
+ // Create and start HTTP server
+ service.server = &HTTPServer{
+ db: db,
+ gmailService: gmailService,
+ processor: processor,
+ }
+
+ log.Printf("Starting HTTP server on port %s", config.Port)
+ service.server.Start(config.Port)
+
+ case "local":
+ // Original file-based processing
+ if err := processor.processEmails(); err != nil {
+ log.Fatal("Failed to process emails:", err)
+ }
+
+ default:
+ log.Fatal("Invalid mode. Use: index, serve, or local")
+ }
+}
+
+// Gmail OAuth2 functions
+func getGmailService(credentialsFile, tokenFile string) (*gmail.Service, error) {
+ ctx := context.Background()
+
+ b, err := ioutil.ReadFile(credentialsFile)
+ if err != nil {
+ return nil, fmt.Errorf("unable to read client secret file: %v", err)
+ }
+
+ config, err := google.ConfigFromJSON(b, gmail.GmailReadonlyScope)
+ if err != nil {
+ return nil, fmt.Errorf("unable to parse client secret file to config: %v", err)
+ }
+
+ client := getClient(config, tokenFile)
+ srv, err := gmail.NewService(ctx, option.WithHTTPClient(client))
+ if err != nil {
+ return nil, fmt.Errorf("unable to retrieve Gmail client: %v", err)
+ }
+
+ return srv, nil
+}
+
+func getClient(config *oauth2.Config, tokFile string) *http.Client {
+ tok, err := tokenFromFile(tokFile)
+ if err != nil {
+ tok = getTokenFromWeb(config)
+ saveToken(tokFile, tok)
+ }
+ return config.Client(context.Background(), tok)
+}
+
+func getTokenFromWeb(config *oauth2.Config) *oauth2.Token {
+ authURL := config.AuthCodeURL("state-token", oauth2.AccessTypeOffline)
+ fmt.Printf("Go to the following link in your browser then type the authorization code: \n%v\n", authURL)
+
+ var authCode string
+ if _, err := fmt.Scan(&authCode); err != nil {
+ log.Fatalf("Unable to read authorization code: %v", err)
+ }
+
+ tok, err := config.Exchange(context.TODO(), authCode)
+ if err != nil {
+ log.Fatalf("Unable to retrieve token from web: %v", err)
+ }
+ return tok
+}
+
+func tokenFromFile(file string) (*oauth2.Token, error) {
+ f, err := os.Open(file)
+ if err != nil {
+ return nil, err
+ }
+ defer f.Close()
+ tok := &oauth2.Token{}
+ err = json.NewDecoder(f).Decode(tok)
+ return tok, err
+}
+
+func saveToken(path string, token *oauth2.Token) {
+ fmt.Printf("Saving credential file to: %s\n", path)
+ f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600)
+ if err != nil {
+ log.Fatalf("Unable to cache oauth token: %v", err)
+ }
+ defer f.Close()
+ json.NewEncoder(f).Encode(token)
+}
+
+// GmailIndexer methods
+func (g *GmailIndexer) IndexEmails(query string) error {
+ user := "me"
+ var pageToken string
+
+ for {
+ // List messages with query
+ call := g.gmailService.Users.Messages.List(user).Q(query).MaxResults(500)
+ if pageToken != "" {
+ call = call.PageToken(pageToken)
+ }
+
+ response, err := call.Do()
+ if err != nil {
+ return fmt.Errorf("unable to retrieve messages: %v", err)
+ }
+
+ // Process each message
+ for _, msg := range response.Messages {
+ if err := g.indexMessage(msg.Id); err != nil {
+ log.Printf("Error indexing message %s: %v", msg.Id, err)
+ continue
+ }
+ }
+
+ // Check for more pages
+ pageToken = response.NextPageToken
+ if pageToken == "" {
+ break
+ }
+
+ log.Printf("Processed %d messages, continuing with next page...", len(response.Messages))
+ }
+
+ return nil
+}
+
+func (g *GmailIndexer) indexMessage(messageID string) error {
+ // Get message metadata only
+ message, err := g.gmailService.Users.Messages.Get("me", messageID).
+ Format("METADATA").
+ MetadataHeaders("From", "To", "Cc", "Subject", "Date").
+ Do()
+ if err != nil {
+ return err
+ }
+
+ // Extract headers
+ headers := make(map[string]string)
+ for _, header := range message.Payload.Headers {
+ headers[header.Name] = header.Value
+ }
+
+ // Parse email addresses
+ toRecipients := g.processor.parseEnmimeAddresses(headers["To"])
+ fromRecipients := g.processor.parseEnmimeAddresses(headers["From"])
+ ccRecipients := g.processor.parseEnmimeAddresses(headers["Cc"])
+
+ // Check if we should save this email
+ saveThis := false
+ fromKnownUser := false
+
+ for _, email := range toRecipients {
+ if g.processor.userExists(email) {
+ saveThis = true
+ }
+ }
+
+ for _, email := range fromRecipients {
+ if g.processor.userExists(email) {
+ saveThis = true
+ fromKnownUser = true
+ }
+ }
+
+ for _, email := range ccRecipients {
+ if g.processor.userExists(email) {
+ saveThis = true
+ }
+ }
+
+ subject := headers["Subject"]
+ if subject == "" {
+ return nil // Skip emails without subject
+ }
+
+ // Check for identifiers in subject
+ foundEnquiries := g.processor.checkIdentifier(subject, g.processor.enquiryMap, "enquiry")
+ foundInvoices := g.processor.checkIdentifier(subject, g.processor.invoiceMap, "invoice")
+ foundPOs := g.processor.checkIdentifier(subject, g.processor.poMap, "purchaseorder")
+ foundJobs := g.processor.checkIdentifier(subject, g.processor.jobMap, "job")
+
+ foundIdent := len(foundEnquiries) > 0 || len(foundInvoices) > 0 ||
+ len(foundPOs) > 0 || len(foundJobs) > 0
+
+ if fromKnownUser || saveThis || foundIdent {
+ // Parse date
+ unixTime := time.Now().Unix()
+ if dateStr := headers["Date"]; dateStr != "" {
+ if t, err := time.Parse(time.RFC1123Z, dateStr); err == nil {
+ unixTime = t.Unix()
+ }
+ }
+
+ // Get recipient user IDs
+ recipientIDs := make(map[string][]int)
+ recipientIDs["to"] = g.processor.getUserIDs(toRecipients)
+ recipientIDs["from"] = g.processor.getUserIDs(fromRecipients)
+ recipientIDs["cc"] = g.processor.getUserIDs(ccRecipients)
+
+ if len(recipientIDs["from"]) == 0 {
+ return nil // Skip if no from recipient
+ }
+
+ // Marshal headers for storage
+ headerJSON, _ := json.Marshal(headers)
+
+ // Count attachments (from message metadata)
+ attachmentCount := 0
+ if message.Payload != nil {
+ attachmentCount = countAttachments(message.Payload)
+ }
+
+ fmt.Printf("Indexing message: %s - Subject: %s\n", messageID, subject)
+
+ // Save to database
+ if err := g.saveEmailMetadata(messageID, message.ThreadId, subject, string(headerJSON),
+ unixTime, recipientIDs, attachmentCount, message.Payload,
+ foundEnquiries, foundInvoices, foundPOs, foundJobs); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func countAttachments(payload *gmail.MessagePart) int {
+ count := 0
+ if payload.Body != nil && payload.Body.AttachmentId != "" {
+ count++
+ }
+ for _, part := range payload.Parts {
+ count += countAttachments(part)
+ }
+ return count
+}
+
+func (g *GmailIndexer) saveEmailMetadata(gmailMessageID, threadID, subject, headers string,
+ unixTime int64, recipientIDs map[string][]int, attachmentCount int,
+ payload *gmail.MessagePart, foundEnquiries, foundInvoices, foundPOs, foundJobs []int) error {
+
+ tx, err := g.db.Begin()
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback()
+
+ // Insert email
+ result, err := tx.Exec(
+ `INSERT INTO emails (user_id, udate, created, subject, gmail_message_id,
+ gmail_thread_id, raw_headers, is_downloaded, email_attachment_count)
+ VALUES (?, ?, NOW(), ?, ?, ?, ?, FALSE, ?)`,
+ recipientIDs["from"][0], unixTime, subject, gmailMessageID,
+ threadID, headers, attachmentCount)
+
+ if err != nil {
+ return err
+ }
+
+ emailID, err := result.LastInsertId()
+ if err != nil {
+ return err
+ }
+
+ // Insert recipients
+ for recipType, userIDs := range recipientIDs {
+ for _, userID := range userIDs {
+ if recipType == "from" {
+ continue // From is already stored in emails.user_id
+ }
+ _, err = tx.Exec(
+ "INSERT INTO email_recipients (email_id, user_id, type) VALUES (?, ?, ?)",
+ emailID, userID, recipType)
+ if err != nil {
+ return err
+ }
+ }
+ }
+
+ // Index attachment metadata
+ if payload != nil {
+ if err := g.indexAttachments(tx, emailID, gmailMessageID, payload); err != nil {
+ return err
+ }
+ }
+
+ // Insert associations
+ for _, jobID := range foundJobs {
+ _, err = tx.Exec("INSERT INTO emails_jobs (email_id, job_id) VALUES (?, ?)", emailID, jobID)
+ if err != nil {
+ return err
+ }
+ }
+
+ for _, poID := range foundPOs {
+ _, err = tx.Exec("INSERT INTO emails_purchase_orders (email_id, purchase_order_id) VALUES (?, ?)", emailID, poID)
+ if err != nil {
+ return err
+ }
+ }
+
+ for _, enqID := range foundEnquiries {
+ _, err = tx.Exec("INSERT INTO emails_enquiries (email_id, enquiry_id) VALUES (?, ?)", emailID, enqID)
+ if err != nil {
+ return err
+ }
+ }
+
+ for _, invID := range foundInvoices {
+ _, err = tx.Exec("INSERT INTO emails_invoices (email_id, invoice_id) VALUES (?, ?)", emailID, invID)
+ if err != nil {
+ return err
+ }
+ }
+
+ return tx.Commit()
+}
+
+func (g *GmailIndexer) indexAttachments(tx *sql.Tx, emailID int64, gmailMessageID string, part *gmail.MessagePart) error {
+ // Check if this part is an attachment
+ if part.Body != nil && part.Body.AttachmentId != "" {
+ filename := part.Filename
+ if filename == "" {
+ filename = "attachment"
+ }
+
+ _, err := tx.Exec(
+ `INSERT INTO email_attachments
+ (email_id, gmail_attachment_id, gmail_message_id, type, size, filename, is_message_body, created)
+ VALUES (?, ?, ?, ?, ?, ?, 0, NOW())`,
+ emailID, part.Body.AttachmentId, gmailMessageID, part.MimeType, part.Body.Size, filename)
+ if err != nil {
+ return err
+ }
+ }
+
+ // Process sub-parts
+ for _, subPart := range part.Parts {
+ if err := g.indexAttachments(tx, emailID, gmailMessageID, subPart); err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+// HTTPServer methods
+func (s *HTTPServer) Start(port string) {
+ router := mux.NewRouter()
+
+ // API routes
+ router.HandleFunc("/api/emails", s.ListEmails).Methods("GET")
+ router.HandleFunc("/api/emails/{id:[0-9]+}", s.GetEmail).Methods("GET")
+ router.HandleFunc("/api/emails/{id:[0-9]+}/content", s.StreamEmailContent).Methods("GET")
+ router.HandleFunc("/api/emails/{id:[0-9]+}/attachments", s.ListAttachments).Methods("GET")
+ router.HandleFunc("/api/emails/{id:[0-9]+}/attachments/{attachmentId:[0-9]+}", s.StreamAttachment).Methods("GET")
+ router.HandleFunc("/api/emails/{id:[0-9]+}/raw", s.StreamRawEmail).Methods("GET")
+
+ log.Fatal(http.ListenAndServe(":"+port, router))
+}
+
+func (s *HTTPServer) ListEmails(w http.ResponseWriter, r *http.Request) {
+ // TODO: Add pagination
+ rows, err := s.db.Query(`
+ SELECT id, subject, user_id, created, gmail_message_id, email_attachment_count
+ FROM emails
+ ORDER BY created DESC
+ LIMIT 100`)
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ defer rows.Close()
+
+ var emails []map[string]interface{}
+ for rows.Next() {
+ var id, userID, attachmentCount int
+ var subject, gmailMessageID string
+ var created time.Time
+
+ if err := rows.Scan(&id, &subject, &userID, &created, &gmailMessageID, &attachmentCount); err != nil {
+ continue
+ }
+
+ emails = append(emails, map[string]interface{}{
+ "id": id,
+ "subject": subject,
+ "user_id": userID,
+ "created": created,
+ "gmail_message_id": gmailMessageID,
+ "attachment_count": attachmentCount,
+ })
+ }
+
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(emails)
+}
+
+func (s *HTTPServer) GetEmail(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ emailID := vars["id"]
+
+ var gmailMessageID, subject, rawHeaders string
+ var created time.Time
+
+ err := s.db.QueryRow(`
+ SELECT gmail_message_id, subject, created, raw_headers
+ FROM emails WHERE id = ?`, emailID).
+ Scan(&gmailMessageID, &subject, &created, &rawHeaders)
+
+ if err != nil {
+ http.Error(w, "Email not found", http.StatusNotFound)
+ return
+ }
+
+ response := map[string]interface{}{
+ "id": emailID,
+ "gmail_message_id": gmailMessageID,
+ "subject": subject,
+ "created": created,
+ "headers": rawHeaders,
+ }
+
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(response)
+}
+
+func (s *HTTPServer) StreamEmailContent(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ emailID := vars["id"]
+
+ // Get Gmail message ID from database
+ var gmailMessageID string
+ err := s.db.QueryRow("SELECT gmail_message_id FROM emails WHERE id = ?", emailID).
+ Scan(&gmailMessageID)
+ if err != nil {
+ http.Error(w, "Email not found", http.StatusNotFound)
+ return
+ }
+
+ // Fetch from Gmail
+ message, err := s.gmailService.Users.Messages.Get("me", gmailMessageID).
+ Format("RAW").Do()
+ if err != nil {
+ http.Error(w, "Failed to fetch email from Gmail", http.StatusInternalServerError)
+ return
+ }
+
+ // Decode message
+ rawEmail, err := base64.URLEncoding.DecodeString(message.Raw)
+ if err != nil {
+ http.Error(w, "Failed to decode email", http.StatusInternalServerError)
+ return
+ }
+
+ // Parse with enmime
+ env, err := enmime.ReadEnvelope(bytes.NewReader(rawEmail))
+ if err != nil {
+ http.Error(w, "Failed to parse email", http.StatusInternalServerError)
+ return
+ }
+
+ // Stream HTML or Text directly to client
+ if env.HTML != "" {
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ w.Write([]byte(env.HTML))
+ } else if env.Text != "" {
+ w.Header().Set("Content-Type", "text/plain; charset=utf-8")
+ w.Write([]byte(env.Text))
+ } else {
+ http.Error(w, "No content found in email", http.StatusNotFound)
+ }
+}
+
+func (s *HTTPServer) ListAttachments(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ emailID := vars["id"]
+
+ rows, err := s.db.Query(`
+ SELECT id, filename, type, size, gmail_attachment_id
+ FROM email_attachments
+ WHERE email_id = ?`, emailID)
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ defer rows.Close()
+
+ var attachments []map[string]interface{}
+ for rows.Next() {
+ var id, size int
+ var filename, contentType, gmailAttachmentID string
+
+ if err := rows.Scan(&id, &filename, &contentType, &size, &gmailAttachmentID); err != nil {
+ continue
+ }
+
+ attachments = append(attachments, map[string]interface{}{
+ "id": id,
+ "filename": filename,
+ "content_type": contentType,
+ "size": size,
+ "gmail_attachment_id": gmailAttachmentID,
+ })
+ }
+
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(attachments)
+}
+
+func (s *HTTPServer) StreamAttachment(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ attachmentID := vars["attachmentId"]
+
+ // Get attachment info from database
+ var gmailMessageID, gmailAttachmentID, filename, contentType string
+ err := s.db.QueryRow(`
+ SELECT gmail_message_id, gmail_attachment_id, filename, type
+ FROM email_attachments
+ WHERE id = ?`, attachmentID).
+ Scan(&gmailMessageID, &gmailAttachmentID, &filename, &contentType)
+
+ if err != nil {
+ http.Error(w, "Attachment not found", http.StatusNotFound)
+ return
+ }
+
+ // Fetch from Gmail
+ attachment, err := s.gmailService.Users.Messages.Attachments.
+ Get("me", gmailMessageID, gmailAttachmentID).Do()
+ if err != nil {
+ http.Error(w, "Failed to fetch attachment from Gmail", http.StatusInternalServerError)
+ return
+ }
+
+ // Decode base64
+ data, err := base64.URLEncoding.DecodeString(attachment.Data)
+ if err != nil {
+ http.Error(w, "Failed to decode attachment", http.StatusInternalServerError)
+ return
+ }
+
+ // Set headers and stream
+ w.Header().Set("Content-Type", contentType)
+ w.Header().Set("Content-Disposition", fmt.Sprintf("inline; filename=\"%s\"", filename))
+ w.Header().Set("Content-Length", fmt.Sprintf("%d", len(data)))
+ w.Write(data)
+}
+
+func (s *HTTPServer) StreamRawEmail(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ emailID := vars["id"]
+
+ // Get Gmail message ID
+ var gmailMessageID string
+ err := s.db.QueryRow("SELECT gmail_message_id FROM emails WHERE id = ?", emailID).
+ Scan(&gmailMessageID)
+ if err != nil {
+ http.Error(w, "Email not found", http.StatusNotFound)
+ return
+ }
+
+ // Fetch from Gmail
+ message, err := s.gmailService.Users.Messages.Get("me", gmailMessageID).
+ Format("RAW").Do()
+ if err != nil {
+ http.Error(w, "Failed to fetch email from Gmail", http.StatusInternalServerError)
+ return
+ }
+
+ // Decode and stream
+ rawEmail, err := base64.URLEncoding.DecodeString(message.Raw)
+ if err != nil {
+ http.Error(w, "Failed to decode email", http.StatusInternalServerError)
+ return
+ }
+
+ w.Header().Set("Content-Type", "message/rfc822")
+ w.Write(rawEmail)
+}
+
+// Original EmailProcessor methods (kept for local mode and shared logic)
+func (p *EmailProcessor) loadMaps() error {
+ p.enquiryMap = make(map[string]int)
+ p.invoiceMap = make(map[string]int)
+ p.poMap = make(map[string]int)
+ p.userMap = make(map[string]int)
+ p.jobMap = make(map[string]int)
+
+ // Load enquiries
+ rows, err := p.db.Query("SELECT id, title FROM enquiries")
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+ for rows.Next() {
+ var id int
+ var title string
+ if err := rows.Scan(&id, &title); err != nil {
+ return err
+ }
+ p.enquiryMap[title] = id
+ }
+
+ // Load invoices
+ rows, err = p.db.Query("SELECT id, title FROM invoices")
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+ for rows.Next() {
+ var id int
+ var title string
+ if err := rows.Scan(&id, &title); err != nil {
+ return err
+ }
+ p.invoiceMap[title] = id
+ }
+
+ // Load purchase orders
+ rows, err = p.db.Query("SELECT id, title FROM purchase_orders")
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+ for rows.Next() {
+ var id int
+ var title string
+ if err := rows.Scan(&id, &title); err != nil {
+ return err
+ }
+ p.poMap[title] = id
+ }
+
+ // Load users
+ rows, err = p.db.Query("SELECT id, email FROM users")
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+ for rows.Next() {
+ var id int
+ var email string
+ if err := rows.Scan(&id, &email); err != nil {
+ return err
+ }
+ p.userMap[strings.ToLower(email)] = id
+ }
+
+ // Load jobs
+ rows, err = p.db.Query("SELECT id, title FROM jobs")
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+ for rows.Next() {
+ var id int
+ var title string
+ if err := rows.Scan(&id, &title); err != nil {
+ return err
+ }
+ p.jobMap[title] = id
+ }
+
+ return nil
+}
+
+func (p *EmailProcessor) processEmails() error {
+ files, err := ioutil.ReadDir(p.config.VaultDir)
+ if err != nil {
+ return err
+ }
+
+ for _, file := range files {
+ if file.IsDir() || file.Name() == "." || file.Name() == ".." {
+ continue
+ }
+
+ fmt.Printf("Handling %s\n", file.Name())
+ if err := p.processEmail(file.Name()); err != nil {
+ log.Printf("Error processing %s: %v\n", file.Name(), err)
+ }
+ }
+
+ return nil
+}
+
+func (p *EmailProcessor) processEmail(filename string) error {
+ emailPath := filepath.Join(p.config.VaultDir, filename)
+ content, err := ioutil.ReadFile(emailPath)
+ if err != nil {
+ return err
+ }
+
+ if len(content) == 0 {
+ fmt.Println("No content found. Ignoring this email")
+ return p.moveEmail(filename)
+ }
+
+ // Parse email with enmime
+ env, err := enmime.ReadEnvelope(bytes.NewReader(content))
+ if err != nil {
+ return err
+ }
+
+ // Get recipients
+ toRecipients := p.parseEnmimeAddresses(env.GetHeader("To"))
+ fromRecipients := p.parseEnmimeAddresses(env.GetHeader("From"))
+ ccRecipients := p.parseEnmimeAddresses(env.GetHeader("Cc"))
+
+ // Check if we should save this email
+ saveThis := false
+ fromKnownUser := false
+
+ for _, email := range toRecipients {
+ if p.userExists(email) {
+ saveThis = true
+ }
+ }
+
+ for _, email := range fromRecipients {
+ if p.userExists(email) {
+ saveThis = true
+ fromKnownUser = true
+ }
+ }
+
+ for _, email := range ccRecipients {
+ if p.userExists(email) {
+ saveThis = true
+ }
+ }
+
+ subject := env.GetHeader("Subject")
+ if subject == "" {
+ fmt.Println("No subject found. Ignoring this email")
+ return p.moveEmail(filename)
+ }
+
+ // Check for identifiers in subject
+ foundEnquiries := p.checkIdentifier(subject, p.enquiryMap, "enquiry")
+ foundInvoices := p.checkIdentifier(subject, p.invoiceMap, "invoice")
+ foundPOs := p.checkIdentifier(subject, p.poMap, "purchaseorder")
+ foundJobs := p.checkIdentifier(subject, p.jobMap, "job")
+
+ foundIdent := len(foundEnquiries) > 0 || len(foundInvoices) > 0 ||
+ len(foundPOs) > 0 || len(foundJobs) > 0
+
+ if fromKnownUser || saveThis || foundIdent {
+ // Process and save the email
+ unixTime := time.Now().Unix()
+ if date, err := env.Date(); err == nil {
+ unixTime = date.Unix()
+ }
+
+ // Get recipient user IDs
+ recipientIDs := make(map[string][]int)
+ recipientIDs["to"] = p.getUserIDs(toRecipients)
+ recipientIDs["from"] = p.getUserIDs(fromRecipients)
+ recipientIDs["cc"] = p.getUserIDs(ccRecipients)
+
+ if len(recipientIDs["from"]) == 0 {
+ fmt.Println("Email has no From Recipient ID. Ignoring this email")
+ return p.moveEmail(filename)
+ }
+
+ fmt.Println("---------START MESSAGE -----------------")
+ fmt.Printf("Subject: %s\n", subject)
+
+ // Extract attachments using enmime
+ relativePath := p.getAttachmentDirectory(unixTime)
+ attachments := p.extractEnmimeAttachments(env, relativePath)
+
+ // Save email to database
+ if err := p.saveEmail(filename, subject, unixTime, recipientIDs, attachments,
+ foundEnquiries, foundInvoices, foundPOs, foundJobs); err != nil {
+ return err
+ }
+
+ fmt.Println("--------END MESSAGE ------")
+ } else {
+ fmt.Printf("Email will not be saved. Subject: %s\n", subject)
+ }
+
+ return p.moveEmail(filename)
+}
+
+func (p *EmailProcessor) parseEnmimeAddresses(header string) []string {
+ var emails []string
+ if header == "" {
+ return emails
+ }
+
+ addresses, err := enmime.ParseAddressList(header)
+ if err != nil {
+ return emails
+ }
+
+ for _, addr := range addresses {
+ emails = append(emails, strings.ToLower(addr.Address))
+ }
+
+ return emails
+}
+
+func (p *EmailProcessor) userExists(email string) bool {
+ _, exists := p.userMap[strings.ToLower(email)]
+ return exists
+}
+
+func (p *EmailProcessor) getUserIDs(emails []string) []int {
+ var ids []int
+ for _, email := range emails {
+ if id, exists := p.userMap[strings.ToLower(email)]; exists {
+ ids = append(ids, id)
+ } else {
+ // Create new user
+ newID := p.createUser(email)
+ if newID > 0 {
+ ids = append(ids, newID)
+ p.userMap[strings.ToLower(email)] = newID
+ }
+ }
+ }
+ return ids
+}
+
+func (p *EmailProcessor) createUser(email string) int {
+ fmt.Printf("Making a new User for: '%s'\n", email)
+
+ result, err := p.db.Exec(
+ `INSERT INTO users (principle_id, customer_id, type, access_level, username, password,
+ first_name, last_name, email, job_title, phone, mobile, fax, phone_extension,
+ direct_phone, notes, by_vault, blacklisted, enabled, primary_contact)
+ VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
+ 0, 0, "contact", "none", "", "", "", "", strings.ToLower(email), "", "", "", "", "", "", "", 1, 0, 1, 0)
+
+ if err != nil {
+ fmt.Printf("Serious Error: Unable to create user for email '%s': %v\n", email, err)
+ return 0
+ }
+
+ id, err := result.LastInsertId()
+ if err != nil {
+ return 0
+ }
+
+ fmt.Printf("New User '%s' Added with ID: %d\n", email, id)
+ return int(id)
+}
+
+func (p *EmailProcessor) checkIdentifier(subject string, identMap map[string]int, identType string) []int {
+ var results []int
+ var re *regexp.Regexp
+
+ switch identType {
+ case "enquiry":
+ re = regexp.MustCompile(`CMC\d+([NVQWSOT]|ACT|NT)E\d+-\d+`)
+ case "invoice":
+ re = regexp.MustCompile(`CMCIN\d+`)
+ case "purchaseorder":
+ re = regexp.MustCompile(`CMCPO\d+`)
+ case "job":
+ re = regexp.MustCompile(`(JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)\d+(N|V|W|S|T|NT|ACT|Q|O)J\d+`)
+ }
+
+ if re != nil {
+ matches := re.FindAllString(subject, -1)
+ for _, match := range matches {
+ if id, exists := identMap[match]; exists {
+ results = append(results, id)
+ }
+ }
+ }
+
+ return results
+}
+
+func (p *EmailProcessor) getAttachmentDirectory(unixTime int64) string {
+ t := time.Unix(unixTime, 0)
+ monthYear := t.Format("01-2006")
+ path := filepath.Join(p.config.EmailDir, monthYear)
+
+ if err := os.MkdirAll(path, 0755); err != nil {
+ log.Printf("Failed to create directory %s: %v", path, err)
+ }
+
+ return monthYear
+}
+
+func (p *EmailProcessor) extractEnmimeAttachments(env *enmime.Envelope, relativePath string) []Attachment {
+ var attachments []Attachment
+ outputDir := filepath.Join(p.config.EmailDir, relativePath)
+ uuid := uuid.New().String()
+
+ // Ensure output directory exists
+ if err := os.MkdirAll(outputDir, 0755); err != nil {
+ log.Printf("Failed to create output directory: %v", err)
+ return attachments
+ }
+
+ biggestHTMLSize := int64(0)
+ biggestHTMLIdx := -1
+ biggestPlainSize := int64(0)
+ biggestPlainIdx := -1
+
+ // Process HTML part if exists
+ if env.HTML != "" {
+ htmlData := []byte(env.HTML)
+ fileName := "texthtml"
+ newFileName := uuid + "-" + fileName
+ filePath := filepath.Join(outputDir, newFileName)
+
+ if err := ioutil.WriteFile(filePath, htmlData, 0644); err == nil {
+ att := Attachment{
+ Type: "text/html",
+ Name: filepath.Join(relativePath, newFileName),
+ Filename: fileName,
+ Size: int64(len(htmlData)),
+ IsMessageBody: 0,
+ }
+ attachments = append(attachments, att)
+ biggestHTMLIdx = len(attachments) - 1
+ biggestHTMLSize = int64(len(htmlData))
+ }
+ }
+
+ // Process plain text part if exists
+ if env.Text != "" {
+ textData := []byte(env.Text)
+ fileName := "textplain"
+ newFileName := uuid + "-" + fileName
+ filePath := filepath.Join(outputDir, newFileName)
+
+ if err := ioutil.WriteFile(filePath, textData, 0644); err == nil {
+ att := Attachment{
+ Type: "text/plain",
+ Name: filepath.Join(relativePath, newFileName),
+ Filename: fileName,
+ Size: int64(len(textData)),
+ IsMessageBody: 0,
+ }
+ attachments = append(attachments, att)
+ biggestPlainIdx = len(attachments) - 1
+ biggestPlainSize = int64(len(textData))
+ }
+ }
+
+ // Process file attachments
+ for _, part := range env.Attachments {
+ fileName := part.FileName
+ if fileName == "" {
+ fileName = "attachment"
+ }
+
+ newFileName := uuid + "-" + fileName
+ filePath := filepath.Join(outputDir, newFileName)
+
+ if err := ioutil.WriteFile(filePath, part.Content, 0644); err != nil {
+ log.Printf("Failed to save attachment %s: %v", fileName, err)
+ continue
+ }
+
+ att := Attachment{
+ Type: part.ContentType,
+ Name: filepath.Join(relativePath, newFileName),
+ Filename: fileName,
+ Size: int64(len(part.Content)),
+ IsMessageBody: 0,
+ }
+
+ attachments = append(attachments, att)
+ idx := len(attachments) - 1
+
+ // Track largest HTML and plain text attachments
+ if strings.HasPrefix(part.ContentType, "text/html") && int64(len(part.Content)) > biggestHTMLSize {
+ biggestHTMLSize = int64(len(part.Content))
+ biggestHTMLIdx = idx
+ } else if strings.HasPrefix(part.ContentType, "text/plain") && int64(len(part.Content)) > biggestPlainSize {
+ biggestPlainSize = int64(len(part.Content))
+ biggestPlainIdx = idx
+ }
+ }
+
+ // Process inline parts
+ for _, part := range env.Inlines {
+ fileName := part.FileName
+ if fileName == "" {
+ fileName = "inline"
+ }
+
+ newFileName := uuid + "-" + fileName
+ filePath := filepath.Join(outputDir, newFileName)
+
+ if err := ioutil.WriteFile(filePath, part.Content, 0644); err != nil {
+ log.Printf("Failed to save inline part %s: %v", fileName, err)
+ continue
+ }
+
+ att := Attachment{
+ Type: part.ContentType,
+ Name: filepath.Join(relativePath, newFileName),
+ Filename: fileName,
+ Size: int64(len(part.Content)),
+ IsMessageBody: 0,
+ }
+
+ attachments = append(attachments, att)
+ idx := len(attachments) - 1
+
+ // Track largest HTML and plain text attachments
+ if strings.HasPrefix(part.ContentType, "text/html") && int64(len(part.Content)) > biggestHTMLSize {
+ biggestHTMLSize = int64(len(part.Content))
+ biggestHTMLIdx = idx
+ } else if strings.HasPrefix(part.ContentType, "text/plain") && int64(len(part.Content)) > biggestPlainSize {
+ biggestPlainSize = int64(len(part.Content))
+ biggestPlainIdx = idx
+ }
+ }
+
+ // Mark the message body
+ if biggestHTMLIdx >= 0 {
+ attachments[biggestHTMLIdx].IsMessageBody = 1
+ } else if biggestPlainIdx >= 0 {
+ attachments[biggestPlainIdx].IsMessageBody = 1
+ }
+
+ return attachments
+}
+
+func (p *EmailProcessor) saveEmail(filename, subject string, unixTime int64,
+ recipientIDs map[string][]int, attachments []Attachment,
+ foundEnquiries, foundInvoices, foundPOs, foundJobs []int) error {
+
+ tx, err := p.db.Begin()
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback()
+
+ // Insert email
+ result, err := tx.Exec(
+ "INSERT INTO emails (user_id, udate, created, subject) VALUES (?, ?, NOW(), ?)",
+ recipientIDs["from"][0], unixTime, subject)
+
+ if err != nil {
+ return err
+ }
+
+ emailID, err := result.LastInsertId()
+ if err != nil {
+ return err
+ }
+
+ // Insert recipients
+ for recipType, userIDs := range recipientIDs {
+ for _, userID := range userIDs {
+ if recipType == "from" {
+ continue // From is already stored in emails.user_id
+ }
+ _, err = tx.Exec(
+ "INSERT INTO email_recipients (email_id, user_id, type) VALUES (?, ?, ?)",
+ emailID, userID, recipType)
+ if err != nil {
+ return err
+ }
+ }
+ }
+
+ // Insert attachments
+ for _, att := range attachments {
+ _, err = tx.Exec(
+ "INSERT INTO email_attachments (email_id, name, type, size, filename, is_message_body, created) VALUES (?, ?, ?, ?, ?, ?, NOW())",
+ emailID, att.Name, att.Type, att.Size, att.Filename, att.IsMessageBody)
+ if err != nil {
+ return err
+ }
+ }
+
+ // Insert associations
+ for _, jobID := range foundJobs {
+ _, err = tx.Exec("INSERT INTO emails_jobs (email_id, job_id) VALUES (?, ?)", emailID, jobID)
+ if err != nil {
+ return err
+ }
+ }
+
+ for _, poID := range foundPOs {
+ _, err = tx.Exec("INSERT INTO emails_purchase_orders (email_id, purchase_order_id) VALUES (?, ?)", emailID, poID)
+ if err != nil {
+ return err
+ }
+ }
+
+ for _, enqID := range foundEnquiries {
+ _, err = tx.Exec("INSERT INTO emails_enquiries (email_id, enquiry_id) VALUES (?, ?)", emailID, enqID)
+ if err != nil {
+ return err
+ }
+ }
+
+ for _, invID := range foundInvoices {
+ _, err = tx.Exec("INSERT INTO emails_invoices (email_id, invoice_id) VALUES (?, ?)", emailID, invID)
+ if err != nil {
+ return err
+ }
+ }
+
+ if err := tx.Commit(); err != nil {
+ return err
+ }
+
+ fmt.Println("Success. We made an email")
+ return nil
+}
+
+func (p *EmailProcessor) moveEmail(filename string) error {
+ oldPath := filepath.Join(p.config.VaultDir, filename)
+ newPath := filepath.Join(p.config.ProcessedDir, filename+":S")
+
+ if err := os.Rename(oldPath, newPath); err != nil {
+ fmt.Printf("Unable to move %s to %s: %v\n", oldPath, newPath, err)
+ return err
+ }
+
+ return nil
+}
+
+type Attachment struct {
+ Type string
+ Name string
+ Filename string
+ Size int64
+ IsMessageBody int
+}
diff --git a/go-app/go.mod b/go-app/go.mod
index c7145680..e563a825 100644
--- a/go-app/go.mod
+++ b/go-app/go.mod
@@ -6,7 +6,9 @@ toolchain go1.24.3
require (
github.com/go-sql-driver/mysql v1.7.1
+ github.com/google/uuid v1.6.0
github.com/gorilla/mux v1.8.1
+ github.com/jhillyerd/enmime v1.3.0
github.com/joho/godotenv v1.5.1
github.com/jung-kurt/gofpdf v1.16.2
golang.org/x/text v0.27.0
diff --git a/go-app/go.sum b/go-app/go.sum
index 3a0c001c..cffeb998 100644
--- a/go-app/go.sum
+++ b/go-app/go.sum
@@ -1,3 +1,9 @@
+cloud.google.com/go/auth v0.16.3 h1:kabzoQ9/bobUmnseYnBO6qQG7q4a/CffFRlJSxv2wCc=
+cloud.google.com/go/auth v0.16.3/go.mod h1:NucRGjaXfzP1ltpcQ7On/VTZ0H4kWB5Jy+Y9Dnm76fA=
+cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc=
+cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=
+cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeOCw78U8ytSU=
+cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
github.com/boombuler/barcode v1.0.0/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -10,6 +16,10 @@ github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
+github.com/jaytaylor/html2text v0.0.0-20230321000545-74c2419ad056 h1:iCHtR9CQyktQ5+f3dMVZfwD2KWJUgm7M0gdL9NGr8KA=
+github.com/jaytaylor/html2text v0.0.0-20230321000545-74c2419ad056/go.mod h1:CVKlgaMiht+LXvHG173ujK6JUhZXKb2u/BQtjPDIvyk=
+github.com/jhillyerd/enmime v1.3.0 h1:LV5kzfLidiOr8qRGIpYYmUZCnhrPbcFAnAFUnWn99rw=
+github.com/jhillyerd/enmime v1.3.0/go.mod h1:6c6jg5HdRRV2FtvVL69LjiX1M8oE0xDX9VEhV3oy4gs=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/jung-kurt/gofpdf v1.0.0/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
@@ -24,6 +34,9 @@ github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/phpdave11/gofpdi v1.0.7/go.mod h1:vBmVV0Do6hSBHC8uKUQ71JGW+ZGQq74llk/7bXwjDoI=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
@@ -41,6 +54,14 @@ github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
golang.org/x/image v0.0.0-20190910094157-69e4b8554b2a/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/net v0.42.0 h1:jzkYrhi3YQWD6MLBJcsklgQsoAcw89EcZbJw8Z614hs=
+golang.org/x/net v0.42.0/go.mod h1:FF1RA5d3u7nAYA4z2TkclSCKh68eSXtiFwcWQpPXdt8=
+golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
+golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
+golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
+golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
+golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.27.0 h1:4fGWRpyh641NLlecmyl4LOe6yDdfaYNrGb2zdfo4JV4=
golang.org/x/text v0.27.0/go.mod h1:1D28KMCvyooCX9hBiosv5Tz/+YLxj0j7XhWjpSUF7CU=
diff --git a/go-app/goose.env.example b/go-app/goose.env.example
new file mode 100644
index 00000000..c40a0103
--- /dev/null
+++ b/go-app/goose.env.example
@@ -0,0 +1,3 @@
+GOOSE_DRIVER=mysql
+GOOSE_DBSTRING=username:password@tcp(localhost:3306)/database?parseTime=true
+GOOSE_MIGRATION_DIR=sql/migrations
\ No newline at end of file
diff --git a/go-app/internal/cmc/handlers/emails.go b/go-app/internal/cmc/handlers/emails.go
new file mode 100644
index 00000000..b34caae1
--- /dev/null
+++ b/go-app/internal/cmc/handlers/emails.go
@@ -0,0 +1,870 @@
+package handlers
+
+import (
+ "bytes"
+ "context"
+ "database/sql"
+ "encoding/base64"
+ "encoding/json"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "os"
+ "strconv"
+ "strings"
+ "time"
+
+ "code.springupsoftware.com/cmc/cmc-sales/internal/cmc/db"
+ "github.com/gorilla/mux"
+ "github.com/jhillyerd/enmime"
+ "golang.org/x/oauth2"
+ "golang.org/x/oauth2/google"
+ "google.golang.org/api/gmail/v1"
+ "google.golang.org/api/option"
+)
+
+type EmailHandler struct {
+ queries *db.Queries
+ db *sql.DB
+ gmailService *gmail.Service
+}
+
+type EmailResponse struct {
+ ID int32 `json:"id"`
+ Subject string `json:"subject"`
+ UserID int32 `json:"user_id"`
+ Created time.Time `json:"created"`
+ GmailMessageID *string `json:"gmail_message_id,omitempty"`
+ AttachmentCount int32 `json:"attachment_count"`
+ IsDownloaded *bool `json:"is_downloaded,omitempty"`
+}
+
+type EmailDetailResponse struct {
+ ID int32 `json:"id"`
+ Subject string `json:"subject"`
+ UserID int32 `json:"user_id"`
+ Created time.Time `json:"created"`
+ GmailMessageID *string `json:"gmail_message_id,omitempty"`
+ GmailThreadID *string `json:"gmail_thread_id,omitempty"`
+ RawHeaders *string `json:"raw_headers,omitempty"`
+ IsDownloaded *bool `json:"is_downloaded,omitempty"`
+ Enquiries []int32 `json:"enquiries,omitempty"`
+ Invoices []int32 `json:"invoices,omitempty"`
+ PurchaseOrders []int32 `json:"purchase_orders,omitempty"`
+ Jobs []int32 `json:"jobs,omitempty"`
+}
+
+type EmailAttachmentResponse struct {
+ ID int32 `json:"id"`
+ Name string `json:"name"`
+ Type string `json:"type"`
+ Size int32 `json:"size"`
+ Filename string `json:"filename"`
+ IsMessageBody bool `json:"is_message_body"`
+ GmailAttachmentID *string `json:"gmail_attachment_id,omitempty"`
+ Created time.Time `json:"created"`
+}
+
+func NewEmailHandler(queries *db.Queries, database *sql.DB) *EmailHandler {
+ // Try to initialize Gmail service
+ gmailService, err := getGmailService("credentials.json", "token.json")
+ if err != nil {
+ // Log the error but continue without Gmail service
+ fmt.Printf("Warning: Gmail service not available: %v\n", err)
+ }
+
+ return &EmailHandler{
+ queries: queries,
+ db: database,
+ gmailService: gmailService,
+ }
+}
+
+// List emails with pagination and filtering
+func (h *EmailHandler) List(w http.ResponseWriter, r *http.Request) {
+ // Parse query parameters
+ limitStr := r.URL.Query().Get("limit")
+ offsetStr := r.URL.Query().Get("offset")
+ search := r.URL.Query().Get("search")
+ userID := r.URL.Query().Get("user_id")
+
+ // Set defaults
+ limit := 50
+ offset := 0
+
+ if limitStr != "" {
+ if l, err := strconv.Atoi(limitStr); err == nil && l > 0 && l <= 100 {
+ limit = l
+ }
+ }
+
+ if offsetStr != "" {
+ if o, err := strconv.Atoi(offsetStr); err == nil && o >= 0 {
+ offset = o
+ }
+ }
+
+ // Build query
+ query := `
+ SELECT e.id, e.subject, e.user_id, e.created, e.gmail_message_id, e.email_attachment_count, e.is_downloaded
+ FROM emails e`
+
+ var args []interface{}
+ var conditions []string
+
+ if search != "" {
+ conditions = append(conditions, "e.subject LIKE ?")
+ args = append(args, "%"+search+"%")
+ }
+
+ if userID != "" {
+ conditions = append(conditions, "e.user_id = ?")
+ args = append(args, userID)
+ }
+
+ if len(conditions) > 0 {
+ query += " WHERE " + joinConditions(conditions, " AND ")
+ }
+
+ query += " ORDER BY e.id DESC LIMIT ? OFFSET ?"
+ args = append(args, limit, offset)
+
+ rows, err := h.db.Query(query, args...)
+ if err != nil {
+ http.Error(w, fmt.Sprintf("Database error: %v", err), http.StatusInternalServerError)
+ return
+ }
+ defer rows.Close()
+
+ var emails []EmailResponse
+ for rows.Next() {
+ var email EmailResponse
+ var gmailMessageID sql.NullString
+ var isDownloaded sql.NullBool
+
+ err := rows.Scan(
+ &email.ID,
+ &email.Subject,
+ &email.UserID,
+ &email.Created,
+ &gmailMessageID,
+ &email.AttachmentCount,
+ &isDownloaded,
+ )
+ if err != nil {
+ continue
+ }
+
+ if gmailMessageID.Valid {
+ email.GmailMessageID = &gmailMessageID.String
+ }
+ if isDownloaded.Valid {
+ email.IsDownloaded = &isDownloaded.Bool
+ }
+
+ emails = append(emails, email)
+ }
+
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(emails)
+}
+
+// Get a specific email with details
+func (h *EmailHandler) Get(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ emailID, err := strconv.Atoi(vars["id"])
+ if err != nil {
+ http.Error(w, "Invalid email ID", http.StatusBadRequest)
+ return
+ }
+
+ // Get email details
+ query := `
+ SELECT e.id, e.subject, e.user_id, e.created, e.gmail_message_id,
+ e.gmail_thread_id, e.raw_headers, e.is_downloaded
+ FROM emails e
+ WHERE e.id = ?`
+
+ var email EmailDetailResponse
+ var gmailMessageID, gmailThreadID, rawHeaders sql.NullString
+ var isDownloaded sql.NullBool
+
+ err = h.db.QueryRow(query, emailID).Scan(
+ &email.ID,
+ &email.Subject,
+ &email.UserID,
+ &email.Created,
+ &gmailMessageID,
+ &gmailThreadID,
+ &rawHeaders,
+ &isDownloaded,
+ )
+
+ if err != nil {
+ if err == sql.ErrNoRows {
+ http.Error(w, "Email not found", http.StatusNotFound)
+ } else {
+ http.Error(w, fmt.Sprintf("Database error: %v", err), http.StatusInternalServerError)
+ }
+ return
+ }
+
+ if gmailMessageID.Valid {
+ email.GmailMessageID = &gmailMessageID.String
+ }
+ if gmailThreadID.Valid {
+ email.GmailThreadID = &gmailThreadID.String
+ }
+ if rawHeaders.Valid {
+ email.RawHeaders = &rawHeaders.String
+ }
+ if isDownloaded.Valid {
+ email.IsDownloaded = &isDownloaded.Bool
+ }
+
+ // Get associated enquiries
+ enquiryRows, err := h.db.Query("SELECT enquiry_id FROM emails_enquiries WHERE email_id = ?", emailID)
+ if err == nil {
+ defer enquiryRows.Close()
+ for enquiryRows.Next() {
+ var enquiryID int32
+ if enquiryRows.Scan(&enquiryID) == nil {
+ email.Enquiries = append(email.Enquiries, enquiryID)
+ }
+ }
+ }
+
+ // Get associated invoices
+ invoiceRows, err := h.db.Query("SELECT invoice_id FROM emails_invoices WHERE email_id = ?", emailID)
+ if err == nil {
+ defer invoiceRows.Close()
+ for invoiceRows.Next() {
+ var invoiceID int32
+ if invoiceRows.Scan(&invoiceID) == nil {
+ email.Invoices = append(email.Invoices, invoiceID)
+ }
+ }
+ }
+
+ // Get associated purchase orders
+ poRows, err := h.db.Query("SELECT purchase_order_id FROM emails_purchase_orders WHERE email_id = ?", emailID)
+ if err == nil {
+ defer poRows.Close()
+ for poRows.Next() {
+ var poID int32
+ if poRows.Scan(&poID) == nil {
+ email.PurchaseOrders = append(email.PurchaseOrders, poID)
+ }
+ }
+ }
+
+ // Get associated jobs
+ jobRows, err := h.db.Query("SELECT job_id FROM emails_jobs WHERE email_id = ?", emailID)
+ if err == nil {
+ defer jobRows.Close()
+ for jobRows.Next() {
+ var jobID int32
+ if jobRows.Scan(&jobID) == nil {
+ email.Jobs = append(email.Jobs, jobID)
+ }
+ }
+ }
+
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(email)
+}
+
+// List attachments for an email
+func (h *EmailHandler) ListAttachments(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ emailID, err := strconv.Atoi(vars["id"])
+ if err != nil {
+ http.Error(w, "Invalid email ID", http.StatusBadRequest)
+ return
+ }
+
+ // First check if attachments are already in database
+ query := `
+ SELECT id, name, type, size, filename, is_message_body, gmail_attachment_id, created
+ FROM email_attachments
+ WHERE email_id = ?
+ ORDER BY is_message_body DESC, created ASC`
+
+ rows, err := h.db.Query(query, emailID)
+ if err != nil {
+ http.Error(w, fmt.Sprintf("Database error: %v", err), http.StatusInternalServerError)
+ return
+ }
+ defer rows.Close()
+
+ var attachments []EmailAttachmentResponse
+ hasStoredAttachments := false
+
+ for rows.Next() {
+ hasStoredAttachments = true
+ var attachment EmailAttachmentResponse
+ var gmailAttachmentID sql.NullString
+
+ err := rows.Scan(
+ &attachment.ID,
+ &attachment.Name,
+ &attachment.Type,
+ &attachment.Size,
+ &attachment.Filename,
+ &attachment.IsMessageBody,
+ &gmailAttachmentID,
+ &attachment.Created,
+ )
+ if err != nil {
+ continue
+ }
+
+ if gmailAttachmentID.Valid {
+ attachment.GmailAttachmentID = &gmailAttachmentID.String
+ }
+
+ attachments = append(attachments, attachment)
+ }
+
+ // If no stored attachments and this is a Gmail email, try to fetch from Gmail
+ if !hasStoredAttachments && h.gmailService != nil {
+ // Get Gmail message ID
+ var gmailMessageID sql.NullString
+ err := h.db.QueryRow("SELECT gmail_message_id FROM emails WHERE id = ?", emailID).Scan(&gmailMessageID)
+
+ if err == nil && gmailMessageID.Valid {
+ // Fetch message metadata from Gmail
+ message, err := h.gmailService.Users.Messages.Get("me", gmailMessageID.String).
+ Format("FULL").Do()
+
+ if err == nil && message.Payload != nil {
+ // Extract attachment info from Gmail message
+ attachmentIndex := int32(1)
+ h.extractGmailAttachments(message.Payload, &attachments, &attachmentIndex)
+ }
+ }
+ }
+
+ // Check if this is an HTMX request
+ if r.Header.Get("HX-Request") == "true" {
+ // Return HTML for HTMX
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+
+ if len(attachments) == 0 {
+ // No attachments found
+ html := `
+
No attachments found for this email.
+
`
+ w.Write([]byte(html))
+ return
+ }
+
+ // Build HTML table for attachments
+ var htmlBuilder strings.Builder
+ htmlBuilder.WriteString(`
+
Attachments
+
+
+
+
+ Name
+ Type
+ Size
+ Actions
+
+
+ `)
+
+ for _, att := range attachments {
+ icon := ` `
+ if att.IsMessageBody {
+ icon = ` `
+ }
+
+ downloadURL := fmt.Sprintf("/api/v1/emails/%d/attachments/%d", emailID, att.ID)
+ if att.GmailAttachmentID != nil {
+ downloadURL = fmt.Sprintf("/api/v1/emails/%d/attachments/%d/stream", emailID, att.ID)
+ }
+
+ htmlBuilder.WriteString(fmt.Sprintf(`
+
+
+ %s
+ %s
+
+ %s
+ %d bytes
+
+
+
+ Download
+
+
+ `, icon, att.Name, att.Type, att.Size, downloadURL))
+ }
+
+ htmlBuilder.WriteString(`
+
+
+
+
`)
+
+ w.Write([]byte(htmlBuilder.String()))
+ return
+ }
+
+ // Return JSON for API requests
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(attachments)
+}
+
+// Helper function to extract attachment info from Gmail message parts
+func (h *EmailHandler) extractGmailAttachments(part *gmail.MessagePart, attachments *[]EmailAttachmentResponse, index *int32) {
+ // Check if this part is an attachment
+ // Some attachments may not have filenames or may be inline
+ if part.Body != nil && part.Body.AttachmentId != "" {
+ filename := part.Filename
+ if filename == "" {
+ // Try to generate a filename from content type
+ switch part.MimeType {
+ case "application/pdf":
+ filename = "attachment.pdf"
+ case "image/png":
+ filename = "image.png"
+ case "image/jpeg":
+ filename = "image.jpg"
+ case "text/plain":
+ filename = "text.txt"
+ default:
+ filename = "attachment"
+ }
+ }
+
+ attachment := EmailAttachmentResponse{
+ ID: *index,
+ Name: filename,
+ Type: part.MimeType,
+ Size: int32(part.Body.Size),
+ Filename: filename,
+ IsMessageBody: false,
+ GmailAttachmentID: &part.Body.AttachmentId,
+ Created: time.Now(), // Use current time as placeholder
+ }
+ *attachments = append(*attachments, attachment)
+ *index++
+ }
+
+ // Process sub-parts
+ for _, subPart := range part.Parts {
+ h.extractGmailAttachments(subPart, attachments, index)
+ }
+}
+
+// Search emails
+func (h *EmailHandler) Search(w http.ResponseWriter, r *http.Request) {
+ query := r.URL.Query().Get("q")
+ if query == "" {
+ http.Error(w, "Search query is required", http.StatusBadRequest)
+ return
+ }
+
+ // Parse optional parameters
+ limitStr := r.URL.Query().Get("limit")
+ limit := 20
+
+ if limitStr != "" {
+ if l, err := strconv.Atoi(limitStr); err == nil && l > 0 && l <= 100 {
+ limit = l
+ }
+ }
+
+ // Search in subjects and headers
+ sqlQuery := `
+ SELECT e.id, e.subject, e.user_id, e.created, e.gmail_message_id, e.email_attachment_count, e.is_downloaded
+ FROM emails e
+ WHERE e.subject LIKE ? OR e.raw_headers LIKE ?
+ ORDER BY e.id DESC
+ LIMIT ?`
+
+ searchTerm := "%" + query + "%"
+ rows, err := h.db.Query(sqlQuery, searchTerm, searchTerm, limit)
+ if err != nil {
+ http.Error(w, fmt.Sprintf("Database error: %v", err), http.StatusInternalServerError)
+ return
+ }
+ defer rows.Close()
+
+ var emails []EmailResponse
+ for rows.Next() {
+ var email EmailResponse
+ var gmailMessageID sql.NullString
+ var isDownloaded sql.NullBool
+
+ err := rows.Scan(
+ &email.ID,
+ &email.Subject,
+ &email.UserID,
+ &email.Created,
+ &gmailMessageID,
+ &email.AttachmentCount,
+ &isDownloaded,
+ )
+ if err != nil {
+ continue
+ }
+
+ if gmailMessageID.Valid {
+ email.GmailMessageID = &gmailMessageID.String
+ }
+ if isDownloaded.Valid {
+ email.IsDownloaded = &isDownloaded.Bool
+ }
+
+ emails = append(emails, email)
+ }
+
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(emails)
+}
+
+// Stream email content from Gmail
+func (h *EmailHandler) StreamContent(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ emailID, err := strconv.Atoi(vars["id"])
+ if err != nil {
+ http.Error(w, "Invalid email ID", http.StatusBadRequest)
+ return
+ }
+
+ // Get email details to check if it's a Gmail email
+ query := `
+ SELECT e.gmail_message_id, e.subject, e.created, e.user_id
+ FROM emails e
+ WHERE e.id = ?`
+
+ var gmailMessageID sql.NullString
+ var subject string
+ var created time.Time
+ var userID int32
+
+ err = h.db.QueryRow(query, emailID).Scan(&gmailMessageID, &subject, &created, &userID)
+ if err != nil {
+ if err == sql.ErrNoRows {
+ http.Error(w, "Email not found", http.StatusNotFound)
+ } else {
+ http.Error(w, fmt.Sprintf("Database error: %v", err), http.StatusInternalServerError)
+ }
+ return
+ }
+
+ if !gmailMessageID.Valid {
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ html := `
+
+ Local Email
+ This email is not from Gmail and does not have stored content available for display.
+
`
+ w.Write([]byte(html))
+ return
+ }
+
+ // Check for stored message body content in attachments
+ attachmentQuery := `
+ SELECT id, name, type, size
+ FROM email_attachments
+ WHERE email_id = ? AND is_message_body = 1
+ ORDER BY created ASC`
+
+ attachmentRows, err := h.db.Query(attachmentQuery, emailID)
+ if err == nil {
+ defer attachmentRows.Close()
+ if attachmentRows.Next() {
+ var attachmentID int32
+ var name, attachmentType string
+ var size int32
+
+ if attachmentRows.Scan(&attachmentID, &name, &attachmentType, &size) == nil {
+ // Found stored message body - would normally read the content from file storage
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ html := fmt.Sprintf(`
+
+
+ Stored Email Content
+ Message body is stored locally as attachment: %s (%s, %d bytes)
+
+
+
Content would be loaded from local storage here.
+
Attachment ID: %d
+
+
`, name, attachmentType, size, attachmentID)
+ w.Write([]byte(html))
+ return
+ }
+ }
+ }
+
+ // Try to fetch from Gmail if service is available
+ if h.gmailService != nil {
+ // Fetch from Gmail
+ message, err := h.gmailService.Users.Messages.Get("me", gmailMessageID.String).
+ Format("RAW").Do()
+ if err != nil {
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ html := fmt.Sprintf(`
+
+ Gmail API Error
+ Failed to fetch email from Gmail: %v
+
`, err)
+ w.Write([]byte(html))
+ return
+ }
+
+ // Decode message
+ rawEmail, err := base64.URLEncoding.DecodeString(message.Raw)
+ if err != nil {
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ html := fmt.Sprintf(`
+
+ Decode Error
+ Failed to decode email: %v
+
`, err)
+ w.Write([]byte(html))
+ return
+ }
+
+ // Parse with enmime
+ env, err := enmime.ReadEnvelope(bytes.NewReader(rawEmail))
+ if err != nil {
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ html := fmt.Sprintf(`
+
+ Parse Error
+ Failed to parse email: %v
+
`, err)
+ w.Write([]byte(html))
+ return
+ }
+
+ // Stream HTML or Text directly to client
+ if env.HTML != "" {
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ w.Write([]byte(env.HTML))
+ } else if env.Text != "" {
+ // Convert plain text to HTML for better display
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ html := fmt.Sprintf(`
+
+
+ Plain Text Email
+ This email contains only plain text content.
+
+
+
`, env.Text)
+ w.Write([]byte(html))
+ } else {
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ html := `
+
+ No Content
+ No HTML or text content found in this email.
+
`
+ w.Write([]byte(html))
+ }
+ return
+ }
+
+ // No Gmail service available - show error
+ w.Header().Set("Content-Type", "text/html; charset=utf-8")
+ html := fmt.Sprintf(`
+
+
+ Gmail Service Unavailable
+ Subject: %s
+ Date: %s
+ Gmail Message ID: %s
+
+
+
+
+
Integration Status
+
Gmail service is not available. To enable email content display:
+
+ Ensure credentials.json and token.json files are present
+ Configure Gmail API OAuth2 authentication
+ Restart the application
+
+
+ Gmail Message ID: %s
+
+
+
+
`,
+ subject,
+ created.Format("2006-01-02 15:04:05"),
+ gmailMessageID.String,
+ gmailMessageID.String)
+
+ w.Write([]byte(html))
+}
+
+// Stream attachment from Gmail
+func (h *EmailHandler) StreamAttachment(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ emailID, err := strconv.Atoi(vars["id"])
+ if err != nil {
+ http.Error(w, "Invalid email ID", http.StatusBadRequest)
+ return
+ }
+
+ attachmentID := vars["attachmentId"]
+
+ // Get email's Gmail message ID
+ var gmailMessageID sql.NullString
+ err = h.db.QueryRow("SELECT gmail_message_id FROM emails WHERE id = ?", emailID).Scan(&gmailMessageID)
+ if err != nil || !gmailMessageID.Valid {
+ http.Error(w, "Email not found or not a Gmail email", http.StatusNotFound)
+ return
+ }
+
+ if h.gmailService == nil {
+ http.Error(w, "Gmail service not available", http.StatusServiceUnavailable)
+ return
+ }
+
+ // For dynamic attachments, we need to fetch the message and find the attachment
+ message, err := h.gmailService.Users.Messages.Get("me", gmailMessageID.String).
+ Format("FULL").Do()
+ if err != nil {
+ http.Error(w, "Failed to fetch email from Gmail", http.StatusInternalServerError)
+ return
+ }
+
+ // Find the attachment by index
+ var targetAttachment *gmail.MessagePart
+ attachmentIndex := 1
+ findAttachment(message.Payload, attachmentID, &attachmentIndex, &targetAttachment)
+
+ if targetAttachment == nil || targetAttachment.Body == nil || targetAttachment.Body.AttachmentId == "" {
+ http.Error(w, "Attachment not found", http.StatusNotFound)
+ return
+ }
+
+ // Fetch attachment data from Gmail
+ attachment, err := h.gmailService.Users.Messages.Attachments.
+ Get("me", gmailMessageID.String, targetAttachment.Body.AttachmentId).Do()
+ if err != nil {
+ http.Error(w, "Failed to fetch attachment from Gmail", http.StatusInternalServerError)
+ return
+ }
+
+ // Decode base64
+ data, err := base64.URLEncoding.DecodeString(attachment.Data)
+ if err != nil {
+ http.Error(w, "Failed to decode attachment", http.StatusInternalServerError)
+ return
+ }
+
+ // Set headers and stream
+ filename := targetAttachment.Filename
+ if filename == "" {
+ // Generate filename from content type (same logic as extractGmailAttachments)
+ switch targetAttachment.MimeType {
+ case "application/pdf":
+ filename = "attachment.pdf"
+ case "image/png":
+ filename = "image.png"
+ case "image/jpeg":
+ filename = "image.jpg"
+ case "text/plain":
+ filename = "text.txt"
+ default:
+ filename = "attachment"
+ }
+ }
+
+ w.Header().Set("Content-Type", targetAttachment.MimeType)
+ w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
+ w.Header().Set("Content-Length", fmt.Sprintf("%d", len(data)))
+ w.Write(data)
+}
+
+// Helper function to find attachment by index
+func findAttachment(part *gmail.MessagePart, targetID string, currentIndex *int, result **gmail.MessagePart) {
+ // Check if this part is an attachment (same logic as extractGmailAttachments)
+ if part.Body != nil && part.Body.AttachmentId != "" {
+ fmt.Printf("Checking attachment %d (looking for %s): %s\n", *currentIndex, targetID, part.Filename)
+ if strconv.Itoa(*currentIndex) == targetID {
+ fmt.Printf("Found matching attachment!\n")
+ *result = part
+ return
+ }
+ *currentIndex++
+ }
+
+ for _, subPart := range part.Parts {
+ findAttachment(subPart, targetID, currentIndex, result)
+ if *result != nil {
+ return
+ }
+ }
+}
+
+// Helper function to join conditions
+func joinConditions(conditions []string, separator string) string {
+ if len(conditions) == 0 {
+ return ""
+ }
+ if len(conditions) == 1 {
+ return conditions[0]
+ }
+
+ result := conditions[0]
+ for i := 1; i < len(conditions); i++ {
+ result += separator + conditions[i]
+ }
+ return result
+}
+
+// Gmail OAuth2 functions
+func getGmailService(credentialsFile, tokenFile string) (*gmail.Service, error) {
+ ctx := context.Background()
+
+ b, err := ioutil.ReadFile(credentialsFile)
+ if err != nil {
+ return nil, fmt.Errorf("unable to read client secret file: %v", err)
+ }
+
+ config, err := google.ConfigFromJSON(b, gmail.GmailReadonlyScope)
+ if err != nil {
+ return nil, fmt.Errorf("unable to parse client secret file to config: %v", err)
+ }
+
+ client := getClient(config, tokenFile)
+ srv, err := gmail.NewService(ctx, option.WithHTTPClient(client))
+ if err != nil {
+ return nil, fmt.Errorf("unable to retrieve Gmail client: %v", err)
+ }
+
+ return srv, nil
+}
+
+func getClient(config *oauth2.Config, tokFile string) *http.Client {
+ tok, err := tokenFromFile(tokFile)
+ if err != nil {
+ return nil
+ }
+ return config.Client(context.Background(), tok)
+}
+
+func tokenFromFile(file string) (*oauth2.Token, error) {
+ f, err := os.Open(file)
+ if err != nil {
+ return nil, err
+ }
+ defer f.Close()
+ tok := &oauth2.Token{}
+ err = json.NewDecoder(f).Decode(tok)
+ return tok, err
+}
\ No newline at end of file
diff --git a/go-app/internal/cmc/handlers/pages.go b/go-app/internal/cmc/handlers/pages.go
index 7853fe7a..73417f2a 100644
--- a/go-app/internal/cmc/handlers/pages.go
+++ b/go-app/internal/cmc/handlers/pages.go
@@ -1,9 +1,11 @@
package handlers
import (
+ "database/sql"
"log"
"net/http"
"strconv"
+ "time"
"code.springupsoftware.com/cmc/cmc-sales/internal/cmc/db"
"code.springupsoftware.com/cmc/cmc-sales/internal/cmc/templates"
@@ -15,12 +17,14 @@ import (
type PageHandler struct {
queries *db.Queries
tmpl *templates.TemplateManager
+ db *sql.DB
}
-func NewPageHandler(queries *db.Queries, tmpl *templates.TemplateManager) *PageHandler {
+func NewPageHandler(queries *db.Queries, tmpl *templates.TemplateManager, database *sql.DB) *PageHandler {
return &PageHandler{
queries: queries,
tmpl: tmpl,
+ db: database,
}
}
@@ -813,3 +817,404 @@ func (h *PageHandler) DocumentsView(w http.ResponseWriter, r *http.Request) {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
+
+// Email page handlers
+func (h *PageHandler) EmailsIndex(w http.ResponseWriter, r *http.Request) {
+ page := 1
+ if p := r.URL.Query().Get("page"); p != "" {
+ if val, err := strconv.Atoi(p); err == nil && val > 0 {
+ page = val
+ }
+ }
+
+ limit := 30
+ offset := (page - 1) * limit
+ search := r.URL.Query().Get("search")
+ filter := r.URL.Query().Get("filter")
+
+ // Build SQL query based on filters
+ query := `
+ SELECT e.id, e.subject, e.user_id, e.created, e.gmail_message_id,
+ e.email_attachment_count, e.is_downloaded,
+ u.email as user_email, u.first_name, u.last_name
+ FROM emails e
+ LEFT JOIN users u ON e.user_id = u.id`
+
+ var args []interface{}
+ var conditions []string
+
+ // Apply search filter
+ if search != "" {
+ conditions = append(conditions, "(e.subject LIKE ? OR e.raw_headers LIKE ?)")
+ searchTerm := "%" + search + "%"
+ args = append(args, searchTerm, searchTerm)
+ }
+
+ // Apply type filter
+ switch filter {
+ case "downloaded":
+ conditions = append(conditions, "e.is_downloaded = 1")
+ case "gmail":
+ conditions = append(conditions, "e.gmail_message_id IS NOT NULL")
+ case "unassociated":
+ conditions = append(conditions, `NOT EXISTS (
+ SELECT 1 FROM emails_enquiries WHERE email_id = e.id
+ UNION SELECT 1 FROM emails_invoices WHERE email_id = e.id
+ UNION SELECT 1 FROM emails_purchase_orders WHERE email_id = e.id
+ UNION SELECT 1 FROM emails_jobs WHERE email_id = e.id
+ )`)
+ }
+
+ if len(conditions) > 0 {
+ query += " WHERE " + joinConditions(conditions, " AND ")
+ }
+
+ query += " ORDER BY e.id DESC LIMIT ? OFFSET ?"
+ args = append(args, limit+1, offset) // Get one extra to check if there are more
+
+ // Execute the query to get emails
+ rows, err := h.db.Query(query, args...)
+ if err != nil {
+ log.Printf("Error querying emails: %v", err)
+ http.Error(w, "Database error", http.StatusInternalServerError)
+ return
+ }
+ defer rows.Close()
+
+ type EmailWithUser struct {
+ ID int32 `json:"id"`
+ Subject string `json:"subject"`
+ UserID int32 `json:"user_id"`
+ Created time.Time `json:"created"`
+ GmailMessageID *string `json:"gmail_message_id"`
+ AttachmentCount int32 `json:"attachment_count"`
+ IsDownloaded *bool `json:"is_downloaded"`
+ UserEmail *string `json:"user_email"`
+ FirstName *string `json:"first_name"`
+ LastName *string `json:"last_name"`
+ }
+
+ var emails []EmailWithUser
+ for rows.Next() {
+ var email EmailWithUser
+ var gmailMessageID, userEmail, firstName, lastName sql.NullString
+ var isDownloaded sql.NullBool
+
+ err := rows.Scan(
+ &email.ID,
+ &email.Subject,
+ &email.UserID,
+ &email.Created,
+ &gmailMessageID,
+ &email.AttachmentCount,
+ &isDownloaded,
+ &userEmail,
+ &firstName,
+ &lastName,
+ )
+ if err != nil {
+ log.Printf("Error scanning email row: %v", err)
+ continue
+ }
+
+ if gmailMessageID.Valid {
+ email.GmailMessageID = &gmailMessageID.String
+ }
+ if isDownloaded.Valid {
+ email.IsDownloaded = &isDownloaded.Bool
+ }
+ if userEmail.Valid {
+ email.UserEmail = &userEmail.String
+ }
+ if firstName.Valid {
+ email.FirstName = &firstName.String
+ }
+ if lastName.Valid {
+ email.LastName = &lastName.String
+ }
+
+ emails = append(emails, email)
+ }
+
+ hasMore := len(emails) > limit
+ if hasMore {
+ emails = emails[:limit]
+ }
+
+ data := map[string]interface{}{
+ "Emails": emails,
+ "Page": page,
+ "PrevPage": page - 1,
+ "NextPage": page + 1,
+ "HasMore": hasMore,
+ "TotalPages": ((len(emails) + limit - 1) / limit),
+ }
+
+ // Check if this is an HTMX request
+ if r.Header.Get("HX-Request") == "true" {
+ if err := h.tmpl.RenderPartial(w, "emails/table.html", "email-table", data); err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ }
+ return
+ }
+
+ if err := h.tmpl.Render(w, "emails/index.html", data); err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ }
+}
+
+func (h *PageHandler) EmailsShow(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ id, err := strconv.Atoi(vars["id"])
+ if err != nil {
+ http.Error(w, "Invalid email ID", http.StatusBadRequest)
+ return
+ }
+
+ // Get email details from database
+ emailQuery := `
+ SELECT e.id, e.subject, e.user_id, e.created, e.gmail_message_id,
+ e.gmail_thread_id, e.raw_headers, e.is_downloaded,
+ u.email as user_email, u.first_name, u.last_name
+ FROM emails e
+ LEFT JOIN users u ON e.user_id = u.id
+ WHERE e.id = ?`
+
+ var email struct {
+ ID int32 `json:"id"`
+ Subject string `json:"subject"`
+ UserID int32 `json:"user_id"`
+ Created time.Time `json:"created"`
+ GmailMessageID *string `json:"gmail_message_id"`
+ GmailThreadID *string `json:"gmail_thread_id"`
+ RawHeaders *string `json:"raw_headers"`
+ IsDownloaded *bool `json:"is_downloaded"`
+ User *struct {
+ Email string `json:"email"`
+ FirstName string `json:"first_name"`
+ LastName string `json:"last_name"`
+ } `json:"user"`
+ Enquiries []int32 `json:"enquiries"`
+ Invoices []int32 `json:"invoices"`
+ PurchaseOrders []int32 `json:"purchase_orders"`
+ Jobs []int32 `json:"jobs"`
+ }
+
+ var gmailMessageID, gmailThreadID, rawHeaders sql.NullString
+ var isDownloaded sql.NullBool
+ var userEmail, firstName, lastName sql.NullString
+
+ err = h.db.QueryRow(emailQuery, id).Scan(
+ &email.ID,
+ &email.Subject,
+ &email.UserID,
+ &email.Created,
+ &gmailMessageID,
+ &gmailThreadID,
+ &rawHeaders,
+ &isDownloaded,
+ &userEmail,
+ &firstName,
+ &lastName,
+ )
+
+ if err != nil {
+ if err == sql.ErrNoRows {
+ http.Error(w, "Email not found", http.StatusNotFound)
+ } else {
+ log.Printf("Error fetching email %d: %v", id, err)
+ http.Error(w, "Database error", http.StatusInternalServerError)
+ }
+ return
+ }
+
+ // Set nullable fields
+ if gmailMessageID.Valid {
+ email.GmailMessageID = &gmailMessageID.String
+ }
+ if gmailThreadID.Valid {
+ email.GmailThreadID = &gmailThreadID.String
+ }
+ if rawHeaders.Valid {
+ email.RawHeaders = &rawHeaders.String
+ }
+ if isDownloaded.Valid {
+ email.IsDownloaded = &isDownloaded.Bool
+ }
+
+ // Set user info if available
+ if userEmail.Valid {
+ email.User = &struct {
+ Email string `json:"email"`
+ FirstName string `json:"first_name"`
+ LastName string `json:"last_name"`
+ }{
+ Email: userEmail.String,
+ FirstName: firstName.String,
+ LastName: lastName.String,
+ }
+ }
+
+ // Get email attachments
+ attachmentQuery := `
+ SELECT id, name, type, size, filename, is_message_body, gmail_attachment_id, created
+ FROM email_attachments
+ WHERE email_id = ?
+ ORDER BY is_message_body DESC, created ASC`
+
+ attachmentRows, err := h.db.Query(attachmentQuery, id)
+ if err != nil {
+ log.Printf("Error fetching attachments for email %d: %v", id, err)
+ }
+
+ type EmailAttachment struct {
+ ID int32 `json:"id"`
+ Name string `json:"name"`
+ Type string `json:"type"`
+ Size int32 `json:"size"`
+ Filename string `json:"filename"`
+ IsMessageBody bool `json:"is_message_body"`
+ GmailAttachmentID *string `json:"gmail_attachment_id"`
+ Created time.Time `json:"created"`
+ }
+
+ var attachments []EmailAttachment
+ hasStoredAttachments := false
+
+ if attachmentRows != nil {
+ defer attachmentRows.Close()
+ for attachmentRows.Next() {
+ hasStoredAttachments = true
+ var attachment EmailAttachment
+ var gmailAttachmentID sql.NullString
+
+ err := attachmentRows.Scan(
+ &attachment.ID,
+ &attachment.Name,
+ &attachment.Type,
+ &attachment.Size,
+ &attachment.Filename,
+ &attachment.IsMessageBody,
+ &gmailAttachmentID,
+ &attachment.Created,
+ )
+ if err != nil {
+ log.Printf("Error scanning attachment: %v", err)
+ continue
+ }
+
+ if gmailAttachmentID.Valid {
+ attachment.GmailAttachmentID = &gmailAttachmentID.String
+ }
+
+ attachments = append(attachments, attachment)
+ }
+ }
+
+ // If no stored attachments and this is a Gmail email, show a notice
+ if !hasStoredAttachments && email.GmailMessageID != nil {
+ // For the page view, we'll just show a notice that attachments can be fetched
+ // The actual fetching will happen via the API endpoint when needed
+ log.Printf("Email %d is a Gmail email without indexed attachments", id)
+ }
+
+ // Get associated records (simplified queries for now)
+ // Enquiries
+ enquiryRows, err := h.db.Query("SELECT enquiry_id FROM emails_enquiries WHERE email_id = ?", id)
+ if err == nil {
+ defer enquiryRows.Close()
+ for enquiryRows.Next() {
+ var enquiryID int32
+ if enquiryRows.Scan(&enquiryID) == nil {
+ email.Enquiries = append(email.Enquiries, enquiryID)
+ }
+ }
+ }
+
+ // Invoices
+ invoiceRows, err := h.db.Query("SELECT invoice_id FROM emails_invoices WHERE email_id = ?", id)
+ if err == nil {
+ defer invoiceRows.Close()
+ for invoiceRows.Next() {
+ var invoiceID int32
+ if invoiceRows.Scan(&invoiceID) == nil {
+ email.Invoices = append(email.Invoices, invoiceID)
+ }
+ }
+ }
+
+ // Purchase Orders
+ poRows, err := h.db.Query("SELECT purchase_order_id FROM emails_purchase_orders WHERE email_id = ?", id)
+ if err == nil {
+ defer poRows.Close()
+ for poRows.Next() {
+ var poID int32
+ if poRows.Scan(&poID) == nil {
+ email.PurchaseOrders = append(email.PurchaseOrders, poID)
+ }
+ }
+ }
+
+ // Jobs
+ jobRows, err := h.db.Query("SELECT job_id FROM emails_jobs WHERE email_id = ?", id)
+ if err == nil {
+ defer jobRows.Close()
+ for jobRows.Next() {
+ var jobID int32
+ if jobRows.Scan(&jobID) == nil {
+ email.Jobs = append(email.Jobs, jobID)
+ }
+ }
+ }
+
+ data := map[string]interface{}{
+ "Email": email,
+ "Attachments": attachments,
+ }
+
+ if err := h.tmpl.Render(w, "emails/show.html", data); err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ }
+}
+
+func (h *PageHandler) EmailsSearch(w http.ResponseWriter, r *http.Request) {
+ _ = r.URL.Query().Get("search") // TODO: Implement search functionality
+
+ // Empty result for now - would need proper implementation
+ emails := []interface{}{}
+
+ data := map[string]interface{}{
+ "Emails": emails,
+ "Page": 1,
+ "PrevPage": 0,
+ "NextPage": 2,
+ "HasMore": false,
+ "TotalPages": 1,
+ }
+
+ w.Header().Set("Content-Type", "text/html")
+ if err := h.tmpl.RenderPartial(w, "emails/table.html", "email-table", data); err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ }
+}
+
+func (h *PageHandler) EmailsAttachments(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ _, err := strconv.Atoi(vars["id"])
+ if err != nil {
+ http.Error(w, "Invalid email ID", http.StatusBadRequest)
+ return
+ }
+
+ // Empty attachments for now - would need proper implementation
+ attachments := []interface{}{}
+
+ data := map[string]interface{}{
+ "Attachments": attachments,
+ }
+
+ w.Header().Set("Content-Type", "text/html")
+ if err := h.tmpl.RenderPartial(w, "emails/attachments.html", "email-attachments", data); err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ }
+}
diff --git a/go-app/internal/cmc/templates/templates.go b/go-app/internal/cmc/templates/templates.go
index 9446744e..ac5e8f8d 100644
--- a/go-app/internal/cmc/templates/templates.go
+++ b/go-app/internal/cmc/templates/templates.go
@@ -56,6 +56,10 @@ func NewTemplateManager(templatesDir string) (*TemplateManager, error) {
"enquiries/show.html",
"enquiries/form.html",
"enquiries/table.html",
+ "emails/index.html",
+ "emails/show.html",
+ "emails/table.html",
+ "emails/attachments.html",
"documents/index.html",
"documents/show.html",
"documents/table.html",
diff --git a/go-app/sql/migrations/001_add_gmail_fields.sql b/go-app/sql/migrations/001_add_gmail_fields.sql
new file mode 100644
index 00000000..9fd351b6
--- /dev/null
+++ b/go-app/sql/migrations/001_add_gmail_fields.sql
@@ -0,0 +1,52 @@
+-- +goose Up
+-- Add Gmail-specific fields to emails table
+ALTER TABLE emails
+ ADD COLUMN gmail_message_id VARCHAR(255) UNIQUE AFTER id,
+ ADD COLUMN gmail_thread_id VARCHAR(255) AFTER gmail_message_id,
+ ADD COLUMN is_downloaded BOOLEAN DEFAULT FALSE AFTER email_attachment_count,
+ ADD COLUMN raw_headers TEXT AFTER subject;
+
+-- +goose StatementBegin
+CREATE INDEX idx_gmail_message_id ON emails(gmail_message_id);
+-- +goose StatementEnd
+
+-- +goose StatementBegin
+CREATE INDEX idx_gmail_thread_id ON emails(gmail_thread_id);
+-- +goose StatementEnd
+
+-- Add Gmail-specific fields to email_attachments
+ALTER TABLE email_attachments
+ ADD COLUMN gmail_attachment_id VARCHAR(255) AFTER email_id,
+ ADD COLUMN gmail_message_id VARCHAR(255) AFTER gmail_attachment_id,
+ ADD COLUMN content_id VARCHAR(255) AFTER filename;
+
+-- +goose StatementBegin
+CREATE INDEX idx_gmail_attachment_id ON email_attachments(gmail_attachment_id);
+-- +goose StatementEnd
+
+-- +goose Down
+-- Remove indexes
+-- +goose StatementBegin
+DROP INDEX idx_gmail_attachment_id ON email_attachments;
+-- +goose StatementEnd
+
+-- +goose StatementBegin
+DROP INDEX idx_gmail_thread_id ON emails;
+-- +goose StatementEnd
+
+-- +goose StatementBegin
+DROP INDEX idx_gmail_message_id ON emails;
+-- +goose StatementEnd
+
+-- Remove columns from email_attachments
+ALTER TABLE email_attachments
+ DROP COLUMN content_id,
+ DROP COLUMN gmail_message_id,
+ DROP COLUMN gmail_attachment_id;
+
+-- Remove columns from emails
+ALTER TABLE emails
+ DROP COLUMN raw_headers,
+ DROP COLUMN is_downloaded,
+ DROP COLUMN gmail_thread_id,
+ DROP COLUMN gmail_message_id;
\ No newline at end of file
diff --git a/go-app/static/css/style.css b/go-app/static/css/style.css
index d5d7e6d7..7c622bba 100644
--- a/go-app/static/css/style.css
+++ b/go-app/static/css/style.css
@@ -106,4 +106,20 @@ body {
.input.is-danger:focus {
border-color: #ff3860;
box-shadow: 0 0 0 0.125em rgba(255,56,96,.25);
+}
+
+/* Simple CSS loader */
+.loader {
+ border: 2px solid #f3f3f3;
+ border-top: 2px solid #3273dc;
+ border-radius: 50%;
+ width: 16px;
+ height: 16px;
+ animation: spin 1s linear infinite;
+ display: inline-block;
+}
+
+@keyframes spin {
+ 0% { transform: rotate(0deg); }
+ 100% { transform: rotate(360deg); }
}
\ No newline at end of file
diff --git a/go-app/templates/emails/attachments.html b/go-app/templates/emails/attachments.html
new file mode 100644
index 00000000..24086346
--- /dev/null
+++ b/go-app/templates/emails/attachments.html
@@ -0,0 +1,67 @@
+{{define "email-attachments"}}
+
+
+
+
+ Name
+ Type
+ Size
+ Date
+ Actions
+
+
+
+ {{range .}}
+
+
+ {{if .IsMessageBody}}
+
+
+
+ {{.Name}}
+ Body
+ {{else}}
+
+
+
+ {{.Name}}
+ {{end}}
+
+
+ {{.Type}}
+
+ {{.Size}} bytes
+
+ {{.Created}}
+
+
+ {{if .GmailAttachmentID}}
+
+
+
+
+ Stream
+
+ {{else}}
+
+
+
+
+ Download
+
+ {{end}}
+
+
+ {{else}}
+
+
+ No attachments found
+
+
+ {{end}}
+
+
+
+{{end}}
\ No newline at end of file
diff --git a/go-app/templates/emails/index.html b/go-app/templates/emails/index.html
new file mode 100644
index 00000000..d3ba1bab
--- /dev/null
+++ b/go-app/templates/emails/index.html
@@ -0,0 +1,82 @@
+{{define "title"}}Emails - CMC Sales{{end}}
+
+{{define "content"}}
+
+
+
+
+
+
+
+
+ All Emails
+ Downloaded
+ Gmail Only
+ Unassociated
+
+
+
+
+
+
+
+
+ Filter
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Clear
+
+
+
+
+
+
+
+ {{template "email-table" .}}
+
+{{end}}
+
+{{define "scripts"}}
+
+{{end}}
\ No newline at end of file
diff --git a/go-app/templates/emails/show.html b/go-app/templates/emails/show.html
new file mode 100644
index 00000000..41a67278
--- /dev/null
+++ b/go-app/templates/emails/show.html
@@ -0,0 +1,293 @@
+{{define "title"}}Email {{.Email.ID}} - CMC Sales{{end}}
+
+{{define "content"}}
+
+
+
+
+
+
+
+ {{if .Email.Subject}}{{.Email.Subject}}{{else}}(No Subject) {{end}}
+
+
+
+
+
+ From:
+ {{if .Email.User}}
+ {{.Email.User.Email}} ({{.Email.User.FirstName}} {{.Email.User.LastName}})
+ {{else}}
+ Unknown sender
+ {{end}}
+
+
+ Date: {{.Email.Created.Format "2006-01-02 15:04:05"}}
+
+
+ Type:
+ {{if .Email.GmailMessageID}}
+
+
+
+
+ Gmail
+
+ {{if and .Email.IsDownloaded (not .Email.IsDownloaded)}}
+ Remote
+ {{else}}
+ Downloaded
+ {{end}}
+ {{else}}
+ Local Email
+ {{end}}
+
+
+ Attachments:
+ {{if gt (len .Attachments) 0}}
+ {{len .Attachments}} files
+ {{else}}
+ None
+ {{end}}
+
+ {{if .Email.GmailMessageID}}
+
+ Gmail Message ID:
+ {{.Email.GmailMessageID}}
+
+ {{end}}
+ {{if .Email.GmailThreadID}}
+
+ Gmail Thread ID:
+ {{.Email.GmailThreadID}}
+
+ {{end}}
+
+
+
+
+
+ {{if .Email.GmailMessageID}}
+
+
Email Content
+
+
+
+
+ Loading email content...
+
+
+
+
+ {{end}}
+
+
+ {{if gt (len .Attachments) 0}}
+
+
Attachments
+
+
+
+
+ Name
+ Type
+ Size
+ Date
+ Actions
+
+
+
+ {{range .Attachments}}
+
+
+ {{if .IsMessageBody}}
+
+
+
+ {{else}}
+
+
+
+ {{end}}
+ {{.Name}}
+
+
+ {{.Type}}
+
+ {{.Size}} bytes
+
+ {{.Created.Format "2006-01-02 15:04"}}
+
+
+ {{if .GmailAttachmentID}}
+
+
+
+
+ Download
+
+ {{else}}
+
+
+
+
+ Download
+
+ {{end}}
+
+
+ {{end}}
+
+
+
+
+ {{else if .Email.GmailMessageID}}
+
+
+
Attachments
+
+
+
+
+ Checking for Gmail attachments...
+
+
+
+
+ {{end}}
+
+
+
+
+
+
Associated Records
+
+ {{if .Email.Enquiries}}
+
+ {{end}}
+
+ {{if .Email.Invoices}}
+
+ {{end}}
+
+ {{if .Email.PurchaseOrders}}
+
+ {{end}}
+
+ {{if .Email.Jobs}}
+
+ {{end}}
+
+ {{if and (not .Email.Enquiries) (not .Email.Invoices) (not .Email.PurchaseOrders) (not .Email.Jobs)}}
+
+
No associations found
+
This email is not associated with any enquiries, invoices, purchase orders, or jobs.
+
+ {{end}}
+
+
+
+
+
Quick Actions
+
+
+
+
+
+ Associate with Record
+
+
+
+
+
+ Mark for Review
+
+ {{if .Email.GmailMessageID}}
+
+
+
+
+ Download Locally
+
+ {{end}}
+
+
+
+
+{{end}}
\ No newline at end of file
diff --git a/go-app/templates/emails/table.html b/go-app/templates/emails/table.html
new file mode 100644
index 00000000..a0798ce3
--- /dev/null
+++ b/go-app/templates/emails/table.html
@@ -0,0 +1,143 @@
+{{define "email-table"}}
+
+
+
+
+ ID
+ Subject
+ From
+ Date
+ Attachments
+ Type
+ Associated
+ Actions
+
+
+
+ {{range .Emails}}
+
+ {{.ID}}
+
+
+ {{if .Subject}}{{.Subject}}{{else}}(No Subject) {{end}}
+
+
+
+ {{if .UserEmail}}
+ {{.UserEmail}}
+ {{else}}
+ Unknown
+ {{end}}
+
+
+
+ {{.Created.Format "2006-01-02 15:04"}}
+
+
+
+ {{if gt .AttachmentCount 0}}
+
+
+
+
+ {{.AttachmentCount}}
+
+ {{else}}
+ None
+ {{end}}
+
+
+ {{if .GmailMessageID}}
+
+
+
+
+ Gmail
+
+ {{if and .IsDownloaded (not .IsDownloaded)}}
+
+
+
+
+ Remote
+
+ {{end}}
+ {{else}}
+
+
+
+
+ Local
+
+ {{end}}
+
+
+ Associations TBD
+
+
+
+
+
+ {{else}}
+
+
+ No emails found
+
+
+ {{end}}
+
+
+
+
+
+{{if .Emails}}
+
+{{end}}
+
+
+
+{{end}}
\ No newline at end of file
diff --git a/userpasswd b/userpasswd
index 9ad42166..bd161598 100644
--- a/userpasswd
+++ b/userpasswd
@@ -7,3 +7,4 @@ haris:$apr1$7xqS6Oxx$3HeURNx9ceTV4WsaZEx2h1
despina:$apr1$wyWhXD4y$UHG9//5wMwI3bkccyAMgz1
richard:$apr1$3RMqU9gc$6iw/ZrIkSwU96YMqVr0/k.
finley:$apr1$M4PiX6K/$k4/S5C.AMXPgpaRAirxKm0
+colleen:$apr1$ovbofsZ8$599TtnM7WVv/5eGDZpWYo0
diff --git a/vault_cron.sh b/vault_cron.sh
index 015996b7..928795da 100755
--- a/vault_cron.sh
+++ b/vault_cron.sh
@@ -2,5 +2,5 @@
## run by cmc user cron to run the vault inside docker
-ID=$(docker ps -q)
+ID=$(docker ps -f ancestor=cmc:latest --format "{{.ID}}" | head -n 1)
docker exec -t $ID /var/www/cmc-sales/run_vault.sh