# Changelog
> **Purpose**: Historical record of all significant infrastructure changes
## 2025-12-18
### Service Additions
- **Pi-hole**: Added network-wide ad blocker with recursive DNS
- LXC on pm2
- IP: 10.4.2.129
- Domain: pihole.kavcorp.com
- Unbound configured for recursive DNS resolution
- Deployed via ProxmoxVE community helper script
- Tagged: adblock, dns
## 2025-12-15
### Frigate Migration & Upgrade
- **Frigate**: Migrated from source install (LXC 111) to Docker-based (LXC 128)
- Old: LXC 111 on pm3 (source install, 0.14.1)
- New: LXC 128 on pm3 (Docker, 0.17.0-beta1)
- IP: 10.4.2.8
- Domain: frigate.kavcorp.com
- Privileged LXC required for USB device passthrough (Coral TPU)
- Coral USB TPU successfully passed through
- NFS mount for media storage: `/mnt/pve/KavNas/frigate-media`
- **Frigate Configuration Updates**:
- Enabled built-in authentication (port 8971)
- Updated MQTT to correct Home Assistant IP (10.4.2.199)
- Consolidated camera configs using global defaults
- Fixed garage stream bug (was using wrong ffmpeg source)
- Added stationary car filtering (stops tracking after 30 seconds); see the config sketch below
- **Traefik Updates**:
- Updated Frigate route to use HTTPS backend (port 8971)
- Added serversTransport for the self-signed cert (insecureSkipVerify); see the Traefik sketch below
- Fixed disk full issue (removed 903MB old access log)
- Added logrotate config: 50MB max, 3 rotations, daily
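
A minimal sketch of the two config changes above. First, the relevant excerpt of Frigate's `config.yml`; this assumes detection at Frigate's default 5 fps (so roughly 150 frames ≈ 30 seconds), and the threshold values shown are illustrative rather than the exact deployed ones:

```yaml
# Frigate config.yml (excerpt) - illustrative values, assuming 5 fps detection
mqtt:
  enabled: true
  host: 10.4.2.199        # Home Assistant / MQTT broker

detect:
  stationary:
    interval: 50          # re-check stationary objects every 50 frames
    threshold: 50         # frames before an object counts as stationary
    max_frames:
      objects:
        car: 150          # drop tracking of stationary cars after ~30s at 5 fps

# Camera-wide defaults (objects, detect, record) live at the top level;
# per-camera blocks only override what differs, e.g. the corrected garage input.
```

Second, a sketch of the updated Traefik dynamic config for Frigate (file provider). The router, service, and transport names and the `websecure` entryPoint are assumptions; the backend URL matches the new LXC 128:

```yaml
# /etc/traefik/conf.d/frigate.yaml (sketch) - names and entryPoint are assumed
http:
  serversTransports:
    frigate-insecure:
      insecureSkipVerify: true        # Frigate serves 8971 with a self-signed cert
  routers:
    frigate:
      rule: "Host(`frigate.kavcorp.com`)"
      entryPoints:
        - websecure
      service: frigate
  services:
    frigate:
      loadBalancer:
        serversTransport: frigate-insecure
        servers:
          - url: "https://10.4.2.8:8971"   # HTTPS backend on LXC 128
```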
### Service Recovery
- **Power Outage Recovery**: Started all stopped LXCs on pm2, pm3, pm4
- **VM 109 (docker-pm3)**: Fixed missing onboot setting
### Infrastructure Notes
- LXC 111 (old Frigate) pending deletion after new setup confirmed
- Port 5000 on Frigate remains available for Home Assistant integration (unauthenticated)
- Admin credentials logged on first auth-enabled startup
## 2025-12-08
### Service Configuration
- **Shinobi (LXC 103)**: Configured NVR storage and Traefik endpoint
- Added to Traefik reverse proxy: shinobi.kavcorp.com
- Traefik config: `/etc/traefik/conf.d/shinobi.yaml` (see the sketch after this list)
- Created NFS storage on elantris (`/el-pool/shinobi`) - 11TB available
- Added Proxmox NFS storage: `elantris-shinobi`
- Mounted NFS to LXC 103: `/opt/Shinobi/videos`
- Coral USB TPU device passed through to container
- Coral object detection plugin attempted but blocked by TensorFlow Lite unavailability for Ubuntu 24.04/Python 3.12
- Motion detection available and working
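
For comparison with the HTTPS Frigate route above, a plain-HTTP route like this one is just a router plus a service in Traefik's file provider. A sketch of what `/etc/traefik/conf.d/shinobi.yaml` could look like; Shinobi's port 8080 and the `websecure` entryPoint are assumptions, and the container IP is a placeholder because it is not recorded in this changelog:

```yaml
# /etc/traefik/conf.d/shinobi.yaml (sketch) - port, entryPoint and IP are assumptions
http:
  routers:
    shinobi:
      rule: "Host(`shinobi.kavcorp.com`)"
      entryPoints:
        - websecure
      service: shinobi
  services:
    shinobi:
      loadBalancer:
        servers:
          - url: "http://10.4.2.x:8080"   # LXC 103; placeholder address
```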
### Notes
- Coral TPU native plugin requires building TensorFlow Lite from source, which is complex for Ubuntu 24.04
- Basic motion detection works out of the box for event recording
- Object detection may require an alternative approach (Frigate or CPU-based detection)
## 2025-12-07
### Service Additions
- **Vaultwarden**: Created new password manager LXC
- LXC 125 on pm4
- IP: 10.4.2.212
- Domain: vtw.kavcorp.com
- Traefik config: `/etc/traefik/conf.d/vaultwarden.yaml`
- Tagged: community-script, password-manager
- **Immich**: Migrated from Docker (dockge LXC 107 on pm3) to native LXC
- LXC 126 on pm4
- IP: 10.4.2.24:2283
- Domain: immich.kavcorp.com
- Traefik config: `/etc/traefik/conf.d/immich.yaml`
- Library storage: NFS mount from elantris (`/el-pool/downloads/immich/`)
- 38GB photo library transferred via rsync
- Fresh database (version incompatibility: old v1.129.0 → new v2.3.1)
- Services: immich-web.service, immich-ml.service
- Tagged: community-script, photos
- **Gitea**: Added self-hosted Git server
- LXC 127 on pm4
- IP: 10.4.2.7:3000
- Domain: git.kavcorp.com
- Traefik config: `/etc/traefik/conf.d/gitea.yaml`
- Config: `/etc/gitea/app.ini`
- Push-to-create enabled for users and orgs
- Initial repo: `proxmox-infra` (infrastructure documentation)
- Tagged: community-script, git
### Infrastructure Maintenance
- **Traefik (LXC 104)**: Fixed disk full issue
- Truncated 895MB access log that filled 2GB rootfs
- Added logrotate config to prevent recurrence (50MB max, 7 day rotation)
- Cleaned apt cache and journal logs
## 2025-11-20
### Service Changes
- **AMP**: Added to Traefik reverse proxy
- LXC 124 on elantris (10.4.2.26:8080)
- Domain: amp.kavcorp.com
- Traefik config: `/etc/traefik/conf.d/amp.yaml`
- Purpose: Game server management via CubeCoders AMP
## 2025-11-19
### Service Changes
- **LXC 123 (elantris)**: Migrated from Ollama to llama.cpp
- Removed Ollama installation and service
- Built llama.cpp from source with CURL support
- Downloaded TinyLlama 1.1B Q4_K_M model (~667MB)
- Created systemd service for llama.cpp server
- Server running on port 11434 (OpenAI-compatible API)
- Model path: `/opt/llama.cpp/models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf`
- Service: `llama-cpp.service`
- Domain remains: ollama.kavcorp.com (pointing to llama.cpp now)
- **LXC 124 (elantris)**: Created new AMP (Application Management Panel) container
- IP: 10.4.2.26
- Resources: 4 CPU cores, 4GB RAM, 16GB storage
- Storage: local-lvm on elantris
- OS: Ubuntu 24.04 LTS
- Purpose: Game server management via CubeCoders AMP
- Tagged: gaming, amp
## 2025-11-17
### Service Additions
- **Ollama**: Added to Traefik reverse proxy
- LXC 123 on elantris
- IP: 10.4.2.224:11434
- Domain: ollama.kavcorp.com
- Traefik config: `/etc/traefik/conf.d/ollama.yaml`
- Downloaded Qwen 3 Coder 30B model
- **Frigate**: Added to Traefik reverse proxy
- LXC 111 on pm3
- IP: 10.4.2.215:5000
- Domain: frigate.kavcorp.com
- Traefik config: `/etc/traefik/conf.d/frigate.yaml`
- **Foundry VTT**: Added to Traefik reverse proxy
- LXC 112 on pm3
- IP: 10.4.2.37:30000
- Domain: vtt.kavcorp.com
- Traefik config: `/etc/traefik/conf.d/foundry.yaml`
### Infrastructure Changes
- **SSH Access**: Regenerated SSH keys on pm2 and distributed to all cluster nodes
- pm3 SSH service was down; enabled and configured it
- All nodes (pm1, pm2, pm3, pm4, elantris) now accessible from pm2 via Proxmox web UI
### Service Configuration
- **NZBGet**: Fixed file permissions
- Set `UMask=0000` in nzbget.conf to create files with 777 permissions
- Fixed permission issues causing Sonarr import failures
- **Sonarr**: Enabled automatic permission setting
- Media Management → Set Permissions → chmod 777
- Ensures imported files are accessible by Jellyfin
- **Jellyseerr**: Fixed Traefik routing
- Corrected IP from 10.4.2.20 to 10.4.2.18 in media-services.yaml
- **Jellyfin**: Fixed LXC mount issues
- Restarted LXC 121 to activate media mounts
- Media now visible in `/media/tv`, `/media/movies`, `/media/anime`
### Documentation
- **Major Reorganization**: Consolidated scattered docs into structured system
- Created `README.md` - Documentation index and guide
- Created `INFRASTRUCTURE.md` - All infrastructure details
- Created `CONFIGURATIONS.md` - Service configurations
- Created `DECISIONS.md` - Architecture decisions and patterns
- Created `TASKS.md` - Current and pending tasks
- Created `CHANGELOG.md` - This file
- Updated `CLAUDE.md` - Added documentation policy
## 2025-11-16
### Service Deployments
- **Home Assistant**: Added to Traefik reverse proxy
- Domain: hass.kavcorp.com
- Configured trusted proxies in Home Assistant (see the sketch after this list)
- **Frigate**: Added to Traefik reverse proxy
- Domain: frigate.kavcorp.com
- **Proxmox**: Added to Traefik reverse proxy
- Domain: pm.kavcorp.com
- Backend: pm2 (10.4.2.6:8006)
- **Recyclarr**: Configured TRaSH Guides automation
- Sonarr and Radarr quality profiles synced
- Dolby Vision blocking implemented
- Daily sync schedule via cron
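
Putting Home Assistant behind a reverse proxy also requires telling it which proxy to trust, which is what the trusted-proxies change above refers to. A sketch of the `configuration.yaml` excerpt, with the trusted network shown as an assumption (the changelog does not record the exact value used):

```yaml
# Home Assistant configuration.yaml (excerpt) - trusted network is an assumption
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 10.4.2.0/24    # or just the Traefik container's address
```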
### Configuration Changes
- **Traefik**: Removed Authelia from *arr services
- Services now use only built-in authentication
- Simplified access for Sonarr, Radarr, Prowlarr, Bazarr, Whisparr, NZBGet
### Issues Encountered
- Media organization script moved files incorrectly
- Sonarr database corruption (lost TV series tracking)
- Permission issues with NZBGet downloads
- Jellyfin LXC mount not active after deployment
### Lessons Learned
- Always verify file permissions (777 required for NFS media)
- Backup service databases before running automation scripts
- LXC mounts may need container restart to activate
- Traefik auto-reloads configs, no restart needed
## Earlier History
*To be documented from previous sessions if needed*