Compare commits

...

29 Commits

Author SHA1 Message Date
72f1b3f100 Auto-commit: 2026-01-17 21:28 session changes 2026-01-17 21:28:07 -05:00
42e2241617 fix: Update RomM mount path and add network note
- Changed library mount from /mnt/kavnas/Roms/roms to /mnt/kavnas/Roms
  to match RomM's expected Structure A (/library/roms/<platform>/)
- Added docker volumes for romm_resources and romm_redis_data
- Documented docker-pm3 gateway (10.4.2.1) in netplan config

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 19:35:28 -05:00
2391a5bf86 config: Add RomM metadata provider API keys
Migrated from old KavNas config:
- ScreenScraper (kavren)
- RetroAchievements (kavren)
- SteamGridDB
- PlayMatch

Also added Emulationdrive exclusion to config.yml

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 18:54:20 -05:00
800defa479 config: Enable RomM auto-scanning
- Added ENABLE_SCHEDULED_RESCAN (daily at 3 AM)
- Added ENABLE_RESCAN_ON_FILESYSTEM_CHANGE (5 min delay)
- Updated docker-compose and documentation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 18:47:51 -05:00
16de4929bc config: Add Traefik route for RomM
- romm.kavcorp.com → 10.4.2.202:8998
- SSL via Let's Encrypt

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 18:36:35 -05:00
2fefe45ebf add: RomM 4.5 ROM manager on docker-pm3
- Deployed RomM via Docker on VM 109 (docker-pm3)
- URL: http://10.4.2.202:8998
- ROM library mounted from KavNas:/volume1/Media/Roms
- MariaDB backend for metadata
- Added persistent NFS mount on docker-pm3

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 18:33:08 -05:00
96d413602f docs: Update Sonarr IP to 10.4.2.20, Jellyseerr to 10.4.2.25
Fixed outdated IP addresses across documentation and scripts.
Sonarr LXC 105 is at 10.4.2.20, not 10.4.2.15.
Jellyseerr LXC 115 is at 10.4.2.25, not 10.4.2.20.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 18:24:17 -05:00
561bea3028 fix: Foundry VTT bad gateway - set static IP on LXC 112
LXC 112 was using DHCP and got IP 10.4.2.177 instead of expected
10.4.2.37, causing Traefik routing to fail.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 12:44:57 -05:00
c51fae4e7f add: Sunshine remote desktop on KavHTPC
Installed Sunshine game streaming host for low-latency media streaming.
Configured UFW firewall rules for Trusted and LAN VLANs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 20:54:19 -05:00
41ccc28800 Auto-commit: 2026-01-03 16:13 session changes 2026-01-03 16:13:13 -05:00
2705ad59bd update: Rename HTPC to KavHTPC
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 15:39:25 -05:00
c9f30559b5 add: Basement HTPC to infrastructure docs
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 15:38:10 -05:00
2c8dbe29c8 add: Traefik route for dsm.kavcorp.com
Added Synology DSM access via Traefik at dsm.kavcorp.com
Backend: http://10.4.2.13:5001 (DSM uses HTTP on this port)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 20:24:48 -05:00
c12c531c33 add: Guest VLAN access to Traefik HTTPS
Allow Guest VLAN to access Traefik on port 443 so guests can use
https://jellyfin.kavcorp.com etc. with valid Let's Encrypt certs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 22:06:17 -05:00
3f0c050ff6 add: Internal DNS for kavcorp.com domains via Pi-hole
- Added *.kavcorp.com DNS entries pointing to Traefik (10.4.2.10)
- Internal clients can use https://jellyfin.kavcorp.com with valid certs
- Same URLs work internally and externally, no port numbers needed
- Also added Traefik internal entrypoint on :8080 for .kav HTTP access

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 22:04:18 -05:00
e4506d6a07 Auto-commit: 2025-12-28 22:00 session changes 2025-12-28 22:00:45 -05:00
9002edcf9a add: Guest VLAN access to Jellyfin and Jellyseerr
Added OPNsense firewall rules allowing Guest VLAN (10.4.30.0/24) to
access media services:
- Jellyseerr: 10.4.2.25
- Jellyfin: 10.4.2.26

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 21:53:40 -05:00
1cfd87f450 fix: Guest VLAN internet - DNS pointed to non-existent IP
Root cause: OPNsense DHCP and firewall rules referenced 10.4.2.129
for Pi-hole DNS, but that IP doesn't exist. Pi-hole is at 10.4.2.11.

Updated all references in OPNsense config.xml and documentation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 21:45:29 -05:00
24728729f2 add: RustDesk server deployment
- LXC 129 on pm2 with static IP 10.4.2.36
- Local DNS: rustdesk.kav
- Updated INFRASTRUCTURE.md and CHANGELOG.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 18:14:00 -05:00
87e3a74aa1 Auto-commit: 2025-12-28 17:41 session changes 2025-12-28 17:41:52 -05:00
4ee4f357c8 config: Update Stop hook to use SSH for git push
Changed from HTTPS to SSH URL (gitea@git.kavcorp.com:kavren/proxmox-infra.git)
to fix authentication issues with auto-push.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 17:35:17 -05:00
3dd1581a3e docs: Add Gitea integration and tea CLI instructions
- Added Gitea Integration section with server details
- Documented tea CLI as the tool for interacting with Gitea
- Added issue tracking workflow for enhancement management
- Included common tea commands reference

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 17:32:33 -05:00
9051c84bae docs: Network infrastructure cleanup - static IPs, local DNS, SSH access
- Complete static IP migration for all containers
- Configure Pi-hole local DNS with .kav hostnames
- Add SSH provisioning script for all containers
- Create NETWORK-MAP.md with complete IP allocation
- Create network-map.sh for dynamic map generation
- Update INFRASTRUCTURE.md with new service map
- Add .kav TLD and SSH policy decisions to DECISIONS.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 17:11:32 -05:00
1d649c4349 Auto-commit: 2025-12-28 16:32 session changes 2025-12-28 16:32:51 -05:00
128b38a457 Auto-commit: 2025-12-23 15:51 session changes 2025-12-23 15:51:16 -05:00
6b6e029728 docs: Update Frigate and Home Assistant IPs
Both services use DHCP and IPs changed:
- Frigate: 10.4.2.8 → 10.4.2.176
- Home Assistant: 10.4.2.62 → 10.4.2.175

Traefik configs updated on LXC 104.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 11:56:47 -05:00
9cdf6e31ef Auto-commit: 2025-12-22 16:52 session changes 2025-12-22 16:52:44 -05:00
8cdda4933c docs: Complete media stack IP migration
Migrated all media services to 10.4.2.20-29 range:
- Sonarr: 10.4.2.20
- Whisparr: 10.4.2.21
- Prowlarr: 10.4.2.22
- Bazarr: 10.4.2.23
- Radarr: 10.4.2.24
- Jellyseerr: 10.4.2.25
- Jellyfin: 10.4.2.26
- Kometa: 10.4.2.27
- Recyclarr: 10.4.2.28
- Notifiarr: 10.4.2.29
- Immich: 10.4.2.30

All Traefik configs updated.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-22 16:51:20 -05:00
9c96acb428 docs: Update git remote to HTTPS domain
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-22 16:40:12 -05:00
18 changed files with 937 additions and 92 deletions

View File

@@ -4,7 +4,9 @@
"Bash(chmod:*)", "Bash(chmod:*)",
"Bash(ssh:*)", "Bash(ssh:*)",
"Bash(ping:*)", "Bash(ping:*)",
"WebSearch" "WebSearch",
"Bash(tee:*)",
"Bash(curl:*)"
], ],
"deny": [], "deny": [],
"ask": [] "ask": []

View File

@@ -29,7 +29,34 @@ Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
 Types: `add`, `update`, `fix`, `docs`, `config`
-**Remote**: `origin` → http://10.4.2.7:3000/kavren/proxmox-infra.git
+**Remote**: `origin` → https://git.kavcorp.com/kavren/proxmox-infra.git
## Gitea Integration
**Server**: https://git.kavcorp.com (self-hosted Gitea)
**CLI Tool**: `tea` (Gitea's official CLI, similar to GitHub's `gh`)
### Issue Tracking
Use Gitea issues for enhancement tracking and task management:
```bash
# List issues
tea issues list
# Create new issue
tea issues create --title "Title" --body "Description"
# View issue
tea issues view <number>
```
### Common tea Commands
```bash
tea repos list # List repositories
tea pr list # List pull requests
tea pr create # Create pull request
tea issues list # List issues
tea repo clone <repo> # Clone a repository
```
## Repository Purpose
@@ -167,12 +194,12 @@ ip -br addr show
 - 10.4.2.14: elantris
 **Media Stack**:
-- 10.4.2.15: sonarr (LXC 105)
+- 10.4.2.20: sonarr (LXC 105)
 - 10.4.2.16: radarr (LXC 108)
 - 10.4.2.17: prowlarr (LXC 114)
 - 10.4.2.18: bazarr (LXC 119)
 - 10.4.2.19: whisparr (LXC 117)
-- 10.4.2.20: jellyseerr (LXC 115)
+- 10.4.2.25: jellyseerr (LXC 115)
 - 10.4.2.21: kometa (LXC 120)
 - 10.4.2.22: jellyfin (LXC 121)

View File

@@ -2,6 +2,124 @@
> **Purpose**: Historical record of all significant infrastructure changes
## 2026-01-12
### Sonarr Disk Space Fix
- Fixed SQLite disk I/O errors in Sonarr (LXC 105)
- Root cause: 4GB rootfs was 100% full, causing database corruption
- Resized rootfs from 4GB to 12GB
- Ran e2fsck to repair filesystem corruption
- Deleted corrupted logs.db (Sonarr recreates automatically)
### Documentation IP Corrections
- Updated Sonarr IP from 10.4.2.15 to 10.4.2.20 across all docs
- Updated Jellyseerr IP from 10.4.2.20 to 10.4.2.25 in infrastructure-map.md
- Files updated: CLAUDE.md, network.md, recyclarr-setup.md, CONFIGURATIONS.md, inventory.md, infrastructure-map.md, services.md, scripts/cleanup/*
### RomM 4.5 Installation
- Deployed RomM (ROM Manager) via Docker on docker-pm3 (VM 109)
- URL: http://10.4.2.202:8998
- ROM library mounted from KavNas at /mnt/kavnas/Roms/roms
- Assets directory at /mnt/kavnas/Roms/assets
- Added persistent NFS mount for KavNas on docker-pm3
- MariaDB backend for metadata storage
## 2026-01-04
### Foundry VTT Network Fix
- Fixed bad gateway error for vtt.kavcorp.com
- Issue: LXC 112 was using DHCP and got IP 10.4.2.177 instead of expected 10.4.2.37
- Fix: Set static IP 10.4.2.37/24 with gateway 10.4.2.1 on LXC 112
- Rebooted container to apply network changes
## 2026-01-03
### Sunshine Remote Desktop on KavHTPC
- Installed Sunshine (game streaming host) on KavHTPC for remote desktop access
- Configured UFW firewall rules for Sunshine ports from Trusted (10.4.0.0/24) and LAN (10.4.2.0/24)
- TCP: 47984, 47989, 47990, 48010
- UDP: 47998-48000
- Enabled sunshine.service as user service (auto-starts with user session)
- Hardware encoding: NVIDIA nvenc (H.264, HEVC, AV1)
- Web UI: https://10.4.2.190:47990
### Basement HTPC Added
- Added basement HTPC to infrastructure (10.4.2.190)
- Created SSH profile `htpc` in ~/.ssh/config
- Added new "Clients / Endpoints" section to INFRASTRUCTURE.md for DHCP-range devices
## 2026-01-02
### Synology DSM Traefik Route
- Added Traefik route for `dsm.kavcorp.com` → KavNas DSM (10.4.2.13:5001)
- Config: `/etc/traefik/conf.d/dsm.yaml`
- Note: DSM is serving HTTP on port 5001 (not HTTPS), Traefik terminates TLS
## 2025-12-28
### Guest VLAN Traefik Access
- Added firewall rule allowing Guest VLAN to access Traefik (10.4.2.10:443)
- Guests can now use `https://jellyfin.kavcorp.com` etc. with valid certs
### Internal DNS for kavcorp.com Domains
- Added Pi-hole DNS entries for `*.kavcorp.com` pointing to Traefik (10.4.2.10)
- Internal clients can now access `https://jellyfin.kavcorp.com` etc. with valid Let's Encrypt certs
- No port numbers needed, same URLs work internally and externally
- Also added Traefik `internal` entrypoint on port 8080 for .kav HTTP access (optional)
### Guest VLAN Media Access
- Added firewall rules allowing Guest VLAN to access Jellyseerr (10.4.2.25) and Jellyfin (10.4.2.26)
- Rules inserted before "Block Guest to LAN" to allow media streaming for guests
### Guest VLAN Internet Fix
- Fixed Guest VLAN (10.4.30.0/24) having no internet access
- Root cause: OPNsense DHCP and firewall rules referenced non-existent 10.4.2.129 for DNS
- Fix: Updated all DNS references in OPNsense config.xml from 10.4.2.129 to 10.4.2.11 (Pi-hole)
- Affected: DHCP DNS server settings for all VLANs, firewall DNS allow rules
- Guest clients need DHCP lease renewal to get correct DNS server
### RustDesk Server Deployment
- Deployed RustDesk server LXC 129 on pm2 via ProxmoxVE helper script
- Configured static IP: 10.4.2.36
- Added local DNS: rustdesk.kav
- Public key: `UCLpXJifKwWZRWIPqVkyrVfFH89DE8Ca0iBNZselaSU=`
- Services: hbbs (signal), hbbr (relay), api
- Ports: 21115-21119 (TCP), 21116 (UDP)
### Network Infrastructure Cleanup
#### Static IP Migration Complete
All containers now have static IPs in organized ranges:
- **Core Infrastructure** (10.4.2.10-19): Pi-hole→.11, Authelia→.12, Vaultwarden→.15
- **Media Stack** (10.4.2.20-29): All *arr services, Jellyfin, etc.
- **Services** (10.4.2.30-39): Immich→.30, Gitea→.31, Frigate→.32, Ollama→.34
- **IoT** (10.4.2.50-99): Z-Wave→.50, MQTT→.51
- **Docker Hosts** (10.4.2.200-209): docker-pm2→.200, docker-pm4→.201
#### Pi-hole Local DNS (.kav domain)
- Configured Pi-hole (10.4.2.11) as local DNS resolver
- All services now have `.kav` hostnames (e.g., traefik.kav, sonarr.kav)
- DNS records added via `dns.hosts` array in `/etc/pihole/pihole.toml`
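For reference, a minimal sketch of what those `dns.hosts` entries in `/etc/pihole/pihole.toml` might look like (Pi-hole v6 TOML syntax; the exact host list is illustrative):

```toml
[dns]
  # "IP hostname" strings; Pi-hole answers these locally before forwarding upstream
  hosts = [
    "10.4.2.10 traefik.kav",
    "10.4.2.20 sonarr.kav",
    "10.4.2.26 jellyfin.kav",
  ]
```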
#### SSH Access to All Containers
- Created provisioning script: `scripts/provisioning/setup-ssh-access.sh`
- All LXC containers now have SSH enabled with key-based auth
- Access via: `ssh root@<service>.kav`
#### Traefik Route Updates
- Updated backend IPs for: authelia.yaml, vaultwarden.yaml, pihole.yaml
- All routes now point to new static IPs
#### Documentation Updates
- Created `docs/NETWORK-MAP.md` with complete IP allocation
- Created `scripts/monitoring/network-map.sh` for dynamic map generation
- Updated `docs/INFRASTRUCTURE.md` with new service map
- Updated gateway references from 10.4.2.254 to 10.4.2.1
#### Pending
- Update OPNsense DHCP to distribute Pi-hole (10.4.2.11) as DNS
- Configure Home Assistant static IP (10.4.2.33) via HAOS UI
## 2025-12-22
### NAT Reflection & External Access Fix

View File

@@ -141,6 +141,33 @@ http:
insecureSkipVerify: true
```
## Synology DSM
**Location**: KavNas (Synology NAS)
**IP**: 10.4.2.13:5001
**Domain**: dsm.kavcorp.com
**Traefik Config** (`/etc/traefik/conf.d/dsm.yaml`):
```yaml
http:
routers:
dsm:
rule: "Host(`dsm.kavcorp.com`)"
entryPoints:
- websecure
service: dsm
tls:
certResolver: letsencrypt
services:
dsm:
loadBalancer:
servers:
- url: "http://10.4.2.13:5001"
```
**Note**: DSM is configured for HTTP on port 5001 (not HTTPS). Traefik terminates TLS.
## AMP (Application Management Panel)
**Location**: LXC 124 on elantris
@@ -183,7 +210,7 @@ http:
 ## Sonarr
 **Location**: LXC 105 on pm2
-**IP**: 10.4.2.15:8989
+**IP**: 10.4.2.20:8989
 **API Key**: b331fe18ec2144148a41645d9ce8b249
 **Media Management Settings**:
@@ -352,3 +379,81 @@ http:
- url: "http://10.4.2.24:2283" - url: "http://10.4.2.24:2283"
``` ```
## RomM
**Location**: Docker on docker-pm3 (VM 109)
**IP**: 10.4.2.202:8998
**Version**: 4.5.0
**Docker Compose** (`/opt/romm/docker-compose.yml`):
```yaml
services:
romm:
image: rommapp/romm:latest
container_name: romm
ports:
- 8998:8080
environment:
- DB_HOST=romm-db
- DB_NAME=romm
- DB_USER=romm-user
- DB_PASSWD=55e7720ac5100322678bacf0a7705bf9
- ROMM_AUTH_SECRET_KEY=05817a5501383c44287fc4079082f9fc0543013f186e61789aa2cc2be58d22e8
- HASHEOUS_API_ENABLED=true
- ENABLE_SCHEDULED_RESCAN=true
- SCHEDULED_RESCAN_CRON=0 3 * * *
- ENABLE_RESCAN_ON_FILESYSTEM_CHANGE=true
- RESCAN_ON_FILESYSTEM_CHANGE_DELAY=5
- SCREENSCRAPER_USER=kavren
- SCREENSCRAPER_PASSWORD=outlaw
- RETROACHIEVEMENTS_USERNAME=kavren
- RETROACHIEVEMENTS_API_KEY=obwPWYqylOhy9LA0Mapr64LrUuKNUZLc
- STEAMGRIDDB_API_KEY=447ec66ceaef54d52e249a403ec4d4ec
- PLAYMATCH_API_ENABLED=true
volumes:
- romm_resources:/romm/resources
- romm_redis_data:/redis-data
- /mnt/kavnas/Roms:/romm/library
- /opt/romm/config:/romm/config
romm-db:
image: mariadb:latest
container_name: romm-db
environment:
- MARIADB_ROOT_PASSWORD=55e7720ac5100322678bacf0a7705bf9
- MARIADB_DATABASE=romm
- MARIADB_USER=romm-user
- MARIADB_PASSWORD=55e7720ac5100322678bacf0a7705bf9
```
**NFS Mount** (docker-pm3 `/etc/fstab`):
```
10.4.2.13:/volume1/Media /mnt/kavnas nfs rw,soft,nfsvers=4 0 0
```
**ROM Library Structure** (RomM expects Structure A: `/library/roms/<platform>/`):
- Mount: `/mnt/kavnas/Roms:/romm/library`
- ROMs location: `/mnt/kavnas/Roms/roms/<platform>/` on the host, i.e. `/romm/library/roms/<platform>/` inside the container
- Assets: `/mnt/kavnas/Roms/assets/` (cover art, screenshots)
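Putting the two bullets together, the on-disk layout maps into the container roughly like this (the platform folder names here are illustrative, not from the doc):

```
/mnt/kavnas/Roms/          → mounted at /romm/library/
├── roms/                  → /romm/library/roms/   (RomM Structure A)
│   ├── snes/
│   └── psx/
└── assets/                (cover art, screenshots)
```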
**Network Note**: docker-pm3 gateway is 10.4.2.1 (configured in `/etc/netplan/50-cloud-init.yaml`)
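The netplan stanza implied by that note would be along these lines; the interface name and DNS address are assumptions, only the host IP (10.4.2.202), gateway (10.4.2.1), and file path come from the doc:

```yaml
# /etc/netplan/50-cloud-init.yaml (sketch; interface name assumed)
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 10.4.2.202/24
      routes:
        - to: default
          via: 10.4.2.1        # docker-pm3 gateway per the note above
      nameservers:
        addresses: [10.4.2.11]  # Pi-hole, assumed
```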
**Traefik Config** (`/etc/traefik/conf.d/romm.yaml`):
```yaml
http:
routers:
romm:
rule: "Host(`romm.kavcorp.com`)"
entryPoints:
- websecure
service: romm
tls:
certResolver: letsencrypt
services:
romm:
loadBalancer:
servers:
- url: "http://10.4.2.202:8998"
```

View File

@@ -41,6 +41,59 @@
## Network Architecture
### Local DNS (.kav TLD)
**Decision**: Use `.kav` as the local top-level domain for internal services
**Reason**:
- Unique to KavCorp network, avoids conflicts with real TLDs
- Short and memorable
- Works without additional configuration
- Pi-hole handles resolution via `dns.hosts` in pihole.toml
**Alternatives Considered**:
- `.lan` - Common but can conflict with some routers
- `.local` - Conflicts with mDNS/Bonjour
- `.home.arpa` - RFC 8375 compliant but verbose
**Usage**:
- **HTTPS (recommended)**: `https://<service>.kavcorp.com` - valid Let's Encrypt certs, works internally and externally
- **HTTP (optional)**: `http://<service>.kav:8080/` - internal only, no certs needed
**Internal DNS Configuration**:
- Pi-hole resolves `*.kavcorp.com` to Traefik (10.4.2.10) for internal HTTPS access
- Pi-hole resolves `.kav` domains to Traefik for HTTP:8080 access
- Direct access (no Traefik): pm1-4.kav, elantris.kav, kavnas.kav, docker hosts, mqtt.kav, zwave.kav
### SSH Access Policy
**Decision**: SSH from workstation only, no container-to-container SSH
**Reason**:
- Reduces attack surface
- Single key to manage
- Containers don't need to communicate via SSH
**Implementation**:
- Workstation ed25519 key added to all containers
- `PermitRootLogin prohibit-password` (key-only)
- Provisioning script: `scripts/provisioning/setup-ssh-access.sh`
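The key-only policy translates to sshd_config directives like these (standard OpenSSH options; a sketch of what the provisioning script presumably sets, not a quote from it):

```
# /etc/ssh/sshd_config (relevant directives only)
PermitRootLogin prohibit-password   # root may log in, but only with a key
PubkeyAuthentication yes
PasswordAuthentication no           # assumed; implied by "key-only"
```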
### IP Allocation Scheme
**Decision**: Organized IP ranges by service type
**Reason**: Easy to identify service type from IP, logical grouping
| Range | Purpose |
|-------|---------|
| 10.4.2.1 | Gateway (OPNsense) |
| 10.4.2.2-9 | Proxmox nodes |
| 10.4.2.10-19 | Core infrastructure |
| 10.4.2.20-29 | Media stack |
| 10.4.2.30-39 | Other services |
| 10.4.2.40-49 | Game servers |
| 10.4.2.50-99 | IoT / Reserved |
| 10.4.2.100-199 | DHCP pool |
| 10.4.2.200-209 | Docker hosts |
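As an illustration of the scheme, the table above can be turned into a small lookup helper (the function and its name are ours; the ranges and labels are copied from the table):

```python
# Last-octet ranges and purposes, straight from the allocation table.
RANGES = [
    (1, 1, "Gateway (OPNsense)"),
    (2, 9, "Proxmox nodes"),
    (10, 19, "Core infrastructure"),
    (20, 29, "Media stack"),
    (30, 39, "Other services"),
    (40, 49, "Game servers"),
    (50, 99, "IoT / Reserved"),
    (100, 199, "DHCP pool"),
    (200, 209, "Docker hosts"),
]

def classify(ip: str) -> str:
    """Return the purpose of a 10.4.2.x address per the allocation scheme."""
    prefix, _, host = ip.rpartition(".")
    if prefix != "10.4.2":
        return "outside LAN"
    last = int(host)
    for lo, hi, label in RANGES:
        if lo <= last <= hi:
            return label
    return "unallocated"
```

For example, `classify("10.4.2.25")` lands in the media-stack range, matching Jellyseerr's address elsewhere in the docs.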
### Network Isolation Strategy
**Goal**: Isolate IoT (KavCorp-IOT) and Guest (KavCorp-Guest) WiFi networks from the main LAN, while allowing Smart Home VMs to access IoT devices.
@@ -61,16 +114,18 @@ Unmanaged Gigabyte switches pass VLAN tags through (they just don't understand t
 #### DHCP Configuration
 All DHCP served by OPNsense:
-- LAN: 10.4.2.100-200, DNS: 10.4.2.129 (Pi-hole)
+- LAN: 10.4.2.100-200, DNS: 10.4.2.11 (Pi-hole)
-- Trusted: 10.4.10.100-200, DNS: 10.4.2.129
+- Trusted: 10.4.10.100-200, DNS: 10.4.2.11
-- IoT: 10.4.20.100-200, DNS: 10.4.2.129
+- IoT: 10.4.20.100-200, DNS: 10.4.2.11
-- Guest: 10.4.30.100-200, DNS: 10.4.2.129
+- Guest: 10.4.30.100-200, DNS: 10.4.2.11
 #### OPNsense Firewall Rules (Implemented)
 | Rule | Source | Destination | Action |
 |------|--------|-------------|--------|
-| Allow DNS | IoT/Guest | 10.4.2.129:53 | Pass |
+| Allow DNS | IoT/Guest | 10.4.2.11:53 | Pass |
+| Allow Guest→Traefik | 10.4.30.0/24 | 10.4.2.10:443 | Pass |
+| Allow Guest→Media | 10.4.30.0/24 | 10.4.2.25, 10.4.2.26 | Pass |
 | Block IoT→LAN | 10.4.20.0/24 | 10.4.2.0/24 | Block |
 | Block Guest→LAN | 10.4.30.0/24 | 10.4.2.0/24 | Block |
 | Block Guest→IoT | 10.4.30.0/24 | 10.4.20.0/24 | Block |

docs/IDEAS.md Normal file
View File

View File

@@ -15,38 +15,88 @@
 **Cluster Name**: KavCorp
 **Network**: 10.4.2.0/24
-**Gateway**: 10.4.2.254
+**Gateway**: 10.4.2.1 (OPNsense)
**DNS**: 10.4.2.11 (Pi-hole)
**Local Domain**: .kav
 ## Service Map
-| Service | IP:Port | Location | Domain | Auth |
-|---------|---------|----------|--------|------|
-| **Proxmox Web UI** | 10.4.2.6:8006 | pm2 | pm.kavcorp.com | Proxmox built-in |
-| **Traefik** | 10.4.2.10 | LXC 104 (pm2) | - | None (reverse proxy) |
-| **Authelia** | 10.4.2.19 | LXC 116 (pm2) | auth.kavcorp.com | SSO provider |
-| **Sonarr** | 10.4.2.15:8989 | LXC 105 (pm2) | sonarr.kavcorp.com | Built-in |
-| **Radarr** | 10.4.2.16:7878 | LXC 108 (pm2) | radarr.kavcorp.com | Built-in |
-| **Prowlarr** | 10.4.2.17:9696 | LXC 114 (pm2) | prowlarr.kavcorp.com | Built-in |
-| **Jellyseerr** | 10.4.2.18:5055 | LXC 115 (pm2) | jellyseerr.kavcorp.com | Built-in |
-| **Whisparr** | 10.4.2.20:6969 | LXC 117 (pm2) | whisparr.kavcorp.com | Built-in |
-| **Notifiarr** | 10.4.2.21 | LXC 118 (pm2) | - | API key |
-| **Jellyfin** | 10.4.2.21:8096 | LXC 121 (elantris) | jellyfin.kavcorp.com | Built-in |
-| **Bazarr** | 10.4.2.22:6767 | LXC 119 (pm2) | bazarr.kavcorp.com | Built-in |
-| **Kometa** | 10.4.2.23 | LXC 120 (pm2) | - | N/A |
-| **Recyclarr** | 10.4.2.25 | LXC 122 (pm2) | - | CLI only |
-| **NZBGet** | 10.4.2.13:6789 | Docker (kavnas) | nzbget.kavcorp.com | Built-in |
-| **Home Assistant** | 10.4.2.62:8123 | VM 100 (pm1) | hass.kavcorp.com | Built-in |
-| **Frigate** | 10.4.2.8:8971 | LXC 128 (pm3) | frigate.kavcorp.com | Built-in (auth required) |
-| **Foundry VTT** | 10.4.2.37:30000 | LXC 112 (pm3) | vtt.kavcorp.com | Built-in |
-| **llama.cpp** | 10.4.2.224:11434 | LXC 123 (elantris) | ollama.kavcorp.com | None (API) |
-| **AMP** | 10.4.2.26:8080 | LXC 124 (elantris) | amp.kavcorp.com | Built-in |
-| **Vaultwarden** | 10.4.2.212 | LXC 125 (pm4) | vtw.kavcorp.com | Built-in |
-| **Immich** | 10.4.2.24:2283 | LXC 126 (pm4) | immich.kavcorp.com | Built-in |
-| **Gitea** | 10.4.2.7:3000 | LXC 127 (pm4) | git.kavcorp.com | Built-in |
-| **Pi-hole** | 10.4.2.129 | LXC 103 (pm4) | pihole.kavcorp.com | Built-in |
-| **UniFi Controller** | 10.4.2.242:8443 | LXC 111 (pm4) | unifi.kavcorp.com | Built-in |
-| **OPNsense (KavSense)** | 10.4.2.1 | VM 130 (pm4) | opnsense.kavcorp.com | Built-in (net0: vmbr0/LAN, net1: vmbr1/WAN) |
-| **KavNas** | 10.4.2.13 | Synology NAS | - | NAS auth |
+> See [NETWORK-MAP.md](NETWORK-MAP.md) for complete IP allocation details.
+### Core Infrastructure (10.4.2.10-19)
+| Service | IP:Port | Location | Domain | Local DNS |
+|---------|---------|----------|--------|-----------|
+| **OPNsense** | 10.4.2.1 | VM 130 (pm4) | opnsense.kavcorp.com | opnsense.kav |
+| **Traefik** | 10.4.2.10 | LXC 104 (pm2) | - | traefik.kav |
+| **Pi-hole** | 10.4.2.11 | LXC 103 (pm4) | pihole.kavcorp.com | pihole.kav |
+| **Authelia** | 10.4.2.12:9091 | LXC 116 (pm2) | auth.kavcorp.com | authelia.kav |
+| **KavNas** | 10.4.2.13 | Synology NAS | - | kavnas.kav |
+| **Vaultwarden** | 10.4.2.15 | LXC 125 (pm4) | vtw.kavcorp.com | vaultwarden.kav |
+| **UniFi Controller** | 10.4.2.16:8443 | LXC 111 (pm4) | unifi.kavcorp.com | unifi.kav |
+### Media Stack (10.4.2.20-29)
+| Service | IP:Port | Location | Domain | Local DNS |
+|---------|---------|----------|--------|-----------|
+| **Sonarr** | 10.4.2.20:8989 | LXC 105 (pm2) | sonarr.kavcorp.com | sonarr.kav |
+| **Whisparr** | 10.4.2.21:6969 | LXC 117 (pm2) | whisparr.kavcorp.com | whisparr.kav |
+| **Prowlarr** | 10.4.2.22:9696 | LXC 114 (pm2) | prowlarr.kavcorp.com | prowlarr.kav |
+| **Bazarr** | 10.4.2.23:6767 | LXC 119 (pm2) | bazarr.kavcorp.com | bazarr.kav |
+| **Radarr** | 10.4.2.24:7878 | LXC 108 (pm2) | radarr.kavcorp.com | radarr.kav |
+| **Jellyseerr** | 10.4.2.25:5055 | LXC 115 (pm2) | jellyseerr.kavcorp.com | jellyseerr.kav |
+| **Jellyfin** | 10.4.2.26:8096 | LXC 121 (elantris) | jellyfin.kavcorp.com | jellyfin.kav |
+| **Kometa** | 10.4.2.27 | LXC 120 (pm2) | - | kometa.kav |
+| **Recyclarr** | 10.4.2.28 | LXC 122 (pm2) | - | recyclarr.kav |
+| **Notifiarr** | 10.4.2.29 | LXC 118 (pm2) | - | notifiarr.kav |
### Services (10.4.2.30-39)
| Service | IP:Port | Location | Domain | Local DNS |
|---------|---------|----------|--------|-----------|
| **Immich** | 10.4.2.30:2283 | LXC 126 (pm4) | immich.kavcorp.com | immich.kav |
| **Gitea** | 10.4.2.31:3000 | LXC 127 (pm4) | git.kavcorp.com | gitea.kav |
| **Frigate** | 10.4.2.32:8971 | LXC 128 (pm3) | frigate.kavcorp.com | frigate.kav |
| **Home Assistant** | 10.4.2.33:8123 | VM 100 (pm1) | hass.kavcorp.com | homeassistant.kav |
| **Ollama** | 10.4.2.34:11434 | LXC 123 (elantris) | ollama.kavcorp.com | ollama.kav |
| **Twingate** | 10.4.2.35 | LXC 101 (pm1) | - | twingate.kav |
| **RustDesk** | 10.4.2.36:21115-21119 | LXC 129 (pm2) | - | rustdesk.kav |
| **Foundry VTT** | 10.4.2.37:30000 | LXC 112 (pm3) | vtt.kavcorp.com | foundryvtt.kav |
### Game Servers (10.4.2.40-49)
| Service | IP:Port | Location | Domain | Local DNS |
|---------|---------|----------|--------|-----------|
| **AMP** | 10.4.2.40:8080 | LXC 124 (elantris) | amp.kavcorp.com | amp.kav |
### IoT / Home Automation (10.4.2.50-99)
| Service | IP:Port | Location | Domain | Local DNS |
|---------|---------|----------|--------|-----------|
| **Z-Wave JS UI** | 10.4.2.50 | LXC 102 (pm1) | - | zwave.kav |
| **MQTT** | 10.4.2.51:1883 | LXC 106 (pm3) | - | mqtt.kav |
### Docker Hosts (10.4.2.200-209)
| Service | IP | Location | Local DNS |
|---------|-----|----------|-----------|
| **docker-pm2** | 10.4.2.200 | LXC 113 (pm2) | docker-pm2.kav |
| **docker-pm4** | 10.4.2.201 | LXC 110 (pm4) | docker-pm4.kav |
| **docker-pm3** | 10.4.2.202 | VM 109 (pm3) | docker-pm3.kav |
| **Dockge** | 10.4.2.203 | LXC 107 (pm3) | dockge.kav |
### Clients / Endpoints (DHCP Range 10.4.2.100-199)
| Device | IP | Location | Notes |
|--------|-----|----------|-------|
| **KavHTPC** | 10.4.2.190 | Basement | CachyOS HTPC for media playback, Sunshine (47990) |
### Other
| Service | IP:Port | Location | Domain |
|---------|---------|----------|--------|
| **NZBGet** | 10.4.2.13:6789 | Docker (kavnas) | nzbget.kavcorp.com |
| **Proxmox Web UI** | 10.4.2.6:8006 | pm2 | pm.kavcorp.com |
## Storage Architecture
@@ -98,7 +148,7 @@ All `*.kavcorp.com` subdomains route through Traefik reverse proxy (10.4.2.10) f
 | Bridge | vmbr0 |
 | Physical Interface | eno1 |
 | CIDR | 10.4.2.0/24 |
-| Gateway | 10.4.2.254 |
+| Gateway | 10.4.2.1 (OPNsense) |
 #### pm4 Only (vmbr1 - WAN for OPNsense)
 | Setting | Value |
@@ -134,10 +184,18 @@ All `*.kavcorp.com` subdomains route through Traefik reverse proxy (10.4.2.10) f
 ### SSH Access
-- **User**: kavren (from local machine)
-- **User**: root (between cluster nodes)
+**Proxmox Nodes:**
+- **User**: root (from workstation or between nodes)
 - **Key Type**: ed25519
-- **Node-to-Node**: Passwordless SSH configured for cluster operations
+- **Access**: `ssh pm1`, `ssh pm2`, `ssh pm3`, `ssh pm4`, `ssh elantris`
**LXC Containers:**
- **User**: root
- **Key Type**: ed25519 (workstation key provisioned)
- **Access**: `ssh root@<service>.kav` (e.g., `ssh root@traefik.kav`)
- **Provisioning Script**: `scripts/provisioning/setup-ssh-access.sh`
All containers have SSH enabled with key-based authentication (PermitRootLogin prohibit-password).
### Important Paths

View File

@@ -8,41 +8,40 @@
 - [x] Traefik gateway fixed (was 10.4.2.254, now 10.4.2.1)
 - [x] NAT reflection enabled in OPNsense
 - [x] UniFi, Immich, Gitea set to static IPs and verified working through Traefik
-- [x] Radarr IP conflict resolved (moved from 10.4.2.16 to 10.4.2.24)
+- [x] Media stack IP reorganization complete (all in 10.4.2.20-29 range)
- [x] Traefik configs updated for all new IPs
**Current Static IPs (verified working):**
- UniFi (111): 10.4.2.16 - Traefik verified
- Gitea (127): 10.4.2.31 - Traefik verified
- Immich (126): 10.4.2.30 - Traefik verified
- Radarr (108): 10.4.2.24 - Traefik updated
 **Pending:**
-- [ ] Media stack IP reorganization (10.4.2.20-29)
-- [ ] Pi-hole migration (10.4.2.129 → 10.4.2.11)
-- [ ] KavNas / Elantris IP updates (deferred)
+- [ ] Core infrastructure IP updates (Authelia, Vaultwarden, Pi-hole)
+- [ ] Other services (Frigate, Foundry, Home Assistant, llama.cpp, AMP)
+- [ ] Docker hosts IP updates
 - [ ] Update docs/INFRASTRUCTURE.md with final IPs
 **Current IP Map (pm2 - 10.4.2.6):**
 - 104 traefik: 10.4.2.10
-- 105 sonarr: 10.4.2.15
+- 105 sonarr: 10.4.2.20 ✓
 - 108 radarr: 10.4.2.24
 - 113 docker-pm2: 10.4.2.203
-- 114 prowlarr: 10.4.2.17
+- 114 prowlarr: 10.4.2.22 ✓
-- 115 jellyseerr: 10.4.2.18
+- 115 jellyseerr: 10.4.2.25 ✓
 - 116 authelia: 10.4.2.19
-- 117 whisparr: 10.4.2.20
+- 117 whisparr: 10.4.2.21 ✓
-- 118 notifiarr: 10.4.2.21
+- 118 notifiarr: 10.4.2.29 ✓
-- 119 bazarr: 10.4.2.22
+- 119 bazarr: 10.4.2.23 ✓
-- 120 kometa: 10.4.2.23
+- 120 kometa: 10.4.2.27 ✓
-- 122 recyclarr: 10.4.2.25
+- 122 recyclarr: 10.4.2.28 ✓
 **Current IP Map (pm4 - 10.4.2.5):**
 - 103 pihole: 10.4.2.129
 - 110 docker-pm4: 10.4.2.204
 - 111 unifi: 10.4.2.16
 - 125 vaultwarden: 10.4.2.212
 - 126 immich: 10.4.2.30
 - 127 gitea: 10.4.2.31
**Current IP Map (elantris - 10.4.2.14):**
- 121 jellyfin: 10.4.2.26 ✓
- 124 amp: 10.4.2.40 ✓
## New IP Allocation Scheme ## New IP Allocation Scheme
@@ -73,21 +72,21 @@
| Vaultwarden | 125 | pm4 | 10.4.2.212 | 10.4.2.15 | 10.4.2.254→10.4.2.1 |
| UniFi | 111 | pm4 | 10.4.2.242 (DHCP) | 10.4.2.16 | Set to 10.4.2.1 |

### Services - Media Stack (10.4.2.20-29) ✅ COMPLETE

| Service | VMID | Node | IP | Status |
|---------|------|------|-----|--------|
| Sonarr | 105 | pm2 | 10.4.2.20 | ✅ Done |
| Whisparr | 117 | pm2 | 10.4.2.21 | ✅ Done |
| Prowlarr | 114 | pm2 | 10.4.2.22 | ✅ Done |
| Bazarr | 119 | pm2 | 10.4.2.23 | ✅ Done |
| Radarr | 108 | pm2 | 10.4.2.24 | ✅ Done |
| Jellyseerr | 115 | pm2 | 10.4.2.25 | ✅ Done |
| Jellyfin | 121 | elantris | 10.4.2.26 | ✅ Done |
| Kometa | 120 | pm2 | 10.4.2.27 | ✅ Done |
| Recyclarr | 122 | pm2 | 10.4.2.28 | ✅ Done |
| Notifiarr | 118 | pm2 | 10.4.2.29 | ✅ Done |
| Immich | 126 | pm4 | 10.4.2.30 | ✅ Done |
### Services - Other (10.4.2.30-39)

@@ -95,7 +94,7 @@
|---------|------|------|------------|--------|-------------|
| Immich | 126 | pm4 | DHCP | 10.4.2.30 | Set to 10.4.2.1 |
| Frigate | 128 | pm3 | 10.4.2.8 | 10.4.2.31 | Check |
| Foundry VTT | 112 | pm3 | 10.4.2.37 | 10.4.2.37 | ✅ Already in range |
| Home Assistant | 100 | pm1 | 10.4.2.62 | 10.4.2.33 | Check |
| llama.cpp | 123 | elantris | 10.4.2.224 | 10.4.2.34 | Check |
@@ -103,7 +102,7 @@
| Service | VMID | Node | Current IP | New IP | Gateway Fix |
|---------|------|------|------------|--------|-------------|
| AMP | 124 | elantris | 10.4.2.40 | 10.4.2.40 | ✅ Done |

### Docker Hosts (10.4.2.200-209)

docs/NETWORK-MAP.md (new file)

@@ -0,0 +1,157 @@
# KavCorp Network Map
> **Last Updated**: 2025-12-28
> **Network**: 10.4.2.0/24
> **Gateway**: 10.4.2.1 (OPNsense)
> **DNS**: 10.4.2.11 (Pi-hole)
## Network Topology
```
INTERNET
│ WAN (AT&T)
│ Public: 99.74.188.161
┌─────────────────────────────┐
│ OPNsense (VM 130) │
│ 10.4.2.1 │
│ WAN: vmbr1 | LAN: vmbr0 │
└──────────────┬──────────────┘
┌───────────────────────────┼───────────────────────────┐
│ vmbr0 Bridge │
│ 10.4.2.0/24 (LAN) │
└───┬───────┬───────┬───────┬───────┬───────────────────┘
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌──────────┐ ┌─────────┐
│ pm1 │ │ pm2 │ │ pm3 │ │ pm4 │ │ elantris │ │ KavNas │
│ .2 │ │ .6 │ │ .3 │ │ .5 │ │ .14 │ │ .13 │
└──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └────┬─────┘ └─────────┘
│ │ │ │ │
┌───────┘ │ │ │ └────────┐
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
┌──────┐ ┌───────────┐ ┌────┐ ┌──────────┐ ┌─────────┐
│HA │ │Media Stack│ │Game│ │ Infra │ │ Media │
│Zwave │ │ Services │ │Svcs│ │ Services │ │ Storage │
│Twing.│ │ │ │ │ │ │ │ │
└──────┘ └───────────┘ └────┘ └──────────┘ └─────────┘
```
## IP Address Allocation
### Proxmox Nodes (10.4.2.2-9)
| IP | Hostname | Description |
|----|----------|-------------|
| 10.4.2.2 | pm1.kav | Proxmox node 1 |
| 10.4.2.3 | pm3.kav | Proxmox node 3 |
| 10.4.2.5 | pm4.kav | Proxmox node 4 |
| 10.4.2.6 | pm2.kav | Proxmox node 2 (primary management) |
| 10.4.2.14 | elantris.kav | Proxmox node 5 (128GB RAM, ZFS) |
### Core Infrastructure (10.4.2.10-19)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.1 | opnsense.kav | OPNsense Gateway | 130 | pm4 |
| 10.4.2.10 | traefik.kav | Reverse Proxy | 104 | pm2 |
| 10.4.2.11 | pihole.kav | DNS Server | 103 | pm4 |
| 10.4.2.12 | authelia.kav | SSO Authentication | 116 | pm2 |
| 10.4.2.13 | kavnas.kav | Synology NAS | - | - |
| 10.4.2.15 | vaultwarden.kav | Password Manager | 125 | pm4 |
| 10.4.2.16 | unifi.kav | UniFi Controller | 111 | pm4 |
### Media Stack (10.4.2.20-29)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.20 | sonarr.kav | TV Shows | 105 | pm2 |
| 10.4.2.21 | whisparr.kav | Adult Content | 117 | pm2 |
| 10.4.2.22 | prowlarr.kav | Indexer Manager | 114 | pm2 |
| 10.4.2.23 | bazarr.kav | Subtitles | 119 | pm2 |
| 10.4.2.24 | radarr.kav | Movies | 108 | pm2 |
| 10.4.2.25 | jellyseerr.kav | Media Requests | 115 | pm2 |
| 10.4.2.26 | jellyfin.kav | Media Server | 121 | elantris |
| 10.4.2.27 | kometa.kav | Plex Meta Manager | 120 | pm2 |
| 10.4.2.28 | recyclarr.kav | Quality Profiles | 122 | pm2 |
| 10.4.2.29 | notifiarr.kav | Notifications | 118 | pm2 |
### Services (10.4.2.30-39)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.30 | immich.kav | Photo Management | 126 | pm4 |
| 10.4.2.31 | gitea.kav | Git Server | 127 | pm4 |
| 10.4.2.32 | frigate.kav | NVR | 128 | pm3 |
| 10.4.2.33 | homeassistant.kav | Home Automation | 100 | pm1 (VM) |
| 10.4.2.34 | ollama.kav | LLM Server | 123 | elantris |
| 10.4.2.35 | twingate.kav | Zero Trust Access | 101 | pm1 |
| 10.4.2.37 | foundryvtt.kav | Virtual Tabletop | 112 | pm3 |
### Game Servers (10.4.2.40-49)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.40 | amp.kav | Game Server Manager | 124 | elantris |
### IoT / Home Automation (10.4.2.50-99)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.50 | zwave.kav | Z-Wave JS UI | 102 | pm1 |
| 10.4.2.51 | mqtt.kav | MQTT Broker | 106 | pm3 |
### Docker Hosts (10.4.2.200-209)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.200 | docker-pm2.kav | Docker Host | 113 | pm2 |
| 10.4.2.201 | docker-pm4.kav | Docker Host | 110 | pm4 |
| 10.4.2.202 | docker-pm3.kav | Docker Host | 109 | pm3 (VM) |
| 10.4.2.203 | dockge.kav | Docker Management | 107 | pm3 |
## IP Range Summary
| Range | Purpose | Status |
|-------|---------|--------|
| 10.4.2.1 | OPNsense Gateway | Assigned |
| 10.4.2.2-9 | Proxmox Nodes | Assigned |
| 10.4.2.10-19 | Core Infrastructure | Assigned |
| 10.4.2.20-29 | Media Stack | Assigned |
| 10.4.2.30-39 | Services | Partially used |
| 10.4.2.40-49 | Game Servers | Partially used |
| 10.4.2.50-99 | IoT / Reserved | Partially used |
| 10.4.2.100-199 | DHCP Pool | Dynamic |
| 10.4.2.200-209 | Docker Hosts | Assigned |
| 10.4.2.210-239 | Reserved | Available |
| 10.4.2.240-249 | Network Controllers | Reserved |
| 10.4.2.250-254 | Network Gear | Reserved |
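The range table above is mechanical enough to encode. The sketch below is an illustrative helper (not a script in this repo, and the function name is made up) that maps a 10.4.2.x address to its allocated purpose by last octet:

```shell
#!/usr/bin/env bash
# ip-range-purpose: illustrative helper mirroring the IP Range Summary table.
purpose_for_ip() {
  local octet="${1##*.}"   # last octet of the address
  case "$octet" in
    1)                       echo "OPNsense Gateway" ;;
    [2-9])                   echo "Proxmox Nodes" ;;
    1[0-9])                  echo "Core Infrastructure" ;;
    2[0-9])                  echo "Media Stack" ;;
    3[0-9])                  echo "Services" ;;
    4[0-9])                  echo "Game Servers" ;;
    [5-9][0-9])              echo "IoT / Reserved" ;;
    1[0-9][0-9])             echo "DHCP Pool" ;;
    20[0-9])                 echo "Docker Hosts" ;;
    21[0-9]|22[0-9]|23[0-9]) echo "Reserved" ;;
    24[0-9])                 echo "Network Controllers" ;;
    25[0-4])                 echo "Network Gear" ;;
    *)                       echo "Unknown" ;;
  esac
}

purpose_for_ip 10.4.2.20    # Media Stack
purpose_for_ip 10.4.2.129   # DHCP Pool
```

Note that hosts which predate the scheme (e.g. elantris at 10.4.2.14, inside the Core Infrastructure range) will classify by range, not by actual role.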
## Access Methods
### SSH Access
All containers have SSH enabled with key-based authentication:
```bash
ssh root@<service>.kav
# Example: ssh root@traefik.kav
```
### Web Access
All web services are accessible via Traefik reverse proxy:
- External: `https://<service>.kavcorp.com`
- Internal: `http://<ip>:<port>`
### Local DNS
Pi-hole provides `.kav` domain resolution for all services.
Configure your device to use `10.4.2.11` as DNS server.
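On a Linux client running systemd-resolved, this can be done with a drop-in; a minimal sketch, assuming systemd-resolved is in use (the drop-in path is a conventional choice, not something this repo ships):

```ini
# /etc/systemd/resolved.conf.d/kav.conf — hypothetical drop-in, not part of this repo
[Resolve]
DNS=10.4.2.11
# Route *.kav lookups to Pi-hole without making it the only resolver
Domains=~kav
```

Apply with `sudo systemctl restart systemd-resolved`; on other platforms, simply set 10.4.2.11 as the primary DNS server.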
## Generating Updated Map
Use the network map script to generate a current view:
```bash
cd /home/kavren/proxmox-infra
./scripts/monitoring/network-map.sh
```


@@ -14,17 +14,18 @@
| Service | IP:Port | Location | Notes |
|---------|---------|----------|-------|
| Sonarr | 10.4.2.20:8989 | LXC 105 on pm2 | TV shows |
| Radarr | 10.4.2.16:7878 | - | Movies |
| Prowlarr | 10.4.2.17:9696 | - | Indexer manager |
| Bazarr | 10.4.2.18:6767 | - | Subtitles |
| Whisparr | 10.4.2.19:6969 | - | Adult content |
| Jellyseerr | 10.4.2.25:5055 | LXC 115 on pm2 | Request management |
| Jellyfin | 10.4.2.21:8096 | LXC 121 on elantris | Media server |
| NZBGet | 10.4.2.13:6789 | Docker on kavnas | Download client |
| Traefik | 10.4.2.10 | LXC 104 on pm2 | Reverse proxy |
| Home Assistant | 10.4.2.62:8123 | VM 100 on pm1 | Home automation |
| Frigate | 10.4.2.63:5000 | - | NVR/Camera system |
| RomM | 10.4.2.202:8998 | Docker on docker-pm3 | ROM manager |
## Storage

@@ -37,7 +38,7 @@
All services accessible via `*.kavcorp.com` through Traefik reverse proxy:
- pm.kavcorp.com → pm2 (10.4.2.6:8006)
- sonarr.kavcorp.com → 10.4.2.20:8989
- radarr.kavcorp.com → 10.4.2.16:7878
- jellyfin.kavcorp.com → 10.4.2.21:8096
- hass.kavcorp.com → 10.4.2.62:8123


@@ -69,7 +69,7 @@
#### VMID 105 - sonarr
- **Node**: pm2
- **IP**: 10.4.2.20
- **CPU**: 2 cores
- **Memory**: 1GB
- **Disk**: 4GB (KavNas)


@@ -50,7 +50,7 @@ iface vmbr0 inet static
#### Media Automation Stack

| IP | Service | VMID | Node | Purpose |
|---|---|---|---|---|
| 10.4.2.20 | sonarr | 105 | pm2 | TV show management |
| 10.4.2.16 | radarr | 108 | pm2 | Movie management |
| 10.4.2.17 | prowlarr | 114 | pm2 | Indexer manager |
| 10.4.2.18 | bazarr | 119 | pm2 | Subtitle management |


@@ -32,7 +32,7 @@ Recyclarr automatically syncs TRaSH Guides recommended custom formats and qualit
- Uses TRaSH Guides release group tiers (BD, WEB, Anime BD, Anime WEB)

### Sonarr (TV Shows)
- **URL**: http://10.4.2.20:8989
- **API Key**: b331fe18ec2144148a41645d9ce8b249
- **Quality Profiles**:
  - WEB-1080p (standard)
@@ -122,7 +122,7 @@ ssh pm2 "pct exec 122 -- /usr/local/bin/recyclarr sync --preview"
curl -H "X-Api-Key: 5e6796988abf4d6d819a2b506a44f422" http://10.4.2.16:7878/api/v3/system/status

# Test Sonarr
curl -H "X-Api-Key: b331fe18ec2144148a41645d9ce8b249" http://10.4.2.20:8989/api/v3/system/status
```

### Force resync all custom formats


@@ -34,7 +34,7 @@
#### Sonarr (VMID 105)
- **Node**: pm2
- **IP**: 10.4.2.20
- **Port**: 8989 (default)
- **Purpose**: TV show automation
- **Dependencies**: Prowlarr


@@ -34,7 +34,7 @@ ssh pm2 "python3 /root/organize-media.py --quiet"
   - Gets all movies and their file paths
   - Identifies which files are actively managed

2. **Queries Sonarr API** (http://10.4.2.20:8989)
   - Gets all TV series and their episode files
   - Identifies which files are actively managed
@@ -106,7 +106,7 @@ The script has hardcoded configuration at the top:
```python
RADARR_URL = "http://10.4.2.16:7878"
RADARR_API_KEY = "5e6796988abf4d6d819a2b506a44f422"
SONARR_URL = "http://10.4.2.20:8989"
SONARR_API_KEY = "b331fe18ec2144148a41645d9ce8b249"
MEDIA_DIRS = {


@@ -19,7 +19,7 @@ import urllib.error
# Configuration
RADARR_URL = "http://10.4.2.16:7878"
RADARR_API_KEY = "5e6796988abf4d6d819a2b506a44f422"
SONARR_URL = "http://10.4.2.20:8989"
SONARR_API_KEY = "b331fe18ec2144148a41645d9ce8b249"
MEDIA_DIRS = {

scripts/monitoring/network-map.sh Executable file

@@ -0,0 +1,170 @@
#!/bin/bash
# network-map.sh - Generate network map from Proxmox cluster
#
# Usage:
# ./network-map.sh # Print to stdout
# ./network-map.sh --json # Output as JSON
# ./network-map.sh --markdown # Output as Markdown table
set -euo pipefail
# Node configuration
declare -A NODES=(
["pm1"]="10.4.2.2"
["pm2"]="10.4.2.6"
["pm3"]="10.4.2.3"
["pm4"]="10.4.2.5"
["elantris"]="10.4.2.14"
)
OUTPUT_FORMAT="${1:-text}"
get_container_info() {
local node=$1
ssh -o ConnectTimeout=5 "$node" "
# Get LXC containers
for vmid in \$(pct list 2>/dev/null | tail -n +2 | awk '{print \$1}'); do
config=\$(pct config \$vmid 2>/dev/null)
hostname=\$(echo \"\$config\" | grep -E '^hostname:' | awk '{print \$2}')
net0=\$(echo \"\$config\" | grep -E '^net0:' | sed 's/net0: //')
ip=\$(echo \"\$net0\" | grep -oP 'ip=\\K[^/,]+' || echo 'dhcp')
status=\$(pct status \$vmid 2>/dev/null | awk '{print \$2}')
echo \"lxc|\$vmid|\$hostname|\$ip|\$status|$node\"
done
# Get VMs
for vmid in \$(qm list 2>/dev/null | tail -n +2 | awk '{print \$1}'); do
config=\$(qm config \$vmid 2>/dev/null)
name=\$(echo \"\$config\" | grep -E '^name:' | awk '{print \$2}')
ipconfig=\$(echo \"\$config\" | grep -E '^ipconfig0:' | sed 's/ipconfig0: //')
ip=\$(echo \"\$ipconfig\" | grep -oP 'ip=\\K[^/,]+' || echo 'dhcp')
status=\$(qm status \$vmid 2>/dev/null | awk '{print \$2}')
echo \"vm|\$vmid|\$name|\$ip|\$status|$node\"
done
" 2>/dev/null || true
}
print_header() {
echo "========================================"
echo " KAVCORP NETWORK MAP"
echo " Generated: $(date '+%Y-%m-%d %H:%M:%S')"
echo "========================================"
echo ""
}
print_nodes() {
echo "PROXMOX NODES"
echo "----------------------------------------"
printf " %-15s %s\n" "IP" "HOSTNAME"
echo "----------------------------------------"
for node in "${!NODES[@]}"; do
printf " %-15s %s\n" "${NODES[$node]}" "$node"
done | sort -t. -k4 -n
echo ""
}
print_resources() {
local resources="$1"
echo "LXC CONTAINERS"
echo "----------------------------------------"
printf " %-15s %-6s %-20s %-10s %s\n" "IP" "VMID" "NAME" "NODE" "STATUS"
echo "----------------------------------------"
echo "$resources" | grep "^lxc|" | sort -t'|' -k4 -V | while IFS='|' read -r type vmid name ip status node; do
printf " %-15s %-6s %-20s %-10s %s\n" "$ip" "$vmid" "$name" "$node" "$status"
done
echo ""
echo "VIRTUAL MACHINES"
echo "----------------------------------------"
printf " %-15s %-6s %-20s %-10s %s\n" "IP" "VMID" "NAME" "NODE" "STATUS"
echo "----------------------------------------"
echo "$resources" | grep "^vm|" | sort -t'|' -k4 -V | while IFS='|' read -r type vmid name ip status node; do
printf " %-15s %-6s %-20s %-10s %s\n" "$ip" "$vmid" "$name" "$node" "$status"
done
}
print_json() {
local resources="$1"
echo "{"
echo " \"generated\": \"$(date -Iseconds)\","
echo " \"nodes\": ["
first=true
for node in "${!NODES[@]}"; do
if [ "$first" = true ]; then first=false; else echo ","; fi
printf " {\"name\": \"%s\", \"ip\": \"%s\"}" "$node" "${NODES[$node]}"
done
echo ""
echo " ],"
echo " \"containers\": ["
first=true
echo "$resources" | grep "^lxc|" | while IFS='|' read -r type vmid name ip status node; do
if [ "$first" = true ]; then first=false; else echo ","; fi
printf " {\"vmid\": %s, \"name\": \"%s\", \"ip\": \"%s\", \"status\": \"%s\", \"node\": \"%s\"}" "$vmid" "$name" "$ip" "$status" "$node"
done
echo ""
echo " ],"
echo " \"vms\": ["
first=true
echo "$resources" | grep "^vm|" | while IFS='|' read -r type vmid name ip status node; do
if [ "$first" = true ]; then first=false; else echo ","; fi
printf " {\"vmid\": %s, \"name\": \"%s\", \"ip\": \"%s\", \"status\": \"%s\", \"node\": \"%s\"}" "$vmid" "$name" "$ip" "$status" "$node"
done
echo ""
echo " ]"
echo "}"
}
print_markdown() {
local resources="$1"
echo "# Network Map"
echo ""
echo "Generated: $(date '+%Y-%m-%d %H:%M:%S')"
echo ""
echo "## Proxmox Nodes"
echo ""
echo "| IP | Hostname |"
echo "|---|---|"
for node in "${!NODES[@]}"; do
echo "| ${NODES[$node]} | $node |"
done | sort -t. -k4 -n
echo ""
echo "## LXC Containers"
echo ""
echo "| IP | VMID | Name | Node | Status |"
echo "|---|---|---|---|---|"
echo "$resources" | grep "^lxc|" | sort -t'|' -k4 -V | while IFS='|' read -r type vmid name ip status node; do
echo "| $ip | $vmid | $name | $node | $status |"
done
echo ""
echo "## Virtual Machines"
echo ""
echo "| IP | VMID | Name | Node | Status |"
echo "|---|---|---|---|---|"
echo "$resources" | grep "^vm|" | sort -t'|' -k4 -V | while IFS='|' read -r type vmid name ip status node; do
echo "| $ip | $vmid | $name | $node | $status |"
done
}
# Main
resources=""
for node in "${!NODES[@]}"; do
node_resources=$(get_container_info "$node")
resources="${resources}${node_resources}"$'\n'
done
case "$OUTPUT_FORMAT" in
--json)
print_json "$resources"
;;
--markdown)
print_markdown "$resources"
;;
*)
print_header
print_nodes
print_resources "$resources"
;;
esac
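The `grep -oP 'ip=\K[^/,]+'` extraction in `get_container_info` can be sanity-checked locally against a sample `net0` value (the sample line below is made up; requires GNU grep for `-P`/`\K`):

```shell
# Standalone check of the ip= extraction pattern; the net0 string is a fabricated sample.
net0="name=eth0,bridge=vmbr0,gw=10.4.2.1,hwaddr=BC:24:11:AA:BB:CC,ip=10.4.2.20/24,type=veth"
ip=$(echo "$net0" | grep -oP 'ip=\K[^/,]+' || echo 'dhcp')
echo "$ip"        # 10.4.2.20

# A DHCP-configured interface has no ip= key, so the fallback fires:
ip_dhcp=$(echo "name=eth0,bridge=vmbr0,type=veth" | grep -oP 'ip=\K[^/,]+' || echo 'dhcp')
echo "$ip_dhcp"   # dhcp
```

The `\K` resets the match start, so only the address (up to the first `/` or `,`) is printed, stripping the CIDR suffix.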


@@ -0,0 +1,153 @@
#!/bin/bash
# setup-ssh-access.sh - Provision SSH access to all LXC containers
#
# This script:
# 1. Ensures openssh-server is installed in each container
# 2. Creates /root/.ssh directory with correct permissions
# 3. Adds the workstation public key to authorized_keys
# 4. Configures PermitRootLogin with key-only authentication
# 5. Starts and enables sshd
#
# Usage: ./setup-ssh-access.sh [vmid...]
# Without arguments: provisions all containers
# With arguments: provisions only specified VMIDs
set -euo pipefail
WORKSTATION_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtmU1h0wIQiIF0UajcUKV4wQQ4f3dFIAHV8j9pQlNnT kavren@KavDesktop-Cachy"
# Map VMIDs to their host nodes
declare -A CONTAINER_NODES=(
[101]="pm1" # twingate
[102]="pm1" # zwave-js-ui
[103]="pm4" # pihole
[104]="pm2" # traefik
[105]="pm2" # sonarr
[106]="pm3" # mqtt
[107]="pm3" # dockge
[108]="pm2" # radarr
[110]="pm4" # docker-pm4
[111]="pm4" # unifi
[112]="pm3" # foundryvtt
[113]="pm2" # docker-pm2
[114]="pm2" # prowlarr
[115]="pm2" # jellyseerr
[116]="pm2" # authelia
[117]="pm2" # whisparr
[118]="pm2" # notifiarr
[119]="pm2" # bazarr
[120]="pm2" # kometa
[121]="elantris" # jellyfin
[122]="pm2" # recyclarr
[123]="elantris" # ollama
[124]="elantris" # amp
[125]="pm4" # vaultwarden
[126]="pm4" # immich
[127]="pm4" # gitea
[128]="pm3" # frigate
)
setup_ssh() {
local vmid=$1
local node=${CONTAINER_NODES[$vmid]:-}
if [[ -z "$node" ]]; then
echo "ERROR: Unknown VMID $vmid"
return 1
fi
local hostname
hostname=$(ssh "$node" "pct config $vmid 2>/dev/null | grep hostname | awk '{print \$2}'" 2>/dev/null || echo "unknown")
echo "=== Setting up SSH for VMID $vmid ($hostname) on $node ==="
# Check if container is running
if ! ssh "$node" "pct status $vmid 2>/dev/null" | grep -q "running"; then
echo " WARNING: Container $vmid is not running, skipping..."
return 1
fi
# Detect package manager and install openssh-server
echo " Installing openssh-server..."
ssh "$node" "pct exec $vmid -- bash -c '
if command -v apt-get &> /dev/null; then
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq && apt-get install -y -qq openssh-server 2>/dev/null
elif command -v apk &> /dev/null; then
apk add --quiet openssh openssh-server 2>/dev/null
elif command -v dnf &> /dev/null; then
dnf install -y -q openssh-server 2>/dev/null
elif command -v pacman &> /dev/null; then
pacman -Sy --noconfirm openssh 2>/dev/null
else
echo \"Unknown package manager\"
exit 1
fi
'" 2>/dev/null || echo " Note: openssh may already be installed"
# Create .ssh directory and set permissions
echo " Configuring SSH keys..."
ssh "$node" "pct exec $vmid -- bash -c '
mkdir -p /root/.ssh
chmod 700 /root/.ssh
touch /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
'"
# Add the workstation key (idempotent - only adds if not present)
ssh "$node" "pct exec $vmid -- bash -c \"
grep -qF '$WORKSTATION_KEY' /root/.ssh/authorized_keys 2>/dev/null || \
echo '$WORKSTATION_KEY' >> /root/.ssh/authorized_keys
\""
# Enable PermitRootLogin with key only (more secure than password)
echo " Configuring sshd..."
ssh "$node" "pct exec $vmid -- bash -c '
if [[ -f /etc/ssh/sshd_config ]]; then
sed -i \"s/^#*PermitRootLogin.*/PermitRootLogin prohibit-password/\" /etc/ssh/sshd_config
fi
'" 2>/dev/null || true
# Start and enable sshd
echo " Starting sshd..."
ssh "$node" "pct exec $vmid -- bash -c '
if command -v systemctl &> /dev/null; then
systemctl enable ssh 2>/dev/null || systemctl enable sshd 2>/dev/null || true
systemctl restart ssh 2>/dev/null || systemctl restart sshd 2>/dev/null || true
elif command -v rc-service &> /dev/null; then
rc-update add sshd default 2>/dev/null || true
rc-service sshd restart 2>/dev/null || true
fi
'" 2>/dev/null || true
echo " SSH setup complete for $vmid ($hostname)"
echo ""
}
# Main execution
if [[ $# -gt 0 ]]; then
# Provision specific VMIDs
CONTAINERS=("$@")
else
# Provision all containers
CONTAINERS=(${!CONTAINER_NODES[@]})
fi
echo "Starting SSH provisioning for ${#CONTAINERS[@]} containers..."
echo "Using key: $WORKSTATION_KEY"
echo ""
failed=()
for vmid in "${CONTAINERS[@]}"; do
if ! setup_ssh "$vmid"; then
failed+=("$vmid")
fi
done
echo "=== SSH Provisioning Complete ==="
if [[ ${#failed[@]} -gt 0 ]]; then
echo "Failed containers: ${failed[*]}"
fi
echo ""
echo "Test with: ssh root@<container-ip>"
echo "Or after DNS setup: ssh root@<service>.kav"
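The idempotent append used in `setup_ssh` (`grep -qF` for the exact key, append only if absent) can be exercised against a scratch file; the dummy key and temp file below are stand-ins for the demo, not values from the script:

```shell
# Demo of the idempotent authorized_keys append pattern from setup_ssh.
KEY="ssh-ed25519 AAAAdummykey kavren@demo"   # dummy key, not a real credential
AK=$(mktemp)                                 # scratch stand-in for authorized_keys

for i in 1 2; do   # run twice: the second pass must be a no-op
  grep -qF "$KEY" "$AK" 2>/dev/null || echo "$KEY" >> "$AK"
done

lines=$(wc -l < "$AK")
rm -f "$AK"
echo "$lines"   # 1 — the key was appended exactly once despite two passes
```

`grep -F` matches the key as a fixed string, so characters like `+` and `/` in real base64 key material cannot be misread as regex syntax.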