Compare commits


6 Commits

9051c84bae docs: Network infrastructure cleanup - static IPs, local DNS, SSH access
- Complete static IP migration for all containers
- Configure Pi-hole local DNS with .kav hostnames
- Add SSH provisioning script for all containers
- Create NETWORK-MAP.md with complete IP allocation
- Create network-map.sh for dynamic map generation
- Update INFRASTRUCTURE.md with new service map
- Add .kav TLD and SSH policy decisions to DECISIONS.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 17:11:32 -05:00
1d649c4349 Auto-commit: 2025-12-28 16:32 session changes 2025-12-28 16:32:51 -05:00
128b38a457 Auto-commit: 2025-12-23 15:51 session changes 2025-12-23 15:51:16 -05:00
6b6e029728 docs: Update Frigate and Home Assistant IPs
Both services use DHCP and IPs changed:
- Frigate: 10.4.2.8 → 10.4.2.176
- Home Assistant: 10.4.2.62 → 10.4.2.175

Traefik configs updated on LXC 104.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 11:56:47 -05:00
9cdf6e31ef Auto-commit: 2025-12-22 16:52 session changes 2025-12-22 16:52:44 -05:00
8cdda4933c docs: Complete media stack IP migration
Migrated all media services to 10.4.2.20-29 range:
- Sonarr: 10.4.2.20
- Whisparr: 10.4.2.21
- Prowlarr: 10.4.2.22
- Bazarr: 10.4.2.23
- Radarr: 10.4.2.24
- Jellyseerr: 10.4.2.25
- Jellyfin: 10.4.2.26
- Kometa: 10.4.2.27
- Recyclarr: 10.4.2.28
- Notifiarr: 10.4.2.29
- Immich: 10.4.2.30

All Traefik configs updated.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-22 16:51:20 -05:00
8 changed files with 683 additions and 71 deletions


@@ -2,6 +2,42 @@
> **Purpose**: Historical record of all significant infrastructure changes
## 2025-12-28
### Network Infrastructure Cleanup
#### Static IP Migration Complete
All containers now have static IPs in organized ranges:
- **Core Infrastructure** (10.4.2.10-19): Pi-hole→.11, Authelia→.12, Vaultwarden→.15
- **Media Stack** (10.4.2.20-29): All *arr services, Jellyfin, etc.
- **Services** (10.4.2.30-39): Immich→.30, Gitea→.31, Frigate→.32, Ollama→.34
- **IoT** (10.4.2.50-99): Z-Wave→.50, MQTT→.51
- **Docker Hosts** (10.4.2.200-209): docker-pm2→.200, docker-pm4→.201
#### Pi-hole Local DNS (.kav domain)
- Configured Pi-hole (10.4.2.11) as local DNS resolver
- All services now have `.kav` hostnames (e.g., traefik.kav, sonarr.kav)
- DNS records added via `dns.hosts` array in `/etc/pihole/pihole.toml`
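For reference, a minimal sketch of what those entries look like in Pi-hole v6's `pihole.toml` (hostnames taken from this migration; the exact surrounding keys are an assumption, not a copy of the live file):

```toml
# /etc/pihole/pihole.toml (excerpt) -- illustrative sketch only
[dns]
  # Each entry is "<IP> <hostname>", the same syntax as /etc/hosts
  hosts = [
    "10.4.2.10 traefik.kav",
    "10.4.2.20 sonarr.kav",
    "10.4.2.26 jellyfin.kav",
  ]
```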
#### SSH Access to All Containers
- Created provisioning script: `scripts/provisioning/setup-ssh-access.sh`
- All LXC containers now have SSH enabled with key-based auth
- Access via: `ssh root@<service>.kav`
#### Traefik Route Updates
- Updated backend IPs for: authelia.yaml, vaultwarden.yaml, pihole.yaml
- All routes now point to new static IPs
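Updating a backend IP in one of those files amounts to changing the `loadBalancer` server URL in Traefik's file-provider format; a hedged sketch of what e.g. `authelia.yaml` plausibly contains (router name and rule are assumptions, not the actual repo file):

```yaml
# authelia.yaml -- illustrative sketch
http:
  routers:
    authelia:
      rule: "Host(`auth.kavcorp.com`)"
      service: authelia
  services:
    authelia:
      loadBalancer:
        servers:
          - url: "http://10.4.2.12:9091"  # new static IP
```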
#### Documentation Updates
- Created `docs/NETWORK-MAP.md` with complete IP allocation
- Created `scripts/monitoring/network-map.sh` for dynamic map generation
- Updated `docs/INFRASTRUCTURE.md` with new service map
- Updated gateway references from 10.4.2.254 to 10.4.2.1
#### Pending
- Update OPNsense DHCP to distribute Pi-hole (10.4.2.11) as DNS
- Configure Home Assistant static IP (10.4.2.33) via HAOS UI
## 2025-12-22
### NAT Reflection & External Access Fix

DECISIONS.md

@@ -41,6 +41,52 @@
## Network Architecture
### Local DNS (.kav TLD)
**Decision**: Use `.kav` as the local top-level domain for internal services
**Reason**:
- Unique to KavCorp network, avoids conflicts with real TLDs
- Short and memorable
- Works without additional configuration
- Pi-hole handles resolution via `dns.hosts` in pihole.toml
**Alternatives Considered**:
- `.lan` - Common but can conflict with some routers
- `.local` - Conflicts with mDNS/Bonjour
- `.home.arpa` - RFC 8375 compliant but verbose
**Usage**: All services accessible via `<service>.kav` (e.g., traefik.kav, sonarr.kav)
### SSH Access Policy
**Decision**: SSH from workstation only, no container-to-container SSH
**Reason**:
- Reduces attack surface
- Single key to manage
- Containers don't need to communicate via SSH
**Implementation**:
- Workstation ed25519 key added to all containers
- `PermitRootLogin prohibit-password` (key-only)
- Provisioning script: `scripts/provisioning/setup-ssh-access.sh`
### IP Allocation Scheme
**Decision**: Organized IP ranges by service type
**Reason**: Easy to identify service type from IP, logical grouping
| Range | Purpose |
|-------|---------|
| 10.4.2.1 | Gateway (OPNsense) |
| 10.4.2.2-9 | Proxmox nodes |
| 10.4.2.10-19 | Core infrastructure |
| 10.4.2.20-29 | Media stack |
| 10.4.2.30-39 | Other services |
| 10.4.2.40-49 | Game servers |
| 10.4.2.50-99 | IoT / Reserved |
| 10.4.2.100-199 | DHCP pool |
| 10.4.2.200-209 | Docker hosts |
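Because the role is encoded in the last octet, it can be derived mechanically; a small bash sketch (hypothetical helper, not part of the repo; ranges above .209 omitted for brevity):

```shell
# classify_ip: map a 10.4.2.x address to its allocation range.
classify_ip() {
    local octet=${1##*.}   # last octet of the address
    case $octet in
        1)            echo "Gateway (OPNsense)" ;;
        [2-9])        echo "Proxmox nodes" ;;
        1[0-9])       echo "Core infrastructure" ;;
        2[0-9])       echo "Media stack" ;;
        3[0-9])       echo "Other services" ;;
        4[0-9])       echo "Game servers" ;;
        [5-9][0-9])   echo "IoT / Reserved" ;;
        1[0-9][0-9])  echo "DHCP pool" ;;
        20[0-9])      echo "Docker hosts" ;;
        *)            echo "Unassigned" ;;
    esac
}

classify_ip 10.4.2.24    # Media stack
classify_ip 10.4.2.203   # Docker hosts
```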
### Network Isolation Strategy
**Goal**: Isolate IoT (KavCorp-IOT) and Guest (KavCorp-Guest) WiFi networks from the main LAN, while allowing Smart Home VMs to access IoT devices.

docs/IDEAS.md (new file, +0 lines)

docs/INFRASTRUCTURE.md

@@ -15,38 +15,81 @@
**Cluster Name**: KavCorp
**Network**: 10.4.2.0/24
**Gateway**: 10.4.2.254
**Gateway**: 10.4.2.1 (OPNsense)
**DNS**: 10.4.2.11 (Pi-hole)
**Local Domain**: .kav
## Service Map
| Service | IP:Port | Location | Domain | Auth |
|---------|---------|----------|--------|------|
| **Proxmox Web UI** | 10.4.2.6:8006 | pm2 | pm.kavcorp.com | Proxmox built-in |
| **Traefik** | 10.4.2.10 | LXC 104 (pm2) | - | None (reverse proxy) |
| **Authelia** | 10.4.2.19 | LXC 116 (pm2) | auth.kavcorp.com | SSO provider |
| **Sonarr** | 10.4.2.15:8989 | LXC 105 (pm2) | sonarr.kavcorp.com | Built-in |
| **Radarr** | 10.4.2.16:7878 | LXC 108 (pm2) | radarr.kavcorp.com | Built-in |
| **Prowlarr** | 10.4.2.17:9696 | LXC 114 (pm2) | prowlarr.kavcorp.com | Built-in |
| **Jellyseerr** | 10.4.2.18:5055 | LXC 115 (pm2) | jellyseerr.kavcorp.com | Built-in |
| **Whisparr** | 10.4.2.20:6969 | LXC 117 (pm2) | whisparr.kavcorp.com | Built-in |
| **Notifiarr** | 10.4.2.21 | LXC 118 (pm2) | - | API key |
| **Jellyfin** | 10.4.2.21:8096 | LXC 121 (elantris) | jellyfin.kavcorp.com | Built-in |
| **Bazarr** | 10.4.2.22:6767 | LXC 119 (pm2) | bazarr.kavcorp.com | Built-in |
| **Kometa** | 10.4.2.23 | LXC 120 (pm2) | - | N/A |
| **Recyclarr** | 10.4.2.25 | LXC 122 (pm2) | - | CLI only |
| **NZBGet** | 10.4.2.13:6789 | Docker (kavnas) | nzbget.kavcorp.com | Built-in |
| **Home Assistant** | 10.4.2.62:8123 | VM 100 (pm1) | hass.kavcorp.com | Built-in |
| **Frigate** | 10.4.2.8:8971 | LXC 128 (pm3) | frigate.kavcorp.com | Built-in (auth required) |
| **Foundry VTT** | 10.4.2.37:30000 | LXC 112 (pm3) | vtt.kavcorp.com | Built-in |
| **llama.cpp** | 10.4.2.224:11434 | LXC 123 (elantris) | ollama.kavcorp.com | None (API) |
| **AMP** | 10.4.2.26:8080 | LXC 124 (elantris) | amp.kavcorp.com | Built-in |
| **Vaultwarden** | 10.4.2.212 | LXC 125 (pm4) | vtw.kavcorp.com | Built-in |
| **Immich** | 10.4.2.24:2283 | LXC 126 (pm4) | immich.kavcorp.com | Built-in |
| **Gitea** | 10.4.2.7:3000 | LXC 127 (pm4) | git.kavcorp.com | Built-in |
| **Pi-hole** | 10.4.2.129 | LXC 103 (pm4) | pihole.kavcorp.com | Built-in |
| **UniFi Controller** | 10.4.2.242:8443 | LXC 111 (pm4) | unifi.kavcorp.com | Built-in |
| **OPNsense (KavSense)** | 10.4.2.1 | VM 130 (pm4) | opnsense.kavcorp.com | Built-in (net0: vmbr0/LAN, net1: vmbr1/WAN) |
| **KavNas** | 10.4.2.13 | Synology NAS | - | NAS auth |
> See [NETWORK-MAP.md](NETWORK-MAP.md) for complete IP allocation details.
### Core Infrastructure (10.4.2.10-19)
| Service | IP:Port | Location | Domain | Local DNS |
|---------|---------|----------|--------|-----------|
| **OPNsense** | 10.4.2.1 | VM 130 (pm4) | opnsense.kavcorp.com | opnsense.kav |
| **Traefik** | 10.4.2.10 | LXC 104 (pm2) | - | traefik.kav |
| **Pi-hole** | 10.4.2.11 | LXC 103 (pm4) | pihole.kavcorp.com | pihole.kav |
| **Authelia** | 10.4.2.12:9091 | LXC 116 (pm2) | auth.kavcorp.com | authelia.kav |
| **KavNas** | 10.4.2.13 | Synology NAS | - | kavnas.kav |
| **Vaultwarden** | 10.4.2.15 | LXC 125 (pm4) | vtw.kavcorp.com | vaultwarden.kav |
| **UniFi Controller** | 10.4.2.16:8443 | LXC 111 (pm4) | unifi.kavcorp.com | unifi.kav |
### Media Stack (10.4.2.20-29)
| Service | IP:Port | Location | Domain | Local DNS |
|---------|---------|----------|--------|-----------|
| **Sonarr** | 10.4.2.20:8989 | LXC 105 (pm2) | sonarr.kavcorp.com | sonarr.kav |
| **Whisparr** | 10.4.2.21:6969 | LXC 117 (pm2) | whisparr.kavcorp.com | whisparr.kav |
| **Prowlarr** | 10.4.2.22:9696 | LXC 114 (pm2) | prowlarr.kavcorp.com | prowlarr.kav |
| **Bazarr** | 10.4.2.23:6767 | LXC 119 (pm2) | bazarr.kavcorp.com | bazarr.kav |
| **Radarr** | 10.4.2.24:7878 | LXC 108 (pm2) | radarr.kavcorp.com | radarr.kav |
| **Jellyseerr** | 10.4.2.25:5055 | LXC 115 (pm2) | jellyseerr.kavcorp.com | jellyseerr.kav |
| **Jellyfin** | 10.4.2.26:8096 | LXC 121 (elantris) | jellyfin.kavcorp.com | jellyfin.kav |
| **Kometa** | 10.4.2.27 | LXC 120 (pm2) | - | kometa.kav |
| **Recyclarr** | 10.4.2.28 | LXC 122 (pm2) | - | recyclarr.kav |
| **Notifiarr** | 10.4.2.29 | LXC 118 (pm2) | - | notifiarr.kav |
### Services (10.4.2.30-39)
| Service | IP:Port | Location | Domain | Local DNS |
|---------|---------|----------|--------|-----------|
| **Immich** | 10.4.2.30:2283 | LXC 126 (pm4) | immich.kavcorp.com | immich.kav |
| **Gitea** | 10.4.2.31:3000 | LXC 127 (pm4) | git.kavcorp.com | gitea.kav |
| **Frigate** | 10.4.2.32:8971 | LXC 128 (pm3) | frigate.kavcorp.com | frigate.kav |
| **Home Assistant** | 10.4.2.33:8123 | VM 100 (pm1) | hass.kavcorp.com | homeassistant.kav |
| **Ollama** | 10.4.2.34:11434 | LXC 123 (elantris) | ollama.kavcorp.com | ollama.kav |
| **Twingate** | 10.4.2.35 | LXC 101 (pm1) | - | twingate.kav |
| **Foundry VTT** | 10.4.2.37:30000 | LXC 112 (pm3) | vtt.kavcorp.com | foundryvtt.kav |
### Game Servers (10.4.2.40-49)
| Service | IP:Port | Location | Domain | Local DNS |
|---------|---------|----------|--------|-----------|
| **AMP** | 10.4.2.40:8080 | LXC 124 (elantris) | amp.kavcorp.com | amp.kav |
### IoT / Home Automation (10.4.2.50-99)
| Service | IP:Port | Location | Domain | Local DNS |
|---------|---------|----------|--------|-----------|
| **Z-Wave JS UI** | 10.4.2.50 | LXC 102 (pm1) | - | zwave.kav |
| **MQTT** | 10.4.2.51:1883 | LXC 106 (pm3) | - | mqtt.kav |
### Docker Hosts (10.4.2.200-209)
| Service | IP | Location | Local DNS |
|---------|-----|----------|-----------|
| **docker-pm2** | 10.4.2.200 | LXC 113 (pm2) | docker-pm2.kav |
| **docker-pm4** | 10.4.2.201 | LXC 110 (pm4) | docker-pm4.kav |
| **docker-pm3** | 10.4.2.202 | VM 109 (pm3) | docker-pm3.kav |
| **Dockge** | 10.4.2.203 | LXC 107 (pm3) | dockge.kav |
### Other
| Service | IP:Port | Location | Domain |
|---------|---------|----------|--------|
| **NZBGet** | 10.4.2.13:6789 | Docker (kavnas) | nzbget.kavcorp.com |
| **Proxmox Web UI** | 10.4.2.6:8006 | pm2 | pm.kavcorp.com |
## Storage Architecture
@@ -98,7 +141,7 @@ All `*.kavcorp.com` subdomains route through Traefik reverse proxy (10.4.2.10) f
| Bridge | vmbr0 |
| Physical Interface | eno1 |
| CIDR | 10.4.2.0/24 |
| Gateway | 10.4.2.254 |
| Gateway | 10.4.2.1 (OPNsense) |
#### pm4 Only (vmbr1 - WAN for OPNsense)
| Setting | Value |
@@ -134,10 +177,18 @@ All `*.kavcorp.com` subdomains route through Traefik reverse proxy (10.4.2.10) f
### SSH Access
- **User**: kavren (from local machine)
- **User**: root (between cluster nodes)
**Proxmox Nodes:**
- **User**: root (from workstation or between nodes)
- **Key Type**: ed25519
- **Node-to-Node**: Passwordless SSH configured for cluster operations
- **Access**: `ssh pm1`, `ssh pm2`, `ssh pm3`, `ssh pm4`, `ssh elantris`
**LXC Containers:**
- **User**: root
- **Key Type**: ed25519 (workstation key provisioned)
- **Access**: `ssh root@<service>.kav` (e.g., `ssh root@traefik.kav`)
- **Provisioning Script**: `scripts/provisioning/setup-ssh-access.sh`
All containers have SSH enabled with key-based authentication (PermitRootLogin prohibit-password).
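A single pattern rule in the workstation's SSH client config covers every container; a sketch (the key path is an assumption):

```
# ~/.ssh/config -- illustrative sketch
Host *.kav
    User root
    IdentityFile ~/.ssh/id_ed25519
```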
### Important Paths


@@ -8,41 +8,40 @@
- [x] Traefik gateway fixed (was 10.4.2.254, now 10.4.2.1)
- [x] NAT reflection enabled in OPNsense
- [x] UniFi, Immich, Gitea set to static IPs and verified working through Traefik
- [x] Radarr IP conflict resolved (moved from 10.4.2.16 to 10.4.2.24)
**Current Static IPs (verified working):**
- UniFi (111): 10.4.2.16 - Traefik verified
- Gitea (127): 10.4.2.31 - Traefik verified
- Immich (126): 10.4.2.30 - Traefik verified
- Radarr (108): 10.4.2.24 - Traefik updated
- [x] Media stack IP reorganization complete (all in 10.4.2.20-29 range)
- [x] Traefik configs updated for all new IPs
**Pending:**
- [ ] Media stack IP reorganization (10.4.2.20-29)
- [ ] Pi-hole migration (10.4.2.129 → 10.4.2.11)
- [ ] KavNas / Elantris IP updates (deferred)
- [ ] Core infrastructure IP updates (Authelia, Vaultwarden, Pi-hole)
- [ ] Other services (Frigate, Foundry, Home Assistant, llama.cpp, AMP)
- [ ] Docker hosts IP updates
- [ ] Update docs/INFRASTRUCTURE.md with final IPs
**Current IP Map (pm2 - 10.4.2.6):**
- 104 traefik: 10.4.2.10
- 105 sonarr: 10.4.2.15
- 108 radarr: 10.4.2.24
- 105 sonarr: 10.4.2.20 ✓
- 113 docker-pm2: 10.4.2.203
- 114 prowlarr: 10.4.2.17
- 115 jellyseerr: 10.4.2.18
- 114 prowlarr: 10.4.2.22 ✓
- 115 jellyseerr: 10.4.2.25 ✓
- 116 authelia: 10.4.2.19
- 117 whisparr: 10.4.2.20
- 118 notifiarr: 10.4.2.21
- 119 bazarr: 10.4.2.22
- 120 kometa: 10.4.2.23
- 122 recyclarr: 10.4.2.25
- 117 whisparr: 10.4.2.21 ✓
- 118 notifiarr: 10.4.2.29 ✓
- 119 bazarr: 10.4.2.23 ✓
- 120 kometa: 10.4.2.27 ✓
- 122 recyclarr: 10.4.2.28 ✓
**Current IP Map (pm4 - 10.4.2.5):**
- 103 pihole: 10.4.2.129
- 110 docker-pm4: 10.4.2.204
- 111 unifi: 10.4.2.16
- 125 vaultwarden: 10.4.2.212
- 126 immich: 10.4.2.30
- 127 gitea: 10.4.2.31
**Current IP Map (elantris - 10.4.2.14):**
- 121 jellyfin: 10.4.2.26 ✓
- 124 amp: 10.4.2.40 ✓
## New IP Allocation Scheme
@@ -73,21 +72,21 @@
| Vaultwarden | 125 | pm4 | 10.4.2.212 | 10.4.2.15 | 10.4.2.254→10.4.2.1 |
| UniFi | 111 | pm4 | 10.4.2.242 (DHCP) | 10.4.2.16 | Set to 10.4.2.1 |
### Services - Media Stack (10.4.2.20-29)
### Services - Media Stack (10.4.2.20-29) ✅ COMPLETE
| Service | VMID | Node | Current IP | New IP | Gateway Fix |
|---------|------|------|------------|--------|-------------|
| Sonarr | 105 | pm2 | 10.4.2.15 | 10.4.2.20 | 10.4.2.254→10.4.2.1 |
| Radarr | 108 | pm2 | 10.4.2.24 | 10.4.2.24 | ✅ Done |
| Prowlarr | 114 | pm2 | 10.4.2.17 | 10.4.2.22 | 10.4.2.254→10.4.2.1 |
| Bazarr | 119 | pm2 | 10.4.2.22 | 10.4.2.23 | 10.4.2.254→10.4.2.1 |
| Whisparr | 117 | pm2 | 10.4.2.20 | 10.4.2.24 | 10.4.2.254→10.4.2.1 |
| Jellyseerr | 115 | pm2 | 10.4.2.18 | 10.4.2.25 | 10.4.2.254→10.4.2.1 |
| Jellyfin | 121 | elantris | 10.4.2.21 | 10.4.2.26 | Check |
| Kometa | 120 | pm2 | 10.4.2.23 | 10.4.2.27 | 10.4.2.254→10.4.2.1 |
| Recyclarr | 122 | pm2 | 10.4.2.25 | 10.4.2.28 | 10.4.2.254→10.4.2.1 |
| Notifiarr | 118 | pm2 | 10.4.2.21 | 10.4.2.29 | 10.4.2.254→10.4.2.1 |
| Immich | 126 | pm4 | DHCP | 10.4.2.30 | Set to 10.4.2.1 |
| Service | VMID | Node | IP | Status |
|---------|------|------|-----|--------|
| Sonarr | 105 | pm2 | 10.4.2.20 | ✅ Done |
| Whisparr | 117 | pm2 | 10.4.2.21 | ✅ Done |
| Prowlarr | 114 | pm2 | 10.4.2.22 | ✅ Done |
| Bazarr | 119 | pm2 | 10.4.2.23 | ✅ Done |
| Radarr | 108 | pm2 | 10.4.2.24 | ✅ Done |
| Jellyseerr | 115 | pm2 | 10.4.2.25 | ✅ Done |
| Jellyfin | 121 | elantris | 10.4.2.26 | ✅ Done |
| Kometa | 120 | pm2 | 10.4.2.27 | ✅ Done |
| Recyclarr | 122 | pm2 | 10.4.2.28 | ✅ Done |
| Notifiarr | 118 | pm2 | 10.4.2.29 | ✅ Done |
| Immich | 126 | pm4 | 10.4.2.30 | ✅ Done |
### Services - Other (10.4.2.30-39)
@@ -95,7 +94,7 @@
|---------|------|------|------------|--------|-------------|
| Immich | 126 | pm4 | DHCP | 10.4.2.30 | Set to 10.4.2.1 |
| Frigate | 128 | pm3 | 10.4.2.8 | 10.4.2.32 | Check |
| Foundry VTT | 112 | pm3 | 10.4.2.37 | 10.4.2.32 | Check |
| Foundry VTT | 112 | pm3 | 10.4.2.37 | 10.4.2.37 | ✅ Already in range |
| Home Assistant | 100 | pm1 | 10.4.2.62 | 10.4.2.33 | Check |
| llama.cpp | 123 | elantris | 10.4.2.224 | 10.4.2.34 | Check |
@@ -103,7 +102,7 @@
| Service | VMID | Node | Current IP | New IP | Gateway Fix |
|---------|------|------|------------|--------|-------------|
| AMP | 124 | elantris | 10.4.2.26 | 10.4.2.40 | Check |
| AMP | 124 | elantris | 10.4.2.40 | 10.4.2.40 | ✅ Done |
### Docker Hosts (10.4.2.200-209)

docs/NETWORK-MAP.md (new file, +157 lines)

@@ -0,0 +1,157 @@
# KavCorp Network Map
> **Last Updated**: 2025-12-28
> **Network**: 10.4.2.0/24
> **Gateway**: 10.4.2.1 (OPNsense)
> **DNS**: 10.4.2.11 (Pi-hole)
## Network Topology
```
                         INTERNET
                             │ WAN (AT&T)
                             │ Public: 99.74.188.161
              ┌─────────────────────────────┐
              │      OPNsense (VM 130)      │
              │          10.4.2.1           │
              │   WAN: vmbr1 | LAN: vmbr0   │
              └──────────────┬──────────────┘
                             │
  ┌──────────────────────────┴──────────────────────────┐
  │                    vmbr0 Bridge                     │
  │                  10.4.2.0/24 (LAN)                  │
  └───┬───────┬───────┬───────┬─────────┬───────────┬──┘
      │       │       │       │         │           │
      ▼       ▼       ▼       ▼         ▼           ▼
   ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌──────────┐ ┌────────┐
   │ pm1 │ │ pm2 │ │ pm3 │ │ pm4 │ │ elantris │ │ KavNas │
   │ .2  │ │ .6  │ │ .3  │ │ .5  │ │   .14    │ │  .13   │
   └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ └────┬─────┘ └────────┘
      │       │       │       │         │
      ▼       ▼       ▼       ▼         ▼
  ┌──────┐ ┌───────────┐ ┌──────┐ ┌──────────┐ ┌─────────┐
  │ HA   │ │Media Stack│ │ Game │ │  Infra   │ │  Media  │
  │Zwave │ │ Services  │ │ Svcs │ │ Services │ │ Storage │
  │Twing.│ │           │ │      │ │          │ │         │
  └──────┘ └───────────┘ └──────┘ └──────────┘ └─────────┘
```
## IP Address Allocation
### Proxmox Nodes (10.4.2.2-9)
| IP | Hostname | Description |
|----|----------|-------------|
| 10.4.2.2 | pm1.kav | Proxmox node 1 |
| 10.4.2.3 | pm3.kav | Proxmox node 3 |
| 10.4.2.5 | pm4.kav | Proxmox node 4 |
| 10.4.2.6 | pm2.kav | Proxmox node 2 (primary management) |
| 10.4.2.14 | elantris.kav | Proxmox node 5 (128GB RAM, ZFS) |
### Core Infrastructure (10.4.2.10-19)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.1 | opnsense.kav | OPNsense Gateway | 130 | pm4 |
| 10.4.2.10 | traefik.kav | Reverse Proxy | 104 | pm2 |
| 10.4.2.11 | pihole.kav | DNS Server | 103 | pm4 |
| 10.4.2.12 | authelia.kav | SSO Authentication | 116 | pm2 |
| 10.4.2.13 | kavnas.kav | Synology NAS | - | - |
| 10.4.2.15 | vaultwarden.kav | Password Manager | 125 | pm4 |
| 10.4.2.16 | unifi.kav | UniFi Controller | 111 | pm4 |
### Media Stack (10.4.2.20-29)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.20 | sonarr.kav | TV Shows | 105 | pm2 |
| 10.4.2.21 | whisparr.kav | Adult Content | 117 | pm2 |
| 10.4.2.22 | prowlarr.kav | Indexer Manager | 114 | pm2 |
| 10.4.2.23 | bazarr.kav | Subtitles | 119 | pm2 |
| 10.4.2.24 | radarr.kav | Movies | 108 | pm2 |
| 10.4.2.25 | jellyseerr.kav | Media Requests | 115 | pm2 |
| 10.4.2.26 | jellyfin.kav | Media Server | 121 | elantris |
| 10.4.2.27 | kometa.kav | Plex Meta Manager | 120 | pm2 |
| 10.4.2.28 | recyclarr.kav | Quality Profiles | 122 | pm2 |
| 10.4.2.29 | notifiarr.kav | Notifications | 118 | pm2 |
### Services (10.4.2.30-39)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.30 | immich.kav | Photo Management | 126 | pm4 |
| 10.4.2.31 | gitea.kav | Git Server | 127 | pm4 |
| 10.4.2.32 | frigate.kav | NVR | 128 | pm3 |
| 10.4.2.33 | homeassistant.kav | Home Automation | 100 | pm1 (VM) |
| 10.4.2.34 | ollama.kav | LLM Server | 123 | elantris |
| 10.4.2.35 | twingate.kav | Zero Trust Access | 101 | pm1 |
| 10.4.2.37 | foundryvtt.kav | Virtual Tabletop | 112 | pm3 |
### Game Servers (10.4.2.40-49)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.40 | amp.kav | Game Server Manager | 124 | elantris |
### IoT / Home Automation (10.4.2.50-99)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.50 | zwave.kav | Z-Wave JS UI | 102 | pm1 |
| 10.4.2.51 | mqtt.kav | MQTT Broker | 106 | pm3 |
### Docker Hosts (10.4.2.200-209)
| IP | Hostname | Service | VMID | Node |
|----|----------|---------|------|------|
| 10.4.2.200 | docker-pm2.kav | Docker Host | 113 | pm2 |
| 10.4.2.201 | docker-pm4.kav | Docker Host | 110 | pm4 |
| 10.4.2.202 | docker-pm3.kav | Docker Host | 109 | pm3 (VM) |
| 10.4.2.203 | dockge.kav | Docker Management | 107 | pm3 |
## IP Range Summary
| Range | Purpose | Status |
|-------|---------|--------|
| 10.4.2.1 | OPNsense Gateway | Assigned |
| 10.4.2.2-9 | Proxmox Nodes | Assigned |
| 10.4.2.10-19 | Core Infrastructure | Assigned |
| 10.4.2.20-29 | Media Stack | Assigned |
| 10.4.2.30-39 | Services | Partially used |
| 10.4.2.40-49 | Game Servers | Partially used |
| 10.4.2.50-99 | IoT / Reserved | Partially used |
| 10.4.2.100-199 | DHCP Pool | Dynamic |
| 10.4.2.200-209 | Docker Hosts | Assigned |
| 10.4.2.210-239 | Reserved | Available |
| 10.4.2.240-249 | Network Controllers | Reserved |
| 10.4.2.250-254 | Network Gear | Reserved |
## Access Methods
### SSH Access
All containers have SSH enabled with key-based authentication:
```bash
ssh root@<service>.kav
# Example: ssh root@traefik.kav
```
### Web Access
All web services are accessible via Traefik reverse proxy:
- External: `https://<service>.kavcorp.com`
- Internal: `http://<ip>:<port>`
### Local DNS
Pi-hole provides `.kav` domain resolution for all services.
Configure your device to use `10.4.2.11` as its DNS server.
## Generating Updated Map
Use the network map script to generate a current view:
```bash
cd /home/kavren/proxmox-infra
./scripts/monitoring/network-map.sh
```

scripts/monitoring/network-map.sh (new executable file, +170 lines)

@@ -0,0 +1,170 @@
#!/bin/bash
# network-map.sh - Generate network map from Proxmox cluster
#
# Usage:
#   ./network-map.sh              # Print to stdout
#   ./network-map.sh --json       # Output as JSON
#   ./network-map.sh --markdown   # Output as Markdown table

set -euo pipefail

# Node configuration
declare -A NODES=(
    ["pm1"]="10.4.2.2"
    ["pm2"]="10.4.2.6"
    ["pm3"]="10.4.2.3"
    ["pm4"]="10.4.2.5"
    ["elantris"]="10.4.2.14"
)

OUTPUT_FORMAT="${1:-text}"
get_container_info() {
    local node=$1
    ssh -o ConnectTimeout=5 "$node" "
        # Get LXC containers
        for vmid in \$(pct list 2>/dev/null | tail -n +2 | awk '{print \$1}'); do
            config=\$(pct config \$vmid 2>/dev/null)
            hostname=\$(echo \"\$config\" | grep -E '^hostname:' | awk '{print \$2}')
            net0=\$(echo \"\$config\" | grep -E '^net0:' | sed 's/net0: //')
            ip=\$(echo \"\$net0\" | grep -oP 'ip=\\K[^/,]+' || echo 'dhcp')
            status=\$(pct status \$vmid 2>/dev/null | awk '{print \$2}')
            echo \"lxc|\$vmid|\$hostname|\$ip|\$status|$node\"
        done
        # Get VMs
        for vmid in \$(qm list 2>/dev/null | tail -n +2 | awk '{print \$1}'); do
            config=\$(qm config \$vmid 2>/dev/null)
            name=\$(echo \"\$config\" | grep -E '^name:' | awk '{print \$2}')
            ipconfig=\$(echo \"\$config\" | grep -E '^ipconfig0:' | sed 's/ipconfig0: //')
            ip=\$(echo \"\$ipconfig\" | grep -oP 'ip=\\K[^/,]+' || echo 'dhcp')
            status=\$(qm status \$vmid 2>/dev/null | awk '{print \$2}')
            echo \"vm|\$vmid|\$name|\$ip|\$status|$node\"
        done
    " 2>/dev/null || true
}
print_header() {
    echo "========================================"
    echo " KAVCORP NETWORK MAP"
    echo " Generated: $(date '+%Y-%m-%d %H:%M:%S')"
    echo "========================================"
    echo ""
}

print_nodes() {
    echo "PROXMOX NODES"
    echo "----------------------------------------"
    printf " %-15s %s\n" "IP" "HOSTNAME"
    echo "----------------------------------------"
    for node in "${!NODES[@]}"; do
        printf " %-15s %s\n" "${NODES[$node]}" "$node"
    done | sort -t. -k4 -n
    echo ""
}

print_resources() {
    local resources="$1"
    echo "LXC CONTAINERS"
    echo "----------------------------------------"
    printf " %-15s %-6s %-20s %-10s %s\n" "IP" "VMID" "NAME" "NODE" "STATUS"
    echo "----------------------------------------"
    # "|| true" guards against grep matching nothing under pipefail
    echo "$resources" | grep "^lxc|" | sort -t'|' -k4 -V | while IFS='|' read -r type vmid name ip status node; do
        printf " %-15s %-6s %-20s %-10s %s\n" "$ip" "$vmid" "$name" "$node" "$status"
    done || true
    echo ""
    echo "VIRTUAL MACHINES"
    echo "----------------------------------------"
    printf " %-15s %-6s %-20s %-10s %s\n" "IP" "VMID" "NAME" "NODE" "STATUS"
    echo "----------------------------------------"
    echo "$resources" | grep "^vm|" | sort -t'|' -k4 -V | while IFS='|' read -r type vmid name ip status node; do
        printf " %-15s %-6s %-20s %-10s %s\n" "$ip" "$vmid" "$name" "$node" "$status"
    done || true
}
print_json() {
    local resources="$1"
    echo "{"
    echo "  \"generated\": \"$(date -Iseconds)\","
    echo "  \"nodes\": ["
    first=true
    for node in "${!NODES[@]}"; do
        if [ "$first" = true ]; then first=false; else echo ","; fi
        printf "    {\"name\": \"%s\", \"ip\": \"%s\"}" "$node" "${NODES[$node]}"
    done
    echo ""
    echo "  ],"
    echo "  \"containers\": ["
    first=true
    # "|| true" guards against grep matching nothing under pipefail
    echo "$resources" | grep "^lxc|" | while IFS='|' read -r type vmid name ip status node; do
        if [ "$first" = true ]; then first=false; else echo ","; fi
        printf "    {\"vmid\": %s, \"name\": \"%s\", \"ip\": \"%s\", \"status\": \"%s\", \"node\": \"%s\"}" "$vmid" "$name" "$ip" "$status" "$node"
    done || true
    echo ""
    echo "  ],"
    echo "  \"vms\": ["
    first=true
    echo "$resources" | grep "^vm|" | while IFS='|' read -r type vmid name ip status node; do
        if [ "$first" = true ]; then first=false; else echo ","; fi
        printf "    {\"vmid\": %s, \"name\": \"%s\", \"ip\": \"%s\", \"status\": \"%s\", \"node\": \"%s\"}" "$vmid" "$name" "$ip" "$status" "$node"
    done || true
    echo ""
    echo "  ]"
    echo "}"
}
print_markdown() {
    local resources="$1"
    echo "# Network Map"
    echo ""
    echo "Generated: $(date '+%Y-%m-%d %H:%M:%S')"
    echo ""
    echo "## Proxmox Nodes"
    echo ""
    echo "| IP | Hostname |"
    echo "|---|---|"
    for node in "${!NODES[@]}"; do
        echo "| ${NODES[$node]} | $node |"
    done | sort -t. -k4 -n
    echo ""
    echo "## LXC Containers"
    echo ""
    echo "| IP | VMID | Name | Node | Status |"
    echo "|---|---|---|---|---|"
    # "|| true" guards against grep matching nothing under pipefail
    echo "$resources" | grep "^lxc|" | sort -t'|' -k4 -V | while IFS='|' read -r type vmid name ip status node; do
        echo "| $ip | $vmid | $name | $node | $status |"
    done || true
    echo ""
    echo "## Virtual Machines"
    echo ""
    echo "| IP | VMID | Name | Node | Status |"
    echo "|---|---|---|---|---|"
    echo "$resources" | grep "^vm|" | sort -t'|' -k4 -V | while IFS='|' read -r type vmid name ip status node; do
        echo "| $ip | $vmid | $name | $node | $status |"
    done || true
}
# Main
resources=""
for node in "${!NODES[@]}"; do
    node_resources=$(get_container_info "$node")
    resources="${resources}${node_resources}"$'\n'
done

case "$OUTPUT_FORMAT" in
    --json)
        print_json "$resources"
        ;;
    --markdown)
        print_markdown "$resources"
        ;;
    *)
        print_header
        print_nodes
        print_resources "$resources"
        ;;
esac
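`get_container_info` emits one pipe-delimited record per guest in the form `type|vmid|name|ip|status|node`; downstream code splits it with `read`, as this standalone sketch with sample data shows:

```shell
# Parse one record in the format get_container_info emits.
record='lxc|105|sonarr|10.4.2.20|running|pm2'
IFS='|' read -r type vmid name ip status node <<< "$record"
echo "$name ($type $vmid) is $status at $ip on $node"
# -> sonarr (lxc 105) is running at 10.4.2.20 on pm2
```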

scripts/provisioning/setup-ssh-access.sh (new file, +153 lines)

@@ -0,0 +1,153 @@
#!/bin/bash
# setup-ssh-access.sh - Provision SSH access to all LXC containers
#
# This script:
#   1. Ensures openssh-server is installed in each container
#   2. Creates /root/.ssh directory with correct permissions
#   3. Adds the workstation public key to authorized_keys
#   4. Configures PermitRootLogin with key-only authentication
#   5. Starts and enables sshd
#
# Usage: ./setup-ssh-access.sh [vmid...]
#   Without arguments: provisions all containers
#   With arguments: provisions only specified VMIDs

set -euo pipefail

WORKSTATION_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtmU1h0wIQiIF0UajcUKV4wQQ4f3dFIAHV8j9pQlNnT kavren@KavDesktop-Cachy"
# Map VMIDs to their host nodes
declare -A CONTAINER_NODES=(
    [101]="pm1"      # twingate
    [102]="pm1"      # zwave-js-ui
    [103]="pm4"      # pihole
    [104]="pm2"      # traefik
    [105]="pm2"      # sonarr
    [106]="pm3"      # mqtt
    [107]="pm3"      # dockge
    [108]="pm2"      # radarr
    [110]="pm4"      # docker-pm4
    [111]="pm4"      # unifi
    [112]="pm3"      # foundryvtt
    [113]="pm2"      # docker-pm2
    [114]="pm2"      # prowlarr
    [115]="pm2"      # jellyseerr
    [116]="pm2"      # authelia
    [117]="pm2"      # whisparr
    [118]="pm2"      # notifiarr
    [119]="pm2"      # bazarr
    [120]="pm2"      # kometa
    [121]="elantris" # jellyfin
    [122]="pm2"      # recyclarr
    [123]="elantris" # ollama
    [124]="elantris" # amp
    [125]="pm4"      # vaultwarden
    [126]="pm4"      # immich
    [127]="pm4"      # gitea
    [128]="pm3"      # frigate
)
setup_ssh() {
    local vmid=$1
    local node=${CONTAINER_NODES[$vmid]:-}
    if [[ -z "$node" ]]; then
        echo "ERROR: Unknown VMID $vmid"
        return 1
    fi

    local hostname
    hostname=$(ssh "$node" "pct config $vmid 2>/dev/null | grep hostname | awk '{print \$2}'" 2>/dev/null || echo "unknown")
    echo "=== Setting up SSH for VMID $vmid ($hostname) on $node ==="

    # Check if container is running
    if ! ssh "$node" "pct status $vmid 2>/dev/null" | grep -q "running"; then
        echo "  WARNING: Container $vmid is not running, skipping..."
        return 1
    fi

    # Detect package manager and install openssh-server
    echo "  Installing openssh-server..."
    ssh "$node" "pct exec $vmid -- bash -c '
        if command -v apt-get &> /dev/null; then
            export DEBIAN_FRONTEND=noninteractive
            apt-get update -qq && apt-get install -y -qq openssh-server 2>/dev/null
        elif command -v apk &> /dev/null; then
            apk add --quiet openssh openssh-server 2>/dev/null
        elif command -v dnf &> /dev/null; then
            dnf install -y -q openssh-server 2>/dev/null
        elif command -v pacman &> /dev/null; then
            pacman -Sy --noconfirm openssh 2>/dev/null
        else
            echo \"Unknown package manager\"
            exit 1
        fi
    '" 2>/dev/null || echo "  Note: openssh may already be installed"

    # Create .ssh directory and set permissions
    echo "  Configuring SSH keys..."
    ssh "$node" "pct exec $vmid -- bash -c '
        mkdir -p /root/.ssh
        chmod 700 /root/.ssh
        touch /root/.ssh/authorized_keys
        chmod 600 /root/.ssh/authorized_keys
    '"

    # Add the workstation key (idempotent - only adds if not present)
    ssh "$node" "pct exec $vmid -- bash -c \"
        grep -qF '$WORKSTATION_KEY' /root/.ssh/authorized_keys 2>/dev/null || \
        echo '$WORKSTATION_KEY' >> /root/.ssh/authorized_keys
    \""

    # Enable PermitRootLogin with key only (more secure than password)
    echo "  Configuring sshd..."
    ssh "$node" "pct exec $vmid -- bash -c '
        if [[ -f /etc/ssh/sshd_config ]]; then
            sed -i \"s/^#*PermitRootLogin.*/PermitRootLogin prohibit-password/\" /etc/ssh/sshd_config
        fi
    '" 2>/dev/null || true

    # Start and enable sshd
    echo "  Starting sshd..."
    ssh "$node" "pct exec $vmid -- bash -c '
        if command -v systemctl &> /dev/null; then
            systemctl enable ssh 2>/dev/null || systemctl enable sshd 2>/dev/null || true
            systemctl restart ssh 2>/dev/null || systemctl restart sshd 2>/dev/null || true
        elif command -v rc-service &> /dev/null; then
            rc-update add sshd default 2>/dev/null || true
            rc-service sshd restart 2>/dev/null || true
        fi
    '" 2>/dev/null || true

    echo "  SSH setup complete for $vmid ($hostname)"
    echo ""
}
# Main execution
if [[ $# -gt 0 ]]; then
    # Provision specific VMIDs
    CONTAINERS=("$@")
else
    # Provision all containers (quoted expansion keeps VMIDs intact)
    CONTAINERS=("${!CONTAINER_NODES[@]}")
fi

echo "Starting SSH provisioning for ${#CONTAINERS[@]} containers..."
echo "Using key: $WORKSTATION_KEY"
echo ""

failed=()
for vmid in "${CONTAINERS[@]}"; do
    if ! setup_ssh "$vmid"; then
        failed+=("$vmid")
    fi
done

echo "=== SSH Provisioning Complete ==="
if [[ ${#failed[@]} -gt 0 ]]; then
    echo "Failed containers: ${failed[*]}"
fi
echo ""
echo "Test with: ssh root@<container-ip>"
echo "Or after DNS setup: ssh root@<service>.kav"
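The `grep -qF … || echo … >>` pattern used for authorized_keys is what makes re-running the script safe; a standalone sketch of the same idempotent-append idea (hypothetical helper, exercised against a temp file):

```shell
# Append a line to a file only if it is not already present,
# mirroring the authorized_keys pattern in setup-ssh-access.sh.
append_once() {
    local line=$1 file=$2
    grep -qF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

keyfile=$(mktemp)
append_once "ssh-ed25519 AAAA...example key" "$keyfile"
append_once "ssh-ed25519 AAAA...example key" "$keyfile"
# second call was a no-op: the file still holds a single line
wc -l < "$keyfile"
rm -f "$keyfile"
```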