# KavCorp Proxmox Infrastructure
Documentation and management repository for the KavCorp Proxmox cluster.
## Quick Start
```bash
# Connect to primary management node
ssh pm2
# View cluster status
pvecm status
# List all guests (VMs and LXCs; the API reports both under type "vm")
pvesh get /cluster/resources --type vm --output-format json
```
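The `ssh pm2` shortcut assumes a host alias in `~/.ssh/config`; a minimal entry might look like the following (a sketch only, and the `root` user is an assumption about this setup):
```
Host pm2
    HostName 10.4.2.6
    User root
```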
## Repository Structure
```
proxmox-infra/
├── CLAUDE.md              # Development guidance for Claude Code
├── README.md              # This file
├── docs/                  # Documentation
│   ├── cluster-state.md   # Current cluster topology
│   ├── inventory.md       # VM/LXC inventory with specs
│   ├── network.md         # Network topology and IP assignments
│   ├── storage.md         # Storage layout and usage
│   └── services.md        # Service mappings and dependencies
└── scripts/               # Management scripts
    ├── backup/            # Backup automation
    ├── provisioning/      # LXC/VM creation scripts
    └── monitoring/        # Health checks and monitoring
```
## Cluster Overview
- **Cluster Name**: KavCorp
- **Nodes**: 5 (pm1, pm2, pm3, pm4, elantris)
- **Total VMs**: 2
- **Total LXCs**: 19
- **Primary Network**: 10.4.2.0/24
- **Management Node**: pm2 (10.4.2.6)
### Nodes
| Node | IP | CPU | RAM | Role |
|---|---|---|---|---|
| pm1 | 10.4.2.2 | 4 cores | 16GB | General purpose |
| pm2 | 10.4.2.6 | 12 cores | 31GB | **Primary management, media stack** |
| pm3 | 10.4.2.3 | 16 cores | 33GB | Docker, NVR, gaming |
| pm4 | 10.4.2.5 | 12 cores | 31GB | Docker, NVR |
| elantris | 10.4.2.14 | 16 cores | 128GB | **Storage node, media server** |
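The table above is maintained by hand; it can be cross-checked against the live cluster via the API. A sketch, assuming `jq` is installed on the management node:
```bash
# List each node with its CPU count and total memory (GiB), for comparison
# against the table above.
pvesh get /cluster/resources --type node --output-format json \
  | jq -r '.[] | "\(.node): \(.maxcpu) cores, \((.maxmem / 1024 / 1024 / 1024) | floor) GiB"'
```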
## Key Services
- **Traefik** (10.4.2.10): Reverse proxy
- **Jellyfin** (10.4.2.22): Media server - **Recently added to Traefik**
- **Media Automation**: Sonarr, Radarr, Prowlarr, Bazarr, Whisparr (on pm2)
- **Home Assistant** (VMID 100): Home automation
- **Frigate** (VMID 111): NVR with object detection
## Recent Changes
**2025-11-16**:
- ✅ Created initial repository structure and documentation
- ✅ Documented 5-node cluster configuration
- ✅ Added Jellyfin to Traefik configuration (jellyfin.kavcorp.com)
- ✅ Inventoried 21 containers (2 VMs, 19 LXCs)
## Documentation
See the `docs/` directory for detailed information:
- [Cluster State](docs/cluster-state.md) - Node details and health
- [Inventory](docs/inventory.md) - Complete VM/LXC listing
- [Network](docs/network.md) - IP allocations and network topology
- [Storage](docs/storage.md) - Storage pools and usage
- [Services](docs/services.md) - Service mappings and access URLs
## Common Tasks
### Managing LXCs
```bash
# Start/stop/reboot (pct has no "restart" subcommand)
pct start <vmid>
pct stop <vmid>
pct reboot <vmid>
# View config
pct config <vmid>
# Execute command
pct exec <vmid> -- <command>
```
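The per-VMID commands above can also be driven in bulk from `pct list`; for example, a quick status sweep over every container on the current node (a sketch, run as root on the node):
```bash
# Print the status of every LXC on this node.
# `pct list` includes a header row, so skip the first line.
for vmid in $(pct list | awk 'NR > 1 {print $1}'); do
    echo -n "CT ${vmid}: "
    pct status "${vmid}"
done
```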
### Checking Resources
```bash
# Cluster-wide resources
pvesh get /cluster/resources --output-format json
# Storage usage
pvesh get /cluster/resources --type storage --output-format json
```
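The JSON output pairs well with `jq` for quick summaries; for example, percent used per storage pool (a sketch, assuming `jq` is installed and the usual `disk`/`maxdisk` fields in the response):
```bash
# Rough percent-used per storage pool across the cluster.
pvesh get /cluster/resources --type storage --output-format json \
  | jq -r '.[] | "\(.node)/\(.storage): \((.disk / .maxdisk * 100) | floor)% used"'
```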
## Access
- **Web UI**: https://pm2.kavcorp.com:8006 (or any node)
- **Traefik Dashboard**: https://traefik.kavcorp.com
- **Jellyfin**: https://jellyfin.kavcorp.com
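A quick way to confirm the endpoints above respond and that Traefik is routing them (a sketch; assumes internal DNS resolves the `kavcorp.com` names from where you run it):
```bash
# Print the HTTP status code for each endpoint; -k skips certificate
# validation in case internal certs are self-signed (an assumption).
for url in https://pm2.kavcorp.com:8006 https://traefik.kavcorp.com https://jellyfin.kavcorp.com; do
    printf '%s -> ' "$url"
    curl -kso /dev/null -w '%{http_code}\n' "$url"
done
```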
## Notes
- This repository is part of a migration away from the old, messy `~/infrastructure` repo
- Goal: Move services from Docker to LXCs where appropriate
- Primary new LXC deployment node: **pm2**
- Most services use community helper scripts from https://helper-scripts.com
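For the occasional container created by hand on pm2 rather than via the helper scripts, the creation step looks roughly like this (a sketch only; the VMID, template, storage, bridge, IP, and gateway below are placeholders, not values from this cluster):
```bash
# Hypothetical manual LXC creation on pm2; every value here is a placeholder.
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname new-service \
  --cores 2 --memory 2048 \
  --net0 name=eth0,bridge=vmbr0,ip=10.4.2.50/24,gw=10.4.2.1 \
  --rootfs local-lvm:8 \
  --unprivileged 1
pct start 200
```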