Air-Gapped Deployment
Deploy QBITEL Bridge in environments with no internet connectivity, using on-premise LLM inference and pre-loaded container images.
Overview
QBITEL Bridge fully supports air-gapped deployments where no outbound internet connectivity is available. All AI/ML inference runs locally using Ollama, and all container images and dependencies are pre-loaded.
Key Benefits
- All LLM inference runs locally via Ollama -- no cloud API calls
- No internet connectivity required after initial setup
- All data stays within your infrastructure perimeter
- No API keys or cloud accounts needed
- Compliant with the strictest data residency regulations
Step 1: Prepare on a Connected Machine
On a machine with internet access, download all required images and models:
# Pull LLM models via Ollama
ollama pull llama3.1:8b
ollama pull llama3.1:70b
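# Bundle the pulled models for offline transfer. A default user install
# keeps models under ~/.ollama/models (OLLAMA_MODELS overrides this);
# the path is an assumption -- adjust for your install.
MODEL_DIR="${OLLAMA_MODELS:-$HOME/.ollama/models}"
if [ -d "$MODEL_DIR" ]; then
  tar -czf ollama-models.tar.gz -C "$(dirname "$MODEL_DIR")" "$(basename "$MODEL_DIR")"
fi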
# Save container images to tar files
docker save qbitel/controlplane:latest > controlplane.tar
docker save qbitel/mgmtapi:latest > mgmtapi.tar
docker save qbitel/xds-server:latest > xds-server.tar
docker save qbitel/admission-webhook:latest > admission-webhook.tar
docker save qbitel/qbitel-engine:latest > qbitel-engine.tar
# Download Python packages for offline install
pip download -r requirements.txt -d ./offline-packages/
Step 2: Transfer to Air-Gapped Environment
Transfer the downloaded artifacts to the air-gapped environment using approved media (USB drive, secure file transfer):
- Container image tar files
- Ollama model files (stored under ~/.ollama/models in a default user install)
- Python offline packages directory
- QBITEL Bridge source code or Helm chart
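To catch corruption during the copy, checksum the artifacts on the connected side and re-verify after the transfer. A minimal sketch using sha256sum, demonstrated here on a scratch file (in practice, run it against the .tar files and model bundle from Step 1):

```bash
# Sketch: record checksums before transfer, verify after.
workdir=$(mktemp -d)
echo "demo artifact" > "$workdir/controlplane.tar"   # stand-in for a real image tar

# Connected side: record checksums alongside the artifacts
(cd "$workdir" && sha256sum *.tar > artifacts.sha256)

# Air-gapped side, after the copy: non-zero exit means a corrupted file
(cd "$workdir" && sha256sum -c artifacts.sha256)
```

Transfer artifacts.sha256 on the same media so the verification can run entirely offline.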
Step 3: Load Images
# Load container images on the air-gapped machine
docker load < controlplane.tar
docker load < mgmtapi.tar
docker load < xds-server.tar
docker load < admission-webhook.tar
docker load < qbitel-engine.tar
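# Restore the transferred Ollama models. This assumes the model directory
# was bundled as ollama-models.tar.gz on the connected machine; adjust the
# destination if OLLAMA_MODELS points elsewhere.
mkdir -p ~/.ollama
if [ -f ollama-models.tar.gz ]; then
  tar -xzf ollama-models.tar.gz -C ~/.ollama
fi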
# Install Python packages from local directory
pip install --no-index --find-links=./offline-packages/ -r requirements.txt
Step 4: Configure for Air-Gapped Mode
# Set air-gapped environment variables
export QBITEL_LLM_PROVIDER=ollama
export QBITEL_LLM_ENDPOINT=http://localhost:11434
export QBITEL_AIRGAPPED_MODE=true
export QBITEL_DISABLE_CLOUD_LLMS=true
export QBITEL_LLM_MODEL=llama3.1:8b
Step 5: Deploy
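Before starting the services, it can help to sanity-check the Step 4 configuration. A sketch that assumes only the variable names shown above:

```bash
# Fail fast if the air-gapped configuration from Step 4 is incomplete:
# prints each unset variable and returns non-zero if any is missing.
check_airgapped_env() {
  ok=0
  for var in QBITEL_LLM_PROVIDER QBITEL_LLM_ENDPOINT QBITEL_AIRGAPPED_MODE \
             QBITEL_DISABLE_CLOUD_LLMS QBITEL_LLM_MODEL; do
    if [ -z "$(printenv "$var")" ]; then
      echo "missing: $var" >&2
      ok=1
    fi
  done
  return $ok
}
```

For example, gate the launch on it: check_airgapped_env && python -m ai_engine --airgapped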
# Start Ollama with the loaded model
ollama serve &
# Deploy with air-gapped flag
python -m ai_engine --airgapped
# Or via Helm with air-gapped values
helm install qbitel-bridge ./helm/qbitel-bridge \
--namespace qbitel-service-mesh \
--create-namespace \
--set airgapped.enabled=true \
--set llm.provider=ollama \
--set llm.endpoint=http://ollama:11434
LLM Model Selection
| Model | Size | RAM Required | Best For |
|---|---|---|---|
| llama3.1:8b | 4.7 GB | 8 GB | Development, resource-constrained |
| llama3.1:70b | 39 GB | 48 GB | Production, highest quality |
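Given the RAM requirements above, a quick host check helps pick a model before pulling it. A Linux-only sketch that reads /proc/meminfo:

```bash
# Report total host RAM in GiB to compare against the table above.
# Falls back to 0 if /proc/meminfo is unavailable (non-Linux hosts).
total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)
echo "Total RAM: $(( total_kib / 1024 / 1024 )) GiB"
```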
Verify Air-Gapped Operation
# Verify Ollama is serving the model
curl http://localhost:11434/api/tags
# Verify the AI Engine is running in air-gapped mode
curl http://localhost:8000/health
# Test a discovery request without internet
# (the sample packet_data "R0VUIC8gSFRUUC8xLjE=" is base64 for "GET / HTTP/1.1")
curl -X POST http://localhost:8000/api/v1/discover \
-H "Content-Type: application/json" \
-d '{"packet_data": "R0VUIC8gSFRUUC8xLjE="}'
Next Steps
- Production Checklist -- security hardening for production environments
- Compliance Frameworks -- meet data residency requirements
- Troubleshooting -- diagnose air-gapped deployment issues