SecureGate Docs

Getting Started

Quick start guide — run SecureGate locally, create an event, register attendees, connect cameras

Prerequisites

  • Docker with NVIDIA Container Toolkit (for GPU inference)
  • Node.js 22+ and pnpm (for the frontend)
  • A Satschel IAM account with an org configured

Run the Backend

The full local stack runs via compose.yml. This starts the Go API gateway, Python microservices (ingest, embed, enhance), and supporting infrastructure (MinIO, Redis).

git clone https://github.com/securegate-ai/api.git
cd api/deploy
docker compose up

Services will be available at:

| Service     | URL                   | Description                       |
| ----------- | --------------------- | --------------------------------- |
| API Gateway | http://localhost:8080 | Base API + v1 routing             |
| Ingest      | http://localhost:8002 | Face detection + weapon detection |
| Embed       | http://localhost:8001 | ArcFace embedding + vector search |
| Enhance     | http://localhost:8003 | CodeFormer + RealESRGAN           |

Models are downloaded automatically on first start from the Hugging Face and InsightFace repositories; first boot takes 2-5 minutes depending on bandwidth.
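Since the first boot can take a few minutes, it is handy to poll the services before running any requests. The sketch below probes the four ports from the table above; using the root path `/` as a liveness probe is an assumption — substitute a real health endpoint if your deployment exposes one.

```python
# Sketch: wait until all local SecureGate services answer HTTP.
# Ports come from the services table; the "/" probe path is an assumption.
import time
import urllib.error
import urllib.request

SERVICES = {
    "gateway": "http://localhost:8080/",
    "embed": "http://localhost:8001/",
    "ingest": "http://localhost:8002/",
    "enhance": "http://localhost:8003/",
}

def is_up(url: str, timeout: float = 2.0) -> bool:
    """True if the service answers any HTTP status at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # server responded (even with 404) -- the process is up
    except (urllib.error.URLError, OSError):
        return False

def wait_for_stack(poll: float = 5.0, max_wait: float = 600.0) -> None:
    """Block until every service responds, or raise after max_wait seconds."""
    deadline = time.monotonic() + max_wait
    pending = dict(SERVICES)
    while pending and time.monotonic() < deadline:
        pending = {name: url for name, url in pending.items() if not is_up(url)}
        if pending:
            time.sleep(poll)
    if pending:
        raise TimeoutError(f"services not ready: {sorted(pending)}")
```

Call `wait_for_stack()` after `docker compose up` returns; it exits as soon as all four services accept connections.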

Run the Frontend

git clone https://github.com/securegate-ai/app.git
cd app
pnpm install
pnpm dev

The operator dashboard starts at http://localhost:5173.

Create Your First Event

1. Create an Event

curl -X POST http://localhost:8080/events \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "name": "Security Demo",
    "description": "First SecureGate event",
    "date": "2026-04-15T09:00:00Z"
  }'

2. Add a Room

curl -X POST http://localhost:8080/rooms \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "event_id": "<event_id>",
    "name": "Main Entrance",
    "capacity": 500
  }'

3. Add a Camera

curl -X POST http://localhost:8080/cameras \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "room_id": "<room_id>",
    "name": "Entry Cam 1",
    "type": "entry",
    "source": "rtsp://192.168.1.100:554/stream1"
  }'

Camera types: entry (tracks arrivals), exit (tracks departures), and in-event (monitors activity inside the room).

4. Register an Attendee

curl -X POST http://localhost:8080/register-attendee/<event_id> \
  -H "Authorization: Bearer $TOKEN" \
  -F "name=John Doe" \
  -F "email=john@example.com" \
  -F "photo=@john_face.jpg"

The photo is processed through the full pipeline: face detection, quality checks (blur, angle, size), alignment to 112x112, extraction of a 512-d embedding, encryption with the tenant CEK, and storage in the sqlite-vec index.
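The matching step at the end of that pipeline reduces to nearest-neighbour search over the stored 512-d embeddings. A minimal sketch of the comparison, using plain cosine similarity (the real service uses ArcFace embeddings and the sqlite-vec index; the 0.4 acceptance threshold here is an illustrative assumption, not the service default):

```python
# Sketch: match a probe embedding against enrolled attendee embeddings.
# Threshold of 0.4 is illustrative; tune against your own data.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe: list[float],
               gallery: dict[str, list[float]],
               threshold: float = 0.4):
    """Return (attendee_id, score) for the closest enrolled embedding,
    or None if nothing clears the threshold."""
    attendee_id, emb = max(
        gallery.items(),
        key=lambda item: cosine_similarity(probe, item[1]),
    )
    score = cosine_similarity(probe, emb)
    return (attendee_id, score) if score >= threshold else None
```

A vector index like sqlite-vec performs the same comparison, but prunes the search so it stays fast over thousands of enrolled faces.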

5. Start Live Detection

Open the operator dashboard at http://localhost:5173, navigate to your event, and click on a camera to start the live detection view. Detected faces are matched in real-time against registered attendees.

Next Steps

  • Architecture — understand the full multi-tenant microservices design
  • API Reference — complete endpoint reference
  • Face Detection — how the detection pipeline works
  • Deployment — deploy to K8s with GPU scheduling
