Local Setup
Purpose
Get a new engineer from git clone to a working local HUPH
environment (API + admin + infra, with RAG/chat completion via Dify)
in about 15 minutes. After this page, you should be able to send a
test message to the local chatbot and see the response rendered in
the admin inbox.
Prerequisites
- Node.js ≥ 20 (check: node --version)
- Docker + docker-compose — for Postgres + Valkey (Redis-compatible)
- Git access to the repo: https://github.com/wabiwabo/huph
- 8 GB RAM minimum for the core HUPH stack (API + admin + Postgres
  + Valkey). If you also bring up the local Dify stack via
  docker-compose.dify.yml, add another ~4–6 GB.
- Disk: ~5 GB free for Docker images and Postgres data
- Access to a Dify deployment for RAG / chat completion — either
  the production one at https://dify.huph.val.id (read-only for
  testing) or a local Dify stack via
  docker-compose -f docker-compose.dify.yml up -d. See
  Architecture → Integrations for which Dify dataset ID and API key
  you need.
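The Node.js requirement can be checked with a small preflight helper. This is just an illustrative sketch (check_node_major is not a script that exists in the repo):

```shell
# Hypothetical preflight helper: parse a `node --version` string and
# check that the major version is at least 20.
check_node_major() {
  ver="${1#v}"          # strip the leading "v"
  major="${ver%%.*}"    # keep only the major component
  if [ "$major" -ge 20 ]; then
    echo "OK: Node $1"
  else
    echo "TOO OLD: Node $1 (need >= 20)"
  fi
}

# Run against the real toolchain when node is installed:
if command -v node >/dev/null 2>&1; then
  check_node_major "$(node --version)"
fi

check_node_major "v20.11.1"   # prints: OK: Node v20.11.1
check_node_major "v18.19.0"   # prints: TOO OLD: Node v18.19.0 (need >= 20)
```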
Steps
1. Clone the repo
git clone https://github.com/wabiwabo/huph.git
cd huph
2. Install JavaScript workspaces
npm install
This installs all workspace dependencies (apps/api, apps/admin,
packages/*).
3. Start infrastructure (Postgres + Valkey)
docker-compose up -d postgres
docker-compose ps
Valkey (Redis-compatible) runs as huph-valkey in a separate
docker-compose.infra.yml stack shared across val.id projects. On a
fresh machine, bring it up from /opt/infra (or wherever your ops
keeps the shared infra compose file):
cd /opt/infra && docker-compose -f docker-compose.infra.yml up -d huph-valkey
Expected containers running after the two commands:
- huph-postgres on host port 5433 (container 5432)
- huph-valkey on host port 49379 (container 6379), shared network huph_huph-infra
Historical note: Qdrant and a dedicated RAG service used to live in
this stack but were removed in late March / early April 2026 — see
Architecture overview for the
current Dify-backed pipeline. The current docker-compose.yml
carries comments that call this out explicitly.
If any container is unhealthy, check docker-compose logs <service>.
4. Configure environment
cp .env.example .env
Minimum env vars to set before starting services:
- ANTHROPIC_API_KEY — required (Claude Haiku, used by the API for intent routing LLM fallback + eval judging)
- OPENAI_API_KEY — required (used by Dify for embeddings; the API passes it through)
- DIFY_APP_API_KEY — required for the Dify chat-messages integration
- DIFY_KB_API_KEY — required for Dify Knowledge Base management endpoints (KB crawl, upload, gap detection)
- DIFY_DATASET_ID — the main KB dataset UUID in Dify
- DIFY_INTENT_DATASET_ID — the intent router's secondary dataset UUID (optional if the router's Dify layer is disabled)
- D360_API_KEY — optional locally (only needed for the real WhatsApp webhook; you can exercise the API without it)
- D360_WEBHOOK_SECRET — used by the hub.verify_token one-time subscription bootstrap only
- DATABASE_URL — defaults to postgresql://huph:...@postgres:5432/huph (inside containers) or postgresql://huph:...@localhost:5433/huph (from host)
- REDIS_URL — defaults to redis://huph-valkey:6379 (inside containers) or redis://localhost:49379 (from host)
- LEAD_CAPTURE_ENABLED=true (default)
- TEAM_OWNERSHIP_ENABLED=true (default)
- ESCALATION_ROUTING_ENABLED=true (default)
- API_AUTH_MODE=disabled (Phase 0 default; see API architecture for the warn/enforce rollout)
Also make sure NEXTAUTH_SECRET and JWT_SECRET are set — they are
required for admin auth AND for Socket.io realtime JWE decoding in the
API. A missing NEXTAUTH_SECRET in the API container is the single
most common "realtime stuck Offline" symptom.
For production credentials (admin password, 360dialog key,
integration URLs), see CREDENTIALS.md at the repo root — it is
gitignored.
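Pulling the above together, a local .env might look roughly like the fragment below. All values are placeholders (real keys live in CREDENTIALS.md); the host-side URLs assume the port mappings from Step 3:

```shell
# Illustrative .env fragment — every value here is a placeholder.
ANTHROPIC_API_KEY=sk-ant-placeholder
OPENAI_API_KEY=sk-placeholder
DIFY_APP_API_KEY=app-placeholder
DIFY_KB_API_KEY=dataset-placeholder
DIFY_DATASET_ID=00000000-0000-0000-0000-000000000000

# Host-side URLs (use postgres:5432 / huph-valkey:6379 inside containers)
DATABASE_URL=postgresql://huph:change-me@localhost:5433/huph
REDIS_URL=redis://localhost:49379

# Needed by BOTH the admin and the API container (realtime JWE decoding)
NEXTAUTH_SECRET=change-me
JWT_SECRET=change-me

API_AUTH_MODE=disabled   # Phase 0 default
```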
5. Run database migrations
The root package.json has db:migrate wired to a future
packages/database workspace that is not yet materialized in the
current repo. For now, migrations are applied manually via raw SQL
files in scripts/migrate-*.sql:
# Apply each migration script in order (check git log for the latest)
for f in scripts/migrate-*.sql; do
echo "--- Applying $f ---"
docker exec -i huph-postgres psql -U huph -d huph < "$f"
done
On a fresh Postgres, run the migrations in chronological order. For
any individual migration, you can also copy-paste into a psql
session. See Deploy for the production
pattern.
Historical note: npm run db:migrate will fail today because
packages/database doesn't exist yet. Do not use it until the
workspace is created.
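Note that the shell glob in the loop above expands lexicographically, so "chronological order" only holds if the migration filenames sort that way (e.g. a zero-padded sequence or date prefix). A quick sketch with hypothetical filenames (the real ones live in scripts/; check git log for the actual order):

```shell
# Hypothetical migration filenames — zero-padded prefixes keep the
# lexicographic glob order aligned with the intended apply order.
printf '%s\n' \
  "scripts/migrate-002-teams.sql" \
  "scripts/migrate-001-init.sql" \
  "scripts/migrate-010-escalation.sql" | sort
# prints:
#   scripts/migrate-001-init.sql
#   scripts/migrate-002-teams.sql
#   scripts/migrate-010-escalation.sql
```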
6. Start the API server
In a new terminal:
npm run dev:api
Expected: API listening on 3101 within ~3 seconds. tsx watch is
used for hot reload.
7. Start the admin dashboard
In another terminal:
npm run dev:admin
Expected: ready - started server on 0.0.0.0:3103 (dev port).
Note: in production the admin runs via systemd on port 47293 behind
nginx at admin.huph.val.id.
8. (Optional) Start the crawler worker
The crawler worker (apps/crawler-worker) is a background service
that runs Dify KB ingestion jobs. You only need it running if you're
actively testing knowledge-base ingestion:
docker-compose up -d crawler-worker
docker-compose logs -f crawler-worker
For most dev work it can stay off.
9. Smoke test
curl http://localhost:3101/health
# Expected: {"status":"ok"}
# Open admin in browser
open http://localhost:3103
Log in with local admin creds (see CREDENTIALS.md). Navigate to
Conversations — should load the inbox.
There is no local RAG service — AI chat completion and KB
retrieval now go through Dify. For testing the chat path end-to-end,
either point the API at the production Dify at
https://dify.huph.val.id (read access, careful with writes) or
bring up a local Dify stack via docker-compose -f
docker-compose.dify.yml up -d.
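If you want the smoke test scriptable, the health response can be checked with plain grep. A sketch (the canned JSON below stands in for a live curl response):

```shell
# Sketch: fail fast unless /health reports ok.
# In real use: health="$(curl -fsS http://localhost:3101/health)"
health='{"status":"ok"}'

if printf '%s' "$health" | grep -q '"status":"ok"'; then
  echo "API healthy"
else
  echo "API unhealthy: $health" >&2
  exit 1
fi
# prints: API healthy
```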
Gotchas
- There is no apps/rag or npm run dev:rag. The self-hosted RAG
  service was removed in late March / early April 2026 and replaced
  by Dify. Older CLAUDE.md text may still describe it — prefer
  docker-compose.yml and the apps/ directory listing as the source
  of truth for current reality.
- npm run db:migrate will fail today because packages/database is
  not materialized yet. Run SQL migrations directly against Postgres
  (see Step 5).
- Postgres port: host port is 5433, not 5432. The container still
  listens on 5432, but docker-compose maps it out to 5433 to avoid
  colliding with a local Postgres install.
- Valkey port: host port is 49379 (container 6379). Inside
  containers, use the service name huph-valkey:6379.
- Docker internal URLs differ from host URLs. Inside containers, use
  postgres:5432 and huph-valkey:6379. From the host, use
  localhost:5433 and localhost:49379.
- NextAuth uses JWE, not signed JWT. Do not use
  jsonwebtoken.sign/verify for session tokens — use next-auth/jwt
  encode/decode. See dev/architecture/api.en.md.
- Realtime requires NEXTAUTH_SECRET in the API container env (not
  just the admin). Missing secret → Socket.io clients stuck
  "Offline". Add it to docker-compose.yml + root .env.
- The 360dialog tier in use does NOT expose an App Secret. Webhook
  HMAC signing is not possible on this tier — the auth middleware at
  apps/api/src/middleware/ relies on internal HMAC + network
  isolation (port 3101 bound to 127.0.0.1) instead.
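The host-vs-container URL split is a recurring trip-up, so it can help to encode it once. A sketch (IN_CONTAINER is a hypothetical flag and the password is a placeholder, not an existing HUPH convention):

```shell
# Hypothetical helper: derive DB/Redis URLs from where the process runs.
IN_CONTAINER="${IN_CONTAINER:-false}"

if [ "$IN_CONTAINER" = "true" ]; then
  # Inside the compose network: service names + container ports
  DATABASE_URL="postgresql://huph:change-me@postgres:5432/huph"
  REDIS_URL="redis://huph-valkey:6379"
else
  # From the host: localhost + mapped host ports
  DATABASE_URL="postgresql://huph:change-me@localhost:5433/huph"
  REDIS_URL="redis://localhost:49379"
fi

echo "$DATABASE_URL"
echo "$REDIS_URL"
```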
See also
- Repository tour — what each app and package does
- Running tests — how to verify your changes
- Debugging — where logs live and how to trace issues