Remote-first Docker dev environment for use with Cursor Remote-SSH on a cloud VM. Includes a right-side chat assistant (SSE streaming) and two viz pages.
- FastAPI (api/main.py)
- Postgres 17 (Docker service)
- Docker + docker compose
- Alembic migrations (alembic/)
- HTML/CSS/JS static viz pages (api/static)
- LLM via OpenRouter or mock provider (SSE streaming)
- Copy env: cp env.sample .env
- (Optional) Configure LLM:
  echo "LLM_PROVIDER=openrouter" >> .env
  echo "OPENROUTER_API_KEY=sk-..." >> .env
- Build & run: make up
- Apply DB migrations (inside the api container): docker compose exec api bash -lc "alembic upgrade head"
- Load indicator/series registry metadata: make load-registry (or directly: docker compose exec api bash -lc "python -m app.registry_loader series_registry.yaml")
- Open the app: open http://localhost:8000/ (health check: open http://localhost:8000/health)
- Stop: make down
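The registry loader expects a YAML file; its exact schema is defined by `app.registry_loader`, so the fragment below is purely hypothetical, to illustrate the kind of metadata a series entry might carry:

```yaml
# Hypothetical fragment - the real schema is defined by app.registry_loader;
# field names (id, label, unit, source) are illustrative, not confirmed.
series:
  - id: bill_share
    label: "T-bill share of marketable debt"
    unit: percent
    source: treasury
```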
- Static viz pages: `viz_indicators.html`, `viz_series.html`, shared top nav.
- Right sidebar "Liquidity Assistant" chat (SSE stream): brief + ask, raw stream log, answer pane.
- Root redirect to `/static/viz_indicators.html`.
- Access logging middleware: timestamp, client IP, method, path, status, duration, user-agent.
- Unified indicator/series metadata via `indicator_registry.yaml` (+ optional `series_registry.yaml`).
- GET
/→ redirects to/static/viz_indicators.html - GET
/health→ basic health - Static pages under
/static/*(e.g.,/static/viz_indicators.html) - LLM SSE stream: GET
/llm/ask_stream?question=...(optionalas_of)- Example: curl -sS -N "http://localhost:8000/llm/ask_stream?question=what%20is%20bill%20share"
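The stream arrives as standard `text/event-stream` lines (`event:` / `data:` fields, events terminated by a blank line). A minimal client-side parser sketch; the event names used in the test are illustrative, not the API's actual event types:

```python
def parse_sse(stream_lines):
    """Parse raw SSE lines into a list of {"event", "data"} dicts."""
    events, current = [], {"event": "message", "data": []}
    for line in stream_lines:
        line = line.rstrip("\n")
        if line == "":
            # Blank line terminates an event; emit it if it carried data
            if current["data"]:
                events.append({"event": current["event"],
                               "data": "\n".join(current["data"])})
            current = {"event": "message", "data": []}
        elif line.startswith("event:"):
            current["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            current["data"].append(line[len("data:"):].strip())
    return events
```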
- `make up` – build and start API + Postgres
- `make down` – stop
- `make logs` – follow api/db logs
- `make rebuild` – rebuild api image without cache
- `make shell` – shell into api container
- `make load-registry` – load `indicator_registry.yaml` (and embedded `series_registry` if present)
- `make fetch-core` – optional data fetcher (supports FETCH_PAGES, FETCH_LIMIT)
- `make test` – run tests
- Run inside the container: docker compose exec api bash -lc "alembic upgrade head"
- Alembic scripts live in `alembic/versions/` (e.g., `add_series_registry`).
- `.env` keys:
  - `LLM_PROVIDER` – `openrouter` or `mock`
  - `OPENROUTER_API_KEY` – required when using `openrouter`
  - `LLM_MODEL` – e.g., `gpt-4o-mini`
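Put together, a working `.env` for the OpenRouter provider would look like this (model name and key are placeholder examples):

```
# .env - example values; key and model are placeholders
LLM_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-...
LLM_MODEL=gpt-4o-mini
```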
- SSE chat widget reads `/llm/ask_stream` and displays raw events + answer.
- Dataset: `docs/llm-eval-dataset.jsonl` (JSONL, one prompt per line)
- Run: python app/scripts/llm_eval_runner.py --api-base http://localhost:8000 --dataset docs/llm-eval-dataset.jsonl --out eval_runs --verbose
- Output: `eval_runs/<timestamp>/results.json` (raw SSE captured in `raw_text` and `raw_lines`).
- Provision an Ubuntu 22.04/24.04 VM (≥2 vCPU, 4 GB RAM). Open port 22 only.
- Install Docker + compose:
  curl -fsSL https://get.docker.com | sh
  sudo usermod -aG docker $USER && newgrp docker
- Clone repo and start:
  git clone https://github.com/your-org/liquidity-pulse.git
  cd liquidity-pulse
  cp env.sample .env   # then edit .env as needed
  make up
- Tunnel API to your laptop: ssh -N -L 8000:localhost:8000 user@your-vm (then visit http://localhost:8000/)
- Add SSH host in Cursor → Remote-SSH: user@your-vm
- Open the repo folder on the VM and run `make up` in the terminal.
- `api/main.py` – FastAPI entrypoint
- `api/routers/*` – API routers (registry, history, llm, etc.)
- `api/static/*` – viz pages + chat widget
- `docker-compose.yml` – API + Postgres services
- `Dockerfile` – API image
- `requirements.txt` – Python deps
- `env.sample` – example environment vars
- `Makefile` – convenience targets
- `alembic/` – migrations