Upload a log file → the backend flags suspicious lines and returns a concise summary with suggested fixes.
- Frontend: Next.js (deployed on Vercel)
- Backend: FastAPI (separate service)
- DB: Postgres (stores sessions/messages; optional for log mode)
- LLM: OpenAI (server-side only)
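The shape of the log-mode response can be illustrated with a minimal sketch. The real backend defers to the LLM when OPENAI_API_KEY is set, so the keyword heuristic below (flagging lines containing ERROR, CRITICAL, or Traceback) is only an assumption for illustration, not the actual flagging logic:

```python
# Hypothetical sketch of the log-mode response shape { flagged[], analysis }.
# The real backend may use the LLM; this keyword heuristic is an assumption.
SUSPECT_MARKERS = ("ERROR", "CRITICAL", "Traceback")

def analyze_log(text: str) -> dict:
    flagged = [
        {"line": i + 1, "text": line}
        for i, line in enumerate(text.splitlines())
        if any(marker in line for marker in SUSPECT_MARKERS)
    ]
    return {"flagged": flagged, "analysis": f"{len(flagged)} suspicious line(s)"}
```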
- Frontend install
npm install
- Backend deps
pip3 install --break-system-packages -r backend/requirements.txt
- Environment
cp .env.example .env
# set OPENAI_API_KEY in .env (optional but recommended)
- Start backend (SQLite by default)
bash backend/run.sh
# backend: http://localhost:8000
- Start frontend
export NEXT_PUBLIC_API_URL=http://localhost:8000
npm run dev
# app: http://localhost:3000
OPENAI_API_KEY=sk-... docker compose up --build
# backend: http://localhost:8000 | postgres: localhost:5432 (app/app)
Apply SQL migrations (optional; the app auto-creates SQLite tables, while Postgres uses the SQL migrations):
psql postgresql://app:app@localhost:5432/app -f db/migrations/0001_init.sql
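The app/app credentials from the compose stack correspond to the DSN passed to psql above. A small sketch of how such a DATABASE_URL decomposes, which can help when pointing a hosted backend at a managed Postgres instead:

```python
from urllib.parse import urlsplit

# Default DSN from the compose setup above.
DATABASE_URL = "postgresql://app:app@localhost:5432/app"

def dsn_parts(url: str) -> dict:
    # Split a postgresql:// DSN into its connection parameters.
    u = urlsplit(url)
    return {
        "user": u.username,
        "password": u.password,
        "host": u.hostname,
        "port": u.port,
        "dbname": u.path.lstrip("/"),
    }
```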
Frontend (Vercel):
- Framework Preset: Next.js
- Root Directory: ./
- Build Command: npm run build
- Env: NEXT_PUBLIC_API_URL=https://YOUR-BACKEND
Backend (Railway/Render/Fly):
- Entrypoint: bash backend/run.sh
- Env: DATABASE_URL, OPENAI_API_KEY, CORS_ORIGINS=https://YOUR-VERCEL
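A minimal sketch of how the backend might consume CORS_ORIGINS — assuming a comma-separated list of origins fed to FastAPI's CORSMiddleware, which is a common convention; check backend/ for the actual parsing:

```python
import os

def cors_origins(default: str = "http://localhost:3000") -> list[str]:
    # Assumption: CORS_ORIGINS is a comma-separated list of allowed origins.
    raw = os.environ.get("CORS_ORIGINS", default)
    return [origin.strip() for origin in raw.split(",") if origin.strip()]
```

The resulting list would typically be passed as allow_origins when adding CORSMiddleware to the FastAPI app.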
API:
- POST /chat/log-analyze (multipart field: file) → { flagged[], analysis }
- POST /chat/sessions → create session
- GET /chat/sessions/{id} → session with messages
- POST /chat/sessions/{id}/message → add message and get assistant reply
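The session endpoints can be exercised with a small stdlib client. The "content" and "id" field names below are assumptions — check the backend's request/response schemas:

```python
import json
import urllib.request

API = "http://localhost:8000"  # keep in sync with NEXT_PUBLIC_API_URL

def build_post(path: str, payload: dict) -> urllib.request.Request:
    # Build (but do not send) a JSON POST request against the backend.
    return urllib.request.Request(
        f"{API}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def post(path: str, payload: dict) -> dict:
    with urllib.request.urlopen(build_post(path, payload)) as resp:
        return json.loads(resp.read())

def demo() -> None:
    # Requires a running backend (see the steps above).
    session = post("/chat/sessions", {})  # "id" field name is an assumption
    print(post(f"/chat/sessions/{session['id']}/message", {"content": "hi"}))
```

Calling demo() against a running backend creates a session, posts a message, and prints the assistant's reply.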