SysDraw is a small full‑stack TypeScript app that:
- Defines a shared `Graph` schema (`graphSchema.ts`)
- Renders graphs with ReactFlow on the frontend
- Exposes backend APIs that:
  - Return a static graph (`GET /graph`)
  - Generate a graph from a prompt via an LLM (`POST /generate-graph`)
Typing a prompt in the frontend and clicking Generate Diagram will call the backend AI endpoint, validate the returned graph, map it to ReactFlow nodes/edges, and render it.
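For reference, the shared graph shape looks roughly like this (a minimal sketch inferred from the JSON returned by `/generate-graph`; the actual `graphSchema.ts` may include additional fields):

```ts
// graphSchema.ts (sketch) – shared between frontend and backend
export interface GraphNode {
  id: string;    // unique node identifier
  label: string; // text rendered inside the node
}

export interface GraphEdge {
  source: string; // id of the node the edge starts from
  target: string; // id of the node the edge points to
}

export interface Graph {
  nodes: GraphNode[];
  edges: GraphEdge[];
}
```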
- `graphSchema.ts` – shared `GraphNode`, `GraphEdge`, `Graph` interfaces
- `frontend/graphMapper.ts` – `mapGraphToReactFlow(graph: Graph)` (sketched below)
- `frontend/GraphCanvas.tsx` – thin wrapper around `ReactFlow`
- `frontend/App.tsx` – main UI: prompt box, button, calls backend, renders graph
- `backend/server.ts` – Express server, `/graph` + `/generate-graph` endpoints
- `backend/tsconfig.json` – backend TS config
- `sampleGraph.json` – example graph (not used by default now)
- `vite.config.ts` – Vite + React config
- `tsconfig.json` / `tsconfig.node.json` – root TS configs
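The `frontend/graphMapper.ts` module listed above turns a `Graph` into ReactFlow nodes and edges. A minimal sketch, assuming a naive placeholder layout and illustrative import paths (the real mapper may lay nodes out differently):

```ts
// frontend/graphMapper.ts (sketch) – converts the shared Graph into ReactFlow data
import type { Node, Edge } from "reactflow";
import type { Graph } from "../graphSchema";

export function mapGraphToReactFlow(graph: Graph): { nodes: Node[]; edges: Edge[] } {
  const nodes: Node[] = graph.nodes.map((n, i) => ({
    id: n.id,
    data: { label: n.label },
    // Naive vertical layout; purely illustrative.
    position: { x: 0, y: i * 100 },
  }));

  const edges: Edge[] = graph.edges.map((e) => ({
    id: `${e.source}-${e.target}`,
    source: e.source,
    target: e.target,
  }));

  return { nodes, edges };
}
```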
- Node.js 18+ (20.x recommended)
- npm
Install dependencies:

```bash
npm install
```

Create `.env` in the project root:
```
OPENAI_API_KEY=sk-...your-key-here...
```

Make sure `.env` is not committed (it is already listed in `.gitignore`).
When starting the backend, load the env vars into the shell:
```bash
set -a; source .env; set +a
npm run dev:backend
```

- Default port: `5000`
- Endpoints:
  - `GET /graph` – returns a static graph conforming to `Graph`
  - `POST /generate-graph` – request body `{ "prompt": "..." }`; calls the LLM, validates the JSON against `Graph`, and returns:
```json
{
  "nodes": [{ "id": "string", "label": "string" }],
  "edges": [{ "source": "string", "target": "string" }]
}
```
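For orientation, `backend/server.ts` might implement `POST /generate-graph` along these lines. This is a sketch only: the model name, prompt wording, and the `isGraph` helper are illustrative assumptions, not the project's exact code.

```ts
// backend/server.ts (sketch) – model name, prompt wording and isGraph are illustrative
import express from "express";
import OpenAI from "openai";
import type { Graph } from "../graphSchema";

const app = express();
app.use(express.json()); // (CORS middleware omitted in this sketch)

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Minimal structural check that the parsed JSON matches the shared Graph shape.
function isGraph(value: unknown): value is Graph {
  const g = value as Graph;
  return (
    Array.isArray(g?.nodes) &&
    Array.isArray(g?.edges) &&
    g.nodes.every((n) => typeof n?.id === "string" && typeof n?.label === "string") &&
    g.edges.every((e) => typeof e?.source === "string" && typeof e?.target === "string")
  );
}

app.post("/generate-graph", async (req, res) => {
  const { prompt } = req.body as { prompt?: string };
  if (!prompt) return res.status(400).json({ error: "Missing prompt" });

  try {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // illustrative model choice
      messages: [
        {
          role: "system",
          content:
            "Respond with JSON only: { nodes: [{ id, label }], edges: [{ source, target }] }.",
        },
        { role: "user", content: prompt },
      ],
    });

    const parsed = JSON.parse(completion.choices[0].message.content ?? "{}");
    if (!isGraph(parsed)) {
      return res.status(502).json({ error: "LLM returned an invalid graph" });
    }
    res.json(parsed);
  } catch {
    res.status(500).json({ error: "Failed to generate graph" });
  }
});

app.listen(5000);
```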
Static graph:

```bash
curl http://localhost:5000/graph | jq
```

AI‑generated graph:
```bash
curl -s -X POST http://localhost:5000/generate-graph \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Simple web app with client, API server and database"}' | jq
```

If you see `EADDRINUSE` on port 5000:
```bash
lsof -ti:5000 | xargs kill
npm run dev:backend
```

To start the frontend dev server:

```bash
npm run dev
```

Then open:
http://localhost:5173
- Prompt input (textarea) for the system description
- Generate Diagram button (see the sketch after this list):
  - Sends `POST http://localhost:5000/generate-graph` with `{ prompt }`
  - While waiting: button shows “Generating…” and `loading` is true
  - On success: response is typed as `Graph`, passed to `mapGraphToReactFlow`, and rendered via `GraphCanvas`
  - On error: shows a simple error message above the canvas
- Initial state:
  - Canvas is empty
  - A hint text explains to enter a prompt and click Generate Diagram
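The Generate Diagram flow can be sketched like this. The hook name `useGenerateDiagram`, `handleGenerate`, and the state names other than `loading` are illustrative, as are the import paths:

```tsx
// App.tsx (sketch) – hook/handler names and import paths are illustrative
import { useState } from "react";
import type { Node, Edge } from "reactflow";
import type { Graph } from "../graphSchema";
import { mapGraphToReactFlow } from "./graphMapper";

export function useGenerateDiagram() {
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [nodes, setNodes] = useState<Node[]>([]);
  const [edges, setEdges] = useState<Edge[]>([]);

  async function handleGenerate(prompt: string) {
    setLoading(true); // button label switches to "Generating…" while this is true
    setError(null);
    try {
      const res = await fetch("http://localhost:5000/generate-graph", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
      });
      if (!res.ok) throw new Error(`Request failed with status ${res.status}`);

      const graph: Graph = await res.json();
      const mapped = mapGraphToReactFlow(graph);
      setNodes(mapped.nodes);
      setEdges(mapped.edges);
    } catch (err) {
      setError(err instanceof Error ? err.message : "Failed to generate diagram");
    } finally {
      setLoading(false);
    }
  }

  return { loading, error, nodes, edges, handleGenerate };
}
```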
All AI logic (prompting, calling the LLM, JSON parsing, schema validation) lives only in the backend; the frontend just sends a prompt and renders whatever valid Graph it receives.