A comprehensive fleet management system built with a Next.js frontend and a Django backend.
```
eye-fleet/
├── frontend/            # Next.js frontend
│   ├── src/
│   │   └── app/         # Next.js pages and components
│   └── package.json
└── backend/             # Django backend
    └── eyefleet/
        ├── manage.py
        └── eyefleet/    # Django settings
```
Set up the Django backend:

1. Navigate to the backend directory:

   ```bash
   cd backend/eyefleet
   ```

2. Install Python dependencies (a sketch of the CORS-related settings follows these steps):

   ```bash
   pip3 install django djangorestframework django-cors-headers
   ```

3. Apply database migrations:

   ```bash
   python3 manage.py makemigrations
   python3 manage.py migrate
   ```

4. Start the Django development server:

   ```bash
   python3 manage.py runserver
   ```

The backend will be available at http://localhost:8000/docs/.
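Because django-cors-headers is installed, the Next.js dev server's origin must be allowed in the Django settings. Below is a minimal sketch, assuming the frontend runs at http://localhost:3000; the actual `backend/eyefleet/eyefleet/settings.py` in this repo may differ:

```python
# backend/eyefleet/eyefleet/settings.py (excerpt) -- a sketch, not the
# repo's actual settings; the app and middleware names are the standard ones.

INSTALLED_APPS = [
    # ... Django defaults ...
    "rest_framework",  # Django REST Framework
    "corsheaders",     # django-cors-headers
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # must sit above CommonMiddleware
    "django.middleware.common.CommonMiddleware",
    # ... remaining default middleware ...
]

# Allow the Next.js dev server to call the API
CORS_ALLOWED_ORIGINS = [
    "http://localhost:3000",
]
```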
Set up the Next.js frontend:

1. From the project root directory, install Node.js dependencies:

   ```bash
   npm install
   ```

2. Start the Next.js development server:

   ```bash
   npm run dev
   ```

The frontend will be available at http://localhost:3000.
The fleet-vision page includes an AI assistant powered by the Gemma 2 2B model through Ollama. Follow these steps to set it up:
1. Install Ollama on your system:

   ```bash
   curl -fsSL https://ollama.com/install.sh | sh
   ```

2. After installing Ollama, pull the Gemma 2 2B model:

   ```bash
   ollama pull gemma2:2b
   ```

3. Start the Ollama server in a terminal:

   ```bash
   ollama serve
   ```

The Flask server acts as a bridge between the frontend and Ollama. Set it up as follows:

1. Install Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Start the Flask server:

   ```bash
   # From the project root directory
   python3 ollama_api/generate.py
   ```

The Flask server will run on http://localhost:5328 and provide the following endpoint:

- `POST /ollama_api/generate`: handles chat messages and communicates with Ollama
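For reference, here is a minimal sketch of what such a bridge can look like; the actual `ollama_api/generate.py` in this repo may differ. It assumes Ollama's standard chat endpoint (`POST http://localhost:11434/api/chat`) and the `gemma2:2b` model pulled above:

```python
# Minimal Flask bridge to Ollama -- a sketch, not the repo's actual code.
# Assumes flask, flask-cors, and requests are installed and that
# `ollama serve` is listening on its default port 11434.
import requests
from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # let the Next.js frontend (http://localhost:3000) call us

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "gemma2:2b"

@app.route("/ollama_api/generate", methods=["POST"])
def generate():
    # Expect {"messages": [{"role": "user", "content": "..."}, ...]}
    messages = request.get_json(force=True).get("messages", [])
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # Ollama's non-streaming reply carries the assistant turn in "message"
    return jsonify(resp.json()["message"])

if __name__ == "__main__":
    app.run(port=5328)
```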
To verify everything is working:
- Ensure the Ollama server is running (default: http://localhost:11434)
- Ensure the Flask server is running (http://localhost:5328)
- Navigate to the fleet-vision page in your Next.js app
- Try sending a message in the chat interface
If you encounter issues:
- Check if the Ollama server is running:

  ```bash
  curl http://localhost:11434/api/tags
  ```

- Check if the Flask server is running:

  ```bash
  curl http://localhost:5328/ollama_api/generate -X POST \
    -H "Content-Type: application/json" \
    -d '{"messages":[{"role":"user","content":"Hello"}]}'
  ```

- Common issues:
  - "Network Error": make sure both the Ollama and Flask servers are running
  - "Model not found": run `ollama pull gemma2:2b` again
  - CORS errors: ensure the Flask server is running with CORS enabled
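These checks can also be scripted. The sketch below is based on the URLs in this README and is not part of the repo:

```python
# Quick sanity check for the Ollama and Flask servers -- illustrative only.
import requests

def check_ollama():
    # /api/tags lists the locally available models
    tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
    models = [m["name"] for m in tags.get("models", [])]
    print("Ollama is up; models:", models)
    if not any(name.startswith("gemma2:2b") for name in models):
        print("gemma2:2b not found -- run: ollama pull gemma2:2b")

def check_flask():
    resp = requests.post(
        "http://localhost:5328/ollama_api/generate",
        json={"messages": [{"role": "user", "content": "Hello"}]},
        timeout=120,
    )
    print("Flask bridge responded:", resp.status_code)

if __name__ == "__main__":
    check_ollama()
    check_flask()
```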
Key features:

- Add new vehicles with detailed information
- View fleet overview with statistics
- Interactive map showing vehicle locations
- Comprehensive vehicle list with status indicators
- Dark mode support
- Responsive design for all screen sizes
To add an asset, use the API endpoint: http://localhost:8000/api/maintenance/assets/ (see the example below).
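As an illustration, the endpoint can be called with Python's requests; the asset fields below are assumptions, so check the endpoint's browsable API for the actual schema:

```python
# Add an asset via the maintenance API -- the payload fields are
# hypothetical; inspect http://localhost:8000/api/maintenance/assets/
# (DRF's browsable API) for the real schema.
import requests

asset = {
    "name": "Truck 42",   # hypothetical field
    "status": "active",   # hypothetical field
}
resp = requests.post(
    "http://localhost:8000/api/maintenance/assets/",
    json=asset,
    timeout=10,
)
print(resp.status_code, resp.json())
```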
To run both servers for development:
1. Start the Django backend:

   ```bash
   cd backend/eyefleet
   python3 manage.py runserver
   ```

2. In a new terminal, start the Next.js frontend:

   ```bash
   npm run dev
   ```