Eye Fleet - Fleet Management System

A comprehensive fleet management system built with a Next.js frontend and a Django backend.

Project Structure

eye-fleet/
├── frontend/          # Next.js frontend
│   ├── src/
│   │   └── app/      # Next.js pages and components
│   └── package.json
└── backend/          # Django backend
    └── eyefleet/
        ├── manage.py
        └── eyefleet/  # Django settings

Setup Instructions

Backend Setup (Django)

  1. Navigate to the backend directory:

    cd backend/eyefleet
  2. Install Python dependencies:

    pip3 install django djangorestframework django-cors-headers
  3. Apply database migrations:

    python3 manage.py makemigrations
    python3 manage.py migrate
  4. Start the Django development server:

    python3 manage.py runserver

    The backend will be available at http://localhost:8000, with API docs at http://localhost:8000/docs/
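
Since django-cors-headers is installed above, the backend presumably whitelists the Next.js origin. Here is a minimal sketch of the relevant eyefleet/settings.py entries, assuming the library's standard configuration; the repository's actual settings may differ:

# eyefleet/eyefleet/settings.py (excerpt) — illustrative sketch only
INSTALLED_APPS = [
    # ... Django defaults ...
    "rest_framework",
    "corsheaders",
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # should come before CommonMiddleware
    "django.middleware.common.CommonMiddleware",
    # ... remaining default middleware ...
]

# Allow the Next.js dev server to call the API.
CORS_ALLOWED_ORIGINS = ["http://localhost:3000"]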

Frontend Setup (Next.js)

  1. Navigate to the frontend directory and install Node.js dependencies:

    cd frontend
    npm install
  2. Start the Next.js development server:

    npm run dev

    The frontend will be available at http://localhost:3000

Ollama API Setup

The fleet-vision page includes an AI assistant powered by the Gemma 2 2B model through Ollama. Follow these steps to set it up:

1. Install Ollama

First, install Ollama on your system:

curl -fsSL https://ollama.com/install.sh | sh

2. Pull the Gemma 2 2B Model

After installing Ollama, pull the Gemma 2 2B model:

ollama pull gemma2:2b

3. Start Ollama Server

Start the Ollama server in a terminal:

ollama serve

4. Set Up Python Flask Server

The Flask server acts as a bridge between the frontend and Ollama. Set it up as follows:

  1. Install Python dependencies:

    pip install -r requirements.txt
  2. Start the Flask server:

    # From the project root directory
    python3 ollama_api/generate.py

The Flask server will run on http://localhost:5328 and provide the following endpoint:

  • POST /ollama_api/generate: Handles chat messages and communicates with Ollama
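
For reference, here is a minimal sketch of what ollama_api/generate.py could look like. This is an illustration rather than the repository's actual implementation: it assumes the flask-cors package for CORS, Ollama's /api/chat endpoint, and a simple {"response": ...} reply shape (an assumption — match whatever the frontend expects):

from flask import Flask, jsonify, request
from flask_cors import CORS  # assumed dependency; enables CORS for the frontend
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "gemma2:2b"

app = Flask(__name__)
CORS(app)  # without this, the browser blocks requests from localhost:3000

@app.route("/ollama_api/generate", methods=["POST"])
def generate():
    # Forward the chat history from the frontend to Ollama unchanged.
    messages = request.get_json().get("messages", [])
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # Ollama's /api/chat (non-streaming) returns {"message": {"role": ..., "content": ...}}
    return jsonify({"response": resp.json()["message"]["content"]})

if __name__ == "__main__":
    app.run(port=5328)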

Verification

To verify everything is working:

  1. Ensure Ollama server is running (default: http://localhost:11434)
  2. Ensure Flask server is running (http://localhost:5328)
  3. Navigate to the fleet-vision page in your Next.js app
  4. Try sending a message in the chat interface
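
To automate the first two checks, here is a small Python sketch using the requests package (the Flask response shape is not specified in this README, so the second print just dumps whatever comes back):

import requests

# 1. Ollama should respond with its installed models.
tags = requests.get("http://localhost:11434/api/tags", timeout=5)
print("Ollama:", tags.status_code, [m["name"] for m in tags.json().get("models", [])])

# 2. The Flask bridge should answer a trivial chat message.
chat = requests.post(
    "http://localhost:5328/ollama_api/generate",
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=120,
)
print("Flask:", chat.status_code, chat.json())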

Troubleshooting

If you encounter issues:

  1. Check if the Ollama server is running:

    curl http://localhost:11434/api/tags
  2. Check if the Flask server is running:

    curl http://localhost:5328/ollama_api/generate -X POST -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Hello"}]}'
  3. Common issues:
    • "Network Error": Make sure both the Ollama and Flask servers are running
    • "Model not found": Run ollama pull gemma2:2b again
    • CORS errors: Ensure the Flask server is running with CORS enabled

Features

  • Add new vehicles with detailed information
  • View fleet overview with statistics
  • Interactive map showing vehicle locations
  • Comprehensive vehicle list with status indicators
  • Dark mode support
  • Responsive design for all screen sizes

API Endpoints

  • Add an asset: POST http://localhost:8000/api/maintenance/assets/
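
For example, creating an asset from Python with the requests package — the payload fields below are hypothetical; check http://localhost:8000/docs/ for the actual Asset schema:

import requests

# Hypothetical fields — the real Asset model defines the accepted keys.
payload = {"name": "Van 12", "registration": "AB12 CDE", "status": "active"}

resp = requests.post("http://localhost:8000/api/maintenance/assets/", json=payload)
print(resp.status_code, resp.json())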

Development

To run both servers for development:

  1. Start the Django backend:

    cd backend/eyefleet
    python3 manage.py runserver
  2. In a new terminal, start the Next.js frontend:

    cd frontend
    npm run dev
