Built with Microsoft Custom Vision & Flutter
This project leverages AI and computer vision to classify the freshness of meat (Beef) based on color features.
Key Features:
- AI-Powered Freshness Detection: Classifies meat freshness into categories (e.g., Fresh, Moderately Fresh, Spoiled) using image color characteristics.
- Cross-Platform Mobile App: Built with Flutter for both Android and iOS, offering real-time, user-friendly inspections.
- Offline Model Deployment: Uses TensorFlow Lite for on-device inference, even in low-connectivity environments.
- Easy to Train & Extend: New categories or meat types can be added easily with Microsoft Custom Vision’s intuitive interface.
Why This Project?
Manual inspection of meat freshness can be subjective, inconsistent, and prone to error. This AI solution brings:
- Consistency in meat quality assessment
- Better consumer protection
- Reduced health risks from spoiled meat
- Less food waste and more efficient quality control
Tech Stack
- Microsoft Custom Vision (Image Classification): https://www.customvision.ai/
- Flutter (Mobile UI)
- TensorFlow Lite (Offline Model Deployment)
- C++
Learn about Microsoft Custom Vision: https://learn.microsoft.com/en-us/azure/ai-services/custom-vision-service/
Prerequisites
Before you begin, ensure you have the following installed and set up:
- Flutter SDK: Install Flutter (https://docs.flutter.dev/get-started/install)
- Azure Account: A Microsoft Azure subscription is required to use Custom Vision. You can create a free account.
- IDE: An IDE like Visual Studio Code or Android Studio with Flutter plugins installed.
- Image Dataset: A collection of images to train your classification model.
Create & Train a Custom Vision Model
- Go to Custom Vision Portal: Navigate to the Custom Vision web page and sign in with your Azure account.
- Create New Project:
- Click New Project.
- Name & Description: Give your project a relevant name and description.
- Resource: Select an existing Custom Vision Training Resource or create a new one.
- Project Types: Select Classification.
- Classification Types: Choose Multiclass (single tag per image). (Or Multilabel if your use case requires multiple tags per image).
- Domains: Select a Compact domain. This is crucial for exporting to TFLite. Options include General (compact), Food (compact), etc.
- Click Create project.
- Upload & Tag Images:
- Click Add images.
- Upload images in groups based on the classification tag you want to assign. A minimum of 5 images per tag is recommended for basic training, but more diverse images (varying angles, lighting, backgrounds) lead to better results.
- Create or select tags for each group of images.
- Click Upload [number] files. Repeat for all your classes/tags.
- Train the Model:
- Once images are uploaded and tagged, click the Train button.
- Choose a training budget (Quick Training is suitable for initial tests).
- Wait for the training to complete.
Export the Model for TensorFlow Lite
- Go to Performance Tab: After training, navigate to the Performance tab in your Custom Vision project.
- Select Iteration: Choose the iteration you want to export (usually the latest one).
- Export Model: Click the Export button.
- Choose Platform: Select TensorFlow.
- Choose Flavor/Format: Select TensorFlowLite from the dropdown.
- Download: Click Export, then Download. This will download a `.zip` file.
- Extract Files: Unzip the downloaded file. Inside, you should find essential files including:
  - `model.tflite`: Your trained TensorFlow Lite model.
  - `labels.txt`: A text file listing your classification tags, one per line.
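For reference, the exported `labels.txt` simply lists one tag per line, in the order Custom Vision assigned them. For the example categories used in this project it might look like the following (your actual tag names and order may differ):

```text
Fresh
Moderately Fresh
Spoiled
```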
Integrate the Model into the Flutter App
- Replace Model Files:
  - Navigate to the `vesta/assets/model/` directory within your Flutter project structure.
  - Delete the existing `model.tflite` and `labels.txt` files.
  - Copy your newly downloaded and extracted `model.tflite` and `labels.txt` files into this `vesta/assets/model/` directory.
- Verify `pubspec.yaml`:
  - Open the `vesta/pubspec.yaml` file.
  - Confirm that the `tflite_flutter` dependency is listed (it should be present as `^0.11.0`).
  - Confirm that the `assets/model/` directory and its contents are declared under the `flutter:` -> `assets:` section (they should be listed as `assets/model/model.tflite` and `assets/model/labels.txt`). No changes are likely needed here if you just replaced the files.

  ```yaml
  dependencies:
    flutter:
      sdk: flutter
    # ... other dependencies
    tflite_flutter: ^0.11.0 # Ensure this or a compatible version is present

  flutter:
    uses-material-design: true
    assets:
      - assets/model/model.tflite # Ensure this line exists
      - assets/model/labels.txt   # Ensure this line exists
      - assets/images/
      - assets/animation/
  ```
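If you want a quick sanity check that the model and labels are actually bundled and loadable, a minimal sketch along these lines can be called from a debug button or a test. The helper name `checkBundledModel` is illustrative, not part of the existing app code, and asset-path handling can differ slightly between `tflite_flutter` versions.

```dart
import 'package:flutter/services.dart' show rootBundle;
import 'package:tflite_flutter/tflite_flutter.dart';

/// Illustrative helper (not existing project code): loads the bundled
/// Custom Vision model and its labels to confirm the assets are wired up.
Future<void> checkBundledModel() async {
  // Note: recent tflite_flutter versions expect the full asset path as
  // declared in pubspec.yaml, while older ones prepend 'assets/' themselves.
  final interpreter = await Interpreter.fromAsset('assets/model/model.tflite');

  final rawLabels = await rootBundle.loadString('assets/model/labels.txt');
  final labels = rawLabels
      .split('\n')
      .map((line) => line.trim())
      .where((line) => line.isNotEmpty)
      .toList();

  // ignore: avoid_print
  print('Model loaded, ${labels.length} labels: $labels');

  interpreter.close();
}
```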
- Check Code Adjustments (If Necessary): see the sketches after this list for a quick way to verify these values at runtime.
  - Open the `vesta/lib/utils/detector_services.dart` file.
  - Input Size: Find the `mlModelInputSize` constant within the `_DetectorServer` class. The current value is `224`. Verify that your exported Custom Vision model expects this input size (common for compact domains). If your model requires a different input size (e.g., 300x300), update this constant accordingly.

    ```dart
    class _DetectorServer {
      static const int mlModelInputSize = 224; // <-- Check/Update this value
      // ... rest of the class
    }
    ```

  - Output Shape/Number of Labels: Locate the `_runInference` function in the `_DetectorServer` class. Find the `output` map definition. The current code initializes the output buffer assuming 3 labels (`List<num>.filled(3, 0)`). Count the number of lines (labels) in your `labels.txt` file and update the `3` in `List<num>.filled(3, 0)` to match your label count.

    ```dart
    List<List<Object>> _runInference(
      List<List<List<num>>> imageMatrix,
    ) {
      final input = [imageMatrix];
      // Update the '3' here to match the number of labels in your labels.txt
      final output = {
        0: [List<num>.filled(YOUR_LABEL_COUNT, 0)], // <-- Update this number
      };
      _interpreter!.runForMultipleInputs([input], output);
      return output.values.toList();
    }
    ```

  - Preprocessing: The current code resizes the image. Custom Vision models usually handle necessary normalization internally, but review the `analyseImage` function if you suspect specific preprocessing (beyond resizing) is needed for your model.
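Rather than guessing the input size and label count, you can ask the interpreter itself. The helpers below are illustrative sketches, not existing project code; they assume an `Interpreter` from `tflite_flutter` (loaded as shown earlier) and a `labels` list read from `labels.txt`.

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

/// Debug sketch: print what the exported model actually expects, so that
/// mlModelInputSize and the output buffer length can be set correctly.
void printModelShapes(Interpreter interpreter) {
  final inputShape = interpreter.getInputTensor(0).shape;   // e.g. [1, 224, 224, 3]
  final outputShape = interpreter.getOutputTensor(0).shape; // e.g. [1, 3]
  // The middle dimensions of the input shape should match mlModelInputSize,
  // and the last output dimension should match your labels.txt line count.
  print('input shape:  $inputShape');
  print('output shape: $outputShape');
}

/// Simple argmax: maps one row of raw scores from the inference output to
/// the corresponding tag from labels.txt (same order, one score per label).
String bestLabel(List<num> scores, List<String> labels) {
  var bestIndex = 0;
  for (var i = 1; i < scores.length; i++) {
    if (scores[i] > scores[bestIndex]) bestIndex = i;
  }
  return labels[bestIndex];
}
```

Feeding the printed output dimension back into the `List<num>.filled(...)` call above keeps the output buffer in sync with your label count.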
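If you do find you need to adjust preprocessing, the resize-and-matrix step looks roughly like the following. This is a hypothetical standalone helper (using the `image` package, assumed at ^4.x), not the project's actual `analyseImage` implementation.

```dart
import 'dart:io';

import 'package:image/image.dart' as img;

/// Hypothetical preprocessing helper: resizes a photo and builds the
/// [size][size][3] RGB matrix that the _runInference signature above expects.
List<List<List<num>>> imageToMatrix(File file, int inputSize) {
  final decoded = img.decodeImage(file.readAsBytesSync())!;
  final resized = img.copyResize(decoded, width: inputSize, height: inputSize);

  return List.generate(inputSize, (y) {
    return List.generate(inputSize, (x) {
      final pixel = resized.getPixel(x, y);
      // Raw 0-255 RGB values; add normalization here only if you find
      // that your exported model requires it.
      return <num>[pixel.r, pixel.g, pixel.b];
    });
  });
}
```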
Run the App
- Get Dependencies: Open a terminal in the `vesta` project root directory and run:

  ```bash
  flutter pub get
  ```
- Run: Connect a device or start an emulator/simulator and run:

  ```bash
  flutter run
  ```
Demo Day Presentation - https://stdntpartners-my.sharepoint.com/:p:/g/personal/shakthi_lakmal_studentambassadors_com/EY1yjS2mWU9PlzF-FTGodqgByjX6J2ZFZC9PqSazMR8ykg?e=7uv007&nav=eyJzSWQiOjI3MSwiY0lkIjoyNDczNDgyNTI4fQ