AnyVolunteer is an Android application that provides a privacy-protecting AI assistant with advanced face obfuscation capabilities. The application uses large language models (LLMs) to process user queries while ensuring sensitive personal information is protected through local desensitization and intelligent face detection/obfuscation algorithms.
- 🔒 Privacy Protection: Automatically detects and desensitizes sensitive information such as phone numbers, ID numbers, and addresses
- 👤 Face Obfuscation: Advanced face detection and multiple obfuscation techniques (pixelation, solid mask, translucent, color jitter, face replacement)
- 🤖 Multi-Model Support: Supports multiple LLM models including deepseek-r1, doubao-1.5-lite-32k, doubao-1.5-pro-32k, and doubao-vision-pro-32k
- 👤 User Authentication: Secure login and registration system with SHA-256 password hashing
- 📝 Chat History: Comprehensive chat and image processing history management
- 🖼️ Image Processing: Upload images for AI analysis with automatic face protection
- 💾 Local Data Storage: SQLite database for secure local data management
- 🎯 Three-Part Display: Shows desensitized content, LLM's reply, and restored information
- Computer Vision: OpenCV integration for face detection and image processing
- Security: SHA-256 password hashing and local data desensitization
- Database: SQLite with structured tables for users, chat history, and image tasks
- API Integration: Volcengine Ark Runtime for LLM services
- Multi-threading: Asynchronous processing for image operations and API calls
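The asynchronous processing mentioned above can be sketched with a plain `ExecutorService` (a minimal illustration; the app's actual class and method names may differ, and on Android results would be posted back to the main thread):

```java
import java.util.concurrent.*;

public class AsyncSketch {
    // Run a simulated API call and a simulated image task off the calling thread
    static String runBoth() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> apiReply = pool.submit(() -> "LLM reply");
        Future<String> imageResult = pool.submit(() -> "processed image path");
        // get() blocks until each task finishes; on Android the results would be
        // delivered back to the UI thread rather than joined like this
        String combined = apiReply.get() + " | " + imageResult.get();
        pool.shutdown();
        return combined;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runBoth());  // LLM reply | processed image path
    }
}
```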
- Android Studio Hedgehog (2023.1.1) or newer
- JDK 17 or newer
- Gradle 8.0 or newer
- Android SDK 34 (minimum SDK 24)
- Android 7.0 (API level 24) or higher
- At least 100MB of free storage space
- Internet connection for LLM API calls
- Camera access for image upload functionality
- Install Android Studio from developer.android.com
- Make sure you have Git installed to clone the repository
```shell
git clone <repository-url>
cd AnyVolunteer_GroupProject
```

The application uses the Volcengine Ark Runtime service for LLM API calls. You need to configure your API key:
- Open the `app/src/main/java/com/hkucs/groupproject/config/Config.java` file
- Replace the placeholder API key with your actual key from Volcengine:

```java
public static final String API_KEY = "your_api_key_here";
```

- Open Android Studio
- Select "Open an existing Android Studio project"
- Navigate to the cloned repository folder and select it
- Wait for Gradle sync to complete
- Build the project by selecting Build > Make Project or pressing Ctrl+F9 (Windows/Linux) or Cmd+F9 (Mac)
You can also build the project using the command line:
```shell
# On Windows
.\gradlew assembleDebug

# On macOS/Linux
./gradlew assembleDebug
```

The APK file will be generated at `app/build/outputs/apk/debug/app-debug.apk`.
- In Android Studio, click on Run > Run 'app' or press Shift+F10 (Windows/Linux) or Control+R (Mac)
- Select an existing emulator or create a new one
- Wait for the application to install and launch
- Enable USB debugging on your Android device:
- Go to Settings > About phone
- Tap Build number seven times to enable developer options
- Go back to Settings > System > Developer options and enable USB debugging
- Connect your device to your computer via USB
- In Android Studio, click on Run > Run 'app' and select your device
- Allow the installation when prompted on your device
- Transfer the generated APK file to your Android device
- On your device, navigate to the APK file using a file manager
- Tap the APK file and follow the installation prompts
- You may need to enable installation from unknown sources in your device settings
- When you first launch the app, you'll see the welcome screen
- If you don't have an account, tap on "Register" to create one
- Enter your username and password, then tap "Register"
- After registration, you'll be redirected to the login screen
- Enter your credentials and tap "Login"
- After logging in, you'll see the main chat interface
- Select an LLM model from the dropdown menu at the bottom
- Type your message in the input field and tap the send button
- The app will process your message, desensitize any personal information, and send it to the LLM
- You'll see three parts in the response:
- The desensitized version of your message
- The LLM's reply to your query
- The information with sensitive data restored
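The desensitize-then-restore flow above can be illustrated with a minimal regex-based sketch. The phone-number pattern and `[PHONE_n]` placeholder format here are illustrative assumptions, not the app's actual rules:

```java
import java.util.*;
import java.util.regex.*;

public class DesensitizeSketch {
    // Replace each 11-digit phone number with a placeholder, remembering the original
    static String desensitize(String text, Map<String, String> vault) {
        Matcher m = Pattern.compile("\\b1\\d{10}\\b").matcher(text);
        StringBuffer sb = new StringBuffer();
        int i = 0;
        while (m.find()) {
            String key = "[PHONE_" + i++ + "]";
            vault.put(key, m.group());
            m.appendReplacement(sb, Matcher.quoteReplacement(key));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    // Swap the placeholders back once the LLM reply returns
    static String restore(String text, Map<String, String> vault) {
        for (Map.Entry<String, String> e : vault.entrySet())
            text = text.replace(e.getKey(), e.getValue());
        return text;
    }

    public static void main(String[] args) {
        Map<String, String> vault = new HashMap<>();
        String masked = desensitize("Call me at 13812345678 tomorrow.", vault);
        System.out.println(masked);                  // Call me at [PHONE_0] tomorrow.
        System.out.println(restore(masked, vault));  // Call me at 13812345678 tomorrow.
    }
}
```

Only the placeholder version ever leaves the device; the vault stays local, which is why the restored third part can be shown without the LLM ever seeing the real number.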
- Select the "doubao-vision-pro-32k" model from the dropdown
- Tap the camera button to upload an image
- The app will automatically detect faces in the image
- Enter your prompt describing what you want to do with the image
- The system will choose appropriate obfuscation techniques based on your prompt
- View the processed image with faces protected according to privacy requirements
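Pixelation, one of the obfuscation techniques above, can be sketched on a plain pixel grid. The app itself uses OpenCV; this pure-Java version only shows the idea on a single-channel array:

```java
public class PixelateSketch {
    // Average each block x block cell inside the region (x, y, w, h) and write
    // the average back, destroying fine detail while keeping coarse shape
    static void pixelate(int[][] img, int x, int y, int w, int h, int block) {
        for (int by = y; by < y + h; by += block) {
            for (int bx = x; bx < x + w; bx += block) {
                int sum = 0, n = 0;
                for (int yy = by; yy < Math.min(by + block, y + h); yy++)
                    for (int xx = bx; xx < Math.min(bx + block, x + w); xx++) { sum += img[yy][xx]; n++; }
                int avg = sum / n;
                for (int yy = by; yy < Math.min(by + block, y + h); yy++)
                    for (int xx = bx; xx < Math.min(bx + block, x + w); xx++) img[yy][xx] = avg;
            }
        }
    }

    public static void main(String[] args) {
        int[][] img = { {10, 20, 30, 40}, {50, 60, 70, 80},
                        {90, 100, 110, 120}, {130, 140, 150, 160} };
        pixelate(img, 0, 0, 4, 4, 2);  // pixelate the whole 4x4 "face" region with 2x2 blocks
        System.out.println(img[0][0] + " " + img[0][1]);  // 35 35
    }
}
```

In the real pipeline the region would come from the face detector's bounding box rather than covering the whole image.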
- Tap the history button (top left corner) to view your chat history
- Select any chat from the list to view the full conversation
- For image tasks, you can view both original and processed images
- Tap the user icon (top right corner) to access your profile
- You can see your username, remaining tokens, and image credits
- Tap "Back to Main" to return to the chat interface
- Tap "Logout" to end your session
- User Management: Registration, login, session management with SHA-256 password hashing
- Face Detection: OpenCV-based face detection using Haar cascade classifiers
- Image Processing: Multiple obfuscation techniques (pixelation, solid mask, translucent, color jitter, face replacement)
- Data Storage: SQLite database with structured tables for users, chat history, and image tasks
- API Integration: Volcengine Ark Runtime for LLM services
- Privacy Protection: Local data desensitization with regex pattern matching
- users: User accounts with hashed passwords, token quotas, and image credits
- chat_history: Text conversation history with desensitization tracking
- image_tasks: Image processing tasks with original and processed image paths
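The three tables above could look roughly like the DDL below. The column names here are illustrative assumptions based on the descriptions, not the app's actual schema:

```java
public class SchemaSketch {
    // Illustrative DDL matching the table descriptions above; the actual
    // columns in the app's database helper may differ
    static final String CREATE_USERS =
        "CREATE TABLE users (" +
        " id INTEGER PRIMARY KEY AUTOINCREMENT," +
        " username TEXT UNIQUE NOT NULL," +
        " password_hash TEXT NOT NULL," +      // SHA-256 hex digest, never plaintext
        " token_quota INTEGER NOT NULL," +
        " image_credits INTEGER NOT NULL)";

    static final String CREATE_CHAT_HISTORY =
        "CREATE TABLE chat_history (" +
        " id INTEGER PRIMARY KEY AUTOINCREMENT," +
        " user_id INTEGER NOT NULL REFERENCES users(id)," +
        " message TEXT NOT NULL," +
        " desensitized INTEGER NOT NULL," +    // 1 if placeholders were substituted
        " created_at TEXT NOT NULL)";

    static final String CREATE_IMAGE_TASKS =
        "CREATE TABLE image_tasks (" +
        " id INTEGER PRIMARY KEY AUTOINCREMENT," +
        " user_id INTEGER NOT NULL REFERENCES users(id)," +
        " original_path TEXT NOT NULL," +
        " processed_path TEXT)";              // NULL until processing finishes

    public static void main(String[] args) {
        System.out.println(CREATE_USERS);
    }
}
```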
- SHA-256 password hashing for secure authentication
- Local data desensitization to protect sensitive information
- Face obfuscation to protect privacy in images
- SQLite database with proper access controls
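The SHA-256 hashing mentioned above can be done with the standard `java.security.MessageDigest` API; a minimal sketch (method name is illustrative):

```java
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

public class HashSketch {
    // Hash a password with SHA-256 and return the lowercase hex digest
    static String sha256(String password) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha256("password"));
        // → 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
    }
}
```

Note that plain SHA-256 is fast by design; production password storage usually adds a per-user salt and a slow KDF on top of this.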
- App crashes on startup
  - Make sure you have the correct Android SDK version installed
  - Check if your device meets the minimum requirements
  - Ensure OpenCV is properly initialized
- Cannot connect to LLM service
  - Verify your internet connection
  - Check if your API key is correctly configured
  - Ensure the LLM service is available
- Login issues
  - If you forgot your password, you'll need to reinstall the app (password recovery is not implemented)
  - Make sure you're entering the correct username and password
- Image processing fails
  - Check if you have sufficient image credits
  - Ensure the image contains detectable faces
  - Verify camera permissions are granted
If you encounter issues, you can enable logging:
- Open `app/src/main/java/com/hkucs/groupproject/config/Config.java`
- Set the debug flag to true:

```java
public static final boolean DEBUG = true;
```

- Check the Android logcat for detailed error messages
```
app/src/main/java/com/hkucs/groupproject/
├── activity/   # UI activities and screens
├── adapter/    # RecyclerView adapters
├── config/     # Configuration and constants
├── database/   # Database management classes
├── handler/    # Business logic handlers
├── model/      # LLM model implementations
├── response/   # API response classes
├── utils/      # Utility classes (ImageProcessor)
└── util/       # Helper utilities
```
```
app/src/main/res/
├── layout/     # XML layout files
├── drawable/   # Graphics and drawable resources
└── values/     # String, color, and dimension resources
```
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenCV for computer vision capabilities
- Volcengine Ark Runtime for LLM services
- Android development community for best practices