Companies like OpenAI have built "Super AI" that threatens human independence. We crave individuality: AI that amplifies you, not erases you.
We're challenging that with "Second Me": an open-source prototype where you craft your own AI self, a new AI species that preserves you, delivers your context, and defends your interests.
It's locally trained and hosted (your data, your control) yet globally connected, scaling your intelligence across an AI network. Beyond that, it's your AI identity interface: a bold standard that links your AI to the world, sparks collaboration among AI selves, and builds tomorrow's truly native AI apps.
Join us. Tech enthusiasts, AI pros, domain experts: Second Me is your launchpad to extend your mind into the digital horizon.
Train Your AI Self with AI-Native Memory (Paper)
Start training your Second Me today with your own memories! Using Hierarchical Memory Modeling (HMM) and the Me-Alignment Algorithm, your AI self captures your identity, understands your context, and reflects you authentically.
Launch your AI self from your laptop onto our decentralized networkβanyone or any app can connect with your permission, sharing your context as your digital identity.
Roleplay: Your AI self switches personas to represent you in different scenarios.
AI Space: Collaborate with other Second Mes to spark ideas or solve problems.
Unlike traditional centralized AI systems, Second Me ensures that your information and intelligence remain local and completely private.
Star the repository to receive all release notifications from GitHub without delay!
Mac setup details
- Python 3.12 or higher
- Xcode Command Line Tools

If you haven't installed Xcode Command Line Tools yet, you can install them by running:

```bash
xcode-select --install
```

After installation, you may need to accept the license agreement:

```bash
sudo xcodebuild -license accept
```
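Before continuing, you can sanity-check the prerequisites from a shell. A minimal sketch (the 3.12 threshold mirrors the requirement above; the `xcode-select -p` check only applies on macOS):

```bash
# Check the Python version against the 3.12 requirement
# (falls back to "0.0" if python3 is missing)
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])' 2>/dev/null || echo "0.0")
major=${ver%%.*}
minor=${ver#*.}
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 12 ]; }; then
  echo "Python $ver OK"
else
  echo "Python $ver too old; need 3.12+"
fi

# Check for Xcode Command Line Tools (macOS only; prints "missing" elsewhere)
if xcode-select -p >/dev/null 2>&1; then
  echo "Xcode Command Line Tools OK"
else
  echo "Xcode Command Line Tools missing"
fi
```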
Clone the repository

```bash
git clone git@github.com:Mindverse/Second-Me.git
cd Second-Me
```
Set up the environment

Choose one of the following options:

Option A: For users with an existing conda environment

If you already have conda installed:
- Create a new environment from our environment file:

```bash
conda env create -f environment.yml  # creates an environment named 'second-me'
conda activate second-me
```

- Set the custom conda mode in `.env`:

```
CUSTOM_CONDA_MODE=true
```

- Run setup:

```bash
make setup
```
Option B: For new users
If you're new or want a fresh environment:
```bash
make setup
```
This command will automatically:
- Install all required system dependencies (including conda if not present)
- Create a new Python environment named 'second-me'
- Build llama.cpp
- Set up frontend environment
Start the service

```bash
make start
```
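Once `make start` returns, you can poll the frontend until it responds. A small sketch (port 3000 matches the access instructions later in this document; `curl` is assumed to be installed; adjust the attempt count and interval to taste):

```bash
# Poll the frontend until it answers, or give up after 5 attempts
up=no
for i in 1 2 3 4 5; do
  if curl -sf http://localhost:3000 >/dev/null 2>&1; then
    up=yes
    break
  fi
  sleep 1
done
echo "service up: $up"
```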
Docker setup details
Docker provides a consistent environment across different operating systems.
- Docker and Docker Compose installed on your system
Important: You must install both Docker and Docker Compose before proceeding. If you haven't installed them yet:
- For Docker installation: Get Docker
- For Docker Compose installation: Install Docker Compose
- Clone the repository

```bash
git clone git@github.com:Mindverse/Second-Me.git
cd Second-Me
```

- Build the Docker images

```bash
make docker-build
```

- Start the containers

```bash
make docker-up
```

- Stop the containers when you're done

```bash
make docker-down
```

- Restart all services

```bash
make docker-restart-all
```

- Rebuild and restart only the backend

```bash
make docker-restart-backend
```

- Rebuild and restart only the frontend

```bash
make docker-restart-frontend
```

Note: if you are using Apple Silicon and want to run Docker commands directly, set the `PLATFORM` environment variable to `apple` and the `DOCKER_BACKEND_DOCKERFILE` environment variable to `Dockerfile.backend.apple`. For example:

```bash
PLATFORM=apple DOCKER_BACKEND_DOCKERFILE=Dockerfile.backend.apple docker-compose up -d --build
```

Single or multi-OS setup details
In this section, we explore how to deploy both the frontend and backend on a single server, as well as how to enable cross-server communication between the frontend and backend using separate servers.
- Miniforge/Miniconda
The following scripts are sourced from `scripts/setup.sh` and `scripts/start_local.sh`.
Python Environment Setup with Conda and Poetry

We recommend managing the Python environment with Miniconda and handling dependencies with Poetry. While Conda and Poetry are independent tools, they can be used together effectively:
- Conda provides flexible and isolated environment management.
- Poetry offers strict and declarative dependency management.
Below is a step-by-step example of combining them:
```bash
# Set up Python Environment
conda create -n secondme python=3.12
conda activate secondme

# (Recommended) Install Poetry inside the Conda environment
# This avoids using system-wide Poetry and keeps dependencies isolated
pip install poetry

# (Optional) Set a custom Python package index (e.g., TUNA mirror for better speed in China)
poetry source add tuna https://pypi.tuna.tsinghua.edu.cn/simple
poetry source set-default tuna

poetry install --no-root --no-interaction

# Install a specific version of GraphRAG from the local archive
# ⚠️ Adjust the path separator for your OS (\ on Windows, / on Unix)
pip install --force-reinstall dependencies/graphrag-1.2.1.dev27.tar.gz

# Install Frontend Dependencies
cd lpm_frontend
npm install
cd ..

# Build llama.cpp Dependencies
unzip -q dependencies/llama.cpp.zip
cd llama.cpp
mkdir -p build && cd build
cmake ..
cmake --build . --config Release
cd ../..

# Initialize SQL Database
mkdir -p "./data/sqlite"
cat docker/sqlite/init.sql | sqlite3 ./data/sqlite/lpm.db

# Initialize ChromaDB Database
mkdir -p logs
python docker/app/init_chroma.py

# Start the Backend Server (develop mode)
python -m flask run --host=0.0.0.0 --port=8002 >> "logs/backend.log" 2>&1
# In production, use `nohup` and `disown` to keep the server running in the background.

# Start the Frontend Server (open another terminal)
cd lpm_frontend
npm run build
npm run start
```

ℹ️ Note: If the frontend and backend are deployed on separate servers, make sure to configure `HOST_ADDRESS` in the `.env` file accordingly.
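For a split deployment, the frontend needs to know where the backend lives. A minimal `.env` sketch: the variable name `HOST_ADDRESS` comes from the note above, while the address below is a placeholder you must replace, and whether the value should include a scheme or port depends on the project's `.env` template, so check it first:

```
# .env (illustrative value only; replace with your backend server's address)
HOST_ADDRESS=192.0.2.10
```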
After starting the service (either with local setup or Docker), open your browser and visit:
http://localhost:3000
For a list of all available commands:

```bash
make help
```

- Ensure you have sufficient disk space (at least 10GB recommended)
- If using local setup with an existing conda environment, ensure there are no conflicting package versions
- First startup may take a few minutes to download and install dependencies
- Some commands may require sudo privileges
If you encounter issues, check:
- For local setup: Python and Node.js versions meet requirements
- For local setup: You're in the correct conda environment
- All dependencies are properly installed
- System firewall allows the application to use required ports
- For Docker setup: Docker daemon is running and you have sufficient permissions
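To check whether the default ports are already taken (3000 for the frontend and 8002 for the backend, per the commands above), a quick sketch that assumes `lsof` is available, as it is on most macOS and Linux systems:

```bash
# Report the status of each default port
for port in 3000 8002; do
  if command -v lsof >/dev/null 2>&1 && lsof -i :"$port" >/dev/null 2>&1; then
    echo "Port $port is in use"
  else
    echo "Port $port looks free (or lsof is unavailable)"
  fi
done
```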
🛠️ Feel free to follow the User tutorial to build your Second Me.
💡 Check out the links below to see how Second Me can be used in real-life scenarios:
- Felix AMA (Roleplay app)
- Brainstorming a 15-Day European City Itinerary (Network app)
- Icebreaking as a Speed Dating Match (Network app)
The following features have been completed internally and are being gradually integrated into the open-source project. For detailed experimental results and technical specifications, please refer to our Technical Report.
- Long Chain-of-Thought Training Pipeline: Enhanced reasoning capabilities through extended thought process training
- Direct Preference Optimization for L2 Model: Improved alignment with user preferences and intent
- Data Filtering for Training: Advanced techniques for higher quality training data selection
- Apple Silicon Support: Native support for Apple Silicon processors with MLX Training and Serving capabilities
- Natural Language Memory Summarization: Intuitive memory organization in natural language format
We welcome contributions to Second Me! Whether you're interested in fixing bugs, adding new features, or improving documentation, please check out our Contribution Guide. You can also support Second Me by sharing your experience with it in your community, at tech conferences, or on social media.
For more detailed information about development, please refer to our Contributing Guide.
We would like to express our gratitude to all the individuals who have contributed to Second Me! If you're interested in contributing to the future of intelligence uploading, whether through code, documentation, or ideas, please feel free to submit a pull request to our repository: Second-Me.
Made with contrib.rocks.
This work leverages the power of the open-source community.
For data synthesis, we utilized GraphRAG from Microsoft.
For model deployment, we utilized llama.cpp, which provides efficient inference capabilities.
Our base models primarily come from the Qwen2.5 series.
We also want to extend our sincere gratitude to all users who have experienced Second Me. We recognize that there is significant room for optimization throughout the entire pipeline, and we are fully committed to iterative improvements to ensure everyone can enjoy the best possible experience locally.
Second Me is open source software licensed under the Apache License 2.0. See the LICENSE file for more details.