TransFire is a simple tool that lets you use your locally running LLMs while away from home, without requiring port forwarding. TransFire routes an OpenAI-compatible API exposed by LMStudio or Ollama through a Firebase instance of your choice, encrypting all traffic with a pre-shared AES key so that not even Google can read your conversations.
To get started, download and install the APK and follow the setup instructions below.
> **Note**
> If you want to access LLMs remotely, Tailscale is a much better solution than this one. I built this for two main reasons:
> - For fun
> - For experimenting with an uncommon use of Firebase RTDB
First you will need to set up the client:
- Install the APK on your phone
- Go to the Firebase console.
- Click on `Create a new Firebase project`
- Proceed through the whole process, preferably opting out of Analytics and Gemini
- Now, on the left panel, expand the `Build` dropdown and select `Realtime Database`
- Click on `Create Database`
- Select a database region of your choice
- Select `Start in locked mode` and proceed
- Click on the URL icon to copy the database URL. This is your **Firebase Database URL**
- Now click on the settings icon in the top-left corner and open `Project Settings`
- Go to the `Service accounts` tab and then click on `Database secrets`
- You should see one secret in the list; if not, click on `Add secret`
- Hover over the secret to reveal the `Show` button and click it, then copy the key. This is your **Firebase Database API key**
- Now go to the app and click on `Get started`
- Put the **Firebase Database URL** and **Firebase Database API key** in the corresponding fields, then choose an AES password to encrypt the traffic to/from Firebase and put it into `Encryption password`
- Click on `Save configuration` and then `Next`
- You can now proceed to server configuration
Now you can set up and start the server:
- Clone the repository with `git clone https://github.com/Belluxx/TransFire`
- Navigate to the server directory with `cd TransFire/transfire-server/`
- Create a virtual environment with `python3 -m venv .venv`
- Activate it with `source .venv/bin/activate`
- Install dependencies with `pip3 install -r requirements.txt`
- Now copy `example.env` to `.env`
- Open the new `.env` file
- Fill in the fields `FIREBASE_URL`, `FIREBASE_API_KEY` and `ENCRYPTION_PASSWORD` with the same values used during the client setup
- Choose a `POLL_INTERVAL` that is not too low (use more than 2 seconds), or you risk exhausting your free Firebase daily quota
- Put the correct `OPENAI_LIKE_API_URL`. It will be `http://127.0.0.1:1234` if you are using LMStudio, or `http://127.0.0.1:11434` if you are using Ollama
- `OPENAI_LIKE_API_KEY` should stay as is; change it only if you know what you are doing (for example, when using remote APIs)
- Start the server with `python3 server.py`
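Once filled in, the `.env` file might look something like this (the values below are placeholders; use your own database URL, secret and password, and keep the `OPENAI_LIKE_API_KEY` default from `example.env`):

```env
FIREBASE_URL=https://your-project-default-rtdb.europe-west1.firebasedatabase.app
FIREBASE_API_KEY=your-database-secret
ENCRYPTION_PASSWORD=your-aes-password
POLL_INTERVAL=3
OPENAI_LIKE_API_URL=http://127.0.0.1:1234
```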
```mermaid
sequenceDiagram
    participant TransFire App
    participant Firebase
    participant TransFire Server
    participant LLM Server
    TransFire App->>TransFire App: Append user message to chat
    TransFire App->>TransFire App: Encrypt chat
    TransFire App->>Firebase: Send chat
    TransFire Server->>Firebase: Get chat
    TransFire Server->>TransFire Server: Decrypt chat
    TransFire Server->>LLM Server: Send chat
    LLM Server->>TransFire Server: Return LLM response
    TransFire Server->>TransFire Server: Encrypt response
    TransFire Server->>Firebase: Send response
    TransFire App->>Firebase: Get response
    TransFire App->>TransFire App: Decrypt response
    TransFire App->>TransFire App: Append response to chat
```
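The server side of this flow boils down to a polling loop against the Realtime Database REST API. The sketch below is a simplified illustration, not the actual `server.py`: the paths (`chat`, `response`), the `decrypt`/`encrypt` helpers and the `ask_llm` callback are all hypothetical names.

```python
import json
import urllib.request


def rtdb_url(base_url: str, path: str, api_key: str) -> str:
    """Build a Firebase RTDB REST endpoint authenticated with a database secret."""
    return f"{base_url.rstrip('/')}/{path}.json?auth={api_key}"


def poll_once(base_url, api_key, decrypt, encrypt, ask_llm):
    """One iteration of the (hypothetical) poll loop: read the encrypted chat,
    forward it to the local LLM server, write the encrypted response back."""
    # Read whatever the app last uploaded
    with urllib.request.urlopen(rtdb_url(base_url, "chat", api_key)) as resp:
        payload = json.load(resp)
    if payload is None:
        return  # nothing new from the app
    chat = decrypt(payload)            # same pre-shared AES password as the app
    answer = encrypt(ask_llm(chat))    # forward to LMStudio/Ollama, re-encrypt
    req = urllib.request.Request(
        rtdb_url(base_url, "response", api_key),
        data=json.dumps(answer).encode(),
        method="PUT",
    )
    urllib.request.urlopen(req).close()
```

The real server repeats this every `POLL_INTERVAL` seconds, which is why the interval directly determines how much of the free Firebase quota you burn even when idle.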
- No support for two simultaneous clients using the same Firebase Database
- No automatic detection of available models, due to the heterogeneity of Ollama, LMStudio, etc.
- No support for chat history and multiple chats (will be added in the future)
Thanks to compose-richtext, I was able to add Markdown parsing, which is essential for making LLM output readable.