Loggle is a self-hosted log monitoring solution that stitches together the best available tools for log management. If you're looking to take control of your logs without relying on third-party services, Loggle is for you. This is a fun project intended for experimentation and learning, and it is not recommended for production use.
Before diving into cloud deployment, try Loggle locally:
Prerequisites:

- Docker Desktop installed and running
- Visual Studio or VS Code with the .NET SDK
Run with Docker:

```powershell
cd examples
.\loggle-compose.ps1 start   # Starts all required containers
```
This will provision:
- Elasticsearch
- Kibana
- .NET Aspire Dashboard
- OpenTelemetry Collector
- Loggle.Web
Run the Example App:

- Open `Loggle.sln` in Visual Studio
- Set `Examples.Loggle.Console` as the startup project
- Run the application (F5)
View Your Logs:
- Open Kibana Log Explorer
- Open .NET Aspire Dashboard for an Aspire-first log browsing experience
- Watch your logs flow in real time
Cleanup:

```powershell
.\loggle-compose.ps1 stop   # Stops and removes all containers
```
If you're already instrumenting applications with .NET, wiring Loggle into your existing logging pipeline takes just a couple of minutes:
Add the NuGet package:

```powershell
dotnet add package Loggle
```

Or with the Package Manager Console:

```powershell
Install-Package Loggle
```

This brings in the `AddLoggleExporter()` extension method used below.
Add configuration to `appsettings.json`:

```json
{
  "Logging": {
    "OpenTelemetry": {
      "IncludeFormattedMessage": true,
      "IncludeScopes": true,
      "ParseStateValues": true
    },
    "Loggle": {
      "ServiceName": "Examples.Loggle.Console",
      "ServiceVersion": "v0.99.5-rc.7",
      "OtelCollector": {
        "BearerToken": "REPLACE_WITH_YOUR_OWN_SECRET",
        "LogsReceiverEndpoint": "http://your-domain-or-ip:4318/v1/logs"
      }
    }
  }
}
```
Register the Loggle exporter in `Program.cs`:

```csharp
var builder = Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        // Register the Loggle exporter
        services.AddLoggleExporter();
    });
```
That’s it—run your app and the logs stream straight into Loggle alongside the rest of your stack.
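The `LogsReceiverEndpoint` configured above is a standard OTLP/HTTP logs endpoint, so you can reason about what the exporter sends independently of the library. As a dependency-free sketch (field values are illustrative, and the exact resource attributes the exporter emits may differ), this is roughly the kind of request the collector receives:

```python
import json
import urllib.request

# Values mirroring the appsettings.json above (illustrative placeholders).
ENDPOINT = "http://your-domain-or-ip:4318/v1/logs"
BEARER_TOKEN = "REPLACE_WITH_YOUR_OWN_SECRET"

# Minimal OTLP/HTTP JSON payload: one log record tagged with the
# service name and version from configuration.
payload = {
    "resourceLogs": [{
        "resource": {"attributes": [
            {"key": "service.name",
             "value": {"stringValue": "Examples.Loggle.Console"}},
            {"key": "service.version",
             "value": {"stringValue": "v0.99.5-rc.7"}},
        ]},
        "scopeLogs": [{"logRecords": [{
            "severityText": "Information",
            "body": {"stringValue": "Hello from Loggle"},
        }]}],
    }]
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {BEARER_TOKEN}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would actually send it; it is left out
# so the sketch runs without a live collector.
```

The bearer token travels as a plain `Authorization` header, which is why the collector endpoint should sit behind HTTPS in any real deployment.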
The examples folder contains OpenTelemetry logging snippets for .NET, Python, JavaScript, TypeScript, and Go. Run any combination from PowerShell:
```powershell
cd examples
.\run-examples.ps1 -Language python
# The script keeps running until you press Ctrl+C.
```

Each sample ships with its own configuration (`config.json`, `.env`, or `appsettings.json`). Adjust those files to point at your collector or to change service metadata. The runner simply installs per-language dependencies (for example `pip install` or `npm install --legacy-peer-deps`) and loops the program until you stop it.
When you run the local Docker stack, Loggle ships a self-contained .NET Aspire dashboard that reads directly from the same Elasticsearch data stream as Kibana.
- Access the dashboard UI at `http://localhost:18888/` (the default local setup does not require authentication).
- Ports `18889` and `18890` stay exposed for OTLP and gRPC endpoints, matching Aspire defaults.
- Update `examples/aspire-dashboard/appsettings.Development.json` if you need the dashboard to target a different Elasticsearch host or data stream.

> ⚠️ Experimental integration: the current Aspire dashboard work is a persistence experiment. Active development happens in the fork at jgador/loggle_aspire, where the Aspire-specific updates will continue to evolve.
Watch this short video on Google Drive for a walkthrough of setting up and using Loggle:
This video provides a concise overview of deploying Loggle, configuring log forwarding, and accessing Kibana for log visualization.
- Self-Hosted Monitoring: Manage your logs on your own server.
- Complete Toolset:
- OpenTelemetry Collector: Collects your logs.
- Elasticsearch: Stores your logs.
- Kibana: Visualizes your logs.
- .NET Aspire Dashboard: Offers an Aspire-native observability view backed by Elasticsearch.
- Easy Deployment:
- Provision a virtual machine with Terraform on Azure (support for AWS and GCP coming soon).
- Automatically obtain and renew SSL/TLS certificates using Certbot with Let's Encrypt.
- Simple Setup: Provision your VM, send your logs, and access them in Kibana.
Your applications forward their logs to the OpenTelemetry Collector, which exports them to the Log Ingestion API. The Log Ingestion API processes the data and stores it in Elasticsearch, from where Kibana pulls the data for visualization.
```mermaid
flowchart TB
    csharp["C#"]
    go["Go"]
    javascript["JavaScript"]
    python["Python"]
    typescript["TypeScript"]
    others["Other"]
    subgraph sources["Application Logs"]
        csharp --> apps
        go --> apps
        javascript --> apps
        python --> apps
        typescript --> apps
        others --> apps
    end
    apps --> collector["OpenTelemetry Collector"]
    collector --> ingestion["Log Ingestion API"]
    ingestion --> elastic["Elasticsearch"]
    elastic --> kibana["Kibana"]
    elastic --> aspire[".NET Aspire Dashboard"]
```
Prerequisite:
Ensure you have Terraform and the Azure CLI installed and working together. For more information, refer to this guide.
Important Note: The SSL certificate generation is currently hardcoded to use "kibana.loggle.co". Since you'll be using your own domain, you'll need to manually update the deployment scripts to reflect that. This will be made configurable in future updates.
Generate an SSH Key:

The SSH key will be used to authenticate to your virtual machine. If you're using PowerShell, run:

```powershell
ssh-keygen -t rsa -b 4096 -C "loggle" -f "$env:USERPROFILE\.ssh\loggle" -N ""
```
Clone the Repository:

```powershell
git clone https://github.com/jgador/loggle
cd terraform\azure
```

Multiple Azure subscriptions? List your available subscriptions and set the one Terraform should use:

```powershell
az account list -o table
az account set --subscription "<subscription name or id>"
```

Replace the placeholder with the subscription you want to target before running any Terraform commands.
Provision the Public IP:

This will allocate a public IP for your VM.

```powershell
terraform apply -target="azurerm_public_ip.public_ip" -auto-approve
```
Update Your Domain Registrar:

Configure your domain's DNS settings by adding an A record that points to your public IP address with a TTL of 600 seconds. For example, in GoDaddy, go to your domain's DNS management panel, create a new A record with the host set to "@" (or your preferred subdomain), enter your public IP address, and set the TTL to 600.
Deploy with Terraform:

This step deploys all the necessary resources, including the resource group, virtual network, subnet, public IP, network security group, network interface, and the virtual machine.

```powershell
terraform apply -auto-approve
```
Note: If you rebuild the VM while reusing the same static public IP, clear the old SSH host fingerprint before reconnecting (replace the IP if you change it); this prevents host key warnings when you SSH back in:

```powershell
ssh-keygen -R 52.230.2.122
```

Kibana is locked down to a default allow list. Update `kibana_allowed_ips` in `terraform/azure/variables.tf` (or override via `terraform.tfvars`) with your own public IPs before applying if `34.126.86.243` is not yours.
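For instance, a minimal `terraform.tfvars` override might look like the following (assuming `kibana_allowed_ips` is declared as a list of IP strings in `variables.tf`; the addresses below are documentation placeholders you would replace with your own):

```hcl
# terraform/azure/terraform.tfvars — restrict Kibana access to your own IPs
kibana_allowed_ips = [
  "203.0.113.10",  # e.g. your office egress IP (placeholder)
  "198.51.100.25"  # e.g. your home IP (placeholder)
]
```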
The VM stores the managed identity in `/etc/loggle/identity.env`, so `/etc/loggle/setup.sh` can be run repeatedly without additional parameters. After SSH-ing into the host:

```bash
sudo /bin/bash /etc/loggle/setup.sh
```

This replays package installs, certificate sync, and service configuration in an idempotent manner.
Send Your Logs:

Configure your application to forward logs using the following steps:

- Add configuration to `appsettings.json`:

```json
{
  "Logging": {
    "OpenTelemetry": {
      "IncludeFormattedMessage": true,
      "IncludeScopes": true,
      "ParseStateValues": true
    },
    "Loggle": {
      "ServiceName": "Examples.Loggle.Console",
      "ServiceVersion": "v0.99.5-rc.7",
      "OtelCollector": {
        "BearerToken": "REPLACE_WITH_YOUR_OWN_SECRET",
        "LogsReceiverEndpoint": "http://your-domain-or-ip:4318/v1/logs"
      }
    }
  }
}
```

- Add the Loggle exporter in your `Program.cs`:

```csharp
var builder = Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        // Register the Loggle exporter
        services.AddLoggleExporter();
    });
```
Access Kibana:

Kibana is automatically set up as part of the deployment and exposed over standard HTTPS. Open your browser and navigate to `https://kibana.loggle.co` (replace with your domain) to view your logs. Remember: the OpenTelemetry Collector listens on port 4318, and Kibana is published on port 443.
Tear Down (Optional):

A helper script keeps the resource group and static public IP while destroying everything else:

```powershell
pwsh .\destroy.ps1   # Use -AutoApprove:$false if you want to confirm the destroy
```

Run it from `terraform\azure`. The wrapper builds a `terraform destroy` call that targets every managed resource except the protected resource group, public IP, and Key Vault, so those stay in place while the rest is removed.