A modern log viewer tool designed to help Cisco engineers quickly view, filter, and analyze log files. The tool features a responsive UI, customizable filters, and intelligent summarization and issue detection powered by AI agents.
Note: This is not an official Cisco repository; it is a prototype we built to present to Cisco, with the goal of porting some of these changes into their existing tooling.
The Log Viewer Tool is built to help engineers quickly identify issues in large log files by:
- Uploading and displaying logs in a responsive table format.
- Providing real‑time filtering using basic text search, regex, and predefined or custom filter groups.
- Enabling an AI agent to analyze logs, generate summaries, detect known issues, and suggest filtering options.
⚠️ Warning:
The setup instructions provided below have only been tested on macOS and Windows, as none of us had a Linux machine to test on.
First, you need to ensure you have the latest version of Node.js installed on your machine and can run npm commands.
- Clone the repository.

- Navigate to the client directory.

- Install dependencies:

  ```bash
  npm install
  ```

- Start the development server:

  ```bash
  npm run dev
  ```
You may also build the project and preview it in production mode:

```bash
npm run build
npm run preview
```

You can find test log files in the test-logs directory inside the root directory.
⚠️ Important Note: Once you set up the server (backend) and it is fully running, you may need to refresh the client page to ensure it receives the available AI models from the server. This is because models are handled entirely on the server side.
⚠️ Important Note: The server requires Python 3.12 or higher to run (we recommend the latest). Make sure you have the correct version installed and set as your environment's default Python interpreter.
The following are instructions to set up the backend. However, the frontend works fully without the backend and has all of the features except the AI agent and the database, so if you only want to test the frontend, you can skip this section.
Do the following steps in a separate terminal instance.

First, navigate to the server directory:
- Create a Python virtual environment:

  ```bash
  python -m venv venv
  ```

- Activate the virtual environment:

  ```bash
  source venv/bin/activate
  ```

- Install required packages:

  ```bash
  pip install -r requirements.txt
  ```

- Run the server:

  ```bash
  python main.py
  ```
First, navigate to the server directory:

Note: Make sure your `py` command points to Python 3.12 or higher.
- Create a Python virtual environment:

  ```bash
  py -m venv venv
  ```

- Activate the virtual environment:

  ```bash
  venv\Scripts\activate
  ```

- Install required packages:

  ```bash
  pip install -r requirements.txt
  ```

- Run the server:

  ```bash
  py main.py
  ```
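Once the server is running (on either platform), you can sanity-check it from Python. This is a minimal sketch under two assumptions you should verify against main.py: uvicorn's default port 8000, and FastAPI's auto-generated /docs route being enabled.

```python
# Hypothetical health check: assumes the server runs on uvicorn's default
# port 8000 and that FastAPI's auto-generated /docs route is enabled.
# Check main.py for the actual host and port before relying on this.
import urllib.request

with urllib.request.urlopen("http://localhost:8000/docs") as resp:
    print(resp.status)  # 200 means the FastAPI app is up and serving
```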
Do the following steps in a separate terminal instance.
For efficient log storage and search capabilities, we use Elasticsearch:
- Install Elasticsearch by following the instructions at this article:

- The setup can be quite lengthy, with configuring security being the longest part. For testing purposes, you can simply navigate to the Elasticsearch installation directory (e.g. elasticsearch-8.17.3), open config/elasticsearch.yml, and set the following option to false. This disables security for testing purposes:

  ```yaml
  xpack.security.enabled: false
  ```

- Start Elasticsearch (after navigating to the Elasticsearch installation directory):

  - Mac:

    ```bash
    ./bin/elasticsearch
    ```

  - Windows:

    ```bash
    .\bin\elasticsearch
    ```

- Verify Elasticsearch is running:

  - Mac:

    ```bash
    curl -X GET "http://localhost:9200/?pretty"
    ```

  - Windows:

    ```bash
    curl.exe -X GET "http://localhost:9200/?pretty"
    ```

- The expected output should look something like this:

  ```json
  {
    "name" : "Mujtabas-MacBook-Air-2.local",
    "cluster_name" : "elasticsearch",
    "cluster_uuid" : "da6pIJZERnOwSsUxFlwK7A",
    "version" : {
      "number" : "8.17.3",
      "build_flavor" : "default",
      "build_type" : "tar",
      "build_hash" : "a091390de485bd4b127884f7e565c0cad59b10d2",
      "build_date" : "2025-02-28T10:07:26.089129809Z",
      "build_snapshot" : false,
      "lucene_version" : "9.12.0",
      "minimum_wire_compatibility_version" : "7.17.0",
      "minimum_index_compatibility_version" : "7.0.0"
    },
    "tagline" : "You Know, for Search"
  }
  ```
These steps should set up Elasticsearch on your machine at http://localhost:9200.
If you run into issues, please follow the detailed instructions on the Elasticsearch website.
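Since requirements.txt pins the official Python client (elasticsearch==8.13.0), you can also verify connectivity from Python instead of curl. A minimal sketch, assuming security is disabled as described above:

```python
# Connectivity check using the official Elasticsearch Python client
# (already pinned in requirements.txt). Assumes security is disabled
# and Elasticsearch is listening on the default port 9200.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
print(es.ping())                       # True if the cluster is reachable
print(es.info()["version"]["number"])  # e.g. "8.17.3"
```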
Create a .env file in the server root directory with the following key (you may need to get your own OpenAI API key or request one from us):

```bash
OPENAI_API_KEY=your_openai_api_key_here
```

Make sure to restart the server by terminating and rerunning the main.py file.
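The server reads the key with os.getenv("OPENAI_API_KEY") (see the models dictionary in the main.py excerpt below), so the key must be visible in the process environment when main.py starts. Here is a minimal sanity check; the load_dotenv() call is an assumption for illustration (it requires the python-dotenv package), and main.py may load the .env file through a different mechanism:

```python
# Sanity check that the key from .env is visible to Python.
# load_dotenv() is an illustrative assumption (python-dotenv package);
# main.py may load the .env file differently.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
key = os.getenv("OPENAI_API_KEY")
print("key loaded" if key else "OPENAI_API_KEY is missing")
```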
⚠️ IMPORTANT WARNING: We've deliberately disabled the offline AI agent by default since it causes compatibility issues on Windows machines. The following section gives you instructions on how to enable it. It has been tested on macOS and runs well on Metal, so feel free to enable it if you're on macOS; otherwise, stick to the online agent.
You can also set up the AI agent to run on your own offline models. For this we use llama.cpp for inference, as it's one of the fastest ways to run compiled and quantized models (unfortunately, it doesn't do well on compatibility; with more time, we might have ported to a more compatible framework).
To install llama.cpp's Python bindings, run the following command:

```bash
CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python==0.3.4
```

Next, you need to download the .gguf file of the model. We recommend getting started with a lightweight model such as granite3.2 instruct 2b. Once you've downloaded the .gguf file, create a models/granite folder and put the .gguf file there. Note that you may put the .gguf file anywhere, as long as you later reference the correct path in the main.py file.
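Before wiring the model into main.py, it can be worth checking that the .gguf file loads at all. A minimal smoke test, assuming the recommended file name and the models/granite path from above (adjust both if you stored the file elsewhere):

```python
# Smoke test: load the quantized model with llama-cpp-python and run a
# one-off completion. The model path matches the layout suggested above;
# adjust it to wherever you actually put the .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="models/granite/granite-3.2-2b-instruct-Q6_K.gguf",
    n_ctx=3072,  # same context window used in main.py
)
out = llm("Say hello in one short sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```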
Then, go to the main.py file and uncomment the following import:

```python
# from model_client.offline_model import OfflineModelClient  # Uncomment for offline model (disabled by default)
```

Further down, you'll find where the models are defined, which should look like this unedited:
```python
# Initialize models and client
models: dict[str, ModelClient] = {
    "gpt-4o": OpenAIModelClient(os.getenv("OPENAI_API_KEY") or "", "gpt-4o"),
    # Uncomment for offline model (disabled by default)
    # "granite-3.2-2b": OfflineModelClient(
    #     "models/granite/granite-3.2-2b-instruct-Q6_K.gguf",
    #     context_window=3072,
    # ),
}
```

You can now uncomment the offline model entry for the model we recommended above, but feel free to use other models (perhaps try 8b ones if your machine can handle them). Also feel free to tweak the context_window parameter to your liking.
Make sure to restart the server by terminating and rerunning the main.py file.
Once again, there are example logs in the test-logs directory, so use those for testing if you'd like.
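If you'd rather generate your own input, here is a minimal sketch that writes a synthetic JSON log file. The field names (timestamp, severity, message) are hypothetical; mirror the structure of the files in test-logs rather than this exact schema:

```python
# Write a small synthetic JSON log file for experimenting with the viewer.
# The schema below is illustrative only -- match the real test-logs files.
import json

entries = [
    {"timestamp": "2025-03-01T12:00:00Z", "severity": "INFO",
     "message": "Interface GigabitEthernet0/1 changed state to up"},
    {"timestamp": "2025-03-01T12:00:05Z", "severity": "ERROR",
     "message": "BGP neighbor 10.0.0.2 went from Established to Idle"},
]

with open("synthetic-log.json", "w") as f:
    json.dump(entries, f, indent=2)
```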
- Upload:
Use the "Upload" button to load a JSON log file. The logs will be rendered in a table. You may also upload to a database. By pressing the "View" button next to the "Upload" button, you can view/delete logs uploaded there.
- View:
Logs are displayed in a responsive double-table view that supports scrolling and pagination. The top table shows all of the logs, while the bottom table shows the filtered logs. Clicking a row in the bottom table automatically highlights and scrolls to the corresponding row in the top table. A draggable gutter separates the two tables and lets you resize them to give either table more space.
- Efficient Filtering:
Filter log entries by typing in the search bar. Enable or disable case sensitivity and regular expressions for advanced filtering (see the sketch after this list).
- Quick Isolation:
Choose from a set of predefined filter groups to quickly isolate log entries. Multiple groups can be applied simultaneously.
- Create and Save:
Click the "+" button to create custom filter groups. These filters can be saved for later use and appear in the dropdown.
The integrated AI agent assists in log analysis by generating summaries, detecting known issues, and suggesting filters.
- Automated Summaries:
The agent analyzes log statistics and generates a concise summary to highlight trends and anomalies.
- Contextual Analysis:
The agent compares log data against known issues and flags potential problems, providing an explanation and a suggested resolution. It does this by using keywords and regex patterns as well as semantic search to find relevant logs, then checking whether they match known issues (see the sketch after this list).
- Natural Language Conversion:
The agent converts natural language queries into structured filter groups (with keywords or regex) to refine log output.
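For the Contextual Analysis item above, the semantic-search half of the matching can be sketched with sentence-transformers, which requirements.txt pins. The model name, threshold, and sample data below are illustrative assumptions, not the server's actual configuration:

```python
# Illustrative semantic matching of a log line against known issues using
# sentence-transformers (pinned in requirements.txt). Model, threshold,
# and sample data are assumptions for this sketch.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

known_issues = [
    "BGP session flapping due to MTU mismatch",
    "Interface errors caused by a faulty SFP module",
]
log_line = "BGP neighbor 10.0.0.2 went from Established to Idle"

# Embed both sides and score with cosine similarity.
issue_embeddings = model.encode(known_issues, convert_to_tensor=True)
log_embedding = model.encode(log_line, convert_to_tensor=True)
scores = util.cos_sim(log_embedding, issue_embeddings)[0]

for issue, score in zip(known_issues, scores):
    if float(score) > 0.4:  # illustrative threshold
        print(f"possible match ({float(score):.2f}): {issue}")
```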
The agent needs context to provide accurate recommendations. Workspaces and categories help organize filters and known issues for better management.
- Workspace Management:
Organize filters and known issues into workspaces.
- Category Modal:
View, edit, and delete filter categories in a modern, responsive modal interface.
For the Front-End, the only major dependencies are jQuery and Bootstrap. However, for quick development we also used CORS, Marked.js, and Split.js. We also have Jest for running tests.
We use Vite as our bundler.
For the Back-End, here is what requirements.txt contains:

```
fastapi==0.115.11
uvicorn[standard]==0.33.0
elasticsearch==8.13.0
openai==1.68.2
sentence-transformers==3.4.1
pytest==8.3.5
pytest-asyncio==0.25.3
```
However, on macOS, if you decide to enable the offline AI agent, you will also need llama-cpp-python==0.3.4.
All of these dependencies are open source and compatible with our license.