NaLaMap is an open-source platform that helps users find and analyze geospatial data in a natural way. It combines modern web technologies with AI capabilities to create an intuitive interface for interacting with geographic information.
- Upload and display vector data on a map.
- Geocode locations using OSM and GeoNames (e.g., hospitals, schools).
- Find and integrate data from existing Open Data Portals or your own databases.
- Chat with an AI agent to retrieve information on data content and quality.
- Multi-Provider LLM Support: Choose from OpenAI, Azure OpenAI, Google Gemini, Mistral AI, or DeepSeek.
- AI-assisted map and layer styling.
- Automated geoprocessing using natural language (e.g., buffers, centroids, intersections).
- Create and share GIS-AI applications, built on custom use cases, processing logic, and data sources, for people without geodata expertise.
- Flexible toolbox extension possibilities, e.g., for adding document or web search.
- Color Customization: Customize the application's color scheme to match corporate branding or personal preferences. See Color Customization Guide.
We use GitHub Milestones and a Kanban board to collaborate on our Minimum Viable Product (MVP). We hope to realize this first major release (v1.0.0) in February 2026.
Our next milestone is v0.2.0, scheduled for the 20th of September 2025. You can see a collection of planned improvements below. Issues describing those improvements will be added to the Kanban board continuously.
NaLaMap follows Semantic Versioning for all releases, using the format MAJOR.MINOR.PATCH:
- MAJOR version increments for incompatible API changes, significant architectural changes, or breaking changes to existing functionality
- MINOR version increments for new features, enhancements, or backwards-compatible functionality additions (e.g., new geospatial tools, additional data sources, UI improvements)
- PATCH version increments for backwards-compatible bug fixes, security patches, and minor improvements
Release Tags: All releases are tagged in Git using the format `v{MAJOR}.{MINOR}.{PATCH}` (e.g., `v1.0.0`, `v1.2.3`). Pre-release versions may use suffixes like `-alpha`, `-beta`, or `-rc` for testing purposes (e.g., `v1.1.0-beta.1`).

Current Version: The project is currently in active development. The first stable release will be tagged as `v1.0.0` once core functionality is complete and thoroughly tested.
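As a quick illustration of the tag format described above, a small hypothetical helper (not part of NaLaMap) could split a release tag into its components:

```python
import re

# Matches v{MAJOR}.{MINOR}.{PATCH} with an optional pre-release suffix
# such as -beta.1 or -rc.2.
TAG_RE = re.compile(r"^v(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z.-]+))?$")


def parse_release_tag(tag: str):
    """Split a Git release tag into (major, minor, patch, pre_release)."""
    m = TAG_RE.match(tag)
    if m is None:
        raise ValueError(f"not a valid release tag: {tag!r}")
    major, minor, patch, pre = m.groups()
    return int(major), int(minor), int(patch), pre


print(parse_release_tag("v1.2.3"))         # (1, 2, 3, None)
print(parse_release_tag("v1.1.0-beta.1"))  # (1, 1, 0, 'beta.1')
```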
```
nalamap/
├── backend/                # Python FastAPI backend
│   ├── api/                # API endpoints
│   ├── core/               # Core configurations
│   ├── models/             # Data models
│   ├── services/           # Business logic services
│   │   ├── agents/         # AI agent implementations
│   │   ├── ai/             # AI service providers
│   │   ├── database/       # Database connectors
│   │   └── tools/          # Utility tools
│   └── main.py             # Application entry point
├── frontend/               # Next.js frontend
│   ├── app/                # Next.js application
│   │   ├── components/     # React components
│   │   ├── hooks/          # Custom React hooks
│   │   └── page.tsx        # Main page component
│   └── public/             # Static assets
└── nginx/                  # Nginx configuration for serving the application
```
📖 For detailed architecture documentation, see ARCHITECTURE.md
🤖 For AI agent development guidelines, see AGENTS.md
The following model was created to give you a high-level overview of how NaLaMap works. It shows an example user request to change the styling of a vector layer on the map.
- Git
- Python 3.10+
- Node.js 18+
- Docker & Docker Compose (optional)
- Poetry (for backend)
Follow these steps to get the application running locally:
git clone [email protected]:nalamap/nalamap.git
cd nalamap
Create your environment file:
Create a `.env` file in the root directory based on the provided `.env.example`:
cp .env.example .env
Configure your environment variables:
Edit the `.env` file to include your configuration. The environment file contains several categories of settings:
- AI Provider Configuration: Choose between OpenAI, Azure OpenAI, Google AI, Mistral AI, or DeepSeek and provide the corresponding API keys
- Database Settings: PostgreSQL connection details (a demo database is pre-configured)
- API Endpoints: Backend API base URL configuration
- Optional Services: LangSmith tracing for monitoring AI interactions
Map / WMTS Projection Safety: To avoid rendering projection-mismatched WMTS layers, the backend by default filters out any WMTS layer that lacks a WebMercator (EPSG:3857 family) TileMatrixSet. This behavior is controlled by the environment variable `NALAMAP_FILTER_NON_WEBMERCATOR_WMTS` (default: `true`). Set it to `false` to allow all WMTS layers, which may lead to visual misalignment unless the tiles are in WebMercator. For details, see docs/wmts.md.
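The flag's semantics can be sketched as follows. This is an illustrative reimplementation, not NaLaMap's actual code; the function names and the exact set of accepted falsy spellings are assumptions.

```python
# WebMercator identifiers commonly advertised by WMTS TileMatrixSets
# (illustrative subset).
WEBMERCATOR_CRS = {"EPSG:3857", "EPSG:900913", "urn:ogc:def:crs:EPSG::3857"}


def filter_enabled(env: dict) -> bool:
    """Read the feature flag; defaults to enabled (true)."""
    raw = env.get("NALAMAP_FILTER_NON_WEBMERCATOR_WMTS", "true")
    return raw.strip().lower() not in {"false", "0", "no", "off"}


def keep_layer(tile_matrix_sets: list[str], env: dict) -> bool:
    """Keep a WMTS layer only if it offers a WebMercator TileMatrixSet,
    unless the filter has been disabled via the environment."""
    if not filter_enabled(env):
        return True
    return any(crs in WEBMERCATOR_CRS for crs in tile_matrix_sets)


print(keep_layer(["EPSG:4326"], {}))            # False: filtered out
print(keep_layer(["EPSG:3857"], {}))            # True: WebMercator available
print(keep_layer(["EPSG:4326"],
                 {"NALAMAP_FILTER_NON_WEBMERCATOR_WMTS": "false"}))  # True
```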
Note: The `.env.example` includes a demo database connection that you can use for testing. For production use, configure your own database credentials.
The AI provider is selected via the `LLM_PROVIDER` environment variable. To switch providers, change this value and restart the application.
Supported `LLM_PROVIDER` values and their models:
| Provider | LLM_PROVIDER Value | Default Model | Model Configuration | Additional Configuration |
|---|---|---|---|---|
| OpenAI | `openai` | `gpt-4o-mini` | `OPENAI_MODEL` | `OPENAI_API_KEY` |
| Azure OpenAI | `azure` | User-defined | `AZURE_OPENAI_DEPLOYMENT` | `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_API_VERSION` |
| Google AI | `google` | `gemini-1.5-pro-latest` | `GOOGLE_MODEL` | `GOOGLE_API_KEY` |
| Mistral AI | `mistral` | `mistral-large-latest` | `MISTRAL_MODEL` | `MISTRAL_API_KEY` |
| DeepSeek | `deepseek` | `deepseek-chat` | `DEEPSEEK_MODEL` | `DEEPSEEK_API_KEY` |
Example configuration:
# Choose your provider
LLM_PROVIDER=openai
# Configure the model (optional - defaults to recommended model)
OPENAI_MODEL=gpt-4o-mini
# Add the corresponding API key
OPENAI_API_KEY=your_openai_api_key_here
# Note: You only need to configure the provider you're using
🎯 Model Selection: All providers support configurable model selection via environment variables. If you don't specify a model, NaLaMap uses cost-effective default models optimized for geospatial tasks.
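The provider/model resolution described above can be sketched as follows. This is illustrative only, not the actual backend code; it simply mirrors the defaults from the table:

```python
# Per-provider model override variable and default model
# (values taken from the configuration table above).
DEFAULTS = {
    "openai":   ("OPENAI_MODEL",   "gpt-4o-mini"),
    "google":   ("GOOGLE_MODEL",   "gemini-1.5-pro-latest"),
    "mistral":  ("MISTRAL_MODEL",  "mistral-large-latest"),
    "deepseek": ("DEEPSEEK_MODEL", "deepseek-chat"),
}


def resolve_model(env: dict) -> str:
    """Resolve the model name from LLM_PROVIDER plus optional overrides."""
    provider = env.get("LLM_PROVIDER", "openai").lower()
    if provider == "azure":
        # Azure has no built-in default; the deployment name is required.
        return env["AZURE_OPENAI_DEPLOYMENT"]
    var, default = DEFAULTS[provider]
    return env.get(var, default)


print(resolve_model({"LLM_PROVIDER": "openai"}))  # gpt-4o-mini
print(resolve_model({"LLM_PROVIDER": "mistral",
                     "MISTRAL_MODEL": "mistral-small"}))  # mistral-small
```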
⚙️ Advanced Parameter Customization: To modify advanced LLM parameters (temperature, max_tokens, timeout, etc.), edit the provider files in `backend/services/ai/`:

- `openai.py` - OpenAI configuration
- `google_genai.py` - Google AI configuration
- `mistralai.py` - Mistral AI configuration
- `deepseek.py` - DeepSeek configuration
- `azureai.py` - Azure OpenAI configuration

Each file contains a `get_llm()` function where you can adjust parameters like `temperature`, `max_tokens`, `max_retries`, etc.
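For orientation, a `get_llm()` function exposing such knobs might look roughly like the sketch below. Treat it as pseudocode: the real provider files construct LangChain chat clients, and the exact parameter names and defaults here are assumptions.

```python
import os


def get_llm(
    temperature: float = 0.0,
    max_tokens: int = 4096,
    max_retries: int = 2,
    timeout: float = 60.0,
) -> dict:
    """Illustrative stand-in: returns the configuration that the real
    get_llm() would pass to a LangChain chat client constructor."""
    return {
        "model": os.environ.get("OPENAI_MODEL", "gpt-4o-mini"),
        "temperature": temperature,
        "max_tokens": max_tokens,
        "max_retries": max_retries,
        "timeout": timeout,
    }


config = get_llm(temperature=0.2)
print(config["temperature"])  # 0.2
```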
# Navigate to backend directory
cd backend
# We recommend running `poetry config virtualenvs.in-project true` so the .venv lives inside the repo
poetry install
# Start the backend server
poetry run python main.py
The backend will be available at http://localhost:8000
- API Documentation: http://localhost:8000/docs

Once the frontend is running (see the next step), it will be available at http://localhost:3000
Open a new terminal and run:
# Navigate to frontend directory
cd frontend
# Install dependencies
npm i
# Start development server
npm run dev
If you prefer using Docker:
1. Configure your environment variables as described above.
2. Start the application using Docker Compose: `docker-compose up`
3. Access the application at http://localhost:80
For a complete development environment with hot-reload capabilities:
docker-compose -f dev.docker-compose.yml up --build
- FastAPI: Modern, fast web framework for building APIs
- LangChain: Framework for developing applications powered by language models
- LangGraph: For building complex AI agent workflows
- OpenAI/Azure/Google/Mistral/DeepSeek: AI model providers for natural language processing
- Uvicorn: ASGI server for serving the FastAPI application
- Next.js 15: React framework for building web applications
- React 19: JavaScript library for building user interfaces
- Leaflet: Open-source JavaScript library for interactive maps
- Tailwind CSS: Utility-first CSS framework
- TypeScript: Typed JavaScript for safer code
- Docker: Container platform
- Nginx: Web server and reverse proxy
The project includes comprehensive test suites for both backend and frontend components.
Navigate to the backend directory and run the tests:
cd backend
poetry run pytest tests/
# Run with verbose output
poetry run pytest tests/ -v
# Run specific test markers (unit, integration, styling, etc.)
poetry run pytest tests/ -m unit
Navigate to the frontend directory and run the E2E tests:
cd frontend
# Install Playwright browsers (first time only)
npx playwright install --with-deps
# Run all tests
npm test
# Run in interactive UI mode
npx playwright test --ui
📖 For detailed testing guidelines, see AGENTS.md
📖 For frontend test documentation, see frontend/tests/README.md

Note: Some tests may require environment variables (e.g., `OPENAI_API_KEY`). Mock data is used where possible to avoid external dependencies.
Backend fails to start with "Address already in use" error:
- Check if port 8000 is already in use: `lsof -i :8000`
- Kill any existing processes: `kill <PID>`
LLM API errors:
- Verify your `.env` file is in the root directory
- Check that your provider's API key is set correctly (e.g., `OPENAI_API_KEY`, `GOOGLE_API_KEY`, `MISTRAL_API_KEY`, etc.)
- Ensure `LLM_PROVIDER` matches your chosen provider (`openai`, `azure`, `google`, `mistral`, or `deepseek`)
Frontend fails to start:
- Ensure Node.js 18+ is installed: `node --version`
- Clear the npm cache: `npm cache clean --force`
- Delete node_modules and reinstall: `rm -rf node_modules && npm install`
Note: Additional README files are available in the `/frontend` directory with more specific instructions for each component.
🔒 Important Security Notes:
- Never commit `.env` files with real API keys to version control
- Use `.env.example` as a template and add your own credentials
- Rotate API keys regularly and monitor usage
- File uploads are not committed to version control for privacy
For production deployments:
- Use environment variables or secure secret management
- Enable HTTPS/TLS encryption
- Implement proper authentication and authorization
- Perform regular security audits and dependency updates
Reporting Security Vulnerabilities: If you discover a security vulnerability, please send an email to [[email protected]] instead of using the issue tracker.
NaLaMap has comprehensive documentation to help you get started and contribute:
- README.md (this file) - Project overview and quick start guide
- AGENTS.md - Development guidelines for AI agents and developers
- How to run components (backend, frontend, Docker)
- Testing guidelines (pytest, Playwright)
- Code quality & linting (flake8, black)
- Development workflow and best practices
- ARCHITECTURE.md - System architecture and design
- High-level architecture overview
- Backend and frontend structure
- AI agent system (LangGraph)
- Data flow and communication patterns
- Deployment architecture
- CONTRIBUTING.md - How to contribute to the project
- CODE_OF_CONDUCT.md - Community guidelines
- frontend/tests/README.md - Frontend testing guide
- docs/ - Feature-specific documentation
- Color customization
- Azure deployment
- Performance optimizations
- And more...
We welcome contributions from the community! If you're interested in helping improve NaLaMap, please check out our documentation:
- Contributing Guide - Guidelines for contributing
- AGENTS.md - Development workflow and best practices for AI agents and developers
- ARCHITECTURE.md - System architecture and component structure
- Code of Conduct - Community guidelines
Also feel free to join our community channel (Discord) to get to know us. We hold regular meetings where we discuss the roadmap, feature requirements, and ongoing work.
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by the need for more intuitive geospatial data exploration
- Built with open-source mapping libraries and AI technologies
- Special thanks to all contributors and the open-source community