Elastic Datashader combines the power of Elasticsearch with Datashader, so you can go from raw data to a rendered Datashader layer (see the before/after screenshots).
Poetry takes care of installing dependencies within the virtual environment. First, install Poetry:
python3 -m pip install poetry
Now we can create the virtual environment and install the dependencies into it with:
poetry install
Note that optional extras can also be installed with --extras; these are described below.
First, enter the virtualenv created by Poetry:
poetry shell
To run the web server locally, first install the localwebserver optional extra:
poetry install --extras localwebserver
uvicorn is now available within the virtualenv (you can re-enter it with poetry shell). Note that the log level for the datashader logger can be set in logging_config.yml or via the DATASHADER_LOG_LEVEL environment variable; the latter takes precedence.
DATASHADER_ELASTIC=http://user:password@localhost:9200 uvicorn elastic_datashader:app --reload --port 6002 --log-config deployment/logging_config.yml
Build the Docker container by running make within the folder:
make
To run in production mode via Docker + Uvicorn:
docker run -it --rm=true -p 5000:5000 \
  --env DATASHADER_ELASTIC=http://user:password@host:9200 \
  --env DATASHADER_LOG_LEVEL=DEBUG \
  elastic_datashader:latest \
  --log-level=debug \
  -b :5000 \
  --workers 32
To run with TLS, additionally pass the certificate options to the server:
docker run -it --rm=true -p 5000:5000 \
  --env DATASHADER_ELASTIC=http://user:password@host:9200 \
  --env DATASHADER_LOG_LEVEL=DEBUG \
  elastic_datashader:latest \
  --log-level=debug \
  -b :5000 \
  --workers 32 \
  --certfile <path> \
  --keyfile <path> \
  --ca-certs <path>
To serve Datashader behind an NGINX reverse proxy, run Datashader as normal and use the following NGINX configuration snippet:
location /datashader/ {
    proxy_pass http://ip-to-datashader-server:5000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
}
To run the tests, from within the virtualenv (poetry shell) just run:
pytest
Datashader layers will be generated faster if the Elasticsearch search.max_buckets setting is increased to 65536.
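For example, here is a minimal sketch of raising that setting through the Elasticsearch cluster settings API with Python; the cluster URL and credentials are placeholders, and the requests package is assumed to be available.
import requests

# Assumption: adjust ES_URL to point at your cluster, using credentials
# that are allowed to change cluster settings.
ES_URL = "http://user:password@localhost:9200"

# Raise the dynamic search.max_buckets cluster setting so the dense
# aggregations used for tile generation are not rejected by Elasticsearch.
resp = requests.put(
    f"{ES_URL}/_cluster/settings",
    json={"persistent": {"search.max_buckets": 65536}},
)
resp.raise_for_status()
print(resp.json())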
Integration with Kibana Maps can be found here. This code requires changes to code covered by the Elastic License. It is your responsibility to use this code in compliance with that license.
You can build a Kibana with Elastic-Datashader support:
cd kibana
make
The API is currently provisional and may change in future releases.
URL : /tms/{index-name}/{z}/{x}/{y}.png
Method : GET
Query Parameters:
Required:
geopoint_field=[alphanumeric]: the field to use for geopoint coordinates.
Optional:
geopfield_type=[alphanumeric]: the field type to use for the query (default: geo_point); this is needed because cross-cluster get_field_mapping doesn't work
timestamp_field=[string]: the field to use for time (default: @timestamp)
params=[json]: query/filter parameters from Kibana
cmap=[alphanumeric]: the colorcet map to use (default: bmy for heatmap and glasbey_category10 for colored points)
category_field=[alphanumeric]: the field to be used for coloring points/ellipses
category_type=[alphanumeric]: the type of the category_field (as found in the Kibana Index Pattern)
category_format=[alphanumeric]: the format for numeric category fields (in NumeralJS format)
ellipses=[boolean]: if ellipse shapes should be drawn (default: false)
ellipse_major=[alphanumeric]: the field that contains the ellipse major axis size
ellipse_minor=[alphanumeric]: the field that contains the ellipse minor axis size
ellipse_tilt=[alphanumeric]: the field that contains the ellipse tilt in degrees
ellipse_units=[alphanumeric]: the units for the ellipse axes (one of majmin_nm, semi_majmin_nm, or semi_majmin_m)
ellipse_search=[alphanumeric]: how far to search for ellipses when generating tiles (one of narrow, normal, or wide)
spread=[alphanumeric]: how large points should be rendered (one of large, medium, small, auto)
span_range=[alphanumeric]: the dynamic range to be applied to the alpha channel (one of flat, narrow, normal, wide, auto)
resolution=[alphanumeric]: the aggregation grid size (default: finest)
bucket_min=[numeric]: a filter to remove lower-count grid points (percentage of maximum records per grid point)
bucket_max=[numeric]: a filter to remove higher-count grid points (percentage of maximum records per grid point)
Params
{
  "lucene_query": "a lucene query",
  "timeFilters": {
    "from": "now-5h",
    "to": "now"
  },
  "filters": { ... filter information extracted from Kibana ... }
}
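Putting the pieces above together, the following is a minimal, illustrative sketch of requesting a single tile with Python. The host, port, index name, and field names are placeholder assumptions, and the Params structure is passed JSON-encoded in the params query parameter.
import json
import requests

# Assumptions: elastic_datashader is listening on localhost:5000, and the
# index and field names below are placeholders for your own data.
BASE_URL = "http://localhost:5000"
INDEX = "my-geo-index"
z, x, y = 3, 2, 3  # tile coordinates

# query/filter parameters in the "Params" structure shown above
params_json = {
    "lucene_query": "*",
    "timeFilters": {"from": "now-5h", "to": "now"},
    "filters": {},
}

resp = requests.get(
    f"{BASE_URL}/tms/{INDEX}/{z}/{x}/{y}.png",
    params={
        "geopoint_field": "location",       # required: field with geopoint coordinates
        "timestamp_field": "@timestamp",
        "spread": "auto",
        "span_range": "auto",
        "params": json.dumps(params_json),  # JSON-encoded query/filter parameters
    },
)
resp.raise_for_status()
with open("tile.png", "wb") as f:
    f.write(resp.content)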
URL : /legend/{index-name}/fieldname
Method : GET
Required:
geopoint_field=[alphanumeric]: the field to use for geopoint coordinates.
Optional:
timestamp_field=[string]: the field to use for time (default: @timestamp)
params=[json]: query/filter parameters from Kibana
category_field=[alphanumeric]: the field to be used for coloring points/ellipses
category_type=[alphanumeric]: the type of the category_field (as found in the Kibana Index Pattern)
category_format=[alphanumeric]: the format for numeric category fields (in NumeralJS format)
cmap=[alphanumeric]: the colorcet map to use (default: bmy for heatmap and glasbey_category10 for colored points)
Params
{
  "lucene_query": "a lucene query",
  "timeFilters": {
    "from": "now-5h",
    "to": "now"
  },
  "filters": { ... filter information extracted from Kibana ... },
  "extent": {
    "minLat": 0.0, "maxLat": 0.0,
    "minLon": 0.0, "maxLon": 0.0
  }
}
Returns:
[
  {"key": "xyz", "color": "acolor", "count": 100},
  {"key": "abc", "color": "acolor", "count": 105}
]
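Similarly, here is a hedged sketch of fetching a legend and printing the returned entries; the host, index, and field names are again placeholder assumptions.
import json
import requests

# Assumptions: same placeholder host, index, and field names as in the tile example.
BASE_URL = "http://localhost:5000"
INDEX = "my-geo-index"
FIELD = "category"  # hypothetical category field used for coloring

params_json = {
    "lucene_query": "*",
    "timeFilters": {"from": "now-5h", "to": "now"},
    "filters": {},
    "extent": {"minLat": -90.0, "maxLat": 90.0, "minLon": -180.0, "maxLon": 180.0},
}

resp = requests.get(
    f"{BASE_URL}/legend/{INDEX}/{FIELD}",
    params={
        "geopoint_field": "location",
        "category_field": FIELD,
        "params": json.dumps(params_json),
    },
)
resp.raise_for_status()
for entry in resp.json():  # list of {"key": ..., "color": ..., "count": ...}
    print(entry["key"], entry["color"], entry["count"])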
Releases
Draft a new release
Create a tag with a one-up build number (Target: Master)
Publish the release

