I see that all the database implementations use an iterator for finding and querying records. For example, the document-based implementation of get iterates over records until it finds a matching key. I assume this linear search does not scale to tens of thousands or millions of records. What are the limits here for acceptable access performance? Are there any memory recommendations or a maximum number of records?
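To illustrate the pattern I am asking about, here is a minimal self-contained sketch (plain Python with placeholder names, not the project's actual code) comparing the linear scan I see in get with an indexed lookup:

```python
# Placeholder data standing in for the stored records.
records = [{"key": f"id-{i}", "value": i} for i in range(100_000)]

def get_linear(key):
    # What I understand the current get to do:
    # scan record by record until the key matches -> O(n) per lookup.
    for record in records:
        if record["key"] == key:
            return record
    return None

# A key index (hash map from key to record) would make the same lookup O(1).
index = {record["key"]: record for record in records}

def get_indexed(key):
    return index.get(key)  # single hash lookup instead of a full scan

print(get_linear("id-99999"))   # worst case: walks all 100k records
print(get_indexed("id-99999"))  # constant-time lookup
```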