This API provides a set of functions for managing databases and tables in both local file storage and S3-compatible storage. It supports creating databases and tables, inserting data, querying using SQL, and more.
These functions manage databases and tables stored locally on the file system. Data can be inserted, queried, and organized using SQL-like operations.
```kotlin
// Initialize Timon with a local storage path
external fun initTimon(storagePath: String, bucketInterval: Number, userName: String): String

// Create a new database
external fun createDatabase(dbName: String): String

// Create a new table within a specific database
external fun createTable(dbName: String, tableName: String): String

// List all available databases
external fun listDatabases(): String

// List all tables within a specific database
external fun listTables(dbName: String): String

// Delete a specific database
external fun deleteDatabase(dbName: String): String

// Delete a specific table within a database
external fun deleteTable(dbName: String, tableName: String): String

// Insert data into a table in JSON format
external fun insert(dbName: String, tableName: String, jsonData: String): String

// Query a database with an SQL query
external fun query(dbName: String, sqlQuery: String): String
```
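The insert and query calls above exchange plain JSON strings. The sketch below shows one way to prepare an insert payload in TypeScript; note that the row schema and the `readings` table name are assumptions for illustration, since the exact JSON layout `insert` accepts is not documented here.

```typescript
// Hypothetical rows for an assumed "readings" table; an array of flat row
// objects is assumed purely for illustration.
const rows = [
  { ts: "2024-01-01T00:00:00Z", sensor: "a1", value: 23.5 },
  { ts: "2024-01-01T00:01:00Z", sensor: "a1", value: 24.1 },
];
const jsonData: string = JSON.stringify(rows);

// The serialized payload would then be handed to the bindings, e.g.:
// insert("mydb", "readings", jsonData);
// query("mydb", "SELECT sensor, AVG(value) AS avg_value FROM readings GROUP BY sensor");
console.log(jsonData);
```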
```kotlin
// Pre-load specific tables at startup to eliminate first-query latency
// Returns JSON with status and list of successfully loaded tables
external fun preloadTables(dbName: String, tableNames: Array<String>, userName: String?): String
```

The preloadTables function is designed to be called at app startup to register frequently-used tables with DataFusion, eliminating the latency of on-demand registration during the first query.
React Native/TypeScript:

```typescript
import { preloadTables } from './test-rust-module';

// At app initialization (e.g., in useEffect or App startup)
useEffect(() => {
  preloadTables("mydb", ["users", "posts", "comments"], "username");
}, []);

// Later queries to these tables will be instant!
const result = await query("mydb", "SELECT * FROM users", "username");
```

Response Format:
```json
{
  "status": 200,
  "message": "Successfully preloaded 3 table(s) in database 'mydb'",
  "json_value": ["users", "posts", "comments"]
}
```

Benefits:
- ✅ Eliminates first-query latency by pre-registering tables
- ✅ Parallel table loading (up to 10 concurrent registrations)
- ✅ Automatically skips already-registered tables
- ✅ Gracefully handles missing tables (logs warning, continues loading others)
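Since every call returns a JSON string in the envelope shown above, callers typically decode it once in a helper. A hedged TypeScript sketch follows; the field types are inferred from the sample response, and treating any non-200 status as an error is an assumed convention, not documented behavior.

```typescript
// Envelope shape inferred from the sample preloadTables response.
interface TimonResponse<T> {
  status: number;
  message: string;
  json_value: T;
}

// Parse a raw response string, throwing on a non-200 status (an assumed
// convention; the library's actual error statuses are not documented here).
function parseTimonResponse<T>(raw: string): TimonResponse<T> {
  const res = JSON.parse(raw) as TimonResponse<T>;
  if (res.status !== 200) {
    throw new Error(`Timon error ${res.status}: ${res.message}`);
  }
  return res;
}

// Example using the documented sample response as a literal:
const sample = `{"status":200,"message":"Successfully preloaded 3 table(s) in database 'mydb'","json_value":["users","posts","comments"]}`;
const loaded = parseTimonResponse<string[]>(sample);
console.log(loaded.json_value.join(", ")); // → users, posts, comments
```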
These functions manage data stored in an S3-compatible bucket, allowing for querying and saving daily data as Parquet files.
```kotlin
// Initialize S3-compatible storage with endpoint and credentials
external fun initBucket(bucket_endpoint: String, bucket_name: String, access_key_id: String, secret_access_key: String, bucket_region: String): String

// Sink daily data to Parquet format in the bucket
external fun cloudSinkParquet(dbName: String, tableName: String): String

// Fetch data from a given user and save it locally
external fun cloudFetchParquet(userName: String, dbName: String, tableName: String, dateRange: Map<String, String>): String
```

Run the following command to build the utility with the necessary features:
Rust provides tools to cross-compile your code for different platforms. This involves building the binary for a platform different from your current one.
On Linux or macOS, you can compile for Windows:
```shell
rustup target add x86_64-pc-windows-gnu
cargo build --features dev_cli --release --target x86_64-pc-windows-gnu
```

On Linux, you can compile for macOS:

```shell
rustup target add x86_64-apple-darwin
cargo build --features dev_cli --release --target x86_64-apple-darwin
```

If cross-compilation is not feasible, you can build the binary on each target platform natively. This ensures compatibility.

```shell
cargo build --release
```

The `cross` tool simplifies cross-compiling by providing pre-configured Docker containers for various targets. It automatically handles dependencies and toolchains.

```shell
cargo install cross
cross build --release --target x86_64-pc-windows-gnu
cross build --release --target x86_64-apple-darwin
```

If targeting Linux systems with no shared libraries, you can build a statically linked binary using MUSL:

```shell
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
```

This produces a binary that works on most Linux distributions.
- Use cross-compilation to build for other platforms without a native environment.
- Use `cross` for easier cross-compilation.
- If you have access to all platforms, build natively on each.
To convert a JSON file to a Parquet file, use the following command:
```shell
./tsdb_timon convert <json_file_path> <parquet_file_path>
```

Example:

```shell
./tsdb_timon convert test_input.json test_output.parquet
```

Run an SQL query against the Parquet file:

```shell
./tsdb_timon query <parquet_file_path> "<sql_query>"
```

Example:

```shell
./tsdb_timon query test_output.parquet "SELECT * FROM timon"
```

- The table name is always set to `timon`. Ensure all SQL queries reference the `timon` table explicitly.
- Replace `<json_file_path>`, `<parquet_file_path>`, and `<sql_query>` with your respective input file paths and query.
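To try the convert command end to end, a sample input file can be generated first. The sketch below does this in TypeScript; the exact JSON schema tsdb_timon accepts is not documented here, so a flat array of records is an assumption to adjust for your build.

```typescript
import { writeFileSync } from "fs";

// Assumed input shape: an array of flat records. Adjust the fields to match
// whatever schema your tsdb_timon build actually expects.
const records = [
  { timestamp: "2024-01-01T00:00:00Z", value: 1.5 },
  { timestamp: "2024-01-01T00:01:00Z", value: 2.0 },
];

writeFileSync("test_input.json", JSON.stringify(records, null, 2));
// Then: ./tsdb_timon convert test_input.json test_output.parquet
```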