- ⚡️ Lightweight & Efficient: Built with performance in mind using fully async libraries (aiohttp, aiofiles) and orjson for fast JSON serialization.
- 🔧 Fully customizable table schema: LogLite makes no assumptions about the log table structure; define your own schema to fit your needs.
- 💾 SQLite Backend: Store log messages in SQLite, enabling efficient and complex queries.
- 🔄 Database Migrations: Built-in migration utilities to manage database schema changes.
- 🌐 Web API: RESTful endpoints for log ingestion and querying, with server-sent events (SSE) support for real-time log streaming.
- ✨✨✨ More cool features on my wishlist:
  - Bulk insert: Buffer log entries in memory and bulk-insert them into the database after a short interval or once a size limit is reached.
  - Column-based compression: Mark selected columns as "enums" and silently create an "Enums" table that the main log table references, growing it gradually to capture all distinct values of those columns.
  - Time-based partitioning: One SQLite database per day or month.
  - Just a logging handler: Allow LogLite to be used as a plain logging handler, without the Web API part.
  - Log redirection: When used as a service, allow redirecting logs to a local file or another external sink.
  - More ingestion interfaces: Support log ingestion over ZeroMQ, TCP sockets, and Unix domain sockets.
  - CLI utilities: More CLI utilities to query the database directly and export the results to a file.
```bash
pip install loglite
```
LogLite requires a YAML configuration file. Here's a sample configuration:
```yaml
# Server configuration
host: 0.0.0.0          # Web API server bind host
port: 7788             # Web API server bind port
debug: true            # More verbose logging when enabled
log_table_name: Log    # Name of the main log entry table in SQLite
sqlite_dir: ./db       # Directory for the SQLite database
allow_origin: "*"      # CORS configuration (default: *)
sse_limit: 1000        # Maximum number of logs to push in a single SSE event payload
sse_debounce_ms: 500   # Debounce time in milliseconds for SSE; logs may be pushed later if they arrive too frequently

sqlite_params:         # You can set any SQLite parameters; no default values
  journal_mode: WAL
  synchronous: NORMAL
  cache_size: -32000   # 32 MB
  foreign_keys: OFF
  temp_store: MEMORY
  mmap_size: 52428800  # 50 MB

# Database migrations
migrations:
  - version: 1         # Incremental migration version
    rollout:           # Raw SQLite statements
      - |
        CREATE TABLE Log (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            timestamp DATETIME NOT NULL,
            message TEXT NOT NULL,
            level TEXT NOT NULL CHECK (level IN ('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL')),
            service TEXT NOT NULL,
            filename TEXT,
            path TEXT,
            line INTEGER,
            function TEXT,
            pid INTEGER,
            process_name TEXT,
            extra JSON
        );
      - CREATE INDEX IF NOT EXISTS idx_timestamp ON Log(timestamp);
      - CREATE INDEX IF NOT EXISTS idx_level ON Log(level);
      - CREATE INDEX IF NOT EXISTS idx_service ON Log(service);
    rollback:          # SQL statements to apply when rolling back
      - DROP INDEX IF EXISTS idx_service;
      - DROP INDEX IF EXISTS idx_level;
      - DROP INDEX IF EXISTS idx_timestamp;
      - DROP TABLE IF EXISTS Log;
```
- `migrations`: At least one migration must be defined, with `version`, `rollout`, and `rollback` statements.
- `version`: A unique integer for each migration.
- `rollout`: SQL statements to apply when migrating forward.
- `rollback`: SQL statements to apply when rolling back.

Other items are optional with sensible defaults.
Before starting the server, you need to apply migrations to set up the database schema:
```bash
loglite migrate rollout -c /path/to/config.yaml
```
To start the LogLite server:
```bash
loglite server run -c /path/to/config.yaml
```
If you need to roll back a specific migration version (e.g., version id = 3):
```bash
loglite migrate rollback -c /path/to/config.yaml -v 3
```
Add the `-f` flag to force rollback without confirmation.
Insert a new log entry. The payload format must be consistent with your log table schema.
```bash
curl -X POST http://localhost:7788/logs \
  -H "Content-Type: application/json" \
  -d '{
    "timestamp": "2023-04-01T12:34:56",
    "message": "This is a test log message",
    "level": "INFO",
    "service": "my-service",
    "extra": {"request_id": "12345"}
  }'
```
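For programmatic ingestion, any HTTP client works. Below is a minimal Python sketch (using `requests`, which is an assumption rather than a LogLite dependency) that posts an entry matching the schema defined in the migration above:

```python
from datetime import datetime, timezone

import requests  # assumption: any HTTP client works; requests is used for brevity

# Build a payload consistent with the Log table schema defined above.
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "message": "This is a test log message",
    "level": "INFO",
    "service": "my-service",
    "extra": {"request_id": "12345"},
}

resp = requests.post("http://localhost:7788/logs", json=entry, timeout=5)
resp.raise_for_status()
```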
Query logs with filters. Each query parameter specifies a field and its filters. A filter consists of an operator (e.g. `=`, `~=`, `!=`, `>=`, `<=`, `>`, `<`) and a value; multiple filters for the same field are comma-separated.
Example request:
curl "http://localhost:7788/logs?fields=message,timestamp&limit=10&offset=0×tamp=>=2023-04-01T00:00:00,<=2023-04-01T05:00:00&level==WARNING"
Example response:
```json
{
  "status": "success",
  "result": {
    "total": 3,
    "offset": 0,
    "limit": 2,
    "results": [
      {
        "timestamp": "2025-04-01T02:44:04.207515",
        "message": "hello world!"
      },
      {
        "timestamp": "2025-04-01T01:44:04.207515",
        "message": "hello world!"
      }
    ]
  }
}
```
The following are special query parameters; for these, just provide the exact value:

- `fields`: Comma-separated list of fields to return, defaults to `*` (select all fields).
- `limit`: Maximum number of logs to return.
- `offset`: Offset in the result set.
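To build queries programmatically, here is a hedged Python sketch that composes the filter syntax above and pages through results with `limit`/`offset` (again assuming `requests`, and the response shape from the earlier example):

```python
import requests  # assumption: requests is not a LogLite dependency

BASE_URL = "http://localhost:7788/logs"

# Each filter value follows "<op><value>[,<op><value>...]"; requests
# percent-encodes the operators, which the server is assumed to decode.
params = {
    "fields": "message,timestamp",
    "limit": 100,
    "offset": 0,
    "timestamp": ">=2023-04-01T00:00:00,<=2023-04-01T05:00:00",
    "level": "=WARNING",
}

rows = []
while True:
    result = requests.get(BASE_URL, params=params, timeout=5).json()["result"]
    rows.extend(result["results"])
    if params["offset"] + params["limit"] >= result["total"]:
        break  # all matching rows fetched
    params["offset"] += params["limit"]  # advance to the next page

print(f"fetched {len(rows)} rows")
```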
Subscribe to real-time log updates via Server-Sent Events (SSE). The endpoint only accepts the `fields` query parameter; no filtering is applied.
curl -H "Accept: text/event-stream" http://localhost:7788/logs/sse?fields=message,timestamp
Example events:
data: [{"timestamp": "2025-04-01T02:44:04.207515", "message": "first msg"}]
data: [{"timestamp": "2025-04-01T02:44:10.207515", "message": "third msg"}, {"timestamp": "2025-04-01T01:44:05.207515", "message": "second msg"}]
- Add basic documentation.
- Customize SQLite configuration.
- Implement more features in the wishlist.
- Add tests.