Releases: darshjme/darshjdb

DarshJDB v0.3.3 — Executor Rewire + SqliteStore::query + mlua ddb.kv.*

15 Apr 09:03

[0.3.3] - 2026-04-15 — Executor Rewire + SqliteStore::query + mlua ddb.kv

This sprint is internally referred to as "v0.3.2.1" in agent SUMMARY
files and DEFERRED.md. It was released as v0.3.3 because Cargo
workspace versions are strict three-part semver — the post-release
"0.3.2.1" naming is not a valid cargo version field. There is no
semantic difference; this is the patch release immediately
following v0.3.2.

Closes the two integration deferrals from the v0.3.2 sprint so the
SQLite backend has a real read path and the DarshanQL executor knows
which dialects can run which statement types.

Added

  • SqliteStore::query real implementation — the v0.3.2 stub that
    returned InvalidQuery for every plan is gone. SqliteStore::query
    now binds serde_json::Value params through a small ToSql adapter,
    expands the v0.3.2 M-3 __UUID_LIST__ token into per-uuid ?N
    placeholders for nested-plan resolution, executes via rusqlite on a
    blocking task, and materialises rows in the same QueryResultRow
    JSON shape PgStore::query returns so downstream consumers see no
    shape drift across backends. Plans carrying the
    __SQLITE_VECTOR_UNSUPPORTED__ /
    __SQLITE_COSINE_DISTANCE_UNSUPPORTED__ sentinels are refused up
    front with a clear InvalidQuery message.
  • SqlDialect capability gates — three new methods
    (supports_ddl, supports_graph_traversal, supports_hybrid_search)
    with default-true so PgDialect inherits the v0.3.1 surface
    unchanged. SqliteDialect overrides all three to false. The
    DarshanQL executor checks them at dispatch time.
  • darshql::ExecutorContext — a {pool, Arc<dyn Store>,
    Arc<dyn SqlDialect>} bundle threaded through every executor
    function. The HTTP entry point keeps the existing
    execute(&PgPool, …) signature for backwards compatibility and
    constructs the context internally; new call sites (tests, future
    portable runners) use execute_with_context directly.
  • tests/sqlite_e2e_query.rs — six end-to-end integration tests
    covering bare SELECT, $where Eq, $where Neq, $limit + $offset,
    $order ASC, and the empty-result case against an in-memory
    SqliteStore driven through plan_query_with_dialect + the new
    Store::query path. Plus two matching store::sqlite::tests
    unit tests and three new query::dialect::tests capability checks.
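The __UUID_LIST__ expansion described above can be pictured roughly as follows. This is a std-only sketch, not the actual DarshJDB helper: the function name, signature, and template shape are illustrative. The idea is that the planner emits an `IN (__UUID_LIST__)` template and the bind step replaces the token with one `?N` placeholder per uuid, numbered after the params already bound.

```rust
// Hypothetical sketch of the bind-time __UUID_LIST__ expansion.
fn expand_uuid_list(sql_template: &str, bound_params: usize, uuids: &[&str]) -> String {
    // One ?N placeholder per uuid, numbered after the already-bound params.
    let placeholders: Vec<String> = (0..uuids.len())
        .map(|i| format!("?{}", bound_params + i + 1))
        .collect();
    sql_template.replace("__UUID_LIST__", &placeholders.join(", "))
}

fn main() {
    let sql = "SELECT v FROM triples WHERE e IN (__UUID_LIST__) AND a = ?1";
    let expanded = expand_uuid_list(sql, 1, &["u-1", "u-2", "u-3"]);
    assert_eq!(
        expanded,
        "SELECT v FROM triples WHERE e IN (?2, ?3, ?4) AND a = ?1"
    );
}
```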

Added — mlua ddb.kv.* host API

  • MluaContext.cache: Arc<DdbCache> — the v0.3.2 stub fields
    for ddb.kv.get/set/del are gone. MluaContext now carries the
    cache handle alongside the store + dialect, and main.rs wires
    it from a single shared_ddb_cache: Arc<DdbCache> constructed
    before AppState::with_pool so REST/RESP3 dispatchers and the
    Lua function runtime hold the same Arc — Lua writes from a
    server function are immediately visible to subsequent REST
    cache GETs and vice versa.
  • ddb.kv.get(key) — returns a Lua string for UTF-8 values,
    nil for missing keys, and raises a Lua RuntimeError if a
    present value is not valid UTF-8 (binary blobs belong in object
    storage, not the string-shaped Lua boundary).
  • ddb.kv.set(key, value [, ttl_seconds]) — accepts an
    optional trailing Option<u64> TTL. 0 is treated as
    "no expiry" so dynamic-TTL callers don't need to special-case
    the zero literal.
  • ddb.kv.del(key) — returns a bool indicating whether the
    key existed across any cache tier prior to deletion.
  • 5 new mlua tests exercising roundtrip, TTL expiry, deletion,
    missing-key, and the non-UTF-8 error path. The total mlua test
    count is now 30 passing (up from 23 in v0.3.2).
  • Stub-error assertion trimmed — the v0.3.2 mlua hardening
    sprint added a ddb_stubs_all_raise_lua_error test that loop-
    asserted ddb.kv.* raised NotYetImplemented. That assertion
    is now wrong; the kv methods are removed from the loop and the
    test continues to guard the still-stubbed ddb.notify and
    friends (if any).
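The "0 means no expiry" convention for ddb.kv.set can be sketched as below. `normalize_ttl` is a hypothetical name, not the actual binding code; it only illustrates why dynamic-TTL callers never need to special-case the zero literal.

```rust
use std::time::Duration;

// Sketch: map the optional Lua TTL argument onto the cache's expiry type,
// treating both "absent" and 0 as "no expiry".
fn normalize_ttl(ttl_seconds: Option<u64>) -> Option<Duration> {
    match ttl_seconds {
        None | Some(0) => None, // no expiry
        Some(secs) => Some(Duration::from_secs(secs)),
    }
}

fn main() {
    assert_eq!(normalize_ttl(None), None);
    assert_eq!(normalize_ttl(Some(0)), None);
    assert_eq!(normalize_ttl(Some(30)), Some(Duration::from_secs(30)));
}
```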

Changed

  • DarshanQL executor — Tier 2 statement-type gates. DEFINE TABLE,
    DEFINE FIELD, RELATE, SELECT fields containing
    ->edge traversal, and count(->edge) computed fields now check
    ctx.dialect.supports_*() and return InvalidQuery with a
    v0.3.3-tracking message on dialects that don't support them
    (SQLite today). PostgreSQL production behaviour is byte-for-byte
    unchanged because PgDialect inherits the default-true.
  • Workspace version bumped 0.3.2 → 0.3.3 (released as v0.3.3
    because Cargo's strict three-part semver disallows a 0.3.2.1
    version field; sprint files and agent reports retain the
    v0.3.2.1 naming for traceability).
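The default-true capability-gate pattern described above can be sketched like this. Trait and method names mirror the release notes; the error type and `gate_ddl` helper are stand-ins, not the real executor code.

```rust
// Sketch of the SqlDialect capability gates: defaults are true, so
// PgDialect inherits the full surface unchanged; SqliteDialect opts out.
trait SqlDialect {
    fn supports_ddl(&self) -> bool { true }
    fn supports_graph_traversal(&self) -> bool { true }
    fn supports_hybrid_search(&self) -> bool { true }
}

struct PgDialect;
impl SqlDialect for PgDialect {}

struct SqliteDialect;
impl SqlDialect for SqliteDialect {
    fn supports_ddl(&self) -> bool { false }
    fn supports_graph_traversal(&self) -> bool { false }
    fn supports_hybrid_search(&self) -> bool { false }
}

// Hypothetical dispatch-time check: refuse DDL on dialects without it.
fn gate_ddl(dialect: &dyn SqlDialect) -> Result<(), String> {
    if dialect.supports_ddl() {
        Ok(())
    } else {
        Err("InvalidQuery: DDL is not supported on this dialect".into())
    }
}

fn main() {
    assert!(gate_ddl(&PgDialect).is_ok());
    assert!(gate_ddl(&SqliteDialect).is_err());
}
```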

Deferred to v0.3.2.2 / v0.3.3

  • The portable Pg-or-SQLite hookup for SELECT / CREATE / INSERT /
    RETRACT through ctx.store (rather than ctx.pool) is plumbed
    through ExecutorContext but the executor body still reaches for
    the pool directly for the read/write SQL — the SurrealQL-shaped AST
    consumed by query/darshql/executor.rs is independent of the
    JSON-shaped QueryAST driven by plan_query_with_dialect, and the
    v0.3.3 milestone tracks unifying the two planner surfaces so the
    same Store::query path serves both. Until then the gates are the
    safety net.
  • INFO FOR against :schema/* triples on SQLite — the storage
    shape is portable but the planner pieces have not been ported.
    Tracked alongside the DDL gate in v0.3.3.
  • See DEFERRED.md for the full deferral list with rationale.

Full Changelog: v0.3.2...v0.3.3

DarshJDB v0.3.2 — SQLite Backend + mlua Runtime

14 Apr 23:05


[0.3.2] - 2026-04-15 — SQLite Backend + mlua Runtime + Dialect Abstraction

The v0.3.1 architecture wave laid the trait boundaries; v0.3.2 fills
them in. Three sprint branches developed in parallel land together:
a real SQLite backend behind the Store trait, a SqlDialect trait
that lets the DarshanQL planner emit Postgres or SQLite SQL from the
same AST, and an embedded Lua 5.4 function runtime with a hardened
sandbox and a wired ddb.* host API. PostgreSQL 16 is still the
production HTTP backend — sqlite-only HTTP boot lands in v0.3.3.

Added

  • SqliteStore (gated on --features sqlite-store) — full
    Store trait implementation over a bundled rusqlite 0.31
    database. Migrations live at migrations/sqlite/001_initial.sql
    (triples + darshan_tx_seq). Uses IMMEDIATE transactions with a
    5-second busy_timeout so concurrent set_triples paths do not
    deadlock. 12 unit tests covering migration, set/get/retract
    roundtrip, TTL expiry, schema inference, concurrent batch ingest,
    and timestamp parsing.
  • SqlDialect trait + PgDialect / SqliteDialect impls — the
    DarshanQL planner now routes through a dialect handle so the same
    QueryAST produces Postgres SQL ($1-style placeholders, JSONB
    operators, ::uuid casts, ANY($1::uuid[]) batches) or SQLite SQL
    (?N placeholders, JSON LIKE fallbacks, IN(__UUID_LIST__)
    templates expanded at bind time). PlanCache instances are now
    pinned to a specific dialect so a Postgres planner cache can never
    hand back SQLite SQL or vice versa. Snapshot parity tests cover
    every WHERE op, ORDER, LIMIT/OFFSET, search, semantic, hybrid, and
    nested combination across both dialects.
  • MluaRuntime (gated on --features mlua-runtime) — embedded
    mlua 0.10 with vendored Lua 5.4. Hardened sandbox strips os.execute,
    io, require, dofile, loadfile, load, loadstring,
    string.dump, debug, collectgarbage, the raw accessors, every
    bytecode loader path, and pins ChunkMode::Text on every load.
    Per-invocation environment isolation: each call gets a fresh proxy
    table over a frozen safe_globals snapshot so string.sub = ...
    in one user chunk cannot leak into another tenant. Wall-clock
    timeout via tokio::time::timeout + call_async. Function path
    containment via canonicalize + starts_with check (rejects
    ../escape.lua). Single Mutex<Lua> serializes invocations
    (concurrency=1 is honest until v0.4 brings a Pool<Lua>). 23 unit
    tests covering every sandbox escape vector.
  • Wired ddb.* host API — when MluaRuntime is constructed with
    an MluaContext (production server boot path), the Lua host
    bindings are wired live against the runtime-selected Store and
    SqlDialect:
    • ddb.query(json_ast) parses a DarshJQL AST, plans through the
      pinned dialect, dispatches via Store::query, returns rows as a
      Lua table.
    • ddb.triples.get(uuid_string) calls Store::get_entity.
    • ddb.triples.put(uuid_string, attribute, value) allocates a
      fresh tx_id via Store::next_tx_id and writes via
      Store::set_triples.
    • ddb.log.{debug,info,warn,error} forward into tracing with
      structured message fields and a 64 KiB cap to prevent OOM via
      string.rep.
  • DDB_FUNCTION_RUNTIME=mlua dispatch in main.rs, mirroring
    the existing DDB_FUNCTION_RUNTIME=v8 pattern. Subprocess
    (ProcessRuntime) remains the safe default. Misconfiguration
    (e.g. mlua requested without --features mlua-runtime) emits a
    clear warn and falls back to subprocess.
  • Top-level Store + SqlDialect handles in main.rs —
    Arc<dyn Store + Send + Sync> and Arc<dyn SqlDialect + Send + Sync>
    constructed once at boot from the existing PgTripleStore + PgPool
    path. Today they wrap Postgres; in v0.3.3 the same handles flow
    out of the URL-scheme dispatch branch.
  • docs/SQL_DIALECTS.md describing the dialect trait surface,
    what differs between Postgres and SQLite, and the v0.4 portable IR
    roadmap.
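One axis of the dialect split described above, the placeholder style, can be sketched in a few lines. The real trait covers much more (JSONB operators, uuid casts, batch templates); `placeholder` and `where_eq` here are illustrative names only.

```rust
// Sketch: same AST fragment, two placeholder conventions.
trait SqlDialect {
    fn placeholder(&self, n: usize) -> String;
}

struct PgDialect;
impl SqlDialect for PgDialect {
    fn placeholder(&self, n: usize) -> String {
        format!("${n}") // Postgres $N style
    }
}

struct SqliteDialect;
impl SqlDialect for SqliteDialect {
    fn placeholder(&self, n: usize) -> String {
        format!("?{n}") // SQLite ?N style
    }
}

fn where_eq(dialect: &dyn SqlDialect, column: &str) -> String {
    format!("{column} = {}", dialect.placeholder(1))
}

fn main() {
    assert_eq!(where_eq(&PgDialect, "attribute"), "attribute = $1");
    assert_eq!(where_eq(&SqliteDialect, "attribute"), "attribute = ?1");
}
```

Pinning PlanCache instances to one dialect, as the notes describe, follows directly: a cached SQL string is only valid for the placeholder convention it was emitted under.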

Changed

  • darshql/dialect.rs adds the trait extraction; query/mod.rs
    routes plan_query through plan_query_with_dialect(ast,
    &PgDialect) so v0.3.1 callers see byte-identical SQL output.
  • Front-door DATABASE_URL validation in main.rs rejects sqlite:
    URLs with a clear message: SqliteStore is wired into the function
    runtime and the Store trait, but the HTTP server's auth, anchor,
    search, agent_memory, and chunked_uploads bootstraps are still
    Postgres-only and a sqlite-only HTTP boot lands in v0.3.3. Misconfig
    surfaces immediately instead of as a cryptic pg_advisory_lock panic.
  • SqliteStoreTx::{commit,rollback} are stateless markers that match
    PgStoreTx symmetry — multi-statement transactions through the
    dynamic dispatch surface are tracked for v0.3.3.
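The front-door DATABASE_URL validation amounts to a cheap scheme check before any bootstrap runs. This is a sketch with an illustrative function name and message, not the actual main.rs code.

```rust
// Sketch: reject sqlite: URLs up front with an actionable message,
// instead of letting boot fail later inside a Postgres-only bootstrap.
fn validate_database_url(url: &str) -> Result<(), String> {
    if url.starts_with("sqlite:") {
        return Err(
            "sqlite: URLs are not yet supported for HTTP boot; \
             the HTTP bootstraps are Postgres-only until v0.3.3"
                .into(),
        );
    }
    Ok(())
}

fn main() {
    assert!(validate_database_url("postgres://localhost/ddb").is_ok());
    assert!(validate_database_url("sqlite::memory:").is_err());
}
```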

Security

The mlua runtime landed under a full security audit by the
gsd-code-reviewer, gsd-security-auditor, gsd-nyquist-auditor,
gsd-integration-checker, and gsd-doc-verifier agents. Findings
that landed as fixes inside the v0.3.2 sprint:

  • MJ-02 — drop redundant per-invocation semaphore (admitted N
    permits but every admitted task locked the same Mutex<Lua>, so
    the permit cap was theatre).
  • MJ-03 + MN-01 — user log text passed as a structured message
    field (not a captured format identifier) so embedded newlines are
    escaped by the log formatter instead of injecting fake log lines.
    64 KiB cap on a single user log.
  • MN-03 + F6 — canonicalize and validate the functions directory
    at construction time; reject ../escape.lua traversal via
    canonicalize + starts_with containment check.
  • MN-04 — switched the per-invocation source read from blocking
    std::fs::read_to_string to tokio::fs::read_to_string and moved
    it before the Mutex<Lua> lock so I/O does not block the VM mutex.
  • F4 — per-invocation environment isolation via fresh proxy
    tables over a frozen safe_globals snapshot.
  • F5 — ChunkMode::Text pinned on every chunk load to refuse
    bytecode (which can bypass every source-level sandbox check).
  • F7 — wall-clock timeout via tokio::time::timeout +
    call_async so CPU-cooperative user code cannot hang the worker.
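The MJ-03/MN-01 mitigation shape (cap, then pass-by-field) can be sketched without the tracing crate. `cap_user_log` is a hypothetical helper; the point is that user text is truncated to 64 KiB and handed to the logger as a structured value, never spliced into the format string.

```rust
const MAX_USER_LOG_BYTES: usize = 64 * 1024;

// Sketch: truncate user-supplied log text on a char boundary at or
// below the byte cap, then emit it as a structured field so the
// formatter escapes embedded newlines instead of letting them forge
// extra log lines.
fn cap_user_log(text: &str) -> &str {
    if text.len() <= MAX_USER_LOG_BYTES {
        return text;
    }
    let mut end = MAX_USER_LOG_BYTES;
    while !text.is_char_boundary(end) {
        end -= 1;
    }
    &text[..end]
}

fn main() {
    let long = "x".repeat(70 * 1024);
    assert_eq!(cap_user_log(&long).len(), 64 * 1024);
    assert_eq!(cap_user_log("hello"), "hello");
    // In the real binding the capped value is emitted as a field
    // (not a captured format identifier), so newlines get escaped.
}
```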

Cargo features

sqlite-store = ["dep:rusqlite"]      # SqliteStore backend
mlua-runtime = ["dep:mlua"]          # Embedded Lua 5.4 function runtime

Both default-off so production builds skip the bundled SQLite + Lua
compilation cost. All four feature combos
(default, sqlite-store, mlua-runtime, sqlite-store + mlua-runtime)
are covered by cargo check, cargo clippy --all-targets -D warnings,
and cargo test --lib in CI.

Known limitations / deferred to v0.3.2.1

  • darshql/executor.rs rewire onto Store::query — the bespoke
    SurrealQL-style statement executor (959 lines, 12 statement types,
    20+ pg-specific helpers including graph traversal and DEFINE TABLE)
    still uses PgPool directly. The simpler parse_darshan_ql →
    plan_query → execute_query JSON-AST path is fully wired through the
    Store trait via PgStore::query, which is what the mlua ddb.query
    binding uses. The richer executor lands in v0.3.2.1.
  • SqliteStore::query — currently returns InvalidQuery because
    the v0.3.2 SQLite SQL emission path covers triple-level CRUD but
    not the full DarshanQL surface. Triple-level APIs (set_triples,
    get_entity, retract, next_tx_id, get_schema) are wired
    end-to-end and exercised by the ddb.triples.* Lua bindings against
    a real :memory: SqliteStore.
  • ddb.kv.{get,set} — kept as NotYetImplemented with an updated
    message. The DdbCache (slice 10) is keyed on the HTTP request
    boundary and is not exposed to the function runtime; tracked for
    v0.3.2.1.
  • CPU-bound Lua mid-instruction interruption — the mlua 0.10
    set_interrupt hook lands in v0.3.3. Today the wall-clock timeout
    cancels at the next yield boundary, which is sufficient for any
    cooperative user code (the lua_call_respects_wall_clock_cap test
    passes) but a while true do end tight loop is bounded only by
    the OS scheduler.
  • sqlite: URL HTTP boot — main.rs rejects sqlite: URLs at the
    front door because the auth/anchor/search/agent_memory/chunked_uploads
    bootstraps are still Postgres-only. Sqlite-only HTTP boot lands in
    v0.3.3.
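The wall-clock-cap semantics described above can be illustrated std-only. The real runtime uses tokio::time::timeout + call_async; this mock uses a thread and a channel purely to show the shape, including the caveat that a worker which never yields keeps running past the deadline.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Sketch: the caller waits a bounded wall-clock time for the result.
// Cooperative work that finishes in time succeeds; work that overruns
// is reported as timed out (the underlying thread keeps running,
// mirroring the "tight loop" caveat in the notes above).
fn run_with_cap<F, T>(cap: Duration, f: F) -> Result<T, &'static str>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(f());
    });
    rx.recv_timeout(cap).map_err(|_| "wall-clock cap exceeded")
}

fn main() {
    // Fast work completes within the cap.
    assert_eq!(run_with_cap(Duration::from_secs(1), || 42), Ok(42));
    // Slow work is reported as timed out at the deadline.
    let slow = run_with_cap(Duration::from_millis(50), || {
        thread::sleep(Duration::from_secs(5));
        0
    });
    assert!(slow.is_err());
}
```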

Acknowledgements

The v0.3.2 sprint shipped under the gsd-army audit protocol:
gsd-code-reviewer, gsd-security-auditor, gsd-nyquist-auditor,
gsd-integration-checker, and gsd-doc-verifier. Every Mxx and Fx
finding above carries the audit tag of the agent that surfaced it.

Full Changelog: v0.3.1.1...v0.3.2

DarshJDB v0.3.1.1 — Security hotfix

14 Apr 22:11


Security hotfix for v0.3.1.

Critical: F1 — DatabaseConfig.url now wraps in Secret. Pre-hotfix, the Postgres connection URL including embedded password leaked to the tracing sink at every startup via tracing::info!(?cfg, "loaded configuration").

Also lands: 3 fix commits for bind_addr shadowing, pool max_lifetime logging, and misleading unsafe safety comments; 5 doc corrections (Store trait method names, error variants, cache-server auth semantics, cluster status response shape, stale postgres URL in rustdoc); 1 compose hardening (required POSTGRES_PASSWORD); 1 clippy debt sweep in config/mod.rs; 1 Pg/Sqlite StoreTx symmetry fix.

Full detail in commit log 32e9b18..v0.3.1.1.

Full Changelog: v0.3.1...v0.3.1.1

DarshJDB v0.3.1 — Architecture Wave

14 Apr 21:10


[0.3.1] - 2026-04-15 — Architecture Wave

Three feature branches (PR #3, #5, #6) that spent the v0.3.0 release cycle
in-flight now land together as the architecture wave. v0.3.1 does not
change the DDB runtime requirements (PostgreSQL 16 is still mandatory);
it ships the trait boundaries, typed config surface, and cluster
primitives that v0.3.2 will build on.

Added — Slice 17 · Typed DdbConfig hierarchy (PR #3)

  • 13-subsystem strongly-typed config tree: server, database,
    auth, cors, dev, cache, embedding, llm, storage, schema,
    anchor, memory, rules — each with its own Rust struct and
    defaults.
  • Layered loading: defaults → config.toml → DDB__* / DARSH__*
    env vars, decoded via config 0.15 with the convert-case feature so
    enum fields deserialise from kebab/camel/snake transparently.
  • Secret<T> wrapper redacts sensitive fields in Debug output
    (JWT secrets, SMTP passwords, API keys) with <redacted>.
  • Backward compatibility: legacy flat env vars still work; the typed
    loader only takes priority when both are set, and DDB_RULES_FILE
    continues to override cfg.rules.file_path when present.
  • 8 config unit tests green; cfg.server.log_level seeds RUST_LOG
    before tracing init so log levels land correctly.
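The Secret<T> redaction described above comes down to a custom Debug impl. A minimal sketch, assuming only the Debug surface (the real type likely also guards Display, serialization, and cloning):

```rust
use std::fmt;

// Sketch: a wrapper whose Debug output never shows the inner value,
// so `tracing::info!(?cfg, ...)` cannot leak credentials.
struct Secret<T>(T);

impl<T> Secret<T> {
    // Explicit, greppable access point for the sensitive value.
    fn expose(&self) -> &T {
        &self.0
    }
}

impl<T> fmt::Debug for Secret<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("<redacted>")
    }
}

fn main() {
    let url = Secret(String::from("postgres://user:hunter2@db/ddb"));
    assert_eq!(format!("{url:?}"), "<redacted>");
    assert!(url.expose().contains("hunter2")); // only via expose()
}
```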

Added — Cluster module · Horizontal scaling baseline (PR #5)

  • ddb_server::cluster: new top-level module holding all
    multi-replica primitives.
  • Advisory-lock leader election via pg_try_advisory_lock wrapped
    in spawn_singleton_task + spawn_singleton_supervisor: only one
    replica runs each singleton (e.g. LOCK_EXPIRY_SWEEPER) at any time;
    failover is automatic when the leader's Postgres session drops.
  • Cross-replica WS fanout via LISTEN ddb_changes: the extracted
    notify_listener task auto-reconnects on listener-session drop and
    feeds ChangeEvent into the local broadcast channel, so WebSocket
    subscribers attached to any replica see mutations from any other.
  • /cluster/status endpoint alongside /health and /metrics —
    no auth required; exposes node_id, current leader, held locks.
  • 9 lib tests + 6 integration tests covering lock acquisition,
    supervisor restart, and notify reconnect.
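The advisory-lock election shape can be sketched with the Postgres call mocked out. In reality each replica runs `SELECT pg_try_advisory_lock($1)` on a held session, so the lock releases (and failover happens) when that session drops; here an in-process atomic stands in for it, and all names are illustrative.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Mock analogue of pg_try_advisory_lock: non-blocking, first caller wins.
struct MockAdvisoryLock {
    held: AtomicBool,
}

impl MockAdvisoryLock {
    fn try_acquire(&self) -> bool {
        self.held
            .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
            .is_ok()
    }
}

fn main() {
    let lock = MockAdvisoryLock { held: AtomicBool::new(false) };
    let mut leaders = 0;
    for _replica in 0..3 {
        if lock.try_acquire() {
            // This replica alone runs the singleton
            // (e.g. LOCK_EXPIRY_SWEEPER in the notes above).
            leaders += 1;
        }
    }
    assert_eq!(leaders, 1);
}
```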

Added — Architecture wave (PR #6)

  • Store trait at packages/server/src/store/: defines the
    pluggable storage boundary (get_triple, put_triple, delete_triple,
    query_pattern, bulk_ingest, ...). PgStore is a full delegation
    adapter around the existing PgTripleStore and is the default.
  • SqliteStore compile-time stub gated behind --features
    sqlite-store (rusqlite 0.31 bundled). Every method returns
    StoreError::NotYetImplemented; the stub exists to verify the trait
    boundary before v0.3.2 implements the real schema. This is NOT a
    functional SQLite backend — DarshJDB v0.3.1 still requires
    PostgreSQL.
  • docker-compose.ha.yml production HA stack: Patroni 3-node +
    etcd + HAProxy + pgBouncer + WAL-G + MinIO + 3 DDB replicas + nginx +
    Prometheus + Grafana. Companion configs under deploy/ha/.
  • docs/HORIZONTAL_SCALING.md full guide: Patroni failover, WAL-G
    PITR restore runbook, pgBouncer tuning, the cluster module reference
    (leader election, singleton supervisor, notify fanout), live-readiness
    checklist.
  • docs/STORAGE_BACKENDS.md: honest portability assessment and
    v0.3.2/v0.4 roadmap for the SqliteStore + DarshanQL dialect work.
  • NOT FOR PRODUCTION banner on the single-node docker-compose.yml.
  • oauth2 5.0.0 stable (up from 5.0.0-rc.1); DDB_WATCH dev shim
    removed; DARSH_CACHE_PASSWORD now required for the cache server.

Fixed

  • notify platform feature: the v0.3.0 followup CI fix removed the
    explicit macos_fsevent feature that broke Linux builds. v0.3.1
    keeps notify = "7" with default features so each platform's backend
    is auto-selected.
  • Workspace version bumped 0.3.0 → 0.3.1 across all crates.

Known limitations — will land in v0.3.2

  • PostgreSQL is still required. The Store trait boundary is in
    place, but SqliteStore is a compile-time stub only.
  • DarshanQL emits Postgres-specific SQL (JSONB operators, UUID
    casts, DISTINCT ON, recursive CTEs, make_interval). A
    SqlDialect abstraction is required before the SQLite backend can
    execute real queries.
  • Function runtime still uses subprocess ProcessRuntime — the
    embedded Lua / mlua 0.10 runtime is deferred to v0.3.2.
  • 4 require_admin_auth_* tests remain #[ignore] pending a
    real testcontainers harness; 15 pre-existing baseline failures in
    views/automations/formulas/graph/plugins/storage/tables likewise
    marked #[ignore].

What's Changed

  • Grand Transformation v0.3.0 — Redis + Memory + MCP + Vector + Observability by @darshjme in #1
  • fix(ci): v0.3.0 followup — Dockerfile workspace members + login_attempts off-by-one + notify platform feature by @darshjme in #2
  • feat(admin): Slice 22 — graph explorer UI with force-directed visualisation by @darshjme in #4
  • feat(config): Slice 17 — typed DdbConfig hierarchy by @darshjme in #3
  • feat(cluster): horizontal scaling — advisory-lock leader election + LISTEN/NOTIFY fanout by @darshjme in #5
  • feat(arch): v0.3.1 architecture wave — Store trait + HA compose + pgBouncer + WAL-G + oauth2 stable by @darshjme in #6

Full Changelog: https://github.com/darshjme/darshjdb/commits/v0.3.1

v0.3.0

14 Apr 12:43
