4. Native Ports
py-bbn is the reference implementation, but it is not the only runtime we
maintain. If you need the same exact reasoning surface in a native or
non-Python environment, we also have C#, Java, C++, and
TypeScript/JavaScript ports available on request.
These ports are maintained against the same shared fixtures and benchmark
harness used in the darkstar-bbn superproject, and they target parity for
the richer exact-query surface:
`pquery`, `jquery`, `condquery`, `pevidence`, `intervene`, `iquery`, `cpquery`, `cquery`, `cjquery`, `ccondquery`, and `cpevidence`.
Current port names and namespaces:
- C#: `RocketVector.Darkstar`
- Java: `io.rocketvector.darkstar`
- C++: `darkstar::reasoning`
- TypeScript / JavaScript: `darkstar`
4.1. Code Examples
Each example below loads the same tracked Huang benchmark network from
examples/huang.bbn.json and runs prior, interventional, and
counterfactual queries against it.
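The schema of `examples/huang.bbn.json` is not reproduced in this section. As a rough illustration of the kind of structural checks a loader can run on a JSON network spec, the sketch below uses a hypothetical miniature spec; the node names, states, and CPT layout are assumptions for illustration, not the actual contents of the Huang file.

```python
import json

# Hypothetical miniature BBN spec. The real examples/huang.bbn.json schema is
# not shown in this document, so this shape is an illustrative assumption.
spec_text = """
{
  "nodes": [
    {"name": "A", "states": ["on", "off"], "parents": [], "cpt": [0.5, 0.5]},
    {"name": "H", "states": ["on", "off"], "parents": ["A"],
     "cpt": [0.9, 0.1, 0.2, 0.8]}
  ],
  "edges": [["A", "H"]]
}
"""

spec = json.loads(spec_text)

# Every edge endpoint must name a declared node.
names = {node["name"] for node in spec["nodes"]}
for parent, child in spec["edges"]:
    assert parent in names and child in names

# Each CPT row (one row per parent configuration) must sum to 1.
for node in spec["nodes"]:
    width = len(node["states"])
    rows = [node["cpt"][i:i + width] for i in range(0, len(node["cpt"]), width)]
    assert all(abs(sum(row) - 1.0) < 1e-9 for row in rows)

print(sorted(names))  # ['A', 'H']
```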
4.1.1. C#
```csharp
using System;
using System.Collections.Generic;
using RocketVector.Darkstar;

var model = ReasoningModel.FromBbnJson("examples/huang.bbn.json");

var prior = model.pquery(new[] { "H" }, null)["H"];
var observed =
    model.pquery(
        new[] { "H" },
        model.e(new Dictionary<string, string> { ["A"] = "on", ["C"] = "on" }))["H"];
var interventional =
    model.iquery(new[] { "H" }, new[] { "on" }, new[] { "C" }, new[] { "on" });
var counterfactual =
    model.cquery(
        "H",
        new Dictionary<string, string> { ["C"] = "on", ["H"] = "on" },
        new Dictionary<string, string> { ["C"] = "off" });

Console.WriteLine(prior.ProbOf(new Dictionary<string, string> { ["H"] = "on" }));
Console.WriteLine(observed.ProbOf(new Dictionary<string, string> { ["H"] = "on" }));
Console.WriteLine(interventional["H"]);
Console.WriteLine(counterfactual.ProbOf(new Dictionary<string, string> { ["H"] = "on" }));
```
4.1.2. Java
```java
import io.rocketvector.darkstar.ReasoningModel;
import java.util.List;
import java.util.Map;

public class HuangExample {
    public static void main(String[] args) {
        var model = ReasoningModel.fromBbnJson("examples/huang.bbn.json");

        var prior = model.pquery(List.of("H"), null).get("H");
        var observed =
            model.pquery(List.of("H"), model.e(Map.of("A", "on", "C", "on"))).get("H");
        var interventional =
            model.iquery(List.of("H"), List.of("on"), List.of("C"), List.of("on"));
        var counterfactual =
            model.cquery("H", Map.of("C", "on", "H", "on"), Map.of("C", "off"));

        System.out.println(prior.probOf(Map.of("H", "on")));
        System.out.println(observed.probOf(Map.of("H", "on")));
        System.out.println(interventional.get("H"));
        System.out.println(counterfactual.probOf(Map.of("H", "on")));
    }
}
```
4.1.3. C++
```cpp
#include <iostream>
#include "reasoning.h"

using darkstar::reasoning::Assignment;

int main() {
    auto model =
        darkstar::reasoning::ReasoningModel::fromBbnJsonFile("examples/huang.bbn.json");

    auto prior = model->pquery({"H"}).at("H");
    auto observed =
        model->pquery({"H"}, model->e({{"A", "on"}, {"C", "on"}})).at("H");
    auto interventional = model->iquery({"H"}, {"on"}, {"C"}, {"on"});
    auto counterfactual =
        model->cquery("H", {{"C", "on"}, {"H", "on"}}, {{"C", "off"}});

    std::cout << prior.probOf(Assignment{{"H", "on"}}) << '\n';
    std::cout << observed.probOf(Assignment{{"H", "on"}}) << '\n';
    std::cout << interventional.at("H") << '\n';
    std::cout << counterfactual.probOf(Assignment{{"H", "on"}}) << '\n';
    return 0;
}
```
4.1.4. TypeScript / JavaScript
The TypeScript port compiles to JavaScript for Node, so the same public API is available from either language. The example below is plain modern JavaScript.
```javascript
import { readFileSync } from 'node:fs';
import { ReasoningModel } from 'darkstar';

const specification = JSON.parse(readFileSync('examples/huang.bbn.json', 'utf8'));
const model = ReasoningModel.fromBbnJson(specification);

const prior = model.pquery(['H']).get('H');
const observed = model.pquery(['H'], model.e({ A: 'on', C: 'on' })).get('H');
const interventional = model.iquery(['H'], ['on'], ['C'], ['on']);
const counterfactual = model.cquery('H', { C: 'on', H: 'on' }, { C: 'off' });

console.log(prior.probOf({ H: 'on' }));
console.log(observed.probOf({ H: 'on' }));
console.log(interventional.get('H'));
console.log(counterfactual.probOf({ H: 'on' }));
```
4.2. Runtime Comparison
The shared benchmark harness runs the same deterministic graph and query corpus
across all five implementations and checks every non-Python output against the
Python reference. The saved runs below use the committed 500-node benchmark
model, all 12 exact query workloads, and 5 repetitions in both cold and
warm modes.
In these saved runs, every non-Python port matched the Python reference on all 12 exact query workloads in both temperature modes.
**Note:** These are machine-specific wall-clock numbers from the committed benchmark artifacts under `_benchmark/_results/query-cold-500-r5` and `_benchmark/_results/query-warm-500-r5`. They are useful for relative comparisons, not as universal absolutes.
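The parity check itself is conceptually simple: every non-Python output must agree with the Python reference value for the same query. Below is a minimal sketch of the kind of tolerance comparison such a harness might apply; the fixture layout, key names, and the tolerances are illustrative assumptions, not the committed harness format.

```python
import math

# Illustrative outputs keyed by (workload, "variable=state"). The real
# harness's fixture layout and tolerance are assumptions for this sketch.
python_reference = {("pquery", "H=on"): 0.8243, ("iquery", "H=on"): 0.7566}
cpp_port_output  = {("pquery", "H=on"): 0.8243, ("iquery", "H=on"): 0.7566}

def matches_reference(reference, candidate, rel_tol=1e-9):
    """True when the candidate agrees with the reference on every key."""
    if reference.keys() != candidate.keys():
        return False
    return all(
        math.isclose(reference[k], candidate[k], rel_tol=rel_tol, abs_tol=1e-12)
        for k in reference
    )

print(matches_reference(python_reference, cpp_port_output))  # True
```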
4.2.1. Overall 500-Node Query Sweep
| Language | Cold overall (ms) | Cold vs Python | Warm overall (ms) | Warm vs Python |
|---|---|---|---|---|
| Python | 12.678 | baseline | 0.0405 | baseline |
| TypeScript / JavaScript | 2.373 | 5.34x faster | 0.1996 | 4.93x slower |
| Java | 1.996 | 6.35x faster | 0.1057 | 2.61x slower |
| C# | 4.220 | 3.00x faster | 0.0308 | 1.31x faster |
| C++ | 1.670 | 7.59x faster | 0.1390 | 3.43x slower |
The cold run is the clearest first-hit comparison. On this machine, the native
ports were materially faster than Python on the full 500-node exact-query
sweep. The warm run is different: py-bbn reuses cached calibrated state and
counterfactual context aggressively, which is why repeated exact queries stay
very competitive there.
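The warm-mode effect described above, calibrating once and then serving repeated queries from cached state, can be illustrated with a generic memoization pattern. This is a sketch of the idea only, not py-bbn's actual cache implementation; the class, method names, and placeholder posterior are all hypothetical.

```python
class CachedModel:
    """Sketch of calibrate-once, query-many caching; not py-bbn's real code."""

    def __init__(self):
        self.calibrations = 0
        self._posterior_cache = {}

    def _calibrate(self, evidence_key):
        # Stand-in for an expensive junction-tree calibration pass.
        self.calibrations += 1
        return {"H=on": 0.75, "H=off": 0.25}  # placeholder posterior

    def pquery(self, evidence=None):
        # Key the cache on the evidence, so identical repeat queries are cheap.
        key = tuple(sorted((evidence or {}).items()))
        if key not in self._posterior_cache:
            self._posterior_cache[key] = self._calibrate(key)
        return self._posterior_cache[key]

model = CachedModel()
for _ in range(5):            # five "warm" repetitions of the same query
    model.pquery({"A": "on"})
print(model.calibrations)     # 1: calibrated once, then served from cache
```

This is why a warm repeat of an exact query in Python can cost a fraction of a millisecond even when the cold first hit costs tens of milliseconds.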
The largest cold-query deltas in the saved run showed up on causal and
counterfactual workloads. For example, `iquery` averaged 17.55 ms in
Python versus 0.85 ms in C++ and 0.98 ms in TypeScript, while
`cquery` averaged 23.68 ms in Python versus 4.09 ms in C++ and
4.12 ms in TypeScript.
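Those per-workload averages translate into large multipliers. The arithmetic, using the numbers quoted above from the saved cold run, is:

```python
# Cold per-workload averages (ms) quoted from the saved benchmark run.
iquery_ms = {"python": 17.55, "cpp": 0.85, "typescript": 0.98}
cquery_ms = {"python": 23.68, "cpp": 4.09, "typescript": 4.12}

def speedup_vs_python(workload_ms):
    """Speedup multiplier of each port relative to the Python average."""
    base = workload_ms["python"]
    return {lang: round(base / ms, 2)
            for lang, ms in workload_ms.items() if lang != "python"}

print(speedup_vs_python(iquery_ms))  # {'cpp': 20.65, 'typescript': 17.91}
print(speedup_vs_python(cquery_ms))  # {'cpp': 5.79, 'typescript': 5.75}
```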
For comparisons against third-party inference toolkits rather than the native Darkstar ports, see Benchmarks.