4. Native Ports
py-bbn is the reference implementation, but it is not the only runtime we
maintain. If you need exactly the same reasoning surface in a native or
non-Python environment, C#, Java, C++, TypeScript/JavaScript, R, Julia, Go,
Rust, Octave, Swift, Ruby, and Lua ports are available on request.
These ports are maintained against the same shared fixtures and benchmark harness used for the Python reference, and they target parity for the richer exact-query surface:
`pquery`, `jquery`, `condquery`, `pevidence`, `intervene`, and `iquery`; and `cpquery`, `cquery`, `cjquery`, `ccondquery`, and `cpevidence`.
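To make the query verbs concrete, the sketch below shows by brute-force enumeration what a prior query, an observational query, and an interventional query each compute. The three-node network, its CPTs, and the helper names (`pquery_h_on`, `iquery_h_on`) are invented for illustration; this is not the ports' API and not the Huang fixture.

```python
# Illustrative-only sketch: a hypothetical network A -> C, A -> H, C -> H,
# where A confounds the C -> H relationship, so observing C and intervening
# on C give different answers for H.
from itertools import product

p_a = {"on": 0.4, "off": 0.6}
p_c_given_a = {"on": {"on": 0.7, "off": 0.3}, "off": {"on": 0.2, "off": 0.8}}
p_h_given_ac = {  # P(H=on | A, C)
    ("on", "on"): 0.9, ("on", "off"): 0.5,
    ("off", "on"): 0.6, ("off", "off"): 0.1,
}

def joint(a, c, h):
    # P(A=a, C=c, H=h) by the chain rule over the network's factorization.
    ph_on = p_h_given_ac[(a, c)]
    ph = ph_on if h == "on" else 1.0 - ph_on
    return p_a[a] * p_c_given_a[a][c] * ph

def pquery_h_on(evidence=None):
    # P(H=on | evidence) by summing the joint over consistent worlds.
    evidence = evidence or {}
    num = den = 0.0
    for a, c, h in product(["on", "off"], repeat=3):
        world = {"A": a, "C": c, "H": h}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(a, c, h)
        den += p
        if h == "on":
            num += p
    return num / den

def iquery_h_on(c_value):
    # P(H=on | do(C=c_value)): cut the A -> C edge, then marginalize A.
    return sum(p_a[a] * p_h_given_ac[(a, c_value)] for a in ["on", "off"])

prior = pquery_h_on()                    # P(H=on)
observed = pquery_h_on({"C": "on"})      # P(H=on | C=on)
interventional = iquery_h_on("on")       # P(H=on | do(C=on))
print(prior, observed, interventional)
```

Because A confounds C and H, the observational and interventional answers differ, which is exactly the distinction the `pquery`/`iquery` split encodes.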
Current port names and namespaces:
- C#: `RocketVector.DarkStar.Discrete` and `RocketVector.DarkStar.Continuous`
- Java: `io.rocketvector.darkstar.discrete` and `io.rocketvector.darkstar.continuous`
- C++: `rocketvector::darkstar::discrete` and `rocketvector::darkstar::continuous`
- TypeScript / JavaScript: package `@rocketvector/darkstar` with `darkstar.discrete` and `darkstar.continuous`
- R: package `darkstar` with S3 model classes and shared query verbs
- Julia: package `Darkstar` with multiple-dispatch model and query functions
- Go: module `darkstar` with package `darkstar`
- Rust: crate `darkstar` with `BbnModel` and typed query payloads
- Octave: package `darkstar` with `darkstar_`-prefixed script functions and C ABI-backed model handles
- Swift: package `Darkstar` with `BBNModel` and Swift error handling
- Ruby: gem `darkstar` with `Darkstar::BBN` and C extension-backed model handles
- Lua: module `darkstar` with C ABI-backed model handles
4.1. Code Examples
Each example below loads the checked-in Huang fixture for that port and runs prior, interventional, and counterfactual queries against it.
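The fixture's exact schema is not reproduced here. As a rough, hypothetical illustration of what a discrete-BBN JSON fixture in this style carries (node states, edges, CPTs), the sketch below round-trips a two-node toy through `json` and applies the structural checks a loader might perform. All field names are assumptions, not the actual `huang.bbn.json` schema.

```python
# Hypothetical fixture shape -- field names are illustrative assumptions,
# NOT the real huang.bbn.json schema.
import json

fixture = {
    "nodes": [
        {"id": "A", "states": ["on", "off"]},
        {"id": "H", "states": ["on", "off"]},
    ],
    "edges": [{"parent": "A", "child": "H"}],
    "cpts": {
        "A": [0.5, 0.5],            # P(A)
        "H": [0.8, 0.2, 0.3, 0.7],  # P(H | A), one row per parent state
    },
}

loaded = json.loads(json.dumps(fixture, indent=2))

# Every node must have a CPT, and every CPT row must sum to 1.
assert {n["id"] for n in loaded["nodes"]} == set(loaded["cpts"])
for node in loaded["nodes"]:
    probs = loaded["cpts"][node["id"]]
    k = len(node["states"])
    for i in range(0, len(probs), k):
        assert abs(sum(probs[i:i + k]) - 1.0) < 1e-9
print("fixture round-trips and CPT rows normalise")
```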
4.1.1. C#
```csharp
using System;
using System.Collections.Generic;
using RocketVector.DarkStar.Discrete;

var model = ReasoningModel.FromBbnJson("examples/huang.bbn.json");
var prior = model.pquery(new[] { "H" }, null)["H"];
var observed =
    model.pquery(
        new[] { "H" },
        model.e(new Dictionary<string, string> { ["A"] = "on", ["C"] = "on" }))["H"];
var interventional =
    model.iquery(new[] { "H" }, new[] { "on" }, new[] { "C" }, new[] { "on" });
var counterfactual =
    model.cquery(
        "H",
        new Dictionary<string, string> { ["C"] = "on", ["H"] = "on" },
        new Dictionary<string, string> { ["C"] = "off" });

Console.WriteLine(prior.ProbOf(new Dictionary<string, string> { ["H"] = "on" }));
Console.WriteLine(observed.ProbOf(new Dictionary<string, string> { ["H"] = "on" }));
Console.WriteLine(interventional["H"]);
Console.WriteLine(counterfactual.ProbOf(new Dictionary<string, string> { ["H"] = "on" }));
```
4.1.2. Java
```java
import io.rocketvector.darkstar.discrete.ReasoningModel;
import java.util.List;
import java.util.Map;

var model = ReasoningModel.fromBbnJson("examples/huang.bbn.json");
var prior = model.pquery(List.of("H"), null).get("H");
var observed =
    model.pquery(List.of("H"), model.e(Map.of("A", "on", "C", "on"))).get("H");
var interventional =
    model.iquery(List.of("H"), List.of("on"), List.of("C"), List.of("on"));
var counterfactual =
    model.cquery("H", Map.of("C", "on", "H", "on"), Map.of("C", "off"));

System.out.println(prior.probOf(Map.of("H", "on")));
System.out.println(observed.probOf(Map.of("H", "on")));
System.out.println(interventional.get("H"));
System.out.println(counterfactual.probOf(Map.of("H", "on")));
```
4.1.3. C++
```cpp
#include <iostream>
#include "reasoning.h"

using rocketvector::darkstar::discrete::Assignment;

auto model =
    rocketvector::darkstar::discrete::ReasoningModel::fromBbnJsonFile("examples/huang.bbn.json");
auto prior = model->pquery({"H"}).at("H");
auto observed =
    model->pquery({"H"}, model->e({{"A", "on"}, {"C", "on"}})).at("H");
auto interventional = model->iquery({"H"}, {"on"}, {"C"}, {"on"});
auto counterfactual =
    model->cquery("H", {{"C", "on"}, {"H", "on"}}, {{"C", "off"}});

std::cout << prior.probOf(Assignment{{"H", "on"}}) << '\n';
std::cout << observed.probOf(Assignment{{"H", "on"}}) << '\n';
std::cout << interventional.at("H") << '\n';
std::cout << counterfactual.probOf(Assignment{{"H", "on"}}) << '\n';
```
4.1.4. TypeScript / JavaScript
The TypeScript port compiles to JavaScript for Node, so the same public API is available from either language. The example below is plain modern JavaScript.
```javascript
import { readFileSync } from 'node:fs';
import { darkstar } from '@rocketvector/darkstar';

const specification = JSON.parse(readFileSync('examples/huang.bbn.json', 'utf8'));
const model = darkstar.discrete.ReasoningModel.fromBbnJson(specification);
const prior = model.pquery(['H']).get('H');
const observed = model.pquery(['H'], model.e({ A: 'on', C: 'on' })).get('H');
const interventional = model.iquery(['H'], ['on'], ['C'], ['on']);
const counterfactual = model.cquery('H', { C: 'on', H: 'on' }, { C: 'off' });

console.log(prior.probOf({ H: 'on' }));
console.log(observed.probOf({ H: 'on' }));
console.log(interventional.get('H'));
console.log(counterfactual.probOf({ H: 'on' }));
```
4.1.5. R
```r
library(darkstar)

model <- read_bbn_json(system.file("extdata", "huang.bbn.json", package = "darkstar"))
prior <- pquery(model, nodes = "H")$H
observed <- pquery(model, nodes = "H", evidence = evidence(A = "on", C = "on"))$H
interventional <- iquery(
  model,
  y = "H",
  y_values = "on",
  x = "C",
  x_values = "on"
)
counterfactual <- cquery(
  model,
  target = "H",
  evidence = evidence(C = "on", H = "on"),
  hypothetical = evidence(C = "off")
)

as.data.frame(prior)
as.data.frame(observed)
interventional[["H"]]
as.data.frame(counterfactual)
```
4.1.6. Julia
```julia
using Darkstar

path = joinpath(pkgdir(Darkstar), "fixtures", "huang.bbn.json")
model = read_bbn_json(path)
prior = pquery(model; nodes = ["H"])["H"]
observed = pquery(
    model;
    nodes = ["H"],
    evidence = Dict("A" => "on", "C" => "on"),
)["H"]
interventional = iquery(
    model;
    y = ["H"],
    y_values = ["on"],
    x = ["C"],
    x_values = ["on"],
    method = "graph",
)
counterfactual = cquery(
    model;
    target = "H",
    evidence = Dict("C" => "on", "H" => "on"),
    hypothetical = Dict("C" => "off"),
)

h_on(potential) =
    only(row["probability"] for row in potential.rows if row["assignment"]["H"] == "on")

println(h_on(prior))
println(h_on(observed))
println(interventional["H"])
println(h_on(counterfactual))
```
4.1.7. Go
```go
package main

import (
	"context"
	"fmt"

	"darkstar"
)

func main() {
	ctx := context.Background()
	model, err := darkstar.ReadBBNJSON("testdata/huang.bbn.json")
	if err != nil {
		panic(err)
	}
	defer model.Close()

	prior, err := model.PQuery(ctx, darkstar.Query{Nodes: []string{"H"}})
	if err != nil {
		panic(err)
	}
	observed, err := model.PQuery(ctx, darkstar.Query{
		Nodes:    []string{"H"},
		Evidence: map[string]string{"A": "on", "C": "on"},
	})
	if err != nil {
		panic(err)
	}
	interventional, err := model.IQuery(ctx, darkstar.CausalQuery{
		YNodes:  []string{"H"},
		YValues: []string{"on"},
		XNodes:  []string{"C"},
		XValues: []string{"on"},
		Method:  "graph",
	})
	if err != nil {
		panic(err)
	}
	counterfactual, err := model.CQuery(ctx, darkstar.CounterfactualQuery{
		Target:       "H",
		Evidence:     map[string]string{"C": "on", "H": "on"},
		Hypothetical: map[string]string{"C": "off"},
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(mustProb(prior["H"], map[string]string{"H": "on"}))
	fmt.Println(mustProb(observed["H"], map[string]string{"H": "on"}))
	fmt.Println(interventional["H"])
	fmt.Println(mustProb(counterfactual, map[string]string{"H": "on"}))
}

func mustProb(potential darkstar.Potential, assignment map[string]string) float64 {
	value, ok := potential.ProbOf(assignment)
	if !ok {
		panic("missing assignment")
	}
	return value
}
```
4.1.8. Rust
```rust
use std::collections::HashMap;

use darkstar::{BbnModel, CausalQuery, CounterfactualQuery, Query};

fn main() -> Result<(), darkstar::DarkstarError> {
    let model = BbnModel::from_json_file("testdata/huang.bbn.json", None)?;
    let prior = model.pquery(Query {
        nodes: vec!["H".to_string()],
        ..Default::default()
    })?;
    let observed = model.pquery(Query {
        nodes: vec!["H".to_string()],
        evidence: HashMap::from([
            ("A".to_string(), "on".to_string()),
            ("C".to_string(), "on".to_string()),
        ]),
        ..Default::default()
    })?;
    let interventional = model.iquery(CausalQuery {
        y_nodes: vec!["H".to_string()],
        y_values: vec!["on".to_string()],
        x_nodes: vec!["C".to_string()],
        x_values: vec!["on".to_string()],
        method: "graph".to_string(),
        ..Default::default()
    })?;
    let counterfactual = model.cquery(CounterfactualQuery {
        target: "H".to_string(),
        evidence: HashMap::from([
            ("C".to_string(), "on".to_string()),
            ("H".to_string(), "on".to_string()),
        ]),
        hypothetical: HashMap::from([("C".to_string(), "off".to_string())]),
        ..Default::default()
    })?;

    let h_on = HashMap::from([("H".to_string(), "on".to_string())]);
    println!("{}", prior["H"].prob_of(&h_on).unwrap());
    println!("{}", observed["H"].prob_of(&h_on).unwrap());
    println!("{}", interventional["H"]);
    println!("{}", counterfactual.prob_of(&h_on).unwrap());
    Ok(())
}
```
4.1.9. Octave
```octave
addpath("inst");

model = darkstar_read_bbn_json("testdata/huang.bbn.json");
prior = darkstar_pquery(model, "H");
observed = darkstar_pquery(
  model,
  struct("nodes", {{"H"}}, "evidence", struct("A", "on", "C", "on"))
);
interventional = darkstar_iquery(
  model,
  struct(
    "y_nodes", {{"H"}},
    "y_values", {{"on"}},
    "x_nodes", {{"C"}},
    "x_values", {{"on"}},
    "method", "graph"
  )
);
counterfactual = darkstar_cquery(
  model,
  struct(
    "target", "H",
    "evidence", struct("C", "on", "H", "on"),
    "hypothetical", struct("C", "off")
  )
);

prior.H
observed.H
interventional.H
counterfactual
darkstar_close(model);
```
4.1.10. Swift
```swift
import Darkstar

let model = try readBBNJSON("testdata/huang.bbn.json")
defer { model.close() }

let prior = try model.pquery(nodes: ["H"])
let observed = try model.pquery(
    nodes: ["H"],
    evidence: ["A": "on", "C": "on"])
let interventional = try model.iquery([
    "yNodes": ["H"],
    "yValues": ["on"],
    "xNodes": ["C"],
    "xValues": ["on"],
    "method": "graph",
])
let counterfactual = try model.cquery([
    "target": "H",
    "evidence": ["C": "on", "H": "on"],
    "hypothetical": ["C": "off"],
])

print(prior["H"]?.probability(of: ["H": "on"]) ?? 0)
print(observed["H"]?.probability(of: ["H": "on"]) ?? 0)
print(interventional["H"] ?? 0)
print(counterfactual.probability(of: ["H": "on"]) ?? 0)
```
4.1.11. Ruby
```ruby
require 'darkstar'

model = Darkstar.read_bbn_json('testdata/huang.bbn.json')
prior = model.pquery(nodes: ['H']).fetch('H')
observed = model.pquery(
  nodes: ['H'],
  evidence: { 'A' => 'on', 'C' => 'on' }
).fetch('H')
interventional = model.iquery(
  y: ['H'],
  y_values: ['on'],
  x: ['C'],
  x_values: ['on'],
  method: 'graph'
)
counterfactual = model.cquery(
  target: 'H',
  evidence: { 'C' => 'on', 'H' => 'on' },
  hypothetical: { 'C' => 'off' }
)

puts prior.probability('H' => 'on')
puts observed.probability('H' => 'on')
puts interventional.fetch('H')
puts counterfactual.probability('H' => 'on')
model.close
```
4.1.12. Lua
```lua
local darkstar = require("darkstar")

local model = darkstar.read_bbn_json("testdata/huang.bbn.json")
local prior = model:pquery({ nodes = { "H" } }).H
local observed = model:pquery({
  nodes = { "H" },
  evidence = { A = "on", C = "on" },
}).H
local interventional = model:iquery({
  yNodes = { "H" },
  yValues = { "on" },
  xNodes = { "C" },
  xValues = { "on" },
  method = "graph",
})
local counterfactual = model:cquery({
  target = "H",
  evidence = { C = "on", H = "on" },
  hypothetical = { C = "off" },
})

print(prior:probability({ H = "on" }))
print(observed:probability({ H = "on" }))
print(interventional.H)
print(counterfactual:probability({ H = "on" }))
model:close()
```
4.2. Runtime Comparison
The shared benchmark harness runs the same deterministic graph and query corpus
across the maintained implementations and checks every non-Python output
against the Python reference. The table below uses the 1000-node exact
query corpus in both cold and warm modes.
| Language | Cold ms | Warm ms | vs Python cold |
|---|---|---|---|
| C++ | 3.138 | 0.302 | 4.6x |
| Rust | 3.184 | 0.311 | 4.5x |
| Ruby | 3.196 | 0.314 | 4.5x |
| Lua | 3.242 | 0.342 | 4.4x |
| Go | 3.260 | 0.314 | 4.4x |
| Swift | 3.288 | 0.338 | 4.4x |
| R | 3.477 | 0.471 | 4.1x |
| Octave | 3.513 | 0.516 | 4.1x |
| Java | 3.705 | 0.136 | 3.9x |
| TypeScript / JavaScript | 5.153 | 0.285 | 2.8x |
| C# | 10.253 | 0.100 | 1.4x |
| Python | 14.351 | 0.042 | 1.0x |
| Julia | 22.766 | 0.348 | 0.6x |
The cold run is the clearest first-hit comparison because it includes model
preparation. The warm run has a different shape: py-bbn aggressively reuses
calibrated state and counterfactual context, so its repeated exact queries
remain highly competitive.
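The cold/warm split described above can be sketched as follows. This is a stand-in workload invented for illustration, not the actual harness: `build_model` plays the role of model preparation (the cost only the cold run pays) and `run_queries` plays the role of the exact-query corpus.

```python
# Cold vs warm measurement pattern: cold includes preparation, warm reuses
# the prepared model. Workload functions are illustrative stand-ins.
import time

def build_model():
    # Stand-in for model preparation (parse + triangulate + calibrate).
    return {"tables": [[i * j for j in range(50)] for i in range(50)]}

def run_queries(model):
    # Stand-in for the exact-query corpus.
    return sum(sum(row) for row in model["tables"])

# Cold run: preparation + queries.
t0 = time.perf_counter()
model = build_model()
run_queries(model)
cold_ms = (time.perf_counter() - t0) * 1000

# Warm run: queries only, against the already-prepared model.
t0 = time.perf_counter()
run_queries(model)
warm_ms = (time.perf_counter() - t0) * 1000

print(f"cold={cold_ms:.3f}ms warm={warm_ms:.3f}ms")
```

Reporting both numbers separates one-time preparation cost from steady-state query cost, which is why the table's cold and warm rankings differ.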
For comparisons against third-party inference toolkits rather than the native Darkstar ports, see Benchmarks.