Quickstart (Streamlit UI)
cd /home/jandu/repos/NBV
source activate aria-nbv # or conda activate aria-nbv
python -m streamlit run oracle_rri/streamlit_app.py  # module oracle_rri.streamlit_app
Pages in the app:
- Data: load one EfmSnippetView from ASE/ATEK (GT mesh optional).
- Candidate Poses: sample NBV candidates with collision / free-space rules.
- Candidate Renders: render depth for valid candidates via PyTorch3D.
- RRI: compute oracle RRI scores and compare point-cloud distances (a generic distance sketch follows this list).
- VIN Diagnostics: run VIN forward_with_debug on oracle batches and inspect internal tensors.
- W&B Analysis: compare multiple W&B runs in the project, including train/val dynamics and config-driven performance trends.
- Testing & Attribution: load checkpoints and compute VIN head attributions on cached samples.
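As a generic illustration of the point-cloud comparison mentioned on the RRI page (plain chamfer distance via PyTorch3D, not the project's oracle RRI metric):
import torch
from pytorch3d.loss import chamfer_distance

# Random clouds standing in for observed vs. candidate-view geometry: (batch, num_points, xyz).
observed = torch.rand(1, 2048, 3)
rendered = torch.rand(1, 2048, 3)

# chamfer_distance returns (point distance, normal distance); normals are not used here.
dist, _ = chamfer_distance(observed, rendered)
print(f"chamfer distance: {dist.item():.4f}")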
The app caches configs + results in st.session_state. “Run previous” auto-runs earlier stages with cached settings; background threads keep the UI responsive.
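A minimal sketch of this caching pattern (placeholder key and stage function, not the app's actual code):
import time
import streamlit as st

def expensive_stage() -> str:
    time.sleep(1.0)  # stand-in for loading a snippet or rendering depth
    return "cached result"

# Results survive Streamlit reruns because they live in st.session_state.
if "stage_result" not in st.session_state:
    st.session_state["stage_result"] = expensive_stage()
st.write(st.session_state["stage_result"])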
Dataset loading (Data page)
Sidebar controls map to AseEfmDatasetConfig (defaults baked into streamlit_app):
- scene_ids: [] (auto-discover shards under .data/ase_efm*)
- atek_variant: "efm" (fixed)
- load_meshes: True; require_mesh: checkbox (default on)
- Mesh simplification: mesh_simplify_ratio slider (0 → no decimation)
- Mesh crop: mesh_crop_margin_m and mesh_crop_min_keep_ratio
- Verbosity/debug: verbose=True, is_debug=True
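Illustrative only: a sketch of how sidebar widgets could map onto these fields (the widget wiring and the kwargs dict below are not the app's actual code):
import streamlit as st

# Hypothetical widget wiring mirroring the sidebar controls listed above.
require_mesh = st.sidebar.checkbox("require_mesh", value=True)
mesh_simplify_ratio = st.sidebar.slider("mesh_simplify_ratio", 0.0, 1.0, 0.0)
mesh_crop_margin_m = st.sidebar.number_input("mesh_crop_margin_m", value=0.0)

cfg_kwargs = dict(
    scene_ids=[],                     # auto-discover shards under .data/ase_efm*
    atek_variant="efm",
    load_meshes=True,
    require_mesh=require_mesh,
    mesh_simplify_ratio=mesh_simplify_ratio,
    mesh_crop_margin_m=mesh_crop_margin_m,
    verbose=True,
    is_debug=True,
)
st.sidebar.json(cfg_kwargs)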
Buttons:
- Run / refresh data: loads first snippet (or cached index)
- Next sample: increments index then reloads
- Clear cache: drops all cached stages
Candidate generation (Candidate Poses page)
Config fields surfaced from CandidateViewGeneratorConfig:
- num_samples, min_radius, max_radius, min_elev_deg, max_elev_deg
- ensure_collision_free, min_distance_to_mesh
- ensure_free_space
- device (cpu/cuda), is_debug
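A generic sketch of spherical candidate sampling driven by these fields (not the project's CandidateViewGenerator; the radius/elevation parameterization is an assumption):
import torch

def sample_candidate_positions(
    num_samples: int = 64,
    min_radius: float = 1.0,
    max_radius: float = 3.0,
    min_elev_deg: float = 10.0,
    max_elev_deg: float = 80.0,
    device: str = "cpu",
) -> torch.Tensor:
    """Sample camera positions on a spherical shell around the origin."""
    radius = torch.empty(num_samples, device=device).uniform_(min_radius, max_radius)
    elev = torch.empty(num_samples, device=device).uniform_(min_elev_deg, max_elev_deg).deg2rad()
    azim = torch.empty(num_samples, device=device).uniform_(0.0, 360.0).deg2rad()
    x = radius * torch.cos(elev) * torch.cos(azim)
    y = radius * torch.cos(elev) * torch.sin(azim)
    z = radius * torch.sin(elev)
    # ensure_collision_free / ensure_free_space would filter these positions against the mesh (not shown).
    return torch.stack([x, y, z], dim=-1)  # (num_samples, 3)

positions = sample_candidate_positions()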
Plot controls:
- frustum scale, max frustums, option to plot only rejected poses
Actions:
- Run previous: re-run Data stage with cached config
- Run / refresh candidates: background thread; results cached under STATE_KEYS["candidates"]
Depth rendering (Candidate Renders page)
Config mirrors CandidateDepthRendererConfig + nested Pytorch3DDepthRendererConfig:
- max_candidates (subset render), renderer device, zfar, faces_per_pixel, is_debug
- Checkbox: “Compute depth renders”
Actions:
- Run previous: ensures data + candidates exist
- Run / refresh renders: background thread; stores CandidateDepthBatch
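A minimal PyTorch3D depth-render sketch using the knobs above (zfar, faces_per_pixel); illustrative only, not the project's CandidateDepthRenderer:
import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRasterizer,
    RasterizationSettings,
    look_at_view_transform,
)
from pytorch3d.utils import ico_sphere

device = "cuda" if torch.cuda.is_available() else "cpu"
mesh = ico_sphere(level=2, device=device)  # placeholder mesh standing in for the scene
R, T = look_at_view_transform(dist=3.0, elev=30.0, azim=45.0, device=device)
cameras = FoVPerspectiveCameras(R=R, T=T, zfar=20.0, device=device)
raster_settings = RasterizationSettings(image_size=256, faces_per_pixel=1)
rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
fragments = rasterizer(mesh)
depth = fragments.zbuf[..., 0]  # (N, H, W); -1 where no face was hit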
Logging & scratchpad
- Console logs are collected via Console.set_sink into a textarea; clearable per page.
- “Interactive Python console” executes arbitrary code in the app process (with st and torch pre-populated); output is appended to the logs.
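A rough sketch of the sink-into-textarea pattern (the Console call is commented out because its exact signature is project-specific):
import streamlit as st

if "log_lines" not in st.session_state:
    st.session_state["log_lines"] = []

def log_sink(message: str) -> None:
    # Append each log record to session state so the textarea survives reruns.
    st.session_state["log_lines"].append(message)

# Console.set_sink(log_sink)  # project API; signature assumed, see the oracle_rri sources
log_sink("example log line")
st.text_area("Console", value="\n".join(st.session_state["log_lines"]), height=200)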
CLI: download datasets (unchanged)
# list top scenes by snippet count
python -m oracle_rri.data.downloader -m list -c efm -n 5
# download scenes (limit snippets to 10 each)
python -m oracle_rri.data.downloader -m download -c efm -n 5 --ms 10
Programmatic dataset use
from oracle_rri.data_handling.dataset import ASEDatasetConfig
cfg = ASEDatasetConfig(atek_variant="efm", load_meshes=True, require_mesh=True)
ds = cfg.setup_target()
sample = next(iter(ds)) # EfmSnippetView
Typed views exposed on sample:
- camera_rgb, camera_slam_left, camera_slam_right → images + CameraTW
- trajectory → PoseTW series (t_world_rig)
- semidense → padded SLAM points + stds + bounds
- gt → OBBs / efm_gt
- mesh / has_mesh
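Continuing the snippet above, the typed views can be read directly (attribute names from the list; this is illustrative, not an API reference):
rgb_view = sample.camera_rgb   # images + CameraTW calibration
traj = sample.trajectory       # PoseTW series (t_world_rig)
points = sample.semidense      # padded SLAM points + stds + bounds
gt = sample.gt                 # OBBs / efm_gt
if sample.has_mesh:
    mesh = sample.mesh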
Pipeline recap (in-app)
flowchart LR
A[Dataset cfg<br/>ASE/ATEK] --> B[Load snippet<br/>EfmSnippetView]
B --> C[Sample NBV poses<br/>CandidateViewGenerator]
C --> D[Render depth<br/>CandidateDepthRenderer]
D --> E[Inspect depth grid<br/>& pick NBV]
Mesh handling
- Meshes are loaded once per scene (cached if cache_meshes=True).
- Optional decimation via mesh_simplify_ratio; optional crop AABB (crop_aabb_from_semidense in UI).
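For reference, a generic decimation + AABB-crop sketch with Open3D (the project's mesh utilities may use a different library; the ratio interpretation and margin handling below are assumptions):
import numpy as np
import open3d as o3d

# Placeholder mesh; in the pipeline this is the scene's GT mesh.
mesh = o3d.geometry.TriangleMesh.create_sphere(radius=1.0, resolution=40)

# Decimate: interpret the simplify ratio as the fraction of triangles to remove (assumption).
mesh_simplify_ratio = 0.5
target_tris = int(len(mesh.triangles) * (1.0 - mesh_simplify_ratio))
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=target_tris)

# Crop to an AABB derived from (stand-in) semidense points, expanded by a margin in meters.
points = np.random.uniform(-0.5, 0.5, size=(1000, 3))
mesh_crop_margin_m = 0.1
min_bound = points.min(axis=0) - mesh_crop_margin_m
max_bound = points.max(axis=0) + mesh_crop_margin_m
aabb = o3d.geometry.AxisAlignedBoundingBox(min_bound, max_bound)
mesh = mesh.crop(aabb)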