vignettes/publish_lm_layer.Rmd

Take an LM stack registered to a fly template, resample it into BANC voxel coordinates so it overlays correctly on BANC EM, write it out as a Neuroglancer precomputed layer, and add it to the canonical public BANC scene.
Different target space than the MIP vignettes. The MIP vignettes (EM, LM) target
`JRC2018U_HR`/`JRC2018VNCU_HR` — the spaces NeuronBridge ColorMIP search expects. This vignette targets BANC voxel space (2400 × 924 × 789 @ 400 nm for the brain), reached via the BANC team's Elastix transform from JRC2018F. The two outputs are not interchangeable: a `JRC2018U_HR` volume won't align on BANC EM, and a BANC-grid volume can't be NeuronBridge-indexed.
The full pipeline has four stages:
| Stage | Tool | What it does |
|---|---|---|
| 1 | `nat::xform_brain()` (CMTK + nat.h5reg) | Source space (IS2 / JFRC2 / FCWB) → JRC2018F |
| 2 | `transformix -tp BANC_to_template.txt` | JRC2018F → BANC voxel coords (2400 × 924 × 789 @ 400 nm) |
| 3 | `nrrd_to_precomputed()` | BANC-aligned NRRD → Neuroglancer precomputed directory |
| 4 | `bancr::banc_lm_scene()` | Build a Spelunker scene with the new LM layer added to the canonical public BANC scene |
remotes::install_github("natverse/neuronbridger")
remotes::install_github("flyconnectome/bancr") # banc_lm_scene + auth
remotes::install_github("natverse/nat.flybrains")
remotes::install_github("natverse/nat.jrcbrains")
nat.jrcbrains::download_saalfeldlab_registrations() # ~ 10 GB; one-time
install.packages("reticulate")
reticulate::py_install("cloud-volume", pip = TRUE) # the precomputed writer
# CMTK: pre-built MacOSX zip from https://www.nitrc.org/projects/cmtk/
# Java (for nat.h5reg): brew install openjdk (or your platform's equivalent)
# Elastix 5.x: download from https://github.com/SuperElastix/elastix/releases/latest

The BANC team's Elastix chain expects its input on the
JRC2018F grid (1652 × 768 × 479 voxels at
380 nm). For an IS2-space LM volume the bridging chain is
IS2 → FCWB → JRC2018F: CMTK for the first hop,
nat.h5reg for the second.
nat.h5reg warps points but not whole image volumes, so
we go via points and re-voxelise:

1. Threshold the volume to foreground voxels (matching `nrrd_to_mip()`'s 3 × 3 × 3 median filter + Triangle threshold on 12-bit-trimmed data).
2. Warp the foreground voxel coordinates: `xform_brain(points, sample = <source>, reference = "JRC2018F")`.
3. Re-voxelise the warped points onto the JRC2018F grid, keeping the max intensity per voxel.
NRRD_IN <- "IS2_CapaR_no1_02_warp_m0g40c4e1e-1x16r3.nrrd"
v <- nat::read.nrrd(NRRD_IN)
voxdims_um <- diag(attr(v, "header")[["space directions"]])
vol <- as.integer(pmin(pmax(as.integer(v), 0L), 4095L) / 16L)
dim(vol) <- dim(v)
vol_med <- mmand::medianFilter(vol, mmand::shapeKernel(c(3, 3, 3), type = "box"))
thr <- neuronbridger:::colormip_triangle_threshold(vol_med)
fg_idx <- which(vol_med > thr, arr.ind = TRUE)
intens <- vol_med[vol_med > thr]
pts_is2 <- sweep(fg_idx - 1L, 2, voxdims_um, "*")
pts_jrcf <- nat.templatebrains::xform_brain(pts_is2,
sample = "IS2",
reference = "JRC2018F")
# Voxelise into JRC2018F (1652 x 768 x 479 at 0.38 um isotropic),
# keeping max intensity per voxel.
ix <- as.integer(round(pts_jrcf[,1] / 0.38)) + 1L
iy <- as.integer(round(pts_jrcf[,2] / 0.38)) + 1L
iz <- as.integer(round(pts_jrcf[,3] / 0.38)) + 1L
keep <- !is.na(ix) & ix %in% 1:1652 & iy %in% 1:768 & iz %in% 1:479
ix <- ix[keep]; iy <- iy[keep]; iz <- iz[keep]; intens <- intens[keep]
vol_jrcf <- array(0L, dim = c(1652L, 768L, 479L))
lin <- ix + (iy - 1L) * 1652L + (iz - 1L) * 1652L * 768L
ord <- order(lin, intens); lin_s <- lin[ord]; intens_s <- intens[ord]
vol_jrcf[lin_s[!duplicated(lin_s, fromLast = TRUE)]] <-
intens_s[!duplicated(lin_s, fromLast = TRUE)]
nat::write.nrrd(vol_jrcf, "CapaR_in_JRC2018F.nrrd")

The BANC public bucket serves the JRC2018F template already aligned
to BANC voxel coordinates at
gs://lee-lab_brain-and-nerve-cord-fly-connectome/templates/JRC2018F_aligned240721_to_BANC.ng/,
produced by the Elastix transforms checked into the BANC repo under fanc/transforms/transform_parameters/brain_240721/.
We use the same parameter chain to push our own JRC2018F-aligned LM
stack onto the BANC grid.
The naming is a little counter-intuitive: the parameter file whose
Size matches the BANC volume (2400 × 924 × 789
@ 400 nm) is BANC_to_template.txt, and that is the one to
feed to transformix. Transformix follows the chain back to BANC space
automatically.
Run transformix with the JRC2018F volume from Stage 1 as input:
system(paste("transformix",
"-in", "CapaR_in_JRC2018F.nrrd",
"-out", "./CapaR_BANC_xform_out",
"-tp", "brain_240721/BANC_to_template.txt"))
# Output: ./CapaR_BANC_xform_out/result.nrrd (2400 x 924 x 789 at 400 nm)

Transformix writes a float32 NRRD. B-spline interpolation produces a small amount of ringing around the brain margins, with values just below 0 or above 255. Clip those off and downcast to uint8 before the precomputed step (otherwise an unsigned-integer cast wraps negatives around to ~150 and looks like a uniform background haze):
v <- nat::read.nrrd("CapaR_BANC_xform_out/result.nrrd")
v[v < 0.5] <- 0; v[v > 255] <- 255
v <- as.integer(round(v)); dim(v) <- c(2400L, 924L, 789L)
nat::write.nrrd(v, "CapaR_no1_02_aligned240721_to_BANC.nrrd",
                dtype = "byte", enc = "gzip")

`nrrd_to_precomputed()` (this package) reads an NRRD or a
3-D R array and writes the on-disk layout Neuroglancer expects: an
info JSON describing scales, chunk sizes and data type,
plus a flat directory of chunks named
<resolution>/<x_min-x_max>_<y_min-y_max>_<z_min-z_max>
(raw bytes by default; pass compress = TRUE to gzip
them).
nrrd_to_precomputed(
input = "CapaR_no1_02_aligned240721_to_BANC.nrrd",
output = "/tmp/CapaR_BANC_pc",
resolution = c(400, 400, 400), # BANC voxel resolution
data_type = "uint8",
encoding = "raw",
chunk_size = c(64L, 64L, 64L) # match the public atlas chunk size
)

inst/scripts/lm_capar_to_precomputed.R ships a fuller
reproducer that down-samples 4× in xy and 2× in z and squashes 16-bit
signal into 8-bit to keep the demo precomputed dir small (~1.5 MB). For
NeuronBridge searches keep the full resolution and use
uint16 if the dynamic range matters.
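As a rough illustration of the shrinking that script performs (the function name and exact factors here are mine, not the shipped code), block-max downsampling 4× in xy and 2× in z followed by a 16-bit → 8-bit squash can be sketched as:

```r
# Sketch only: block-max downsample by integer factors f, assuming each
# dimension of v is an exact multiple of the corresponding factor.
downsample_max <- function(v, f = c(4L, 4L, 2L)) {
  d <- dim(v) %/% f
  # Column-major reshape tiles each axis into (within-block, block) pairs...
  out <- array(v, dim = c(f[1], d[1], f[2], d[2], f[3], d[3]))
  # ...so taking max over the block indices gives the downsampled volume.
  apply(out, c(2, 4, 6), max)
}

v16  <- array(as.integer(runif(32 * 32 * 8, 0, 65535)), dim = c(32L, 32L, 8L))
v_ds <- downsample_max(v16)            # 8 x 8 x 4 block maxima
v8   <- as.integer(round(v_ds / 257))  # squash 0..65535 into 0..255
dim(v8) <- dim(v_ds)
```

Max-pooling rather than mean-pooling is the safer choice for sparse LM signal: a thin neurite that fills one voxel in 32 survives the downsample instead of being averaged away.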
If you already work in Python, the npimage
helper from the BANC team is a one-liner around the same
cloud-volume call:
import npimage
arr = npimage.load("CapaR_no1_02_aligned240721_to_BANC.nrrd")
npimage.save(arr, "CapaR.ng", pixel_size=[400, 400, 400])
# Source: https://github.com/jasper-tms/npimage/blob/main/npimage/imageio.py#L311-L366

The Python and R routes write byte-equivalent precomputed
directories, so pick whichever fits your build environment.
Multi-channel .lsm stacks need a
per-channel loop or RGB packing on either side; neither helper splits
channels for you.
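The per-channel loop is short on the R side. A sketch, assuming the stack has already been read into a 4-D (x, y, z, channel) array; the synthetic array, output paths, and the 400 nm BANC resolution are illustrative:

```r
library(neuronbridger)

# Stand-in for a loaded multi-channel stack (x, y, z, channel).
arr <- array(as.integer(runif(16 * 16 * 4 * 2, 0, 255)),
             dim = c(16L, 16L, 4L, 2L))

# One precomputed layer per channel; nrrd_to_precomputed() accepts a
# 3-D R array directly, so slice out each channel in turn.
for (ch in seq_len(dim(arr)[4])) {
  nrrd_to_precomputed(
    arr[, , , ch],
    output     = file.path(tempdir(), sprintf("lm_ch%02d_pc", ch)),
    resolution = c(400, 400, 400),
    data_type  = "uint8",
    encoding   = "raw"
  )
}
```

Each channel then becomes its own Neuroglancer layer, which is usually what you want anyway: independent `range` and `opacity` per stain.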
Neuroglancer fetches chunks over HTTPS, so the precomputed directory needs a public-read host. Lee-lab BANC team members upload to the curated mirror; everyone else uses their own bucket.
system(paste("gsutil -m cp -r /tmp/CapaR_BANC_pc",
             "gs://lee-lab_brain-and-nerve-cord-fly-connectome/light_level/kondo_et_al_2020/CapaR_no1_02_aligned240721_to_BANC.ng/"))

Bucket caveat — important.
`gs://lee-lab_brain-and-nerve-cord-fly-connectome/` is a curated mirror run by the lee-lab BANC team and is not public-write. The `light_level/kondo_et_al_2020/` path above is the canonical home for the Kondo 2020 imports, but to write there you need either (a) write access granted by the lee-lab maintainers, or (b) your own GCS / S3 / static-HTTP host that is public-read so Neuroglancer can fetch the chunks. Once the precomputed directory is reachable over HTTPS, pass its URL to `bancr::banc_lm_scene()`.
One way to gzip, not two. Either let
`nrrd_to_precomputed(compress = TRUE)` write `.gz` chunks and upload them with plain `gsutil cp -r`, or write raw chunks (`compress = FALSE`) and upload with `gsutil cp -Z -r` so gsutil gzips on the fly and tags each blob with `Content-Encoding: gzip`. The two paths are equivalent; mixing them — `.gz` files uploaded with `-Z`, or raw files uploaded without it — yields a layer whose `info` is reachable but whose chunks all 404.
bancr::banc_lm_scene() builds a Neuroglancer state that
starts from the standard public BANC scene (BANC EM + segmentation +
region outlines + the JRC2018F atlas + imported FAFB / hemibrain / MANC
meshes) and appends your LM layer on top:
u <- bancr::banc_lm_scene(
lm_url = paste0("gs://lee-lab_brain-and-nerve-cord-fly-connectome/",
"light_level/kondo_et_al_2020/",
"CapaR_no1_02_aligned240721_to_BANC.ng/CapaR_BANC_pc"),
layer_name = "Kondo 2020 - CapaR (no1_02, aligned to BANC)",
range = c(1, 30), # match the actual uint8 dynamic range
# (Elastix ringing was clipped at <0.5)
opacity = 0.55, # default; matches the public atlas layer
blend = "additive", # LM signal lights up where it overlaps EM
volume_rendering = "on", # required for 3-D rendering
shorten = TRUE,
open = TRUE
)
u
#> [1] "https://spelunker.cave-explorer.org/#!middleauth+https://global.daf-apis.com/nglstate/api/v1/5028046288453632"

Pinned example scene: the Kondo 2020 CapaR stain on the canonical BANC
scene. The new LM layer matches the public
JRC2018F atlas imported layer's volume rendering
(`volumeRendering = "on"`, `depthSamples = 788`,
`opacity = 0.55`); only the
`shaderControls.normalized.range` differs —
`[29, 255]` for the bright template vs `[1, 30]`
for the dim post-Elastix LM signal. Tighten `range` for
sparse stains, expand it for brighter sources.
shorten = TRUE (the default) POSTs the state via
bancr::banc_shorturl() — the same helper
bancsee() uses — and returns a
spelunker.cave-explorer.org/...nglstate/api/v1/<id>
URL. This requires a CAVE token, set once with
bancr::banc_set_token(). Pass shorten = FALSE
to skip the round-trip and inline the full state in a long fragment URL
instead.
The chunk below builds a tiny synthetic 3-D volume, writes it as
precomputed to a local directory, reads it back through
cloud-volume to confirm a clean round-trip, and constructs
a long-form BANC Neuroglancer URL referencing it — all offline. The
chunk requires
reticulate::py_install("cloud-volume", pip = TRUE) and
skips automatically if the Python module isn’t available.
library(neuronbridger)
library(reticulate)
set.seed(7)
v <- array(as.integer(runif(96 * 96 * 32, 0, 250)), dim = c(96L, 96L, 32L))
td <- tempfile()
out <- nrrd_to_precomputed(
v,
output = td,
resolution = c(519, 519, 1000),
data_type = "uint8",
encoding = "raw"
)
# Read back through cloud-volume; confirm we got our volume out unchanged.
np <- reticulate::import("numpy", convert = TRUE)
cv <- reticulate::import("cloudvolume", convert = FALSE)
vol <- cv$CloudVolume(out, mip = 0L, fill_missing = TRUE)
back <- np$squeeze(np$asarray(vol[0:96, 0:96, 0:32]), axis = 3L)
identical(as.integer(back), as.integer(v))
# Build a BANC scene with the local layer (long fragment URL form; no
# auth needed). For a real sharable URL you'd upload the precomputed
# dir to a public bucket and pass that gs:// URL instead.
u <- bancr::banc_lm_scene(out, layer_name = "tiny synthetic",
shorten = FALSE)
cat("URL prefix:", substring(u, 1, 100), "\n")
cat("LM layer present:", grepl("synthetic|Synthetic", u, ignore.case = TRUE), "\n")

See also:

- `gs://lee-lab_brain-and-nerve-cord-fly-connectome/` — see `bancr::banc_scene()` for the canonical entry-point state.
- The `cloud-volume` Python library.
- `fanc/transforms/transform_parameters/brain_240721/` (Elastix; TPS approximations for points are exposed as data in bancr — `banc_to_jrc2018f_tpsreg`).