This function is a generic building block for accessing
experimental/in-progress neuron metadata. It is intended for internal use;
the end user or developer is responsible for choosing the active CAVE
dataset (see choose_segmentation).
Usage
cam_meta(
ids = NULL,
ignore.case = FALSE,
fixed = FALSE,
table = "aedes_main",
base = NULL,
version = NULL,
timestamp = NULL,
unique = FALSE,
token = NULL,
...
)
Arguments
- ids
Root ids (as character or int64 vector) or a query (see examples)
- ignore.case
For queries, whether to ignore case.
- fixed
Whether to treat queries as fixed strings rather than regular expressions.
- table
The name of the table to query
- base
Optional name of the seatable base containing the table (sometimes a table may not be found, or two bases contain a table with the same name).
- version
Integer materialisation version. The special value
'latest' means the most recent materialisation according to CAVE.
- timestamp
A timestamp to normalise into an R or Python timestamp in UTC. The special value
'now' means the current time in UTC.
- unique
Whether to drop rows that share the same root_id. See details. There is no special logic for choosing which rows to drop, but the dropped rows are retained as an attribute on the returned table, with a warning, so that you can inspect them.
- token
Optional API token. When supplied, the
FLYTABLE_TOKEN environment variable is temporarily set to this value for the duration of the call (and restored on exit), so you can authenticate against an alternative seatable instance without permanently overwriting your token. Typically used in combination with the fafbseg.flytable.url package option.
- ...
Additional arguments passed to
flytable_cached_table (e.g. expiry, refresh), which can be used to control details of the caching strategy.
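Some illustrative calls (a sketch: the root ids below are placeholders, and these examples assume an active CAVE dataset has already been chosen via choose_segmentation):

```r
# fetch metadata rows for specific root ids, supplied as characters
meta <- cam_meta(ids = c("720575940617774213", "720575940619974005"))

# the same ids as 64-bit integers via the bit64 package
ids64 <- bit64::as.integer64(c("720575940617774213", "720575940619974005"))
meta <- cam_meta(ids = ids64)

# pin results to the most recent CAVE materialisation of a named table
meta <- cam_meta(table = "aedes_main", version = "latest")
```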
Details
This function now uses flytable_cached_table for
efficient row-wise caching of metadata. The defaults should be a good
trade-off between cache speed and getting the latest updates, but you can
set expiry = 0 if you want to ensure that you are as up to date as
possible; this still only downloads new changes and is very fast (300 ms vs
100 ms for a pre-cached dataset with 14k rows).
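For example, to force a freshness check on every call (a sketch; expiry is forwarded through ... to flytable_cached_table):

```r
# expiry = 0 makes the cache re-check for new rows before returning;
# only the changed rows are downloaded, so this remains fast
meta <- cam_meta(expiry = 0)
```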
Note that rows with status `duplicate` or `bad_nucleus` are dropped even before the `unique` argument is processed.
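A sketch of inspecting the rows dropped by deduplication; the attribute name holding them is not stated above, so we list all attributes rather than assume one:

```r
res <- cam_meta(unique = TRUE)
# rows sharing a root_id are dropped (with a warning) but retained
# as an attribute on the result; inspect the attributes to find them
names(attributes(res))
```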