
How would ReproNim...?

A series of practical tutorials on the ReproNim way

Volume 1: How Would ReproNim Do a Local Container Analysis?* contains a companion Jupyter notebook
Given an image file (anat.nii) and a container to run (neuronets/kwyk:latest-cpu), how would you manage the process and its output in a reproducible way, and then publish the process so that someone else can do the same thing?
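The shape of that local run can be sketched with two Docker commands. This is only an illustration, not the tutorial's own recipe: the image name comes from the text above, but the kwyk command-line options (the -m flag and model name) and the output path are assumptions to be checked against the kwyk documentation.

```shell
# Sketch of a local container analysis (hypothetical invocation).
# The image name is from the tutorial text; the kwyk options (-m and the
# model name) are assumptions -- check the neuronets/kwyk README.
KWYK_IMAGE="neuronets/kwyk:latest-cpu"

# Safety toggle: set RUN_KWYK=1 to actually invoke docker.
if [ "${RUN_KWYK:-0}" = 1 ]; then
    # Pull once so the exact image version is cached locally.
    docker pull "$KWYK_IMAGE"
    # Bind-mount the working directory so the container can read anat.nii
    # and write its output next to it.
    docker run --rm -v "$PWD":/data "$KWYK_IMAGE" \
        -m bvwn_multi_prior /data/anat.nii /data/output
fi
```

Pinning the pulled image (rather than re-resolving `latest-cpu` on every run) is one small step toward the reproducibility the tutorial develops further.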

Volume 2: How Would ReproNim Publish a Dataset?
After some investigation, you have a study dataset that includes N subjects, imaging data (NIfTI images), and behavioral metadata (subject demographics, in a CSV file). How would you publish these data both in accordance with FAIR principles and in such a way that your dataset's particulars are self-explanatory and therefore fully reusable by others?

Volume 3: How Would ReproNim 'NIDM-ify' My Software? Coming Soon
As a software engineer, you have just developed a new volumetric tool for image analysis. How do you make your tool ReproNim-compliant for (NIDM-based) computational description of both provenance and output?

Volume 4: How Would ReproNim Manage Local FreeSurfer Results?* contains a companion Jupyter notebook
You've been running a lot of FreeSurfer analyses (and other volumetric tools, for that matter) on your data and have a lot of resulting output. How do you integrate your local results with relevant, publicly available results from other investigators (e.g., those in ReproLake)?

Volume 5: How Would ReproNim Manage Behavioral Data in REDCap? Coming Soon
As a clinical investigator, or a behavioral scientist, you have collected a lot of behavioral data in REDCap. How do you annotate your data appropriately for both local storage and export? The goal is to implement semantic markup for metadata description that maintains necessary and sufficient descriptive detail for reproducible reuse, and that enables provenance tracking, regardless of where the data is stored, used, or shared.

Volume 6: How Would ReproNim Do an Analysis on ABCD Data?
You have access to data from the ABCD initiative and a specific analysis that you would like to run on ABCD imaging data. In order to carefully control your computational environment and experimental parameters, your goal is to 'containerize' your analysis workflow. More specifically, you would like to run dcanlabs/abcd-hcp-pipeline:latest (GitHub, DockerHub), a BIDS application for processing functional MRI data that is robust to scanner, acquisition, and age variability. How do you set this up for use on a large, shared dataset at a remote site?
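Since the pipeline is a BIDS application, its invocation follows the standard BIDS-apps calling convention (bids_dir, output_dir, analysis level). The sketch below assumes that convention; the input/output paths are placeholders, and any pipeline-specific flags (e.g., a FreeSurfer license) should be taken from the pipeline's own documentation.

```shell
# Hypothetical BIDS-app style invocation; paths are placeholders and the
# positional-argument convention is the generic BIDS-apps one, not verified
# against the abcd-hcp-pipeline docs.
PIPELINE_IMAGE="dcanlabs/abcd-hcp-pipeline:latest"

# Safety toggle: set RUN_PIPELINE=1 to actually invoke docker.
if [ "${RUN_PIPELINE:-0}" = 1 ]; then
    docker pull "$PIPELINE_IMAGE"
    # Mount the BIDS dataset read-only and a separate writable output dir.
    docker run --rm \
        -v /path/to/bids_dataset:/bids_input:ro \
        -v /path/to/derivatives:/output \
        "$PIPELINE_IMAGE" /bids_input /output participant
fi
```

On a shared remote system without Docker privileges, the same image would typically be converted to and run under Singularity/Apptainer, which is part of what this volume addresses.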

Volume 7: How Would ReproNim Read DICOM Data? Coming Soon
You have DICOM data from your scanner. How do you ingest (capture and store) this data in a version and provenance aware fashion?

Volume 8: How Would ReproNim Make Reproducibility Manageable? (5-Steps to More Reproducible Neuroimaging Computation) Coming Soon
There are so many things one has to do to make an analysis completely reproducible. Is there a list of smaller, more manageable steps one can adopt that will still make a positive difference?

Volume 9: How Would ReproNim Containerize a Workflow? (ENIGMA Container for Shape Analysis)
You have an analysis workflow that you run locally. You would like to make it into a containerized workflow. We explore this topic using the ENIGMA Sulcal Shape analysis workflow as an example.