Teach

Get your learn on

Our Educational Objectives include:

  • Topical training in the overall issues that affect the reproducibility of neuroimaging research (data acquisition and characterization, experimental methods, analyses, record keeping and reporting, reusability, and sharing of data and methods)
  • Development of a next-generation cadre of software developers who are versed in techniques that promote reproducibility, and education of neuroimaging researchers in the tools that support complete experimental description.
  • Train the trainer: we have adopted the Software Carpentry and Data Carpentry philosophy and work to train individuals who will themselves train others.

Our initial (Phase 1) curriculum focuses on developing materials that address reproducibility in four areas:

  • FAIR data
  • Computational basics
  • Reproducible workflows
  • Statistical tools

in order to establish a set of reproducibility-enabled tools and practices. Future curricula will extend the reach of these materials to a broader community of researchers and will feature more training material on the tools developed by the ReproNim project.

We are developing the following forms of training:

  • Our curriculum is developed on GitHub; each of the modules is accessible from the sections below.
  • In addition, this material will continue to be offered as a Moodle Course (currently unavailable while upgrades are in progress).
  • ReproNim/INCF Training Fellowship: a Train-the-Trainer program to empower researchers to teach reproducible methods to others.
  • Associated Training Programs: ReproRehab and ABCD-ReproNim

ReproNim Introduction

Why do we care about reproducibility? Can we do anything to improve the reproducibility of our neuroimaging work? Let's get motivated to change the world!

Go to module.

Reproducibility Basics

Shells, version control, package managers, and other tools to embrace "Reproducibility By Design"!

Go to module.

FAIR Data

FAIR is a collection of guiding principles for making data Findable, Accessible, Interoperable, and Reusable. We look at ways to ensure that a researcher’s data are properly managed and published in support of reproducible research.

Go to module.

Data Processing

What do we need to know to conduct reproducible analyses? Learn to annotate, harmonize, clean, and version data, and to create and maintain reproducible computational environments. A small illustrative sketch follows the module link below.

Go to module.
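As an informal illustration of the kind of cleaning, harmonizing, and versioning this module covers (not code from the module itself), here is a minimal Python sketch; the column names and the checksum-based version stamp are assumptions made for the example.

    # Minimal sketch (illustrative only, not ReproNim course code):
    # clean and harmonize a hypothetical participants table, then record a
    # checksum so the exact version of the data can be referenced later.
    import hashlib
    import pandas as pd

    # Hypothetical input: a participants table with inconsistent labels.
    raw = pd.DataFrame({
        "participant_id": ["sub-01", "sub-02", "sub-03"],
        "sex": ["F", "female", "M"],
        "age": ["25", "31", "n/a"],
    })

    # Harmonize categorical labels and coerce numeric types.
    clean = raw.copy()
    clean["sex"] = clean["sex"].str.upper().str[0]               # "female" -> "F"
    clean["age"] = pd.to_numeric(clean["age"], errors="coerce")  # "n/a" -> NaN

    # Save the cleaned table and record a checksum as a simple version stamp.
    clean.to_csv("participants_clean.tsv", sep="\t", index=False)
    digest = hashlib.sha256(open("participants_clean.tsv", "rb").read()).hexdigest()
    print(f"participants_clean.tsv sha256: {digest[:12]}...")

In practice, dedicated versioning tools handle this far more robustly than a hand-rolled checksum, but the idea is the same: every derived file should be traceable to an exact, identifiable version of the data and the code that produced it.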

Statistics

Here we describe some key statistical concepts and how to use them to make your research more reproducible. Everything you ever wanted to know about power, effect size, P-values, sampling, and more; a small worked example follows the module link below.

Go to module.
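To give a flavor of the power and effect-size material, here is a minimal sketch (not taken from the module) of a prospective power calculation, assuming the statsmodels package is available; the effect size, power, and alpha are placeholder values.

    # Minimal sketch (illustrative only): how many subjects per group are needed
    # to detect a medium standardized effect (Cohen's d = 0.5) in a two-sample
    # t-test with 80% power at alpha = 0.05?
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
    print(f"Subjects needed per group: {n_per_group:.1f}")  # roughly 64 per group

Reporting a calculation like this, together with the assumed effect size, makes the sampling decision itself reproducible rather than implicit.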

Make your own!

Here is the repository containing the template used for all of the modules above. Make your own training material based on that template to get a uniform appearance and set of features.

Go to sources on GitHub.

Our Training Team

Satrajit Ghosh

Jean-Baptiste Poline

Dorota Jarecka

Yaroslav Halchenko

Jeffrey Grethe

Nina Preuss

and Many More!!!