This module will give you an overview of best practices and available tools to set up and conduct a fully reproducible data processing and analysis workflow.
This module is part of the training curriculum from the ReproNim (Reproducible Neuroimaging) Center.
The template for all ReproNim modules is based on the templates of the Neurohackweek, Data Carpentry, and Software Carpentry workshops.
| Time  | Topic | Questions |
| ----- | ----- | --------- |
| 09:00 | Module overview | What do we need to know to set up a reproducible analysis workflow? |
| 09:10 | Lesson 1: Core concepts using an analysis workflow example | What are the different considerations for reproducible analysis? |
| 10:40 | Lesson 2: Annotate, harmonize, clean, and version data | How do we work with and preserve data of different types? (sketch below) |
| 12:40 | Lesson 3: Create and maintain reproducible computational environments | Why and how do we use containers and virtual machines? (sketch below) |
| 15:40 | Lesson 4: Create reusable and composable dataflow tools | How do we use dataflow tools? (sketch below) |
| 15:55 | Lesson 5: Use integration testing to revalidate analyses as data and software change | Why and how do we use continuous integration? (sketch below) |
| 15:59 | Lesson 6: Track provenance from data to results | Can we represent the history of an entire analysis? Can we use this history to repeat the analysis? (sketch below) |
| 16:44 | Finish | |
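
As a small preview of Lesson 2, the sketch below shows one minimal way to notice when data files change: record a checksum for every file, then re-verify later. This is only an illustration of the idea, not the lesson material; the `data/` directory and `manifest.json` filename are hypothetical, and dedicated data versioning tools go far beyond flat checksum manifests.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a checksum for every file under data_dir."""
    checksums = {str(p.relative_to(data_dir)): sha256_of(p)
                 for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest.write_text(json.dumps(checksums, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> list[str]:
    """Return the files whose current checksum no longer matches the manifest."""
    recorded = json.loads(manifest.read_text())
    return [name for name, digest in recorded.items()
            if sha256_of(data_dir / name) != digest]

if __name__ == "__main__":
    data_dir = Path("data")           # hypothetical data directory
    manifest = Path("manifest.json")  # hypothetical manifest location
    write_manifest(data_dir, manifest)
    print("Changed files:", verify_manifest(data_dir, manifest))
```

Checksums make silent data changes loud: any edited, truncated, or corrupted file shows up in the verification step instead of quietly altering downstream results.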
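
For Lesson 3, containers and virtual machines capture an entire computational environment; a far more modest first step, sketched below, is to snapshot the Python packages an analysis ran against as exact version pins. The `requirements-lock.txt` filename is hypothetical, and note that this records only Python packages, not the operating system or system libraries that containers also preserve.

```python
# Record the Python environment an analysis ran in as exact pins.
import importlib.metadata
import sys
from pathlib import Path

def pinned_requirements() -> list[str]:
    """List every installed distribution as an exact 'name==version' pin."""
    pins = []
    for dist in importlib.metadata.distributions():
        name = dist.metadata["Name"]
        if name:  # skip distributions with broken metadata
            pins.append(f"{name}=={dist.version}")
    return sorted(pins, key=str.lower)

if __name__ == "__main__":
    lines = [f"# python {sys.version.split()[0]}"] + pinned_requirements()
    Path("requirements-lock.txt").write_text("\n".join(lines) + "\n")
```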
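
For Lesson 4, here is a hedged sketch of what "composable dataflow" means at its smallest: if each processing step is a plain function, a pipeline is just a list of steps, and steps can be reordered, reused, or swapped without rewriting the workflow. The step names (`drop_missing`, `summarize`) are toy stand-ins, not tools taught in the lesson.

```python
from functools import reduce
from typing import Callable, Iterable

def run_pipeline(steps: Iterable[Callable], value):
    """Thread a value through each step in order; because every step is a
    plain function, pipelines stay composable and reusable."""
    return reduce(lambda acc, step: step(acc), steps, value)

def drop_missing(values):
    """Toy cleaning step: remove None entries."""
    return [v for v in values if v is not None]

def summarize(values):
    """Toy analysis step: report count and mean."""
    return {"n": len(values), "mean": sum(values) / len(values)}

if __name__ == "__main__":
    pipeline = [drop_missing, summarize]  # steps compose by listing them
    print(run_pipeline(pipeline, [1.0, None, 2.0, 3.0]))
    # -> {'n': 3, 'mean': 2.0}
```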
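
For Lesson 5, the essence of integration testing is a check that re-runs an analysis and compares the result against a previously validated value, so that changes in data or software surface as test failures rather than unnoticed drift. The sketch below assumes `pytest` as the test runner; the analysis function, fixture, and recorded value are all toy stand-ins.

```python
# test_analysis.py: run with `pytest`
from statistics import mean

def mean_signal(values):
    """Toy stand-in for an analysis step: average a 1-D signal."""
    return mean(values)

def test_mean_signal_matches_recorded_result():
    """Re-run the toy analysis on a fixed input and compare against a
    previously recorded value, with an explicit numerical tolerance."""
    fixture = [1.0, 2.0, 3.0, 4.0]
    recorded = 2.5  # value saved when the analysis was first validated
    assert abs(mean_signal(fixture) - recorded) < 1e-9
```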
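
For Lesson 6, a minimal sketch of the provenance idea: alongside every result, save a record of which inputs, which code, and which environment produced it. The `result_provenance.json` filename and the toy analysis are hypothetical, and real provenance standards capture much richer, linked histories than this flat record.

```python
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone
from pathlib import Path

def run_analysis(values):
    """Toy analysis step: sum a list of numbers."""
    return sum(values)

def main() -> None:
    inputs = [1, 2, 3]
    result = run_analysis(inputs)
    # Minimal provenance record: enough to say which code, which inputs,
    # and which environment produced this result.
    provenance = {
        "result": result,
        "inputs_sha256": hashlib.sha256(json.dumps(inputs).encode()).hexdigest(),
        "script": Path(sys.argv[0]).name,
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "run_at": datetime.now(timezone.utc).isoformat(),
    }
    Path("result_provenance.json").write_text(json.dumps(provenance, indent=2))

if __name__ == "__main__":
    main()
```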