WP 1 Software integration and provision
This work package will develop and deploy a complete DevOps toolchain as the basis for the overall aim of the MICROCARD project: a production-ready simulation platform for cardiac electrophysiology at the cellular scale on future exascale computers. Moreover, it will integrate the advances regarding parallelization, discretization, resilience, energy consumption, and tailored numerical schemes in the openCARP software stack.
leaders: Axel Loewe, Aurel Neic
faculty: Nico Mittenzwey, Xing Cai, Mark Potse, Simone Pezzuto, Martin Weiser, Vincent Loechner, Luca Antiga, Luca Pavarino
staff: Fatemeh Chegini, Marie Houillon, Nico Tippmann, Terry Cojean, Marcel Koch, Kristian Hustad
Task 1.1 Software interoperability, bundling and maintenance
Task 1.2 Continuous Integration and Deployment
Task 1.3 High-Performance Data Output
Task 1.4 Simple and Secure Access & Collaboration
WP 2 Task-based parallelization
Task-based parallelization is needed to obtain the flexibility required for our code to scale to thousands of compute nodes, each of which may be equipped with thousands of processors of different architectures such as CPUs, GPUs, and other accelerators. The objective of this work package is therefore to provide a parallel, task-based formulation of the application. In addition, this formulation will be able to cope with hardware failures of individual nodes during execution.
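To make the latency-hiding idea of Task 2.2 concrete, the minimal sketch below overlaps a halo exchange with the update of interior points using plain non-blocking MPI; a task-based runtime would instead express the three numbered steps as dependent tasks that the scheduler can overlap automatically. The 1-D stencil, sizes, and coefficients are illustrative and not taken from the MICROCARD code.

#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1 << 20;                              // local points (illustrative)
    std::vector<double> u(N + 2, 1.0), un(N + 2, 0.0);  // one ghost cell per side
    int left  = rank > 0        ? rank - 1 : MPI_PROC_NULL;
    int right = rank < size - 1 ? rank + 1 : MPI_PROC_NULL;

    // 1. Post the non-blocking halo exchange for the boundary values.
    MPI_Request req[4];
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(&u[N + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(&u[N],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[3]);

    // 2. Update the interior points, which need no remote data, while the
    //    messages are in flight.
    for (int i = 2; i <= N - 1; ++i)
        un[i] = u[i] + 0.1 * (u[i - 1] - 2.0 * u[i] + u[i + 1]);

    // 3. Wait for the halos, then update the two points that depend on them.
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    un[1] = u[1] + 0.1 * (u[0] - 2.0 * u[1] + u[2]);
    un[N] = u[N] + 0.1 * (u[N - 1] - 2.0 * u[N] + u[N + 1]);

    MPI_Finalize();
}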
leaders: Denis Barthou, Vincent Loechner
faculty: Amina Guermouche, Marie-Christine Counilh, Emmanuelle Saillard, Stéphane Genaud, Bérenger Bramas, Luca Antiga, Martin Weiser, Olivier Aumage, Yves Coudière, Mark Potse
staff: Fatemeh Chegini, Arun Thangamani, Raphaël Colin, Mariem Makni
Task 2.1 Parallel task-graph expression
Task 2.2 Hiding MPI communication latency
Task 2.3 Adaptation of automatic checkpoint placement strategy
Task 2.4 Evaluation of resilience
WP 3 Numerical discretization and implementation of the cell-by-cell model
This work package will develop suitable discretization schemes to turn the system of partial differential equations that describes the current flow in the tissue into a system of algebraic equations. These schemes need to be numerically efficient in terms of convergence order, sparsity, and condition numbers, but also well suited for current and future HPC architectures in terms of parallelization and vectorization opportunities as well as energy demand and incurred communication.
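For concreteness, the sketch below shows the most naive such discretization for a 1-D reaction-diffusion equation du/dt = D d²u/dx² + f(u): finite differences in space and forward Euler in time. The schemes developed in this work package address precisely the efficiency, stability, and conditioning limits that such a naive approach runs into; all parameters and the cubic reaction term are illustrative.

#include <cstdio>
#include <vector>

int main() {
    const int    N  = 200;                    // grid points (illustrative)
    const double dx = 0.01, dt = 1e-5, D = 1e-3;
    auto f = [](double u) { return u * (1.0 - u) * (u - 0.1); };  // toy reaction term

    std::vector<double> u(N, 0.0), un(N, 0.0);
    for (int i = 0; i < 10; ++i) u[i] = 1.0;  // stimulus at the left edge

    for (int step = 0; step < 20000; ++step) {
        for (int i = 1; i < N - 1; ++i)       // explicit update of interior points
            un[i] = u[i] + dt * (D * (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (dx * dx)
                                 + f(u[i]));
        un[0] = un[1];                        // zero-flux boundary conditions
        un[N - 1] = un[N - 2];
        u.swap(un);
    }
    std::printf("u[N/2] = %g\n", u[N / 2]);   // sample the solution at the midpoint
}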
leaders: Martin Weiser, Simone Pezzuto
faculty: Yves Coudière
staff: Fatemeh Chegini, Lea Strubberg, Zeina Chehade, Wissam Bouymedj, Giacomo Rosilho de Souza
Task 3.1 Space discretization
Task 3.2 Time discretization
Task 3.3 Adaptivity
Task 3.4 Coupling to bidomain
Task 3.5 Implementation of μCARP
WP 4 Production-ready high performance linear solver technology
This work package will develop custom linear solvers, adapted to the needs of the numerical scheme and working in concert with the preconditioners developed in WP5. The solvers will be deployed in the Ginkgo open-source sparse linear algebra library to make them available to the entire computational science community.
leaders: Hartwig Anzt, Simone Scacchi
faculty: Xing Cai, Vincent Loechner
staff: Terry Cojean, Aditya Kashi, Marcel Koch, Fritz Göbel, Kristian Hustad
In this work package, we will develop and implement iterative linear solvers that meet the requirements of the MICROCARD simulation ecosystem. This entails efficiently exploiting the compute power of next-generation manycore processors, scaling to large processor counts, reducing communication and synchronization overheads, and remaining flexible enough to accept different preconditioner types (WP5) and to integrate into the MICROCARD software stack (WP1). Production-ready implementations of the iterative linear solvers will be deployed in the Ginkgo open-source sparse linear algebra library, thereby making the solver technology available to the entire computational science community. A special focus will be put on portability and platform-specific energy considerations. The Continuous Integration (CI) service provided by MEGWARE will form the basis for solver development on non-standard and prototype hardware, with an emphasis on the technology developed in the European Processor Initiative (EPI). Thanks to its high-resolution power measurement technology, this server will help us move to a multi-objective optimization that accounts not only for runtime but also for energy balance.
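As a concrete illustration of Ginkgo's factory-based interface, behind which these solvers will be deployed, the minimal sketch below solves a linear system with Ginkgo's existing CG solver. File names, stopping criteria, and the executor choice are illustrative, and older Ginkgo versions may require gko::lend() around the arguments of apply() and write().

#include <ginkgo/ginkgo.hpp>
#include <fstream>
#include <iostream>

int main() {
    using vec = gko::matrix::Dense<double>;
    using mtx = gko::matrix::Csr<double, int>;
    using cg  = gko::solver::Cg<double>;

    // The executor selects the target architecture; swapping in
    // gko::OmpExecutor::create() (or a CUDA/HIP executor) retargets the
    // same code to multicore CPUs or GPUs.
    auto exec = gko::ReferenceExecutor::create();

    // Read the system matrix and vectors from MatrixMarket files (illustrative paths).
    auto A = gko::share(gko::read<mtx>(std::ifstream("data/A.mtx"), exec));
    auto b = gko::read<vec>(std::ifstream("data/b.mtx"), exec);
    auto x = gko::read<vec>(std::ifstream("data/x0.mtx"), exec);

    // Build a CG solver that stops after 1000 iterations or a 1e-8 residual reduction.
    auto solver =
        cg::build()
            .with_criteria(
                gko::stop::Iteration::build().with_max_iters(1000u).on(exec),
                gko::stop::ResidualNorm<double>::build()
                    .with_reduction_factor(1e-8)
                    .on(exec))
            .on(exec)
            ->generate(A);

    solver->apply(b.get(), x.get());  // solve A x = b
    gko::write(std::cout, x.get());
}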
Task 4.1 Assessment of linear solver requirements
Task 4.2 Hardware-aware high performance linear solver development
Task 4.3 Linear solver preconditioner integration
Task 4.4 Linear solver application integration
WP 5 Tailored preconditioners
The effective solution of the PDE systems employed in MICROCARD on HPC architectures requires highly scalable iterative solvers and preconditioners that are tailored to the problem and to each other. In this work package we will develop the preconditioners, which will work with the linear solvers developed in WP4.
leaders: Luca Pavarino, Rolf Krause
faculty: Simone Scacchi, Stefano Gualandi, Raffaella Guglielmann, Piero Colli-Franzone, Hartwig Anzt
staff: Ngoc Mai Monica Huynh, Fatemeh Chegini, Aditya Kashi, Marcel Koch, Lea Strubberg, Fritz Göbel, Edoardo Centofanti
In order to fully exploit the pre-exascale and exascale parallel machines targeted by this project, our primary concern will be weak scalability, i.e., maintaining performance per element as the number of model elements and processors grows proportionally. We will also target strong scaling, i.e., the best possible speedup for a fixed problem size as the number of processors increases.
Weak scalability for reaction-diffusion problems is typically achieved with multilevel preconditioners, such as multigrid (MG) and domain decomposition (DD) preconditioners. We will design, implement, and validate novel preconditioners based on recent advances in one-level and multilevel methods. Our strategy will be to explore both several multiplicative levels (as in MG methods) and a few additive levels (as in DD methods). Our preconditioners will be tailored to the specific choices of space-time discretization (WP3) and iterative solvers (WP4).
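Continuing the Ginkgo sketch shown under WP4 (same includes, executor exec, matrix A, and cg alias), the fragment below attaches a one-level block-Jacobi preconditioner in the spirit of Task 5.1 to the CG solver; the block size is illustrative. The DD and MG preconditioners of Tasks 5.2 and 5.3 would plug into the same with_preconditioner() slot.

using bj = gko::preconditioner::Jacobi<double, int>;

auto preconditioned_solver =
    cg::build()
        .with_preconditioner(bj::build()
                                 .with_max_block_size(16u)  // illustrative block size
                                 .on(exec))
        .with_criteria(
            gko::stop::Iteration::build().with_max_iters(1000u).on(exec),
            gko::stop::ResidualNorm<double>::build()
                .with_reduction_factor(1e-8)
                .on(exec))
        .on(exec)
        ->generate(A);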
Task 5.1 One-level preconditioners
Task 5.2 DD preconditioners
Task 5.3 MG preconditioners
Task 5.4 Compressed communication
WP 6 Code generation for heterogeneous architectures
High-performance computers rely more and more on innovative techniques such as vector units and massive but functionally limited parallelism. To obtain optimal performance from these systems it is necessary to write specially adapted code. In this work package we will develop tools that can write such code automatically, based on higher-level descriptions of the functionality.
leaders: Vincent Loechner, Denis Barthou
faculty: Xing Cai, Bérenger Bramas, Stéphane Genaud, Amina Guermouche
staff: Tiago Trevisan Jost, Arun Thangamani, Kristian Hustad, Raphaël Colin, Vincent Alba, Thai Hoa Trinh
This work package will build a bridge from a high-level model representation that is convenient for ionic-model experts to an optimized implementation that exploits both the resources of the target architecture and the properties of the scientific problem (computation patterns, resilience to approximation). We will design a compiler infrastructure that translates an equational formulation, extended with domain-specific information, into code that is efficient in both execution time and energy dissipation. Our objectives are (1) to maximize productivity by allowing users to express the scientific problem rather than the implementation details and (2) to maximize flexibility. Our strategy is to rely on a dedicated compiler front end and on new research extending state-of-the-art code generation and runtime techniques to optimize the application both statically and dynamically.
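As a hypothetical illustration of the input and output of such a toolchain, the sketch below pairs a made-up two-variable ionic model in equational form (shown as a comment) with the kind of structure-of-arrays update loop a code generator could emit; neither the model nor the loop is actual MICROCARD code.

// Hypothetical equational input to the compiler front end:
//     dv/dt = -w * v + stim
//     dw/dt = eps * (v - w)
#include <cstddef>
#include <vector>

// Generated-style update: one structure-of-arrays step per cell. Each
// iteration is independent, so the loop maps directly onto SIMD lanes,
// OpenMP threads, or GPU work items.
void step(std::vector<double>& v, std::vector<double>& w,
          double stim, double eps, double dt) {
    const std::size_t n = v.size();
    for (std::size_t i = 0; i < n; ++i) {
        const double dv = -w[i] * v[i] + stim;
        const double dw = eps * (v[i] - w[i]);
        v[i] += dt * dv;
        w[i] += dt * dw;
    }
}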
Task 6.1 Compilation scheme for equational formulation
Task 6.2 Parallel source code generation
Task 6.3 Adaptive code optimization
Task 6.4 Runtime resources and heterogeneity management
WP 7 Mesh generation
The simulations that our software is to perform will require exceptionally large meshes. In this work package we will develop adapted segmentation tools and a high-performance version of the parMMG meshing software, using the same task-based parallelization methods as for the simulation code itself.
leaders: Algiane Froehly, Luca Antiga
faculty: Raffaella Guglielmann, Nicolas Barral, Luca Cirrottola, Alessandro Chiarini
staff: Laetitia Mottet, Corentin Prigent
This work package will provide the tools to create the extremely large meshes needed for the macroscopic cell-by-cell models in the MICROCARD project. We will develop a new version of the parMMG meshing software that uses a task-based parallelization paradigm, builds on the same tools and expertise as the simulation software in the project, and is able to run on large heterogeneous supercomputers. We will further develop the automated image segmentation and mesh construction toolchain that will be needed for the use cases.
Task 7.1 Segmentation and mesh preparation
Task 7.2 Parallelization of meshing software
WP 8 Use cases
In this work package the tools that are developed in the project will be road-tested on four different use cases, involving pathologies such as atrial fibrillation and myocardial infarction.
leaders: Hermenegild Arevalo, Edward Vigmond
faculty: Axel Loewe, Mark Potse
staff: Kristian Hustad, Joshua Steyer
The purpose of this work package is to test the developed modeling technologies in the most realistic and demanding way, by using them for cardiological research projects in the hands of researchers who are experienced in applied modeling studies, in collaboration with cardiologists or physiologists.
Task 8.1 Role of heterogeneous re-modeling during ischemia in promoting arrhythmias
Task 8.2 The effect of microscopic tissue structure on clinical electrograms
Task 8.3 Relation between macroscopic conduction properties and microstructural defects
Task 8.4 The effects of micro- and macro-structure on atrial fibrillation
Task 8.5 Arrhythmia in idiopathic cardiomyopathies
WP 9 Project outreach: dissemination and exploitation
This work package concentrates our efforts to share what we learn and develop during the project with different audiences, ranging from specialists in various domains to students at different levels, and interested citizens.
leaders: Mark Potse, Axel Loewe
project manager: Andréa Alexander
faculty: Raffaella Guglielmann
This work package aims at ensuring that what we develop reaches both the direct user and end user communities, that results are disseminated widely, and that they are exploited and sustained beyond the project, with proper management of intellectual property and data.
Task 9.1 Engagement with the direct user community
Task 9.2 Engagement with the end user community
Task 9.3 Dissemination activities
Task 9.4 Exploitation & sustainability
Task 9.5 Knowledge management (IP and data)
WP 10 Project management
This final work package covers the scientific and financial coordination, risk management, quality assurance, and reporting of the project.
scientific coordinator: Mark Potse
scientific and technical manager: Yves Coudière
project manager: Andréa Alexander
faculty: Vincent Loechner, Luca Pavarino, Axel Loewe, Hartwig Anzt, Rolf Krause, Xing Cai, Martin Weiser, Luca Antiga
This work package will ensure that MICROCARD achieves its stated objectives within the given resources and time schedule. The goals are to coordinate the scientific work, assure quality and monitor risks, handle project management and reporting to the EC, and maintain consistent overall communication and branding.
Task 10.1 Scientific coordination, quality assurance and risk monitoring
Task 10.2 Project management and EC reporting
Task 10.3 Overall project communication and branding
WP3/5 meeting in Berlin
On 16 and 17 March, participants in work packages 3 and 5 met at the Zuse Institute in Berlin to discuss the state of research and software development on space and time discretizations as well as preconditioners. Concrete implementation aspects were also discussed.
Mmg release 5.7.0
Mmg version 5.7.0 was released in December, including features that are crucial for the MICROCARD project, such as support for 64-bit integers and several improvements to the isosurfacing functionality.
MICROCARD Periodic Report submitted
On 30 November we submitted the first of our two Periodic Reports to EuroHPC. This report covers the first 18 months of the project.
International Supercomputing Conference (ISC'23) in Hamburg
Monday 22 May
15:30 WP1 meeting
5th openCARP User Meeting on Computational Modeling of Cardiac Electrophysiology, in Karlsruhe
Friday 25 May
10:30 WP2+6 meeting