Parallel data redistribution of Tpetra objects
The Tpetra_Lesson02_Vector example introduces and describes Tpetra's Map class, which is Tpetra's representation of a data distribution. This example builds on that lesson by showing how to use Maps and Tpetra's Export class to redistribute data. In this case, we build a sparse matrix on a single MPI process, then redistribute it to a sparse matrix stored in block row fashion, with an equal number of rows per process.
Tpetra's Map class describes a data distribution over one or more distributed-memory parallel processes. It "maps" global indices (unique labels for the elements of a data structure) to parallel processes. This ability to describe a data distribution calls for a redistribution capability: a way to reorganize or remap data from one distribution to another. Tpetra provides this capability through the Import and Export classes.
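As a sketch of what that looks like in code (the sizes and variable names here are illustrative, not taken from the lesson's source), the following creates two Maps that describe the same 100 global indices under two different distributions: one that places every index on process 0, and one that spreads the indices contiguously and (nearly) evenly over all processes.

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>
#include <Teuchos_RCP.hpp>

int main (int argc, char* argv[]) {
  // ScopeGuard initializes MPI and Kokkos, and finalizes them at scope exit.
  Tpetra::ScopeGuard tpetraScope (&argc, &argv);
  {
    using map_type = Tpetra::Map<>;
    using GO = map_type::global_ordinal_type;

    auto comm = Tpetra::getDefaultComm ();
    const Tpetra::global_size_t numGlobal = 100; // illustrative size
    const GO indexBase = 0;

    // Distribution 1: all 100 global indices live on process 0.
    const size_t numLocal =
      (comm->getRank () == 0) ? static_cast<size_t> (numGlobal) : 0;
    Teuchos::RCP<const map_type> procZeroMap =
      Teuchos::rcp (new map_type (numGlobal, numLocal, indexBase, comm));

    // Distribution 2: the same indices, distributed contiguously and
    // (nearly) evenly over all processes.
    Teuchos::RCP<const map_type> globalMap =
      Teuchos::rcp (new map_type (numGlobal, indexBase, comm));
  }
  return 0;
}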
Import redistributes from a uniquely owned (one-to-one) Map to a possibly not uniquely owned Map. Export redistributes from a possibly not uniquely owned Map to a uniquely owned Map. We distinguish between these cases both for historical reasons and for performance reasons.
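In code, the distinction is simply which Map plays the source role and which the target. The sketch below uses hypothetical Map names: ownedMap is assumed one-to-one, while overlapMap may assign some global indices to more than one process.

#include <Tpetra_Map.hpp>
#include <Tpetra_Import.hpp>
#include <Tpetra_Export.hpp>
#include <Teuchos_RCP.hpp>

using map_type = Tpetra::Map<>;

void makePlans (const Teuchos::RCP<const map_type>& ownedMap,
                const Teuchos::RCP<const map_type>& overlapMap)
{
  // Import: the source Map must be uniquely owned (one-to-one);
  // the target Map may have overlap.
  Tpetra::Import<> importer (ownedMap, overlapMap);

  // Export: the source Map may have overlap; the target Map must be
  // uniquely owned (one-to-one).
  Tpetra::Export<> exporter (overlapMap, ownedMap);
}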
Import and Export objects encapsulate and remember a communication pattern for reuse. Computing the communication pattern requires nontrivial work, but keeping the Import or Export object around lets you reuse that work. This is very important for operations that are performed frequently, such as the Import and Export operations in Tpetra's sparse matrix-vector multiply.
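A sketch of that reuse pattern (the function and its arguments are hypothetical): build the Export once, then apply it to any number of Vectors whose Maps match its source and target.

#include <Tpetra_Vector.hpp>
#include <Tpetra_Export.hpp>
#include <Tpetra_CombineMode.hpp>
#include <vector>

using vec_type = Tpetra::Vector<>;
using export_type = Tpetra::Export<>;

// Redistribute many Vectors with one precomputed communication pattern.
void redistributeAll (const std::vector<const vec_type*>& sources,
                      const std::vector<vec_type*>& targets,
                      const export_type& exporter)
{
  for (std::size_t k = 0; k < sources.size (); ++k) {
    // Each call reuses the pattern stored in exporter; the setup work
    // is not repeated.
    targets[k]->doExport (*sources[k], exporter, Tpetra::INSERT);
  }
}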
In both cases, Import and Export let the user specify how to combine incoming new data with existing data that has the same global index. For example, one may replace old data with new data or sum them together.
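For example, with a Vector, the CombineMode argument of doExport controls what happens when incoming data meets existing data at the same global index (the function below is just an illustration):

#include <Tpetra_Vector.hpp>
#include <Tpetra_Export.hpp>
#include <Tpetra_CombineMode.hpp>

using vec_type = Tpetra::Vector<>;
using export_type = Tpetra::Export<>;

void combineExamples (const vec_type& src, vec_type& tgt,
                      const export_type& exporter)
{
  // ADD: sum incoming values with the existing values.
  tgt.doExport (src, exporter, Tpetra::ADD);

  // REPLACE: overwrite the existing values with the incoming values.
  tgt.doExport (src, exporter, Tpetra::REPLACE);
}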
This example shows how to migrate the data in Tpetra objects (sparse matrices and vectors) between two different parallel distributions.
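Putting the pieces together, here is a self-contained sketch in the spirit of this example (not its exact code): build a tridiagonal sparse matrix entirely on process 0, then Export it to a Map with an equal (or nearly equal) number of rows per process. The matrix size and values are illustrative.

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>
#include <Tpetra_CrsMatrix.hpp>
#include <Tpetra_Export.hpp>
#include <Teuchos_RCP.hpp>
#include <Teuchos_Tuple.hpp>

int main (int argc, char* argv[]) {
  Tpetra::ScopeGuard tpetraScope (&argc, &argv);
  {
    using Teuchos::RCP;
    using Teuchos::rcp;
    using Teuchos::tuple;
    using map_type = Tpetra::Map<>;
    using crs_type = Tpetra::CrsMatrix<>;
    using LO = map_type::local_ordinal_type;
    using GO = map_type::global_ordinal_type;
    using SC = crs_type::scalar_type;

    auto comm = Tpetra::getDefaultComm ();
    const Tpetra::global_size_t numGlobal = 50; // illustrative size
    const GO indexBase = 0;

    // Source Map: all rows on process 0.
    const size_t numLocalSrc =
      (comm->getRank () == 0) ? static_cast<size_t> (numGlobal) : 0;
    RCP<const map_type> srcMap =
      rcp (new map_type (numGlobal, numLocalSrc, indexBase, comm));
    // Target Map: rows spread (nearly) evenly over all processes.
    RCP<const map_type> tgtMap =
      rcp (new map_type (numGlobal, indexBase, comm));

    // Fill a tridiagonal matrix; only process 0 owns any rows.
    crs_type A (srcMap, 3);
    for (size_t i = 0; i < numLocalSrc; ++i) {
      const GO row = srcMap->getGlobalElement (static_cast<LO> (i));
      if (row == 0) {
        A.insertGlobalValues (row, tuple<GO> (row, row + 1),
                              tuple<SC> (2.0, -1.0));
      } else if (row == static_cast<GO> (numGlobal) - 1) {
        A.insertGlobalValues (row, tuple<GO> (row - 1, row),
                              tuple<SC> (-1.0, 2.0));
      } else {
        A.insertGlobalValues (row, tuple<GO> (row - 1, row, row + 1),
                              tuple<SC> (-1.0, 2.0, -1.0));
      }
    }
    A.fillComplete ();

    // Redistribute: Export from the process-0 distribution to the
    // block row distribution, then complete the target matrix.
    Tpetra::Export<> exporter (srcMap, tgtMap);
    crs_type B (tgtMap, 3);
    B.doExport (A, exporter, Tpetra::INSERT);
    B.fillComplete ();
  }
  return 0;
}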