Tpetra parallel linear algebra
A parallel distribution of indices over processes.
#include <Tpetra_Map_decl.hpp>
Public Types

Typedefs

using local_ordinal_type = LocalOrdinal
    The type of local indices.
using global_ordinal_type = GlobalOrdinal
    The type of global indices.
using device_type = typename Node::device_type
    This class' Kokkos::Device specialization.
using execution_space = typename device_type::execution_space
    The Kokkos execution space.
using memory_space = typename device_type::memory_space
    The Kokkos memory space.
using node_type = Node
    Legacy typedef that will go away at some point.
using local_map_type = ::Tpetra::Details::LocalMap< local_ordinal_type, global_ordinal_type, device_type >
    Type of the "local" Map.
Public Member Functions

Constructors and destructor

Map (const global_size_t numGlobalElements, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm, const LocalGlobal lg=GloballyDistributed)
    Constructor with contiguous uniform distribution.
Map (const global_size_t numGlobalElements, const size_t numLocalElements, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Constructor with contiguous, possibly nonuniform distribution.
Map (const global_size_t numGlobalElements, const Kokkos::View< const global_ordinal_type *, device_type > &indexList, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Constructor with arbitrary (possibly noncontiguous and/or nonuniform and/or overlapping) distribution, taking the input global indices as a Kokkos::View.
Map (const global_size_t numGlobalElements, const global_ordinal_type indexList[], const local_ordinal_type indexListSize, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Constructor with arbitrary (possibly noncontiguous and/or nonuniform and/or overlapping) distribution, taking the input global indices as a raw host pointer.
Map (const global_size_t numGlobalElements, const Teuchos::ArrayView< const global_ordinal_type > &indexList, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Constructor with arbitrary (possibly noncontiguous and/or nonuniform and/or overlapping) distribution, taking the input global indices as a Teuchos::ArrayView (for backwards compatibility).
Map ()
    Default constructor (that does nothing).
Map (const Map< local_ordinal_type, global_ordinal_type, node_type > &) = default
    Copy constructor (shallow copy).
Map (Map< local_ordinal_type, global_ordinal_type, node_type > &&) = default
    Move constructor (shallow move).
Map & operator= (const Map< local_ordinal_type, global_ordinal_type, node_type > &) = default
    Copy assignment (shallow copy).
Map & operator= (Map< local_ordinal_type, global_ordinal_type, node_type > &&) = default
    Move assignment (shallow move).
virtual ~Map ()
    Destructor (virtual for memory safety of derived classes).
Boolean tests

bool isNodeLocalElement (local_ordinal_type localIndex) const
    Whether the given local index is valid for this Map on the calling process.
bool isNodeGlobalElement (global_ordinal_type globalIndex) const
    Whether the given global index is owned by this Map on the calling process.
bool isUniform () const
    Whether the range of global indices is uniform.
bool isContiguous () const
    True if this Map is distributed contiguously, else false.
bool isDistributed () const
    Whether this Map is globally distributed or locally replicated.
bool isCompatible (const Map< local_ordinal_type, global_ordinal_type, Node > &map) const
    True if and only if map is compatible with this Map.
bool isSameAs (const Map< local_ordinal_type, global_ordinal_type, Node > &map) const
    True if and only if map is identical to this Map.
bool locallySameAs (const Map< local_ordinal_type, global_ordinal_type, node_type > &map) const
    Is this Map locally the same as the input Map?
bool isLocallyFitted (const Map< local_ordinal_type, global_ordinal_type, Node > &map) const
    True if and only if map is locally fitted to this Map.
Teuchos::RCP< const Teuchos::Comm< int > > getComm () const
    Accessors for the Teuchos::Comm and Kokkos Node objects.
std::string description () const
    Implementation of Teuchos::Describable.
void describe (Teuchos::FancyOStream &out, const Teuchos::EVerbosityLevel verbLevel=Teuchos::Describable::verbLevel_default) const
    Describe this object in a human-readable way to the given output stream.
Teuchos::RCP< const Map< local_ordinal_type, global_ordinal_type, Node > > removeEmptyProcesses () const
    Advanced methods.
Teuchos::RCP< const Map< local_ordinal_type, global_ordinal_type, Node > > replaceCommWithSubset (const Teuchos::RCP< const Teuchos::Comm< int > > &newComm) const
    Replace this Map's communicator with a subset communicator.
Related Functions

(Note that these are not member functions.)

template<class LocalOrdinal, class GlobalOrdinal>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal > > createLocalMap (const size_t numElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Nonmember constructor for a locally replicated Map with the default Kokkos Node.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createLocalMapWithNode (const size_t numElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Nonmember constructor for a locally replicated Map with a specified Kokkos Node.
template<class LocalOrdinal, class GlobalOrdinal>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal > > createUniformContigMap (const global_size_t numElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Non-member constructor for a uniformly distributed, contiguous Map with the default Kokkos Node.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createUniformContigMapWithNode (const global_size_t numElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Non-member constructor for a uniformly distributed, contiguous Map with a user-specified Kokkos Node.
template<class LocalOrdinal, class GlobalOrdinal>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal > > createContigMap (const global_size_t numElements, const size_t localNumElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Non-member constructor for a (potentially) non-uniformly distributed, contiguous Map using the default Kokkos::Device.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createContigMapWithNode (const global_size_t numElements, const size_t localNumElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Nonmember constructor for a (potentially) nonuniformly distributed, contiguous Map for a user-specified, possibly nondefault Kokkos Node type.
template<class LocalOrdinal, class GlobalOrdinal>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal > > createNonContigMap (const Teuchos::ArrayView< const GlobalOrdinal > &elementList, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Nonmember constructor for a non-contiguous Map using the default Kokkos::Device type.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createNonContigMapWithNode (const Teuchos::ArrayView< const GlobalOrdinal > &elementList, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
    Nonmember constructor for a noncontiguous Map with a user-specified, possibly nondefault Kokkos Node type.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createOneToOne (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &M)
    Nonmember constructor for a one-to-one version of the given Map, in which each global index lives on only one process.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createOneToOne (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &M, const ::Tpetra::Details::TieBreak< LocalOrdinal, GlobalOrdinal > &tie_break)
    Creates a one-to-one version of the given Map where each GID lives on only one process. The given TieBreak object specifies the rule to break ties.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
bool operator== (const Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node > &map1, const Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node > &map2)
    True if map1 is the same as (in the sense of isSameAs()) map2, else false.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
bool operator!= (const Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node > &map1, const Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node > &map2)
    True if map1 is not the same as (in the sense of isSameAs()) map2, else false.
Attributes

bool isOneToOne () const
    Whether the Map is one to one.
global_size_t getGlobalNumElements () const
    The number of elements in this Map.
size_t getLocalNumElements () const
    The number of elements belonging to the calling process.
global_ordinal_type getIndexBase () const
    The index base for this Map.
local_ordinal_type getMinLocalIndex () const
    The minimum local index.
local_ordinal_type getMaxLocalIndex () const
    The maximum local index on the calling process.
global_ordinal_type getMinGlobalIndex () const
    The minimum global index owned by the calling process.
global_ordinal_type getMaxGlobalIndex () const
    The maximum global index owned by the calling process.
global_ordinal_type getMinAllGlobalIndex () const
    The minimum global index over all processes in the communicator.
global_ordinal_type getMaxAllGlobalIndex () const
    The maximum global index over all processes in the communicator.
local_ordinal_type getLocalElement (global_ordinal_type globalIndex) const
    The local index corresponding to the given global index.
global_ordinal_type getGlobalElement (local_ordinal_type localIndex) const
    The global index corresponding to the given local index.
local_map_type getLocalMap () const
    Get the LocalMap for Kokkos-Kernels.
LookupStatus getRemoteIndexList (const Teuchos::ArrayView< const global_ordinal_type > &GIDList, const Teuchos::ArrayView< int > &nodeIDList, const Teuchos::ArrayView< local_ordinal_type > &LIDList) const
    Return the process ranks and corresponding local indices for the given global indices.
LookupStatus getRemoteIndexList (const Teuchos::ArrayView< const global_ordinal_type > &GIDList, const Teuchos::ArrayView< int > &nodeIDList) const
    Return the process ranks for the given global indices.
global_indices_array_type getMyGlobalIndices () const
    Return a view of the global indices owned by this process.
global_indices_array_device_type getMyGlobalIndicesDevice () const
    Return a view of the global indices owned by this process on the Map's device.
Teuchos::ArrayView< const global_ordinal_type > getLocalElementList () const
    Return a NONOWNING view of the global indices owned by this process.
A parallel distribution of indices over processes.
Template Parameters

LocalOrdinal: The type of local indices. Currently, this must be int. (In Epetra, this is always just int.)

GlobalOrdinal: The type of global indices. This must be a built-in integer type. We allow either signed or unsigned types here, but prefer signed types. Also, we require that GlobalOrdinal be no smaller than LocalOrdinal, that is, sizeof(GlobalOrdinal) >= sizeof(LocalOrdinal). If LocalOrdinal is int, good models of GlobalOrdinal are int and larger built-in integer types such as long long. If GlobalOrdinal is int, then the global number of rows or columns in the matrix may be no more than INT_MAX, which for a typical 32-bit int is 2^31 - 1 (about two billion). If you want to solve larger problems, you must use a 64-bit integer type here.

Node: A class implementing on-node shared-memory parallel operations. The default Node type should suffice for most users. The actual default type depends on your Trilinos build options, and the type you use must be one of the Kokkos Node types that your Trilinos build supports.
This class describes a distribution of data elements over one or more processes in a communicator. Each element has a global index (of type GlobalOrdinal) uniquely associated to it. Each global index in the Map is "owned" by one or more processes in the Map's communicator. The user gets to decide what an "element" means; examples include a row or column of a sparse matrix (as in CrsMatrix), or a row of one or more vectors (as in MultiVector).
Before reading the rest of this documentation, it helps to know something about MPI (the Message Passing Interface). You will not need to use MPI directly to use Map, but it helps to be familiar with the general idea of distributed storage of data over a communicator.
The distinction between local and global indices and types might confuse new Tpetra users. Global indices represent the elements of a distributed object (such as rows or columns of a CrsMatrix, or rows of a MultiVector) uniquely over the entire object, which may be distributed over multiple processes. Local indices are local to the process that owns them. If global index G is owned by process P, then there is a unique local index L on process P corresponding to G. If the local index L is valid on process P, then there is a unique global index G owned by P corresponding to the pair (L, P). However, multiple processes might own the same global index (an "overlapping Map"), so a global index G might correspond to multiple (L, P) pairs. In summary, local indices on a process correspond to object elements (e.g., sparse matrix rows or columns) owned by that process.
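The relationship between local and global indices can be seen directly through Map's index conversion methods. The following is a minimal, self-contained sketch; the problem size and index base are illustrative only, and a default Tpetra build (default LocalOrdinal, GlobalOrdinal, and Node types) is assumed.

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>

int main (int argc, char* argv[])
{
  Tpetra::ScopeGuard tpetraScope (&argc, &argv);  // initializes MPI and Kokkos
  {
    using map_type = Tpetra::Map<>;               // default LO, GO, Node
    using LO = map_type::local_ordinal_type;
    using GO = map_type::global_ordinal_type;

    auto comm = Tpetra::getDefaultComm ();
    map_type map (100, 0, comm);   // global indices 0..99, distributed evenly

    // Local index 0 corresponds to the smallest global index owned by the
    // calling process, and the correspondence is invertible on that process.
    const GO g = map.getGlobalElement (0);
    const LO l = map.getLocalElement (g);   // l == 0

    // A global index owned only by some other process has no local index
    // here; getLocalElement would then return
    // Teuchos::OrdinalTraits<LO>::invalid().
    (void) l;
  }
  return 0;
}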
Tpetra differs from Epetra in that local and global indices may have different types. In Epetra, local and global indices both have type int. In Tpetra, you get to pick the type of each. For example, you can use a 64-bit integer GlobalOrdinal type to solve problems with more than 2^31 (about two billion) unknowns, but a 32-bit integer LocalOrdinal type to save bandwidth in sparse matrix-vector multiply.
A contiguous Map divides an interval of global indices over the processes in its communicator, such that each process gets a contiguous interval of zero or more of those global indices, with the indices owned by a process p strictly greater than those owned by a process q if p > q. Formally, we call a Map contiguous when the set of its global indices forms an interval, each global index in that interval is owned by exactly one process, the global indices on each process form a contiguous subinterval, and those subintervals are ordered by process rank.
Different processes may own different numbers of global indices. We call a Map uniform if it is contiguous, and if the user let the Map divide a global count of indices evenly over the Map's communicator's processes. The latter happens by calling the version of Map's constructor that takes a global count of indices, rather than a local count or an arbitrary list of indices.
Map optimizes for the contiguous case. For example, noncontiguous Maps always require communication in order to figure out which process owns a particular global index. (This communication happens in getRemoteIndexList().) Contiguous but nonuniform Maps may also require communication in this case, though we may only need to perform that communication once (at Map setup time). Contiguous Maps also can convert between global and local indices more efficiently.
Globally distributed means that both of the following are true: (1) the Map's communicator has more than one process, and (2) there is at least one process in that communicator whose local number of indices is less than the global number of indices (that is, the Map's indices are not all replicated on every process).
If at least one of these conditions does not hold, then the map is locally replicated. (The two are mutually exclusive.)
Globally distributed objects are partitioned across multiple processes in a communicator. Each process owns at least one element in the object's Map that is not owned by another process. For locally replicated objects, each element in the object's Map is owned redundantly by all processes in the object's communicator. Some algorithms use objects that are too small to be distributed across all processes. The upper Hessenberg matrix in a GMRES iterative solve is a good example. In other cases, such as with block iterative methods, block dot product functions produce small dense matrices that are required by all processes. Replicated local objects handle these situations.
Definition at line 198 of file Tpetra_Map_decl.hpp.
using Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::local_ordinal_type = LocalOrdinal |
The type of local indices.
Definition at line 204 of file Tpetra_Map_decl.hpp.
using Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::global_ordinal_type = GlobalOrdinal |
The type of global indices.
Definition at line 207 of file Tpetra_Map_decl.hpp.
using Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::device_type = typename Node::device_type |
This class' Kokkos::Device specialization.
A Kokkos::Device is an (execution_space, memory_space) pair. It defines where the Map's data live, and where Map might choose to execute parallel kernels.
Definition at line 214 of file Tpetra_Map_decl.hpp.
using Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::execution_space = typename device_type::execution_space |
The Kokkos execution space.
Definition at line 217 of file Tpetra_Map_decl.hpp.
using Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::memory_space = typename device_type::memory_space |
The Kokkos memory space.
Definition at line 220 of file Tpetra_Map_decl.hpp.
using Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::node_type = Node |
Legacy typedef that will go away at some point.
Definition at line 223 of file Tpetra_Map_decl.hpp.
using Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::local_map_type = ::Tpetra::Details::LocalMap<local_ordinal_type, global_ordinal_type, device_type> |
Type of the "local" Map.
The "local" Map is suitable for use in Kokkos parallel operations in the Map's native execution space, which is device_type::execution_space
.
By "local," we mean that the object performs no MPI communication, and can only access information that would never need MPI communication, no matter what kind of Map this is.
Definition at line 241 of file Tpetra_Map_decl.hpp.
Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::Map (const global_size_t numGlobalElements, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm, const LocalGlobal lg = GloballyDistributed)
Constructor with contiguous uniform distribution.
Build a Map representing the following contiguous range of numGlobalElements indices: [indexBase, indexBase + numGlobalElements - 1]. For example, if indexBase is 0 and numGlobalElements is N and positive, the resulting contiguous range is [0, N-1].
The lg argument determines whether the indices will be distributed evenly over all the processes in the given communicator comm, or replicated on all processes in the communicator. "Distributed evenly" (the default) means that each process gets a contiguous range of either numGlobalElements / P or (numGlobalElements / P) + 1 indices. The resulting Map is nonoverlapping. "Replicated" means that every process shares the range [0, N-1]; the resulting Map is an overlapping Map.
This constructor must be called as a collective over the input communicator. It reserves the right to use MPI collectives to check input values in a debug build. If it does check and any check fails, it will throw std::invalid_argument on all processes in the given communicator.
numGlobalElements | [in] Global number of indices in the Map (over all processes). |
indexBase | [in] The base of the global indices in the Map. This must be the same on every process in the communicator. The index base will also be the smallest global index in the Map. (If you don't know what this should be, use zero.) |
lg | [in] Either GloballyDistributed or LocallyReplicated . If GloballyDistributed and the communicator contains P processes, then each process will own either numGlobalElements/P or numGlobalElements/P + 1 nonoverlapping contiguous indices. If LocallyReplicated , then all processes will get the same set of indices, namely indexBase, indexBase + 1, ..., numGlobalElements + indexBase - 1 . |
comm | [in] Communicator over which to distribute the indices. |
Definition at line 162 of file Tpetra_Map_def.hpp.
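A minimal sketch of this constructor follows. It assumes MPI and Kokkos have already been initialized (for example, via Tpetra::ScopeGuard) and that comm came from Tpetra::getDefaultComm(); the global count is illustrative only.

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>

void makeUniformMaps (const Teuchos::RCP<const Teuchos::Comm<int> >& comm)
{
  using map_type = Tpetra::Map<>;                  // default LO, GO, Node
  using GO = map_type::global_ordinal_type;

  const Tpetra::global_size_t numGlobal = 1000;
  const GO indexBase = 0;

  // Distribute [0, 999] evenly over all processes in comm (the default).
  map_type distributedMap (numGlobal, indexBase, comm);

  // Replicate the whole range [0, 999] on every process instead.
  map_type replicatedMap (numGlobal, indexBase, comm, Tpetra::LocallyReplicated);

  (void) distributedMap;
  (void) replicatedMap;
}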
Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::Map (const global_size_t numGlobalElements, const size_t numLocalElements, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
Constructor with contiguous, possibly nonuniform distribution.
If N is the sum of numLocalElements over all processes, then this constructor produces a nonoverlapping Map with N indices in the global contiguous range [0, N-1], distributed over all the processes in the given communicator comm, with a contiguous range of numLocalElements indices on the calling process.
This constructor must be called as a collective over the input communicator. It reserves the right to use MPI collectives to check input values in a debug build. If it does check and any check fails, it will throw std::invalid_argument on all processes in the given communicator.
numGlobalElements | [in] If you want Tpetra to compute the global number of indices in the Map, set this to Teuchos::OrdinalTraits<Tpetra::global_size_t>::invalid(). This costs a global all-reduce. Otherwise, this must equal the sum of numLocalElements over all processes in the input communicator comm . |
numLocalElements | [in] Number of indices that the calling process will own in the Map. |
indexBase | [in] The base of the global indices in the Map. This must be the same on every process in the given communicator. For this Map constructor, the index base will also be the smallest global index in the Map. If you don't know what this should be, use zero. |
comm | [in] Communicator over which to distribute the elements. |
Definition at line 337 of file Tpetra_Map_def.hpp.
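The sketch below exercises this constructor with the "compute the global count for me" sentinel described above; the per-process counts are illustrative only, and an initialized MPI/Kokkos environment is assumed.

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>
#include <Teuchos_OrdinalTraits.hpp>

Teuchos::RCP<const Tpetra::Map<> >
makeNonuniformContigMap (const Teuchos::RCP<const Teuchos::Comm<int> >& comm)
{
  using map_type = Tpetra::Map<>;
  using GO = map_type::global_ordinal_type;

  // Give even-ranked processes more indices than odd-ranked ones.
  const size_t numLocal = (comm->getRank () % 2 == 0) ? 15 : 5;

  // Ask Tpetra to compute the global count (a sum over all processes).
  const Tpetra::global_size_t INV =
    Teuchos::OrdinalTraits<Tpetra::global_size_t>::invalid ();
  const GO indexBase = 0;

  return Teuchos::rcp (new map_type (INV, numLocal, indexBase, comm));
}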
Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::Map (const global_size_t numGlobalElements, const Kokkos::View< const global_ordinal_type *, device_type > &indexList, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
Constructor with arbitrary (possibly noncontiguous and/or nonuniform and/or overlapping) distribution, taking the input global indices as a Kokkos::View.
Call this constructor if you have an arbitrary list of global indices for each process in the given communicator. Those indices need not be contiguous, and the sets of global indices on different processes may overlap. This is one of the constructors to use to make a general, possibly overlapping distribution.
This constructor, like all Map constructors, must be called as a collective over the input communicator. It reserves the right to use MPI collectives to check input values in a debug build. If it does check and any check fails, it will throw std::invalid_argument on all processes in the given communicator.
numGlobalElements | [in] If numGlobalElements == Teuchos::OrdinalTraits<Tpetra::global_size_t>::invalid() , the number of global elements will be computed (via a global communication) as the sum of the counts of local indices. Otherwise, it must equal the sum of the number of indices on each process, over all processes in the given communicator, and must be the same on all processes in the communicator. |
indexList | [in] List of global indices owned by the calling process. (This likely differs on different processes.) |
indexBase | [in] The base of the global indices in the Map. This must be the same on every process in the given communicator comm . Currently, Map requires that this equal the global minimum index over all processes' entryList inputs. |
comm | [in] Communicator over which to distribute the indices. This constructor must be called as a collective over this communicator. |
Definition at line 944 of file Tpetra_Map_def.hpp.
Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::Map (const global_size_t numGlobalElements, const global_ordinal_type indexList[], const local_ordinal_type indexListSize, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
Constructor with arbitrary (possibly noncontiguous and/or nonuniform and/or overlapping) distribution, taking the input global indices as a raw host pointer.
Call this constructor if you have an arbitrary list of global indices for each process in the given communicator. Those indices need not be contiguous, and the sets of global indices on different processes may overlap. This is one of the constructors to use to make a general, possibly overlapping distribution.
This constructor, like all Map constructors, must be called as a collective over the input communicator. It reserves the right to use MPI collectives to check input values in a debug build. If it does check and any check fails, it will throw std::invalid_argument on all processes in the given communicator.
numGlobalElements | [in] If numGlobalElements == Teuchos::OrdinalTraits<Tpetra::global_size_t>::invalid() , the number of global elements will be computed (via a global communication) as the sum of the counts of local elements. Otherwise, it must equal the sum of the local elements over all processes. This value must be the same on all processes participating in the call. |
indexList | [in] List of global indices owned by the calling process. |
indexListSize | [in] Number of valid entries in indexList. This is a local_ordinal_type because the number of indices owned by each process must fit in local_ordinal_type. |
indexBase | [in] The base of the global indices in the Map. This must be the same on every process in the given communicator. Currently, Map requires that this equal the global minimum index over all processes' indexList inputs. |
comm | [in] Communicator over which to distribute the elements. |
Definition at line 849 of file Tpetra_Map_def.hpp.
Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::Map (const global_size_t numGlobalElements, const Teuchos::ArrayView< const global_ordinal_type > &indexList, const global_ordinal_type indexBase, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)
Constructor with arbitrary (possibly noncontiguous and/or nonuniform and/or overlapping) distribution, taking the input global indices as a Teuchos::ArrayView (for backwards compatibility).
Call this constructor if you have an arbitrary list of global indices for each process in the given communicator. Those indices need not be contiguous, and the sets of global indices on different processes may overlap. This is one of the constructors to use to make a general, possibly overlapping distribution.
This constructor, like all Map constructors, must be called as a collective over the input communicator. It reserves the right to use MPI collectives to check input values in a debug build. If it does check and any check fails, it will throw std::invalid_argument on all processes in the given communicator.
numGlobalElements | [in] If numGlobalElements == Teuchos::OrdinalTraits<Tpetra::global_size_t>::invalid() , the number of global elements will be computed (via a global communication) as the sum of the counts of local elements. Otherwise, it must equal the sum of the number of indices on each process, over all processes in the given communicator, and must be the same on all processes in the communicator. |
indexList | [in] List of global indices owned by the calling process. (This likely differs on different processes.) |
indexBase | [in] The base of the global indices in the Map. This must be the same on every process in the given communicator. Currently, Map requires that this equal the global minimum index over all processes' indexList inputs. |
comm | [in] Communicator over which to distribute the indices. This constructor must be called as a collective over this communicator. |
Definition at line 896 of file Tpetra_Map_def.hpp.
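The following sketch builds an overlapping, noncontiguous distribution with this constructor. The index pattern is illustrative only: each process owns four consecutive global indices and shares its last one with the next process, so neighboring processes overlap.

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>
#include <Teuchos_Array.hpp>
#include <Teuchos_OrdinalTraits.hpp>

Teuchos::RCP<const Tpetra::Map<> >
makeOverlappingMap (const Teuchos::RCP<const Teuchos::Comm<int> >& comm)
{
  using map_type = Tpetra::Map<>;
  using GO = map_type::global_ordinal_type;

  const GO start = static_cast<GO> (3 * comm->getRank ());
  Teuchos::Array<GO> myGids;
  for (GO g = start; g <= start + 3; ++g) {
    myGids.push_back (g);   // indices start..start+3; start+3 is shared
  }

  // indexBase must equal the global minimum index, which is 0 on Process 0.
  const GO indexBase = 0;
  const Tpetra::global_size_t INV =
    Teuchos::OrdinalTraits<Tpetra::global_size_t>::invalid ();
  return Teuchos::rcp (new map_type (INV, myGids (), indexBase, comm));
}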
Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::Map ()
Default constructor (that does nothing).
This creates an empty Map, with 0 (zero) indices total. The Map's communicator only includes the calling process; in MPI terms, it behaves like MPI_COMM_SELF.
This constructor exists mainly to support view semantics of Map. That is, we can create an empty Map, and then assign a nonempty Map to it using operator=. This constructor is also useful in methods like removeEmptyProcesses(), where we have the information to initialize the Map more efficiently ourselves, without going through one of the three usual Map construction paths.
Definition at line 139 of file Tpetra_Map_def.hpp.
Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::Map (const Map< local_ordinal_type, global_ordinal_type, node_type > &) = default
Copy constructor (shallow copy).
Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::Map (Map< local_ordinal_type, global_ordinal_type, node_type > &&) = default
Move constructor (shallow move).
virtual Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::~Map ()
Destructor (virtual for memory safety of derived classes). Map relies on the compiler-generated =default declarations for copy construction, move construction, copy assignment, and move assignment. Definition at line 1168 of file Tpetra_Map_def.hpp.
Map & Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::operator= (const Map< local_ordinal_type, global_ordinal_type, node_type > &) = default
Copy assignment (shallow copy).
Map & Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::operator= (Map< local_ordinal_type, global_ordinal_type, node_type > &&) = default
Move assignment (shallow move).
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isOneToOne () const
Whether the Map is one to one.
This must be called collectively over all processes in the Map's communicator.
Definition at line 1219 of file Tpetra_Map_def.hpp.
global_size_t Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getGlobalNumElements () const  [inline]
The number of elements in this Map.
Definition at line 537 of file Tpetra_Map_decl.hpp.
size_t Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getLocalNumElements () const  [inline]
The number of elements belonging to the calling process.
Definition at line 546 of file Tpetra_Map_decl.hpp.
global_ordinal_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getIndexBase () const  [inline]
The index base for this Map.
Definition at line 555 of file Tpetra_Map_decl.hpp.
local_ordinal_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getMinLocalIndex () const  [inline]
The minimum local index.
Definition at line 564 of file Tpetra_Map_decl.hpp.
local_ordinal_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getMaxLocalIndex () const  [inline]
The maximum local index on the calling process.
If this process owns no elements, that is, if getLocalNumElements() == 0, then this method returns the same value as Teuchos::OrdinalTraits<local_ordinal_type>::invalid().
Definition at line 578 of file Tpetra_Map_decl.hpp.
global_ordinal_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getMinGlobalIndex () const  [inline]
The minimum global index owned by the calling process.
Definition at line 591 of file Tpetra_Map_decl.hpp.
global_ordinal_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getMaxGlobalIndex () const  [inline]
The maximum global index owned by the calling process.
Definition at line 600 of file Tpetra_Map_decl.hpp.
global_ordinal_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getMinAllGlobalIndex () const  [inline]
The minimum global index over all processes in the communicator.
Definition at line 609 of file Tpetra_Map_decl.hpp.
global_ordinal_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getMaxAllGlobalIndex () const  [inline]
The maximum global index over all processes in the communicator.
Definition at line 618 of file Tpetra_Map_decl.hpp.
LocalOrdinal Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getLocalElement (global_ordinal_type globalIndex) const
The local index corresponding to the given global index.
globalIndex | [in] The global index. |
Definition at line 1234 of file Tpetra_Map_def.hpp.
GlobalOrdinal Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getGlobalElement (local_ordinal_type localIndex) const
The global index corresponding to the given local index.
localIndex | [in] The local index. |
Definition at line 1259 of file Tpetra_Map_def.hpp.
Map< LocalOrdinal, GlobalOrdinal, Node >::local_map_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getLocalMap () const
Get the LocalMap for Kokkos-Kernels.
Definition at line 1311 of file Tpetra_Map_def.hpp.
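The sketch below uses the local Map inside a Kokkos kernel to count, on the calling process, how many owned global indices are even. The LocalMap member function used on the device (getGlobalElement) is an assumption based on Tpetra::Details::LocalMap's device interface; check that class' documentation for your Tpetra version.

#include <Tpetra_Map.hpp>
#include <Kokkos_Core.hpp>

long long countEvenGids (const Tpetra::Map<>& map)
{
  using map_type   = Tpetra::Map<>;
  using LO         = map_type::local_ordinal_type;
  using exec_space = map_type::execution_space;

  auto lclMap = map.getLocalMap ();   // small, copyable, usable on device
  const LO numLocal = static_cast<LO> (map.getLocalNumElements ());

  long long numEven = 0;
  Kokkos::parallel_reduce (
    "CountEvenGids",
    Kokkos::RangePolicy<exec_space, Kokkos::IndexType<LO> > (0, numLocal),
    KOKKOS_LAMBDA (const LO lid, long long& count) {
      if (lclMap.getGlobalElement (lid) % 2 == 0) {
        ++count;
      }
    }, numEven);
  return numEven;   // per-process count; no MPI communication
}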
LookupStatus Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getRemoteIndexList (const Teuchos::ArrayView< const global_ordinal_type > &GIDList, const Teuchos::ArrayView< int > &nodeIDList, const Teuchos::ArrayView< local_ordinal_type > &LIDList) const
Return the process ranks and corresponding local indices for the given global indices.
This operation must always be called as a collective over all processes in the Map's communicator. For a distributed noncontiguous Map, this operation requires communication.
GIDList | [in] List of global indices for which to find process ranks and local indices. These global indices need not be owned by the calling process. Indeed, they need not be owned by any process. |
nodeIDList | [out] List of process ranks corresponding to the given global indices. If a global index does not belong to any process, the resulting process rank is -1. |
LIDList | [out] List of local indices (that is, the local index on the process that owns them) corresponding to the given global indices. If a global index does not have a local index, the resulting local index has the same value as Teuchos::OrdinalTraits<local_ordinal_type>::invalid(). |
Definition at line 2174 of file Tpetra_Map_def.hpp.
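A sketch of a typical lookup follows; it must be called on every process in the Map's communicator, and the queried indices are illustrative only. The LookupStatus enumerator checked at the end (Tpetra::AllIDsPresent) is assumed from Tpetra's lookup-status enum.

#include <Tpetra_Map.hpp>
#include <Teuchos_Array.hpp>

void lookupOwners (const Tpetra::Map<>& map)
{
  using GO = Tpetra::Map<>::global_ordinal_type;
  using LO = Tpetra::Map<>::local_ordinal_type;

  // Global indices to look up; they need not be owned by this process.
  Teuchos::Array<GO> gids;
  gids.push_back (0);
  gids.push_back (map.getMaxAllGlobalIndex ());

  Teuchos::Array<int> owningRanks (gids.size ());
  Teuchos::Array<LO> localIds (gids.size ());

  // For a distributed noncontiguous Map, this performs communication.
  const Tpetra::LookupStatus status =
    map.getRemoteIndexList (gids (), owningRanks (), localIds ());

  if (status != Tpetra::AllIDsPresent) {
    // At least one requested global index is owned by no process; its rank
    // is reported as -1 and its local index as "invalid."
  }
}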
LookupStatus Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getRemoteIndexList (const Teuchos::ArrayView< const global_ordinal_type > &GIDList, const Teuchos::ArrayView< int > &nodeIDList) const
Return the process ranks for the given global indices.
This method must always be called as a collective over all processes in the Map's communicator. For a distributed noncontiguous Map, this operation requires communication.
GIDList | [in] List of global indices for which to find process ranks and local indices. These global indices need not be owned by the calling process. Indeed, they need not be owned by any process. |
nodeIDList | [out] List of process ranks corresponding to the given global indices. If a global index does not belong to any process, the resulting process rank is -1. |
Definition at line 2263 of file Tpetra_Map_def.hpp.
Map< LocalOrdinal, GlobalOrdinal, Node >::global_indices_array_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getMyGlobalIndices () const
Return a view of the global indices owned by this process.
The returned "view" has some type that looks like
Kokkos::View<const global_ordinal_type*, ...>
or Teuchos::ArrayView<const global_ordinal_type>
It implements operator[] and the size() method, and behaves as a one-dimensional array. You may not modify its entries.
auto
instead.If you call this method on a contiguous Map, it will create and cache the list of global indices for later use. Beware of calling this if the calling process owns a very large number of global indices.
Definition at line 1620 of file Tpetra_Map_def.hpp.
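Following the note above about using auto for the return type, here is a small sketch that sums the calling process' global indices; it requires no communication.

#include <Tpetra_Map.hpp>

long long sumOwnedGids (const Tpetra::Map<>& map)
{
  auto myGids = map.getMyGlobalIndices ();   // behaves like a 1-D array

  long long sum = 0;
  for (size_t k = 0; k < static_cast<size_t> (myGids.size ()); ++k) {
    sum += static_cast<long long> (myGids[k]);
  }
  return sum;   // per-process sum only
}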
Map< LocalOrdinal, GlobalOrdinal, Node >::global_indices_array_device_type Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getMyGlobalIndicesDevice () const
Return a view of the global indices owned by this process on the Map's device.
Definition at line 1707 of file Tpetra_Map_def.hpp.
Teuchos::ArrayView< const GlobalOrdinal > Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getLocalElementList () const
Return a NONOWNING view of the global indices owned by this process.
If you call this method on a contiguous Map, it will create and cache the list of global indices for later use. Beware of calling this if the calling process owns a very large number of global indices.
Definition at line 1776 of file Tpetra_Map_def.hpp.
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isNodeLocalElement (local_ordinal_type localIndex) const
Whether the given local index is valid for this Map on the calling process.
Definition at line 1280 of file Tpetra_Map_def.hpp.
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isNodeGlobalElement (global_ordinal_type globalIndex) const
Whether the given global index is owned by this Map on the calling process.
Definition at line 1292 of file Tpetra_Map_def.hpp.
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isUniform () const
Whether the range of global indices is uniform.
This is a conservative quantity. It need only be true if the Map was constructed using the first (uniform contiguous) constructor or a nonmember constructor that calls it. We reserve the right to do more work to check this in the future.
Definition at line 1298 of file Tpetra_Map_def.hpp.
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isContiguous () const
True if this Map is distributed contiguously, else false.
Currently, creating this Map using the constructor for a user-defined arbitrary distribution (that takes a list of global elements owned on each process) means that this method always returns false. We currently make no effort to test whether the user-provided global indices are actually contiguous on all the processes. Many operations may be faster for contiguous Maps. Thus, if you know the indices are contiguous on all processes, you should consider using one of the constructors for contiguous elements.
Definition at line 1303 of file Tpetra_Map_def.hpp.
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isDistributed () const
Whether this Map is globally distributed or locally replicated.
"Globally distributed" means that all of the following are true:
If at least one of the above are not true, then the map is "locally replicated." (The two are mutually exclusive.)
Calling this method requires no communication or computation, because the result is precomputed in Map's constructors.
Definition at line 1798 of file Tpetra_Map_def.hpp.
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isCompatible (const Map< local_ordinal_type, global_ordinal_type, Node > &map) const
True if and only if map is compatible with this Map.
Two Maps are "compatible" if all of the following are true: (1) their communicators have the same number of processes, (2) they have the same global number of indices, and (3) they have the same number of indices on each process.
Determining #3 requires communication (a reduction over this Map's communicator). This method assumes that the input Map is valid on all processes in this Map's communicator.
Compatibility is useful for determining correctness of certain operations, like assigning one MultiVector X to another Y. If X and Y have the same number of columns, and if their Maps are compatible, then it is legal to assign X to Y or to assign Y to X.
The behavior of this method is undefined if the input Map's communicator and this Map's communicator have different numbers of processes. This method must be called collectively over this Map's communicator.
Definition at line 1322 of file Tpetra_Map_def.hpp.
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isSameAs (const Map< local_ordinal_type, global_ordinal_type, Node > &map) const
True if and only if map is identical to this Map.
"Identical" is stronger than "compatible." Two Maps are identical if all of the following are true: (1) their communicators are congruent (that is, MPI_Comm_compare returns MPI_IDENT or MPI_CONGRUENT for the two communicators), (2) they have the same global number of indices, the same index base, and the same minimum and maximum global indices, (3) they have the same number of indices on each process, and (4) they have the same global indices on each process, in the same order. "Identical" (isSameAs()) includes and is stronger than "compatible" (isCompatible()).
A Map corresponds to a block permutation over process ranks and global element indices. Two Maps with different numbers of processes in their communicators cannot be compatible, let alone identical. Two identical Maps correspond to the same permutation.
The behavior of this method is undefined if the input Map's communicator and this Map's communicator have different numbers of processes. This method must be called collectively over this Map's communicator.
Definition at line 1527 of file Tpetra_Map_def.hpp.
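A small sketch contrasting isCompatible() and isSameAs() follows. Both calls are collective; the two Maps are illustrative and share the same communicator, as required.

#include <Tpetra_Core.hpp>
#include <Tpetra_Map.hpp>

void compareMaps (const Teuchos::RCP<const Teuchos::Comm<int> >& comm)
{
  using map_type = Tpetra::Map<>;
  const Tpetra::global_size_t numGlobal = 100;

  map_type uniformMap (numGlobal, 0, comm);      // distributed evenly
  map_type replicatedMap (numGlobal, 0, comm, Tpetra::LocallyReplicated);

  // Same global count, but (on more than one process) different local
  // counts, so the Maps are neither compatible nor the same.
  const bool compat = uniformMap.isCompatible (replicatedMap);
  const bool same   = uniformMap.isSameAs (replicatedMap);
  (void) compat;
  (void) same;
}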
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::locallySameAs (const Map< local_ordinal_type, global_ordinal_type, node_type > &map) const
Is this Map locally the same as the input Map?
"Locally the same" means that on the calling process, the two Maps' global indices are the same and occur in the same order.
Definition at line 1388 of file Tpetra_Map_def.hpp.
bool Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::isLocallyFitted (const Map< local_ordinal_type, global_ordinal_type, Node > &map) const
True if and only if map is locally fitted to this Map.
For two maps, map1 and map2, we say that map1 is locally fitted to map2 (on the calling process), when the indices of map1 (on the calling process) are the same and in the same order as the initial indices of map2. "Fittedness" is entirely a local (per MPI process) property.
The predicate "is map1 fitted to map2 ?" is not symmetric. For example, map2 may have more entries than map1.
Fittedness on a process can let Tpetra avoid deep copies in some Export or Import (communication) operations. Tpetra could use this, for example, in optimizing its sparse matrix-vector multiply.
Definition at line 1475 of file Tpetra_Map_def.hpp.
Teuchos::RCP< const Teuchos::Comm< int > > Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::getComm () const
Accessors for the Teuchos::Comm and Kokkos Node objects.
Get this Map's communicator, as a Teuchos::Comm.
Definition at line 2361 of file Tpetra_Map_def.hpp.
std::string Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::description () const
Implementation of Teuchos::Describable.
Return a one-line description of this object.
Definition at line 1803 of file Tpetra_Map_def.hpp.
void Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::describe (Teuchos::FancyOStream &out, const Teuchos::EVerbosityLevel verbLevel = Teuchos::Describable::verbLevel_default) const
Describe this object in a human-readable way to the given output stream.
You must call this method as a collective over all processes in this object's communicator.
out | [out] Output stream to which to write. Only Process 0 in this object's communicator may write to the output stream. |
verbLevel | [in] Verbosity level. This also controls whether this method does any communication. At verbosity levels higher (greater) than Teuchos::VERB_LOW, this method may behave as a collective over the object's communicator. |
Teuchos::FancyOStream wraps std::ostream. It adds features like tab levels. If you just want to wrap std::cout, try this:
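A typical way to do that is shown below; this reconstructs the snippet elided above, and the verbosity level passed to describe() is just an example.

#include <Teuchos_FancyOStream.hpp>
#include <Tpetra_Map.hpp>
#include <iostream>

void printMap (const Tpetra::Map<>& map)
{
  // Wrap std::cout in a Teuchos::FancyOStream, then describe the Map.
  // Call this on every process in the Map's communicator.
  auto out = Teuchos::getFancyOStream (Teuchos::rcpFromRef (std::cout));
  map.describe (*out, Teuchos::VERB_MEDIUM);
}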
Definition at line 1874 of file Tpetra_Map_def.hpp.
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::removeEmptyProcesses () const
Advanced methods.
Return a new Map with processes with zero elements removed.
This method first computes a new communicator, which contains only those processes in this Map's communicator (the "original communicator") that have a nonzero number of elements in this Map (the "original Map"). It then returns a new Map distributed over the new communicator. The new Map represents the same distribution as the original Map, except that processes containing zero elements are not included in the new Map or its communicator. On processes not included in the new Map or communicator, this method returns Teuchos::null.
The returned Map always has a distinct communicator from this Map's original communicator. The new communicator contains a subset of processes from the original communicator. Even if the number of processes in the new communicator equals the number of processes in the original communicator, the new communicator is distinct. (In an MPI implementation, the new communicator is created using MPI_Comm_split.) Furthermore, a process may have a different rank in the new communicator, so be wary of classes that like to store the rank rather than asking the communicator for it each time.
This method must be called collectively on the original communicator. It leaves the original Map and communicator unchanged.
This method was intended for applications such as algebraic multigrid or other multilevel preconditioners. Construction of each level of the multilevel preconditioner typically requires constructing sparse matrices, which in turn requires all-reduces over all participating processes at that level. Matrix sizes at successively coarser levels shrink geometrically. At the coarsest levels, some processes might be left with zero rows of the matrix, or the multigrid implementation might "rebalance" (redistribute the matrix) and intentionally leave some processes with zero rows. Removing processes with zero rows makes the all-reduces and other communication operations cheaper.
Definition at line 2066 of file Tpetra_Map_def.hpp.
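A usage sketch follows; the handling of excluded processes is the important part. It assumes origMap is a fully constructed Map held by RCP.

#include <Tpetra_Map.hpp>

Teuchos::RCP<const Tpetra::Map<> >
dropEmptyRanks (const Teuchos::RCP<const Tpetra::Map<> >& origMap)
{
  // Collective over origMap's original communicator.
  Teuchos::RCP<const Tpetra::Map<> > newMap = origMap->removeEmptyProcesses ();

  if (newMap.is_null ()) {
    // This process owned no indices; it is not in the new Map or its
    // communicator, and should not use newMap further.
  }
  return newMap;
}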
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node >::replaceCommWithSubset (const Teuchos::RCP< const Teuchos::Comm< int > > &newComm) const
Replace this Map's communicator with a subset communicator.
This method must be called collectively on the original communicator. It leaves the original Map and communicator unchanged.
Definition at line 1949 of file Tpetra_Map_def.hpp.
template<class LocalOrdinal, class GlobalOrdinal>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal > > createLocalMap (const size_t numElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)  [related]
Nonmember constructor for a locally replicated Map with the default Kokkos Node.
This method returns a Map instantiated on the default Kokkos Node type. The Map is configured to use zero-based indexing.
numElements | [in] Number of elements on each process. Each process gets the same set of elements, namely 0, 1, ..., numElements - 1 . |
comm | [in] The Map's communicator. |
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createLocalMapWithNode (const size_t numElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)  [related]
Nonmember constructor for a locally replicated Map with a specified Kokkos Node.
This method returns a Map instantiated on the given Kokkos Node instance. The Map is configured to use zero-based indexing.
numElements | [in] Number of elements on each process. Each process gets the same set of elements, namely 0, 1, ..., numElements - 1 . |
comm | [in] The Map's communicator. |
template<class LocalOrdinal, class GlobalOrdinal>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal > > createUniformContigMap (const global_size_t numElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)  [related]
Non-member constructor for a uniformly distributed, contiguous Map with the default Kokkos Node.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createUniformContigMapWithNode (const global_size_t numElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)  [related]
Non-member constructor for a uniformly distributed, contiguous Map with a user-specified Kokkos Node.
template<class LocalOrdinal, class GlobalOrdinal>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal > > createContigMap (const global_size_t numElements, const size_t localNumElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)  [related]
Non-member constructor for a (potentially) non-uniformly distributed, contiguous Map using the default Kokkos::Device.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createContigMapWithNode (const global_size_t numElements, const size_t localNumElements, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)  [related]
Nonmember constructor for a (potentially) nonuniformly distributed, contiguous Map for a user-specified, possibly nondefault Kokkos Node type.
template<class LocalOrdinal, class GlobalOrdinal>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal > > createNonContigMap (const Teuchos::ArrayView< const GlobalOrdinal > &elementList, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)  [related]
Nonmember constructor for a non-contiguous Map using the default Kokkos::Device type.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createNonContigMapWithNode (const Teuchos::ArrayView< const GlobalOrdinal > &elementList, const Teuchos::RCP< const Teuchos::Comm< int > > &comm)  [related]
Nonmember constructor for a noncontiguous Map with a user-specified, possibly nondefault Kokkos Node type.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createOneToOne (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &M)  [related]
Creates a one-to-one version of the given Map, in which each global index lives on only one process.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > createOneToOne (const Teuchos::RCP< const Map< LocalOrdinal, GlobalOrdinal, Node > > &M, const ::Tpetra::Details::TieBreak< LocalOrdinal, GlobalOrdinal > &tie_break)  [related]
Creates a one-to-one version of the given Map where each GID lives on only one process. The given TieBreak object specifies the rule to break ties.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
bool operator== (const Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node > &map1, const Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node > &map2)  [related]
True if map1 is the same as (in the sense of isSameAs()) map2, else false.
Definition at line 1436 of file Tpetra_Map_decl.hpp.
template<class LocalOrdinal, class GlobalOrdinal, class Node>
bool operator!= (const Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node > &map1, const Tpetra::Map< LocalOrdinal, GlobalOrdinal, Node > &map2)  [related]
True if map1 is not the same as (in the sense of isSameAs()) map2, else false.
Definition at line 1443 of file Tpetra_Map_decl.hpp.