Tpetra parallel linear algebra (Version of the Day)
Tpetra::Details::Impl::IallreduceCommRequest< PacketType, SendLayoutType, SendDeviceType, RecvLayoutType, RecvDeviceType, rank > Class Template Reference

Object representing a pending ::Tpetra::Details::iallreduce operation.

#include <Tpetra_Details_iallreduce.hpp>

Detailed Description

template<class PacketType, class SendLayoutType, class SendDeviceType, class RecvLayoutType, class RecvDeviceType, const int rank>
class Tpetra::Details::Impl::IallreduceCommRequest< PacketType, SendLayoutType, SendDeviceType, RecvLayoutType, RecvDeviceType, rank >

Object representing a pending ::Tpetra::Details::iallreduce operation.

This subclass keeps the send and receive buffers. Since ::Kokkos::View is reference counted, holding the buffers here ensures that they will not be deallocated until the iallreduce completes. wait() clears the buffer references.

Tpetra developers should not use this class directly; instead, they should create instances of it via the wrapIallreduceCommRequest function (see below).
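The following is a minimal sketch, not the actual Tpetra implementation, of the idea described above: a request object that holds the reference-counted send and receive Views so they stay alive until the nonblocking all-reduce completes, and releases them in wait(). All class and function names other than Kokkos::View and the MPI calls are hypothetical.

    // Hypothetical sketch of a buffer-holding comm request (assumes MPI and Kokkos).
    #include <Kokkos_Core.hpp>
    #include <mpi.h>

    template <class SendViewType, class RecvViewType>
    class SketchIallreduceRequest {
    public:
      SketchIallreduceRequest (const SendViewType& sendbuf,
                               const RecvViewType& recvbuf,
                               MPI_Request req)
        : sendbuf_ (sendbuf), recvbuf_ (recvbuf), req_ (req)
      {}

      void wait () {
        if (req_ != MPI_REQUEST_NULL) {
          MPI_Wait (&req_, MPI_STATUS_IGNORE);
        }
        // Release the buffer references only after the operation completes.
        sendbuf_ = SendViewType ();
        recvbuf_ = RecvViewType ();
      }

    private:
      SendViewType sendbuf_; // reference counted; keeps the send buffer alive
      RecvViewType recvbuf_; // reference counted; keeps the receive buffer alive
      MPI_Request req_;
    };

    // Hypothetical analogue of wrapIallreduceCommRequest: deduce the View types
    // and return the wrapped request.
    template <class SendViewType, class RecvViewType>
    SketchIallreduceRequest<SendViewType, RecvViewType>
    wrapRequest (const SendViewType& sendbuf,
                 const RecvViewType& recvbuf,
                 MPI_Request req)
    {
      return SketchIallreduceRequest<SendViewType, RecvViewType> (sendbuf, recvbuf, req);
    }

The actual class additionally carries the layout, device, and rank template parameters documented below, which let it hold buffers of exactly the types passed to iallreduce.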

Template Parameters
    PacketType: Type of each entry of the send and receive buffers.
    SendLayoutType: array_layout of the send buffer. Must be Kokkos::LayoutLeft or Kokkos::LayoutRight.
    SendDeviceType: Kokkos::Device specialization used by the send buffer.
    RecvLayoutType: array_layout of the receive buffer. Must be Kokkos::LayoutLeft or Kokkos::LayoutRight.
    RecvDeviceType: Kokkos::Device specialization used by the receive buffer. It's OK for this to differ from SendDeviceType; we assume that MPI implementations can handle this. (This is a reasonable assumption with CUDA-enabled MPI implementations.)
    rank: Integer rank of the send and receive buffers. Must be either 0 or 1. (A usage sketch with rank-0 buffers follows this list.)
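Callers normally never name this class; they receive it (wrapped behind a CommRequest) from ::Tpetra::Details::iallreduce. The sketch below shows how calling code might use iallreduce with rank-0 (single-value) buffers. The exact overload set and argument order are assumptions; consult Tpetra_Details_iallreduce.hpp for the authoritative interface.

    #include <Tpetra_Core.hpp>
    #include <Tpetra_Details_iallreduce.hpp>
    #include <Kokkos_Core.hpp>
    #include <Teuchos_DefaultComm.hpp>

    int main (int argc, char* argv[]) {
      Tpetra::ScopeGuard tpetraScope (&argc, &argv);
      {
        auto comm = Teuchos::DefaultComm<int>::getComm ();

        // Rank-0 (single-entry) device Views for the local value and the result.
        Kokkos::View<double, Kokkos::DefaultExecutionSpace::device_type> sendbuf ("send");
        Kokkos::View<double, Kokkos::DefaultExecutionSpace::device_type> recvbuf ("recv");
        Kokkos::deep_copy (sendbuf, static_cast<double> (comm->getRank ()));

        // Start the nonblocking all-reduce.  The returned request keeps the
        // buffers alive (see IallreduceCommRequest above).
        auto request = ::Tpetra::Details::iallreduce (sendbuf, recvbuf,
                                                      Teuchos::REDUCE_SUM, *comm);

        // ... overlap communication with other local work here ...

        request->wait (); // completes the operation and releases the buffer references

        // Copy the result back to host and read it.
        auto recvHost = Kokkos::create_mirror_view (recvbuf);
        Kokkos::deep_copy (recvHost, recvbuf);
        // recvHost() now holds the sum of all process ranks.
      }
      return 0;
    }

Because the send and receive device types may differ, the same pattern works when, for example, the send buffer lives in GPU memory and the receive buffer lives in host memory, provided the MPI implementation supports it.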

Definition at line 230 of file Tpetra_Details_iallreduce.hpp.


The documentation for this class was generated from the following file: Tpetra_Details_iallreduce.hpp