ForTrilinos
ForTrilinos provides modern Fortran interfaces to C++ packages in Trilinos. In keeping with the object-oriented programming philosophy of the Trilinos project, ForTrilinos leverages the object-oriented features of the Fortran 2003 standard. For portability, ForTrilinos makes extensive use of the same standard's support for interoperability with C.
As of release 10.8, ForTrilinos builds with the IBM XL compiler suite or the Numerical Algorithms Group (NAG) Fortran compiler. The Cray and Intel Fortran compilers nominally support all features required to build ForTrilinos, and we are working toward successful builds with those compilers as well.
ForTrilinos contains one module for each wrapped Trilinos C++ class. Each module bears the name of the corresponding C++ class prepended with an "F", so FEpetra_Comm is the Fortran interface to the C++ Epetra_Comm class. Each ForTrilinos module contains one class (an extensible derived type) bearing the same name as the corresponding C++ class.
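For example, a program that needs only a serial communicator would use the FEpetra_SerialComm module, whose Epetra_SerialComm derived type wraps the C++ Epetra_SerialComm class. The following minimal sketch is distilled from the full example program later on this page and assumes a serial (non-MPI) build; the program name is illustrative:

program comm_example
  use FEpetra_SerialComm, only : Epetra_SerialComm
  implicit none
  type(Epetra_SerialComm) :: comm
  comm = Epetra_SerialComm()   ! construct the Fortran proxy for the underlying C++ object
  call comm%force_finalize     ! explicitly release the underlying C++ object
end program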
Epetra provides the fundamental construction routines and service functions required for serial and parallel linear algebra libraries. Epetra is the underlying foundation for all Trilinos solvers.
The ForTrilinos modules that wrap Epetra classes are FEpetra_BlockMap, FEpetra_Comm, FEpetra_CrsGraph, FEpetra_CrsMatrix, FEpetra_DistObject, FEpetra_Export, FEpetra_Import, FEpetra_LinearProblem, FEpetra_Map, FEpetra_MpiComm, FEpetra_MultiVector, FEpetra_OffsetIndex, FEpetra_RowMatrix, FEpetra_SerialComm, FEpetra_SrcDistObject, and FEpetra_Vector.
AztecOO is an object-oriented interface to Aztec, a massively parallel iterative solver library for sparse linear systems. The ForTrilinos module that wraps AztecOO is FAztecOO.
Pliris is a parallel, object-oriented LU solver for double-precision, dense matrices. The ForTrilinos module that wraps Pliris is FPliris.
The Galeri package easily generates Epetra_Map, Epetra_CrsMatrix and Epetra_VbrMatrix objects, providing functionality very close to MATLAB's gallery() function for creating several well-known finite difference and finite element matrices. Although the procedural infrastructure for accessing Galeri exists, safe access to Galeri from Fortran requires the object-oriented interfaces. The ForTrilinos team can assist in the development of object-oriented interfaces to Galeri upon user request.
IFPACK provides object-oriented, algebraic preconditioners for iterative solvers. IFPACK can be used as a preconditioner for AztecOO. Although the procedural infrastructure for accessing IFPACK exists, safe access to IFPACK from Fortran requires the object-oriented interfaces. The ForTrilinos team can assist in the development of object-oriented interfaces to IFPACK upon user request.
Amesos is the direct sparse solver package in Trilinos. Amesos provides clean and consistent interfaces to the following third-party libraries: LAPACK, UMFPACK version 4.4, TAUCS version 2.2, PARDISO version 1.2.3 (outdated), SuperLU version 4.1, SuperLU_DIST version 2.5, DSCPACK version 1.0, SCALAPACK version 1.7, and MUMPS version 4.7.3 (experimental support for version 4.9).
Although the procedural infrastructure for accessing Amesos exists, safe access to Amesos from Fortran requires the object-oriented interfaces. The ForTrilinos team can assist in the development of object-oriented interfaces to Amesos upon user request.
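The example program below illustrates the ForTrilinos interfaces to Epetra and AztecOO. It assembles a periodic tridiagonal Epetra_CrsMatrix with 3 on the diagonal and 1 on the two off-diagonals of each row, sets the right-hand side to 1, solves the resulting linear system with AztecOO, and checks that every entry of the solution equals 0.2 to within the solver tolerance.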
program main
#include "ForTrilinos_config.h"
#ifdef HAVE_MPI
  use mpi
  use FEpetra_MpiComm, only : Epetra_MpiComm
#else
  use FEpetra_SerialComm, only : Epetra_SerialComm
#endif
  use FEpetra_Map
  use FEpetra_Vector
  use FEpetra_CrsMatrix
  use FAztecOO
  use ForTrilinos_error
  use ForTrilinos_enum_wrappers
  use iso_c_binding, only : c_int, c_double
  implicit none
#ifdef HAVE_MPI
  type(Epetra_MpiComm) :: comm
#else
  type(Epetra_SerialComm) :: comm
#endif
  type(Epetra_Map) :: map
  type(Epetra_CrsMatrix) :: A
  type(Epetra_Vector) :: x, b
  type(AztecOO) :: Solver
  type(error) :: err
  integer(c_int), parameter :: grid_resolution = 8
  integer :: rc, ierr
  integer(c_int), parameter :: IndexBase = 1
  integer(c_int) :: NumGlobalElements = 8, NumMyElements, NumEntries, MaximumIter = 100
  integer(c_int), dimension(:), allocatable :: MyGlobalElements, NumNonZero
  real(c_double), dimension(:), allocatable :: c
  real(c_double) :: values(2)
  real(c_double), parameter :: tolerance = 1.0E-4, three = 3.0
  integer(c_int) :: MyGlobalElements_diagonal(1), i, indices(2)
  integer(c_int), parameter :: diagonal = 1
  logical :: success = .true.
#ifdef HAVE_MPI
  ! Initialize MPI and wrap the world communicator
  call MPI_INIT(ierr)
  comm = Epetra_MpiComm(MPI_COMM_WORLD)
#else
  comm = Epetra_SerialComm()
#endif
  ! Create a map that distributes the global elements across the processes
  map = Epetra_Map(NumGlobalElements, IndexBase, comm)
  NumMyElements = map%NumMyElements()
  allocate(MyGlobalElements(NumMyElements))
  MyGlobalElements = map%MyGlobalElements()
  allocate(NumNonZero(NumMyElements))
  NumNonZero = 3 ! Nonzero elements per row
  ! Create an Epetra_CrsMatrix with the estimated number of nonzeros per row
  A = Epetra_CrsMatrix(FT_Epetra_DataAccess_E_Copy, map, NumNonZero)
  ! Add rows one at a time; off-diagonal values are always 1
  values(1) = 1.0
  values(2) = 1.0
  do i = 1, NumMyElements
    if (MyGlobalElements(i) == 1) then
      ! First row wraps around to the last column (periodic coupling)
      indices(1) = NumGlobalElements
      indices(2) = 2
      NumEntries = 2
    else if (MyGlobalElements(i) == NumGlobalElements) then
      ! Last row wraps around to the first column
      indices(1) = NumGlobalElements - 1
      indices(2) = 1
      NumEntries = 2
    else
      ! Interior rows couple to the two neighboring columns
      indices(1) = MyGlobalElements(i) - 1
      indices(2) = MyGlobalElements(i) + 1
      NumEntries = 2
    end if
    call A%InsertGlobalValues(MyGlobalElements(i), values, indices)
    ! Put in the diagonal entry
    MyGlobalElements_diagonal = MyGlobalElements(i)
    call A%InsertGlobalValues(MyGlobalElements(i), [three], MyGlobalElements_diagonal)
  end do
  ! Finish up: signal that matrix assembly is complete
  call A%FillComplete(.true.)
  ! Create the solution and right-hand-side vectors: x starts random, b = 1
  x = Epetra_Vector(map, .true.)
  b = Epetra_Vector(map, .true.)
  call x%Random()
  call b%PutScalar(1.0_c_double)
  ! Solve Ax = b iteratively with AztecOO
  Solver = AztecOO(A, x, b)
  call Solver%iterate(A, x, b, MaximumIter, tolerance, err)
  ! Every row of A sums to 5 and b = 1, so each solution entry should be 0.2
  allocate(c(x%MyLength()))
  c = x%ExtractCopy(err)
  do i = 1, NumMyElements
    if (abs((c(i) - 0.2)/0.2) > tolerance) success = .false.
    print *, c(i)
  end do
  if (success) then
    print *
    print *, "End Result: TEST PASSED"
  else
    print *
    print *, "End Result: TEST FAILED"
  end if
  ! Explicitly release the underlying C++ objects
  call Solver%force_finalize
  call A%force_finalize
  call b%force_finalize
  call x%force_finalize
  call map%force_finalize
  call comm%force_finalize
#ifdef HAVE_MPI
  call MPI_FINALIZE(rc)
#endif
end program