Communication is an important but difficult aspect of parallel programming. This paper describes a parallel communication infrastructure, based on remote method invocation,
that simplifies parallel programming by abstracting low-level shared-memory and message-passing details while maintaining high performance and portability. STAPL, the Standard
Template Adaptive Parallel Library, builds upon this infrastructure
to make communication transparent to the user. The basic design is discussed, as well as the mechanisms used in the current Pthreads and MPI implementations. Performance
comparisons between STAPL and explicit Pthreads or MPI are given on a variety of machines, including an HP V2200,
an SGI Origin 3800, and a Linux cluster.