Infiniband Performance Comparisons of SDR, DDR and Infinipath [electronic resource].
- Washington, D.C. : United States. Dept. of Energy, 2006. and Oak Ridge, Tenn. : Distributed by the Office of Scientific and Technical Information, U.S. Dept. of Energy.
- Physical Description:
- PDF-file: 22 pages; size: 0.1 Mbytes
- Additional Creators:
- Lawrence Berkeley National Laboratory, United States. Department of Energy, and United States. Department of Energy. Office of Scientific and Technical Information
- This technical report compares the performance of the most common InfiniBand-related technologies currently available. It includes TCP-based, MPI-based, and low-level performance tests to determine what performance can be expected from Mellanox's SDR and DDR adapters as well as PathScale's InfiniPath. It also compares InfiniPath running on both the OpenIB stack and PathScale's ipath stack. InfiniBand promises to bring high-performance interconnects for I/O (filesystem and networking) to a new cost-performance level, and LLNL has therefore been evaluating InfiniBand for use as a cluster interconnect. Many issues affect the choice of a cluster interconnect; this report looks closely at the actual performance of the major InfiniBand technologies available today. Performance testing focuses on latency and bandwidth (both uni- and bi-directional) using both TCP and MPI. In addition, the report examines an even lower level (removing most of the upper-level protocols) to see what the connection could really do if the TCP or MPI layers were perfectly written.
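The TCP latency measurements the abstract describes are conventionally taken with a ping-pong pattern: one side echoes fixed-size messages, the other times round trips. As an illustration only (this is not the report's actual test harness, and the function names and parameters here are invented for the sketch), a minimal loopback version in Python:

```python
import socket
import threading
import time

def echo_server(srv, msg_size, iters):
    """Accept one connection and echo fixed-size messages back."""
    conn, _ = srv.accept()
    with conn:
        for _ in range(iters):
            buf = b""
            while len(buf) < msg_size:
                chunk = conn.recv(msg_size - len(buf))
                if not chunk:
                    return  # peer closed early
                buf += chunk
            conn.sendall(buf)

def pingpong(msg_size=8, iters=1000):
    """Return mean round-trip latency in seconds over TCP loopback.

    Hypothetical micro-benchmark: real interconnect tests would run
    client and server on separate hosts over the fabric under test.
    """
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=echo_server, args=(srv, msg_size, iters))
    t.start()

    cli = socket.create_connection(("127.0.0.1", port))
    # Disable Nagle so small messages are sent immediately,
    # as latency benchmarks normally do.
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    payload = b"x" * msg_size

    start = time.perf_counter()
    for _ in range(iters):
        cli.sendall(payload)
        buf = b""
        while len(buf) < msg_size:
            buf += cli.recv(msg_size - len(buf))
    elapsed = time.perf_counter() - start

    cli.close()
    t.join()
    srv.close()
    return elapsed / iters
```

Uni-directional bandwidth tests follow the same shape but stream large messages one way and divide bytes moved by elapsed time; bi-directional tests send in both directions concurrently.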
- Published through SciTech Connect, 05/30/2006, "ucrl-tr-221775", and Minich, M.
catkey: 14344970