The Dolphin PCI Express software stack is written to be agnostic to hardware platforms and will run on most systems. Generally, Dolphin strives to support every platform that can run a supported version of Windows, RTX, VxWorks or Linux and that offers PCI Express networking capabilities.
PCI Express should normally be used between all systems that require low latency or high throughput.
Most systems will provide low latency for small amounts of data. Applications that need high throughput will normally benefit from selecting a platform that provides DMA capabilities. PCI Express Gen3-based platforms are recommended for the highest throughput.
The Dolphin PCI Express PX hardware (interconnect adapters) complies with the PCI Express 3.0 industry standard and will thus operate in any machine that offers compliant slots. The supported CPU architectures are:
x86 (32 bit)
x86_64 (AMD64 and Intel EM64T)
Some combinations of CPUs and chipsets deliver sub-optimal performance; this should be considered when planning a new system.
If you have questions about your specific hardware platform, please compare it with the known issues listed in Appendix C, Platform Issues and Software Limitations, or contact support.
The hardware platform for the Cluster Nodes should be chosen from the supported platforms described above. In addition to the PCI Express-specific requirements, you should consult your application vendor's expert or consultant for the recommended configuration for your application.
The Dolphin PXH810 and PXH830 adapters are low-profile cards shipped with a full-height bracket; a half-height bracket is included.
The PCI Express interconnect is fully interoperable across all supported hardware platforms, including those with different PCI or CPU architectures. As with all applications that communicate over a network, applications must take care when exchanging data between nodes with different endianness.
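As an illustration only (not part of the Dolphin software stack), a common convention is to convert multi-byte values to a fixed byte order before they are written to a buffer shared with a remote node, and back again on the receiving side. The sketch below uses the standard C htonl()/ntohl() helpers for a 32-bit value.

    #include <arpa/inet.h>   /* htonl(), ntohl() */
    #include <stdint.h>
    #include <string.h>

    /* Sending node: store the value in a fixed (big-endian) byte order. */
    static void pack_u32(uint32_t host_value, void *dest)
    {
        uint32_t wire_value = htonl(host_value);
        memcpy(dest, &wire_value, sizeof wire_value);
    }

    /* Receiving node: convert back to the local host byte order. */
    static uint32_t unpack_u32(const void *src)
    {
        uint32_t wire_value;
        memcpy(&wire_value, src, sizeof wire_value);
        return ntohl(wire_value);
    }

The same approach applies to any agreed-upon byte order, as long as both nodes use the same convention.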
The Cluster Management Node only runs a lightweight Network Manager service, which does not impose any special hardware requirements.
The Network Manager service is optional when using the SISCI API but mandatory if the SuperSockets or IPoPCIe software is used.
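As a minimal sketch only: an application that uses just the SISCI API initializes the library and opens a virtual device descriptor as shown below. Error handling is shortened, and the SISCI documentation remains the authoritative reference for the API.

    #include "sisci_api.h"

    int main(void)
    {
        sci_error_t error;
        sci_desc_t  sd;

        SCIInitialize(0, &error);      /* initialize the SISCI library */
        if (error != SCI_ERR_OK)
            return 1;

        SCIOpen(&sd, 0, &error);       /* open a virtual device descriptor */
        if (error != SCI_ERR_OK) {
            SCITerminate();
            return 1;
        }

        /* ... create/connect segments and transfer data ... */

        SCIClose(sd, 0, &error);
        SCITerminate();
        return 0;
    }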
The Cluster Management Node requires a reliable Ethernet connection to all Cluster Nodes. One of the Cluster Nodes can also operate as a Cluster Management Node.