Advanced Access to Remote PCIe Devices
Dolphin has developed a method for sharing and accessing remote standard PCIe devices. The method is based on the SISCI API and its SmartIO device access extension. Since 1998, the SISCI API has provided a stable, very low latency API for various networks that enables remote access to memory. The API provides basic access to physical addresses and PCIe devices.
The SISCI API SmartIO extension makes it easy and convenient for SISCI application developers to access and use PCIe devices within a PCIe fabric. The API can manage PCIe devices located in the local server, in a remote server, or directly connected to the PCIe fabric. With the SmartIO software, you don't need to care whether the device is local or remote - it is available over PCIe. The result is extremely low latency and performance close to the wire or device speed. Accessing a remote PCIe device adds just a few hundred nanoseconds (the cut-through latency of two PCIe switches).
The SISCI API SmartIO extension doesn't follow the traditional device driver model; it is a flexible API that enables several systems to access one or more devices concurrently. Once concurrent access is established, the application programmer is required to manage and control it. Some devices, by nature, cannot be shared concurrently; in that case, the programmer needs to ensure that only a single system accesses the device at a time (this can be done using a distributed lock manager running over PCIe). Other devices, like nonvolatile memory boards, FPGAs, and NVMe drives, may be easier to share.
If you would like to access a remote PCIe device but want to use an unmodified driver for your PCIe device, please consider the PCIe Device Lending software solution.
The SISCI API SmartIO extension is constantly undergoing enhancements, and changes to the API may still occur based on feedback from customers. The current SISCI API documentation can be found here - look for the SmartIO section.
NVMe Example Driver
As a test case for the new SmartIO API, we have implemented a user-space NVMe driver that allows a single-function NVMe namespace to be shared concurrently among multiple computers connected via a PCIe fabric.
This is possible by exploiting the inherently parallel design of the NVMe specification and assigning IO queue pairs to different hosts. SmartIO sets up NT (non-transparent) mappings behind the scenes, making IO queues and user-space data buffers directly accessible to the NVMe controller. In other words, the controller accesses command queues and buffers over the NTB.
While this requires implementing a custom driver or extending an existing driver with SmartIO support, it makes it possible to share a single-function NVMe SSD concurrently between multiple nodes and to move data along an optimal path directly into an application buffer or a GPU buffer.
The current SISCI API SmartIO extension does not support forwarding device interrupts. This will be supported in the near future.
PCIe devices directly attached to the PCIe fabric are currently not supported; support will become available with the new MXS824 PCIe Gen3 24-port switch. For now, devices must be installed in one of the servers.
The SISCI API is currently available for Linux, Windows, RTX, and VxWorks. The SmartIO extension is currently only available for Linux, but porting the software to Windows, RTX, and VxWorks appears feasible.
Please contact Dolphin for more information.