OCP and Kalray
The Open Compute Project Foundation was founded in 2011 by Facebook, Intel and Rackspace. OCP’s mission is to apply the benefits of open source to hardware and rapidly increase the pace of innovation in the data center and beyond.
Of particular interest to Kalray is how its Massively Parallel Processor Array (MPPA®) technology can be applied to OCP projects, specifically: High Performance Computing (HPC), Server, Storage, Telco and Networking.
When the OCP Foundation held its first regional Summit in Europe, at the Amsterdam RAI, attracting over 600 attendees, Kalray sent its team to investigate how MPPA technology fits within this rapidly growing community.
During the 60 or so technical sessions and discussions with the 36 sponsor exhibitors, certain technical themes and challenges kept resurfacing — challenges that Kalray’s MPPA technology can help address.
Scaling of OCP Flash Storage
Open Rack solutions address OCP’s key aims of cost reduction, power reduction, standardization and deployability, but independent scaling of compute and storage using current OCP Flash storage architectures remains a challenge.
Within current OCP JBOF architectures, the predominance of PCIe connectivity limits the scope for scalability.
Replacing this costly, bulky and distance-limited PCIe cabling with a fabric connection opens up the possibility of standards-based, independently scalable flash storage solutions through adoption of NVMe over Fabrics (NVMe-oF).
The Kalray Target Controller (KTC) solution for NVMe-oF JBOF offers a way to simplify JBOF rack interconnection, reduce costs and deliver on independent scaling, whilst providing the required IOPS performance to unleash the full potential of NVMe SSD storage.
Delivering this NVMe-oF JBOF Target Controller function whilst concurrently performing in-line data processing is a major added benefit.
To learn more about the Kalray Target Controller and how it helps achieve independently scalable Flash storage solutions, follow this link: KTC.
Delivering OCP AI Inference Compute
There are some excellent OCP compute sled offerings within the OCP partner community, but the AI compute offering seems skewed toward data center-based training applications.
There was common acceptance within the OCP Summit technical sessions that more AI inference processor choices are needed, offering:
- Large on-chip memory architecture
- Low latency
- Low power
- Support for multiple concurrent AI frameworks and networks
Kalray’s next-generation Coolidge processor delivers these features and abilities. In addition, this processor acts as a main CPU, providing direct access to data storage, whether local or remote, alleviating the I/O access bottleneck.
Another Kalray-OCP fit arose during discussions on OCP Edge Compute architectures, where the need for low-latency, multichannel heterogeneous compute with low power consumption again aligned closely with Coolidge’s capabilities.
We are very much looking forward to seeing what’s new at the next Global Summit, which will focus on collaboration to help grow, drive and support the open hardware ecosystem in, near and around the data center and beyond.
To learn more about Kalray’s 3rd generation MPPA processor, follow this link: Coolidge.