Self-driving vehicles, home delivery by drone, intelligent personal assistants… These are just a few examples of the ground-breaking technologies that have emerged thanks to the development of Artificial Intelligence (AI). AI has opened up a world of new technological opportunities in the consumer goods market, as well as in industry and defense. These technologies rely on a family of computing techniques called deep learning, which allows machines to perform tasks that would normally require human intelligence, such as speech, handwriting or face recognition. Increasingly, deep learning is being applied to mobile and critical embedded systems such as autonomous vehicles and drones, hence the need for low-power, low-latency processors that operate in real time.
The MPPA® advantage
Kalray’s MPPA® is a leading processing solution for deep learning in the industry. The processor’s clustered manycore architecture offers high-performance, low-latency deep learning inference and enables multiple neural network layers to compute concurrently. With its built-in on-chip memory and a dedicated software tool called the Kalray Neural Network (KaNN), the processor can process a large number of frames per second while keeping power consumption low.
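One way to picture concurrent layer computation is as a pipeline: each cluster runs one layer and frames stream through, so different layers work on different frames at the same time. The sketch below is a toy illustration of that idea in Python (threads standing in for clusters, queues for cluster-to-cluster links), not Kalray's actual implementation; the three "layers" are arbitrary placeholder functions.

```python
import queue
import threading

def make_stage(fn, q_in, q_out):
    # Each "cluster" runs one network layer; frames stream through the
    # pipeline so several layers compute concurrently on different frames.
    def run():
        while True:
            frame = q_in.get()
            if frame is None:          # sentinel: shut the stage down
                q_out.put(None)
                return
            q_out.put(fn(frame))
    return threading.Thread(target=run)

# Hypothetical 3-layer network: each layer is just a toy function here.
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

# Queues stand in for the cluster-to-cluster communication links.
queues = [queue.Queue() for _ in range(len(layers) + 1)]
stages = [make_stage(fn, queues[i], queues[i + 1])
          for i, fn in enumerate(layers)]
for s in stages:
    s.start()

for frame in range(5):                 # stream 5 "frames" into the pipeline
    queues[0].put(frame)
queues[0].put(None)

results = []
while (out := queues[-1].get()) is not None:
    results.append(out)
print(results)  # each frame passed through all three layers, in order
```

Because the stages are decoupled by queues, a new frame can enter layer 1 while earlier frames are still in layers 2 and 3, which is where the throughput gain over running one layer at a time comes from.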
The key benefits:
- High performance and low latency: leverages on-chip memory and cluster-to-cluster communication.
- On-chip memory: high-bandwidth memory (70 GB/s on Bostan and 300 GB/s on Coolidge) to store data closer to the compute units.
- On-chip communication: fast, direct cluster-to-cluster communication within the chip speeds up data exchange between layers, and NoC multicasting of parameters makes it practical to split a layer along its spatial dimensions across clusters.
- Compatibility with existing standards: the MPPA® can integrate intelligence into a complex embedded system. Rather than a chip dedicated solely to running CNNs, it offers a complete embedded solution that includes CNN capability.
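The spatial-splitting point above can be made concrete with a toy example: the input to a layer is divided into tiles, the same parameters are sent (multicast) to every cluster, and each cluster computes its own tile of the output. The sketch below shows this for a 1-D convolution in plain Python; the tile sizes, signal, and kernel are invented for illustration and are unrelated to any real MPPA® code.

```python
def conv1d(signal, kernel):
    # Plain "valid" 1-D convolution (no padding).
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# Full input feature map, and a kernel shared by every cluster
# (standing in for NoC-multicast parameters).
signal = list(range(11))
kernel = [1, 0, -1]
k = len(kernel)

# Spatial split: each "cluster" takes one tile plus a halo of k - 1
# samples so the tile outputs stitch together without gaps.
n_clusters = 3
tile = len(conv1d(signal, kernel)) // n_clusters   # output samples per cluster
halo_tiles = [signal[c * tile : c * tile + tile + k - 1]
              for c in range(n_clusters)]

split_out = []
for t in halo_tiles:
    split_out += conv1d(t, kernel)    # each cluster computes its own slice

full_out = conv1d(signal, kernel)
print(split_out == full_out)          # tiled result matches the full result
```

The key detail is the halo: neighboring tiles overlap by the kernel size minus one, so each cluster has all the input samples its slice of the output depends on.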
With Kalray’s MPPA®, customers gain a competitive advantage in embedded artificial intelligence!