NVIDIA Open Source Deep Learning Accelerator (NVDLA)
KeithE
Posts: 957
Haven't seen any mention of this here, so hopefully this isn't a dupe. Not sure what to make of it, but at the very least it's the ultimate form of documentation, right? No more binary blobs? Has anyone seen any analysis of this move?
"The NVIDIA Deep Learning Accelerator (NVDLA) is a free and open architecture that promotes a standard way to design deep learning inference accelerators"
"Xavier is a complete system-on-chip (SoC), integrating a new GPU architecture called Volta, a custom 8 core CPU architecture, and a new computer vision accelerator. The processor will deliver 20 TOPS (trillion operations per second) of performance, while consuming only 20 watts of power. As the brain of a self-driving car, Xavier is designed to be compliant with critical automotive standards, such as the ISO 26262 functional safety specification."
http://nvdla.org
https://github.com/nvdla/
"The NVIDIA Deep Learning Accelerator (NVDLA) is a free and open architecture that promotes a standard way to design deep learning inference accelerators"
"Xavier is a complete system-on-chip (SoC), integrating a new GPU architecture called Volta, a custom 8 core CPU architecture, and a new computer vision accelerator. The processor will deliver 20 TOPS (trillion operations per second) of performance, while consuming only 20 watts of power. As the brain of a self-driving car, Xavier is designed to be compliant with critical automotive standards, such as the ISO 26262 functional safety specification."
http://nvdla.org
https://github.com/nvdla/
Comments
This is very uncharacteristic of NVIDIA. What's going on here?
https://www.forbes.com/sites/moorinsights/2017/05/15/why-nvidia-is-building-its-own-tpu/#26a234c3347f
Looks to me like this NVDLA is a deep learning accelerator designed to work with their Volta GPU. I'm sure Volta will never be open source, so the NVDLA may not be usable on its own.
On the other hand, there are hints that it can be hooked to ARM cores and such.
Certainly an interesting response to Google building its own TensorFlow chip, the TPU.