Unmet Need: An Efficient Architecture for Training Deep Neural Networks
Deep Neural Networks (DNNs) used for image segmentation are more computationally expensive and complex than those used for classification. GPU-based platforms are the most popular hardware choice for training DNNs, but they are not specifically optimized for this task. Resistive Random-Access Memory (ReRAM)-based architectures offer a promising alternative to GPUs; however, because they can store only low-precision values, they cannot support all DNN layers and suffer accuracy loss in the learned models.
The Technology: Combining GPUs and ReRAM for DNN Training
WSU is developing GRAMARCH, a heterogeneous GPU-ReRAM architecture for neural image segmentation and other machine learning tasks. It combines the benefits and efficiency of ReRAM and GPUs by connecting them through a high-throughput 3D Network-on-Chip (NoC), and it addresses the accuracy loss caused by low-precision computation.
• High-performance platform for training deep neural networks across various learning frameworks
• High-accuracy training, with support for crucial operations such as normalization via stochastic rounding
• 53 times better performance than conventional GPUs for image segmentation
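Stochastic rounding, mentioned above, is the key trick for preserving accuracy when weights and activations are quantized to the low precision that ReRAM crossbars store: instead of always rounding to the nearest value, a number is rounded up or down with probability proportional to its distance from each neighbor, so the quantization is unbiased in expectation. The sketch below is purely illustrative and is not GRAMARCH's implementation; the function name, bit width, and scale are assumptions.

```python
import numpy as np

def stochastic_round(x, num_bits=8, scale=1.0):
    """Quantize x to a signed fixed-point grid using stochastic rounding.

    Illustrative sketch only (not the GRAMARCH implementation): the grid
    spans [-scale, scale) with 2**num_bits steps. Each value is rounded
    down or up with probability equal to its fractional distance, so the
    expected value of the rounded result equals the input.
    """
    step = scale / (2 ** (num_bits - 1))      # quantization step size
    scaled = x / step                          # position on the integer grid
    floor = np.floor(scaled)
    prob = scaled - floor                      # fractional part in [0, 1)
    # Round up with probability equal to the fractional part.
    rounded = floor + (np.random.random(np.shape(x)) < prob)
    # Clamp to the representable signed range, then rescale.
    lo, hi = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return np.clip(rounded, lo, hi) * step
```

Because the rounding error has zero mean, small gradient updates are not systematically lost during low-precision training, which is why the technique matters for operations like normalization.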