Deep Learning Hardware Accelerators with Silicon Photonics

As researchers build ever deeper and more complex MLP and CNN architectures to push deep learning performance to new heights, the underlying hardware platforms must deliver correspondingly higher performance while satisfying strict power dissipation limits. This push for higher performance-per-watt has driven hardware architects to design application-specific integrated circuit (ASIC) accelerators for deep learning that are far more efficient than conventional general-purpose CPUs and GPUs. Unfortunately, state-of-the-art electronic accelerator architectures are beginning to face fundamental limits in the post-Moore's-law era, where processing capabilities no longer improve as they did over the past several decades. In particular, moving data electronically over metallic wires in these accelerators is a major bandwidth and energy bottleneck.

Photonic interconnects offer one of the most promising solutions to these data movement challenges. Photonic links have already replaced metallic ones for light-speed information transmission at almost every level of the computing hierarchy, and are now being considered for integration at the chip scale. The advent of silicon photonics, which enables cost-effective integration of optical components using CMOS-compatible manufacturing, has been one of the major catalysts for chip-scale photonic interconnects. Even more remarkable is the fact that several computations required in deep learning, such as matrix-vector multiplications, can be performed entirely in the optical domain (a simple illustration of this idea appears below). We are therefore approaching the point where it becomes possible to realize deep learning accelerators that use silicon photonics for both communication and computation. Such silicon-photonics-based deep learning accelerators can provide unprecedented levels of energy efficiency and parallelism.
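
To make the optical matrix-vector multiplication idea concrete, here is a minimal, purely illustrative sketch; it is not drawn from any of the publications below, and all names and sizes are hypothetical. In coherent photonic accelerators, a weight matrix W is commonly factored with a singular value decomposition, W = U Σ V^T, where the two unitary factors map onto meshes of Mach-Zehnder interferometers and the diagonal factor onto per-channel attenuation or gain; the snippet only emulates that arithmetic in NumPy.

```python
import numpy as np

# Software-only emulation (hypothetical sizes) of how a coherent photonic mesh
# could realize a weight matrix via its SVD, W = U @ diag(S) @ Vh.
# In hardware, U and Vh would correspond to Mach-Zehnder interferometer meshes
# (unitary transforms on optical field amplitudes) and S to per-channel
# attenuators/amplifiers; light propagating through the three stages performs
# the matrix-vector product.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))       # one layer's weight matrix (illustrative size)
x = rng.normal(size=4)            # input activations encoded on optical amplitudes

U, S, Vh = np.linalg.svd(W)       # factor W into unitary, diagonal, unitary stages

y_photonic = U @ (S * (Vh @ x))   # cascade of the three "optical" stages
y_reference = W @ x               # conventional electronic reference

assert np.allclose(y_photonic, y_reference)
```

Non-coherent designs, several of which appear in the publication list below, instead encode weights by tuning microring resonators on wavelength-multiplexed channels, but the end result is the same analog multiply-and-accumulate carried out by light.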

The research objective of this project is to design new hardware accelerators for machine learning workloads that leverage the remarkable communication and computational capabilities of silicon photonics to improve the performance, energy/power, and reliability of deep learning model execution.

Selected Publications

S. Afifi, F. Sunny, M. Nikdast, S. Pasricha, “Accelerating Neural Networks for Large Language Models and Graph Processing with Silicon Photonics”, IEEE/ACM DATE, Mar 2024.

E. Taheri, M. A. Mahdian, S. Pasricha, M. Nikdast, “TRINE: A Tree-Based Silicon Photonic Interposer Network for Energy-Efficient 2.5D Machine Learning Acceleration”, IEEE/ACM 16th International Workshop on Network on Chip Architectures (NoCArc), 2023.

A. Shafiee, S. Banerjee, B. Charbonnier, S. Pasricha, and M. Nikdast, “Compact and Low-Loss PCM-based Silicon Photonic MZIs for Photonic Neural Networks,” IEEE Photonics Conference (IPC), Orlando, FL, Nov 2023.

S. Afifi, F. Sunny, A. Shafiee, M. Nikdast, S. Pasricha, “GHOST: A Graph Neural Network Accelerator using Silicon Photonics”, ACM Transactions on Embedded Computing Systems (TECS), 2023.

F. Sunny, M. Nikdast, S. Pasricha, “Cross-Layer Design for AI Acceleration with Non-Coherent Optical Computing”, ACM GLSVLSI, 2023.

S. Afifi, F. Sunny, M. Nikdast, S. Pasricha, “TRON: Transformer Neural Network Acceleration with Non-Coherent Silicon Photonics”, ACM GLSVLSI, 2023.

F. Sunny, E. Taheri, M. Nikdast, S. Pasricha, “Machine Learning Accelerators in 2.5D Chiplet Platforms with Silicon Photonics”, IEEE/ACM DATE, 2023.

M. Nikdast, S. Pasricha, K. Chakrabarty, “Silicon Photonic Neural Network Accelerators: Opportunities and Challenges”, CLEO, 2022.

F. Sunny, M. Nikdast and S. Pasricha, “RecLight: A Recurrent Neural Network Accelerator With Integrated Silicon Photonics”, IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2022.

S. Banerjee, M. Nikdast, S. Pasricha, K. Chakrabarty, “Pruning Coherent Integrated Photonic Neural Networks Using the Lottery Ticket Hypothesis”, IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2022.

A. Shafiee, S. Banerjee, K. Chakrabarty, S. Pasricha and M. Nikdast, “LoCI: An Analysis of the Impact of Optical Loss and Crosstalk Noise in Integrated Silicon-Photonic Neural Networks”, ACM GLSVLSI, 2022.

F. Sunny, M. Nikdast and S. Pasricha, “A Silicon Photonic Accelerator for Convolutional Neural Networks with Heterogeneous Quantization”, ACM GLSVLSI, 2022.

S. Banerjee, M. Nikdast, S. Pasricha, K. Chakrabarty, “CHAMP: Coherent Hardware-Aware Magnitude Pruning of Integrated Photonic Neural Networks”, IEEE OFC, 2022.

F. Sunny, M. Nikdast, and S. Pasricha, “SONIC: A Sparse Neural Network Inference Accelerator with Silicon Photonics for Energy-Efficient Deep Learning”, IEEE/ACM Asia & South Pacific Design Automation Conference (ASPDAC), Jan 2022.

F. Sunny, E. Taheri, M. Nikdast, S. Pasricha, “A Survey on Silicon Photonics for Deep Learning”, ACM Journal on Emerging Technologies in Computing Systems (JETC), 2021. 

D. Dang, S. V. R. Chittamuru, S. Pasricha, R. Mahapatra, D. Sahoo, “BPLight-CNN: A Photonics-based Backpropagation Accelerator for Deep Learning”, ACM Journal on Emerging Technologies in Computing Systems (JETC), 2021.

A. Shafiee, A. Mirza, F. Sunny, S. Banerjee, K. Chakrabarty, S. Pasricha, and M. Nikdast, “Inexact Silicon Photonics: From Devices to Applications”, OSA, 2021.

F. Sunny, A. Mirza, M. Nikdast, S. Pasricha, “ROBIN: A Robust Optical Binary Neural Network Accelerator”, ACM Transactions on Embedded Computing Systems (TECS), 2021. 

F. Sunny, A. Mirza, M. Nikdast, S. Pasricha, “CrossLight: A Cross-Layer Optimized Silicon Photonic Neural Network Accelerator”, IEEE/ACM Design Automation Conference (DAC), 2021.