Professor in ECE earns $175k NSF grant to optimize device performance

Published: Jun 29, 2022 9:00 AM

By Joe McAdory

Even today’s gazillion-gig cell phones and high-speed computer GPUs can be pushed beyond their performance limits. Deep learning (DL) and artificial intelligence (AI) programs, such as image and facial recognition, speech and language processing, and personalized recommendations, now require data sets so large that they can inhibit desired levels of speed and accuracy.

The unfortunate result is a memory bottleneck … your high-speed computer becomes all bark and no byte.

Mehdi Sadi, assistant professor in electrical and computer engineering, has proposed a solution that he believes will optimize performance and decrease power usage without increasing device size. Sadi will utilize emerging magnetic random-access memory (MRAM) and chiplet-based packaging technologies to optimally design on-chip and off-chip memory systems for AI/DL hardware.

His proposal, “Design and System Technology Co-optimization Towards Addressing the Memory Bottleneck Problem of Deep Learning Hardware,” was recognized by the National Science Foundation (NSF), earning a two-year, $174,923 grant.

“The benefit with MRAM and chiplet technologies is that you can enable a very high-memory capacity on your cell phones and GPUs,” Sadi said. “If you have a computer, AI hardware, or a GPU with more memory, then the direct benefit with this breakthrough is the device now has the capacity to train even more sophisticated and powerful AI/DL models. Modern AI models are powerful and require big data to ensure accuracy. Obviously, it’s important that your AI model is giving you accurate results.”

Artificial intelligence and deep learning aren’t limited to cell phones or personal computers. This emerging technology influences a range of areas, including autonomous vehicles, healthcare, cybersecurity, robotics and gene editing.

Sadi believes it’s vital that his work increase device performance and decrease power usage without increasing device size.

“With this technology, we will be able to accomplish complex and data-intensive AI/DL tasks at a much lower energy cost (i.e., a reduced electricity bill in computer servers, longer battery life on mobile devices),” he said. “While this will significantly increase the memory and computing capacity of AI accelerators or GPUs used in laptops and desktops, it will not impact the overall size of the device, as this emerging memory technology requires a fraction of the chip area compared to the conventional art.”

“Aligned with the goal of establishing the United States’ leadership in the AI/DL domain, the efforts of this project are dedicated to achieving excellence in education, workforce development, and outreach through graduate and undergraduate research, mentoring underrepresented and minority students, and promoting hardware education at the K-12 level.”

Media Contact: Joe McAdory, 334.844.3447
