New MIT class brings together hardware and AI


With its new course, Hardware Architecture for Deep Learning, MIT's Department of Electrical Engineering and Computer Science aims to teach students how hardware and artificial intelligence, two traditionally separate disciplines, interact.

The new class has a long list of prerequisites and covers advanced topics, such as algorithmic design and computer hardware design, in just a few weeks.

“We are beginning to see tremendous student interest in the hardware side of deep learning,” said Joel Emer, a professor of the practice at MIT and a distinguished research scientist at the chip manufacturer Nvidia.

Emer and his co-teacher, Vivienne Sze, an associate professor of electrical engineering and computer science, wrote a journal article and tutorial on recent advances in the field, which serves as the base text for the class.

Deep learning, an approach to artificial intelligence based on neural networks, enables a machine to learn to perform tasks by analyzing training data. Popular applications of deep learning include self-driving cars, language translation, and image recognition.
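To make the idea of "learning by analyzing training data" concrete, here is a minimal sketch, not drawn from the course itself: a single artificial neuron, the basic building block of the networks the article describes, adjusting its weights by gradient descent until its output matches labeled examples (here, the logical OR function).

```python
import math

def sigmoid(x):
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Training material: input pairs and the desired output (logical OR).
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 0.5         # learning rate

# Repeatedly nudge each parameter to reduce the prediction error.
for _ in range(5000):
    for inputs, target in examples:
        pred = sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + b)
        err = pred - target
        w[0] -= lr * err * inputs[0]
        w[1] -= lr * err * inputs[1]
        b -= lr * err

for inputs, target in examples:
    pred = sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + b)
    print(inputs, round(pred), target)
```

A real deep network stacks many layers of such neurons and trains on far larger datasets, which is exactly why the efficient hardware discussed in the class matters.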

“People are recognizing the importance of having efficient hardware to support deep learning. ... One of the greatest limitations of progress in deep learning is the amount of computation available,” said Sze.