Perhaps the most interesting emergent phenomena in nature are the functions of the brain. You start with what is essentially a bunch of neurons and you end up with a thinking thing. We will try to get a feel for how this is possible by examining simple models of learning based on computational analogues of physical neurons.

**Template For Solutions:** https://docs.google.com/document/d/14-3JKGK7AHgH68CTd0TJkSYQnlGBapsy9xLeYhiU00I/edit?usp=sharing

Computational models of neurons are known as **artificial neural networks**, or sometimes just *neural nets*. While neural nets are grossly simplified models of the brain, they have become useful tools in computer science, engineering, and science to solve a variety of interesting and challenging problems. In this unit, we will look at three artificial neural networks in our investigation of learning.

1. **Hopfield Networks** (*50% of the grade*): The Hopfield network is a simple artificial neural network designed by physicist John Hopfield to model how neurons memorize information. We will explore the ability of Hopfield networks to memorize simple images.

2. **Feedforward Neural Networks** (*50% of the grade*): The feedforward neural network is widely used in the deep learning community for supervised learning. For example, such neural nets can be trained to identify faces in images. We will apply this machine learning tool to a physics problem: identifying phase transitions in the Ising model.
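To get a concrete feel for how a Hopfield network memorizes patterns, here is a minimal sketch. It assumes the standard Hebbian learning rule and synchronous sign updates; the specific conventions used in the course notes may differ.

```python
# Minimal Hopfield network sketch: Hebbian training plus sign-update recall.
# Assumptions (not from the course notes): synchronous updates, ties -> +1.
import numpy as np

def train(patterns):
    """Build the weight matrix from +/-1 patterns via the Hebbian rule."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, state, steps=10):
    """Iterate the network until the state settles into a stored memory."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1       # break ties toward +1
    return state

# Store one 4-pixel "image" and recover it from a corrupted copy.
memory = np.array([[1, -1, 1, -1]])
W = train(memory)
noisy = np.array([1, 1, 1, -1])     # second pixel flipped
print(recall(W, noisy))             # recovers the stored pattern
```

The key idea visible here is that recall is just repeated local updates: each pixel flips toward agreement with its weighted neighbors, so the corrupted image "rolls downhill" into the nearest stored memory.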

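For the feedforward case, the sketch below trains a one-hidden-layer network with plain gradient descent in NumPy. The task here is a stand-in of my own choosing (classifying the sign of a spin configuration's net magnetization); the actual unit will use real Ising-model data, and the layer sizes and learning rate are illustrative assumptions.

```python
# Minimal one-hidden-layer feedforward network sketch (sigmoid units,
# full-batch gradient descent). Toy task: label a +/-1 "spin configuration"
# as 1 if its net magnetization is positive. All hyperparameters are
# illustrative assumptions, not the course's settings.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 random configurations of 16 spins each.
X = rng.choice([-1.0, 1.0], size=(200, 16))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# One hidden layer of 8 units.
W1 = rng.normal(scale=0.1, size=(16, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1));  b2 = np.zeros(1)

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                        # sigmoid + cross-entropy gradient
    d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagate to hidden layer
    W2 -= 0.5 * h.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(axis=0)
    W1 -= 0.5 * X.T @ d_h / len(X);   b1 -= 0.5 * d_h.mean(axis=0)

accuracy = ((out > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same forward-pass/backpropagation loop, scaled up and fed Monte Carlo spin configurations, is what will let the network locate the Ising phase transition.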
*Extra Credit* (*25% of the grade*):

- **Restricted Boltzmann Machines**: The restricted Boltzmann machine (RBM) is a statistical-physics-inspired neural net that is able to learn complicated probability distributions. We will demonstrate how to use this physics-inspired tool in engineering contexts by training an RBM to learn hand-written digits.
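To see what "learning a probability distribution" looks like in practice, here is a toy RBM trained with one step of contrastive divergence (CD-1). The 6-pixel patterns, layer sizes, and learning rate are illustrative assumptions; the extra-credit task will use actual hand-written digits.

```python
# Minimal restricted Boltzmann machine sketch trained with CD-1.
# Toy 6-pixel "images" stand in for digits; all sizes and rates are
# illustrative assumptions, not the course's settings.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
a = np.zeros(n_visible)                 # visible biases
b = np.zeros(n_hidden)                  # hidden biases

# Two binary patterns the RBM should learn to reproduce.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for _ in range(1000):
    v0 = data
    ph0 = sigmoid(v0 @ W + b)                       # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + a)                     # reconstruct visibles
    ph1 = sigmoid(pv1 @ W + b)
    W += 0.1 * (v0.T @ ph0 - pv1.T @ ph1) / len(data)   # CD-1 update
    a += 0.1 * (v0 - pv1).mean(axis=0)
    b += 0.1 * (ph0 - ph1).mean(axis=0)

recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print(np.round(recon))                  # reconstructions of the two patterns
```

Unlike the feedforward network, the RBM has no labels: the CD-1 update only nudges the weights so that the model's own reconstructions look more like the data, which is why the same recipe scales to learning the distribution of digit images.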