A neural network model gives a simplified, systematic representation of the connections between neurons in the brain. The biological system is complex because every neuron sits in an interconnected web whose overall output depends on the behavior of each individual neuron. Linear algebra supplies many of the tools used to analyze such models: vector projections, eigenvalues, and the gradient vector are just a few of the concepts that aid the analysis. Matrices are also used to define the functions computed within a neural network, since each layer maps an input vector to an output vector by matrix multiplication. Linear algebra is therefore a valuable tool for understanding a mechanism as elaborate as the brain.
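As a minimal sketch of two of these ideas, the snippet below (in plain Python, with illustrative names chosen here rather than taken from any particular library) first shows a network layer as a matrix-vector product, and then uses the gradient vector of a squared-error loss to fit a single linear neuron by gradient descent:

```python
def layer(W, x, b):
    # A layer as a matrix-vector product plus a bias:
    # each row of W holds one neuron's incoming weights.
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
            for row, b_j in zip(W, b)]

# Fit a single neuron y_hat = w*x + b to data generated by y = 2x,
# by following the negative gradient of the mean squared error.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, b = 0.0, 0.0
lr = 0.05          # step size along the negative gradient
for _ in range(500):
    n = len(data)
    # Partial derivatives of (1/n) * sum((w*x + b - y)^2)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
    w -= lr * grad_w
    b -= lr * grad_b
# After training, w is close to 2 and b is close to 0.
```

The same update rule scales to full networks: the weights become matrices, and the gradient with respect to each weight matrix is computed layer by layer.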