In the previous lab we learned to create a perceptron network using the
*newp* function. In that lab, we did not deal with the training and
learning rules at all. In this lab, we will create perceptron networks,
practice training them, and apply the learning rules.

We learned in class the following rules:

e = t - a The error is the difference between the target and the actual output

wnew = wold + e\*p' (p' is the transpose of the input column vector, so the product matches the row-vector weights)

bnew = bold + e
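Before turning to MATLAB, the three rules can be sketched as a small Python function (a sketch for illustration only, not toolbox code; the function names are my own):

```python
import numpy as np

def hardlim(n):
    # Hard-limit transfer function: 1 if n >= 0, else 0
    return 1 if n >= 0 else 0

def perceptron_update(w, b, p, t):
    # One application of the perceptron rule.
    # w: 1-by-R row vector of weights, b: scalar bias,
    # p: R-by-1 input column vector, t: scalar target
    a = hardlim(float(w @ p) + b)    # actual output
    e = t - a                        # e = t - a
    w = w + e * p.T                  # wnew = wold + e*p'
    b = b + e                        # bnew = bold + e
    return w, b, e
```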

How do we implement this in Matlab?

Suppose, as in our previous example, we have a single perceptron with:

target t = [1] for the input pair p = [1 2], where values for the inputs are
between -2 and 2.

Let's create the network again:

*net = newp([-2 2; -2 2], 1);*

*net.b{1} = [0];*

*w = [1 -0.8];*

*net.IW{1, 1} = w;*

*p = [1 ; 2];*

*t = [1];*

*a = sim(net, p)*
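For a single perceptron, *sim* just evaluates a = hardlim(w\*p + b). The same forward pass can be sketched in Python (an illustration, not the toolbox function; numpy assumed):

```python
import numpy as np

def perceptron_sim(w, b, p):
    # a = hardlim(w*p + b); hardlim is 1 for n >= 0, else 0
    n = float(w @ p) + b
    return 1 if n >= 0 else 0

w = np.array([[1.0, -0.8]])   # weights from the example above
b = 0.0
p = np.array([[1.0], [2.0]])
a = perceptron_sim(w, b, p)   # n = 1*1 + (-0.8)*2 = -0.6, so a = 0
```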

Thus, the error is:

*e = t - a;*

So now that you have the error, what is the next step? Can you use the learning rules given above to proceed?

The update steps are:

*w = w';* (temporarily transpose w into a column vector so it matches p)

*w = w + e\*p;*

*b = b + e;*

*w = w';* (transpose it back to the original row format)
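The same update can be sketched in Python with explicit transposes (numbers taken from the example above, where a = 0 and therefore e = t - a = 1):

```python
import numpy as np

w = np.array([[1.0, -0.8]])   # row vector, as in the MATLAB code
b = 0.0
p = np.array([[1.0], [2.0]])
e = 1                         # e = t - a from the simulation above

w = w.T                       # w = w'  (temporarily a column vector)
w = w + e * p                 # w = w + e*p
b = b + e                     # b = b + e
w = w.T                       # w = w'  (back to the original row format)
# w is now [2 1.2] and b is 1
```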

Now try again to simulate the network with the new *w* and *b*:

*net.IW{1, 1} = w;*

*net.b{1} = b;*

*a = sim(net, p);*

*e = t - a;*

*Did it converge?*

If not, we should repeat this until the error is 0. As you may have noticed,
performing these steps by hand can take a good bit of time. There is a better
way to do this. Let's go back and repeat the first few steps:

*net = newp([-2 2; -2 2], 1);*

*net.b{1} = [0];*

*w = [1 -0.8];*

*net.IW{1, 1} = w;*

*p = [1 ; 2];*

*t = [1];*

*a = sim(net, p)*

*e = t - a;*

We can also use the function *learnp*. Use *help learnp* to find out what
each parameter means.

*dw = learnp(w, p, [ ], [ ], [ ], [ ], e, [ ], [ ], [ ])*

*w = w + dw;*

If you wish, read the manual for *learnp* and use it to simulate the
network again (remember to copy the updated *w* and *b* back into the
network first).
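For the perceptron rule, the weight increment that *learnp* returns is simply dw = e\*p'. A Python sketch of that arithmetic (not the toolbox function itself; the function name is my own):

```python
import numpy as np

def perceptron_dw(p, e):
    # dW = e * p'  -- the increment learnp computes for the perceptron rule
    return np.outer(e, p)

p = np.array([1.0, 2.0])
e = np.array([1.0])                  # error from the simulation above
dw = perceptron_dw(p, e)             # [[1.0, 2.0]]
w = np.array([[1.0, -0.8]]) + dw     # w = w + dw
```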

Let's further improve the method we worked with above. We can set the number of iterations (epochs) with *net.trainParam.epochs*. For example, here is how to create a network similar to the one given above and train it for one iteration:

*net = newp([-2 2; -2 2], 1);*

*p = [2 ; 2];*

*t = [0];*

*net.trainParam.epochs = 1; % number of iterations*

*net = train(net, p, t);*

What are the weights and the bias? What is the error?

Did your network produce the right result with one
epoch?

Note that the number of iterations is very important.
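What *train* does for a perceptron can be sketched as a loop over epochs and training pairs. A Python illustration (assuming zero initial weights and bias, as *newp* uses; note that hardlim returns 1 at n = 0, which is why even one epoch moves the weights for p = [2; 2], t = 0):

```python
import numpy as np

def hardlim(n):
    return 1 if n >= 0 else 0

def train_perceptron(pairs, epochs):
    # Cycle through the (p, t) pairs, applying the perceptron rule each time.
    r = len(pairs[0][0])                   # number of inputs
    w = np.zeros((1, r))                   # zero initial weights, like newp
    b = 0.0                                # zero initial bias
    for _ in range(epochs):
        for p, t in pairs:
            p = np.reshape(p, (r, 1))
            a = hardlim(float(w @ p) + b)  # simulate
            e = t - a                      # e = t - a
            w = w + e * p.T                # w = w + e*p'
            b = b + e                      # b = b + e
    return w, b

# the example from the text: p = [2; 2], t = 0, one epoch
w, b = train_perceptron([([2.0, 2.0], 0)], epochs=1)
# w is now [-2 -2] and b is -1, which classifies p correctly
```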

**Lab Assignment**

Use the above procedure to solve the classification problem with perceptron
rule.

**class 1:** p1' = [2 2], t1 = 0
**class 2:** p2' = [1 -2], t2 = 1

**class 3:** p3' = [-2 2], t3 = 0
**class 4:** p4' = [-1 1], t4 = 1

You may choose the initial weights and bias as you wish, but if you end up spending too much time thinking about it, just set them all to zero to start.

After you have trained your network, try it on the following individual inputs. Note that training is done; this time you will only simulate:

p = [2 -2], a = ?

p = [2 2], a = ?

p = [-1 -1], a = ?

Copy the list of all commands and your results into a file and e-mail it.

**Post Lab Assignment - Due Monday June 16**

Solve Exercise E4.7 using the above procedure. E-mail your commands and the
results.