Lab (3)
Neural Network - Perceptron Learning Rules

In the previous lab, we learned to create a perceptron network using the newp function, but we did not deal with training or learning rules at all. In this lab, we will create perceptron networks, practice training them, and apply the perceptron learning rules.

We learned in class the following rules:

                        e = t - a                       The error is the difference between the target and the network output

                        wnew = wold + e*pT

                        bnew = bold + e
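For example, with w = [1 -0.8], b = 0, p = [2 2]T, and t = 0 (the same values used below), the output is a = hardlim(w*p + b) = hardlim(1*2 + (-0.8)*2 + 0) = hardlim(0.4) = 1, so e = t - a = 0 - 1 = -1. One update then gives wnew = [1 -0.8] + (-1)*[2 2] = [-1 -2.8] and bnew = 0 + (-1) = -1. (hardlim is the hard-limit transfer function that newp assigns by default.)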

MATLAB Implementation

Suppose we have a single perceptron defined as:

{p = [2 2]T, t = [0]}, where the range of each input element is -2 to 2.
 
Let's create a perceptron network with the above parameters (the first argument to newp gives the minimum and maximum of each input element; the second is the number of neurons):

net = newp([-2 2; -2 2], 1);
b = 0;

net.b{1} = b;

w = [1  -0.8];

net.IW{1, 1} = w;

p = [2 ; 2];

t = [0];

a = sim(net, p)
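If everything is set up correctly, this should display a = 1, matching the hand calculation above.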

To compute the error:

e = t - a;

If e is not 0, we need to update the weights and the biases: 

w = w + e*p';

b = b + e;

Now simulate the network again with the new w and b.

net.b{1} = b;

net.IW{1, 1} = w;

a = sim(net, p);

e = t - a;

If e is not 0, repeat until error is 0.

As you may have noticed, performing these steps by hand can take a good bit of time.  We can create a .m file that handles the iterations.  I have created a MATLAB program that does the above task.  Save the following code in a .m file by going to File > New > M-File.  Make sure your current directory is the same as the folder in which you saved the file.  Run the program by typing the file name without the .m extension to see the results.  Note that this program also has a limit on the number of iterations.

net = newp([-2 2; -2 2], 1);
p = [2; 2];
t = 0;
b = 0;
net.b{1} = b;
w = [1  -0.8];
net.IW{1, 1} = w;
a = sim(net, p);
e = t - a;
count = 0;   % iteration counter

while (e ~= 0 && count < 100)
    w = w + e*p';          % perceptron weight update
    b = b + e;             % perceptron bias update
    net.b{1} = b;
    net.IW{1, 1} = w;
    a = sim(net, p);
    e = t - a;
    count = count + 1;
end

count
w
b
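For this single training pair, the loop should converge after the one update computed by hand above, so the program should report count = 1, w = [-1 -2.8], and b = -1.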

There is also a way to do this by utilizing the Neural Network Toolbox learning functions.  Let's go back and repeat the first few steps:

net = newp([-2 2; -2 2], 1);

net.b{1} = [0];

w = [1  -0.8];

net.IW{1, 1} = w;

p = [1 ; 2];

t = [1];

a = sim(net, p)

e = t - a;
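For these values you should find a = 0 (since w*p + b = 1*1 + (-0.8)*2 + 0 = -0.6 and hardlim(-0.6) = 0), and therefore e = 1.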

We can also use the function learnp to compute the weight change. Use help learnp to find out what each parameter means.

dw = learnp(w, p, [], [], [], [], e, [], [], [])

w = w + dw;

If you wish, read the manual entry for learnp, then use it to update the network and simulate again.
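A minimal sketch of that last step, assuming the w, p, t, and e computed just above (the bias is updated by hand here, since learnp only returns the weight change):

net.IW{1, 1} = w;      % load the updated weights back into the network
b = net.b{1} + e;      % apply the bias rule by hand
net.b{1} = b;
a = sim(net, p)
e = t - a

For this example the new output should match the target, so e should come out 0.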

Let's try to further improve on the method we worked with in the example above.  We can set the number of iterations (epochs) with net.trainParam.epochs.  For example, here is how to create a network similar to the one given above and train it for one iteration:

net = newp([-2 2; -2 2], 1);

p = [2 ; 2];

t = [0];

net.trainParam.epochs = 1;   % number of iterations (epochs)

net = train(net, p, t);

What are the weights and the bias?  What is the error?
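One way to check is to read the weights and bias back out of the trained network and simulate it again:

w = net.IW{1, 1}
b = net.b{1}
a = sim(net, p)
e = t - a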

 
Did your network produce the right result with one epoch?

Note that the number of iterations is very important.
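If one epoch is not enough, you can raise the limit and train again, for example (20 epochs is an arbitrary choice here):

net.trainParam.epochs = 20;
net = train(net, p, t);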

Lab Assignment
Use the program we developed above to solve the following classification problem with the perceptron rule.

class 1: {p1=[2 2]T , t1= 0}        class 2: {p2 =[1 -2]T, t2 = 1}

class 3: {p3 =[-2 2]T, t3= 0}        class 4: {p4 =[-1 1]T, t4= 1}

You can use initial weights and a bias as you wish, but if you end up spending too much time thinking about it, just make the starting weights and bias 0.
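One convenient way to present all four pairs to your program is to collect the inputs as columns of one matrix and the targets in one row vector (a sketch; the names P and T are my own choice):

P = [2 1 -2 -1; 2 -2 2 1];    % columns are p1, p2, p3, p4
T = [0 1 0 1];                % the corresponding targets

Your while loop will then need a pass over the columns of P, applying the weight and bias updates after each pair, and it should stop only when every pair produces zero error.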

After you have trained your network, try it on the following individual inputs.  Note that you are done with training; this time you will only simulate:

inp1 = [2 -2]T, a = ?
inp2 = [2 2]T, a = ?
inp3 = [-1 -1]T, a = ?
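Remember that sim expects column vectors, so enter each input with a semicolon or a transpose, e.g.:

a1 = sim(net, [2; -2])
a2 = sim(net, [2; 2])
a3 = sim(net, [-1; -1])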

Copy your program and the results into a file and e-mail it as an attachment.  Please make sure you have your name in the file.