**Lab (1)**

**Neural Network – Perceptron Architecture**

**Objective:** Learn to create Perceptron networks.

Learn to apply the Perceptron Learning Rule.

In order to complete the lab, you need access to a machine with the Matlab Neural Network Toolbox installed. You can complete this lab on your machine at home, or use sc to do it. You will also need the Matlab Neural Network Manual.

A Perceptron can be created using the *newp* function, usually by running a command like:

**net = newp(PR,S,TF,LF)**

Use the Matlab help on *newp* to explain what each parameter of the function means:

PR :

S :

TF:

LF:

To answer this question, you can either use the help command at the Matlab prompt (*help newp*) or use the online help for the Neural Network Toolbox.

The following command creates a Perceptron network with a single one-element input vector and one neuron. The range for the single element is [0 2].

*net = newp([0 2], 1);*

To see what has been created so far, you can use this:

*inputweights = net.inputweights{1, 1}*

inputweights =

delays: 0

initFcn: 'initzero'

learn: 1

learnFcn: 'learnp'

learnParam: []

size: [1 1]

userdata: [1x1 struct]

weightFcn: 'dotprod'

The default learning function is *learnp*. The net input to the *hardlim* transfer function is computed by *dotprod*: the dot product of the weight matrix and the input vector is computed, then the bias is added, to give the net input.
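As a sketch of this computation, here is one hand-worked net input (the weight and input values are made-up for illustration; *newp* actually initializes the weights to zero):

```matlab
% Illustrative values only: one neuron, one input in the range [0 2]
w = 2;            % example weight (hypothetical; initzero would make this 0)
b = 0;            % bias, zero by default
p = 1.5;          % an input inside the range [0 2]
n = w*p + b;      % dot product of weight and input, plus the bias: n = 3
a = hardlim(n);   % hardlim outputs 1 because n >= 0
```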

The default initialization
function, *initzero*, is used to set the initial values of the weights to
zero.

To check the bias, we can run:

*biases = net.biases{1}*

biases =

initFcn: 'initzero'

learn: 1

learnFcn: 'learnp'

learnParam: []

size: 1

userdata: [1x1 struct]

As one can immediately notice, the bias is 0.

**Simulation (sim)**

When a network is created, that does not necessarily mean it is ready for use. A network should first be trained on the given cases, and then be used for other inputs.

Suppose we want to create a network with a single neuron, a bias, and two inputs. The range for each input is [-2 2]. As we mentioned before, the weights and the bias are set to 0 by default.

*net = newp([-2 2; -2 2], 1);*

Let's set the weights to *w11* = -1 and *w12* = 1, and the bias to *b* = 1:

*net.IW{1,1} = [-1 1];*

*net.b{1} = [1];*

Note: at any time during the process, if you want to check the outcome, just omit the ; at the end of the command. Now, let's create some inputs. Each input vector in our case should have two values, a pair.

*p1 = [1; 1];*

A test simulation of our network with this input is:

*a1 = sim(net, p1)*

What is the output?

**Try p2 = [1; -1].**

**Find out how you can run the network for both p1 and p2 together.**

Note: in case you want to reset the weights and bias back to the default of 0, you can use the *init* command. For example, in the network that we just created, we can type:

*net = init(net)*

Sometimes, one may want to assign the weights and biases randomly. There is a function that does this:

*net.inputweights{1,1}.initFcn = 'rands';*

*net.biases{1}.initFcn = 'rands';*

*net = init(net);*

Let's check it out:

*wts = net.IW{1,1}*

What do you have for the weights and the bias?

Try the two inputs and see what your answers are.

**Perceptron Learning Rules**

We learned in class the following rules:

*e = t - a* (the error is the difference between the target and the network output)

*w^new = w^old + e*p^T*

*b^new = b^old + e*
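As a hand-check of these rules, here is one update worked out with the numbers used later in this section (w = [1 -0.8], b = 0, p = [1; 2], t = 1):

```matlab
w = [1 -0.8]; b = 0;   % initial weights and bias
p = [1; 2];  t = 1;    % input vector and target
n = w*p + b;           % 1*1 + (-0.8)*2 + 0 = -0.6
a = hardlim(n);        % hardlim(-0.6) = 0
e = t - a;             % 1 - 0 = 1
w = w + e*p';          % [1 -0.8] + 1*[1 2] = [2 1.2]
b = b + e;             % 0 + 1 = 1
```

Note that *e\*p'* keeps *w* as a row vector; this is the same computation that the transpose trick shown below performs in two steps.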

How do we implement this in Matlab?

Suppose in our previous example we have the target t = [0] for the input pair p = [2; 2].

Let's create the network again:

*net = newp([-2 2; -2 2], 1);*

*net.b{1} = [0];*

*w = [1 -0.8];*

*net.IW{1, 1} = w;*

*p = [1; 2];*

*t = [1];*

*a = sim(net, p)*

*e = t - a;*

So now that you have the error, what is the next step? Can you use the learning rules given above to proceed?

The update steps are:

*b = net.b{1};*

*w = w';* (to temporarily transpose it)

*w = w + e\*p;*

*b = b + e;*

*w = w';*

*net.IW{1,1} = w;* (write the updated weights and bias back into the network)

*net.b{1} = b;*

Now try again to simulate the network.

*a = sim(net, p)*

*e = t - a;*

Did it converge? If not, we should repeat this. There is a better way to do this. Let's go back and repeat the first few steps:

*net = newp([-2 2; -2 2], 1);*

*net.b{1} = [0];*

*w = [1 -0.8];*

*net.IW{1, 1} = w;*

*p = [1; 2];*

*t = [1];*

*a = sim(net, p)*

*e = t - a;*

Now, we can use the function *learnp*. Use help to find out what each parameter means.

*dw = learnp(w, p, [], [], [], [], e, [], [], [])*

*w = w + dw;*

*net.IW{1,1} = w;* (so that the network uses the updated weights)

Now use this to simulate the network again.

Let's try to further improve this process. We can set the number of iterations (epochs). To set one iteration:

*net = newp([-2 2; -2 2], 1);*

*p = [2; 2];*

*t = [0];*

*net.trainParam.epochs = 1;*

*net = train(net, p, t);*

What are the weights and the bias? What is the error?
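To answer these questions, you can inspect the trained network directly, using the same commands introduced earlier in the lab:

```matlab
net.IW{1,1}        % current weights after training
net.b{1}           % current bias after training
a = sim(net, p);   % network output for the training input
e = t - a          % the remaining error
```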

Did your network produce the right result with one epoch?

Let's try this for a set of inputs but keep the number of epochs the same.

*p = [[-2; 2] [1; -2] [-2; 2] [-1; 1]];*

*t = [0 1 0 1];*

*net = train(net, p, t);*

Now what are the weights and the bias?

Suppose that this network has been trained; let's see what it produces for a set of inputs:

*a = sim(net, p);*

**Lab Assignment**

Problem (1)

Now that you have learned to set up a Perceptron network, design a network to separate apples and oranges using the criteria given in class and in brief below: _{},

thus the input parameter for an apple _{} will generate the target value of [1]. Similarly, the input parameter for an orange _{} will generate the target value of [-1].

Once your network is trained, try the following inputs: _{} and then _{}

What to submit?

The list of all commands that you have used, followed by the outcome of the Matlab run. You can create a blank file and cut and paste your commands as you progress. Then print that file.

**What comes next?**

**Perceptron Network with more neurons.**