Lab (1)
Neural Network – Perceptron Architecture


Objective:
Learn to create Perceptron networks


You will use MATLAB's Neural Network Toolbox to complete this lab. The manual for the Neural Network Toolbox is available on the resource page.

A Perceptron can be created using the newp function, usually by running a command like this:

net = newp(PR,S,TF,LF)

Perceptrons are used to solve simple (i.e. linearly separable) classification problems.
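
For reference, a call that spells out all four arguments with the default choices mentioned later in this lab (hardlim as the transfer function and learnp as the learning function) might look like this:

net = newp([0 2], 1, 'hardlim', 'learnp');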

 
Use the MATLAB help on newp to explain what each parameter of the function means:

 

 PR:
 

S :
 

TF:
 

LF:
 
   
The following command creates a Perceptron network with a single one-element input vector and one neuron.  The range for the input is [0 2].

net = newp([0 2], 1);


To view everything that is created, you can run the above command without the ";". 

When I tried this, I got the following output:
>> net = newp([0 2], 1)
net =
    Neural Network object:
    architecture:
         numInputs: 1
         numLayers: 1
       biasConnect: [1]
      inputConnect: [1]
      layerConnect: [0]
     outputConnect: [1]
     targetConnect: [1]

        numOutputs: 1  (read-only)
        numTargets: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:
            inputs: {1x1 cell} of inputs
            layers: {1x1 cell} of layers
           outputs: {1x1 cell} containing 1 output
           targets: {1x1 cell} containing 1 target
            biases: {1x1 cell} containing 1 bias
      inputWeights: {1x1 cell} containing 1 input weight
      layerWeights: {1x1 cell} containing no layer weights

    functions:
          adaptFcn: 'trains'
           initFcn: 'initlay'
        performFcn: 'mae'
          trainFcn: 'trainc'

    parameters:
        adaptParam: .passes
         initParam: (none)
      performParam: (none)
        trainParam: .epochs, .goal, .show, .time

    weight and bias values:
                IW: {1x1 cell} containing 1 input weight matrix
                LW: {1x1 cell} containing no layer weight matrices
                 b: {1x1 cell} containing 1 bias vector

    other:
          userdata: (user stuff)

Try to understand these as much as you can. Some of them will become clearer when we go through an example.

At this point the weights and biases are set to their default values. The default learning function is learnp, and the weight function feeding the hardlim transfer function is dotprod. Thus, the dot product of the weight matrix and the input vector is computed, the bias is added to form the net input, and the result is passed to the transfer function. The default initialization function, initzero, sets the initial values of the weights and biases to zero.
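
In other words, the output of this single-neuron perceptron is a = hardlim(W*p + b). As a minimal sketch (using the one-input network created above, which still holds its default zero weight and bias), you could reproduce the computation by hand:

p = 1.5;                                  % any value in the input range [0 2]
a = hardlim(net.IW{1,1}*p + net.b{1})     % dot product plus bias, passed through hardlim
% With the zero defaults the net input is 0, and hardlim(0) = 1.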

To check the weights and biases, we can run:

inputweights = net.inputweights{1, 1}

biases = net.biases{1}
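
Note that inputweights and biases above are subobject structures describing properties such as the learning and initialization functions. If you just want the numeric values (a quick check on the same network), you can look at:

net.IW{1,1}    % the input weight matrix (all zeros with initzero)
net.b{1}       % the bias vector (zero with initzero)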

Simulation (sim)

When a network is created, that does not necessarily mean it is ready for use. A network should first be trained on the given cases and then used for other inputs. Here we will try an example in which we set the weights and biases manually. That is, we set the parameters and run the network; if we are happy with the outcome, we keep the weights and biases, and if not, we make some changes and try the network again.

Example

Suppose we want to create a perceptron network with a single neuron, one bias, and two inputs. This network will separate some patterns from each other. The limits for each input are [-1 1]. As we mentioned before, when you create a perceptron network (as for all networks in MATLAB), the weights and the bias are set to 0 by default.
net = newp([-1 1; -1 1], 1);

Let’s set the weights to: w11 = -1, w12 = 1, and bias = 1.
net.IW{1,1} = [-1 1];
net.b{1} = [1];

Note: at any point in the process, if you want to check the outcome, just remove the ";" from the end of the command.  Now, let's create an input.  Each input vector in our case should have two values, as a pair.
p = [1  1   -1   -1 ; 1   -1   1   -1] 
This is a two-row, four-column (2-by-4) matrix that represents 4 input pairs.  To save time we present all the given pairs to the network at once.

You can run a simulation of this network using:

a1 = sim(net, p)
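
The output a1 has one value per input column. As a sketch of how to confirm these values by hand (using the weights and bias we set above), you can write out the same computation explicitly:

a1_check = hardlim(net.IW{1,1}*p + net.b{1})   % should match a1 from sim(net, p)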


Now that you have set up your network, try it for a single case.  What is the output for p_new = [1 -1]'?
Confirm the answer by hand.
 

OR Gate Perceptron Network

This is a good example of supervised learning.  The following code creates a perceptron layer with one 2-element input (ranges [0 1] and [-2 2]) and one neuron. (Supplying only two arguments to newp results in the default perceptron learning function learnp being used.)
 
net = newp([0 1; -2 2],1);
 
Now we define a problem, an OR gate, with a set of four 2-element input vectors P and the corresponding four 1-element targets T.  Remember the truth table for the OR gate:

X   Y   X OR Y
0   0   0
1   0   1
0   1   1
1   1   1

 

So there are 4 pairs that can be stored as:
 P = [0 0 1 1; 0 1 0 1];

and 4 targets (the desired outputs) for the above pairs:
 T = [0 1 1 1];
 
 Here we simulate the network's output, train for a maximum of 20 epochs, and then simulate it again.
 
Y = sim(net,P)
net.trainParam.epochs = 20;
net = train(net,P,T);
Y = sim(net,P)
 

Confirm the answer by hand
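
As a sketch of how to confirm the trained network's answers by hand (the exact weight and bias values depend on the training run), pull out the learned parameters and redo the perceptron computation yourself:

W = net.IW{1,1}                 % learned input weights
b = net.b{1}                    % learned bias
Y_check = hardlim(W*P + b)      % should match Y and the targets T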

Note: in case you want to reset the weights and biases back to the default values (0), you can use the init command. For example, in the network that we just created, we can type:

net = init(net);


to reset them back to 0 again.


Sometimes one may want to assign the weights and biases randomly. The rands initialization function does this:

net.inputweights{1,1}.initFcn = 'rands';

net.biases{1}.initFcn = 'rands';
net = init(net);
Let’s check it out:
wts = net.IW{1,1}
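
Since the bias was also re-initialized with rands, you may want to check it as well (a small addition, mirroring the earlier biases command):

bias = net.b{1}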

What do you have for the weights and the bias?
Try your network with these parameters for the two inputs and see what you get this time.

You can use the nntool command to create neural networks using a graphical interface.  Try to see if you can create the above network using that tool.
 
Lab Assignment - Due at the end of the lab
Now that you have learned to set up a Perceptron network, design a network to separate apples and oranges using the weights and biases given in class.
The input vectors for an orange and an apple, respectively, are:
pOrange = [1 -1 -1]' and pApple = [1 1 -1]'.
The network should produce [1] for an orange and [-1] for an apple.
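
As a sketch of the setup (the weight and bias values below are illustrative placeholders only, not the values from class; substitute the ones given in class), note that because the desired outputs are 1 and -1, the hardlims transfer function is a natural choice when calling newp:

net = newp([-1 1; -1 1; -1 1], 1, 'hardlims');   % three inputs in [-1 1], one neuron, symmetric hard limit
net.IW{1,1} = [0 -1 0];                          % placeholder weights -- replace with the weights from class
net.b{1} = 0;                                    % placeholder bias -- replace with the bias from class
aOrange = sim(net, [1; -1; -1])                  % should produce 1 for an orange
aApple  = sim(net, [1; 1; -1])                   % should produce -1 for an apple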
Once your network is trained, try the following inputs:

and then

What did your network produce for these two cases?

What to submit?
Include the list of all commands that you used, followed by the outcome of the MATLAB run, either in a file that you attach to your e-mail or in the body of your e-mail. You can create a blank file and cut and paste your commands as you progress, then attach that file to an e-mail for me, or paste its contents into the body of your e-mail.