Friday 16 December 2011

A framework for distributed representation - Neural Network


An artificial neural network consists of a pool of simple processing units which communicate by sending signals to each other over a large number of weighted connections. A set of major aspects of a parallel distributed model can be distinguished:
  • a set of processing units ('neurons,' 'cells');
  • a state of activation yk for every unit, which is equivalent to the output of the unit;
  • connections between the units. Generally, each connection is defined by a weight wjk which determines the effect that the signal of unit j has on unit k;
  • a propagation rule, which determines the effective input sk of a unit from its external inputs;
  • an activation function Fk, which determines the new level of activation based on the effective input sk(t) and the current activation yk(t) (i.e., the update); a short code sketch of this step and the propagation rule follows the list;
  • an external input (aka bias, offset) θk for each unit;
  • a method for information gathering (the learning rule);
  • an environment within which the system must operate, providing input signals and, if necessary, error signals.
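
Taken together, the propagation rule and the activation function describe how a single unit computes its new activation: the effective input sk is the weighted sum of the incoming signals plus the bias θk, and the new activation is yk = Fk(sk). The minimal Python sketch below illustrates this for one unit; the sigmoid is only one possible choice for Fk, and all names and numerical values here are illustrative assumptions, not taken from the text.

import math

def effective_input(weights, activations, bias):
    # Propagation rule: weighted sum of the incoming signals plus the bias/offset.
    return sum(w * y for w, y in zip(weights, activations)) + bias

def sigmoid(s):
    # One possible activation function Fk; any squashing function could be used.
    return 1.0 / (1.0 + math.exp(-s))

# One unit k receiving signals y_j over weighted connections w_jk (example values):
w_jk = [0.5, -0.3, 0.8]   # weights of the incoming connections
y_j  = [1.0, 0.2, 0.7]    # current activations of the sending units
theta_k = 0.1             # external input (bias/offset) of unit k

s_k = effective_input(w_jk, y_j, theta_k)  # effective input s_k
y_k = sigmoid(s_k)                         # new activation y_k = F_k(s_k)
print(s_k, y_k)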

Processing units

Each unit performs a relatively simple job: receive input from neighbours or external sources and use this to compute an output signal which is propagated to other units. Apart from this processing, a second task is the adjustment of the weights. The system is inherently parallel in the sense that many units can carry out their computations at the same time. Within neural systems it is useful to distinguish three types of units: input units (indicated by an index i) which receive data from outside the neural network, output units (indicated by an index o) which send data out of the neural network, and hidden units (indicated by an index h) whose input and output signals remain within the neural network. During operation, units can be updated either synchronously or asynchronously. With synchronous updating, all units update their activation simultaneously; with asynchronous updating, each unit has a (usually fixed) probability of updating its activation at a time t, and usually only one unit will be able to do this at a time. In some cases the latter model has some advantages.
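
As a rough illustration of the difference between the two update modes, the sketch below updates a tiny, fully connected three-unit network either synchronously (every unit computes its new activation from the same old state and all switch at once) or asynchronously (one randomly chosen unit updates while the others keep their current activation). The network size, weights, biases and the sigmoid activation are illustrative assumptions only.

import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def net_input(k, weights, activations, biases):
    # Effective input of unit k: weighted sum of all unit activations plus its bias.
    return sum(weights[j][k] * activations[j] for j in range(len(activations))) + biases[k]

def update_synchronously(weights, activations, biases):
    # All units compute their new activation from the same old state, then switch together.
    return [sigmoid(net_input(k, weights, activations, biases)) for k in range(len(activations))]

def update_asynchronously(weights, activations, biases):
    # Only one (randomly chosen) unit updates; the others keep their current activation.
    new = list(activations)
    k = random.randrange(len(activations))
    new[k] = sigmoid(net_input(k, weights, activations, biases))
    return new

# Tiny three-unit network with illustrative values; weights[j][k] is the weight from unit j to unit k.
weights = [[0.0, 0.4, -0.2],
           [0.1, 0.0, 0.6],
           [-0.5, 0.3, 0.0]]
biases = [0.1, -0.1, 0.0]
y = [0.2, 0.8, 0.5]

print(update_synchronously(weights, y, biases))
print(update_asynchronously(weights, y, biases))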

