public abstract class Neurons
extends java.lang.Object
Modifier and Type | Class and Description
---|---
`static class` | `Neurons.ExpRectifier` — Exponential Rectifier neurons
`static class` | `Neurons.ExpRectifierDropout` — Exponential Rectifier with dropout
`static class` | `Neurons.Input` — Input layer of the Neural Network. This layer differs from the others in that it has no incoming weights; it takes its activation values directly from the training points.
`static class` | `Neurons.Linear` — Output neurons for regression: linear units
`static class` | `Neurons.Maxout` — Maxout neurons (picks the max of the k pre-activations `activation_j = sum(A_ij*x_i) + b_j`). Requires k times the model parameters (weights/biases) of a "normal" neuron.
`static class` | `Neurons.MaxoutDropout` — Maxout neurons with dropout
`static class` | `Neurons.Output` — Abstract class for output neurons
`static class` | `Neurons.Rectifier` — Rectified linear unit (ReLU) neurons
`static class` | `Neurons.RectifierDropout` — Rectified linear unit (ReLU) neurons with dropout
`static class` | `Neurons.Softmax` — Output neurons for classification: Softmax
`static class` | `Neurons.Tanh` — Tanh neurons: most common, most stable
`static class` | `Neurons.TanhDropout` — Tanh neurons with dropout
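The nested classes above correspond to different activation functions. As a minimal sketch of the math behind them (illustrative names and signatures, not H2O's actual implementation):

```java
// Sketch of the activations behind Neurons.Rectifier, Neurons.Tanh,
// Neurons.ExpRectifier, and Neurons.Maxout. Method names are illustrative.
public class Activations {
    // Rectifier (ReLU): max(0, x)
    static double rectifier(double x) { return Math.max(0.0, x); }

    // Tanh: bounded in (-1, 1), the "most common, most stable" choice per the docs
    static double tanh(double x) { return Math.tanh(x); }

    // Exponential rectifier (ELU-style, an assumption here): x for x >= 0, exp(x) - 1 otherwise
    static double expRectifier(double x) { return x >= 0 ? x : Math.exp(x) - 1.0; }

    // Maxout: picks the max of k pre-activations a_j = sum_i(A[j][i] * x[i]) + b[j],
    // so it carries k weight rows and k biases where a "normal" neuron carries one.
    static double maxout(double[][] A, double[] b, double[] x) {
        double best = Double.NEGATIVE_INFINITY;
        for (int j = 0; j < A.length; j++) {
            double a = b[j];
            for (int i = 0; i < x.length; i++) a += A[j][i] * x[i];
            best = Math.max(best, a);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(rectifier(-3.0)); // negative inputs clamp to 0.0
        // two linear pieces (k = 2): max(x, -x) behaves like |x|
        System.out.println(maxout(new double[][]{{1.0}, {-1.0}},
                                  new double[]{0.0, 0.0},
                                  new double[]{2.0}));
    }
}
```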
Modifier and Type | Field and Description
---|---
`Storage.DenseVector[]` | `_a`
`Storage.DenseVector` | `_avg_a`
`Storage.DenseVector` | `_b`
`Storage.DenseVector` | `_bEA`
`protected Dropout` | `_dropout` — For Dropout training
`Storage.DenseVector[]` | `_e`
`protected int` | `_index`
`Neurons` | `_input`
`Storage.DenseVector[]` | `_origa` — Layer state (one per neuron): activity, error
`Neurons` | `_previous` — References for feed-forward connectivity
`Storage.DenseRowMatrix` | `_w`
`Storage.DenseRowMatrix` | `_wEA`
`protected DeepLearningModel.DeepLearningParameters` | `params` — Parameters (deep-cloned from the user input; can be modified here)
`protected int` | `units`
Modifier and Type | Method and Description
---|---
`protected double` | `autoEncoderGradient(int row, int mb)` — Helper to compute the reconstruction error for auto-encoders (part of the gradient computation)
`protected abstract void` | `bprop(int n)` — Back-propagation of error terms stored in `_e` (for non-final layers)
`protected void` | `bpropOutputLayer(int n)` — Back-propagate the gradient in the output layer
`protected abstract void` | `fprop(long seed, boolean training, int n)` — Forward propagation
`void` | `init(Neurons[] neurons, int index, DeepLearningModel.DeepLearningParameters p, DeepLearningModelInfo minfo, boolean training)` — Initialization of the parameters and connectivity of a neuron layer
`protected float` | `momentum()`
`float` | `momentum(double n)` — The momentum: a real number in [0, 1). Can be a linear ramp from `momentum_start` to `momentum_stable`, over `momentum_ramp` training samples.
`float` | `rate(double n)` — The learning rate
`protected void` | `setOutputLayerGradient(double ignored, int mb, int n)` — Accumulation of reconstruction errors for a generic Neurons class (only used for auto-encoders)
`java.lang.String` | `toString()` — Print the status of this neuron layer
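For `rate(double n)`, the learning rate typically decays as more training samples are seen. A minimal sketch, assuming the common annealing formula `rate / (1 + rate_annealing * n)` — an assumption here, not necessarily H2O's exact implementation:

```java
// Hypothetical learning-rate annealing sketch: the base rate is divided by
// (1 + rate_annealing * n), where n is the number of training samples seen.
public class RateAnnealing {
    static float rate(double n, double baseRate, double rateAnnealing) {
        return (float) (baseRate / (1.0 + rateAnnealing * n));
    }

    public static void main(String[] args) {
        System.out.println(rate(0, 0.005, 1e-6));         // full base rate at the start
        System.out.println(rate(1_000_000, 0.005, 1e-6)); // halved after 1/rate_annealing samples
    }
}
```

With `rate_annealing = 0` the rate stays constant, matching the "greater than 0" condition in the `rate(double n)` parameter description below.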
protected int units
protected transient DeepLearningModel.DeepLearningParameters params
protected transient int _index
public transient Storage.DenseVector[] _origa
public transient Storage.DenseVector[] _a
public transient Storage.DenseVector[] _e
public Neurons _previous
public Neurons _input
public Storage.DenseRowMatrix _w
public Storage.DenseRowMatrix _wEA
public Storage.DenseVector _b
public Storage.DenseVector _bEA
protected Dropout _dropout
public Storage.DenseVector _avg_a
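The fields above hold the layer's weights (`_w`), biases (`_b`), activations (`_a`), and the link to the previous layer (`_previous`). A minimal forward-pass sketch showing how these pieces fit together, with plain arrays standing in for `Storage.DenseVector` / `Storage.DenseRowMatrix` (this is an illustration, not H2O's code):

```java
// Illustrative dense-layer forward pass: a_out[j] = f(sum_i w[j][i] * a_in[i] + b[j]),
// here with f = tanh (one of the activations listed in the nested classes).
public class DenseLayerSketch {
    static double[] fprop(double[][] w, double[] b, double[] aIn) {
        double[] aOut = new double[b.length];
        for (int j = 0; j < b.length; j++) {
            double sum = b[j];                       // bias, role of _b
            for (int i = 0; i < aIn.length; i++)
                sum += w[j][i] * aIn[i];             // weights (_w) times previous activations
            aOut[j] = Math.tanh(sum);                // this layer's activation, role of _a
        }
        return aOut;
    }

    public static void main(String[] args) {
        // zero weights and bias: tanh(0) = 0 regardless of input
        double[] out = fprop(new double[][]{{0.0, 0.0}}, new double[]{0.0},
                             new double[]{1.0, -1.0});
        System.out.println(out[0]);
    }
}
```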
public java.lang.String toString()
Overrides: toString in class java.lang.Object
public final void init(Neurons[] neurons, int index, DeepLearningModel.DeepLearningParameters p, DeepLearningModelInfo minfo, boolean training)
Parameters:
neurons - Array of all neuron layers, to establish feed-forward connectivity
index - Which layer am I?
p - User-given parameters (Job parental object hierarchy is not used)
minfo - Model information (weights/biases and their momenta)
training - Whether training is done or just testing (no need for dropout)

protected abstract void fprop(long seed, boolean training, int n)
Parameters:
seed - For seeding the RNG inside (for dropout)
training - Whether training is done or just testing (no need for dropout)
n - Number of actually trained samples in this mini-batch

protected abstract void bprop(int n)
protected final void bpropOutputLayer(int n)

protected void setOutputLayerGradient(double ignored, int mb, int n)

protected double autoEncoderGradient(int row, int mb)
Parameters:
row - Neuron index
mb - Minibatch-internal index

public float rate(double n)
Parameters:
n - The number of training samples seen so far (for rate_annealing greater than 0)

protected float momentum()

public final float momentum(double n)
Parameters:
n - The number of training samples seen so far
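The method summary describes `momentum(double n)` as a linear ramp from `momentum_start` to `momentum_stable` over `momentum_ramp` training samples. A minimal sketch of that ramp (field names are assumptions, not H2O's actual parameter object):

```java
// Illustrative linear momentum ramp: interpolates from momentumStart to
// momentumStable as the number of seen training samples approaches
// momentumRamp, then holds at momentumStable.
public class MomentumRamp {
    final double momentumStart;   // e.g. 0.5
    final double momentumStable;  // e.g. 0.99, must stay in [0, 1)
    final double momentumRamp;    // training samples over which to ramp

    MomentumRamp(double start, double stable, double ramp) {
        momentumStart = start; momentumStable = stable; momentumRamp = ramp;
    }

    // n = number of training samples seen so far
    double momentum(double n) {
        if (momentumRamp <= 0 || n >= momentumRamp) return momentumStable;
        double frac = n / momentumRamp;
        return momentumStart + frac * (momentumStable - momentumStart);
    }

    public static void main(String[] args) {
        MomentumRamp m = new MomentumRamp(0.5, 0.99, 1_000_000);
        System.out.println(m.momentum(0));          // start of training: 0.5
        System.out.println(m.momentum(500_000));    // halfway through the ramp
        System.out.println(m.momentum(2_000_000));  // past the ramp: 0.99
    }
}
```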