public class DeepLearningTask extends FrameTask<DeepLearningTask>
Nested classes inherited from class FrameTask: FrameTask.ExtractDenseRow

| Constructor and Description |
|---|
| `DeepLearningTask(water.Key jobKey, DeepLearningModelInfo inputModel, float fraction, int iteration)` The only constructor |
| `DeepLearningTask(water.Key jobKey, DeepLearningModelInfo inputModel, float fraction, int iteration, water.H2O.H2OCountedCompleter cmp)` |
| Modifier and Type | Method and Description |
|---|---|
| `static void` | `bpropMiniBatch(Neurons[] neurons, int n)` Helper to apply back-propagation without clearing out the gradients afterwards; used for gradient checking |
| `protected void` | `chunkDone(long n)` After each chunk, add the number of processed rows to the counter |
| `protected boolean` | `chunkInit()` Override this to initialize at the beginning of chunk processing |
| `protected void` | `closeLocal()` After all maps are done on a node, store the per-node model into the DKV (for elastic averaging); otherwise, do nothing |
| `static void` | `fpropMiniBatch(long seed, Neurons[] neurons, DeepLearningModelInfo minfo, DeepLearningModelInfo consensus_minfo, boolean training, double[] responses, double[] offset, int n)` Forward propagation; assumes layer 0 has `_a` filled with (horizontalized categorical) double values |
| `protected int` | `getMiniBatchSize()` Note: if this is overridden, then `applyMiniBatch` must be overridden as well to perform the model/weight mini-batch update |
| `static Neurons[]` | `makeNeuronsForTesting(DeepLearningModelInfo minfo)` |
| `static Neurons[]` | `makeNeuronsForTraining(DeepLearningModelInfo minfo)` |
| `DeepLearningModelInfo` | `model_info()` Accessor to the object containing the (final) state of the Deep Learning model; should only be queried after calling `this.doAll(Frame training)` |
| `protected void` | `postGlobal()` After all reduces are done, the driver node calls this method to clean up; only needed when not inside a DeepLearningTask2 (which does the reduction between replicated data workers) |
| `void` | `processMiniBatch(long seed, double[] responses, double[] offsets, int n)` Apply the gradient to update the weights |
| `void` | `processRow(long seed, DataInfo.Row r, int mb)` Process one training row at a time (online learning) |
| `void` | `reduce(DeepLearningTask other)` Average the per-node models (for elastic averaging; they were already written to the DKV in postLocal()). This is a no-op between F/J worker threads, which operate on the same weights/biases |
| `protected void` | `setupLocal()` Transfer ownership from the global (shared) model to the local model that will be worked on |
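The summary above splits training into a per-row step (`processRow`, which accumulates a gradient contribution) and a per-mini-batch step (`processMiniBatch`, which applies the accumulated gradient to the weights). A minimal self-contained sketch of that split, for a linear model with squared loss — illustrative only, not the H2O implementation (the class name, learning rate, and model are invented):

```java
import java.util.Arrays;

// Sketch of the processRow / processMiniBatch split: gradients are
// accumulated per row and applied once per mini-batch.
public class SgdSketch {
    double[] w;        // model weights
    double[] gradSum;  // gradient accumulated over the current mini-batch
    double lr = 0.1;   // learning rate (arbitrary for the sketch)

    SgdSketch(int dim) { w = new double[dim]; gradSum = new double[dim]; }

    // Analogue of processRow: add this row's gradient (squared loss, linear model).
    void processRow(double[] x, double y) {
        double pred = 0;
        for (int i = 0; i < w.length; i++) pred += w[i] * x[i];
        double err = pred - y;
        for (int i = 0; i < w.length; i++) gradSum[i] += err * x[i];
    }

    // Analogue of processMiniBatch: apply the averaged gradient, then reset.
    // n = rows actually seen (can be less than the nominal batch size).
    void processMiniBatch(int n) {
        for (int i = 0; i < w.length; i++) w[i] -= lr * gradSum[i] / n;
        Arrays.fill(gradSum, 0);
    }

    public static void main(String[] args) {
        SgdSketch s = new SgdSketch(1);
        s.processRow(new double[]{1.0}, 1.0);
        s.processRow(new double[]{1.0}, 1.0);
        s.processMiniBatch(2);
        System.out.println(s.w[0]); // weight moved toward the target
    }
}
```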
public DeepLearningTask(water.Key jobKey,
DeepLearningModelInfo inputModel,
float fraction,
int iteration)
Parameters:
jobKey -
inputModel - Initial model state
fraction - Fraction of rows of the training data to train with
iteration -

public DeepLearningTask(water.Key jobKey,
DeepLearningModelInfo inputModel,
float fraction,
int iteration,
water.H2O.H2OCountedCompleter cmp)
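Once constructed, a task like this is driven through the FrameTask/MRTask life cycle: `setupLocal`, then `chunkInit`/`processRow`/`chunkDone` per chunk, then `closeLocal`, `reduce`, and `postGlobal`. A self-contained sketch of that hook order — an illustrative skeleton, not the H2O scheduler:

```java
import java.util.ArrayList;
import java.util.List;

// Records the order in which FrameTask-style hooks would fire for a
// single node processing several chunks (illustrative skeleton only).
public class LifecycleSketch {
    final List<String> calls = new ArrayList<>();

    void run(int chunks, int rowsPerChunk) {
        calls.add("setupLocal");             // claim a local copy of the model
        for (int c = 0; c < chunks; c++) {
            calls.add("chunkInit");          // per-chunk initialization
            for (int r = 0; r < rowsPerChunk; r++)
                calls.add("processRow");     // train on one row
            calls.add("chunkDone");          // add processed rows to the counter
        }
        calls.add("closeLocal");             // store per-node model (elastic averaging)
        calls.add("reduce");                 // average per-node models
        calls.add("postGlobal");             // driver-side cleanup
    }

    public static void main(String[] args) {
        LifecycleSketch s = new LifecycleSketch();
        s.run(2, 3);
        System.out.println(s.calls);
    }
}
```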
public final DeepLearningModelInfo model_info()
protected void setupLocal()
Overrides: setupLocal in class FrameTask&lt;DeepLearningTask&gt;

protected boolean chunkInit()
Overrides: chunkInit in class FrameTask&lt;DeepLearningTask&gt;

public final void processRow(long seed,
DataInfo.Row r,
int mb)
Overrides: processRow in class FrameTask&lt;DeepLearningTask&gt;
Parameters:
seed - Seed is only used if reproducible mode is enabled
r - Row (must be dense for now)
mb - mini-batch internal index

public void processMiniBatch(long seed,
                             double[] responses,
                             double[] offsets,
                             int n)
Overrides: processMiniBatch in class FrameTask&lt;DeepLearningTask&gt;
Parameters:
seed -
responses -
offsets -
n - number of trained examples in this last mini-batch (usually == mini_batch_size, but can be less)

public static void bpropMiniBatch(Neurons[] neurons,
                                  int n)
Parameters:
neurons -
n - number of trained examples in this last mini-batch (usually == mini_batch_size, but can be less)

protected int getMiniBatchSize()
Overrides: getMiniBatchSize in class FrameTask&lt;DeepLearningTask&gt;

protected void chunkDone(long n)
Overrides: chunkDone in class FrameTask&lt;DeepLearningTask&gt;
Parameters:
n - Number of processed rows

protected void closeLocal()
Overrides: closeLocal in class FrameTask&lt;DeepLearningTask&gt;

public void reduce(DeepLearningTask other)
Overrides: reduce in class water.MRTask&lt;DeepLearningTask&gt;
Parameters:
other -

protected void postGlobal()
Overrides: postGlobal in class water.MRTask&lt;DeepLearningTask&gt;

public static Neurons[] makeNeuronsForTraining(DeepLearningModelInfo minfo)
public static Neurons[] makeNeuronsForTesting(DeepLearningModelInfo minfo)
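The `reduce` step above averages the per-node models for elastic averaging. The core operation is an element-wise average of two weight vectors; this sketch weights each node by the number of rows it trained on, which is an assumption for illustration, not necessarily H2O's exact scheme:

```java
// Sketch of reduce()-style model averaging: combine two per-node weight
// vectors, weighting each by the number of rows it trained on (an
// assumption for this sketch, not necessarily H2O's exact scheme).
public class ModelAverage {
    double[] w;      // this node's weights
    long processed;  // rows this node trained on

    ModelAverage(double[] w, long processed) { this.w = w; this.processed = processed; }

    // Analogue of reduce(other): weighted average, in place.
    void reduce(ModelAverage other) {
        long total = processed + other.processed;
        for (int i = 0; i < w.length; i++)
            w[i] = (w[i] * processed + other.w[i] * other.processed) / total;
        processed = total;
    }

    public static void main(String[] args) {
        ModelAverage a = new ModelAverage(new double[]{1.0}, 100);
        ModelAverage b = new ModelAverage(new double[]{3.0}, 300);
        a.reduce(b);
        System.out.println(a.w[0]);
    }
}
```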
public static void fpropMiniBatch(long seed,
Neurons[] neurons,
DeepLearningModelInfo minfo,
DeepLearningModelInfo consensus_minfo,
boolean training,
double[] responses,
double[] offset,
int n)
Parameters:
seed -
neurons -
minfo -
consensus_minfo -
training -
n - Number of actually trained samples in this mini-batch
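`bpropMiniBatch` deliberately leaves the gradients in place so they can be compared against a finite-difference estimate (gradient checking). The check itself is standard: the analytic gradient should match (L(w+h) - L(w-h)) / 2h. A self-contained sketch for a single linear neuron with squared loss — illustrative, not the H2O Neurons code:

```java
// Gradient check sketch: analytic gradient vs. central finite difference
// for a one-weight linear neuron with squared loss L(w) = 0.5*(w*x - y)^2.
public class GradCheck {
    static double loss(double w, double x, double y) {
        double e = w * x - y;
        return 0.5 * e * e;
    }

    // Analytic dL/dw, as back-propagation would compute it.
    static double bprop(double w, double x, double y) {
        return (w * x - y) * x;
    }

    // Central finite-difference estimate of dL/dw.
    static double numericGrad(double w, double x, double y, double h) {
        return (loss(w + h, x, y) - loss(w - h, x, y)) / (2 * h);
    }

    public static void main(String[] args) {
        double g = bprop(0.7, 2.0, 1.0);
        double n = numericGrad(0.7, 2.0, 1.0, 1e-6);
        System.out.println(g + " vs " + n); // the two should agree closely
    }
}
```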