Usage of org.deeplearning4j.nn.multilayer.MultiLayerNetwork.feedForward() with code examples

x33g5p2x · reposted 2022-01-25 · Other

This article collects code examples of the Java method org.deeplearning4j.nn.multilayer.MultiLayerNetwork.feedForward() and shows how it is used in practice. The examples are drawn from selected open-source projects on GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of MultiLayerNetwork.feedForward():
Package: org.deeplearning4j.nn.multilayer
Class: MultiLayerNetwork
Method: feedForward

About MultiLayerNetwork.feedForward

Compute activations from input to output of the output layer.
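
To make this concrete, the following minimal sketch builds a tiny network and inspects the activation list. The layer sizes and hyperparameters are illustrative only, not taken from any of the projects below. Note that in the DL4J versions these examples come from, the returned list contains the network input as element 0, followed by one activation array per layer.

import java.util.List;

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class FeedForwardDemo {
  public static void main(String[] args) {
    // A tiny 4 -> 3 -> 2 network; sizes are arbitrary, for illustration only.
    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(42)
        .list()
        .layer(0, new DenseLayer.Builder().nIn(4).nOut(3)
            .activation(Activation.RELU).build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
            .nIn(3).nOut(2).activation(Activation.SOFTMAX).build())
        .build();
    MultiLayerNetwork net = new MultiLayerNetwork(conf);
    net.init();

    INDArray input = Nd4j.rand(1, 4);   // one example, four features
    List<INDArray> activations = net.feedForward(input);

    // The list holds the input plus one activation array per layer.
    for (int i = 0; i < activations.size(); i++) {
      System.out.println("step " + i + ": shape "
          + java.util.Arrays.toString(activations.get(i).shape()));
    }
  }
}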

Code examples

Example source: org.deeplearning4j/deeplearning4j-nn

/**
 * Compute activations from input to output of the output layer
 *
 * @return the list of activations for each layer
 */
public List<INDArray> feedForward() {
  return feedForward(false);
}

Example source: org.deeplearning4j/deeplearning4j-nn

protected INDArray silentOutput(INDArray input, boolean train) {
  List<INDArray> activations = feedForward(input, train);
  //last activation is output
  return activations.get(activations.size() - 1);
}
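
Since the last element of the activation list is the output layer's result, output() and the tail of feedForward() should agree. A quick sanity check, reusing hypothetical net and input variables like those in the sketch above:

// Inference-mode output vs. last entry of the feed-forward list:
INDArray viaOutput = net.output(input, false);
List<INDArray> acts = net.feedForward(input, false);
INDArray viaFeedForward = acts.get(acts.size() - 1);
// Both arrays should hold the same values.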

Example source: org.deeplearning4j/deeplearning4j-nn

/**
 * Reconstructs the input.
 * This is equivalent functionality to a
 * deep autoencoder.
 *
 * @param x        the input to transform
 * @param layerNum the layer to output for encoding
 * @return a reconstructed matrix
 * relative to the size of the last hidden layer.
 * This is great for data compression and visualizing
 * high dimensional data (or just doing dimensionality reduction).
 * <p>
 * This is typically of the form:
 * [0.5, 0.5] or some other probability distribution summing to one
 */
public INDArray reconstruct(INDArray x, int layerNum) {
  List<INDArray> forward = feedForward(x);
  return forward.get(layerNum - 1);
}
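
Note the indexing here: reconstruct() returns element layerNum - 1 of the feed-forward list, and in these DL4J versions that list starts with the raw input at position 0. A hedged usage sketch, assuming a hypothetical trained autoencoder (e.g. 784 -> 32 -> 784) named autoencoder:

// feedForward(x) yields [input, layer0Activation, layer1Activation, ...],
// so reconstruct(digits, 2) returns the first layer's 32-dim encoding.
INDArray encoding = autoencoder.reconstruct(digits, 2);
List<INDArray> acts = autoencoder.feedForward(digits);
INDArray reconstruction = acts.get(acts.size() - 1);   // output-layer reconstruction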

Example source: org.deeplearning4j/deeplearning4j-nn

/**
 * Compute activations from input to output of the output layer
 *
 * @return the list of activations for each layer
 */
public List<INDArray> feedForward(INDArray input, boolean train) {
  setInput(input);
  return feedForward(train);
}
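
The train flag matters when the configuration uses dropout or similar train-time-only behavior: true applies dropout as during fitting, false gives a deterministic inference pass. For example (net and features assumed to exist):

// Inference pass: dropout (if configured) is disabled.
List<INDArray> testActs = net.feedForward(features, false);
// Training-style pass: dropout masks are applied, so repeated calls may differ.
List<INDArray> trainActs = net.feedForward(features, true);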

Example source: org.deeplearning4j/deeplearning4j-nn

/**
 * Returns the probabilities for each label
 * for each example row wise
 *
 * @param examples the examples to classify (one example in each row)
 * @return the likelihoods of each example and each label
 */
@Override
public INDArray labelProbabilities(INDArray examples) {
  List<INDArray> feed = feedForward(examples);
  IOutputLayer o = (IOutputLayer) getOutputLayer();
  return o.labelProbabilities(feed.get(feed.size() - 1));
}
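
Usage is a single call per minibatch; with a softmax output layer each row of the result sums to one. The variable names below are illustrative:

// Row i of probabilities is the predicted class distribution for example i.
INDArray probabilities = net.labelProbabilities(testFeatures);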

Example source: org.deeplearning4j/deeplearning4j-nn

/** Compute the activations from the input to the output layer, given mask arrays (that may be null)
 * The masking arrays are used in situations such as one-to-many and many-to-one recurrent neural network (RNN)
 * designs, as well as for supporting time series of varying lengths within the same minibatch for RNNs.
 */
public List<INDArray> feedForward(INDArray input, INDArray featuresMask, INDArray labelsMask) {
  setLayerMaskArrays(featuresMask, labelsMask);
  List<INDArray> list = feedForward(input);
  clearLayerMaskArrays();
  return list;
}
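
For RNNs the mask arrays have shape [miniBatchSize, timeSeriesLength], with 1 marking real time steps and 0 marking padding. A sketch for two sequences of different lengths, assuming a hypothetical rnnNet with recurrent layers:

int miniBatch = 2, nFeatures = 3, maxLength = 5;
// RNN features are shaped [miniBatch, nFeatures, timeSeriesLength].
INDArray seqFeatures = Nd4j.rand(new int[]{miniBatch, nFeatures, maxLength});
// Example 0 is 5 steps long, example 1 only 3; zeros mark padded steps.
INDArray featuresMask = Nd4j.create(new float[][]{
    {1, 1, 1, 1, 1},
    {1, 1, 1, 0, 0}});
List<INDArray> acts = rnnNet.feedForward(seqFeatures, featuresMask, null);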

Example source: CampagneLaboratory/variationanalysis

private int getModelActivationNumber(MultiLayerNetwork model, FeatureMapper modelFeatureMapper) {
  // Feed a single all-zero example through the network and count every
  // activation value produced across all layers (input included).
  INDArray inputFeatures = Nd4j.zeros(1, modelFeatureMapper.numberOfFeatures());
  int sum = model.feedForward(inputFeatures, false).stream().mapToInt(indArray ->
      indArray.data().asFloat().length).sum();
  System.out.println("Number of activations: " + sum);
  return sum;
}

Example source: CampagneLaboratory/variationanalysis

private FloatArrayList getModelInternalActivations(INDArray testFeatures) {
  // Flatten all per-layer activations from the feed-forward pass into one float list.
  FloatArrayList floats = new FloatArrayList();
  predictiveModel.feedForward(testFeatures).stream().forEach(indArray -> floats.addAll(FloatArrayList.wrap(indArray.data().asFloat())));
  return floats;
}

Example source: org.deeplearning4j/deeplearning4j-nn

/**
 * Run SGD based on the given labels
 */
public void finetune() {
  if (!layerWiseConfigurations.isBackprop()) {
    log.warn("Warning: finetune is not applied.");
    return;
  }
  if (!(getOutputLayer() instanceof IOutputLayer)) {
    log.warn("Output layer not instance of output layer returning.");
    return;
  }
  if (flattenedGradients == null) {
    initGradientsView();
  }
  if (labels == null)
    throw new IllegalStateException("No labels found");
  log.info("Finetune phase");
  IOutputLayer output = (IOutputLayer) getOutputLayer();
  if (output.conf().getOptimizationAlgo() != OptimizationAlgorithm.HESSIAN_FREE) {
    feedForward();
    output.fit(output.input(), labels);
  } else {
    throw new UnsupportedOperationException();
  }
}
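
As the code shows, finetune() fits only the output layer against the labels already set on the network, so input and labels must be set beforehand. A typical call in a pretrain-then-finetune workflow (variable names illustrative):

// After unsupervised layer-wise pretraining, refine the output layer with SGD.
net.setInput(trainFeatures);
net.setLabels(trainLabels);
net.finetune();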

Example source: CampagneLaboratory/variationanalysis

private FloatArrayList getModelInternalActivations(MultiLayerNetwork model, FeatureMapper modelFeatureMapper, BaseInformationRecords.BaseInformation record, int indexOfNewRecordInMinibatch) {
  // Map the record into a single-example feature row, then flatten every
  // activation from the feed-forward pass into one list of floats.
  INDArray inputFeatures = Nd4j.zeros(1, modelFeatureMapper.numberOfFeatures());
  modelFeatureMapper.prepareToNormalize(record, 0);
  modelFeatureMapper.mapFeatures(record, inputFeatures, 0);
  FloatArrayList floats = new FloatArrayList();
  model.feedForward(inputFeatures).stream().forEach(indArray -> floats.addAll(FloatArrayList.wrap(indArray.data().asFloat())));
  return floats;
}

Example source: org.deeplearning4j/deeplearning4j-nn

/**
 * Compute activations from input to output of the output layer
 *
 * @return the list of activations for each layer
 */
public List<INDArray> feedForward(INDArray input) {
  if (input == null)
    throw new IllegalStateException("Unable to perform feed forward; no input found");
  else if (this.getLayerWiseConfigurations().getInputPreProcess(0) != null)
    setInput(getLayerWiseConfigurations().getInputPreProcess(0).preProcess(input, input.size(0)));
  else
    setInput(input);
  return feedForward();
}

Example source: org.deeplearning4j/deeplearning4j-nn

/**
 * Sets the input and labels and returns a score for the prediction
 * wrt true labels
 *
 * @param input  the input to score
 * @param labels the true labels
 * @return the score for the given input,label pairs
 */
@Override
public double f1Score(INDArray input, INDArray labels) {
  feedForward(input);
  setLabels(labels);
  Evaluation eval = new Evaluation();
  eval.eval(labels, labelProbabilities(input));
  return eval.f1();
}
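
A quick evaluation sketch (testFeatures and testLabels are a hypothetical held-out set with one-hot labels). Note that this implementation feeds the input forward twice: once directly and once more inside labelProbabilities().

double f1 = net.f1Score(testFeatures, testLabels);
System.out.println("F1 score: " + f1);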

Example source: org.deeplearning4j/deeplearning4j-nn

/**
 * Sets the input and labels from this dataset
 *
 * @param data the dataset to initialize with
 */
public void initialize(DataSet data) {
  setInput(data.getFeatureMatrix());
  feedForward(getInput());
  this.labels = data.getLabels();
  if (getOutputLayer() instanceof IOutputLayer) {
    IOutputLayer ol = (IOutputLayer) getOutputLayer();
    ol.setLabels(labels);
  }
}

Example source: org.deeplearning4j/deeplearning4j-nn

/**Calculate the score for each example in a DataSet individually. Unlike {@link #score(DataSet)} and {@link #score(DataSet, boolean)}
 * this method does not average/sum over examples. This method allows for examples to be scored individually (at test time only), which
 * may be useful for example for autoencoder architectures and the like.<br>
 * Each row of the output (assuming addRegularizationTerms == true) is equivalent to calling score(DataSet) with a single example.
 * @param data The data to score
 * @param addRegularizationTerms If true: add l1/l2 regularization terms (if any) to the score. If false: don't add regularization terms
 * @return An INDArray (column vector) of size input.numRows(); the ith entry is the score (loss value) of the ith example
 */
public INDArray scoreExamples(DataSet data, boolean addRegularizationTerms) {
  boolean hasMaskArray = data.hasMaskArrays();
  if (hasMaskArray)
    setLayerMaskArrays(data.getFeaturesMaskArray(), data.getLabelsMaskArray());
  feedForward(data.getFeatureMatrix(), false);
  setLabels(data.getLabels());
  INDArray out;
  if (getOutputLayer() instanceof IOutputLayer) {
    IOutputLayer ol = (IOutputLayer) getOutputLayer();
    ol.setLabels(data.getLabels());
    double l1 = (addRegularizationTerms ? calcL1(true) : 0.0);
    double l2 = (addRegularizationTerms ? calcL2(true) : 0.0);
    out = ol.computeScoreForExamples(l1, l2);
  } else {
    throw new UnsupportedOperationException(
            "Cannot calculate score with respect to labels without an OutputLayer");
  }
  if (hasMaskArray)
    clearLayerMaskArrays();
  return out;
}
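
Per-example scores are handy for anomaly detection with autoencoder-style networks. A sketch, assuming a trained net and a DataSet ds:

// Column vector of per-example loss values, with l1/l2 terms included.
INDArray scores = net.scoreExamples(ds, true);
// For an autoencoder, unusually high scores flag candidate anomalies.
double worst = scores.maxNumber().doubleValue();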
