Usage of the gov.sandia.cognition.math.matrix.Vector Class, with Code Examples


This article collects representative Java code examples of gov.sandia.cognition.math.matrix.Vector and shows how the class is used in practice. The examples are drawn from curated open-source projects on GitHub, Stack Overflow, Maven, and similar platforms, so they make useful references. Details of the Vector class:
Package: gov.sandia.cognition.math.matrix
Class name: Vector

Introduction to Vector

The Vector interface defines the operations that are expected on a mathematical vector. The Vector can be thought of as a collection of doubles of fixed size (the dimensionality) that supports array-like indexing into the Vector. The Vector is defined as an interface because there is more than one way to implement a Vector, in particular if you want to make use of sparseness, which occurs when most of the elements of a Vector are zero so that they are not represented.
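
Before the examples, here is a minimal sketch of creating and using a Vector. It is an illustration only, assuming the Foundry's VectorFactory with its usual getDefault(), copyValues(...), and createVector(...) methods:

import gov.sandia.cognition.math.matrix.Vector;
import gov.sandia.cognition.math.matrix.VectorFactory;

public class VectorDemo
{
  public static void main(String[] args)
  {
    // Dense 3-dimensional vector with the given values.
    Vector x = VectorFactory.getDefault().copyValues(1.0, 2.0, 3.0);

    // Zero vector of the same dimensionality, filled by index.
    Vector y = VectorFactory.getDefault().createVector(x.getDimensionality());
    y.setElement(1, 5.0);          // array-like, zero-based indexing

    Vector sum = x.plus(y);        // element-wise sum; inputs unchanged
    double dot = x.dotProduct(y);  // 2.0 * 5.0 = 10.0
    System.out.println(sum + " : " + dot);
  }
}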

Code Examples

Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

public void convertFromVector(
  Vector parameters)
{
  if( parameters.getDimensionality() != 3 )
  {
    throw new IllegalArgumentException(
      "Expected three parameters: amplitude, frequency, phase" );
  }
  this.amplitude = parameters.getElement(0);
  this.frequency = parameters.getElement(1);
  this.phase = parameters.getElement(2);
}

Code example source: gov.sandia.foundry/gov-sandia-cognition-common-core

@Override
final public Vector minus(
  final Vector v)
{
  // I need to flip this so that if the input is a dense vector, I
  // return a dense vector.  If it's a sparse vector, then a sparse vector
  // is still returned.
  Vector result = v.clone();
  result.negativeEquals();
  result.plusEquals(this);
  return result;
}
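
As the comment explains, the result's concrete type follows the argument v rather than the receiver. A short sketch of the consequence, assuming this implementation is the one dispatched and that SparseVectorFactory.getDefault() from the same package is available:

Vector dense = VectorFactory.getDefault().copyValues(1.0, 0.0, 2.0);
Vector sparse = SparseVectorFactory.getDefault().createVector(3);
sparse.setElement(2, 4.0);
// Cloned from the dense argument, so the result uses dense storage even
// though the receiver is sparse: (0, 0, 4) - (1, 0, 2) = (-1, 0, 2).
Vector difference = sparse.minus(dense);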

Code example source: gov.sandia.foundry/gov-sandia-cognition-text-core

@Override
public Vector computeLocalWeights(
  final Vector counts)
{
  // Compute the local weights.
  final Vector result = super.computeLocalWeights(counts);
  final int dimensionality = result.getDimensionality();
  if (dimensionality != 0)
  {
    final double average = counts.norm1() / dimensionality;
    final double divisor = Math.log(1.0 + average);
    result.scaleEquals(1.0 / divisor);
  }
  return result;
}
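
In other words, each local weight is divided by log(1 + average count), where the average is the document's total count (its 1-norm, since counts are non-negative) divided by the dimensionality; this is the log-average style of normalization used in text-retrieval weighting schemes.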

Code example source: algorithmfoundry/Foundry

@Override
final public Vector plus(
  final Vector v)
{
  // I need to flip this so that if the input is a dense vector, I
  // return a dense vector.  If it's a sparse vector, then a sparse vector
  // is still returned.
  Vector result = v.clone();
  result.plusEquals(this);
  return result;
}

Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

protected double computeScaleFactor(
  Vector gradientCurrent,
  Vector gradientPrevious )
{
  Vector deltaGradient = gradientCurrent.minus( gradientPrevious );
  double deltaTgradient = deltaGradient.dotProduct( gradientCurrent );
  double denom = gradientPrevious.norm2Squared();
  
  double beta = deltaTgradient / denom;
  return beta;
}
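
The returned value is the Polak-Ribiere conjugate gradient coefficient, beta = g_k · (g_k - g_{k-1}) / ||g_{k-1}||^2, which determines how much of the previous search direction is mixed into the next one.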

Code example source: algorithmfoundry/Foundry

// Fragment of what appears to be a factorization-machine training method;
// the source site truncated the surrounding loops, so the three update
// phases below come from different scopes of that method ("example",
// "prediction", "oldBias", "biasChange", "weightChange", "factorRow",
// and "oldFactor" are defined in the elided context).

// Bias update from the accumulated per-example errors:
final double actual = example.getOutput();
final double error = actual - prediction;
errors.set(i, error);
final double newBias = (oldBias * this.dataSize + errors.sum())
  / (this.dataSize + this.biasRegularization);
this.result.setBias(newBias);
errors.increment(i, biasChange);

// Regularized least-squares update of a single weight:
final double oldWeight = weights.getElement(j);
final Vector inputs = this.inputsTransposed.get(j);
final double sumOfSquares = derivative.norm2Squared();
final double newWeight = sumOfSquares == 0.0 ? 0.0 :
  (oldWeight * sumOfSquares + derivative.dot(errors))
  / (sumOfSquares + this.weightRegularization);
weights.set(j, newWeight);
errors.scaledPlusEquals(weightChange, inputs);
this.totalChange += Math.abs(weightChange);

// Factor update, with the derivative built from element-wise products:
factorTimesInput.set(i,
  this.dataList.get(i).getInput().dot(factorRow));
final Vector derivative = inputs.dotTimes(factorTimesInput);
derivative.scaledMinusEquals(oldFactor, inputs.dotTimes(inputs));
final double sumOfSquares = derivative.norm2Squared();
Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

/**
 * Computes the Viterbi recursion for a given "delta" and "bn"
 * @param delta
 * Previous value of the Viterbi recursion.
 * @param bn
 * Current observation likelihood.
 * @return
 * Updated "delta" and state backpointers.
 */
protected Pair<Vector,int[]> computeViterbiRecursion(
  Vector delta,
  Vector bn )
{
  final int k = delta.getDimensionality();
  final Vector dn = VectorFactory.getDefault().createVector(k);
  final int[] psi = new int[ k ];
  for( int i = 0; i < k; i++ )
  {
    WeightedValue<Integer> transition =
      this.findMostLikelyState(i, delta);
    psi[i] = transition.getValue();
    dn.setElement(i, transition.getWeight());
  }
  dn.dotTimesEquals( bn );
  delta = dn;
  delta.scaleEquals( 1.0/delta.norm1() );
  return DefaultPair.create( delta, psi );
}
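
For each state i, the loop records the most likely predecessor as a backpointer in psi and its probability in dn; the element-wise product with bn folds in the current observation likelihoods, and the final division by the 1-norm renormalizes delta so that repeated products of small probabilities do not underflow.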

Code example source: algorithmfoundry/Foundry

public void convertFromVector(
  Vector parameters)
{
  final int d = this.getInputDimensionality();
  parameters.assertDimensionalityEquals( 1+d + 1+d*d );
  this.setCovarianceDivisor( parameters.getElement(0) );
  Vector mean = parameters.subVector(1, d);
  this.gaussian.setMean(mean);
  Vector iwp = parameters.subVector(d+1, parameters.getDimensionality()-1);
  this.inverseWishart.convertFromVector(iwp);
}
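
Note that subVector is used here with inclusive bounds: subVector(1, d) extracts the d mean elements, and subVector(d+1, getDimensionality()-1) passes the remaining 1 + d*d entries to the inverse-Wishart part, matching the 1+d + 1+d*d dimensionality assertion.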

Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
public Vector convertToVector()
{
  final int dim = this.getInputDimensionality() + 1;
  Vector p = VectorFactory.getDefault().createVector(dim);
  for( int i = 0; i < dim-1; i++ )
  {
    p.setElement(i, this.weightVector.getElement(i) );
  }
  p.setElement(dim-1, this.bias);
  return p;
}
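
A typical round trip pairs convertToVector with the matching convertFromVector from the Vectorizable contract. A brief sketch, where f stands for a hypothetical instance of the class above:

Vector p = f.convertToVector();                // [w_0, ..., w_{d-1}, bias]
p.setElement(p.getDimensionality() - 1, 0.0);  // e.g., zero the bias term
f.convertFromVector(p);                        // write the parameters back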

Code example source: gov.sandia.foundry/gov-sandia-cognition-text-core

// Fragment of what appears to be a probabilistic latent semantic analysis
// update; the source site truncated the expressions, leaving only the
// trailing factors of two products:
//   ... * latent.pDocumentGivenLatent.getElement(i)
//       * latent.pTermGivenLatent.getElement(j);
latent.pLatent = latent.pDocumentGivenLatent.sum();

Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
public Vector getMean()
{
  return this.parameters.scale(1.0 / this.parameters.norm1());
}
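
Scaling the parameter vector by the reciprocal of its 1-norm (its sum, for non-negative parameters) is exactly the mean of a Dirichlet distribution, E[x_i] = alpha_i / sum_j alpha_j, which is presumably the distribution this class represents.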

Code example source: gov.sandia.foundry/gov-sandia-cognition-common-core

/**
 * Sets the value of this entry in the first underlying vector.
 *
 * @param value The value to set in the first underlying vector.
 */
public void setFirstValue(
  double value)
{
  this.getFirstVector().setElement(this.getIndex(), value);
}

Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

// Fragment of a hidden Markov model forward pass; the enclosing loop over
// observations was truncated by the source site, so "b" (the current
// observation likelihood vector) and "logLikelihood" come from the elided
// context.
Vector alpha = this.getInitialProbability().clone();
Matrix A = this.getTransitionProbability();
int index = 0;
  // ... inside the loop, after alpha is propagated through A:
  alpha.dotTimesEquals(b);
  final double weight = alpha.norm1();
  alpha.scaleEquals(1.0/weight);
  logLikelihood += Math.log(weight);
  index++;

Code example source: gov.sandia.foundry/gov-sandia-cognition-common-core

@Override
final public Vector dotTimes(
  final Vector v)
{
  // By switching from this.dotTimes(v) to v.dotTimes(this), we get sparse
  // vectors dotted with dense still being sparse and dense w/ dense is
  // still dense.  The way this was originally implemented in the Foundry
  // (this.clone().dotTimesEquals(v)), if v is sparse, it returns a
  // dense vector type storing sparse data.
  Vector result = v.clone();
  result.dotTimesEquals(this);
  return result;
}

Code example source: algorithmfoundry/Foundry

/**
 * Evaluates the weighted Euclidean distance between two vectors.
 *
 * @param   first
 *      The first vector.
 * @param   second
 *      The second vector.
 * @return
 *      The weighted Euclidean distance between the two vectors.
 */
@Override
public double evaluate(
  final Vectorizable first,
  final Vectorizable second)
{
  // \sqrt(\sum_i w_i * (x_i - y_i)^2)
  // First compute the difference between the two vectors.
  final Vector difference =
    first.convertToVector().minus(second.convertToVector());
  // Now square it.
  difference.dotTimesEquals(difference);
  // Now compute the square root of the weights times the squared
  // difference.
  return Math.sqrt(this.weights.dotProduct(difference));
}
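
A quick worked example: with weights (1, 4) and inputs (0, 0) and (3, 1), the difference is (3, 1), its element-wise square is (9, 1), and the distance is sqrt(1*9 + 4*1) = sqrt(13) ≈ 3.61.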

Code example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

/**
 * Sets the initial guess ("x0")
 *
 * @param initialGuess the initial guess ("x0")
 */
@Override
final public void setInitialGuess(Vector initialGuess)
{
  x0 = initialGuess.clone();
}
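
Cloning the guess keeps the optimizer's copy of x0 independent of the caller's vector, so later in-place updates such as plusEquals cannot mutate the vector the caller still holds.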

Code example source: gov.sandia.foundry/gov-sandia-cognition-text-core

public Vector computeLocalWeights(
  final Vector counts)
{
  // Since the counts are positive, the 1-norm of them is their sum.
  final Vector result = this.vectorFactory.copyVector(counts);
  final double countSum = counts.norm1();
  if (countSum != 0.0)
  {
    result.scaleEquals(1.0 / countSum);
  }
  return result;
}
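
Dividing each count by the total turns raw counts into term frequencies that sum to 1, making documents of different lengths comparable; the countSum check merely avoids division by zero for an empty document.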

Code example source: algorithmfoundry/Foundry

@Override
protected void initialize(
  final LinearBinaryCategorizer target,
  final Vector input,
  final boolean actualCategory)
{
  final double norm = input.norm2();
  if (norm != 0.0)
  {
    final Vector weights = this.getVectorFactory().copyVector(input);
    final double actual = actualCategory ? +1.0 : -1.0;
    weights.scaleEquals(actual / norm);
    target.setWeights(weights);
  }
}
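
In effect, the categorizer's weights start as the first example scaled to unit length and signed by its category, weights = ±input / ||input||, a common warm start for online perceptron-style learners; the norm check skips degenerate all-zero inputs.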

Code example source: gov.sandia.foundry/gov-sandia-cognition-common-core

@Override
public void convertFromVector(
  final Vector parameters)
{
  parameters.assertDimensionalityEquals(1);
  this.value = parameters.getElement(0);
}

Code example source: algorithmfoundry/Foundry

@Override
protected double computeScaleFactor(
  Vector gradientCurrent,
  Vector gradientPrevious )
{
  Vector direction = this.lineFunction.getDirection();
  
  Vector deltaGradient = gradientCurrent.minus( gradientPrevious );
  double deltaTgradient = deltaGradient.dotProduct( gradientCurrent );
  double denom = gradientPrevious.dotProduct( direction );
  double beta = -deltaTgradient / denom;
  return beta;
}
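
Unlike the earlier variant, the denominator here is the previous gradient dotted with the line-search direction; the resulting beta = -(g_k · (g_k - g_{k-1})) / (d · g_{k-1}) is consistent with the Liu-Storey conjugate gradient formula.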
