Usage of the gov.sandia.cognition.math.matrix.Vector.scaleEquals() method, with code examples

x33g5p2x · reposted on 2022-02-01

This article collects code examples of the Java method gov.sandia.cognition.math.matrix.Vector.scaleEquals() to show how it is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Vector.scaleEquals():

Package: gov.sandia.cognition.math.matrix
Class: Vector
Method: scaleEquals

About Vector.scaleEquals

Scales this vector in place, multiplying each element by the given scalar; it is the mutating counterpart of scale(double).
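Before the project examples, here is a minimal sketch of the in-place semantics the snippets below rely on. It uses a plain double[] stand-in rather than the Foundry Vector class, so it runs without the library; the helper name mirrors the real method for readability.

```java
// Sketch of in-place scaling, mirroring what Vector.scaleEquals does to its
// own elements (shown on a plain double[] so it runs without Foundry).
public class ScaleEqualsSketch {

    // Multiply every element of v by factor, mutating v (like scaleEquals).
    static void scaleEquals(double[] v, double factor) {
        for (int i = 0; i < v.length; i++) {
            v[i] *= factor;
        }
    }

    public static void main(String[] args) {
        double[] v = {2.0, 4.0, 6.0};
        scaleEquals(v, 0.5);            // v itself is modified, no copy is made
        System.out.println(java.util.Arrays.toString(v)); // [1.0, 2.0, 3.0]
    }
}
```

The key point the examples below exploit is that no new vector is allocated: the receiver is changed, which is why several snippets first copy the input before scaling it.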

Code examples

Example source: gov.sandia.foundry/gov-sandia-cognition-common-core

/**
 * Divides all of the elements of the given vector by its 1-norm (the sum
 * of the absolute values of the elements). If the 1-norm is zero (which
 * means all the elements are zero), then the vector is not modified.
 *
 * @param   vector
 *      The vector whose elements are divided by its 1-norm. It is
 *      modified by this method.
 */
public static void divideByNorm1Equals(
  final Vector vector)
{
  final double norm1 = vector.norm1();
  if (norm1 != 0.0)
  {
    vector.scaleEquals(1.0 / norm1);
  }
}


Example source: gov.sandia.foundry/gov-sandia-cognition-common-core

/**
 * Divides all of the elements of the given vector by its 2-norm (the
 * square root of the sum of the squared values of the elements). If the
 * 2-norm is zero (which means all the elements are zero), then the vector
 * is not modified.
 *
 * @param   vector
 *      The vector whose elements are divided by its 2-norm. It is
 *      modified by this method.
 */
public static void divideByNorm2Equals(
  final Vector vector)
{
  final double norm2 = vector.norm2();
  if (norm2 != 0.0)
  {
    vector.scaleEquals(1.0 / norm2);
  }
}
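The two normalizations above can be sketched on a plain array to make the arithmetic concrete (standalone helper names, invented for illustration; the real methods operate on Foundry Vector objects):

```java
// Sketch of 1-norm and 2-norm normalization on a plain array.
public class NormSketch {

    // Sum of absolute values of the elements.
    static double norm1(double[] v) {
        double s = 0.0;
        for (double x : v) s += Math.abs(x);
        return s;
    }

    // Square root of the sum of squared elements.
    static double norm2(double[] v) {
        double s = 0.0;
        for (double x : v) s += x * x;
        return Math.sqrt(s);
    }

    // Divide each element by the 1-norm unless it is zero,
    // mirroring divideByNorm1Equals above.
    static void divideByNorm1Equals(double[] v) {
        double n = norm1(v);
        if (n != 0.0) {
            for (int i = 0; i < v.length; i++) v[i] /= n;
        }
    }

    public static void main(String[] args) {
        double[] v = {1.0, -3.0};               // 1-norm = 4, 2-norm = sqrt(10)
        divideByNorm1Equals(v);
        System.out.println(v[0] + ", " + v[1]); // 0.25, -0.75
    }
}
```

After 1-norm division the absolute values sum to 1 (useful for turning counts into a distribution); after 2-norm division the vector has unit Euclidean length. The zero check avoids a division by zero on an all-zero vector.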

Example source: openimaj/openimaj

/**
 * @param vt the matrix
 * @return mean of each row
 */
public static Vector rowMean(Matrix vt) {
  final Vector sumOfColumns = vt.sumOfColumns();
  sumOfColumns.scaleEquals(1. / vt.getNumColumns());
  return sumOfColumns;
}

Example source: openimaj/openimaj

/**
 * @param vt the matrix
 * @return mean of each column
 */
public static Vector colMean(Matrix vt) {
  final Vector sumOfRows = vt.sumOfRows();
  sumOfRows.scaleEquals(1. / vt.getNumRows());
  return sumOfRows;
}

Example source: gov.sandia.foundry/gov-sandia-cognition-text-core

public Vector computeLocalWeights(
  final Vector counts)
{
  // Since the counts are positive, the 1-norm of them is their sum.
  final Vector result = this.vectorFactory.copyVector(counts);
  final double countSum = counts.norm1();
  if (countSum != 0.0)
  {
    result.scaleEquals(1.0 / countSum);
  }
  return result;
}


Example source: gov.sandia.foundry/gov-sandia-cognition-text-core

@Override
public Vector computeLocalWeights(
  final Vector counts)
{
  // Compute the local weights.
  final Vector result = super.computeLocalWeights(counts);
  final int dimensionality = result.getDimensionality();
  if (dimensionality != 0)
  {
    final double average = counts.norm1() / dimensionality;
    final double divisor = Math.log(1.0 + average);
    result.scaleEquals(1.0 / divisor);
  }
  return result;
}


Example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

/**
 * Updates the initial probabilities from sequenceGammas
 * @param firstGammas
 * The first gamma of each sequence
 * @return
 * Updated initial probability Vector for the HMM.
 */
protected Vector updateInitialProbabilities(
  ArrayList<Vector> firstGammas )
{
  RingAccumulator<Vector> pi = new RingAccumulator<Vector>();
  for( int k = 0; k < firstGammas.size(); k++ )
  {
    pi.accumulate( firstGammas.get(k) );
  }
  Vector pisum = pi.getSum();
  pisum.scaleEquals( 1.0 / pisum.norm1() );
  return pisum;
}


Example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
protected void initialize(
  final LinearBinaryCategorizer target,
  final Vector input,
  final boolean actualCategory)
{
  final double norm = input.norm2();
  if (norm != 0.0)
  {
    final Vector weights = this.getVectorFactory().copyVector(input);
    final double actual = actualCategory ? +1.0 : -1.0;
    weights.scaleEquals(actual / norm);
    target.setWeights(weights);
  }
}
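The effect of this initialization is that the weights become the input scaled by actual / ||input||_2, i.e. a unit-2-norm vector carrying the sign of the label. A small standalone sketch (plain arrays; the method name initWeights is invented for illustration):

```java
// Sketch of the initialization above: scale the input by actual / ||input||_2.
public class InitSketch {

    // Returns the input scaled to unit 2-norm with the sign of the label;
    // returns all zeros if the input is the zero vector.
    static double[] initWeights(double[] input, boolean actualCategory) {
        double normSq = 0.0;
        for (double x : input) normSq += x * x;
        double norm = Math.sqrt(normSq);
        double[] weights = new double[input.length];
        if (norm != 0.0) {
            double actual = actualCategory ? +1.0 : -1.0;
            for (int i = 0; i < input.length; i++) {
                weights[i] = input[i] * (actual / norm);
            }
        }
        return weights;
    }

    public static void main(String[] args) {
        double[] w = initWeights(new double[]{3.0, 4.0}, false); // 2-norm = 5
        System.out.println(w[0] + ", " + w[1]); // w is approximately {-0.6, -0.8}
    }
}
```

The zero-norm guard mirrors the original snippet: a zero input vector leaves the categorizer's weights untouched rather than dividing by zero.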


Example source: gov.sandia.foundry/gov-sandia-cognition-learning-core

public Vector computeParameterGradientAmalgamate(
  Collection<Object> partialResults )
{
  RingAccumulator<Vector> numerator = new RingAccumulator<Vector>();
  double denominator = 0.0;
  for( Object result : partialResults )
  {
    GradientPartialSSE sse = (GradientPartialSSE) result;
    
    numerator.accumulate( sse.getFirst() );
    denominator += sse.getSecond();
  }
  
  Vector scaleSum = numerator.getSum();
  if( denominator != 0.0 )
  {
    scaleSum.scaleEquals( 1.0 / (2.0*denominator) );
  }
  return scaleSum;
}

