Usage and code examples of the gov.sandia.cognition.math.matrix.Vector.norm2() method

x33g5p2x · reposted 2022-02-01 under "Other"

This article collects a number of code examples of the Java method gov.sandia.cognition.math.matrix.Vector.norm2(), showing how Vector.norm2() is used in practice. The examples are drawn from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the Vector.norm2() method:
Package path: gov.sandia.cognition.math.matrix.Vector
Class name: Vector
Method name: norm2

About Vector.norm2

norm2() returns the 2-norm (Euclidean norm) of the vector, i.e. the square root of the sum of the squared values of its elements (as described in the Javadoc of the first example below).
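Although the library's own description is sparse, the examples below show what norm2() computes. A minimal plain-Java sketch of the same computation (no Foundry dependency; names are illustrative):

```java
public class Norm2Sketch {
    /** Computes the 2-norm: the square root of the sum of squared elements. */
    public static double norm2(double[] v) {
        double sumSq = 0.0;
        for (double x : v) {
            sumSq += x * x;
        }
        return Math.sqrt(sumSq);
    }

    public static void main(String[] args) {
        // 3-4-5 right triangle: norm2({3, 4}) == 5.0
        System.out.println(norm2(new double[] {3.0, 4.0}));
    }
}
```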

Code examples

Code example source (origin): gov.sandia.foundry/gov-sandia-cognition-common-core

/**
 * Divides all of the given elements of the vector by the 2-norm (the square 
 * root of the sum of the squared values of the elements). If the 2-norm is 
 * zero (which means all the elements are zero), then the vector is not 
 * modified.
 *
 * @param   vector
 *      The vector to divide the elements by the 2-norm. It is modified by
 *      this method.
 */
public static void divideByNorm2Equals(
  final Vector vector)
{
  final double norm2 = vector.norm2();
  if (norm2 != 0.0)
  {
    vector.scaleEquals(1.0 / norm2);
  }
}
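divideByNorm2Equals normalizes a vector in place and leaves an all-zero vector untouched to avoid division by zero. The same logic without the Foundry types (plain-Java sketch, illustrative names):

```java
public class NormalizeSketch {
    /** Divides each element by the 2-norm in place; an all-zero vector is left unchanged. */
    public static void divideByNorm2Equals(double[] v) {
        double sumSq = 0.0;
        for (double x : v) {
            sumSq += x * x;
        }
        double norm2 = Math.sqrt(sumSq);
        if (norm2 != 0.0) {
            for (int i = 0; i < v.length; i++) {
                v[i] /= norm2;
            }
        }
    }

    public static void main(String[] args) {
        double[] v = {3.0, 4.0};
        divideByNorm2Equals(v); // v is now approximately {0.6, 0.8}
        double[] z = {0.0, 0.0};
        divideByNorm2Equals(z); // zero vector stays unchanged
    }
}
```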


Code example source (origin): algorithmfoundry/Foundry

@Override
protected void initialize(
  final LinearBinaryCategorizer target,
  final Vector input,
  final boolean actualCategory)
{
  final double norm = input.norm2();
  if (norm != 0.0)
  {
    final Vector weights = this.getVectorFactory().copyVector(input);
    final double actual = actualCategory ? +1.0 : -1.0;
    weights.scaleEquals(actual / input.norm2());
    target.setWeights(weights);
  }
}
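This initializer sets the starting weights to the input scaled to unit 2-norm, with sign +1 or −1 according to the category, skipping zero inputs. The same rule in plain Java (array-based sketch, illustrative names):

```java
public class InitWeightsSketch {
    /** Returns the input scaled to unit 2-norm, signed by the category,
     *  or null for a zero input (mirroring the zero-norm guard above). */
    public static double[] initialWeights(double[] input, boolean actualCategory) {
        double sumSq = 0.0;
        for (double x : input) {
            sumSq += x * x;
        }
        double norm = Math.sqrt(sumSq);
        if (norm == 0.0) {
            return null;
        }
        double sign = actualCategory ? +1.0 : -1.0;
        double[] weights = new double[input.length];
        for (int i = 0; i < input.length; i++) {
            weights[i] = sign * input[i] / norm;
        }
        return weights;
    }
}
```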


Code example source (origin): algorithmfoundry/Foundry

double lambda = vlambda.norm2();
double dp = vunithat.minus( vunit ).norm2();
double dn = vunithat.plus( vunit ).norm2();
if (dn < dp)

Code example source (origin): gov.sandia.foundry/gov-sandia-cognition-learning-core

/**
 * Evaluates this function on the provided cluster.
 *
 * @param cluster The cluster to calculate the function on.
 * @return The result of applying this function to the cluster.
 */
public double evaluate(NormalizedCentroidCluster<V> cluster)
{
  double total = 1.0;
  Vector centroid = cluster.getCentroid().convertToVector();
  Vector normalizedCentroid
    = cluster.getNormalizedCentroid().convertToVector();
  //if centroid is 0.0, cosine measure returns 0.0
  if (centroid.norm2() != 0.0)
  {
    total -= centroid.dotProduct(normalizedCentroid) / centroid.norm2();
  }
  total *= cluster.getMembers().size();
  return total;
}
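The quantity subtracted from 1 above, centroid·normalizedCentroid / ||centroid||, is a cosine-style similarity term. The core cosine-similarity computation can be sketched on its own (plain arrays, illustrative names):

```java
public class CosineSketch {
    /** Cosine similarity of two vectors; returns 0.0 if either has zero norm. */
    public static double cosine(double[] a, double[] b) {
        double dot = 0.0;
        double normASq = 0.0;
        double normBSq = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normASq += a[i] * a[i];
            normBSq += b[i] * b[i];
        }
        if (normASq == 0.0 || normBSq == 0.0) {
            return 0.0;
        }
        return dot / (Math.sqrt(normASq) * Math.sqrt(normBSq));
    }
}
```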


Code example source (origin): openimaj/openimaj

public boolean test_backtrack(Matrix W, Matrix grad, Matrix prox, double eta){
    Matrix tmp = prox.clone();
    tmp.minusEquals(W);
    Vector tmpvec = tmp.getColumn(0);
    return eval(prox) <= eval(W)
        + grad.getColumn(0).dotProduct(tmpvec) + 0.5 * eta * tmpvec.norm2();
}
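The test above is a backtracking sufficient-decrease check for a proximal-gradient step. Note that the textbook form of this condition uses the squared 2-norm of prox − W; a self-contained sketch of that standard form (plain arrays, illustrative names, not the OpenIMAJ API):

```java
public class BacktrackSketch {
    /** Standard sufficient-decrease test for a proximal-gradient step:
     *  f(prox) <= f(w) + grad·(prox - w) + (eta/2) * ||prox - w||^2
     *  where eta is the inverse step size. Note the squared norm. */
    public static boolean sufficientDecrease(double fProx, double fW,
            double[] grad, double[] w, double[] prox, double eta) {
        double dot = 0.0;
        double distSq = 0.0;
        for (int i = 0; i < w.length; i++) {
            double d = prox[i] - w[i];
            dot += grad[i] * d;
            distSq += d * d;
        }
        return fProx <= fW + dot + 0.5 * eta * distSq;
    }
}
```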

Code example source (origin): gov.sandia.foundry/gov-sandia-cognition-learning-core

vectorFactory.createUniformRandom(this.dimensionality,
    -initializationRange, initializationRange, this.random);
if (initialWeights.norm2() < (1.0 / sqrtLambda))


Code example source (origin): gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
protected boolean step()
{
  SumSquaredErrorCostFunction.Cache cost = 
    SumSquaredErrorCostFunction.Cache.compute( this.getResult(), this.getData() );
  
  Vector lastParameters = this.lineFunction.getVectorOffset();
  Vector direction = cost.JtJ.solve(cost.Jte);
  double directionNorm = direction.norm2();
  if( directionNorm > STEP_MAX )
  {
    direction.scaleEquals( STEP_MAX / directionNorm );
  }
  
  this.lineFunction.setDirection( direction );
  InputOutputPair<Vector,Double> result = this.getLineMinimizer().minimizeAlongDirection(
    this.lineFunction, cost.parameterCost, cost.Jte );
  this.lineFunction.setVectorOffset( result.getInput() );
  
  this.setResultCost( result.getOutput() );
  
  Vector delta = result.getInput().minus( lastParameters );
  
  this.getResult().convertFromVector( result.getInput() );
  return !MinimizationStoppingCriterion.convergence( 
    result.getInput(), result.getOutput(), cost.Jte, delta, this.getTolerance() );
}
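Before running the line search, the step above clamps the Gauss-Newton direction so that its 2-norm never exceeds STEP_MAX. That clamping in isolation (plain-Java sketch, illustrative names):

```java
public class ClampStepSketch {
    /** Rescales the direction in place so its 2-norm is at most maxNorm;
     *  shorter directions are left unchanged. */
    public static void clampToNorm(double[] direction, double maxNorm) {
        double sumSq = 0.0;
        for (double x : direction) {
            sumSq += x * x;
        }
        double norm = Math.sqrt(sumSq);
        if (norm > maxNorm) {
            double scale = maxNorm / norm;
            for (int i = 0; i < direction.length; i++) {
                direction[i] *= scale;
            }
        }
    }
}
```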


Code example source (origin): gov.sandia.foundry/gov-sandia-cognition-learning-core

f.convertFromVector( wnew );
double delta = wnew.minus( w ).norm2();
return delta > this.getTolerance();
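This fragment decides whether to keep iterating by checking whether the parameter change ||wnew − w|| still exceeds the tolerance. The same convergence test as a standalone sketch (plain arrays, illustrative names):

```java
public class ConvergenceSketch {
    /** Returns true if iteration should continue: the 2-norm of the
     *  parameter step is still larger than the tolerance. */
    public static boolean keepIterating(double[] wNew, double[] w, double tolerance) {
        double sumSq = 0.0;
        for (int i = 0; i < w.length; i++) {
            double d = wNew[i] - w[i];
            sumSq += d * d;
        }
        return Math.sqrt(sumSq) > tolerance;
    }
}
```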

