Usage of gov.sandia.cognition.math.matrix.Vector.minus(), with code examples

x33g5p2x · reposted 2022-02-01 under Other

This article collects code examples of the Java method gov.sandia.cognition.math.matrix.Vector.minus() and shows how Vector.minus() is used in practice. The examples come from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, so they should be useful as references. Details of Vector.minus():
Package: gov.sandia.cognition.math.matrix
Class: Vector
Method: minus

About Vector.minus

The source page gives no description. In the Cognitive Foundry, minus is inherited from the Ring interface: it returns a new object holding the element-wise difference this − other, leaving both operands unchanged (the in-place counterpart is minusEquals).
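A plain-array sketch of what minus computes — the element-wise difference, returned as a new vector. Plain double[] arrays stand in for the Foundry Vector so the sketch runs without the library:

```java
// Plain-array sketch of Vector.minus(): result = a - b, element-wise.
public class MinusSketch {
    static double[] minus(double[] a, double[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("dimension mismatch");
        }
        double[] result = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            result[i] = a[i] - b[i];   // new array; a and b are unchanged
        }
        return result;
    }

    public static void main(String[] args) {
        double[] a = {3.0, 1.0, 4.0};
        double[] b = {1.0, 1.0, 2.0};
        System.out.println(java.util.Arrays.toString(minus(a, b))); // [2.0, 0.0, 2.0]
    }
}
```

With the actual library, the vectors would typically be built through a factory (e.g. VectorFactory.getDefault().copyValues(3.0, 1.0, 4.0)) before calling minus, if I recall the Foundry factory API correctly.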

Code examples

Code example origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
final protected void initializeSolver(MatrixVectorMultiplier function)
{
  this.A = function;
  x = super.x0;
  residual = rhs.minus(function.evaluate(x));
  delta = residual.dotProduct(residual);
}

Code example origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
final protected void initializeSolver(MatrixVectorMultiplier function)
{
  this.A = function;
  x = super.x0;
  residual = rhs.minus(function.evaluate(x));
  d = residual;
  delta = residual.dotProduct(residual);
}

Code example origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

protected double computeScaleFactor(
  Vector gradientCurrent,
  Vector gradientPrevious )
{
  Vector deltaGradient = gradientCurrent.minus( gradientPrevious );
  double deltaTgradient = deltaGradient.dotProduct( gradientCurrent );
  double denom = gradientPrevious.norm2Squared();
  
  double beta = deltaTgradient / denom;
  return beta;
}
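The scale factor above divides the dot product of the gradient change with the current gradient by the squared norm of the previous gradient, which matches the Polak-Ribiere conjugate-gradient update. A small self-contained check of that arithmetic:

```java
// Self-contained check of beta = (gCur - gPrev) . gCur / ||gPrev||^2,
// the same arithmetic as computeScaleFactor above.
public class ScaleFactorSketch {
    static double beta(double[] gCur, double[] gPrev) {
        double deltaTgradient = 0;
        double denom = 0;
        for (int i = 0; i < gCur.length; i++) {
            deltaTgradient += (gCur[i] - gPrev[i]) * gCur[i]; // (gCur - gPrev) . gCur
            denom += gPrev[i] * gPrev[i];                     // ||gPrev||^2
        }
        return deltaTgradient / denom;
    }

    public static void main(String[] args) {
        // gCur = (1, 2), gPrev = (2, 0):
        // gCur - gPrev = (-1, 2); dot with gCur = -1 + 4 = 3; ||gPrev||^2 = 4
        System.out.println(beta(new double[]{1, 2}, new double[]{2, 0})); // 0.75
    }
}
```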

Code example origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
final protected void initializeSolver(
  OverconstrainedMatrixVectorMultiplier function)
{
  this.A = function;
  x = super.x0;
  AtransB = (A.transposeMult(rhs));
  residual = AtransB.minus(function.evaluate(x));
  d = residual;
  delta = residual.dotProduct(residual);
}

Code example origin: algorithmfoundry/Foundry

@Override
public double evaluate(
  final Vectorizable first,
  final Vectorizable second)
{
  // The Chebyshev distance is the infinity-norm of difference, which is
  // the size of the largest difference in a single dimension between
  // the two vectors.
  return first.convertToVector().minus(
    second.convertToVector()).normInfinity();
}
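The comment in the example above explains the Chebyshev distance as the largest per-dimension gap between the two vectors. A plain-array sketch of that computation, without the Foundry types:

```java
// Plain-array sketch of the Chebyshev distance: the infinity norm of the
// element-wise difference, i.e. the largest per-dimension gap.
public class ChebyshevSketch {
    static double chebyshev(double[] a, double[] b) {
        double max = 0;
        for (int i = 0; i < a.length; i++) {
            max = Math.max(max, Math.abs(a[i] - b[i])); // track largest gap
        }
        return max;
    }

    public static void main(String[] args) {
        // per-dimension gaps: |1-4| = 3, |5-1| = 4, |2-2| = 0 -> maximum is 4
        System.out.println(chebyshev(new double[]{1, 5, 2}, new double[]{4, 1, 2})); // 4.0
    }
}
```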

Code example origin: algorithmfoundry/Foundry

@Override
protected double computeScaleFactor(
  Vector gradientCurrent,
  Vector gradientPrevious )
{
  Vector direction = this.lineFunction.getDirection();
  
  Vector deltaGradient = gradientCurrent.minus( gradientPrevious );
  double deltaTgradient = deltaGradient.dotProduct( gradientCurrent );
  double denom = gradientPrevious.dotProduct( direction );
  double beta = -deltaTgradient / denom;
  return beta;
}

Code example origin: algorithmfoundry/Foundry

@Override
final protected void initializeSolver(
  MatrixVectorMultiplierWithPreconditioner function)
{
  this.A = function;
  x = super.x0;
  residual = rhs.minus(A.evaluate(x));
  d = A.precondition(residual);
  delta = residual.dotProduct(d);
}
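A minimal numeric sketch of the initialization above for a concrete 2×2 system: residual = rhs − A·x0, d = preconditioned residual, delta = residual · d. The Jacobi (diagonal) preconditioner used here is a hypothetical stand-in for whatever A.precondition(...) applies in the real solver:

```java
// Sketch of the preconditioned-solver initialization on a fixed 2x2 system.
// The Jacobi preconditioner (divide by the diagonal of A) is an assumption,
// standing in for A.precondition(...).
public class PreconditionedInitSketch {
    static double delta() {
        double[][] A = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        double[] x0 = {0, 0};

        // residual = rhs - A * x0 (equals b here, since x0 is zero)
        double[] r = new double[2];
        for (int i = 0; i < 2; i++) {
            r[i] = b[i] - (A[i][0] * x0[0] + A[i][1] * x0[1]);
        }

        // d = M^-1 * residual with Jacobi M = diag(A): d_i = r_i / A_ii
        double[] d = {r[0] / A[0][0], r[1] / A[1][1]};

        // delta = residual . d
        return r[0] * d[0] + r[1] * d[1];
    }

    public static void main(String[] args) {
        System.out.println(delta()); // 1*(1/4) + 2*(2/3)
    }
}
```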

Code example origin: algorithmfoundry/Foundry

@Override
public double evaluate(
  final Vectorizable first,
  final Vectorizable second)
{
  return first.convertToVector().minus(
    second.convertToVector()).norm(this.power);
}
