This article collects code examples of the Java method gov.sandia.cognition.math.matrix.Vector.plus() and shows how Vector.plus() is used in practice. The examples are extracted from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they serve as useful references. Details of the Vector.plus() method:
Package path: gov.sandia.cognition.math.matrix.Vector
Class name: Vector
Method name: plus
Method description: not available
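Before the collected examples, here is a minimal usage sketch of Vector.plus(). The VectorFactory.getDefault().copyValues(...) construction, the class name VectorPlusExample, and the sample values are illustrative assumptions based on the Cognitive Foundry API and are not taken from the examples below.

import gov.sandia.cognition.math.matrix.Vector;
import gov.sandia.cognition.math.matrix.VectorFactory;

public class VectorPlusExample
{
    public static void main(String[] args)
    {
        // Build two dense 3-dimensional vectors (copyValues is assumed from the VectorFactory API).
        Vector a = VectorFactory.getDefault().copyValues(1.0, 2.0, 3.0);
        Vector b = VectorFactory.getDefault().copyValues(4.0, 5.0, 6.0);

        // plus() returns a new Vector holding the element-wise sum; a and b are left unchanged.
        Vector sum = a.plus(b); // (5.0, 7.0, 9.0)
        System.out.println(sum);

        // plusEquals() performs the same addition in place, modifying the receiver.
        a.plusEquals(b);
    }
}

Both operands must have the same dimensionality. The distinction between plus() (returns a new vector) and the in-place plusEquals() is visible throughout the examples below, for instance in x.plusEquals(d.scale(alpha)) versus d = residual.plus(d.scale(beta)).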
Code example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core
/**
 * Transforms the scaleFactor into a multidimensional Vector using the
 * direction
 *
 * @param scaleFactor scale factor to move along the direction from
 * vectorOffset
 * @return Multidimensional vector corresponding to the scale factor
 * along the direction
 */
public Vector computeVector(
    double scaleFactor )
{
    return this.vectorOffset.plus(
        this.direction.scale( scaleFactor ) );
}
Code example source: origin: algorithmfoundry/Foundry
@Override
final protected double iterate()
{
    Vector q = A.evaluate(d);
    double alpha = delta / (d.dotProduct(q));
    x.plusEquals(d.scale(alpha));
    if (((iterationCounter + 1) % 50) == 0)
    {
        residual = rhs.minus(A.evaluate(x));
    }
    else
    {
        residual = residual.minus(q.scale(alpha));
    }
    double delta_old = delta;
    delta = residual.dotProduct(residual);
    double beta = delta / delta_old;
    d = residual.plus(d.scale(beta));
    return delta;
}
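Reading this snippet as a standard conjugate gradient iteration (an interpretation of the code, not stated in the source), plus() and plusEquals() play complementary roles: x.plusEquals(d.scale(alpha)) advances the solution in place, while d = residual.plus(d.scale(beta)) forms the new search direction r + beta*d without modifying the residual vector.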
Code example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core
@Override
final protected double iterate()
{
    Vector q = A.evaluate(d);
    double alpha = delta / (d.dotProduct(q));
    x.plusEquals(d.scale(alpha));
    if (((iterationCounter + 1) % 50) == 0)
    {
        residual = rhs.minus(A.evaluate(x));
    }
    else
    {
        residual = residual.minus(q.scale(alpha));
    }
    double delta_old = delta;
    delta = residual.dotProduct(residual);
    double beta = delta / delta_old;
    d = residual.plus(d.scale(beta));
    return delta;
}
Code example source: origin: algorithmfoundry/Foundry
/**
 * Transforms the scaleFactor into a multidimensional Vector using the
 * direction
 *
 * @param scaleFactor scale factor to move along the direction from
 * vectorOffset
 * @return Multidimensional vector corresponding to the scale factor
 * along the direction
 */
public Vector computeVector(
    double scaleFactor )
{
    return this.vectorOffset.plus(
        this.direction.scale( scaleFactor ) );
}
Code example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core
@Override
final protected double iterate()
{
    // This code is _exactly_ the same as the standard CG code because
    // evaluate does the work of A^TAx, and A^Tb was calculated in init.
    Vector q = A.evaluate(d);
    double alpha = delta / (d.dotProduct(q));
    x.plusEquals(d.scale(alpha));
    if (((iterationCounter + 1) % 50) == 0)
    {
        residual = AtransB.minus(A.evaluate(x));
    }
    else
    {
        residual = residual.minus(q.scale(alpha));
    }
    double delta_old = delta;
    delta = residual.dotProduct(residual);
    double beta = delta / delta_old;
    d = residual.plus(d.scale(beta));
    return delta;
}
Code example source: origin: algorithmfoundry/Foundry
/**
 * Transforms the scaleFactor into a multidimensional Vector using the
 * direction
 *
 * @param scaleFactor scale factor to move along the direction from
 * vectorOffset
 * @return Multidimensional vector corresponding to the scale factor
 * along the direction
 */
public Vector computeVector(
    double scaleFactor )
{
    return this.vectorOffset.plus(
        this.direction.scale( scaleFactor ) );
}
Code example source: origin: algorithmfoundry/Foundry
@Override
final protected double iterate()
{
    Vector q = A.evaluate(d);
    double alpha = delta / (d.dotProduct(q));
    x.plusEquals(d.scale(alpha));
    if (((iterationCounter + 1) % 50) == 0)
    {
        residual = rhs.minus(A.evaluate(x));
    }
    else
    {
        residual = residual.minus(q.scale(alpha));
    }
    double delta_old = delta;
    delta = residual.dotProduct(residual);
    double beta = delta / delta_old;
    d = residual.plus(d.scale(beta));
    return delta;
}
Code example source: origin: algorithmfoundry/Foundry
public void update(
    DirichletDistribution belief,
    Vector value)
{
    Vector a = belief.getParameters();
    Vector anext = a.plus( value );
    belief.setParameters(anext);
}
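Here plus() carries out what appears to be a conjugate Dirichlet update: the new parameter vector is the old parameter vector plus the observed value vector (anext = a + value), and the result is written back with setParameters().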
Code example source: origin: algorithmfoundry/Foundry
@Override
final protected double iterate()
{
    Vector q = A.evaluate(d);
    double alpha = delta / (d.dotProduct(q));
    x.plusEquals(d.scale(alpha));
    if (((iterationCounter + 1) % 50) == 0)
    {
        residual = rhs.minus(A.evaluate(x));
    }
    else
    {
        residual = residual.minus(q.scale(alpha));
    }
    Vector s = A.precondition(residual);
    double delta_old = delta;
    delta = residual.dotProduct(s);
    double beta = delta / delta_old;
    d = s.plus(d.scale(beta));
    return delta;
}
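This variant differs from the earlier iterate() examples in that the residual is first passed through A.precondition(...); the new direction is then s.plus(d.scale(beta)), so it reads (as an interpretation) like a preconditioned conjugate gradient step with s standing in for the preconditioned residual.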
Code example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core
public void update(
    DirichletDistribution belief,
    Vector value)
{
    Vector a = belief.getParameters();
    Vector anext = a.plus( value );
    belief.setParameters(anext);
}
Code example source: origin: algorithmfoundry/Foundry
public void update(
    DirichletDistribution belief,
    Vector value)
{
    Vector a = belief.getParameters();
    Vector anext = a.plus( value );
    belief.setParameters(anext);
}
Code example source: origin: algorithmfoundry/Foundry
/**
 * Overrides the default implementation so that L_tilde can be raised to a
 * power and the diagonal weights can be added implicitly (which is much
 * faster and more memory efficient than the explicit representation).
 *
 * @param input The vector to multiply by the implicit representation of the
 * matrix
 * @return The result of the function.
 */
@Override
public Vector evaluate(Vector input)
{
    Vector v = input;
    for (int i = 0; i < power; ++i)
    {
        v = m.times(v);
    }
    Vector plusV = additional.times(input);
    return v.plus(plusV);
}
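Tracing the code, evaluate() applies m to the input power times and then uses plus() to fold in additional.times(input); in effect it returns the equivalent of (m^power + additional) applied to the input without ever forming that matrix explicitly.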
Code example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core
/**
 * Overrides the default implementation so that L_tilde can be raised to a
 * power and the diagonal weights can be added implicitly (which is much
 * faster and more memory efficient than the explicit representation).
 *
 * @param input The vector to multiply by the implicit representation of the
 * matrix
 * @return The result of the function.
 */
@Override
public Vector evaluate(Vector input)
{
    Vector v = input;
    for (int i = 0; i < power; ++i)
    {
        v = m.times(v);
    }
    Vector plusV = additional.times(input);
    return v.plus(plusV);
}
Code example source: origin: algorithmfoundry/Foundry
/**
 * Overrides the default implementation so that L_tilde can be raised to a
 * power and the diagonal weights can be added implicitly (which is much
 * faster and more memory efficient than the explicit representation).
 *
 * @param input The vector to multiply by the implicit representation of the
 * matrix
 * @return The result of the function.
 */
@Override
public Vector evaluate(Vector input)
{
    Vector v = input;
    for (int i = 0; i < power; ++i)
    {
        v = m.times(v);
    }
    Vector plusV = additional.times(input);
    return v.plus(plusV);
}
Code example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core
/**
 * Convolves this Gaussian with the other Gaussian.
 *
 * @param other Other Gaussian to convolve with this.
 * @return Convolved Gaussians.
 */
public MultivariateGaussian convolve(
    MultivariateGaussian other)
{
    Vector meanHat = this.mean.plus(other.getMean());
    Matrix covarianceHat = this.getCovariance().plus(other.getCovariance());
    return new MultivariateGaussian(meanHat, covarianceHat);
}
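The returned Gaussian's mean is the sum of the two means and its covariance is the sum of the two covariances, the standard result for convolving (equivalently, summing independent) Gaussian random variables; note that plus() is used here both on a Vector and, via the analogous Matrix method, on the covariance matrices.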
Code example source: origin: algorithmfoundry/Foundry
/**
 * Convolves this Gaussian with the other Gaussian.
 *
 * @param other Other Gaussian to convolve with this.
 * @return Convolved Gaussians.
 */
public MultivariateGaussian convolve(
    MultivariateGaussian other)
{
    Vector meanHat = this.mean.plus(other.getMean());
    Matrix covarianceHat = this.getCovariance().plus(other.getCovariance());
    return new MultivariateGaussian(meanHat, covarianceHat);
}
Code example source: origin: algorithmfoundry/Foundry
/**
 * Convolves this Gaussian with the other Gaussian.
 *
 * @param other Other Gaussian to convolve with this.
 * @return Convolved Gaussians.
 */
public MultivariateGaussian convolve(
    MultivariateGaussian other)
{
    Vector meanHat = this.mean.plus(other.getMean());
    Matrix covarianceHat = this.getCovariance().plus(other.getCovariance());
    return new MultivariateGaussian(meanHat, covarianceHat);
}
Code example source: origin: algorithmfoundry/Foundry
/**
 * Adds two MultivariateGaussian random variables together and returns the
 * resulting MultivariateGaussian
 *
 * @param other MultivariateGaussian to add to this MultivariateGaussian
 * @return Effective addition of the two MultivariateGaussian random
 * variables
 */
public MultivariateGaussian plus(
    MultivariateGaussian other)
{
    Vector m = this.mean.plus(other.getMean());
    Matrix C = this.getCovariance().plus(other.getCovariance());
    return new MultivariateGaussian(m, C);
}
Code example source: origin: algorithmfoundry/Foundry
/**
 * Adds two MultivariateGaussian random variables together and returns the
 * resulting MultivariateGaussian
 *
 * @param other MultivariateGaussian to add to this MultivariateGaussian
 * @return Effective addition of the two MultivariateGaussian random
 * variables
 */
public MultivariateGaussian plus(
    MultivariateGaussian other)
{
    Vector m = this.mean.plus(other.getMean());
    Matrix C = this.getCovariance().plus(other.getCovariance());
    return new MultivariateGaussian(m, C);
}
Code example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core
/**
 * Adds two MultivariateGaussian random variables together and returns the
 * resulting MultivariateGaussian
 *
 * @param other MultivariateGaussian to add to this MultivariateGaussian
 * @return Effective addition of the two MultivariateGaussian random
 * variables
 */
public MultivariateGaussian plus(
    MultivariateGaussian other)
{
    Vector m = this.mean.plus(other.getMean());
    Matrix C = this.getCovariance().plus(other.getCovariance());
    return new MultivariateGaussian(m, C);
}
The content above is collected from the internet; if it infringes any rights, please contact the author for removal.