Usage and code examples of the gov.sandia.cognition.math.matrix.Vector.dotTimes() method

x33g5p2x · reposted 2022-02-01 in: Other

This article collects code examples of the Java method gov.sandia.cognition.math.matrix.Vector.dotTimes() and shows how it is used in practice. The examples come mainly from platforms such as GitHub, Stack Overflow, and Maven, extracted from selected projects, so they should serve as useful references. Details of Vector.dotTimes():
Full class path: gov.sandia.cognition.math.matrix.Vector
Class: Vector
Method: dotTimes

About Vector.dotTimes

The upstream Javadoc summary is not reproduced on this page. As the examples below illustrate, dotTimes(other) returns the element-wise (Hadamard) product of this vector with the given vector.
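A minimal plain-Java sketch of what dotTimes computes (the class and method names below are illustrative, not part of the Foundry API):

```java
// Plain-Java sketch of the dotTimes semantics: the element-wise
// (Hadamard) product of two equal-length vectors.
public class DotTimesDemo {
    public static double[] dotTimes(double[] a, double[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("dimension mismatch");
        }
        double[] result = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            result[i] = a[i] * b[i]; // multiply matching entries
        }
        return result;
    }

    public static void main(String[] args) {
        double[] r = dotTimes(new double[]{1.0, 2.0, 3.0},
                              new double[]{4.0, 5.0, 6.0});
        System.out.println(java.util.Arrays.toString(r)); // [4.0, 10.0, 18.0]
    }
}
```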

Code examples

Example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

/**
 * Computes the stochastic transition-probability matrix from the
 * given probabilities.
 * @param alphan
 * Result of the forward pass through the HMM at time n
 * @param betanp1
 * Result of the backward pass through the HMM at time n+1
 * @param bnp1
 * Conditionally independent likelihoods of each observation at time n+1
 * @return
 * Transition probabilities at time n
 */
protected static Matrix computeTransitions(
  Vector alphan,
  Vector betanp1,
  Vector bnp1 )
{
  Vector bnext = bnp1.dotTimes(betanp1);
  return bnext.outerProduct(alphan);
}
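The two calls above can be sketched in plain Java: dotTimes is an element-wise product, and outerProduct forms a matrix whose (i, j) entry is the product of entry i of one vector with entry j of the other. The class below is an illustrative sketch, not the Foundry implementation:

```java
public class TransitionDemo {
    // Mirrors bnp1.dotTimes(betanp1) followed by bnext.outerProduct(alphan).
    public static double[][] computeTransitions(double[] alphan,
                                                double[] betanp1,
                                                double[] bnp1) {
        int n = bnp1.length;
        double[] bnext = new double[n];
        for (int i = 0; i < n; i++) {
            bnext[i] = bnp1[i] * betanp1[i]; // dotTimes: element-wise product
        }
        double[][] result = new double[n][alphan.length];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < alphan.length; j++) {
                result[i][j] = bnext[i] * alphan[j]; // outerProduct
            }
        }
        return result;
    }
}
```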

Example source: origin: algorithmfoundry/Foundry

/**
 * Computes the probability of the various states at a time instance given
 * the observation sequence.  Rabiner calls this the "gamma".
 * @param alpha
 * Forward probability at time n.
 * @param beta
 * Backward probability at time n.
 * @param scaleFactor
 * Amount to scale the gamma by
 * @return
 * Gamma at time n.
 */
protected static Vector computeStateObservationLikelihood(
  Vector alpha,
  Vector beta,
  double scaleFactor )
{
  Vector gamma = alpha.dotTimes(beta);
  gamma.scaleEquals(scaleFactor/gamma.norm1());
  return gamma;
}
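The gamma computation above is an element-wise product followed by an L1 rescaling so that the entries sum to the scale factor. A plain-Java sketch under illustrative names:

```java
public class GammaDemo {
    // gamma = alpha ⊙ beta, rescaled so that norm1(gamma) == scaleFactor,
    // mirroring dotTimes + scaleEquals(scaleFactor / gamma.norm1()) above.
    public static double[] computeGamma(double[] alpha, double[] beta,
                                        double scaleFactor) {
        double[] gamma = new double[alpha.length];
        double norm1 = 0.0;
        for (int i = 0; i < alpha.length; i++) {
            gamma[i] = alpha[i] * beta[i]; // dotTimes
            norm1 += Math.abs(gamma[i]);   // norm1 accumulation
        }
        for (int i = 0; i < gamma.length; i++) {
            gamma[i] *= scaleFactor / norm1; // scaleEquals
        }
        return gamma;
    }
}
```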

Example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

ArrayList<WeightedValue<Vector>> weightedAlphas =
  new ArrayList<WeightedValue<Vector>>( N );
Vector alpha = b.get(0).dotTimes( this.getInitialProbability() );
double weight;
if( normalize )

Example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

final int k = this.getNumStates();
ArrayList<Vector> bs = this.computeObservationLikelihoods(observations);
Vector delta = this.getInitialProbability().dotTimes( bs.get(0) );
ArrayList<int[]> psis = new ArrayList<int[]>( N );
int[] psi = new int[ k ];
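Both fragments above initialize an HMM recursion the same way: the initial state distribution is multiplied element-wise with the likelihoods of the first observation, giving either the first forward variable alpha or the first Viterbi variable delta. A plain-Java sketch with illustrative names:

```java
public class HmmInitDemo {
    // initialProbability.dotTimes(b0): entry i is
    // P(state = i) * P(first observation | state = i).
    public static double[] initialize(double[] pi, double[] b0) {
        double[] result = new double[pi.length];
        for (int i = 0; i < pi.length; i++) {
            result[i] = pi[i] * b0[i]; // dotTimes
        }
        return result;
    }
}
```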

Example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

/**
 * Computes the backward probability recursion.
 * @param beta
 * Beta from the "next" time step.
 * @param b
 * Observation likelihood from the "next" time step.
 * @param weight
 * Weight to use for the current time step.
 * @return
 * Beta for the previous time step, weighted by "weight".
 */
protected WeightedValue<Vector> computeBackwardProbabilities(
  Vector beta,
  Vector b,
  double weight )
{
  Vector betaPrevious = b.dotTimes(beta);
  betaPrevious = betaPrevious.times( this.getTransitionProbability() );
  if( weight != 1.0 )
  {
    betaPrevious.scaleEquals(weight);
  }
  return new DefaultWeightedValue<Vector>( betaPrevious, weight );
}
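The backward step above is an element-wise product, a row-vector-times-matrix product, and an optional rescaling. The plain-Java sketch below assumes the row-vector orientation implied by Vector.times(Matrix); how the transition matrix itself is oriented is the library's convention, and the names here are illustrative:

```java
public class BackwardDemo {
    // betaPrevious = (b ⊙ beta) * A, then scaled by weight, mirroring
    // the dotTimes / times / scaleEquals calls above.
    public static double[] backwardStep(double[] beta, double[] b,
                                        double[][] A, double weight) {
        int n = beta.length;
        double[] prod = new double[n];
        for (int i = 0; i < n; i++) {
            prod[i] = b[i] * beta[i]; // dotTimes
        }
        double[] betaPrev = new double[A[0].length];
        for (int j = 0; j < betaPrev.length; j++) {
            for (int i = 0; i < n; i++) {
                betaPrev[j] += prod[i] * A[i][j]; // row vector times matrix
            }
            betaPrev[j] *= weight; // scaleEquals(weight)
        }
        return betaPrev;
    }
}
```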

Example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

@Override
public UnivariateGaussian evaluateAsGaussian(
  final Vectorizable input)
{
  if (!this.isInitialized())
  {
    // Variance is not yet initialized.
    return new UnivariateGaussian();
  }
  else
  {
    final Vector x = input.convertToVector();
    return new UnivariateGaussian(
      this.evaluateAsDouble(x),
      x.dotProduct(x.dotTimes(this.getVariance())));
  }
}
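The expression x.dotProduct(x.dotTimes(variance)) above is the quadratic form x' * diag(variance) * x, i.e. the sum of x_i^2 * var_i over all dimensions. A plain-Java sketch (illustrative names, not the Foundry API):

```java
public class DiagonalVarianceDemo {
    // Quadratic form for a diagonal covariance: sum_i x_i^2 * var_i.
    public static double quadraticForm(double[] x, double[] variance) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            sum += x[i] * (x[i] * variance[i]); // x · (x ⊙ var)
        }
        return sum;
    }
}
```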

Example source: origin: gov.sandia.foundry/gov-sandia-cognition-learning-core

sumsOfSquaresAccumulator.accumulate(vector.dotTimes(vector));
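Here vector.dotTimes(vector) squares each entry, so the accumulator collects per-dimension sums of squares. A plain-Java sketch of that one step (illustrative names):

```java
public class SquaresDemo {
    // vector.dotTimes(vector): entry-wise square of a vector.
    public static double[] elementwiseSquare(double[] v) {
        double[] result = new double[v.length];
        for (int i = 0; i < v.length; i++) {
            result[i] = v[i] * v[i];
        }
        return result;
    }
}
```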
