This article collects code examples for the Java method java.util.stream.DoubleStream.sum() and shows how DoubleStream.sum() is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they make useful references. Details of the DoubleStream.sum() method are as follows:
Package path: java.util.stream.DoubleStream
Class name: DoubleStream
Method name: sum
Returns the sum of elements in this stream. Summation is a special case of a reduction. If floating-point summation were exact, this method would be equivalent to:
return reduce(0, Double::sum);
However, since floating-point summation is not exact, the above code is not necessarily equivalent to the summation computation done by this method.
If any stream element is a NaN or the sum is at any point a NaN then the sum will be NaN. The value of a floating-point sum is a function both of the input values as well as the order of addition operations. The order of addition operations of this method is intentionally not defined to allow for implementation flexibility to improve the speed and accuracy of the computed result. In particular, this method may be implemented using compensated summation or another technique to reduce the error bound in the numerical sum compared to a simple summation of double values.
This is a terminal operation.
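The NaN propagation and order-sensitivity described above are easy to observe directly. The sketch below (class name DoubleStreamSumDemo is just for illustration) shows a plain sum, a sum poisoned by NaN, and a case where a naive reduce(0, Double::sum) loses precision that sum() is permitted, but not required, to recover via compensated summation:

```java
import java.util.stream.DoubleStream;

public class DoubleStreamSumDemo {
    public static void main(String[] args) {
        // A plain finite sum: 1.5 + 2.5 + 3.0 is exact in binary.
        System.out.println(DoubleStream.of(1.5, 2.5, 3.0).sum()); // 7.0

        // A single NaN element makes the entire sum NaN.
        System.out.println(DoubleStream.of(1.0, Double.NaN, 2.0).sum()); // NaN

        // Floating-point addition is not associative: with a naive
        // reduce(0, Double::sum), the 1.0 is absorbed when added to 1e16
        // (the ulp at 1e16 is 2.0), so the running total ends at 0.0.
        double naive = DoubleStream.of(1e16, 1.0, -1e16).reduce(0, Double::sum);
        System.out.println(naive); // 0.0

        // sum() may use compensated summation, so its result here is
        // implementation-dependent and potentially more accurate.
        System.out.println(DoubleStream.of(1e16, 1.0, -1e16).sum());
    }
}
```

This is why the javadoc deliberately leaves the order of additions unspecified: it gives implementations room to trade simple left-to-right reduction for better error bounds.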
Code example source: confluentinc/ksql
public static double currentErrorRate() {
  return collectorMap.values().stream()
      .mapToDouble(MetricCollector::errorRate)
      .sum();
}
Code example source: google/guava
public void testEquivalentStreams() {
  // For datasets of many double values created from an array, we test many combinations of finite
  // and non-finite values:
  for (ManyValues values : ALL_MANY_VALUES) {
    double[] array = values.asArray();
    Stats stats = Stats.of(array);
    // instance methods on Stats vs instance methods on DoubleStream
    assertThat(stats.count()).isEqualTo(stream(array).count());
    assertEquivalent(stats.mean(), stream(array).average().getAsDouble());
    assertEquivalent(stats.sum(), stream(array).sum());
    assertEquivalent(stats.max(), stream(array).max().getAsDouble());
    assertEquivalent(stats.min(), stream(array).min().getAsDouble());
    // static method on Stats vs instance method on DoubleStream
    assertEquivalent(Stats.meanOf(array), stream(array).average().getAsDouble());
    // instance methods on Stats vs instance methods on DoubleSummaryStatistics
    DoubleSummaryStatistics streamStats = stream(array).summaryStatistics();
    assertThat(stats.count()).isEqualTo(streamStats.getCount());
    assertEquivalent(stats.mean(), streamStats.getAverage());
    assertEquivalent(stats.sum(), streamStats.getSum());
    assertEquivalent(stats.max(), streamStats.getMax());
    assertEquivalent(stats.min(), streamStats.getMin());
  }
}
Code example source: prestodb/presto
public Optional<IndexInfo> build()
{
  List<Integer> partitions = partitionsSizes.build();
  if (partitions.size() == 0) {
    return Optional.empty();
  }
  double avgSize = partitions.stream().mapToLong(Integer::longValue).average().getAsDouble();
  double squaredDifferences = partitions.stream().mapToDouble(size -> Math.pow(size - avgSize, 2)).sum();
  checkState(partitions.stream().mapToLong(Integer::longValue).sum() == rowsNumber, "Total number of rows in index does not match number of rows in partitions within that index");
  return Optional.of(new IndexInfo(rowsNumber, sizeInBytes, squaredDifferences, partitions.size()));
}
Code example source: apache/storm
private double fragmentedMemory() {
  Double res = nodeIdToResources.get().values().parallelStream().filter(this::isFragmented)
      .mapToDouble(SupervisorResources::getAvailableMem).filter(x -> x > 0).sum();
  return res.intValue();
}
Code example source: apache/storm
private int fragmentedCpu() {
  Double res = nodeIdToResources.get().values().parallelStream().filter(this::isFragmented)
      .mapToDouble(SupervisorResources::getAvailableCpu).filter(x -> x > 0).sum();
  return res.intValue();
}
Code example source: stanfordnlp/CoreNLP
private void trainPolicy(List<List<Pair<CandidateAction, CandidateAction>>> examples) {
  List<Pair<CandidateAction, CandidateAction>> flattenedExamples = new ArrayList<>();
  examples.stream().forEach(flattenedExamples::addAll);
  for (int epoch = 0; epoch < NUM_EPOCHS; epoch++) {
    Collections.shuffle(flattenedExamples, random);
    flattenedExamples.forEach(classifier::learn);
  }
  double totalCost = flattenedExamples.stream()
      .mapToDouble(e -> classifier.bestAction(e).cost).sum();
  Redwood.log("scoref.train",
      String.format("Training cost: %.4f", 100 * totalCost / flattenedExamples.size()));
}
Code example source: neo4j/neo4j
/**
 * Calculates the portion of the first value to all values passed
 * @param n The values in the set
 * @return the ratio of n[0] to the sum of all n, 0 if the result is {@link Double#NaN}
 */
public static double portion( double... n )
{
  assert n.length > 0;
  double first = n[0];
  if ( numbersEqual( first, 0 ) )
  {
    return 0d;
  }
  double total = Arrays.stream(n).sum();
  return first / total;
}
Code example source: google/error-prone
double totalAssignmentCost() {
  return assignmentCost().stream().mapToDouble(d -> d).sum();
}
Code example source: google/error-prone
double totalOriginalCost() {
  return originalCost().stream().mapToDouble(d -> d).sum();
}
Code example source: apache/incubator-druid
@Override
public RelOptCost computeSelfCost(final RelOptPlanner planner, final RelMetadataQuery mq)
{
  return planner.getCostFactory().makeCost(rels.stream().mapToDouble(mq::getRowCount).sum(), 0, 0);
}
Code example source: confluentinc/ksql
@Override
public double errorRate() {
  final List<TopicSensors.Stat> allStats = new ArrayList<>();
  topicSensors.values().forEach(record -> allStats.addAll(record.errorRateStats()));
  return allStats
      .stream()
      .mapToDouble(TopicSensors.Stat::getValue)
      .sum();
}
Code example source: confluentinc/ksql
public static double aggregateStat(final String name, final boolean isError) {
  return collectorMap.values().stream()
      .mapToDouble(m -> m.aggregateStat(name, isError))
      .sum();
}
Code example source: apache/storm
@Override
protected Double transform(ClusterSummary clusterSummary) {
  return clusterSummary.get_supervisors().stream()
      // Filter out negative values
      .mapToDouble(supervisorSummary -> Math.max(supervisorSummary.get_fragmented_mem(), 0))
      .sum();
}
});
Code example source: apache/storm
@Override
protected Double transform(ClusterSummary clusterSummary) {
  return clusterSummary.get_supervisors().stream()
      // Filter out negative values
      .mapToDouble(supervisorSummary -> Math.max(supervisorSummary.get_fragmented_cpu(), 0))
      .sum();
}
});
Code example source: goldmansachs/gs-collections
@Benchmark
public double parallel_lazy_collectDoubleSum_jdk()
{
  return this.doublesJDK.parallelStream().mapToDouble(each -> each).sum();
}
Code example source: prestodb/presto
/**
 * Returns the estimated data size.
 * An unknown value is represented by {@link Double#NaN}.
 */
public double getOutputSizeInBytes(Collection<Symbol> outputSymbols, TypeProvider types)
{
  requireNonNull(outputSymbols, "outputSymbols is null");
  return outputSymbols.stream()
      .mapToDouble(symbol -> getOutputSizeForSymbol(getSymbolStatistics(symbol), types.get(symbol)))
      .sum();
}
Code example source: apache/storm
private static double getCpuUsed(SchedulerAssignment assignment) {
  return assignment.getScheduledResources().values().stream().mapToDouble((wr) -> wr.get_cpu()).sum();
}
Code example source: confluentinc/ksql
public static <T> double aggregateStat(
    final String name,
    final boolean isError,
    final Collection<TopicSensors<T>> sensors) {
  return sensors.stream()
      .flatMap(r -> r.stats(isError).stream())
      .filter(s -> s.name().equals(name))
      .mapToDouble(TopicSensors.Stat::getValue)
      .sum();
}
Code example source: apache/storm
private static double getMemoryUsed(SchedulerAssignment assignment) {
  return assignment.getScheduledResources().values().stream()
      .mapToDouble((wr) -> wr.get_mem_on_heap() + wr.get_mem_off_heap()).sum();
}
Code example source: prestodb/presto
@VisibleForTesting
static double calculateNullsFractionForPartitioningKey(
    HiveColumnHandle column,
    List<HivePartition> partitions,
    Map<String, PartitionStatistics> statistics,
    double averageRowsPerPartition,
    double rowCount)
{
  if (rowCount == 0) {
    return 0;
  }
  double estimatedNullsCount = partitions.stream()
      .filter(partition -> partition.getKeys().get(column).isNull())
      .map(HivePartition::getPartitionId)
      .mapToDouble(partitionName -> getPartitionRowCount(partitionName, statistics).orElse(averageRowsPerPartition))
      .sum();
  return normalizeFraction(estimatedNullsCount / rowCount);
}