Commit a8938103 authored by Santiago Aboy Solanes, committed by Commit Bot

Remove the benchmarks/ folder

It was unused and the last commit was a long time ago.

NOPRESUBMIT=true

Change-Id: I5c4992cbc2e9977549787e21e4f5dac284291c58
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1863938
Reviewed-by: Hannes Payer <hpayer@chromium.org>
Reviewed-by: Yang Guo <yangguo@chromium.org>
Reviewed-by: Michael Stanton <mvstanton@chromium.org>
Reviewed-by: Sigurd Schneider <sigurds@chromium.org>
Reviewed-by: Michael Achenbach <machenbach@chromium.org>
Commit-Queue: Santiago Aboy Solanes <solanes@chromium.org>
Cr-Commit-Position: refs/heads/master@{#64411}
parent ad9bd3a0
file:../COMMON_OWNERS
V8 Benchmark Suite
==================
This is the V8 benchmark suite: A collection of pure JavaScript
benchmarks that we have used to tune V8. The licenses for the
individual benchmarks are included in the JavaScript files.
In addition to the benchmarks, the suite consists of the benchmark
framework (base.js), which must be loaded before any of the individual
benchmark files, and two benchmark runners: An HTML version (run.html)
and a standalone JavaScript version (run.js).
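As a minimal sketch of how the framework is used (the file name and the
reference value below are hypothetical, not part of the suite), an
individual benchmark file loaded after base.js registers itself like this:

  // my-benchmark.js -- hypothetical example; base.js must be loaded first.
  new BenchmarkSuite('MyBenchmark', 1000 /* reference time */, [
    new Benchmark('MyBenchmark', function() {
      // The work to be measured goes here.
      var sum = 0;
      for (var i = 0; i < 1000; i++) sum += i;
    })
  ]);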
Changes From Version 1 To Version 2
===================================
For version 2 the crypto benchmark was fixed. Previously, the
decryption stage was given plaintext as input, which resulted in an
error. Now, the decryption stage is given the output of the
encryption stage as input. The result is checked against the original
plaintext. For this to give the correct results the crypto objects
are reset for each iteration of the benchmark. In addition, the size
of the plain text has been increased a little and the use of
Math.random() and new Date() to build an RNG pool has been removed.
Other benchmarks were fixed to do elementary verification of the
results of their calculations. This is to avoid accidentally
obtaining scores that are the result of an incorrect JavaScript engine
optimization.
Changes From Version 2 To Version 3
===================================
Version 3 adds a new benchmark, RegExp. The RegExp benchmark is
generated by loading 50 of the most popular pages on the web and
logging all regexp operations performed. Each operation is given a
weight that is calculated from an estimate of the popularity of the
pages where it occurs and the number of times it is executed while
loading each page. Finally the literal letters in the data are
encoded using ROT13 in a way that does not affect how the regexps
match their input.
Changes from Version 3 to Version 4
===================================
The Splay benchmark is a newcomer in version 4. It manipulates a
splay tree by adding and removing data nodes, thus exercising the
memory management subsystem of the JavaScript engine.
Furthermore, all the unused parts of the Prototype library were
removed from the RayTrace benchmark. This does not affect the running
of the benchmark.
Changes from Version 4 to Version 5
===================================
Removed duplicate line in random seed code, and changed the name of
the Object.prototype.inherits function in the DeltaBlue benchmark to
inheritsFrom to avoid name clashes when running in Chromium with
extensions enabled.
Changes from Version 5 to Version 6
===================================
Removed dead code from the RayTrace benchmark and fixed a couple of
typos in the DeltaBlue implementation. Changed the Splay benchmark to
avoid converting the same numeric key to a string over and over again
and to avoid inserting and removing the same element repeatedly thus
increasing pressure on the memory subsystem. Changed the RegExp
benchmark to exercise the regular expression engine on different
input strings.
Furthermore, the benchmark runner was changed to run the benchmarks
for at least a few times to stabilize the reported numbers on slower
machines.
Changes from Version 6 to Version 7
===================================
Added the Navier-Stokes benchmark, a 2D differential equation solver
that stresses arithmetic computations on double arrays.
// Copyright 2012 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following
// disclaimer in the documentation and/or other materials provided
// with the distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Simple framework for running the benchmark suites and
// computing a score based on the timing measurements.
// A benchmark has a name (string) and a function that will be run to
// do the performance measurement. The optional setup and tearDown
// arguments are functions that will be invoked before and after
// running the benchmark, but the running time of these functions will
// not be accounted for in the benchmark score.
function Benchmark(name, run, setup, tearDown) {
  this.name = name;
  this.run = run;
  this.Setup = setup ? setup : function() { };
  this.TearDown = tearDown ? tearDown : function() { };
}
// Benchmark results hold the benchmark and the measured time used to
// run the benchmark. The benchmark score is computed later once a
// full benchmark suite has run to completion.
function BenchmarkResult(benchmark, time) {
  this.benchmark = benchmark;
  this.time = time;
}
// Automatically convert results to numbers. Used by the geometric
// mean computation.
BenchmarkResult.prototype.valueOf = function() {
  return this.time;
}
// Suites of benchmarks consist of a name and the set of benchmarks in
// addition to the reference timing that the final score will be based
// on. This way, all scores are relative to a reference run and higher
// scores imply better performance.
function BenchmarkSuite(name, reference, benchmarks) {
  this.name = name;
  this.reference = reference;
  this.benchmarks = benchmarks;
  BenchmarkSuite.suites.push(this);
}
// Keep track of all declared benchmark suites.
BenchmarkSuite.suites = [];
// Scores are not comparable across versions. Bump the version if
// you're making changes that will affect the scores, e.g. if you add
// a new benchmark or change an existing one.
BenchmarkSuite.version = '7';
// To make the benchmark results predictable, we replace Math.random
// with a 100% deterministic alternative.
Math.random = (function() {
  var seed = 49734321;
  return function() {
    // Robert Jenkins' 32 bit integer hash function.
    seed = seed & 0xffffffff;
    seed = ((seed + 0x7ed55d16) + (seed << 12)) & 0xffffffff;
    seed = ((seed ^ 0xc761c23c) ^ (seed >>> 19)) & 0xffffffff;
    seed = ((seed + 0x165667b1) + (seed << 5)) & 0xffffffff;
    seed = ((seed + 0xd3a2646c) ^ (seed << 9)) & 0xffffffff;
    seed = ((seed + 0xfd7046c5) + (seed << 3)) & 0xffffffff;
    seed = ((seed ^ 0xb55a4f09) ^ (seed >>> 16)) & 0xffffffff;
    return (seed & 0xfffffff) / 0x10000000;
  };
})();
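// Note (illustration, not part of the original file): because the seed is
// fixed at 49734321, every run of the suite sees the same pseudo-random
// sequence, so benchmarks that consume Math.random() get repeatable
// workloads. The final mask keeps 28 bits, so returned values lie in [0, 1).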
// Runs all registered benchmark suites and optionally yields between
// each individual benchmark to avoid running for too long in the
// context of browsers. Once done, the final score is reported to the
// runner.
BenchmarkSuite.RunSuites = function(runner) {
  var continuation = null;
  var suites = BenchmarkSuite.suites;
  var length = suites.length;
  BenchmarkSuite.scores = [];
  var index = 0;
  function RunStep() {
    while (continuation || index < length) {
      if (continuation) {
        continuation = continuation();
      } else {
        var suite = suites[index++];
        if (runner.NotifyStart) runner.NotifyStart(suite.name);
        continuation = suite.RunStep(runner);
      }
      if (continuation && typeof window != 'undefined' && window.setTimeout) {
        window.setTimeout(RunStep, 25);
        return;
      }
    }
    if (runner.NotifyScore) {
      var score = BenchmarkSuite.GeometricMean(BenchmarkSuite.scores);
      var formatted = BenchmarkSuite.FormatScore(100 * score);
      runner.NotifyScore(formatted);
    }
  }
  RunStep();
}
// Counts the total number of registered benchmarks. Useful for
// showing progress as a percentage.
BenchmarkSuite.CountBenchmarks = function() {
  var result = 0;
  var suites = BenchmarkSuite.suites;
  for (var i = 0; i < suites.length; i++) {
    result += suites[i].benchmarks.length;
  }
  return result;
}
// Computes the geometric mean of a set of numbers.
BenchmarkSuite.GeometricMean = function(numbers) {
  var log = 0;
  for (var i = 0; i < numbers.length; i++) {
    log += Math.log(numbers[i]);
  }
  return Math.pow(Math.E, log / numbers.length);
}
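// Worked example (illustration, not part of the original file):
// GeometricMean([1, 2, 4]) averages the logs, (0 + ln 2 + ln 4) / 3,
// and exponentiates, giving the cube root of 1 * 2 * 4 = 8, i.e. 2.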
// Converts a score value to a string with at least three significant
// digits.
BenchmarkSuite.FormatScore = function(value) {
  if (value > 100) {
    return value.toFixed(0);
  } else {
    return value.toPrecision(3);
  }
}
// Notifies the runner that we're done running a single benchmark in
// the benchmark suite. This can be useful to report progress.
BenchmarkSuite.prototype.NotifyStep = function(result) {
  this.results.push(result);
  if (this.runner.NotifyStep) this.runner.NotifyStep(result.benchmark.name);
}
// Notifies the runner that we're done with running a suite and that
// we have a result which can be reported to the user if needed.
BenchmarkSuite.prototype.NotifyResult = function() {
  var mean = BenchmarkSuite.GeometricMean(this.results);
  var score = this.reference / mean;
  BenchmarkSuite.scores.push(score);
  if (this.runner.NotifyResult) {
    var formatted = BenchmarkSuite.FormatScore(100 * score);
    this.runner.NotifyResult(this.name, formatted);
  }
}
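// Worked example (illustration, not part of the original file): if a
// suite's reference value is 1000 and the geometric mean of its measured
// times is 500 usec, the score is 1000 / 500 = 2, which FormatScore turns
// into "200" (values above 100 are rounded to whole numbers; lower values
// keep three significant digits).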
// Notifies the runner that running a benchmark resulted in an error.
BenchmarkSuite.prototype.NotifyError = function(error) {
  if (this.runner.NotifyError) {
    this.runner.NotifyError(this.name, error);
  }
  if (this.runner.NotifyStep) {
    this.runner.NotifyStep(this.name);
  }
}
// Runs a single benchmark for at least a second and computes the
// average time it takes to run a single iteration.
BenchmarkSuite.prototype.RunSingleBenchmark = function(benchmark, data) {
  function Measure(data) {
    var elapsed = 0;
    var start = new Date();
    for (var n = 0; elapsed < 1000; n++) {
      benchmark.run();
      elapsed = new Date() - start;
    }
    if (data != null) {
      data.runs += n;
      data.elapsed += elapsed;
    }
  }
  if (data == null) {
    // Measure the benchmark once for warm up and throw the result
    // away. Return a fresh data object.
    Measure(null);
    return { runs: 0, elapsed: 0 };
  } else {
    Measure(data);
    // If we've run too few iterations, we continue for another second.
    if (data.runs < 32) return data;
    var usec = (data.elapsed * 1000) / data.runs;
    this.NotifyStep(new BenchmarkResult(benchmark, usec));
    return null;
  }
}
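// Measurement flow (illustration, not part of the original file): the
// first call warms up and returns { runs: 0, elapsed: 0 }; each later
// call runs the benchmark repeatedly for roughly one second and
// accumulates into data. Once data.runs reaches 32 or more, the average
// iteration time is reported in microseconds, e.g. 64 runs over 2000 ms
// give (2000 * 1000) / 64 = 31250 usec per run.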
// This function starts running a suite, but stops between each
// individual benchmark in the suite and returns a continuation
// function which can be invoked to run the next benchmark. Once the
// last benchmark has been executed, null is returned.
BenchmarkSuite.prototype.RunStep = function(runner) {
  this.results = [];
  this.runner = runner;
  var length = this.benchmarks.length;
  var index = 0;
  var suite = this;
  var data;
  // Run the setup, the actual benchmark, and the tear down in three
  // separate steps to allow the framework to yield between any of the
  // steps.
  function RunNextSetup() {
    if (index < length) {
      try {
        suite.benchmarks[index].Setup();
      } catch (e) {
        suite.NotifyError(e);
        return null;
      }
      return RunNextBenchmark;
    }
    suite.NotifyResult();
    return null;
  }
  function RunNextBenchmark() {
    try {
      data = suite.RunSingleBenchmark(suite.benchmarks[index], data);
    } catch (e) {
      suite.NotifyError(e);
      return null;
    }
    // If data is null, we're done with this benchmark.
    return (data == null) ? RunNextTearDown : RunNextBenchmark();
  }
  function RunNextTearDown() {
    try {
      suite.benchmarks[index++].TearDown();
    } catch (e) {
      suite.NotifyError(e);
      return null;
    }
    return RunNextSetup;
  }
  // Start out running the setup.
  return RunNextSetup();
}
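// Usage sketch (illustration, not part of the original file): a host
// without setTimeout can drive a suite synchronously by invoking the
// returned continuations until null is produced, e.g.
//
//   var step = suite.RunStep(runner);
//   while (step) step = step();
//
// which is essentially what BenchmarkSuite.RunSuites does when no
// window.setTimeout is available.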
<html>
<head>
<title>V8 Benchmark Suite Revisions</title>
<link type="text/css" rel="stylesheet" href="style.css" />
</head>
<body>
<div>
<div class="title"><h1>V8 Benchmark Suite Revisions</h1></div>
<table>
<tr>
<td class="contents">
<p>
The V8 benchmark suite is changed from time to time as we fix bugs or
expand the scope of the benchmarks. Here is a list of revisions, with
a description of the changes made. Note that benchmark results are
not comparable unless both results are run with the same revision of
the benchmark suite.
</p>
<div class="subtitle"><h3>Version 7 (<a href="http://v8.googlecode.com/svn/data/benchmarks/v7/run.html">link</a>)</h3></div>
<p>This version includes the new Navier-Stokes benchmark, a 2D differential
equation solver that stresses arithmetic computations on double arrays.</p>
<div class="subtitle"><h3>Version 6 (<a href="http://v8.googlecode.com/svn/data/benchmarks/v6/run.html">link</a>)</h3></div>
<p>Removed dead code from the RayTrace benchmark and fixed a couple of
typos in the DeltaBlue implementation. Changed the Splay benchmark to
avoid converting the same numeric key to a string over and over again
and to avoid inserting and removing the same element repeatedly thus
increasing pressure on the memory subsystem. Changed the RegExp
benchmark to exercise the regular expression engine on different input
strings.</p>
<p>Furthermore, the benchmark runner was changed to run the benchmarks
for at least a few times to stabilize the reported numbers on slower
machines.</p>
<div class="subtitle"><h3>Version 5 (<a href="http://v8.googlecode.com/svn/data/benchmarks/v5/run.html">link</a>)</h3></div>
<p>Removed duplicate line in random seed code, and changed the name of
the Object.prototype.inherits function in the DeltaBlue benchmark to
inheritsFrom to avoid name clashes when running in Chromium with
extensions enabled.
</p>
<div class="subtitle"><h3>Version 4 (<a href="http://v8.googlecode.com/svn/data/benchmarks/v4/run.html">link</a>)</h3></div>
<p>The <i>Splay</i> benchmark is a newcomer in version 4. It
manipulates a splay tree by adding and removing data nodes, thus
exercising the memory management subsystem of the JavaScript engine.
</p>
<p>
Furthermore, all the unused parts of the Prototype library were
removed from the RayTrace benchmark. This does not affect the running
of the benchmark.
</p>
<div class="subtitle"><h3>Version 3 (<a href="http://v8.googlecode.com/svn/data/benchmarks/v3/run.html">link</a>)</h3></div>
<p>Version 3 adds a new benchmark, <i>RegExp</i>. The RegExp
benchmark is generated by loading 50 of the most popular pages on the
web and logging all regexp operations performed. Each operation is
given a weight that is calculated from an estimate of the popularity
of the pages where it occurs and the number of times it is executed
while loading each page. Finally the literal letters in the data are
encoded using ROT13 in a way that does not affect how the regexps
match their input.
</p>
<div class="subtitle"><h3>Version 2 (<a href="http://v8.googlecode.com/svn/data/benchmarks/v2/run.html">link</a>)</h3></div>
<p>For version 2 the Crypto benchmark was fixed. Previously, the
decryption stage was given plaintext as input, which resulted in an
error. Now, the decryption stage is given the output of the
encryption stage as input. The result is checked against the original
plaintext. For this to give the correct results the crypto objects
are reset for each iteration of the benchmark. In addition, the size
of the plain text has been increased a little and the use of
Math.random() and new Date() to build an RNG pool has been
removed. </p>
<p>Other benchmarks were fixed to do elementary verification of the
results of their calculations. This is to avoid accidentally
obtaining scores that are the result of an incorrect JavaScript engine
optimization.</p>
<div class="subtitle"><h3>Version 1 (<a href="http://v8.googlecode.com/svn/data/benchmarks/v1/run.html">link</a>)</h3></div>
<p>Initial release.</p>
</td><td style="text-align: center">
</td></tr></table>
</div>
</body>
</html>
// Copyright 2008 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following
// disclaimer in the documentation and/or other materials provided
// with the distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
load('base.js');
load('richards.js');
load('deltablue.js');
load('crypto.js');
load('raytrace.js');
load('earley-boyer.js');
load('regexp.js');
load('splay.js');
load('navier-stokes.js');
var success = true;
function PrintResult(name, result) {
  print(name + ': ' + result);
}
function PrintError(name, error) {
  PrintResult(name, error);
  success = false;
}
function PrintScore(score) {
  if (success) {
    print('----');
    print('Score (version ' + BenchmarkSuite.version + '): ' + score);
  }
}
BenchmarkSuite.RunSuites({ NotifyResult: PrintResult,
                           NotifyError: PrintError,
                           NotifyScore: PrintScore });
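// Note (not part of the original file): load() and print() above are
// builtins of JavaScript shells such as V8's d8, so this runner is
// typically started from a shell (e.g. d8 run.js) rather than a browser;
// the HTML runner (run.html) covers the browser case.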
<html>
<head>
<style>
body { text-align: center; }
</style>
</head>
<body>
<script type="text/javascript" src="splay-tree.js"></script>
<script type="text/javascript" src="v.js"></script>
</body>
</html>
{
  "path": ["."],
  "main": "run.js",
  "run_count": 2,
  "results_regexp": "^%s: (.+)$",
  "tests": [
    {"name": "Richards"},
    {"name": "DeltaBlue"},
    {"name": "Crypto"},
    {"name": "RayTrace"},
    {"name": "EarleyBoyer"},
    {"name": "RegExp"},
    {"name": "Splay"},
    {"name": "NavierStokes"}
  ]
}
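For context (a hedged sketch, not part of the configuration file): a perf
harness presumably substitutes each test name into results_regexp and
matches it against the output of run.js. The helper name below is
hypothetical:

  // Hypothetical illustration of applying "results_regexp" per test name.
  function extractResult(output, testName) {
    // "^%s: (.+)$" with %s replaced by the test name, matched per line.
    var re = new RegExp('^' + testName + ': (.+)$', 'm');
    var match = output.match(re);
    return match ? match[1] : null;
  }

  // extractResult('Richards: 123\n----', 'Richards') === '123'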