
Releases: bencheeorg/benchee

0.13.0

Memory Measurements are finally here! Please report problems if you experience them.

Features (User Facing)

  • Memory measurements, obviously ;) Memory measurement is currently limited to the process your function is run in - memory consumption of other processes will not be measured. More information can be found in the README. Only usable on OTP 19+. Special thanks go to @devonestes and @michalmuskala
  • new pre_check configuration option which allows users to add a dry run of all
    benchmarks with each input before running the actual suite. This should save
    time while actually writing the code for your benchmarks (see the sketch after this list).
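A minimal sketch of how these might be used together. pre_check is the new option above; the memory_time key and its meaning are assumptions here, so check the README for how memory measurement is actually configured:

```elixir
Benchee.run(
  %{
    "flat_map"    => fn -> Enum.flat_map(1..1_000, fn i -> [i, i * 2] end) end,
    "map.flatten" => fn -> 1..1_000 |> Enum.map(fn i -> [i, i * 2] end) |> List.flatten() end
  },
  memory_time: 2,  # assumed key: seconds spent taking memory measurements
  pre_check: true  # dry-run every job with every input before the real suite
)
```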

Bugfixes (User Facing)

  • Standard deviation is now calculated correctly as a sample of the population (divided by n - 1 and not just n), as illustrated below
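For illustration only (this is not benchee's implementation), the corrected sample standard deviation looks like this:

```elixir
defmodule SampleStats do
  # Sample standard deviation: squared deviations are divided by n - 1 (Bessel's correction), not n
  def standard_deviation(samples) do
    n = length(samples)
    mean = Enum.sum(samples) / n

    samples
    |> Enum.map(fn x -> :math.pow(x - mean, 2) end)
    |> Enum.sum()
    |> Kernel./(n - 1)
    |> :math.sqrt()
  end
end
```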

0.12.1

Bugfixes (User Facing)

  • Formatters that use FileCreation.each no longer silently fail on file
    creation; / and other problematic file name characters are now sanitized to _.
    Thanks @gfvcastro

0.12.0 - Saving and Loading

Adds the ability to save benchmarking results and load them again to compare
against. Also fixes a bug for running benchmarks in parallel.

Breaking Changes (User Facing)

  • Dropped support for Elixir 1.3; Elixir 1.4+ is now required

Features (User Facing)

  • new save option that takes a path and a tag, saving the results and tagging
    them (for instance with "master"), and a load option to load those results
    again and compare them against your current results (see the sketch after this list).
  • runs warning-free on Elixir 1.6
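A sketch of the save/load workflow; the exact keyword shape (path/tag for save, a plain path for load) is an assumption based on the description above:

```elixir
jobs = %{"sort" => fn -> Enum.sort(Enum.shuffle(1..10_000)) end}

# On master: run the suite and save the results tagged "master"
Benchee.run(jobs, save: [path: "benchmarks/results.benchee", tag: "master"])

# Later, on your feature branch: run again and load the saved results for comparison
Benchee.run(jobs, load: "benchmarks/results.benchee")
```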

Bugfixes (User Facing)

  • If you were running benchmarks in parallel, you would see results for each
    parallel process you were running. So, if you were running two jobs with
    your configuration set to parallel: 2, you would see four results in the
    formatter. This now correctly shows only the two jobs.

Features (Plugins)

  • Scenario has a new name field that should be used for displaying scenario names,
    as it includes the tag name and potential future additions (see the snippet below).
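A tiny hypothetical formatter snippet using the new field, given a benchmarking suite:

```elixir
# Prefer scenario.name for display, as it already includes the tag
Enum.each(suite.scenarios, fn scenario -> IO.puts(scenario.name) end)
```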

0.11.0

A tiny little release with a bugfix and MOAR statistics for the console formatter.

Bugfixes (User Facing)

  • Estimated run times are correct again; they were too high when inputs were used

Features (User Facing)

  • the console formatter accepts a new extended_statistics option that shows you additional statistics such as minimum, maximum, sample_size and the mode (see the sketch below). Thanks @lwalter
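A sketch of turning the option on; whether it is passed under a console: key (where the console formatter's options lived at this point) is an assumption:

```elixir
Benchee.run(
  %{"sort" => fn -> Enum.sort(Enum.shuffle(1..10_000)) end},
  console: [extended_statistics: true]  # assumed location of the console formatter's options
)
```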

0.10.0

This release focuses on two main things: the internal restructuring to use scenarios and the new hooks system. Beyond that, there are some nice convenience features, and formatter output can now be generated in parallel.

Features (User Facing)

  • Hooks system - you can now run code before/after a benchmarking scenario or the benchmarked function; it's too much to explain in a changelog, so check the README (a rough sketch follows this list)
  • Don't show more precision than we have - e.g. no more 234.00 microseconds (measurements are taken in whole microseconds, so the .00 doesn't gain you anything)
  • Limit precision of available memory displayed, you don't need to know 7.45678932 GB. Thanks to @elpikel.
  • Display the 99th percentile runtime. Thanks to @wasnotrice.
  • :unit_scaling is now a top-level configuration option that can also be used and picked up by formatters, like the HTML formatter
  • formatters can now be specified as a module (which should implement the Benchee.Formatter behaviour) - this makes specifying them nicer, and their format/1 functions can now be executed in parallel
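A rough sketch of both features; the hook names and the fact that hooks receive and return the input follow the README and should be treated as assumptions here:

```elixir
Benchee.run(
  %{"sort" => fn -> Enum.sort(Enum.shuffle(1..10_000)) end},
  before_scenario: fn input ->
    IO.puts("scenario starting")  # runs before each scenario
    input                         # assumed: hooks pass the input through
  end,
  after_each: fn _return_value -> :ok end,  # assumed: receives the benchmarked function's return value
  formatters: [Benchee.Formatters.Console]  # formatter specified as a module
)
```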

Bugfixes (User Facing)

  • Determining the number of CPUs assumed too specific a pattern, which broke in certain environments (like Semaphore CI). That is more relaxed now, thanks to @predrag-rakic!
  • Memory is now correctly converted using the binary (1024) interpretation, instead of the decimal one (1000)

Features (Plugins)

  • the statistics now also provide the mode of the samples as well as the 99th percentile
  • There is a new Benchee.Formatter behaviour to adopt and enforce a uniform interface for formatters; it is best adopted via use Benchee.Formatter (see the sketch below)
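A minimal sketch of a formatter adopting the behaviour. format/1 is mentioned above; the write/1 callback and the exact suite fields used are assumptions:

```elixir
defmodule MyFormatter do
  use Benchee.Formatter  # adopt the behaviour as recommended above

  # format/1 turns the suite into output; these can run in parallel across formatters
  def format(suite), do: inspect(suite.scenarios, pretty: true)

  # write/1 (assumed callback) emits the formatted output
  def write(output), do: IO.puts(output)
end
```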

Breaking Changes (Plugins)

  • :run_times, :statistics and :jobs have been removed and folded together into :scenarios - a scenario holds the benchmarking function, potentially the input, the raw run time measurements and the computed statistics. With this data structure, all the relevant data for one scenario is in one place; although adapting to it takes some work, this seems to be the best way going forward. Huge thanks to @devonestes!

0.5.0

This release focuses on scaling units to more appropriate sizes. Instead of always displaying raw counts and plain microseconds, values are now scaled to thousands, milliseconds and so on for better readability. This work was mostly done by new contributor @wasnotrice.

Features (User Facing)

  • Console output now scales units to be more friendly. Examples:
    • instead of "44556677" ips, you would see "44.56 M"
    • instead of "44556.77 μs" run time, you would see "44.56 ms"
  • Console output for standard deviation omits the parentheses
  • Scaling of console output can be configured with four different strategies: :best, :largest, :smallest and :none. Refer to the documentation for their different properties (a sketch follows this list).
  • Shortened the fast function warning and instead linked to the wiki
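A sketch of picking a strategy. Both the config-first argument order and the console key are assumptions for this version (the option only became the top-level :unit_scaling later, in 0.10.0):

```elixir
Benchee.run(
  %{console: %{unit_scaling: :smallest}},  # assumed location; one of :best, :largest, :smallest, :none
  %{"sort" => fn -> Enum.sort(Enum.shuffle(1..10_000)) end}
)
```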

Features (Plugins)

  • The statistics module now computes the minimum, maximum and sample_size (not yet shown in the console formatter)
  • You can rely on Benchee.Conversion.Duration, Benchee.Conversion.Count and Benchee.Conversion.DeviationPercent for help with formatting and scaling units (hypothetical calls below)
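Hypothetical calls, assuming each module exposes a format-style helper; check the module docs for the actual functions and return values:

```elixir
Benchee.Conversion.Count.format(44_556_677)        # => something like "44.56 M"
Benchee.Conversion.Duration.format(44_556.77)      # => something like "44.56 ms" (input in microseconds)
Benchee.Conversion.DeviationPercent.format(0.1234) # => something like "±12.34%"
```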

Breaking Changes (Plugins)

  • The Benchee.Time module is gone; if you relied on it for one reason or another, it's succeeded by the more powerful Benchee.Conversion.Duration

0.4.0 (September 11, 2016)

Focuses on making what benchee prints out configurable so you can make it fit your preferences :)

Features (User Facing)

  • The configuration now has a :print key where it is possible to configure in a map what benchee prints out during benchmarking. All options are enabled by default (true). Options are:
    • :benchmarking - print when Benchee starts benchmarking a new job (Benchmarking name ..)
    • :configuration - a summary of configured benchmarking options including estimated total run time is printed before benchmarking starts
    • :fast_warning - warnings are displayed if functions are executed too fast leading to inaccurate measures
  • There is also a new configuration option for the built-in console formatter, which is likewise enabled by default (see the sketch after this list):
    • :comparison - if the comparison of the different benchmarking jobs (x times slower than) is shown
  • The pre-benchmarking output of the configuration now also prints the currently used Erlang and Elixir versions (similar to elixir -v)
  • Add a space between the benchmarked time and the unit (microseconds)
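A sketch combining these options. The config-first argument order of that era and the console key for :comparison are assumptions:

```elixir
Benchee.run(
  %{
    print: %{
      benchmarking: true,    # announce each job as it starts benchmarking
      configuration: false,  # skip the configuration summary
      fast_warning: false    # silence warnings about too-fast functions
    },
    console: %{comparison: false}  # assumed location of the console formatter's option
  },
  %{"sleepy job" => fn -> :timer.sleep(10) end}
)
```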

0.3.0 (July 11, 2016)

Breaking Changes (User Facing)

  • The recommended data structure handed to Benchee.run was changed from a list of 2-element tuples to a map (%{"Name" => benchmark_function}). The old list of tuples still works but may be removed in future releases (so strictly speaking it's not "breaking").
  • You can no longer have benchmark jobs with the same name; the last one wins. This was the reason the data structure was previously a list of tuples. However, having benchmarks with the same name is nonsensical, as you can't tell their results apart in the output anyway.

Breaking Changes (Plugins)

  • The main data structure to hold benchmarks and results was changed from a list of 2-element tuples to a map (%{"Name" => values}) - for the jobs, the run times as well as the statistics. If you used something like Enum.each(data, fn({name, value}) -> .. end) you are still fine though, because Elixir is awesome :)

Features (User Facing)

  • now takes a parallel: number configuration option and will then execute each job in number parallel processes. This way you can gather more samples in the same time and also simulate a system under load. This is tricky, however; one of the use cases is stress testing a system. Thanks @ldr
  • the name column width is now determined based on the longest name. Thanks @alvinlindstam
  • Print general configuration information at the start of the benchmark, including warmup, time, parallel and an estimated total run time
  • New function Benchee.Formatters.Console.output/1 that immediately prints to the console
  • now takes a formatters: [&My.Format.function/1, &Benchee.Formatters.Console.output/1] configuration option with which multiple formatters for the same benchmarking run can be configured when using Benchee.run/2. E.g. you can print results to the console and create a CSV from that same run. Defaults to the built-in Console formatter (see the sketch after this list).
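A sketch putting the map data structure, parallel: and formatters: together; the config-first argument order used at this version is an assumption, and MyCustom.format/1 is a placeholder for your own formatter function:

```elixir
Benchee.run(
  %{
    parallel: 2,  # run each job in 2 parallel processes
    formatters: [&Benchee.Formatters.Console.output/1, &MyCustom.format/1]
  },
  %{
    "flat_map"    => fn -> Enum.flat_map(1..1_000, fn i -> [i, i * 2] end) end,
    "map.flatten" => fn -> 1..1_000 |> Enum.map(fn i -> [i, i * 2] end) |> List.flatten() end
  }
)
```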

Features (Plugins)

  • All previous configuration options are preserved after Benchee.Statistics.statistics/1, meaning there is access to raw run times as well as custom options etc. E.g. you could grab custom options like %{csv: %{file: "my_file_name.csv"}} to use (a hypothetical snippet follows).
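A hypothetical formatter reading such a custom option; the key path into the suite is an assumption about the suite shape at this version:

```elixir
defmodule MyCSVFormatter do
  def output(suite) do
    # assumed: the configuration lives under the suite's :config key
    file_name = get_in(suite, [:config, :csv, :file]) || "benchmark.csv"
    IO.puts("would write CSV to #{file_name}")
    suite
  end
end
```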

Bugfixes

  • name columns are no longer truncated after 30 characters. Thanks @alvinlindstam

0.2.0

Backwards Incompatible Changes

  • Benchee.benchmark/3 no longer runs the benchmark but simply adds it to :jobs in the config. The whole benchmark suite is then run via Benchee.measure/1 (see the sketch after this list). This only affects you if you used the more verbose way of defining benchmarks; Benchee.run/2 should still work as expected.
  • the defined benchmarking jobs are now preserved under the :jobs key of the suite after running the benchmark. Run times are added to the :run_times key of the suite (important for alternative statistics implementations)
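A sketch of the verbose API after this change: benchmark/3 only registers jobs, measure/1 runs the whole suite. The shape of the Benchee.init/1 argument and the final inspection step are assumptions:

```elixir
Benchee.init(%{warmup: 2, time: 5})
|> Benchee.benchmark("flat_map", fn -> Enum.flat_map(1..1_000, fn i -> [i, i * 2] end) end)
|> Benchee.benchmark("map.flatten", fn -> 1..1_000 |> Enum.map(fn i -> [i, i * 2] end) |> List.flatten() end)
|> Benchee.measure()
|> Benchee.Statistics.statistics()
|> IO.inspect()
```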

Features

  • A warmup time, during which functions are run before measurements are taken, can be configured via the warmup key in the config, defaulting to 2 (seconds)
  • additionally supply the total standard deviation of iterations per second as std_dev_ips after Benchee.Statistics.statistics
  • statistics in console output are now right-aligned for better comparison
  • last blank line of console output removed

Bugfixes

  • If no time/warmup is specified, the function won't be called at all