File renamed without changes.
23 changes: 23 additions & 0 deletions .github/workflows/Documentation.yml
@@ -0,0 +1,23 @@
name: Documentation

on:
  push:
    branches:
      - master
    tags: '*'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@latest
        with:
          version: 1
      - name: Install dependencies
        run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
      - name: Build and deploy
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
          DOCUMENTER_KEY: ${{ secrets.DOCUMENTER_KEY }} # For authentication with SSH deploy key
        run: julia --project=docs/ docs/make.jl
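
For local preview, the install-and-build sequence above can be reproduced from the Julia REPL; a minimal sketch, assuming it is run from the repository root (Documenter's `deploydocs` step does not deploy outside CI):

```julia
# Mirrors the CI steps above: develop BenchmarkTools into the docs environment,
# instantiate it, and build the site into docs/build/.
using Pkg
Pkg.activate("docs")
Pkg.develop(PackageSpec(path=pwd()))
Pkg.instantiate()
include(joinpath("docs", "make.jl"))
```
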
6 changes: 5 additions & 1 deletion .gitignore
@@ -2,4 +2,8 @@
*.jl.*.cov
*.jl.mem
benchmark/params.jld
test/x.json
test/x.json
docs/Manifest.toml
docs/build
docs/src/assets/indigo.css
Manifest.toml
4 changes: 4 additions & 0 deletions docs/Project.toml
@@ -0,0 +1,4 @@
[deps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
DocThemeIndigo = "8bac0ac5-51bf-41f9-885e-2bf1ac2bec5f"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
25 changes: 25 additions & 0 deletions docs/make.jl
@@ -0,0 +1,25 @@
using BenchmarkTools
using Documenter
using DocThemeIndigo
indigo = DocThemeIndigo.install(BenchmarkTools)

makedocs(;
    modules=[BenchmarkTools],
    repo="https://github.com/JuliaCI/BenchmarkTools.jl/blob/{commit}{path}#{line}",
    sitename="BenchmarkTools.jl",
    format=Documenter.HTML(;
        prettyurls=get(ENV, "CI", "false") == "true",
        canonical="https://JuliaCI.github.io/BenchmarkTools.jl",
        assets=String[indigo],
    ),
    pages=[
        "Home" => "index.md",
        "Manual" => "manual.md",
        "Linux-based environments" => "linuxtips.md",
        "Reference" => "reference.md",
    ],
)

deploydocs(;
    repo="github.com/JuliaCI/BenchmarkTools.jl",
)
74 changes: 74 additions & 0 deletions docs/src/index.md
@@ -0,0 +1,74 @@
# BenchmarkTools

BenchmarkTools makes **performance tracking of Julia code easy** by supplying a framework for **writing and running groups of benchmarks** as well as **comparing benchmark results**.

This package is used to write and run the benchmarks found in [BaseBenchmarks.jl](https://github.com/JuliaCI/BaseBenchmarks.jl).

The CI infrastructure for automated performance testing of the Julia language is not in this package, but can be found in [Nanosoldier.jl](https://github.com/JuliaCI/Nanosoldier.jl).

## Quick Start

The primary macro provided by BenchmarkTools is `@benchmark`:

```julia
julia> using BenchmarkTools

# The `setup` expression is run once per sample, and is not included in the
# timing results. Note that each sample can require multiple evaluations of
# the benchmark kernel. See the BenchmarkTools manual for details.
julia> @benchmark sin(x) setup=(x=rand())
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     4.248 ns (0.00% GC)
  median time:      4.631 ns (0.00% GC)
  mean time:        5.502 ns (0.00% GC)
  maximum time:     60.995 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000
```
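
Because `setup` runs once per sample while the benchmark kernel may be evaluated many times within that sample, benchmarks that mutate their input usually pin `evals=1` so every evaluation sees fresh data. A minimal sketch (the `sort!` example and parameter choices are illustrative, not taken from the package docs):

```julia
using BenchmarkTools

x = rand(1000)

# `setup` provides a fresh copy for every sample; with evals=1 each sample
# performs exactly one sort, so the kernel never sees already-sorted data.
@benchmark sort!(y) setup=(y = copy($x)) evals=1
```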

For quick sanity checks, one can use the [`@btime` macro](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#benchmarking-basics), which is a convenience wrapper around `@benchmark` whose output is analogous to Julia's built-in [`@time` macro](https://docs.julialang.org/en/v1/base/base/#Base.@time):

```julia
julia> @btime sin(x) setup=(x=rand())
4.361 ns (0 allocations: 0 bytes)
0.49587200950472454
```

If the expression you want to benchmark depends on external variables, you should use [`$` to "interpolate"](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#interpolating-values-into-benchmark-expressions) them into the benchmark expression to
[avoid the problems of benchmarking with globals](https://docs.julialang.org/en/v1/manual/performance-tips/#Avoid-global-variables).
Essentially, any interpolated variable `$x` or expression `$(...)` is "pre-computed" before benchmarking begins:

```julia
julia> A = rand(3,3);

julia> @btime inv($A); # we interpolate the global variable A with $A
1.191 μs (10 allocations: 2.31 KiB)

julia> @btime inv($(rand(3,3))); # interpolation: the rand(3,3) call occurs before benchmarking
1.192 μs (10 allocations: 2.31 KiB)

julia> @btime inv(rand(3,3)); # the rand(3,3) call is included in the benchmark time
1.295 μs (11 allocations: 2.47 KiB)
```

Sometimes, interpolating variables into very simple expressions can give the compiler more information than you intended, causing it to "cheat" the benchmark by hoisting the calculation out of the benchmark code:
```julia
julia> a = 1; b = 2
2

julia> @btime $a + $b
0.024 ns (0 allocations: 0 bytes)
3
```
As a rule of thumb, if a benchmark reports that it took less than a nanosecond to perform, this hoisting probably occurred. You can avoid this by referencing and dereferencing the interpolated variables:
```julia
julia> @btime $(Ref(a))[] + $(Ref(b))[]
1.277 ns (0 allocations: 0 bytes)
3
```

As described in the [Manual](@ref), the BenchmarkTools package supports many other features, both for additional output and for more fine-grained control over the benchmarking process.
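
For a taste of the benchmark-group and comparison workflow mentioned at the top of this page, here is a minimal sketch of building, tuning, and running a suite and judging two sets of results; the group names and the choice of the median estimator are illustrative assumptions, not requirements of the package:

```julia
using BenchmarkTools

# Organize related benchmarks into a nested BenchmarkGroup.
suite = BenchmarkGroup()
suite["trig"] = BenchmarkGroup()
suite["trig"]["sin"] = @benchmarkable sin(x) setup=(x = rand())
suite["trig"]["cos"] = @benchmarkable cos(x) setup=(x = rand())

tune!(suite)       # pick evaluation counts for each benchmark
old = run(suite)   # e.g. results on a baseline commit
new = run(suite)   # e.g. results on a candidate commit

# Compare median estimates; `judge` classifies the change as an
# improvement, regression, or invariant relative to a tolerance.
judge(median(new["trig"]["sin"]), median(old["trig"]["sin"]))
```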