Running Performance Tests
Prerequisites
To run the tests you will need to have the source code available locally, and satisfy the requirements for running mach commands. If you want to run tests against the same infrastructure as our continuous integration then you will need to follow the try server documentation.
Identifying tests to run
To be able to run one or more performance tests you first need to know which tests you would like to run. If you're responding to a regression bug, the test names will be listed in the bug report. The performance tests and frameworks available can be found in our projects list. You can also see our new performance test documentation, which will eventually replace our wiki pages.
Running tests locally
Webext
Webextension tests can be run locally via mach. The command is:
> ./mach raptor-test -t [test-name]
e.g.
> ./mach raptor-test -t raptor-tp6-amazon-firefox
There are some extra options you can use, such as --cold, which runs the cold variant of the test.
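For example, combining the test name and flag above, the cold variant could be run with:
> ./mach raptor-test -t raptor-tp6-amazon-firefox --cold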
For the full list of extra options, use:
> ./mach raptor-test -t [test-name] --help
You can find all the tests available to run by using --print-tests.
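For example, the following should print the available webextension tests:
> ./mach raptor-test --print-tests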
Browsertime
Browsertime tests can be run locally via mach. The command is:
> ./mach raptor-test -t [test-name] --browsertime
e.g.
> ./mach raptor-test -t amazon --browsertime
There are some extra options you can use, such as:
--app, which selects the application used to run the test
--cold, which runs the cold variant of the test (see the example after this list).
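For example, assuming Google Chrome is installed locally and that chrome is an accepted --app value for this test (check --help for the values each test supports), the amazon test above could be run cold against it with something like:
> ./mach raptor-test -t amazon --browsertime --app chrome --cold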
For the full list of extra options, use:
> ./mach raptor-test -t [test-name] --browsertime --help
You can find all the tests available to run by using --print-tests.
Talos
Follow this guide to find out how to run talos tests locally.
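As a quick sketch (the guide linked above has the authoritative details, and the suite name and -a flag here are assumptions), a single talos suite such as tresize can typically be run with:
> ./mach talos-test -a tresize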
AWSY
Follow this guide to find out how to run awsy tests locally.
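As a sketch (see the guide linked above for the full set of options), the default AWSY suite can typically be run with:
> ./mach awsy-test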
Scheduling tests on try
Follow this guide to schedule jobs against the try server.
Rebuilds
Due to the variance in performance test results, it is a good idea to schedule multiple rebuilds. We typically recommend 3 rebuilds. This can be achieved by adding --rebuild 3 to your try syntax.
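For example, a fuzzy try push that selects raptor tasks and rebuilds each one 3 times might look like this (the query string is illustrative):
> ./mach try fuzzy -q "'raptor" --rebuild 3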
Presets
If you're unsure which tests to run, there are some mach try presets that can help:
- perf
- Runs all performance (raptor and talos) tasks across all platforms. Android hardware platforms are excluded due to resource limitations.
- perf-chrome
- Runs the talos tests most likely to change when making a change to the browser chrome. This skips a number of talos jobs that are unlikely to be affected in order to conserve resources.
Some jobs are hidden by default to prevent them from being scheduled unintentionally. These are typically jobs that run on limited pools of hardware, such as mobile devices. To make these available to your try run, add the --full option.
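For example, a preset can be scheduled directly, and --full can be passed to the fuzzy selector to expose the hidden jobs (a sketch; see ./mach try --help for the exact flags):
> ./mach try --preset perf
> ./mach try fuzzy --full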
Viewing test results
After pushing the jobs to try, you will be given a link to the Treeherder job view. There you can see whether the tests failed or passed, along with their results and other information. We do not recommend using the graph view to look at data-point trends for the try repository; it is only useful for the other repositories. Instead, you can use the compare view to make a thorough comparison between 2 pushes. You can also compare 2 pushes from different repositories as long as they contain comparable jobs.
Comparing results from multiple try jobs
Follow this guide to be able to compare results from multiple try jobs.