TestEngineering/Testrail
What is TestRail?
TestRail is a test case management tool used by most QA teams at Mozilla to catalog and track test cases over the life of a project (including versions/releases/features). TestRail is the single source of truth for all testing data (test plans, reports, failures, etc.) and is therefore an important component in the overall software life cycle at Mozilla.
MozTrap (legacy) has been decommissioned. Most projects/test suites were moved over to TestRail ahead of the shutdown; should you have any questions or problems, please contact one of the administrators below or email the dev-quality mailing list.
Access and Permissions
TestRail is currently an internal Mozilla tool. It is behind our VPN and requires an LDAP account to log in. We are working on a solution for community/public access which would allow anyone to review our test plans and progress. For progress on community access, you can subscribe to the dev-quality mailing list, or check back on this page.
TestRail has a hierarchical permission structure that governs who can view, edit, and update any given project and test suite. By default, all new users have read-only permissions.
We have defined a set of projects and groups to make these permissions easier to manage. To get edit access to a specific project, you will need to be a member of the team(s) responsible for that project; please contact one of the TestRail administrators to be added.
TestRail Administrators
- Stuart Philp <sphilp@mozilla.com> - General questions, permissions, accounts
- Miles Crabill <miles@mozilla.com> - Ops questions
Writing a test plan
More detailed steps for writing a test plan, along with a checklist to work through, can be found at https://wiki.mozilla.org/QA/Test_Plan_Template
Typically, when a project kicks off, a QA/test engineer will create a test plan for the feature/project. This process begins once the product manager has a clear idea of what needs to be built, including specifications, design mockups, etc. The test plan should be reviewed and agreed upon before any code is written.
The engineer, using their preferred tool (Google Docs, Google Sheets, Etherpad, TestRail, etc.), begins brainstorming and writing test cases. The engineer may confer with other engineers and various leads during this process to determine the appropriate test cases and areas of risk.
Once the engineer feels confident in the test plan, they enter it into TestRail.
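For engineers who prefer to script this step, TestRail also exposes a REST API that can create cases in bulk. Below is a minimal sketch in Python using the requests library; the server URL, credentials, and section ID are placeholders, and the exact case fields depend on your instance's case template.

```python
import requests

TESTRAIL_URL = "https://testrail.example.mozilla.com"  # placeholder host
AUTH = ("you@mozilla.com", "your-api-key")             # email + API key
HEADERS = {"Content-Type": "application/json"}

def add_case(section_id, title, steps):
    """Create a single test case in the given section via the TestRail API."""
    resp = requests.post(
        f"{TESTRAIL_URL}/index.php?/api/v2/add_case/{section_id}",
        auth=AUTH,
        headers=HEADERS,
        # "custom_steps" is a custom field; its name/shape depends on the
        # case template configured for your TestRail instance.
        json={"title": title, "custom_steps": steps},
    )
    resp.raise_for_status()
    return resp.json()

# Example: enter one brainstormed case from the draft document.
add_case(1234, "Bookmark a page while offline",
         "1. Disconnect network\n2. Open a cached page\n3. Add bookmark")
```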
The engineer responsible for the test plan works to identify the key stakeholders required for sign off.
Test plan stakeholders may include:
- Softvision leads
- Internal testing leads
- Test engineering leads
- Dev leads
- Product leads
- Project leads
Once the test plan draft is complete and the stakeholders are identified, the review process begins.
Test plan review process
- An email from the test plan writer is sent to the stakeholders asking for review
- Stakeholders provide feedback
- Changes are implemented
- Test plan is signed off
If there is a test engineering lead for the project, their review will include flagging test cases that are candidates for test automation, using the "Automatable" checkbox on each test case in TestRail. Test cases flagged "Automatable" are then triaged into tickets for the test engineering and dev teams to implement.
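As a sketch of how that triage might be scripted: custom checkboxes such as "Automatable" are exposed by the TestRail API as `custom_*` fields on each case. The field name `custom_automatable` below is an assumption; check the case field configuration on your instance, and note the host and IDs are placeholders.

```python
import requests

TESTRAIL_URL = "https://testrail.example.mozilla.com"  # placeholder host
AUTH = ("you@mozilla.com", "your-api-key")

def automatable_cases(project_id, suite_id):
    """Return cases whose 'Automatable' checkbox is set (assumed field name)."""
    resp = requests.get(
        f"{TESTRAIL_URL}/index.php?/api/v2/get_cases/{project_id}&suite_id={suite_id}",
        auth=AUTH,
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    data = resp.json()
    # Newer TestRail versions wrap results as {"cases": [...]}; older ones
    # return a bare list.
    cases = data["cases"] if isinstance(data, dict) else data
    return [c for c in cases if c.get("custom_automatable")]

for case in automatable_cases(project_id=12, suite_id=34):
    print(case["id"], case["title"])  # candidates for automation tickets
```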
Obtaining sign-off is the responsibility of the test plan writer; this includes following up with stakeholders to complete their reviews and clarifying any review points.
Using the test plan
Once a test plan is complete and signed off, we have an agreement on what needs to be verified for a project to release. Development begins, and developers can refer to the test plan as an informal spec if they wish. When a testable build is ready (Nightly, Beta, RC), the test plan may be executed using the "test run" feature of TestRail. A test run captures the results of executing all test cases against a version of the product, and allows reports to be created and distributed to stakeholders.
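For teams that report results programmatically, this workflow maps directly onto the TestRail API: `add_run` creates a run for a project/suite, and `add_result_for_case` records each verdict. A minimal sketch, with placeholder host and IDs:

```python
import requests

TESTRAIL_URL = "https://testrail.example.mozilla.com"  # placeholder host
AUTH = ("you@mozilla.com", "your-api-key")
HEADERS = {"Content-Type": "application/json"}
PASSED, FAILED = 1, 5  # TestRail's built-in result status IDs

def post(method, payload):
    """POST one TestRail API call and return the parsed JSON response."""
    resp = requests.post(f"{TESTRAIL_URL}/index.php?/api/v2/{method}",
                         auth=AUTH, headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()

# Create a run against a build, covering every case in the suite.
run = post("add_run/12", {"suite_id": 34,
                          "name": "Firefox 64.0b3 smoke run",
                          "include_all": True})

# Record one result per executed case; TestRail aggregates these into reports.
post(f"add_result_for_case/{run['id']}/5678",
     {"status_id": PASSED, "comment": "Verified on Windows 10"})
```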
For any test cases that do not pass in a test run, it is at the discretion of the engineer executing the test plan whether to flag them as blocking release. Test cases that have been flagged for automation can also be flagged as blockers if the automation is incomplete, or complete but not passing. All blockers are triaged by the stakeholders, who agree on which items are and are not blocking.