QA/TCM/Meeting Notes
.2
3-29-2011
http://mozqa.sync.in/mozservices-110401 (under NextGen TCM)
3-15-2011
Things done since Thursday:
- carljm
- was at pycon
- ericam
- finished all .2 wireframes
- camd
- implementing testcases
- implementing APIs for the results page (.3)
- aakashd
- .2 wireframes completed
- working on re-doing .3 wireframes
Things to do:
- carljm
- feature-complete .2 by 24th
- ericam
- work on design
- camd
- finishing APIs for the results page
- aakashd
- finish .3 wireframes
- begin to re-work .3 PRD and workflows
3-1-2011
Things done since Thursday:
- carljm
- api work with vadim
- got new layout for testcycles pane
- put up create test cycle
- ericam
- worked on designs on create test cycle
- added filtering buttons on test cycle pane
- camd
- hudson is a bit finicky
- working back and forth with carl on the APIs needed for .3
Things to do:
- carljm
- continue working through
- ericam
- put in the rest of the wireframes
- camd
- work with carl on getting hudson set up on his end
- write tests for APIs
- aakashd
- work on flow diagrams for .3
- continue hashing out .2 feedback
- finish up test cycles mock-up and show to utest for review
2-24-2011
Things done since Thursday:
- carljm
- had an emergency call with a previous client, couldn't do much
- ericam
- worked on designs
- camd
- hudson is up and running
- worked on criteria for each release
- looked through PRDs and feedback for it
Things to do:
- carljm
- start activating wireframes
- will be at PyCon 3/9-3/16
- ericam
- put in the rest of the wireframes
- camd
- look into .3 work and start on the APIs that are needed
2-22-2011
Things over the past week:
- carljm
- reviewing bugs and making sure everything is lined up
- ericam
- working on design
- beginning set of wireframes up on dev site
- camd
- aakashd
- .2 PRD
Things to do:
- carljm
- activating manage panes
- writing testcases for his code
- ericam
- continue working on design
- camd
- aakashd
- review PRD for .2 and .3
- talk to eric
- follow-up with IT
.1
2-17-2011
Things Done:
- carljm
- feature-completed .1
- made bug fixes from feedback post .1
- ericam
- worked on design/branding work
- camd
- setup hudson to checkout the lettuce tests
- Create TestCase scenario tests - partially done. Creating testcase and steps. Now need to add "approval" which involves creating a user with that permission. You can see the test steps and result here: http://camd.mv.mozilla.com:8080/lettuceresults/tcm_testcases_2011_02_17.html
- aakashd
- sent PRDs for review
- sent e-mail for feedback after feature-complete
Things to accomplish and when:
- carljm:
- start up work on .2
- ericam
- start work on .2 wireframes
- camd
- get hudson up and running on a VM
- aakashd
- finish up PRD review comments
2-8-2011
Things Done:
- carljm
- finished test runs page and environments
- going for run tests page right now
- ericam
- sent in the colors and direction for the branding
- camd
- got environments and env types written
- found an issue with env types; will ask carljm/vadim
- aakashd
- finished .2 PRD except for APIs to be used
Things to accomplish and when:
- carljm:
- finish run tests page
- finish up add a test case
- ericam
- get something final for design and branding
- camd
- start planning to find issues before carl finds them
- ask vadim to get on irc
- have hudson start running tests continuously per check-in on /tcm and /tcmplatform
- aakashd
- finish .2 PRD completely and send it out to TCM@Moz
- start up on .3 PRD
- send e-mail for feedback after feature-complete
2-3-2011
Things Done:
- carljm
- deployed environments (can't add environments for the time being)
- ericam
- working on designs/branding
- camd
- products/testcases/environments tests
- aakashd
- finished authentication-related bugs
Things to accomplish and when:
- carljm:
- hook up test run and run test page
- ericam
- should have something to show on thursday/friday
- camd
- environments and environments groups/types tests
- aakashd
- correspond with eric on designs and branding
- send e-mail for feedback after feature-complete
2-1-2011
Things Done:
- carljm
- struggled with environments (date vs. datetime)
- ericam
- working on design
- camd
- aakashd
- correspondence with eric about designs/branding
Things to accomplish and when:
- carljm:
- will push latest changes
- debugging platform changes
- hook up templates: test cycle, test run, environments, run test page
- ericam
- should have something to show on thursday
- camd
- aakashd
- send e-mail for feedback after feature-complete
Flags:
1-27-2011
Things Done:
- carljm
- updated staging login/registration
- ericam
- camd
- got hudson going
- issues with pulling vadim's platform changes
- aakashd
Things to accomplish and when:
- carljm:
- finish run test page
- ericam
- camd
- work on smoketest-level tests
- look into creating testcases and testcase results as well as testcycles
- aakashd
- send e-mail for feedback after feature-complete
Flags:
1-25-2011
Things Done:
- carljm
- finished up user accounts
- ericam
- corresponding on naming/branding
- camd
- servlet's up and running
- aakashd
- filed bug for .6
- finished up basic naming/branding
Things to accomplish and when:
- carljm:
- put login/user account on staging
- ericam
- corresponding on naming/branding
- camd
- tried to set up hudson
- ran it against the latest tcmplatform pull, found errors
- aakashd
- send e-mail on feedback from the wireframes
Flags:
- Hitting errors on the latest database pull
- Agile like spiderman. We can bob and weave. Sidenote: buy spiderman outfits for team.
1-20-2011
Things Done:
- carljm
- worked on django core
- continued work on login
- ericam
- camd
- rebuilt test server
- aakashd
- filed bugs for .3
- started work on naming/branding with names and personality
Things to accomplish and when:
- carljm:
- login/user account
- ericam
- camd
- how to get test servlet up and running
- aakashd
- file bugs for .6
- finish up basic naming/branding
- send e-mail on feedback from the wireframes
1-18-2011
Things Done:
- carljm
- user reg and login started; python code in place to get API data
- got prototype server up
- ericam
- finished wireframes for run tests, frontpage, and registration
- camd
- issues with calling platform on a testserver and my automation
- aakashd
- laid out testcases for .3 and .6
Things to accomplish and when:
- carljm:
- finish user registration and login
- finish product and cycles list
- ericam
- make adjustments per carljm's recommendation for the wireframes
- camd
- finish a servlet representing the platform that'll allow the automation to run without hiccups
- aakashd
- file bugs for .3 and .6
- follow-up with roy on branding and naming
Flags:
- move code freeze to 1/27
- create testcase
- questions about tabs per user
- not logged-in: run tests, results
- tester: run tests, create testcase, results
- admin: run tests, manage, results
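The tabs-per-user question above can be captured as a simple role-to-tabs mapping. The tab sets come directly from the notes; the function name and the fallback-to-anonymous behavior are illustrative assumptions, not decided behavior:

```python
# Role -> visible tabs, as listed in the flags above. The structure is a
# sketch; only the three tab sets themselves come from the meeting notes.
TABS_BY_ROLE = {
    "anonymous": ["run tests", "results"],
    "tester":    ["run tests", "create testcase", "results"],
    "admin":     ["run tests", "manage", "results"],
}

def tabs_for(role: str) -> list:
    # Assumption: unknown roles fall back to the not-logged-in tab set.
    return TABS_BY_ROLE.get(role, TABS_BY_ROLE["anonymous"])
```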
1-13-2011
Things Done:
- carljm
- worked on registrations and logins
- ericam
- worked on the test cycle chooser and run-tests wireframes
- camd
- succeeded in getting a create object test for user and company
- refactoring URI mapping to accommodate changes uTest has made for my tests
- more smoketests now run against the qa test server
- aakashd
- filed bugs for .2 and .5 release
Things to accomplish and when:
- carljm:
- continue working on registrations and logins
- ericam
- finish up test cycle chooser and run-tests wireframes
- camd
- get more lettuce tests to pass against camd.mv.mozilla.com TCM server
- create more lettuce smoketests against TCM apis
- aakashd
- get .3 and .6 bugs filed
- work on branding and naming
Flags:
1-11-2011
Things Done:
- carljm
- Got the majority of the site working
- triaged through bugs needed for .1
- camd
- Worked on automation and ran some smoketests on some APIs
- aakashd
- 1st try at a project schedule up
Things to accomplish and when:
- carljm:
- Get HTML-styles for wireframes completed and pushed
- camd
- get automation running
- host it up in a public place for utest to see
- file bugs as necessary
- aakashd
- .2, .3, .4 bugs filed
Flags:
- camd's testruns have a lot of dependencies (fyi in the future when creating tests)
1-6-2011
Things Done:
- carljm
- Set up basic structure of the code (handles dependencies)
- Hook into
- camd
- Setting up a dedicated server for running tests
- aakashd
- 1st try at a project schedule up
Things to accomplish and when:
- carljm:
- HTML versions of the wireframes for .1, 1/
- Requirements offered from uTest for .1
- camd
- get automation running
- host it up in a public place for utest to see
- file bugs as necessary
- aakashd
- .2, .3, .4 bugs filed
Flags:
- Calls advertised for .1 are all there
- Environments
- user profile default environment variables:
- no default when users are managing their profiles or beginning to run tests
- talk to utest about adding that feature
- file a bug in TCM:Future for default env vars
12-13-2010
test suite
- added priority and order to testcases
management
- manage products?
- admin options
- manage companies?
- companies get associated to the urls
- testsuites
- clone 1.0+
- testruns
- clone 1.0+
- testcases
- step model for adding testcases
- checkbox: either leave older versions alone (for fixing a minor typo) or change all existing versions of this testcase (WARNING)
run tests
- step number failed
- categorize by test cycle and then a list view
12-9-2010
Testcase Edit
- if a test run is in draft, then update testcase;
- only add a new version or update the testcase (edit testcase)
- "update the testcase" or "create a new version"
Test Case Bulk Edit
- no api availability for bulk editing
- when adding a testcase to a test suite, there are external variables that need to be dealt with, so we can't do a bulk edit that adds test suites
- take out bulk edit for 1.0 and move to future
Testcase Manage
- lots of possible api calls with test suite
- scale with hundreds
- can load details on hover (file a bug)
Testcase Import
- steps for a testcase are not a free-form text field; each step has its own identity
- should each step have its own object?
- what does a test case look like in the data model? (matt, emily)
- import feature would need to be UI-side
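One way to picture the "should each step have its own object?" question above: a step becomes a separate entity with its own action and expected result, rather than one free-form text field on the test case. This is a hypothetical shape for discussion, not the actual TCM data model:

```python
# Hypothetical data shape: each step is its own object with an action and
# an expected result, instead of a single free-form text field.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    number: int
    action: str
    expected: str

@dataclass
class TestCaseDraft:
    title: str
    steps: List[Step]

draft = TestCaseDraft("search box", [
    Step(1, "type 'firefox' in the search box", "suggestions appear"),
    Step(2, "press Enter", "results page loads"),
])
```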
carljm's Notes
Platform wishlist
- Manage
- Backend API can currently only support one-at-a-time actions, not bulk.
- Deletion: may want to consider asking the backend to only put deleted items into "deleted" state, not actually remove; this would allow for user-friendly "undo" on delete. Only worth it for things where a deletion actually loses significant amounts of data.
UI Implementation Notes
- Manage
- Don't allow deletion of active test cycle or test run, or test case version referenced in active test run, etc. Backend will throw error anyway.
- Removing test cycle also removes all the test runs for that cycle.
- Test Run states: uTest uses the terminology Draft, Activated, Closed. We may as well use the same terms.
- Test Run management screen is missing Test Cycle selection, start/end date, a few other attributes.
- Test Runs can't be assigned directly, so listing a TestRun as assigned to a particular tester doesn't make sense. They just have a self-assignable (boolean) attribute, which determines whether the TestCases in that TestRun can be self-assigned. Only test cases that are part of a test run can be assigned to someone, though the UI will probably want to provide bulk-assignment features.
- In general, it's important to have filters in the management interface: the ability to filter things by status, boolean attributes, etc. The wireframes are missing this; it will also need platform support. (For example, searching for TestCases when creating a TestSuite needs to be able to filter on non-textual attributes of a TestCase.)
- TestSuite and TestCase creation need to be within context of a product: this doesn't seem to be reflected in the wireframes.
- Environments should be associatable with anything (test case, test suite, product, etc) via a UI that is consistent throughout the management interface.
- When adding TestCases to a TestSuite or TestRun, need to be able to specify order and priority for each TestCase, also blocking (does Mozilla care about this attribute?).
- TestCase creation shouldn't include adding to a TestSuite (how do you specify priority, order, etc., when you can't see the rest of the TestCases in the TestSuite?). TestCase editing should, however, include a read-only view of which test suites the test case is part of.
- What about management screens for products and components and companies?
- Is the assumption that each company will have a company-specific separate site and URL? Or that there may be a single site for multiple companies?
- Need management screens for setting up EnvironmentTypes and EnvironmentGroups, in order to populate the environment dropdowns with options.
- Test case editing "edit an older version" needs to be clearer that you are effectively superseding the current latest version, since we don't support branching.
- TestCase steps are actual entities, not just a freeform string field. Each step has separate action and expected result.
- TestCase editing should make versioning explicit and optional. So you can edit an existing version in-place (for minor edits, typos etc), or create a new version. This replaces the "Update Test Runs associated" checkbox: if you create a new version, TestRuns referencing the old version will remain unchanged. If you edit a version in-place, TestRuns referencing it will see the change.
- Bulk TestCase edit: remove "add to testsuite"; it doesn't work well with the need to specify the order of test cases in a test suite.
- TestCase list: may need to make test-suite list a one-by-one ajax call on hover or on button click for more details.
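The versioning semantics described above (edit a version in place for minor fixes, or create a new version that existing TestRuns don't pick up) can be sketched as a minimal model. All class and method names here are hypothetical, not the real TCM schema:

```python
# Minimal sketch of the versioning semantics in the notes: a TestRun pins a
# specific TestCaseVersion, so in-place edits are visible to existing runs,
# while a new version leaves them unchanged. Names are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class TestCaseVersion:
    number: int
    steps: List[str]  # each step is its own entity in the real model

@dataclass
class TestCase:
    name: str
    versions: List[TestCaseVersion]

    def latest(self) -> TestCaseVersion:
        return self.versions[-1]

    def edit_in_place(self, steps: List[str]) -> None:
        # Minor fix (typo etc.): TestRuns referencing this version see it.
        self.versions[-1].steps = steps

    def new_version(self, steps: List[str]) -> TestCaseVersion:
        # Supersedes the latest version; TestRuns keep their pinned version.
        v = TestCaseVersion(self.latest().number + 1, steps)
        self.versions.append(v)
        return v

tc = TestCase("login works", [TestCaseVersion(1, ["open page", "log in"])])
pinned = tc.latest()  # a TestRun holds a reference to version 1
tc.new_version(["open page", "enter credentials", "log in"])
```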
- Run Tests
- How is user profile default environment supposed to work, given that the relevant environment factors are product-dependent and a user might be testing multiple products in totally different spaces?
- RunTests environment selection needs to be a set of filters on the testcase list, not a prior step. Otherwise it's confusing if nothing shows up. Not all environment attributes need to be fixed for an entire testing session.
- RunTests-choose: Test runs, not cycles. Is it needed to select multiple test runs? Seems like one at a time makes more sense and keeps things simpler.
- RunTests-choose: Progress bar is just percentage of tests assigned to me that are completed?
- RunTests-choose: need drill-down to list of test cases for claiming self-assigned, and viewing ones that have already been assigned, before moving on to execution.
- Missing the UI for assigning test cases: need to be able to assign environments.
- RunTests - executing
- When failing a TestCase, need to be able to describe what failed (and possibly also identify a particular step as having failed, since steps are separated).
- Just show environment details, don't make them link to it. And it should be read-only.
- Sort test-results on these screens by environment, because that's the most sensible ordering for a tester. Within that, by order.
- Show priority on test-result, also give option to sort by priority?
- Need not just attachments from test case, but also way for tester to upload attachment related to their result.
- Need start button and then succeeded/failed button for each test (because backend wants to record time taken to perform test). Also buttons provide clearer UI path than dropdowns.
- Is there any point to displaying the full controls for multiple tests at once? A tester only cares about one at a time. Maybe just display titles and such for the previous and next few to give some context if that's important, but not all the controls.
- Maybe need a company and/or product-level configuration to toggle the availability of the "make it better" feature: some users won't want that at all.
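The start-then-pass/fail flow suggested above (the backend wants the time taken to perform each test, and a failure needs a description) can be sketched like this. The class and the result-dict fields are assumptions for illustration, not the real TCM API:

```python
# Sketch of the start -> pass/fail flow: the UI records a start timestamp
# and reports the elapsed time with the result. A failing result must carry
# a description of what failed. All names here are illustrative.
import time

class TestResultTimer:
    def __init__(self):
        self._started = {}

    def start(self, result_id: str) -> None:
        self._started[result_id] = time.monotonic()

    def finish(self, result_id: str, passed: bool, comment: str = "") -> dict:
        elapsed = time.monotonic() - self._started.pop(result_id)
        if not passed and not comment:
            raise ValueError("a failing result needs a description of what failed")
        return {
            "result": result_id,
            "status": "passed" if passed else "failed",
            "seconds": round(elapsed, 3),
            "comment": comment,
        }

timer = TestResultTimer()
timer.start("tc-42")
report = timer.finish("tc-42", passed=False, comment="step 3: wrong redirect")
```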
- Results
- test cycle
- Doesn't make sense to list a tester on a test cycle; there will almost surely be multiple testers involved.
- Does it even make sense to be able to approve or reject an entire test cycle at once? Don't you need to look at the individual results?
- "Recreate with failures only" should be test-run level, within the test cycle. Not an option on a test cycle.
- Test run
- Need to be able to "re-test" (optionally failures only) the entire test run. By default assigned to same tester, can be reassigned.
- Test case
- Really needs to be results, not test cases: can have multiple results per test case (one for each environment group).
- Need re-test option for individual test case, with way to assign it to a new tester.
- Don't need both Status and Result columns, they are the same thing.
- Test suite
- At the execution level, test suites are no longer really relevant. They are just a convenience for adding test cases to a test run. So this screen probably shouldn't exist.
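The "results, not test cases" point above can be sketched as follows: one test case accumulates several results (one per environment group), and a re-test creates fresh pending results, by default for failures only and optionally reassigned. Everything here is a hypothetical shape, not the actual TCM results model:

```python
# Sketch: a test case has one result per environment group, and re-testing
# adds new pending results (failures only by default, optionally reassigned
# to a new tester). Names and fields are hypothetical.
from collections import defaultdict

class ResultsBoard:
    def __init__(self):
        self.results = defaultdict(list)  # test_case id -> list of results

    def record(self, test_case, env_group, status, tester):
        self.results[test_case].append(
            {"env": env_group, "status": status, "tester": tester})

    def retest(self, test_case, tester=None, failures_only=True):
        # Iterate over a snapshot so appending doesn't affect the loop.
        originals = list(self.results[test_case])
        fresh = [
            {"env": r["env"], "status": "pending",
             "tester": tester or r["tester"]}  # default: same tester
            for r in originals
            if not (failures_only and r["status"] != "failed")
        ]
        self.results[test_case].extend(fresh)
        return fresh

board = ResultsBoard()
board.record("tc-1", "win7-ff4", "passed", "alice")
board.record("tc-1", "osx-ff4", "failed", "alice")
fresh = board.retest("tc-1", tester="bob")
```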
Features to postpone
- Import/export.
- Bulk-editing of TestCases.