Firefox Social Integration QA Plan
Notes from Clint Talbert:
I think we are likely going to need a few different frameworks here before all is said and done. But focusing first on the smaller problem, something very developer-friendly that people can run quickly while writing unit tests, is the right place to start.
The Test Server
How tough is it to create your test service? It would be great if we could run it locally on the test slaves (or on developer machines) and spin it up when we start the test and shut it down when we're through. Do you need a web server for it? We have two web servers available. httpd.js [1] is the JavaScript web server that backs all of mochitest. It is useful because it allows a whole lot of customization, so a test can simulate poorly behaved (and even incorrect) web server behaviors. It's slow, though, and it consumes quite a bit of resources.
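To make the httpd.js option concrete, here is a minimal sketch of driving it from a test, assuming the standard testing-common import path; the /flaky path and its deliberately broken response are made up for illustration:

    Components.utils.import("resource://testing-common/httpd.js");

    let server = new HttpServer();

    // Register a handler that deliberately misbehaves, to exercise the
    // client's handling of a bad server.
    server.registerPathHandler("/flaky", function (request, response) {
      response.setStatusLine(request.httpVersion, 500, "Server Meltdown");
      response.setHeader("Content-Type", "text/plain", false);
      response.write("simulated failure");
    });

    server.start(-1); // -1 asks for any free port
    let port = server.identity.primaryPort;

    // ... exercise http://localhost:<port>/flaky from the test ...

    server.stop(function () { /* server is down; test can finish */ });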
We have a newer web server that we are looking at as a replacement. This is a simpler server called mozhttpd, and it is part of our wider mozbase family of test harness base classes [2], [3]. This server is faster and better suited to general-purpose web serving, but it cannot do all the configuration tricks that httpd.js can (although we have a contributor working on those capabilities so we can retire httpd.js one day).
How do you plan to implement the communication between your server and the test itself? That's the part I kept getting hung up on. Most of our testing is quite stateless. The closest thing we have to stateful testing is TPS (the test system for Sync), which runs outside of the main automation trees and uses real servers. It sounds like this is going to have quite a bit of in-browser integration, so something that runs outside of the main automation trees, while fine for QA testing, is probably a non-starter if you have to clear Firefox Team review hurdles.
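For what it's worth, one simple shape for that communication would be for the test to poll the local server over plain HTTP and key its next step off the returned state. The /state endpoint and its JSON payload here are hypothetical, just to show the idea:

    // Poll the (hypothetical) /state endpoint on the local test server.
    function getServerState(port, callback) {
      let xhr = new XMLHttpRequest();
      xhr.open("GET", "http://localhost:" + port + "/state", true);
      xhr.onload = function () {
        // e.g. { phase: "invite-sent" }; the test branches on this
        callback(JSON.parse(xhr.responseText));
      };
      xhr.send();
    }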
We will want to wire in your test server so that the browser-chrome test framework starts it up for you and keeps it running for the entire duration of the test run. My team can help wire that into the harness.
The Tests
Likewise, the other thing that I got hung up on was the life-cycle nature of these tests. For people who aren't you (other devs, QA, etc.) to write and maintain them, we need these to be rather data-driven. Perhaps that means that each test has a large JavaScript object at the head of the test that details the actions the test will take. Then it should be easy for people to write tests as well as debug them. You can see how the TPS tests are constructed here: [4].
The other thing I worried about is that with tests that essentially use the browser itself as the verification mechanism (i.e. did the button glow blue to indicate server state X?), we run the risk of a whole lot of timing-related intermittent issues. It would be awesome if the tests had two modes of verification:
1. Simple Verification Mode: captures the event that is going to make the button turn blue (this is easier to verify).
2. Complex Verification Mode: not only captures the event, but also ensures that the button did in fact turn blue (this is a little harder and can be prone to causing intermittent orange tests).
If the test is data-driven, then the test object can have an attribute stating the type of verification mode it will be using.
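A rough sketch of what that object might look like (every field name here is invented for illustration):

    const VERIFY_SIMPLE = 1;  // assert on the triggering event only
    const VERIFY_COMPLEX = 2; // also assert that the UI actually changed

    var gTestDescription = {
      name: "social-notification-glow",
      verificationMode: VERIFY_SIMPLE,
      actions: [
        { op: "setServerState", state: "notice-pending" },
        { op: "verify",         check: "GoSocialNoticeReceived" },
        { op: "verify",         check: "SideBarActivated" },
      ],
    };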
Furthermore, this will help streamline the test verification steps into a set of concrete APIs that any given test can use. And that will further simplify test authoring and debugging. (I'm totally making these up, but here are some possible examples of APIs):
- "bool VerifyGoSocialNoticeReceived(int verificationMode)"
- "bool VerifySideBarActivated(int verificationMode)"
- and so on... It would be fine for these verify APIs to write their own mochitest pass/fail notices (i.e. they would do the assertion that things are OK or not), so their return value could simply indicate whether we should abort the test at that point. A sketch of what one of these could look like follows.
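Here is one way such an API might be implemented with both modes (the event list, event name, element id, and glow attribute are all invented):

    function VerifyGoSocialNoticeReceived(verificationMode) {
      // Simple mode: did the event that should trigger the glow fire?
      let eventSeen = gObservedEvents.indexOf("social:notice-received") != -1;
      ok(eventSeen, "notice event was received");
      if (!eventSeen)
        return false; // caller should abort

      // Complex mode: additionally check that the UI actually changed.
      if (verificationMode == VERIFY_COMPLEX) {
        let button = document.getElementById("social-toolbar-button");
        let glowing = button && button.getAttribute("glow") == "true";
        ok(glowing, "toolbar button is glowing");
        return glowing;
      }
      return true;
    }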
Aborting the Test
I think that any long-lived test that could cause compounding failures (if we aren't in state 1, we will never make it to state 2, etc.) needs to abort early. That way, when people go to debug it, it is clear where and when we failed. So an early-abort system should be built into these from the start.
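In the data-driven scheme above, that could be as simple as a runner that stops at the first failed step (runAction and runVerify are stand-ins for the real step handlers):

    function runTest(test) {
      for (let i = 0; i < test.actions.length; i++) {
        let action = test.actions[i];
        let passed = (action.op == "verify")
                     ? runVerify(action.check, test.verificationMode)
                     : runAction(action);
        if (!passed) {
          ok(false, "aborting " + test.name + " at step " + i);
          return; // abort early: every later step depends on this state
        }
      }
    }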
Framework
I think that browser-chrome [5] tests are probably the best avenue we have available for you to use right now. Ideally, Marionette [6] tests would be better, simply because you could drive your test from Python scripts; you could then ostensibly manage your server (assuming the server can be written quickly and simply in Python) and your test from the same code, handling the state checking between test and server in a simple fashion. However, Marionette is not in production yet for desktop Firefox (it is for B2G), and we won't have it deployed on desktop until sometime in Q3.
So, if you need tests deployed now, I'm thinking the way to go is to write a set of JavaScript APIs like those I mentioned above to run some data-driven, multi-mode verification tests, and to call those from within the browser-chrome framework. There is quite a bit of precedent for constructing these mini test harnesses atop the mochitest framework; several test directories do this. By being in browser chrome, your actual test files will be JavaScript, but you will have full control to load pages in the browser, and since your test will operate from chrome, you won't have nearly as much headache dealing with security as you would writing mochitest-plain tests.
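Wiring the runner into a browser-chrome test file is then straightforward. test() is the standard browser-chrome entry point, and waitForExplicitFinish()/finish() are the usual mochitest calls for tests that don't complete synchronously; the file name is hypothetical:

    // browser_social_notification.js
    function test() {
      waitForExplicitFinish();
      runTest(gTestDescription); // the data-driven runner sketched above
      finish();
    }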
And if in the future this proves unwieldy, we can fairly easily convert such a mechanism (self-contained JavaScript tests using mochitest assertions) into Marionette tests.
[1]: https://developer.mozilla.org/en/HTTP_server_for_unit_tests
[2]: Mozbase's location in m-c: http://mxr.mozilla.org/mozilla-central/source/testing/mozbase/
[3]: Mozbase's canonical location: https://github.com/mozilla/mozbase
[4]: https://developer.mozilla.org/en/TPS_Tests
[5]: https://developer.mozilla.org/en/Browser_chrome_tests
[6]: https://developer.mozilla.org/en/Marionette