CIDuty/SVMeetings/Dec14-Dec18
Upcoming vacation/PTO: Alin - Dec 24, Dec 28-31
Meetings every Tuesday and Thursday
- Main Meetings Page: https://wiki.mozilla.org/ReleaseEngineering/Buildduty/SVMeetings
Previous meeting notes:
- https://wiki.mozilla.org/ReleaseEngineering/Buildduty/SVMeetings/Nov2-Nov6
- https://wiki.mozilla.org/ReleaseEngineering/Buildduty/SVMeetings/Nov9-Nov13
- https://wiki.mozilla.org/ReleaseEngineering/Buildduty/SVMeetings/Nov16-Nov20
- https://wiki.mozilla.org/ReleaseEngineering/Buildduty/SVMeetings/Nov29-Dec4
14.12.2015 - 15.12.2015
- Mercurial (hg) reconfigured: switched from HTTPS to SSH (a sketch of the change on an existing clone follows this list).
- Received access to TaskCluster and ran a demo test.
- https://bugzilla.mozilla.org/show_bug.cgi?id=1215294 - uploaded a patch
- https://bugzilla.mozilla.org/show_bug.cgi?id=1230763 - uploaded a patch
- https://bugzilla.mozilla.org/show_bug.cgi?id=1226729 - Win7 slaves 234 through 243 have been re-imaged and prepared for production
- Reconfig command used: ./end_to_end_reconfig.sh -p -n -b
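For reference, a minimal sketch of the HTTPS-to-SSH switch on an existing clone; the repo path below is a hypothetical example, not necessarily the repo we changed:

  # Show the clone's current default path (repo path is hypothetical).
  hg paths default
  # -> https://hg.mozilla.org/build/tools
  # Edit .hg/hgrc so the [paths] section points at SSH instead:
  #   [paths]
  #   default = ssh://hg.mozilla.org/build/tools
  # Verify the change took effect.
  hg paths default
  # -> ssh://hg.mozilla.org/build/tools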
Kim: I tried to sign your keys tonight, but it failed each time with the error message "public keys not found". Will ask coop to sign.
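As a side note, that gpg error usually means the public key to be signed was never imported into the local keyring; a minimal sketch, with a placeholder key ID:

  # Fetch the public key from a keyserver first (key ID is a placeholder),
  # then sign it; "public keys not found" typically means the import step
  # was skipped.
  gpg --recv-keys 0x12345678
  gpg --sign-key 0x12345678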
Regarding rolling restarts of buildbot masters: tools/buildfarm/maintenance/buildbot-wrangler.py has a graceful_restart command. I asked on IRC whether this is what we usually use outside maintenance windows, but didn't receive confirmation that this is how we proceed (a rough sketch of the rolling pattern follows below).
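The sketch below shows the rolling pattern we have in mind; the host names, paths, and buildbot-wrangler.py arguments are all assumptions, not a confirmed procedure:

  # Restart masters one at a time so only one master is ever down;
  # host names, basedir, and wrangler arguments are assumptions.
  for host in buildbot-master01 buildbot-master02; do
    ssh "$host" \
      "python /builds/tools/buildfarm/maintenance/buildbot-wrangler.py \
         graceful_restart /builds/buildbot/master"
  done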
Asked whether it's possible to get access to irccloud.mozilla.com, so we can follow all discussions while we are offline; Mihai Tabara suggested irccloud.
We received access. Thank you, Kim!
We discussed among ourselves and think half of each day could be dedicated to bugs and the other half to the slaveloan tool; it also depends on the workload we have.
https://bugzilla.mozilla.org/show_bug.cgi?id=1231750 - managed to clone an hg repo, make some changes, and commit them; more info in the bug (a minimal example of the workflow follows below).
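A minimal sketch of that workflow, with a hypothetical repo path and commit message:

  # Clone over SSH (repo path is a hypothetical example).
  hg clone ssh://hg.mozilla.org/build/tools
  cd tools
  # Make some changes, then review and commit them.
  hg status
  hg diff
  hg commit -m "Bug 1231750 - short description of the change"
  # Push the commit back to the remote repository.
  hg push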
16.12.2015 - 17.12.2015
- https://bugzilla.mozilla.org/show_bug.cgi?id=1233206 (Slave loan request for mstange [Mulet Linux x64 opt Mulet Reftest [TC] Reftest R(R5)])
  - Not sure which machines run this type of job; we suspect they are spot instances, since Treeherder shows "Machine: unknown".
  - Task Inspector shows a "WorkerId", but the corresponding instance cannot be found in AWS, even for the most recent cases (we made sure to check the appropriate region); see the lookup sketch after this list.
- https://bugzilla.mozilla.org/show_bug.cgi?id=1215294 - uploaded the patches; tested the manage_masters.py script on dev-master2
- https://bugzilla.mozilla.org/show_bug.cgi?id=1230763 - pushed the changes
- https://bugzilla.mozilla.org/show_bug.cgi?id=1233173 - re-imaged the slave; after the re-image, the disk space problem disappeared
- https://bugzilla.mozilla.org/show_bug.cgi?id=1223042 - started looking over it
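A sketch of how that AWS lookup can be scripted, assuming the WorkerId is the EC2 instance ID; the instance ID and region list below are placeholders:

  # Look for the instance behind a workerId across candidate regions;
  # assumes the workerId is the EC2 instance ID (placeholder below).
  for region in us-east-1 us-west-1 us-west-2; do
    echo "checking $region"
    aws ec2 describe-instances \
      --region "$region" \
      --instance-ids i-0123456789abcdef0 \
      --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
      --output text 2>/dev/null
  done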