Mozilla DB news, Friday March 16th

Sheeri


  • While adding a custom field to Bugzilla to track the newest SeaMonkey version, the script hit a lock wait timeout and aborted, so some of the data had to be inserted manually to finish adding the custom field.
  • We then needed to add database grants so our metrics team could access the new fields.
  • We added database access for the Autoland staging server.
  • We added the DBAs to the paging alerts for our new backup server.
  • This seemed to be the week that a few machines started having disk issues, though all of them were one-offs (as opposed to recurring problems like having to set expire_logs_days). I did run into a fascinating issue where binary logs on one machine were 7G even though the maximum size was supposed to be 1G.
  • This was also the week that some cron jobs did not run, because we “sprung ahead”. Monday was a fun day, but luckily everything was easy to fix. Lesson learned: do NOT run anything via cron from 0200 to 0259, because if your server is set to a time zone that observes Daylight Saving Time, the job can run twice when clocks fall back in October/November and zero times when they spring forward in March.
  • The mozillians.org team wanted some data about group names so they could optimize their searching, so we gave them a data export.
  • We removed some company-sensitive comments from a Bugzilla bug.
  • Because machines were moved from the old data center to the new one, developers now pick up their nightly exports of the support.mozilla.com database from a new location.
  • Did you know I co-host a weekly podcast about MySQL? It’s called OurSQL Cast. You can find it on Feedburner and iTunes. Episode 83 is up, called “The NewSQL World”, and we interview Ori Herrnstadt, the CTO of Akiban.
  • We got several new database nodes kickstarted in our new data center.
  • We are preparing to upgrade MySQL on Bugzilla’s staging server, which will happen on Sunday.
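A common way to make schema-change scripts survive transient lock contention (like the lock wait timeout that aborted the custom-field script above) is to wrap the statement in a short retry loop. This is just a sketch; `LockWaitTimeout` and `run_with_retry` are hypothetical names, not part of any real MySQL driver.

```python
import time


class LockWaitTimeout(Exception):
    """Stand-in for MySQL error 1205 (Lock wait timeout exceeded)."""


def run_with_retry(execute, attempts=3, delay=5.0):
    """Call execute(), retrying a few times on lock wait timeouts.

    execute is any callable that raises LockWaitTimeout on error 1205;
    any other exception propagates immediately.
    """
    for attempt in range(1, attempts + 1):
        try:
            return execute()
        except LockWaitTimeout:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay)  # back off before retrying
```

Bumping the session's innodb_lock_wait_timeout before running the DDL is another option, but a bounded retry keeps the script from hanging on a long-held lock.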
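The oversized-binlog surprise above is easy to spot with a small disk scan: MySQL rotates binary logs at max_binlog_size, but a single large transaction is never split across files, so one file can grow well past the limit. A sketch (`oversized_binlogs` is a hypothetical helper; adjust the prefix to your log_bin basename):

```python
import os


def oversized_binlogs(binlog_dir, max_bytes=1024**3, prefix="mysql-bin."):
    """Return (filename, size) pairs for binary logs larger than max_bytes."""
    offenders = []
    for name in sorted(os.listdir(binlog_dir)):
        # skip the index file and anything that is not a binary log
        if not name.startswith(prefix) or name.endswith(".index"):
            continue
        size = os.path.getsize(os.path.join(binlog_dir, name))
        if size > max_bytes:
            offenders.append((name, size))
    return offenders
```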
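The spring-forward cron problem comes down to local times that simply do not exist: on the US change date in March 2012, clocks jumped from 02:00 straight to 03:00. A quick sketch with Python's zoneinfo (the time zone name is assumed for illustration) shows a 02:30 wall time getting pushed into the 03:00 hour:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Los_Angeles")

# 2012-03-11 02:30 never happened in this zone: clocks went 01:59 -> 03:00.
scheduled = datetime(2012, 3, 11, 2, 30, tzinfo=tz)

# Round-tripping through UTC normalizes the nonexistent wall time;
# per PEP 495 it lands in the 03:00 hour.
actual = scheduled.astimezone(ZoneInfo("UTC")).astimezone(tz)
print(actual.hour)  # -> 3
```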

One response

  1. justdave wrote on :

    >> “I did run into a fascinating issue where binary logs for a machine were 7G even though the maximum size was supposed to be 1G.”

    This happens when a single transaction (such as a full database reload) is that large, as it won’t split a transaction across multiple binlogs. We wind up with 30 GB binlogs every time we reload the Bugzilla staging server with a fresh snapshot of the Bugzilla database.