I’ve been trying to figure out whether a MySQL Cluster would make sense in our environment and whether it’s worth the complexity. I think it would, but I can’t find enough information online to answer some nagging questions.
Nearly every “dynamic” web application we run is backed by MySQL, and we have several “clusters” using MySQL’s replication. That gives us a read-write master and a read-only slave (in some cases, multiple slaves), and every app needs to know about that split. It works, but it has some nasty side effects – kernel upgrades require downtime, and an overloaded master or slave delays replication and impacts the other databases on that pair.
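To make the pain concrete, here’s a rough sketch of the read/write split each app has to carry around today. The hostnames and the statement-sniffing are stand-ins, not our actual code:

```python
import itertools

# Hypothetical sketch of the master/slave routing every app re-implements.
# Hostnames are made up for illustration.
class ReplicatedPair:
    """Route writes to the master, spread reads across slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)

    def host_for(self, sql):
        # Crude statement sniffing: reads go to a slave, everything
        # else (INSERT/UPDATE/DELETE/DDL) goes to the master.
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("SELECT", "SHOW", "EXPLAIN"):
            return next(self._slaves)
        return self.master

pair = ReplicatedPair("db1-master", ["db1-slave-a", "db1-slave-b"])
print(pair.host_for("SELECT * FROM t"))   # db1-slave-a
print(pair.host_for("UPDATE t SET x=1"))  # db1-master
```

Every app carrying its own copy of logic like this is exactly the coupling I’d like a cluster to make unnecessary.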
And this doesn’t scale indefinitely, especially in cases where my gut says we could use more read-write heads on the same database, or where we need a new slave right now but can’t take the downtime hit of rsyncing all of /var/lib/mysql (or whatever).
What I want is to be able to throw more machines into a cluster to increase I/O, CPU or number of client connections & queries I can handle without a lot of effort. On paper, MySQL Cluster looks like it’ll do that but I’m left with a bunch of questions:
- What do I do if disk I/O is my point of contention? What about CPU?
- If I need more capacity on my read-write heads, do I just add more SQL nodes? Do my apps just need to know about multiple heads?
- How are simultaneous database writes (updates/inserts) handled? Will I run into locking issues?
- How critical is the management node? If it stops running, what breaks?
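On the multiple-heads question: if the answer really is “give the apps a list of SQL nodes,” I imagine the client side ends up looking something like the sketch below – try each node in turn and use the first healthy one. The node names and the health check are stand-ins; my understanding is that with MySQL Cluster every SQL node can accept writes, which is what would let the master/slave distinction go away:

```python
def pick_node(nodes, is_up):
    """Return the first SQL node that passes a health check.

    Assumes (per my reading of MySQL Cluster) that every SQL node is
    read-write, so any healthy node will do for any query."""
    for node in nodes:
        if is_up(node):
            return node
    raise RuntimeError("no SQL node available")

# Hypothetical node list; pretend sql1 is down for a kernel upgrade.
SQL_NODES = ["sql1.db.example.com", "sql2.db.example.com", "sql3.db.example.com"]
print(pick_node(SQL_NODES, lambda n: not n.startswith("sql1")))
# sql2.db.example.com
```

If that’s roughly right, a kernel upgrade becomes “take one node out of the list” instead of scheduled downtime – which is the whole appeal.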
I’m hoping someone out there has been through these issues and can point me in the right direction. Ping me offline – mrz at mozilla.