Bleg for a new machine (part 2)

Last week I blegged for help in designing a new machine, and I got almost 50 extremely helpful comments and a handful of private emails.  Many thanks to all those who gave advice.

I mentioned that I want browser and JS shell builds to be fast, and that I want the machine to be quiet.  There were two other things I didn’t mention that affect my choices.

  • I’m not a hardware tinkerer type.  I don’t particularly enjoy setting up machines — I’m a programmer, not a sysadmin :)  I like vanilla configurations, so that problems are unlikely, and so that when they do occur there’s a good chance someone else has already had the same problem and found a solution.  So that’s a significant factor in my design.
  • I turn off my machine at night. And I use lots of repository clones (I have 10 copies of inbound present at all times), typically switching between two or three of them in one session.  So I stress the disk cache in ways that other people might not.

Here’s my latest configuration.  I don’t expect anything other than perhaps minor changes to this, though I’d still love to hear your thoughts.

  • CPU.  The Intel i7-4770.  I originally chose the i7-4770K, which is 0.1 GHz faster and is overclockable, but it lacks some of the newer CPU features, such as VT-d virtualization support and TSX transactional memory.  Since I won’t overclock — as I said, I’m not the tinkerer type — several people suggested the i7-4770 would be better.
  • Motherboard. ASUS Z87-Plus.  I originally chose the ASUS Z87-C, but was advised that a board with an Intel NIC would be better.
  • Memory. 32 GiB of Kingston 2133 MHz RAM.  No change.
  • Disk. Samsung 840 Pro Series 512 GB.  No change. Multiple people said this was overkill — that 256 GB should be enough, or that the cheaper 840 EVO was almost as good.  But I’ll stick with it: those disks have a really good reputation, the extra capacity should last a long time, and I really like the idea of not having to worry about disk space, especially with two OSes installed. And apparently those drives’ performance diminishes once they get about 80% full, so having some excess capacity sounds good.
  • Graphics card.  Multiple people agreed that the Intel integrated graphics was powerful enough, and that the Intel driver situation on Linux is excellent, which is great — I don’t like mucking about with drivers!
  • Case. The Fractal Design Define R4 (Black) was recommended by two people.  It looks fantastic (my wife is in love with it already) and is reputedly very quiet.
  • Optical drive.  A Samsung DVD-RW drive. Unchanged.
  • Software. Several people suggested using VirtualBox instead of VMware for my Windows VM.  I didn’t know about VirtualBox, so that was a good tip.  Someone also suggested I get Windows 7 Professional instead of Home Premium, because the latter only supports 16 GiB of RAM.  Ugh, typical Microsoft segmented software offerings.
  • I didn’t mention monitor, keyboard and mouse because I’m happy with my current ones.

This looks like an excellent set-up for a single-CPU, quad-core machine.  However, multiple people suggested that I go for more cores, either by choosing 6-core or 8-core server CPUs, or using dual-sockets, or both.  I spent a lot of time investigating this option, and I considered several configurations, including a dual-socket machine with two Xeon E5-2630 CPUs (giving 12 cores and 24 threads) or a single-socket machine with an i7-3970X (giving 6 cores and 12 threads) or a Xeon E5-2660 (giving 8 cores and 16 threads).  But I have a mélange of concerns: (a) a more complex configuration (esp. dual-socket), (b) lack of integrated graphics, (c) higher power consumption, heat and noise, and (d) probably worse single-threaded performance.  These were enough that I have put it into the too-hard basket for now.

Ideally, I’d love to build two or three machines, benchmark them, and give all but one back.  Or, it would be nice if Intel’s rumoured Haswell-E 8-core consumer CPUs were available now.

Still, daydreams aside, compared to my current machine, the above machine should give a nice speed bump (maybe 15–20% for CPU-bound operations, and who-knows-how-much for disk-bound operations), should be quieter, and will allow me to do Windows builds much more easily.

Thanks again to everyone who gave such good advice!  I promise that once I purchase and set up the new machine, I’ll blog about its performance compared to the old machine, so that any other Mozilla developers who want to get a new machine have some data points.

20 Responses to Bleg for a new machine (part 2)

  1. Sounds like a great build. I also looked at multi-processor builds, and decided that even with my employer paying for it, I couldn’t really justify the price.

    One more thing you could look at if you have any spare computers is icecream. We have it installed on all of our development and build machines, and it dramatically speeds up ccacheless rebuilds (on my laptop, they went from around an hour and a half to 10 minutes).
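    For anyone curious, a minimal icecream setup on a Debian-style distro might look something like the sketch below; the package name and wrapper path are assumptions that vary by distro:

    ```shell
    # Install the icecream compile daemon on every participating machine
    # (one machine on the network must also run the scheduler, icecc-scheduler).
    sudo apt-get install icecc

    # Put the icecream compiler wrappers first on PATH, then build with far
    # more jobs than local cores; surplus jobs go to other daemons on the LAN.
    export PATH=/usr/lib/icecc/bin:$PATH
    make -j20
    ```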

  2. Try Qemu-KVM/Virt-manager/virt-viewer instead of VirtualBox.

    It’s “in tree” virtualization for Linux. It’s what most OpenStack clouds are built with, and it has nice migration features. Until recently it was lacking a bit on the desktop, but depending on what distro you use that should be mostly fixed.

  3. Love this set up. Nice work.
    The only thing I can comment on is the case. I highly recommend a Cooler Master. I’ve had a Cooler Master Cosmos for 5+ years now. It’s as good as new. I’ve changed almost everything inside it. It’s super quiet even with all the fans running, because of the sound-proofing on the two sides.
    As for looking futuristic, I think the Cosmos looks even better.
    Now the Cosmos II is available. I’d love to get it but I don’t really need to replace my existing one.
    Here’s a great video review of the Cosmos II. This case has a very thoughtful design.
    Also, VirtualBox is excellent. I made the switch about 2-3 years ago and I use VMs several times a week.
    Good luck, thanks for sharing!

  4. You could consider using 2 × 256 GB SSDs (the Samsung 840 Pro is available in that size) instead of one 512 GB drive, and either use them as RAID or simply put your concurrent repositories on different disks for faster read and write speeds.

    For the virtualization, if your hardware supports VT-d (I guess so, check the mainboard specs, the processor should be fine), take a look at Xen: https://en.wikipedia.org/wiki/Xen

    • Nicholas Nethercote

      Interesting idea about having two disks. How hard is it to set up RAID under Linux? Does that make virtualization harder?

      • I have Raid-0 (2x128GB SSD) on my Asus U500 laptop.
        Installing Linux was a bit trickier than usual, but luckily there are enough
        instructions scattered across various blogs and message boards.
        Dunno about virtualization.
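        For the software-RAID route, the usual Linux tool is mdadm; a rough sketch is below. The device names are placeholders, and these commands destroy whatever is on the disks:

        ```shell
        # Stripe two SSDs into a single RAID-0 device (fast, but no redundancy:
        # if either disk dies, everything on the array is lost).
        sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

        # Put a filesystem on the array and mount it.
        sudo mkfs.ext4 /dev/md0
        sudo mount /dev/md0 /mnt/raid
        ```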

    • Do not attempt to use SSDs in a RAID0 configuration. This usually breaks TRIM, which is the feature that lets the SSD actually erase flash when the file occupying it has been deleted. Without TRIM functioning, performance will decline over time (potentially to very low levels) and drive lifespan will be impacted. You start off with great benchmarks, but actual performance doesn’t go up much and declines heavily over time. These caveats don’t apply if you are SURE that TRIM will function in your configuration, but then you have the normal risks of RAID0. TRIM USUALLY works on RAID1, but confirm on your platform. NEVER use RAID5 or other striped parity arrays with SSDs.

      Also, to clarify my previous comments on capacity: NO DRIVES perform well when filled beyond 80% capacity, though some drives suffer more than others. In general terms you want to avoid filling your drive beyond 75-80% capacity if you can help it.
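      On Linux you can check whether TRIM is actually reaching the drives before committing to a layout; a quick sketch:

      ```shell
      # Non-zero DISC-GRAN/DISC-MAX columns mean the block device advertises
      # discard (TRIM) support.
      lsblk --discard

      # Explicitly trim a mounted filesystem; this errors out if discard
      # doesn't work end-to-end (e.g. through a RAID layer).
      sudo fstrim -v /
      ```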

  5. Pick whatever you like for the NIC; this is as interchangeable as computer parts get (I’ve never had a problem with reliability or CPU use, not that your machine is starved in that department). You didn’t mention the PSU; I think today’s are all 80 Plus certified, which is good because your machine will be near-idle most of the time. If you buy a new one, use a PSU wattage calculator, and pay attention to reliability, since this is a component that can actually fail. I’ll renew the bcache recommendation; it’s a small amount of tinkering, but in my opinion worth it.

    • It is important to get a motherboard with an Intel NIC versus a Realtek, the Realtek NICs have annoying driver flaws and sometimes get stuck in “deep sleep” mode, requiring a full power interruption (not just shutdown) for quite a few minutes to resolve. Intel NICs “just work” and you’ll never have to mess with them. For cheaper aftermarket NICs I find that Marvell Yukon chipsets work very well under Windows, though I haven’t tested them under other OSes. I too used to think that it couldn’t possibly matter in this day and age, then Realtek pushed a driver update via Windows Update that broke connectivity to a very small number (but not all) websites, and I spent like 3 months running around applying driver updates from Flash drives as people complained. That kind of thing just doesn’t happen on Intel NICs.

      There definitely is no value in expensive aftermarket NICs (like those hilarious Killer NICs) unless you really need certain advanced server features, though.

      • Nicholas Nethercote

        I’ve had problems on my current machine with the NIC getting “stuck”, and barely working, if I reboot from Windows into Linux without doing a hard power-off in between.

  6. I would recommend getting the SSD first. You can simply clone your current software setup onto the new disk (the tools support resizing the partitions). If you are using a disk imager for backup purposes (not sure if those are common for Linux), just restoring your backup to the new SSD works.
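    On Linux, a block-level clone is one way to do that; a sketch with placeholder device names (dd will cheerfully overwrite the wrong disk, so double-check them):

    ```shell
    # Copy the old disk wholesale onto the new SSD (old disk = sda, SSD = sdb).
    sudo dd if=/dev/sda of=/dev/sdb bs=4M

    # Afterwards, grow the partition to use the SSD's extra space (e.g. with
    # gparted), then grow the filesystem to fill it; for ext4:
    sudo resize2fs /dev/sdb2
    ```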

    See how far that gets you. If that doesn’t do the trick, the 20-30% more from Haswell is probably not going to cut it either. If you are considering a 6-core setup (and that’s the only way to get a helpful boost in CPU speed), you will want to wait for Ivy Bridge-E, which is coming a little later this year.

    Noise should not be too much of a concern, there are excellent CPU coolers that will deal with the additional power quietly and you can use a fanless video card, but power consumption will be a lot higher (possibly still better than your machine before the 2600K).

  7. AMD do 8-core desktop CPUs; I have one (Ben Bucksch chose it for me). Unfortunately I had to tinker around getting it to work; it turned out that my PSU was underpowered, but I’d already switched from a quiet fan to a noisy fan, and I’m too lazy to swap them back. When I originally got it I was gobsmacked at my 15-minute cold clobber time, but that’s now a distant memory…

    As for the graphics adapter, I should figure out whether I can do better than the 1280×1024 I’m currently getting (I know the monitor itself can do 1600×1200). (Did I say I’m too lazy?)

    • Because of weak single-threaded performance, AMD’s 8-core CPUs struggle to match the performance of Intel’s 4-core HT systems. It’s not directly comparable to what NJN will be doing, but AnandTech just published benchmarks comparing Firefox compile times on Windows. The AMD 8-core system performed no better than Intel’s old i7-2500, while Intel’s newest CPUs finished the job in two thirds of the time.

      http://anandtech.com/show/7255/intel-core-i7-4960x-ivy-bridge-e-review/4

  8. Maybe you can wait for the i7-4771, to get the same clock speed as the 4770K.

  9. Here is a link on SSD Raid benchmarks, hope it will help: http://www.tomshardware.com/reviews/ssd-raid-benchmark,3485.html

    Also, I heard Intel GPUs aren’t really good when it comes to OpenGL, you might want to ask around about it.

  10. Ivy Bridge-E reviews are out and this might be an interesting graph for you:
    http://techreport.com/review/25293/intel-core-i7-4960x-processor-reviewed/7

    Anandtech has a graph for Visual Studio. I assume that the build system setup will play a major role in this. The more it keeps all 12 threads pegged, the bigger the advantage for the 6-core parts.

    • That’s interesting; much less scaling across cores than AT saw. I wonder if the limiting factor was GCC or the qtbench code itself. Intuitively I’d expect larger projects to scale better with more cores, because finding N non-dependent items to compile at a time would be easier; but I freely admit I could be wrong.