{"id":3153,"date":"2016-11-23T16:00:09","date_gmt":"2016-11-23T05:00:09","guid":{"rendered":"http:\/\/blog.mozilla.org\/nnethercote\/?p=3153"},"modified":"2016-11-23T16:00:09","modified_gmt":"2016-11-23T05:00:09","slug":"how-to-speed-up-the-rust-compiler-some-more","status":"publish","type":"post","link":"https:\/\/blog.mozilla.org\/nnethercote\/2016\/11\/23\/how-to-speed-up-the-rust-compiler-some-more\/","title":{"rendered":"How to speed up the Rust compiler some more"},"content":{"rendered":"<p>I recently wrote about <a href=\"https:\/\/blog.mozilla.org\/nnethercote\/2016\/10\/14\/how-to-speed-up-the-rust-compiler\/\">some work I&#8217;ve done to speed up the Rust compiler<\/a>. Since then I&#8217;ve done some more.<\/p>\n<h3>Heap allocations<\/h3>\n<p>My last post mentioned how heap allocations were frequent within rustc. This led to some wild speculation in some venues: about whether Rust should use heap allocation at all, about whether garbage collection was necessary, and so on. I have two comments about this.<\/p>\n<p>The first is that rustc is a compiler, and like most compilers its execution is dominated by complex traversals of large tree structures: ASTs, IRs, etc. Tree structures typically require heap allocation. In particular, a lot of these tree structures contain <code>Vec<\/code>, <code>HashMap<\/code> and <code>HashSet<\/code> fields, all of which unavoidably use heap allocation.<\/p>\n<p>The second is that although some heap allocation is unavoidable, the amount that rustc was doing was excessive. It is clear that nobody had (or not for a long time) made a concerted effort to minimize heap allocations. With the help of DHAT I&#8217;ve been able to greatly reduce the amount done. 
Any time you throw a new profiler at any large codebase that hasn&#8217;t been heavily optimized there&#8217;s a good chance you&#8217;ll be able to make some sizeable improvements.<\/p>\n<h3>Tools<\/h3>\n<p>As in my previous post, I focused almost entirely on the benchmarks present in <a href=\"https:\/\/github.com\/rust-lang-nursery\/rustc-benchmarks\">rustc-benchmarks<\/a> to guide my efforts.<\/p>\n<p>Once again I mostly used Cachegrind and DHAT to profile these benchmarks. I also used Massif (plus the excellent massif-visualizer) to profile peak memory usage on one workload (see below).<\/p>\n<p>I have also recently started using perf a bit more. Huon Wilson told me to add the <code>--callgraph=dwarf<\/code> flag to my <code>perf record<\/code> invocations. That improves things significantly, though I still find perf puzzling and frustrating at times, even after reading Brendan Gregg&#8217;s thorough <a href=\"http:\/\/www.brendangregg.com\/perf.html\">examples page<\/a>.<\/p>\n<h3>Wins<\/h3>\n<p class=\"gh-header-title\"><span class=\"gh-header-number\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37229\">#37229<\/a><\/span>: rustc spends a lot of time doing hash table lookups, enough that the cost of hashing is significant. In this PR I changed the hash function used by rustc (FNV) to one inspired by the hash function used within Firefox. The new function is faster because it can process an entire word at a time, rather than one byte at a time. 
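As a hedged illustration of the difference, here is a simplified sketch (not rustc&#8217;s actual implementation; the multiply constant and the mixing structure below are my own stand-ins) comparing a byte-at-a-time FNV-1a loop with a word-at-a-time loop:

```rust
use std::convert::TryInto;

// Byte-at-a-time FNV-1a: one XOR and one multiply per input byte.
fn fnv1a(bytes: &[u8]) -> u64 {
    let mut hash: u64 = 0xcbf2_9ce4_8422_2325; // FNV-1a 64-bit offset basis
    for &b in bytes {
        hash ^= b as u64;
        hash = hash.wrapping_mul(0x100_0000_01b3); // FNV 64-bit prime
    }
    hash
}

// Word-at-a-time mixing: one rotate/XOR/multiply per eight bytes,
// with a byte-wise loop only for the short tail.
fn word_at_a_time(bytes: &[u8]) -> u64 {
    const K: u64 = 0x517c_c1b7_2722_0a95; // illustrative multiply constant
    let mut hash: u64 = 0;
    let mut chunks = bytes.chunks_exact(8);
    for chunk in &mut chunks {
        let word = u64::from_le_bytes(chunk.try_into().unwrap());
        hash = (hash.rotate_left(5) ^ word).wrapping_mul(K);
    }
    for &b in chunks.remainder() {
        hash = (hash.rotate_left(5) ^ b as u64).wrapping_mul(K);
    }
    hash
}

fn main() {
    let data = b"how to speed up the rust compiler";
    println!("fnv1a:          {:#018x}", fnv1a(data));
    println!("word-at-a-time: {:#018x}", word_at_a_time(data));
}
```

The byte-at-a-time loop performs one multiply per input byte, while the word-at-a-time loop performs one per eight bytes, which is where the speedup comes from.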
The new hash function is slightly worse in terms of the number of collisions but the change sped up compilation of most workloads by 3&#8211;6%.<\/p>\n<p><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37161\">#37161<\/a> &amp; <a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37318\">#37318<\/a> &amp; <span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37373\">#37373<\/a><\/span><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7\">: <\/span>These three PRs removed a lot of unnecessary heap allocations (mostly due to <code>clone()<\/code>) in the macro parser. They sped up the compilation of html5ever by 20%, 7% and 2%, respectively.<\/p>\n<p><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37267\"><span class=\"gh-header-number\">#37267<\/span><\/a> &amp; <a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37298\"><span class=\"gh-header-number\">#37298<\/span><\/a>: rustc uses &#8220;<a href=\"https:\/\/en.wikipedia.org\/wiki\/DEFLATE\">deflate<\/a>&#8221; compression in a couple of places: crate metadata and LLVM bitcode. In the metadata case rustc was often compressing metadata and then throwing away the result! (Thanks to eddyb for diagnosing this and explaining to me how to fix it.) Avoiding this unnecessary work sped up compilation of syntex-incr by 6% and several others by 1&#8211;2%.<\/p>\n<p>For the LLVM bitcode case I tweaked the deflate settings so that it ran almost twice as fast while the compression ratio was only slightly worse. This sped up compilation of syntex-incr by another 8% and a couple of others by 1&#8211;2%. 
It&#8217;s possible that switching to a different compression algorithm would help some more, though that would be a much larger change.<\/p>\n<p><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37108\">#37108<\/a>: This PR avoided interning values of the <code>Substs<\/code> type in cases where it was easy to tell that the value had previously been interned. It sped up compilation of several benchmarks by 1&#8211;4%.<\/p>\n<p><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37705\">#37705<\/a><\/span><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7\">: This PR avoided some unnecessary calls to <code>mk_ty<\/code>. It sped up compilation of one benchmark by 5% and a few others by 1&#8211;2%.<\/span><\/p>\n<p><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37083\">#37083<\/a><\/span><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7\">: This PR inlined various methods involved with uleb128 decoding, which is used when reading crate metadata. This sped up compilation of several benchmarks by 1%.<\/span><\/p>\n<p><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/36973\">#36973<\/a><\/span><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7\">: This PR fixed things so that a data structure that is only required for incremental compilation is not touched during non-incremental compilation. 
This sped up compilation of several non-incremental benchmarks by 1%.<\/span><\/p>\n<p><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37445\">#37445<\/a><\/span> &amp; <span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37764\">#37764:<\/a><\/span> One unusual workload was found to make rustc consume excessive amounts of memory (4.5 GiB!) which made it OOM on some machines. I used Massif to identify ways to improve this.<\/p>\n<p>The first PR reduced the size of the <code>Expr<\/code> enum by shrinking the outsized <code>InlineAsm<\/code> variant, which reduced peak memory usage by 9%. The second PR removed <code>scope_auxiliary<\/code>, a data structure used only during MIR dumping (and of marginal utility even then). This reduced peak memory usage by another 10%.<\/p>\n<p>I have filed a <a href=\"https:\/\/github.com\/rust-lang-nursery\/rustc-benchmarks\/pull\/23\">PR<\/a> to add a cut-down version of this workload to rustc-benchmarks.<\/p>\n<p><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/36993\">#36993<\/a><\/span><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7\">: This PR did some manual inlining and restructuring of several ObligationForest functions. It sped up compilation of inflate by 2%.<\/span><\/p>\n<p><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37642\">#37642<\/a><\/span><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7\">: This PR reduced some excessive indirection present in the representation of HIR. 
It sped up compilation of some benchmarks by 1%, 2% and 4%.<\/span><\/p>\n<p><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7 url\"><a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37427\">#37427<\/a><\/span><span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7\">: rustc uses <a href=\"https:\/\/en.wikipedia.org\/wiki\/BLAKE_(hash_function)\">Blake2b hashing<\/a><\/span> to determine when a function&#8217;s code has changed. This PR reduced the number of bytes to be hashed by (a) avoiding hashing filenames twice for each span, and (b) pre-uleb128-encoding 32-bit and 64-bit integers, which are usually small. This s<span class=\"author-a-z84zz89zcz86zz90zz81zuvyz89zez75zz81zyz82z7\">ped up compilation of syntex-incr by 2%.<\/span><\/p>\n<h3>Future work<\/h3>\n<p>As the compiler front-end gets faster, the proportion of rustc execution time spent in the LLVM back-end increases. For some benchmarks it now exceeds 50% (when doing debug builds), though there is still plenty of variation. Thanks to mrhota there is a new <a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37742\/\">--enable-llvm-release-debuginfo<\/a> configure option that (unsurprisingly) enables debuginfo within LLVM. This means that profilers can now give filenames and line numbers for LLVM. I&#8217;ve looked at a few places in LLVM that show up high in profiles, though I haven&#8217;t yet managed to make any useful changes to it.<\/p>\n<p>Another interesting new development is pnkfelix&#8217;s <a href=\"https:\/\/github.com\/rust-lang\/rust\/pull\/37770\">-Zprint-type-sizes<\/a> option, which should land soon, and will potentially be useful for any program written in Rust, not just rustc. This option will make it trivial to see how each type is laid out in memory, which will make it easy to see how types can be rearranged to be smaller. 
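As a sketch of why type layout matters (the types below are hypothetical stand-ins, not the real Expr or InlineAsm), an enum is as large as its largest variant, so moving an outsized variant&#8217;s payload behind a Box shrinks every value of that enum:

```rust
use std::mem::size_of;

// Hypothetical stand-in for an outsized variant payload (128 bytes).
#[allow(dead_code)]
struct BigPayload {
    data: [u64; 16],
}

// Every Unshrunk value is big enough to hold the Big variant inline.
#[allow(dead_code)]
enum Unshrunk {
    Small(u32),
    Big(BigPayload),
}

// Boxing the payload means Big only stores a pointer, so the whole
// enum shrinks, at the cost of a heap allocation for that variant.
#[allow(dead_code)]
enum Shrunk {
    Small(u32),
    Big(Box<BigPayload>),
}

fn main() {
    println!("Unshrunk: {} bytes", size_of::<Unshrunk>());
    println!("Shrunk:   {} bytes", size_of::<Shrunk>());
}
```

This is the same trade-off behind shrinking the <code>InlineAsm<\/code> variant mentioned above: the common variants no longer pay for the rare large one.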
Up until now this has been a painful and imprecise exercise.<\/p>\n<p>Finally, I want to encourage anyone else with the slightest knack and\/or enthusiasm for optimizing code to take a look at rustc. You might be thinking that there isn&#8217;t that much low-hanging fruit left, but I am confident there is. It&#8217;s getting harder for <em>me<\/em> to find things to improve, but I am just one person with a particular background and a few preferred profiling tools. I am confident that other people with different backgrounds and tools will find plenty of stuff that I cannot. Rust compile speed still isn&#8217;t great, even after these improvements, but the more people who pitch in the faster it&#8217;ll improve. Don&#8217;t be shy, and if you have any questions please contact me or ask in the #rust or #rustc IRC channels.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I recently wrote about some work I&#8217;ve done to speed up the Rust compiler. Since then I&#8217;ve done some more. Heap allocations My last post mentioned how heap allocations were frequent within rustc. 
This led to some wild speculation in some venues: about whether Rust should use heap allocation at all, about whether garbage collection [&hellip;]<\/p>\n","protected":false},"author":139,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[311,16179],"tags":[],"_links":{"self":[{"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/posts\/3153"}],"collection":[{"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/users\/139"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/comments?post=3153"}],"version-history":[{"count":0,"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/posts\/3153\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/media?parent=3153"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/categories?post=3153"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mozilla.org\/nnethercote\/wp-json\/wp\/v2\/tags?post=3153"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}