Lots of good stuff happened this week in MemShrink-land.
Necko Buffer Cache
I removed the Necko buffer cache. This cache used nsRecyclingAllocator to delay the freeing of up to 24 short-lived 32KB chunks (24 x 16KB chunks on Mobile/B2G) in the hope that they could be reused soon. The cache would fill up very quickly, and the chunks wouldn’t be freed unless no re-allocations occurred for 15 minutes. The idea was to avoid malloc/free churn, but the re-allocations weren’t that frequent, and heap allocators like jemalloc tend to handle cases like this pretty well, so performance wasn’t noticeably affected. Removing the cache had the following additional benefits.
- nsRecyclingAllocator added a clownshoe-ish extra word to each chunk, which meant that the 32KB (or 16KB) allocations were being rounded up by jemalloc to 36KB (or 20KB), resulting in 96KB (24 x 4KB) of wasted memory.
- This cache wasn’t covered by memory reporters, so about:memory’s “heap-unclassified” number has dropped by 864KB (24 x 36KB) on desktop and 480KB (24 x 20KB) on mobile.
I also removed the one other use of nsRecyclingAllocator, in libjar, where the recycling behaviour likewise had no noticeable effect. This meant I could then remove nsRecyclingAllocator itself. Taras Glek summarized things nicely: “I’m happy to see placebos go away”.
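For readers unfamiliar with the pattern, here is a minimal sketch of a recycling allocator in the style described above. It is not the real nsRecyclingAllocator, just an illustration with invented names of the free-list idea, and of how the extra header word pushes a 32KB request into jemalloc’s next size class.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// A toy recycling allocator (not the real nsRecyclingAllocator): freed
// chunks go onto a small free list for reuse instead of being returned
// to the heap immediately.
class RecyclingAllocator {
  // The extra header word mentioned above: it records the chunk size so
  // that Free() doesn't need to be told it.
  struct Chunk { std::size_t size; };

  std::vector<Chunk*> mFreeList;  // recycled chunks awaiting reuse
  std::size_t mMaxChunks;         // e.g. 24 for the Necko buffer cache

public:
  explicit RecyclingAllocator(std::size_t aMaxChunks) : mMaxChunks(aMaxChunks) {}

  ~RecyclingAllocator() {
    for (Chunk* c : mFreeList) {
      std::free(c);
    }
  }

  void* Alloc(std::size_t aSize) {
    // Reuse a recycled chunk if one is big enough.
    for (std::size_t i = 0; i < mFreeList.size(); i++) {
      if (mFreeList[i]->size >= aSize) {
        Chunk* c = mFreeList[i];
        mFreeList.erase(mFreeList.begin() + i);
        return c + 1;  // hand out the memory just past the header
      }
    }
    // The clownshoe effect: a 32KB request becomes 32KB + sizeof(Chunk),
    // which jemalloc rounds up to its next size class (36KB).
    Chunk* c = static_cast<Chunk*>(std::malloc(sizeof(Chunk) + aSize));
    if (!c) {
      return nullptr;
    }
    c->size = aSize;
    return c + 1;
  }

  void Free(void* aPtr) {
    Chunk* c = static_cast<Chunk*>(aPtr) - 1;
    if (mFreeList.size() < mMaxChunks) {
      // Recycle rather than free. The real allocator also armed a timer
      // that flushed the free list after 15 minutes without allocations.
      mFreeList.push_back(c);
    } else {
      std::free(c);
    }
  }
};
```

In this sketch recycled chunks are only released when the allocator is destroyed; the real allocator instead relied on a timer to flush the free list after 15 minutes of inactivity.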
Other Stuff
I gave a talk at the linux.conf.au Browser MiniConf entitled “Notes on Reducing Firefox’s Memory Consumption”, which got some attention on Slashdot.
Henrik Skupin reported that Ghostery 2.6.2 and 2.7beta2 have memory leaks (zombie compartments). I contacted the authors today and they said they are looking into the problem, so hopefully we’ll see a fix soon.
Gian-Carlo Pascutto reworked the code that downloads the safe browsing database to use less memory, and to have a fallback if memory runs out. This memory usage is transient and so the main benefit is that it prevents some out-of-memory crashes that were happening frequently.
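The general shape of such a fallback is simple; here is a hypothetical sketch (not the actual safe-browsing code, and the sizes and names are invented): try the memory-hungry path with a fallible allocation first, and drop to a smaller buffer, processing the data in pieces, if that fails.

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical buffer sizes, for illustration only.
static const std::size_t kPreferredSize = 32 * 1024 * 1024;  // fast path
static const std::size_t kFallbackSize  = 1 * 1024 * 1024;   // low-memory path

// Try to allocate a large working buffer; if that fails, fall back to a
// smaller one (and process the update in pieces) instead of crashing.
char* AllocateUpdateBuffer(std::size_t* aOutSize) {
  char* buf = static_cast<char*>(std::malloc(kPreferredSize));
  if (buf) {
    *aOutSize = kPreferredSize;
    return buf;
  }
  buf = static_cast<char*>(std::malloc(kFallbackSize));  // fallback path
  *aOutSize = buf ? kFallbackSize : 0;
  return buf;
}
```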
Last week I mentioned bug 703427, which held out the possibility of a simple, big reduction in SQLite memory usage. Marco Bonardo did some analysis, and unfortunately the patch caused large increases in the number of disk accesses, so it won’t work. A shame.
Kyle Huey fixed a zombie compartment that occurred when right-clicking on a single-line textbox. The fun thing about this was that in only 3 hours and 35 minutes, the following events happened: the bug report was filed, the problem was confirmed by two people, the bug report was re-categorized into the appropriate component, a patch was posted, the patch was reviewed, the patch landed on mozilla-central, and the bug report was marked as fixed! And approval for back-porting to Aurora was granted 8 hours later. Not bad. Kyle has also made progress on a more frequent zombie compartment caused by searching for text.
Jonathan Kew made the shaped-word caches (which are involved in text rendering) discard their data on memory pressure events.
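The underlying pattern is a cache that listens for memory-pressure notifications and empties itself when one arrives. Here is a standalone sketch of that pattern; it is not the actual Gecko text code, and the class and method names are invented.

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Stand-in for a notification service that broadcasts memory-pressure events.
class MemoryPressureHub {
  std::vector<std::function<void()>> mListeners;
public:
  void AddListener(std::function<void()> aListener) {
    mListeners.push_back(std::move(aListener));
  }
  void NotifyMemoryPressure() {
    for (auto& listener : mListeners) {
      listener();
    }
  }
};

// Toy word cache: maps a word to its (expensive to recompute) shaped glyphs.
class ShapedWordCache {
  std::unordered_map<std::string, std::vector<int>> mWords;
public:
  explicit ShapedWordCache(MemoryPressureHub& aHub) {
    // On memory pressure, throw the cached data away; it can be reshaped
    // on demand later.
    aHub.AddListener([this] { mWords.clear(); });
  }
  void Put(const std::string& aWord, std::vector<int> aGlyphs) {
    mWords[aWord] = std::move(aGlyphs);
  }
  const std::vector<int>* Get(const std::string& aWord) const {
    auto it = mWords.find(aWord);
    return it == mWords.end() ? nullptr : &it->second;
  }
};
```

Dropping the cache costs some re-shaping work later, but it frees memory immediately, which is exactly when it matters most.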
Quote of the week
A commenter on my blog named jas said (the full comment is here):
a year ago, FF’s memory usage was about 10x what chrome was using in respect to the sites we were running…
so we have switched to chrome…
i just tested FF 9.0.1 against chrome, and it actually is running better than chrome in the memory department, which is good. but, it’s not good enough to make us switch back (running maybe 20% better in terms of memory). a tenfold difference would warrant a switch. in our instance, it was too little, too late.
but glad you are making improvements.
So that’s good, I guess?
I also like this comment from the aforementioned Slashdot thread!
Bug Counts
Here are the current bug counts.
- P1: 24 (-3/+1)
- P2: 131 (-8/+7)
- P3: 69 (-1/+3)
- Unprioritized: 3 (-3/+2)
That’s a net drop of two, largely because I went through and closed some P1 and P2 bugs that were stale or had been fixed elsewhere.
17 replies on “MemShrink progress, week 31”
I’ll switch back when Firefox uses 10 kbytes of RAM with 150 tabs!
Otherwise, I’ll keep my 2Gb happy Chrome.
Yeah, well. Guess you got work to do then 🙂
The slides about Firefox memory management were a great read. Thanks, and good luck with the next steps.
It’s “good news” that Firefox spent a year treading water and several users abandoned ship? I’d hate to know what you think is bad news!
“Waaaaaaaa! People are making progress on a piece of software I spend all my time hating on, and this conflicts with my deep and abiding need to speak negatively about it!”
pd, you need a new hobby, dude.
Clearly missing sarcasm too.
Go away.
If you don’t have anything constructive to say, don’t say anything at all.
When you feel the need to verbally piss into the wind, open a text file, not a web browser.
Criticizing is not contributing. It baffles me why you come here. Your negativity has only proven toxic.
Either try to be more positive and constructive or just, please, go away.
I guess it’s not surprising that people have a hard time switching back. There’s a pretty big opportunity cost to switching in terms of getting your browsing history and add-ons into a new browser. That being said, it is annoying to hear people who switched previously bash Firefox for issues that no longer exist. I guess all we can do is continue to make improvements and make a better browser.
From what I remember, Firefox 11 includes a way to import data from Chrome.
I did some optimizations on the nsRecyclingAllocator code, but it’s even nicer to see that it’s no longer needed, as it added not only memory overhead but also timers.
In the end will all these timers being added add up to any sort of detriment in terms of performance or memory?
Each nsRecyclingAllocator had a timer, and so timers have been removed, not added. We had at most two nsRecyclingAllocators alive at any one time. I expect the removal of at most two timers is negligible in terms of performance or memory consumption.
Does the removal of these two timers reduce the number of CPU wakeups? If so, could this help to reduce power usage on laptops?
I don’t know, but I suspect removing two timers makes a negligible difference.
If I understand correctly, MemShrink doesn’t use compression to reduce memory usage.
I was wondering why. Is that out of scope? Or technically impossible?
Note: I understand that compression is only applicable to “cached data”, not to “live” data currently being processed. But then, my understanding is that Firefox has a hell of a lot of cached data, typically for background tabs, or even undisplayed elements of the current tab…
It seems to me such an experiment would be better left to RAM module manufacturers. If you do it in software, presumably it would slow things down a lot.
Probably too much of a performance hit. Compression for data stored on an HD isn’t generally a problem any more, but DRAM is ~10,000 times faster. I doubt it’d be feasible to do it without a major performance hit.
Well, it is.
It’s even common practice in databases to increase the amount of data kept in memory and in cache, which helps performance a lot. So if it’s good for databases…
The speed of the best compression algorithms is about 300MB/s per core for compression, and 1GB/s for decompression.
Check out this benchmark table:
https://github.com/decster/jnicompressions
However, there are two catches:
- Compression only works on “large enough” data blocks. “Large enough” is about a few KB.
- You have to decode the data before using it, so it is “twice in memory” while in use. Therefore, you want your data blocks to remain “small enough”. “Small enough” is presumably below 1MB.
Hence, there is a big difference between cached data (stored for later use) and active data (being used).
However, I’m not sure whether distinguishing between cached data and active data is on MemShrink’s requirements list.
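For what it’s worth, here is a rough sketch of the trade-off being discussed, using zlib purely as a stand-in codec (the fast block compressors in the benchmark linked above have different APIs): a cached block is compressed while it sits idle and decompressed when it is needed again, at which point it is briefly in memory twice.

```cpp
#include <cstddef>
#include <vector>
#include <zlib.h>

// Compress a cached block while it sits idle; returns empty on failure.
std::vector<Bytef> CompressBlock(const std::vector<Bytef>& aBlock) {
  uLongf destLen = compressBound(aBlock.size());
  std::vector<Bytef> out(destLen);
  if (compress2(out.data(), &destLen, aBlock.data(), aBlock.size(),
                Z_BEST_SPEED) != Z_OK) {
    return {};
  }
  out.resize(destLen);  // keep only the bytes actually written
  return out;
}

// Decompress when the block is needed again. Note that the compressed and
// decompressed copies coexist here, which is why blocks should stay small.
std::vector<Bytef> DecompressBlock(const std::vector<Bytef>& aCompressed,
                                   std::size_t aOriginalSize) {
  std::vector<Bytef> out(aOriginalSize);
  uLongf destLen = aOriginalSize;
  if (uncompress(out.data(), &destLen, aCompressed.data(),
                 aCompressed.size()) != Z_OK) {
    return {};
  }
  out.resize(destLen);
  return out;
}
```

Even with a fast codec, whether this wins depends on how often the cached data is touched; data that is decompressed on every use would quickly cost more than it saves.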