Categories: Security TLS

Deprecating Non-Secure HTTP

Today we are announcing our intent to phase out non-secure HTTP.

There’s pretty broad agreement that HTTPS is the way forward for the web.  In recent months, there have been statements from IETF, IAB (even the other IAB), W3C, and the US Government calling for universal use of encryption by Internet applications, which in the case of the web means HTTPS.

After a robust discussion on our community mailing list, Mozilla is committing to focus new development efforts on the secure web, and start removing capabilities from the non-secure web.  There are two broad elements of this plan:

  1. Setting a date after which all new features will be available only to secure websites
  2. Gradually phasing out access to browser features for non-secure websites, especially features that pose risks to users’ security and privacy.

For the first of these steps, the community will need to agree on a date, and a definition for what features are considered “new”.  For example, one definition of “new” could be “features that cannot be polyfilled”.  That would allow things like CSS and other rendering features to still be used by insecure websites, since the page can draw effects on its own (e.g., using <canvas>).  But it would still restrict qualitatively new features, such as access to new hardware capabilities.

The second element of the plan will need to be driven by trade-offs between security and web compatibility.  Removing features from the non-secure web will likely cause some sites to break.  So we will have to monitor the degree of breakage and balance it with the security benefit.  We’re also already considering softer limitations that can be placed on features when used by non-secure sites.  For example, Firefox already prevents persistent permissions for camera and microphone access when invoked from a non-secure website.  There have also been some proposals to limit the scope of non-secure cookies.

It should be noted that this plan still allows for usage of the “http” URI scheme in legacy content. With HSTS and the upgrade-insecure-requests CSP attribute, the “http” scheme can be automatically translated to “https” by the browser, and thus run securely.
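
As a rough illustration of the mechanism (a toy model, not Firefox’s actual implementation), the directive amounts to the browser rewriting the scheme of insecure subresource URLs before fetching them:

```python
from urllib.parse import urlsplit, urlunsplit

# Toy model of the upgrade-insecure-requests CSP directive: when a page
# opts in, the browser rewrites http:// subresource URLs to https://
# before fetching them. Real browsers do considerably more (HSTS,
# preload lists, port handling); this is only a sketch.
CSP_HEADER = ("Content-Security-Policy", "upgrade-insecure-requests")

def upgrade_url(url: str) -> str:
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_url("http://example.com/img/logo.png"))
# https://example.com/img/logo.png
```

Pages opt in by sending the `Content-Security-Policy: upgrade-insecure-requests` response header (or the equivalent `<meta>` tag), so legacy `http://` references keep working without editing every page.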

Since the goal of this effort is to send a message to the web developer community that they need to be secure, our work here will be most effective if coordinated across the web community.  We expect to be making some proposals to the W3C WebAppSec Working Group soon.

Thanks to the many people who participated in the mailing list discussion of this proposal.  Let’s get the web secured!

Richard Barnes, Firefox Security Lead

Update (2015-05-01): Since there are some common threads in the comments, we’ve put together a FAQ document with thoughts on free certificates, self-signed certificates, and more.

288 comments on “Deprecating Non-Secure HTTP”

  1. david wrote on

    awesome news 🙂 I don’t expect this to take place over the course of a year, but it’s a good start. And some people thought that the discussions about getting rid of http were a joke!

  2. Hamish wrote on

    > Gradually phasing out access to browser features for non-secure websites, especially features that pose risks to users’ security and privacy.

    Why “especially”? What is the motivation behind phasing out access to browser features that do not pose risks to security and privacy?

    1. rbarnes wrote on

      If you look closely enough, it can be hard to find features that don’t have security and privacy risks. Nobody thought canvas was a privacy risk until people demonstrated canvas fingerprinting. It’s a question of degree.

  3. Peter wrote on

    Does this mean I can no longer launch a web site without paying for a cert? Or somewhat anonymously? Or likewise?

    I wouldn’t mind if http=self-signed cert, https=CA-signed cert, or similar.

    As is, half the kids in my dorm when I was an undergrad ran their own web servers, and maybe 5% of those turned into web startups.

    Making the web even more asymmetric seems like a Very Bad Idea.

    Good news is I suspect this might be a bit like IPV6 and never actually happen.

    1. rbarnes wrote on

      Nothing about this plan prevents you from using non-secure HTTP. It just means that over time, secure HTTPS is going to get more awesome, while non-secure HTTP is going to get less awesome. If the less-awesome web is good enough for you, you can keep on using non-secure HTTP. Though obviously the web would be better if you didn’t.

      1. Matthew wrote on

        You didn’t address the meat of his question. SSL certs are very expensive. This will be a factor in limiting speech on the web. The most frustrating thing about this https-only push is that the advocates absolutely ignore that the web was built on people having servers in their closets.

        Shouldn’t we fix the SSL cert problem until they are as cheap as domains first?

        1. J.R. wrote on

          “SSL certs are very expensive”

          The price of SSL certs falls somewhere between very cheap and free.

          “This will be a factor in limiting speech on the web”

          What absolute nonsense. The vast majority of people online “speak” through third party services. A single certificate can enable the speech of millions of users. A few dollars for a certificate isn’t enough to hamper competition.

          1. Dave wrote on

            A basic SSL cert from a decent CA costs anywhere from $25 to $50 per year.
            Multi-subdomain / wildcard SSL certs cost $300 or more per year.
            People outside USA and EU number in the billions.
            For most of them, USD costs are high, and since CA chains most commonly originate from American root CAs, the high costs get passed down to third-world users.
            Lots of kids purchase $20/year VPSes and start their first proper web presence. This entire layer of users will be screwed unless self-signed certs are given more weight or the cost of SSL certs is brought down drastically. Supply and demand does not apply here, because the supply is restricted artificially. A CA-derived SSL cert might look good in a browser but says literally nothing about the business: no real verification beyond email and credit card number happens, so a MITM attack is the only thing the CA tree seems to prevent or protect against. There needs to be a regulated price reduction, OR an OpenID kind of verification during CSR processing (it’s the same level as a CA), OR one good corporation that disrupts the CA extortion business model – like if Google or Mozilla were to start as a CA selling certs at $5 per year for email + CC verification.

            Or at least, in your replies henceforth on this topic, provide links to cheap SSL cert providers.
            Please help solve the problem for everyone, “works on my pc” doesn’t work on the internet.

            1. Zed wrote on

              Nuh uh man the websites will just be “less awesome” for those kids. What a joke, Richard.

              Bottom line is that https isn’t as easily accessible to the general public (WHO F***ING BUILT THE WEB AND MADE IT WHAT IT IS) and this is going to limit our presence. I’m a web developer. I have 2 websites. One is a gallery of my own pictures and one is a personal blog. What about either of those things needs to be forced to be secure, and why should it cost me an extra shit-ton (relative to my $10/year/domain domain registration and $10/year hosting)?

            2. Aranjedeath wrote on

              One need only check StartCom for a free certificate. Soon we’ll have Let’s Encrypt, as well.

          2. foljs wrote on

            The price of SSL certs falls somewhere between very cheap and free.

            Are you an American/Western European by any chance? If so, shut up, stop posting misleading BS, do some research and then talk.

            1. Gabriel wrote on

              Self-signed is free, though less awesome; there are other free options today, and Let’s Encrypt is just around the corner. This move is going to push demand for other free or trivially cheap options over time as well. Certificates fall somewhere between very cheap and free.

              Do some research!

        2. HybridAU wrote on

          SSL Certs have not been expensive for some time now.

          StartSSL offers free Class 1 certs and has done so for a few years now; they are trusted by all browsers and good enough for 99% of sites. Then if you want Class 2 (for organizations rather than personal use), or more SANs, or wildcards, unlimited Class 2 certificates can be had for < $70.

          Then there is “Let’s Encrypt” – we are yet to see how that plays out, but it looks like it will make it very easy to get a free cert.

          1. Anon wrote on

            StartSSL is Israeli-based. No thanks! I’d prefer to be a few dollars rather than trusting them.

            1. Anon wrote on

              be -> pay

            2. Ben Hutchings wrote on

              Unless you use certificate pinning, your trust in the CA you choose for your web site is irrelevant. Any widely trusted CA, including StartSSL, GoDaddy, or the Dutch government, could issue a fake certificate for it.

            3. Ninveh wrote on

              Why would you not trust an Israeli CA? From a political point of view?
              If from a trust POV, keep in mind that Israeli cyber operations, as widely reported, are geared only against local Middle East adversaries and against entities who might have info relevant to those adversaries. This is in contrast to the US and its Five Eyes proxies, who consider the whole world their adversary.

          2. Dave wrote on

            > Then if you want Class 2 (for organizations rather than personal) or more SANs or wild cards, unlimited Class 2 certificates can be had for < $70.

            Links, please …?


        3. alex wrote on

          “Shouldn’t we fix the SSL cert problem until they are as cheap as domains first?”
          Establishing a CA that provides free certificates (like “Let’s Encrypt”) probably takes way less time than deprecating non-secure HTTP. So we should do both in parallel.

      2. Frank wrote on

        That is complete and utter b*ll and you know it. Nobody in his right mind would insist on using a secure site for a simple web presence that doesn’t present anything more than some info. While certificates may not be expensive anymore, secure HTTP still requires a fixed IP address. Guess what the often used shared hosting services don’t provide (unless at considerable extra cost). And don’t start about IPv6. IPv4 addresses are going to be necessary for a long time to come.

        This decision sucks.

        1. Graham wrote on

          @Frank: Almost all browsers (IE on Windows XP & old Android are the main exceptions) support SNI now so servers don’t need one IP per cert.
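
          A sketch of what that looks like server-side, using Python’s ssl module (the hostnames and cert/key file paths below are invented for illustration): the client sends the desired hostname during the TLS handshake, and the server picks the matching certificate in a callback, so many HTTPS sites can share one IP.

```python
import ssl

# Toy sketch of SNI-based virtual hosting: one IP, one listening port,
# a certificate chosen per requested hostname. The hostnames and the
# cert/key file paths below are invented for illustration.
SITES = {
    "alice.example": ("alice.crt", "alice.key"),
    "bob.example": ("bob.crt", "bob.key"),
}

def choose_cert(tls_socket, server_name, default_context):
    # Called mid-handshake with the hostname the client asked for (SNI).
    if server_name in SITES:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(*SITES[server_name])
        tls_socket.context = ctx  # serve this site's certificate

server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_context.sni_callback = choose_cert
```

          On the client side, `ssl` sends the SNI extension automatically when you pass `server_hostname=`, which is why nearly all current browsers and HTTP libraries work with shared-IP certificates.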

        2. Gerry Mander wrote on

          Amen brother. While we’re at it, I’d love for someone to come along and build a browser that only supports HTML and CSS. The web was much better before AJAX. So much of client-side scripting is unnecessary and only makes it easier to spy on users while degrading the experience.

          1. James wrote on

            Why muck it all up with CSS?

        3. Owen wrote on

          Exactly, there are millions of sites that don’t have any need for https. What a typical lazy one-size-fits-all response to a problem. The legacy web, distributing information to users without logins and web “apps”, is what started the whole www thing in the first place and continues to be important. Wake up Mozilla, you’re losing the plot.

          1. S. Albano wrote on

            If my cable company can inject JavaScript notifications into any unencrypted site (thanks Cable Company…), couldn’t a malicious script kiddie at the coffee shop inject an iframe to an attack site into your site while they were MITM attacking your user?

            Iframes, JavaScript, plugins can and are being injected into our unencrypted traffic for bad purposes. This is about data integrity and user security, not just privacy.

        4. kirb wrote on

          If you read the article, a simple web presence like you describe wouldn’t be affected. All it affects is some current issues regarding potentially private data (such as camera/microphone access) when sent unencrypted and limiting future features that are similarly problematic if unencrypted. Definitely not a one-size-fits-all; that would be too stupid and Mozilla would wake up the next day to find nobody uses their product any more. Any simple, informational, website probably wouldn’t even need JavaScript at all nowadays; if it does it surely wouldn’t need such features.

      3. Peter wrote on

        I spent a while in China. The Internet was awesome, especially if I went to Baidu.

        Of course, I could use Google, but it was a little less awesome. Slow load times and dropped packets. But that was probably for my own good. The Internet is better off without information about the Tiananmen Square Massacre. And China did a great job at making the open, distributed Internet just a little less awesome.

        Thank you, Mozilla, for making the open, distributed Internet just a little less awesome in the US as well!

      4. Nicholas Steel wrote on

        Why are HTTP websites becoming less awesome? Your wording implies that there will be an active attempt to worsen HTTP instead of leaving it as is.

    2. Nando wrote on

      Why will IPv6 never actually happen?

      1. J.R. wrote on

        Because there’s no way to gradually transition. Everyone has to buy in simultaneously for a “switch over”, which will absolutely never happen.

        1. Oedipus wrote on

          Not a single part of that is true.

          IPv6 usage is rising on what sure looks to be a standard sigmoid growth curve. It is normally deployed gradually and compatibly and does not require any sort of simultaneous switchover.

        2. Alex wrote on

          Sure you can have a gradual switch over, there’s the whole point of dual stack networks and happy eyeballs in client applications and operating systems.

          I’ve been running with a native ISP provided IPv6 connection for a couple of years, it’s completely transparent.

    3. Simplebeian wrote on

      Or you know… StartSSL.

    4. Dan wrote on

      I agree. Forcing everyone to buy a $995 SSL certificate will do nothing but ruin the internet for small companies, tilting the playing field in favor of large corporations once again.
      Mozilla has sold its soul…

      1. Neal wrote on

        Google for free ssl certificates

        About 9,480,000 results (0.49 seconds)

        So where are you spending that $995 cause I’ve got a real nice bridge I’m looking to sell

      2. Simplebeian wrote on

        Who is paying $995 for a certificate?

    5. mathew wrote on

      Actually, that’s a great idea — roll out new Mozilla features that only work on IPv6. That’ll force adoption, right?

      1. Grad wrote on

        Not really, it’ll force people to use another browser. If Firefox had a market share of 80% it’d work, but with the current marketshare? Not quite.

    6. Grad wrote on

      If you worry about those 9 bucks a year for the cert, check out Let’s Encrypt which will give you free certs…

      1. Anon wrote on

        This ^

  4. Peterr wrote on

    Wow! This is fantastic!

    I’m looking forward to having all web sites require signed certificates! I will be much more secure! The great thing is, if there is a scammy web site, or even one which is not politically acceptable, the government can just have the CA revoke the certificate! And I’m safe from content I shouldn’t be reading.

    (footnote: My previous comment was negative. Fortunately, you moderated it away! Thank you for protecting me from having posted something perhaps foolish before!)

    1. cxqn wrote on

      Yes! Hooray! Please, Mozilla, be aware of the flaw in the CA system.

    2. NoneWhatsoever wrote on

      What utter nonsense. If a government doesn’t want you to read something on the Internet then, assuming they have any kind of jurisdiction over the site or its CA, they can just, you know, shut down or seize the website.

      You don’t honestly expect anyone with a brain to believe that requiring certs introduces some kind of new avenue for government censorship that didn’t exist before, do you? Pure idiocy.

  5. Kise wrote on

    Could you give examples of the features that will be disallowed on http?

    Also keep in mind that there are millions of websites that do not deal with your information, such as my own blog, where I write a post maybe once a month. Requiring SSL for such a simple blog is overkill and overhead for no reason other than “we say so”.

    Also, is Mozilla willing to give free SSL certs + IPs to all those websites on the internet? Not all browsers support SNI to allow multiple certs on the same IP, most server providers don’t give more than 1-2 IPs per server, and that’s not to mention hosting companies hosting thousands of websites on a single IP.

    1. rbarnes wrote on

      Some things that have been discussed include geolocation and getUserMedia (microphone and camera access).

      Mozilla will not be giving away free certificates, but we are very supportive of Let’s Encrypt and other projects to make certificates more widely available. We can’t do anything to make IP addresses easier to get, though we fully support IPv6 and SNI.

  6. wowaname wrote on

    There are places where HTTPS is overkill, especially with the obvious cases such as localhost / LAN websites, and in cases where end-to-end encryption is already present such as Tor hidden services and I2P eepsites. Also, there is the broad category of all those old static websites that are left up for reference and haven’t done anything to update their content or servers for years, so I don’t see HTTP ever completely phasing out. HTTPS is good for the dynamic sites that we trust to keep our information secure, but saying it is the only option is unrealistic.

  7. Peter wrote on

    Deprecate all the things, before there is a sane* alternative.. Great plan!

    (*not sane: paying $400 every 2 years to an ominous security company for a lousy cert)

    1. Simplebeian wrote on

      $400… the hell you buying your certs from?

  8. Whitney wrote on

    I sincerely hope this is strictly enforced. My router’s web interface uses weak encryption that prevents it from being viewed over HTTPS. Having disabled HTTP, I could not find a single browser that would allow me to load the page. I couldn’t even force the browser to let me load it. I had to telnet onto the router to re-enable HTTP to get to the configuration page. If we make browsers strong-HTTPS-only, all those legacy device-configuration pages will become inaccessible.

    1. Sebastian Jensen wrote on

      Most likely, anything intranet will get excluded from these limitations. Anything else seems illogical.

      1. Richard B wrote on

        I certainly hope intranet addresses are excluded (localhost, 192.168/16, 10/8, etc.), as there are also lots of long-running applications that have mini http webservers to serve up current monitoring data. They specifically used http because it was easy to implement in code and consume in different ways; https will force shims like stunnel to be placed in between. These apps typically only accept connections from the same subnet and are never seen on the internet at large.

  9. James wrote on

    Not all data needs to be secure. Not all websites need to be secure. Requiring HTTPS means additional compute and additional servers securing something that may not need to be secured, and provides no benefit, only cost. Free and open information should be (optionally) free of encryption as well.

    And if other browsers don’t follow suit you’ll be painting yourself into a corner by being intentionally incompatible with non-https sites.

    BTW, in case you care, I’m a donor to Mozilla because of Firefox, but this type of move could drive me back to one of the big 3.

    1. Jipp wrote on

      Sorry, but encryption is *not* computationally expensive.

      “In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.”

      In the same series:

      1. Bill A. wrote on

        That’s because SSL sessions are cached, so the cost of initial key exchange is amortized across dozens, hundreds, or even thousands of connections. And the largest CDNs like Google have custom SSL stacks that permit sharing sessions across physical servers.

        Whereas for small sites where each viewer only visits a few pages every once in awhile, the connection cost will be very significant, even if you run multiple servers.

        Built-in hardware AES in modern Intel and AMD CPUs makes the cost of bulk encryption negligible. But key exchange was and still is costly compared to unencrypted connections. Elliptic curve keys reduce the cost considerably, but only in comparison to RSA key negotiation. You’re still talking fractional connection throughput relative to unencrypted connections.

        HTTP/2 multiplexing might help, but given how poorly HTTP 1.1 pipelining has been supported by webapp stacks, I think it would be foolhardy to rely on it to save the day.

        That said, there’s plenty of fat to trim in various webapp software stacks. Even though in absolute terms SSL is _still_ expensive, I don’t think it’ll prove to be a big deal. I’m more concerned about the Certificate Authority racket.

        1. Andy Green wrote on

          “Whereas for small sites where each viewer only visits a few pages every once in awhile, the connection cost will be very significant”

          Yeah. But as ‘small sites… each viewer only visits a few pages every once in a while’, how significant can that be? They are small sites, and each user doesn’t do much… it’s not a problem then…

          1. gggeek wrote on

            But many small sites are hosted on a single server. Since the server cannot reuse the SSL sessions of userA->siteX for userB->siteY, it will take on considerably more load.

  10. Andy wrote on

    For those commenting about the cost of certificates – note that the price for most basic certs these days runs on the order of $10 to $20 per year, so not as exorbitant as it used to be (and no more expensive than paying yearly domain name registration fees). Furthermore, the EFF, Mozilla, and others are well at work building a completely free CA as part of the Let’s Encrypt project: There are also CAs operating today through which one can obtain certs for free (e.g.,, etc). So while there are legitimate areas for concern in doing away with HTTP, I don’t really think cert cost is one of them.

    1. Kise wrote on

      While there are cheap certs (< $20), it’s not so cheap when you consider that a lot of hosting companies host thousands of websites, and SNI is not widely available yet, at least on old Android phones; and when we are talking about IPv4 there are even fewer IPs than websites. Unless this is solved I foresee the same thing happening as when Mozilla backtracked on not supporting H.264 after losing huge market share to the likes of Chrome.

      1. Andy wrote on

        I think SNI is more widely available than you suggest. I run a number of SNI-based websites and have for years now with no user complaints. Unless your visitors are using Windows XP (a significant security problem in and of itself) or Android 2 or earlier (now over 4 years old), I don’t think any significant portion of web traffic is still SNI-incompatible. And given that a large swath of the secure web is already unavailable to such individuals, claiming that we should avoid rolling out additional security features to support the few percent (or less) of users who can’t use them seems a bit of a stretch. And people without SNI support are only going to become rarer over the next few years as this initiative progresses.

        1. gggeek wrote on

          SNI has yet to happen for the internet-of-things. Lots of non-browser http clients have much simpler/outdated networking stacks (case in point: I just spent 3 days battling Jira, Atlassian’s flagship app, which does not support SNI for its http calls, even when running on Java 8…)

      2. alex wrote on

        Isn’t the stock Android (version < 3) browser already the only remaining relevant user agent that doesn’t support SNI? By the time this change is completed (i.e. at least several years in the future), there will be only a really small number of users of such old browsers left.

  11. Iain R. Learmonth wrote on

    What’s going to happen when the content is local? Are you going to have to run a webserver with HTTPS in order to do web development now?

    There are times when you do require not using encryption. You’ve missed a large part of the point of HTTPS, which in this case is the part that seems to apply. HTTPS provides an authentication mechanism and yes, I can see how this is useful to protect people from malicious code. Now a cracker will have to go out and spend £5.99 on an SSL certificate for his malware to work. But the use of enforced encryption has negative consequences in some cases and browsers should be flexible in this regard.

    In the case of amateur radio, the use of encryption is for the most part forbidden by the license conditions in every country I know of (there are exceptions, for example when supporting a service where personal information is involved). Is Mozilla saying that because you’ve decided to jump on a bandwagon, we’re going to have to go and find another browser?

    In the case of network hardware (and I’m guessing other hardware) the web interfaces can be quite dreadful and often will have poor SSL implementations. I’ve already had problems with being able to access switches to reconfigure them. Currently I just firewall these off and make sure the interfaces are only available from select machines. Am I now going to have to set aside another machine to run an older version of Firefox to manage these switches too?

    In the case of Internet engineering, especially in the development of these new protocols, it can be easier to see how things are working, performing packet captures, etc. when encryption is not in use. Mandatory SSL would mean extra steps in debugging experiments and this extra work could be avoided. (Of course, I’m aware that testing with the encryption is also necessary, but one of the advantages of an open source project is that you can take a white box approach).

    I agree that for the most part encryption is a good thing, and that most service providers should have mandatory SSL to protect connections to their services, but Firefox is running on MY computer. That Firefox should artificially limit what I can and cannot do, based on no technical limitation, seems a ridiculous step for an open source project to be taking. There are times when communications are deliberately not secure, when there is no way to make them secure, or when they have been secured through other means.

  12. Zach wrote on

    I like this, prohibiting login forms from being submitted in a non-secure manner would be a great first step. And prohibiting HTTP POST requests on non-HTTPS altogether might be a good step too. Consider adding an in-browser banner above every webpage, letting the user know this page’s contents may have been altered in-transit, and that nothing on it can be authenticated.

    1. Kise wrote on

      Grats, you just broke more than half of the web.

      1. RandomHacker wrote on

        No; half the web was already broken, it was just failing silently while it passed our credentials to passive adversaries. Mozilla is making it fail loudly, and good on them. We’re not going to gain an inch of security if we’re crushed under the weight of supporting every bad idea anyone has had for the past twenty years.

  13. Phil Rosenthal wrote on

    This is a very bad idea.

    Encryption carries large computational costs, and reduces performance by breaking sendfile on servers.

    There are many places where Encryption is valuable (eg: websites that handle private information), and there are also many places where Encryption is completely wasteful (eg: video streaming websites).

    Datacenters are already huge consumers of power, and will necessarily increase this power consumption for all of this unnecessary encryption.

    I sincerely hope that no other browser follows suit, and Mozilla realizes how bad of an idea this is.

    1. Jipp wrote on

      Sorry, but encryption is *not* computationally expensive.

      “In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.”

      In the same series:

      1. J.R. wrote on

        But it *does* break sendfile.

      2. J.R. wrote on

        Also, your Gmail counter-example is silly. An email webapp is a few KiB of HTML and a few dozen small text files. It may add up to a lot with millions of users, but it’s still nothing at all compared to streaming video. Video streaming sites consume a high double-digit percentage of global bandwidth.

        1. alex wrote on

          YouTube delivers its video streams over HTTPS (at least the stuff in the Firefox Network Monitor looks like it does). So it doesn’t seem to be impossible to do video streaming via HTTPS.

          1. Phil Rosenthal wrote on

            Google is also a multi-billion dollar company with enough cash available to deploy the 2x servers required for encrypting video streams.

            See this document:

            Even after all of this development, Netflix was unable to achieve better than 50% of HTTP performance when using HTTPS.

    2. passcod wrote on

      It has been known for years that HTTPS is not overly significant in terms of computational cost on servers; estimates run at about 1% of server load on average. In any case, the largest performance penalty for HTTPS by far is the handshake, because it adds round trips on initial connection, and the network is orders of magnitude slower than your CPU, so that is where the hit occurs. This can be mitigated by various things, including keep-alive, which is the default in HTTP/1.1.

      Encryption defeats not only purposeful attackers (so is useful for “private data”), but also many forms of censorship (and so is useful for just about anything else, including video sharing websites).

      I’m not quite sure what you mean by “Encryption breaks sendfile”.

      As for “I sincerely hope that no other browser follows suit”: actually, it was Chrome that started something like this, or at least it was Chrome that first shipped penalties for non-secure websites (only visual ones at this point, AFAIK, but probably getting more stringent as time goes on). So really, it’s Firefox that’s following suit.
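The keep-alive point above can be seen with a short sketch using only the Python standard library (the throwaway local server and all names here are illustrative): an HTTP/1.1 client reuses a single TCP connection across requests, so connection setup — and, over TLS, the handshake — is paid once rather than per request.

```python
import http.client
import http.server
import threading

class Handler(http.server.SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 means keep-alive by default
    def log_message(self, *args):  # keep the example quiet
        pass

# throwaway local server on a random free port
srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", srv.server_port)
conn.request("GET", "/")
conn.getresponse().read()
first_socket = conn.sock            # remember the underlying TCP socket

conn.request("GET", "/")
conn.getresponse().read()
reused = conn.sock is first_socket  # same socket: no new connection setup

conn.close()
```

If `reused` is true, the second request rode the same TCP connection; under HTTPS the same mechanism amortizes the TLS handshake as well.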

      1. J.R. wrote on

        Who cares about “on average” if you break entire classes of use cases (e.g. streaming video). Talking about averages is cute, but it doesn’t tell the full story.

      2. Mildred wrote on

        > I’m not quite sure what you mean by “Encryption breaks sendfile”.

        This can also be described as sending the file over the TCP socket with no extra copies.

        Generally, you use the read(2) system call to read from the file into a local buffer, then use write(2) to write the buffer to the TCP socket. That extra copy is unnecessary and can be removed, either with sendfile(2), mmap(2), or splice(2). The logic is the same in each case and involves no extra copy to a userspace buffer: the file is read directly from disk and sent to the network.

        With encryption, you can’t send the file straight to the network and it needs to be processed by the encryption layer. When the file is static and public, this is purely a waste of resources.

        Preventing modification of the resource can be done more efficiently by generating a digest hash of the content and signing it with a private key; the signature can then be reused by the server.

        Note that if there is a reverse proxy in the pipeline, the extra copy already happens on the reverse proxy server, so no adverse effect from an extra copy should be noticed there when switching from plain HTTP to HTTPS. For small servers without a proxy, however, that is not the case.

        To sum up: this is great for server farms and big companies, and great for authenticated traffic, but it isn’t great for unauthenticated, untracked traffic or for small servers. Add to that the fact that I’ll never get a certificate from a CA, because there is no CA out there that I trust.
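The zero-copy path Mildred describes can be sketched from Python, whose `os.sendfile` wraps the same syscall (a Linux system is assumed here; the Unix socket pair is a stand-in for an accepted TCP connection):

```python
import os
import socket
import tempfile

# a small "static file" to serve
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, zero-copy world\n")
    path = f.name

# a connected socket pair stands in for a client TCP connection
server_side, client_side = socket.socketpair()

with open(path, "rb") as src:
    size = os.fstat(src.fileno()).st_size
    # the kernel copies file -> socket directly; no userspace buffer involved
    sent = os.sendfile(server_side.fileno(), src.fileno(), 0, size)

received = client_side.recv(1024)
server_side.close()
client_side.close()
os.unlink(path)
```

With TLS this direct path is normally unavailable, since every byte must pass through the encryption layer in userspace first; kernel TLS implementations (Linux kTLS, and FreeBSD’s, which Netflix’s work targets) aim to restore sendfile for encrypted connections.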

    3. Nick Lewycky wrote on

      “There are many places where Encryption is valuable (eg: websites that handle private information), and there are also many places where Encryption is completely wasteful (eg: video streaming websites).”

      See this article “You Can Get Hacked Just By Watching This Cat video on YouTube”:

      The problem exists regardless of what the website is: all content served over non-SSL can be replaced by a man in the middle and is therefore an attack vector. This applies to every HTTP GET request.

  14. James T James wrote on

    The Thirtieth day of April in the year of Our Lord Two-Thousand-And-Fifteen will go down in history as the day that the World Wide Web died.

    1. Gary L. L. wrote on

      No, just the day Firefox died. The rest of the way, I mean. It has been dying since Australis was forced on us, even though very few liked it.

  15. Adam Jacob Muller wrote on

    While this may be a well-intentioned move, there are far more pressing security issues. What’s the point of forcing SSL on browsers if you’re not going to be careful about *what* CAs you give carte blanche to sign certificates (see:

    Mozilla (and other browser vendors) also need to seriously consider the computational cost that will come with forcing SSL encryption and, with it, the power and eventual environmental impact. I’m entirely serious when I say that this will have a measurable impact on the amount of CPU time required to serve SSL, which translates directly into power usage and environmental impact.

    As other commentators have pointed out, this is going to break a plethora of things that are considered core to the internet. Want to run a personal website for your own consumption? You have to pay for a certificate. Want to run an anonymous site? No way: you have to verify your identity to get a certificate. Want to debug issues using common tools like tcpdump and packet captures? No way. Running a proxy for a security-conscious institution or corporation that requires packet inspection (to guard against data leakage) and thus must block HTTPS? Great, now your employees can’t even check the weather or traffic — excellent for morale.

    Even with HTTPS everywhere, people will still be able to see what sites you are browsing (either by sniffing your DNS requests or by sniffing the SNI on your “secure” HTTPS requests), even if they can’t see the actual content. If content providers deemed whatever you’re doing security- or privacy-sensitive, they already have the ability to decide to secure that information over HTTPS — or not.

    There are also far better security measures that Firefox can take to ensure that site operators have the control and freedom to make their sites secure for everyone, for example HTTPS key pinning.

    This just seems to me to be another case of trying to impose short-sighted goals on everyone (in a “father knows best” attitude) for very limited gain, in a way that will be highly detrimental to the internet as a whole.

    1. alex wrote on

      Most of the things you claim this breaks aren’t really broken: if you “run a personal website for your own consumption” or “want to debug issues”, it’s trivial to generate a self-signed certificate and import it (temporarily, if needed) into your browser’s trust store.

      1. David Cantrell wrote on

        No, it’s not trivial. It requires arsing about on the command line, and then arsing about in obscure corners of your browser. And don’t forget the obscure corners of your phone’s browser as well, if you want to use it from there.

        This makes it infeasible for normal people.
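For what it’s worth, the generation half really is one command; it is the import half that varies by browser and OS. A sketch with openssl (the file names, key size, and 365-day lifetime are arbitrary choices, not requirements):

```shell
# create a private key and a matching self-signed certificate in one step
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.crt \
  -days 365 -subj "/CN=localhost"
```

The resulting `localhost.crt` is what you would import into the browser’s trust store; `localhost.key` stays with the server.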

  16. Phil Rosenthal wrote on

    Note how Netflix admits that even after all improvements, there is still a 50% reduction in performance after introducing SSL. All so that they can re-encrypt the same videos over and over.

    Anyone who claims that there is no computational impact for high-bandwidth static file serving is just flat-out wrong.

  17. Ben Cooke wrote on

    What’s the plan to ensure that governments can’t compel CAs via secret courts to issue fraudulent certificates so they can execute MITM attacks?

    What’s the plan to fix the CA system so that one malicious/incompetent CA can’t compromise the whole system for everyone else?

    This change seems premature. There is no value in pushing people towards a system with such obvious flaws in it.

    1. J.R. wrote on

      This is like saying: “putting a lock on my door is all well and good, but what’s to stop someone prying it open with a crowbar?”.

      1. Ben Cooke wrote on

        I disagree. SSL is already deployed widely enough to protect against the overt threats to my security: SSL is used to collect my credit card number and other such instruments.

        Privacy rather than security is the motivation for blanket SSL, but the government is the main collector and abuser of the cleartext metadata in question, and the biggest threat to those for whom privacy is a significant issue.

        Locks don’t afford privacy.

  18. brian wrote on

    > and the US Government calling for universal use of encryption

    Yeah, I wonder why… uhmmm, maybe because X.509 is COMPLETELY BROKEN?! Are you serious, Mozilla?? If you’re seriously going to do this, then at least remove those self-signed cert warnings.

  19. 78 wrote on

    bad idea. some places in the world need to have locks, sure, but others specifically need to not have locks. diversity is essential for survival.

    1. hugo wrote on

      The lock metaphor doesn’t really work here. The reason certain places “specifically need to not have locks” is that they need to be open to a large or unspecified audience, and access to them needs to be as fast as possible. HTTPS impedes neither.

      1. TimC wrote on

        You’ve obviously never used HTTPS before.

  20. Nate wrote on

    I hope this doesn’t just serve to reduce the usage of Firefox further.
    If other browsers don’t implement these same changes, then it will just be a case of it appearing that Firefox causes problems for users that other browsers do not.
    With Firefox usage dropping regularly, I’m not sure it is really in the position of forcing any sort of changes on anyone.
