What's the end game here? I agree with the dissent. Why not make it 30 seconds?
Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours? I am willing to bet money this threshold will never be crossed.
This feels like much more of an ideological mission than a practical one, unless I've missed some monetary/power advantage to forcing everyone to play musical chairs with their entire infra once a month...
mcpherrinm 1 hours ago [-]
I'm on the team at Let's Encrypt that runs our CA, and would say I've spent a lot of time thinking about the tradeoffs here.
Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will now have to reduce that to under 47 days in the future.
Shorter lifetimes have several advantages:
1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.
2. Reduced risk for certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain had it, or an attack of any sort that led to a certificate being issued that wasn't desired.
3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.
Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.
Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: One big industry problem that's been going on for the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and really push toward automation.
0xbadcafebee 1 hours ago [-]
So it's being pushed because it'll be easier for a few big players in industry. Everybody else suffers.
ignoramous 32 minutes ago [-]
Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.
> easier for a few big players in industry
Not necessarily. OP mentions that more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately for you & me, as customers of cert authorities, 47 days is now the agreed cut-off (not 42).
grey-area 5 minutes ago [-]
Since you’ve thought about it a lot, in an ideal world, should CAs exist at all?
delfinom 52 minutes ago [-]
Realistically, how often are domains traded and suddenly put in legitimate use (that isn't some domain parking scam) that (1) and (2) are actual arguments? Lol
zamadatix 3 minutes ago [-]
Domain trading (regardless if the previous use was legitimate or not) is only one example, not the sole driving argument for why the revocation system is in place or isn't perfectly handled.
Lammy 48 minutes ago [-]
> but customers aren't able to respond on an appropriate timeline
Sounds like your concept of the customer/provider relationship is inverted here.
crote 36 minutes ago [-]
No. The customer is violating their contract.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
woodruffw 2 hours ago [-]
The "end game" is mentioned explicitly in the article:
> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.
(I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other periodic operational task: critical processes must be continually tested and evaluated for correctness.)
sitkack 31 minutes ago [-]
Don't lower cert times also get people to trust certs that were created just for their session to MITM them?
That is the next step in nation state tapping of the internet.
ezfe 5 minutes ago [-]
lol no? Shorter-lived certs still chain up to the root certificates that are already trusted. It is not a noticeable thing when browsing the web as a user.
A MITM cert would need to be manually trusted, which is a completely different thing.
notatoad 1 hours ago [-]
>unless I've missed some monetary/power advantage
the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyway reduces the harm that any one CA can do by offering bad certs.
it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.
fs111 2 hours ago [-]
Load on the underlying infrastructure is a concern. The signing keys are all in HSMs and don't scale infinitely.
bob1029 13 minutes ago [-]
How does cycling out certificates more frequently reduce the load on HSMs?
timewizard 2 hours ago [-]
If the service becomes unavailable for 48 straight hours then every certificate expires and nothing works. You probably want a little more room for catastrophic infrastructure problems.
pixl97 1 days ago [-]
Heh, working with a number of large companies I've seen most of them moving to internally signed certs on everything because of ever-shortening expiration times. They'll have public certs on edge devices/load balancers, but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.
plorkyeran 1 days ago [-]
This is a desired outcome. The WebPKI ecosystem would really like it if everyone stopped depending on them for internal things because it's actually a pretty different set of requirements. Long-lived certs with an internal CA makes a lot of sense and is often more secure than using a public CA.
tetha 2 hours ago [-]
Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.
It's been a huge pain, as we have encountered a ton of bugs and missing features in libraries and applications when reloading certs like this. And we have some really ugly workarounds in place, because some applications place "reload a consul client" on the same level as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client throwing a few parameters at a standard http client. But oh well.
But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
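Roughly, the kind of hot-reload being asked of applications here can be sketched with Python's ssl module; the file names and the external renewal job are assumptions, and real code would cache the context instead of rebuilding it on every handshake:

    # Minimal sketch of per-handshake certificate reloading (assumed paths, assumed renewal job).
    import ssl

    CERT_FILE, KEY_FILE = "server.crt", "server.key"  # rewritten on disk by some renewal process

    def _sni_callback(ssl_obj, server_name, base_ctx):
        fresh = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        fresh.load_cert_chain(CERT_FILE, KEY_FILE)  # picks up the renewed files
        ssl_obj.context = fresh                     # this handshake uses the fresh cert

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(CERT_FILE, KEY_FILE)    # initial cert
    context.sni_callback = _sni_callback            # later handshakes pick up renewals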
donnachangstein 1 hours ago [-]
> Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.
Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?
kam 48 minutes ago [-]
At least that sounds like it would be a more interesting story than the one where the person who quit a year ago didn't document all the places they manually installed the 2-year certificate.
tetha 49 minutes ago [-]
I will. We've been betting Postgres connectivity for a few hundred applications on this over the past three years. If this fucks up, it'll be known without me.
rlpb 8 hours ago [-]
Browsers aren't designed for internal use though. They insist on HTTPS for various things that are intranet only, such as some browser APIs, PWAs, etc.
akerl_ 7 hours ago [-]
As is already described by the comment thread we're replying in, "internal use" and "HTTPS" are very compatible. Corporations can run an internal CA, sign whatever internal certs they want, and trust that CA on their devices.
franga2000 3 hours ago [-]
You use the terms "internal use" and "corporations" like they're interchangeable, but that's definitely not the case. Lots of small businesses, other organizations, or even individuals want to have some internal services, and having to "set up" a CA and add the certs to all client devices just to access some app on the local network is absurd!
JimBlackwood 2 hours ago [-]
I don't think it's absurd, and personally it feels easier to set up an internal CA than some of the alternatives.
In the hackiest of setups, it's a few commands to generate a CA and issue a wildcard certificate for everything. Then a single line in the bootstrap script or documentation for new devices to trust the CA and you're done.
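For illustration, here's a sketch of those few commands using Python's cryptography package; the names, lifetimes, and key sizes are placeholders, not a hardened setup:

    # Toy internal CA: one self-signed root plus one wildcard leaf (placeholder names/lifetimes).
    from datetime import datetime, timedelta, timezone
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    now = datetime.now(timezone.utc)

    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Internal CA")])
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name).issuer_name(ca_name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    wildcard = "*.internal.example.com"
    leaf_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, wildcard)]))
        .issuer_name(ca_name)
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now).not_valid_after(now + timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(wildcard)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )

    # ca.pem is what new devices trust; leaf cert/key (written the same way) go on the servers.
    open("ca.pem", "wb").write(ca_cert.public_bytes(serialization.Encoding.PEM))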
Going a few steps further, setting up something like HashiCorp Vault is not hard, and regardless of org size you need to do secret distribution somehow.
lucb1e 2 hours ago [-]
> it's a few commands to generate a CA
My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader
Myself, I'm employed at a small business and we're all as tech savvy as it gets. It took me several days to set it up on secure hardware (smartcard, figuring out compatibility and broken documentation), making sure I understand what all the options do and that it's secure for years to come and whatnot, working out what the procedure for issuing should be, etc. Eventually got it done, handed it over to the higher-up who gets to issue certs, distribute the CA cert to everyone... it's never used. We have a wiki page with TLS and SSH fingerprints
JimBlackwood 1 hours ago [-]
> My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader
This is fair. I assumed all small businesses would be tech startups, haha.
Retric 1 hours ago [-]
The vast majority of companies operate just fine without understanding anything about building codes or vehicle repair etc.
Paying experts (Ed: to set up internal infrastructure) is a perfectly viable option, so the only real question is the amount of effort involved, not whether random people know how to do something.
lucb1e 1 hours ago [-]
Paying an expert to come set up a local CA seems rather silly when you'd normally outsource operating one to the people who professionally run a CA
Retric 32 minutes ago [-]
You’d only need internal certificates if someone had set up internal infrastructure. Expecting that person to do a good job means having working certificates be they internal or external.
nilslindemann 58 minutes ago [-]
> Paying experts is a perfectly viable option
Congrats for securing your job by selling the free internet and your soul.
Retric 28 minutes ago [-]
I’m not going to be doing this, but I care about knowledge being free not labor or infrastructure.
If someone doesn’t want to learn then nobody needs to help them for free.
disiplus 2 hours ago [-]
We have this. It's not trivial for a small team, and you have to deal with stuff like conda envs coming with their own set of certs, so you have to take care of that. It's better than the alternative of fighting with browsers, but it's still not without extra complexity.
JimBlackwood 1 hours ago [-]
For sure, nothing is without extra complexity. But, to me, it feels like additional complexity for whoever does DevOps (where I think it should be) and takes away complexity from all other users.
msie 1 hours ago [-]
Wow, amazing how out of touch this is.
JimBlackwood 1 hours ago [-]
Can you explain? I don't see why
Henchman21 39 minutes ago [-]
You seem to think every business is a tech startup and is staffed with competent engineers.
Perhaps spend some time outside your bubble? I’ve read many of your comments and you just do seem to be caught in your own little world. “Out of touch” is apt and you should probably reflect on that at length.
acedTrex 3 hours ago [-]
Sounds like there is a market for a browser that is intranet only and doesn't do various checks.
jillyboel 3 hours ago [-]
Good luck getting that distributed everywhere including the iOS app store and random samsung TVs that stopped receiving updates a decade ago.
Not to mention the massive undertaking that even just maintaining a multi-platform chromium fork is.
JimBlackwood 2 hours ago [-]
Why would you want this? Then on production, you'll run into issues you did not encounter on staging because you skipped various checks.
stefan_ 2 hours ago [-]
Do I add the root CA of my router manufacturer so I can visit its web interface on my internal network without having half the page functionality broken because of overbearing browser manufacturers who operate the "web PKI" as a cartel? This nowadays includes things such as basic file downloads.
jillyboel 3 hours ago [-]
Getting my parents to add a CA to their android, iphone, windows laptop and macbook just so they can use my self hosted nextcloud sounds like an absolute nightmare.
The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).
Not everything is a massive enterprise with an army of IT support personnel.
mysteria 30 minutes ago [-]
I actually do this for my homelab setup. Everyone basically gets the local CA installed for internal services as well as a client cert for RADIUS EAP-TLS and VPN authentication. Different devices are automatically routed to the correct VLAN and the initial onboarding doesn't take that long if you're used to the setup. Guests are issued a MSCHAP username and password for simplicity's sake.
For internal web services I could use just Let's Encrypt but I need to deploy the client certs anyways for network access and I might as well just use my internal cert for everything.
crote 27 minutes ago [-]
Rolling out LetsEncrypt for a self-hosted Nextcloud instance is absolutely trivial. There are many reasons corporations might want to roll their own internal CA, but simple homelab scenarios like these couldn't be further from them.
richardwhiuk 3 hours ago [-]
Why are your parents on a corporation's internal network?
jillyboel 3 hours ago [-]
What corporation are you talking about? Have you never heard of someone self hosting software for their family and friends? You know, an intranet.
smw 2 hours ago [-]
Just buy a domain and use DNS verification to get real certs for whatever internal addresses you want to serve. Caddy will trivially go get certs for you with one line of config.
Or cheat and use tailscale to do the whole thing.
ClumsyPilot 54 minutes ago [-]
> Corporations can run an internal CA
Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.
rlpb 5 hours ago [-]
Indeed they are compatible. However, HTTPS is often unnecessary, particularly in a smaller organisation, but browsers mandate significant unnecessary complexity there. In that sense, browsers are not suited to this use in those scenarios.
freeopinion 3 hours ago [-]
If only browsers could understand something besides HTTPS. Somebody should invent something called HTTP that is like HTTPS without certificates.
recursive 3 hours ago [-]
Cool. And when they invent it, it should have browser parity with respect to which API features and capabilities are available, so that we don't need to use HTTPS just so things like `getUserMedia` work.
There’s enough APIs limited to secure contexts that many internal apps become unfeasible.
SoftTalker 3 hours ago [-]
Modern browsers default to trying https first.
tedivm 4 hours ago [-]
I really don't see many scenarios where HTTPS isn't needed for at least some internal services.
donnachangstein 3 hours ago [-]
Then, I'm afraid, you work in a bubble.
A static page that hosts documentation on an internal network does not need encryption.
The added overhead of certificate maintenance (and investigating when it does and will break) is simply not worth the added cost.
Of course the workaround most shops do nowadays is just hide the HTTP servers behind a load balancer doing SSL termination with a wildcard cert. An added layer of complexity (and now single point of failure) just to appease the WebPKI crybabies.
imroot 3 hours ago [-]
What overhead?
Just about every web server these days supports ACME -- some natively, some via scripts, and you can set up your own internal CA using something like step-ca that speaks ACME if you don't want your certs going out to the transparency log.
The last few companies I've worked at had no http behind the scenes -- everything, including service-to-service communications was handled via https. It's a hard requirement for just about everything financial, healthcare, and sensitive these days.
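For service-to-service calls like that, pointing the client at the internal root instead of the system trust store is usually a one-liner; a sketch with Python's requests library, with made-up URLs and paths:

    # Hypothetical internal call verified against an internal CA, with optional mutual TLS.
    import requests

    resp = requests.get(
        "https://billing.internal.example.com/health",
        verify="/etc/pki/internal-ca.pem",                    # trust the internal root
        cert=("/etc/pki/client.crt", "/etc/pki/client.key"),  # client cert, if the service requires mTLS
        timeout=5,
    )
    resp.raise_for_status()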
donnachangstein 3 hours ago [-]
> What overhead?
[proceeds to describe a bunch of new infrastructure and automation you need to setup and monitor]
So when ACME breaks - which it will, because it's not foolproof - the server securely hosting the cafeteria menus is now inaccessible, instead of being susceptible to interception or modification in transit. Because the guy that has owned your core switches is most concerned that everyone will be eating taco salad every day.
tedivm 12 minutes ago [-]
I'm afraid you didn't read my response. I explicitly said I can't see a case where it isn't needed for some services. I never said it was required for every service. Once you've got it setup for one thing it's pretty easy to set it up everywhere (unless you're manually deploying, which is an obvious problem).
brendoelfrendo 3 hours ago [-]
Sure it does! You may not need confidentiality, but what about integrity?
donnachangstein 3 hours ago [-]
It's a very myopic take.
Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.
Just because something is possible in theory doesn't make it likely or worth the time invested.
You can put 8 locks on the door to your house but most people suffice with just one.
Someone could remove a piece of mail from your unlocked rural mailbox, modify it and put it back. Do you trust the mail carrier as much as the security of your internal network?
But it's not really a concern worth investing resources into for most.
growse 2 hours ago [-]
> Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.
Ah, the "both me and my attackers agree on what's important" fallacy.
What if they modify the man page response to include drive-by malware?
therealpygon 5 hours ago [-]
And it is even more trivial in a small organization to install a Trusted Root for internally signed certificates on their handful of machines. Laziness isn’t a browser issue.
rlpb 3 hours ago [-]
How is that supposed to work for an IoT device that wants to work out of the box using one of these HTTPS-only browser APIs?
metanonsense 2 hours ago [-]
I am not saying I'd do this, but in theory you could deploy a single reverse proxy in front of your HTTP-only devices and restrict traffic accordingly.
Spooky23 1 hours ago [-]
Desired by who?
There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value from doing this. Building out or expanding my own PKI for my company or setting up the infrastructure to integrate with Digicert or whomever gets me zero security and business value, just cost and toil.
Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.
crote 30 minutes ago [-]
CAs fucking up every once in a while is inevitable. It is impossible to write guaranteed bug-free software or train guaranteed flawless humans.
The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.
christina97 3 hours ago [-]
What do you mean “WebPKI … would like”. The browser vendors want one thing (secure, ubiquitous, etc), the CAs want a very different thing (expensive, confusing, etc)…
ozim 15 hours ago [-]
Problem is browsers will most likely follow the enforcement of short certificates so internal sites will be affected as well.
Non-browser things usually don’t care even if a cert is expired or untrusted.
So I expect people still to use WebPKI for internal sites.
akerl_ 7 hours ago [-]
The browser policies are set by the same entities doing the CAB voting, and basically every prior change around WebPKI has only been enforced by browsers for CAs in the browser root trust stores. Which is exactly what's defined in this CAB vote as well.
Why would browsers "most likely" enforce this change for internal CAs as well?
nickf 12 hours ago [-]
'Most likely' - with the exception of Apple enforcing 825-day maximum for private/internal CAs, this change isn't going to affect those internal certificates.
ryao 15 hours ago [-]
Why would they? The old certificates will expire and the new ones will have short lifespans. Web browsers do not need to do anything.
That said, it would be really nice if they supported DANE so that websites do not need CAs.
rsstack 1 days ago [-]
> I've seen most of them moving to internally signed certs
Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.
pavon 1 days ago [-]
Yes, but it is a lot more work to run an internal CA and distribute that CA cert to all the corporate clients. In the past getting a public wildcard cert was the path of least resistance for internal sites - no network access needed, and you aren't leaking much info into the public log. That is changing now, and like you said it is probably a change for the better.
pkaye 1 days ago [-]
What about something like step-ca? I got the free version working easily on my home network.
Not everything that's easy to do on a home network is easy to do on a corporate network. The biggest problem with corporate CAs is how to emit new certificates for a new device in a secure way, a problem which simply doesn't exist on a home network where you have one or at most a handful of people needing new certs to be emitted.
bravetraveler 21 hours ago [-]
> A lot more work
'ipa-client-install' for those so motivated. Certificates are literally one among the many things that are part of your domain services.
If you're at the scale past what IPA/your domain can manage, well, c'est la vie.
Spivak 13 hours ago [-]
I think you're being generous if you think the average "cloud native" company is joining their servers to a domain at all. They've certainly fallen out of fashion in favor of the servers being dumb and user access being mediated by an outside system.
bravetraveler 12 hours ago [-]
Why not? The actual clouds do.
I think folks are being facetious wanting more for 'free'. The solutions have been available for literal decades, I was deliberate in my choice.
Not the average, certainly the majority where I've worked. There are at least two well-known Clouds that enroll their hypervisors to a domain. I'll let you guess which.
My point is, the difficulty is chosen... and 'No choice is a choice'. I don't care which, that's not my concern. The domain is one of those external things you can choose. Not just some VC toy. I won't stop you.
The devices are already managed; you've deployed them to your fleet.
No need to be so generous to their feigned incompetence. Want an internal CA? Managing that's the price. Good news: they buy!
Don't complain to me about 'your' choices. Self-selected problem if I've heard one.
Aside from all of this, if your org is being hung up on enrollment... I'm not sure you're ready for key management. Or the other work being a CA actually requires.
Yes, it's more work. Such is life and adding requirements. Trends - again, for decades - show organizations are generally able to manage with something.
Literal Clouds do this, why can't 'you'?
Spivak 10 hours ago [-]
Adding machines to a domain is far far more common on bare-metal deployments which is why I said "cloud native." Adding a bunch of cloud VMs to a domain is not very common in my experience because they're designed to be ephemeral and thrown away and IPA being stateful isn't about that.
You're managing your machine deployments with something, so of course you just use that to include your cert, which isn't particularly hard, but there's a long tail of annoying work when dealing with containers and VMs you aren't building yourself, like k8s node pools. It can be done, but it's usually less effort to just get public certs for everything.
bravetraveler 10 hours ago [-]
To be honest, with "cloud-init" and the ability for SSSD to send record updates, I could make a worthwhile cloudy deployment
To your point, people don't, but it's a perfectly viable path.
Containers/kubernetes, that's pipeline city, baby!
maccard 2 hours ago [-]
I’ve unfortunately seen the opposite - internal apps are now back to being deployed over VPN and HTTP
lokar 4 hours ago [-]
I’ve always felt a major benefit of an internal CA is making it easy to have very short TTLs.
SoftTalker 3 hours ago [-]
Or very long ones. I often generate 10 year certs because then I don't have to worry about renewing them for the lifetime of the hardware.
formerly_proven 3 hours ago [-]
I'm surprised there is no authorization-certificate-based challenge type for ACME yet. That would make ACME practical to use in microsegmented networks.
I'd set that up the second it becomes available if it were a standard protocol.
Just went through setting up internal certs on my switches -- it was a chore to say the least!
With a Cert Template on our internal CA (windows), at least we can automate things well enough!
formerly_proven 2 hours ago [-]
Yeah it's almost weird it doesn't seem to exist, at least publicly. My megacorp created their own protocol for this purpose (though it might actually predate ACME, I'm not sure), and a bunch of in-house people and suppliers created the necessary middlewares to integrate it into stuff like cert-manager and such (basically everything that needs a TLS certificate and is deployed more than thrice). I imagine many larger companies have very similar things, with the only material difference being different organizational OIDs for the proprietary extension fields (I found it quite cute when I learned that the corp created a very neat subtree beneath its organization OID).
Pxtl 8 minutes ago [-]
At this point I wish we could just get all our clients to say "self-signed is fine if you're connecting to a .LOCAL domain name". https is intrinsically useful over raw http, but the overhead of setting up centralized certs for non-public domains is just dumb.
Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.
jiggawatts 10 hours ago [-]
I just got a flashback to trying to automate the certificate issuance process for some ESRI ArcGIS product that used an RPC configuration API over HTTPS to change the certificate.
So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.
Fun times...
shlant 10 hours ago [-]
this is exactly what I do because mongo and TLS is enough of a headache. I am not dealing with rotating certificates regularly on top of that for endpoints not exposed to the internet.
SoftTalker 3 hours ago [-]
Yep letsencrypt is great for public-facing web servers but for stuff that isn't a web server or doesn't allow outside queries none of that "easy" automation works.
procaryote 1 hours ago [-]
The ACME DNS challenge works for things that aren't webservers.
For the other case perhaps renew the cert at a host allowed to do outside queries for the dns challenge and find some acceptable automated way to propagate an updated cert to the host that isn't allowed outside queries.
SoftTalker 39 minutes ago [-]
I don't have an API or any permission to add TXT records to my DNS. That's a support ticket and has about a 24-hour turnaround best case.
procaryote 56 seconds ago [-]
That's not great, sorry to hear
xienze 1 days ago [-]
> but internal services with have internal CA signed certs with long expire times because of the number of crappy apps that make using certs a pain in the ass.
Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
pixl97 1 days ago [-]
Unless they are web/tech companies they aren't doing that. Banks, finance, large manufacturing are all terminating at F5's and AVI's. I'm pretty sure those update certs just fine, but it's not really what I do these days so I don't have a direct answer.
tikkabhuna 2 hours ago [-]
F5s don't support ACME, which has been a pain for us.
xienze 1 days ago [-]
Sure. The point is, don't bother letting the apps themselves do TLS termination. Too much work that's better handled by something else.
hedora 21 hours ago [-]
Also, moving termination off the endpoint server makes it much easier for three letter agencies to intercept + log.
qmarchi 4 hours ago [-]
Most responsible orgs do TLS termination on the public side of a connection, but will still make a backend connection protected by TLS, just with a internal CA.
cryptonym 1 days ago [-]
You now have to build and self-host a complete CA/PKI.
Or request a certificate over the public internet for an internal service. Your hostname must be exposed to the web and will be publicly visible in Certificate Transparency logs.
JoshTriplett 3 hours ago [-]
> Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.
That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.
You could always ask for a wildcard for an internal subdomain and use that instead, so you will leak your internal domain but not individual hostnames.
pixl97 1 days ago [-]
I'm pretty sure every bank will auto fail wildcard certs these days, at least the ones I've worked with.
Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.
greatgib 1 days ago [-]
As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain. Only the big ones embedded in browsers will have the right to have their own CA certificates with whatever validity period they want...
And in term of security, I think that it is a double edged sword:
- everyone will be so used to certificates changing all the time, with no certificate pinning anymore, that the day China, a company, or whoever serves you a fake certificate, you will be less able to notice it
- Instead of closed, read-only systems that only have to connect outside once a year or less to update certificates, all machines around the world will now have to allow quasi-permanent connections to random certificate servers to keep the system updated all the time. If a DigiCert or Let's Encrypt server, or the "cert updating client", is ever rooted or has a security issue, most servers around the world could be compromised in a very, very short time.
As a side note, I'm totally laughing at the following explanation in the article:
47 days might seem like an arbitrary number, but it’s a simple cascade:
- 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room
So, 47 is not arbitrary, but 1 month + 1/2 month + 1 day are somehow not arbitrary values...
lolinder 3 hours ago [-]
> everyone will be so used to certificates changing all the time, and no certificate pinning anymore, so the day were China, a company or whoever serve you a fake certificate, you will be less able to notice it
I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority—the rest of us just rely on the automated systems to do a better job at security than we ever could.
At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.
gruez 1 days ago [-]
>As I said in another thread, basically that will kill any possibility to do your own CA for your own subdomain.
like, private CA? All of these restrictions are only applied for certificates issued under the webtrust program. Your private CA can still issue 100 year certificates.
greatgib 1 days ago [-]
Let's suppose that I'm a competitor of Google and Amazon, and I want to have my Public root CA for mydomain.com to offer my clients subdomains like s3.customer1.mydomain.com, s3.customer2.mydomain.com,...
tptacek 3 hours ago [-]
If you want to be a public root CA, so that every browser in the world needs to trust your keys, you can do all the lifting that the browsers are asking from public CAs.
gruez 1 days ago [-]
Why do you want this when there are wildcard certificates? That's how the hyperscalers do it as well. Amazon doesn't have a separate certificate for each s3 bucket, it's all under a wildcard certificate.
vlovich123 2 hours ago [-]
Amazon did this the absolute worst way - all customers share the same flat namespace for S3 buckets which limits the names available and also makes the bucket names discoverable. Did it a bit more sanely and securely at Cloudflare where it was namespaced to the customer account, but that required registering a wildcard certificate per customer if I recall correctly.
anacrolix 4 hours ago [-]
No. Chrome flat out rejects certificates that expire more than 13 months away, last time I tried.
lucb1e 1 hours ago [-]
> 47 [is?] arbitrary, but 1 month, + 1/2 month, + 1 day are not arbitrary values...
Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:
> [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous
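A toy version of the trick in that quote (not the linked numsgen code, just the idea): derive the "constant" from a seed and keep trying innocuous-looking seeds until one yields a value with the property you secretly want.

    import hashlib

    def constant_from_seed(seed: str, nbytes: int = 16) -> int:
        # "Nothing-up-my-sleeve" constant derived from a human-readable seed.
        return int.from_bytes(hashlib.sha256(seed.encode()).digest()[:nbytes], "big")

    # Bible verses, Shakespeare, dates... each comes with an innocent-sounding story.
    plausible_seeds = ["In the beginning", "To be, or not to be", "1776-07-04"]

    for seed in plausible_seeds:
        c = constant_from_seed(seed)
        if c % 1_000_000_000 == 0:   # stand-in for "this constant is backdoorable"
            print("innocuous-looking seed found:", seed, hex(c))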
Certificate pinning to public roots or CAs is bad. Do not do it. You have no control over the CA or roots, and in many cases neither does the CA - they may have to change based on what trust-store operators say.
Pinning to public CAs or roots or leaf certs, pseudo-pinning (not pinning to a key or cert specifically, but expecting some part of a certificate DN or extension to remain constant), and trust-store limiting are all bad, terrible, no-good practices that cause havoc whenever they are implemented.
precommunicator 16 hours ago [-]
> everyone will be so used to certificates changing all the time, and no certificate pinning anymore
Browser certificate pinning is deprecated since 2018. No current browsers support HPKP.
There are alternatives to pinning, DNS CAA records, monitoring CT logs.
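Both alternatives are easy to check from a script; for instance, with a hypothetical domain, dnspython for the CAA lookup, and crt.sh's public JSON endpoint for the CT side:

    import json, urllib.request
    import dns.resolver  # pip install dnspython

    # Which CAs has the domain authorized to issue for it?
    for rr in dns.resolver.resolve("example.com", "CAA"):
        print(rr.flags, rr.tag, rr.value)

    # What certificates have actually appeared in CT logs for it?
    with urllib.request.urlopen("https://crt.sh/?q=example.com&output=json") as resp:
        for entry in json.load(resp)[:5]:
            print(entry["not_before"], entry["issuer_name"], entry["name_value"])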
yjftsjthsd-h 1 days ago [-]
If you're in a position to pin certs, aren't you in a position to ignore normal CAs and just keep doing that?
ghusto 1 days ago [-]
I really wish encryption and identity weren't so tightly coupled in certificates. If I've issued a certificate, I _always_ care about encryption, but sometimes do not care about identity.
For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.
Pet peeve.
tptacek 1 days ago [-]
There's minimal security in an unauthenticated encrypted connection, because an attacker can just MITM it.
SoftTalker 3 hours ago [-]
Trust On First Use is the normal thing for these situations.
asmor 2 hours ago [-]
TOFU equates to "might as well never ask" for most users. Just like Windows UAC prompts.
superkuh 2 hours ago [-]
You're right most of the time. But there are two webs. And it's only in the latter (far more common) case that things like that matter.
There is the web as it always has been on http/1.1 that is a hyperlinked set of html documents hosted on a mishmash of random commercial and personal servers. Then there is modern http/2 http/3 CA TLS only web hosted as a service on some other website or cloud; mostly to do serious business and make money. The modern web's CA TLS-only ID scheme is required due to the complexity and risk of automatic javascript execution in browsers.
I wish we could have browsers that could support both use cases. But we can't because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (ie, self signed not feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of (well hidden by acme2 clients) complexity and overhead and centralization (everyone uses benevolent US based Lets Encrypt). This progressive lowering of the cert lifetimes is making the HTTP-only web even more fragile and hard to create lasting sites on. And that's sad.
TOFU works for the old web just great. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libs with flags that prevent TOFU and self-signed. You could host a http/1.1 self-signed and TOFU but everyone (except geeks) would be scared away or incapable of loading it.
So, TOFU works if you just want to do something like the "gemini" protocol, but instead of a new protocol just stick to original http and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as gemini for most people (ie, not very) except for two differences. 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.
steventhedev 1 days ago [-]
There is a security model where MITM is not viable - and separating that specific threat from that of passive eavesdropping is incredibly useful.
tptacek 1 days ago [-]
MITM scenarios are more common on the 2025 Internet than passive attacks are.
steventhedev 1 days ago [-]
MITM attacks are common, but noisy - BGP hijacks are literally public to the internet by their nature. I believe that insisting on coupling confidentiality to authenticity is counterproductive and prevents the development of more sophisticated security models and network design.
orev 1 days ago [-]
You don’t need to BGP hijack to perform a MITM attack. An HTTPS proxy can be easily and transparently installed at the Internet gateway. Many ISPs were doing this with HTTP to inject their own ads, and only the move to HTTPS put an end to it.
steventhedev 23 hours ago [-]
Yes. MITM attacks do happen in reality. But by their nature they require active participation, which for practical purposes means leaving some sort of trail. More importantly, by decoupling confidentiality from authenticity, you can easily prevent eavesdropping attacks at scale.
Which for some threat models is sufficiently good.
tptacek 23 hours ago [-]
This thread is dignifying a debate that was decisively resolved over 15 years ago. MITM is a superset of the eavesdropper adversary and is the threat model TLS is designed to resist.
It's worth pointing out that MITM is also the dominant practical threat on the Internet: you're far more likely to face a MITM attacker, even from a state-sponsored adversary, than you are a fiber tap. Obviously, TLS deals with both adversaries. But altering the security affordances of TLS to get a configuration of the protocol that only deals with the fiber tap is pretty silly.
pyuser583 15 hours ago [-]
As someone who had to set up monitoring software for my kids, I can tell you MITM are very real.
It’s how I know what my kids are up to.
It’s possible because I installed a trusted cert in their browsers, and added it to the listening program in their router.
Identity really is security.
steventhedev 15 hours ago [-]
TLS chose the threat model that includes MITM - there's no good reason that should ever change. All I'm arguing is that having a middle ground between http and https would prevent eavesdropping, and that investment elsewhere could have been used to mitigate the MITM attacks (to the benefit of all protocols, even those that don't offer confidentiality). Instead we got OpenSSL and the CA model with all its warts.
More importantly - this debate gets raised in every single HN post related to TLS or CAs. Answering with a "my threat model is better than yours", or claiming my threat model is incorrect, is even more silly than offering a configuration of TLS without authenticity. Maybe if we had invested more effort in 802.1X and IPsec then we would get those same guarantees that TLS offers, but for all traffic and for free everywhere, with no need for CA shenanigans or shortening lifetimes. Maybe in that alternative world we would be arguing about whether nonrepudiation is a valuable property or not.
simiones 8 hours ago [-]
It is literally impossible to securely talk to a different party over an insecure channel unless you have a shared key beforehand or use a trusted third-party. And since the physical medium is always inherently insecure, you will always need to trust a third party like a CA to have secure communications over the internet. This is not a limitation of some protocol, it's a fundamental law of nature/mathematics (though maybe we could imagine some secure physical transport based on entanglement effects in some future world?).
So no, IPSec couldn't have fixed the MITM issue without requiring a CA or some equivalent.
YetAnotherNick 3 hours ago [-]
The key could be shared in DNS records or could even literally be in the domain name like Tor. Although each approach has its pros and cons.
BobbyJo 1 days ago [-]
What does their commonality have to do with the use cases where they aren't viable?
panki27 1 days ago [-]
How is an attacker going to MITM an encrypted connection they don't have the keys for, without having rogue DNS or something similar, i.e. faking the actual target?
Ajedi32 1 days ago [-]
It's an unauthenticated encrypted connection, so there's no way for you to know whose keys you're using. The attacker can just tell you "Hi, I'm the server you're looking for. Here's my key." and your client will establish a nice secure, encrypted connection to the malicious attacker's computer. ;)
oconnor663 1 days ago [-]
They MITM the key exchange step at the beginning, and now they do have the keys. The thing that prevents this in TLS is the chain of signatures asserting identity.
2mlWQbCK 2 hours ago [-]
You can have TLS with TOFU, like in the Gemini protocol. At least then, in theory, the MITM has to happen the first time you connect to a site. There is also the possibility of out-of-band confirmation of some certificate's fingerprint if you want to be really sure that some Gemini server is the one you hope it is.
panki27 1 days ago [-]
You can not MITM a key that is being exchanged through Diffie-Hellman, or have I missed something big?
Ajedi32 1 days ago [-]
Yes, Mallory just pretends to be Alice to Bob and pretends to be Bob to Alice, and they both establish an encrypted connection to Mallory using Diffie-Hellman keys derived from his secrets instead of each other's. Mallory has keys for both of their separate connections at this point and can do whatever he wants. That's why TLS only uses Diffie-Hellman for perfect forward secrecy after Alice has already authenticated Bob. Even if the authentication key gets cracked later Mallory can't reach back into the past and MITM the connection retroactively, so the DH-derived session key remains protected.
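A compact way to see it: with no authentication, Mallory just runs two independent DH exchanges, one with each party. A sketch using X25519 from Python's cryptography package (party names are illustrative):

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    mallory_a = X25519PrivateKey.generate()   # Mallory's key pair facing Alice
    mallory_b = X25519PrivateKey.generate()   # Mallory's key pair facing Bob

    # Alice thinks she exchanged keys with Bob but actually got Mallory's key, and vice versa.
    alice_secret = alice.exchange(mallory_a.public_key())
    bob_secret = bob.exchange(mallory_b.public_key())

    # Mallory derives both session secrets and can decrypt/re-encrypt everything in transit.
    assert mallory_a.exchange(alice.public_key()) == alice_secret
    assert mallory_b.exchange(bob.public_key()) == bob_secret
    assert alice_secret != bob_secret   # Alice and Bob never actually share a key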
oconnor663 4 hours ago [-]
If we know each other's DH public key in advance, then you're totally right, DH is secure over an untrusted network. But if we don't know each other's public keys, we have to get them over that same network, and DH can't protect us if the network lies about our public keys. Solving this requires some notion of "identity", i.e. some way to verify that when I say "my public key is abc123" it's actually me who's saying that. That's why it's hard to have privacy without identity.
simiones 1 days ago [-]
Connections never start as encrypted, they always start as plain text. There are multiple ways of impersonating an IP even if you don't control DNS, especially if you are in the same local network.
gruez 1 days ago [-]
>Connections never start as encrypted, they always start as plain text
Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.
TCP SYN is not encrypted, and neither is Client Hello. Even with TCP cookies and TLS session resumption, the initial packet is still unencrypted, and can be intercepted.
However, ECH relies on a trusted 3rd party to provide the key of the server you are intending to talk to. So, it won't work if you have no way of authenticating the server beforehand the way GP was thinking about.
EE84M3i 20 hours ago [-]
Yes but this still depends on identity. It's not unauthenticated.
ekr____ 2 hours ago [-]
The situation is actually somewhat more complicated than this.
ECH gets the key from the DNS, and there's no real authentication for this data (DNSSEC is rare and is not checked by the browser). See S 10.2 [0] for why this is reasonable.
GP means unencrypted at the wire level. ClientHelloOuter is still unencrypted even with HSTS.
jiveturkey 15 hours ago [-]
Chrome has defaulted to https-first since April 2021 (v90).
Safari did some half measures starting in Safari 15 (don't know the year) and now fully defaults to https first.
Firefox 136 (2025) now does https first as well.
simiones 8 hours ago [-]
That is irrelevant. All TCP connections start as a TCP SYN, that can be trivially intercepted and MITMd by anyone. So, if you don't have an out-of-band reason to trust the server certificate (such as trust in the CA that PKI defines, or knowing the signature of the server certificate), you can never be sure your TLS session is secure, regardless of the level of encryption you're using.
gruturo 53 minutes ago [-]
After the TCP handshake, the very first payload will be the HTTPS negotiation - and even if you don't use encrypted client hello / encrypted SNI, you still can't spoof it because the certificate chain of trust will not be intact - unless you somehow control the CAs trusted by the browser.
With an intact trust chain, there is NO scenario where a 3rd party can see or modify what the client requests and receives beyond seeing the hostname being requested (and not even that if using ECH/ESNI)
Your "if you don't have an out-of-band reason to trust the server cert" is a fitting description of the global PKI infrastructure, can you explain why you see that as a problem? Apart from the fact that our OSes and browser ship out of the box with a scary long list of trusted CAs, some from fairly dodgy places?
Let's not forget that BEFORE that TCP handshake there's probably a DNS lookup where the FQDN of the request is leaked, if you don't have DoH.
jchw 1 days ago [-]
I mean, we do TOFU for SSH server keys* and nobody really seems to bat an eye at that. Today if you want "insecure but encrypted" on the web the main way to go is self-signed which is both more annoying and less secure than TOFU for the same kind of use case. Admittedly, this is a little less concerning of an issue thanks to ACME providers. (But still annoying, especially for local development and intranet.)
*I mistakenly wrote "certificate" here initially. Sorry.
tptacek 1 days ago [-]
SSH TOFU is also deeply problematic, which is why cattle fleet operators tend to use certificates and not piecewise SSH keys.
jchw 1 days ago [-]
I've made some critical mistakes in my argument here. I am definitely not referring to using SSH TOFU in a fleet. I'm talking about using SSH TOFU with long-lived machines, like your own personal computers, or individual long-running servers.
Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
To be clear, there are a lot of obvious security problems with this:
- It relies on me actually checking the fingerprint.
- SSH keys are valid and trusted indefinitely, so it has to be rotated manually.
- The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.
This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.
As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.
That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.
tptacek 1 days ago [-]
I don't understand any of this. If you want TOFU for TLS, just use self-signed certificates. That makes sense for your own internal stuff. For good reason, the browser vendors aren't going to let you do it for public resources, but that doesn't matter for your use case.
jchw 1 days ago [-]
Self-signed certificates have a terrible UX and worse security; browsers won't remember the trusted certificate so you'd have to verify it each time if you wanted to verify it.
In practice, this means that it's way easier to just use unencrypted HTTP, which is strictly worse in every way. I think that is suboptimal.
tptacek 1 days ago [-]
Just add the self-signed certificate. It's literally a TOFU system.
jchw 24 hours ago [-]
But again, you then get (much) worse UX than plaintext HTTP, it won't even remember the certificate. The thing that makes TOFU work is that you at least only have to verify the certificate once. If you use a self-signed certificate, you have to allow it every session.
A self-signed certificate has the benefit of being treated as a secure origin, but that's it. Sometimes you don't even care about that and just want the encryption. That's pretty much where this argument all comes from.
tptacek 24 hours ago [-]
Yes, it will.
jchw 2 hours ago [-]
I checked and you seem to be correct, at least for Firefox and Chromium. I tried using:
and when I clicked "Accept the risk and continue", the certificate was added to Certificate Manager. I closed the browser, re-opened it, and it did not prompt again.
I did the same thing in Chromium and it also worked, though I'm not sure if Chromium's are permanent or if they have a lifespan of any kind.
I am absolutely 100% certain that it did not always work that way. I remember a time when Firefox had an option to permanently add an exception, but it was not the default.
Either way, apologies for the misunderstanding. I genuinely did not realize that it worked this way, and it runs contrary to my previous experience dealing with self-signed certificates.
To be honest, this mostly resolves the issues I've had with self-signed certificates for use cases where getting a valid certificate might be a pain. (I have instead been using ACME with DNS challenge for some cases, but I don't like broadcasting all of my internal domains to the CT log nor do I really want to manage a CA. In some cases it might be nice to not have a valid internet domain at all. So, this might just be a better alternative in some cases...)
arccy 1 days ago [-]
ssh server certificates should not be TOFU, the point of SSH certs is so you can trust the signing key.
TOFU on ssh server keys... it's still bad, but fewer people are interested in intercepting ssh vs tls.
tptacek 1 days ago [-]
Intercepting and exploiting first-contact SSH sessions is a security conference sport. People definitely do it.
jchw 1 days ago [-]
I just typed the wrong thing, full stop. I meant to say server keys; fixed now.
Also, I agree that TOFU on its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU, because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" option for the web: small-scale cases where you can just verify the key manually if you need to.
pabs3 1 days ago [-]
You don't have to TOFU SSH server keys, there is a DNSSEC option, or you can transfer the keys via a secure path, or you can sign the keys with a CA.
gruez 1 days ago [-]
>I mean, we do TOFU for SSH server certificates and nobody really seems to bat an eye at that.
Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.
jchw 1 days ago [-]
I'm not arguing for replacing existing uses of HTTPS here, just cases where you would today use self-signed certificates or plaintext.
hedora 21 hours ago [-]
TOFU is not less secure than using a certificate authority.
Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.
tptacek 21 hours ago [-]
TOFU is less secure than using a trust anchor.
hedora 21 hours ago [-]
That’s only true if you operate the trust anchor (possible) and it’s not an attack vector (impossible).
For example, TOFU where “first use” is a loopback ethernet cable between the two machines is stronger than a trust anchor.
Alternatively, you could manually verify + pin certs after first use.
tptacek 20 hours ago [-]
There are a couple of these concepts --- TOFU (key continuity) is one, PAKEs are another, pinning a third --- that sort of float around and captivate people because they seem easy to reason about, but are (with the exception of Magic Wormhole) not all that useful in the real world. It'd be interesting to flesh out the complete list of them.
The thing to think about in comparing SSH to TLS is how frequent counterparty introductions are. New counterparties in SSH are relatively rare. Key continuity still needlessly exposes you to a grave attack in SSH, but really all cryptographic protocol attacks are rare compared to the simpler, more effective stuff like phishing, so it doesn't matter. New counterparties in TLS happen all the time; continuity doesn't make any sense there.
hedora 7 hours ago [-]
There are ~ 200 entries in my password manager. Maybe 25 are important. Pinning their certs would meaningfully reduce the transport layer attack surface for those accounts.
IshKebab 2 hours ago [-]
I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).
On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.
tikkabhuna 22 minutes ago [-]
But isn't that exactly the previous poster's point? On free WiFi someone can just MITM your connection, you would never know, and you'd think it's encrypted. It's the worst possible outcome. At least when there's no encryption, browsers can tell the user to be careful.
saurik 2 hours ago [-]
When I use a service over TLS on a network I don't trust, the premise is that I only will trust the connection if it has a certificate from a handful of companies trusted by the people who wrote the software I'm using (my browser/client and/or my operating system) to only issue said certificates to people who are supposed to have them (which these days is increasingly defined to be "who are in control of the DNS for the domain name at a global level", for better or worse, not that everyone wants to admit that).
But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.
woodruffw 2 hours ago [-]
> I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).
Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.
mannyv 3 hours ago [-]
All our door locks suck, but everyone has a door lock.
The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.
That's an important practical distinction that's overlooked by security bozos.
Ajedi32 1 days ago [-]
In what situation would you want to encrypt something but not care about the identity of the entity with the key to decrypt it? That seems like a very niche use case to me.
xyzzy123 1 days ago [-]
Because TLS doesn't promise you very much about the entity which holds the key. All you really know is that they control some DNS records.
You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.
Ajedi32 1 days ago [-]
It tells you the entity which holds the key is the actual owner of myfavouriteshoes.com, and not just a random guy operating the free Wi-Fi hotspot at the coffee shop you're visiting. If you don't care about that then why even bother with encryption in the first place?
xyzzy123 1 days ago [-]
True.
OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.
Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case, our threat model does not really include the mossad doing mossad things to our movie server.
yjftsjthsd-h 1 days ago [-]
If you really don't care, sometimes you can just go plaintext HTTP. I do this for some internal things that are accessed over VPN links. Of course, that only works if you're not doing anything that browsers require HTTPS for.
Alternatively, I would suggest letsencrypt with DNS verification. Little bit of setup work, but low maintenance work and zero effort on clients.
smw 2 hours ago [-]
Or just run tailscale and let it take care of the certs for you. I hate to sound like a shill, but damn does it make it easier.
akerl_ 7 hours ago [-]
It seems like you have two pretty viable options:
1. Wire up LetsEncrypt certs for things running on your LAN, and all the "dire certificate warnings" go away.
2. Run a local ACME service, wire up ACME clients to point to that, make your private CA valid for 100 years, trust your private CA on the devices of the Regular People in your house.
I did this dance a while back, and things like acme.sh have plugins for everything from my Unifi gear to my network printer. If you're running a bunch of servers on your LAN, the added effort of having certs is tiny by comparison.
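(For anyone curious what that dance looks like, here is a rough acme.sh sketch; the DNS plugin, hostname, and deploy hook are placeholders you'd swap for your own setup:)

    # issue via DNS-01 (dns_cf is the Cloudflare plugin; pick the one for your DNS provider)
    acme.sh --issue --dns dns_cf -d unifi.home.example.com

    # push the cert into the device; acme.sh ships deploy hooks for common gear (e.g. "unifi")
    acme.sh --deploy -d unifi.home.example.com --deploy-hook unifi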
arccy 1 days ago [-]
at least it's not evil-government-proxy.com that decided to mitm you and look at your favorite shoes.
xyzzy123 1 days ago [-]
Indeed and the system is practically foolproof because the government cannot take over DNS records, influence CAs, compromise cloud infrastructure / hosting, or rubber hose the counter-party to your communications.
Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.
pizzafeelsright 2 hours ago [-]
Seems logical.
If we encrypt everything we don't need AuthN/Z.
Encrypt locally to the target PK. Post a link to the data.
lucb1e 1 hours ago [-]
What? I work in this field and I have no idea what you mean. (I get the abbreviations like authz and pk, but not how "encrypting everything" and "posting links" is supposed to remove the need for authentication)
silverwind 16 hours ago [-]
I agree, there needs to be a TLS without certificates. Pre-shared secrets would be much more convenient in many scenarios.
ryao 14 hours ago [-]
How about TLS without CAs? See DANE. If only web browsers would support it.
pornel 3 hours ago [-]
DANE is TLS with too-big-to-fail CAs that are tied to the top-level domains they own, and can't be replaced.
Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.
I want a middle ground. Identity verification is useful for TLS, but I really wish there was no reliance on ultimately trusted third parties for that. Maybe put some sort of identity proof into DNS instead, since the whole thing relies on DNS anyway.
immibis 1 hours ago [-]
Makes it trivial for your DNS provider to MITM you, and you can't even use certificate transparency to detect it.
panki27 1 days ago [-]
Isn't this exactly the reason why Let's Encrypt was brought to life?
charcircuit 1 days ago [-]
Having them always coupled disincentivizes bad ISPs from MITMing the connection.
Vegenoid 18 hours ago [-]
Isn't identity the entire point of certificates? Why use certificates if you only care about encryption?
ryao 14 hours ago [-]
If web browsers supported DANE, we would not need CAs for encryption.
Avamander 2 hours ago [-]
DNSSEC is just a shittier PKI with CAs that are too big to ever fail.
immibis 1 hours ago [-]
It is, but since we rely on DNS anyway, no matter what, and your DNS provider can get a certificate from Let's Encrypt for your site, without asking you, there's merit to combining them. It doesn't add any security to have PKI separate from DNS.
However, we could use some form of Certificate Transparency that would somehow work with DANE.
Also it still protects you from everyone who isn't your DNS provider, so it's valuable if you only need a medium level of security.
bityard 2 hours ago [-]
I see that there is a timeline for progressive shortening, so if anyone has any "inside baseball" on this, I'm very curious to know:
Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?
When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?
Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?
wnevets 1 hours ago [-]
Has anyone managed to calculate the increase power usage across the entire internet this change will cause? Well, I suppose the environment can take one more for the team.
margalabargala 44 minutes ago [-]
The single use of AI to generate that video of Trump licking Elon Musk's feet, used significantly more power than this change will cause to be used over the next decade.
It's great to be environmentally conscious, but if reducing carbon emissions is your goal, complaining about this is a lot like saying that people shouldn't run marathons, because physical activity causes humans to exhale more CO2.
wnevets 13 minutes ago [-]
> The single use of AI to generate that video of Trump licking Elon Musk's feet, used significantly more power than this change will cause to be used over the next decade.
We are effectively talking about the entire world wide web generating multiple highly secure cryptographic key pairs every 47 days. That is a lot of CPU cycles.
Also you not picking up on the Futurama quote is disappointing.
ori_b 2 hours ago [-]
Can someone point me to specific exploits that this key rotation schedule would have stopped?
It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Avamander 1 hours ago [-]
> Can someone point me to specific exploits that this key rotation schedule would have stopped?
It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.
In essence it brings a working method of revocation to WebPKI.
> but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Compared to a year?
ori_b 36 minutes ago [-]
> You also have to prove more frequently that you have control of the domain or IP in the certificate.
That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.
On the other hand, anyone who owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.
The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.
In short: Don't lose your domain.
> Compared to a year?
Typically these kinds of things have an exponential dropoff, so most of the exploited folks would be soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h would make a material difference.
But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.
Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?
procaryote 1 hours ago [-]
If this is causing you pain, certbot with Acme DNS challenge is pretty easy to set up to get you certs for your internal services. There are tools for many different dns providers like route53 or cloudflare.
I tend to have secondary scripts that checks if the cert in certbots dir is newer than whatever is installed for a service, and if so install it. Some services prefer the cert in certain formats, some services want to be reloaded to pick up a new cert etc, so I put that glue in my own script and run it from cron or a systemd timer.
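(A minimal sketch of that kind of glue script, with the paths and service name as placeholders:)

    #!/bin/sh
    # run from cron or a systemd timer: install the cert only if certbot has renewed it
    SRC=/etc/letsencrypt/live/example.com/fullchain.pem
    DST=/etc/myservice/tls/fullchain.pem
    if [ "$SRC" -nt "$DST" ]; then
        cp /etc/letsencrypt/live/example.com/privkey.pem /etc/myservice/tls/privkey.pem
        cp "$SRC" "$DST"
        systemctl reload myservice   # some services need a reload/restart to pick up the new cert
    fi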
merb 24 minutes ago [-]
The problem is more or less devices that do not support dns challenges or only support letsencrypt and not the acme protocol (to chain acme servers, etc)
captn3m0 1 days ago [-]
This is great news. This would blow a hole in two interesting places where leaf-level certificate pinning is relied upon:
1. mobile apps.
2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making it an even worse piece of security theater. Or, hopefully, they switch to CAA instead.
bearjaws 1 hours ago [-]
Giving me PTSD for working in healthcare.
Health systems love pinning certs, and we use an ALB with 90-day certs, so they were always furious.
Every time I was like "we can't change it", and "you do trust the CA right?", absolute security theatre.
DiggyJohnson 1 days ago [-]
Do you (or anyone) recommend any text based resources laying out the state of enterprise TLS management in 2025?
It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.
grishka 1 days ago [-]
Isn't it usually the server's public key that's pinned? The key pair isn't regenerated when you renew the certificate.
toast0 1 days ago [-]
Typical guidance is to pin the CA or intermediate, because in case of a key compromise, you're going to need to generate a new key.
You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.
What would really be nice, but is unlikely to happen, would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short-term certificates from there. But if those were widespread, they'd need to be short-dated too, so you'd need to either pin the real CA or the public key, and we're back to where we were.
nickf 12 hours ago [-]
I've said it up-thread, but never ever never never pin to anything public. Don't do it. It's bad. You, and even the CA have no control over the certificates and cannot rely on them remaining in any way constant.
Don't do it. If you must pin, pin to private CAs you control. Otherwise, don't do it. Seriously. Don't.
toast0 4 hours ago [-]
There's not really a better option if you need your urls to work with public browsers and also an app you control. You can't use a private CA for those urls, because the public browsers won't accept it; you need to include a public CA in your app so you don't have to rely on the user's device having a reasonable trust store. Including all the CAs you're never going to use is silly, so picking a few makes sense.
richardwhiuk 3 hours ago [-]
You don't need both of those things. Give your app a different url.
ori_b 2 hours ago [-]
Why should I trust a CA that has no control over the certificate chains?
einsteinx2 9 hours ago [-]
Repeating it doesn’t make it any more true. Cert providers publish their root certs, you pin those root certs, zero problems.
peanut-walrus 3 hours ago [-]
So the assumption here is that somehow your private key is easier to compromise than whatever secret/mechanism you use to provision certs?
Yeah not sure about that one...
dsr_ 32 minutes ago [-]
Daniel K Moran figured out the endgame:
"Therefore, the Lunar Bureau of the United Nations Peace Keeping Force DataWatch has created the LINK, the Lunar Information Network Key. There are currently nine thousand, four hundred and two Boards on Luna; new Boards must be licensed before they can rent lasercable access. Every transaction--every single transaction--which takes place in the Lunar InfoNet is keyed and tracked on an item-by-item basis. The basis of this unprecedented degree of InfoNet security is the Lunar Information Network Key. The Key is an unbreakable encryption device which the DataWatch employs to validate and track every user in the Lunar InfoNet. Webdancers attempting unauthorized access, to logic, to data, to communications facilities, will be punished to the full extent of the law."
from The Long Run (1989)
Your browser won't access a site without TLS; this is for your own protection. TLS certificates are valid for one TCP session. All certs are issued by an organization reporting directly to a national information security office; if your website isn't in compliance with all mandates, you stop getting certs.
throwaway96751 1 days ago [-]
Off-topic: What is a good learning resource about TLS?
I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's Encrypt public cert to the client's trusted store. Then I had to choose between Let's Encrypt's root and intermediate certs, and between key types RSA and ECDSA. I made it work, but it would be good to have an idea of what I'm doing. For example, why did the RSA root key work even though my server uses an ECDSA cert? Before I added the root cert to a trusted store, clients used to add fullchain.pem from the server and it worked too — why?
ivanr 3 hours ago [-]
I have a bunch of useful resources, most of which are free:
I learned a lot from TLS Mastery by Michael W. Lucas.
throwaway96751 1 days ago [-]
Thanks, looks exactly like what I wanted
physicles 6 hours ago [-]
Use ECDSA if you can, since it reduces the size of the handshake on the wire (keys are smaller). Don’t bake in intermediate certs unless you have a very good reason.
No idea why the RSA root key worked even though the server used an ECDSA cert — maybe check into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.
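(If you're on certbot, asking for an ECDSA key is a single flag; a sketch with a placeholder domain:)

    # P-256 ECDSA leaf key: smaller certificates and handshakes than RSA
    certbot certonly --key-type ecdsa -d www.example.com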
jsheard 1 days ago [-]
This change will have a steady roll-out, but if you want to get ahead of the curve then Let's Encrypt will be offering 6 day certs as an option soon.
Is there an actual issue with widespread cert theft? That seems like the primary valid reason to do this, not forcing automation.
cryptonym 1 days ago [-]
Let's Encrypt dropped support for OCSP. CRLs don't scale well. Short-lived certificates are probably a way to avoid certificate revocation quirks.
Ajedi32 1 days ago [-]
It's a real shame. OCSP with Must-Staple seemed like the perfect solution to this, it just never got widespread support.
I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.
NoahZuniga 2 hours ago [-]
Also certificate transparency is moving to a new standard (sunlight CT) that has immediate merges. Google requires maximum merge delay to be 1 minute or less, but they've said on google groups that they expect merges to be way faster.
lokar 4 hours ago [-]
The log is not really for real time use. It’s to catch CA non-compliance.
dboreham 1 days ago [-]
I think it's more about revocation not working in practice. So the only solution is a short TTL.
trothamel 1 days ago [-]
I suspect it's to limit how long a malicious or compromised CA can impact security.
hedora 21 hours ago [-]
Equivalently, it also maximizes the number of sites impacted when a CA is compromised.
It also lowers the amount of time it'd take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn't 2025.)
lokar 3 hours ago [-]
Mostly this. Today, if a big CA is caught breaking the rules, actually enforcing repairs (e.g. prompt revocation) is a hard pill to swallow.
rat9988 1 days ago [-]
I think OP is asking whether there have been many real-world cases in practice that pushed for this change.
chromanoid 1 days ago [-]
I guess the main reason behind this move is platform capitalism. It's an easy way to cut off grassroots internet.
bshacklett 1 days ago [-]
How does this cut off the grassroots internet?
chromanoid 1 days ago [-]
It makes end to end responsibility more cumbersome. There were days people just stored MS Frontpage output on their home server.
ezfe 29 seconds ago [-]
I've done the work to set up, by hand, a self-hosted Linux server that uses an auto-renewing Let's Encrypt cert and it was totally fine. Just read some documentation.
icedchai 1 days ago [-]
Many folks switched to Lets Encrypt ages ago. Certificates are way easier to acquire now than they were in "Frontpage' days. I remember paying 100's of dollars and sending a fax for "verification."
whs 18 hours ago [-]
Do they offer any long-term commitment for the API, though? I remember that they were blocking old cert-manager clients that were hammering their servers. You can't automate that kind of upgrade (as it could be unsafe, like SolarWinds), and they didn't give a one-year window to do it manually either.
icedchai 18 hours ago [-]
You do have a point. I still feel that upgrading your client is less work than manual cert renewals.
chromanoid 1 days ago [-]
I agree, but I think the pendulum just went too far on the tradeoff scale.
jack0813 1 days ago [-]
There are very convenient tools to do https easily these days, e.g. Caddy. You can use it to reverse proxy any http server and it will do the cert stuff for you automatically.
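(For a sense of scale, the whole Caddyfile for that setup is about this big, with the hostname and backend port as placeholders:)

    myapp.example.com {
        # Caddy obtains and renews the certificate automatically via ACME
        reverse_proxy 127.0.0.1:8080
    }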
chromanoid 1 days ago [-]
Of course, but you have to be quite tech-savvy to know this and to set it up. It's also cumbersome in many low-tech situations. There is certificate revocation; I would really like to see the threat model here. I am not even sure if automation helps or just shifts the threat vector to certificate issuing.
gjsman-1000 1 days ago [-]
If that were true, we would not have Let's Encrypt and tools which can give us certificates in 30 seconds flat once we prove ownership.
The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)
(Edit because I'm posting too fast, for the reply):
> How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.
nottorp 1 days ago [-]
How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?
icedchai 1 days ago [-]
You are always dependent on a 3rd party to some extent: DNS registration, upstream ISP(s), cloud / hosting providers, etc.
nottorp 1 days ago [-]
And now your list has 2 more items in it …
chromanoid 1 days ago [-]
I dunno. Self-hosting w/o automation was feasible. Now you have to automate. It will lead to a huge amount of link rot or at least something very similar. There will be solutions but setting up a page e2e gets more and more complicated. In the end you want a service provider who takes care of it. Maybe not the worst thing, but what kind of security issues are we talking about? There is still certificate revocation...
icedchai 1 days ago [-]
Have you tried caddy? Each TLS protected site winds up being literally a couple lines in a config file. Renewals are automatic. Unless you have a network / DNS problem, it is set and forget. It is far simpler than dealing with manual cert renewals, downloading the certificates, restarting your web server (or forgetting to...)
chromanoid 1 days ago [-]
Yes, but only for internal stuff. I prefer traefik at the moment. But my point is more about how people use wix over free webspace and so on. While I don't agree with many of Jonathan Blow's arguments, news like this makes me think of his talk "Preventing the collapse of civilization": https://m.youtube.com/watch?v=ZSRHeXYDLko
ikiris 30 minutes ago [-]
Traefik without certmanager is just as self-inflicted a wound. It's literally designed to handle this for you.
rhodey 2 hours ago [-]
After EFF/Let's Encrypt made the change to disable reminder emails, I decided I would be moving my personal blog from my VPS to AWS specifically. I just today made the time to make the move, and 10 minutes later I find this.
I could have probably done more with Let's Encrypt automation to stay with my old VPS, but given that all my professional work is with AWS, it's really less mental work to drop my old VPS.
Times they are a changing
mystified5016 1 hours ago [-]
Why not just automate your LetsEncrypt keys like literally everyone else does? It's free and you have to go out of your way to do it manually.
Or just pay Amazon, I guess. Easier than thinking.
thyristan 18 minutes ago [-]
Semi-related question: where is the Let's Encrypt workalike for S/MIME?
AlfeG 60 minutes ago [-]
We'll see how Azure FD handles this. We've opened more tickets than expected with support about certs not updating automatically...
nickf 12 hours ago [-]
Don't forget the lede buried here - you'll need to re-validate control over your DNS names more frequently too.
Many enterprises are used to doing this once-per-year today, but by the time 47-day certs roll around, you'll be re-validating all of your domain control every 10 days (more likely every week).
0xbadcafebee 1 hours ago [-]
I hate this, but I'm also glad it's happening, because it will speed up the demise of Web PKI.
CAs and web PKI are a bad joke. There's too many ways to compromise security, there's too many ways to break otherwise-valid web sites/apps/connections, there's too many organizations that can be tampered with, the whole process is too complex and bug-prone.
What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.
It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.
That's not what we actually want. What we actually want is to be assured this remote host we're talking to now is genuine, and to keep our communication secret and safe. There are other ways to do that, that aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.
I'm hopeful changes like these will result in a gradual catastrophe which will push the industry to actually adopt simpler, saner, more secure solutions. I proposed one years ago, but nobody cares because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.
1970-01-01 1 days ago [-]
Your 90-day snapshot backups will soon become 47-day backups. Take care!
gruez 1 days ago [-]
???
Do people really backup their https certificates? Can't you generate a new one after restoring from backup?
belter 1 days ago [-]
This is going to be one of the obvious traps.
DiggyJohnson 1 days ago [-]
To care about stale certs on snapshots or the opposite?
belter 1 days ago [-]
Both. One breaks your restore, the other breaks your trust chain.
jonathantf2 1 days ago [-]
A welcome change if it gives some vendors a kick up the behind to implement ACME.
ShakataGaNai 3 hours ago [-]
There is no more choice. No one is going to buy from (example) GoDaddy if they have to login every 30 days to manually get a new certificate. Not when they can go to (example) digicert and it's all automatic.
It goes from a "rather nice to have" to "effectively mandatory".
jonathantf2 2 hours ago [-]
I think GoDaddy supports ACME - but if you're using ACME you might as well use Let's Encrypt and stop paying for it.
zephius 1 days ago [-]
Old SysAdmin and InfoSec Admin perspective:
Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles, and firmware is not updated by the vendors unless it has to be, and in many cases updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability for updating certs. In most cases, certs are manually - and painfully - updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. God for mercy, because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in. In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5 - there are tons of other cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud - most enterprises are only partially in that pool.
tptacek 1 days ago [-]
I think everybody involved knows about the likelihood that things are going to break at enterprise shops with super-expensive commercial middleboxes. They just don't care anymore. We ran a PKI that cared deeply about the concerns of admins for a decade and a half, and it was a fiasco. The coders have taken over, and things are better.
zephius 1 days ago [-]
That's great for shops with Dev teams and in house developed platforms. Those shops are rare outside Silicon Valley and fortune 500s and not likely to increase beyond that. For the rest of us, we are at the mercy of off the shelf products and 3rd party platforms.
tptacek 1 days ago [-]
I suggest you buy products from vendors who care about the modern WebPKI. I don't think the browser root programs are going to back down on this stuff.
nickf 12 hours ago [-]
This. Also, re-evaluate how many places you actually need public trust that the webPKI offers. So many times it isn't needed, and you make problems for yourself by assuming it does.
I have horror stories I can't fully disclose, but if you have closed networks of millions of devices where you control both the server side and the client side, relying on the same certificate I might use on my blog is not a sane idea.
whs 18 hours ago [-]
Agree. My company was cloud first, and when we built the new HQ buying Cisco gear and VMware (as they're the only stack several implementers are offering) it felt like we were sending the company 15 years backwards
zephius 1 days ago [-]
I agree, and we try, however that is not a currently widely supported feature in the boring industry specific business software/hardware space. Maybe now it will be, so time will tell.
ikiris 28 minutes ago [-]
Reverse proxies exist. If you don’t like having to do that then have requirements for standards of the past 10 years in your purchasing.
yjftsjthsd-h 1 days ago [-]
Sounds like everything is solvable via code, and the hardware vendors just suck at it.
zephius 1 days ago [-]
In a nutshell, yes. From a security perspective, look at Fortinet as an egregious example of just how bad. Palo Alto also has some serious internal issues.
cpach 1 days ago [-]
“Hardware vendors are simply incompetent and slow to adapt to security changes.”
Perhaps the new requirements will give them additional incentives.
zephius 1 days ago [-]
Yeah, just like TLS 1.2 support. Don't even get me started on how that fiasco is still going.
raggi 1 days ago [-]
It sure would be nice if we could actually fix dns.
trothamel 1 days ago [-]
Question: Does anyone have a good solution for renewing Let's Encrypt certificates for websites hosted on multiple servers? Right now, I have one master server that the others forward the well-known requests to, and then I copy the certificate over when I'm done, but I'm wondering if there's a better way.
hangonhn 1 days ago [-]
We just use certbot on each server. Are you worried about the rate limit? LE rate limits based on the list of domains. So we send the request for the shared domain and the domain for each server instance. That makes each renew request unique per server for the purpose of the rate limit.
nullwarp 1 days ago [-]
I use DNS verification for this then the server doesn't even need to be exposed to the internet.
magicalhippo 10 hours ago [-]
And if changing the DNS entry is problematic, for example the DNS provider used doesn't have an API, you can redirect the challenge to another (sub)domain which can be hosted by a provider that has an API.
I've done this and it works very well. I had a Digital Ocean droplet so used their DNS service for the challenge domain.
I copy the same certbot account settings and private key to all servers and they obtain the certs themselves.
It is a bit funny that LetsEncrypt has non-expiring private keys for their accounts.
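(Concretely, the delegation described above is a single DNS record, and clients with an alias mode then answer the challenge in the delegated zone. The names below are placeholders, and dns_dgon is acme.sh's DigitalOcean plugin:)

    ; in the zone you can't update via API: point the challenge label at one you can
    _acme-challenge.nas.example.com.  IN  CNAME  nas.acme-challenges.example.net.

    ; then, for example, with acme.sh:
    ;   acme.sh --issue -d nas.example.com --dns dns_dgon --challenge-alias nas.acme-challenges.example.net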
noinsight 1 days ago [-]
Orchestrate the renewal with Ansible - renew on the "master" server remotely but pull the new key material to your orchestrator and then push them to your server fleet. That's what I do. It's not "clean" or "ideal" to my tastes, but it works.
It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.
The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.
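(A rough sketch of that orchestration as an Ansible playbook; the group names, paths, and handler are placeholders:)

    - hosts: cert_master
      tasks:
        - name: Renew certificates on the designated renewal host
          command: certbot renew --quiet
        - name: Pull renewed material back to the controller
          fetch:
            src: "/etc/letsencrypt/live/example.com/{{ item }}"
            dest: "fetched/"
            flat: true
          loop: [fullchain.pem, privkey.pem]

    - hosts: web_fleet
      tasks:
        - name: Push cert material to every server in the fleet
          copy:
            src: "fetched/{{ item }}"
            dest: "/etc/ssl/example.com/{{ item }}"
            mode: "0600"
          loop: [fullchain.pem, privkey.pem]
          notify: reload web server
      handlers:
        - name: reload web server
          service:
            name: nginx
            state: reloaded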
bayindirh 1 days ago [-]
There's a tool called "lsyncd" which watches for a file and syncs the changed file to other servers "within seconds".
I use this to sync users between small, experimental cluster nodes.
Have you tried certbot? Or if you want a turnkey solution, you may try Caddy or Traefik that have their own automated certificate generation utility.
dboreham 1 days ago [-]
DNS verification.
throw0101b 1 days ago [-]
getssl was written with a bit of a focus on this:
> Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.
Oh no, a bunch of stupid bureaucrats came up with another dumb idea. What a surprise!
xyst 3 hours ago [-]
I don’t see any issue here. I already automate with ACME so rotating certificates on an earlier basis is okay. This should be like breathing for app and service developers and infrastructure teams.
Side note: I wonder how much pressure this puts on providers such as LetsEncrypt, especially with the move to validate IPs. And more specifically IPv6…
ShakataGaNai 2 hours ago [-]
Because there are lots of companies, large and small, which haven't gotten that far. Lots of legacy sites/services/applications.
I don't disagree with you that it should be super common. But it's surprisingly not in many businesses. Heck, Okta (nominally a large security company) still sends out notifications every time they change certificates and publishes a copy of their current correct certs in github: https://github.com/okta/okta-pki - How they do the actual rotation? No idea, but... I'd guess it's not automatic with that level of manual notification/involvement. (Happy to be proven wrong though).
readthenotes1 1 days ago [-]
I wonder how many forums run by the barely able are going to disappear or start charging.
I fairly regularly get cert expired problems because the admin is doing it as the yak shaving for a secondary hobby
iJohnDoe 1 days ago [-]
Getting a bit ridiculous.
dboreham 1 days ago [-]
Looks like a case where there are tradeoffs to be made, but the people with authority over the decision have no incentive to consider one side of the trade.
bayindirh 1 days ago [-]
Why?
nottorp 1 days ago [-]
The logical endgame is 30 second certificates...
krunck 1 days ago [-]
Or maybe the endgame could be: creation of a centralized service that all web servers are required to be registered with and connected to at all times in order to receive their (frequently rotated) encryption keys. Controllers of said service then have kill switch control of any web service by simply withholding keys.
nottorp 1 days ago [-]
Exactly. And all in the name of security! Think of the children!
saltcured 3 hours ago [-]
I was thinking about this with my morning coffee.. the asymptotic end game would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?
But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificate they mean for these lifetime limits?
I assume this is only server certs and not trust root and intermediate signing certs that would get such short lifetimes? It would be a mind boggling nightmare if they start requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.
To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...
bayindirh 1 days ago [-]
For extremely sensitive systems, I think a more logical endgame is 30 minutes or so. 30 seconds is practically continuous generation.
A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.
Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.
nottorp 1 days ago [-]
> once or twice a year makes sense
You don't say. Why are the defaults already 90 days or less then?
bayindirh 1 days ago [-]
Because most of the sites on the internet store much more sensitive information than the sites I gave as an example, and can afford one or two certificates a year.
90 days makes way more sense for the "average website" which handles members, has a back office exposed to the internet, and whatnot.
nottorp 1 days ago [-]
That's not the average website, that's a corporate website or an online store.
Why do you think all the average web sites have to handle members?
bayindirh 1 days ago [-]
Give me examples of websites which don't have any kind of member system in place.
Forums? Nope. Blogging platforms? Nope. News sites? Nope. Wordpresss powered personal page? Nope. Mailing lists with web based management? Nope. They all have members.
What doesn’t have members or users? Static webpages. How much of the web is a completely static web page? Negligible amount.
So most of the sites have much more to protect than meets the eye.
ArinaS 12 hours ago [-]
> "Negligible amount."
Neglecting the independent web is exactly what led to it dying out and the Internet becoming a corporate, algorithm-driven analytics machine. Making it harder to maintain your own independent website, which does not rely on any 3rd party to host or update, will just make fewer people bother.
nottorp 1 days ago [-]
I could argue that all your examples except forums do not NEED members or users... except to spy on you and spam you.
bayindirh 1 days ago [-]
I mean, a news site needs its journalists to log in. Your own personal Wordpress needs a user for editing the site. The blog platform I use (mataroa) doesn't even have detailed statistics, but it does serve many users, so it needs user support.
The web is a bit different from how you envision it.
ArinaS 12 hours ago [-]
> "I mean, a news site needs their journalists to login."
Why can't this site just upload HTML files to their web server?
nottorp 10 hours ago [-]
Why can't this site have their CMS entirely separated from the public facing web site, for that matter? :)
> Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing...
Any non predatory practices you can add to the list?
bayindirh 10 hours ago [-]
I think you were trying to reply to me.
I'm not a web developer, and I don't do anything similar on my pages, blog posts, whatever, so I don't know.
The only non-predatory way to do this is to be honest/transparent and not pull tricks on people.
However, I think A/B testing can be used in a non-predatory way in UI testing, by measuring negative comments between two new versions, assuming that you genuinely don't know which version is better for the users.
bayindirh 11 hours ago [-]
Two operational requirements:
1. Journalists shall be able to write new articles and publish them ASAP, possibly from remote locations.
2. Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing... So you need a data structure which can be modified non-destructively and autonomously.
Plus many more things, possibly. I love static webpages as much as the next small-web person, but we have small-web, because the web is not "small" anymore.
panki27 1 days ago [-]
That CRL is going to be HUGE.
psz 1 days ago [-]
Why do you think so? Keep in mind that revoked certs are not included in CRLs once expired (because they are not valid any more).
"When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."
Lammy 3 hours ago [-]
This sucks. I'm actually so sick of mandatory TLS. All we did was get Google Analytics and all the other spyware running “““securely””” while making it that much harder for any regular person to host anything online. This will push people even further into the arms of the walled gardens as they decide they don't want to deal with the churn and give up.
ShakataGaNai 3 hours ago [-]
First.... 99% of people have zero interest in hosting things themselves anyways. Like on their own server themselves. Geocities era was possibly the first and last time that "having your own page" was cool, and that was basically killed by social media.
As for certs... maybe at the start it was hard, but it's really quite easy to host things online, with a valid certificate. There are many CDN services like Cloudflare which will handle it for you. There are also application proxies like Traefik and Caddy which will get certs for you.
Most people who want their own site today will use Kinsta or SquareSpace or GitHub Pages or any one of thousands of page/site hosting services, all of which have a system for certificates that is so easy to use, most people don't even realize it is happening.
Lammy 2 hours ago [-]
lol at recommending Cloudflare and Microsoft (Github) in response to a comment decrying spyware
Every single thing you mentioned is plugged in to the tier-1 surveillance brokers. I am talking plain files on single server shoved in a closet, or cheap VPS. I don't often say this but I really don't think you “get” it.
jonathantf2 37 minutes ago [-]
A "regular person" won't/can't deal with a server, VPS, anything like that. They'll go to GoDaddy because they saw an advert on the TV for "websites".
Lammy 2 minutes ago [-]
They absolutely can deal with the one-time setup of one single thing that's easy to set up auto-pay for. It's so many additional concepts when you add in the ACME challenge/response because now you have to learn sysadmin-type skills to care for a periodic process, users/groups for who-runs-what-process and who-owns-what-cert-files, software updates to chase LE/ACME changes or else all your stuff breaks, etc.
Your attitude is so dismissive to the general public. We should be encouraging people to learn the little bits they want to learn to achieve something small, and instead we are building this ivory tower all-or-nothing stack. For what, job security? Bad mindset.
junaru 1 days ago [-]
> For this reason, and because even the 2027 changes to 100-day certificates will make manual procedures untenable, we expect rapid adoption of automation long before the 2029 changes.
Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self signed certificates everywhere on the most critical infrastructure.
xnyanta 4 hours ago [-]
I have automated IPMI certificate rotation set up through Let's Encrypt and ACME via the Redfish API. And this is on 15-year-old gear running HP iLO4. There's no excuse for not automating things.
panki27 1 days ago [-]
People will just roll out almost forever-lasting certificates through their internal CA for all systems that are not publicly reachable.
throw0101d 1 days ago [-]
> through their internal CA
Nope. People will create self-signed certs and tell people to just click "accept".
Avamander 2 hours ago [-]
They're doing it right now and they'll continue doing so. There are always scapegoats for not automating.
zelon88 1 days ago [-]
This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.
They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.
This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
pornel 3 hours ago [-]
Browsers check the identity of the certificates every time. The host name is the identity.
There are lots of issues with trust and social and business identities in general, but for the purpose of encryption, the problem can be simplified to checking of the host name (it's effectively an out of band async check that the destination you're talking to is the same destination that independent checks saw, so you know your connection hasn't been intercepted).
You can't have effective TLS encryption without verifying some identity, because you're encrypting data with a key that you negotiate with the recipient on the other end of the connection. If someone inserts themselves into the connection during key exchange, they will get the decryption key (key exchange is cleverly done that a passive eavesdropper can't get the key, but it can't protect against an active eavesdropper — other than by verifying the active participant is "trusted" in a cryptographic sense, not in a social sense).
chowells 1 days ago [-]
I think it's absolutely critical when I'm sending a password to a site that it's actually the site it claims to be. That's identity. It matters a lot.
zelon88 1 days ago [-]
Not to users. The user who types Wal-Mart into their address bar expects to communicate with Wal-Mart. They aren't going to check if the certificate matches. Only that the icon is green.
This is where the disconnect comes in. Me and you know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.
And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.
So you see, Identity isn't the value that people expect from a certificate. It's the encryption.
Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.
chowells 1 days ago [-]
Well, no. That's just not true.
I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast not to MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.
Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.
BrandoElFollito 1 days ago [-]
You use words that are alien to everyone. Well, there is a small uncertainty in "everyone", and it is there where the people who actually understand DHCP, DoS, etc. live. This is a very, very small place.
So no, nobody will ever look at a certificate.
When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the certs details again in the browser.
chowells 1 days ago [-]
Who said a word about looking at a certificate?
I said exactly the words I meant.
> I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Without the identity component, I can't trust that those things I care about are insulated from local interference. With the identity component, I say it's fine to connect to random public wifi. Without it, it wouldn't be.
That's the relevant level. "Is it ok to connect to public wifi?" With identity validation, yes. Without, no.
hedora 21 hours ago [-]
When you say identity, you mean “the identity of someone that convinced a certificate authority that they controlled walmart.com’s dns record at some point in the last 47 days, or used some sort of out of band authentication mechanism”.
You don’t mean “Walmart”, but 99% of the population thinks you do.
Is it OK to trust this for anything important? Probably not. Is OK to type your credit card number in? Sure. You have fraud protection.
chowells 20 hours ago [-]
So what you're saying is that you actually understand the identity portion is critical to how the web is used and you're just cranky. It's ok. Take a walk, get a bite to eat. You'll feel better.
hedora 7 hours ago [-]
I’m not the person you were arguing with. Just explaining your misunderstanding.
JambalayaJimbo 6 hours ago [-]
Right, so misrepresenting your identity with similar-looking URLs is a real problem with PKI. That doesn't change the fact that certificates are ultimately about asserting your identity; it's just a flaw in the system.
aseipp 2 hours ago [-]
Web browsers have had defenses against homograph attacks for years now, my man, dating back to 2017. I'm somewhat doubtful you're on top of this subject as much as you seem to be suggesting.
gruez 1 days ago [-]
>This naively (or maliciously perhaps) maintains that the "purpose" of the certificate is to identify an entity. While identity and safeguarding against MITM is important, identity is not the primary purpose certificates serve in the real world. At least that is not how they are used or why they are purchased.
"example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.
>This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There are plenty of IaaS/PaaS services that support Let's Encrypt and are orders of magnitude smaller than those hyperscalers.
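(For reference, the trivial setup on a typical box is roughly two commands; this assumes nginx on a Debian-ish system and a placeholder domain:)

    apt install certbot python3-certbot-nginx
    certbot --nginx -d www.example.com   # obtains the cert, wires it into nginx, sets up auto-renewal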
msie 1 hours ago [-]
Eff this shit. I'm getting out of sysadmin.
CommanderData 12 hours ago [-]
Why bother with such a long staggered approach?
There should be one change, straight from 365 to 47 days. This industry doesn't need constant changes; the cut will force everyone to automate renewals anyway.
datadrivenangel 6 hours ago [-]
Enterprises are like lobsters: You gotta crank the water temperature up slowly.
throw0101b 1 days ago [-]
Justification:
> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.
gruez 1 days ago [-]
The "less trustworthy" refers to key compromise, not the e-shop going rogue and start scamming customers or whatever.
throw0101a 1 days ago [-]
Okay, the key is compromised: that means they can MITM the trust relationship. But with modern algorithms you have forward secrecy, so even if you've sniffed/captured the traffic it doesn't help.
And I would argue that MITMing communications is a lot harder for (non-nation state) attackers than compromising a host, so trust compromise is a questionable worry.
gruez 1 days ago [-]
>And I would argue that MITMing communications is a lot harder for (non-nation state) attackers than compromising a host, so trust compromise is a questionable worry.
By that logic, we don't really need certificates, just TOFU.
throw0101d 1 days ago [-]
> By that logic, we don't really need certificates, just TOFU.
It works fairly well for SSH, but that tends to be a more technical audience. And doing an "Always trust" or "Always accept" is a valid option in many cases (often for internal apps).
tptacek 1 days ago [-]
It does not work well for SSH. We just don't care about how badly it works.
throw0101d 1 days ago [-]
> It does not work well for SSH. We just don't care about how badly it works.
How "should" it work? Is there a known-better way?
tptacek 1 days ago [-]
Yes: SSH certificates. (They're unrelated to X509 certificates and the WebPKI).
throw0101d 1 days ago [-]
> Yes: SSH certificates. (They're unrelated to X509 certificates and the WebPKI).
I am aware of them.
As someone in the academic sphere, with researchers SSHing into (e.g.) HPC clusters, this solves nothing for me from the perspective of clients trusting servers. Perhaps it's useful in a corporate environment where the deployment/MDM can place the CA in the appropriate place, but not with BYOD.
Issuing CAs to users, especially if they expire is another thing. From a UX perspective, we can tie password credentials to things like on-site Wifi and web site access (e.g., support wiki).
So SSH certs certainly have use-cases, and I'm happy they work for people, but TOFU is still the most useful in the waters I swim in.
tptacek 1 days ago [-]
I don't know what to tell you. The problem with TOFU is obvious: the FU. The FU happens more often than people think it does (every time you log in from a new or reprovisioned workstation) and you're vulnerable every time. I don't really care what you do for SSH (we use certificates) but this is not a workable model for TLS, where FUs are the norm.
throw0101d 1 days ago [-]
> I don't really care what you do for SSH (we use certificates) but this is not a workable model for TLS, where FUs are the norm.
It was suggested by someone else: I commented TOFU works for SSH, but is probably not as useful for web-y stuff (except for maybe small in-house stuff).
Personally I'm somewhat sad that opportunistic encryption for the web never really took off: if folks connect on 80, redirect to 443 if you have certs 'properly' set up, but even if not, do an "Upgrade" or something to move to HTTPS. Don't necessarily indicate things are "secure" (with the little icon), but scramble the bits anyway: no false sense of security, but make it harder for tapping glass in bulk.
xurukefi 1 hours ago [-]
Nobody forces you to change your key for renewals.
aaomidi 1 days ago [-]
Good.
If you can't make this happen, don't use WebPKI and use internal PKI.
riffic 2 hours ago [-]
ah this is gonna piss off a few coworkers today but it's a good move either way.
ocdtrekkie 2 days ago [-]
It's hard to express how absolutely catastrophic this is for the Internet, and how incompetent a group of people have to be to vote 25/0 for increasing a problem that breaks the Internet for many organizations yearly by a factor of ten for zero appreciable security benefit.
Everyone in the CA/B should be fired from their respective employers, and we honestly need to wholesale plan to dump PKI by 2029 if we can't get a resolution to this.
arp242 2 hours ago [-]
You can disagree with all of this, but calling for everyone involved to be fired is just ridiculous and mean-spirited.
dextercd 2 days ago [-]
CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.
I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.
With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.
xyzzy123 2 days ago [-]
Can you point to a specific security problem this change is actually solving? For example, can we attribute any major security compromises in the last 5 years to TLS certificate lifetime?
Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?
> CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.
dextercd 1 days ago [-]
If a CA or subscriber improves their security but had an undetected incident in the past, a hacker today has a 397 day cert and can reuse the domain control validation in the next 397 days, meaning they can MITM traffic for effectively 794 days.
CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still have valid certificates today and can request a new certificate without any domain control validation being performed in over a year.
New security standards should come into effect much faster. For fixes against attacks we know about today and new ones that are discovered and mitigated in the future.
sidewndr46 2 days ago [-]
According to the article:
"The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."
I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate.
xyzzy123 2 days ago [-]
> I'm not even sure what "outdated certificate data" could be...
Agree.
> According to the article:
Thanks, I did read that, it's not quite what I meant though. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and encourage the uptake of password managers and passkeys.
How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):
a) What are the benefits - not mom & apple pie and the virtues of purity but as brass tacks - e.g: how many account compromises do you believe would be prevented by this change and what is the annual cost of those? How is that trending?
b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?
I think I would have a harder time trying to justify the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic, but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.
The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal, for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud" because those don't make them any money.
dextercd 1 days ago [-]
"outdated certificate data" would be domains you no longer control. (Example would be a customer no longer points a DNS record at some service provider or domains that have changed ownership).
In the case of OV/EV certificates, it could also include the organisation's legal name, country/locality, registration number, etc.
Forcing people to change passwords increases the likelihood that they pick simpler, algorithmic password so they can remember them more easily, reducing security. That's not an issue with certificates/private keys.
Shorter lifetimes on certs is a net benefit. 47 days seems like a reasonable balance between not having bad certs stick around for too long and having enough time to fix issues when you detect that automatic renewal fails.
The fact that it encourages people to prioritise implementing automated renewals is also a good thing, but I understand that it's frustrating for those with bad software/hardware vendors.
ocdtrekkie 2 days ago [-]
It's actually far worse for smaller sites and organizations than large ones. Entire pricey platforms exist around managing certificates and renewals, and large companies can afford those or develop their own automated solutions.
None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.
Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.
And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.
Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.
dextercd 1 days ago [-]
I think most orgs can get away with free ACME clients and free/cheap monitoring options.
This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.
Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.
> Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
Representatives are not voting against the wishes/instructions of their employer.
ocdtrekkie 1 days ago [-]
I mean to give you an example of how far we are from this: IIS does not have built-in ACME support, and in the enterprise world it is basically "most web servers". Sure, you can add some third party thing off the Internet to do it, but... how many banks will trust that?
Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.
I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!
dextercd 1 days ago [-]
Let's Encrypt lists 10 ACME clients for Windows / IIS.
If an organisation ignores all those options, then I suppose they should keep doing it manually. But at the end of the day, that is a choice.
Maybe they'll reconsider now that the lifetime is going down or implement their own client if they're that scared of third party code.
Yeah, this will inconvenience some of the CA/B participant's customers. They knew that. It'll also make them and everyone else more secure. And that's what won out.
The idea that this change got voted in due to incompetence, malice, or lack of oversight from the companies represented on the CA/B forum is ridiculous to me.
ocdtrekkie 1 days ago [-]
> Let's Encrypt lists 10 ACME clients for Windows / IIS.
How many of those are first-party/vetted by Microsoft? I'm not sure you understand how enterprises or secure environments work, we can't just download whatever app someone found on the Internet that solves the issue.
dextercd 1 days ago [-]
No idea how many are first-party or vetted by Microsoft. Probably none of them. But I really, really doubt you can only run software that ticks one of those two boxes.
Certify The Web has a 'Microsoft Partner' badge. If that's something your org values, then they seem worth looking into for IIS.
I can find documentation online from Microsoft where they use YARP w/ LettuceEncrypt, Caddy, and cert-manager. Clearly Microsoft is not afraid to tell customers about how to use third party solutions.
Yes, these are not fully endorsed by Microsoft, so it's much harder to get approval for. If an organisation really makes it impossible, then they deserve the consequences of that. They're going to have problems with 397 day certificates as well. That shouldn't hold the rest of the industry back. We'd still be on 5 year certs by that logic.
ocdtrekkie 24 hours ago [-]
[flagged]
dextercd 22 hours ago [-]
Stealing a private key or getting a CA to misissue a certificate is hard. Then actually making use of this in a MITM attack is also difficult.
Still, oppressive states or hacked ISPs can perform these attacks on small scales (e.g. individual orgs/households) and go undetected.
For a technology the whole world depends on for secure communication, we shouldn't wait until we detect instances of this happening. Taking action to make these attacks harder, more expensive, and shorter lasting is being forward thinking.
Certificate Transparency and Multi-Perspective Issuance Corroboration are examples of innovations that improved security without bothering people.
Problem is, the benefits of these improvements are limited if attackers can keep using the stolen keys or misissued certificates for 5 years (plus potentially whatever the DCV reuse limit is).
Next time a DigiNotar, Debian weak keys, or heartbleed -like event happens, we'll be glad that these certs exit the ecosystem sooner rather than later.
I'm sure you have legit reasons to feel strongly about the topic and also that you have substantive points to make, but if you want to make them on HN, please make them thoughtfully. Your argument will be more convincing then, too, so it's in your interests to do so.
rcxdude 2 days ago [-]
A large part of why it breaks things is because it only happens yearly. If you rotate certs on a regular pace, you actually get good at it and it stops breaking, ever. (basically everything I've set up with letsencrypt has needed zero maintenance, for example)
ocdtrekkie 2 days ago [-]
So at a 47 day cadence, it's true we'll have to regularly maintain it: We'll need to hire another staff member to do nothing but that. (Most of the software we use does not support automated rotation yet. I assume some will due to this change, but certainly not 100%.)
And also, it probably won't avoid problems. Because yes, the goal is automation, and a couple weeks ago I was trying to access a site from an extremely large infrastructure security company which rotates their certificates every 24 hours. And their site was broken, and the subreddit about their company was full of complaints about it. Turns out automated daily rotation just means 365 more opportunities for breakage a year.
Even regular processes break, and now we're multiplying the breaking points... and again, at no real security benefit. There’s like... never ever been a case where a certificate leak caused a breach.
Avamander 2 hours ago [-]
> So at a 47 day cadence, it's true we'll have to regularly maintain it: We'll need to hire another staff member to constantly do nothing but. (Most of the software we use does not support automated rotation yet. I assume some will due to this change, but certainly not 100%.)
This is fundamentally a skill issue. If a human can replace the certificate, so can a machine. Write a script.
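Something like this minimal sketch covers the common case (the hostname, threshold, and renew command are placeholders, not a claim about any particular stack):

    # Check how many days the served cert has left and kick off a renewal
    # when it gets low. Hostname, threshold, and renew command are
    # assumptions; substitute whatever your environment actually uses.
    import datetime
    import socket
    import ssl
    import subprocess

    HOST = "www.example.com"                      # hypothetical host
    RENEW_BELOW_DAYS = 15                         # plenty of margin on a 47-day cert
    RENEW_CMD = ["certbot", "renew", "--quiet"]   # or whatever ACME client you run

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            not_after = tls.getpeercert()["notAfter"]

    expires = datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(not_after))
    days_left = (expires - datetime.datetime.utcnow()).days
    print(f"{HOST}: certificate expires in {days_left} days")

    if days_left < RENEW_BELOW_DAYS:
        subprocess.run(RENEW_CMD, check=True)

Run it from cron or a systemd timer and alert on non-zero exit, and you have the monitoring half for free.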
belter 1 days ago [-]
Are the 47 days to please the current US Administration?
eesmith 1 days ago [-]
Based on the linked-to page, no:
47 days might seem like an arbitrary number, but it’s a simple cascade:
* 200 days = 6 maximal months (184 days) + 1/2 30-day month (15 days) + 1 day wiggle room
* 100 days = 3 maximal months (92 days) + ~1/4 30-day month (7 days) + 1 day wiggle room
* 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room
vasilzhigilei 3 hours ago [-]
I'm on the SSL/TLS team @ Cloudflare. We have great managed certificate products that folks should consider using as certificate validity periods continue to shorten.
bambax 3 hours ago [-]
Simply having a domain managed by Cloudflare makes it magically https; yes, the traffic between the origin server and Cloudflare isn't encrypted, so it's not completely "secure", but for most uses it's good enough. It's also zero-maintenance and free.
Keep up the good work! ;-)
vasilzhigilei 22 minutes ago [-]
Thanks! You can also set up free origin certs to make Cloudflare edge to origin connections encrypted as well.
Lammy 2 hours ago [-]
SSL added and removed here ;-)
lucb1e 58 minutes ago [-]
Is this a joke (as in, that you don't actually work there) to make CF look bad for posting product advertisements in comment threads, or is this legit?
vasilzhigilei 25 minutes ago [-]
It's one of my first times posting on HN, thought this could be relevant helpful info for someone. Thanks for pointing out that it sounds salesy, rereading my comment I see it too now.
Sounds like your concept of the customer/provider relationship is inverted here.
The whole "customer is king" doesn't apply to something as critical as PKI infrastructure, because it would compromise the safety of the entire internet. Any CA not properly applying the rules will be removed from the trust stores, so there can be no exceptions for companies who believe they are too important to adhere to the contract they signed.
> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.
(I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other periodic operational task: critical processes must be continually tested and evaluated for correctness.)
That is the next step in nation state tapping of the internet.
A MITM cert would need to be manually trusted, which is a completely different thing.
the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.
it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.
It's been a huge pain, as we have encountered a ton of bugs and missing features in libraries and applications to reload certs like this. And we have some really ugly workarounds in place, because some applications put "reload a consul client" on the same level as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client throwing a few parameters at a standard http client. But oh well.
But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?
In the hackiest of setups, it's a few commands to generate a CA and issue a wildcard certificate for everything. Then a single line in the bootstrap script or documentation for new devices to trust the CA and you're done.
Going a few steps further, setting up something like Hashicorp Vault is not hard and regardless of org size; you need to do secret distribution somehow.
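For the hacky version, here's a rough sketch of what generating a CA plus a wildcard cert looks like with Python's cryptography library (the names, domain, and lifetimes are invented, and a real setup still needs to store the CA key somewhere safe):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    now = datetime.datetime.utcnow()

    def name(cn):
        return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

    # Self-signed CA certificate: this is what you trust on your devices.
    ca_key = ec.generate_private_key(ec.SECP256R1())
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(name("Example Internal CA"))          # hypothetical name
        .issuer_name(name("Example Internal CA"))
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    # Wildcard leaf certificate signed by that CA.
    leaf_key = ec.generate_private_key(ec.SECP256R1())
    leaf_cert = (
        x509.CertificateBuilder()
        .subject_name(name("*.lan.example"))                 # hypothetical domain
        .issuer_name(ca_cert.subject)
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("*.lan.example")]),
                       critical=False)
        .sign(ca_key, hashes.SHA256())
    )

    # Write out the CA cert (to distribute), plus the leaf cert and key.
    for path, cert in [("ca.pem", ca_cert), ("wildcard.pem", leaf_cert)]:
        with open(path, "wb") as f:
            f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("wildcard.key", "wb") as f:
        f.write(leaf_key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        ))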
My dad still calls my terminals a "DOS window" and doesn't understand why I don't use GUIs like a normal person. He has his own business. He absolutely cannot just roll out a CA for secure comms with his local printer or whatever. He literally calls me to help with buying a PDF reader
Myself, I'm employed at a small business and we're all as tech savvy as it gets. It took me several days to set it up on secure hardware (smartcard, figuring out compatibility and broken documentation), making sure I understand what all the options do and that it's secure for years to come and whatnot, working out what the procedure for issuing should be, etc. Eventually got it done, handed it over to the higher-up who gets to issue certs, distribute the CA cert to everyone... it's never used. We have a wiki page with TLS and SSH fingerprints
This is fair. I assumed all small businesses would be tech startups, haha.
Paying experts (Ed: to set up internal infrastructure) is a perfectly viable option, so the only real question is the amount of effort involved, not whether random people know how to do something.
Congrats for securing your job by selling the free internet and your soul.
If someone doesn’t want to learn then nobody needs to help them for free.
Perhaps spend some time outside your bubble? I’ve read many of your comments and you just do seem to be caught in your own little world. “Out of touch” is apt and you should probably reflect on that at length.
Not to mention the massive undertaking that even just maintaining a multi-platform chromium fork is.
The nightmare only intensifies for small businesses that allow their users to bring their own devices (yes, yes, sacrilege but that is how small businesses operate).
Not everything is a massive enterprise with an army of IT support personnel.
For internal web services I could use just Let's Encrypt but I need to deploy the client certs anyways for network access and I might as well just use my internal cert for everything.
Or cheat and use tailscale to do the whole thing.
Having just implemented an internal CA, I can assure you, most corporations can’t just run an internal CA. Some struggle to update containers and tie their shoe laces.
https://www.digicert.com/blog/https-only-features-in-browser...
A static page that hosts documentation on an internal network does not need encryption.
The added overhead of certificate maintenance (and investigating when it does and will break) is simply not worth the added cost.
Of course the workaround most shops do nowadays is just hide the HTTP servers behind a load balancer doing SSL termination with a wildcard cert. An added layer of complexity (and now single point of failure) just to appease the WebPKI crybabies.
Just about every web server these days supports ACME -- some natively, some via scripts, and you can set up your own internal CA using something like step-ca that speaks ACME if you don't want your certs going out to the transparency log.
The last few companies I've worked at had no http behind the scenes -- everything, including service-to-service communications was handled via https. It's a hard requirement for just about everything financial, healthcare, and sensitive these days.
[proceeds to describe a bunch of new infrastructure and automation you need to set up and monitor]
So when ACME breaks - which it will, because it's not foolproof - the server securely hosting the cafeteria menus is now inaccessible, instead of being susceptible to interception or modification in transit. Because the guy that has owned your core switches is most concerned that everyone will be eating taco salad every day.
Someone that has seized control of your core network such that they were capable of modifying traffic, is not going to waste precious time or access modifying the flags of ls on your man page server. They will focus on more valuable things.
Just because something is possible in theory doesn't make it likely or worth the time invested.
You can put 8 locks on the door to your house but most people suffice with just one.
Someone could remove a piece of mail from your unlocked rural mailbox, modify it and put it back. Do you trust the mail carrier as much as the security of your internal network?
But it's not really a concern worth investing resources into for most.
Ah, the "both me and my attackers agree on what's important" fallacy.
What if they modify the man page response to include drive-by malware?
There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value from doing this. Building out or expanding my own PKI for my company, or setting up the infrastructure to integrate with Digicert or whomever, gets me zero security and business value, just cost and toil.
Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.
The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.
Non-browser things usually don't care even if the cert is expired or untrusted.
So I expect people still to use WebPKI for internal sites.
Why would browsers "most likely" enforce this change for internal CAs as well?
That said, it would be really nice if they supported DANE so that websites do not need CAs.
Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.
https://smallstep.com/docs/step-ca/
'ipa-client-install' for those so motivated. Certificates are literally one among many things part of your domain services.
If you're at the scale past what IPA/your domain can manage, well, c'est la vie.
I think folks are being facetious wanting more for 'free'. The solutions have been available for literal decades, I was deliberate in my choice.
Not the average, certainly the majority where I've worked. There are at least two well-known Clouds that enroll their hypervisors to a domain. I'll let you guess which.
My point is, the difficulty is chosen... and 'No choice is a choice'. I don't care which, that's not my concern. The domain is one of those external things you can choose. Not just some VC toy. I won't stop you.
The devices are already managed; you've deployed them to your fleet.
No need to be so generous to their feigned incompetence. Want an internal CA? Managing that's the price. Good news: they buy!
Don't complain to me about 'your' choices. Self-selected problem if I've heard one.
Aside from all of this, if your org is being hung up on enrollment... I'm not sure you're ready for key management. Or the other work being a CA actually requires.
Yes, it's more work. Such is life and adding requirements. Trends - again, for decades - show organizations are generally able to manage with something.
Literal Clouds do this, why can't 'you'?
You're managing your machine deployments with something, so of course you just use that to include your cert, which isn't particularly hard. But there's a long tail of annoying work when dealing with containers and VMs you aren't building yourself, like k8s node pools. It can be done, but it's usually less effort to just get public certs for everything.
To your point, people don't, but it's a perfectly viable path.
Containers/kubernetes, that's pipeline city, baby!
The closest thing is maybe described (but not shown) in these posts: https://blog.daknob.net/workload-mtls-with-acme/ https://blog.daknob.net/acme-end-user-client-certificates/
(disclamer: i'm a founder at anchor.dev)
Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.
So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.
Fun times...
For the other case, perhaps renew the cert at a host that is allowed to make outside queries for the DNS challenge, and find some acceptable automated way to propagate the updated cert to the host that isn't allowed outside queries.
Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.
Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in certificate transparency logs.
That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.
Key loss on one of those is like a takeover of an entire chunk of hostnames. Really opens you up.
And in terms of security, I think it is a double-edged sword:
- everyone will be so used to certificates changing all the time, with no certificate pinning anymore, that the day China, a company, or whoever serves you a fake certificate, you will be less able to notice it
- Instead of closed, read-only systems that connect outside only once a year or so to update their certificates, all machines around the world will now have to allow quasi-permanent connections to certificate servers to keep updating all the time. If the DigiCert or Let's Encrypt servers, or the "cert updating client", are ever rooted or have a security issue, most servers around the world could be compromised in a very, very short time.
As a side note, I'm totally laughing at the following explanation in the article:
So, 47 is not arbitrary, but 1 month + 1/2 month + 1 day are somehow not arbitrary values?
I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority—the rest of us just rely on the automated systems to do a better job at security than we ever could.
At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.
Like, a private CA? All of these restrictions only apply to certificates issued under the WebTrust program. Your private CA can still issue 100-year certificates.
Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:
> [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous
From http://crypto.stackexchange.com/questions/16364/why-do-nothi...
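As a toy sketch of the quoted idea (the "backdoorable" predicate here is a made-up stand-in for whatever criterion a real attacker would actually care about):

    # Hash "plausible" seed strings until one produces a constant satisfying a
    # secret criterion; the seed then looks innocuous, the constant doesn't.
    import hashlib

    def backdoorable(constant: bytes) -> bool:
        # Toy criterion: ~1 in 256 constants "work" for the attacker.
        return constant[0] == 0x00

    plausible_seeds = (
        f"{text} {n}"
        for text in ("In the beginning", "To be or not to be", "E = mc^2")
        for n in range(100000)
    )

    for seed in plausible_seeds:
        constant = hashlib.sha256(seed.encode()).digest()
        if backdoorable(constant):
            print(f"innocuous-looking seed: {seed!r} -> constant {constant.hex()[:16]}...")
            break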
Browser certificate pinning has been deprecated since 2018. No current browsers support HPKP.
There are alternatives to pinning: DNS CAA records, monitoring CT logs.
For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.
Pet peeve.
There is the web as it always has been on http/1.1: a hyperlinked set of html documents hosted on a mishmash of random commercial and personal servers. Then there is the modern http/2, http/3, CA-TLS-only web, hosted as a service on some other website or cloud, mostly to do serious business and make money. The modern web's CA TLS-only ID scheme is required due to the complexity and risk of automatic javascript execution in browsers.
I wish we could have browsers that could support both use cases. But we can't because there's too much money and private information bouncing around now. Can't be whimsical, can't 'vibe code' the web ID system (ie, self signed not feasible in HTTP/3). It's all gotta be super serious. For everyone. And that means bringing in a lot of (well hidden by acme2 clients) complexity and overhead and centralization (everyone uses benevolent US based Lets Encrypt). This progressive lowering of the cert lifetimes is making the HTTP-only web even more fragile and hard to create lasting sites on. And that's sad.
TOFU works for the old web just great. It's completely incompatible with the modern web because major browsers will only ever compile their HTTP/* libs with flags that prevent TOFU and self-signed. You could host a http/1.1 self-signed and TOFU but everyone (except geeks) would be scared away or incapable of loading it.
So, TOFU works if you just want to do something like the "gemini" protocol, but instead of a new protocol just stick to original http and have a demographic of retro-enthusiasts and poor people. It's just about as accessible as gemini for most people (ie, not very) except for two differences. 1. Bots still love http/1.1 and don't care if it's plain text. 2. There's still a giant web of http/1.1 websites out there.
Which for some threat models is sufficiently good.
It's worth pointing out that MITM is also the dominant practical threat on the Internet: you're far more likely to face a MITM attacker, even from a state-sponsored adversary, than you are a fiber tap. Obviously, TLS deals with both adversaries. But altering the security affordances of TLS to get a configuration of the protocol that only deals with the fiber tap is pretty silly.
It’s how I know what my kids are up to.
It’s possible because I installed a trusted cert in their browsers, and added it to the listening program in their router.
Identity really is security.
More importantly - this debate gets raised in every single HN post related to TLS or CAs. Answering with "my threat model is better than yours", or claiming my threat model is incorrect, is even more silly than offering a configuration of TLS without authenticity. Maybe if we had invested more effort in 802.1X and IPSec then we would get those same guarantees that TLS offers, but for all traffic and for free everywhere, with no need for CA shenanigans or shortening lifetimes. Maybe in that alternative world we would be arguing about whether nonrepudiation is a valuable property or not.
So no, IPSec couldn't have fixed the MITM issue without requiring a CA or some equivalent.
Not "never", because of HSTS preload, and browsers slowly adding scary warnings to plaintext connections.
https://preview.redd.it/1l4h9e72vp981.jpg?width=640&crop=sma...
However, ECH relies on a trusted 3rd party to provide the key of the server you are intending to talk to. So, it won't work if you have no way of authenticating the server beforehand the way GP was thinking about.
ECH gets the key from the DNS, and there's no real authentication for this data (DNSSEC is rare and is not checked by the browser). See S 10.2 [0] for why this is reasonable.
[0] https://tlswg.org/draft-ietf-tls-esni/draft-ietf-tls-esni.ht...
Safari did some half measures starting in Safari 15 (don't know the year) and now fully defaults to https first.
Firefox 136 (2025) now does https first as well.
With an intact trust chain, there is NO scenario where a 3rd party can see or modify what the client requests and receives beyond seeing the hostname being requested (and not even that if using ECH/ESNI)
Your "if you don't have an out-of-band reason to trust the server cert" is a fitting description of the global PKI infrastructure, can you explain why you see that as a problem? Apart from the fact that our OSes and browser ship out of the box with a scary long list of trusted CAs, some from fairly dodgy places?
let's not forget that BEFORE that TCP handshake there's probably a DNS lookup where the FQDN of the request is leaked, if you don't have DoH.
*I mistakenly wrote "certificate" here initially. Sorry.
Undoubtedly it is not best practice to lean on TOFU for good reason, but there are simply some lower stakes situations where engaging the CA system is a bit overkill. These are systems with few nodes (maybe just one) that have few users (maybe just one.) I have some services that I deploy that really only warrant a single node as HA is not a concern and they can easily run off a single box (modern cheap VPSes really don't sweat handling ~10-100 RPS of traffic.) For those, I pre-generate SSH server keys before deployment. I can easily verify the fingerprint in the excessively rare occasion it isn't already trusted. I am not a security expert, but I think this is sufficient at small scales.
To be clear, there are a lot of obvious security problems with this:
- It relies on me actually checking the fingerprint.
- SSH keys are valid and trusted indefinitely, so they have to be rotated manually.
- The bootstrap process inevitably involves the key being transmitted over the wire, which isn't as good as never having the key go over the wire, like you could do with CSRs.
This is clearly not good enough for a service that needs high assurance against attackers, but I honestly think it's largely fine for a small to medium web server that serves some small community. Spinning up a CA setup for that feels like overkill.
As for what I personally would do instead for a fleet of servers, personally I think I wouldn't use SSH at all. In professional environments it's been a long time since I've administered something that wasn't "cloud" and in most of those cloud environments SSH was simply not enabled or used, or if it was we were using an external authorization system that handled ephemeral keys itself.
That said, here I'm just suggesting that I think there is a gap between insecure HTTP and secure HTTPS that is currently filled by self-signed certificates. I'm not suggesting we should replace HTTPS usage today with TOFU, but I am suggesting I see the value in a middle road between HTTP and HTTPS where you get encryption without a strong proof of what you're connecting to. In practice this is sometimes the best you can really get anyway: consider the somewhat common use case of a home router configuration page. I personally see the value in still encrypting this connection even if there is no way to actually ensure it is secure. Same for some other small scale local networking and intranet use cases.
In practice, this means that it's way easier to just use unencrypted HTTP, which is strictly worse in every way. I think that is suboptimal.
A self-signed certificate has the benefit of being treated as a secure origin, but that's it. Sometimes you don't even care about that and just want the encryption. That's pretty much where this argument all comes from.
https://self-signed.badssl.com/
and when I clicked "Accept the risk and continue", the certificate was added to Certificate Manager. I closed the browser, re-opened it, and it did not prompt again.
I did the same thing in Chromium and it also worked, though I'm not sure if Chromium's are permanent or if they have a lifespan of any kind.
I am absolutely 100% certain that it did not always work that way. I remember a time when Firefox had an option to permanently add an exception, but it was not the default.
Either way, apologies for the misunderstanding. I genuinely did not realize that it worked this way, and it runs contrary to my previous experience dealing with self-signed certificates.
To be honest, this mostly resolves the issues I've had with self-signed certificates for use cases where getting a valid certificate might be a pain. (I have instead been using ACME with DNS challenge for some cases, but I don't like broadcasting all of my internal domains to the CT log nor do I really want to manage a CA. In some cases it might be nice to not have a valid internet domain at all. So, this might just be a better alternative in some cases...)
TOFU on ssh server keys... it's still bad, but fewer people are interested in intercepting ssh than tls.
Also, I agree that TOFU in its own is certainly worse than having robust verification via the CA system. OTOH, SSH-style TOFU has some advantages over the CA system, too, at least without additional measures like HSTS and certificate pinning. If you are administering machines that you yourself set up, there is little reason to bother with anything more than TOFU because you'll cache the key shortly after the machine is set up and then get warned if a MITM is attempted. That, IMO, is the exact sort of argument in favor of having an "insecure but encrypted" sort of option for the web; small scale cases where you can just verify the key manually if you need to.
Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.
Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.
For example, TOFU where “first use” is a loopback ethernet cable between the two machines is stronger than a trust anchor.
Alternatively, you could manually verify + pin certs after first use.
The thing to think about in comparing SSH to TLS is how frequent counterparty introductions are. New counterparties in SSH are relatively rare. Key continuity still needlessly exposes you to a grave attack in SSH, but really all cryptographic protocol attacks are rare compared to the simpler, more effective stuff like phishing, so it doesn't matter. New counterparties in TLS happen all the time; continuity doesn't make any sense there.
On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.
But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.
Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.
The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.
That's an important practical distinction that's overlooked by security bozos.
You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.
OK I will fess up. The truth is that I don't spend a lot of time in coffee shops but I do have a ton of crap on my LAN that demands high amounts of fiddle faddle so that the other regular people in my house can access stuff without dire certificate warnings, the severity of which seems to escalate every year.
Like, yes, I eat vegetables and brush my teeth and I understand why browsers do the things they do. It's just that neither I nor my users care in this particular case, our threat model does not really include the mossad doing mossad things to our movie server.
Alternatively, I would suggest letsencrypt with DNS verification. Little bit of setup work, but low maintenance work and zero effort on clients.
1. Wire up LetsEncrypt certs for things running on your LAN, and all the "dire certificate warnings" go away.
2. Run a local ACME service, wire up ACME clients to point to that, make your private CA valid for 100 years, trust your private CA on the devices of the Regular People in your house.
I did this dance a while back, and things like acme.sh have plugins for everything from my Unifi gear to my network printer. If you're running a bunch of servers on your LAN, the added effort of having certs is tiny by comparison.
Yes I am being snarky - network level MITM resistance is wonderful infrastructure and CT is great too.
If we encrypt everything we don't need AuthN/Z.
Encrypt locally to the target PK. Post a link to the data.
Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.
However, we could use some form of Certificate Transparency that would somehow work with DANE.
Also it still protects you from everyone who isn't your DNS provider, so it's valuable if you only need a medium level of security.
Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?
When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?
Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?
It's great to be environmentally conscious, but if reducing carbon emissions is your goal, complaining about this is a lot like saying that people shouldn't run marathons, because physical activity causes humans to exhale more CO2.
We are effectively talking about the entire world wide web generating multiple highly secure cryptographic key pairs every 47 days. That is a lot of CPU cycles.
Also you not picking up on the Futurama quote is disappointing.
It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.
In essence it brings a working method of revocation to WebPKI.
> but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.
Compared to a year?
That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.
On the other hand, anyone that owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.
The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.
In short: Don't lose your domain.
> Compared to a year?
Typically these kinds of things have an exponential dropoff, so most of the exploited folks would be soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h would make a material difference.
But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.
Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?
I tend to have secondary scripts that check if the cert in certbot's dir is newer than whatever is installed for a service, and if so install it. Some services prefer the cert in certain formats, some services want to be reloaded to pick up a new cert, etc., so I put that glue in my own script and run it from cron or a systemd timer.
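For what it's worth, a minimal sketch of that kind of glue (every path and the reload command below are placeholders for whatever your services actually use):

    # Compare the certificate the ACME client last wrote with the copy a
    # service actually uses; install and reload only when it changed.
    import filecmp
    import shutil
    import subprocess

    SRC = "/etc/letsencrypt/live/example.com/fullchain.pem"   # hypothetical
    DST = "/etc/myservice/tls/fullchain.pem"                  # hypothetical
    RELOAD = ["systemctl", "reload", "myservice"]             # hypothetical unit

    def install_if_changed() -> None:
        try:
            unchanged = filecmp.cmp(SRC, DST, shallow=False)
        except FileNotFoundError:
            unchanged = False
        if unchanged:
            return
        shutil.copy2(SRC, DST)
        subprocess.run(RELOAD, check=True)

    if __name__ == "__main__":
        install_if_changed()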
1. mobile apps.
2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making it even more of a security theater. Or hopefully, they'd rightly switch to CAA.
Health Systems love pinning certs, and we use an ALB with 90 day certs, they were always furious.
Every time I was like "we can't change it", and "you do trust the CA right?", absolute security theatre.
It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.
You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.
What would really be nice, but is unlikely to happen would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short term certificates from there. But if those are wide spread, they'd need to be short dated too, so you'd need to either pin the real CA or the public key and we're back to where we were.
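For illustration, the "constrained CA" idea corresponds to the X.509 name constraints extension; a sketch of the constraint itself with Python's cryptography library (the domain is a placeholder, and as noted above no public CA issues these to ordinary subscribers today):

    from cryptography import x509

    # Extensions a hypothetical name-constrained CA cert would carry:
    constrained_ca_extensions = [
        (x509.BasicConstraints(ca=True, path_length=0), True),   # may sign leaf certs only
        (x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("example.com")],     # hypothetical domain
            excluded_subtrees=None,
        ), True),                                                  # marked critical
    ]
    # These would be added to a CertificateBuilder via
    # builder.add_extension(ext, critical=flag) for ext, flag in constrained_ca_extensions.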
Yeah not sure about that one...
"Therefore, the Lunar Bureau of the United Nations Peace Keeping Force DataWatch has created the LINK, the Lunar Information Network Key. There are currently nine thousand, four hundred and two Boards on Luna; new Boards must be licensed before they can rent lasercable access. Every transaction--every single transaction--which takes place in the Lunar InfoNet is keyed and tracked on an item-by-item basis. The basis of this unprecedented degree of InfoNet security is the Lunar Information Network Key. The Key is an unbreakable encryption device which the DataWatch employs to validate and track every user in the Lunar InfoNet. Webdancers attempting unauthorized access, to logic, to data, to communications facilities, will be punished to the full extent of the law."
from The Long Run (1989)
Your browser won't access a site without TLS; this is for your own protection. TLS certificates are valid for one TCP session. All certs are issued by an organization reporting directly to a national information security office; if your website isn't in compliance with all mandates, you stop getting certs.
I've read the basics on Cloudflare's blog and MDN. But at my job, I ran into a need to upload a Let's Encrypt public cert to the client's trusted store. Then I had to choose between Let's Encrypt's root and intermediate certs, and between the RSA and ECDSA key types. I made it work, but it would be good to have an idea of what I'm doing. For example, why did the RSA root work even though my server uses an ECDSA cert? Before I added the root cert to a trusted store, clients used to add fullchain.pem from the server and it worked too — why?
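In case it helps, here's a small sketch that dumps what's actually in a PEM bundle like fullchain.pem (Python with the third-party cryptography package, 39.0+ for load_pem_x509_certificates). Seeing each cert's subject, issuer, and key type makes the root/intermediate and RSA/ECDSA questions more concrete:

    # print subject, issuer, and key type for every certificate in a PEM bundle
    import sys
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    for cert in x509.load_pem_x509_certificates(open(sys.argv[1], "rb").read()):
        key = cert.public_key()
        if isinstance(key, ec.EllipticCurvePublicKey):
            kind = "ECDSA"
        elif isinstance(key, rsa.RSAPublicKey):
            kind = "RSA"
        else:
            kind = type(key).__name__
        print("subject:", cert.subject.rfc4514_string(), f"({kind} key)")
        print("issuer: ", cert.issuer.rfc4514_string())
        print()

A chain can mix key types: an ECDSA leaf can chain up (via cross-signing) to an RSA root, which is why trusting an RSA root can still validate an ECDSA server cert.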
- If you're looking for a concise (yet complete) guide: https://www.feistyduck.com/library/bulletproof-tls-guide/
- OpenSSL Cookbook is a free ebook: https://www.feistyduck.com/library/openssl-cookbook/
- SSL/TLS and PKI history: https://www.feistyduck.com/ssl-tls-and-pki-history/
- Newsletter: https://www.feistyduck.com/newsletter/
- If you're looking for something comprehensive and longer, try my book Bulletproof TLS and PKI: https://www.feistyduck.com/books/bulletproof-tls-and-pki/
If the information is relatively unchanged and the details well documented why not ask questions to fill in the gaps?
The Socratic method has been the best learning tool for me and I'm doubling my understanding with the LLMs.
No idea why the RSA root worked even though the server used ECDSA — maybe check into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.
https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...
I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.
It also lowers the amount of time it'd take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn't 2025.)
The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)
(Edit because I'm posting too fast, for the reply):
> How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?
Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.
Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?
I could probably have done more with Let's Encrypt automation to stay with my old VPS, but given that all my professional work is with AWS, it's really less mental work to drop my old VPS.
Times they are a changing
Or just pay Amazon, I guess. Easier than thinking.
CAs and web PKI are a bad joke. There's too many ways to compromise security, there's too many ways to break otherwise-valid web sites/apps/connections, there's too many organizations that can be tampered with, the whole process is too complex and bug-prone.
What Web PKI actually does, in a nutshell, is prove cryptographically that at some point in the past, there was somebody who had control of either A) an e-mail address or B) a DNS record or C) some IP space or D) some other thing, and they generated a certificate through any of these methods with one of hundreds of organizations. OR it proves that they stole the keys of such a person.
It doesn't prove that who you're communicating with right now is who they say they are. It only proves that it's someone who, at some point, got privileged access to something relating to a domain.
That's not what we actually want. What we actually want is to be assured this remote host we're talking to now is genuine, and to keep our communication secret and safe. There are other ways to do that, that aren't as convoluted and vulnerable as the above. We don't have to twist ourselves into all these knots.
I'm hopeful changes like these will result in a gradual catastrophe which will push the industry to actually adopt simpler, saner, more secure solutions. I proposed one years ago, but nobody cares because I'm just some guy on the internet and not a company with a big name. Nothing will change until the people with all the money and power make it happen, and they don't give a shit.
Do people really backup their https certificates? Can't you generate a new one after restoring from backup?
It goes from a "rather nice to have" to "effectively mandatory".
Dev guys think everything is solvable via code, but hardware guys know this isn't true. Hardware is stuck in fixed lifecycles, and firmware is not updated by vendors unless it has to be - and in many cases updated poorly. No hardware I've ever come across that supports SSL/TLS (and most do nowadays) offers any automation capability for updating certs. In most cases, certs are manually - and painfully - updated with esoteric CLI cantrips that require dancing while chanting to some ancient I.T. god for mercy, because the process is poorly (if at all) documented and often broken. No API call or middleware is going to solve that problem unless the manufacturer puts it in.

In particular, load balancers are some of the worst at cert management, and remember that not everyone uses F5 - there are tons of other cheaper and popular alternatives, most of which are atrocious at security configuration management. It's already painful enough to manage certs in an enterprise, and this 47-day lifecycle is going to break things. Hardware vendors are simply incompetent and slow to adapt to security changes. And not everyone is 100% in the cloud - most enterprises are only partially in that pool.
Perhaps the new requirements will give them additional incentives.
I've done this and it works very well. I had a Digital Ocean droplet so used their DNS service for the challenge domain.
https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
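For anyone curious what the DNS-01 glue can look like, here's a rough sketch of a certbot --manual-auth-hook against the DigitalOcean DNS API (stdlib only; the token env var is my own convention, the endpoint/payload should be double-checked against DO's docs, it assumes the cert domain is itself the DO zone, and it skips record cleanup):

    # certbot --manual-auth-hook sketch: publish the _acme-challenge TXT record via
    # DO's API, then wait so the record is visible before the CA checks it
    import json, os, time, urllib.request

    token = os.environ["DO_API_TOKEN"]             # assumed: exported before running certbot
    domain = os.environ["CERTBOT_DOMAIN"]          # set by certbot for the hook
    validation = os.environ["CERTBOT_VALIDATION"]  # set by certbot for the hook

    req = urllib.request.Request(
        f"https://api.digitalocean.com/v2/domains/{domain}/records",
        data=json.dumps({"type": "TXT", "name": "_acme-challenge",
                         "data": validation}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
    time.sleep(60)  # crude propagation wait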
It is a bit funny that LetsEncrypt has non-expiring private keys for their accounts.
It also occurred to me that there's nothing(?) preventing you from concurrently having n valid certificates for a particular hostname, so you could just enroll distinct certificates for each host. Provided the validation could be handled somehow.
The other option would maybe be doing DNS-based validation from a single orchestrator and then pushing that result onto the entire fleet.
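A sketch of that orchestrator push, assuming plain ssh access to each node (hostnames, paths, and the nginx reload are placeholders):

    # after the orchestrator renews via DNS-01, fan the cert out to the fleet
    import subprocess

    hosts = ["node1.example.com", "node2.example.com"]   # placeholder fleet
    files = ["/etc/letsencrypt/live/example.com/fullchain.pem",
             "/etc/letsencrypt/live/example.com/privkey.pem"]

    for host in hosts:
        subprocess.run(["scp", *files, f"root@{host}:/etc/nginx/tls/"], check=True)
        subprocess.run(["ssh", f"root@{host}", "systemctl", "reload", "nginx"], check=True)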
I use this to sync users between small, experimental cluster nodes.
Some notes I have taken: https://notes.bayindirh.io/notes/System+Administration/Synci...
> Get certificates for remote servers - The tokens used to provide validation of domain ownership, and the certificates themselves can be automatically copied to remote servers (via ssh, sftp or ftp for tokens). The script doesn't need to run on the server itself. This can be useful if you don't have access to run such scripts on the server itself, e.g. if it's a shared server.
* https://github.com/srvrco/getssl
Side note: I wonder how much pressure this puts on providers such as LetsEncrypt, especially with the move to validate IPs. And more specifically IPv6…
I don't disagree with you that it should be super common. But it's surprisingly not in many businesses. Heck, Okta (nominally a large security company) still sends out notifications every time they change certificates and publishes a copy of their current correct certs in github: https://github.com/okta/okta-pki - How they do the actual rotation? No idea, but... I'd guess it's not automatic with that level of manual notification/involvement. (Happy to be proven wrong though).
I fairly regularly get cert expired problems because the admin is doing it as the yak shaving for a secondary hobby
But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificate they mean for these lifetime limits?
I assume this is only server certs and not trust root and intermediate signing certs that would get such short lifetimes? It would be a mind boggling nightmare if they start requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.
To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...
A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.
Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.
You don't say. Why are the defaults already 90 days or less then?
90 days makes way more sense for the "average website" which handles members, has a back office exposed to the internet, and whatnot.
Why do you think all the average web sites have to handle members?
Forums? Nope. Blogging platforms? Nope. News sites? Nope. A Wordpress-powered personal page? Nope. Mailing lists with web-based management? Nope. They all have members.
What doesn't have members or users? Static webpages. How much of the web is a completely static web page? A negligible amount.
So most of the sites have much more to protect than meets the eye.
Neglecting the independent web is exactly what led to it dying out and the Internet becoming a corporate, algorithm-driven analytics machine. Making it harder to maintain your own independent website, one which does not rely on any 3rd party to host or update it, will just make fewer people bother.
Web is a bit different than you envision/think.
Why can't this site just upload HTML files to their web server?
> Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing...
Any non predatory practices you can add to the list?
I'm not a web developer, and I don't do anything similar on my pages, blog posts, whatever, so I don't know.
The only non-predatory way to do this is to be honest/transparent and not pull tricks on people.
However, I think A/B testing can be used in a non-predatory way in UI testing, by measuring negative feedback between two new versions, assuming that you genuinely don't know which version is better for the users.
1. Journalists shall be able to write new articles and publish them ASAP, possibly from remote locations.
2. Eyeball optimization: Different titles, cutting summaries where it piques interest most, some other A/B testing... So you need a data structure which can be modified non-destructively and autonomously.
Plus many more things, possibly. I love static webpages as much as the next small-web person, but we have small-web, because the web is not "small" anymore.
"When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."
As for certs... maybe at the start it was hard, but it's really quite easy to host things online, with a valid certificate. There are many CDN services like Cloudflare which will handle it for you. There are also application proxies like Traefik and Caddy which will get certs for you.
Most people who want their own site today, will use Kinsta or SquareSpace or GitHub pages any one of thousands of page/site hosting services. All of whom have a system for certificates that is so easy to use, most people don't even realize it is happening.
Every single thing you mentioned is plugged in to the tier-1 surveillance brokers. I am talking plain files on single server shoved in a closet, or cheap VPS. I don't often say this but I really don't think you “get” it.
Your attitude is so dismissive to the general public. We should be encouraging people to learn the little bits they want to learn to achieve something small, and instead we are building this ivory tower all-or-nothing stack. For what, job security? Bad mindset.
Oh yes, vendors will update their legacy NAS/IPMI/whatever to include certbot. This change will have the exact opposite effect - expired self signed certificates everywhere on the most critical infrastructure.
Nope. People will create self-signed certs and tell people to just click "accept".
They are purchased to provide encryption. Nobody checks the details of a cert and even if they did they wouldn't know what to look for in a counterfeit anyway.
This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
There are lots of issues with trust and social and business identities in general, but for the purpose of encryption, the problem can be simplified to checking of the host name (it's effectively an out of band async check that the destination you're talking to is the same destination that independent checks saw, so you know your connection hasn't been intercepted).
You can't have effective TLS encryption without verifying some identity, because you're encrypting data with a key that you negotiate with the recipient on the other end of the connection. If someone inserts themselves into the connection during key exchange, they will get the decryption key (key exchange is cleverly done that a passive eavesdropper can't get the key, but it can't protect against an active eavesdropper — other than by verifying the active participant is "trusted" in a cryptographic sense, not in a social sense).
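In Python's ssl module, that distinction is literally two settings (a sketch; the second context still encrypts the stream, it just no longer cares who it negotiated the keys with):

    import ssl

    # default: verify the chain and check the hostname; this is what defeats an
    # active man-in-the-middle during the key exchange
    verified = ssl.create_default_context()

    # "encryption without identity": the handshake still produces session keys,
    # but anyone sitting on the path could be the peer and you'd never know
    unauthenticated = ssl.create_default_context()
    unauthenticated.check_hostname = False
    unauthenticated.verify_mode = ssl.CERT_NONE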
This is where the disconnect comes in. Me and you know that the green icon doesn't prove identity. It proves certificate validity. But that's not what this is "sold as" by the browser or the security community as a whole. I can buy the domain Wаl-Mart right now and put a certificate on it that says Wаl-Mаrt and create the conditions for that little green icon to appear. Notice that I used U+0430 instead of the letter "a" that you're used to.
And guess what... The identity would match and pass every single test you throw at it. I would get a little green icon in the browser and my certificate would be good. This attack fools even the brightest security professionals.
So you see, Identity isn't the value that people expect from a certificate. It's the encryption.
Users will allow a fake cert with a green checkmark all day. But a valid certificate with a yellow warning is going to make people stop and think.
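The homograph point is easy to demonstrate (a small sketch; the two strings render almost identically but are different domains):

    import unicodedata

    real = "walmart.com"
    fake = "w\u0430lmart.com"  # U+0430 in place of "a"

    print(real == fake)               # False: different code points, different domain
    print(unicodedata.name(real[1]))  # LATIN SMALL LETTER A
    print(unicodedata.name(fake[1]))  # CYRILLIC SMALL LETTER A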
I care that when I type walmart.com, I'm actually talking to walmart.com. I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Preventing local DNS servers from fucking with users is critical, as local DNS is the weakest link in a typical setup. They're often run by parties that must be treated as hostile - basically whenever you're on public wifi. Or hell, when I'm using my own ISP's default configuration. I don't trust Comcast not to MitM my connection, given the opportunity. I trust technical controls to make their desire to do so irrelevant.
Without the identity component, any DNS server provided by DHCP could be setting up a MitM attack against absolutely everything. With the identity component, they're restricted to DoS. That's a lot easier to detect, and gets a lot of very loud complaints.
So no, nobody will ever look at a certificate.
When I look at them, as a security professional, I usually need to rediscover where the fuck they moved the cert details again in the browser.
I said exactly the words I meant.
> I don't look at the browser bar or symbols on it. I care what my bookmarks do, what URLs I grab from history do, what my open tabs do, and what happens when I type things in.
Without the identity component, I can't trust that those things I care about are insulated from local interference. With the identity component, I say it's fine to connect to random public wifi. Without it, it wouldn't be.
That's the relevant level. "Is it ok to connect to public wifi?" With identity validation, yes. Without, no.
You don’t mean “Walmart”, but 99% of the population thinks you do.
Is it OK to trust this for anything important? Probably not. Is it OK to type your credit card number in? Sure. You have fraud protection.
"example.com" is an identity just like "Stripe, Inc"[1]. Just because it doesn't have a drivers license or article of incorporation, doesn't mean it's not an identity.
[1] https://web.archive.org/web/20171222000208/https://stripe.ia...
>This is just another gatekeeping measure to make standing up, administering, and operating private infrastructure difficult. "Just use Google / AWS / Azure instead."
Certbot is trivial to set up yourself, and deploying it in production isn't so hard that you need to be "Google / AWS / Azure" to do it. There's plenty of IaaS/PaaS services that have letsencrypt, that are orders of magnitude smaller than those hyperscalers.
There should be one change, straight from 365 to 47 days. This industry doesn't need a drawn-out series of changes; the cut to 47 days will force everyone to automate renewals anyway.
> The ballot argues that shorter lifetimes are necessary for many reasons, the most prominent being this: The information in certificates is becoming steadily less trustworthy over time, a problem that can only be mitigated by frequently revalidating the information.
> The ballot also argues that the revocation system using CRLs and OCSP is unreliable. Indeed, browsers often ignore these features. The ballot has a long section on the failings of the certificate revocation system. Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.
Personally I don't really buy this argument. I don't think the web sites that most people visit (especially highly-sensitive ones like for e-mail, financial stuff, a good portion of shopping) change or become "less trustworthy" that quickly.
And I would argue that MITMing communications is a lot harder for (non-nation-state) attackers than compromising a host, so trust compromise is a questionable worry.
By that logic, we don't really need certificates, just TOFU.
It works fairly well for SSH, but that tends to be a more technical audience. But doing a "Always trust" or "Always accept" are valid options in many cases (often for internal apps).
How "should" it work? Is there a known-better way?
I am aware of them.
As someone in the academic sphere, with researchers SSHing into (e.g.) HPC clusters, this solves nothing for me from the perspective of clients trusting servers. Perhaps it's useful in a corporate environment where the deployment/MDM can place the CA in the appropriate place, but not with BYOD.
Issuing CAs to users, especially if they expire is another thing. From a UX perspective, we can tie password credentials to things like on-site Wifi and web site access (e.g., support wiki).
So SSH certs certainly have use-cases, and I'm happy they work for people, but TOFU is still the most useful in the waters I swim in.
It was suggested by someone else: I commented TOFU works for SSH, but is probably not as useful for web-y stuff (except for maybe small in-house stuff).
Personally I'm somewhat sad that opportunistic encryption for the web never really took off: if folks connect on 80, redirect to 443 if you have certs 'properly' set up, but even if not, do an "Upgrade" or something to move to HTTPS. Don't necessarily indicate things are "secure" (with the little icon), but scramble the bits anyway: no false sense of security, but make it harder to tap glass in bulk.
If you can't make this happen, don't use WebPKI and use internal PKI.
Everyone in the CA/B should be fired from their respective employers, and we honestly need to plan to wholesale dump PKI by 2029 if we can't get a resolution to this.
It's really not that hard to automate renewals and monitor a system's certificate status from a different system, just in case the automation breaks and for things that require manual renewal steps.
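A minimal sketch of that outside-in check (stdlib only; the host and alert threshold are placeholders):

    # warn when a live endpoint's cert is close to expiry, independent of whatever
    # automation is supposed to be renewing it
    import socket, ssl, time

    def days_left(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]
        return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

    remaining = days_left("example.com")
    if remaining < 14:  # placeholder threshold
        print(f"certificate expires in {remaining:.1f} days")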
I get that it's harder in large organisations and that not everything can be automated yet, but you still have a year before the certificate lifetime goes down to 200 days, which IMO is pretty conservative.
With a known timeline like this, customers/employees have ammunition to push their vendors/employers to invest into automation and monitoring.
Are the security benefits really worth making anything with a valid TLS certificate stop working if it is air-gapped or offline for 48 days?
> CAs and certificate consumers (browsers) voted in favour of this change. They didn't do this because they're incompetent but because they think it'll improve security.
They're not incompetent and they're not "evil", and this change does improve some things. But the companies behind the top level CA ecosystem have their own interests which might not always align with those of end users.
CAs have now implemented MPIC. This may have thwarted some attacks, but those attackers still have valid certificates today and can request a new certificate without any domain control validation being performed in over a year.
BGP hijackings have been uncovered in the last 5 years and MPIC does make this more difficult. https://en.wikipedia.org/wiki/BGP_hijacking
New security standards should come into effect much faster. For fixes against attacks we know about today and new ones that are discovered and mitigated in the future.
"The goal is to minimize risks from outdated certificate data, deprecated cryptographic algorithms, and prolonged exposure to compromised credentials. It also encourages companies and developers to utilize automation to renew and rotate TLS certificates, making it less likely that sites will be running on expired certificates."
I'm not even sure what "outdated certificate data" could be. The browser by default won't negotiate a connection with an expired certificate
Agree.
> According to the article:
Thanks, I did read that, it's not quite what I meant though. Suppose a security engineer at your company proposes that users should change their passwords every 49 days to "minimise prolonged exposure from compromised credentials" and encourage the uptake of password managers and passkeys.
How to respond to that? It seems a noble endeavour. To prioritise, you would want to know (at least):
a) What are the benefits - not mom & apple pie and the virtues of purity but as brass tacks - e.g: how many account compromises do you believe would be prevented by this change and what is the annual cost of those? How is that trending?
b) What are the cons? What's going to be the impact of this change on our customers? How will this affect our support costs? User retention?
I think I would have a harder time trying to justify the cert lifetime proposal than the "ridiculously frequent password changes" proposal. Sure, it's more hygienic, but I can't easily point to any major compromises in the past 5 years that would have been prevented by shorter certificate lifetimes. Whereas I could at least handwave in the direction of users who got "password stuffed" to justify ridiculously frequent password changes.
The analogy breaks down in a bad way when it comes to evaluating the cons. The groups proposing to decrease cert lifetimes bear nearly none of the costs of the proposal, for them it is externalised. They also have little to no interest in use cases that don't involve "big cloud" because those don't make them any money.
In the case of OV/EV certificates, it could also include the organisation's legal name, country/locality, registration number, etc.
Forcing people to change passwords increases the likelihood that they pick simpler, algorithmic password so they can remember them more easily, reducing security. That's not an issue with certificates/private keys.
Shorter lifetimes on certs is a net benefit. 47 days seems like a reasonable balance between not having bad certs stick around for too long and having enough time to fix issues when you detect that automatic renewal fails.
The fact that it encourages people to prioritise implementing automated renewals is also a good thing, but I understand that it's frustrating for those with bad software/hardware vendors.
None of the platforms which I deal with will likely magically support automated renewal in the next year. I will likely spend most of the next year reducing our exposure to PKI.
Smaller organizations dependent on off the shelf software will be killed by this. They'll probably be forced to move things to the waiting arms of the Big Tech cloud providers that voted for this. (Shocker.) And it probably won't help stop the bleeding.
And again, there's no real world security benefit. Nobody in the CA/B has ever discussed real world examples of threats this solves. Just increasingly niche theoretical ones. In a zero cost situation, improving theoretical security is good, but in a situation like this where the cost is real fragility to the Internet ecosystem, decisions like this need to be justified.
Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
This is a group of people who have hammers and think everything is a nail, and unfortunately, that includes a lot of ceramic and glass.
This will be painful for people in the short term, but in the long term I believe it will make things more automated, more secure, and less fragile.
Browsers are the ones pushing for this change. They wouldn't do it if they thought it would cause people to see more expired certificate warnings.
> Unfortunately the CA/B is essentially unchecked power, no individual corporate member is going to fire their representatives for this, much less is there a way to remove everyone that made this incredibly harmful decision.
Representatives are not voting against the wishes/instructions of their employer.
Unfortunately the problem is likely too removed from understanding for employers to care. Google and Microsoft do not realize how damaging the CA/B is, and probably take the word of their CA/B representatives that the choices that they are making are necessary and good.
I doubt Satya Nadella even knows what the CA/B is, much less that he pays an employee full-time to directly #### over his entire customer base and that this employee has nearly god-level control over the Internet. I have yet to see an announcement from the CA/B that represented a competent decision that reflected the reality of the security industry and business needs, and yet... nobody can get in trouble for it!
If an organisation ignores all those options, then I suppose they should keep doing it manually. But at the end of the day, that is a choice.
Maybe they'll reconsider now that the lifetime is going down or implement their own client if they're that scared of third party code.
Yeah, this will inconvenience some of the CA/B participants' customers. They knew that. It'll also make them and everyone else more secure. And that's what won out.
The idea that this change got voted in due to incompetence, malice, or lack of oversight from the companies represented on the CA/B forum is ridiculous to me.
How many of those are first-party/vetted by Microsoft? I'm not sure you understand how enterprises or secure environments work, we can't just download whatever app someone found on the Internet that solves the issue.
Certify The Web has a 'Microsoft Partner' badge. If that's something your org values, then they seem worth looking into for IIS.
I can find documentation online from Microsoft where they use YARP w/ LettuceEncrypt, Caddy, and cert-manager. Clearly Microsoft is not afraid to tell customers about how to use third party solutions.
Yes, these are not fully endorsed by Microsoft, so it's much harder to get approval for. If an organisation really makes it impossible, then they deserve the consequences of that. They're going to have problems with 397 day certificates as well. That shouldn't hold the rest of the industry back. We'd still be on 5 year certs by that logic.
Still, oppressive states or hacked ISPs can perform these attacks on small scales (e.g. individual orgs/households) and go undetected.
For a technology the whole world depends on for secure communication, we shouldn't wait until we detect instances of this happening. Taking action to make these attacks harder, more expensive, and shorter lasting is being forward thinking.
Certificate transparency and Multi-Perspective Issuance Corroboration are examples of innovations without bothering people.
Problem is, the benefits of these improvements are limited if attackers can keep using the stolen keys or misissued certificates for 5 years (plus potentially whatever the DCV reuse limit is).
Next time a DigiNotar, Debian weak keys, or heartbleed -like event happens, we'll be glad that these certs exit the ecosystem sooner rather than later.
I'm sure you have legit reasons to feel strongly about the topic and also that you have substantive points to make, but if you want to make them on HN, please make them thoughtfully. Your argument will be more convincing then, too, so it's in your interests to do so.
And also, it probably won't avoid problems. Because yes, the goal is automation, and a couple weeks ago I was trying to access a site from an extremely large infrastructure security company which rotates their certificates every 24 hours. Their site was broken, and the subreddit about the company was full of complaints about it. Turns out automated daily rotation just means 365 more opportunities for breakage a year.
Even regular processes break, and now we're multiplying the breaking points... and again, at no real security benefit. There’s like... never ever been a case where a certificate leak caused a breach.
This is fundamentally a skill issue. If a human can replace the certificate, so can a machine. Write a script.
Keep up the good work! ;-)