January 2020

It's possible to undo a premature optimization in the security design of the web. It's not necessary to have trusted third parties (Certificate Authorities) issue public key certificates for sites that browsers check against root certificates saved ahead of time. It's sufficient to make sure the site doesn't do anything bad.

When you connect to a site that says it belongs to company X but is actually run by hacker Y, you currently have no way of telling without the certificates the browser trusts ahead of time. But although having a certificate may be sufficient to verify that a site belongs to who it says it does, it's not necessary, and it's not sufficient for the end goal. It's one avenue toward the end goal, not the end goal itself. The end goal is one step further. What the user wants is to not get hacked.

Certificates sort of worked. What else could work?

What makes a site safe isn't the certificate. A certificate can prevent a man-in-the-middle attack but not a man-in-the-end attack. It doesn't guarantee the site won't do anything bad. Bad things can happen with sites that have a certificate and whose owner is known. When I get junk mail I often have to search online for the name of the company and the word "scam" to see if they're legit. And even then I can't tell. Their site has a certificate, but this doesn't tell me if they're scammers.

There could be any number of reasons a site's code does bad things. Good people make bad decisions; good programmers write bugs; build processes fail; test cases get skipped; computers break. Bad intentions aren't the only reason sites do bad things.

So it seems like a mistake to cement site ownership verification in the security model of the web. It doesn't solve the full problem.

As for encryption, it can be had without saving public key certificates in the browser or on the server ahead of time. Do a Diffie-Hellman-Merkle exchange for each session, generate ephemeral public keys and a shared secret, and encrypt with the secret.
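
To make that concrete, here is a minimal sketch of such an exchange in Python. The prime, generator, and key-derivation step are illustrative stand-ins: a real session would use a standard large group and a proper key-derivation function.

    import hashlib
    import secrets

    # Public parameters both sides agree on. Toy values for brevity; a real
    # exchange would use a standard 2048-bit group (e.g. RFC 3526 group 14).
    p = 4294967291   # largest 32-bit prime; far too small for real use
    g = 5

    # Each side picks a fresh private exponent for this session and sends
    # only g^x mod p over the wire.
    a = secrets.randbelow(p - 2) + 2    # client's private value
    b = secrets.randbelow(p - 2) + 2    # server's private value
    A = pow(g, a, p)                    # client -> server
    B = pow(g, b, p)                    # server -> client

    # Both sides arrive at the same secret without it ever being transmitted.
    client_secret = pow(B, a, p)
    server_secret = pow(A, b, p)
    assert client_secret == server_secret

    # Hash the shared secret into a symmetric session key and encrypt with it.
    session_key = hashlib.sha256(client_secret.to_bytes(8, "big")).digest()

Each session gets fresh keys and a fresh secret, with nothing stored in the browser or on the server ahead of time.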

Removing trusted third parties from the security model shifts the focus to the other, and probably more important, half of the problem: how to not let sites do bad things.

Could there be a way to make sure a site doesn't do anything bad? Could a client detect every bad command issued by a site, intercept it, and stop it? Could sites run primarily on the client, under the client's full control, instead of primarily on the server?
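
One way to picture that last question: every operation a site requests would pass through a single mediation point on the client, which checks it against the user's policy before letting it run. The sketch below is purely illustrative; the operation names and policy table are hypothetical, not an existing browser API.

    from typing import Callable

    # Hypothetical user policy: which classes of operation the site may perform.
    POLICY = {
        "network_request": True,    # may talk to the network
        "read_cookie": False,       # may not read stored cookies
        "write_file": False,        # may not touch local files
    }

    def mediate(operation: str, action: Callable[[], object]) -> object:
        """Run a site-requested action only if the user's policy allows it."""
        if not POLICY.get(operation, False):
            raise PermissionError(f"blocked site operation: {operation}")
        return action()

    # The site asks; the client decides. A blocked request never executes.
    mediate("network_request", lambda: print("fetching data..."))
    try:
        mediate("read_cookie", lambda: print("reading cookie..."))
    except PermissionError as e:
        print(e)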

If it can't be done, prove it.