|How it Works|
The browser extension protects decrypted data from unauthorized access by enforcing a key restriction: decrypted data can't leave the browser. Before allowing any code sent by the server to execute, the extension verifies that the code does not violate this restriction. All a web app can do with decrypted data is display them or use them in navigation logic.
If an application wants to use decrypted data for anything else, it has to convince the user to retype the data.
To encrypt, data are copied from the input fields of a browser tab into the browser extension, encrypted there with public-key cryptography, copied back into the input fields of the browser tab, then sent to the server and saved.
To decrypt, data are retrieved from the server, copied from the browser tab into the browser extension, decrypted inside the browser extension, and displayed back on the browser tab.
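The round trip above can be sketched in a few lines. Joan's extension uses real public-key cryptography; in this runnable illustration the encrypt and decrypt functions are placeholders (base64) standing in for it, so only the data flow is shown, not the cipher.

```python
# Toy sketch of Joan's encrypt/decrypt round trip. base64 is only a
# stand-in for public-key cryptography so the flow is runnable; the
# real private key never leaves the extension.
import base64

def extension_encrypt(plaintext: str) -> str:
    # placeholder for encrypting with the user's public key
    return base64.b64encode(plaintext.encode()).decode()

def extension_decrypt(ciphertext: str) -> str:
    # placeholder for decrypting with the private key held in the extension
    return base64.b64decode(ciphertext.encode()).decode()

# Encrypt path: input field -> extension -> input field -> server
field_value = "alice@example.com"
stored_on_server = extension_encrypt(field_value)

# Decrypt path: server -> browser tab -> extension -> displayed on tab
displayed = extension_decrypt(stored_on_server)
assert displayed == field_value
```

The server only ever sees `stored_on_server`, never `field_value`.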
At no time can server code steal data saved in the browser extension, like a private key or password.
An unforeseen twist is that some data that will be encrypted may first need to be sent unencrypted to foreign servers. Credit card payments, for example: when a user presses submit, the credit card number should be sent unencrypted to the payment processor, but still sent encrypted to the app server so the server cannot read the number. The browser extension identifies such input fields and warns the user by changing them to have a red background and displaying a red alert on the top left of the web page.
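The dual-path submit can be sketched as follows. This is a hypothetical illustration, not Joan's actual API: the names `send_to_processor`, `send_to_app_server`, and `flag_field` are stand-ins, and the cipher is a placeholder.

```python
# Hypothetical sketch of the dual-path submit: the same field value
# goes in the clear to the payment processor but encrypted to the app
# server. All names here are illustrative, not Joan's real API.
def on_submit(card_number, encrypt,
              send_to_app_server, send_to_processor, flag_field):
    flag_field("card-number")                 # red background + red alert
    send_to_processor(card_number)            # unencrypted, by necessity
    send_to_app_server(encrypt(card_number))  # app server sees only ciphertext

# Example wiring with stub transports that just record what they send.
sent_to_processor, sent_to_server = [], []
on_submit("4111111111111111",
          encrypt=lambda s: "ct:" + s[::-1],  # stand-in cipher
          send_to_app_server=sent_to_server.append,
          send_to_processor=sent_to_processor.append,
          flag_field=lambda name: None)
assert sent_to_processor == ["4111111111111111"]
assert sent_to_server == ["ct:1111111111111114"]
```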
The worst thing that can happen is someone breaks into the server and steals the data. But the data are encrypted, and without the private key there is little one can do with them: if the encryption is strong enough, none of the data can be decrypted.
With Joan you program in Arc, a new Lisp with succinct syntax that works well for basic web apps. The code verifier in the browser extension and the server part of this demo are both written in Arc, but you can write the server code in any language.
Although you can't execute arbitrary code, some kinds of code are safe. Like a function that takes a string as input and splits it on whitespace. A function like this doesn't refer to input fields on the web page at all; it doesn't get or set anything. As another example, code that accesses encrypted fields, decrypts the data, and does nothing other than display them is also safe.
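Joan's verifier is written in Arc; as a rough illustration of the idea, here is a verifier in the same spirit written in Python, assuming (hypothetically) that the submitted code were Python source. Code counts as safe if it never references the page or the network; the forbidden-name list is invented for the example.

```python
# Toy code verifier: accept code only if it never references names
# that could move data out of the browser. The FORBIDDEN set is an
# illustrative stand-in, not Joan's actual policy.
import ast

FORBIDDEN = {"document", "window", "fetch", "XMLHttpRequest"}

def is_safe(source: str) -> bool:
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # unparseable code is rejected outright
    return not any(
        isinstance(node, ast.Name) and node.id in FORBIDDEN
        for node in ast.walk(tree)
    )

# A pure function that only splits a string is safe...
assert is_safe("def words(s):\n    return s.split()")
# ...while code that reads the page is rejected.
assert not is_safe("leak(document)")
```

A real verifier would also have to track aliasing and reflection, which is exactly why restricting the input language makes the problem tractable.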
Besides executing code you write yourself, you can also execute library functions built into the browser extension. These are functions that simplify writing web apps that use encryption. You control from the server which functions to use, but none of them contain logic that could harm data in the browser.
This is certainly an ugly hack. That's an awful lot of hoops to jump through for a feature that should probably be offered by default. But it's not out of recklessness that Joan was designed to run safe code this way: browsers have yet to offer safe web form data encryption natively. Short of writing a new browser or modifying an existing one, I don't see another option.
How did it get to this? Mostly as a result of natural evolution. You are pressed to ship features fast when there's demand, and after you ship, it's hard to update.
Web form data encryption is in a similar spot now. Bolting a code verifier to the browser as a plugin looks appalling to me. But it's not unreasonable that a hack like Joan could someday make it into browsers. You have to encrypt web form data in a browser; a code verifier is a fast way to do it. It's arguably a mistake not to ship a code verifier fast.
Why haven't browsers shipped a code verifier already then?
So don't expect browsers to sprint to add a verifier on top of it all. Especially one that includes a new language, no matter what the language. It's simply too much to ask. A better question to ask is: how do you get a hard feature off the ground when its de facto suppliers can't yet justify building it?
To the extent that features harder to build are also more powerful, there is a serious, overlooked implication here: the more powerful a feature you want to add to existing browsers, the more resistance it will face. Suppose you wanted servers to access any file on people's laptops, or to save more than the current browser limit of 5MB, or some other potent hack you are brewing. Preposterous. You won't get people to agree on any of this. If you want some explosive feature and want it fast, waiting for the approval of browsers or wavering committees or companies is the last thing you should do.
The most powerful features aren't products of consensus.
(At its extreme, if you came up with a powerful new way for the web to work, you are better off bypassing the browser. You are better off going directly to users.)
This is a classic bootstrap problem. You must first build a small, dirty thing before you build a bigger, cleaner thing. If browsers don't ship a verifier, what options are left? Aside from writing a new browser, the first version of the code verifier browsers will one day ship has no way of getting bootstrapped except through a browser plugin. And not even that would be enough for takeoff, since a plugin doesn't in itself make the code verifier widespread enough to convince browsers to embed it.
A big surprise would be if a browser caught all the way up to the most powerful language. Lisp. That would be something like reaching escape velocity, that point where you finally break free from the shackles of gravity and no further propulsion is needed. A different universe opens up when you reach limits like that.
Is this enough to say for sure it's time for a new browser? Who knows. It feels like it's time for me to want one. I now want to write web apps, and they are painful to write.
If a new browser took over the web it probably wouldn't look like a browser at first. It would probably look like a database or a desktop app or a release system. It remains a fruitful thought exercise though to imagine how the ideal browser would come about, and as a consequence how the ideal Internet would work.
Let's start by imagining the destination. If anything was possible, how would you want the Internet to work?
I have my own biased preferences. I want all code written in Lisp. I want API calls to not be URLs but Lisp functions too, to have access to source code where possible, and to be free to modify source (both manually and automatically) without stepping on any toes. I don't want to have to worry about code problems like scaling and downtime and ssh-ing between servers and talking about machine specs. Or worry about data problems, like backups and encrypting and syncing files. I don't want a $1000 computer to ever say "Your startup disk is almost full", and I want the development environment to take closer to 14MB than 14GB. I want web programming to be as easy as Visual Basic, and no matter which computer I use to have the same environment. Not having Internet access shouldn't make it impossible to use a web app I ran in the past. I want live updates without breaking what's running, function access through capabilities, on some kind of SASOS designed like Plan 9. In the ideal Internet all running programs and data are "in the computer" with a working copy on your laptop.
This wish list doesn't show how we get there from where we are now. But the fuel in the burner is need. There is a visible gap from the current Internet to a better one. A good way to discover the ideal Internet is to wait until you hit limits in the current one and we are already hitting limits.
I trip on limits everywhere. Web apps need built-in encryption. Scaling is done manually but would be easier if you could automatically transform the code with Lisp, catching and fixing bugs along with it. Backups and syncing are almost a solved problem, minus the caveat that filesystems still don't let you run code over data the way a database does. I'm tired of installing software and libraries instead of them just being there. A big open problem with software is reusing it, because it's hard to familiarize yourself with a codebase fast or to know ahead of time how well the software works, especially if there's no automated testing. I want to hold in my head the entire program I'm writing. But by the time I run through these problems in my head, I'm back to problem number one: I'm not excited about writing web apps.
Which leads to serious, overlooked implication number two. If the best software is written by the best programmers, and the best programmers dislike writing web apps, then the best web software hasn't been written yet. The best web software won't be written with the current web.
The entire web for all we know may be an informal version 1. So while I don't know if it's time for a new browser, I know there is room for one as long as these problems are open.
Build all that and it'd be enough to tip the scale from tolerating the old Internet to upgrading to a new one. If this gigantic conjecture turns true, it will have gigantic consequences: today's browser and a big chunk of the Internet will be replaced.
It may even be easy to start. The first popular browser ViolaWWW was written in just four days.
But to be realistic, it's absurd to actually expect the Internet will be replaced. It's exactly as absurd as expecting all 6 of these unlikely conditions to hold true at once. Even if each had a generous 50% chance of happening, the probability the Internet will be replaced would be a lowly 1.6%. Which only adds to the shock: some apps may be halfway there. There are massively used web apps out there that already meet at least 5 of these 6 conditions.
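Spelled out, the arithmetic is just six independent coin flips that all have to land the same way:

```python
# Probability that all 6 independent conditions hold, each at 50%.
p = 0.5 ** 6
print(p)  # 0.015625, i.e. about 1.6%
```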
Which, surprisingly, means that if giants like Facebook never use a plugin like Joan, but you write a web app that does, you are in a better position to replace the Internet than some of today's Internet giants.