This page explains the kinds of things that aren't apparent just by looking at the examples. One of the biggest challenges is that the problem keeps changing the more you learn about it.
Writing Joan felt like fighting the mythical Lernaean Hydra, a monster that grew two more heads for each one cut off. You have to cauterize the neck stumps.
If I had known when I started what I know now, I would not have written a browser plugin. I would have shot for a browser. Writing a full browser may be too much work, but I don't see why one couldn't write a rough version 1 of a browser wrapper in two days.
The code that generates the user's private/public keypair is in options_generate.js_common.in.
The code that migrates existing, unencrypted data of a user to become encrypted, after a private/public keypair is generated, is in encrypt_data_page.js_post2.in.
Remote Code Execution
Remote code execution commands are issued with a (rex) call. The first parameter contains a list of predefined browser extension commands. The second parameter contains, as a string, custom source code defined on the server that will be executed inside the browser extension. This source code must be inside a single (do) or (withs) block: ArcJS does not implement (readall), so Joan can only use (read), and (read) returns only the first S-expression.
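As a minimal sketch of why the single-block restriction exists, here's what (read)-like behavior looks like in JavaScript. The function is illustrative, not part of Joan or ArcJS, and it ignores string literals containing parentheses:

```javascript
// Illustrative sketch: mimic Arc's (read), which returns only the first
// top-level S-expression from a string.
function firstSexp(src) {
  let depth = 0;
  let started = false;
  for (let i = 0; i < src.length; i++) {
    const ch = src[i];
    if (ch === '(') { depth++; started = true; }
    else if (ch === ')') { depth--; }
    if (started && depth === 0) return src.slice(0, i + 1);
  }
  return null; // no complete S-expression found
}

// Only the first block is seen; wrap everything in one (do ...) to keep it all.
console.log(firstSexp("(prn 1) (prn 2)"));       // → "(prn 1)"
console.log(firstSexp("(do (prn 1) (prn 2))"));  // → "(do (prn 1) (prn 2))"
```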
When you use a 'submit rex command for a submit button, you must also use an 'encrypt command. That's because 'encrypt supplies the form name the submit button will use, even if the list of items to encrypt is empty.
You can have multiple submit buttons, but pressing any of them encrypts the data in all forms. Only the data for the corresponding form are submitted, as they should be, but encrypting everything kept the implementation simpler. It also makes the user confident no data were sent unencrypted to the server.
Pressing Enter in an input box typically submits the form unencrypted. To prevent this, the browser plugin overrides onkeypress on all input boxes to force the data to be automatically encrypted.
Besides encryption logic, an 'encrypt rex command also contains logic to decrypt. Don't supply a separate 'decrypt command if you supplied an 'encrypt; instead, pass 'encrypt the list of items to decrypt.
There are four hooks inside the browser extension that define what executes when:
- (srex-after-decrypt) after a specific field has been decrypted.
- (srex-after-decrypt-all) after all fields have been decrypted. If no data will be decrypted but you want to run code after the page loads, this is the hook to use.
- (srex-before-encrypt) before a specific field will be encrypted.
- (srex-before-encrypt-all) before all fields will be encrypted. Use this hook if you want data sent to the server to be derived from other data; with (srex-before-encrypt) you might incorrectly take already-encrypted data as input.
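The ordering the list above implies can be sketched as follows. The hook names mirror Joan's, but the dispatcher itself is illustrative, not Joan's code:

```javascript
// Record the order hooks fire in, so the sequence is easy to inspect.
const calls = [];
const hooks = {
  "srex-after-decrypt":      field => calls.push("after-decrypt:" + field),
  "srex-after-decrypt-all":  ()    => calls.push("after-decrypt-all"),
  "srex-before-encrypt":     field => calls.push("before-encrypt:" + field),
  "srex-before-encrypt-all": ()    => calls.push("before-encrypt-all"),
};

function decryptPage(fields) {
  for (const f of fields) hooks["srex-after-decrypt"](f);
  // Fires even when `fields` is empty, so it doubles as an on-page-load hook.
  hooks["srex-after-decrypt-all"]();
}

function encryptForm(fields) {
  // Runs before any field is encrypted, so derived data can still read the
  // unencrypted values of other fields.
  hooks["srex-before-encrypt-all"]();
  for (const f of fields) hooks["srex-before-encrypt"](f);
}

decryptPage(["title", "body"]);
encryptForm(["title"]);
console.log(calls);
// → ["after-decrypt:title", "after-decrypt:body", "after-decrypt-all",
//    "before-encrypt-all", "before-encrypt:title"]
```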
The user can choose not to encrypt data after a page was sent from the server, by checking a 'publish' field. This can be useful for blogging. But the publish field can't be served hidden and already checked. Otherwise the server could trick the user into submitting unencrypted data.
The 'contact command lets the server ask the user to verify the fingerprint of the public key of a contact. An fn field in the data structure defines which user function to run that returns the public key. It may seem inconsistent that the name of this function is defined there instead of in the (rex) code through a hook. The reason there's no hook is that no encrypted data need to be processed: public usernames and public keys are both public. The function implementation is still provided by the user, though, and must return a hash table that contains key-value pairs for the ids 'userid' and 'pubkey'.
Encrypted credentials, like the password and initialization vector for encrypted search, or the verified mappings between key ids and contact ids, are not transmitted through the (rex) command but as hidden text values through HTML. It'd be insecure to transmit them in (rex). The browser plugin would need to call (srex-clean-item), which inspects all (rex) input and refuses to process commands containing whitespace. Since these credentials are encrypted with PGP, which produces ciphertext containing whitespace (e.g. "-----BEGIN PGP MESSAGE-----"), (rex) would refuse to process them.
The cost is that server code grows: every page on the server must transmit the credentials, even if nothing is encrypted or decrypted.
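A minimal sketch of the whitespace check described above. The real (srex-clean-item) is Arc code inside the plugin; this JavaScript version is illustrative:

```javascript
// Refuse any rex input that contains whitespace of any kind.
function cleanItem(s) {
  if (/\s/.test(s)) throw new Error("rex input may not contain whitespace");
  return s;
}

cleanItem("submit");  // fine: a plain command token

// PGP armor necessarily contains whitespace, so it can never pass the check.
const armored = "-----BEGIN PGP MESSAGE-----\nhQEMA...\n-----END PGP MESSAGE-----";
try {
  cleanItem(armored);
} catch (e) {
  console.log("rejected:", e.message);
}
```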
To make it easier to decide whether to access the value of a textbox or the innerText of an HTML element, there's an (srex-get) function with a boolean parameter signifying whether an element is a textbox. An (srex-set) function does something similar when setting values.
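A sketch of what these accessors might look like. In the plugin they operate on DOM elements; plain objects stand in for elements here, and the function names are JavaScript renderings of the Arc names:

```javascript
// Read a value from either a textbox (.value) or another element (.innerText).
function srexGet(el, isTextbox) {
  return isTextbox ? el.value : el.innerText;
}

// Write a value to the matching property.
function srexSet(el, isTextbox, text) {
  if (isTextbox) el.value = text;
  else el.innerText = text;
}

const input = { value: "secret" };   // stands in for an <input> element
const div = { innerText: "hello" };  // stands in for a <div> element
console.log(srexGet(input, true));   // → "secret"
console.log(srexGet(div, false));    // → "hello"
srexSet(div, false, "updated");
```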
It shouldn't be necessary to call (srex-get-wtype) in Arc manually, but the code verifier does not automatically detect the cases where it should call it. For example, it should use it in (each) loops but doesn't.
The (len) Arc function is converted to (srex-length), which dynamically determines if its parameter is a string or an object to return its length.
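The dynamic dispatch that (len) compiles to might look like this in JavaScript; the real (srex-length) lives in the plugin, so this is illustrative:

```javascript
// Return the length of a string, or the number of key-value pairs of an object.
function srexLength(x) {
  if (typeof x === "string") return x.length;
  return Object.keys(x).length;
}

console.log(srexLength("hello"));         // → 5
console.log(srexLength({ a: 1, b: 2 }));  // → 2
```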
Hashing user passwords turned out to conflict with the browser's password manager. The manager saved the hash instead of the password. That led to double hashing when the user visited the login page again, which failed authentication. To address this, the hashing logic does not hash if the password length is equal to the length of the SHA512 hex digest (128 characters).
Arc and ArcJS
ArcJS also does not attempt to load most of Arc's libraries. Some useful ones, like string.arc, have been loaded by Joan. Some functions in them, like (subst), have been redefined to not use (tostring). Some simple yet necessary library functions, like (color), have been manually added. html.arc is not part of Joan because I didn't need it yet.
parsecomb0.arc and fromjson.arc, which are not part of Arc, have been loaded in Joan to parse the JSON sent from the server.
A change was applied to jQuery Tokeninput, a text-autocompletion library. An onSearch event was added to allow encrypting a search query before submitting it to the server. This required changing the signature of run_search() to accept, as an additional parameter, the search term as typed before it was encrypted; the dropdown menu didn't show up otherwise.
Encrypting with Autocomplete
In the examples, there are two fields involved when you use autocomplete to pick results from an encrypted search query and submit them back to the server. A web application can use any name it chooses for these fields, but they do serve two distinct purposes, and an app has to use both when using autocomplete:
- autocomplete-all: a list that contains a hash table for each entry that was selected from the autocomplete control. For example, if what was selected is people's contact info, each entry in the list contains key-value pairs for an id (like a username), an encrypted public key, a name, or an email address.
- encpass: a hash table that maps an id (like a username) to a hash table that contains an entry for the public key. For each user who should be able to decrypt all encrypted fields in a page, an entry is added to encpass.
Why have both? Because not everything that autocompletes must be encrypted. One should be free to autocomplete and submit back to the server only unencrypted text, while still encrypting other text fields.
The two fields are connected though. When text is about to be encrypted and submitted back to the server, each list entry from autocomplete-all that contains the two fields 'id' and 'pubkey' is used to generate an entry in encpass: it is used to encrypt for that user.
Note that if what is autocompleted should be encrypted, the 'pubkey' value supplied by the server should be in an encrypted state. The autocomplete logic will decrypt it and use it.
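The connection between the two fields can be sketched as follows. Field names follow the text; the function itself is illustrative, and the decryption of the server-supplied 'pubkey' value is omitted:

```javascript
// Each autocomplete-all entry that carries both 'id' and 'pubkey'
// produces an entry in encpass, so text is encrypted for that user.
function buildEncpass(autocompleteAll) {
  const encpass = {};
  for (const entry of autocompleteAll) {
    if (entry.id && entry.pubkey) {
      encpass[entry.id] = { pubkey: entry.pubkey };
    }
  }
  return encpass;
}

const autocompleteAll = [
  { id: "alice", pubkey: "<encrypted armored key>", name: "Alice" },
  { id: "topic-tag" },  // unencrypted autocomplete entry: no pubkey, skipped
];
console.log(buildEncpass(autocompleteAll));
// → { alice: { pubkey: "<encrypted armored key>" } }
```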
It isn't easy to glue together the bare minimum of crypto needed: public-key cryptography with signing and verification. The tough part is making one JS library work with another. Key features are missing and examples are incomplete. You could copy code from one library to the other, but it gets confusing, and it's not the most secure thing to do with libraries that lack a test suite.
(Does anyone know of a self-contained JS crypto library that offers signing and verification and nothing else?)
CryptoJS for example is good at symmetric cryptography but doesn't support asymmetric (public-key cryptography). JSEncrypt does, but not signing and verification. JsRSAsign says it does, but it doesn't read a private key generated with JSEncrypt. You need to do more to get it to work, like possibly generate a self-signed certificate, and there's no self-contained example that shows how. Imagine how pleasing it was to find Cryptico, which supports public-key cryptography and signing and verification, and then realize it doesn't return the private key.
It's strange CryptoJS uses the insecure Math.random() instead of window.crypto.getRandomValues() in component/core.js:random(). It's strange five out of six authors of crypto libraries I emailed didn't respond.
Out of necessity I caved. Joan uses OpenPGP.js. The good part is OpenPGP.js implements key pieces: public-key crypto, with signing and verification, in an open format. The bad part is, PGP is an old format.
Most people had no Internet access when the first PGP spec came out in 1996. So it's no wonder the new spec that followed in 1998 missed the mark. How could it have predicted user needs in a medium barely five years old, when even the makers of the leading Internet application back then, Internet search, couldn't predict the needs of their own app, and Google was barely two months old?
Although feasible, the web of trust proposed by the 1998 spec isn't perfect. Two years is too short to beta test managing keyrings, or issuing revocation certificates, or attending key-signing parties when web apps aren't using PGP yet and the Internet is exploding.
You also can't search encrypted messages. All you can do is decrypt and verify them. That's not a problem if you decrypt data and store them on a laptop but it becomes a problem when most of the applications you use store their data on the Internet.
That sums up the big blind spot in the PGP spec. Neither spec nor software made it easy to encrypt data for the dominant way Internet apps would be delivered: in the browser. It's a mistake to cement in a spec how people may work when you don't know for sure how people want to work. That's premature optimization. It's better to wait and see first.
I don't mean to be hard on PGP. It solves a specific problem well. The full scope of the privacy problem is wider than what PGP solves though, since it includes the browser, and any solution to the privacy problem will need to adapt for the browser. It seems suspect to stay tied to how people used to think about a problem two decades ago while the problem has been evolving into something else.
As for specifics, some design flaws are around presentation. OpenPGP.js doesn't decrypt ciphertext that isn't neatly formatted, like ciphertext read from an HTML element's .innerText property, which returns the text as a string without newlines. "Unknown ASCII armor type" says the error message. You could read a message with .innerHTML, but then you can't embed other elements inside the one you are decrypting, and server code shrinks more when you can do that (even in Lisp). You can format ciphertext with newlines using (topgp) on the server, but it doesn't work with text input fields and it's slow in Arc. A faster but less elegant workaround is to read ciphertext from a hidden textarea.
The same bug appears when encrypting. The default ASCII-armored output of PGP can't be used with input fields because, once again, input automatically removes newlines and carriage returns. You have to use a textarea.
Except in the cases where you can't. Like when the UI you are building shouldn't use a textarea, because you need everything the user types to show up on a single line and there's no way to disable the multiline behavior of a textarea. So you have to manually write code that copies the unencrypted data from the input into the hidden textarea. See what a mess ASCII-armored output is with HTML?
Another flaw is that you can't show ciphertext as a single string with no whitespace. I want to do that and I can't. The ciphertext is always formatted at a fixed line length of 60 characters, and it always shows the '-- BEGIN PGP MESSAGE --' banner that users shouldn't be forced to look at. This wasn't a big deal in an ASCII terminal in 1996, but it affects aesthetics in a browser today.
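To give a feel for the formatting problem, here's a rough sketch of a (topgp)-style re-wrap in JavaScript. This is an assumption about the shape of such a helper, not Joan's code: it ignores armor headers and the trailing checksum line, both of which a faithful implementation would have to preserve, so OpenPGP.js may still reject its output:

```javascript
// Re-insert line breaks into armor that was flattened into a single line,
// using the 60-character line width mentioned above.
function rewrapArmor(flat, width = 60) {
  const begin = "-----BEGIN PGP MESSAGE-----";
  const end = "-----END PGP MESSAGE-----";
  const body = flat.slice(begin.length, flat.length - end.length);
  const lines = [];
  for (let i = 0; i < body.length; i += width) {
    lines.push(body.slice(i, i + width));
  }
  // Armor separates the banner from the body with a blank line.
  return [begin, "", ...lines, end].join("\n");
}

// "A".repeat(125) stands in for a newline-stripped base64 body.
const flat = "-----BEGIN PGP MESSAGE-----" + "A".repeat(125) +
             "-----END PGP MESSAGE-----";
console.log(rewrapArmor(flat));
```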
I don't know if it's a bug, but encrypting multiple messages in parallel and then waiting for all Promises to be fulfilled leads to some messages failing to encrypt. No error message is raised either; the result is simply null. The workaround was to directly call the 6 lines of code found in openpgp.js:signAndEncryptMessage.execute.
One reason the plugin is slow is that it takes 400ms for the following decrypt function to finish, which is strange, since it typically takes standalone OpenPGP.js only 30ms to run the same function:
The end result is it takes 1.3 seconds to load a static HTML page served from localhost that decrypts content. It also takes 2.5 seconds from the time a form's encrypt function is entered until the submit is issued. Overall, OpenPGP.js isn't as fast as it could be.
When a contact is added, the plugin saves a mapping between the key id of the contact's public key and the contact id (like a username). This is necessary because there's no reliable way to get the contact id otherwise: encrypted PGP messages include key ids, not contact ids.
It would be nice if the problem of mapping key ids to contact ids went away. Keybase is working in that direction.
Mapping a key id to a contact id is not ideal because key ids are short, which means they are prone to collisions and can be forged. But that's a problem that must be dealt with anyway, because even full fingerprints can be forged, and the danger is greater after a contact has been verified. The user could be tricked into switching to a forged key for a contact they had trusted, and messages encrypted for that contact could then be read by the attacker.
As a first line of defense, the user is shown the full fingerprint of a public key before verifying the contact.
Then, an additional check in the plugin refuses to re-verify a previously verified contact, to protect from overwriting an existing short key id with a different public key. If a contact's public key changed, delete the contact and verify again.
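A sketch of the mapping and the re-verify guard described above. In Joan the mapping is encrypted and stored on the server; a plain Map stands in for that here, and the key id is made up for illustration:

```javascript
// Maps short PGP key ids to contact ids (like usernames).
const keyIdToContact = new Map();

function verifyContact(keyId, contactId) {
  if (keyIdToContact.has(keyId)) {
    // Refuse to overwrite an existing mapping. To change a contact's
    // public key, delete the contact and verify again.
    throw new Error("contact already verified for key id " + keyId);
  }
  keyIdToContact.set(keyId, contactId);
}

verifyContact("9A4C3B7F01D2E6A8", "alice");
try {
  verifyContact("9A4C3B7F01D2E6A8", "mallory");  // forged re-verification
} catch (e) {
  console.log("blocked:", e.message);
}
```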
A better mapping would have been from public key fingerprints to contact ids. The problem with that is there doesn't seem to be a way in OpenPGP.js to extract a fingerprint from an encrypted message's signature. There is a .getSigningKeyIds() function but no .getSigningFingerprints().
The public key of each contact is encrypted and saved on the server. The main reason is to keep the solution elegant: saving everything encrypted on the server (except the private key and encrypted password) allows accessing contacts from multiple machines without the need for synchronization. A secondary reason is the default localStorage limit of 5MB in browsers. A 4096-bit armored key consumes roughly 2404-2432 bytes (the range is due to the length of the optional comment), which means no more than 2155-2180 contacts can be stored.
For the same reasons, the mapping between key ids and contact ids is encrypted and stored on the server.
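The arithmetic behind the contact-count estimate above, assuming the 5MB quota means 5 * 1024 * 1024 bytes:

```javascript
const quota = 5 * 1024 * 1024;  // 5,242,880 bytes of localStorage
const keyMax = 2432;            // armored 4096-bit key with the longer comment
const keyMin = 2404;            // armored 4096-bit key without the comment

console.log(Math.floor(quota / keyMax));  // → 2155
console.log(Math.floor(quota / keyMin));  // → 2180
```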
With a desktop app that stores content locally, unless there are continual software updates, you always know content you wrote was written by you. Not so with a web app. You can't tell if the content was forged on the server unless you examine the signature.
For this reason the browser plugin always verifies the author of all content by looking at the content's signature.
If a checkbox that reads "show authors" is checked by the user, the browser plugin "always" displays the author's user id on the left-hand-side of the content.
With two exceptions. Number one: if all content in a page was written by you, no confirmation is shown. Showing author user ids is necessary when multiple authors share content, but the confirmation isn't visually pleasing. So when you are the only author, the plugin doesn't show the confirmation message (but still verifies signatures).
The final decision to display author ids isn't left to the server, because a bad server that forged content could easily display your own user id to trick you into thinking the content you are looking at is yours, even though the content was slightly altered by someone else, say, one of your verified but forged contacts. This still isn't too strong a protection. A malicious server could display author user ids on the right-hand side of the content, while the real author ids are displayed on the left-hand side, and confuse the user.
But the user has at least some control over viewing author user ids. They can see what changes on the screen when they check "show authors". The user wouldn't have this option if it were fully controlled by the server.
Exception number two is an application wanting to visually conceal the identity of all authors, to provide visual anonymity. The browser plugin still verifies signatures but doesn't display author user ids. It only displays an alert that author information will not be shown. It's possible to extend this feature to also indicate if content was written by the current user or not, but I didn't think it'd be useful in practice.
There is a small presentation bug in the paper that showed how encrypted search can be practical. In Figure 3, F_k_i(S_i) should be F_k_i(L_i). That would match the explanation in Section 4.4, paragraph 2, which is correct: "Alice should generate k_i as k_i := f_k'(L_i)".
There is no logic in Joan that implements encrypted indexes. If all that users need to encrypt for search is a list of contacts, that isn't a huge list, and encrypted search can be parallelized on the server. But there are fair odds users will need encrypted search for content as well. As the paper's authors noted, efficiently updating an encrypted search index is an interesting research question.
Arc code sent to the browser plugin for execution currently can't define macros. I'm not sure it's wise to support them, because to do that the verifier must also become an evaluator, which increases the odds of introducing a bug that leads to a security breach. I didn't need macros anyway. You could generate macroexpanded source on the server and send that instead.
Chrome's autocomplete sets the background of a textbox to a light yellow, which overrides the red background set by the browser extension for fields that won't be encrypted.
I don't know why but attempting to disable double form submission didn't work.
The reason key pairs are 3072 bits and not 4096 or higher is that on my machine Safari is often unable to generate a 4096-bit key in under 5 minutes. You would think this wouldn't matter, since the browser plugin is written for Chrome. But it matters when reusing the key generation logic on a trusted server.
When running on a trusted server, the dialog that asks the user to verify the public-key fingerprint of a contact isn't displayed correctly in Firefox and Safari, because these browsers don't support it yet. It's displayed at the bottom left of the screen, and pressing 'Yes' or 'No' in that dialog doesn't work. Firefox gives no error at all, and Safari gives:
[Error] TypeError: undefined is not a function (evaluating 'dialog_gs12.close()')
Encrypted search words are too short: they are 32 bytes long, since they are constructed from the 256-bit output of CryptoJS.AES.encrypt(). Can the length be increased? Someone who understands encryption well should have a look at this.