Two goals: don't introduce major security holes, and reduce how much users have to know in order to stay secure, protecting novices and advanced users alike.
Watch out for the following issues:
eval() can be dangerous because it invokes the JavaScript parser on what might be an untrusted string. Two security holes, bug 87980 and bug 191817, were caused by misuse of eval and setTimeout.
Avoid using eval(). When your goal is to convert a string to one of JavaScript's built-in types, replace eval() with parseInt(), == "true", new RegExp(), etc. When your goal is to get property p on object obj, use obj[p] rather than eval("obj." + p). When your goal is to trigger an event handler from another event handler, convince someone to fix bug 246720 and then call the event handler directly.
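The replacements above can be sketched as follows; the variable names and values are purely illustrative:

```javascript
// Converting strings to built-in types without eval():
const count = parseInt("42", 10);        // instead of eval("42")
const enabled = ("true" === "true");     // instead of eval("true")
const pattern = new RegExp("foo+");      // instead of eval("/foo+/")

// Looking up a property whose name is in a variable:
const obj = { width: 100, height: 50 };
const p = "width";
const value = obj[p];                    // instead of eval("obj." + p)
```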
Instead of using setTimeout(string, time), which is a lot like eval(), use setTimeout(function, time, param1, param2, ...).
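For example (the function and its arguments here are made up):

```javascript
// Risky: the string form is parsed and evaluated, like eval().
// setTimeout("greet('" + userText + "')", 100);

// Safer: pass a function reference, then its arguments.
function greet(name, punctuation) {
  return "Hello, " + name + punctuation;
}
setTimeout(greet, 100, "world", "!");
```

The string form is especially dangerous when untrusted text is concatenated into it; the function form never re-parses anything.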
Avoid using other things that are like eval(), such as new Function() and setting event handler attributes from strings.
Take all the precautions against XSS attacks that you would take if you were writing a web application. For example, do not display data: or javascript: links in chrome windows; use nsIURI::schemeIs to test for these protocols. A successful XSS attack against chrome JavaScript allows arbitrary code execution.
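A pure-JavaScript sketch of the scheme test; the helper name isSafeLinkScheme is invented here, and in real chrome code you would call nsIURI.schemeIs on an already-parsed nsIURI rather than matching strings yourself:

```javascript
// Hypothetical helper: reject URLs whose scheme can execute script
// or smuggle arbitrary content when displayed as a link.
function isSafeLinkScheme(urlString) {
  let scheme;
  try {
    // URL normalizes the scheme to lowercase for us.
    scheme = new URL(urlString).protocol.replace(/:$/, "");
  } catch (e) {
    return false; // unparseable URLs are not safe to display as links
  }
  return scheme !== "javascript" && scheme !== "data";
}
```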
It is nearly impossible to determine whether code that interacts with untrusted DOM directly is correct. Pages can really screw with your code, making it do things very different from what the code appears to do and does in normal situations.
Pages can create getter and setter functions to turn your assignment statements into function calls of their choice, like in bug 217195. When you think you're getting a string attribute, you could be getting an object whose toString() returns a different string every time you treat the object as a string, like in bug 249332. Until Firefox ???, it was possible for a web page to hand you |eval| instead of setAttribute or even as a setter.
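A minimal demonstration of the toString() trick; the object and URLs here are made up:

```javascript
// A malicious page can hand chrome code an object that pretends to
// be a string but returns a different value on each read:
let reads = 0;
const sneaky = {
  toString: function () {
    reads++;
    return reads === 1 ? "http://safe.example/" : "javascript:alert(1)";
  }
};

// Chrome code that validates the value and then reads it again
// validates one string but uses another:
const checked = String(sneaky);   // looks safe the first time...
const used = String(sneaky);      // ...but differs the second time.
```

The defense is to convert untrusted values to primitives exactly once and only ever use the saved copy.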
In Firefox 1.0, the solution for many of these problems was to use XPCNativeWrapper whenever you interact with untrusted DOM. In Firefox 1.0.4 (???) and Firefox 1.1, XPCNativeWrapper is unnecessary for chrome code.
When writing code that allocates memory, make sure your code is correct. In particular, beware of certain types of code errors that can lead to compromises rather than just memory leaks and crashes:
Most code in Mozilla uses reference-counted objects, where the main remaining hazard is memory leaks, such as those caused by reference cycles.
Use security dialogs sparingly. It is usually better to pick a policy (allow or disallow) than to show the user a dialog. There are three reasons to minimize the number of security warning dialogs. First, many users will click "Yes" without reading the dialog, and we want to protect those users as much as possible. Second, many users place undue trust in web sites. Third, the more security dialogs users encounter, the less likely they are to pay attention to subsequent dialogs. This phenomenon is known as "warning fatigue". To prevent warning fatigue, avoid showing security dialogs in situations that are common and not actually dangerous. For example, one version of Outlook Express warned that an attachment could contain a virus even when the attachment was a text file or JPEG image, leading many users to ignore an identical warning when a virus spread via executable attachments.
Use warnings with "scariness" appropriate to the situation. When a site is trying to install software, a dialog with bold warning text is appropriate.
Minimize the amount of text on security dialogs. The more text a dialog contains, the more likely it is that users will ignore the text completely.
Use clear button labels. "Install" and "Cancel" are better than "Yes" and "No".
Avoid adding dialogs where the safe response on a malicious site is "Yes", such as onbeforeunload's "Are you sure you want to navigate away from this page?". Malicious sites might use the dialog in order to induce warning fatigue, intentionally or unintentionally (bug 68215 comment 22, bug 190515 comment 8).
Disable the most dangerous button until the dialog has been visible and focused for two seconds (bug 162020, blog entry). Not only does this encourage users to read the dialog, but it also protects against attacks where a site pops up a software installation dialog just as you are about to type the 'i' key or click in the location where the button will appear.
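The timing check behind such a delay can be sketched as below; the names SECURITY_DELAY_MS and shouldAcceptActivation are invented for illustration, and real dialog code would also re-check on focus changes:

```javascript
// How long the dialog must have been visible and focused before the
// dangerous button accepts activation (assumed value).
const SECURITY_DELAY_MS = 2000;

// Compare the time the dialog was shown against the time of the
// click or keypress; reject activations that arrive too early.
function shouldAcceptActivation(shownTimestamp, eventTimestamp) {
  return eventTimestamp - shownTimestamp >= SECURITY_DELAY_MS;
}
```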
In some cases, a yellow information bar is more appropriate than a dialog.
Most security dialogs contain some text that depends on the site. This might be the URL of the site, the filename of an XPI, the filename of a file you are downloading, or other information. Ensure that a malicious site cannot change the meaning of the dialog by choosing this text cleverly. Examples of attacks include putting a sentence or two where the dialog author expected a word (e.g. bug 253942) and making a filename contain a lot of spaces so the extension is hidden. In general, do not make untrusted text bold -- bold text in security dialogs should be reserved for warnings.
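One way to blunt the padded-filename trick is sketched below; the helper name, the length limit, and the truncation strategy are all assumptions, not an existing Mozilla API:

```javascript
// Hypothetical sanitizer for untrusted text shown in a dialog:
// collapse runs of whitespace (so padding cannot push the file
// extension out of view) and cap the total length. The 64-character
// limit is arbitrary.
function sanitizeForDialog(untrusted, maxLength = 64) {
  const collapsed = untrusted.replace(/\s+/g, " ").trim();
  if (collapsed.length <= maxLength) {
    return collapsed;
  }
  // Truncate in the middle so the end of the string, where a file
  // extension lives, stays visible.
  return collapsed.slice(0, maxLength - 20) + "…" + collapsed.slice(-19);
}
```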
The default button in a security dialog should be a safe choice. For most security dialogs, this means Cancel should be the default button.