Thursday, July 31, 2014
I was invited again to contribute to the blog of application security company Checkmarx. My second post was published a couple of days ago and covers software security and the Building Security In Maturity Model (BSIMM).
Friday, April 25, 2014
It turns out that application developers sometimes need to implement a forgot password feature but don't have much identity data about the users in the system. Nor can they always require users to establish personal security questions. Both of these are key parts of my forgot password security recommendations.
But the reality is that sometimes you don't have any information about a
user except their username and email address. Heck, sometimes email
address IS the username.
In this type of situation, implementing a secure forgot password feature is challenging. Sending a password reset link via email is probably the best option (barring a non-automated solution where users call customer support). So here I will offer up some specific ideas on how to secure the process when using email.
- When a user invokes the forgot password process, don't reveal whether the username entered was recognized. Simply display a generic message such as: "Thank you. If the username you provided is valid, we will send you an email with instructions on how to reset your password".
- Along with the above, don't show the email address where the email was sent. It might give legitimate users a warm, fuzzy feeling but it definitely helps attackers in a number of scenarios.
- The password reset link in the email message should incorporate a GUID or similar high-entropy token. The token could be a parameter in the query string or part of the URL path itself. It doesn't really matter.
- Allow only one valid token per user at any given time.
- Make sure the email message does not include the username.
- Make sure the link can be used only once. In other words, invalidate the token immediately when an HTTP request containing that token is received.
- The link should expire. Depending on your situation, implement logic to invalidate the token 10, 20, or 30 minutes after the email is sent out. Make it a configurable value so it can be adjusted if needed without a code change.
- The password reset page (the one that appears after clicking the link) should force the user to re-enter their username.
- If the username entered is incorrect 3 times in a row, lock the account. Remember, your application knows which username is associated with the token. The person attempting to reset the password should know it as well.
- After a successful password reset, send a confirmation email to the user to notify them it happened. This can alert users to fraud if they didn't initiate it.
- Throughout each step of the process, make sure the application is logging everything that occurs so there's a solid audit trail in case something goes haywire.
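To make the token handling concrete, here is a minimal Python sketch of the token recommendations above (the function names and the in-memory store are hypothetical; a real application would persist tokens in a database and load the TTL from configuration):

```python
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory store: username -> (token, expiry).
# Keeping one entry per user enforces "only one valid token at any given time".
_reset_tokens = {}

TOKEN_TTL_MINUTES = 20  # expiry window; make this configurable in practice

def issue_reset_token(username):
    """Create a high-entropy, expiring reset token for the given user."""
    token = secrets.token_urlsafe(32)  # ~256 bits of entropy
    expiry = datetime.now(timezone.utc) + timedelta(minutes=TOKEN_TTL_MINUTES)
    _reset_tokens[username] = (token, expiry)  # silently replaces any older token
    return token

def redeem_reset_token(username, token):
    """Single use: the stored token is removed the moment it is presented."""
    entry = _reset_tokens.pop(username, None)
    if entry is None:
        return False
    stored_token, expiry = entry
    if datetime.now(timezone.utc) > expiry:
        return False
    # Constant-time comparison avoids leaking information via timing.
    return secrets.compare_digest(stored_token, token)
```

Note that redeeming pops the stored entry unconditionally, so a token is invalidated as soon as any request containing it arrives, whether or not the reset succeeds.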
(updated on May 5, 2014 based on some feedback I received)
Wednesday, April 16, 2014
In case you missed it, both IE 11 and Chrome recently changed their behavior and now ignore autocomplete="off" on password input fields within HTML pages. This attribute is something I've always recommended for input fields that contain sensitive data so that browsers won't store the data locally, where it could be compromised. Apparently the changes were made solely because lots of people are using password managers. Here's a snippet from a messy MSDN blog post that tries to explain the reason for changing IE:
Password Managers improve real-world security, and the IE team felt it was important to put users in control. Users rely on their password manager to permit them to comfortably use strong passwords. Password managers encourage strong, unique password creation per site, but unique, strong passwords are often difficult to remember and type on touch devices. If the browser doesn't offer to autocomplete a password, the user assumes that the browser is broken. The user will then either use another browser that ignores the attribute, or install a password manager plugin that ignores it.
I'm not sure I agree. Moving to another browser would not have worked, since until recently all of them honored the attribute. And the post itself states plainly that users could use a password manager plugin to overcome the restriction.
And here's a snippet from a message posted by the Chrome team with their reasoning:
We believe that the current respect for autocomplete='off' for passwords is, in fact, harming the security of users by making browser password managers significantly less useful than they should be, thus discouraging their adoption, making it difficult for users to generate, store, and use more complex or (preferably) random passwords.
Maybe I don't understand the decisions because I don't use a password manager. Either way, it is good that all browsers continue to honor autocomplete="off" for non-password inputs (type="text") so that sensitive data such as credit card numbers can be protected.
Sunday, March 9, 2014
Do you know web application security? Here is a little 10-question quiz to find out. I've interviewed quite a few people for AppSec jobs in the past and asked these types of questions. I thought it would be fun to share. Answers are at the bottom along with your ninja score. Don't cheat by googling for answers!
1. As a web application user, what puts you at most risk to fall victim to a cross-site request forgery (CSRF) attack?
a) Using an old browser
b) Using a web app that is not fully protected by SSL/TLS
c) Using the "keep me logged in" option offered by web apps
d) Using weak passwords
2. TRUE or FALSE? All web applications are vulnerable to CSRF attacks unless there's a specific protection mechanism in place.
3. TRUE or FALSE? An attacker could use a cross-site scripting (XSS) flaw on a banking site to steal login credentials while the victim appears to remain on the legitimate banking site.
4. If you want your web application to defend itself against cross-site scripting attacks that steal session IDs, which cookie attribute is best able to help you?
5. TRUE or FALSE? The best way to eliminate SQL injection vulnerabilities in code is to validate input data.
6. TRUE or FALSE? Using POST requests with hidden form fields provides a significant level of protection against attackers who want to tamper with requests.
7. What is one way developers can defend against forced browsing attacks?
a) Incorporate GUIDs into file names
b) Log all user activity
c) Validate input data
d) Use a sensible directory naming scheme
8. A race condition in a web application can lead to a security hole. Which software analysis technique is best suited to identify the existence of a race condition?
a) A manual penetration test
b) A dynamic (blackbox) automated scan
c) A static (whitebox) scan
d) Functional tests by QA team
9. Your web application allows users to download their account statements in PDF format. What is the most secure way to implement this functionality?
a) Store all PDFs in an obscure directory on the web server and provide a link to the correct PDF depending on the user.
b) Generate the PDF on the fly, write it to a temporary directory on the server, and redirect the browser to that location (via 302 response).
c) Generate the PDF on the fly, store it in memory on the server, and send the bytes of the PDF to the browser directly (via 200 response).
d) Store the PDFs in a database and retrieve the correct PDF by looking at the identifier/primary key provided in the HTTP request.
10. TRUE or FALSE? Most web applications provide only one method of authentication, namely username + password.
1. Answer: c
With the "keep me logged in" option, a persistent cookie is set causing you to be in a permanently-authenticated state. A key factor in a successful CSRF attack is that the victim is authenticated to the target site.
2. Answer: FALSE
Read-only web apps (no actions can be taken by a user) are not subject to CSRF attacks.
3. Answer: TRUE
4. Answer: HttpOnly
5. Answer: FALSE
Using parameterized queries with data binding is the best way. That said, input data validation should always be done.
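As an illustration, a parameterized query using Python's built-in sqlite3 module (the table and data here are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# The user-supplied value is bound as data, never spliced into the SQL string,
# so a classic injection payload cannot change the query's structure.
attacker_input = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT email FROM users WHERE username = ?", (attacker_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing

rows = conn.execute(
    "SELECT email FROM users WHERE username = ?", ("alice",)
).fetchall()
print(rows)  # [('alice@example.com',)]
```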
6. Answer: FALSE
Many free tools are available that make it easy for anyone to edit HTTP requests prior to being sent to the server.
7. Answer: a
Using GUIDs (globally unique identifiers) makes it nearly impossible for a user to guess valid file names. A problem I've seen frequently when doing pen tests is that the application names static files such as PDF or Excel documents in a logical, consistent manner. For example, a file name might include the user's name or account number. This can make it easy for one user to guess the names of other files and access information intended for other users.
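A quick sketch of the idea in Python (the naming convention shown is my own invention):

```python
import uuid

def unguessable_filename(extension):
    """Name a generated document with a random UUID instead of user data
    such as a name or account number, so other users cannot derive it."""
    return "statement-{}.{}".format(uuid.uuid4(), extension)

print(unguessable_filename("pdf"))
# e.g. statement-3f2b8c1e-... .pdf -- not derivable from another user's data
```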
8. Answer: c
Static analysis theoretically has full insight into the whole codebase and should be able to spot a situation where multiple threads compete for the same resource. With dynamic/run-time testing, it can't be guaranteed the race condition will ever manifest itself. If you've ever tried to reproduce a deadlock problem in a piece of software, you know how very difficult it can be.
9. Answer: c
Because the PDF is never written to disk in option c, there is no chance an attacker can forcefully browse to it. Option d is not secure because a user could tamper with the identifier to access another user's document.
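A framework-agnostic Python sketch of option c (the handler shape and helper names are hypothetical, and real PDF generation is stubbed out):

```python
def render_statement_pdf(user_id):
    # Stand-in for real PDF generation; the bytes exist only in memory.
    return b"%PDF-1.4 statement for user " + str(user_id).encode()

def download_statement(authenticated_user_id):
    """Generate the PDF in memory and stream it directly in a 200 response.
    The document never touches disk, and the user id comes from the session,
    not from the request, so there is no identifier to tamper with."""
    pdf_bytes = render_statement_pdf(authenticated_user_id)
    headers = {
        "Content-Type": "application/pdf",
        "Content-Disposition": 'attachment; filename="statement.pdf"',
        "Content-Length": str(len(pdf_bytes)),
    }
    return 200, headers, pdf_bytes
```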
10. Answer: FALSE
Most web applications provide TWO methods of authentication. One is username + password. The other is some sort of Forgot Password mechanism, which is often created as an afterthought and less secure than it needs to be.
AppSec Ninja Level* by answers correct:
- 9-10: Kage
- 7-8: Jounin
- 5-6: Chuunin
- 3-4: Genin
- 0-2: Academy student
* Based on Naruto ranks
Wednesday, February 19, 2014
You might remember the rash of unintended acceleration incidents that occurred in Toyota vehicles a few years ago. Perhaps the worst incident happened near me in Southlake, Texas where four people were killed. I remember thinking at the time that these incidents had all the indicators of a software problem. Well it turns out that is most likely the case. Research from an embedded software expert as part of an Oklahoma trial indicates that a stack overflow may be responsible.
The Toyota issue reminded me of the story of the Therac-25. Every computer science student should be required to read it. The Therac-25 was a medical linear accelerator that used electrons to create high-energy beams to destroy tumors in cancer patients. Eleven of these devices were built and used in the 1980s. Software bugs in the Therac-25 caused massive overdoses of radiation that killed patients.
Here are some quotes from the story. It reads like a novel.
she felt a "tremendous force of heat . . . this red-hot sensation." When the technician came in, the patient said, "You burned me." The technician replied that that was not possible.
She completely lost the use of her shoulder and her arm, and was in constant pain. She had suffered a serious radiation burn, but the manufacturer and operators of the machine refused to believe that it could have been caused by the Therac-25.
the patient said that he felt like he had received an electric shock or that someone had poured hot coffee on his back: He felt a thump and heat and heard a buzzing sound from the equipment. Since this was his ninth treatment, he knew that this was not normal. He began to get up from the treatment table to go for help. It was at this moment that the operator hit the "P" key to proceed with the treatment. The patient said that he felt like his arm was being shocked by electricity and that his hand was leaving his body.
Software quality is really important. The reality is that some bugs can lie hidden for a very long time because they surface only under a very rare set of circumstances. A race condition (multiple threads competing for the same resource) is a good example of this. Another example is the security flaw in MySQL that gave *any* password a 1 in 256 chance of working.
Fortunately, most developers don't write code that can cause direct bodily harm, but I think it's good to be familiar with these types of cases and hopefully avoid repeating history.
Tuesday, February 18, 2014
I was invited to contribute to the blog of application security company Checkmarx. Last week my first post was published and covers some ways you can safely practice your web hacking skills.
Tuesday, January 7, 2014
Demonstrating that a web application is vulnerable to reflected cross-site scripting (XSS) is not very exciting. It's always kind of like, "oh hey, look here, an alert box popped up when you clicked on that link". Scary. Dramatic. Not! I was looking for more interesting ways to show how XSS could be used. I figure the code is more likely to get fixed if you can make a memorable impression. I came up with a few options.
I'll present these techniques using 3 Internet-facing websites that are purposefully built to be susceptible to reflected XSS.
- demo.testfire.net (operated by IBM)
- www.webscantest.com (NT Objectives)
- testasp.vulnweb.com (Acunetix)
First, there is the boring alert box that I'm trying to get away from:
Lastly, I have a hilarious, but mildly racy (NSFW?) alternative. (in Firefox you may have to click refresh for this to work - not sure why)
Sunday, December 22, 2013
We all know that you shouldn't re-use the same password on different websites, but this is extremely difficult in practice considering the number of sites people use today. Password managers were developed to help solve the problem of remembering passwords. Some examples are KeePass, Password Safe, and LastPass. They work fine for many people. However, I personally don't like the idea of depending on a password manager. I want the ability to pull the correct password out of my brain in case I'm ever in a situation where I don't have access to the password manager. There's also a risk that your passwords could be compromised (this is true about any data that is stored, encrypted or not).
I have about 80 different passwords, but I don't have any problem remembering them. I don't write them down or use any sort of password manager. I came up with a system that enables me to remember my passwords. It works for me, so I'm sharing the technique in case anyone else thinks it might be helpful.
With my system, you only have to remember two things.
- Your "core" password.
- Your scheme.
First, pick a core password; in the examples below, the core password is kM92ax43. Second, pick a scheme based on the website's domain name. The scheme will be used to supplement your core password. As a simple example, you could look at the last 3 characters of the site's domain name, add one letter to each (a Caesar-style shift sometimes called "ROT1"), and append the result to your core password. So, for the site "www.verizonwireless.com", the last 3 characters of the domain name are "ess". Therefore the 3 additional characters would be "ftt" and your final password becomes kM92ax43ftt.
For sprint.com, your final password is kM92ax43jou.
For att.com, your final password is kM92ax43buu.
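For the curious, the scheme in these examples can be expressed in a few lines of Python (the helper names are mine; the core password is the one from the examples above):

```python
def rot1(text):
    # Shift each lowercase letter forward by one (z wraps around to a).
    return "".join(chr((ord(c) - ord("a") + 1) % 26 + ord("a")) for c in text)

def site_password(core, domain):
    """Core password + ROT1 of the last three letters of the site's name
    (the label before the TLD, e.g. 'verizonwireless' or 'att')."""
    labels = domain.lower().split(".")
    name = labels[-2] if len(labels) >= 2 else labels[0]
    return core + rot1(name[-3:])

print(site_password("kM92ax43", "www.verizonwireless.com"))  # kM92ax43ftt
print(site_password("kM92ax43", "sprint.com"))               # kM92ax43jou
```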
Tweak your scheme however you want before finalizing it. Some possibilities:
- Prepend the first character to your core password/append the last two
- Capitalize one or two of the letters
- Subtract two letters ("ROT24" encryption) instead of adding one
- Look at the first two chars + last char of the domain, instead of the last three
P.S. My system isn't perfect. It doesn't work on sites that have a short maximum password length (like 10) or have onerous password requirements (like requiring a special character). It also doesn't work for my Windows domain account or my home router where I'm not actually logging into a website. I treat these as exceptions and remember them separately. I do keep notes about exceptions as well, but I rarely need to refer to them.
Sunday, December 8, 2013
I made some wordlists a while ago containing common usernames. They have proven very useful to me when doing application penetration testing; in particular, they are great to use as the payload for Burp Intruder.
I created the lists by taking the 10,000 most common last names in the United States and prepending a single letter (for example "dferguson" appears in the usernames-d.txt wordlist). There are wordlists for all letters except "i", "q", "x", and "z" (frankly, there aren't many first names that begin with those letters so it's a waste of time to try them).
Click here to get the username wordlists (zip)
One scenario where you might leverage these wordlists is a web application where the login page returns a different error message depending on whether a valid or an invalid username is received. Run Intruder on the login request and you can probably reap a nice set of valid accounts.
You'll also find a special wordlist called usernames-top100-each-letter.txt. This is perfect when you have limited time and want to maximize your potential to find a valid account. And there's another list called usernames-generic.txt, which could help you discover some test accounts. Of course you can combine these wordlists any way you want (even concatenate them together and try the whole darn thing).
Things get a little more complex if the web app requires an email address for login. You could certainly append "@gmail.com", "@yahoo.com", "@aol.com", etc. to the usernames. Separate wordlists could be created for each email domain, or you could just leverage the power of Burp Intruder to append the domain on the fly.
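Generating those combined lists offline is trivial; here is a quick Python sketch with made-up usernames:

```python
usernames = ["dferguson", "jsmith", "mjones"]   # e.g. pulled from the wordlists
domains = ["gmail.com", "yahoo.com", "aol.com"]

# Cross every username with every email domain.
email_candidates = [f"{u}@{d}" for u in usernames for d in domains]
print(len(email_candidates))   # 9
print(email_candidates[0])     # dferguson@gmail.com
```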
Friday, October 4, 2013
I think there is still confusion in the minds of many developers about Base64 encoding. It's not encryption. It doesn't offer any protection of the data at all. It's trivial to decode with Burp Suite, a desktop utility, or a website such as this.
When testing a web application for security, be sure to decode any Base64-encoded strings you see (often they will have one or two "=" at the end). You may find the resulting data is gibberish, which probably means it's encrypted or hashed. Or you may find sensitive user data developers were hoping to hide. You also might find technical information about the deployment infrastructure or a clue that leads you to another area of the application that has a security hole.
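For example, decoding a suspicious value with Python's standard base64 module (the value here is contrived):

```python
import base64

# A hidden-field or cookie value spotted during testing; the trailing
# "==" is a hint that it is Base64-encoded.
suspicious = "dXNlcj1hZG1pbg=="
print(base64.b64decode(suspicious))  # b'user=admin'
```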
What is Base64 encoding? Just a way to encode ANY data so it can be represented as plain ASCII text. This is convenient in many situations; email attachments are sent as Base64 data, for example. For every 3 bytes of data, encoding produces 4 ASCII characters. It works by first concatenating ("smooshing") the data together in binary form, then chunking it up into 6-bit sequences. Padding with zero bytes is done if needed so the data is a multiple of 3 bytes. Each 6-bit sequence is converted to an ASCII character according to a standard Base64 encoding table.
Here is an example of Base64 encoding in action.
1) Start with some data. It could be Hex, Binary, ASCII, and so on. Let's use the word "Hi".
2) Convert each byte to binary. We need to add one zero byte for padding in this case.
H : ascii code 72 / binary 01001000
i : ascii code 105 / binary 01101001
0 : binary 00000000
3) Smoosh the binary data together:
010010000110100100000000
4) Chunk it up into 6 bit sequences:
010010 000110 100100 000000
The decimal equivalent is:
18 6 36 0
5) Convert each 6-bit value to a character using the standard Base64 alphabet (A-Z, a-z, 0-9, +, /). Any 6-bit group consisting entirely of padding zeros is represented by a "=".
The Base64 string is: SGk=
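The steps above can be verified mechanically. This Python sketch follows the same recipe and agrees with the standard library's base64 module:

```python
import base64

B64_ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "abcdefghijklmnopqrstuvwxyz"
                "0123456789+/")

def b64_by_hand(data):
    # Steps 2-3: each byte to 8-bit binary, smooshed together.
    bits = "".join(f"{byte:08b}" for byte in data)
    bits += "0" * ((-len(bits)) % 6)          # pad with zero bits to a multiple of 6
    # Step 4-5: chunk into 6-bit groups and look each one up in the table.
    out = "".join(B64_ALPHABET[int(bits[i:i + 6], 2)]
                  for i in range(0, len(bits), 6))
    out += "=" * ((-len(out)) % 4)            # "=" pads output to a multiple of 4
    return out

print(b64_by_hand(b"Hi"))                     # SGk=
print(base64.b64encode(b"Hi").decode())       # SGk=
```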
Saturday, March 30, 2013
We all understand the security benefits of using a CAPTCHA as it relates to anti-automation in a web application. The most common implementation of CAPTCHA functionality is reCAPTCHA, a technology that Google purchased in 2009. Most of the time it works great, but recently I ran across a site that was producing some bizarre images. I saved a few of these, and I'm pleased to present them here now for your enjoyment!
Hmm, I don't have some of these letters on my keyboard:
Are users expected to know calculus now?
If a word is upside down, would you enter it left to right or right to left?
Absolutely no clue:
Wednesday, November 28, 2012
I've been taking a closer look at my mobile apps lately, specifically the permissions they request when downloading and installing them. It has been quite an eye opener. It turns out that mobile apps are invading our privacy. It's as simple as this: any app that can read your contacts and access the Internet can slurp your data and send it off to some random server to be stored and/or used in a nefarious way.
The finding that surprised me the most was the audacity of my little old flashlight app. I was using "Tiny Flashlight + LED", which is allowed to read your phone identity and have full Internet access. A flashlight app that needs Internet access is nonsensical to me. I switched to OI Flashlight, which requires only the permissions of camera control and preventing the device from sleeping. I discovered during my research that most flashlight apps want Internet access. The top 4 flashlight apps that appear when searching for "flashlight" on Google Play all request permissions like these:
- full Internet access
- your location (both coarse and fine)
- modify your SD card contents
- read your phone identity