Friday, April 25, 2014

Implementing Forgot Password with Email

It turns out that application developers sometimes need to implement a forgot password feature but don't have much identity data about the users in the system.  Nor can they always require users to establish personal security questions.  These things are a key part of my forgot password security recommendations.  But the reality is that sometimes you don't have any information about a user except their username and email address.  Heck, sometimes the email address IS the username.

In this type of situation, implementing a secure forgot password feature is challenging.  Sending a password reset link via email is probably the best option (barring a non-automated solution where users call customer support).  So here I will offer up some specific ideas on how to secure the process when using email.

  1. When a user invokes the forgot password process, don't say anything about whether the username entered was recognized or not.  It should simply display a generic message such as: "Thank you. If the username you provided is valid, we will send you an email with instructions on how to reset your password".
  2. Along with the above, don't show the email address where the email was sent.  It might give legitimate users a warm, fuzzy feeling but it definitely helps attackers in a number of scenarios.
  3. The password reset link in the email message should incorporate a high-entropy, cryptographically random token (a randomly generated GUID works). The token could be a parameter in the query string or part of the URL path itself; it doesn't really matter.
  4. Allow only one valid token per user at any given time.
  5. Make sure the email message does not include the username.
  6. Make sure the link can be used only once.  In other words, invalidate the token immediately when an HTTP request containing that token is received.
  7. The link should expire.  Depending on your situation, implement logic to invalidate the token 10, 20, or 30 minutes after the email is sent out.  Make it a configurable value so it can be adjusted if needed without a code change.
  8. The password reset page (the one that appears after clicking the link) should force the user to re-enter his username.
  9. If the username entered is incorrect 3 times in a row, lock the account.  Remember, your application knows which username is associated with the token.  The person attempting to reset the password should know it as well.
  10. After a successful password reset, send a confirmation email to the user to notify them it happened.  This can alert users to fraud if they didn't initiate it.
  11. Throughout each step of the process, make sure the application is logging everything that occurs so there's a solid audit trail in case something goes haywire.
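
Several of the token-related controls above (the high-entropy token in #3, one token per user in #4, single use in #6, and configurable expiry in #7) can be sketched in a few lines of Python. This is a minimal in-memory illustration with hypothetical function names; a real application would keep the tokens in a database:

```python
import secrets
import time

RESET_TOKEN_TTL_SECONDS = 30 * 60  # configurable expiry (#7)

# token -> (username, issued_at); only one pending token per user (#4)
_pending = {}

def issue_reset_token(username):
    # Drop any earlier token for this user so only one is valid at a time.
    for tok, (user, _) in list(_pending.items()):
        if user == username:
            del _pending[tok]
    token = secrets.token_urlsafe(32)  # high-entropy, unguessable (#3)
    _pending[token] = (username, time.time())
    return token  # embed in the emailed link, e.g. /reset?token=...

def redeem_reset_token(token):
    # Single use: remove the token as soon as it is presented (#6).
    entry = _pending.pop(token, None)
    if entry is None:
        return None
    username, issued_at = entry
    if time.time() - issued_at > RESET_TOKEN_TTL_SECONDS:
        return None  # link expired (#7)
    return username
```

Issuing a second token automatically invalidates the first, and redeeming a token twice fails on the second attempt.
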
So those are the mitigating controls I came up with.  Feel free to let me know in the comments if you have any other ideas!

(updated on May 5, 2014 based on some feedback I received)


Wednesday, April 16, 2014

Autocomplete="off" Now in Disfavor

In case you missed it, both IE 11 and Chrome recently changed to ignore autocomplete="off" on password input fields within HTML pages.  This attribute is something I've always recommended for input fields that contain sensitive data so that browsers won't store the data locally, where it could be compromised.  Apparently the changes were made solely because lots of people are using password managers.  Here's a snippet from a messy MSDN blog post that tries to explain the reason for changing IE:

Password Managers improve real-world security, and the IE team felt it was important to put users in control. Users rely on their password manager to permit them to comfortably use strong passwords. Password managers encourage strong, unique password creation per site, but unique, strong passwords are often difficult to remember and type on touch devices. If the browser doesn't offer to autocomplete a password, the user assumes that the browser is broken. The user will then either use another browser that ignores the attribute, or install a password manager plugin that ignores it.
I'm not sure I agree.  Moving to another browser would not have worked since they all honored the attribute until recently.  It is also stated plainly that users could use a password manager plugin to overcome the restriction.

And here's a snippet from a message posted by the Chrome team with their reasoning:
We believe that the current respect for autocomplete='off' for passwords is, in fact, harming the security of users by making browser password managers significantly less useful than they should be, thus discouraging their adoption, making it difficult for users to generate, store, and use more complex or (preferably) random passwords.
Maybe I don't understand the decisions because I don't use a password manager.  Either way, it is good that all browsers continue to honor autocomplete="off" for non-password inputs (type="text") so that sensitive data such as credit card numbers can be protected.


Sunday, March 9, 2014

A Basic Application Security Quiz

Do you know web application security?  Here is a little 10-question quiz to find out.  I've interviewed quite a few people for AppSec jobs in the past and asked these types of questions.  I thought it would be fun to share.  Answers are at the bottom along with your ninja score. Don't cheat by googling for answers!

1. As a web application user, what puts you at most risk to fall victim to a cross-site request forgery (CSRF) attack?
a) Using an old browser
b) Using a web app that is not fully protected by SSL/TLS
c) Using the "keep me logged in" option offered by web apps
d) Using weak passwords
2. TRUE or FALSE? All web applications are vulnerable to CSRF attacks unless there's a specific protection mechanism in place.
3. TRUE or FALSE? An attacker could use a cross-site scripting (XSS) flaw on a banking site to steal login credentials while the victim appears to remain on the legitimate banking site.
4. If you want your web application to defend itself against cross-site scripting attacks that steal session IDs, which cookie attribute is best able to help you?
a) Secure
b) Path
c) Expires
d) HttpOnly
5. TRUE or FALSE? The best way to eliminate SQL injection vulnerabilities in code is to validate input data.
6. TRUE or FALSE? Using POST requests with hidden form fields provides a significant level of protection against attackers who want to tamper with requests.
7. What is one way developers can defend against forced browsing attacks?
a) Incorporate GUIDs into file names
b) Log all user activity
c) Validate input data
d) Use a sensible directory naming scheme
8. A race condition in a web application can lead to a security hole.  Which software analysis technique is best suited to identify the existence of a race condition?
a) A manual penetration test
b) A dynamic (blackbox) automated scan
c) A static (whitebox) scan
d) Functional tests by QA team
9. Your web application allows users to download their account statements in PDF format. What is the most secure way to implement this functionality?
a) Store all PDFs in an obscure directory on the web server and provide a link to the correct PDF depending on the user.
b) Generate the PDF on the fly, write it to a temporary directory on the server, and redirect the browser to that location (via 302 response).
c) Generate the PDF on the fly, store it in memory on the server, and send the bytes of the PDF to the browser directly (via 200 response).
d) Store the PDFs in a database and retrieve the correct PDF by looking at the identifier/primary key provided in the HTTP request.
10. TRUE or FALSE? Most web applications provide only one method of authentication, namely username + password. 

ANSWERS

1. Answer: c
With the "keep me logged in" option, a persistent cookie is set causing you to be in a permanently-authenticated state. A key factor in a successful CSRF attack is that the victim is authenticated to the target site.

2. Answer: FALSE
Read-only web apps (no actions can be taken by a user) are not subject to CSRF attacks.

3. Answer: TRUE
With XSS, JavaScript injected into the legitimate site could create a login form whose action attribute points to the attacker's site.

4. Answer: d
The HttpOnly attribute of a cookie instructs web browsers that JavaScript is not allowed to access the cookie.  This means that malicious JavaScript injected in an XSS attack can't access the cookie.  (HttpOnly is widely supported by web browsers)

5. Answer: FALSE
Using parameterized queries with data binding is the best way.  That said, input data validation should always be done.
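
The difference is easy to demonstrate with Python's built-in sqlite3 module (the users table and data here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "x' OR '1'='1"

# Vulnerable: string concatenation lets the input alter the query logic.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE username = '" + attacker_input + "'"
).fetchall()

# Safe: the driver binds the value; it can never be interpreted as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE username = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # [('s3cret',)] -- injection succeeded
print(safe)        # [] -- input treated purely as data
```

The concatenated query returns data the attacker should never see; the parameterized query treats the same input as an (unmatched) literal username.
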

6. Answer: FALSE
Many free tools are available that make it easy for anyone to edit HTTP requests prior to being sent to the server.

7. Answer: a
Using GUIDs (globally unique identifiers) makes it near impossible for a user to guess valid file names.  A problem I've seen frequently when doing pen tests is that the application names static files such as PDF or Excel documents in a logical, consistent manner.  For example, a file name might include the user's name or account number.  This could make it easy for one user to guess the name of other files and access information intended for other users.

8. Answer: c
Static analysis theoretically has full insight into the whole codebase and should be able to spot a situation where multiple threads compete for the same resource.  With dynamic/run-time testing, it can't be guaranteed the race condition will ever manifest itself.  If you've ever tried to reproduce a deadlock problem in a piece of software, you know how very difficult it can be.

9. Answer: c
Because the PDF is never written to disk in option c, there is no chance an attacker can forcefully browse to it.  Option d is not secure because a user could tamper with the identifier to access another user's document.
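
A minimal sketch of option c, assuming the PDF bytes have already been generated in memory (the function and response shape are illustrative, not from any particular framework):

```python
def pdf_response(pdf_bytes, filename="statement.pdf"):
    # Build a 200 response that sends the in-memory bytes directly.
    # The PDF never touches the filesystem, so there is no file on the
    # server for an attacker to forcefully browse to.
    status = "200 OK"
    headers = [
        ("Content-Type", "application/pdf"),
        ("Content-Disposition", 'attachment; filename="' + filename + '"'),
        ("Content-Length", str(len(pdf_bytes))),
    ]
    return status, headers, pdf_bytes

status, headers, body = pdf_response(b"%PDF-1.4 ...fake bytes...")
print(status)  # 200 OK
```
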

10. Answer: FALSE
Most web applications provide TWO methods of authentication.  One is username + password.  The other is some sort of Forgot Password mechanism, which is often created as an afterthought and less secure than it needs to be.

SCORING
Answers Correct       AppSec Ninja Level*
9-10                  Kage
7-8                   Jounin
5-6                   Chuunin
3-4                   Genin
0-2                   Academy student
* Based on Naruto Rank


Wednesday, February 19, 2014

Software Bugs That Kill

You might remember the rash of unintended acceleration incidents that occurred in Toyota vehicles a few years ago.  Perhaps the worst incident happened near me in Southlake, Texas, where four people were killed.  I remember thinking at the time that these incidents had all the indicators of a software problem.  Well, it turns out that is most likely the case.  Research from an embedded software expert as part of an Oklahoma trial indicates that a stack overflow may be responsible.

The Toyota issue reminded me of the story of the Therac-25.  Every computer science student should be required to read it.  The Therac-25 was a medical linear accelerator that used electrons to create high-energy beams to destroy tumors in cancer patients.  Eleven of these devices were built and used in the 1980s. Software bugs in the Therac-25 caused massive overdoses of radiation that killed patients.

Here are some quotes from the story.  It reads like a novel.

she felt a "tremendous force of heat . . . this red-hot sensation." When the technician came in, the patient said, "You burned me." The technician replied that that was not possible.
She completely lost the use of her shoulder and her arm, and was in constant pain. She had suffered a serious radiation burn, but the manufacturer and operators of the machine refused to believe that it could have been caused by the Therac-25.
the patient said that he felt like he had received an electric shock or that someone had poured hot coffee on his back: He felt a thump and heat and heard a buzzing sound from the equipment. Since this was his ninth treatment, he knew that this was not normal. He began to get up from the treatment table to go for help. It was at this moment that the operator hit the "P" key to proceed with the treatment. The patient said that he felt like his arm was being shocked by electricity and that his hand was leaving his body.
Software quality is really important.  The reality is that some bugs can lie hidden for a very long time because they surface only under a very rare set of circumstances.  A race condition (multiple threads competing for the same resource) is a good example of this.  Another example is the security flaw in MySQL that gave *any* password a 1 in 256 chance of working.

Fortunately, most developers don't write code that can cause direct bodily harm, but I think it's good to be familiar with these types of cases and hopefully avoid repeating history.


Tuesday, February 18, 2014

Where To Practice Your Web Hacking Skills

I was invited to contribute to the blog of application security company Checkmarx.  Last week my first post was published and covers some ways you can safely practice your web hacking skills.


Tuesday, January 7, 2014

Alternatives to the Boring XSS Alert Box

Demonstrating that a web application is vulnerable to reflected cross-site scripting (XSS) is not very exciting.  It's always kind of like, "oh hey, look here, an alert box popped up when you clicked on that link".  Scary.  Dramatic. Not!  I was looking for more interesting ways to show how XSS could be used.  I figure the code is more likely to get fixed if you can make a memorable impression.  I came up with a few options. 

I'll present these techniques using 3 websites that are Internet facing and purposefully built to be susceptible to reflected XSS.

  1. demo.testfire.net (operated by IBM)
  2. www.webscantest.com (NT Objectives)
  3. testasp.vulnweb.com (Acunetix)
All of the URLs here were tested successfully with Firefox 26, IE 11 with the XSS Filter disabled, and Chrome 31 with the "--disable-xss-auditor" command line option.  If you have the NoScript Firefox extension, you'll obviously have to enable scripts on these sites (as well as sc0rn.com) for everything to work properly.

First, there is the boring alert box that I'm trying to get away from:
Alternative #1 is to fill the victim's screen with unicorns and rainbows. (in Firefox you may have to click refresh for this to work - not sure why)
Alternative #2 is to Rickroll the victim (i.e., redirect to Rick Astley's famous 80's music video).
Alternative #3 is to display some HTML... a funny news story in this case. (in Firefox you may have to click refresh for this to work - not sure why)
Feel free to use these or create your own.  I think you'll agree these are definitely better than popping an alert box.

Lastly, I have a hilarious, but mildly racy (NSFW?) alternative. (in Firefox you may have to click refresh for this to work - not sure why)


Sunday, December 22, 2013

How I Keep Track of My Passwords

We all know that you shouldn't re-use the same password on different websites, but this is extremely difficult in practice considering the number of sites people use today.  Password managers were developed to help solve the problem of remembering passwords.  Some examples are KeePass, Password Safe, and LastPass.  They work fine for many people.  However, I personally don't like the idea of depending on a password manager.  I want the ability to pull the correct password out of my brain in case I'm ever in a situation where I don't have access to the password manager.  There's also a risk that your passwords could be compromised (this is true about any data that is stored, encrypted or not).

I have about 80 different passwords, but I don't have any problem remembering them.  I don't write them down or use any sort of password manager.  I came up with a system that enables me to remember my passwords.  It works for me, so I'm sharing the technique in case anyone else thinks it might be helpful.

With my system, you only have to remember two things.

  1. Your "core" password.
  2. Your scheme.
First, come up with a strong core password of about 8 or 9 characters.  This core piece should be gibberish and needs to have a combination of lowercase letters, uppercase letters, and numbers.  An example is kM92ax43. Whatever you decide upon, memorize it.

Second, pick a scheme based on the website's domain name.  The scheme will be used to supplement your core password.  As a simple example, you could look at the last 3 characters of the site's domain, add one letter to each (this is actually an encryption technique called "ROT1"), and append this to your core password.  So, for the site "www.verizonwireless.com", we see the last 3 characters of the domain are "ess".  Therefore the 3 additional characters would be "ftt" and your final password becomes kM92ax43ftt.

For sprint.com, your final password is kM92ax43jou.

For att.com, your final password is kM92ax43buu.

Tweak your scheme however you want before finalizing it.  Some possibilities:
  • Prepend the first character to your core password/append the last two 
  • Capitalize one or two of the letters
  • Subtract two letters ("ROT24" encryption) instead of adding one
  • Look at the first two chars + last char of the domain, instead of the last three
You get the idea. The scheme remains constant, but your password changes.  Whatever you decide, never tell anyone your core password or your scheme.
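
Here's roughly what the base scheme looks like in code, using Python for illustration (the helper names are mine, and the domain parsing is simplified; it won't handle multi-part TLDs like .co.uk):

```python
def rot1(s):
    # Shift each lowercase letter forward by one, wrapping z -> a.
    return "".join(chr((ord(c) - ord("a") + 1) % 26 + ord("a")) for c in s)

def site_password(core, domain):
    # Take the last 3 characters of the domain name (before the TLD),
    # ROT1 them, and append the result to the core password.
    name = domain.split(".")[-2]  # "www.verizonwireless.com" -> "verizonwireless"
    return core + rot1(name[-3:])

print(site_password("kM92ax43", "www.verizonwireless.com"))  # kM92ax43ftt
print(site_password("kM92ax43", "sprint.com"))               # kM92ax43jou
print(site_password("kM92ax43", "att.com"))                  # kM92ax43buu
```
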

P.S.  My system isn't perfect.  It doesn't work on sites that have a short maximum password length (like 10) or have onerous password requirements (like requiring a special character).  It also doesn't work for my Windows domain account or my home router where I'm not actually logging into a website.  I treat these as exceptions and remember them separately.  I do keep notes about exceptions as well, but I rarely need to refer to them.


Sunday, December 8, 2013

Wordlists for Common Usernames

I made some wordlists a while ago containing common usernames.  They have proven very useful to me when doing application penetration testing; specifically, they are great to use as the payload for Burp Intruder.

I created the lists by taking the 10,000 most common last names in the United States and prepending a single letter (for example "dferguson" appears in the usernames-d.txt wordlist).  There are wordlists for all letters except "i", "q", "x", and "z" (frankly, there aren't many first names that begin with those letters so it's a waste of time to try them).
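
The generation process is simple enough to sketch in a few lines of Python (the function name is mine; feed it the real list of 10,000 common last names to reproduce the wordlists):

```python
import string

def build_wordlists(last_names):
    # One wordlist per leading initial, skipping the letters that are
    # rare as first-name initials ("i", "q", "x", "z").
    skipped = set("iqxz")
    wordlists = {}
    for letter in string.ascii_lowercase:
        if letter in skipped:
            continue
        wordlists[letter] = [letter + name for name in last_names]
    return wordlists

lists = build_wordlists(["ferguson", "smith", "jones"])
print(lists["d"][0])  # dferguson
```
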

 Click here to get the username wordlists (zip)

One scenario where you might leverage these wordlists is a web application whose login page returns a different error message depending on whether a valid or an invalid username was entered.  Run Intruder on the login request and you can probably reap a nice set of valid accounts.

You'll also find a special wordlist called usernames-top100-each-letter.txt.  This is perfect when you have limited time and want to maximize your potential to find a valid account.  And there's another list called usernames-generic.txt, which could help you discover some test accounts.  Of course you can combine these wordlists any way you want (even concatenate them together and try the whole darn thing).

Things get a little more complex if the web app requires an email address for login.  You could certainly append "@gmail.com", "@yahoo.com", "@aol.com", etc. to the usernames.  Separate wordlists could be created for each email domain, or you could just leverage the power of Burp Intruder to append the domain on the fly.


Friday, October 4, 2013

Think of Base64 Data as Plain Text

I think there is still confusion in the minds of many developers about Base64 encoding.  It's not encryption.  It doesn't offer any protection of the data at all.  It's trivial to decode with Burp Suite, a desktop utility, or a website such as this.

When testing a web application for security, be sure to decode any Base64-encoded strings you see (often they will have one or two "=" at the end).  You may find the resulting data is gibberish, which probably means it's encrypted or hashed.  Or you may find sensitive user data developers were hoping to hide.  You also might find technical information about the deployment infrastructure or a clue that leads you to another area of the application that has a security hole.

What is Base64 encoding?  Just a way to encode ANY data so it can be represented as plain ASCII text.  This is convenient in many situations.  Email attachments are sent as Base64 data, for example.  For every 3 bytes of data, encoding will get you 4 ASCII characters.  It works by first concatenating ("smooshing") the data together in binary form, then chunking it up into 6-bit sequences, padding with zero bits if needed to fill out the final chunk.  Each 6-bit sequence is converted to an ASCII character according to the standard Base64 encoding table, and "=" characters pad the output to a multiple of 4 characters.

Here is an example of Base64 encoding in action.

1) Start with some data. It could be Hex, Binary, ASCII, and so on.  Let's use the word "Hi".

2) Convert each byte to binary. We need to add one byte of zeros for padding in this case.
H : ascii code 72 / binary 01001000
i : ascii code 105 / binary 01101001
0 : binary 00000000

3) Smoosh the binary data together:
010010000110100100000000

4) Chunk it up into 6 bit sequences:
010010 000110 100100 000000
The decimal equivalent is:
18  6  36  0

5) Convert to Base64 using the standard encoding table. A trailing chunk consisting entirely of padding zeros is represented by "=".

The Base64 string is:  SGk=
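
The walk-through can be checked against Python's standard base64 module. (Strictly, only two zero bits are needed to complete the third 6-bit chunk; the fourth, all-padding chunk is what shows up as "=" in the output.)

```python
import base64

data = b"Hi"

# Redo the 6-bit chunking by hand.
bits = "".join(format(byte, "08b") for byte in data)
bits += "0" * (-len(bits) % 6)          # pad the final chunk with zero bits
chunks = [bits[i:i + 6] for i in range(0, len(bits), 6)]
print([int(c, 2) for c in chunks])      # [18, 6, 36]

# The library agrees with the manual result.
print(base64.b64encode(data).decode())  # SGk=
```
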



Saturday, March 30, 2013

Bizarre CAPTCHAs

We all understand the security benefits of using a CAPTCHA as it relates to anti-automation in a web application.  The most common implementation of CAPTCHA functionality is reCAPTCHA, a technology that Google purchased in 2009.  Most of the time it works great, but recently I ran across a site that was producing some bizarre images.  I saved a few of these, and I'm pleased to present them here now for your enjoyment!

Hmm, I don't have some of these letters on my keyboard:




Users are expected to know calculus now?

If a word is upside down, would you enter it left to right or right to left?




Absolutely no clue:




Wednesday, November 28, 2012

The Audacity of Your Flashlight App

I've been taking a closer look at my mobile apps lately, specifically the permissions they request when downloading and installing them.  It has been quite an eye opener.  It turns out that mobile apps are invading our privacy.  It's as simple as this: any app that can read your contacts and access the Internet can slurp your data and send it off to some random server to be stored and/or used in a nefarious way.

The finding that surprised me the most was the audacity of my little old flashlight app.  I was using "Tiny Flashlight + LED", which is allowed to read your phone identity and have full Internet access.  A flashlight app that needs Internet access is nonsensical to me.  I switched to use OI Flashlight, which requires only the permissions of camera control and preventing the device from sleeping.  I discovered during my research that most flashlight apps want Internet access.  The top 4 flashlight apps that appear when searching for "flashlight" on Google Play are:

  1. Tiny Flashlight + LED
  2. Brightest Flashlight Free
  3. Flashlight
  4. Color Flashlight
All four require Internet connectivity!  However, the winner of the most inappropriate and egregious permissions contest is "Brightest Flashlight Free" by Goldenshores Technologies, LLC.  This popular app (over 10 million downloads) requires the following permissions:
  • full Internet access
  • your location (both coarse and fine)
  • modify your SD card contents
  • read your phone identity
Can you think of a reason a flashlight app needs to know your current location or modify the data on your SD card?  I can't either.


Tuesday, November 20, 2012

Risky to Report Website Vulns

The main reason I stopped reporting vulnerabilities to website owners is the risk of being prosecuted.  The Internet is more dangerous when well-meaning security researchers are treated this way.  I was new to Application Security in 2006, so I didn't realize that I was actually taking a pretty big risk when I told Netflix about their CSRF vulnerabilities.  In my mind I was doing them a favor.  They got a free mini pen test.  In fact as a Netflix subscriber, I was giving them money!  It turns out they were nice and simply said "thank you", then went about fixing the issue.

Today I ran across Patrick Webster's story from Australia, and he wasn't so lucky.  He noticed that his bank's web application allowed any customer to view another customer's account information, including very sensitive data that could enable identity theft.  This type of insecure direct object reference vulnerability is very simple to exploit.  Mr. Webster just changed a numerical parameter in the URL to discover the problem.  He reported it to his bank, which decided to report him to the police.  It's not like this guy was a determined attacker with premeditation who spent weeks doing reconnaissance on the site.  That said, he clearly went too far by running a script that "cycled through each ID number and pulled down the relevant report to his computer".  That wasn't necessary to report the vulnerability.

Another example is Andrew Auernheimer, who is potentially facing 5 years in prison due to his AT&T "account slurper" script.  Again, he went too far with the script, but he might well have been prosecuted anyway.  One of the comments on this story was humorous:

You seem to be implying that every exploit can be anticipated. The article points out that AT&T changed their code after discovery of the hack. There is no indication that they knew it was a problem before hand.
Web app vulns can and should be anticipated.

