Sunday, December 7, 2014

I Cut My Payment Card in Half and What I Found Surprised Me

Recently I received an American Express card from Citibank to replace my expiring one.  Naturally, I cut the old card in half.  My customary procedure is then to discard one of the pieces in a trash receptacle at my house and the other piece in a different trash receptacle.  I figure this will keep me pretty safe from dumpster-diving fraudsters because the trash receptacles are typically emptied at different times and go into different plastic trash bags.

This time I decided to examine the pieces of my card.  What I found was that having only the right-side piece would allow someone to reconstitute the full 15-digit account number! Given where I cut the card, which was pretty much right down the middle, the front showed the last 7 digits and the back showed the first 8 digits.  See the photos below (numbers masked for the protection of me).

What's more, the 4-digit security code appeared on the front side and my full signature was visible on the back (also masked for my protection).  At least the security code is different on my new card, so once the new card is activated, using the old code should cause a payment authorization failure.  Still, many e-commerce sites do not require the security code when making a purchase.

So with just half of my old card, the expiration date is the only unknown to a dumpster diver.  That is not a big obstacle to overcome at all.  Logically, if someone throws away a payment card, the probable reason is that he/she received a new card to replace the expiring one.  What would be the expiration date of the new card?  It's very likely to be either two years or four years from now - either the current month or the subsequent month.
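To put that guesswork in concrete terms, the candidate dates can be enumerated in a few lines of Python.  (The two- and four-year terms and the current-or-next-month window are just my reasoning from above, not hard data.)

```python
from datetime import date

def likely_expirations(today=None):
    """Enumerate the handful of likely expiration dates for a
    replacement card: two or four years out, in either the
    current month or the following month."""
    today = today or date.today()
    candidates = []
    for years in (2, 4):
        for month_offset in (0, 1):
            month = today.month + month_offset
            year = today.year + years + (month - 1) // 12
            month = (month - 1) % 12 + 1
            candidates.append(f"{month:02d}/{year % 100:02d}")
    return candidates

print(likely_expirations(date(2014, 12, 7)))
# ['12/16', '01/17', '12/18', '01/19']
```

Four guesses is all a dumpster diver would need to try.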

This reminds me of the story of the torn-up credit card application that I read about a few years ago.  A man named Rob Cockerham taped the pieces back together, filled out the application, and sent it in.  Amazingly, a shiny new card arrived in the mail for him a few weeks later.

The bottom line is: Be aware that if you cut up your debit cards or credit cards and throw the pieces away in different receptacles as I do, you're not necessarily safe from dumpster diving.

I'm asking for a shredder for Christmas.


Thursday, November 20, 2014

Wordlist for Common Pet Names

If you are testing web applications for security, be sure to examine the Forgot Password functionality and attempt to subvert it.  It's another way that users can authenticate to the app and is often less secure than the primary method.  First you'll need to enumerate usernames (try the username wordlists I made available a while ago).  Once you have some valid usernames, the Forgot Password functionality will often present you with a challenge to answer one of the user's personal security questions.

One of the most common security questions you see is "What was the name of your first pet?".  If the application doesn't limit the number of attempts, you have a very good chance of answering this question by iterating through different names with a tool like Burp Intruder.  The last time I did this successfully, "Rocky" was the name of the user's pet.
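At its core the attack is a simple loop.  Here's a minimal Python sketch of what Burp Intruder automates; the oracle function below is a stand-in, since against a real app the check would be an HTTP POST to the target's security-question form:

```python
def guess_pet_name(submit_answer, wordlist):
    """Iterate candidate pet names the way Burp Intruder would.

    submit_answer(name) should submit the candidate to the app's
    forgot-password form and return True if it was accepted.
    """
    for name in wordlist:
        if submit_answer(name):
            return name
    return None

# Stand-in oracle for demonstration only; in a real engagement this
# would POST to the target and inspect the response for success.
def demo_oracle(name):
    return name == "Rocky"

print(guess_pet_name(demo_oracle, ["Max", "Bella", "Buddy", "Rocky"]))  # Rocky
```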

You need a big list of common pet names to do this.  That's exactly what I'm providing here for your download pleasure.  My wordlist currently has over 1,400 pet names.

  Click here to get the pet name wordlist

Enjoy!  Obviously my list can't cover every conceivable pet name, but please let me know if you think I'm missing a common one.


Thursday, October 30, 2014

Disabling SSLv3 in Firefox

With the recent discovery of the POODLE vulnerability in the SSLv3 protocol, I wanted to change my Firefox configuration to disallow SSLv3.  Mozilla released an extension for this called SSL Version Control, but I decided not to install it given its somewhat sketchy reviews.

No problem, I thought.  Time to open the advanced configuration in Firefox by entering "about:config" in the address bar and make the change there.  Searching for "security" will show many configuration settings that start with "security.ssl3".  Some of them will be set to true and some to false.  You would think setting all the values to "false" here would be the solution.  Nope!  Don't do it.  Although the settings have "ssl3" in their name, they actually apply to both SSLv3 and all three TLS versions (1.0, 1.1, and 1.2).  If you change them all to false, both SSLv3 and TLS will be disabled and your browser will be incapable of communicating securely at all.

The correct solution, as described here, is easier.  Just set "security.tls.version.min" to 1, which means that TLS v1.0 is the minimum allowed version.  When set to 0, it means that SSLv3 is allowed.  I hope that helps.
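If you'd rather make the setting persistent in a file than click through about:config, the same preference can go in a user.js file in your Firefox profile directory:

```js
// user.js -- applied each time Firefox starts.
// 1 = TLS 1.0 is the minimum allowed protocol version (SSLv3 disallowed);
// the default of 0 would still allow SSLv3.
user_pref("security.tls.version.min", 1);
```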

This is a temporary work-around anyway as Mozilla says that SSLv3 will be disabled by default starting with Firefox 34.


Thursday, July 31, 2014

Building Secure Applications: How Mature Are You?

I was invited again to contribute to the blog of application security company Checkmarx.  My second post was published a couple of days ago and covers software security and the Building Security In Maturity Model (BSIMM).


Friday, April 25, 2014

Implementing Forgot Password with Email

It turns out that application developers sometimes need to implement a forgot password feature but don't have much identity data about the users in the system.  Nor can they always require users to establish personal security questions.  These things are a key part of my forgot password security recommendations.  But the reality is that sometimes you don't have any information about a user except their username and email address.  Heck, sometimes the email address IS the username.

In this type of situation, implementing a secure forgot password feature is challenging.  Sending a password reset link via email is probably the best option (barring a non-automated solution where users call customer support).  So here I will offer up some specific ideas on how to secure the process when using email.

  1. When a user invokes the forgot password process, the application shouldn't reveal whether the username entered was recognized.  It should simply display a generic message such as: "Thank you. If the username you provided is valid, we will send you an email with instructions on how to reset your password".
  2. Along with the above, don't show the email address where the email was sent.  It might give legitimate users a warm, fuzzy feeling but it definitely helps attackers in a number of scenarios.
  3. The password reset link in the email message should incorporate a GUID or similar high-entropy token. The token could be a parameter in the query string or part of the URL path itself.  It doesn't really matter.
  4. Allow only one valid token per user at any given time.
  5. Make sure the email message does not include the username.
  6. Make sure the link can be used only once.  In other words, invalidate the token immediately when an HTTP request containing that token is received.
  7. The link should expire.  Depending on your situation, implement logic to invalidate the token 10, 20, or 30 minutes after the email is sent out.  Make it a configurable value so it can be adjusted if needed without a code change.
  8. The password reset page (the one that appears after clicking the link) should force the user to re-enter his username.
  9. If the username entered is incorrect 3 times in a row, lock the account.  Remember, your application knows which username is associated with the token.  The person attempting to reset the password should know it as well.
  10. After a successful password reset, send a confirmation email to the user to notify them it happened.  This can alert users to fraud if they didn't initiate it.
  11. Throughout each step of the process, make sure the application is logging everything that occurs so there's a solid audit trail in case something goes haywire.
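To make items 3 through 8 concrete, here's a minimal Python sketch.  It keeps tokens in a dictionary purely for illustration (a real application would use a database table), and it leaves out the lockout, logging, and email-sending pieces:

```python
import secrets
import time

TOKEN_TTL_SECONDS = 20 * 60  # configurable expiry window (item 7)

# token -> (username, issued_at); stand-in for a database table.
_tokens = {}

def issue_reset_token(username):
    """Items 3 and 4: one high-entropy token per user at a time."""
    # Drop any earlier token issued to this user (item 4).
    for tok, (user, _) in list(_tokens.items()):
        if user == username:
            del _tokens[tok]
    token = secrets.token_urlsafe(32)  # ~256 bits of entropy (item 3)
    _tokens[token] = (username, time.time())
    return token  # embed in the emailed link, e.g. /reset?t=<token>

def redeem_token(token, claimed_username):
    """Items 6, 7, and 8: single use, expiring, and the user must
    re-enter the username the token was issued for."""
    entry = _tokens.pop(token, None)  # pop = invalidated on first use
    if entry is None:
        return False
    username, issued_at = entry
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return False  # link expired (item 7)
    return username == claimed_username
```

The endpoint URL and function names here are made up for the sketch; the point is the token lifecycle, not the framework plumbing.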
So those are the mitigating controls I came up with.  Feel free to let me know in the comments if you have any other ideas!

(updated on May 5, 2014 based on some feedback I received)


Wednesday, April 16, 2014

Autocomplete="off" Now in Disfavor

In case you missed it, both IE 11 and Chrome recently made a change: they now ignore autocomplete="off" on password input fields within HTML pages.  This attribute is something I've always recommended for input fields that contain sensitive data so that browsers won't store the data locally where it could be compromised.  Apparently the changes were made solely because lots of people are using password managers.  Here's a snippet from a messy MSDN blog post that tries to explain the reasoning for changing IE:

Password Managers improve real-world security, and the IE team felt it was important to put users in control. Users rely on their password manager to permit them to comfortably use strong passwords. Password managers encourage strong, unique password creation per site, but unique, strong passwords are often difficult to remember and type on touch devices. If the browser doesn't offer to autocomplete a password, the user assumes that the browser is broken. The user will then either use another browser that ignores the attribute, or install a password manager plugin that ignores it.

I'm not sure I agree.  Moving to another browser would not have worked, since they all honored the attribute until recently.  And the post itself plainly states that users could install a password manager plugin to overcome the restriction anyway.

And here's a snippet from a message posted by the Chrome team with their reasoning:

We believe that the current respect for autocomplete='off' for passwords is, in fact, harming the security of users by making browser password managers significantly less useful than they should be, thus discouraging their adoption, making it difficult for users to generate, store, and use more complex or (preferably) random passwords.

Maybe I don't understand the decisions because I don't use a password manager.  Either way, it is good that all browsers continue to honor autocomplete="off" for non-password inputs (type="text") so that sensitive data such as credit card numbers can be protected.


Sunday, March 9, 2014

A Basic Application Security Quiz

Do you know web application security?  Here is a little 10-question quiz to find out.  I've interviewed quite a few people for AppSec jobs in the past and asked these types of questions.  I thought it would be fun to share.  Answers are at the bottom along with your ninja score.  Don't cheat by googling for answers!

1. As a web application user, what puts you at most risk to fall victim to a cross-site request forgery (CSRF) attack?
a) Using an old browser
b) Using a web app that is not fully protected by SSL/TLS
c) Using the "keep me logged in" option offered by web apps
d) Using weak passwords
2. TRUE or FALSE? All web applications are vulnerable to CSRF attacks unless there's a specific protection mechanism in place.
3. TRUE or FALSE? An attacker could use a cross-site scripting (XSS) flaw on a banking site to steal login credentials while the victim appears to remain on the legitimate banking site.
4. If you want your web application to defend itself against cross-site scripting attacks that steal session IDs, which cookie attribute is best able to help you?
a) Secure
b) Path
c) Expires
d) HttpOnly
5. TRUE or FALSE? The best way to eliminate SQL injection vulnerabilities in code is to validate input data.
6. TRUE or FALSE? Using POST requests with hidden form fields provides a significant level of protection against attackers who want to tamper with requests.
7. What is one way developers can defend against forced browsing attacks?
a) Incorporate GUIDs into file names
b) Log all user activity
c) Validate input data
d) Use a sensible directory naming scheme
8. A race condition in a web application can lead to a security hole.  Which software analysis technique is best suited to identify the existence of a race condition?
a) A manual penetration test
b) A dynamic (blackbox) automated scan
c) A static (whitebox) scan
d) Functional tests by QA team
9. Your web application allows users to download their account statements in PDF format. What is the most secure way to implement this functionality?
a) Store all PDFs in an obscure directory on the web server and provide a link to the correct PDF depending on the user.
b) Generate the PDF on the fly, write it to a temporary directory on the server, and redirect the browser to that location (via 302 response).
c) Generate the PDF on the fly, store it in memory on the server, and send the bytes of the PDF to the browser directly (via 200 response).
d) Store the PDFs in a database and retrieve the correct PDF by looking at the identifier/primary key provided in the HTTP request.
10. TRUE or FALSE? Most web applications provide only one method of authentication, namely username + password. 


1. Answer: c
With the "keep me logged in" option, a persistent cookie is set causing you to be in a permanently-authenticated state. A key factor in a successful CSRF attack is that the victim is authenticated to the target site.

2. Answer: FALSE
Read-only web apps (no actions can be taken by a user) are not subject to CSRF attacks.

3. Answer: TRUE
With XSS, JavaScript injected into the legitimate site could create a login form whose action attribute points to the attacker's site.

4. Answer: d
The HttpOnly attribute of a cookie instructs web browsers that JavaScript is not allowed to access the cookie.  This means that malicious JavaScript injected in an XSS attack can't access the cookie.  (HttpOnly is widely supported by web browsers)
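As an illustration, Python's standard http.cookies module can show what the flag looks like in a Set-Cookie header (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"          # illustrative session ID
cookie["session"]["httponly"] = True  # deny access from JavaScript
cookie["session"]["secure"] = True    # send only over HTTPS

# The attribute string a server would emit in its Set-Cookie header:
header = cookie["session"].OutputString()
print(header)  # contains "HttpOnly" and "Secure"
```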

5. Answer: FALSE
Using parameterized queries with data binding is the best way.  That said, input data validation should always be done.
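A quick sketch in Python with sqlite3 shows the difference (the table and data are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"  # classic injection attempt

# Vulnerable: string concatenation lets the input rewrite the query.
# query = "SELECT * FROM users WHERE name = '" + attacker_input + "'"

# Safe: the ? placeholder binds the input strictly as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)  # [] -- the injection string matched no real name
```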

6. Answer: FALSE
Many free tools are available that make it easy for anyone to edit HTTP requests prior to being sent to the server.

7. Answer: a
Using GUIDs (globally unique identifiers) makes it nearly impossible for a user to guess valid file names.  A problem I've seen frequently when doing pen tests is that the application names static files such as PDF or Excel documents in a logical, consistent manner.  For example, a file name might include the user's name or account number.  This could make it easy for one user to guess the names of other files and access information intended for other users.
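Here's a sketch of the idea in Python; the random name makes the file unguessable, though it should supplement, not replace, a proper authorization check on the download (see question 9):

```python
import uuid

def statement_filename(extension="pdf"):
    """Name a generated document with a random UUID instead of a
    guessable pattern like smith_acct1234.pdf."""
    return f"{uuid.uuid4()}.{extension}"  # uuid4 is 122 random bits

print(statement_filename())
# e.g. 3f9c2f6e-8d8a-4e2b-9f1d-0a6c1b2d3e4f.pdf
```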

8. Answer: c
Static analysis theoretically has full insight into the whole codebase and should be able to spot a situation where multiple threads compete for the same resource.  With dynamic/run-time testing, it can't be guaranteed the race condition will ever manifest itself.  If you've ever tried to reproduce a deadlock problem in a piece of software, you know how very difficult it can be.

9. Answer: c
Because the PDF is never written to disk in option c, there is no chance an attacker can forcefully browse to it.  Option d is not secure because a user could tamper with the identifier to access another user's document.

10. Answer: FALSE
Most web applications provide TWO methods of authentication.  One is username + password.  The other is some sort of Forgot Password mechanism, which is often created as an afterthought and less secure than it needs to be.

Answers Correct    AppSec Ninja Level*
0-1-2              Academy student

* Based on Naruto Rank


Wednesday, February 19, 2014

Software Bugs That Kill

You might remember the rash of unintended acceleration incidents that occurred in Toyota vehicles a few years ago.  Perhaps the worst incident happened near me in Southlake, Texas, where four people were killed.  I remember thinking at the time that these incidents had all the indicators of a software problem.  Well, it turns out that is most likely the case.  Research from an embedded software expert as part of an Oklahoma trial indicates that a stack overflow may be responsible.

The Toyota issue reminded me of the story of the Therac-25.  Every computer science student should be required to read it.  The Therac-25 was a medical linear accelerator that used electrons to create high-energy beams to destroy tumors in cancer patients.  Eleven of these devices were built and used in the 1980s. Software bugs in the Therac-25 caused massive overdoses of radiation that killed patients.

Here are some quotes from the story.  It reads like a novel.

she felt a "tremendous force of heat . . . this red-hot sensation." When the technician came in, the patient said, "You burned me." The technician replied that that was not possible.

She completely lost the use of her shoulder and her arm, and was in constant pain. She had suffered a serious radiation burn, but the manufacturer and operators of the machine refused to believe that it could have been caused by the Therac-25.

the patient said that he felt like he had received an electric shock or that someone had poured hot coffee on his back: He felt a thump and heat and heard a buzzing sound from the equipment. Since this was his ninth treatment, he knew that this was not normal. He began to get up from the treatment table to go for help. It was at this moment that the operator hit the "P" key to proceed with the treatment. The patient said that he felt like his arm was being shocked by electricity and that his hand was leaving his body.

Software quality is really important.  The reality is that some bugs can lie hidden for a very long time because they surface only under a very rare set of circumstances.  A race condition (multiple threads competing for the same resource) is a good example of this.  Another example is the security flaw in MySQL that gave *any* password a 1 in 256 chance of working.

Fortunately, most developers don't write code that can cause direct bodily harm, but I think it's good to be familiar with these types of cases and hopefully avoid repeating history.


Tuesday, February 18, 2014

Where To Practice Your Web Hacking Skills

I was invited to contribute to the blog of application security company Checkmarx.  Last week my first post was published and covers some ways you can safely practice your web hacking skills.


Tuesday, January 7, 2014

Alternatives to the Boring XSS Alert Box

Demonstrating that a web application is vulnerable to reflected cross-site scripting (XSS) is not very exciting.  It's always kind of like, "oh hey, look here, an alert box popped up when you clicked on that link".  Scary.  Dramatic. Not!  I was looking for more interesting ways to show how XSS could be used.  I figure the code is more likely to get fixed if you can make a memorable impression.  I came up with a few options. 

I'll present these techniques using three Internet-facing websites that are purposefully built to be susceptible to reflected XSS.

  1. (operated by IBM)
  2. (NT Objectives)
  3. (Acunetix)
All of the URLs here were tested successfully with Firefox 26, IE 11 with the XSS Filter disabled, and Chrome 31 with the "--disable-xss-auditor" command line option.  Note - if you have the NoScript extension, you'll have to either disable it temporarily -or- allow scripts on the above domains plus and disable the XSS protection.

First, there is the boring alert box that I'm trying to get away from:
Alternative #1 is to fill the victim's screen with unicorns and rainbows.
Alternative #2 is to Rickroll the victim (i.e., redirect to Rick Astley's famous 80's music video).
Alternative #3 is to display some HTML... a funny news story in this case. (in Firefox w/NoScript you may have to click refresh for this to work)
Feel free to use these or create your own.  I think you'll agree these are definitely better than popping an alert box.

Lastly, I have a hilarious, but mildly racy (NSFW?) alternative. (in Firefox w/NoScript you may have to click refresh for this to work)

