I recently found a security bug in Cisco's WebEx Productivity Tools. The bug caused your audio conferencing credentials to be sent out in meeting invitations. It was limited in scope to InterCall customers who integrate with WebEx.
InterCall is an audio conferencing solution that can be used as an alternative to WebEx's built-in audio. My company is starting to roll out WebEx this way. Each InterCall user has a dedicated conference code and a leader PIN, which together are the account credentials. The conference code is meant to be public, but the leader PIN is like a password and should be kept confidential. Productivity Tools (PT) is an add-on product for WebEx customers. One of its key features is an integration with Outlook that allows you to create WebEx meetings and send out the invitations from within Outlook.
First I set up WebEx to use my InterCall account for audio and then downloaded and installed WebEx PT. Next I created a test WebEx meeting from within Outlook and invited one person. Upon clicking "Send", PT securely communicated with the WebEx server to auto-populate the conferencing information in the meeting invite. When the information appeared, I saw my InterCall leader PIN just for a moment before the email was sent. At first I thought it was a mistake, but inspection of my Sent Items folder showed that my PIN was indeed sent. The person who received the invite confirmed he got my PIN as well. Wow! How could no one at Cisco or my company notice this? I was unable to find a work-around except for avoiding PT altogether by logging into the WebEx site and creating a meeting from there.
I reported this security threat to Cisco (and InterCall) on April 28th. After pestering them for updates, Cisco Engineering finally confirmed to me on May 14th that it was a defect and that they were working on a fix. I can now confirm that the bug has been fixed in WebEx Productivity Tools Version 2.36.13013.10003, which was released on May 19, 2015. I would like to thank both InterCall support and Cisco PSIRT for their attention to this matter. For reasons that are unclear, Cisco hasn't released a security advisory or security alert about this issue. This blog post will have to suffice.
I'd like to be able to say that technical acumen and advanced hacking skills were needed to find this vulnerability. Alas, that was not the case. I was just curious about my new WebEx toy, wanted to understand how it worked, and stumbled upon it. Being curious and questioning things... it's what people in information security tend to do.
Friday, May 22, 2015
Sunday, December 7, 2014
Recently I received an American Express card from Citibank to replace my expiring one. Naturally, I cut the old card in half. My customary procedure is then to discard one of the pieces in a trash receptacle at my house and the other piece in a different trash receptacle. I figure this will keep me pretty safe from dumpster-diving fraudsters because the trash receptacles are typically emptied at different times and go into different plastic trash bags.
This time I decided to examine the pieces of my card. What I found was that having only the right-side piece would allow someone to reconstitute the full 15-digit account number! Given where I cut the card, which was pretty much right down the middle, the front showed the last 7 digits and the back showed the first 8 digits. See the photos below (numbers masked for the protection of me).
What's more, the 4-digit security code appeared on the front side and my full signature was visible on the back (also masked for my protection). At least the security code was different on my new card, so once activated, using the old code should cause a payment authorization failure. Still, many e-commerce sites do not require the security code when making a purchase.
So with just half of my old card, the expiration date is the only unknown to a dumpster diver. That is not a big obstacle to overcome at all. Logically, if someone throws away a payment card, the probable reason is that he/she received a new card to replace the expiring one. What would be the expiration date of the new card? It's very likely to be either two years or four years from now - either the current month or the subsequent month.
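Under those assumptions the search space is tiny. A quick sketch (Python; the 2- and 4-year validity periods and the current-or-next-month window are my assumptions about typical issuer behavior):

```python
from datetime import date

def likely_expirations(today=None):
    """Enumerate probable expiration dates (MM/YY) for a replacement card.

    Assumes the issuer reissues cards with a 2- or 4-year validity period
    starting in the current or subsequent month, as reasoned above.
    """
    today = today or date.today()
    candidates = []
    for years in (2, 4):
        for month_offset in (0, 1):
            month = today.month + month_offset
            year = today.year + years + (1 if month > 12 else 0)
            month = month if month <= 12 else 1
            candidates.append(f"{month:02d}/{year % 100:02d}")
    return candidates
```

A dumpster diver needs at most four guesses.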
This reminds me of the story of the torn-up credit card application that I read about a few years ago. A man named Rob Cockerham taped the pieces back together, filled out the application, and sent it in. Amazingly, a shiny new card arrived in the mail for him a few weeks later.
The bottom line is: Be aware that if you cut up your debit cards or credit cards and throw the pieces away in different receptacles like me, you're not necessarily safe from dumpster diving.
I'm asking for a shredder for Christmas.
Thursday, November 20, 2014
If you are testing web applications for security, be sure to examine the Forgot Password functionality and attempt to subvert it. It's another way that users can authenticate to the app and is often less secure than the primary method. First you'll need to enumerate usernames (try the username wordlists I made available a while ago). Once you have some valid usernames, the Forgot Password functionality will often present you with a challenge to answer one of the user's personal security questions.
One of the most common security questions you see is "What was the name of your first pet?". If the application doesn't limit the number of attempts, you have a very good chance at answering this question by iterating through different names with a tool like Burp Intruder. The last time I did this successfully, "Rocky" was the name of the user's pet.
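The iteration itself is trivial; a tool like Burp Intruder just automates it over HTTP. Here is the shape of it in Python, with the actual form submission stubbed out (any real attempt would of course need the target's real endpoint and parameter names):

```python
def guess_security_answer(submit_answer, wordlist):
    """Try each candidate; submit_answer reports whether the app accepted it."""
    for candidate in wordlist:
        if submit_answer(candidate):
            return candidate
    return None

# Stand-in for the HTTP request a real tool would send for each guess.
pet_names = ["Bella", "Max", "Charlie", "Rocky", "Daisy"]
found = guess_security_answer(lambda name: name == "Rocky", pet_names)
```

An application that limits attempts (or locks the account) defeats this loop; one that doesn't will eventually give up the answer.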
You need a big list of common pet names to do this. That's exactly what I'm providing here for your download pleasure. My wordlist currently has over 1,400 pet names.
Click here to get the pet name wordlist
Enjoy! Obviously my list can't cover every conceivable pet name, but please let me know if you think I'm missing a common one.
Thursday, October 30, 2014
With the recent discovery of the POODLE vulnerability in the SSLv3 protocol, I wanted to change my Firefox configuration to disallow SSLv3. Mozilla released an extension for this called SSL Version Control, but I decided not to install it given its somewhat sketchy reviews.
No problem, I thought. Time to open the advanced configuration in Firefox by entering "about:config" in the address bar and make the change there. Searching for "security" will show many configuration settings that start with "security.ssl3". Some of them will be set to true and some to false. You would think setting all the values to "false" here would be the solution. Nope! Don't do it. Although the settings have "ssl3" in their name, they actually apply to both SSLv3 and all three TLS versions (1.0, 1.1, and 1.2). If you change them all to false, both SSLv3 and TLS will be disabled and your browser will be incapable of communicating securely at all.
The correct solution, as described here, is easier. Just set "security.tls.version.min" to 1, which means that TLS v1.0 is the minimum allowed version. When set to 0, it means that SSLv3 is allowed. I hope that helps.
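Firefox's "minimum version" knob mirrors how TLS libraries generally express this policy. As a loose analogy (this is not Firefox code), the same idea in Python's ssl module looks like this:

```python
import ssl

# Set a floor on the protocol version instead of toggling versions off
# one by one; anything below the floor (SSLv3 included) is refused.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1
```
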
This is a temporary work-around anyway as Mozilla says that SSLv3 will be disabled by default starting with Firefox 34.
Thursday, July 31, 2014
I was invited again to contribute to the blog of application security company Checkmarx. My second post was published a couple of days ago and covers software security and the Building Security In Maturity Model (BSIMM).
Friday, April 25, 2014
It turns out that application developers sometimes need to implement a forgot password feature but don't have much identity data about the users in the system. Neither can they always be so flexible as to require users to establish personal security questions. These things are a key part of my forgot password security recommendations.

But the reality is that sometimes you don't have any information about a user except their username and email address. Heck, sometimes the email address IS the username.
In this type of situation, implementing a secure forgot password feature is challenging. Sending a password reset link via email is probably the best option (barring a non-automated solution where users call customer support). So here I will offer up some specific ideas on how to secure the process when using email.
- When a user invokes the forgot password process, don't reveal whether the username entered was recognized. Simply display a generic message such as: "Thank you. If the username you provided is valid, we will send you an email with instructions on how to reset your password".
- Along with the above, don't show the email address where the email was sent. It might give legitimate users a warm, fuzzy feeling but it definitely helps attackers in a number of scenarios.
- The password reset link in the email message should incorporate a GUID or similar high-entropy token. The token could be a parameter in the query string or part of the URL path itself. It doesn't really matter.
- Allow only one valid token per user at any given time.
- Make sure the email message does not include the username.
- Make sure the link can be used only once. In other words, invalidate the token immediately when an HTTP request containing that token is received.
- The link should expire. Depending on your situation, implement logic to invalidate the token 10, 20, or 30 minutes after the email is sent out. Make it a configurable value so it can be adjusted if needed without a code change.
- The password reset page (the one that appears after clicking the link) should force the user to re-enter his username.
- If the username entered is incorrect 3 times in a row, lock the account. Remember, your application knows which username is associated with the token. The person attempting to reset the password should know it as well.
- After a successful password reset, send a confirmation email to the user to notify them it happened. This can alert users to fraud if they didn't initiate it.
- Throughout each step of the process, make sure the application is logging everything that occurs so there's a solid audit trail in case something goes haywire.
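Several of the recommendations above (high-entropy token, one valid token per user, single use, expiry) can be sketched in a few lines. This is illustrative only; the names are mine and the in-memory store is a stand-in for the database table a real implementation would use:

```python
import secrets
import time

RESET_TOKEN_TTL = 15 * 60  # expiry in seconds; make this configurable

# In-memory store for illustration only; a real app would persist this.
_tokens = {}  # token -> (username, issued_at)

def issue_reset_token(username):
    """Create a high-entropy reset token, superseding any outstanding one."""
    for tok, (user, _) in list(_tokens.items()):
        if user == username:  # only one valid token per user at a time
            del _tokens[tok]
    token = secrets.token_urlsafe(32)  # ~256 bits of entropy
    _tokens[token] = (username, time.time())
    return token

def redeem_reset_token(token, username):
    """Validate and immediately invalidate a token; True only on success."""
    entry = _tokens.pop(token, None)  # single use: gone after first attempt
    if entry is None:
        return False
    user, issued_at = entry
    if time.time() - issued_at > RESET_TOKEN_TTL:
        return False  # expired
    return secrets.compare_digest(user, username)  # user must re-enter it
```

The lockout-after-3-bad-usernames rule and the audit logging are left out here, but they would hang off the same redeem step.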
(updated on May 5, 2014 based on some feedback I received)
Wednesday, April 16, 2014
In case you missed it, both IE 11 and Chrome recently made a change and they now ignore autocomplete="off" on password input fields within HTML pages. This attribute is something I've always recommended for input fields that contain sensitive data so that browsers won't store the data locally where it could be compromised. Apparently the changes were made solely because lots of people are using password managers. Here's a snippet from a messy MSDN blog post that tries to explain the reason for changing IE:
Password Managers improve real-world security, and the IE team felt it was important to put users in control. Users rely on their password manager to permit them to comfortably use strong passwords. Password managers encourage strong, unique password creation per site, but unique, strong passwords are often difficult to remember and type on touch devices. If the browser doesn't offer to autocomplete a password, the user assumes that the browser is broken. The user will then either use another browser that ignores the attribute, or install a password manager plugin that ignores it.

I'm not sure I agree. Moving to another browser would not have worked since they all honored the attribute until recently. It is also stated plainly that users could use a password manager plugin to overcome the restriction.
And here's a snippet from a message posted by the Chrome team with their reasoning:
We believe that the current respect for autocomplete='off' for passwords is, in fact, harming the security of users by making browser password managers significantly less useful than they should be, thus discouraging their adoption, making it difficult for users to generate, store, and use more complex or (preferably) random passwords.

Maybe I don't understand the decisions because I don't use a password manager. Either way, it is good that all browsers continue to honor autocomplete="off" for non-password inputs (type="text") so that sensitive data such as credit card numbers can be protected.
Sunday, March 9, 2014
Do you know web application security? Here is a little 10-question quiz to find out. I've interviewed quite a few people for AppSec jobs in the past and asked these types of questions. I thought it would be fun to share. Answers are at the bottom along with your ninja score. Don't cheat by googling for answers!
1. As a web application user, what puts you at most risk to fall victim to a cross-site request forgery (CSRF) attack?
a) Using an old browser
b) Using a web app that is not fully protected by SSL/TLS
c) Using the "keep me logged in" option offered by web apps
d) Using weak passwords
2. TRUE or FALSE? All web applications are vulnerable to CSRF attacks unless there's a specific protection mechanism in place.
3. TRUE or FALSE? An attacker could use a cross-site scripting (XSS) flaw on a banking site to steal login credentials while the victim appears to remain on the legitimate banking site.
4. If you want your web application to defend itself against cross-site scripting attacks that steal session IDs, which cookie attribute is best able to help you?
5. TRUE or FALSE? The best way to eliminate SQL injection vulnerabilities in code is to validate input data.
6. TRUE or FALSE? Using POST requests with hidden form fields provides a significant level of protection against attackers who want to tamper with requests.
7. What is one way developers can defend against forced browsing attacks?
a) Incorporate GUIDs into file names
b) Log all user activity
c) Validate input data
d) Use a sensible directory naming scheme
8. A race condition in a web application can lead to a security hole. Which software analysis technique is best suited to identify the existence of a race condition?
a) A manual penetration test
b) A dynamic (blackbox) automated scan
c) A static (whitebox) scan
d) Functional tests by QA team
9. Your web application allows users to download their account statements in PDF format. What is the most secure way to implement this functionality?
a) Store all PDFs in an obscure directory on the web server and provide a link to the correct PDF depending on the user.
b) Generate the PDF on the fly, write it to a temporary directory on the server, and redirect the browser to that location (via 302 response).
c) Generate the PDF on the fly, store it in memory on the server, and send the bytes of the PDF to the browser directly (via 200 response).
d) Store the PDFs in a database and retrieve the correct PDF by looking at the identifier/primary key provided in the HTTP request.
10. TRUE or FALSE? Most web applications provide only one method of authentication, namely username + password.
1. Answer: c
With the "keep me logged in" option, a persistent cookie is set causing you to be in a permanently-authenticated state. A key factor in a successful CSRF attack is that the victim is authenticated to the target site.
2. Answer: FALSE
Read-only web apps (no actions can be taken by a user) are not subject to CSRF attacks.
3. Answer: TRUE
4. Answer: d
5. Answer: FALSE
Using parameterized queries with data binding is the best way. That said, input data validation should always be done.
6. Answer: FALSE
Many free tools are available that make it easy for anyone to edit HTTP requests prior to being sent to the server.
7. Answer: a
Using GUIDs (globally unique identifiers) makes it near impossible for a user to guess valid file names. A problem I've seen frequently when doing pen tests is that the application names static files such as PDF or Excel documents in a logical, consistent manner. For example, a file name might include the user's name or account number. This could make it easy for one user to guess the name of other files and access information intended for other users.
8. Answer: c
Static analysis theoretically has full insight into the whole codebase and should be able to spot a situation where multiple threads compete for the same resource. With dynamic/run-time testing, it can't be guaranteed the race condition will ever manifest itself. If you've ever tried to reproduce a deadlock problem in a piece of software, you know how very difficult it can be.
9. Answer: c
Because the PDF is never written to disk in option c, there is no chance an attacker can forcefully browse to it. Option d is not secure because a user could tamper with the identifier to access another user's document.
10. Answer: FALSE
Most web applications provide TWO methods of authentication. One is username + password. The other is some sort of Forgot Password mechanism, which is often created as an afterthought and less secure than it needs to be.
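On answer 5, parameterized queries are easy to show concretely. Using Python's sqlite3 module as a stand-in for whatever data access layer you use, the placeholder binds input strictly as data, so it can never rewrite the SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Unsafe (don't do this): concatenation lets input alter the query itself.
#   "SELECT role FROM users WHERE name = '" + name + "'"

# Safe: the ? placeholder binds the value as data, never as SQL.
name = "alice' OR '1'='1"  # classic injection string
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)
).fetchall()
# The injection string matches no user, so no rows come back.
```
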
Answers Correct / AppSec Ninja Level*
- 9-10: Kage
- 7-8: Jounin
- 5-6: Chuunin
- 3-4: Genin
- 0-2: Academy student

* Based on Naruto Rank
Wednesday, February 19, 2014
You might remember the rash of unintended acceleration incidents that occurred in Toyota vehicles a few years ago. Perhaps the worst incident happened near me in Southlake, Texas, where four people were killed. I remember thinking at the time that these incidents had all the indicators of a software problem. Well, it turns out that is most likely the case. Research from an embedded software expert, presented as part of an Oklahoma trial, indicates that a stack overflow may be responsible.
The Toyota issue reminded me of the story of the Therac-25. Every computer science student should be required to read it. The Therac-25 was a medical linear accelerator that used electrons to create high-energy beams to destroy tumors in cancer patients. Eleven of these devices were built and used in the 1980s. Software bugs in the Therac-25 caused massive overdoses of radiation that killed patients.
Here are some quotes from the story. It reads like a novel.
she felt a "tremendous force of heat . . . this red-hot sensation." When the technician came in, the patient said, "You burned me." The technician replied that that was not possible.
She completely lost the use of her shoulder and her arm, and was in constant pain. She had suffered a serious radiation burn, but the manufacturer and operators of the machine refused to believe that it could have been caused by the Therac-25.
the patient said that he felt like he had received an electric shock or that someone had poured hot coffee on his back: He felt a thump and heat and heard a buzzing sound from the equipment. Since this was his ninth treatment, he knew that this was not normal. He began to get up from the treatment table to go for help. It was at this moment that the operator hit the "P" key to proceed with the treatment. The patient said that he felt like his arm was being shocked by electricity and that his hand was leaving his body.

Software quality is really important. The reality is that some bugs can lie hidden for a very long time because they surface only under a very rare set of circumstances. A race condition (multiple threads competing for the same resource) is a good example of this. Another example is the security flaw in MySQL that allowed a 1 in 256 chance of *any* password to work.
Fortunately, most developers don't write code that can cause direct bodily harm, but I think it's good to be familiar with these types of cases and hopefully avoid repeating history.
Tuesday, February 18, 2014
I was invited to contribute to the blog of application security company Checkmarx. Last week my first post was published and covers some ways you can safely practice your web hacking skills.
Tuesday, January 7, 2014
Demonstrating that a web application is vulnerable to reflected cross-site scripting (XSS) is not very exciting. It's always kind of like, "oh hey, look here, an alert box popped up when you clicked on that link". Scary. Dramatic. Not! I was looking for more interesting ways to show how XSS could be used. I figure the code is more likely to get fixed if you can make a memorable impression. I came up with a few options.
I'll present these techniques using 3 websites that are Internet facing and purposefully built to be susceptible to reflected XSS.
- demo.testfire.net (operated by IBM)
- www.webscantest.com (NT Objectives)
- testasp.vulnweb.com (Acunetix)
First, there is the boring alert box that I'm trying to get away from:
Lastly, I have a hilarious, but mildly racy (NSFW?) alternative. (in Firefox w/NoScript you may have to click refresh for this to work)
Sunday, December 22, 2013
We all know that you shouldn't re-use the same password on different websites, but this is extremely difficult in practice considering the number of sites people use today. Password managers were developed to help solve the problem of remembering passwords. Some examples are KeePass, Password Safe, and LastPass. They work fine for many people. However, I personally don't like the idea of depending on a password manager. I want the ability to pull the correct password out of my brain in case I'm ever in a situation where I don't have access to the password manager. There's also a risk that your passwords could be compromised (this is true about any data that is stored, encrypted or not).
I have over 100 different passwords, but I don't have any problem remembering them. I don't write them down or use any sort of password manager. I came up with a system that enables me to remember my passwords. It works for me, so I'm sharing the technique in case anyone else thinks it might be helpful.
With my system, you only have to remember two things.
- Your "core" password.
- Your scheme.
First, pick a strong core password that you can memorize; in these examples it's kM92ax43. Second, pick a scheme based on the website's domain name. The scheme will be used to supplement your core password. As a simple example, you could look at the last 3 characters of the site's domain, add one letter to each (a Caesar-style shift sometimes called "ROT1"), and append the result to your core password. So, for the site "www.verizonwireless.com", we see the last 3 characters of the domain are "ess". Therefore the 3 additional characters would be "ftt" and your final password becomes kM92ax43ftt.
For sprint.com, your final password is kM92ax43jou.
For att.com, your final password is kM92ax43buu.
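The simple scheme is mechanical enough to write down in code. A sketch (my own; it assumes an all-lowercase hostname of the form name.tld, with or without a www prefix):

```python
def rot1(s):
    """Shift each lowercase letter forward by one, wrapping z back to a."""
    return "".join(chr((ord(c) - ord("a") + 1) % 26 + ord("a")) for c in s)

def site_password(core, hostname):
    """Append ROT1 of the last 3 characters of the site's name to the core."""
    name = hostname.split(".")[-2]  # e.g. "verizonwireless" from the example
    return core + rot1(name[-3:])
```

Of course, the whole point of the system is that you do this in your head, not with a script.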
Tweak your scheme however you want before finalizing it. Some possibilities:
- Prepend the first character to your core password/append the last two
- Capitalize one or two of the letters
- Subtract two letters ("ROT24" encryption) instead of adding one
- Look at the first two chars + last char of the domain, instead of the last three
P.S. My system isn't perfect. It doesn't work on sites that have a short maximum password length (like 10) or have onerous password requirements (like requiring a special character). It also doesn't work for my Windows domain account or my home router where I'm not actually logging into a website. I treat these as exceptions and remember them separately. I do keep notes about exceptions as well, but I rarely need to refer to them.