Wednesday, November 28, 2012

The Audacity of Your Flashlight App

I've been taking a closer look at my mobile apps lately, specifically the permissions they request when you download and install them.  It has been quite an eye opener.  It turns out that mobile apps are invading our privacy.  It's as simple as this: any app that can read your contacts and access the Internet can slurp your data and send it off to some random server to be stored and/or used in a nefarious way.

The finding that surprised me the most was the audacity of my little old flashlight app.  I was using "Tiny Flashlight + LED", which is allowed to read your phone identity and has full Internet access.  A flashlight app that needs Internet access is nonsensical to me.  I switched to OI Flashlight, which requires only two permissions: controlling the camera and preventing the device from sleeping.  I discovered during my research that most flashlight apps want Internet access.  The top 4 flashlight apps that appear when searching for "flashlight" on Google Play are:
  1. Tiny Flashlight + LED
  2. Brightest Flashlight Free
  3. Flashlight
  4. Color Flashlight
All four require Internet connectivity!  However, the winner of the most inappropriate and egregious permissions contest is "Brightest Flashlight Free" by Goldenshores Technologies, LLC.  This popular app (over 10 million downloads) requires the following permissions:
  • full Internet access
  • your location (both coarse and fine)
  • modify your SD card contents
  • read your phone identity
Can you think of a reason a flashlight app needs to know your current location or modify the data on your SD card?  I can't either.

Tuesday, November 20, 2012

Risky to Report Website Vulns

The main reason I stopped reporting vulnerabilities to website owners is the risk of being prosecuted.  The Internet is more dangerous when well-meaning security researchers are treated this way.  I was new to Application Security in 2006, so I didn't realize that I was actually taking a pretty big risk when I told Netflix about their CSRF vulnerabilities.  In my mind I was doing them a favor.  They got a free mini pen test.  In fact, as a Netflix subscriber, I was giving them money!  It turns out they were nice and simply said "thank you", then went about fixing the issue.
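For context, CSRF (cross-site request forgery) tricks a logged-in user's browser into submitting a state-changing request the user never intended.  The standard defense is a per-session secret token embedded in each form, which an attacking site cannot read.  Here is a minimal sketch of that check in Python; the function names and session shape are illustrative, not Netflix's actual code:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Store a random, unguessable token in the server-side session
    # and embed the same value in each form the server renders.
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def is_valid_csrf(session: dict, submitted: str) -> bool:
    # A forged cross-site request can't read the session's token,
    # so it can't submit a matching value.  Compare in constant time.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted or "")
```

A site is vulnerable when it accepts state-changing requests without any such token check.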

Today I ran across Patrick Webster's story from Australia, and he wasn't so lucky.  He noticed that his bank's web application allowed any customer to view another customer's account information, including very sensitive data that could enable identity theft.  This type of insecure direct object reference vulnerability is very simple to exploit.  Mr. Webster just changed a numerical parameter in the URL to discover the problem.  He reported it to his bank, who decided to report him to the police.  It's not like this guy was a determined attacker with premeditation who spent weeks doing reconnaissance on the site.  That said, he clearly went too far by running a script that "cycled through each ID number and pulled down the relevant report to his computer".  That wasn't necessary to report the vulnerability.
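The flaw pattern is easy to show.  Here is a hypothetical sketch, assuming a toy account store; the data and function names are mine, not the bank's:

```python
# Illustrative in-memory "database" of customer accounts.
ACCOUNTS = {
    101: {"owner": "alice", "balance": 500},
    102: {"owner": "bob", "balance": 750},
}

def get_account_insecure(account_id: int):
    # Vulnerable: blindly trusts the ID taken from the URL, so any
    # customer can fetch any record just by changing the number.
    return ACCOUNTS.get(account_id)

def get_account_secure(account_id: int, logged_in_user: str):
    # Fixed: verify the requester actually owns the record before
    # returning it; otherwise behave as if it doesn't exist.
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != logged_in_user:
        return None  # in a real app, respond with 403/404
    return account
```

The fix is always the same: every object lookup must be paired with an authorization check against the current user.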

Another example is Andrew Auernheimer, who is potentially facing 5 years in prison due to his AT&T "account slurper" script.  Again, he went too far with the script, but he might well have been prosecuted anyway.  One of the comments on this story was humorous:
"You seem to be implying that every exploit can be anticipated. The article points out that AT&T changed their code after discovery of the hack. There is no indication that they knew it was a problem beforehand."
Web app vulns can and should be anticipated.

Wednesday, October 10, 2012

"Sorry, that password is already in use"

Here is a good example of the kind of insecure practices some application developers follow out there.

http://lists.webappsec.org/pipermail/websecurity_lists.webappsec.org/2012-October/008535.html

Thank you Jim Burton for being concerned and speaking out.  I have to admit laughing out loud when first reading this.  That may have been wrong of me.

Wednesday, June 6, 2012

Time to Update Your LinkedIn Password

Change your LinkedIn password!  Also change it on any site where you use that same password.  In case you missed it, about 6.5 million LinkedIn passwords were leaked today.  The passwords were in the form of unsalted SHA-1 hashes.  This suggests that LinkedIn was not following security best practices for storing user passwords.  A blog post from LinkedIn indicates that hashing and salting of user passwords was "recently put in place".  I wonder how recently?  Probably today.
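To see why salting matters, here is a minimal sketch of salted password storage using Python's standard library (PBKDF2; the salt size and iteration count are illustrative choices, not LinkedIn's):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    # A random per-user salt means two users with the same password
    # get different stored hashes, defeating precomputed rainbow tables.
    salt = os.urandom(16)
    # Many PBKDF2 iterations slow down offline brute-forcing; a bare
    # unsalted SHA-1, by contrast, can be cracked almost instantly.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

With unsalted hashes, one cracked hash cracks every account that shares the password; with per-user salts, each hash has to be attacked individually.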

If you are curious about whether your password was compromised, head over to LeakedIn.org, a site just launched by PHP security guru Chris Shiflett. (Side note: Chris authors a very informative blog and I've learned a lot about AppSec from his posts over the years.)  Once there, enter your LinkedIn password.  Client-side JavaScript code will produce the corresponding SHA-1 hash, then send the hash value to the server.  You will soon find out if your password was part of the 6.5 million that were leaked and whether or not the hash was cracked.  If you don't feel comfortable entering your password, just run HashCalc locally to calculate the SHA-1 hash of your password and enter the hash value instead.  I did this check today and my password was indeed among those that were leaked, but it hadn't been cracked yet.
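If you'd rather not install HashCalc, any scripting language can compute the same unsalted SHA-1 locally.  A quick Python one-liner wrapped in a function:

```python
import hashlib

def sha1_hex(password: str) -> str:
    # Unsalted SHA-1 of the password, as a lowercase hex string --
    # the same format the leaked LinkedIn file used.
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# For example, the (terrible) password "password" hashes to
# 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8
```

Paste the resulting hex string into the site's hash field instead of your actual password.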

Needless to say, I've changed my LinkedIn password.  It's a perfect example of why you should be using a different password for every site.

Tuesday, May 22, 2012

AppSec Shouldn't Be Something Special You Do

To really improve the security posture of applications, development shops must get to a place where security is simply part of their normal development process.  In other words, designing secure software and writing hack-proof code shouldn't be a special side project or considered for the first time during the testing phase.

Making application security an inherent part of your SDLC can be done gradually.  You could start with instructor-led training to teach developers appsec concepts.  I did this type of training for several years.  A more scalable way to teach developers is computer-based training (CBT), also known as eLearning.  This may be your only option with hundreds or thousands of developers on staff and a limited budget.

Another key piece of building security into your SDLC is regular static and dynamic testing of applications.  The goal is to find vulnerabilities and fix them before going live.  Static analysis looks at an application from the inside out.  You may also hear this referred to as white-box testing or static application security testing (SAST).  Static analysis can be done for any type of application (web, thick client, mobile, glue code, etc.).  Dynamic testing looks at a web application from the outside in and is also known as black-box testing or dynamic application security testing (DAST).  Both static and dynamic testing should be done to have the best chance of finding all the vulnerabilities.  They are complementary approaches, although there is some overlap in the issues they can find.
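As a toy illustration of the static side, a scanner pattern-matches source code without ever running it.  This sketch is deliberately simplistic; real SAST tools build parse trees and data-flow graphs rather than regexes, and the "dangerous" patterns below are just examples:

```python
import re

# Illustrative patterns only -- a real tool has far richer rules.
DANGEROUS_PATTERNS = [r"\beval\(", r"\bos\.system\(", r"\bpickle\.loads\("]

def toy_static_scan(source: str):
    # Return (line_number, line_text) for every line that matches
    # a known-dangerous call pattern, without executing the code.
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if any(re.search(p, line) for p in DANGEROUS_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings
```

A dynamic test, by contrast, would run the deployed application and probe it over HTTP, which is why the two approaches surface different (and partly overlapping) sets of issues.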