Monday, March 31, 2014

Y2K and Mayan apocalypse had a mutant baby: the end of XP support

I ran across this post earlier today, and by the end of the first paragraph, was convinced it was an
early April Fool's post. Come on - Y2K references and "The End is Coming"? Could we be more alarmist here? Here are the myths I see being perpetuated in this article:
  • "All hell will break loose"
    • Probably not - bad guys don't like to rock the boat - if they alert you to their presence, they increase the chance that they'll get removed. Remember Conficker? It infected millions of systems and waited silently. Most didn't know they were infected. On a side note, if you take the 'e' out of Conficker, it sounds like a Hadoop startup. Of course, it depends on the malware's purpose - if ransomware gets installed on 100 million systems in a week's timeframe, then yes - all hell can be considered broken and loose.
    • Did I miss the 'hell that broke loose' when Win2000 hit end of support?
  • The whole tone of the article treats XP as if it has some sort of hermetic, unbroken seal on it with phrases like, "will probably be hacked within a short timeframe" and "hackers are counting down the days".
    • XP has been through a lot. It has been hacked and infected non-stop over the past 13 years. I still sit down at XP systems belonging to friends, family and businesses with "65 updates ready to install". If someone smart and skillful decides to take out most of the remaining XP machines with a worm and some dir /s /b | del /f, it could probably be done, but I think if someone was going to do that... why wait? You could do plenty of damage now, or 2 years ago. Or 4 years ago.
  • We're assuming the bad guys are really excited about compromising 12 year-old Dell Dimensions.
    • If you were putting together a bitcoin mining botnet, would you target octo-core gaming systems with dual-nVidia cards running Windows 7, or single-core Dell Optiplexes with 20GB hard drives, ATI Rage 128 onboard and an 8-year old install of WinXP that's constantly running out of RAM and swapping hard? Sure, you'll get 8 WinXP systems for every Win7 system, but one good tire rolls a lot better than 8 flats.

Don't get me wrong - the fact that support for XP is going away is a big deal, and people need to get off the operating system. However, I disagree that it is going to be an earth-shattering end-of-times affair for several reasons. The article cites numbers claiming that 29% of the 'systems in the world' still run XP, but that is a gross estimate from a small sample that ignores the fact that a huge number of systems in the world don't browse the Internet at all, and aren't considered by that sample. I've seen numbers as low as 10%, but even those fancy zmap scans of the Internet can't give us an accurate number because most XP machines will be NATed behind firewalls and routers. Microsoft might have the most accurate numbers, from Windows Update data and other 'phone home' functionality in Windows. If they have those numbers, I haven't found them or they're not sharing.

Different sources cite that anywhere from a third to half of XP systems are already compromised. I suppose it is possible that someone might compromise a compromised XP and take away their compromise in a compromise battle... We've actually seen malware that removes its competitors before, especially in crimeware turf battles. It is also possible that half of the numbers reported by AV vendors are OpenCandy (if that counts as malware, a pint of Jeni's counts as medication). I found another report that said infection rates could jump 66% after support ends. That means we could see rates of infection as high as 116%. I'd cite my sources, but they vary so wildly, I see no point. Each source of statistics is a tiny window into one vendor or website's log counters. Combine that knowledge with the fact that 60% of statistics are too conservative in their estimates by 30%, and infection rate could soar as high as 151%.

You get the idea.

So, in summary:

  • There are kajillions of XP machines still out there (ShodanHQ regularly shows over 4 million that are Internet accessible)
  • Tons of them are already pwned
  • We might see the emergence of a large botnet based on XP systems, but I'd say it is more likely we'll just see a modest bump up from the norm.
  • Permanent zero-days will impact XP, but not all at once in a 2012-style cataclysm.
  • The article that kicked this rant off is right: there are things you can do to protect XP and extend its life, but it would probably make more sense (and be cheaper) to replace it.

Saturday, June 15, 2013

Welcome to the Club: Advice to First-Time Pentesters

This is the first post in a series offering advice to InfoSec newcomers. Not being the most colorful crayon in the box, I'll just call it "Welcome to the Club", and will tag all posts in the series accordingly.

Giant pit mine in the Siberian tundra
Hop in. I'll be right behind you.
Occasionally, folks starting out in InfoSec will ask me for advice. I try to give it without sending them screaming toward a different, less-punishing career, like working in a Siberian diamond mine.

An acquaintance recently contacted me via LinkedIn to ask for advice on his first paid pentest gig, and this is what I told him.

As you progress from pentest to pentest, your skill and ability to find flaws, use tools, etc. will increase, so I'm not going to give you any technical advice at this point. On the first gig, it is more important to ensure there will be a second gig than to try to cover every technical avenue possible. It would also be ideal for your first gig to be the client's first pentest - then, as your skills increase, their security posture (and, in theory, their ability to implement your findings) should improve as well.

The best way to have a good first pentest is to focus on good communication with the client. This skill is important for consultants of any kind, but even more so when the job they're paying you to do has the potential to cause harm. Relationship building is also important. Don't think of any gig as just one job; think of it as the start of a relationship where you can establish yourself as their go-to for any security work.

Burning Building
Yeah, could you stop scanning? It isn't going well for us.
Come up with a good plan, share it with the customer, and stick to it. If something changes, e.g. you find issues going deeper than you expected and you need to change the plan, notify them before going down any "rabbit holes". Make them aware that pentesting - even just scanning - is a potentially disruptive activity, but that you'll do your best to minimize the risks to their network. Make sure they know how to contact you, and that you can stop scanning/pentest activities relatively quickly if there are any issues.

Manage the client's expectations well, and they should be happy. Happy clients spread your services via word-of-mouth and rehire you. Positive word-of-mouth and recurring gigs build a solid business. Never stop learning and trying new things on pentests, and the technical side will improve as you gather experience.

There is also a ton of advice posted by the "Pentest Lessons" Twitter account.

Tuesday, May 7, 2013

OpUSA and HTP5: Winners and Losers

We were warned.

MAN were we warned.

On May 7th, some serious shit was going down.

*** Part1: The Losers ***
The warnings started going out weeks ago. Banks and other financial institutions on the Anonymous "hit list" were warned ahead of time. Some took services offline as a preventative measure. As it turns out, these self-inflicted "lock downs" appeared to be the only damage done. I've seen no reports of any of the organizations on the target list being affected by #OpUSA.

It started out with some dire warnings, a hit list and a lot of talk. Tweets like this were a common sight:

#OpUSA Hackers plan "Day to Remember" with May 7 attacks on banks, government agencies

Thousands of sites hacked, defaced and down during #OpUSA. Here's an update list.

The only thing likely to be remembered about this day though, is how the boasts were quickly overshadowed by sarcasm and jeers:

At this point I doubt #OpUSA could shut down their own computers. Using the power button.

#OpUSA hits an online bakery, but banks and the FBI are safe

Their own attempts to brag were more entertaining than some of the jokes going around. They hacked an unused Kansas pawn shop website. Someone spitting in the local Taco Bell's sour cream would be more newsworthy.

Patriot Pawn & Gun of USA Fucked by AnonGhost for #OpUSA


The hackers also have a site set up to act as a running tally of their accomplishments. By midday, the
list looked pretty impressive. That is, until you started digging into the details.
  • For an attack on the US, they reported hitting quite a few non-US websites, and much of the breached data was international.
  • The 100k breached accounts appeared to be from a 2009 breach
  • The ~12k breached accounts appeared to be from a 2005 breach
  • All breached credit cards were long expired
  • Many websites were misrepresented. One that appeared to be a Dallas criminal attorney's office was actually an abandoned WordPress blog with a few criminal law-related posts.
  • Another, completeharleydavidson.us, didn't even pass as believable, and I couldn't find any evidence it existed before a few days ago. I suspect they might even be registering domains and setting up sites just to make it look like they were hacked.
  • Few, if any, notable websites appear to have been affected
  • An XSS vuln was found in the "Municipal Chambers of Brasil" website. As part of #OpUSA? Really?
Why so lame? I see three possibilities: that these "hackers" really are that incapable, that their activities were only meant to cause fear and an overreaction (which worked, to a small extent), or that this whole thing was an intentional diversion from something more devious going on. I doubt the last one, but I'm no threat intel expert. I just know what I've seen.

Jaeson Schultz of Cisco touches on another possibility: that #OpUSA is a sting of sorts, set up to help law enforcement catch members of Anonymous. It would be interesting to test the claim that the tools linked for this operation are backdoored.




Thursday, November 1, 2012

My favorite Windows tools and utilities

Some of these sites have been around for over a decade, and I've been using some of the tools hosted there for nearly that long, if not longer. I've used these tools standalone, but many of them have also been indispensable in scripts I've written over the years.

You might notice that all of these tool collections are made for Windows. Unix and Linux have been the hacker operating systems of choice from the beginning, so handy tools for scripting and troubleshooting were never hard to find there. However, the more commercial and business-oriented Windows was severely handicapped in this regard, and didn't officially get a proper shell environment (out of the box) until 2006. This gap resulted in developers, like the ones I've listed below, creating some amazing, useful and largely free tools, beginning in the mid-to-late 90's.

Yes, there are many powerful scripting alternatives readily available for Windows nowadays, like Ruby, Python and PowerShell. I cut my teeth on Windows shell (command) scripting though, and when I need a quick-and-dirty script to automate something, it ends up being either a bash script on my Mac, or a Windows shell script. Both work in their native environments without any additional downloads, installs or even changes to paths or environment variables.
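
Here's a minimal sketch of the kind of quick-and-dirty Windows shell script I mean. The hosts.txt input file and the log file names are made up for the example; it just pings a list of hosts and records which ones answer:

    @echo off
    rem Quick-and-dirty sketch: ping every host in hosts.txt (a made-up input
    rem file, one hostname or IP per line) and log which ones answer.
    for /f "usebackq delims=" %%H in ("hosts.txt") do (
        ping -n 1 -w 1000 %%H >nul 2>&1
        if not errorlevel 1 (echo %%H is up>>alive.txt) else (echo %%H is down>>dead.txt)
    )

No installs, no dependencies - it runs on any Windows box as-is, which is exactly why I keep reaching for it.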

The Tools

Sysinternals

The first, and most impressive, collection of tools is that of Sysinternals. Tools like psexec, TCPView, Process Explorer and Process Monitor are so good that they should be part of Windows. That made it no surprise when Microsoft bought Sysinternals (actually, Winternals Software) and brought the brilliant Mark Russinovich and Bryce Cogswell on board. Mark is widely recognized as a tech rockstar these days, and is now a successful fiction author with two novels available!
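
To show why these end up in so many scripts, here's a hedged sketch of psexec in action; the host name, domain account and hosts.txt file are all hypothetical:

    rem Run one command on one remote host (psexec prompts for the password):
    psexec \\FILESERVER01 -u CORP\admin ipconfig /all

    rem Sweep every machine listed in hosts.txt (another made-up file):
    for /f "delims=" %%H in (hosts.txt) do psexec \\%%H cmd /c ver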

Nirsoft

Nirsoft, like Sysinternals, seemingly has a tool for everything. In fact, one of the available tools, nircmd, seems to do nearly everything you could imagine needing from a desktop automation standpoint. The latest Nirsoft tool I've been making use of is SiteShoter - a tool that allows you to take screenshots of a website from the command line, using the native Internet Explorer API. Again, like Sysinternals, the vast number of tools Nir Sofer (author of everything found on Nirsoft.net) has written is staggering. For anyone writing scripts to automate tasks, both sites are a godsend.
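
A couple of nircmd one-liners give a feel for it; the output path is made up, and nircmd.exe is assumed to be somewhere on the PATH:

    rem Grab a screenshot of the current desktop (output path is hypothetical):
    nircmd.exe savescreenshot "C:\temp\desktop.png"

    rem Mute the system volume (use 0 to unmute):
    nircmd.exe mutesysvolume 1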

JoeWare

This site is the same type as Nirsoft and Sysinternals - a huge collection of Windows tools that make power users' and administrators' jobs easier. The emphasis is heavily on automating tasks related to Microsoft's popular enterprise products, like Active Directory and Exchange.

AnalogX

Part developer, part musician and part philosopher, AnalogX is a bit different from the previous three sites. This is a home for all of this individual's creative ventures, whatever they might be. I've been using some of his tools for 12 years now, and am grateful his site is still around and available. Also, like me, he never throws anything away. One of my old favorites is TextScan, which has often helped me out when I've had a need to do some quick and dirty binary analysis (as long as what I'm looking for is in ASCII!).

Standalone Mentions

Blat

Even though the latest versions of Windows Task Scheduler include an integrated email/smtp utility, Blat is still the best tool out there for using the internal open relay to impersonate your coworkers. Not that I'd ever do that...

If you need your Windows script to email you, look no further.
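
A minimal sketch of what that looks like - the addresses, SMTP server and results file are all hypothetical, and blat.exe is assumed to be on the PATH:

    rem Mail the contents of results.txt as the message body (all names made up):
    blat results.txt -to me@example.com -f scripts@example.com -server mail.example.com -subject "Nightly scan results"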

Ploticus

There are probably prettier visualization utilities out there now, but I've yet to find anything as easy to learn and use as Ploticus. It will parse out any file with structured data, and can output the results in a large variety of graph formats.
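
For example, one of its prefabs can turn a two-column CSV into a PNG chart in a single line; the file name and column choices here are assumptions for the sake of the example:

    rem Plot column 2 against column 1 from a two-column CSV (made-up file name).
    rem Depending on the build, the binary is named ploticus or just pl.
    ploticus -prefab lines data=results.csv delim=comma x=1 y=2 -png -o chart.png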

Cygwin

More than just a group of handy tools, Cygwin is an entire posix-friendly environment you can install on Windows. Setup could have been a nightmare, but instead, it is full of streamlined awesomeness.

Unix2Dos

Similar to Cygwin, but different. There is no environment or emulation layer here. These are unix utilities ported as native Win32 binaries. No additional requirements.
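
As a quick illustration - assuming the ported grep.exe and wc.exe are on the PATH, and with a made-up log file - you get pipelines like this straight from cmd.exe:

    rem Count the 404s in a web server log without leaving the Windows shell:
    type access.log | grep " 404 " | wc -l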

If you see anything I've missed that belongs on this list, let me know in the comments!

Thursday, May 17, 2012

PCI and Mobile Payment Application Security

So far, the world of mobile payments has been the "Wild West" before the sheriff comes to town. The vendors have been making their own rules, though at least a few have been smart and have prepared for what they guessed would happen. The solution can be expressed in one word.

Encryption. As early in the payment process as possible, all the way to the bank (acquirer).

The PCI Council has issued a press release on mobile payment security, along with an "At a Glance" publication. These usually precede the release of new standards/best practices documents by a few months as fair warning. This post is my attempt to analyze where the Council sits on the matter, and a bit of reading between the lines to try to predict what's coming.

End-to-end encryption, or point-to-point encryption (P2PE) as the PCI Council calls it, is easily the best solution for securing the explosion of mobile payment applications now on the market. It is ideal because, in most cases, when implemented, it is invisible to the user, the merchant and the application. Apps don't have to be rewritten, the user experience doesn't suffer, and the merchant keeps the same level of convenience. Most importantly, when done correctly, it is easily the most secure approach available.

There is a price though, and it is on the merchant. All solutions I've seen offered raise the transaction rate. Such is the price for the convenience of mobile payment acceptance in this case.

Blah blah encryption blah P2PE, what are we really talking about here, Adrian? 


We're talking about encrypting the cardholder data in the same hardware that reads your card. The Android/iOS/Psion/QNX/Whatever mobile operating system never handles unencrypted payment data. Furthermore, in a P2PE environment, the key to decrypt this data should not be present. In most cases, this encrypted data will be sent directly to a payment gateway, and will not be stored. At this point, risk and attack vectors are minimized, and you've added little to no disruption in the sales process.

It takes a lot of work and expense to switch POS solutions, however. For environments already planning to switch, or entering mobile payments for the first time, it makes sense to get it right the first time, and the Council will soon be publishing P2PE-certified POS solutions, making it easier to choose a secure, vetted product. Currently, a lot of vendors are offering half-baked solutions that only reduce some of the risk, and it is difficult to separate the pretenders from the real deal. Beware.

If this is such a perfect solution, why isn't everyone already doing it?

  1. Vendor lock-in. Many merchants' POS solution and processing come from the same vendor, and that vendor may not have a P2PE or tokenization solution ready yet.
  2. Cost of new hardware/POS solution.
  3. Increased per-transaction cost - You pay more for using payment gateways, and you'll pay more for a P2PE solution where the processor decrypts your transactions. How much more? Some level 1/level 2 merchants could potentially be going from paying $0.01 to $0.36 or more per transaction! Those kinds of increases really add up for merchants processing 1 million+ transactions annually.
  4. Too early. Most vendors are at 1st Gen or earlier with P2PE products. We're just getting started here, and most established POS vendors don't operate at startup speeds. This is a market to watch, however, because there are some very interesting startups popping up in this space!

I think you've been hitting the Council Kool-Aid pretty hard.


A valid perspective, but this isn't just idle speculation from the stands. I've had an opportunity to assess a startup employing a P2PE approach firsthand. I got down into the weeds with them, dug into their solution, and issued their ROC. I've used all my security, hacking and pentesting experience to consider all the attack angles. Could I have missed something? Absolutely, and there is always room for improvement.

Throw your concerns, questions and doubts my way, and I'll be happy to address them all. Challenge me, and I'll meet it. We're still in the early stages here, remember. Our money will be going through these solutions, and they need to be challenged (read: hacked) to ensure they are as strong as they should be.


Friday, April 20, 2012

Defining Trust

The other day I joined a Twitter discussion between Rafal Los, Wim Remes and several others over "trust". It struck us that we needed a clear definition of Trust, and that it would take more than 140 characters.

Rafal quickly put together a post, Trust - Making an intelligent, defensible trust valuation, and the debate continued. As I felt Rafal and I were on the same page, and that some of the commenters weren't quite getting it, I was inspired to contribute a post of my own. I'm a believer in gaining understanding through examples, so I've put together a few scenarios in this post to try to drive the point home. I'd love to hear what you think. Comment here, on Rafal's post, or hit us up on Twitter.

The Question

Is trust binary? Is it a yes/no decision? All or nothing? Are there levels of trust? Go get a bourbon, beer or chamomile, and we'll explore this question a bit. I'd urge you to think about this before I muddy the waters. We're not just talking about Trust as it relates to users, information security or IT vendors. There is no reason the answer to this question can't apply to social relationships and other situations.

Trust Fall, by SkinnyAndy

How do we define Trust?

There is an opportunity for trust to come into play any time we lack control over a product, a person's actions, an environment, or situation. I believe trust to be heuristic, requiring many rules that result in various levels. We see evidence of these levels in the simplest of examples: you may trust code you wrote more than that of your vendor's software; you probably trust your own network more than a partner's. I think some good examples and/or scenarios are necessary to effectively define what it means to have different levels of trust.

What should these "trust levels" be? I believe they can be formal or informal, but ultimately, they are the result of rules you use to determine "how much" you choose to trust someone or something. The ones I've come up with are completely arbitrary, and off the top of my head. One could define only two levels, or go up to ten or more. I think four is sufficient for the scenarios I present here. Yes, I realize there are actually five levels listed in the scale below. Note the zero level is not a level of trust, but the absence of it.

Sawaba's Amazing Non-Binary Trust Scale
4 - Full Trust
3 - High Trust
2 - Moderate Trust
1 - Low (initial trust; trust out of necessity or desperation)
0 - Distrust, i.e. no trust

We also need to understand how levels of trust are affected. This list is not all-inclusive, and is geared toward measuring IT products and services, to support the scenarios and examples I'll use later.

Enhancers
  • Meets promises and expectations
  • Time without incident or detractors
  • Consistency
  • Stability
  • Quick to address issues
  • Ability to test and/or validate product
  • Transparency

Detractors
  • Caught lying
  • Missed deadlines or promises
  • Mishandled or ignored vulnerabilities
  • Slow response to addressing issues
  • Inaccurate quotes
  • Breaches or other security incidents
  • Surprise costs

Scenario 1

Purchasing a software product from a vendor. Let us assume this is a licensed, closed source software product that will install and run on servers/workstations on the local network. Though the customer in this example does not have access to the source code, they can test behavior, performance, capture network traffic, examine logs/output, etcetera.

Trust Level 0 - Haven't dealt with vendor yet. Unaware of reputation.
Trust Level 1 - Initial conversations and demo went well. "Gut check" says things are good so far.
Trust Level 2 - Checked vendor's reputation and tested product. Due diligence processes/procedures have been carried out and yielded positive results. Most people/companies are ready to do business at this "moderate" level of trust, though they may refrain from initially signing long-term contracts. Many consider this a "trial period".
Trust Level 3 - After a year or more, the vendor has "earned" a higher level of trust by consistently meeting expectations over a significant period of time. Most vendor/product relationships need not go past this level, at least by my arbitrary scale. I prefer to reserve the highest level of trust for more extreme situations where human safety and life and death are concerns. Recall, in this scenario, we don't have full control. We can't see source code, so there is always a chance a disgruntled programmer could insert a back door, for example. Perhaps over a very long period of time (10 years or more?) the level of trust could rise even higher.

Scenario 2

Using a piece of open source software.
With the services of an experienced, knowledgeable programmer trained to spot serious security vulnerabilities, stability issues, and performance concerns, a high level of trust can easily be achieved. Spend enough time reviewing and testing (especially when patches, or upgrades are released!), and it is reasonable to consider that full trust in the product could be attained.

I believe you can make the argument that, with 100% control and ability to verify/validate, we have zero need for trust in this case.

Scenario 3

A cloud service - say, a SaaS sales product.

You can build trust based on
  • interactions with the company
  • reputation
  • a limited ability to test
  • time without incident
However, in this scenario, it is reasonable to believe that the level of trust may not pass the moderate level, due to the lack of transparency and control inherent in the model. Consider:
  • We can't see or review the source code
  • We can't see or review most of the operating environment
  • We may not know if incidents occur
  • We don't know for sure who has access to our data
  • They may say they encrypt our data, but we have no way of validating whether they do it correctly
  • Even if they are audited, and compliant with regulations designed to give assurance, we cannot put full trust in the auditors, especially with a history of varying quality and efficacy in audit practices and the auditors themselves
  • We have to take the vendor's word on the majority of items that present a risk to our data 
As a result, we might take measures to compensate for the lack of trust. To use an example: if we decide to use Dropbox, perhaps we independently encrypt all files before allowing Dropbox to sync them. This is a real-world example that came about after reports surfaced that many Dropbox employees had access to customer files. This was not previously made clear to customers, and resulted in a drop in the level of trust. These reports became a detractor.
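
As a minimal sketch of that compensating control - the folder paths are hypothetical, and it assumes GnuPG's gpg.exe is installed and on the PATH - you could encrypt everything in a staging folder before it ever lands in the synced Dropbox folder:

    @echo off
    rem Made-up paths: encrypt files from a staging folder into the synced folder,
    rem so only ciphertext ever leaves the machine.
    set SRC=C:\Staging
    set DST=C:\Users\me\Dropbox\Encrypted
    rem gpg -c is symmetric encryption; it prompts for (and the agent caches) a passphrase.
    for %%F in (%SRC%\*) do gpg -c -o "%DST%\%%~nxF.gpg" "%%F"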

Conclusion

There is an opportunity to trust an individual, company or product when either party lacks control to some extent. When levels of control vary, so do levels of trust. It is, therefore, not an "all or nothing" model, though both extremes (0% control and 100% control) can be experienced, and can reasonably occur.

Monday, April 16, 2012

Uncrackable Quantum Encryption, Unicorns and Perpetual Motion

What do these three things have in common?

None of them exist.

Unicorn by James Bowe
I'm only going to address uncrackable quantum encryption though. I'm not touching unicorns or perpetual motion.

This article over at ZDNet was responsible for sending me down this rabbit hole, though I've been rolling my eyes at "Uncrackable Quantum Encryption" articles for at least a decade.

First off, most of the "uncrackable quantum encryption" claims refer to encrypting data for transmission across networks or between endpoints. The idea is that you can make a tamper-evident system due to the nature of quantum mechanics. If an attacker attempts to manipulate or observe data in a quantum system, the data will be altered. Once altered, we're aware of the attacker and can apply countermeasures.

It is more likely that companies and researchers trying to sell the idea of quantum encryption are depending on its sci-fi "WOW" factor to pitch it as the next big thing in cryptography. In reality, there are many issues with quantum cryptography.

1. It is new, and largely untested

When someone claims something is uncrackable, and there are very few people with the knowledge and skills to test that theory, beware. In the last decade, quantum cryptography has been touted as "uncrackable" many times, and has been cracked just as many times. Somewhat unfortunately, one of the researchers credited with cracking commercial quantum cryptography for the first time is now making this latest "uncrackable" claim!

2. We already have uncrackable encryption...

...Or near enough that the difference doesn't matter in the real world. AES has faithfully served us for over a decade now, and no practical method to crack AES-encrypted data at rest, much less in transit (when used in a stream cipher mode like CTR), has been presented. For any and all practical purposes, AES has fit the bill, so what do we need quantum encryption for?

3. The real problem in most encryption failures is poor implementation

Say someone does come up with a truly uncrackable quantum encryption. Historically, the human factor has been the problem far more often than the quality of the cryptography. Someone will set it up, configure it or code it incorrectly. Why go through the wall when you can go around it?

4. Aside from researchers, no one is attacking cryptography

Users are the weak point. The person behind the desk and their phone/laptop/desktop is the goal of most attackers, because it is the weakest link, and it works. Even at the server/enterprise level, the low-hanging fruit is code thrown together at the last minute by an overworked developer, not some $200k quantum cryptography endpoint.

Show me some uncrackable quantum encryption that keeps your data safe, and I'll show you the treadmill I use to power my house. He never gets tired.

UPDATE: I noticed the commenters on the ZDNet article that inspired this post state almost all of the same points I make here, which tells me two things: 1) you guys already know better and 2) nobody's buying into quantum BS.