Friday, April 20, 2012

Defining Trust

The other day I joined a Twitter discussion between Rafal Los, Wim Remes and several others over "trust". It struck us that we needed a clear definition of Trust, and that it would take more than 140 characters.

Rafal quickly put together a post, Trust - Making an intelligent, defensible trust valuation, and the debate continued. As I felt myself and Rafal were on the same page, and that some of the commenters weren't quite getting it, I was inspired to contribute a post of my own. I'm a believer in gaining understanding through examples, so I've put together a few scenarios in this post to try to drive the point home. I'd love to hear what you think. Comment here, on Rafal's post, or hit us up on Twitter.

The Question

Is trust binary? Is it a yes/no decision? All or nothing? Are there levels of trust? Go get a bourbon, a beer or a chamomile, and we'll explore this question a bit. I'd urge you to think about it before I muddy the waters. We're not just talking about trust as it relates to users, information security or IT vendors. There is no reason the answer to this question can't apply to social relationships and other situations.

Trust Fall, by SkinnyAndy

How do we define Trust?

There is an opportunity for trust to come into play any time we lack control over a product, a person's actions, an environment, or a situation. I believe trust to be heuristic, requiring many rules that result in various levels. We see evidence of these levels in the simplest of examples: you may trust code you wrote more than your vendor's software; you probably trust your own network more than a partner's. I think some good examples and/or scenarios are necessary to effectively define what it means to have different levels of trust.

What should these "trust levels" be? I believe they can be formal or informal, but ultimately, they are the result of rules you use to determine "how much" you choose to trust someone or something. The ones I've come up with are completely arbitrary, and off the top of my head. One could define only two levels, or go up to ten or more. I think four is sufficient for the scenarios I present here. Yes, I realize there are actually five levels listed in the scale below. Note the zero level is not a level of trust, but the absence of it.

Sawaba's Amazing Non-Binary Trust Scale
4 - Full Trust
3 - High Trust
2 - Moderate Trust
1 - Low (initial trust; trust out of necessity or desperation)
0 - Distrust, i.e. no trust

We also need to understand how levels of trust are affected. This list is not all-inclusive, and is geared toward measuring IT products and services, to support the scenarios and examples I'll use later.

Meets promises and expectations
Caught lying
Time without incident or detractors
Missed deadlines or promises
Mishandled or ignored vulnerabilities
Slow response to addressing issues
Quick to address issues
Inaccurate quotes
Ability to test and/or validate product
Breaches or other security incidents
Surprise costs

Scenario 1

Purchasing a software product from a vendor. Let us assume this is a licensed, closed-source software product that will install and run on servers/workstations on the local network. Though the customer in this example does not have access to the source code, they can test behavior and performance, capture network traffic, examine logs/output, etc.

Trust Level 0 - Haven't dealt with vendor yet. Unaware of reputation.
Trust Level 1 - Initial conversations and demo went well. "Gut check" says things are good so far.
Trust Level 2 - Checked vendor's reputation and tested product. Due diligence processes/procedures have been carried out and yielded positive results. Most people/companies are ready to do business at this "moderate" level of trust, though they may refrain from initially signing long-term contracts. Many consider this a "trial period".
Trust Level 3 - After a year or more, the vendor has "earned" a higher level of trust by consistently meeting expectations over a significant period of time. Most vendor/product relationships need not go past this level, at least by my arbitrary scale. I prefer to reserve the highest level of trust for more extreme situations where human safety and life and death are concerns. Recall, in this scenario, we don't have full control. We can't see source code, so there is always a chance a disgruntled programmer could insert a back door, for example. Perhaps over a very long period of time (10 years or more?) the level of trust could rise even higher.

Scenario 2

Using a piece of open source software.
With the services of an experienced, knowledgeable programmer trained to spot serious security vulnerabilities, stability issues, and performance concerns, a high level of trust can easily be achieved. Spend enough time reviewing and testing (especially when patches or upgrades are released!), and it is reasonable to consider that full trust in the product could be attained.

I believe you can make the argument that, with 100% control and the ability to verify/validate, we have zero need for trust in this case.
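On the verify/validate point: with open source, some of that verification can be mechanical. A minimal sketch of validating a downloaded release against a digest published by the project (the file path and digest here are placeholders, not a real release):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Hash a file in chunks so large releases need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_release(path, published_digest):
    """Compare against the digest the project publishes out of band."""
    return sha256_of(path) == published_digest
```

Signature verification (e.g. a GPG-signed release) goes a step further, tying the artifact to a key rather than to whatever web page hosted the digest.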

Scenario 3

A cloud service, such as a SaaS sales product.

You can build trust based on:
  • interactions with the company
  • reputation
  • a limited ability to test
  • time without incident
However, in this scenario, it is reasonable to believe that the level of trust may not pass the moderate level, due to the lack of transparency and control inherent in the model. Consider:
  • We can't see or review the source code
  • We can't see or review most of the operating environment
  • We may not know if incidents occur
  • We don't know for sure who has access to our data
  • They may say they encrypt our data, but we have no way of validating whether they do it correctly
  • Even if they are audited, and compliant with regulations designed to give assurance, we cannot put full trust in the auditors, especially with a history of varying quality and efficacy in audit practices and the auditors themselves
  • We have to take the vendor's word on the majority of items that present a risk to our data 
As a result, we might take measures to compensate for the lack of trust. For example, if we decide to use Dropbox, perhaps we independently encrypt all files before allowing Dropbox to sync them. This real-world example emerged after reports came out that many Dropbox employees had access to customer files. That access had not previously been clearly disclosed to customers, and the reports became a detractor, resulting in a drop in the level of trust.
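A minimal sketch of that compensating control, using the third-party Python `cryptography` package (the file contents are invented, and in practice you'd also need to manage the key somewhere the provider can't see):

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# The key stays local; the sync provider only ever sees ciphertext.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"contents of a sensitive file"
ciphertext = cipher.encrypt(plaintext)   # this is what gets synced

# Only someone holding the key can recover the file.
assert cipher.decrypt(ciphertext) == plaintext
```

The trust calculus changes completely: the provider can still lose, leak, or rummage through what we upload, but what they hold is now useless without a key they never had.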


There is an opportunity to trust an individual, company or product when either party lacks control to some extent. When levels of control vary, so do levels of trust. It is, therefore, not an "all or nothing" model, though both extremes (0% control and 100% control) can be experienced and do reasonably occur.

Monday, April 16, 2012

Uncrackable Quantum Encryption, Unicorns and Perpetual Motion

What do these three things have in common?

None of them exist.

Unicorn by James Bowe
I'm only going to address uncrackable quantum encryption though. I'm not touching unicorns or perpetual motion.

This article over at ZDNet was responsible for sending me down this rabbit hole, though I've been rolling my eyes at "Uncrackable Quantum Encryption" articles for at least a decade.

First off, most of the "uncrackable quantum encryption" claims refer to encrypting data for transmission across networks or between endpoints. The idea is that you can make a tamper-evident system due to the nature of quantum mechanics. If an attacker attempts to manipulate or observe data in a quantum system, the data will be altered. Once altered, we're aware of the attacker and can apply countermeasures.
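For comparison, classical cryptography already gives us a form of tamper evidence: message authentication. This stdlib sketch detects that data was altered in transit; the difference is that QKD claims to detect mere observation, while a MAC only detects modification:

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # shared secret between the two endpoints

def seal(message):
    """Attach an authentication tag so any alteration is detectable."""
    return message, hmac.new(key, message, hashlib.sha256).digest()

def verify(message, tag):
    """Recompute the tag; constant-time compare avoids timing leaks."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = seal(b"transfer $100 to Alice")
assert verify(msg, tag)
assert not verify(b"transfer $100 to Mallory", tag)  # tampering is evident
```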

It is more likely that companies and researchers trying to sell the idea of quantum encryption are depending on its Sci-Fi "WOW" factor to sell it as the next big thing in cryptography. In reality there are many issues with quantum cryptography.

1. It is new, and largely untested

When someone claims something is uncrackable, and there are very few people with the knowledge and skills to test that theory, beware. In the last decade, quantum cryptography has been touted as "uncrackable" many times, and has been cracked just as many times. Somewhat unfortunately, one of the researchers credited with cracking commercial quantum cryptography for the first time is now making this latest "uncrackable" claim!

2. We already have uncrackable encryption...

...Or near enough that the difference doesn't matter in the real world. AES has faithfully served us for over a decade now, and no practical method to crack AES-encrypted data at rest, much less in transit (when used in a stream-cipher mode), has been presented. For any and all practical purposes, AES fits the bill, so what do we need quantum encryption for?

3. The real problem in most encryption failures is poor implementation

Say someone does come up with truly uncrackable quantum encryption. Historically, the human factor has been the weak point more than the quality of the cryptography. Someone will set it up, configure it or code it incorrectly. Why go through the wall when you can go around it?
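A classic example of "going around the wall": even with perfect cryptographic primitives, comparing authentication tags with ordinary equality can leak timing information. A sketch of the mistake and the fix (the key and messages are invented):

```python
import hashlib
import hmac

key = b"server-side-secret"  # invented for illustration

def tag_for(message):
    return hmac.new(key, message, hashlib.sha256).digest()

# Broken in practice: '==' on bytes can return as soon as a byte differs,
# so response timing hints at how much of a forged tag was correct.
def verify_naive(message, tag):
    return tag_for(message) == tag

# Correct: comparison time doesn't depend on where the bytes differ.
def verify_safe(message, tag):
    return hmac.compare_digest(tag_for(message), tag)

msg = b"user=admin"
assert verify_safe(msg, tag_for(msg))
assert not verify_safe(msg, b"\x00" * 32)
```

Same primitive, same math; the only difference is one line of implementation, which is exactly where these systems tend to fail.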

4. Aside from researchers, no one is attacking cryptography

Users are the weak point. The person behind the desk and their phone/laptop/desktop is the goal of most attackers, because it is the weakest link, and it works. Even at the server/enterprise level, the low-hanging fruit is code thrown together at the last minute by an overworked developer, not some $200k quantum cryptography endpoint.

Show me some uncrackable quantum encryption that keeps your data safe, and I'll show you the treadmill I use to power my house. He never gets tired.

UPDATE: I noticed the commenters on the ZDNet article that inspired this post state almost all of the same points I make here, which tells me two things: 1) you guys already know better and 2) nobody's buying into quantum BS.

Thursday, April 5, 2012

MintChip: Canada Test Drives a New Payment System

A few years ago, at the DefCon 18 PCI panel, I chuckled as James Arlen sardonically explained to the crowd that the only worthwhile solution to the current credit card security issue was to scrap the current system and start fresh. It wasn't that I didn't agree with James, I think most in information security can agree that the current system is flawed enough to warrant such an extreme approach. I simply thought that there was such a slim chance of the payment brands ever considering such an approach that it was pointless to discuss. Perhaps I was wrong.

The modern payment system, born in the '50s and '60s, predated e-commerce by decades. It wasn't until the advent of high-speed Internet access that breaches became commonplace. In the early 2000s, it became obvious that this system was quite vulnerable.

Today, I stumbled upon the Royal Canadian Mint's new MintChip system. In Canada, where debit cards are already free of the five big payment brands' logos, something like MintChip has a chance. From what little information is available, it seems there are hardware and software components to this solution. In fact, it seems the only information available is in the open because the Mint is having a contest to spur MintChip application development.

Indulge me while I fantasize a bit.

If MintChip is successful, there is a chance it could replace credit cards as a dominant form of payment. It has every chance of success, too: it takes advantage of the latest technology, it seems well designed and thought out, and it has government backing. This is no startup. This is a revolution against an insecure payment system that costs Canadian citizens time and money with every breach. What about visitors and tourists? In addition to changing out your currency for Canadian dollars, you could potentially purchase pre-filled MintChips, like buying a pre-paid phone or gift card. Just look at the slick website and the convincing video; they even have rainbows and unicorns on the 404 page.

Whew, I had to get that out.

It's all very pretty and hopeful, but in reality, there are a few issues here. First, I'm not Canadian, and realistically, I can only get so excited about a new payment system that has very little chance of popping up in the states any time in my pre-geriatric lifetime. Second, though they've made resources available for developers to come up with apps, it is clear from reading over the site and through the forum posts that there is precious little detail about how this system works. Without some transparency on how this system works from end-to-end, we really won't know if it is better than the credit card payment system in place today.

If you have any other information or opinions on MintChip, I'd be interested to hear about it.

Wednesday, April 4, 2012

Over half a million Macs infected?

Update 5: The Legacy

I wasn't expecting to update this post again, but this Mac botnet is not going away, suggesting that click-happy Mac users that get infected with trojans are less click-happy when it comes to installing Apple's updates.

As of two days ago, the Flashback botnet is just as large as when I first posted this story on April 4th! I suspect there will be a "learning phase" as Mac users get used to having to patch and remove malware. Part of the problem is likely that users don't realize they are infected. I'm not sure Apple's current approach is going to cut it in the long run. Personally, I think Apple should round up the brain trust like Microsoft did in the early 2000s, and come up with a sustainable solution. A future where most Mac users feel like they need to run antivirus would be sad.

Update 4: The Aftermath

  • We received independent confirmation of the numbers reported by Dr.Web.
  • The numbers I've heard report 2-3% of all Macs are infected, or were infected at the peak.
  • Dr.Web has a tool you can use to see if you are infected. Though it is using HTTP, I'm fairly sure the hardware UUID of your Mac isn't intended to be kept secret.
  • A downloadable App is also available to check for infection.
  • An apparent issue with the original java patch for Lion resulted in a second patch being released by Apple three days after the first.
  • Sites everywhere are reporting (some almost celebrating) that Apple's reputation as malware-resistant is dead.
  • Common suggestions to ditch Java are unhelpful and unlikely for the average user. It is far too ubiquitous.
  • Unless there is a huge resurgence in infections caused by a variant of Flashback that uses a new vuln/exploit/vector, this will be my last update to this article.

Update 3

Where are we now?
  • Dr.Web claims the number of infected Macs has risen to 600,000, and that a significant number of them (273!) are reporting in from Cupertino.
  • F-Secure has posted instructions for manual removal of the trojan. If you've never done it, manually removing malware is a fun and empowering exercise. Not that I'd recommend getting infected just for an excuse to remove it. Well, maybe on your friend's computer.
  • Mikko Hypponen, F-Secure's Chief Research Officer, has spoken with Dr.Web about their methods, and seems inclined to believe the numbers.
  • I have received messages from people that are infected with the Flashback trojan.
  • I was very careful when opening those messages.
  • Dr.Web and F-Secure detail that the Flashback trojan is sending the Mac's Universally Unique Identifier (UUID) in the payload to the C&C server. This would definitely make it easy to get an accurate count of the number of infected hosts.
  • Mikko also tweeted that the number of infected Macs is now roughly equivalent, in relative terms, to the number of PCs infected at the height of Conficker's reign.

Update 2

Many people seem to think that Dr.Web's statistics came from the current install-base of their anti-virus software, which isn't the case. Dr.Web allegedly used botnet C&C sinkhole tactics, which have been effectively used in the past for the same purpose, and are detailed in this Trend Micro paper.
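The counting side of a sinkhole is simple in principle: once bot check-ins hit your server, you count distinct hard-to-fake identifiers (like Flashback's UUIDs) rather than raw requests or IP addresses, which NAT and DHCP would skew. A toy sketch with invented log lines:

```python
# Toy sinkhole tally: each check-in carries the bot's UUID in its query
# string, so counting distinct UUIDs approximates the number of infected
# hosts. The request paths below are invented.
from urllib.parse import parse_qs, urlparse

requests = [
    "/search?uuid=AAAA-1111",
    "/search?uuid=BBBB-2222",
    "/search?uuid=AAAA-1111",  # same bot checking in a second time
    "/search?uuid=CCCC-3333",
]

def count_bots(paths):
    seen = set()
    for path in paths:
        seen.update(parse_qs(urlparse(path).query).get("uuid", []))
    return len(seen)

print(count_bots(requests))  # → 3
```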

Update 1

Regardless of whether Dr.Web's results are real or not, I think our main takeaway from this should be that many Mac users have been lured into a false sense of security, and will be, or may already be, in for a rude awakening. Apple's marketing efforts are at least partially responsible for this.

Original Post

Say it isn't so!

Despite what Apple's marketing department would have you believe, Macs are not invulnerable to attacks and malware targeting OS X does exist. Though Macs are popular with security practitioners and hackers, most are well aware the BSD-based operating system isn't a panacea when it comes to security - only less targeted.

Until now, apparently.

If what the Russian security software company, Dr.Web, reports is accurate, a trojan has succeeded in infecting over 550,000 Macs, the majority of which are located in the United States. The trojan, named "Flashback", takes advantage of a vulnerability in Java that was only yesterday addressed in a patch released by Apple.

So far, I haven't seen any other reports numbering the victims of Flashback, but if accurate, such a large infection rate on Macs may change common perception of OS X as "virus-proof" and could result in a spike in Mac anti-virus software sales. However, given that the company reporting these numbers is in the business of selling anti-virus software, I think we need to see their claims corroborated before we get too excited.

It didn't look like an English version of the article was available, so I've included a Google Translate translation below:

"Doctor Web" discovered a botnet of more than 550 000 "Poppies"

 April 4, 2012

Experts at "Doctor Web" - the Russian developer of IT security software - conducted a special study that allowed them to evaluate the distribution of the trojan BackDoor.Flashback, which infects computers running the operating system Mac OS X. The BackDoor.Flashback botnet now comprises more than 550,000 infected workstations, most of which are located in the United States and Canada. This once again refutes claims by some experts that there is no threat to Mac users.

Infection by the trojan BackDoor.Flashback.39 is performed using infected web sites and intermediate TDSs (Traffic Direction Systems), which redirect Mac OS X users to a malicious site. "Doctor Web" specialists found quite a lot of these pages; they all contain JavaScript that loads a Java applet into the user's browser, and the applet in turn contains the exploit. Among the newly detected malicious sites are, in particular:
According to some sources, at the end of March Google's search results included more than 4 million infected web pages. In addition, Apple user forums reported cases of infection by the trojan BackDoor.Flashback.39 when visiting the site
Beginning in February 2012, attackers used the vulnerabilities CVE-2011-3544 and CVE-2008-5353 to spread the malware, and after March 16 they began to use another exploit (CVE-2012-0507). Apple released the fix for this vulnerability only on April 3, 2012.
The exploit saves an executable file to the infected Mac's hard drive, which downloads a payload from a remote control server and then launches it. "Doctor Web" specialists found two versions of the trojan: around April 1, attackers began using a modified version of BackDoor.Flashback.39. As in previous versions, after launching, the malicious program checks the hard disk for the following components:
  • /Library/Little Snitch
  • /Developer/Applications/ /Contents/MacOS/Xcode
  • /Applications/VirusBarrier
  • /Applications/iAntiVirus/
  • /Applications/avast!.app
  • /Applications/
  • /Applications/
  • /Applications/Packet

If the specified files are not found, the trojan uses a particular algorithm to create a list of control servers, sends a message about its successful installation to a statistics server set up by the hackers, and sequentially polls the command centers.

It should be noted that the malware uses a very interesting mechanism for generating the addresses of its control servers, which allows the attackers, if necessary, to dynamically balance the load between them by switching from one command center to another. After receiving a response from a control server, BackDoor.Flashback.39 verifies the RSA signature of the reply, and then, if the check succeeds, downloads and runs a payload on the infected machine, which can be any executable file specified in the directive the trojan receives.

Each bot sends a unique identifier for the infected computer to the control server in its query string. Using the sinkhole method, "Doctor Web" specialists were able to redirect the botnet's traffic to their own servers, making it possible to count the infected hosts.

As of April 4, the botnet comprises more than 550,000 infected computers running Mac OS X, and this is only the part of the botnet using this particular modification of the BackDoor.Flashback trojan. Most of the infections are in the United States (56.6%, or 303,449 infected hosts); Canada is in second place (19.8%, or 106,379 infected computers); third place is taken by the United Kingdom (12.8%, or 68,577 cases of infection); and fourth is Australia at 6.1% (32,527 infected hosts).

To protect their computers from penetration by the trojan BackDoor.Flashback.39, "Doctor Web" specialists recommend that Mac OS X users download and install the security update offered by Apple:

Monday, April 2, 2012

Global Payments Breach


Welcome InfoSec Daily Podcast listeners! I'm going to address a few items related to this story that were discussed on last night's show.
  • To the best of my knowledge, participation in VISA's Service Provider Registry is required for all service providers potentially storing VISA cardholder data. Based on my experience, this is primarily a way to track service providers and a marketing tool. Even though Global has been booted off the list, they can still continue to do business, process VISA cards, and sign up new merchants. If anyone has more direct experience or corrections, please comment below.
  • PCI is applied differently to service providers like processors. However, it is in the opposite direction from what you were thinking: service providers actually have more requirements to comply with than a merchant would. They do the full PCI DSS plus a few additional requirements that apply only to service providers. They also have to achieve Level 1 compliance (a full Report on Compliance annually, with a third-party annual audit required) at far fewer annual transactions than a merchant would. I think this misunderstanding comes from the fact that, traditionally, issuers haven't needed to be PCI compliant. That's changed in recent years.
  • YES, the requirement not to store track data applies equally to processors as it does to merchants. Issuers (financial institutions that actually brand and send out credit cards) are the only ones with a good chance of getting an exception for storing track data, as they are the original source for producing/creating that data.

Here's the original post:

It isn’t so much the size of this breach that is significant, but the fact that one of the largest global payment processors got popped. Visa has allowed them to continue processing credit cards, but dropped them off their service provider registry (which is a BIG deal). The breach only affects North American merchants and cardholders. To give you an idea of how bad a breach at a large credit card processor can be, if a month’s worth of the transactions they handle were exposed, it is entirely possible that over 90% of all cardholders in the US would need new credit/debit cards.

This doesn’t happen often. I only know of two other cases where a processor was hit by a breach. CardSystems Solutions, as a business, was literally destroyed by its breach. VISA and AMEX revoked processing rights, forcing CardSystems to shut down operations and sell off assets almost overnight. Heartland Payment Systems is the most recent case, and the second largest breach ever at 130 million accounts. They were also stripped from the registry, but managed to recover, regain PCI compliance, and get back onto the registry within a year.

Global Payments had a public conference call at 8AM this morning that I didn’t have time to listen to, but has resulted in an explosion of news stories on the breach.

The worst thing I've been able to determine from the details so far is that it seems Global Payments was storing track data. The PCI DSS explicitly forbids storing track data (requirement 3.2.1), and PCI considers the storage of sensitive authentication data to be one of the most serious violations. CardSystems was effectively shut down for a lesser violation, though their breach was much larger.
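For reference, track data has a recognizable shape, which is one reason assessors and DLP tools can hunt for it at rest. This regex sketch of the track 2 layout (start sentinel, PAN, '=' separator, expiry, service code, end sentinel) is illustrative, not a production rule, and the sample PAN is a well-known test number:

```python
import re

# ISO 7813 track 2 shape: ';' start sentinel, 13-19 digit PAN, '='
# separator, YYMM expiry, 3-digit service code, discretionary data,
# '?' end sentinel. Simplified; real scanners also validate Luhn, etc.
TRACK2 = re.compile(r";(\d{13,19})=(\d{4})(\d{3})(\d*)\?")

def find_track2(text):
    """Return the PANs of any track-2-shaped strings found in text."""
    return [m.group(1) for m in TRACK2.finditer(text)]

sample = "log line ... ;4111111111111111=25121010000000000000? ..."
print(find_track2(sample))  # → ['4111111111111111']
```

Finding strings shaped like this anywhere in stored data, logs, or memory dumps is exactly the kind of evidence that turns a breach investigation into a compliance problem.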

It will be interesting to see if any of the details of the breach are released. These details are essential for the rest of the industry to learn from Global's mistakes. I'd like to see:
  • The attack vectors used, and the level of sophistication necessary to breach Global.
  • How long the attackers had access to systems
  • If track data really was stored, and what Global's excuse for such a violation is
  • Why the breach was limited to only 1.5 million accounts in North America. A large processor like Global might process 1.5 million transactions in just a few days. Why weren't more accounts stolen? Why only North America? Perhaps some effective segmentation was in place? That would be good news the PCI Council would be happy to point out.
  • And of course, we'll hopefully eventually find out who the perps were, and their level of hacking expertise

Time will tell.