Canvassing all security cracks

April 22, 2005

If Dave Aitel finds using assembly language second nature today, he has a grandparent to thank for it. When Aitel was 12, his maternal grandfather started teaching him the language, paving the way for a career in information technology.

Getting a scholarship to work as a computer scientist at the National Security Agency was probably the deciding factor.

At 28 (he looks much younger), Aitel has been in the security business a long time - he joined the NSA when he was 18 and worked there for six years.

After the NSA stint, Aitel spent three years with @stake, the security firm that grew out of the hacker group L0pht. (@stake has now been bought by Symantec). He says these two billets taught him most of what he needs to run his own business, which he set up five years ago.

His clients pay his New York-based firm, Immunity, anything from $US50,000 ($A65,000) to $US100,000 a year. Five people work at Immunity full-time; others come on board as needed.

Immunity's main product is an attack tool, CANVAS, which can be used for penetration testing. The open design allows companies to adapt it to their own environment.

Despite being considered one of the best in the dog-eat-dog global IT security community, Aitel isn't prone to arrogance.

He is one hacker (and the term is used advisedly) who is also very good at public relations. Not for him the short, sharp, incisive reply - he considers the questions thrown at him and, at times, seems genuinely surprised that people would seek to sink a knife into his back.

That is just what has been happening recently. Both proprietary software vendors and a senior researcher have been questioning the way he operates. Jealousy could be one of the reasons behind the attacks, he concedes; he makes a good deal of money from his clients. He and his researchers provide clients with inside knowledge of the vulnerabilities they find. The vendor of the software in question is not informed.

Aitel defends his method of operation. He questions why he should be informing people about the defects in their products. "Any software vendor has to assume that if I or any other researcher can find holes, then the crackers, the bad guys, are likely to have found them much earlier. The bad guys outnumber researchers. So vendors should be looking at the security of their own software much more closely.

"Look at the average software company, apart from Microsoft. All these companies have a small number of people devoted to security. Most of their time goes on marketing and sales. Why aren't they paying more attention to security?"

People in the security business have different attitudes when it comes to disclosure of vulnerabilities - some say one should publish exploits without bothering to inform the vendor; others say the vendor should be informed and given 30 or 60 days to fix things before the vulnerability is made public.

And vendors of proprietary software say vulnerabilities should only be disclosed when a patch is ready - they call it responsible disclosure.

But Aitel says the whole idea of responsible disclosure is bogus. "Look, these problems (vulnerabilities) have existed for years. Multiple people come up with the same discoveries all the time. I am not arrogant enough to think that when I find a serious flaw in an application that is widely used by business, only Dave Aitel can find this out.

"No, I know that dozens of blackhats would have found these same holes already. Look at any security mailing list - Full Disclosure or Bugtraq, for example. The number of people posting under anonymous names is much, much greater than the researchers who disclose their names. The security community is a year or two behind the blackhats.

"All I am doing is making my clients aware of the risk at which they are putting themselves when they use a given application. What's wrong with that?"

Aitel disputes the mantra that patches are the ultimate remedy. "Patching is terribly expensive," he says. "You have to test and test to ensure that your applications all work after the patch. And then deploying a patch in a medium-sized firm will cost many hundreds of thousands. How many companies are prepared - or even have - this kind of money to spend on deploying a patch?"

He blames media hype for the attacks on Immunity. "Someone may say something after a couple of drinks, something that he or she thought was not for publication. A journalist may understand it differently."

Would he do anything differently if he had the chance to start all over again? "Charge higher prices for my work," he laughs.

More in his own words:

Immunity is probably the only security firm which has the model of finding bugs and not informing vendors - ever. How did you come to decide on this kind of business model?

We're definitely not the only vendor that has private vulnerability information and capitalises on it in some way. In fact, most major vendors have this sort of policy at some level.

As hackers get into business, they tend to become more starchy and corporate in their behaviour. You appear to still retain some of the playfulness that characterises hackers as evinced by your "Microsoft total cost of 0wnership" paper. Ever found this to be a liability? Or is it an asset?

The TC0 paper got on Slashdot, and generated more hits than any advisory ever could. Most people who read it liked it - those who understood it was a joke, at least. I consider not having venture capital money and being able to do things like that an asset. We didn't put it out solely for publicity, though; we just thought the whole "Total Cost of Ownership" whitepaper idea was funny. We tried to fill it with just slightly less unsubstantiated nonsense than the other papers available on the subject.

What did those years at the NSA teach you? Why did you leave?

The first thing they teach you is not to talk about your years at the NSA. I have to be pretty strict on this sort of thing since I do a lot of travelling and I need foreign intelligence services to know there are no inches to be given, nor miles to be taken. I can say that I'm happy not to be living in Columbia, Maryland (the suburban town near the NSA's Fort Meade headquarters). Obviously I met a lot of great people at the NSA and @stake, and some of them are now at Microsoft and the other big consulting companies or are potential customers, and that doesn't hurt.

What did your years at @stake teach you?

How to do the trade of consulting, and in some cases, how not to do the trade of consulting. Consulting is a valuable skill. So few people know how to do it that you find yourself looking for consulting skills even in people you are hiring into a research role. Ninety per cent of the job is being able to get to where the job is, on time, and in a suit.

Likewise, I learned that fame is money. @stake, like most other consulting companies, was not extremely fond of its "resources" getting famous. It's difficult to release papers at most other companies. They feel that if you get famous, you'll be able to ask for more money, and they know that salary is their biggest expense. On the other hand, there were several gigs that both David Litchfield and I were on, and @stake charged more for David Litchfield. Rightfully so, since the customer was willing to pay for it, but that taught me an important lesson about the value of getting your employees as famous as possible. Hence, Immunity is always trying to get our employees to talk at conferences, release papers, and otherwise show how smart they are.

Another lesson I learned from @stake's mistakes was not to make my employees miserable when they travel. When I was at @stake, I got put on a three-month gig in Redmond, Washington, in the equivalent of a Motel 6, without being able to expense a rental car. I left shortly after to start Immunity. Immunity puts you up in real hotels, and we let our employees expense iPods and good noise-reduction headphones for long travel.

It is natural for other companies which have lesser lights in their ranks to dislike your company. How does this dislike play out in real life?

I don't believe there are other companies that have lesser lights. NAI has Oded Horovitz, among others. Symantec has Chris Eng and Kevin Dunn, among others. ISS has Neel Mehta and Mark Dowd. Neohapsis has John McDonald. And that's just the US companies I know off the top of my head. There are some great people in Europe and Asia doing research as well. New Zealand is well represented. There are a lot of top-notch researchers all over the place. One of the things I do is throw these little parties here in New York City called "Security Shindigs". Basically, these are mini-conferences, with free pizza and a talk from someone about something security-related. You'll see our competition come to these. There's no bad blood allowed.

What's the difference between what you're doing now and what Marc Maiffret did in 2001 when he released details of the Code Red vulnerability?

eEye still releases details of vulnerabilities, and that's a good thing. Releasing the details of vulnerabilities helps everyone determine how at risk they are. It helps tool-makers, like scanner companies and IDS companies, protect their customers. This information is freely available to everyone once the patch is released anyway, via binary diffing. Even if it wasn't, you have to assume people have 0day in the wild.

Once you accept that there are bugs you don't know about that other people do, it's not about when someone releases a vulnerability. It's about what secondary protections you have. Patches are hugely expensive to deploy and largely irrelevant. The key is host intrusion prevention, locked-down configurations, and upgrading to Windows 2003 SP1, Windows XP SP2 or any modern Linux that has protective measures enabled and isn't vulnerable to fifty-year-old problems.

Other security vendors are on record as questioning the "ethical" nature of your business model. Your comment?

There's a lot of nonsense in the media about this, but if you come to a security conference, you find everyone in the same bar having a good time and buying each other drinks. Likewise, most people in the industry are customers of Immunity. I'm not certain how true some of these articles are. I can't believe a VP of Computer Associates said he doesn't think NDAs are worth signing because everyone breaks them. He probably signs one NDA a day.

Do you ever fear that someone will do what Sybase did recently - get wind of some vulnerability you have found and send you a legal notice to prevent you sharing information with your customers?

This is one of the reasons Open Source software is so much more secure than closed source software - there's no incentive for an Open Source software vendor to sue you because you did free work for them. Security information about Open Source software is freely distributed, and this helps educate the end users as to their actual risk. With closed source software, there's no way for a business or an end user to know how much risk they are at from any particular configuration, which makes it more expensive to run. Businesses like to be able to know how much risk they are at. What Sybase really did was not protect their customers, but put them in a very difficult position. Now Sybase's customers don't know the full extent of the problem, but they do know Sybase has something to hide.

Securing software is actually quite easy. Both Open Source and Microsoft compilers have special flags to protect software from common vulnerabilities. There have been dramatic changes in the security of recent operating systems. These days, there's no excuse for anyone to run Windows 2000 on a network and then complain about it getting hacked. Upgrade to Windows 2003 SP1 or XP SP2 or any modern Linux and your problems just go away.