Tuesday, December 15, 2009

Return of the Ping of Death

An old exploit recently made a return to the Linux kernel. If you were to send a large data packet via ICMP to a vulnerable system, it would crash, causing a denial of service. The exploit, known as the Ping of Death, works because the maximum allowed size of an IP packet, and therefore of the ICMP message it carries, is 65535 bytes. It is, however, possible to send something larger as a series of fragments. The receiving system reassembles them on arrival and, if it is vulnerable, the resulting payload is bigger than the buffer allocated to receive it, causing an overflow that can crash the system.

The attack first made its appearance in the 1990s. It was particularly effective as it was even possible to bypass firewalls by spoofing the source IP address.

Back then there were plenty of exploits that worked with ICMP. Using a broadcast address for either source or destination was a particularly good way of causing denial of service just by generating large amounts of traffic. As internet access was typically dialup of around 56Kbps and corporate wide area networks weren’t much faster, a lot of damage could be done.

In theory, these kinds of exploits, known as Smurf attacks, can't really happen any more as systems are configured not to respond to broadcasts and routers are set not to forward packets directed to a broadcast address. For old times' sake, I gave my systems a test using hping and found this to be the case.
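For anyone who wants to repeat the test without hping, here is a rough scapy equivalent. It is only a sketch: it assumes scapy is installed, root privileges, and a 192.168.1.0/24 LAN, so adjust the broadcast address for your own network. A correctly configured network should stay silent.

    # Broadcast ping test: any host that answers is still willing to take part
    # in Smurf-style amplification. Addresses below are assumptions for a home LAN.
    from scapy.all import Ether, IP, ICMP, srp

    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / IP(dst="192.168.1.255") / ICMP(),
        timeout=3, verbose=False,
    )

    if answered:
        for _, reply in answered:
            print("reply from", reply[IP].src)   # hosts still answering broadcast pings
    else:
        print("no replies - broadcast pings are being ignored, as expected")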

Monday, December 14, 2009

Thunderbirds are Go for French Military

Something that caught my eye on Slashdot this morning was the story about the French military adopting Thunderbird as their mail client. I'm doing a bit of work comparing the security features of Linux and Windows and, by extension, open source vs. closed. An early conclusion is that the merits and defects of each approach are very subjective; indeed, any attempt at rational debate on the subject tends to descend into a slanging match between the different camps. What is interesting about the Thunderbird story is that it shows the French consider an open source mail client secure enough for military use.

It would be naive to think the only selection criterion was security; indeed, the French government has a policy of seeking "maximum technological and commercial independence" for all its software. However, one would hope that security was a major factor in the selection process.

Further reading suggests that a plus point of Thunderbird for the French Military was that it allowed them to develop their own security extensions. Being able to review the original source code was also advantageous.

Tuesday, December 8, 2009

WPA Cracker

There was an interesting article on the Register this morning about a new cloud-based service that allows you to brute force wireless WPA passwords. The service, run by Moxie Marlinspike of null byte prefix fame, claims it can compare your key against a 135 million word dictionary, optimised for WPA passwords, in around 20 minutes. It achieves such speed by spreading the load over a 400 CPU cloud cluster.

Although the figures are impressive, the service falls well short of guaranteeing it can crack your WPA password (note that it doesn't claim it can). For an 8 character key that uses upper and lower case letters and numbers, there are 2.18 x 10^14 possible combinations. This rises to 4.77 x 10^28 for a 16 character key. Hence the chance of the service successfully finding your password depends on how closely it resembles a dictionary word.
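A quick back-of-the-envelope check of those figures (pure arithmetic, nothing about the service itself is assumed) also shows why a 135 million word dictionary is tiny compared with the full keyspace:

    # Keyspace sizes for the figures quoted above.
    charset = 26 + 26 + 10            # upper case, lower case, digits

    print(f"8-character keyspace:  {charset ** 8:.2e}")    # ~2.18e14
    print(f"16-character keyspace: {charset ** 16:.2e}")   # ~4.77e28
    print(f"dictionary size:       {135_000_000:.2e}")     # ~1.35e8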

Of course in reality, your WPA key almost certainly does resemble a dictionary word. If you want to make it safer but still keep it possible to remember, then increase its length as discussed here.

If you want to test the strength of your WPA password, you need to capture the WPA handshake using something like Aircrack-ng, then submit it to the site and hand over $17.

Friday, December 4, 2009

Practical Password Management

I did a quick count today of the passwords that I use at least once per month and was surprised to find that I have 41. I appreciate that this is higher than the average but suspect that anyone who uses a PC for work and does a bit of online shopping or banking is at least in double figures. It’s all very well security specialists (like me) telling us to use different complex passwords for each account and to change them regularly but how the hell can you remember 41 passwords? What I imagine most people do is to use the same password for all accounts and change them as infrequently as possible. This is obviously far from ideal so a compromise needs to be found.

The most important password is the one for your email. Nearly every other account you have will rely on it in some form for password resets in the event that a password is forgotten, so a unique, strong password here is vital. After that it is a case of assessing the importance of the data held with each account. If it is limited personal information you can get away with your generic password, although it should still be difficult to guess.

Another option is to use a password management tool. The idea is that you have a secure encrypted database of all your passwords, protected by a strong passphrase, which means you only need to remember one password. I use Ubuntu Linux which comes with just such a program called Seahorse; it can also be used to manage PGP keys and certificates. More recent versions of Windows come with something called Credential Manager, which is fine in a Windows-centric world but isn't much use for storing passwords where the authentication is built into the end application. An open source utility that I use for password management on Windows, but which also works on Linux and possibly Mac OS, is Password Safe. There are many programs of this type available for free but this one is particularly easy to use and so far has never let me down.
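To illustrate the principle rather than any particular product, here is a bare-bones sketch of what such tools do under the hood: stretch a single passphrase into a key and use it to encrypt the whole password list. It is my own simplification, not Password Safe's actual file format, and it assumes the third-party cryptography package is installed.

    import base64, json, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
        """Stretch a human passphrase into a 32-byte encryption key."""
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=200_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    salt = os.urandom(16)
    vault = Fernet(key_from_passphrase("one strong passphrase to remember", salt))

    # Encrypt the password list; only 'salt' and 'blob' ever need to touch the disk.
    blob = vault.encrypt(json.dumps({"webmail": "s3cret", "bank": "0therSecret"}).encode())
    print(json.loads(vault.decrypt(blob)))   # passphrase + salt recovers everything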

Another thing to do, at least for your own systems, is to change the default username. A recent study from Microsoft showed that brute force attacks target usernames such as administrator or administrateur.

Tuesday, December 1, 2009

Colocation vs Cloud

The received wisdom about locating your applications in the Cloud is that you benefit in terms of scalability, availability and cost. The downside is that you surrender your data to a third party, so you need to ensure that you trust them implicitly.

I’ve recently been on a mission to source a suitable location to host a large web application with a required availability of close to 24x7 and some guaranteed performance levels. In Cloud terminology I was looking for Infrastructure as a Service (IaaS). To give an idea of the scale, the application was expected to have over 6 million unique visits per month with over 50 million page views.

I approached 4 “enterprise” level cloud providers with a fairly detailed spec of what I expected but was fairly flexible on what they could offer as a solution. Not surprisingly, given the initial spec, the proposed solutions were what could be termed “private cloud” or something close to what used to be called Managed Hosting. Although there were some price differences, all the offers were in the same area. Most importantly, all the providers gave me confidence that I could trust them with my data. I was also sure that they could meet the availability and scalability requirements.

As a follow-up exercise, I carried out a cost analysis of the Cloud solution compared to a colocated equivalent. In order to do so it was necessary to make some fairly large assumptions, including that the capital for the initial investment in the infrastructure was available and that the cost could be written off over a period of three years. Extra staff also needed to be factored in.

The end result showed that for this application the colocation and Cloud solutions were very similar in cost, with colocation slightly cheaper. What was more interesting were the costs when doubling the expected load on the application and hence the supporting infrastructure. In this case, the colocation solution becomes up to 40% cheaper than the Cloud one as economy of scale begins to take effect.

Of course, it would be foolish to draw the conclusion that colocation is cheaper than Cloud. There are many things to consider, including how much you need the fast scalability and provisioning capabilities of some Cloud offerings, the level of support and monitoring required, your ability to recruit and retain the right staff for colocation, as well as security features. What I do think is a fair conclusion is that the larger your hosting requirements, the more you should investigate the available options.

Thursday, November 26, 2009

Packet Capture with Windows Network Monitor 3.3

I’ve recently been using Microsoft’s Network Monitor 3.3 to troubleshoot a few issues. I have a lot of experience of earlier versions of the tool, which although good for basic stuff soon reached its limits if you needed to dig a bit deeper. I’d found that Wireshark was much more powerful and also had the advantage of working on Linux. There are of course plenty of paid for network capture utilities out there but as I’d never reached the limits of the free open source Wireshark, I’d never felt the need to try them out.

There are many new features in Network Monitor 3.3, including powerful capture and display filters which, despite the large number of examples and the ‘verify filter’ function, I initially found difficult to use. The ‘Network Conversations’ window is a welcome addition as it allows you to easily see traffic between specific hosts without relying on the filters.

Some effort has been made with regard to performance. You can switch parsers on and off as required and also run the tool from the command line. It’s also possible to limit the number of bytes captured.

An API is made available allowing you in theory to pull capture data into your own application or expand functionality. Some example add-ons, known as Experts, are available on the Microsoft site and can be easily integrated into the tool.

Other advanced features that I’ve not tested include capturing WWAN and tunnelled traffic. Something useful that I did test was Network Monitor’s ability to read pcap files. Hence you can capture the output of something like TCPDump to a file and then view it with Network Monitor.

Friday, November 20, 2009

Urban Myth

Like many other industries, the IT world has its own set of urban myths. One that has surfaced more often than most is the case of the mysteriously rebooting server. Normally it happens around 7.30 in the morning and subsequent investigation shows no obvious problem. Even more bizarrely, it only occurs Monday to Friday and also avoids public holidays. Eventually an engineer will be tasked to come in early to observe the problem in action. Everything will appear normal and then suddenly the screen will go dead as the cleaner pulls out the power cable from the socket in order to plug in the vacuum cleaner.

Yesterday, this actually happened to me. I was messing around with Nagios on my test Ubuntu server when I lost my SSH session. My test server is in the hall, next to my ADSL router, as it runs VMware, which isn’t compatible with the Wi-Fi card that I have, and hence needs to be connected by cable. Sure enough, investigation showed that the cleaner had unplugged the server in order to use the vacuum cleaner. Actually, she had knocked out the network cable, but it wouldn’t be an urban myth if there wasn’t some exaggeration.

Why is this related to security? Protection of physical infrastructure, including power and communications, is just as relevant to security as any other aspect. Power failure in particular can cause data loss as well as the obvious availability problems.

Such an event should trigger a company’s incident response procedure. At one place I worked, this would have involved numerous meetings with a large number of participants who would have produced a report recommending IT training for cleaners, the development of a cleaning procedure for IT equipment, installation of security cameras to observe that the procedure was being followed and a member of staff to audit and report. I, on the other hand, will be taking my wife’s advice and shifting my fat lazy ar*e to put in some proper cabling to make sure cleaner and server never meet.

Tuesday, November 17, 2009

Software Policy and Data Protection

I’ve recently been putting together a set of security policies for a client, which of course includes a software policy. Wherever I have worked in the past I’ve always argued for a liberal and relaxed policy as to what staff can run on their PCs. The basic idea is to have a small set of core applications that are fully supported by the IT department and a list of banned software types which mainly consists of anything that is illegal. Anything that sits in the middle can then be installed by the user with the understanding that no support is to be provided by IT and that it must be removed if it is shown to conflict with a core application.

Although most software policies that I’ve seen in the past are restrictive to the point that they wouldn’t be out of place in North Korean government policy, the situation described above is often what occurs in reality. Trying to police a restrictive software policy is time consuming and potentially expensive. Locking down PCs can also be complicated as there always seems to be one critical application or function that requires the user to be an administrator. Technology is also often one step ahead. Did anyone else have fun trying to stop MSN use a few years ago? I’ve also been in several “discussions” about blocking Webex-type tools which were only resolved when management launched a cost cutting initiative and it was pointed out that online collaboration could save us money and time.

Of course the reason for restrictive policies is mostly down to the fear of introducing malware onto the corporate network and ultimately losing data. This is a very real threat and one that appears to be getting worse. Rather than relenting on my relaxed software policy strategy, I advocate another approach. The first step is an active update policy ensuring that OS and software patches are applied as rapidly as possible, closely followed by the installation and maintenance of an anti-malware package. This should significantly reduce unwanted malware, particularly that which comes from internet browsing. The next step, which is a little more radical, is to treat the corporate network as untrusted. In real terms, this means placing firewalls, IDS/IPS and Data Loss Prevention (DLP) technology between corporate systems and internal users. If someone does inadvertently introduce malware to the network, then the risk is limited to the end system. Such a setup also has the added benefits of protecting against internal data theft and may help meet regulatory requirements.

The move towards “Cloud Computing”, with companies looking to locate their systems with a third party, helps to facilitate such an approach. The idea of corporate networks being little more than semi-private internet access points is maybe not that far away.

The ultimate aim of the software and other policies is to protect against data loss without restricting the productivity of the end user. Although not perfect, I think my approach is the best compromise between security and usability.

Friday, November 13, 2009

Web Application Vulnerability Trends

There is a recent report from Cenzic that produces statistics on Web vulnerabilities for Q1 and Q2 of 2009. Although such studies can be far from objective, this one seems fairly well balanced and quotes, amongst others, NIST, US-CERT and SANS as sources.

Not surprisingly, web application vulnerabilities accounted for around 78% of all issues, with old favourites Cross Site Scripting and SQL injection being the most significant.

The findings are somewhat disappointing as the vulnerabilities are not new and have appeared in the OWASP top 10 for many years. This suggests that more effort needs to be placed in good development practices as outlined here.

The report also has a section on browser vulnerabilities, reporting that Firefox had 44% of all browser flaws over the period. As the Register points out, this isn’t really a true reflection of risk as other factors need to be considered, such as the severity of each vulnerability, the time a manufacturer takes to fix it and so on.

Monday, November 9, 2009

The Safety of SSL

There have been a number of SSL/TLS related security vulnerabilities in the news recently, including the Null Prefix problem and the more recent man-in-the-middle attacks. The latter has yet to be fixed but doesn’t seem to present a major risk for e-commerce, online banking or other internet transactions that require authentication and encryption. Indeed it seems that sessions that require client certificates for authentication would be most at risk, a scenario that is not that common. Whatever the seriousness of the vulnerability, it is based on the implementation of SSL/TLS rather than the core technology of asymmetric encryption algorithms.

Asymmetric algorithms, more commonly known as public key cryptography, allow two-way secure communication without the hassle of prior key exchange. Some of the more common implementations of public key cryptography, including RSA, make use of the fact that some mathematical operations are much easier to perform in one direction than the other, in particular the factoring of large numbers. For example, if you tried to determine values for x and y where x * y = 65869 it would take a fair amount of time. The reverse problem of finding the result of 199 * 331 would be much quicker. Note that x and y are prime numbers as otherwise they could be factored into smaller values. Naturally, as computing technology improves it becomes feasible to use brute force to do the factoring in a short amount of time. However, the same technology allows us to use larger and larger values for x and y without compromising performance. The Greek mathematician Euclid proved in around 300 BC that there are an infinite number of primes, and the more recent prime number theorem, conjectured by Gauss, shows that there will be a sufficient number of them not to risk choosing the same ones. Hence we can be reasonably confident that it should always be possible to stay ahead of improvements in brute force technology.
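A toy illustration of that asymmetry (my own example, and deliberately tiny: real RSA moduli are hundreds of digits long, so both directions finish instantly here, but the factoring side grows explosively with size while the multiplication side does not):

    def trial_division(n):
        """Find the smallest prime factor of n by brute force."""
        f = 2
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 1
        return n, 1   # n itself is prime

    print(199 * 331)              # the easy direction: one multiplication, 65869
    print(trial_division(65869))  # the hard direction: searching for the factors (199, 331)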

Of course one day someone may come up with a way of simplifying the factoring of large numbers and render current asymmetric algorithms useless. Note that this wouldn’t make encryption impossible, but it would stop simple over-the-wire key exchange. It’s worth pointing out that modern encryption for internet communication is generally hybrid, with an asymmetric algorithm initially being used to exchange a symmetric key that is then used for the rest of the session. A future technique that could replace current systems is quantum key exchange. To vastly oversimplify, if a key exchanged in a quantum system is intercepted, the observation of the key will alter its state and hence alert the sender and the receiver. As the system relies on physics rather than mathematics, there is no algorithm to crack, making it in principle unbreakable. Today’s infrastructure is obviously not geared for widespread quantum key exchange but who knows for the future.

Further reading: The Music of the Primes by Marcus du Sautoy; Quantum: A Guide for the Perplexed by Jim Al-Khalili.

Friday, November 6, 2009

Portable Apps

A friend of mine recently pointed me in the direction of Portable Apps for Windows. A portable application for Windows is one that does not leave its files or settings on the host computer. The concept was familiar as I’ve often used Linux Live CDs, although that is really a portable OS rather than portable applications. I really like the idea, but what about the security implications?

I installed the Portable Apps Suite Lite which contains over a dozen applications including Firefox. It uses the approach of having specially written applications rather than application virtualisation. I decided to do my testing with Firefox as it is something that I could see adding real value. A few years ago I used to travel a lot to the various European offices of the company I was working for. As my laptop was slow, old and over 4 kg I often used to leave it at home and work on any spare desktop that was available. It would have been a godsend to have had a USB stick with my own browser, email client etc rather than struggling away with an out of date version of Internet Explorer in a language I couldn’t always understand.

My first concern was that the portable applications might quickly become out of date exposing security vulnerabilities. However Firefox updated itself from version 3.0.7, which came with the initial install, to version 3.0.15, which suggests that security updates are application specific and not necessarily limited by being portable.

A significant risk of the portable app comes from the associated USB stick. Malware distribution via removable media, although prevalent when floppy disks were in common use, was until recently rarely a problem. It seems, however, to be making a big comeback, with Trojans such as the Taterf worm using USB sticks as their primary distribution method.

A positive aspect of a Windows portable app is that it has the potential to run on a host computer using an account that has minimum security privileges. In this respect, security could actually be improved by use of portable apps as malware either directly from the USB stick or something downloaded from a malicious web site could do less damage to the host system.

After a week of messing around with Portable Apps I could only conclude that the security implications of such technology are somewhat ambiguous. More investigation needs to be done. Unfortunately I can see Portable Apps being used mainly on corporate systems which are severely locked down to restrict users to certain approved applications. This of course defeats the object of the lockdown and so conflict with system administrators is highly likely.

Tuesday, November 3, 2009

Size Does Matter

There was an interesting article today in the Register about brute force password cracking using Amazon’s EC2 cloud architecture. The main focus was on how much it would cost to crack passwords of different lengths and complexity. One of the conclusions, which is almost counter-intuitive, is that a long lower case only password is much harder to crack than a shorter complex one consisting of lower and upper case characters as well as numbers. I did my own calculations to verify the findings and came up with the same results.

Take for example an 8 character complex password containing upper case and lower case characters, numbers and also a choice of 20 non-standard characters such as %. When considering brute force cracking, the 8 character complex password is easier to break than a 9 character one containing only upper and lower case characters, and also easier than an 11 character password containing only lower case characters.

Hence, next time that nasty Systems Administrator tells you that your password should resemble something like x%fF*Z3$ you can tell them that this is less secure than a password like HelloFred or even mycarisblue. Note that at various times in my working career, I have been one of those administrators so I know where they are coming from.

No doubt the Systems Admin would respond that in reality a brute force attack would not be random in the words and phrases it attempts and consequently would crack a long non-complex password quicker than a short complex one. This is probably true when length differences are small but difficult to quantify accurately. Coming back to the real world, hardly anybody can remember a password such as x%fF*Z3$ but it’s not so hard to recall a semi-abstract phrase such as MyDogisfrenchThanksforthefish. This non-complex password is approximately 2.8 x 10^34 times more difficult to crack than the complex one when considering only a brute force approach. Even when factoring in dictionary approaches it’s probably still a lot safer as well as being easier to recall. Hence when next choosing a password, remember, size really does matter.
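For anyone who wants to check the counting behind those claims, the arithmetic is simple enough (brute force only, dictionary attacks ignored; the character set sizes are the ones assumed in the examples above):

    # Keyspace sizes: a longer simple password beats a shorter complex one.
    complex_8 = (26 + 26 + 10 + 20) ** 8   # upper, lower, digits, 20 symbols
    mixed_9   = (26 + 26) ** 9             # upper and lower case only
    lower_11  = 26 ** 11                   # lower case only
    phrase_29 = (26 + 26) ** 29            # 'MyDogisfrenchThanksforthefish'

    print(f"{complex_8:.2e}  8-char complex")      # ~2.04e15
    print(f"{mixed_9:.2e}  9-char mixed case")     # ~2.78e15
    print(f"{lower_11:.2e}  11-char lower case")   # ~3.67e15
    print(f"ratio phrase/complex: {phrase_29 / complex_8:.1e}")   # ~2.8e34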

Tuesday, October 27, 2009

Guardian Loses 500 000 CVs

It’s been widely reported this week that the Guardian jobs website was hacked, resulting in 500 000 CVs being stolen. Although no logon or financial information was exposed, the breach is still considered serious as a typical CV contains plenty of information that can be used for identity theft. Similar information has previously been stolen from the likes of Monster.

Although the security breach was embarrassing and the theft illegal, the actual data loss is perhaps less serious than it first appears. Much of the information found in a CV is often available legally in the public domain. Public profiles on business networks such as LinkedIn are a good example, as are the usual suspects such as Facebook for social networks. Most countries now have online phone books that can provide address and phone number details. Personal blogs and websites often complete the picture.

It wouldn’t be out of the question to develop an information crawler to farm personal information from public web sites and services. Granted, stealing CVs would provide a higher quality of data, but it also comes with the risk of severe punishment if caught.

Legal methods of harvesting even more personal information are already in circulation. For example the Porn Star Name game that circulated on Twitter recently.

So although we should be concerned about crimes against our personal information, we should also pay attention as to what we give away for free.

Wednesday, October 21, 2009

T-Mobile Data Loss

The security story of the week that I have found the most interesting is the data loss by Microsoft subsidiary Danger, which provides Sidekick data services to T-Mobile customers. There are lots of different stories about what actually happened, including one about a disaffected insider deliberately sabotaging certain critical systems. Whatever is true, there was clearly something lacking in the backup and restore procedures. What I find most astonishing is that some Microsoft apologists seem to be trying to blame Oracle and Sun for the issue as this was the platform in use for the Sidekick data services. If you are a customer who has just lost your data, this is not what you want to hear, as it is for the supplier to manage the systems whatever they are.

At the business level, if you trust your data to a third party, you need assurances that they not only back up your data correctly but that they also test the restore procedures at regular intervals. Off-site storage of backup media should also be a non-negotiable requirement. Obviously at the consumer level, such assurances are harder to come by.

There has also been some debate as to whether the incident has been a set back for Cloud Computing. Putting aside the argument about what Cloud Computing actually is, if you engage a Cloud Computing service you need to check its provision for backup and restore as you would any other service.

It now looks like the T-Mobile data will be recovered, which is great for consumers and hopefully a wake-up call for other companies who manage and process data.

Tuesday, October 13, 2009

Buffer Overflows

With Microsoft due to have its biggest ever Patch Tuesday, with 34 security flaws addressed, I got thinking about buffer overflow exploits. At least two of the security problems to be fixed are due to weaknesses that allow stack-based buffer overflows. Although buffer overflows have been around since as early as 1972, it wasn’t until 1999 that I really became aware of them. A company called eEye Digital Security released details of an exploit in IIS 4 that allowed you to open a remote shell on the targeted system over the HTTP protocol. If memory serves me correctly, there was a bit of a fuss at the time, with Microsoft claiming the vulnerability was released to the public in an irresponsible way whereas eEye and others stated that without such ‘shock’ tactics, Microsoft wouldn’t treat the problem with enough urgency. Whatever the truth, I got the distinct impression that Microsoft’s security notification service, including how it credited third parties, suddenly got a lot better.

Perhaps the most famous exploit of a buffer overflow was the Code Red worm in 2001 which exploited a vulnerability in the Microsoft indexing software distributed in IIS. Although a patch had been available for over a month, many system administrators of public facing servers had failed to apply it or even disable the software if it wasn’t used. The positive aspect of the Code Red worm was the ‘wake up call’ it gave system administrators to correctly patch their systems.

There are many ways to protect against buffer overflows, including a technique called Address Space Layout Randomization (ASLR), which is incorporated into Windows Server 2008 and Vista. Linux and Mac OS X 10.5 also have some ASLR functionality. ASLR picks different locations to load system components into memory each time a system is started, making buffer overflow exploits difficult but not impossible. Intrusion Prevention Systems (IPS) can help block known attacks or exploits but a good attack should be able to hide its intent.
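ASLR is easy to observe for yourself on a Linux box. The quick sketch below (my own, Linux-only, assuming /proc is available and ASLR has not been disabled) starts the same short-lived process twice and prints where its stack ended up each time; with randomisation on, the addresses differ on every run, which is exactly what makes hard-coded addresses in an exploit unreliable.

    import subprocess

    def stack_mapping():
        """Start a fresh process and return the address range of its stack."""
        maps = subprocess.run(
            ["cat", "/proc/self/maps"], capture_output=True, text=True
        ).stdout
        for line in maps.splitlines():
            if "[stack]" in line:
                return line.split()[0]   # e.g. '7ffd1c2e0000-7ffd1c301000'
        return None

    print("run 1 stack:", stack_mapping())
    print("run 2 stack:", stack_mapping())   # a different address range when ASLR is on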

Ultimately the best way to protect against buffer overflow is good programming from the most basic OS functions right up to application software.

Thursday, October 8, 2009

Null Prefix Attacks against SSL

There has been a lot of noise over the past few days about attacking SSL using counterfeit certificates. The story gained momentum when a fake certificate for www.paypal.com was posted to the net, with PayPal banning the author of the exploit from their service a few days later. It is possible to create the false certificate because certain browsers that rely on the Microsoft CryptoAPI fail to correctly interpret a null character in the common name. There seems to be much confusion about the seriousness of the vulnerability and how to exploit it. If you have a spare hour, I recommend watching the original Black Hat presentation by Moxie Marlinspike entitled More Tricks for Defeating SSL, which examines techniques for attacking SSL traffic including using certificates with a null byte in the common name. It includes examples of how such attacks can be used to harvest real data.

Friday, October 2, 2009

Poisoning Google

There are a couple of stories on The Register today about hackers manipulating search engine results so that searches for popular items would display links to sites serving malware. Google Wave and Microsoft Security Essentials were just two of the search terms that were targeted.

You have to admire the innovation of some of these hackers and wonder just how much money they could be making if they put their efforts into a legitimate business. The frightening aspect is that as they choose to work in the black economy, the rewards available must be extremely lucrative to make it worthwhile.

Thursday, October 1, 2009

Security Essentials

One of the bigger security stories of the week is the release of Microsoft’s free Security Essentials package which contains anti-spyware and anti-virus functionality. The motivation behind the software seems to be to allow the millions of unprotected PCs in the world to get some basic anti-malware functionality. Microsoft is not well known for its displays of altruism when it comes to software and indeed there is an element of self interest in the move.

The Windows platform has the reputation of being the least secure of modern operating systems. This is at least partly due to the fact that it is the most popular OS by far and hence has the largest number of non-technical users ill-equipped to secure their PCs. This makes Windows an attractive target for malware writers, as the chances of a successful exploit are much greater than for an attack against, for example, Linux. Although security awareness is better than it once was, anti-malware software either comes at a cost or is free but with excessive marketing blurb to get you to upgrade to a paid-for version. Security Essentials is an easy to download and install package which so far at least seems to be very unobtrusive. Hopefully it will encourage owners of unprotected systems to improve their security.

Why is this a good thing if your own PC is already well protected? The simple answer is that the millions of compromised PCs in the world affect us all every day as they can be used to distribute spam, launch denial of service attacks or act as a platform for other exploits. The lower the number of unprotected systems, the fewer the possibilities for exploitation. This is good for Microsoft in that it makes the internet a safer place to do business and could potentially improve the reputation of its software.

Microsoft will not be bundling Security Essentials with future OS releases nor distributing it as a critical update, probably to avoid problems with anti-competition regulation. Neither will it install on pirated copies of Windows. Although these measures are understandable, the effectiveness will no doubt be reduced as many of the PCs in most need of anti-malware software will fail to receive the package.

Tuesday, September 29, 2009

Microsoft vs Apple

It's not really related to security but I found this article on the Guardian website fairly amusing. Of course the debate about whether Microsoft's or Apple's products are the best is one that has gone on for many years. There is also the variant between Linux and Microsoft where all sides attack each other with the zeal of religious fanatics. The article takes a somewhat different approach, perhaps more in tune with how non-techies view the debate. The best quote is:

'I know Windows is awful. Everyone knows Windows is awful. Windows is like the faint smell of piss in a subway:'

Well worth a read.

Monday, September 28, 2009

IDS and HTTP decoding

I’ve recently been doing some work on intrusion detection systems (IDS). As anyone who has ever discussed the subject with me will know, I am somewhat sceptical about the value they add to protecting an application, particularly when HTTP is involved. Part of the reason for the scepticism is the complexity of many of the tasks an IDS needs to carry out. Take, for example, decoding a URL. It’s claimed in a paper by Daniel Roelker at IDSResearch.org that there are over 8 different types of encoding possible for HTTP, despite only two being defined in the relevant RFCs. An IDS needs to be able to understand each of these methods before it can hope to identify a malicious request. The task is complicated further by different products supporting different methods, with IIS perhaps being the worst offender. Whether such deviation from the standards is due to irresponsible software manufacturers or to limitations or ambiguities in the standards is hard to tell. Note that IIS7 now seems to disable many of the encoding techniques, although they can easily be reactivated. IDSResearch.org also has some useful tools for testing which encoding methods are supported by your web server and for checking whether your IDS can pick up the various types of encoding. It’s well worth testing your systems. You might be surprised what shows up.
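A small illustration of why this is harder than it looks (my own example, not taken from the paper): the same malicious path can be wrapped in more than one layer of perfectly standard percent-encoding, and a detector that only decodes once never sees it.

    from urllib.parse import unquote

    single = "%2e%2e%2f%2e%2e%2fetc%2fpasswd"                 # percent-encoded once
    double = "%252e%252e%252f%252e%252e%252fetc%252fpasswd"   # percent-encoded twice

    print(unquote(single))             # ../../etc/passwd  -> caught after one decode
    print(unquote(double))             # %2e%2e%2f...      -> still looks harmless
    print(unquote(unquote(double)))    # ../../etc/passwd  -> only visible on a second pass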

Wednesday, September 23, 2009

Web Application Vulnerabilities – Spreading the Word

Many of the companies I have worked with in the past have been fairly progressive when it comes to security assessment. Part of this has been to commission penetration tests by a third party to determine network, OS and application vulnerabilities. Surprisingly, the most difficult part of the process was persuading colleagues to act on issues discovered in a test. This was particularly true for web application vulnerabilities, as getting a stressed development manager to redirect valuable resources into fixing security holes was never easy. The main reason for this was often that the security team would find it difficult to articulate the threat level of each problem and hence failed to communicate the true level of danger.

A testing company I’ve engaged in the past, Pentest Ltd, recently brought to my attention a site called The Web Application Firewall Information Centre, whose raison-d’être is to maintain a list of web application security incidents. The site lists all publicly reported incidents by type, time frame and outcome. Although I suspect it only includes a fraction of total incidents, not least because many are never reported, it is an excellent source of information to demonstrate how particular vulnerability types have been exploited in the real world. If nothing else it should help the security professional to explain why a vulnerability has a particular threat level and why it needs to be fixed.

Friday, September 18, 2009

Brute Force Password Cracking

The BackTrack Live CD I recently used in conjunction with a WEP proof of concept attack also comes with several SSH brute force password cracking tools. This reminded me of a previous brute force attack against my systems. One dark night in a data centre, a colleague and I were upgrading the hardware in a firewall cluster. We connected the systems to the internet, powered on, and opened a terminal session. Within 5 minutes the terminal was flooded with failed logon attempts, which was most surprising as SSH had not previously been enabled at the IP address in question. Fortunately for us, we had already configured the hardware offline and had changed the default password.

A review of the firewall logs indicated that our entire IP range had been scanned for open SSH ports and that once found, a brute force attack was launched. Further investigation suggested that the attack wasn’t specifically targeted at our firewall but was rather a speculative attempt to penetrate systems across a large IP address range. No doubt a successful authentication would have resulted in further exploitation.

There are several ways to protect against a brute force attack. The most obvious is to have some kind of account lockout, i.e. refuse logon attempts after a certain number of consecutive failures. However, this can lead to a denial of service attack where a hacker deliberately locks out the account to prevent a legitimate user logging on. A slightly more sophisticated method is tarpitting, where each failed logon increases the amount of time before a user can attempt a subsequent logon.
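For the sake of illustration, here is a minimal sketch of the tarpitting idea (my own simplification, not lifted from any particular SSH daemon or product): every consecutive failure doubles the delay before the account will accept another attempt, and a success resets it.

    import time
    from collections import defaultdict

    failures = defaultdict(int)        # account name -> consecutive failed attempts
    next_allowed = defaultdict(float)  # account name -> earliest time of next attempt

    def attempt_login(account, password, check_credentials):
        now = time.monotonic()
        if now < next_allowed[account]:
            return "try again later"            # still inside the tarpit delay
        if check_credentials(account, password):
            failures[account] = 0               # success clears the penalty
            return "ok"
        failures[account] += 1
        delay = min(2 ** failures[account], 3600)   # 2s, 4s, 8s... capped at an hour
        next_allowed[account] = now + delay
        return "failed"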

As ever, strong passwords are a must for protecting against brute force attacks. Last Bit have an interesting calculator to allow you to estimate the maximum time it would take to crack your password.

Perhaps the simplest but most effective method of protection is to rename default accounts, especially administrator and root. A speculative brute force attack will almost certainly use a generic account name and so can be beaten whatever the strength of the password.

Wednesday, September 16, 2009

Cracking WEP

We all know that using WEP to protect wireless network communication is considered insecure, and many people also know that this is because of the way the RC4 stream cipher is used in its implementation. In real terms, the implementation weakness allows anyone who can capture enough WEP encrypted packets from a particular wireless access point to use statistical analysis to recover the encryption key. This is much quicker and easier than using a brute force dictionary attack.

If you are not a mathematician with some handy wireless packet capture equipment, cracking WEP still seems quite tricky. However, a quick search of the internet shows that there are plenty of tools out there to do the job for you. Most links point in the direction of aircrack-ng, a suite of tools that allows you to discover weak access points, capture wireless packets, inject extra packets to generate more traffic and finally extract the encryption key. There are even plenty of YouTube videos to tell you how to use it, but I found the documentation on the aircrack-ng site easier to follow.

The most difficult part of the process is getting your wireless network card to work with the software as not all chipsets are supported. Refer to the hardware compatibility list. Aircrack-ng works better with Linux but if you only have Windows you can use a Live CD such as BackTrack.

My own experimentation showed that once you have your attack system up and running, it typically took less than 10 minutes to break into a WEP protected wireless network. Although it is possible to complicate the process by hiding the SSID and restricting MAC addresses, these measures only delay the WEP network’s compromise.

To conclude, WEP shouldn’t be used to secure a wireless network. Given that I can pick up four WEP networks just from my house (I do live near a business centre), it’s possible it is still in widespread use. Even those networks that are WPA protected are not necessarily safe. The aircrack-ng software also includes a brute force attack method that works against the 4-way handshake part of the initial WPA negotiation, although it’s only really successful if common dictionary words are used for the key. There are also reports of new techniques, similar to the WEP attack, that can be used against WPA TKIP, so it is surely only a matter of time before tools are available to crack this as well. So my advice is to use WPA2 (AES) with strong keys and to upgrade as new technology becomes available.

Monday, September 14, 2009

What the Hell is Cloud Computing?

The following link from PrudentCloud leads to a collection of YouTube videos on the definition of Cloud Computing as seen by various industry leaders. I particularly like the one from Larry Ellison of Oracle fame. If you watch all the videos, you will see that Cloud Computing means different things to different people.

My own take on Cloud Computing is based on my experience of working in the ‘Cloud’ space for over ten years. Back in 2000, I was Operations Director for an Application Service Provider (ASP) which, towards the middle of the decade, rebranded its product range as Software as a Service (SaaS). Although the same company is not yet publicly marketing itself as a cloud provider, its competitors are, so I am sure it will only be a matter of time. Not surprisingly, the part of their solution that could be called ‘Cloud’, i.e. the delivery method to the client, is exactly the same as it was when the company was an ASP or SaaS provider.

The same seems to be true for ISP/hosting providers. In 1998 I was able to rent web space that would run Perl scripts and interface with a MySQL database (I think it was MySQL), also hosted by the ISP. The pricing model was based on resource utilisation: disk space and bandwidth. Such a service certainly seems to fit into the Cloud Computing definition. The offerings on today’s market are somewhat more advanced but the underlying architecture and pricing model are more or less the same.

Hence, from my perspective, Cloud Computing is more marketing than innovation but as it opens up a whole range of possible Cloud Computing security consultancy opportunities, I probably shouldn’t complain too much.

Friday, September 11, 2009

Home Security

After leaving the comfort zone of my job as Operations Director at a well known SaaS provider to set up as an independent IT Security Consultant, I thought it might be wise to first test my skills on my own home, and now also office, network.

My home/office network is not dissimilar to many other people’s: there are 4 PCs running either Windows XP or Vista and an ADSL broadband internet connection. I also have a test server for work purposes running VMware with 2 virtual machines: Ubuntu Linux and Windows Server 2008.

My starting point was to run a vulnerability scanner on my internal subnet. For this I chose Nessus which has an excellent reputation and is free for home use. My objectives were to gain a level of confidence as to the security of my home systems as seen from the privileged position of the local subnet and also to assess just how accurate Nessus is at vulnerability assessment.

I configured Nessus to scan the entire subnet rather than individual systems and also ran all security tests. After about 10 minutes the scan completed finding mostly what I expected but also a few interesting extras.

Firstly, Nessus managed to pick up my 4 PCs and correctly identify the operating systems and, in three cases, the hostname. The PCs running Skype were successfully detected, as were the systems with iTunes. Two PCs were shown to have file-sharing enabled.

Nessus also identified my VMware server which had a large number of ports open although none were flagged as a potential risk. The Linux server was identified with just SSH and Nessus (not surprisingly) running but gave me a whole list of recommendations as to how to better configure my server to stop information leakage and recommended an upgrade of acpid.

The Windows 2008 server was incorrectly identified as Windows Vista but this is not a million miles from the truth. The correct open ports for the server were detected: HTTP (80), FTP (21) and RDP (3389). Nessus pointed out that anonymous logins were available for FTP and that I could improve the security levels of RDP and FTP. The correct version of IIS7 was also identified.

Nessus also detected my iPhone, connected via wifi, the web server on my ADSL router, the streaming channel for TV and the FTP server on my TV decoder.

First conclusion from this test is that Nessus is excellent at system and service discovery. The second is that although overall security seems adequate, there are far more attack vectors on my network than first thought. It seems fanciful that my TV decoder might be a future target for hackers but a year ago one might have said the same about mobile phones and ADSL routers, both of which have had known attacks in the past month.

Wednesday, September 9, 2009

Cracking Passwords with Google

An article about an SQL injection vulnerability on a UK Parliament web site exposing usernames and passwords reminded me of a story last year about using Google as a gigantic password cracker. One of the big problems with the UK Parliament web site was that passwords were not encrypted. You may wonder, if you can extract passwords using SQL injection, why you cannot also extract all the other information held in the database. In actual fact you probably can, but it is much easier if you have a username and password to log on to the application and manipulate data via a friendly user interface. Additionally, people often use the same password for different accounts, so a hacker could potentially use the same credentials for a more interesting application.

The recommended method for protecting stored passwords is to use a one-way hash, typically MD5. Besides protecting the password against SQL injection, lost backup tapes and so on, it also stops malicious system administrators stealing user credentials, as a one-way hash cannot simply be reversed.

Last year, an article suggested that Google could be used as a huge lookup table for MD5 hashes. This is easy enough to test: use one of the many online MD5 hash generators to calculate the MD5 of your potential password, type the resulting hash into Google and see if it can come up with your original text.
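You don't even need an online generator; Python's standard hashlib will produce the same digest locally (the password below is a deliberately weak, made-up example), and the hex string it prints is exactly what you would paste into Google.

    import hashlib

    password = "monkey"   # hypothetical weak password for the test
    digest = hashlib.md5(password.encode()).hexdigest()
    print(digest)         # a 32-character hex string, ready to paste into a search engine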

My unscientific testing from a year ago suggested that the above method worked well for correctly formatted dictionary words but little else. Twelve months on, it appears that simple dictionary words with common numeric substitutions, e.g. I=1 and O=0, are also picked up, as are simple words with irregular capitalisation, e.g. bIke.

As ever it appears that we really do need to follow those guidelines we get from system administrators about password complexity. To test your password complexity, click here.

Note that MD5 is now considered ‘cryptographically broken’ but is still in common use. Using Google to reverse MD5 hashes can also be defeated by the use of a salt.
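The salting point is easy to demonstrate with the same hashlib approach (again just a sketch; current good practice would be a purpose-built password hash such as bcrypt or PBKDF2 rather than salted MD5): a random salt mixed into the hash produces a digest no search engine will have indexed.

    import hashlib, os

    salt = os.urandom(8)                                   # random per-user salt
    digest = hashlib.md5(salt + b"monkey").hexdigest()     # same weak example password as before
    print(salt.hex(), digest)   # store both; a Google lookup of the digest now finds nothing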

Monday, September 7, 2009

Microsoft IIS FTP vulnerability and IPS

With no patch in sight for Microsoft's latest vulnerability in its FTP service, you would have thought that IPS vendors would be shouting from the rooftops about how their products can protect their clients' systems. Surprisingly, the background noise is very low. Check Point has made a statement claiming they now protect against exploits of the vulnerability, as have Snort, who state that their existing rule set already offers protection. Some of the other IPS vendors seem to be quiet on the subject, presumably because they are too busy helping their clients protect their systems. From reading some of the blog posts at Snort, it appears that it's quite easy to block individual exploits but general protection for the vulnerability is a little more complicated.

Sunday, September 6, 2009

Trojan Terror

When I started an internet security consultancy, having one of my son’s friends turn up wanting me to fix his Trojan-infected laptop was not what I had in mind. However, as I had a bit of free time I was happy to help out. It was encouraging to see that the PC was configured to automatically receive and install Windows updates and had an ICSA Labs certified antivirus security suite installed. Unfortunately this meant that, as the malware had breached the PC's defences, it could be something new and unknown and potentially difficult to get rid of. It turned out to be a variant of Trojan.Win32.agent.Azsy which, amongst other things, installs a fake antispyware program that tries to induce the user into paying for a full version of the software. Neither Trend Micro HouseCall nor BitDefender could detect it with a scan, although BitDefender picked it up when I was using Internet Explorer. After an unsuccessful attempt to remove it manually, I resorted to Spybot Search & Destroy, an excellent piece of software that's been most useful to me in the past.

The experience brought to mind a presentation I saw last week from Trend Micro about malware evolution. The presenter claimed that in 1998 there were around 2000 new viruses each year, which is about the same number of new malware samples that appear every hour in 2009. Even more interesting was that he more or less admitted that the current model of updating your antivirus software every day is no longer an effective way of protecting your computer. His proposed solution, called Hybrid Cloud-Client Architecture, a name no doubt dreamed up by his marketing team, seemed OK in principle although I’m sceptical about its workability.

So the bad news from all this is that even with a well configured PC, it’s still easy to get infected with malware. The good news is that I received a bottle of AOC Haut-Medoc for my troubles.

Friday, September 4, 2009

Web Site Launch

After much faffing around and last minute adjustment, I have finally got around to launching the web site for my new company, WNI-Sec. It's been a while since I created a web site and, as well as giving me a presence on the web, it allows me to practise what I preach with regards to internet security.
This is also the first official day of my blog where I hope to make regular comment on whatever IT security issue is the flavour of the day or what I'm working on. I hope you find it interesting.