Thursday, November 26, 2009

Packet Capture with Windows Network Monitor 3.3

I’ve recently been using Microsoft’s Network Monitor 3.3 to troubleshoot a few issues. I have a lot of experience with earlier versions of the tool, which, although fine for basic tasks, soon reached their limits if you needed to dig a bit deeper. I’d found that Wireshark was much more powerful and also had the advantage of working on Linux. There are of course plenty of paid-for network capture utilities out there, but as I’d never reached the limits of the free, open source Wireshark, I’d never felt the need to try them out.

There are many new features in Network Monitor 3.3, including powerful capture and display filters, which I initially found difficult to use despite the large number of examples and the ‘verify filter’ function. The ‘Network Conversations’ window is a welcome addition, as it allows you to easily see traffic between specific hosts without relying on filters.

Some effort has also been made with regard to performance. You can switch parsers on and off as required and run the tool from the command line. It’s also possible to limit the number of bytes captured.

An API is also available, allowing you, in theory, to pull capture data into your own application or extend its functionality. Some example add-ons, known as Experts, are available on the Microsoft site and can be easily integrated into the tool.

Other advanced features that I’ve not tested include capturing WWAN and tunnelled traffic. Something useful that I did test was Network Monitor’s ability to read pcap files: you can capture traffic with something like tcpdump, write it to a file and then view it with Network Monitor.
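If you want to sanity-check such a capture before opening it, the same pcap file can also be read programmatically. Here is a minimal sketch in Python, assuming the scapy package is installed and that a capture file called trace.pcap was produced with something like "tcpdump -i eth0 -w trace.pcap" (the interface and file names are just placeholders):

    # Read a libpcap capture file and print a one-line summary per packet.
    from scapy.all import rdpcap

    packets = rdpcap("trace.pcap")        # file written by tcpdump
    print(len(packets), "packets in capture")
    for pkt in packets[:10]:              # look at the first ten only
        print(pkt.summary())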

Friday, November 20, 2009

Urban Myth

Like many other industries, the IT world has its own set of urban myths. One that has surfaced more often than most is the case of the mysteriously rebooting server. Normally it happens around 7.30 am, and subsequent investigation shows no obvious problem. Even more bizarrely, it only occurs Monday to Friday and also avoids public holidays. Eventually an engineer will be tasked with coming in early to observe the problem in action. Everything will appear normal, and then suddenly the screen will go dead as the cleaner pulls the power cable out of the socket in order to plug in the vacuum cleaner.

Yesterday, this actually happened to me. I was messing around with Nagios on my test Ubuntu server when I lost my SSH session. My test server sits in the hall next to my ADSL router, because it runs VMware, which isn’t compatible with my Wi-Fi card, and so needs a wired connection. Sure enough, investigation showed that the cleaner had unplugged the server in order to use the vacuum cleaner. Actually, she had knocked out the network cable, but it wouldn’t be an urban myth if there wasn’t some exaggeration.

Why is this related to security? Protection of physical infrastructure, including power and communications, is just as relevant to security as any other aspect. Power failure in particular can cause data loss as well as the obvious availability problems.

Such an event should trigger a company’s incident response procedure. At one place I worked, this would have involved numerous meetings with a large number of participants, producing a report that recommended IT training for cleaners, the development of a cleaning procedure for IT equipment, the installation of security cameras to check that the procedure was being followed, and a member of staff to audit and report on it all. I, on the other hand, will be taking my wife’s advice and shifting my fat lazy ar*e to put in some proper cabling to make sure cleaner and server never meet.

Tuesday, November 17, 2009

Software Policy and Data Protection

I’ve recently been putting together a set of security policies for a client, which of course includes a software policy. Wherever I’ve worked in the past, I’ve always argued for a liberal and relaxed policy on what staff can run on their PCs. The basic idea is to have a small set of core applications that are fully supported by the IT department and a list of banned software types, which mainly consists of anything illegal. Anything in the middle can then be installed by the user, on the understanding that no support will be provided by IT and that it must be removed if it is shown to conflict with a core application.

Although most software policies I’ve seen in the past are restrictive to the point that they wouldn’t be out of place in North Korean government policy, the situation described above is often what occurs in reality. Trying to police a restrictive software policy is time consuming and potentially expensive. Locking down PCs can also be complicated, as there always seems to be one critical application or function that requires the user to be an administrator. Technology is also often one step ahead. Did anyone else have fun trying to stop MSN use a few years ago? I’ve also been in several “discussions” about blocking WebEx-type tools, which were only resolved when management launched a cost-cutting initiative and it was pointed out that online collaboration could save us money and time.

Of course, the reason for restrictive policies is mostly the fear of introducing malware onto the corporate network and ultimately losing data. This is a very real threat and one that appears to be getting worse. Rather than relenting on my relaxed software policy strategy, I advocate another approach. The first step is an active update policy ensuring that OS and software patches are applied as rapidly as possible, closely followed by the installation and maintenance of an anti-malware package. This should significantly reduce unwanted malware, particularly that which comes from internet browsing. The next step, which is a little more radical, is to treat the corporate network as untrusted. In practical terms, this means placing firewalls, IDS/IPS and Data Loss Prevention (DLP) technology between corporate systems and internal users. If someone does inadvertently introduce malware to the network, the risk is then largely contained to the end system. Such a setup also has the added benefits of protecting against internal data theft and may help meet regulatory requirements.

The move towards “Cloud Computing”, with companies looking to locate their systems with a third party, facilitates such an approach. The idea of corporate networks being little more than semi-private internet access points is maybe not that far away.

The ultimate aim of the software policy, and of the other policies, is to protect against data loss without restricting the productivity of the end user. Although not perfect, I think my approach is the best compromise between security and usability.

Friday, November 13, 2009

Web Application Vulnerability Trends

A recent report from Cenzic presents statistics on web vulnerabilities for Q1 and Q2 of 2009. Although such studies can be far from objective, this one seems fairly well balanced and quotes, amongst others, NIST, US-CERT and SANS as sources.

Not surprisingly, web application vulnerabilities accounted for around 78% of all issues, with old favourites Cross Site Scripting and SQL injection being the most significant.

The findings are somewhat disappointing, as these vulnerabilities are not new and have appeared in the OWASP Top 10 for many years. It suggests that more effort needs to be put into good development practices, as outlined here.
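As a small illustration of the kind of development practice that closes off SQL injection, here is a minimal Python sketch using the standard sqlite3 module (the database file, table and variable names are made up for the example):

    import sqlite3

    conn = sqlite3.connect("example.db")       # hypothetical database
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, role TEXT)")

    user_supplied = "alice'; DROP TABLE users; --"   # attacker-controlled input

    # Vulnerable: concatenating input into the query lets it alter the SQL itself.
    # cur.execute("SELECT role FROM users WHERE name = '" + user_supplied + "'")

    # Safer: a parameterised query always treats the input as data, never as SQL.
    cur.execute("SELECT role FROM users WHERE name = ?", (user_supplied,))
    print(cur.fetchall())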

The report also has a section on browser vulnerabilities, reporting that Firefox accounted for 44% of all browser flaws over the period. As the Register points out, this isn’t really a true reflection of risk, as other factors need to be considered, such as the severity of each vulnerability and the time the vendor takes to fix it.

Monday, November 9, 2009

The Safety of SSL

There have been a number of SSL/TLS related security vulnerabilities in the news recently, including the Null Prefix problem and the more recent man-in-the-middle attacks. The latter has yet to be fixed but doesn’t yet seem to present a major risk for e-commerce, online banking or other internet transactions that require authentication and encryption. Indeed, it seems that sessions requiring client certificates for authentication would be most at risk, a scenario that is not that common. Whatever the seriousness of the vulnerability, it lies in the implementation of SSL/TLS rather than in the underlying asymmetric encryption algorithms.

Asymmetric algorithms, more commonly known as public key cryptography, allow two-way secure communication without the hassle of prior key exchange. Some of the more common implementations, including RSA, make use of the fact that certain mathematical operations are much easier to perform in one direction than the other, in particular the factoring of large numbers. For example, if you tried to determine values for x and y where x * y = 65869, it would take a fair amount of time; the reverse problem of finding the result of 199 * 331 is much quicker. Note that x and y are prime numbers, as otherwise they could be factored into smaller values. Naturally, as computing technology improves it becomes feasible to brute-force the factoring in a short amount of time. However, the same technology allows us to use larger and larger values for x and y without compromising performance. The Greek mathematician Euclid proved around 300 BC that there are infinitely many primes, and the much more recent prime number theorem, conjectured by Gauss, shows that there are enough of them that two parties are unlikely to pick the same ones. Hence we can be reasonably confident that it should always be possible to stay ahead of improvements in brute force technology.
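A toy illustration of that asymmetry in Python, using the numbers from the example above (real RSA moduli are hundreds of digits long, which is what makes the trial-division direction hopeless in practice):

    # Multiplying two primes is trivial; recovering them by trial division is not.
    def factor(n):
        # Brute-force factorisation - fine for toy numbers, useless at RSA scale.
        if n % 2 == 0:
            return 2, n // 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 2
        return n, 1     # n itself is prime

    print(199 * 331)       # the easy direction: 65869
    print(factor(65869))   # the hard direction: (199, 331)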

Of course, one day someone may come up with a way of simplifying the factoring of large numbers and render current asymmetric algorithms useless. Note that this wouldn’t make encryption impossible, but it would stop simple over-the-wire key exchange. It’s worth pointing out that modern encryption for internet communication is generally hybrid, with an asymmetric algorithm initially being used to exchange a symmetric key that is then used for the rest of the session. A future technique that could replace current systems is quantum key exchange. To vastly oversimplify: if a key exchanged in a quantum system is intercepted, the observation of the key alters its state and hence alerts the sender and the receiver. As the system relies on physics rather than mathematics, there is no algorithm to crack, making it unbreakable. Today’s infrastructure is obviously not geared up for widespread quantum key exchange, but who knows for the future.
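To make the hybrid idea concrete, here is a minimal sketch in Python, assuming the third-party cryptography package is installed (the key size, padding choices and message are placeholders for illustration, not a production recipe):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Receiver generates an RSA key pair (the asymmetric part).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender creates a random symmetric session key and encrypts the bulk data with it.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"the actual message")

    # Only the small session key is encrypted with the receiver's public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Receiver unwraps the session key with the private key and decrypts the data.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    print(Fernet(recovered_key).decrypt(ciphertext))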

Further reading: The Music of the Primes by Marcus du Sautoy; Quantum: A Guide for the Perplexed by Jim Al-Khalili.

Friday, November 6, 2009

Portable Apps

A friend of mine recently pointed me in the direction of Portable Apps for Windows. A portable application for Windows is one that does not leave its files or settings on the host computer. The concept was familiar, as I’ve often used Linux live CDs, although that is really a portable OS rather than portable applications. I really like the idea, but what about the security implications?

I installed the Portable Apps Suite Lite, which contains over a dozen applications including Firefox. It takes the approach of specially written applications rather than application virtualisation. I decided to do my testing with Firefox, as it is something I could see adding real value. A few years ago I used to travel a lot to the various European offices of the company I was working for. As my laptop was slow, old and over 4 kg, I often used to leave it at home and work on any spare desktop that was available. It would have been a godsend to have had a USB stick with my own browser, email client and so on, rather than struggling away with an out-of-date version of Internet Explorer in a language I couldn’t always understand.

My first concern was that portable applications might quickly become out of date, exposing security vulnerabilities. However, Firefox updated itself from version 3.0.7, which came with the initial install, to version 3.0.15, which suggests that security updates are application specific and not necessarily limited by being portable.

A significant risk of the portable app comes from the associated USB stick. Malware distribution via removable media, although once prevalent when floppy disks were in common use, was until recently rarely a problem. It seems, however, to be making a big comeback, with malware such as the Taterf worm using USB sticks as its primary distribution method.

A positive aspect of a Windows portable app is that it can potentially run on a host computer under an account with minimal security privileges. In this respect, security could actually be improved by the use of portable apps, as malware, whether directly from the USB stick or downloaded from a malicious web site, could do less damage to the host system.

After a week of messing around with Portable Apps I could only conclude that the security implications of such technology are somewhat ambiguous. More investigation needs to be done. Unfortunately I can see Portable Apps being used mainly on corporate systems which are severely locked down to restrict users to certain approved applications. This of course defeats the object of the lockdown and so conflict with system administrators is highly likely.

Tuesday, November 3, 2009

Size Does Matter

There was an interesting article today in the Register about brute-force password cracking using Amazon’s EC2 cloud architecture. The main focus was on how much it would cost to crack passwords of different lengths and complexity. One of the conclusions, which is almost counter-intuitive, is that a long lower-case-only password is much harder to crack than a shorter complex one consisting of lower and upper case characters as well as numbers. I did my own calculations to verify the findings and came up with the same results.

Take, for example, an 8-character complex password containing upper and lower case characters, numbers and a choice of 20 non-standard characters such as %. When considering brute-force cracking, the 8-character complex password is easier to break than a 9-character one containing only upper and lower case characters, and also easier than an 11-character password containing only lower case characters.
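A quick back-of-the-envelope check in Python, assuming character sets of 26 lower case letters, 26 upper case letters, 10 digits and 20 symbols:

    # Number of possible passwords = (character set size) ** (password length)
    complex_8 = (26 + 26 + 10 + 20) ** 8    # 8 chars, full 82-character set
    mixed_9   = (26 + 26) ** 9              # 9 chars, upper and lower case only
    lower_11  = 26 ** 11                    # 11 chars, lower case only

    print(f"{complex_8:.2e}")   # ~2.0e15
    print(f"{mixed_9:.2e}")     # ~2.8e15
    print(f"{lower_11:.2e}")    # ~3.7e15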

Hence, next time that nasty Systems Administrator tells you that your password should resemble something like x%fF*Z3$ you can tell them that this is less secure than a password like HelloFred or even mycarisblue. Note that at various times in my working career, I have been one of those administrators so I know where they are coming from.

No doubt the Systems Admin would respond that in reality a brute-force attack would not be random in the words and phrases it attempts, and would consequently crack a long non-complex password quicker than a short complex one. This is probably true when the length differences are small, but it is difficult to quantify accurately. Coming back to the real world, almost nobody can remember a password such as x%fF*Z3$, but it’s not so hard to recall a semi-abstract phrase such as MyDogisfrenchThanksforthefish. This non-complex password is approximately 2.8 x 10^34 times more difficult to crack than the complex one when considering only a brute-force approach. Even when factoring in dictionary approaches, it’s probably still a lot safer as well as being easier to recall. Hence, when next choosing a password, remember: size really does matter.
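That figure comes from comparing the two keyspaces directly, again assuming mixed-case letters for the 29-character phrase and the full 82-character set for the complex password:

    phrase_29 = 52 ** 29    # 29 mixed-case characters
    complex_8 = 82 ** 8     # 8 characters drawn from an 82-character set
    print(f"{phrase_29 / complex_8:.1e}")   # ~2.8e34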