Monday, January 25, 2010

IPv6 Again

An article on Slashdot this morning discusses the release of the IP address range 1.0.0.0/8 for public use. This is of course connected with the so-called saturation of the IPv4 address range, which according to the article is still predicted for the end of 2012.

As I’ve discussed before, the solution to the lack of IPv4 addresses is IPv6, for which the technology and, in some cases, the infrastructure is already in place. The comments section of the Slashdot article debates just how much of a problem this really is. Although there is no consensus, it seems clear that there is a leadership vacuum in addressing the issue. I can see no reason why businesses, let alone home users, would currently make the effort to migrate to IPv6. There needs to be some incentive or regulatory requirement to do so, which probably needs to be set at government level. To be fair, the EU does have an IPv6 programme in which it acknowledges the problem. The stated goals to address the issue are:

1. An increased support towards IPv6 in public networks and services,

2. The establishment and launch of educational programmes on IPv6,

3. The adoption of IPv6 through awareness raising campaigns,

4. The continued stimulation of the Internet take-up across the European Union,

5. An increased support to IPv6 activities in the 6th Framework Programme,

6. The strengthening of the support towards the IPv6 enabling of national and European Research Networks,

7. An active contribution towards the promotion of IPv6 standards work,

8. The integration of IPv6 in all strategic plans concerning the use of new Internet services.

This is very noble, but to me at least, the program does not generate enough “noise” to provoke a mobilisation of effort that will make a difference.

Thursday, January 21, 2010

Vulnerability Trends

It was no surprise when reading US-CERT’s vulnerability summary for the week of January 11, 2010 to see that six of the vulnerabilities classed as high were in some way related to Acrobat Reader. There has been a constant stream of news stories about these bugs and the public exploits for them. It doesn’t seem that long ago that PDF, the file format associated with Acrobat Reader, was considered the safe option for documents from untrusted sources. Indeed, I was once involved in a project to convert Word documents uploaded to a web site into PDF before they were viewed by the end user.

Is Adobe Acrobat less secure than other software? Probably not. It’s more likely that, because it exists on a vast proportion of the world’s PCs, it has become a desirable target for hackers. The same could be said for Internet Explorer, although now that Firefox is, according to some reports, taking up to 40% of market share in Europe at least, it will be interesting to see if more Firefox vulnerabilities come to light.

There were also five SQL injection vulnerabilities in the summary classed as high. This is disappointing: SQL injection is not a new attack, there is plenty of information available on how to defend against it, and, in theory at least, countermeasures are not difficult to implement. This suggests that despite the fashion for Security Development Lifecycles, some companies are still not taking security seriously.
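As a reminder of just how cheap the main countermeasure is, here is a minimal sketch using Python’s built-in sqlite3 module. The table and payload are invented purely for illustration; the point is that the parameterized version treats the attacker’s input as plain data rather than SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: the payload is spliced into the SQL text itself,
# turning the WHERE clause into a tautology that matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver passes the value separately from the SQL, so the
# payload is just an odd-looking (and unmatched) name.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- every row leaks
print(safe)    # []
```

The same placeholder mechanism exists in essentially every database driver, which is why the continued appearance of these bugs is so hard to excuse.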

Friday, January 15, 2010

Targeted Malware

Targeted Malware has featured prominently in security news this week. To summarize, Targeted Malware is just like other malware, but attempts to distribute it are limited to a small group of people or even a single person. For obvious reasons, it’s more likely to be used for espionage, political or industrial, than for direct crime. It can be particularly effective because the email, web site or document used to trick the user into installing the malware can be tailored to a very narrow area of interest, lulling the user into a false sense of security.

The chances of Targeted Malware being detected by an antivirus package are also low. Antivirus software relies mainly on comparing code against a database of known malicious patterns. The anti-malware vendors build their databases from malware they have either “trapped” themselves or that has been sent to them by their clients. A targeted attack would almost certainly miss the vendor honeypots, and because of its small distribution, the chances of it being reported by an end user are slight.
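A toy sketch of pattern-based detection makes the weakness obvious. The “signature database” below is invented (it holds only the well-known EICAR test string), but it shows why a sample that matches nothing in the database sails straight through:

```python
def scan(data: bytes, signatures: dict) -> list:
    """Naive signature scan: flag any sample containing a known byte pattern.
    Real engines layer heuristics, emulation and unpacking on top of this."""
    return [name for name, pattern in signatures.items() if pattern in data]

# A one-entry database: the standard EICAR antivirus test string.
signatures = {"EICAR-Test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"}

known_sample = b"X5O!...EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
targeted_sample = b"custom code the vendors have never seen"

print(scan(known_sample, signatures))     # ['EICAR-Test']
print(scan(targeted_sample, signatures))  # [] -- undetected
```

A piece of malware written for one victim is, by definition, in the second category until someone submits a sample.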

The Register has a good article about a recent targeted attack on Google. There is a video at the bottom of the article by F-Secure that gives further insight into Targeted Malware.

Whilst writing this blog entry, it sprang to mind that a good launch pad for this kind of attack could be a social network, in particular business-orientated ones such as LinkedIn. It’s easy enough to build a false profile, and it’s also simple to identify targets at, say, an organisation that you wanted to infiltrate. The Groups feature could be particularly useful, as you can post links to external websites and documents which could be a source of malware. As the end user has had to log in to the system, and is probably looking at a Group that is fairly specific to their job role, the chances are that they have a false sense of security and are not as cautious as they usually would be.

Wednesday, January 13, 2010

Breaking SSL (Again)

Another encryption landmark was reached towards the end of last year with the factorization of RSA-768. In simpler terms, RSA-768 is a 768-bit binary number (232 digits in decimal) which is the product of two prime numbers, usually denoted p and q. It forms part of the public and private keys used in TLS/SSL encryption, most commonly used for securing internet traffic. If you can determine p and q from the public key, i.e. factor the RSA-768 number, then you can also calculate the private key and hence “crack” the encryption. It sounds easy, but try factoring the number 6497 into its prime factors (see below for the answer). Now try doing that with a 232-digit number.
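For a sense of scale, the naive approach, trial division, cracks any four-digit number instantly; a minimal Python sketch:

```python
def prime_factors(n: int) -> list:
    """Factor n by trial division: fine for small numbers,
    hopeless at RSA scales, where the cost grows exponentially
    with the number of digits."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(prime_factors(6497))  # [73, 89] -- the answer at the foot of this post
```

Even the General Number Field Sieve, vastly cleverer than this, still needed years of machine time for 232 digits, which is the whole basis of RSA’s security.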

Although some sophisticated mathematics was used, notably the General Number Field Sieve, the attack was still effectively a brute force effort, spread over hundreds of processors and taking over two and a half years. If the effort were repeated for a different 768-bit number, the experience gained would surely shorten the time to a solution. However, it’s not clear whether the results from the first run can be reused for a different number, and I suspect not, meaning that an attack against a 768-bit key is still impractical for anything other than the most critical of data.

One of the conclusions of the study was that 1024-bit keys, although safe today, should be phased out in the next three to four years and replaced with 2048-bit keys. A quick, unscientific survey of certificates used on some of the more popular web sites suggests that 1024 bits is more or less ubiquitous, although there are some 2048-bit certificates out there. It is possible that some older browsers would not support the longer keys, but no one is flagging this as an issue.

For me, the most interesting part of the study was how the researchers concentrated on introducing parallelism into their algorithms to allow the load to be spread over multiple systems. This naturally leads one to think that a cloud setup such as Amazon’s EC2 could eventually be used for such tasks rather than private academic systems.

(89,73)

Friday, January 8, 2010

IPv6 First Looks

With predictions of doom and disaster for 2010, i.e. exhaustion of the IPv4 address space rather than the end of the world, I thought it would be good to have a look at how easy it is to implement IPv6 in a home/office network. As any eventual migration from IPv4 will involve non-technical users, I tried to do this with minimal research and without any complex PC or router changes.

My ISP has been offering IPv6 for some time now, and it was simple enough to enable. I logged on to the admin interface of my ADSL router and clicked the “Enable IP6” button. The next stage was to configure my test systems with IPv6 addresses. I decided to use my Windows 7 laptop and an Ubuntu 9.10 desktop which I have running as a virtual machine. Windows 7 has IPv6 enabled by default and an address was assigned straight away. For the Ubuntu system, it was easy enough to enable via the GUI.

I then found a few IPv6 web sites using ping6 on Ubuntu and ping -6 on Windows. Not surprisingly, ICANN and my ISP were IPv6 enabled, as were Google and 01net, a French IT news publisher. Disappointingly, there doesn’t seem to be much support from major web sites other than the odd server for research purposes. I then successfully browsed to the sites I had found and used various packet capture tools to check that IPv6 was indeed being used for the communication.
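Rather than pinging by hand, you can ask DNS directly whether a site publishes IPv6 (AAAA) addresses. A small Python sketch using only the standard library; the host names are just examples, and the results obviously depend on your network and resolver:

```python
import socket

def ipv6_addresses(host: str) -> list:
    """Return the IPv6 addresses DNS publishes for a hostname, if any."""
    try:
        # Restricting the family to AF_INET6 asks only for AAAA records.
        infos = socket.getaddrinfo(host, None, family=socket.AF_INET6)
    except socket.gaierror:
        return []  # no AAAA record, or the name doesn't resolve at all
    return sorted({info[4][0] for info in infos})

for site in ("www.google.com", "www.icann.org"):
    print(site, ipv6_addresses(site))
```

An empty list for a big-name site is exactly the lack of support complained about above.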

The next step was to disable IPv4 on both test systems. Ubuntu carried on as normal, but the Windows 7 system stopped working. The problem turned out to be DNS resolution: for whatever reason, my Ubuntu system had a different DNS assignment. Once I manually entered the Ubuntu values into Windows 7, it worked fine. I’m not sure why this problem arose and didn’t have time to investigate further. Whilst troubleshooting the issue, I discovered that in order to type IPv6 addresses directly into the address bar of your browser, you need to enclose the address in square brackets.
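The bracket rule exists because the colons in an IPv6 literal would otherwise be mistaken for the port separator in a URL. A small Python illustration, using an address from the 2001:db8::/32 prefix that is reserved for documentation examples:

```python
import ipaddress

# 2001:db8::/32 is reserved for documentation, so this address is safe
# to use in examples without pointing at a real host.
addr = ipaddress.ip_address("2001:db8::1")

# Without brackets, "2001:db8::1:8080" would be ambiguous; the brackets
# mark where the address ends and the port begins.
url = f"http://[{addr}]:8080/"
print(url)  # http://[2001:db8::1]:8080/
```

Browsers apply exactly this parsing, which is why a bare IPv6 address in the address bar fails.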

Conclusions? Well, for both Ubuntu and Windows 7 in combination with my ISP’s IPv6 setup, activating IPv6 was simple enough. However, the real issue is of course that most of the sites I visit every day don’t yet support IPv6. Even if they did, they would still need to support IPv4 so as not to cut themselves off from a large part of their user population. It seems that a huge effort will be required, probably mainly on the part of the ISPs, to accelerate IPv6 adoption. Some kind of gateway or tunnelling system will also be required between IPv6 and IPv4 during a transition period. One solution I looked at briefly was an offering from SixXS. Although I didn’t have a chance to install their tunnelling client, I did use their IPv6/IPv4 Website Gateway, which allowed me to browse any IPv4 web site from my IPv6-only client.

Although I don’t really think that IPv4 saturation will occur this year, it will arrive eventually and will probably cause a significant amount of pain. It will require a mobilisation similar to that seen for the so-called Y2K bug. I sense a business opportunity.

Wednesday, January 6, 2010

SimpleIDS

I’ve posted a Windows Powershell script on my web site today that checks directories for file additions, deletions and changes. Its intended purpose is to act as a simple audit tool to detect unauthorised content change. It’s called SimpleIDS and can be downloaded here.

Although I think that intrusion detection systems (IDS) are a necessary part of any web application infrastructure, many of the commercial tools out there are expensive and, in my opinion, often do not give value for money. There are some excellent free systems such as Snort, but even these can require a significant investment in man-hours. If you are unsure how cost effective a particular IDS control or system is, a quick way to assess its value is to consider the Annual Loss Expectancy (ALE). Subtract the ALE after a control is implemented from the ALE before the control, and then compare the result to the cost of the IDS. If the ALE reduction is less than the IDS cost, then it’s probably not worth having.
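The back-of-the-envelope calculation looks like this; all the figures below are invented purely for illustration:

```python
def ale(sle: float, aro: float) -> float:
    """Annual Loss Expectancy = Single Loss Expectancy (cost of one
    incident) * Annual Rate of Occurrence (expected incidents per year)."""
    return sle * aro

# Hypothetical numbers: a breach costs 50,000, expected once every
# two years without the control and once every ten years with it.
ale_before = ale(sle=50_000, aro=0.5)   # 25,000 per year, no IDS
ale_after  = ale(sle=50_000, aro=0.1)   #  5,000 per year, with IDS
ids_cost   = 15_000                     # annualised cost of the control

reduction = ale_before - ale_after      # 20,000 of risk removed per year
print("Worth it:", reduction > ids_cost)  # Worth it: True
```

The hard part in practice is of course estimating the SLE and ARO honestly, not the arithmetic.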

My approach to IDS has always been to keep it as simple as possible. Where feasible, it’s a good idea to build it directly into your application, something I’ll blog about later on. SimpleIDS is also a good example. It performs a single function, detecting content change, and so is easy to understand. It is a script, so it doesn’t require any software to be installed.
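SimpleIDS itself is a PowerShell script; purely to illustrate the idea, here is a minimal Python sketch of the same hash-and-compare approach (not the actual tool):

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each file under root to the SHA-256 hash of its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(old: dict, new: dict) -> tuple:
    """Report additions, deletions and changes between two snapshots."""
    added   = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, removed, changed

# Typical use: persist a snapshot after a known-good deployment, then
# diff against a fresh snapshot on a schedule and alert on any result.
```

Everything else an IDS of this kind needs, scheduling, alerting, protecting the baseline snapshot itself, is wrapping around these two functions.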

SimpleIDS is rather primitive at the moment and I intend to evolve it over the coming months with more command line options and an alerting function as the priorities. Feedback would be appreciated.

Monday, January 4, 2010

To Bug or not to Bug

An IIS bug reported towards the end of last year brought an abrupt response from Microsoft. According to The Register, Microsoft acknowledge that the bug exists in IIS 6 but claim that it doesn’t present a risk, as you would need to be running your web server in an insecure configuration for it to be exploited. Umm, that’s alright then, as we all know that everyone runs their applications in a secure config.

The vulnerability arises from the way IIS 6 parses semicolons in file names. If you had a file called badcode.asp;.jpg, everything after the semicolon would be ignored and the web server would process the file as if it were called badcode.asp. The end result in this case is that the file is executed on the server rather than simply served to the client.

How could this be exploited in the real world? Consider that many sites allow anonymous users to upload documents to a web server, perhaps in the form of a photo or a CV. In order to stop malicious users uploading harmful content, there would normally be a filtering process in place that blocks files of type .exe, .asp, etc. However, if the user were to append ;.jpg or ;.doc to their file name, the filtering process would be bypassed and the file uploaded to the server. If the file resides in an accessible web directory with script execute permissions, any user can then execute it.
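A toy sketch of the bypass in Python: the filter logic and file names here are invented for illustration, and only the discard-after-semicolon behaviour comes from the reported bug:

```python
def naive_filter_allows(filename: str) -> bool:
    """An upload filter that checks only the final extension -- the kind
    of check the semicolon trick defeats."""
    blocked = {".asp", ".aspx", ".exe", ".php"}
    ext = filename[filename.rfind("."):].lower()
    return ext not in blocked

# The filter sees a harmless .jpg extension, so the upload is accepted...
print(naive_filter_allows("badcode.asp;.jpg"))  # True

# ...but vulnerable IIS 6 reportedly discards everything after the ';'
iis6_view = "badcode.asp;.jpg".split(";")[0]
print(iis6_view)  # badcode.asp -- handled as executable ASP
```

The two components each behave “reasonably” in isolation; the vulnerability lives in the gap between what the filter checks and what the server actually executes.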

Microsoft rightly point out that you would be foolish to allow uploaded content to be available from the web, especially in a directory with execute permissions. Best practice would also not allow the end user to choose their own file names. Personal experience, however, suggests that best practice is not always followed. Given that IIS may occupy 21% of the entire web server market, I would be confident that some fairly high profile sites could be vulnerable. The worst case scenario would be something like a bank being exposed to the bug. It could lead to the ultimate phishing scam, as malicious code would be authenticated and encrypted by a valid SSL certificate.