Tuesday, June 29, 2010

Datagram Transport Layer Security - DTLS

With the never-ending onslaught of mobile technology and greater security awareness, I thought it might be interesting to look at Datagram Transport Layer Security (DTLS). The two main methods of securing internet communication are Transport Layer Security (TLS) and IPSec. TLS is most visible in securing HTTP traffic, i.e. HTTPS, but is also used in other areas such as secure IMAP and SMTP. Its main limitation is that it requires a reliable transport channel, normally TCP, and hence is unsuitable for UDP traffic, which today includes Voice over IP and several gaming protocols. IPSec is suitable for UDP traffic but its implementation is far more complicated than TLS as it is designed for peer-to-peer communication rather than client-server.

This led to the emergence of DTLS, which as the name suggests is Transport Layer Security adapted to run over UDP. A very readable paper on the detail can be found here.

One widely available implementation of DTLS is Cisco’s AnyConnect Secure Mobility client. It doesn’t appear to follow the full goals of DTLS, as the key exchange and handshake are established over TLS, which results in two channels needing to be maintained.

With the emerging popularity of VoIP clients on smart phones, I expect we shall be hearing more about DTLS over the coming months.

Friday, June 25, 2010

Windows PowerShell Remoting

Windows PowerShell has always offered much promise but, with version 1.0 at least, often failed to deliver when you got into the detail. In contrast, version 2 seems to offer more hope, particularly when combined with the remote management feature that comes as standard with Windows Server 2008 R2 and is available as a download for R1.

My evaluation project was to use PowerShell to obtain disk space, audit failures from the security event log and an instant processor reading on a couple of remote servers via a web service over HTTP(S).

The first step was to set up a ‘Listener’ on each of the remote servers, for which there is a “quick config” option that automatically alters the relevant services, registry keys and other settings to get you up and running. Making the changes manually isn’t too difficult if the quick config fails, as it did for me.

Stage 2 was to establish a connection, or session, to each of my remote servers from my PC. There are plenty of options for this stage, including authentication and port number, but nothing too complicated. The most difficult part was getting the password used for each session to be read from a file rather than needing to type it in each time. PowerShell doesn’t allow you to store your password in plain text, which, although a ‘good’ thing, hinders testing and evaluation.

The final stage was to issue the commands themselves. This proved to be extremely simple with Invoke-Command, using either PowerShell’s built-in cmdlets or WMI.
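For anyone who wants to drive the same WinRM listeners from outside PowerShell, the sketch below shows an equivalent query issued from Python using the third-party pywinrm library. It is purely illustrative: the server names and credentials are placeholders, and it assumes the listeners from step one accept NTLM authentication over HTTPS.

# Minimal sketch: querying the WinRM listeners set up above from Python.
# Assumes the third-party 'pywinrm' package is installed and that the
# listeners accept NTLM authentication. Hostnames and credentials are
# placeholders.
import winrm

SERVERS = ["server1.example.com", "server2.example.com"]

for host in SERVERS:
    # Connect to the HTTPS listener on the default WinRM SSL port (5986).
    session = winrm.Session(
        f"https://{host}:5986/wsman",
        auth=("DOMAIN\\auditor", "password-from-a-vault"),
        transport="ntlm",
    )
    # The same kind of query Invoke-Command would run remotely.
    result = session.run_ps(
        "Get-WmiObject Win32_LogicalDisk | Select-Object DeviceID, FreeSpace"
    )
    print(host, result.std_out.decode(errors="replace"))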

Of course the above has been possible before, even with VBScript, but PowerShell offers some advantages over its predecessors, not least the following.

  • Commands issued to multiple servers run in parallel rather than sequentially.
  • A command can run in the background.
  • Powershell is extremely good at formatting output allowing the returned data to be easily read.
  • The remote connection is over HTTP(S) which is useful for servers in remote data centres or even in the cloud, e.g. with Amazon’s EC2.
  • The remote server listener can be configured to expose a limited set of functionality. Hence even someone with administrative credentials is restricted in the information they can gather.

My overall impression of PowerShell 2, particularly the remoting feature, is that it is now at a level where it is consistently useful. I’m looking forward to version 3, assuming there will be one.

Monday, June 14, 2010

Like Jacking – Facebook helps to spread the bad news

Facebook has been making the news a lot recently with its so-called abuse of privacy. Although certainly important, it’s kind of naïve to post anything to a social network and expect it to remain private forever. I also believe that many people are relaxed about this and some individuals seem positively happy about the fact.

In my opinion LikeJacking poses a bigger threat than weak privacy settings. LikeJacking is based on the Facebook feature that allows webmasters to insert a button on their website which visitors can click on if they like the site content. A link back to the site then appears on the visitor’s Facebook page, visible to all their friends. The Like feature is easy to set up and indeed Facebook has a page that will even generate the code for you if you aren’t too hot on HTML and JavaScript.

So far so good, with no apparent danger. The exploit comes from the fact that it’s not necessary for a visitor to knowingly click on a “Like” button for the link to appear on their Facebook page. There is a good example of how to achieve this here. It’s also easy to manipulate the link and the image displayed with it. It’s easy to imagine that someone who visits such a site in error, and who also has a lot of Facebook friends, could allow the link to spread in an exponential fashion.

The most obvious use of this exploit is for spam purposes. Some people (well, me at least) think that Facebook is exclusively spam from people you know, so what harm will a little bit more do? It is of course simple to manipulate the “Like” link so that it leads to a website that will attempt to install malicious content on your PC, e.g. keystroke loggers or botnet clients, for which the consequences can be far more serious. Plenty of these sites exist, including ones that you might normally trust. This article reports on a number of websites that have been poisoned via SQL injection so that a visit to the site will result in an attempt to install malware.

Thursday, June 3, 2010

HTML5 – The Next Best Thing and the End of Flash

Perhaps fed up with all things Cloud, the IT industry hype machine is turning its attention to HTML5. It shouldn’t really be a surprise given that Steve Jobs has been raving about how it can replace Flash and Microsoft sees their implementation of it as a major selling point for IE9.

When people say HTML5 they often mean CSS3 or one of several other new web technologies. A good explanation of what is and isn’t HTML5 can be found in this blog from ExtJS.

Despite the hype, HTML5 and related technologies are showing much promise. A good collection of what can be achieved can be found at HTMLWatch. The results are certainly impressive.

Is this really the beginning of the end for Flash? Well, Adobe products in general have been a nightmare from a security point of view over the past year, so a viable alternative might be desirable. However, at least part of the reason that so many vulnerabilities have been found in Flash is its massive installation base and the consequent targeting by hackers. It’s not unreasonable to assume that subsequent versions of Flash will be more secure and that the developers who have Flash skills and tools will continue to produce Flash applications.

The real danger to Flash may come from Steve Jobs refusing to allow it on the iPhone and the iPad. Despite the minuscule market share that these products have, they dominate the media agenda. It’s not too difficult to imagine hype beating reason and there being a large-scale move away from Flash to supposedly allow for maximum cross-platform support, irrespective of whether this is really true.

Friday, May 21, 2010

Virtualisation. When is it Right for your Business?

It always strikes me as strange that the IT industry, which you would think was packed full of rational, intelligent people, is so susceptible to marketing and fashion. Amongst others we have had the dotcom boom, WAP, any first version of a Microsoft product, and my all-time favourite, the death of the router due to the invention of the switch. How could we have been so stupid?

As a former boss of mine used to say, “Nobody ever got the sack for buying IBM”. And herein lies the root of the problem. Although many IT professionals claim to be innovative and cutting edge, in reality their main priority is job preservation and so the “safety in numbers” principle kicks in. They invest in the same technology as everyone else, usually whatever has been “bigged up” in the IT press.

The most obvious technology currently benefitting from the sheep mentality is Cloud Computing but as I’ve blogged on this before, I thought I’d highlight another area doing well from IT fashion, namely Virtualisation.

Virtualisation is useful, does have some real benefits and indeed I use it myself. However, I find it annoying that it is promoted almost as a silver-bullet solution, as if virtualisation is guaranteed to bring lower costs, better scalability and performance, as well as more uptime, irrespective of the application and its uses. All of these points can be true, but more and more I’m coming across cases of companies virtualising large parts of their IT infrastructure without proper analysis of whether or not they will get any real benefit. But enough of the ranting; here are a few tips on what to consider before you start a virtualisation project.

Performance: Something often overlooked is that virtualisation reduces the performance level of your hardware, as there is always some overhead from the virtualisation layer. This is not necessarily a problem if you plan to run a few applications that have low load but still require isolation from each other; in fact this is where virtualisation excels. It may become a problem if your applications have significant usage peaks, as the equivalent performance on a virtualised environment will be less than on a physical system. It’s impossible to quantify exactly what this impact will be, so it is necessary to analyse your own applications. A couple of examples of testing can be found at WebPerformance Inc and also Microsoft. Be aware as well that the choice of hardware for the virtualisation platform can have a significant impact; for example, AMD’s Rapid Virtualization Indexing and Intel’s Extended Page Tables are specifically designed to optimise virtualisation.

Cost: The main cost saving from virtualisation comes from requiring less hardware and the reduction in the associated power and data centre space. It is possible to get some reasonably powerful virtualisation software for free if certain advanced functionality is not required. What must not be overlooked is the cost of additional licensing, assuming of course you are not using open source software. For example, if you run four virtual Windows Servers on a single VMware ESXi machine you may need to pay for four operating system licences, as could be the case for other paid software such as databases and anti-malware packages. It may be cheaper to try to get your applications to run on a single system. You may also find that free hypervisors are not sufficient for your needs, in which case there are licensing costs here as well.

If you want to use some of the more interesting virtualisation features such as dynamically moving virtual machines between physical servers it is necessary to have some kind of storage area network (SAN). If your application has significant disk I/O requirements it is better to use fibre channel rather than iSCSI. Such an option is significantly more expensive than direct attached storage. Again, this is not necessarily a problem but it is important to be sure that you are getting a decent improvement in your service for the money you invest.

Scalability: A big plus point of virtualisation is the way you can dynamically add resources to a virtual machine (VM). Firstly you can let a busy VM take unused resources from a shared pool. If this is not sufficient you can dynamically move other virtual machines to different physical hardware, assuming you have made the relevant investment in a SAN etc, freeing system power for your newly busy application. It sounds great in theory and in some cases it probably is. Once again however, the true benefits depend on the characteristics of your applications. It would be nice if each application had its peak usage on distinct days at times that were mutually exclusive. In my experience, it is more likely that the opposite is true, and so your virtual infrastructure may need to be able to cope with all of your applications experiencing peak load at the same time. Once more, this is not necessarily a problem if you have decided that the convenience of being able to easily move your applications between hardware platforms is worth the investment in the infrastructure.

An unwanted side effect of offering easy hardware upgrades is the tendency for deficiencies in applications to be ignored. If more processing power is easily available it is tempting to allocate it to a poorly performing application rather than optimising the code or the configuration.

Availability: Virtualisation can help improve the availability of your applications. With the right configuration and a SAN, if a physical server fails all the virtual machines that had been running on it can be automatically moved to other hardware. The same is true if you need to take down a system for maintenance, e.g. to add new memory. It sounds great, but again it is important to assess whether your investment is giving you value for money. Do your applications need to be available 24x7x365? Does your SLA allow for a couple of hours of downtime in order for you to recover a faulty system? How often do you expect your hardware to actually fail? My own experience is that if you run a server less than five years old in an environment with proper temperature control and consistent power, then with the exception of disk drives, which should be protected by RAID, failure is rare. Also consider that SANs may fail too, which could leave you with a huge single point of failure.

To conclude, virtualisation can add real value to your business, but before implementing it you need to do a proper analysis to see whether what you gain represents true value for money.

Tuesday, May 18, 2010

HTML 5 – Security Challenges

No post recently due to the extreme lack of anything interesting to post about. Finally yesterday something turned up via the unlikely source of a LinkedIn group. LinkedIn groups are usually a hotbed of inanity or self-promotion, but the OWASP French Chapter bucked the trend by pointing me in the direction of this HTML 5 article on eWEEK. Much in the news recently due to discussions about the H.264 video format, HTML 5 includes new features that present some interesting challenges for security. Client-side storage is one area highlighted in the article. HTML 5 allows for three types of client-side storage:

Session Storage: similar to cookies but able to hold much more data.

Local Storage: similar to session storage but available to all browser windows and persistent after a window is closed.

Database Storage: structured data saved in a real local SQL database.

The most obvious security risk that springs to mind is data left behind on the client after an application is closed, but there are also other possibilities such as cross-domain request forgery and perhaps even local SQL injection!

The article also highlights that the scope for cross domain communication by JavaScript is increased with HTML 5 which allows for more powerful applications but also opens up abuse possibilities.

A little extra research seems to suggest that the above features can be implemented securely but as ever it depends on the developer’s ability to understand the technology and to be aware of how to code in a secure manner.

Thursday, May 6, 2010

How to Hack Web Applications

I’ve been evaluating Google’s Web Application Exploits and Defences tutorial over the past day or so. Based around a fictitious web application called Jarlsberg, it consists of a series of exercises that allow the student to exploit the numerous security holes on the site. The vulnerabilities include Cross Site Scripting (XSS) in its many forms, Cross Site Request Forgery (XSRF), Path Traversal, Denial of Service (DoS), Privilege Escalation, AJAX vulnerabilities and remote code execution. The main absentees are SQL injection and buffer overflows.

Although a basic understanding of HTML and Javascript is necessary to understand the content, you don’t need to be an experienced web developer to benefit from the tutorial. Its main plus point is seeing exploits in action to demonstrate the damage they can cause. In the past I’ve sometimes had problems explaining quite why something like an XSRF vulnerability is a risk to a web site.

Tuesday, April 27, 2010

Microsoft Security Intelligence Report Vol 8 Published

Microsoft has published its security intelligence report for the second half of 2009. The information is gathered by Microsoft security tools including the Malicious Software Removal tool, run before Windows Updates are installed, and Microsoft Security Essentials.

One of the main claims is that if you run Windows 7 or Vista rather than XP, you are more likely to be exposed to vulnerabilities in third-party software, most notably from Adobe, than to vulnerabilities in Microsoft products. Worldwide it seems that a PC is most likely to be infected by a worm from the Taterf family, which targets online game players, although malware patterns vary from country to country.

Fake security software is also prevalent, as are botnets used for sending spam.

Wednesday, April 21, 2010

Top 10 Web Application Security Risks

The Open Web Application Security Project, OWASP, has published the 2010 version of its top 10 web application security risks, the first revamp since 2007. Recognised as a key information tool for developers and security professionals, the OWASP top 10 is referenced in many security standards including PCI DSS.

Although there are many similarities with the 2007 edition, the emphasis has been changed to reflect security risks rather than just vulnerabilities. Prominent as ever are cross-site scripting (XSS) and injection vulnerabilities, which have caught out both Apache and Amazon in recent weeks.

Added to the Top 10 is Security Misconfiguration. Left out of the 2007 edition as it wasn't considered to be a software issue, it was re-added to reflect the emphasis on risk. Also new is Unvalidated Redirects and Forwards. Although relatively unknown as an issue, OWASP suggests that its potential for damage is high.

Removed from the Top 10 is Malicious File Execution. Although still an issue, improvements in the default configuration of PHP, where the problem was most widespread, have led to a reduction in incidents. Information Leakage and Improper Error Handling is also removed. Again, while still prevalent, the direct risk is minimal.

Friday, April 16, 2010

Security and Legal Issues in the Cloud

I recently tuned in to a Webinar on Security and Legal issues in the Cloud. I was pleased to find that most of the presenters started from the viewpoint that Cloud technology is mainly a rebranding of existing services, and so many of the issues are ones we already know and are used to.

One of the main differences is not surprisingly trust levels. The more you outsource your services to the Cloud the more you need to have confidence in a third party to correctly handle your data and intellectual property. This can be achieved to a degree by the certifications and reputation of the supplier but it’s important to carry out your own audits.

Authentication was also heavily discussed during the presentations. One of the common side effects of Cloud systems is the necessity to introduce yet another authentication layer for the user population, which of course is never popular. One of the presenters proposed federated authentication as a solution, particularly SAML and OpenID. These technologies, as well as others, have been around for a while but never seem to have gained the momentum they might have.

Data protection is another area that needs some thought. It’s important to know where your data is held as it is governed by the laws of the country where it is located as well as those from which it is accessed. Regulation is very different from country to country and particularly between North America and Europe.

Although not directly related to the Webinar’s main subject, the undercurrent of the presentations was perhaps the most important point. Cloud Computing is currently in fashion, which has led to many “Cloud” solutions being implemented when perhaps they shouldn’t have been. You would hope that the technology industry would be based more on fact than fashion, but unfortunately that doesn’t seem to be the case.

Thursday, April 15, 2010

UK Digital Economy Act Encourages Innovation

The part of the UK Digital Economy Act designed to discourage illegal downloading has inadvertently initiated furious discussion about how best to anonymize Bit Torrent and other file sharing traffic. Only this morning an article on the Register speculated that the SeedF*cker code, originally considered an exploit, could be used to disguise the IP address of a server hosting illegal content. Although many commentators were quick to dispute this, it does appear that there are plenty of ideas out there on how to beat the snoopers.

Not surprisingly, many proposed solutions are based around encryption and VPNs but there are also more novel suggestions such as routing all traffic via countries with strong data protection laws or dividing files into extremely small chunks making them hard to identify. Some companies already claim to offer solutions.

There is also the so-called darknet, which offers solutions for remaining anonymous.

Wednesday, April 7, 2010

How Much is Your Online Privacy Worth?

How much is your online privacy worth? In the UK it appears that until recently it was no more than £5,000, which was the maximum fine the Information Commissioner could impose on companies that committed a serious breach. The Register reports that the figure has just been raised to £500,000, which should hopefully encourage data custodians to take data protection a little more seriously.

Tuesday, April 6, 2010

Google. What They Know About You Explained.

With yet another Google Privacy story in the news today, I thought it would be interesting to examine exactly what Google knows about the average internet user and how they gather their information. To simplify the article I have excluded any Google services that require some kind of authentication such as Gmail or Google Apps. All the information discussed below is harvested from so called anonymous browsing.

Firstly, irrespective of your Internet browser or even the HTTP protocol, every time you visit a web site, you must send your IP address to the destination so that the receiver knows where to send the reply. If you are a home user your IP address is allocated on a semi random basis from a large pool managed by your ISP. Even so, publicly available online tools can be used to narrow down your location from your IP address to at least the nearest city to where you browse from. Many sites use this information to target advertising at you. This is most obvious if you browse a site in a different country but see advertisements in your local language specific to businesses or services from your country. Your ISP of course can use the IP address to uniquely identify you.

Again, even before you start using Google products or web sites, each time you access a new web page you send the web server certain information contained in “headers”. One of these is called the user agent which tells the website your browser type and operating system.

So, before you even type the first character into Google’s search engine, they already know more or less where you live, your operating system and your browser type. This of course is not limited to Google but applies to any web site. Once you do access www.google.com for the first time you receive a cookie, nominally to record your preferences, e.g. language, number of results per page etc, but which also contains a unique ID. The next time you access the Google site the cookie, which is stored on your hard drive, is read and the ID from the first visit retrieved. Each search you perform is recorded by Google together with the ID to build up a profile of your browsing habits. The information is used to target advertising at you, with the targeting becoming more effective as your profile expands.
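To make the mechanics concrete, here is a small illustrative sketch using the third-party requests library. It shows both halves of the exchange described above: the headers volunteered with every request, and the cookies handed back that are replayed on later visits. Cookie names and contents will vary; nothing here is specific to Google’s actual implementation.

# Illustrative sketch of the two data flows described above: what the client
# sends (the IP address is implicit, the User-Agent is explicit) and what the
# server hands back (cookies that are replayed on later visits).
# Assumes the third-party 'requests' package is installed.
import requests

session = requests.Session()
response = session.get(
    "https://www.google.com/",
    headers={"User-Agent": "ExampleBrowser/1.0 (Windows NT 6.1)"},  # placeholder
)

# Headers we revealed about ourselves on the very first visit.
print("Sent headers:", dict(response.request.headers))

# Cookies the site set; any identifier in here accompanies every later request.
for cookie in session.cookies:
    print("Received cookie:", cookie.name, "=", cookie.value)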

Things get even more interesting for users of Google’s Chrome internet browser. As Microsoft highlighted last week, the address bar of Chrome is also the search bar. Every keystroke typed into the address/search bar is sent to Google to allow for an auto-suggest of the term you may want to search for or the site you wish to browse to. Guess what. Your Google cookie containing the unique ID is also sent, allowing Google to record every site you visit.

Many commentators also flag Google Analytics as a way Google can record your internet activity even if your web usage habits prevent use of the previously mentioned techniques. Google Analytics allows webmasters to record usage statistics for their site. It works by including a small amount of JavaScript on each page that sends information to Google about the user’s activity. From what I can see, it doesn’t send the Google cookie, and so identifying a user is limited to the IP address.

What does this mean in practice? In a worst-case scenario, a law enforcement agency, with complicity from Google and your ISP, can build a complete record of your internet browsing including time and location information. Eric Schmidt, Google’s CEO, has previously stated that if you’ve nothing to hide then what’s the problem? This may be true if you live in a Western democracy, although many people would argue otherwise. It’s certainly not the case if you live in a country where the human rights record is less than ideal.

In European countries at least, abuse of this data is prohibited by the European Data Protection Directive and the country-based laws that reflect it. This is just as well: imagine your employer, or anyone else for that matter, being able to get hold of all your internet activity, including that from your home PC.

It’s also worth pointing out that none of these data gathering techniques are unique to Google, but as it is the biggest player in the internet space, it can gather the most data and consequently attracts the most criticism when people complain about privacy issues.

In theory there are a few things you can do to limit the information that you leak to Google. The simplest is to delete your cookies on a regular basis, which makes it much harder to track your activity. There are also services like GoogleSharing that anonymise your Google traffic, but in reality you can never completely hide your browsing patterns. If you want to use the internet to the full, it is necessary to accept that some of your privacy is lost.

Friday, April 2, 2010

Privacy Online

The weekly podcast from the technology section of the Guardian recently did an item on online privacy. Although it was as informative and interesting as ever I was surprised that there was no mention of the data protection laws in place that, in theory at least, protect against many of the fears raised in the discussion.

Most European nations' data protection laws resemble, or should resemble, the European Data Protection Directive, 95/46/EC, which is often summarized as follows:

Notice—data subjects should be given notice when their data is being collected;

Purpose—data should only be used for the purpose stated and not for any other purposes;

Consent—data should not be disclosed without the data subject’s consent;

Security—collected data should be kept secure from any potential abuses;

Disclosure—data subjects should be informed as to who is collecting their data;

Access—data subjects should be allowed to access their data and make corrections to any inaccurate data;

Accountability—data subjects should have a method available to them to hold data collectors accountable for following the above principles

From http://en.wikipedia.org/wiki/Data_Protection_Directive

At first glance, the directive appears to be fairly comprehensive and favourable to the privacy of the end user. Read deeper into the document and you find that the rules can be breached in cases of national security or public interest, but otherwise it is still sound.

My own experience of the directive in action came from a client in Germany for whom I was hosting a web application. They requested that I did not record the IP addresses of users who browsed the site, as doing so breached the directive, and they even came up with a court ruling to back up their argument. Like many website administrators, I was recording source IP addresses for troubleshooting and security purposes but also to be able to produce statistics on the usage of the site, particularly with regard to geographical location. Although browsing of the site was supposed to be anonymous, the IP addresses could ultimately be used to trace individual users, which is what caused the problem with the data protection directive.
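One common way of squaring this circle is to truncate addresses before they are written to the log, which keeps enough information for rough geographical statistics while making it much harder to trace an individual. A minimal, purely illustrative sketch of the idea:

# Minimal sketch: anonymise addresses before they reach the access log,
# keeping rough geography for statistics but dropping the host portion that
# identifies an individual connection. Purely illustrative.
import ipaddress

def anonymise(ip: str) -> str:
    """Zero the host portion of an address (last octet for IPv4)."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymise("203.0.113.57"))   # -> 203.0.113.0
print(anonymise("2001:db8::1"))    # -> 2001:db8::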

Although many of the abuses of data raised in the Guardian podcast probably occur on a regular basis, in my opinion it is not necessarily due to lack of legislation but more because of inefficient enforcement of existing rules. The directive does allow for compensation to be paid in the event of damage caused by misuse of data so I guess a few high profile cases with large payouts would help tighten up data protection law compliance.

Thursday, April 1, 2010

Protecting Your Email

As I blogged last week, email in the corporate setting is extremely vulnerable to being read by others. Although a typical company Email Usage Policy allows for employee email accounts to be read, perhaps for Data Leakage Prevention (DLP) purposes, you envisage this being in exceptional circumstances on the orders of the CEO rather than on an ad hoc basis by the IT department over their morning coffee and packet of Monster Munch. In addition, once your message leaves the corporate network, chances are that it is then transmitted in clear text over the public internet.

Looking at the external email problem first, most mail gateways can support Transport Layer Security (TLS), which provides encryption and authentication. Unfortunately, TLS is often not configured and is not supported by some public mail systems such as Gmail. TLS can also be configured between the client and mail server, depending on the individual setup of the mail system, which reduces network sniffing attacks but does nothing to defeat abuse by rogue system administrators.
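From the client side at least, switching TLS on is usually straightforward. As a rough illustration, this is what an opportunistic STARTTLS submission looks like using Python’s standard smtplib; the hostname and credentials are placeholders and the server must of course advertise STARTTLS. Note that, as the paragraph above says, this only protects the hop it is configured on.

# Rough illustration: submitting a message to a mail server over a channel
# upgraded to TLS with STARTTLS. Hostname and credentials are placeholders.
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "TLS test"
msg.set_content("This message travelled over an encrypted hop to the gateway.")

context = ssl.create_default_context()  # verifies the server certificate

with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls(context=context)   # upgrade the plain connection to TLS
    server.login("alice@example.com", "not-a-real-password")
    server.send_message(msg)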

End-to-end encryption addresses all of the above issues and has been around for some time now. The leading solutions are Pretty Good Privacy (PGP) and Secure/Multipurpose Internet Mail Extensions (S/MIME). Both use public/private key technology for encryption and certificates for authentication and integrity. The difference between them lies in how certificates are trusted: S/MIME uses X.509 certificates, which take a hierarchical approach relying on a trusted certificate authority, whereas PGP uses a web of trust.
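To give a feel for the end-to-end model, here is a hedged sketch of encrypting a message body with PGP via the third-party python-gnupg wrapper. It assumes GnuPG is installed and that the recipient’s public key is already in your keyring, which is precisely the key distribution problem discussed next.

# Sketch of end-to-end encryption of a message body with PGP.
# Assumes GnuPG is installed, the third-party 'python-gnupg' package is
# available and the recipient's public key has been imported already.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory and keyring

plaintext = "The quarterly figures are attached."
encrypted = gpg.encrypt(plaintext, ["bob@example.com"])

if encrypted.ok:
    # The ASCII-armoured ciphertext is what goes into the email body; only
    # the holder of the recipient's private key can read it.
    print(str(encrypted))
else:
    print("Encryption failed:", encrypted.status)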

Widespread adoption of both technologies has been hindered by issues around certificate management and distribution, as before you can send someone an encrypted message you need to get hold of their public key. Although automatic key retrieval is possible by a variety of techniques, including LDAP queries of public directories, the management overhead has often been off-putting for many people. It’s also necessary to have your key store on each system from which you wish to read and send encrypted mail. This is particularly annoying if you use a web client: although Outlook Web Access supports S/MIME and there are Firefox add-ons for both PGP and S/MIME that work with Gmail, both require local certificate stores.

Mobile devices don’t help much either. The iPhone has no support for either technology, and although BlackBerry devices do, the features exist only as paid-for extras.

One solution I found to the certificate locality problem with a web client was to use a portable version of Firefox with an S/MIME extension for Gmail. I could then read and send encrypted emails from any PC. The same is possible in theory with PGP. It is unlikely this would be feasible for Outlook Web Access given that there isn’t a portable version of IE and many of the enhanced features don’t work with Firefox.

An ideal solution would be to allow mail programs to access certificate stores located on something like a USB key and also to introduce a “mini” USB interface for mobile devices, adopted by all the manufacturers, so that each user could use a single store from multiple devices. A single online public directory where everyone published their public key would also be useful. Why not something like Facebook or LinkedIn?

Friday, March 26, 2010

Side Channel Attacks against SaaS

The Register highlights a paper from Microsoft Research and Indiana University on information leakage from popular SaaS applications. Interestingly, the attacks work even when HTTPS is used and are most effective when the application is relatively sophisticated and uses modern development techniques.

The attack is based on observing the size of packets between a user and an application and subsequently deducing the content. Although this initially sounds rather far-fetched, AJAX technology means data transfer has “low entropy”, making it far easier to guess the content than for a more basic application. “AJAX (shorthand for asynchronous JavaScript and XML) is a group of interrelated web development techniques used on the client-side to create interactive web applications. With AJAX, web applications can retrieve data from the server asynchronously in the background without interfering with the display and behaviour of the existing page.” * To put this in layman’s terms, AJAX allows the user and application to efficiently transfer data without the overhead of display and formatting information. It is consequently much easier to determine the packet content by observing its length, as there is less “noise” in the transmission.
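A toy model makes the idea clearer. Suppose the attacker has already profiled the application and knows the size of the encrypted response that each selection in a drop-down list produces; matching an observed size against that profile is then trivial. All of the numbers below are invented for illustration.

# Toy model of a size-based side channel. The attacker has profiled the
# application beforehand and recorded the encrypted response size produced by
# each possible selection. All figures are invented for illustration.

profile = {
    512: "No dependants",
    648: "One dependant",
    701: "Two dependants",
    774: "Three or more dependants",
}

def infer(observed_size: int, tolerance: int = 4) -> str:
    """Guess the user's selection from the length of one encrypted response."""
    for known_size, meaning in profile.items():
        if abs(observed_size - known_size) <= tolerance:
            return meaning
    return "unknown"

# The attacker only ever sees ciphertext lengths on the wire, yet...
print(infer(650))   # -> One dependant
print(infer(773))   # -> Three or more dependants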

The paper gives an example of determining a victim’s gross income by observing packets from an online tax preparation site.

What risk does this pose to everyday users of SaaS-type applications? In the main, the risk is probably very low, as an attacker first needs to invest considerable time in profiling an application. They then need to be able to capture the traffic of the target at the right time. Information leakage, although highly dependent on the application, is unlikely to include specific data such as passwords or other fields that are not selected from a list. In the real world, an attack would need to be targeted against a specific user or group to have a chance of being effective.

Of all the things you need to worry about for internet security, this kind of attack is fairly low on the list. Of course this may change as exploit techniques develop, so it is well worth keeping an eye on.

*Wikipedia

Wednesday, March 24, 2010

Who Can Read My Email?

I recently had a client who suspected that their email was being read by someone else in their company. They wanted to know if this was technically possible given their setup, which was the fairly standard Microsoft Exchange server with Outlook as the client. Including members of the IT team as possible suspects, I could think of at least seven attack vectors for reading someone else’s email. Starting with the most basic, these were:

Password Compromise: A password that is easy to guess, or that is not kept secret, allows anyone to log on to the associated email account.

Shoulder Surfing: Reading someone’s email over their shoulder or more likely when the PC is left unattended without an activated password protected screen saver. For example, during a coffee or cigarette break.

Inappropriate Permissions: Although more commonly used for calendar access, it is possible to share your mailbox with other users in the organisation. An inappropriate general rule could allow unexpected access to the inbox. A malicious user could set up such a rule with a few minutes’ access to an unattended PC.

Administrator Permissions: A mail administrator can modify permissions at the server level to allow other accounts to access a mailbox.

Message Forwarding: A mail administrator can forward a copy of all incoming messages to another mailbox, completely transparent to the mailbox owner.

Anti Malware Program Abuse: Such software can often be configured to filter messages based on keywords and forward a copy of filtered emails to another mailbox. The filter can be constructed in a particular way to ensure messages from a particular user are always trapped.

Network Sniffing: By default the RPC protocol, most often used to communicate between the client and server, is not encrypted allowing the email to be intercepted and read. The same is true for the SMTP protocol which is normally used for communication between servers in an organisation or for messages destined for outside of the company.

I intend to blog in the future about how best to protect against such attacks but thought it worthwhile listing the basics here. Protecting the PC where your email client runs with a secure password, and making sure it is not left unlocked when unattended, is the obvious first step. It’s worth remembering that you also need your colleagues who receive your messages to do the same. Reviewing the permissions on your inbox is also important. If the culprit is a mail administrator, it’s much harder to defend against or even identify; S/MIME is one technology that could help, although it is not appropriate in all cases. At the organisation level it is vital that IT policies make clear what is and is not permitted with regard to email. Even if you can prove someone has been reading email not destined for them, if there is no policy in place stating that such activity is forbidden, it’s unlikely you could take any disciplinary action.

Thursday, March 18, 2010

Attacking the Virtual Machines

There was an interesting advisory posted by Core Security Technologies this week about a vulnerability in Microsoft Virtual PC. In actual fact no vulnerability had been discovered; instead, weaknesses in Virtual PC had been identified which make exploitation of new vulnerabilities more likely. The problem was that the virtual machine memory management allowed the OS security mechanisms Data Execution Prevention (DEP), Safe Structured Exception Handling (SafeSEH) and Address Space Layout Randomization (ASLR) to be bypassed. This functionality mitigates the effects of buffer overflows and other attacks. So, although the issue presents no current risk to a system, it is more likely that future vulnerabilities will be exploitable in a Virtual PC environment compared with stand-alone systems.

This got me thinking about the risks presented by virtualisation and in particular side channel attacks. My first attempt at investigation in this area was after a seminar at Infosec 2009 where someone suggested an attack could be made against the graphics subsystem of a virtual host. Side channel attacks against virtualisation seemed to be a very new area at the time and I didn’t make much progress. I’ve subsequently come across a recent paper from the University of California, San Diego, which looked at attacking cloud architecture including Amazon’s EC2 and Microsoft Azure.

To summarize, the paper explains how it is possible to identify the physical host of a target VM in a cloud infrastructure and then activate a virtual machine on the same system. It then goes on to discuss possible side channel attacks using vectors such as the shared processor data cache. Perhaps the result that came closest to presenting a real risk was the possibility of identifying keystrokes on the target VM. Several denial of service attacks were also proposed.

Assuming the paper is representative of current knowledge on side channel attacks against virtual machines, there is nothing really to worry about at the moment. The amount of effort required by a criminal hacker to gain useful information by this method is disproportionate to the expected return. It is an area of interest more or less exclusively to researchers. However this is likely to change and possibly quite quickly as more and more information is located in the cloud.

Tuesday, March 16, 2010

300 Billion Passwords in One Second

The Register reports more advances in brute force password cracking. Swiss security firm Objectif Sécurité used Solid State Drives (SSD) to store rainbow tables and consequently speed up each brute force attack. The read speed from an SSD is typically quicker than that from a traditional drive, which is why there is an overall performance increase.

Objectif Sécurité claims a throughput of 300 billion passwords per second when attacking a Windows XP MD4 password hash.

My own testing using Objectif Sécurité’s online proof of concept easily cracked the passwords from my test XP machine including the complex x%fF*Z3$. It couldn’t crack my 29 character pass phrase from Size Does Matter, but this is because Windows XP stores passwords longer than 14 characters in a different way.

Of course the LM hash, stored by default alongside the MD4-based NT hash on Windows XP, is weak and has largely been replaced in modern operating systems. However, the speed improvement can be applied to accelerate brute force attacks on other algorithms.

What does this mean in real terms? It was already true that if someone could get physical access to your XP PC, they could extract your data. Now it is trivial to steal your passwords as well, which of course opens up all sorts of identity theft related crime possibilities. This makes disk encryption, and not reusing passwords, more important than ever.

For info, I “harvested” the passwords on my XP machine by booting from my favourite Linux live CD, BackTrack. I then used Bkhive to extract the key used to encrypt the SAM file where the hashes are stored. Finally I used Samdump2 to extract the hashes themselves.
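To show why raw guessing throughput is the whole game, here is a toy dictionary attack against an NT hash, which is simply the MD4 of the UTF-16LE encoded password. It is illustrative only: hashlib exposes MD4 only if the underlying OpenSSL build still provides it, and the target hash is derived inside the script so the example stays self-contained.

# Toy dictionary attack against an NT hash (MD4 of the UTF-16LE password),
# the hash that Samdump2 extracts alongside the LM hash. Illustrative only;
# hashlib only offers MD4 if the underlying OpenSSL build still provides it.
import hashlib

def nt_hash(password: str) -> str:
    return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

# In a real attack this value comes from the dumped SAM; here it is derived
# from a known password purely to keep the example self-contained.
target = nt_hash("letmein1")

wordlist = ["password", "123456", "qwerty", "letmein1", "x%fF*Z3$"]

for candidate in wordlist:
    if nt_hash(candidate) == target:
        print("Cracked:", candidate)
        break
else:
    print("Not in wordlist")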

Thursday, March 11, 2010

Protecting Your PC with Free Software

A couple of announcements this week have brought home just how important it is to proactively defend your PC against the multitude of dangers that exist on the modern internet. The first was F-Secure announcing that the most targeted application of 2009 was, shock horror, not a Microsoft application. Less surprising was the fact that the gold medal winner was Adobe Acrobat. The second, as reported in the Register, was that Secunia has estimated that on average it is necessary to patch your PC once every five days to remain secure.

Until recently, if you had a good anti-malware package, a personal firewall (even the inbuilt Windows one) and had activated automatic Microsoft updates, there was a good chance that your PC was well protected. Unfortunately, as has become apparent, the bad guys now target far more than Microsoft products and there is just too much new malware to feel confident that anti-malware software can catch all new attacks. Making sure all of your applications are patched as part of your defence strategy is more important than ever.

At this point, I am sure Linux users are feeling vastly superior as patching all applications is a fundamental part of many Linux distributions and has been for some time. Unfortunately, their numbers are not sufficient to make this article obsolete.

To address the above issues, I’ve recently evaluated Secunia’s free Personal Software Inspector (PSI). It is supposed to scan your PC, find all the applications installed on it and then notify you if any of them need patching. I was pleasantly surprised to find that it did just that and detected many applications that I thought would be too obscure for it to know about. The interface is easy to use, providing links to patches, explanations of vulnerabilities and also a forum so that you can discuss any problems you might have. It also shows end-of-life products which are no longer supported. One result of a scan was for me to clean up my PC, removing all those old applications I no longer used, especially if they were considered dangerous, which has also helped performance.
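Under the hood a tool like this is essentially doing a large-scale version comparison against a vulnerability feed. A much simplified sketch of the idea, with both the installed and the “latest” versions invented for illustration:

# Much simplified sketch of what a patch-status scanner does: compare the
# versions found on the PC against a feed of known-good versions.
# Both dictionaries are invented for illustration.

installed = {
    "Adobe Reader": (9, 1, 0),
    "Mozilla Firefox": (3, 6, 3),
    "Old Media Player": (1, 0, 0),
}

latest = {
    "Adobe Reader": (9, 3, 2),
    "Mozilla Firefox": (3, 6, 3),
    # "Old Media Player" is absent: end of life, no longer patched at all.
}

for product, version in installed.items():
    if product not in latest:
        print(f"{product}: end of life, consider removing it")
    elif version < latest[product]:
        print(f"{product}: insecure, {version} is older than {latest[product]}")
    else:
        print(f"{product}: up to date")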

There were a few quirks to PSI that caused a bit of confusion. Google Chrome was flagged as needing to be updated, despite the correct version being installed. It turns out that Chrome leaves the previous version of its code on your disk when it carries out an upgrade, presumably for roll-back purposes, and this was detected as a risk. Whether the old code was accessible and exploitable by hackers was not clear. PSI also has a simple and an advanced mode. Simple only displays vulnerabilities that are easy to fix, whereas advanced includes everything. This seemed a bit strange, as a vulnerability poses the same risk whether or not it is easy to fix. Having spent half a day fixing all the issues flagged in advanced mode, I finally decided that simple mode was probably a good thing. If you can get non-technical users to fix the majority of the problems on their PC, it’s probably better than scaring them off by trying to get them to address complicated issues for which exploitation is unlikely.

I’m definitely adding PSI as part of my PC defence strategy.

Tuesday, March 9, 2010

Why Does Internet Explorer 6 Refuse to Die?

Everybody hates it, including its creator, but Internet Explorer 6.x refuses to die. According to StatCounter, it still manages to take nearly 14% of worldwide market share despite not being supported by Microsoft. I thought it would be good to look at the reasons why it’s still around and what could be done to speed up its demise.

Firstly, why is it so much of a problem? A major irritation is its failure to correctly support Cascading Style Sheets version 2 (CSS 2), which means developers often need to write custom code to detect browser versions and then use conditional comments to ensure compatibility. The major problem however is the number of security vulnerabilities that it contains, which present a real risk to the data of anyone who uses it.

The reasons most often quoted for IE 6’s continued use include old operating systems, unlicensed copies of Windows XP and compatibility with legacy applications. Looking at these in turn:

Old Operating Systems: Internet Explorer 7 requires at least Windows XP SP2 in order to run. Hence anyone still using Windows 98, ME or 2000 would not be able to run more recent versions of Internet Explorer. However, a quick look at StatCounter shows that such operating systems don’t even register sufficiently to justify being listed individually. They are grouped together under the category “Other” and combined don’t even amount to 1% of total usage.

Unlicensed Copies of Windows XP: Although it is not clear how many copies of unlicensed Windows XP are in use, the number is thought to be significant. Internet Explorer 7 and 8 are distributed via Windows Update, which does not work with counterfeit copies of XP, and so the user is stuck with Internet Explorer 6. Although there is nothing stopping such users from installing up-to-date versions of Firefox, Chrome or Opera, many probably don’t due to a lack of knowledge. Indeed they may even be unaware their copy of Windows is illegal if their PC was bought on the cheap. StatCounter reports that around 65% of the world’s PCs currently use XP, so if 10% of this figure represents counterfeit copies, a large number of units are illegal. From a security perspective, it would be better if Microsoft “bit the bullet” and released security updates to the illegal systems, as they are the source of most of the spam in the world and also make up botnets that can be used for denial of service attacks.

Compatibility with Legacy Applications: It’s very easy to sneer about lack of foresight when you hear of companies running bespoke applications that are only compatible with IE 6. However, when many of these applications were developed IE 6 had as much as a 98% market share and was the best option available. It is also likely that some of the applications in question are ERP systems. If you want to upgrade one of those it can involve a battalion of consultants in smart suits with expense accounts, and so is not easy to justify from a cost point of view. A simpler solution would be to use IE 6 just for the bespoke applications and to install a second browser for other internet access. Although installing multiple versions of Internet Explorer is theoretically possible, it is not supported by Microsoft. Other browsers have traditionally not been popular in large enterprises due to a perceived lack of central control, which is provided for IE by the Internet Explorer Administration Kit (IEAK). This is actually a misconception, as Firefox, at least, has many such features. Another solution is to run IE 6 in an isolated environment using virtualisation, which I have previously blogged about here.

Friday, February 19, 2010

Clever Cloud Computing

I’m still somewhat sceptical about many of the claims made for cloud computing, as I’ve written before. However, I have recently come across some genuinely innovative and useful implementations of what could be called cloud technology. My favourite so far is Web Performance’s Load Tester 4. As the name suggests, this is a product you can use to load test your web application. It’s been around for many years and is one of the easier load test packages to use.

One of the problems with all load testing software is that your testing infrastructure needs to be sufficiently powerful to simulate the load. If you are trying to simulate 1000 users from a single machine, it’s likely that the system will itself become stressed and will fail to deliver an accurate simulation. The solution is to have multiple engines sharing the load simulation, which many packages already support. The next issue is bandwidth: if you run your load engines in your office to test a production web application that is in a data centre, it is likely that your local internet connection will saturate and, again, the simulation will be inaccurate. Hence you also need to locate your load engines in a data centre. Such a setup will work, but there is a large cost and time overhead as you need dedicated hardware and data centre space.
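To see why the load generator itself becomes the bottleneck, consider even a crude generator like the sketch below: every simulated user costs the test machine CPU time, a socket and bandwidth, so beyond a certain point you are measuring your own infrastructure rather than the application under test. The URL and user count are placeholders.

# Crude load generator: N concurrent "users" each fetch a URL and time the
# response. Every simulated user consumes CPU, a socket and bandwidth on the
# test machine itself, which is why serious tests spread the load across
# several engines located close to the application.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://www.example.com/"   # placeholder
USERS = 50                         # placeholder

def one_request(_: int) -> float:
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=30) as response:
        response.read()
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    timings = list(pool.map(one_request, range(USERS)))

print(f"{len(timings)} requests, average {sum(timings) / len(timings):.3f}s, "
      f"slowest {max(timings):.3f}s")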

Web Performance’s solution to this problem is to make available preconfigured load engines in Amazon’s EC2 cloud architecture. When you wish to carry out a test, you connect up to an EC2 load engine and run your load from the cloud. You can connect up to multiple engines if required, and although I’ve not tested it, you can select load engines at different locations, which could be useful for assessing user experience from different parts of the world. You pay for each engine by the hour.

I had a little trouble getting my first engine to work but this was probably due to me not reading the instructions properly.

Wednesday, February 17, 2010

Top 25 Most Dangerous Programming Errors 2010

The Common Weakness Enumeration project has published its 2010 list of the 25 most dangerous programming errors. The project is sponsored amongst others by the Mitre Corporation, the NSA and the Department of Homeland Security. The list is compiled by canvassing the opinion of industry experts.

The weaknesses are grouped into categories that consist of:

  • Insecure Interaction Between Components
  • Risky Resource Management
  • Porous Defences

Although I would agree with the complete list, it was a surprise to see that none of the weaknesses are really new with some having existed since programming began. The site also includes a high level action plan of how to mitigate against the top 25 which seems to provide a good starting point for securing any application.

Wednesday, February 10, 2010

Microsoft Techdays 2010 - Day 2

A big surprise at Techdays 2010 was Microsoft promoting Terminal Server as part of their virtualisation strategy. The technology was most prominent towards the end of the 1990s, with Citrix leading the way. The idea is to have a thin client on your PC to access applications that run on a powerful central server, more or less like a mainframe and a terminal. Although Terminal Server never went away, indeed it has been integrated into Windows Server since the 2000 edition, it was never quite as successful as expected. A variant of course is used as the principal remote administration method for Windows Server products.

Back in the 90s, using Citrix or Terminal Server would normally have been for performance reasons, allowing low-specification clients to access resource-hungry applications even over poor network links. It of course makes perfect sense to use this technology for security, as you can run applications that pose a risk to the PC (e.g. anything that requires IE6) in an isolated, locked-down session. The reverse is true as well. For example, you could run a sensitive application in a terminal session and reduce the risk of damage if the end PC is infected with malware.

After a couple of sessions on IIS 7, I do wonder if Microsoft have made a mistake here. On the surface it looks great, but even a couple of the experienced Microsoft IIS 7 support team seemed to have trouble getting it to do what they wanted. Their frequent use of IISRESET after an unexpected error did not inspire confidence.

Monday, February 8, 2010

Microsoft Techdays 2010 - Day 1 AM

I attended Microsoft Techdays 2010 today and, not surprisingly, the keynote speech concentrated on Azure, Microsoft’s cloud offering. The most impressive part was how well the presenter coped with the “unknown application error” that repeatedly popped up as he tried to publish an application from Visual Studio to Azure. The low point was undoubtedly an exclusive view of a new Intel multi-core processor, which was made moderately more exciting when they removed the heat sink so we could see the processor itself.

A personal favourite was the virtualisation demo that showed how it is possible to run Windows XP nodes with IE 6 in your data centre, accessible to your user base via a browser. The advantage of such a setup is of course that you can upgrade your PC base to Windows 7 and IE8 without losing access to all your legacy applications. Unquestionably impressive, but if Microsoft had tried a bit harder to stick to standards a few years ago, those old applications wouldn’t require virtualising in the first place.

I then attended a session on Internet Explorer 9, which although interesting was actually mainly about IE 8, as someone higher up the command chain had decided that IE9 was not yet ready for public viewing, at least not at Techdays 2010. Surprisingly, there was very little said about security other than the anti-phishing functionality. The main thrust of the session was an admission that when it came to the JavaScript engine, IE was far behind its rivals and this is where a lot of the work on IE 9 is going. Most encouragingly, the presenter also admitted that Microsoft’s failure to stick to standards in the past had been a mistake and this wouldn’t be the approach in future.

Tuesday, February 2, 2010

Another iPad Review

Every man and his dog already seems to have commented on the iPad so I thought I might as well throw in my two pennies’ worth. I probably can’t compete with Charlie Brooker, who managed to mention masturbation during a prison visit in his review, or even Hitler’s rant about Apple’s latest offering, but what the hell.

Does the iPad indicate a shift back to client-server technology? Obviously it wouldn’t be called that, but as the iPhone has shown, there is a drift towards using individual applications to access a particular service, e.g. the New York Times iPhone app, just as we had got used to the idea of doing everything through a web browser.

Will the biggest uptake of the iPad eventually be at the corporate level? This seems a strange idea at first, but for many system administrators the idea of a locked-down, network-enabled device on which software can only be installed via an app store where all applications are pre-approved is a dream come true. Apple would need to release an intermediate app store for corporations, but this isn’t particularly difficult.

Monday, January 25, 2010

IPv6 Again

An article on Slashdot this morning discusses the release of the IP address range 1.0.0.0/8 for public use. This is of course connected with the so-called exhaustion of the IPv4 address space, which according to the article is still predicted for the end of 2012.

As I’ve discussed before, the solution to the lack of IPv4 addresses is IPv6, for which the technology and, in some cases, the infrastructure are already in place. The comments section of the Slashdot article debates just how much of a problem this really is. Although there is no consensus, it seems clear that there is a leadership vacuum in addressing the issue. I can see no reason why businesses, let alone home users, would currently make the effort to migrate to IPv6. There needs to be some incentive or regulatory requirement to do so, which probably needs to be set at government level. To be fair, the EU does have an IPv6 program in which they acknowledge the problem. The stated goals to address the issue are:

1. An increased support towards IPv6 in public networks and services,

2. The establishment and launch of educational programmes on IPv6,

3. The adoption of IPv6 through awareness raising campaigns,

4. The continued stimulation of the Internet take-up across the European Union,

5. An increased support to IPv6 activities in the 6th Framework Programme,

6. The strengthening of the support towards the IPv6 enabling of national and European Research Networks,

7. An active contribution towards the promotion of IPv6 standards work,

8. The integration of IPv6 in all strategic plans concerning the use of new Internet services.

This is very noble, but to me at least, the program does not generate enough “noise” to provoke a mobilisation of effort that will make a difference.

Thursday, January 21, 2010

Vulnerability Trends

It was no surprise when reading US-CERT’s vulnerability summary for the week of January 11 2010 to see that six of the vulnerabilities classed as high were in some way related to Acrobat Reader. There seems to have been a constant stream of stories in the news about these bugs and public exploits for them. It doesn’t seem that long ago that PDF, the file format associated with Acrobat Reader, was considered the safe option for documents from untrusted sources. Indeed, I was once involved in a project to convert Word documents uploaded to a web site into PDF before they were viewed by the end user.

Is Adobe Acrobat less secure than other software? Probably not. It’s more likely that, because it exists on a vast proportion of the world’s PCs, it has become a desirable target for hackers. The same could be said for Internet Explorer, although now that Firefox is, according to some reports, taking up to 40% of the market in Europe at least, it will be interesting to see whether more Firefox vulnerabilities come to light.

There were also five SQL injection vulnerabilities in the summary classed as high. This is disappointing, as SQL injection is not a new attack, there is plenty of information available on how to defend against it, and, in theory at least, countermeasures are not difficult to implement. This would suggest that despite the fashion for Security Development Lifecycles, some companies are still not treating security seriously.
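
To illustrate how simple the basic countermeasure is, here is a rough, hypothetical Powershell sketch (the connection string, table and column names are invented for illustration, not taken from any real system). A parameterised query passes user input to the database as data rather than as part of the SQL statement:

    Add-Type -AssemblyName System.Data
    $untrustedInput = "O'Brien'; DROP TABLE Users--"   # hostile-looking input, handled safely below
    $conn = New-Object System.Data.SqlClient.SqlConnection("Server=dbserver;Database=AppDb;Integrated Security=True")
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "SELECT Id, Name FROM Users WHERE Name = @name"   # placeholder, no string concatenation
    [void]$cmd.Parameters.AddWithValue("@name", $untrustedInput)         # the value is bound as data, never parsed as SQL
    $conn.Open()
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) { $reader["Name"] }
    $conn.Close()

The same idea applies in any language: if the query text never contains user input, there is nothing for an attacker to inject into.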

Friday, January 15, 2010

Targeted Malware

Targeted Malware has featured prominently in security news this week. To summarize, Targeted Malware is just like other malware except that attempts to distribute it are limited to a small group of people or even a single person. For obvious reasons, it’s more likely to be used for espionage, political or industrial, than for direct crime. It can be particularly effective because the email, web site or document used to trick the user into installing the Malware can be tailored to a very narrow area of interest, lulling the victim into a false sense of security.

The chances of Targeted Malware being detected by an antivirus package are also low. Antivirus software relies mainly on comparing code against a database of known malicious patterns. The Anti-Malware vendors build their databases from Malware they have either “trapped” themselves or that has been sent to them by their clients. A targeted attack would almost certainly miss the vendors’ honey pots and, because of its small distribution, the chances of it being reported by an end user are slight.

The Register has a good article about a recent targeted attack on Google. There is a video at the bottom of the article by F-Secure that gives further insight into Targeted Malware.

Whilst writing this blog entry, it struck me that a good launch pad for this kind of attack could be a social network, in particular business-orientated ones such as LinkedIn. It’s easy enough to build a false profile, and it’s also simple to identify targets at, say, an organisation that you wanted to infiltrate. The Groups feature could be particularly useful, as you can post links to external websites and documents which could be a source of Malware. Because the end users have had to log in to the system, and are probably looking at a Group that is fairly specific to their job role, the chances are that they have a false sense of security and are not as cautious as they would usually be.

Wednesday, January 13, 2010

Breaking SSL (Again)

Another encryption landmark was reached towards the end of last year with the factorization of RSA-768. To put this in simpler terms, RSA-768 is a 768-bit binary number (232 digits in decimal) which is the product of two prime numbers, usually denoted p and q. It forms part of the public and private keys used in the TLS/SSL encryption that most commonly secures internet traffic. If you can determine p and q from the public key, i.e. factor the RSA-768 number, then you can also calculate the private key and hence “crack” the encryption. It sounds easy, but try factoring the number 6497 into its prime factors (see below for the answer). Now try doing that with a 232-digit number.
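
To get a feel for the problem, here is a tiny, purely illustrative Powershell function (nothing like the techniques used in the real attack) that factors small numbers by trial division:

    function Find-PrimeFactors([long]$n) {
        # Naive trial division: try every candidate up to the square root of n
        for ($p = 2; $p * $p -le $n; $p++) {
            if ($n % $p -eq 0) { return $p, ($n / $p) }
        }
        return $null   # no factor found: n is prime
    }
    Find-PrimeFactors 6497   # returns 73 and 89 almost instantly
    # The square root of a 232-digit number has roughly 116 digits, so this
    # approach would never finish in any useful timeframe - which is the point of RSA.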

Although some mathematics was used, notably the General Number Field Sieve, the attack was still effectively a brute-force effort spread over hundreds of processors, and it took over two and a half years. If the effort were repeated for a different 768-bit number, the experience gained would surely shorten the time to a solution. However, it’s not clear whether the results from the first run can be reused against a different number, and I suspect not, meaning that an attack on a 768-bit key remains theoretical other than for the most critical of data.

One of the conclusions of the study was that 1024-bit keys, although safe today, should be phased out over the next three to four years and replaced with 2048-bit keys. A quick, unscientific survey of the certificates used on some of the more popular web sites suggests that 1024 bits is more or less ubiquitous, although there are some 2048-bit certificates out there. It is possible that some older browsers would not support the longer keys, but no one is flagging this as an issue.
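
For anyone who wants to repeat that unscientific survey, something like the following Powershell sketch will report the key length of a site’s certificate (error handling omitted, and www.example.com is just a placeholder host):

    $hostname = "www.example.com"   # substitute any HTTPS site
    $client = New-Object System.Net.Sockets.TcpClient($hostname, 443)
    $ssl = New-Object System.Net.Security.SslStream($client.GetStream())
    $ssl.AuthenticateAsClient($hostname)   # performs the TLS handshake
    $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($ssl.RemoteCertificate)
    "{0}: {1}-bit public key" -f $hostname, $cert.PublicKey.Key.KeySize
    $ssl.Close(); $client.Close()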

For me, the most interesting part of the study was how the researchers concentrated on introducing parallelism into their algorithms to allow the load to be spread over multiple systems. This naturally leads one to think that a cloud setup such as Amazon’s EC2 could eventually be used for such tasks rather than private academic systems.

(89,73)

Friday, January 8, 2010

IPv6 First Looks

With predictions of doom and disaster for 2010, i.e. exhaustion of the IPv4 address space rather than the end of the world, I thought it would be good to look at how easy it is to implement IPv6 in a home or office network. As any eventual migration from IPv4 will involve non-technical users, I tried to do this with minimal research and without any complex PC or router changes.

My ISP has been offering IPv6 for some time now and it was simple enough to enable: I logged on to the admin interface of my ADSL router and clicked the “Enable IP6” button. The next stage was to configure my test systems with IPv6 addresses. I decided to use my Windows 7 laptop and an Ubuntu 9.10 desktop that I have running as a virtual machine. Windows 7 has IPv6 enabled by default and an address was assigned straight away; for the Ubuntu system it was easy enough to enable via the GUI.

I then found a few IPv6-enabled web sites using ping6 on Ubuntu and ping -6 on Windows. Not surprisingly, ICANN and my ISP were IPv6 enabled, as were Google and 01net, a French IT news publisher. Disappointingly, there doesn’t seem to be much support from other major web sites beyond the odd server set up for research purposes. I then successfully browsed to the sites I had found and used various packet capture tools to confirm that IPv6 was indeed being used for the communication.

The next step was to disable IPv4 on both test systems. Ubuntu carried on as normal but the Windows 7 system stopped working. The problem turned out to be DNS resolution: for whatever reason, my Ubuntu system had been assigned different DNS servers, and once I manually entered the Ubuntu values into Windows 7 it worked fine. I’m not sure why this problem arose and didn’t have time to investigate further. Whilst troubleshooting the issue, I discovered that in order to type IPv6 addresses directly into the address bar of your browser, you need to put the address in square brackets.
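
For example, using an address from the reserved documentation range 2001:db8::/32 (substitute a real address):

    http://[2001:db8::1]/        the square brackets mark off the literal IPv6 address
    http://[2001:db8::1]:8080/   and keep the colons in the address distinct from a port number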

Conclusions? Well, for both Ubuntu and Windows 7 in combination with my ISP’s IPv6 setup, activating IPv6 was simple enough. The real issue, of course, is that most of the sites I visit every day don’t yet support IPv6, and even if they did, they would still need to support IPv4 so as not to shut themselves off from a large part of their user population. It seems that a huge effort will be required, probably mainly on the part of the ISPs, to accelerate IPv6 adoption, and some kind of gateway or tunnelling system will also be needed between IPv6 and IPv4 during the transition period. One solution I looked at briefly was an offering from SixXS. Although I didn’t have a chance to install their tunnelling client, I did use their IPv6/IPv4 Website Gateway, which allowed me to browse any IPv4 web site from my IPv6-only client. Although I don’t really think that IPv4 saturation will occur this year, it will arrive eventually and will probably cause a significant amount of pain, requiring a mobilisation similar to that seen for the so-called Y2K bug. I sense a business opportunity.

Wednesday, January 6, 2010

SimpleIDS

I’ve posted a Windows Powershell script on my web site today that checks directories for file additions, deletions and changes. Its intended purpose is to act as a simple audit tool to detect unauthorised content change. It’s called SimpleIDS and can be downloaded here.

Although I think that intrusion detection systems (IDS) are a necessary part of any web application infrastructure, many of the commercial tools are expensive and, in my opinion, often do not give value for money. There are some excellent free systems such as Snort, but even these can require a significant investment in man-hours. If you are unsure how cost-effective a particular IDS control or system is, a quick way to assess its value is to consider the Annual Loss Expectancy (ALE): subtract the ALE after the control is implemented from the ALE before the control, and compare the result with the cost of the IDS. If the ALE reduction is less than the IDS cost, it’s probably not worth having.
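
To put some invented numbers on that: if you expect two defacement incidents a year at a clean-up cost of £5,000 each, the ALE is £10,000. If the control is expected to prevent three quarters of those incidents, the ALE falls to £2,500, a reduction of £7,500 a year. An IDS costing £20,000 a year would then be hard to justify, whereas one costing £3,000 would not.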

My approach to IDS has always been to keep it as simple as possible. Where feasible, it’s a good idea to build it directly into your application, something I’ll blog about later on. SimpleIDS is also a good example of this: it performs a single function, detecting content change, so it is easy to understand, and because it is just a script there is no software to install.
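
To give a flavour of the approach, rather than reproduce the actual script, the core idea can be sketched in a few lines of Powershell: hash every file under a directory, save the result as a baseline, and on later runs report anything new, missing or modified. The paths below are only examples.

    function Get-ContentBaseline([string]$path) {
        # Build a list of file path / MD5 hash pairs for everything under $path
        $md5 = [System.Security.Cryptography.MD5]::Create()
        Get-ChildItem $path -Recurse | Where-Object { -not $_.PSIsContainer } | ForEach-Object {
            New-Object PSObject -Property @{
                Path = $_.FullName
                Hash = [BitConverter]::ToString($md5.ComputeHash([System.IO.File]::ReadAllBytes($_.FullName)))
            }
        }
    }
    # First run: Get-ContentBaseline 'C:\inetpub\wwwroot' | Export-Clixml C:\baseline.xml
    $baseline = Import-Clixml C:\baseline.xml
    $current = Get-ContentBaseline 'C:\inetpub\wwwroot'
    Compare-Object $baseline $current -Property Path, Hash |
        Select-Object Path, SideIndicator   # '=>' added or changed, '<=' deleted or changed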

SimpleIDS is rather primitive at the moment and I intend to evolve it over the coming months with more command line options and an alerting function as the priorities. Feedback would be appreciated.

Monday, January 4, 2010

To Bug or not to Bug

An IIS bug reported towards the end of last year brought an abrupt response from Microsoft. According to The Register, Microsoft acknowledge that the bug exists in IIS 6 but claim that it doesn’t present a risk, as you would need to be running your web server in an insecure configuration for it to be exploited. Umm, that’s all right then, as we all know that everyone runs their applications in a secure config.

The vulnerability arises from the way IIS 6 parses semi-colons in file names. If you had a file called badcode.asp;.jpg, everything after the semi-colon would be ignored and the web server would treat the file as if it were called badcode.asp. The end result is that the file is executed as ASP on the server rather than delivered to the client as a harmless image.

How could this be exploited in the real world? Consider that many sites allow anonymous users to upload documents, such as a photo or a CV, to a web server. To stop malicious users uploading harmful content, there would normally be a filtering process in place that blocks files of type .exe, .asp and so on. However, if the user were to append ;.jpg or ;.doc to their file name, the filtering process would be bypassed and the file uploaded to the server. If the file then sits in an accessible web directory with script execute permissions, any user can execute it.
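
As a rough sketch of what a more defensive upload check might look like (the allowed extensions and the decision to reject semi-colons outright are my own illustrative choices, not a description of any particular product), in Powershell:

    function Test-SafeUploadName([string]$fileName) {
        $allowed = '.jpg', '.jpeg', '.png', '.pdf', '.doc'
        # Reject anything containing a semi-colon, so 'badcode.asp;.jpg' never
        # even reaches the extension check
        if ($fileName -match ';') { return $false }
        # Validate the real extension rather than just the trailing characters
        $ext = [System.IO.Path]::GetExtension($fileName).ToLower()
        return ($allowed -contains $ext)
    }
    Test-SafeUploadName 'cv.doc'             # True
    Test-SafeUploadName 'badcode.asp;.jpg'   # False - caught by the semi-colon rule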

Microsoft rightly point out that you would be foolish to allow uploaded content to be served from the web, especially from a directory with execute permissions, and best practice would also not allow the end user to choose their own file names. Personal experience, however, suggests that best practice is not always followed. Given that IIS may occupy around 21% of the web server market, I would be confident that some fairly high-profile sites are vulnerable. The worst-case scenario would be something like a bank being exposed to the bug: it could lead to the ultimate phishing scam, as the malicious code would be served over a connection authenticated and encrypted by a valid SSL certificate.