I’ve recently been putting together a set of security policies for a client, which of course includes a software policy. Wherever I’ve worked in the past, I’ve always argued for a liberal, relaxed policy on what staff can run on their PCs. The basic idea is to have a small set of core applications that are fully supported by the IT department, plus a list of banned software types consisting mainly of anything illegal. Anything that sits in the middle can then be installed by the user, with the understanding that IT will provide no support and that it must be removed if it is shown to conflict with a core application.
Although most software policies I’ve seen in the past are restrictive to the point that they wouldn’t be out of place in North Korean government policy, the situation described above is often what happens in reality. Policing a restrictive software policy is time-consuming and potentially expensive. Locking down PCs can also be complicated, as there always seems to be one critical application or function that requires the user to be an administrator. Technology is also often one step ahead. Did anyone else have fun trying to stop MSN use a few years ago? I’ve also been in several “discussions” about blocking Webex-type tools, which were only resolved when management launched a cost-cutting initiative and it was pointed out that online collaboration could save us money and time.
Of course, the reason for restrictive policies is mostly fear of introducing malware onto the corporate network and ultimately losing data. This is a very real threat, and one that appears to be getting worse. Rather than relenting on my relaxed software policy, I advocate another approach. The first step is an active update policy, ensuring that OS and software patches are applied as rapidly as possible, closely followed by the installation and maintenance of an anti-malware package. This should significantly reduce unwanted malware, particularly malware picked up through web browsing. The next step, which is a little more radical, is to treat the corporate network as untrusted. In real terms, this means placing firewalls, IDS/IPS and Data Loss Prevention (DLP) technology between corporate systems and internal users. If someone does inadvertently introduce malware to the network, the risk is then limited to the end system. Such a setup also has the added benefits of protecting against internal data theft and may help meet regulatory requirements.
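In practice, treating internal users as untrusted boils down to default-deny filtering between the user segment and the corporate systems, with a few narrow, explicit allows. A minimal sketch of that idea (the subnets, ports and rule set here are entirely hypothetical, not taken from any real deployment):

```python
import ipaddress

# Hypothetical segments: users on 10.10.0.0/16, corporate servers on 10.20.0.0/16.
USERS = ipaddress.ip_network("10.10.0.0/16")
SERVERS = ipaddress.ip_network("10.20.0.0/16")

# Explicit allow rules (source net, destination net, destination port).
# Anything not listed here is denied by default.
ALLOW = [
    (USERS, SERVERS, 443),  # HTTPS to internal web applications
    (USERS, SERVERS, 389),  # LDAP to the directory service
]

def permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow passes only if a rule explicitly allows it."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in src_net and d in dst_net and port == p
               for src_net, dst_net, p in ALLOW)

# A user browsing an internal web app gets through...
print(permitted("10.10.5.7", "10.20.1.1", 443))  # True
# ...but a worm probing SMB on the same server is dropped.
print(permitted("10.10.5.7", "10.20.1.1", 445))  # False
```

The point is the shape of the policy rather than the mechanism: whether enforced by a firewall appliance, VLAN ACLs or host rules, malware on a user PC can only reach the handful of services the business actually needs, and everything else, including lateral movement back from servers to users, is blocked by default.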
The move towards “Cloud Computing”, with companies looking to host their systems with a third party, helps to facilitate such an approach. The idea of corporate networks being little more than semi-private internet access points is maybe not that far away.
The ultimate aim of the software policy, and the others alongside it, is to protect against data loss without restricting the productivity of the end user. Although not perfect, I think my approach is the best compromise between security and usability.