Ever since the first viruses started infecting computers, IT departments have wrestled with whether to prioritize securing the network or the endpoints. While every vendor will tell you they have the solution, the fact is that each approach has pros and cons, which is why security experts advocate a “defense-in-depth” approach, interweaving technologies to give them the best defense possible.

When looking at some of the recent U.S. data breaches, many companies had invested heavily in the latest and greatest “next-generation” network detection-based security technologies. These work by trying to detect and block threats entering and leaving the network, and notifying security teams.

While this seems a foolproof system on paper, the relentless noise generated by literally thousands of daily alerts actually blinds IT to attacks. After all, trying to identify the one critical breach in a wave of thousands of security alerts is like trying to find a really sharp needle in a stack of regularly sharp needles – not only difficult, but dangerous, too.

While there is an important place for network security, the simple fact that no system will ever be 100% secure highlights the need for additional layers of security. Network security solutions often try to filter dangerous content before it reaches vulnerable endpoints, but isn’t it better to make the endpoints less vulnerable in the first place? With this in mind, the best strategy is to build security from the endpoint out – reducing the attack surface and building defendable infrastructure.

While network-based security solutions can attempt to block threats before they hit the endpoint, the major problem with this approach is that companies that rely heavily on network security end up with an “eggshell” security stance, whereby a single outer shell protects all of the organization’s data. Without further securing the endpoints, even one small crack in the shell will leave the entire organization vulnerable – and this is without even considering threats that don’t come via the corporate network.

Critical business data is accessed and stored on the endpoint, and as code – good, bad or unknown – executes here, the endpoint effectively becomes the first line of defense for enterprise security against the latest APTs and cyber threats. Implementing both network and endpoint detection as one inter-reliant defensive shell, or a “defense-in-depth” approach, removes the sole reliance on detection to block threats, allowing organizations to handle unknown and zero-day attacks.

The main difficulty faced by detection solutions is the impossible trade-off between security and usability. Namely, all threats need to be deeply analyzed, but security teams simply cannot make employees wait while they address these issues, which would reduce productivity and staff morale.

Ultimately, this will often result in security features, especially at the network level, simply being disabled. Intel Security found that more than 30% of organizations disable network-based security features for this exact reason. Malware authors know this, and therefore create attacks that simply lie dormant for a period of time to bypass the network sandbox. Malware has evolved a range of methods for avoiding network security products, including:

• Delayed onset

• Detecting virtualized environments

• Checking the number of CPU cores (a network sandbox usually presents only one)

• Checking whether the user is real (monitoring mouse movement, etc.)

• Exploiting the virtual environment to escape
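To make the first three evasion techniques above concrete, here is a minimal sketch of the kind of environment checks an evasive sample might run before detonating. The function name, thresholds and chosen signals are illustrative assumptions for defensive understanding, not taken from any real malware family or product.

```python
import os
import time

def looks_like_sandbox(min_cores=2, startup_delay=0):
    """Illustrative sandbox-evasion checks of the kind described above.

    min_cores and startup_delay are hypothetical thresholds: real samples
    combine many more signals (uptime, mouse movement, VM artifacts).
    """
    if startup_delay:
        # "Delayed onset": sleep past the sandbox's short analysis window.
        time.sleep(startup_delay)
    # Network sandboxes often expose a single virtual CPU core,
    # whereas real user machines almost always have several.
    cores = os.cpu_count() or 1
    if cores < min_cores:
        return True
    return False
```

A defender can flip the same logic around: the fact that these checks are cheap for attackers is exactly why detection alone cannot be the whole strategy.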

With even the most effective network based defense facing the possibility of a breach, how can security admins ensure the same doesn’t happen to endpoints? The most effective way to complement a strong network defense is by reducing the attack surface of the endpoint.

This most commonly involves three key steps:

1. Removing administrator privileges – an extremely easy and cost-effective way to block malware from accessing the system and sensitive data.

2. Application whitelisting – allow only known, corporate-approved applications to run, preventing attackers from launching new malicious applications.

3. Sandboxing – isolate unknown or untrusted content, such as websites, email attachments and downloaded documents, away from sensitive corporate data to prevent attackers from accessing anything of value.
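The whitelisting step above can be sketched as a hash-based allowlist check: a binary runs only if its cryptographic hash matches one on an approved list. The helper names and the choice of SHA-256 are assumptions for illustration, not a specific product’s implementation.

```python
import hashlib

def sha256_of(path):
    """Hash a binary on disk; the allowlist stores hashes, not filenames,
    so a renamed or tampered file will not match."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_whitelisted(path, approved_hashes):
    """Allow execution only if the file's hash is on the approved list
    (a hypothetical stand-in for a corporate whitelist policy)."""
    return sha256_of(path) in approved_hashes
```

Hashing rather than matching on filename or path is the usual design choice here, since it means an attacker cannot bypass the policy simply by naming malware after an approved application.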

A bank doesn’t leave the vault door open just because there is a security guard at the door – it starts from the vault and layers security outward. If the endpoint isn’t secure, and security admins do not ensure that both systems work in tandem, companies risk losing data, intellectual property, resources, money and, most invaluably, trust – in other words, everything.

(About the author: James Maude is a senior security engineer at endpoint protection company Avecto.)
