Combining agile software development concepts in an increasingly cyber-security conscious world is a challenging hurdle for many organisations. We initially touched upon this in a previous article — An Elephant in Ballet Slippers? Bringing Agility To Cyber Security — in which Hivint discussed the need to embrace agile concepts in cyber security through informal peer-to-peer sharing of knowledge with development and operations teams and the importance of creating a culture of security within the organisation.
One of the most common and possibly biggest challenges when incorporating agility into security is the ability to effectively integrate security practices such as the use of Static Application Security Testing (SAST) tools in an agile development environment. The ongoing and rapid evolution of technology has served as a catalyst for some fast-paced organisations — wishing to stay ahead of the game — to deploy software releases on a daily basis. A by-product of this approach has been the introduction of agile development processes that have little room for security.
Ideally, security reviews should happen as often as possible prior to final software deployment and release, including prior to the transition from the development to staging environment, during the quality assurance process and finally prior to live release into production. However, these reviews will often require the reworking of source code to remediate security issues that have been identified. This obviously results in time imposts, which is often seen as a ‘blocker’ to the deployment pipeline. Yet the increase in media coverage of security issues in recent years highlights the importance of organisations doing all that they can to mitigate the risks of insecure software releases. This presents a significant conundrum: how do we maintain agility and stay ahead of the game, but still incorporate security into the development process?
One way of achieving this is through the use of a ‘hybrid’ approach that ensures any new software libraries, platforms or components being introduced into an organisation are thoroughly tested for security issues prior to release into the ‘agile’ development environment. This includes internal and external frameworks such as the reuse of internally created libraries or externally purchased software packages. Testing of any new software code introduced into an IT environment — whether externally sourced or internally produced — is typically contemplated as part of a traditional information security management system (ISMS) that many organisations have in place. Once that initial testing has taken place and appropriate remediation occurs for any identified security issues, the relevant software components are released into the agile environment and are able to be used by developers to build applications without the need for any further extensive testing. For example, consider a .NET platform that implements a cryptographic function using a framework such as Bouncy Castle. Both the platform and framework are tested for security issues using various types of testing methodologies such as vulnerability assessments and penetration tests. The developers are then allowed to use them within the agile development environment for the purposes of building their applications.
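As an illustrative sketch of this hybrid approach (the registry format, file-free layout and component names below are assumptions, not a specific Hivint tool), a pipeline gate might check a build's dependencies against the register of components already tested under the ISMS, blocking anything that hasn't been through the assurance process:

```python
# A minimal sketch of a pre-release gate for the 'hybrid' approach: only
# components already security-tested under the ISMS may enter the agile
# development environment. The registry contents here are illustrative.

# Components that have passed vulnerability assessment / penetration
# testing, mapped to the versions that were actually tested.
APPROVED = {
    "BouncyCastle.Cryptography": {"2.3.0", "2.3.1"},
    "Newtonsoft.Json": {"13.0.3"},
}

def unvetted_components(dependencies, approved=APPROVED):
    """Return (name, version) pairs that still need ISMS security testing."""
    return [(name, ver) for name, ver in dependencies
            if ver not in approved.get(name, set())]

# Example: a new major version triggers the full assurance process,
# while the already-tested crypto library passes straight through.
deps = [("BouncyCastle.Cryptography", "2.3.1"), ("Newtonsoft.Json", "14.0.0")]
blocked = unvetted_components(deps)
print(blocked)  # [('Newtonsoft.Json', '14.0.0')]
```

In practice a check like this would run automatically in the build pipeline, so developers get immediate feedback when a new or upgraded component needs to go through the ISMS testing process first.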
When a new feature or software library / platform is required (or a major version upgrade to an existing software library / platform occurs), an evaluation will need to occur in conjunction with the organisation’s security team to determine the extent of the changes and the risks this will introduce to the organisation. If the changes / additions are deemed significant, then the testing and assurance processes contemplated by the overarching ISMS will need to be followed prior to their introduction into the agile development environment.
This hybrid approach provides the flexibility that’s required by many organisations seeking an agile approach to software development, while still ensuring there is an overarching security testing and assurance process that is in place. This approach facilitates fast-paced development cycles (organisations can perform daily or even hourly code releases without having to go through various types of security reviews and testing), yet still enables the deployment of software that uses secure coding principles.
It may be that fitting the ballet slippers (agility) onto the elephant (security) is not as improbable a concept as it once seemed.
As cyber security as a field has grown in scope and influence, it has effectively become an ‘ecosystem’ of multiple players, all of whom either participate in or influence the way the field develops and/or operates. At Hivint, we believe it is crucial for those players to collaborate and work together to enhance the security posture of communities, nations and the globe, and that security consultants have an important role to play in facilitating this goal.
The ecosystem untwined
The cyber security ecosystem can broadly be divided into two categories, with some players (e.g. governments) having roles in both categories:
The macro level consists of those stakeholders who are in a position to exert influence on the way the cyber security field looks and operates at the micro-level. Key examples include governments, regulators, policymakers and standards-setting organisations and bodies (such as the International Organization for Standardization, the Internet Engineering Task Force and the National Institute of Standards and Technology).
The micro level consists of those stakeholders who, both collectively and individually, undertake actions on a day-to-day basis that affect the community’s overall cyber security posture (positively or negatively). Examples include end users/consumers, governments, online businesses, corporations, SMEs, financial institutions and security consultants (although as we’ll discuss later, the security consultant has a unique role that bridges across the other players at the micro-level).
The macro level has, in the past, been somewhat muted in its involvement in influencing developments in cyber security. Governments and regulators, for example, often operated at the fringes of cyber security and primarily left things to the micro-level. While collaboration occurred in some instances (for example, in response to cyber security incidents with national security implications), it was by no means expected.
The formalisation of collaborative security
This is rapidly changing. We are now regularly seeing more formalised models being (or planning to be) introduced to either strongly encourage or require collaboration on cyber security issues between multiple parties in the ecosystem.
Recent prominent examples include proposed draft legislation in Australia that would, if implemented, require nominated telecommunications service providers and network operators to notify government security agencies of network changes that could affect the ability of those networks to be protected, proposals for introducing legislative frameworks to encourage cyber security information sharing between the private sector and government in the United States, and the introduction of a formal requirement in the European Union for companies in certain sectors to report major security incidents to national authorities.
There are any number of reasons for this change, although the increasing public visibility given to cyber security incidents is likely at the top of the list (in October alone we have seen two of Australia’s major retailers suffer security breaches). In addition, there is a growing predilection toward collaborative models of governance in a range of cyber topic areas that have an international dimension (for example, the internet community is currently involved in deep discussions around transitioning the governance model for the internet’s DNS functions away from US government control towards a multi-stakeholder model). With cyber security issues frequently having a trans-national element — particularly discussions around setting ‘norms’ of conduct around cyber security at an international level — it’s likely that players at the macro-level see this as an appropriate time to become more involved in influencing developments in the field at the national level.
Given this trend, it’s unlikely to be long before the macro-level players start to require compliance with minimum standards of security at the micro-level. As an example, the proposed Australian legislation referred to above would require network operators and service providers to do their best (by taking all reasonable steps) to protect their networks from unauthorised access or interference. And in the United States, a Federal Court of Appeals recently decided that their national consumer protection authority, the Federal Trade Commission, had jurisdiction to determine what might constitute an appropriate level of security for businesses in the United States to meet in order to avoid potential liability. In Germany, legislation recently came into effect requiring minimum security requirements to be met by operators of critical infrastructure.
Security consultants — the links in the collaboration chain
Whatever the reasons for the push towards ‘collaborative’ security, it’s the micro-level players who work in the cyber security field day-to-day who will ultimately need to respond as more formal expectations are placed on players at the macro-level with regards to their security posture.
Hivint was in large part established to respond to this trend — we believe that security consultants have a crucial role to play in this process, including through building a system in which the outputs of consulting projects are shared within communities of interest who are facing common security challenges, thereby minimising redundant expenditure on security issues that other organisations have already faced. This system is called “The Security Colony” and is available now. For more information on the reasons for its creation and what we hope to achieve, see our previous article on this topic.
We also believe there is a positive linkage between facilitating more collaboration between players at the micro-level of the ecosystem, and encouraging the creation of more proactive security cultures within organisations. Enabling businesses to minimise expenditure on security problems that have already been considered in other consulting projects enables them to focus their energies on implementing measures to encourage more proactive security — for example, as we discussed in a previous article, by educating employees on the importance of identifying and reporting basic security risks (such as the inappropriate sharing of system passwords). And encouraging a more proactive security culture within organisations will ultimately strengthen the nation’s overall cyber security posture and benefit the community as a whole.
As enterprise IT and development teams embrace Agile concepts more and more, we are seeing an increased need for cyber security teams to be similarly agile and able to adapt to rapidly evolving environments. Cyber security teams that will not or cannot make the necessary changes will eventually find themselves irrelevant; too far removed from the function and flow of the organisation to provide meaningful value, resulting in an increased risk for the organisation and its interests.
So, how do we fit the elephant (cyber security) with ballet slippers (agility)?
Firstly, in an age of devops, continuous integration and continuous deployment, it is critical to understand the evolving role of the cyber security team. The team’s focus on the rigorous definition, enforcement and assurance of security controls is giving way to active education, collaboration and continual improvement within non-traditional security functions. This is primarily because the developers, the operations team and the sysadmins have all become the front line for the security team. These teams spend their working lives making decisions that will impact the security of the products and platforms, and ultimately the security of the enterprise. Rather than risk being seen as the ‘department of no’, the cyber security team needs to embrace the change that agile development brings and find ways to improve the enterprise through enhancing the skills and capabilities of these teams.
First and foremost is education. If the devops team doesn’t know about, or even worse, doesn’t value, security controls and secure practices, then the systems they develop and maintain will never be secure. It is the role of the cyber security team to ensure that all members of the development and operations teams understand that security doesn’t need to be difficult: it can be implemented well if it is inherent to the development process. This is typically achieved through ongoing training and education, both formal and informal.
Secure development and devops training courses are widely available and are absolutely a valuable part of the toolkit, but they tend to be rather static in nature, and bad habits often creep back in over time. Informal education through peer review, feedback and information sharing is far more consistent and reliable, as long as a clear security ethos is established for the team to work from. This is particularly the case for the senior members of the team passing on their knowledge to newer or less experienced members.
Security champions are crucial in filling this role. Ideally a security champion is a member of the security team that works with the development team on a daily, even hourly, basis. One of the most important parts of this role is that the security champion needs to be able to ‘speak geek’ and understand the challenges facing the team when trying to rapidly develop applications. A background in development or devops means that they can speak from experience and be empathetic when dealing with the development teams. The security advice they provide needs to be pragmatic, weighing up the relative risks and benefits, and it needs to be delivered in a way that is meaningful to the rest of the development team.
An ability to get their ‘hands dirty’ and actually assist in aspects of code development or systems maintenance is definitely a bonus. The security champion also needs to drive the implementation of tools and services to support the rapid identification, assessment and remediation of security vulnerabilities in code or platforms. Wherever possible these security tools need to be seamlessly built into the existing development, deployment and testing tools (think Bamboo, Jira, Jenkins, Circle CI and Selenium) so that security assessment becomes transparent to the overall development and deployment processes. The security champion should also be responsible for bringing a cyber-security context into the design stages of development. This is often best achieved by flagging stories (Agile-speak for detailed use-cases) as ‘secure’, meaning that particular attention needs to be paid to that component — user input, authentication, database calls, and connections to external systems/APIs will all require additional analysis.
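As a sketch of how such a tool might plug into one of those pipelines, the step below filters a SAST scanner’s findings by severity and fails the build if anything serious remains. The findings structure and severity labels are generic assumptions, not any particular tool’s output format:

```python
# A generic CI gate over SAST output: findings at or above a severity
# threshold block the build, so vulnerable code never reaches deployment.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, threshold="high"):
    """Return the findings serious enough to break the build."""
    floor = SEVERITY_RANK[threshold]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= floor]

# Example findings, as might be parsed from a scanner's JSON report.
findings = [
    {"rule": "sql-injection", "file": "orders.py", "severity": "critical"},
    {"rule": "weak-hash", "file": "auth.py", "severity": "medium"},
]
blocking = gate(findings)
build_ok = not blocking
print(build_ok)  # False: the critical SQL injection finding blocks the build
```

Because the gate runs on every commit, developers get security feedback at the same speed as their unit tests, rather than waiting for a scheduled review.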
Finally, and possibly most importantly, it is critical that organisations develop a culture of security. Insecure practices must be treated as a real no-no in day-to-day business behaviour. A good comparison is the nature of OH&S (Occupational Health & Safety) practices in the workplace today. 15–20 years ago, the typical workplace was not as safe as it is now. Things like trip hazards and puddles of liquid weren’t necessarily seen as a big issue.
Nowadays staff recognise them as a safety risk and have been trained to respond accordingly or raise the issue with someone who will. Cyber security needs to arrive at the same point. Staff need to be aware of what constitutes ‘safe’ and ‘unsafe’ cyber security behaviours, and feel confident in calling out unsafe practices.
Observing a team member sharing a password or leaving a workstation unlocked shouldn’t be something that is seen as normal practice — it needs to be identified as a risk and addressed immediately, with the security team being part of the solution to the problem. Pointing out an insecure practice but not providing a practical solution will only alienate the security team. As staff become aware and feel confident in calling out unsafe activities, with the support of the security team to address them, this becomes part of the cultural DNA and is more readily passed on to new team members and new initiatives.
Agile development does present a number of challenges to a cyber-security team. Trying to adhere to the same practices and controls that were implemented 5–10 years ago is ultimately destined for failure, as the rate of change is too rapid for them to be effective. Adapting practices to maintain relevance to the evolving environment is the only way to remain effective and best protect the organisation and its customers.
One of the key objectives for an information security professional is providing assurance that the systems which are implemented, or are soon to be implemented, are secure. A large part of this involves engaging with business and project teams proactively to ensure that security needs are met, while trying hard not to interfere with on-time project delivery.
Unfortunately, we’re not very good at it.
Recently, I agreed to conduct a security risk assessment (SRA) of a client’s SFTP solution, which they intended to use to transfer files to a vendor in place of their existing process of emailing the files. When I sat down to discuss the security requirement with the Solution Designer, he told me that an SRA had already been done: not just on the same design pattern, but on the exact same SFTP solution. They were simply adding an additional existing vendor to the solution to improve the security of their inter-company file transfer process. The organisation didn’t know how to go about evaluating the risks of this change, so it used the ‘best fit’ security-related process available to it, which just happened to be an SRA.
Granted, in the example above, a new vendor might need to be assessed for the operational risk associated with them uploading files to our client’s environment, or a reassessment might be warranted if the SFTP solution’s configuration had changed. But in this case, the vendor had been working with them for some time, so there was no further risk introduced, just a more secure business process: the risk was getting lower, not higher.
While this is only one example, this scenario is not uncommon across many organisations we work with, across many industry sectors, and it’s only going to get harder. With more organisations moving to an agile development methodology and cloud deployments, ensuring security keeps up with new developments throughout the business is going to be critical to maintaining at least a modicum of security in these businesses.
So, if you’re getting asked to perform a risk assessment the day before go-live (yes, this still happens), you’re doing it wrong.
If you’re routinely performing your assessments of systems and technology within the project lifecycle, you’re doing it wrong.
If you’re engaging with your project teams with policy statements and standards documents, yes, unfortunately you’re also doing it wrong.
Projects are where things — often big things — change in an organisation’s business or technology environment. And where there is change, there is generally a key touch point for the security team. Projects will generally introduce the biggest potential vulnerabilities to your environment, but if there is an opportunity to positively influence the security outcomes at your organisation, it will also be as part of a project.
Once a system is in, it’s too late. If you haven’t already given your input to get a reasonably secure system, the project team will have moved on, their budget will have gone with them, and you’ll be left filling out that risk assessment that sits on some executive’s desk waiting for the risk to be accepted. Tick.
But on the flip-side, if you’re not proactively engaging with project teams and your business to provide solutions for them, you’re getting in the way.
Let’s face it, no project manager wants to read through dozens of pages of security policy and discern the requirements for their project — you may as well have told them through interpretive dance.
So, what’s the solution?
The solution is to look to the mature field of IT Service Management, and the concept of having a Service Catalogue.
A Security Services Catalogue is two things:
Firstly, it is a list of the security and assurance activities which the security team offers, which are generally part of the system development lifecycle. These services may include a risk assessment, vulnerability assessment and penetration testing, and code review, among others. The important thing is that the services are well defined in terms of their inputs, outputs and process, and the required effort and price, so that the business and the project teams can effectively incorporate them into their budgets and schedules.
Secondly, it is a list of the security services already implemented within the organisation and operated by or on behalf of the security team, which have been through your assurance processes and are effectively “approved for use” throughout the organisation. These services would be the implementation of a secure design pattern or blueprint, or form part of one of those blueprints. To get an idea, have a look at the OSA Security Architecture Landscape, or the Mozilla Service Catalog.
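As a rough sketch of the first kind of entry, each assurance service could be recorded with its inputs, outputs, effort and price so project teams can budget and schedule around it. The field names, services and figures here are illustrative assumptions, not a prescribed catalogue format:

```python
# A minimal data model for Security Services Catalogue entries, so that
# effort and price are visible to project teams up front.
from dataclasses import dataclass

@dataclass
class SecurityService:
    name: str
    inputs: list
    outputs: list
    effort_days: int
    price_aud: int
    lifecycle_stage: str  # the SDLC stage in which it must be completed

catalogue = [
    SecurityService("Security Risk Assessment",
                    inputs=["solution design", "data classification"],
                    outputs=["risk register", "treatment plan"],
                    effort_days=5, price_aud=8000,
                    lifecycle_stage="design"),
    SecurityService("Penetration Test",
                    inputs=["test environment", "credentials"],
                    outputs=["findings report"],
                    effort_days=10, price_aud=20000,
                    lifecycle_stage="pre-release"),
]

# A project team can now price its assurance work before scheduling it.
total = sum(s.price_aud for s in catalogue)
print(total)  # 28000
```

Even a simple structure like this turns security assurance from an open-ended negotiation into a line item a project manager can plan for.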
Referring quickly to Mozilla’s approach, a good example is their logging and monitoring (SIEM) service. Assuming a regulatory and policy requirement for logging and monitoring of all systems throughout your environment, a standardised service allows the project team to save money and time. Of course, using the already implemented tool is also common sense, but writing it down in a catalogue ensures that the security services on offer are communicated to the business, and that the logging and monitoring function for your new system is a known quantity and effective.
The easiest way to describe this approach is “control inheritance” — where a particular implementation of a control is used by a system, that system inherits the characteristics of that control. Think of Active Directory — an access control mechanism. Once you’ve implemented and configured it securely, and it has been evaluated, you have a level of assurance that the control is effective. For all systems then using Active Directory, you have a reasonable level of assurance that they are access controlled, and you can spend your time evaluating other security aspects of the system. So communicate to your organisation that they can use it via your Security Service Catalogue.
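The inheritance idea can be sketched in a few lines: a system that consumes already-assured services inherits the control areas those services cover, leaving only the residual areas for the project’s own assurance work. The service names and control areas below are illustrative assumptions:

```python
# A sketch of "control inheritance": approved services each cover certain
# control areas, and a system using them inherits that coverage.
APPROVED_SERVICES = {
    "Active Directory": {"access control", "authentication"},
    "Splunk monitoring": {"logging", "security monitoring"},
}

# Control areas every production system must evidence (illustrative).
REQUIRED_AREAS = {"access control", "authentication",
                  "logging", "security monitoring", "encryption at rest"}

def residual_assurance(services_used):
    """Return the control areas the project must still evidence itself."""
    inherited = set()
    for svc in services_used:
        inherited |= APPROVED_SERVICES.get(svc, set())
    return REQUIRED_AREAS - inherited

gaps = residual_assurance(["Active Directory", "Splunk monitoring"])
print(sorted(gaps))  # ['encryption at rest']
```

The security team’s review effort then concentrates on the gaps, rather than re-evaluating Active Directory for every project that uses it.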
And if your Project team wants to get creative? No problem, but anything not in the catalogue needs to go through your full assurance process. That — quite rightly — means risk assessments, control audits, code reviews, penetration tests, and vulnerability scans, which accurately reflects the fact that everything will be much easier for everyone if they pick from the catalogue where possible.
So, how does this work in practice?
Well, firstly, start by defining what level of assurance you need for a system to go into production, or to meet compliance. For example, should you need to meet PCI compliance, you’ll at least have to get your system vulnerability scanned and penetration tested. Create your service catalogue around these, and define business rules for their use and the system development lifecycle stages in which they must be completed.
Secondly, you need to break down your environment into its constituent parts (specifically the security components), review and approve each of those parts, and add them to your Security Service Catalogue. Any system then using those security services as part of its functionality, inherits the security of those services, and you can have a degree of assurance that the system will be secure (at least to the degree that the system is solely comprised of approved components).
The benefits are fourfold. First, project teams can simply select the services they want to integrate with, knowing that those services meet the requirements of the security policy. No mess, no fuss. Second, projects go faster: project teams know what is expected of them, and aren’t held up by the security inquisitor demanding their resources’ time. Third, budget predictability: project teams know up front the costs which need to be included in their budget. They can also choose a security service which is a known quantity, which means there is a lower chance of a risk eventuating that requires them to pay to change aspects of the system to meet compliance or remediate a vulnerability.
Fourth, you don’t need to check the security of the components re-used by those projects over and over again. For example, you might use an on-premise Active Directory instance with which identity and access management is performed; or maybe it’s hosted in Azure. Perhaps you use Okta, a cloud-based SaaS identity and access control service. For logging and monitoring, you might use Splunk or AlienVault as your organisation-wide security monitoring service, or maybe you outsource it to AlertLogic. Whatever. Perform your due diligence, and add it to your catalogue.
Once it’s in your catalogue, you should assess it annually, as part of your business as usual security practices — firstly for risk, secondly at a technical level to validate your risk findings, and finally in a market context to see if there are better controls now available to address the same risk issue.
I’ve been part of a small team building a security certification and accreditation program from scratch, and have seen that the only way to scale the certification process, and ensure sufficient depth of security review across the multitude of systems present in most organisations, is to make sure unnecessary re-hashing of solution reviews is minimised, using these “control inheritance” principles.
Thirdly, develop a Security Requirements Document (SRD) template based upon your Security Services Catalogue. This is where you define the services available and requirements for your project teams, and make the choices really easy for them. Either use the services in the security services catalogue, or comply with all the requirements of the Password Policy, Access Control Policy, Encryption Policy, etc. After a time, your Project Lifecycle will mature, your Security Services will become more standardised and robust, and your life will become significantly easier.
Lastly, get involved with your project teams. Your project teams are not security experts; you are. The sooner you make it easy for them to access the resources and expertise you have available, the sooner they can make the best decisions for your organisation, and the more secure your organisation will be. Make the secure way the easy way, and everyone’s life will be a little more comfortable.
Article by Ben Waters, Senior Security Advisor, Hivint
Colleagues, the way we are currently approaching information security is broken.
This is especially true with regard to the way the industry currently provides, and consumes, information security consulting services. Starting with Frederick Winslow Taylor’s “Scientific Management” techniques of the 1890s, consulting is fundamentally designed for companies to get targeted specialist advice to allow them to find a competitive advantage and beat the stuffing out of their peers.
But information security is different. It is one of the most wildly inefficient things to try to compete on, which is why most organisations are more than happy to say that they don’t want to compete on security (unless their core business is, actually, security).
Why is it inefficient to compete on security? Here are a couple of reasons:
Customers don’t want you to. Customers quite rightly expect sufficient security everywhere, and want to be able to go to the florist with the best flowers, or the best priced flowers, rather than having to figure out whether that particular florist is more or less secure than the other one.
No individual organisation can afford to solve the problem. With so much shared infrastructure, so many suppliers and business partners, and almost no ability to recoup the costs invested in security, it is simply not cost-viable to throw the amount of money really needed at the problem. (Which, incidentally, is why we keep going around in circles saying that budgets aren’t high enough — they aren’t, if we keep doing things the way we’re currently doing things.)
Some examples of how our current approach is failing us:
We are wasting money on information security governance, risk and compliance
There are 81 credit unions listed on the APRA website as Authorised Deposit-Taking Institutions. According to the ABS, in June 2013 (the most recent data), there were 77 ISPs in Australia with over 1,000 subscribers. The thought that these 81 credit unions would independently be developing their own security and compliance processes around security, and the 77 ISPs are doing the same, despite the fact that the vast majority of their risks and requirements are going to be identical to those of their peers, is frightening.
The wasted investment in our current approach to information security governance is extreme. Five or so years ago, when companies started realising that they needed a social media security policy, hundreds of organisations engaged hundreds of consultants, to write hundreds of social media security policies, at an economy-wide cost of hundreds of thousands, if not millions, of dollars. That. Is. Crazy.
We need to go beyond “not competing” and cross the bridge to “collaboration”. Genuine, real, sharing of information and collaboration to make everyone more secure.
We are wasting money when getting technical security services
As a technical example, I met recently with a hospital where we will be doing some penetration testing. We will be testing one of their off-the-shelf clinical information system software packages. The software package is enormous — literally dozens of different user privilege levels, dozens of system inter-connections, and dozens of modules and functions. It would easily take a team of consultants months, if not a year or more, to test the whole thing thoroughly. No hospital is going to have a budget to cover that (and really, they shouldn’t have to), so rather than the 500 days of testing that would be comprehensive, we will do 10 days of testing and find as much as we can.
But as this is an off-the-shelf system, used by hundreds of hospitals around the world, there are no doubt dozens, maybe even hundreds, of the same tests happening against that same system this year. Maybe there are 100 distinct tests, each of 10 days’ duration being done. That’s 1,000 days of testing — or more than enough to provide comprehensive coverage of the system. But instead, everyone is getting a 10 day test done, and we are all worse off for it. The hospitals have insecure systems, and we — as potential patients and users of the system — wear the risk of it.
The system is broken. There needs to be collaboration. Nobody wants a competitive advantage here. Nobody can get a competitive advantage here.
So what do we do about it?
There is a better way, and Hivint is building a business and a system that supports it. This system is called “The Colony”.
It is an implementation of what we’re calling “Community Driven Security”. This isn’t crowd-sourcing but involves sharing information within communities of interest who are experiencing common challenges.
The model provides benefits to the industry both for the companies who today are getting consulting services, and for the companies who can’t afford them:
Making consulting projects cheaper the first time they are done. If a client is willing to share the output of a project (that has, of course, been de-sensitised and de-identified) then we can reduce the cost of that consulting project by effectively “buying back” the IP being created, in order to re-use it. Clients get the same services they always get, and the sharing of the information will have no impact on their security or competitive position. So why not share it and pocket the savings?
Making that material available to the community and offering an immediate return on investment. Through our portal — being launched in June — for a monthly fee of a few hundred dollars, subscribers will be able to get access to all of that content. That means that for a few hundred dollars a month, a subscriber will be able to access the output from hundreds of thousands of dollars worth of projects, every month.
Making subsequent consulting projects cheaper and faster. Once we’ve completed a certain project type — say, developing a suite of incident response scenarios and quick reference guides — then the next organisation who needs a similar project can start from that and pay only for the changes required (and if those changes improve the core resources, those changes will flow through to the portal too).
Identifying GRC “Zero Days”. Someone, somewhere, first identified that organisations needed a social media security policy, and got one developed. There was probably a period of months, or even years, between that point and when it became ubiquitous. Through the portal, organisations who haven’t even contemplated that such a need may exist would be able to see that it has been identified and delivered, and if they want to address the risk before it materialises for them, they have the chance. And there is no incremental cost over membership to the portal to grab it and use it.
Supporting crowd-funding of projects. The portal will provide the ability for organisations to effectively ‘crowd fund’ technical security assessments against software or hardware that is used by multiple organisations. The maths is pretty simple: if two organisations are each looking at spending $30,000 to test System X, getting 15 days of testing for that investment, then by each putting $20,000 into a central pool to test System X they’ll get 20 days of testing and save $10,000 each. More testing, for lower cost, resulting in better security. Everyone wins.
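The pooled-funding arithmetic above can be sketched in a few lines (the day rate is derived from the example's figures; the helper functions are illustrative only):

```python
# Crowd-funded testing maths, using the figures from the example above.
DAY_RATE = 2_000  # $30,000 buys 15 days of testing => $2,000/day

def solo_days(budget):
    """Days of testing one organisation gets going it alone."""
    return budget // DAY_RATE

def pooled_days(contributions):
    """Days of testing a shared pool buys."""
    return sum(contributions) // DAY_RATE

print(solo_days(30_000))              # each organisation alone: 15 days
print(pooled_days([20_000, 20_000]))  # pooled: 20 days of combined testing
print(30_000 - 20_000)                # saving per organisation: $10,000
```

The same helper makes the general case obvious: every additional contributor adds days of coverage while reducing everyone's individual outlay.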
What else is going into the portal?
We have a roadmap that stretches well into the future. We will be including Threat Intelligence, Breach Intelligence, Managed Security Analytics, the ability to interact with our consultants and ask either private or public questions, the ability to share resources within communities of interest, project management and scheduling, and a lot more. Version 1 will be released in June 2015 and will include the resource portal (i.e. the documents from our consulting engagements), Threat Intelligence and Breach Intelligence, plus the ability to interact with our consultants and ask private or public questions.
“Everyone” can’t win. Who loses?
The only people who might potentially lose out of this are security consultants. But even there we don’t think that will be the case. It is our belief that the market is supply-side constrained — in other words, we believe we are going to be massively increasing the ‘output’ for the economy-wide consulting investment in information security; but we don’t expect companies will spend less (they’ll just do more, achieving better security maturity and raising the bar for everyone).
So who loses? Hopefully, the bad guys, because the baseline of security across the economy gets better and it costs them more to break in.
Is there a precedent for this?
The NSW Government Digital Information Security Policy has as a Core Requirement, and a Minimum Control, that “a collaborative approach to information security, facilitated by the sharing of information security experience and knowledge, must be maintained.”
A lot of collaboration on security so far has been about securing the collaboration process itself. For example, that means health organisations collaborating to ensure that health data flowing between the organisations is secure throughout that collaborative process. But we believe collaboration needs to be broader: it needs to not just be about securing the collaborative footprint, but rather securing the entirety of each other’s organisations.
Banks and others have for a long time had informal networks for sharing threat information, and the CISOs of banks regularly get together and share notes. The CISOs of global stock exchanges regularly get together similarly. There’s even a forum called ANZPIT, the Australian and New Zealand Parliamentary IT forum, for the IT managers of various state and federal Parliaments to come together and share information across all areas of IT. But in almost all of these cases, while the meetings and the discussions occur, the on-the-ground sharing of detailed resources happens much less.
The Trusted Information Sharing Network (TISN) has worked to share — and in many cases develop — in-depth resources for information security. (In our past lives, we wrote many of them). But these are $50K-100K endeavours per report, generally limited to 2 or 3 reports per year, and generally provide a fairly heavyweight approach to the topic at hand.
Our belief is that while “the 1%” of attacks — the APTs from China — get all the media love, we can do a lot of good by helping organisations with very practical and pragmatic support to address the 99% of attacks that aren’t State-sponsored zero-days. Templates, guidelines, lists of risks, sample documents, and other highly practical material are the core of what organisations really need.
What if a project is really, really sensitive?
Once project outcomes are de-identified and de-sensitised, they’re often still very valuable to others, and not really of any consequence to the originating company. If you’re worried about it, you can review the resources before they get published.
So how does it work?
You give us a problem; we’ll scope it, quote it, and deliver it with expert consultants (this part of the experience is the same as your current consulting engagements). We offer a reduced fee for service delivery (the percentage reduction depends on the re-usability of the output). Created resources, documents, and de-identified findings become part of our portal for community benefit.
Great. Where to from here?
There are two things we need right now:
1. Consulting engagements that drive the content creation for the portal. Give us the chance to pitch our services for your information security consulting projects. We’ve got a great team, the costs are lower, and you’ll also be helping our vision of “community driven security” become a reality. Get in touch and tell us about your requirements to see how we can help.
2. Sign up for the portal (you’ve done this bit!) and get involved — send us some questions, download some documents, subscribe if you find it useful.
And of course we’d welcome any thoughts or input. We are investing a lot into this, and are excited about the possibilities it is going to create.
Over the last few years, it appears that although certain industries are targeted by cyber attacks more than others, the methods used across the board are usually the same.
Prominent incidents identified over 2016–2017 almost always involved one of the following:
Phishing and other email scams
Ransomware
Botnets and DDoS
Supply Chain Security
In this article we investigate what cyber-attacks have been prominent over the last 12 months, what trends will continue for the remainder of 2017, and what can be done to minimise your risk of attack.
Phishing and other email scams
Phishing, spear-phishing (targeted phishing of specific individuals) and other email scams continue to be a major threat to businesses. In an era of large-scale security infrastructure investment, users are consistently the weak link in the chain.
The Symantec Internet Security Threat Report 2017 and ENISA Threat Landscape Report 2016 state that the threat of phishing is intensifying even as the overall number of attacks gradually declines, which suggests an increase in the sophistication and effectiveness of attacks. In all likelihood, this is due to cyber criminals moving away from volume-based attacks to more targeted and convincing scams. This transition is motivated by the higher success rate of tailored attacks, although these require greater effort by way of research into viable targets using open source material such as social media, as well as social engineering.
This shift in approach is consistent with the observed growth of business-focussed email scams in the last 18 months. Cyber attackers begin by conducting extensive research on businesses they wish to target in order to identify key staff members — particularly those with privileged access, a degree of control over the business’ financial transactions, or in a position of authority.
These scams typically involve cyber attackers crafting emails that request an urgent transfer of funds, seemingly from a trusted party such as a senior manager in the business or an external contractor / supplier who is regularly dealt with. Following a global survey of business email scams, the FBI’s Internet Crime Complaint Center reported this type of attack continues to surge in prominence, with:
US and foreign victims reporting 24,345 cases by December 2016 — a significant increase from only 668 reported cases just six months earlier (the actual number is likely to be much higher as many cases go unreported).
Attackers attempting to steal a total of USD$5.3 billion through reported business email scams by the end of 2016, compared to USD$3.1 billion only 6 months earlier.
Ransomware
Ransomware is malicious software that encrypts users’ data and demands payment, typically in cryptocurrencies such as Bitcoin, for the purported safe return of files via a decryption key. The most prominent example of this form of attack was the WannaCry attack of May 2017, in which cybercriminals distributed the ransomware strain via an underlying software vulnerability in the Microsoft Windows operating system.
Due to the relatively low ‘barrier to entry’ and potentially lucrative rewards for even inexperienced cyber attackers, we have continued to see significant growth in the use of ransomware since 2016. In January 2016, ransomware accounted for only 18% of the global malware payloads delivered by spam and exploit kits; ten months later, ransomware accounted for 66% of all malware payloads — an increase of 267%.
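As a quick sanity check on the growth figure quoted above:

```python
# Ransomware's share of malware payloads delivered by spam and exploit
# kits (percentages quoted in the text).
jan_2016_share = 18   # January 2016
oct_2016_share = 66   # ten months later

increase = (oct_2016_share - jan_2016_share) / jan_2016_share * 100
print(f"{increase:.0f}% increase")  # matches the 267% quoted
```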
Not only is ransomware one of the most popular attack vectors for cyber attackers, it is also among the most harmful. The cost of the ransom is only one aspect to consider — system downtime can have a significant impact on sales, productivity and customer or supplier relationships. In some cases (e.g. medical facilities), ransomware infections could potentially cost lives.
The success rate of ransomware is largely attributable to the exploitation of an organisation’s end users who typically have limited training and expertise in cyber security. In addition, once ransomware has infiltrated an organisation, many find it difficult to effectively resolve the effects without paying the ransom demanded by the attackers.
There is no guarantee attackers will provide the key for decrypting files if the ransom is paid, however, and relying on payment of the ransom as a ‘get out of jail’ tactic is a risky choice. Moreover, payment of the ransom encourages these sorts of attacks and furthers the development of ransomware technology. Hivint’s article ‘Ransomware: a History and Taxonomy’ provides an in-depth analysis of the growing problem of ransomware.
Ransomware is likely to be a thorn in the side of organisations for some time to come, and through increasingly diverse avenues. The 2017 SonicWall Annual Threat Report highlights that there is likely to be a greater focus on the industrial, medical and financial sectors due to their particularly low tolerance for system downtime or loss of data availability.
Similarly, internetworked physical devices — often referred to as the Internet of Things (IoT) — are also increasingly being targeted because, at present, they are generally not designed with security as a central consideration. While the majority of IoT devices can simply be re-flashed to recover from an attack as they do not store data, organisations and end users may rely on critical devices where any amount of downtime is problematic, such as medical devices or infrastructure. How the design and implementation of IoT devices shifts in response to the growing threat of ransomware will be one of the more interesting spaces to watch for the remainder of 2017 and beyond.
Botnets and DDoS
As with ransomware, the increased inter-connectivity of everyday devices such as lights, home appliances, vehicles and medical instruments is leading to their increased assimilation into botnets to be used in distributed denial of service (DDoS) attacks.
Software on IoT devices is often poorly maintained and patched. Many new types of malware search for IoT devices with factory default or hardcoded user names and passwords and, after compromising them, use their Internet connectivity to contribute to DDoS attacks. Due to the rapidly increasing number of IoT devices, this is paving the way for attacks at a scale that DDoS mitigation firms may struggle to handle. The Thales 2017 Data Threat Report suggests that 6.4 billion IoT devices were in use worldwide by 2016, and that this number is forecast to increase to 20 billion devices by 2020.
While the growth of interconnected devices is inevitable, we expect that their rate of exploitation will stabilise in the next few years given the emergence of IoT security bodies such as IoTSec Australia and the IoT Security Foundation. It is likely that device manufacturers will also be pushed to comply with security standards and to make security a more central consideration during design.
Hacking toolkits are being made available online, some for free, effectively creating an open source community for cyber criminals. There are also paid business models known as “Malware-as-a-Service” for less experienced attackers, where payment is made for another attacker to run the campaign on their behalf. This reduces the barrier to entry for potential cyber attackers and also facilitates the rapid evolution of malware strains, making evasion of anti-malware endpoint protection tools more frequent. We expect this trend will continue as sophisticated cyber attackers increasingly move towards the malware-as-a-service business model.
Supply Chain Security
It’s important to be mindful that cyber attackers may also seek to exploit supply chain partners as a way to compromise the security of a business indirectly. The 2013 breach of US company Target is an example of this, as attackers stole remote access credentials from a third-party supplier of services. We have also seen reports of attacks against managed service providers here in Australia, as a way of indirectly compromising the providers’ customers.
What should you do?
The good news is that most of these threats can be mitigated with a small number of relatively basic controls in place — none of which should come as a surprise:
Patching
Keeping your systems patched and up-to-date can prevent cyber attackers from exploiting the vulnerabilities that allow them to install malicious software on your networks and systems. Unpatched Windows systems were the reason the WannaCry ransomware attack was so prolific.
User awareness training
Awareness training can significantly reduce the likelihood of malware compromising your organisation’s security. Users who can, among other things, confidently identify and report suspicious emails and exercise good removable media security practices will put your security team on the front foot.
Changing default password credentials
The main attack vector for IoT devices is factory default access credentials left unchanged after installation. Changing the password, or disabling the default accounts, will prevent the majority of attacks on IoT devices. This is also the case for hardware more generally, such as routers and end-user devices.
Segregate BYOD and IoT devices from other systems on your network
Placing IoT devices and uncontrolled bring-your-own devices (BYOD) on a separate network can isolate the effects of any active vulnerabilities from your critical systems.
Backup and recovery
Having all your critical data regularly backed up both offline and in the cloud can mitigate the risk of malware — particularly ransomware — from causing major damage to your business. It’s also just as important to regularly test your recovery plans to ensure they work effectively, since restoring systems to a previous state without significant downtime or loss of data is the key to damage control.
At https://portal.securitycolony.com we have a variety of resources that can help in strengthening your organisation’s preparedness for cyber attacks, including user awareness materials, incident response templates, security policies and procedures and a vendor risk assessment tool to help assess the security posture of your vendors’ internet-facing presence. Other resources available include an “Ask Hivint” forum for those more esoteric questions and breach monitoring to identify whether your users or domain has been caught up in a previous security incident.
If we are going to measure security, what exactly are we measuring?
If you can’t measure it, you can’t manage it – W. Edwards Deming
That is a great quote, and one I have heard a lot over the years; however, an interesting point about it is that it lacks context. What I mean is that if you look at page 35 of The New Economics, you will see that the quote is flanked by some further advice, namely:
It is wrong to suppose that if you can’t measure it, you can’t manage it — a costly myth.
Deming did believe in the value of measuring things to improve their management, but he was also smart enough to know that many things simply cannot be measured and yet must still be managed.
Managing things that cannot be measured, and the absence of context, is as good a segue as any to the subject of metrics in information security.
A recurring question that arises surrounding the use and application of metrics in security is “What metrics should I use?”
I have spent enough time in security to have seen (and truth be told, probably generated) an awful lot of rather pointless reports. I think I’m ready to attempt to explain what I think is going on here, and why “What metrics should I use?” might be the wrong question — and instead, there should be more of a focus on the context provided with metrics in order to create useful and meaningful information about an organisation’s security.
A Typical (& faulty) Approach
Here is a typical process we often see that leads to an organisation acquiring a security appliance or product of some sort (e.g. a firewall, or an IDS/IPS):
1. A security risk is identified
2. The security risk is assessed
3. A series of controls are prescribed to mitigate or reduce the risk, and one of those controls is some sort of security appliance / product
4. Some sort of product selection process takes place
5. Eventually a solution is implemented
Now we know (or we thought we knew) thanks to Dr Deming that in order to manage something, we first need to measure it, so we instinctively look at the inbuilt reporting on the chosen device (if we were a more mature organisation we might have thought about this at stage 4, and it may even have influenced our selection). We select some of the available metrics, and in less than ten minutes, somehow electro-magically, we have configured the device to email a large number of pie charts in a PDF to a manager at the end of every month.
However, the problem with the above chart is that it doesn’t actually mean anything — it has no context. The best way to demonstrate what I mean by context is to add a little.
Ok, that’s better, but in truth it’s still pretty meaningless, so let’s add some more.
Now we are beginning to end up with something meaningful. Immediately we can see that something appears to be going on in the world of cryptomalware and we can choose to react — the information provided now demonstrates a distinct trend and is clearly actionable.
But we can bring even more context into this. The points below provide some more suggestions for adding context, and will give you a feel for the importance of having as much context as possible to create meaningful metrics.
What about the threats that do not get detected? Are there any estimations available (e.g. in Security Intelligence reports) on how many of those exist? (Donald Rumsfeld’s ‘Known Unknowns’)
Can we add more historical data? More data means more reliable baselines, and the ability to spot seasonal changes
Could we collect data from peer organisations for comparison (i.e. — do we see more or less of this issue than everyone else)?
We have 4 categories, so perhaps we need a line for threats that do not fall into these categories?
What are the highest and lowest values we (and/or other companies in the industry) have ever seen for these threats?
Do we have the ability to pivot on other data — for example would you want to know if 96% of those infections were attributed to one user?
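As a small illustration of the difference context makes, the sketch below takes a series of monthly detection counts (invented numbers, not real data) and flags the current month against a historical baseline, turning a bare count into something actionable:

```python
from statistics import mean, stdev

# Invented monthly cryptomalware detection counts; the final value is the
# month we want to put in context.
history = [12, 15, 11, 14, 13, 16, 12, 14]
current = 41

baseline = mean(history)
spread = stdev(history)

# A simple z-score gives the count some context: how far outside the
# historical norm is this month?
z_score = (current - baseline) / spread
if z_score > 3:
    print(f"Anomaly: {current} detections vs a baseline of ~{baseline:.0f}")
```

The threshold and figures here are placeholders; the point is that the same raw number (41 detections) is meaningless on its own, but clearly actionable once baselined against history.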
The Impact of Data Visualisation
So now we have an understanding of context, what else should we consider?
Coming from the world of operational networking, I spent a lot of my time in a previous role getting visibility of very large carrier-grade networks, and it was my job to maintain hundreds of gateway devices such as firewalls, proxy servers, VPN concentrators, spam filters, intrusion detection & prevention systems and all the infrastructure that supported them.
At that time if you were to ask me what metrics I would like to collect and measure, the answer was simple — I wanted everything possible.
If a device had a way to produce a reading for anything, I found a way to collect it via SNMP and graph it using open source tools such as RRDTool and Cacti.
I created pages of graphs for device temperatures, memory usage, disk space, uptime, number of concurrent connections, network throughput, admin connections, failed logins etc.
The great thing about graphs is you can see anomalies very quickly — spikes, troughs, baselines, annual, seasonal and even hourly fluctuations give you insight. For example, gradual inclines or sudden flat-lines may mean more capacity is needed, whereas sharp cliffs typically mean something downstream is wrong.
Using these graphs and a set of automated alerts, I was able to predict problems that were well outside of my purview. For example, I often diagnosed failed A/C units in datacentres long before anyone else had raised alarms. I was able to detect when connected devices had been rebooted outside of change windows. I could even see when other devices had been compromised, because I could graph failed logon attempts for devices in the vicinity.
In the ten years or so since I was building out these graphs in Cacti, technologies for dashboarding, dynamic reporting, and automated alerting have come a long way, and it’s now easier than ever to collect data and produce very rich information — provided that you understand the importance of context, and you consider how actionable the information you produce will be to the end consumer.
While this write-up has focused particularly on context with respect to technical security metrics, it is important to remember that security is mainly about people, so you should always consider the softer metrics that cannot simply be collected by things such as SNMP polling or the parsing of syslogs. For example: is there a way to measure the number of users completing security awareness training, and see if this correlates with the number of people clicking on phishing links?
Would you want to know for instance if the very people who had completed security awareness training were more likely to click on phishy emails?
The bottom line is, security metrics, whether technically focused or otherwise, are relatively meaningless without context. While metrics aim to measure something, it’s the context in which the measurements are given which provides valuable and actionable information that organisations can use to identify and prioritise their security spend.
Article by Eric Pinkerton, Regional Director, Hivint
This morning, almost in anticipation of the Privacy Amendment Bill’s passage, the Australian Information Security Association (AISA) sent an email notifying its members of an incident affecting its members’ data:
We have made a commitment to you that we will always keep you up to date on information as and when we have it.
Today the new Board took the decision to voluntarily report to the Office of the Australia Information Commissioner an incident that occurred last year that could have potentially impacted the privacy of member data kept within the website. At the time, a member reported the issue to AISA and it was rectified by the then administrative team. What wasn’t done, and as we all know is best practice, was notification to you as members that this potential issue had occurred, and notification to the Australia Privacy & Information Commissioner.
Your member information is not at risk and the issue identified has been rectified.
The AISA Board takes this matter very seriously.
As the industry body representing these and many other information security issues, we expect and demand best practice of AISA and of our members. The Board holds the privacy of member data as sacrosanct and will ensure that all members are aware of any and all privacy information.
If you have any concerns or wish to discuss this matter please feel free to contact either myself or the Board members.
Many thanks for your ongoing support.
And while AISA quite validly trumpets that notifying its members is best practice, the way it notified its members falls well short of best practice.
More specifically, the notification doesn’t answer any questions that would be expected to be asked, and in the context of broader AISA issues occurring, raises questions of why the notification occurred now.
Questions left unanswered include:
What has been done to remediate and limit such exposures in the future?
What information was potentially compromised?
Was it a potential compromise or an actual compromise?
What should I (as a member whose data was potentially compromised) do about it?
Do I need to look out for targeted phishing attacks? Transactions?
Has my password been compromised? Has my credit card been compromised?
Who has investigated it and how have you drawn the conclusions you’ve drawn?
Data breach notification effectively forms part of a company’s suite of tools for managing customer and public relations. Doing data breach notification well can make a difference in the effort required to manage those relationships during a crisis, and the value of long term customer goodwill.
This blog post explores what a “good” data breach notification looks like, and highlights where AISA falls short of that standard.
How to effectively manage a breach
As data breaches continue to increase in frequency, the art of data breach notification has become more important to master. A key challenge in responding to a data breach is notifying your customers in a way that enhances, rather than degrades, your brand’s trustworthiness.
This guide outlines the key factors to consider should you find yourself in the unfortunate position of having to communicate a data breach to your customers.
There are 7 factors we recommend you focus on:
Clearly, the AISA announcement falls short on a number of the above factors.
Clarity — the AISA announcement does not clearly identify who was affected, or even if there was in fact a breach of data.
Timeliness — If the incident occurred on June 15th last year, why wait to notify members over eight months after it occurred? Given so much time has passed since the incident, and AISA has had sufficient time to investigate and rectify the issue, why was more information not provided about the nature of the breach? Given the time elapsed, the breach notification seems conveniently timed to coincide with the legislation, which leads to the final point:
Genuineness — No apology was given as part of the breach notification, nor was any detail given about what members need to do, what information (if any) was breached, or any assurances that it won’t occur again.
An email covering the 7 factors will answer, as best you can, the questions the affected party may have. Further follow-up information can be provided using an FAQ; a good example is the Pont3 Data Breach FAQ.
So, with an understanding of what to do, it’s also key to consider what not to do.
Breach Disclosure — What not to do
When formulating a breach notification strategy it is also important to know what not to do. Described below is our ‘Hierarchy of Badness’, starting with the worst things first!
1. Intentionally lying: Making any false statements in a bid to make the situation appear less complicated or serious than it is known to be, for example, stating that the data lost was encrypted when it was not. There is a very high chance that the truth will become available at some point, and at that point apparent intentional lies will wipe you out. This routinely gets people fired, and can cause significant reputational damage for the organisation.
2. Unintentionally lying: Drawing conclusions and providing statements without thoroughly analysing the impact and depth of the breach can lead to unintentional lies or the omission of information. This can be a result of publishing a notification too early before the details are fully understood. Whilst unintentional lies are ‘better’ than intentional lies, it may be difficult to prove to your customers that there was no ill intent. Depending on the lie, this may also result in someone getting fired.
3. Malicious omission: As the name suggests, organisations sometimes leave out key information from their disclosure statements, particularly by directing focus to information that is not as crucial — for example, focusing the statement on data being encrypted in transit rather than stating that data was not encrypted in storage. While the latter is true, a crucial piece of information has purposely been omitted for the purpose of diversion. Not a great strategy. While omission may be a legal requirement at points during an incident, an omission which changes the implied intent or meaning of a communication can backfire.
4. Weasel words and blame-shifting: A very common but poorly conceived inclusion in breach notifications is overused clichés such as ‘we take security seriously’, or ‘this was a sophisticated attack’. If there is no good reason to use that particular phrase/word it is better not to include it in the statement. Describing an attack as sophisticated or suggesting steps are being taken without providing further details is not going to make your customers feel better about the situation.
Our Hierarchy of Badness heat map below depicts the sweet spot for disclosure.
Historically, some organisations have preferred to use the ‘Silence & roll the dice’ strategy. This is a risky strategy, where the organisation doesn’t notify its customers about the breach at all, and simply hopes the whole situation will blow over.
However, with the passing of the Privacy Amendment Bill, while this may pan out well in some cases, it can have adverse outcomes in others, particularly if the breach is identified and reported by bloggers, researchers or users (a case of malicious omission in the ‘Hierarchy of Badness’). That said, there will be a lot of organisations falling under the threshold for disclosure, so the ‘Silence and roll the dice’ strategy will continue to be used.
An ideal way to help your customer through a data breach is by referring them on to services like ID Care, the various CERTs, or other service providers, so your customers can get the advice they need to respond to the issue in their particular circumstances. Trying to “advise” your clients about what they should do post-breach — when you’re doing this from a position of having just had a breach yourself — is rarely a good strategy.
Finally, the best preparation for data breach disclosure is to have both:
A micro-level response for your customers regarding what data was lost, if it’s recoverable and what they as data owners can do to mitigate the impact; and
A macro-level response for the press with details of the volume of data lost, the response plan, and how your customers should go about handling the situation.
It is also necessary to realise that data breach notification is not a one-time act. To ensure the best outcome from a public relations and crisis management perspective, it’s best to provide customers with updates as and when you get new information and ensure your customers realise it’s an ongoing engagement and that you genuinely care about their data.
A week or two back, the Australian Signals Directorate (ASD) replaced their “Top 4 Mitigation Strategies” with a twice-the-fun version called the “Essential 8.”
“Why?” I hear you ask… That’s a good question.
After all, it was a big deal when the Top 4 came out in 2010 (and then updated in 2012 and 2014) as the ASD claimed that it would address 85% of targeted attacks. Their website still says that the Top 4 address 85% of targeted attacks. So, if nothing else has changed, why change the Top 4? There would seem to be a few possible explanations:
Everyone has finished rolling out the Top 4 and needed the next few laid out for them.
Attacks have changed, and as a result, the Top 4 no longer address 85% of targeted attacks so we need to change tack.
The change is tacit recognition that the Top 4, while great controls, impose a heavy implementation burden (especially application white-listing) and so are not a realistic target for most organisations; the Essential 8 was created to provide a more ‘attainable’ set of controls.
ASD just felt it needed a refresh to maintain focus and to highlight that the Top 4 aren’t the only controls you need.
The first of those is, sadly, laughable. The second is plausible but not certain. The third is quietly compelling (I mean, we all feel better about 7 out of 8 than we do with 3 out of 4). And the last one is also pretty persuasive.
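For readers unfamiliar with application white-listing, the control reduces to allowing only known-good executables to run, typically identified by a cryptographic hash or publisher signature — which is also why it is so burdensome to roll out, since every legitimate binary must first be catalogued. A minimal sketch of the hash-based variant follows; the `may_execute` helper and the file layout are purely illustrative, not how products such as AppLocker are actually configured:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    # Hash the file contents; allowlisting by hash means a modified or
    # unknown binary will not match the list and is blocked.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: str, allowlist: set[str]) -> bool:
    # Default-deny: anything not explicitly listed is refused.
    return sha256_of(path) in allowlist

# Build a tiny allowlist from a "trusted" binary, then check both it and
# an unknown one.
with tempfile.TemporaryDirectory() as d:
    trusted = os.path.join(d, "trusted.bin")
    unknown = os.path.join(d, "unknown.bin")
    with open(trusted, "wb") as f:
        f.write(b"known good build")
    with open(unknown, "wb") as f:
        f.write(b"something else entirely")
    allowlist = {sha256_of(trusted)}
    assert may_execute(trusted, allowlist)
    assert not may_execute(unknown, allowlist)
```

The hard part in practice is not this check but maintaining the allowlist across patches and software updates — which is the implementation burden the third explanation above is getting at.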
Enough about the why, let’s talk about the change itself.
What is the Essential 8?
If an organisation or agency was pursuing the Top 4 but has not reached that implementation goal, what should it do now? Switch focus to the Essential 8 or continue with the Top 4?
Note the change of name of the main document from ‘Strategies to Mitigate Targeted Cyber Intrusions’ to ‘Strategies to Mitigate Cyber Security Incidents’.
These 3 documents present 37 controls as mitigation strategies against the 6 threats listed below. The Top 4 are unchanged from the previous update; however, along with 4 further controls, they are now presented as a new baseline called the Essential 8.
3 documents, 37 controls, 6 threats, Top 4, Essential 8… Confused yet? This is what Hivint’s Chief Apiarist, Nick Ellsmore, had to say about so many numbers flying around these days:
So, what are the changes?
This article isn’t going to present a control-by-control comparison; you’ll find plenty of those elsewhere. We want to look at the big-picture change. In terms of numbers, there are now 37 controls rather than 35: a few added, a few combined, a few renamed or modified. Despite the ASD website having a complete list of changes, there will be many a blog post picking them apart. We want to help you figure out what to do about it.
The Top 4 mitigation strategies remain the same, and are still mandatory for Australian Federal Government agencies.
Previously, all 35 strategies were described as mitigations against one key threat: targeted cyber attacks. The ASD also claimed that, when properly implemented, the Top 4 effectively mitigated 85% of targeted cyber attacks. One key change in the new release is that the threat landscape is now defined more broadly, covering the following 6 threats:
Targeted cyber attacks
Ransomware and other external adversaries
Malicious insiders who steal data
Malicious insiders who destroy data and prevent computers/networks from functioning
Business email compromise
Threats to industrial control systems
Four additional controls, along with the Top 4, form the Essential 8. This is presented as a ‘Baseline’ for all organisations. At first glance, the Essential 8 feels like a natural extension that organisations can adopt into their security strategy without much of a hassle. Realistically though, it’s a bit more complicated than that.
When it came out in 2010, the ‘Strategies to Mitigate Targeted Cyber Intrusions’ was considered quite unique, since it confidently declared that by doing the Top 4, organisations would mitigate 85% of ‘targeted cyber attacks’. In hindsight, the full list of 35 was probably too long for most organisations to digest, and few ever looked past the attractiveness of only having four things to do. That said, most organisations would have at least some of the 31 other controls implemented through their standard Business as Usual (BAU) operations (e.g. email/web content filtering, user education) whether or not they set out with the list in hand.
It is worth noting here that we have seen very few organisations genuinely deploy the Top 4 comprehensively.
It is also worth noting that for many organisations trying to climb the ladder of resilience, “targeted threats” seem a long way away, and managing the risk of non-targeted scattergun malware attacks, naïve but well-meaning users, and the Windows XP fleet, is more like today’s problem.
And at the other end, looking at Government institutions genuinely targeted by nation states, it seems unlikely that a Top 4 would remain current for 7 or more years given the changing nature of threats. Stuxnet, Ed Snowden and the Mirai botnet are a few extreme examples, but nevertheless game-changing events that could affect how the importance of a control (a mitigation strategy, in the context of this document) is rated, especially when the primary audience is Government institutions.
But given the challenges in planning, funding, and rolling out a Top 4 mitigation program, one has to appreciate the consistency — it would be a nightmare to try to address a dynamic list of priorities within Government agencies with turning circles like oil tankers.
The Essential 8 can be seen as a good compromise: organisations working towards the Top 4 (or that have it in place already) can incorporate the additional 4 controls without disrupting the status quo, while the list stays relevant to changes in the cyber threat landscape. Seems like a pretty good approach.
The overall list of 37 mitigation strategies is categorised under 5 headings:
Mitigation strategies to prevent malware delivery and execution
Mitigation strategies to limit the extent of cyber security incidents
Mitigation strategies to detect cyber security incidents and respond
Mitigation strategies to recover data and system availability
Mitigation strategy specific to preventing malicious insiders
Clearly this article was not written for cybersecurity gurus like you. It’s for all those people who haven’t yet deployed their holistic, risk-based, defence-in-depth inspired, ASD-Top-35-addressing security program in totality.
Okay, in all seriousness: if your security strategy is risk-based and aligned with where the organisation is heading, this change shouldn’t bother you too much. The ASD itself acknowledges that in some instances, the risk or business impact of implementing the Top 4 might outweigh the benefits.
Hence, the best bet continues to be a risk-based approach to security that is informed by the Top 4 (or the Essential 8, or the ISM, or ISO, or whatever your flavour happens to be), rather than attempting to blindly comply with a checklist.
And sadly, there is no video this time.
Article by Adrian Don Peter, Senior Security Advisor, Hivint
As the use of cloud services continues to grow, it’s generally well accepted that in most cases reputable service providers are able to use their economies of scale to offer levels of security in the cloud that match or exceed what enterprises can establish for themselves.
What is less clear is whether there are currently appropriate mechanisms available to enable an effective determination of whether the security controls a cloud service provider has in place are appropriately adapted to the needs of their various customers (or potential customers). There’s also a lack of clarity as to whether providers or customers should ultimately bear principal responsibility for answering this question.
These ambiguities are particularly highlighted in the case of highly abstracted public cloud services where the organisations using them have very little ability to interact with and configure the underlying platform and processes used to provide the service. In particular, the ‘shared environment’ these types of services offer creates a complex and dynamic risk profile: the risk to any one customer of using the service — and the risk profile of the cloud service as a whole — is inevitably linked with all the other customers using the service.
This article explores these issues in more detail, including discussing why representations around the security of cloud services are likely to become an increasingly important issue.
Why it matters: regulators are starting to care about security
It is important to appreciate the regulatory context in which the growth in the use of cloud services is taking place. Specifically, there is evidence of an increasing interest by regulators and policymakers in the development of rules around cyber security related matters[2]. This includes indications of increased scrutiny regarding representations about cyber security that are made by service providers.
Two recent cases in the USA highlight this. In one instance, the Consumer Financial Protection Bureau (a federal regulator, similar to the Australian Securities and Investments Commission) fined Dwolla — an online payment processing company — one hundred thousand US dollars after it found Dwolla had made misleading statements that it secured information it obtained from its customers in accordance with industry standards[3].
Similarly, the US Federal Trade Commission recently commenced legal proceedings against Wyndham Worldwide, a hospitality company that managed and franchised a group of hotels. After a series of security breaches, hackers were able to obtain payment card information belonging to over six hundred thousand of Wyndham’s consumers, leading to over ten million dollars in losses as a result of fraudulent transactions.
The FTC alleged that Wyndham had contravened the US Federal Trade Commission Act by engaging in ‘unfair and deceptive acts or practices affecting commerce’[4]. The grounds for this allegation were numerous, but included that Wyndham had represented on its website that it secured sensitive personal information belonging to customers using industry standard practices, when it was found through later investigations that key information (such as payment card data) was stored in plain text form.
The case against Wyndham was ultimately settled out of court, but demonstrates an increasing interest by regulators in representations made in relation to cyber security by service providers. It is not inconceivable that similar action could be taken in Australia if corresponding circumstances arose, given the Australian Competition and Consumer Commission’s powers to prosecute misleading and deceptive conduct under the Australian Consumer Law[5].
While the above cases do not apply to cloud service providers per se, they serve as examples of the increasing regulatory interest that is likely to be given to issues that relate to cyber security. Indeed, while regulatory regimes around cyber security issues are still in relatively early stages of development, it is feasible to expect that cloud providers in particular will come under increased scrutiny due to their central role in supporting the technology and business functions of a high number of customers from multiple jurisdictions.
There is also a strong likelihood that this scrutiny will extend to the decisions made by customers of cloud providers. In Australia, for example, if a company wishes to store personal information about its customers on a cloud service provider’s servers overseas, they would (subject to certain exceptions) need to take reasonable steps to ensure the cloud provider did not breach the Australian Privacy Principles in the Privacy Act 1988 in handling the information. Among other things, this would include ensuring that the cloud provider took reasonable steps to secure the information[6]. Similarly, data controllers (and data processors) in the EU will be required under the new Data Protection Regulation to ensure that appropriate technical and organisational measures are in place to ensure the security of personal data[7].
The question then arises as to how cloud service providers and their customers are supposed to make sure they take appropriate steps to ensure they meet their responsibilities in assuring the security of cloud services in the context of a nascent and still developing regulatory environment. At first glance, the solution to the problem appears simple — benchmarking a cloud service against industry security standards. As discussed below, however, there are significant challenges with this approach.
The problem with benchmarking cloud security against industry standards
Many cloud service providers point to certification against standards such as ISO 27001:2013, ISO 27017, ISO 27018 (from a privacy perspective), the Cloud Security Alliance’s STAR program, or obtaining Service Organisation Control 2 / 3 reports as demonstration that their approach to security aligns with best practice. This is in addition to the option of undertaking government accreditation programs, such as IRAP in Australia or FedRAMP in the USA, avenues which some providers also pursue.
While this seems a logical approach, public cloud services and the shared environments they introduce create some unique considerations from a security perspective that complicate matters. Specifically, the potential security risk to any one customer of using these shared environments is inevitably closely intertwined with, and varies based on:
their own intended use of the service; and
the security risks associated with all other clients using the cloud service[8].
As a result, any assessment of a cloud service provider’s security is inevitably reflective of their risk profile at a specific point in time, despite the fact that the risks facing the provider may have changed since based on its dynamic customer base. To illustrate this point, consider the hypothetical case study below.
X is a public cloud service provider that has been in business for a few years, and provides remote data storage services. X has primarily marketed itself to small businesses which make up the bulk of its customer base, and offers a highly abstracted cloud service with customers having little visibility of and ability to customise the underlying platform.
Those organisations have not stored particularly sensitive information on X’s servers. X has nevertheless obtained ISO 27001:2013 certification during this period — which includes a requirement that the cloud provider implement a risk assessment methodology and actually conduct a risk assessment process for its service on a periodic basis[9].
X is then approached by a large multi-national engineering firm, who wants to store highly sensitive information regarding key customers in the cloud to reduce its own costs. The firm wishes to engage a public cloud provider that is ISO 27001:2013 compliant and notices X meets this requirement.
X is planning to conduct a risk assessment to review its current risk profile in 3 months. However, its current set of security controls — against which it obtained ISO 27001 certification — has been designed to address the level of risk associated with customers who use its cloud services for storing relatively insensitive data.
The engineering firm is unaware of this and engages X despite the fact their security controls may not be appropriately adapted to meet its requirements.
As this case study illustrates, whether it’s appropriate for an organisation to engage a cloud provider from a security perspective isn’t a question that can be answered purely by reference to whether they have been deemed compliant with certain standards. The underlying assumptions upon which the cloud provider’s compliance was determined — and whether those assumptions still hold true — are just as important. And yet in many circumstances, it is unlikely (and impractical to expect) that key documents revealing those assumptions (such as risk registers and treatment plans) will be made available publicly by cloud service providers so that these investigations can be undertaken by customers. And even if this information can be made available, the customer first has to have the security maturity and awareness to know to ask for such documents and be able to perform the required assessment.
Responsibility for cloud security — a two-way street
Given the limited utility of industry standards in assuring the security of cloud services, and the potential relevance of regulatory responsibilities that could apply to both service providers and their customers, the most reasonable argument is that both parties have a role to play in establishing that a particular cloud service offers an appropriate level of security. While it is difficult to define the precise scope of those responsibilities in the context of a nascent regulatory landscape, this article offers some guidance below.
Customers of cloud services
Customers need to make sure they conduct a sufficient level of due diligence prior to using a cloud service to ensure that its design is appropriately adapted to meet their needs from a security perspective. In particular, they should consider the following:
Does the cloud service create a high degree of abstraction from the underlying platform? Public cloud services, for example, often have a high level of abstraction where users have very limited (if any) ability to configure the underlying platform. If so, the service may be less suited to more sensitive uses where a high degree of control by the customer is required.
Is the use of a shared IT environment — in which the risk profile of the cloud service as a whole varies dynamically as its customer base changes — appropriate?
Are the security controls put in place by the cloud provider appropriate to satisfy the organisation’s intended use of the service?
Does the cloud provider make available details of security risk assessments and risk management plans?
Are there any other considerations that may have a bearing on whether using the cloud service is appropriate (e.g. a regulatory requirement or a strong preference to have the data stored locally rather than overseas)?
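These questions lend themselves to being captured as a structured record, so the decision and its rationale can be documented and signed off as recommended below. The sketch that follows is a hypothetical illustration only — the field names, the `proceed_without_escalation` helper and the escalation rule are our own, not any standard assessment framework:

```python
from dataclasses import dataclass

@dataclass
class CloudDueDiligence:
    # Answers to the due-diligence questions above, captured for sign-off.
    data_sensitivity: str         # "low", "moderate" or "high"
    highly_abstracted: bool       # little/no control over the underlying platform
    shared_environment: bool      # multi-tenant public cloud
    controls_fit_intended_use: bool
    risk_docs_available: bool     # risk assessments / treatment plans on offer
    local_storage_required: bool  # e.g. a regulatory data-residency need
    provider_stores_locally: bool

def proceed_without_escalation(d: CloudDueDiligence) -> bool:
    # Hard stops first: residency and control-fit failures rule the service out.
    if d.local_storage_required and not d.provider_stores_locally:
        return False
    if not d.controls_fit_intended_use:
        return False
    # High-sensitivity data in a shared or highly abstracted service warrants
    # executive review rather than an automatic go-ahead.
    if d.data_sensitivity == "high" and (d.shared_environment or d.highly_abstracted):
        return False
    return True
```

The point of such a record is less the boolean result than the paper trail: it is exactly the documented, reviewable rationale that helps if the decision is later scrutinised by a regulator.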
Generally speaking, the higher the level of sensitivity and criticality associated with the planned uses of a cloud service, the more cautious a customer needs to be before making a decision to use a service offered in a shared environment. If the choice is still made to proceed (as opposed to using a private cloud, for example), the reasons for this decision should be documented and subject to appropriate executive sign-off and oversight (as well as regular review). This will prove particularly valuable in case the decision is scrutinised by external bodies (e.g. regulators) at a later date[10].
Cloud service providers
It is important that cloud providers are transparent with their customers about the security measures they have in place throughout the course of the period they are engaged by the customer. While representing that the cloud service is certified against particular industry benchmarks is useful to some extent, the cloud provider should also provide their own information to customers as to the specific security controls they do — and don’t — have in place, and the level of risk those controls are designed to address. In addition, cloud providers should be proactive about informing their customers where circumstances may have arisen that have resulted in a material change to their risk profile.
Providing this information is important to enable potential customers of cloud services to ascertain whether use of the service is appropriate for their needs.
Clearly, the shift towards the use of cloud services is now well established. This is not a problem in and of itself. However, while regulatory expectations around cyber security are still being established, organisations need to ensure that they choose a cloud service provider only after first carefully considering what their requirements are and whether the cloud service offers an approach to security and a risk profile that is adapted to their needs. Service providers need to facilitate this process as best they can through a transparent dialogue with their customers about their approach to security and their risk profile.
By Arun Raghu, Cyber Research Consultant at Hivint. For more of Hivint’s latest cyber security research, as well as to access our extensive library of re-usable cyber security resources for organisations, visit Security Colony
1. Note this write up focuses less on dedicated cloud environments (e.g. private cloud arrangements), where these complexities are largely avoided because a service can be customised and secured with a specific focus on a particular customer.
2. This article does not cover this in detail, but examples include the development of the Network Information and Security Directive in the EU; the rollout of Germany’s IT Security Act; the ongoing discussions around legislated cyber security information sharing frameworks in the USA; and the proposal in late 2015 to amend Australia’s existing Telecommunications Act 1997 to include revised obligations on carriers and service providers to do their best to manage the risk of unauthorised access and interference in their networks, including a new notification requirement on carriers and some carriage service providers to notify of planned changes to networks that may make them vulnerable to unauthorised access and interference.
4. See the FTC site for additional details on the Wyndham case.
5. See section 18 of Schedule 2 of the Competition and Consumer Act 2010.
6. See in particular Australian Privacy Principles 8 and 11.
7. See Article 30 of the proposed text for the EU’s General Data Protection Regulation.
8. The risks introduced by other clients of the cloud service may vary depending on the sector(s) in which they operate and their potential exposure to cyber-attacks as well as their intended use of the service.
9. See in particular section 6.1 of the ISO 27001:2013 standard.
10. A relevant consideration that may also be taken into account by regulators or other external bodies is what would reasonably be expected by an organisation of the same type in the same industry before engaging a cloud service provider — this would help ensure that unreasonable levels of due diligence are not expected of organisations with limited resources, for example.