Security Collaboration — The Problem and Our Solution

Colleagues, the way we are currently approaching information security is broken.

This is especially true with regard to the way the industry currently provides, and consumes, information security consulting services. Starting with Frederick Winslow Taylor’s “Scientific Management” techniques of the 1890s, consulting is fundamentally designed for companies to get targeted specialist advice to allow them to find a competitive advantage and beat the stuffing out of their peers.

But information security is different. It is one of the most wildly inefficient things to try to compete on, which is why most organisations are more than happy to say that they don’t want to compete on security (unless their core business is, actually, security).

Why is it inefficient to compete on security? Here are a couple of reasons:

Customers don’t want you to. Customers quite rightly expect sufficient security everywhere, and want to be able to go to the florist with the best flowers, or the best-priced flowers, rather than having to figure out whether that particular florist is more or less secure than the other one.

No individual organisation can afford to solve the problem. With so much shared infrastructure, so many suppliers and business partners, and almost no ability to recoup the costs invested in security, it is simply not cost-viable to throw the amount of money really needed at the problem. (Which, incidentally, is why we keep going around in circles saying that budgets aren’t high enough — they aren’t, if we keep doing things the way we’re currently doing things.)

Some examples of how our current approach is failing us:

We are wasting money on information security governance, risk and compliance

There are 81 credit unions listed on the APRA website as Authorised Deposit-Taking Institutions. According to the ABS, in June 2013 (the most recent data), there were 77 ISPs in Australia with over 1,000 subscribers. The thought that these 81 credit unions are independently developing their own security and compliance processes, and that the 77 ISPs are doing the same, despite the fact that the vast majority of their risks and requirements are identical to those of their peers, is frightening.

The wasted investment in our current approach to information security governance is extreme. Five or so years ago, when companies started realising that they needed a social media security policy, hundreds of organisations engaged hundreds of consultants, to write hundreds of social media security policies, at an economy-wide cost of hundreds of thousands, if not millions, of dollars. That. Is. Crazy.

We need to go beyond “not competing” and cross the bridge to “collaboration”. Genuine, real, sharing of information and collaboration to make everyone more secure.

We are wasting money when getting technical security services

As a technical example, I met recently with a hospital where we will be doing some penetration testing. We will be testing one of their off-the-shelf clinical information system software packages. The software package is enormous — literally dozens of different user privilege levels, dozens of system inter-connections, and dozens of modules and functions. It would easily take a team of consultants months, if not a year or more, to test the whole thing thoroughly. No hospital is going to have a budget to cover that (and really, they shouldn’t have to), so rather than the 500 days of testing that would be comprehensive, we will do 10 days of testing and find as much as we can.

But as this is an off-the-shelf system, used by hundreds of hospitals around the world, there are no doubt dozens, maybe even hundreds, of the same tests happening against that same system this year. Maybe there are 100 distinct tests, each of 10 days’ duration, being done. That’s 1,000 days of testing — more than enough to provide comprehensive coverage of the system. But instead, everyone is getting a 10-day test done, and we are all worse off for it. The hospitals have insecure systems, and we — as potential patients and users of the system — wear the risk of it.

The system is broken. There needs to be collaboration. Nobody wants a competitive advantage here. Nobody can get a competitive advantage here.

So what do we do about it?

There is a better way, and Hivint is building a business and a system that supports it. This system is called “The Colony”.

It is an implementation of what we’re calling “Community Driven Security”. This isn’t crowd-sourcing; rather, it involves sharing information within communities of interest that are experiencing common challenges.

The model provides benefits to the industry both for the companies who today are getting consulting services, and for the companies who can’t afford them:

Making consulting projects cheaper the first time they are done. If a client is willing to share the output of a project (that has, of course, been de-sensitised and de-identified) then we can reduce the cost of that consulting project by effectively “buying back” the IP being created, in order to re-use it. Clients get the same services they always get, and the sharing of the information will have no impact on their security or competitive position. So why not share it and pocket the savings?

Making that material available to the community and offering an immediate return on investment. Through our portal — being launched in June — for a monthly fee of a few hundred dollars, subscribers will be able to get access to all of that content. That means that for a few hundred dollars a month, a subscriber will be able to access the output from hundreds of thousands of dollars worth of projects, every month.

Making subsequent consulting projects cheaper and faster. Once we’ve completed a certain project type — say, developing a suite of incident response scenarios and quick reference guides — then the next organisation who needs a similar project can start from that and pay only for the changes required (and if those changes improve the core resources, those changes will flow through to the portal too).

Identifying GRC “Zero Days”. Someone, somewhere, first identified that organisations needed a social media security policy, and got one developed. There was probably a period of months, or even years, between that point and when it became ubiquitous. Through the portal, organisations who haven’t even contemplated that such a need may exist would be able to see that it has been identified and delivered, and if they want to address the risk before it materialises for them, they have the chance. And there is no incremental cost over portal membership to grab it and use it.

Supporting crowd-funding of projects. The portal will provide the ability for organisations to effectively ‘crowd fund’ technical security assessments against software or hardware that is used by multiple organisations. The maths is pretty simple: if two organisations are each looking at spending $30,000 to test System X, getting 15 days of testing for that investment, they could instead each put $20,000 into a central pool to test System X; they’d get 20 days of testing and save $10,000 each. More testing, for lower cost, resulting in better security. Everyone wins.

What else is going into the portal?

We have a roadmap that stretches well into the future. We will be including Threat Intelligence, Breach Intelligence, Managed Security Analytics, the ability to interact with our consultants and ask either private or public questions, the ability to share resources within communities of interest, project management and scheduling, and a lot more. Version 1 will be released in June 2015 and will include the resource portal (i.e. the documents from our consulting engagements), Threat Intelligence and Breach Intelligence, plus the ability to interact with our consultants and ask private or public questions.

“Everyone” can’t win. Who loses?

The only people who could potentially lose out of this are security consultants. But even there, we don’t think that will be the case. It is our belief that the market is supply-side constrained — in other words, we believe we are going to be massively increasing the ‘output’ from the economy-wide consulting investment in information security; but we don’t expect companies will spend less (they’ll just do more, achieving better security maturity and raising the bar for everyone).

So who loses? Hopefully, the bad guys, because the baseline of security across the economy gets better and it costs them more to break in.

Is there a precedent for this?

The NSW Government Digital Information Security Policy has as a Core Requirement, and a Minimum Control, that “a collaborative approach to information security, facilitated by the sharing of information security experience and knowledge, must be maintained.”

A lot of collaboration on security so far has been about securing the collaboration process itself. For example, that means health organisations collaborating to ensure that health data flowing between them is secure throughout that collaborative process. But we believe collaboration needs to be broader: it needs to be about securing not just the collaborative footprint, but the entirety of each other’s organisations.

Banks and others have for a long time had informal networks for sharing threat information, and the CISOs of banks regularly get together and share notes. The CISOs of global stock exchanges regularly get together similarly. There’s even a forum called ANZPIT, the Australian and New Zealand Parliamentary IT forum, for the IT managers of various state and federal Parliaments to come together and share information across all areas of IT. But in almost all of these cases, while the meetings and the discussions occur, the on-the-ground sharing of detailed resources happens much less.

The Trusted Information Sharing Network (TISN) has worked to share — and in many cases develop — in-depth resources for information security. (In our past lives, we wrote many of them.) But these are $50K-100K endeavours per report, generally limited to 2 or 3 reports per year, and they generally provide a fairly heavyweight approach to the topic at hand.

Our belief is that while “the 1%” of attacks — the APTs from China — get all the media love, we can do a lot of good by helping organisations with very practical and pragmatic support to address the 99% of attacks that aren’t State-sponsored zero-days. Templates, guidelines, lists of risks, sample documents, and other highly practical material is the core of what organisations really need.

What if a project is really, really sensitive?

Once project outcomes are de-identified and de-sensitised, they’re often still very valuable to others, and not really of any consequence to the originating company. If you’re worried about it, you can review the resources before they get published.

So how does it work?

You give us a problem, we’ll scope it, quote it, and deliver it with expert consultants. (This part of the experience is the same as your current consulting engagements.)
We offer a reduced fee for service delivery (percentage reduction dependent on re-usability of output).
Created resources, documents, and de-identified findings become part of our portal for community benefit.

Great. Where to from here?

There are two things we need right now:

Consulting engagements that drive the content creation for the portal. Give us the chance to pitch our services for your information security consulting projects. We’ve got a great team, the costs are lower, and you’ll also be helping our vision of “community driven security” become a reality. Get in touch and tell us about your requirements to see how we can help.
Sign up for the portal (you’ve done this bit!) and get involved — send us some questions, download some documents, subscribe if you find it useful.
And of course we’d welcome any thoughts or input. We are investing a lot into this, and are excited about the possibilities it is going to create.

Article by Nick Ellsmore, Chief Apiarist, Hivint


If we are going to measure security, what exactly are we measuring?

If you can’t measure it, you can’t manage it
– W. Edwards Deming

That is a great quote, and one I have heard a lot over the years. An interesting point about it, however, is that it lacks context: if you look at page 35 of The New Economics, you will see that the quote is flanked by some further advice, namely:

It is wrong to suppose that if you can’t measure it, you can’t manage it — a costly myth.

Deming did believe in the value of measuring things to improve their management, but he was also smart enough to know that many things simply cannot be measured and yet must still be managed.

Managing things that cannot be measured, and the absence of context, make as good a segue as any into the subject of metrics in information security.

A recurring question that arises surrounding the use and application of metrics in security is “What metrics should I use?”

I have spent enough time in security to have seen (and, truth be told, probably generated) an awful lot of rather pointless reports in and around security. I think I’m ready to attempt to explain what is going on here, and why “What metrics should I use?” might be the wrong question. Instead, there should be more of a focus on the context provided with metrics, in order to create useful and meaningful information about an organisation’s security.

A Typical (& faulty) Approach

Here is a typical methodology we often see, one that leads to the acquisition of a security appliance or product of some sort by an organisation (e.g. a firewall, or an IDS/IPS).

  1. A security risk is identified
  2. The security risk is assessed
  3. A series of controls are prescribed to mitigate or reduce the risk, and one of those controls is some sort of security appliance / product
  4. Some sort of product selection process takes place
  5. Eventually a solution is implemented

Now, we know (or we thought we knew) thanks to Dr Deming that in order to manage something we first need to measure it, so we instinctively look at the inbuilt reporting on the chosen device (a more mature organisation might have thought about this at stage 4, and it may even have influenced the selection). We select some of the available metrics, and in less than ten minutes, somehow electro-magically, we have configured the device to email a large number of pie charts in a PDF to a manager at the end of every month.

[Chart: Top Threats]

However, the problem with the above chart is that it doesn’t actually mean anything — it has no context. The best way to demonstrate what I mean by context is to add a little.

[Chart: Top Threats this month]

Ok, that’s better, but in truth it’s still pretty meaningless, so let’s add some more.

[Chart: Threats identified (5 months)]

Now we are getting somewhere meaningful. Immediately we can see that something appears to be going on in the world of cryptomalware, and we can choose to react — the information provided now demonstrates a distinct trend and is clearly actionable.
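As a rough illustration of why the historical context matters, here is a minimal Python sketch. The categories and detection counts are entirely made up for the example, not taken from any real chart:

```python
# Hypothetical monthly detection counts per threat category (invented
# numbers, purely to show how historical context reveals a trend).
history = {
    "cryptomalware": [3, 4, 6, 11, 19],     # rising sharply: actionable
    "adware":        [25, 24, 26, 25, 24],  # stable baseline: no action needed
    "trojans":       [12, 10, 11, 9, 10],
    "worms":         [2, 3, 2, 2, 3],
}

# A single-month "Top Threats" chart only ever sees the last column...
latest = {threat: counts[-1] for threat, counts in history.items()}
top_this_month = max(latest, key=latest.get)

# ...whereas comparing each category against its own history exposes trends.
def growth(counts):
    """Ratio of the latest month to the average of the preceding months."""
    baseline = sum(counts[:-1]) / len(counts[:-1])
    return counts[-1] / baseline

fastest_growing = max(history, key=lambda t: growth(history[t]))

print(top_this_month)   # "adware": biggest this month, but flat over time
print(fastest_growing)  # "cryptomalware": the trend worth reacting to
```

The snapshot and the trend point at different categories, which is exactly the difference between a context-free pie chart and actionable information.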

But we can bring even more context into this. The points below provide some more suggestions for adding context, and will give you a feel for the importance of having as much context as possible to create meaningful metrics.

  • What about the threats that do not get detected? Are there any estimations available (e.g. in Security Intelligence reports) on how many of those exist? (Donald Rumsfeld’s ‘Known Unknowns’)
  • Can we add more historical data? More data means more reliable baselines, and the ability to spot seasonal changes
  • Could we collect data from peer organisations for comparison (i.e. do we see more or less of this issue than everyone else)?
  • We have four categories; perhaps we need a line for threats that do not fall into any of them?
  • What are the highest and lowest values we (and/or other companies in the industry) have ever seen for these threats?
  • Do we have the ability to pivot on other data — for example would you want to know if 96% of those infections were attributed to one user?

The Impact of Data Visualisation

So now we have an understanding of context, what else should we consider?

Coming from the world of operational networking, I spent a lot of my time in a previous role getting visibility of very large carrier grade networks, and it was my job to maintain hundreds of gateway devices such as firewalls, proxy servers, VPN concentrators, spam filters, intrusion detection & prevention systems and all the infrastructure that supported them.

At that time if you were to ask me what metrics I would like to collect and measure, the answer was simple — I wanted everything possible.

If a device had a way to produce a reading for anything, I found a way to collect it (typically via SNMP) and graph it using an array of open source tools such as RRDTool and Cacti.

I created pages of graphs for device temperatures, memory usage, disk space, uptime, number of concurrent connections, network throughput, admin connections, failed logins etc.

The great thing about graphs is you can see anomalies very quickly — spikes, troughs, baselines, annual, seasonal and even hourly fluctuations give you insight. For example, gradual inclines or sudden flat-lines may mean more capacity is needed, whereas sharp cliffs typically mean something downstream is wrong.

Using these graphs and a set of automated alerts, I was able to predict problems that were well outside of my purview. For example, I often diagnosed failed A/C units in datacentres long before anyone else had raised alarms. I was able to detect when connected devices had been rebooted outside of change windows. I could even see when other devices had been compromised, because I could graph failed logon attempts for devices in the vicinity.
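A minimal sketch of that kind of baseline-based alerting, using invented failed-login counts; the window size and threshold here are illustrative assumptions, not what any particular tool uses:

```python
import statistics

def anomalies(series, window=5, threshold=3.0):
    """Flag points sitting more than `threshold` standard deviations away
    from the mean of the preceding `window` readings — a crude version of
    the baseline-and-alert logic described above."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and abs(series[i] - mean) > threshold * stdev:
            flagged.append(i)
    return flagged

# Invented hourly failed-login counts: steady noise, then a sudden spike of
# the sort that suggested a nearby device was under attack or compromised.
failed_logins = [4, 5, 3, 4, 5, 4, 3, 5, 4, 48, 5, 4]
print(anomalies(failed_logins))  # → [9], the index of the spike
```

Real monitoring stacks do something considerably more sophisticated (seasonality, smoothing, multiple baselines), but the principle of comparing each new reading against its own recent history is the same.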

In the ten years or so since I was building out these graphs in Cacti, technologies for dashboarding, dynamic reporting and automated alerting have come a long way, and it’s now easier than ever to collect data and produce very rich information — provided that you understand the importance of context, and you consider how actionable the information you produce will be to the end consumer.


While this write-up has focused particularly on context with respect to technical security metrics, it is important to remember that security is mainly about people, so you should always consider the softer metrics that cannot simply be collected by SNMP polling or the parsing of syslogs. For example: is there a way to measure the number of users completing security awareness training, and see whether this correlates with the number of people clicking on phishing links?

Would you want to know, for instance, if the very people who had completed security awareness training were more likely to click on phishy emails?

The bottom line is that security metrics, whether technically focused or otherwise, are relatively meaningless without context. While metrics aim to measure something, it’s the context in which the measurements are given that provides the valuable and actionable information organisations can use to identify and prioritise their security spend.

Article by Eric Pinkerton, Regional Director, Hivint

Check out Security Colony’s Cyber Security Reporting Dashboards, a NIST CSF based dashboard with more metrics for your security program.

Joining the Hive

Insights into the cyber-security industry from some of Hivint’s junior bees

While tertiary institutions around Australia are striving to produce an increasing number of students equipped with cyber security expertise, the industry is often referred to as being in the midst of a ‘skills shortage’.

Meanwhile, Hivint has been in a period of substantial growth, with the team quite literally doubling in size from January to December of 2016. As part of that growth, we’ve brought on a number of graduates and industry newcomers with a variety of backgrounds and skill sets, who have quickly become an integral and valued part of our team.

With cyber security increasingly being seen as a desirable pathway for many of the brightest and best students in Australia (and around the world), we thought it would be apt to get an insight from our new recruits about what it took for them to be successful in joining Australia’s fastest growing cyber security consulting firm, the challenges they have faced, and advice they have for other people aspiring to pursue a career in cyber security.

Justin Kuyken — GRC Advisor

After 12 years cleaning swimming pools, I went back to university part time to study computer science at La Trobe University, something I had been interested in since my school days. Six years later, in my final year, a network security subject piqued my interest, and after graduating I started absorbing as much information as I could find on this new-found passion.

After another year of reading all the books and using all the tools I could find to expand my knowledge in the area, I still hadn’t had any luck with my efforts to get a start in the industry. Finally, the persistence paid off when I heard back from Hivint, who spoke to me about joining their team as a graduate-level Governance, Risk and Compliance (GRC) advisor. 
While this was not what I originally had in mind, after some research the role appeared to be an even better way into the industry for a beginner, and a chance to get a better understanding of how the security world really works. 
During the recruitment process, the Hivint team was impressed by the dedication and commitment I had displayed, having shown a clear passion for learning anything and everything security-related. They decided to bring me on board, and I have not looked back. I have loved my time as part of the company, despite not being the ‘1337’ hacker I originally thought I would be when I started out on this whole path!

In summary, my advice to other aspiring graduates looking for a start is to show initiative to prospective employers — find a way to demonstrate that you are passionate about joining the industry and about continual improvement (e.g. through independent studies and learning), as these are valuable skills even on the job. In addition, be persistent about looking for opportunities — it may take some time, but the payoff for me by getting a foot in the door at Hivint has been well worth it.

Lumina Remick — GRC Advisor

After completing a Masters in Project Management at Bond University, my original plan was to return to working with circuits and microprocessors given my original background in Electronic and Communications Engineering. Little did I know an interesting career change was waiting for me.

In the final semester of my studies, I interned for an asset management company. My job primarily focused on implementing and tailoring their risk management policy and procedures to suit their business needs. However, I also had the opportunity to work on their IT security policies. This experience — together with my interest for risk management — piqued my interest for a career in cyber security.

Coincidentally, the company I worked for was a Hivint client, so I had a sneak peek at Hivint’s work even before I became a part of the Hive. I believed the right place to further my new-found interest was at Hivint, so I religiously started following them on social media looking for a way in. 
When they advertised a graduate GRC advisor role, I jumped at the opportunity, and there has been no turning back.

As a beginner, this role has been an amazing way into the industry and a great learning experience. I’m constantly learning new things and have come to realise there is no such thing as ‘knowing it all’ in security. I must admit that Google has quite often been my best friend through the whole experience. 
Working with some of the best people in the industry has inspired me and made me love my time at Hivint.

My advice to any aspiring graduates is to research which companies in the industry are hiring, then make sure you know as much as you can about them and keep a regular eye out for new roles. The fact that you have done your research and shown an interest in them will stand you in good stead should you land an interview!

Sam Reid — Technical Security Specialist

I took the common route through university, completing a Bachelor of Science in Cyber Security at Edith Cowan University. The first thing I’ll say is that working in the industry is more about client relationships and working with clients (particularly helping them understand their security risks, and which ones are appropriate to accept and which are not) than I originally thought. Those boring risk and standards units at uni turned out to be important when helping clients manage their exposure!

Penetration testing is the real deal and it’s seriously cool. The exposure and range of things you get to test and ‘break’ to help clients identify security holes will live up to your expectations — guaranteed.

My advice to aspiring grads: with the constant stream of new information, trends and events in this industry (new vulnerability disclosures, ongoing data breaches, growth in IoT devices, DDoS attacks), it’s easy to be left behind when you’re starting out. Try to keep your passion up by doing security-related things you enjoy in your own time when you can. Capture the Flag (CTF) events, security research, bug bounties and secure software development not only keep you interested — they keep you interesting! A challenging CTF you recently completed could make a great story to tell during an interview.

As a case in point, I was hired as a junior security analyst straight from university, and while I hadn’t heard of Hivint (they were only 12 people back then), the regional director had heard of me, having attended a presentation I gave on identity theft at a local security meetup. In my opinion, engaging with the community and making yourself known in the field (for the right reasons!) can really kick-start your career and put you ahead of the other graduate job seekers.

Oh, and lastly, be mindful of how you refer to your occupation as a ‘penetration tester’. My Mum proudly told the extended family that I was a “computer penetrator” last Christmas. No Mum. Please don’t ever say that again.

John Gerardos — GRC Advisor

I always knew I’d enrol in a Computer Science degree and work in technology. I originally worked primarily in support/systems administration and network engineering. My last few years as a network engineer had me either living in datacentres or designing and installing wireless access across large campuses in preparation for the explosion of BYOD (bring your own device) policies.

It very quickly became apparent that securing networks from the risks inherent in BYOD, as well as the emerging Internet of Things, was going to be a very interesting and expanding area. After working closely with the security team on several projects, I decided that was where I wanted to move my career.
So back to university I went! Alongside my usual studies in the Master of Applied Science (Information Security and Assurance) at RMIT, I learned about Ruxmon, a free security meetup run once a month on campus. I immersed myself in the “Security Scene”: I began attending Ruxmon, assisted with organising the meetup, and stepped up to lead the Information Security Student Group at RMIT University. I made it my goal to attend as many security meetups as possible and learn from the experts, which I found very rewarding and which also helped cover and reinforce some of the material learnt in my studies.

My university often ran industry networking events and I happened to bump into a couple of Hivint people at one I spoke at. I had not heard of Hivint at the time but it very quickly became apparent that it would be a cool place to work — so I kept it in mind and was excited when I saw them advertise for a graduate role.

The past 6 months on the Hivint team have been amazing! While I already had industry experience, this was my first consulting role and I had to very quickly learn how to manage my time across clients and get up to speed with the IT infrastructure of each client that I was working at. I also quickly found out that it’s not just the technical skills that are important — you need to be a great communicator and take the time to understand each individual client’s business so that you can tailor a solution for them.

My advice to students looking to enter the industry is to network with others and immerse yourself in the field. We are lucky that there are so many high quality free security meetups around the place — make the time to attend the ones that look interesting to you and have a chat to the people there. Follow up by doing your own research on anything that sounded interesting during the meetup, as well as joining in on relevant CTF events. Security people are happy to share the knowledge around and the best way for a student to learn outside of university is to be active in the community, attend relevant meetups and engage with the experts.

The Cloud Security Challenge

As the use of cloud services continues to grow, it’s generally well accepted that in most cases reputable service providers are able to use their economies of scale to offer levels of security in the cloud that match or exceed what enterprises can establish for themselves.

What is less clear is whether there are currently appropriate mechanisms available to enable an effective determination of whether the security controls a cloud service provider has in place are appropriately adapted to the needs of their various customers (or potential customers). There’s also a lack of clarity as to whether providers or customers should ultimately bear principal responsibility for answering this question.

These ambiguities are particularly highlighted in the case of highly abstracted public cloud services where the organisations using them have very little ability to interact with and configure the underlying platform and processes used to provide the service. In particular, the ‘shared environment’ these types of services offer creates a complex and dynamic risk profile: the risk to any one customer of using the service — and the risk profile of the cloud service as a whole — is inevitably linked with all the other customers using the service.

This article explores these issues in more detail, including why representations about the security of cloud services are likely to become an increasingly important issue.

Why it matters: regulators are starting to care about security

It is important to appreciate the regulatory context in which the growth in the use of cloud services is taking place. Specifically, there is evidence of an increasing interest by regulators and policymakers in the development of rules around cyber security related matters[2]. This includes indications of increased scrutiny regarding representations about cyber security that are made by service providers.

Two recent cases in the USA highlight this. In one instance, the Consumer Financial Protection Bureau (a federal regulator, similar to the Australian Securities and Investments Commission) fined Dwolla — an online payment processing company — one hundred thousand US dollars after it found Dwolla had made misleading statements that it secured information it obtained from its customers in accordance with industry standards[3].

Similarly, the US Federal Trade Commission recently commenced legal proceedings against Wyndham Worldwide, a hospitality company that managed and franchised a group of hotels. After a series of security breaches, hackers were able to obtain payment card information belonging to over six hundred thousand of Wyndham’s consumers, leading to over ten million dollars in losses as a result of fraudulent transactions.

The FTC alleged that Wyndham had contravened the US Federal Trade Commission Act by engaging in 'unfair and deceptive acts or practices affecting commerce' [4]. The grounds for this allegation were numerous, but included that Wyndham had represented on its website that it secured sensitive personal information belonging to customers using industry standard practices, when later investigations found that key information (such as payment card data) was stored in plain text.

The case against Wyndham was ultimately settled out of court, but demonstrates an increasing interest by regulators in representations made in relation to cyber security by service providers. It is not inconceivable that similar action could be taken in Australia if corresponding circumstances arose, given the Australian Competition and Consumer Commission's powers to prosecute misleading and deceptive conduct under the Australian Consumer Law [5].

While the above cases do not involve cloud service providers per se, they serve as examples of the increasing regulatory attention likely to be given to issues relating to cyber security. Indeed, while regulatory regimes around cyber security are still at a relatively early stage of development, it is reasonable to expect that cloud providers in particular will come under increased scrutiny, given their central role in supporting the technology and business functions of a large number of customers across multiple jurisdictions.

There is also a strong likelihood that this scrutiny will extend to the decisions made by customers of cloud providers. In Australia, for example, if a company wishes to store personal information about its customers on a cloud service provider's servers overseas, it would (subject to certain exceptions) need to take reasonable steps to ensure the cloud provider did not breach the Australian Privacy Principles in the Privacy Act 1988 in handling the information. Among other things, this would include ensuring that the cloud provider took reasonable steps to secure the information [6]. Similarly, data controllers (and data processors) in the EU will be required under the new Data Protection Regulation to ensure that appropriate technical and organisational measures are in place to ensure the security of personal data [7].

The question then arises as to how cloud service providers and their customers can meet their responsibilities for assuring the security of cloud services in a nascent and still developing regulatory environment. At first glance, the solution appears simple: benchmarking a cloud service against industry security standards. As discussed below, however, there are significant challenges with this approach.

The problem with benchmarking cloud security against industry standards

Many cloud service providers point to certification against standards such as ISO 27001:2013, ISO 27017, ISO 27018 (from a privacy perspective) or the Cloud Security Alliance's STAR program, or to Service Organisation Control (SOC) 2 / 3 reports, as demonstration that their approach to security aligns with best practice. Some providers also pursue government accreditation programs, such as IRAP in Australia or FedRAMP in the USA.

While this seems a logical approach, public cloud services and the shared environments they introduce create some unique considerations from a security perspective that complicate matters. Specifically, the potential security risk to any one customer of using these shared environments is inevitably closely intertwined with, and varies based on:

  • their own intended use of the service; and
  • the security risks associated with all other clients using the cloud service [8].

As a result, any assessment of a cloud service provider's security inevitably reflects its risk profile at a specific point in time, even though the risks facing the provider may since have changed as its customer base evolves. To illustrate this point, consider the hypothetical case study below.

Case study

X is a public cloud service provider that has been in business for a few years, providing remote data storage services. X has primarily marketed itself to small businesses, which make up the bulk of its customer base, and offers a highly abstracted cloud service in which customers have little visibility of, and little ability to customise, the underlying platform.

These customers have not stored particularly sensitive information on X's servers. X has nevertheless obtained ISO 27001:2013 certification during this period, which requires, among other things, that the provider implement a risk assessment methodology and actually conduct a risk assessment process for its service on a periodic basis [9].

X is then approached by a large multinational engineering firm, which wants to store highly sensitive information about key customers in the cloud to reduce its own costs. The firm wishes to engage a public cloud provider that is ISO 27001:2013 compliant, and notices that X meets this requirement.

X plans to conduct a risk assessment to review its current risk profile in three months. However, its current set of security controls, against which it obtained ISO 27001 certification, has been designed to address the level of risk associated with customers who use its cloud services for storing relatively insensitive data.

The engineering firm is unaware of this and engages X, despite the fact that X's security controls may not be appropriately adapted to meet its requirements.

As this case study illustrates, whether it is appropriate for an organisation to engage a cloud provider from a security perspective is not a question that can be answered purely by reference to whether the provider has been deemed compliant with certain standards. The underlying assumptions upon which the provider's compliance was determined, and whether those assumptions still hold true, are just as important. Yet in many circumstances it is unlikely (and impractical to expect) that the key documents revealing those assumptions, such as risk registers and treatment plans, will be made publicly available by cloud service providers so that customers can undertake these investigations. And even if this information is made available, the customer must first have the security maturity and awareness to know to ask for such documents, and be able to perform the required assessment.

Responsibility for cloud security — a two-way street

Given the limited utility of industry standards in assuring the security of cloud services, and the regulatory responsibilities that could apply to both service providers and their customers, the most reasonable position is that both parties have a role to play in establishing that a particular cloud service offers an appropriate level of security. While it is difficult to define the precise scope of those responsibilities in a nascent regulatory landscape, this article offers some guidance below.

Customers of cloud services

Customers need to conduct a sufficient level of due diligence before using a cloud service, to ensure its design is appropriately adapted to meet their needs from a security perspective. In particular, they should consider the following:

  • Does the cloud service involve a high degree of abstraction from the underlying platform? Public cloud services, for example, often give users very limited (if any) ability to configure the underlying platform. If so, the service may be less suited to more sensitive uses where a high degree of control by the customer is required.
  • Is the use of a shared IT environment — in which the risk profile of the cloud service as a whole varies dynamically as its customer base changes — appropriate?
  • Are the security controls put in place by the cloud provider appropriate to satisfy the organisation’s intended use of the service?
  • Does the cloud provider make available details of security risk assessments and risk management plans?
  • Are there any other considerations that may have a bearing on whether using the cloud service is appropriate (e.g. a regulatory requirement or a strong preference to have the data stored locally rather than overseas)?

Generally speaking, the higher the level of sensitivity and criticality associated with the planned uses of a cloud service, the more cautious a customer needs to be before making a decision to use a service offered in a shared environment. If the choice is still made to proceed (as opposed to using a private cloud, for example), the reasons for this decision should be documented and subject to appropriate executive sign-off and oversight (as well as regular review). This will prove particularly valuable in case the decision is scrutinised by external bodies (e.g. regulators) at a later date [10].
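To make this due diligence concrete, the questions above could be captured in a simple internal checklist that flags when an engagement warrants the documented executive sign-off described above. The sketch below is purely illustrative: the field names, sensitivity levels and decision rule are assumptions made for this example, not drawn from any standard or regulation.

```python
from dataclasses import dataclass

@dataclass
class CloudDueDiligence:
    """Hypothetical record of a customer's pre-engagement checks.

    The fields mirror the questions in the checklist above; the
    decision rule is an illustrative assumption only.
    """
    highly_abstracted_service: bool    # little/no control over the platform
    shared_environment: bool           # risk profile varies with other tenants
    data_sensitivity: str              # "low", "medium" or "high"
    controls_match_intended_use: bool  # provider controls fit the planned use
    provider_shares_risk_assessments: bool
    data_stored_onshore: bool          # e.g. a local-storage preference applies

    def needs_executive_signoff(self) -> bool:
        """The more sensitive the planned use, the more caution required:
        flag for documented executive sign-off when sensitive data would
        sit in a shared or highly abstracted environment."""
        if self.data_sensitivity == "high":
            return True
        return self.data_sensitivity == "medium" and (
            self.shared_environment or self.highly_abstracted_service
        )

# A small business storing low-sensitivity data: routine engagement.
routine = CloudDueDiligence(True, True, "low", True, False, True)
print(routine.needs_executive_signoff())   # False

# Highly sensitive data in a shared public cloud: escalate the decision.
sensitive = CloudDueDiligence(True, True, "high", False, False, False)
print(sensitive.needs_executive_signoff())  # True
```

The value of recording answers in a structured form is less the yes/no output than the audit trail it creates, which supports the documentation and regular review recommended above.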

Cloud service providers

It is important that cloud providers are transparent with their customers about the security measures they have in place throughout the period of the engagement. While representing that the cloud service is certified against particular industry benchmarks is useful to some extent, the provider should also give customers specific information about the security controls it does, and does not, have in place, and the level of risk those controls are designed to address. In addition, cloud providers should proactively inform their customers where circumstances have arisen that result in a material change to their risk profile.

Providing this information is important to enable potential customers of cloud services to ascertain whether use of the service is appropriate for their needs.


Clearly, the shift towards the use of cloud services is now well established. This is not a problem in and of itself. However, while regulatory expectations around cyber security are still taking shape, organisations need to ensure they choose a cloud service provider only after carefully considering what their requirements are and whether the cloud service offers an approach to security, and a risk profile, adapted to those needs. Service providers need to facilitate this process as best they can through a transparent dialogue with their customers about their approach to security and their risk profile.

By Arun Raghu, Cyber Research Consultant at Hivint. For more of Hivint's latest cyber security research, as well as to access our extensive library of re-usable cyber security resources for organisations, visit Security Colony.

  1. Note this write up focuses less on dedicated cloud environments (e.g. private cloud arrangements), where these complexities are largely avoided because a service can be customised and secured with a specific focus on a particular customer.
  2. This article does not cover this in detail, but examples include the development of the Network Information and Security Directive in the EU; the rollout of Germany's IT Security Act; the ongoing discussions around legislated cyber security information sharing frameworks in the USA; and the proposal in late 2015 to amend Australia's existing Telecommunications Act 1997 to include revised obligations on carriers and service providers to do their best to manage the risk of unauthorised access and interference in their networks. The proposed amendments include a new requirement on carriers and some carriage service providers to notify of planned changes to networks that may make them vulnerable to unauthorised access and interference.
  3. See the regulator’s findings for details.
  4. See the FTC site for additional details on the Wyndham case.
  5. See section 18 of Schedule 2 of the Competition and Consumer Act 2010.
  6. See in particular Australian Privacy Principles 8 and 11.
  7. See Article 30 of the proposed text for the EU’s General Data Protection Regulation.
  8. The risks introduced by other clients of the cloud service may vary depending on the sector(s) in which they operate and their potential exposure to cyber-attacks as well as their intended use of the service.
  9. See in particular section 6.1 of the ISO 27001:2013 standard.
  10. A relevant consideration that may also be taken into account by regulators or other external bodies is what would reasonably be expected by an organisation of the same type in the same industry before engaging a cloud service provider — this would help ensure that unreasonable levels of due diligence are not expected of organisations with limited resources, for example.