Hivint Security Consultant Esther Lim describes her experiences running a workshop for a group of female high school students on penetration testing in order to pique their interest in cyber security.
According to a recent study conducted by Intel Security, Australia is currently facing a massive shortage of cyber security skills, and the gap is set to widen. Closing it requires not only the introduction of more ‘hands on’ learning approaches, but also a more diverse workforce.
In this context, Deakin University hosted the biennial Go Girls, Go For IT event on August 16, 2016. The event promotes the exciting careers available in Information Technology to women. As a volunteer with the communications team, I was privileged to be chosen to speak to a group of very keen high school girls about my career in, and passion for, cyber security.
A key question I had to consider in addressing the audience was this: how do I make something like penetration testing fun for high school girls? Do I speak about my transition from high school to university and then to my role as a penetration tester at Hivint? Perhaps not; a myriad of inspirational women were already going to be sharing their journeys into the IT industry at the event itself.
So, there I was, staring at the first page of my presentation, aptly titled “Hacking Your Way to A Career In Cybersecurity”. Researchers have found that the average human attention span has fallen from 12 seconds 16 years ago to a mere 8 seconds today, and that rules out a long PowerPoint presentation! On reflection, I knew there was a better way to pique these girls’ interest in penetration testing, so I decided to replace my classroom presentation with an engaging, hands-on interactive workshop entitled “This is why you never use free WiFi at Maccas”.
Why use that as an example? In a Facebook, Pokémon GO and Snapchat-focused society, our short attention spans mean people learn most effectively by doing. Teaching cyber security — or any IT subject — to students can be hard if an interest has not quickly been sparked.
With their school teacher in tow, I invited the entire class to do a bit of penetration testing from my laptop. The “ooohs” and “aaahhs” validated my perception that people do learn and are inspired when they are engaged and actively involved in the subject matter. Questions were asked about penetration testing, jokes were made, minds were enlightened, and the class’s interest sparked. I was proud to know that I had gone some way to sowing a seed in these girls’ minds about the importance of cyber security — many of whom will one day become leaders in our society, and be key members in the ongoing mission to keep individuals, businesses and nations secure against cyber attacks.
Esther Lim is a technical specialist at Hivint, delivering penetration testing services to a diverse range of clients. Esther also helps adapt resources for the Security Colony (www.securitycolony.com) cyber security collaboration portal — you can get started with a free account, so come and sign up today at https://portal.securitycolony.com/Register.
Boom! And there we have it, the first reasonably coherent cyber security strategy for the country in almost 7 years. I thought I’d take the opportunity to put down on paper some initial thoughts.
For context, in the time between our last Strategy (2009) and this Strategy (2016), a few things transpired:
Facebook released the “Like” button. Well, technically that was in February 2009, but it’s still a useful social reference point to date the previous Strategy document’s external environment. Instagram, Pinterest, Google+ started in 2009 or later. The first consumer Android smartphone was less than a year old (released October 2008). The cloud computing market has almost quadrupled in size.
But let’s not dwell on the past. We are looking at a golden age of innovation and creativity, and perhaps cyber security can get access to some of the pixie dust previously reserved for mining and semi-viable heavy industries.
The Strategy is genuinely a positive step. It makes some reasonably solid (and hence measurable) commitments, hits some of the genuine issues of the industry like skills, the need for innovation, and the need for collaboration, and is significantly more pragmatic than the 2009 treatise on the allocation of responsibility across the many and varied government agencies with a stake in this. That said, the devil, as always, will be in the detail, and how this stuff gets rolled out will make all the difference and will determine if this is a great step forward, or we continue to flail about.
Cyber Security Growth Centre
At first glance this sounds like a great idea, but the more I think about it, the more I don’t understand the need. That’s not to say I don’t understand the need for the funding and the value, importance and opportunity associated with building out a significant cyber security industry for Australia’s economy… As I noted above, everyone in our industry looks to Israel as the shining light here, and there’s no question there’s a big global market if we can make it work.
Perhaps this is a philosophical argument, but does “streamlining governance” mean creating new organisations (as it does in this case) or does it mean making the existing organisations (of which there are admittedly many) operate smoothly together? Perhaps it’s a bit of both, but then is that really streamlining?
Commercialisation Australia programs already exist which would seem to have a very similar focus (albeit not dedicated to cyber security) — and have already invested in Australian cyber security companies like Quintessence Labs and TokenOne. The associated ‘Expert Network’ also has cyber security professionals involved (such as myself; and for clarity, this program is unpaid so there’s no commercial interest in me spruiking its existence) to help guide relevant companies. A specific focus on cyber security would be fantastic, but wouldn’t re-using existing approaches ensure:
A faster time to market; and
A reduced likelihood of the whole thing being a stuff up?
There are a huge number of aims and objectives of the Cyber Security Growth Centre listed in the Strategy, and I’d certainly hate to be the one having to be accountable for starting with a blank sheet of paper and doing everything from coordinating business-government-academia interaction, to cross-sector coordination, to skills development, to international market access support, to government policy advice, to ‘providing tertiary students with hands on experience… before they graduate’. All for $30 Million over a few years. Uh huh.
Again, to be clear, none of this stuff is a bad idea. It will all definitely help and certainly Hivint will be doing what we can to get involved all over the place. But as it currently stands, far from clarifying who does what, it’s added a whole heap of legitimate problems into a blender and poured out a Growth Centre smoothie. Hopefully it will become clearer as more detail becomes available.
Admittedly, they’re not exactly the same — the earlier Computer Network Vulnerability Assessment (CNVA) program seemed a more technical assessment, whereas the ‘health check’ concept seems more governance-driven — but hopefully the model used will avoid the pitfalls that ultimately rendered CNVA a non-starter in most Boardrooms. The big one: the perception that if you’re taking Government funding, you need to share the dirty-laundry-esque outcomes of the assessment with them.
Benchmarking, on the other hand, would be great, and sounds like it is going to be included. The data — both qualitative and quantitative — in our industry is truly woeful. Hopefully the approach adopted here will build on the work already done — such as the guidance towards the NIST Cyber Security Framework included in the ASIC Cyber Resilience: Health Check document.
Security Assessments for Small Business
Having been in cyber security consulting for close to 20 years now, I like to think I have a pretty good understanding of the market, both from the supply side and the demand side, and it is definitely the case that the ‘supply side’ of providing cyber security services to SMEs is a graveyard of firms with good intentions. It is simply very difficult to provide the customised level of services required by a client, when operating in a low value — high volume delivery model necessary for SME-targeted services to work.
ABC News last night referred to this as a $15 million program. I can’t find that number in the Strategy itself, but I’m sure it comes from somewhere reliable. Assuming it’s accurate, that’s about $4 million a year over 4 years (since everything seems to be expressed in 4-year investment periods these days), which is roughly the revenue of a fairly small cyber security consulting firm with about 15–20 staff; so that’s basically what we’re funding here. Let’s be generous and say 20 consultants, working full time, so 200 days a year each, for a total of about 4,000 days of delivery.
It’s hard to see anything meaningful being generated for an SME in under a day and probably 2–3 days is more realistic, so the number of companies able to be serviced each year under the program is probably in the 1,300–2,000 range. Which is certainly non-trivial, but is also not exactly addressing the scale of the problem given we have 2,000,000-ish SMEs in Australia according to the ABS. Obviously not all of them will have a cyber security “problem” to solve, but it’s still a pretty big discrepancy.
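The arithmetic above is easy to sanity-check. A few lines of Python, using this article's own assumptions (the reported $15M figure, consultant counts and days per engagement), reproduce the numbers:

```python
# Back-of-envelope check of the figures above. All inputs are the
# article's assumptions, not official program parameters.
program_funding = 15_000_000                     # reported program size, AUD
years = 4
annual_budget = program_funding / years          # ~$3.75M a year
consultants = 20                                 # full-time delivery staff
delivery_days = consultants * 200                # 4,000 days a year
# At 2-3 days per SME engagement, the serviceable companies per year:
sme_range = (delivery_days // 3, delivery_days // 2)
print(annual_budget, delivery_days, sme_range)   # 3750000.0 4000 (1333, 2000)
```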
Ultimately the answer here is to tie this to the R&D initiatives and spend a reasonable portion of that $15M on developing a methodology and system as automated as possible to speed up the delivery of these, while continuing to recognise that it is going to require human intervention and expertise of consultants. This can’t become the IT equivalent of the pink batts program, paying dodgy operators $5K a throw to run Nessus over their local plumber’s Yellow Pages listing.
The Strategy seems to double-down on the CREST approach, suggesting at one point that it could be extended beyond testing services. Which seems interesting given the REST in CREST stands — by definition — for “Registered Ethical Security Testers”. But why let that get in the way? If all you’ve got is a hammer, everything looks like a nail.
It will be interesting to see whether the Government really does attempt to “pick a winner” in this market despite avoiding it in the past — and which I’m sure will piss off the many and varied other accreditation programs no end — or whether CREST necessarily has to build in a stronger cross-recognition process to acknowledge the breadth of market offerings available.
Fortunately though, we seem to have steered clear of any suggestion we need a “licensing” program for cyber security professionals. The longer we can avoid that albatross around our necks, the better.
Threat Sharing & Collaboration
It’s great that the strategy now commits to “strengthen trusted partnerships with the private sector for the sharing of sensitive information on cyber threats, vulnerabilities and their potential consequences.”
Wait, sorry, that was the 2009 strategy.
Now we’re saying that “organisations, public and private, must work together to build a collective understanding of cyber threats and risks through a layered approach to cyber threat sharing.”
Either way, it’s still true, and it’s still necessary.
But it’s not enough. Why limit sharing to threat information? That’s why we’ve built Security Colony (www.securitycolony.com) as the first — and only — cyber security collaboration platform in Australia. Here is the one pitch I’ll make in this article: for under $300 a month (and you can trial it for free), you can get access to virtually all the output of our entire consulting team, country-wide.
You can get an entire Information Security Management System that we were paid $100K to develop.
You can get entire security architecture documents that we were paid $40K to develop.
You can get incident response planning guides that we were paid $50K to develop.
And over 100 other documents that add up to over $2 million in value. It’s all derived from real-world consulting projects, paid for by real Australian clients.
You can save tens, or hundreds, of thousands of dollars through subscribing and re-using these materials. Check it out: it’s free. www.securitycolony.com
Given we’re all expecting an election to be called in a couple of weeks’ time, and the Government will then go into caretaker mode, is all this stuff effectively on ice until at least July (assuming the current Government is returned) or maybe September (if there’s a change of Government, with the new lot invariably wanting to make their mark by changing the curtains)?
So there it is. Some initial thoughts on the strategy in the context of the various initiatives we’ve seen come and go in the past. A lot of really good ideas, and really valuable initiatives, provided they are well executed. Hopefully we see a speedy implementation, and the outcomes match the promises.
By Nick Ellsmore, Chief Apiarist at Hivint. For more of Hivint’s latest cyber security research, as well as to access our extensive library of re-usable cyber security resources for organisations, visit Security Colony.
In the last year, there has been a trend towards payment scams that target employees of companies, attempting to convince them to transfer money to cyber criminals. Commonly referred to as business email compromise (BEC) scams, they generally involve scammers sending emails that appear to come from senior staff at an organisation (hence sometimes being referred to as “CEO fraud”), requesting that a sum of money be transferred to a third party’s bank account (controlled by the scammers). Brian Krebs has written about these scams on his blog. According to the Federal Bureau of Investigation (FBI), these scams generated reported losses of $1.2 billion internationally between October 2013 and August 2015.
Two recent examples of these scams reported to us by our clients demonstrate the different types of organisations that can be targeted by these scams.
The first scam described below targeted a sporting club and demonstrates how a business email scam can be executed in a relatively simple and innocuous fashion. The second is an example of a slightly more complex version targeted at a financial technology company that required more effort to execute, and which ultimately needed execution of the company’s incident response plan to investigate and resolve the incident.
Case Study — A Sports Club is Targeted
The first business email scam targeted a small sporting club that had published the contact details and roles for all of its board members on its website. This meant the scammer had to exercise a minimum amount of effort in order to craft the scam — all the contact details and roles for the board members were clearly available. Initial contact was made by the scammer via email (posing as the President) to the Treasurer, John, to start the conversation.
In this case, the Treasurer became suspicious and was quick-thinking enough to call the President to seek verbal confirmation of the transfer request. This gave the game away and revealed that the club was being scammed.
Hivint was contacted to provide further analysis and advice on the email scam, as the club staff members who were targeted in the scam were unsure if the scam indicated a system compromise or similar. Once the emails were received, a simple check of the email headers (below) of the original email identified the ruse.
As the email headers reveal, the “Authenticated sender” (the scammer’s real address) was different from the address shown in the actual email. A Google search shows [email protected] to have been used in scams before.
In addition, the “Reply-To” address of [email protected]al.ga did not actually belong to the club’s President, and directed the target’s response to an email address controlled by the scammer. A check of the return email address when responding would also have given this away.
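As an illustration, a mismatch between the From and Reply-To headers (the giveaway in this case) can be checked programmatically. The sketch below uses Python's standard email library; the sample message and addresses are invented for illustration:

```python
# Sketch: flag emails whose Reply-To address differs from the From
# address, a common indicator of a BEC scam. The header names are
# standard; the sample message below is invented for illustration.
from email.parser import Parser
from email.utils import parseaddr

def reply_to_mismatch(raw_message: str) -> bool:
    """Return True if a Reply-To header is present and differs from From."""
    msg = Parser().parsestr(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    return bool(reply_addr) and reply_addr.lower() != from_addr.lower()

sample = (
    "From: President <president@club.example.org>\n"
    "Reply-To: president@scam-domain.example\n"
    "Subject: Urgent transfer\n\n"
    "Please transfer $10,000 today."
)
print(reply_to_mismatch(sample))  # True: replies would go to the scammer
```

A check like this is no substitute for verbal confirmation, but it can be wired into mail filtering to raise an alert before a payment request reaches the finance team.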
The Second Scam — A Financial Technology Company
The next occurrence of a business email scam that Hivint was made aware of came from a financial technology company we work with. They had received a phishing email similarly requesting money from the financial team.
This attempt took more effort as the scam clearly involved more research and customisation by the scammer.
While the content of the email was consistent with most business email scams (see below), there were some distinguishing features which contributed to the scam almost being successful.
In this case, the attacker registered a domain very similar to that of the target business — an extra letter was added to the domain name, e.g. www.domain.com was registered as www.domaiin.com. This meant that the reply-to address closely resembled an email address belonging to the company’s actual registered domain, making the scam hard to detect unless more than a cursory examination of the reply-to address was undertaken.
There are a number of attacks which are high volume/low value. For example, attempting to extort payment through ransomware like CryptoLocker only works if the price is within the victim’s “pain point”, or ability to pay. The business email scam, however, has no force behind the request for payment: it only works if the victim doesn’t know they’re being scammed. And this takes effort, which means the payoff has to be worth it for the perpetrator.
Even spending a few weeks on researching a victim and crafting an attack for a five figure payout would still be highly profitable for a scammer, and a growing $1.2 billion pot of money derived from these scams shows that they can be lucrative.
That there is continuing growth in these scams demonstrates that this threat is worth countering, and there are some fairly basic steps to undertake should you want to reduce the risk of these types of attacks occurring at your company, and the likelihood that they will be successful.
Exercise proper security hygiene to protect your online identity.
Don’t publish the contact details and position names of specific staff on publicly accessible places on the internet; particularly staff with payment-related responsibilities. Instead, use an email form that sends to a generic email address — [email protected] — and distribute emails to relevant personnel from there.
Separation of Duties
Should a request come to an individual for payment of a sum of money (whether for an invoice or otherwise), a check should be made that the payment is in fact legitimate (e.g. through verbal confirmation, or confirmation there is an associated Purchase Order number or invoice) and approved by a relevant authority. Basically, no business processes should fundamentally tie the receipt of an email with a money transfer.
Ensure education on email scams is included in your organisational security awareness program.
Check your registered domains
Andrew Horton’s URLCrazy (included in Kali Linux) can be used to keep an eye on registered domains with names similar to your business’s. Buy the domains that you can, and consider blocking emails from similar domains already registered, or generating an alert should an email arrive from these domains.
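If URLCrazy isn't to hand, a rough sketch of the same idea is only a few lines of Python. The snippet below generates letter-repetition and adjacent-swap variants of a domain; a real tool covers many more permutation classes (homoglyphs, TLD swaps, keyboard-proximity typos):

```python
# Sketch of URLCrazy-style typo generation: letter-repetition and
# adjacent-swap variants of a domain name. Illustrative only; real
# tools generate far more permutation classes.
def typo_variants(domain: str) -> set[str]:
    name, _, tld = domain.partition(".")
    variants = set()
    # Repeat each letter once, e.g. domain.com -> domaiin.com
    for i in range(len(name)):
        variants.add(name[:i] + name[i] + name[i:] + "." + tld)
    # Swap each adjacent pair, e.g. domain.com -> doamin.com
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + "." + tld)
    variants.discard(domain)  # swapping identical letters reproduces the original
    return variants

print("domaiin.com" in typo_variants("domain.com"))  # True
```

Feeding a list like this into a WHOIS lookup, or into your mail gateway's block/alert rules, approximates the monitoring described above.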
Remember, if something about an email doesn’t seem right, making simple checks that you’re corresponding with a legitimate sender will go a long way to ensure you are not defrauded. In particular:
Double check whom you’re actually responding to — if the reply address for the email is different once you’ve hit “reply” then it may have been sent by a scammer. If the email looks legitimate, then check the spelling of the email address to ensure the domain name is not misspelt.
Contact the purported sender of the email using a known telephone number (i.e. not a contact number given in the email) before executing any money transfers. Even if an attacker has gone out of their way not to just spoof an email address, but has control of your entire IT environment, using an “out-of-band” method to contact the legitimate person can help verify the authenticity of the email.
And finally, should you still fall victim to a payment scam, contact your financial institution as soon as possible.
By Ben Waters, Senior Security Advisor at Hivint. For more of Hivint’s latest cyber security research, as well as to access our extensive library of re-usable cyber security resources for organisations, visit Security Colony.
As the use of cloud services continues to grow, it’s generally well accepted that in most cases reputable service providers are able to use their economies of scale to offer levels of security in the cloud that match or exceed what enterprises can establish for themselves.
What is less clear is whether there are currently appropriate mechanisms available to enable an effective determination of whether the security controls a cloud service provider has in place are appropriately adapted to the needs of their various customers (or potential customers). There’s also a lack of clarity as to whether providers or customers should ultimately bear principal responsibility for answering this question.
These ambiguities are particularly highlighted in the case of highly abstracted public cloud services where the organisations using them have very little ability to interact with and configure the underlying platform and processes used to provide the service. In particular, the ‘shared environment’ these types of services offer creates a complex and dynamic risk profile: the risk to any one customer of using the service — and the risk profile of the cloud service as a whole — is inevitably linked with all the other customers using the service.
This article explores these issues in more detail, including discussing why representations about the security of cloud services are likely to become an increasingly important issue.
Why it matters: regulators are starting to care about security
It is important to appreciate the regulatory context in which the growth in the use of cloud services is taking place. Specifically, there is evidence of increasing interest by regulators and policymakers in the development of rules around cyber security related matters [2]. This includes indications of increased scrutiny of representations about cyber security made by service providers.
Two recent cases in the USA highlight this. In one instance, the Consumer Financial Protection Bureau (a federal regulator, similar to the Australian Securities and Investments Commission) fined Dwolla — an online payment processing company — US$100,000 after finding that Dwolla had made misleading statements that it secured information obtained from its customers in accordance with industry standards [3].
Similarly, the US Federal Trade Commission recently commenced legal proceedings against Wyndham Worldwide, a hospitality company that managed and franchised a group of hotels. After a series of security breaches, hackers were able to obtain payment card information belonging to over six hundred thousand of Wyndham’s consumers, leading to over ten million dollars in losses as a result of fraudulent transactions.
The FTC alleged that Wyndham had contravened the US Federal Trade Commission Act by engaging in ‘unfair and deceptive acts or practices affecting commerce’ [4]. The grounds for this allegation were numerous, but included that Wyndham had represented on its website that it secured sensitive personal information belonging to customers using industry standard practices, when it was found through later investigations that key information (such as payment card data) was stored in plain text form.
The case against Wyndham was ultimately settled out of court, but demonstrates an increasing interest by regulators in representations made in relation to cyber security by service providers. It is not inconceivable that similar action could be taken in Australia if corresponding circumstances arose, given the Australian Competition and Consumer Commission’s powers to prosecute misleading and deceptive conduct under the Australian Consumer Law [5].
While the above cases do not apply to cloud service providers per se, they serve as examples of the increasing regulatory interest that is likely to be given to issues that relate to cyber security. Indeed, while regulatory regimes around cyber security issues are still in relatively early stages of development, it is feasible to expect that cloud providers in particular will come under increased scrutiny due to their central role in supporting the technology and business functions of a high number of customers from multiple jurisdictions.
There is also a strong likelihood that this scrutiny will extend to the decisions made by customers of cloud providers. In Australia, for example, if a company wishes to store personal information about its customers on a cloud service provider’s servers overseas, they would (subject to certain exceptions) need to take reasonable steps to ensure the cloud provider did not breach the Australian Privacy Principles in the Privacy Act 1988 in handling the information. Among other things, this would include ensuring that the cloud provider took reasonable steps to secure the information [6]. Similarly, data controllers (and data processors) in the EU will be required under the new Data Protection Regulation to ensure that appropriate technical and organisational measures are in place to ensure the security of personal data [7].
The question then arises as to how cloud service providers and their customers are supposed to make sure they take appropriate steps to ensure they meet their responsibilities in assuring the security of cloud services in the context of a nascent and still developing regulatory environment. At first glance, the solution to the problem appears simple — benchmarking a cloud service against industry security standards. As discussed below, however, there are significant challenges with this approach.
The problem with benchmarking cloud security against industry standards
Many cloud service providers point to certification against standards such as ISO 27001:2013, ISO 27017, ISO 27018 (from a privacy perspective), the Cloud Security Alliance’s STAR program, or obtaining Service Organisation Control 2 / 3 reports as demonstration that their approach to security aligns with best practice. This is in addition to the option of undertaking government accreditation programs, such as IRAP in Australia or FedRAMP in the USA, avenues which some providers also pursue.
While this seems a logical approach, public cloud services and the shared environments they introduce create some unique considerations from a security perspective that complicate matters. Specifically, the potential security risk to any one customer of using these shared environments is inevitably closely intertwined with, and varies based on:
their own intended use of the service; and
the security risks associated with all other clients using the cloud service [8].
As a result, any assessment of a cloud service provider’s security is inevitably reflective of their risk profile at a specific point in time, despite the fact that the risks facing the provider may have changed since based on its dynamic customer base. To illustrate this point, consider the hypothetical case study below.
X is a public cloud service provider that has been in business for a few years, and provides remote data storage services. X has primarily marketed itself to small businesses which make up the bulk of its customer base, and offers a highly abstracted cloud service with customers having little visibility of and ability to customise the underlying platform.
Those organisations have not stored particularly sensitive information on X’s servers. X has nevertheless obtained ISO 27001:2013 certification during this period — which includes a requirement that the cloud provider implement a risk assessment methodology and actually conduct a risk assessment process for its service on a periodic basis [9].
X is then approached by a large multi-national engineering firm, who wants to store highly sensitive information regarding key customers in the cloud to reduce its own costs. The firm wishes to engage a public cloud provider that is ISO 27001:2013 compliant and notices X meets this requirement.
X is planning to conduct a risk assessment to review its current risk profile in 3 months, however, its current set of security controls — against which it obtained ISO 27001 certification — have been designed to address the level of risk associated with customers who use its cloud services for storing relatively insensitive data.
The engineering firm is unaware of this and engages X despite the fact their security controls may not be appropriately adapted to meet its requirements.
As this case study illustrates, whether it’s appropriate for an organisation to engage a cloud provider from a security perspective isn’t a question that can be answered purely by reference to whether they have been deemed compliant with certain standards. The underlying assumptions upon which the cloud provider’s compliance was determined — and whether those assumptions still hold true — are just as important. And yet in many circumstances, it is unlikely (and impractical to expect) that key documents that reveal those assumptions (such as risk registers and treatment plans) — will be made available publicly by cloud service providers so that these investigations can be undertaken by customers. And even if this information can be made available, the customer first has to have the security maturity and awareness to know to ask for such documents and be able to perform the required assessment.
Responsibility for cloud security — a two-way street
Given the limited utility of industry standards in assuring the security of cloud services, and the potential relevance of regulatory responsibilities that could apply to both service providers and their customers, the most reasonable argument is that both parties have a role to play in establishing that a particular cloud service offers an appropriate level of security. While it is difficult to define the precise scope of those responsibilities in the context of a nascent regulatory landscape, this article offers some guidance below.
Customers of cloud services
Customers need to make sure they conduct a sufficient level of due diligence prior to using a cloud service to ensure that its design is appropriately adapted to meet their needs from a security perspective. In particular, they should consider the following:
Does the cloud service create a high degree of abstraction from the underlying platform? (Public cloud services, for example, often have a high level of abstraction, where users have very limited — if any — ability to configure the underlying platform.) If so, this may mean the service is less suited to more sensitive uses where a high degree of control by the customer is required.
Is the use of a shared IT environment — in which the risk profile of the cloud service as a whole varies dynamically as its customer base changes — appropriate?
Are the security controls put in place by the cloud provider appropriate to satisfy the organisation’s intended use of the service?
Does the cloud provider make available details of security risk assessments and risk management plans?
Are there any other considerations that may have a bearing on whether using the cloud service is appropriate (e.g. a regulatory requirement or a strong preference to have the data stored locally rather than overseas)?
Generally speaking, the higher the level of sensitivity and criticality associated with the planned uses of a cloud service, the more cautious a customer needs to be before making a decision to use a service offered in a shared environment. If the choice is still made to proceed (as opposed to using a private cloud, for example), the reasons for this decision should be documented and subject to appropriate executive sign-off and oversight (as well as regular review). This will prove particularly valuable in case the decision is scrutinised by external bodies (e.g. regulators) at a later date [10].
Cloud service providers
It is important that cloud providers are transparent with their customers about the security measures they have in place throughout the period of engagement. While representations that the cloud service is certified against particular industry benchmarks are useful to some extent, the cloud provider should also give customers its own account of the specific security controls it does (and does not) have in place, and the level of risk those controls are designed to address. In addition, cloud providers should proactively inform their customers when circumstances arise that materially change their risk profile.
Providing this information is important to enable potential customers of cloud services to ascertain whether use of the service is appropriate for their needs.
Clearly, the shift towards the use of cloud services is now well established. This is not a problem in and of itself. However, while regulatory expectations around cyber security are still being established, organisations need to ensure that they choose a cloud service provider only after first carefully considering what their requirements are and whether the cloud service offers an approach to security and a risk profile that is adapted to their needs. Service providers need to facilitate this process as best they can through a transparent dialogue with their customers about their approach to security and their risk profile.
By Arun Raghu, Cyber Research Consultant at Hivint. For more of Hivint’s latest cyber security research, as well as to access our extensive library of re-usable cyber security resources for organisations, visit Security Colony
Note this write-up focuses less on dedicated cloud environments (e.g. private cloud arrangements), where these complexities are largely avoided because a service can be customised and secured with a specific focus on a particular customer.
This article does not cover this in detail, but examples include the development of the Network Information and Security Directive in the EU; the rollout of Germany’s IT Security Act; the ongoing discussions around legislated cyber security information sharing frameworks in the USA; and the proposal in late 2015 to amend Australia’s existing Telecommunications Act 1997 to include revised obligations on carriers and service providers to do their best to manage the risk of unauthorised access and interference in their networks, including a new notification requirement on carriers and some carriage service providers to notify of planned changes to networks that may make them vulnerable to unauthorised access and interference.
See the FTC site for additional details on the Wyndham case.
See section 18 of Schedule 2 of the Competition and Consumer Act 2010.
See in particular Australian Privacy Principles 8 and 11.
See Article 30 of the proposed text for the EU’s General Data Protection Regulation.
The risks introduced by other clients of the cloud service may vary depending on the sector(s) in which they operate and their potential exposure to cyber-attacks as well as their intended use of the service.
See in particular section 6.1 of the ISO 27001:2013 standard.
A relevant consideration that may also be taken into account by regulators or other external bodies is what would reasonably be expected by an organisation of the same type in the same industry before engaging a cloud service provider — this would help ensure that unreasonable levels of due diligence are not expected of organisations with limited resources, for example.