
The Cyber Security Ecosystem: Collaborate or Collaborate. It’s your choice.


As cyber security as a field has grown in scope and influence, it has effectively become an ‘ecosystem’ of multiple players, all of whom either participate in or influence the way the field develops and/or operates. At Hivint, we believe it is crucial for those players to collaborate and work together to enhance the security posture of communities, nations and the globe, and that security consultants have an important role to play in facilitating this goal.

The ecosystem untwined

The cyber security ecosystem can broadly be divided into two categories, with some players (e.g. governments) having roles in both categories:

Macro-level players

Consists of those stakeholders who are in a position to exert influence on the way the cyber security field looks and operates at the micro-level. Key examples include governments, regulators, policymakers and standards-setting organisations and bodies (such as the International Organization for Standardization, the Internet Engineering Task Force and the National Institute of Standards and Technology).

Micro-level players

Consists of those stakeholders who, both collectively and individually, undertake actions on a day-to-day basis that affect the community’s overall cyber security posture (positively or negatively). Examples include end users/consumers, governments, online businesses, corporations, SMEs, financial institutions and security consultants (although as we’ll discuss later, the security consultant has a unique role that bridges across the other players at the micro-level).

The macro level has, in the past, been somewhat muted with its involvement in influencing developments in cyber security. Governments and regulators, for example, often operated at the fringes of cyber security and primarily left things to the micro-level. While collaboration occurred in some instances (for example, in response to cyber security incidents with national security implications), that was by no means expected.


The formalisation of collaborative security

This is rapidly changing. We are now regularly seeing more formalised models being introduced, or proposed, to either strongly encourage or require collaboration on cyber security issues between multiple parties in the ecosystem.

Recent prominent examples include proposed draft legislation in Australia that would, if implemented, require nominated telecommunications service providers and network operators to notify government security agencies of network changes that could affect the ability of those networks to be protected[1], proposals for introducing legislative frameworks to encourage cyber security information sharing between the private sector and government in the United States[2], and the introduction of a formal requirement in the European Union for companies in certain sectors to report major security incidents to national authorities[3].

There are any number of reasons for this change, although the increasing public visibility given to cyber security incidents is likely at the top of the list (in October alone we have seen two of Australia’s major retailers suffer security breaches). In addition, there is a growing predilection toward collaborative models of governance in a range of cyber topic areas that have an international dimension (for example, the internet community is currently involved in deep discussions around transitioning the governance model for the internet’s DNS functions away from US government control towards a multi-stakeholder model). With cyber security issues frequently having a trans-national element — particularly discussions around setting ‘norms’ of conduct around cyber security at an international level[4] — it’s likely that players at the macro-level see this as an appropriate time to become more involved in influencing developments in the field at the national level.

Given this trend, it’s unlikely to be long before the macro-level players start to require compliance with minimum standards of security at the micro-level. As an example, the proposed Australian legislation referred to above would require network operators and service providers to do their best (by taking all reasonable steps) to protect their networks from unauthorised access or interference. And in the United States, a Federal Court of Appeals recently decided that their national consumer protection authority, the Federal Trade Commission, had jurisdiction to determine what might constitute an appropriate level of security for businesses in the United States to meet in order to avoid potential liability[5]. In Germany, legislation recently came into effect requiring minimum security requirements to be met by operators of critical infrastructure.

Security consultants — the links in the collaboration chain

Whatever the reasons for the push towards ‘collaborative’ security, it’s the micro-level players who work in the cyber security field day-to-day who will ultimately need to respond, as players at the macro-level place more formal expectations on them with regard to their security posture.

Hivint was in large part established to respond to this trend — we believe that security consultants have a crucial role to play in this process, including through building a system in which the outputs of consulting projects are shared within communities of interest who are facing common security challenges, thereby minimising redundant expenditure on security issues that other organisations have already faced. This system is called “The Security Colony” and is available now[6]. For more information on the reasons for its creation and what we hope to achieve, see our previous article on this topic.

We also believe there is a positive link between facilitating more collaboration between players at the micro-level of the ecosystem and encouraging the creation of more proactive security cultures within organisations. Helping businesses minimise expenditure on security problems that have already been addressed in other consulting projects frees them to focus their energies on implementing measures that encourage more proactive security — for example, as we discussed in a previous article, by educating employees on the importance of identifying and reporting basic security risks (such as the inappropriate sharing of system passwords). And encouraging a more proactive security culture within organisations will ultimately strengthen the nation’s overall cyber security posture and benefit the community as a whole.


Article by Craig Searle, Chief Apiarist, Hivint


[1] See in particular the proposed changes to section 313 of the Telecommunications Act 1997 (Cth).
[2] See https://www.fas.org/sgp/crs/misc/R44069.pdf for a description of these proposals.
[3] See http://ec.europa.eu/digital-agenda/en/news/network-and-information-security-nis-directive
[4] See for example http://www.project-syndicate.org/commentary/international-norms-cyberspace-by-joseph-s–nye-2015-05
[5] See http://www.technologylawdispatch.com/2015/08/privacy-data-protection/third-circuit-upholds-ftcs-authority-in-wyndham-case/?utm_source=Mondaq&utm_medium=syndication&utm_campaign=View-Original
[6] https://www.securitycolony.com/

Security Collaboration — The Problem and Our Solution


Colleagues, the way we are currently approaching information security is broken.

This is especially true with regard to the way the industry currently provides, and consumes, information security consulting services. Starting with Frederick Winslow Taylor’s “Scientific Management” techniques of the 1890s, consulting is fundamentally designed for companies to get targeted specialist advice to allow them to find a competitive advantage and beat the stuffing out of their peers.

But information security is different. It is one of the most wildly inefficient things to try to compete on, which is why most organisations are more than happy to say that they don’t want to compete on security (unless their core business is, actually, security).

Why is it inefficient to compete on security? Here are a couple of reasons:

Customers don’t want you to. Customers quite rightly expect sufficient security everywhere, and want to be able to go to the florist with the best flowers, or the best-priced flowers, rather than having to figure out whether that particular florist is more or less secure than the other one.

No individual organisation can afford to solve the problem. With so much shared infrastructure, so many suppliers and business partners, and almost no ability to recoup the costs invested in security, it is simply not cost-viable to throw the amount of money really needed at the problem. (Which, incidentally, is why we keep going around in circles saying that budgets aren’t high enough — they aren’t, if we keep doing things the way we’re currently doing things.)

Some examples of how our current approach is failing us:

We are wasting money on information security governance, risk and compliance

There are 81 credit unions listed on the APRA website as Authorised Deposit-Taking Institutions. According to the ABS, in June 2013 (the most recent data), there were 77 ISPs in Australia with over 1,000 subscribers. The thought that these 81 credit unions are each independently developing their own security and compliance processes, and that the 77 ISPs are doing the same, despite the fact that the vast majority of their risks and requirements are identical to those of their peers, is frightening.

The wasted investment in our current approach to information security governance is extreme. Five or so years ago, when companies started realising that they needed a social media security policy, hundreds of organisations engaged hundreds of consultants, to write hundreds of social media security policies, at an economy-wide cost of hundreds of thousands, if not millions, of dollars. That. Is. Crazy.

We need to go beyond “not competing” and cross the bridge to “collaboration”. Genuine, real, sharing of information and collaboration to make everyone more secure.

We are wasting money when getting technical security services

As a technical example, I met recently with a hospital where we will be doing some penetration testing. We will be testing one of their off-the-shelf clinical information system software packages. The software package is enormous — literally dozens of different user privilege levels, dozens of system inter-connections, and dozens of modules and functions. It would easily take a team of consultants months, if not a year or more, to test the whole thing thoroughly. No hospital is going to have a budget to cover that (and really, they shouldn’t have to), so rather than the 500 days of testing that would be comprehensive, we will do 10 days of testing and find as much as we can.

But as this is an off-the-shelf system, used by hundreds of hospitals around the world, there are no doubt dozens, maybe even hundreds, of the same tests happening against that same system this year. Maybe there are 100 distinct tests, each of 10 days’ duration being done. That’s 1,000 days of testing — or more than enough to provide comprehensive coverage of the system. But instead, everyone is getting a 10 day test done, and we are all worse off for it. The hospitals have insecure systems, and we — as potential patients and users of the system — wear the risk of it.

The system is broken. There needs to be collaboration. Nobody wants a competitive advantage here. Nobody can get a competitive advantage here.

So what do we do about it?

There is a better way, and Hivint is building a business and a system that supports it. This system is called “The Colony”.

It is an implementation of what we’re calling “Community Driven Security”. This isn’t crowd-sourcing but involves sharing information within communities of interest who are experiencing common challenges.

The model provides benefits to the industry both for the companies who today are getting consulting services, and for the companies who can’t afford them:

Making consulting projects cheaper the first time they are done. If a client is willing to share the output of a project (that has, of course, been de-sensitised and de-identified) then we can reduce the cost of that consulting project by effectively “buying back” the IP being created, in order to re-use it. Clients get the same services they always get; and the sharing of the information will have no impact on their security or competitive position. So why not share it and pocket the savings?

Making that material available to the community and offering an immediate return on investment. Through our portal — being launched in June — for a monthly fee of a few hundred dollars, subscribers will be able to get access to all of that content. That means that for a few hundred dollars a month, a subscriber will be able to access the output from hundreds of thousands of dollars’ worth of projects, every month.

Making subsequent consulting projects cheaper and faster. Once we’ve completed a certain project type — say, developing a suite of incident response scenarios and quick reference guides — then the next organisation who needs a similar project can start from that and pay only for the changes required (and if those changes improve the core resources, those changes will flow through to the portal too).

Identifying GRC “Zero Days”. Someone, somewhere, first identified that organisations needed a social media security policy, and got one developed. There was probably a period of months, or even years, between that point and when it became ubiquitous. Through the portal, organisations that haven’t even contemplated such a need would be able to see that it has been identified and delivered, and if they want to address the risk before it materialises for them, they have the chance. And there is no incremental cost over membership to the portal to grab it and use it.

Supporting crowd-funding of projects. The portal will provide the ability for organisations to effectively ‘crowd fund’ technical security assessments against software or hardware that is used by multiple organisations. The maths is pretty simple: if two organisations are each looking at spending $30,000 to test System X, getting 15 days of testing for that investment, then by each putting $20,000 into a central pool to test System X they’ll get 20 days of testing and save $10,000 each. More testing, for lower cost, resulting in better security. Everyone wins (the short sketch below walks through the numbers).
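To make that arithmetic concrete, here is a minimal Python sketch of the pooled-testing comparison. The $30,000 budgets, $2,000 day rate and two-organisation scenario are simply the figures from the example above, not parameters of the portal itself.

```python
# Minimal sketch of the crowd-funded testing maths from the example above.
# Figures are illustrative only: $30,000 buys 15 days of testing, i.e. $2,000/day.

DAY_RATE = 2_000  # dollars per day of testing


def pooled_testing(individual_budget: int, contribution: int, organisations: int) -> dict:
    """Compare separate engagements with a single pooled ('crowd-funded') engagement."""
    days_each_alone = individual_budget / DAY_RATE            # days each org gets on its own
    days_pooled = (contribution * organisations) / DAY_RATE   # days the shared test covers
    saving_per_org = individual_budget - contribution
    return {
        "days_each_alone": days_each_alone,
        "days_pooled": days_pooled,
        "saving_per_org": saving_per_org,
    }


print(pooled_testing(individual_budget=30_000, contribution=20_000, organisations=2))
# {'days_each_alone': 15.0, 'days_pooled': 20.0, 'saving_per_org': 10000}
```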

What else is going into the portal?

We have a roadmap that stretches well into the future. We will be including Threat Intelligence, Breach Intelligence, Managed Security Analytics, the ability to interact with our consultants and ask either private or public questions, the ability to share resources within communities of interest, project management and scheduling, and a lot more. Version 1 will be released in June 2015 and will include the resource portal (i.e. the documents from our consulting engagements), Threat Intelligence and Breach Intelligence, plus the ability to interact with our consultants and ask private or public questions.

“Everyone” can’t win. Who loses?

The only people who will potentially lose out from this are security consultants. But even there we don’t think that will be the case. It is our belief that the market is supply-side constrained — in other words, we believe we are going to be massively increasing the ‘output’ for the economy-wide consulting investment in information security; but we don’t expect companies will spend less (they’ll just do more, achieving better security maturity and raising the bar for everyone).

So who loses? Hopefully, the bad guys, because the baseline of security across the economy gets better and it costs them more to break in.

Is there a precedent for this?

The NSW Government Digital Information Security Policy has as a Core Requirement, and a Minimum Control, that “a collaborative approach to information security, facilitated by the sharing of information security experience and knowledge, must be maintained.”

A lot of collaboration on security so far has been about securing the collaboration process itself. For example, that means health organisations collaborating to ensure that health data flowing between the organisations is secure throughout that collaborative process. But we believe collaboration needs to be broader: it needs to be not just about securing the collaborative footprint, but about securing each other’s organisations in their entirety.

Banks and others have for a long time had informal networks for sharing threat information, and the CISOs of banks regularly get together and share notes. The CISOs of global stock exchanges regularly get together similarly. There’s even a forum called ANZPIT, the Australian and New Zealand Parliamentary IT forum, for the IT managers of various state and federal Parliaments to come together and share information across all areas of IT. But in almost all of these cases, while the meetings and the discussions occur, the on-the-ground sharing of detailed resources happens much less.

The Trusted Information Sharing Network (TISN) has worked to share — and in many cases develop — in-depth resources for information security. (In our past lives, we wrote many of them.) But these are $50K–$100K endeavours per report, generally limited to 2 or 3 reports per year, and generally provide a fairly heavyweight approach to the topic at hand.

Our belief is that while “the 1%” of attacks — the APTs from China — get all the media love, we can do a lot of good by helping organisations with very practical and pragmatic support to address the 99% of attacks that aren’t State-sponsored zero-days. Templates, guidelines, lists of risks, sample documents, and other highly practical material are the core of what organisations really need.

What if a project is really, really sensitive?

Once project outcomes are de-identified and de-sensitised, they’re often still very valuable to others, and not really of any consequence to the originating company. If you’re worried about it, you can review the resources before they get published.

So how does it work?

  1. You give us a problem; we’ll scope it, quote it, and deliver it with expert consultants. (This part of the experience is the same as your current consulting engagements.)
  2. We offer a reduced fee for service delivery (the percentage reduction depends on the re-usability of the output).
  3. Created resources, documents, and de-identified findings become part of our portal for community benefit.

Great. Where to from here?

There are two things we need right now:

  1. Consulting engagements that drive the content creation for the portal. Give us the chance to pitch our services for your information security consulting projects. We’ve got a great team, the costs are lower, and you’ll also be helping our vision of “community driven security” become a reality. Get in touch and tell us about your requirements to see how we can help.
  2. Sign up for the portal (you’ve done this bit!) and get involved — send us some questions, download some documents, subscribe if you find it useful.

And of course we’d welcome any thoughts or input. We are investing a lot into this, and are excited about the possibilities it is going to create.


Article by Nick Ellsmore, Chief Apiarist, Hivint

Cybersecurity Collaboration

Establishing a security Community of Interest


Hivint is all about security collaboration.

We believe that organisations can’t afford to solve security problems on their own and need to more efficiently build and consume security resources and services. Whilst we see our Security Colony as a key piece of this collaboration puzzle, we definitely don’t see it as the only piece.

Through our advisory services, we regularly see the same challenges and problems being faced by organisations within the same industry. We also see hesitation among organisations to share information with others. This is often due to perceived competitiveness, the lack of a framework to enforce sharing, and fear of sharing too much information, along with privacy concerns.

We believe it is important for organisations to realise that security shouldn’t be a competition between ‘competitors’, but rather a shared fight against threats, and that working together to solve common security challenges is vital. We want to help make that happen. One such way — and the purpose of this article — is for a group of similar organisations to form a security Community of Interest (CoI).

This article outlines our suggested approach to establishing and running a CoI, covers off a number of common concerns regarding the operation of such a community, and concludes with a link to a template that can be used by any organisation wishing to establish one.

Why is information sharing good?

Security information sharing and collaboration helps ensure that organisations across an industry learn from each other, encouraging the kind of innovative thinking needed to deter cyber criminals. Our earlier blog post, Security Collaboration — The Problem and Our Solution, provides a detailed look at security collaboration.

We consider security collaboration vital to changing the economics of cyber-crime, and as such we share what works and what doesn’t by making the output of our consulting engagements available on our Security Colony Portal.

However, we acknowledge that there are times when sharing could be more direct, with organisations forming a closer collective. Documents and resources could then be shared that are more specific to their industry (for example, acceptable use policies may be very similar across universities), or more sensitive in nature in a way that could make it unreasonable to share publicly (for example, security-related issues that may not have been effectively solved yet).

When does a Community of Interest work?

Sharing of information is most effective when a collective group is interested in a similar type of information. An example of this may be universities — while distinct institutions will have different implementations, the overall challenges each faces are likely to be similar. Pooling resources such as policies, operating procedures, and to an extent metrics, provides a way to maximise the performance of the group as a whole while minimising duplication of effort.

When is Community of Interest sharing a bad idea?

Sharing agreements like a CoI will not be effective in all circumstances — a CoI will only work if information flows in both directions for the organisations involved. It would not be a suitable framework for things that generally flow in a single direction, such as government reporting. A CoI’s primary focus should also not be on sharing ‘threat intel’, as there are a number of services that already do this, such as CERT Australia, Open Threat Exchange, McAfee Threat Intelligence Exchange and IBM X-Force, to name a few.

How is information shared within a Community of Interest?

An important aspect of a CoI is the platform used for sharing between its members. It is important to recognise that the platforms used will not be the same across all CoIs; each organisation will have unique requirements and preferences as to which platforms will be most effective in the circumstances. Platforms such as email lists and portals can be effective for sharing electronically, while meetings (whether in person or by teleconference) may be more effective in some cases.

What kind of information can be shared?

In theory, almost anything; however, in practice there are seven major types of cybersecurity information suitable for sharing, according to Microsoft[1]. These are:

  • Details of attempted or successful incidents
  • Potential threats to business
  • Exploitable software
  • Hardware or business process vulnerabilities
  • Mitigation strategies against identified threats and vulnerabilities
  • Situational awareness
  • Best practices for incident management and strategic analysis of current and future risk environment.

Hivint recognises that every piece of information has different uses and benefits. Sharing information such as general policy documents, acceptable use policies, or processes that an organisation struggles with or performs well can uplift cyber resilience and efficiency among businesses. These are also relatively simple artefacts that can be shared to help build initial trust in the CoI, and are recommended as a starting point.

What about privacy and confidentiality?

Keeping information confidential is fundamental to establishing trust within a CoI. To ensure this is maintained, guidelines must be established to prevent the sharing of customer information or personal records.

Information should be de-identified and de-sensitised to remove any content whose sharing could constitute an unauthorised disclosure or breach, and limitations should be established to determine the extent of information able to be shared, as well as the authorised use of this information by the receiving parties.
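As a purely illustrative sketch of what a first-pass de-identification step might look like before material is shared with a CoI (the patterns and the list of sensitive terms below are assumptions for the example, not a description of any particular tool, and a human review would still be needed):

```python
import re

# Hypothetical terms identifying the originating organisation (assumption for illustration).
SENSITIVE_TERMS = ["Acme Credit Union", "acme.example.com"]

# Basic patterns for common identifiers; a real de-identification pass still needs human review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}


def deidentify(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing within a CoI."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    for term in SENSITIVE_TERMS:
        text = text.replace(term, "[REDACTED-ORG]")
    return text


sample = "Contact soc@acme.example.com about the incident on 10.1.2.3 at Acme Credit Union."
print(deidentify(sample))
# Contact [REDACTED-EMAIL] about the incident on [REDACTED-IP] at [REDACTED-ORG].
```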

How is a Community of Interest formed?

It is important to realise that organisations need not follow a single structure or model when setting up a CoI. Ideally, the first step is identifying and contacting like-minded people with an interest in collaborating from your network or business area. Interpersonal relationships between the personnel involved in the CoI are critical to retaining and enhancing the trust and confidence of all members. A fitting approach to creating such an environment is to initially exchange non-specific or non-critical information on a more informal basis. Considering that sharing agreements like this require a progressive approach, it is best not to jump in head first by sharing all the information pertaining to your business at the first opportunity.

Upon success of the first phase of sharing and development of a strong relationship between parties involved, a more formal approach is encouraged for the next phase.

Next Steps

We’ve made a Cyber Security Collaboration Framework available to all subscribers (free and paid) of Security Colony which can be used as a template to start the discussion with interested parties, and when the time comes, formally establish the CoI.

[1] ‘A Framework for Cybersecurity information sharing and risk reduction’ — https://www.microsoft.com/en-us/download/details.aspx?id=45516


Additional Information

There are a number of instances where cyber-security information sharing arrangements have been established around the world. Links to a small number of these are provided below.

http://data.cambridgeshire.gov.uk/data/information-management/info-sharing-framework/cambs-information-sharing-framework.pdf

https://corpgov.law.harvard.edu/2016/03/03/federal-guidance-on-the-cybersecurity-information-sharing-act-of-2015/

https://www.enisa.europa.eu/publications/cybersecurity-information-sharing

Australia’s Cyber Security Strategy — The Pixie Dust We Need?

Boom! And there we have it, the first reasonably coherent cyber security strategy for the country in almost 7 years. I thought I’d take the opportunity to put down on paper some initial thoughts.


For context, in the time between our last Strategy (2009) and this Strategy (2016), a few things transpired:

But let’s not dwell on the past. We are looking at a golden age of innovation and creativity, and perhaps cyber security can get access to some of the pixie dust previously reserved for mining and semi-viable heavy industries.

The Strategy is genuinely a positive step. It makes some reasonably solid (and hence measurable) commitments, hits some of the genuine issues of the industry like skills, the need for innovation, and the need for collaboration, and is significantly more pragmatic than the 2009 treatise on the allocation of responsibility across the many and varied government agencies with a stake in this. That said, the devil, as always, will be in the detail, and how this stuff gets rolled out will make all the difference and will determine if this is a great step forward, or we continue to flail about.

Cyber Security Growth Centre

At first glance this sounds like a great idea, but the more I think about it, the more I don’t understand the need. That’s not to say I don’t understand the need for the funding and the value, importance and opportunity associated with building out a significant cyber security industry for Australia’s economy… As I noted above, everyone in our industry looks to Israel as the shining light here, and there’s no question there’s a big global market if we can make it work.

Perhaps this is a philosophical argument, but does “streamlining governance” mean creating new organisations (as it does in this case) or does it mean making the existing organisations (of which there are admittedly many) operate smoothly together? Perhaps it’s a bit of both, but then is that really streamlining?

Commercialisation Australia programs already exist which would seem to have a very similar focus (albeit not dedicated to cyber security) — and have already invested in Australian cyber security companies like Quintessence Labs and TokenOne. The associated ‘Expert Network’ also has cyber security professionals involved (such as myself; and for clarity, this program is unpaid so there’s no commercial interest in me spruiking its existence) to help guide relevant companies. A specific focus on cyber security would be fantastic, but wouldn’t re-using existing approaches ensure:

  1. A faster time to market; and
  2. A reduced likelihood of the whole thing being a stuff up?

There are a huge number of aims and objectives of the Cyber Security Growth Centre listed in the Strategy, and I’d certainly hate to be the one having to be accountable for starting with a blank sheet of paper and doing everything from coordinating business-government-academia interaction, to cross-sector coordination, to skills development, to international market access support, to government policy advice, to ‘providing tertiary students with hands on experience… before they graduate’. All for $30 Million over a few years. Uh huh.

Again, to be clear, none of this stuff is a bad idea. It will all definitely help and certainly Hivint will be doing what we can to get involved all over the place. But as it currently stands, far from clarifying who does what, it’s added a whole heap of legitimate problems into a blender and poured out a Growth Centre smoothie. Hopefully it will become clearer as more detail becomes available.

Health Checks

The “national voluntary Cyber Security Governance ‘health checks’ to enable boards and senior management to better understand their cyber security status” are a good idea, but then they were a good idea the first time around (everyone remembers the Computer Network Vulnerability Assessment program, right?)

Admittedly, they’re not exactly the same — CNVA seemed a more technical assessment, whereas the ‘health check’ concept seems more governance-driven — but hopefully the model used will avoid the pitfalls that ultimately rendered CNVA a non-starter in most Boardrooms. The big one: the perception that if you’re taking Government funding, you need to share the dirty-laundry-esque outcomes of the assessment with them.

I mean, seriously, we’re talking ASX 100 here. The smallest one today has a market cap of over $1.4 Billion. Funding should not be the issue.

Benchmarking, on the other hand, would be great, and sounds like it is going to be included. The data — both qualitative and quantitative — in our industry is truly woeful. Hopefully the approach adopted here will build on the work already done — such as the guidance towards the NIST Cyber Security Framework included in the ASIC Cyber Resilience: Health Check document.

Security Assessments for Small Business

Having been in cyber security consulting for close to 20 years now, I like to think I have a pretty good understanding of the market, both from the supply side and the demand side, and it is definitely the case that the ‘supply side’ of providing cyber security services to SMEs is a graveyard of firms with good intentions. It is simply very difficult to provide the customised level of service required by a client when operating in the low-value, high-volume delivery model necessary for SME-targeted services to work.

ABC News last night referred to this as a $15 million program. I can’t find that number in the strategy itself, but I’m sure it comes from somewhere reliable. Assuming it is right, that’s about $4 million a year over 4 years (since everything seems to be expressed as 4-year investment periods these days), which is the revenue of a fairly small cyber-security consulting firm with about 15–20 staff; so that’s basically what we’re funding here. Let’s be generous and say 20 consultants, working full time, so 200 days a year each, for a total of about 4,000 days of delivery.

It’s hard to see anything meaningful being generated for an SME in under a day, and probably 2–3 days is more realistic, so the number of companies able to be serviced each year under the program is probably in the 1,300–2,000 range. Which is certainly non-trivial, but is also not exactly addressing the scale of the problem given we have 2,000,000-ish SMEs in Australia according to the ABS. Obviously not all of them will have a cyber security “problem” to solve, but it’s still a pretty big discrepancy.
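For what it’s worth, here is the same back-of-envelope arithmetic as a small Python sketch; the funding, consultant headcount and days-per-SME figures are the assumptions stated above, not official program parameters.

```python
# Rough capacity arithmetic for the assumed $15M SME assessment program.
program_funding = 15_000_000              # reported program size, dollars
years = 4
annual_funding = program_funding / years  # ~$3.75M/year, call it $4M

consultants = 20                          # roughly what ~$4M/year of revenue supports
delivery_days_per_consultant = 200
total_days_per_year = consultants * delivery_days_per_consultant  # ~4,000 delivery days

for days_per_sme in (2, 3):
    smes_per_year = total_days_per_year // days_per_sme
    print(f"{days_per_sme} days per SME -> ~{smes_per_year:,} SMEs served per year")

# 2 days per SME -> ~2,000 SMEs served per year
# 3 days per SME -> ~1,333 SMEs served per year
# Against roughly 2,000,000 SMEs in Australia, even the optimistic case is well under 1% coverage.
```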

Ultimately the answer here is to tie this to the R&D initiatives and spend a reasonable portion of that $15M on developing a methodology and a system that is as automated as possible to speed up delivery, while continuing to recognise that it is going to require the human intervention and expertise of consultants. This can’t become the IT equivalent of the pink batts program, paying dodgy operators $5K a throw to run Nessus over their local plumber’s Yellow Pages listing.

Industry Accreditation

The Strategy seems to double-down on the CREST approach, suggesting at one point that it could be extended beyond testing services. Which seems interesting given the REST in CREST is — by definition — for “Registered Ethical Security Testers”. But why let that get in the way. If all you’ve got is a hammer, everything looks like a nail.

It will be interesting to see whether the Government really does attempt to “pick a winner” in this market despite avoiding it in the past — and which I’m sure will piss off the many and varied other accreditation programs no end — or whether CREST necessarily has to build in a stronger cross-recognition process to acknowledge the breadth of market offerings available.

Fortunately though, we seem to have steered clear of any suggestion we need a “licensing” program for cyber security professionals. The longer we can avoid that albatross around our necks, the better.

Threat Sharing & Collaboration

It’s great that the strategy now commits to “strengthen trusted partnerships with the private sector for the sharing of sensitive information on cyber threats, vulnerabilities and their potential consequences.”

Wait, sorry, that was the 2009 strategy.

Now we’re saying that “organisations, public and private, must work together to build a collective understanding of cyber threats and risks through a layered approach to cyber threat sharing.”

Either way, it’s still true, and it’s still necessary.

But it’s not enough. Why limit sharing to threat information? That’s why we’ve built Security Colony (www.securitycolony.com), the first — and only — cyber security collaboration platform in Australia. Here is the one pitch I’ll make in this article: for under $300 / month (and you can trial it for free), you can get access to virtually all the output from our entire consulting team, country-wide.

You can get an entire Information Security Management System that we were paid $100K to develop.

You can get entire security architecture documents that we were paid $40K to develop.

You can get incident response planning guides that we were paid $50K to develop.

And over 100 other documents that add up to over $2 million in value. It’s all derived from real-world consulting projects, paid for by real Australian clients.

You can save tens, or hundreds, of thousands of dollars through subscribing and re-using these materials. Check it out: it’s free. www.securitycolony.com

Timing

Given we’re all expecting an election to be called in a couple of weeks’ time, and the Government then goes into caretaker mode, is all this stuff effectively on ice until at least July (assuming the current Government is returned) or maybe September (if there’s a change of Government, with the new lot invariably wanting to make their mark by changing the curtains)?

Summary

So there it is. Some initial thoughts on the strategy in the context of the various initiatives we’ve seen come and go in the past. A lot of really good ideas, and really valuable initiatives, provided they are well executed. Hopefully we see a speedy implementation, and the outcomes match the promises.

Oh, and if anyone knows whether the Cyber Ambassador role comes with diplomatic immunity, let me know. It would be sweet to not have to worry about pesky traffic laws.

By Nick Ellsmore, Chief Apiarist at Hivint. For more of Hivint’s latest cyber security research, as well as to access our extensive library of re-usable cyber security resources for organisations, visit Security Colony.