
An analysis of the current cyber threat landscape

Over the last few years, although certain industries have been targeted by cyber attacks more than others, the methods used across the board have remained largely the same.

Prominent incidents identified over 2016–2017 almost always involved one of the following:

  • Phishing and other email scams
  • Ransomware
  • Botnets
  • DDoS
  • Malware-as-a-Service
  • Supply chain attacks

In this article we investigate which cyber attacks have been prominent over the last 12 months, which trends will continue for the remainder of 2017, and what can be done to minimise your risk of attack.

Phishing and other email scams

Phishing, spear-phishing (targeted phishing of specific individuals) and other email scams continue to be a major threat to businesses. In an era of large-scale security infrastructure investment, users are consistently the weak link in the chain.

The Symantec Internet Security Threat Report 2017[1] and ENISA Threat Landscape Report 2016[2] state that the threat of phishing is intensifying even as the overall number of attacks gradually declines, which suggests attacks are becoming more sophisticated and effective. In all likelihood, this is due to cyber criminals moving away from volume-based attacks towards more targeted and convincing scams. This transition is motivated by the higher success rate of tailored attacks; however, it requires greater effort to research viable targets using open-source material such as social media, and to conduct social engineering.

This shift in approach is consistent with the observed growth of business-focussed email scams in the last 18 months. Cyber attackers begin by conducting extensive research on businesses they wish to target in order to identify key staff members — particularly those with privileged access, a degree of control over the business’ financial transactions, or in a position of authority.

These scams typically involve cyber attackers crafting emails that request an urgent transfer of funds, seemingly from a trusted party such as a senior manager in the business or an external contractor / supplier who is regularly dealt with. Following a global survey of business email scams, the FBI’s Internet Crime Complaint Center reported this type of attack continues to surge in prominence, with:

  • US and foreign victims reporting 24,345 cases by December 2016 — a significant increase from only 668 reported cases just six months earlier (the actual number is likely to be much higher as many cases go unreported).
  • Attackers attempting to steal a total of USD$5.3 billion through reported business email scams by the end of 2016, compared to USD$3.1 billion only 6 months earlier.[3]


Ransomware

Ransomware is malicious software that encrypts users’ data and demands payment, typically in a cryptocurrency such as Bitcoin, for the purported safe return of files via a decryption key. The most prominent example of this form of attack was the WannaCry attack of May 2017, in which cybercriminals distributed the ransomware strain via an underlying software vulnerability in the Microsoft Windows operating system.

Due to the relatively low ‘barrier to entry’ and potentially lucrative rewards for even inexperienced cyber attackers, we have continued to see significant growth in the use of ransomware since 2016. In January 2016, ransomware accounted for only 18% of the global malware payloads delivered by spam and exploit kits; ten months later, ransomware accounted for 66% of all malware payloads — an increase of 267%[4].
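
The growth figures quoted above can be sanity-checked with simple arithmetic:

```python
# Ransomware's share of malware payloads delivered by spam and exploit
# kits, as quoted above (January 2016 vs ten months later).
jan_share = 18.0
later_share = 66.0

# Relative growth from 18% of payloads to 66% of payloads.
percentage_increase = (later_share - jan_share) / jan_share * 100
print(round(percentage_increase))  # 267
```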

Not only is ransomware one of the most popular attack vectors for cyber attackers, it is also among the most harmful. The cost of the ransom is only one aspect to consider — system downtime can have a significant impact on sales, productivity and customer or supplier relationships. In some cases (e.g. medical facilities), ransomware infections could potentially cost lives.

The success rate of ransomware is largely attributable to the exploitation of an organisation’s end users who typically have limited training and expertise in cyber security. In addition, once ransomware has infiltrated an organisation, many find it difficult to effectively resolve the effects without paying the ransom demanded by the attackers.

There is no guarantee attackers will provide the decryption key if the ransom is paid, however, and relying on payment of the ransom as a ‘get out of jail’ tactic is a risky choice. Further, paying the ransom encourages these sorts of attacks and funds further development of ransomware technology. Hivint’s article ‘Ransomware: a History and Taxonomy’[5] provides an in-depth analysis of the growing problem of ransomware.

Ransomware is likely to be a thorn in the side of organisations for some time to come, and through increasingly diverse avenues. The 2017 SonicWall Annual Threat Report highlights that there is likely to be a greater focus on the industrial, medical and financial sectors due to their particularly low tolerance for system downtime or loss of data availability[6].

Similarly, internetworked physical devices — often referred to as the Internet of Things (IoT) — are increasingly being targeted because they are generally not designed with security as a central consideration. While the majority of IoT devices do not store data and can simply be re-flashed to recover from an attack, organisations and end users may rely on critical devices where any amount of downtime is problematic, such as medical devices or infrastructure. How the design and implementation of IoT devices shifts in response to the growing threat of ransomware will be one of the more interesting spaces to watch for the remainder of 2017 and beyond.

Botnets and DDoS

As with ransomware, the increased inter-connectivity of everyday devices such as lights, home appliances, vehicles and medical instruments is leading to their increased assimilation into botnets to be used in distributed denial of service (DDoS) attacks.

Software on IoT devices is often poorly maintained and patched. Many new types of malware search for IoT devices with factory default or hardcoded usernames and passwords and, after compromising them, use their Internet connectivity to contribute to DDoS attacks. Due to the rapidly increasing number of IoT devices, this is paving the way for attacks at a scale that DDoS mitigation firms may struggle to handle[10]. The Thales 2017 Data Threat Report suggests that 6.4 billion IoT devices were in use worldwide by 2016 and that this number is forecast to increase to 20 billion devices by 2020.[7]

While the growth of interconnected devices is inevitable, we expect that their rate of exploitation will stabilise in the next few years given the emergence of IoT security bodies such as IoTSec Australia and the IoT Security Foundation. It is likely that device manufacturers will also be pushed to comply with security standards and to make security a more central consideration during design.


Malware-as-a-Service

Hacking toolkits are being made available online, some for free, effectively creating an open source community for cyber criminals[8]. There are also paid business models, known as “Malware-as-a-Service”, for less experienced attackers, where payment is made for another attacker to run the campaign on their behalf. This reduces the barrier to entry for potential cyber attackers and also facilitates the rapid evolution of malware strains, making evasion of anti-malware endpoint protection tools more frequent. We expect this trend will continue as sophisticated cyber attackers increasingly move towards the Malware-as-a-Service business model.

Supply Chain Security

It’s important to be mindful that cyber attackers may also seek to exploit supply chain partners as a way to compromise the security of a business indirectly. The 2013 breach of US company Target is an example of this, as attackers stole remote access credentials from a third-party supplier of services[9]. We have also seen reports of attacks against managed service providers here in Australia, as a way of indirectly compromising the providers’ customers.

What should you do?

The good news is that most of these threats can be mitigated with a small number of relatively basic controls in place — none of which should come as a surprise:


Patching

Keeping your systems patched and up-to-date can prevent cyber attackers from exploiting the vulnerabilities that allow them to install malicious software on your networks and systems. Unpatched Windows systems were the reason the WannaCry ransomware attack was so prolific.

User Awareness

User awareness training can significantly reduce the likelihood of malware compromising your organisation’s security. Users that can, among other things, confidently identify and report suspicious emails and exercise good removable media security practices can put your security team on the front foot.

Changing default password credentials

The main attack vector for IoT devices is factory default access credentials left unchanged after installation. Changing the password, or disabling the default accounts, will prevent the majority of attacks on IoT devices. This is also the case for hardware more generally, such as routers and end-user devices.
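
A simple audit of this control can be automated. The sketch below checks device accounts against a list of well-known factory defaults; the credential list and device records are hypothetical examples, not a real database:

```python
# Illustrative default-credential audit. The pairs below are invented
# examples of common factory defaults, not an authoritative list.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "user"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if the username/password pair is a known factory default."""
    return (username.lower(), password) in DEFAULT_CREDENTIALS

# Hypothetical device inventory to audit.
devices = [
    {"host": "camera-01", "username": "admin", "password": "admin"},
    {"host": "router-01", "username": "netops", "password": "S7r0ng-Pass!"},
]

at_risk = [
    d["host"] for d in devices
    if uses_default_credentials(d["username"], d["password"])
]
print(at_risk)  # ['camera-01']
```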

Segregate BYOD and IoT devices from other systems on your network

Placing IoT devices and uncontrolled bring-your-own devices (BYOD) on a separate network can isolate the effects of any active vulnerabilities from your critical systems.

Backup and recovery

Having all your critical data regularly backed up, both offline and in the cloud, can mitigate the risk of malware — particularly ransomware — causing major damage to your business. It’s also just as important to regularly test your recovery plans to ensure they work effectively, since restoring systems to a previous state without significant downtime or loss of data is the key to damage control.

Security Colony

At Security Colony we have a variety of resources that can help in strengthening your organisation’s preparedness for cyber attacks, including user awareness materials, incident response templates, security policies and procedures, and a vendor risk assessment tool to help assess the security posture of your vendors’ internet-facing presence. Other resources include an “Ask Hivint” forum for those more esoteric questions, and breach monitoring to identify whether your users or domain have been caught up in a previous security incident.












Cybersecurity Collaboration

Establishing a security Community of Interest

Hivint is all about security collaboration.

We believe that organisations can’t afford to solve security problems on their own and need to more efficiently build and consume security resources and services. Whilst we see our Security Colony as a key piece to this collaboration puzzle, we definitely don’t see it as the only piece.

Through our advisory services, we regularly see the same challenges and problems being faced by organisations within the same industry. We also see hesitation among organisations about sharing information with others. This is often due to perceived competitiveness, the lack of a framework to govern sharing and the fear of sharing too much information, along with privacy concerns.

We believe it is important for organisations to realise that, in security, they are not competing against each other but against threats, and that working together to solve common security challenges is vital. We want to help make that happen. One such way — and the purpose of this article — is for a group of similar organisations to form a security Community of Interest (CoI).

This article outlines our suggested approach towards establishing and running a CoI, covering off a number of common concerns regarding the operation of such a community, and concludes with a link to a template that can be used by any organisation wishing to establish such a CoI.

Why is information sharing good?

Security information sharing and collaboration helps ensure that organisations across the industry learn from each other, leading to innovative thinking to deter cyber criminals. Our earlier blog post, Security Collaboration — The Problem and Our Solution, provides a detailed outlook on security collaboration.

We consider security collaboration as vital to making a difference to the economics of cyber-crimes, and as such we share what works and what doesn’t by making the output of our consulting engagements available on our Security Colony Portal.

However, we acknowledge that there are times when sharing could be more direct, with organisations forming a closer collective — documents and resources could then be shared that are more specific to their industry (for example, acceptable use policies may be very similar across universities), or more sensitive in nature such that it would be unreasonable to share them publicly (for example, security issues that may not have been effectively solved yet).

When does a Community of Interest work?

Sharing of information is most effective when a collective group is interested in a similar type of information. An example of this may be university institutions — while distinct entities will have different implementations, the overall challenges that each faces are likely to be similar. Pooling resources such as policy, operating procedures and, to an extent, metrics provides a way to maximise the performance of the group as a whole while minimising duplication of effort.

When is Community of Interest sharing a bad idea?

Sharing agreements like a CoI will not be effective in all circumstances — a CoI will only work if information flows in both directions for the organisations involved. It would not be a suitable framework for things that generally flow in a single direction, such as government reporting. A CoI’s primary focus should also not be on sharing ‘threat intel’, as a number of services already do this — CERT Australia, Open Threat Exchange, McAfee Threat Intelligence Exchange and IBM X-Force, to name a few.

How is information shared within a Community of Interest?

An important aspect of a CoI is the platform used for sharing between its members. It is important to recognise that the same platform will not suit every CoI; each organisation will have unique requirements and preferences as to which platforms will be most effective in the circumstances. Platforms such as email lists and portals can be effective for sharing electronically, however meetings (in person or by teleconference) may be more effective in some cases.

What kind of information can be shared?

In theory, almost anything, however in practice there are seven major types of cybersecurity information suitable for sharing, according to Microsoft[1]. These are:

  • Details of attempted or successful incidents
  • Potential threats to business
  • Exploitable software, hardware or business process vulnerabilities
  • Mitigation strategies against identified threats and vulnerabilities
  • Situational awareness
  • Best practices for incident management
  • Strategic analysis of the current and future risk environment.

Hivint recognises that every piece of information has different uses and benefits. Sharing information such as general policy documents, acceptable use policies, or processes that an organisation struggles with or performs well can uplift cyber resilience and efficiency among businesses. These are also relatively simple artefacts that can be shared to help build initial trust in the CoI, and are recommended as a starting point.

What about privacy and confidentiality?

Keeping information confidential is fundamental to establishing trust within a CoI. To ensure confidentiality is maintained, guidelines must be established to prevent the sharing of customer information or personal records.

Information should be de-identified and de-sensitised to remove any content that could potentially result in unauthorised disclosure or a breach, and limitations should be established to determine the extent of information that can be shared, as well as the authorised use of this information by the receiving parties.
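
As an illustration, even a simple automated pass over documents before sharing can catch common identifiers. The patterns below are a minimal sketch and deliberately narrow — real de-identification needs a much broader rule set and human review:

```python
import re

# Minimal de-identification sketch. The two patterns (email addresses
# and Australian-style phone numbers) are illustrative only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?61|0)[23478](?:[ -]?\d){8}\b")

def deidentify(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[email removed]", text)
    text = PHONE.sub("[phone removed]", text)
    return text

print(deidentify("Contact jane.doe@example.com on 0412 345 678."))
# Contact [email removed] on [phone removed].
```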

How is a Community of Interest formed?

It is important to realise that organisations need not follow a single structure or model when setting up a CoI. Ideally, the first step is identifying and contacting like-minded people with an interest in collaborating from your network or business area. Interpersonal relationships between the personnel involved in a CoI are critical to retaining and enhancing the trust and confidence of all members. A fitting approach to creating such an environment is to initially exchange non-specific or non-critical information on a more informal basis. Given that sharing agreements like this require a progressive approach, it is best not to jump in head first by sharing all the information pertaining to your business in the first instance.

Upon success of the first phase of sharing and development of a strong relationship between parties involved, a more formal approach is encouraged for the next phase.

Next Steps

We’ve made a Cyber Security Collaboration Framework available to all subscribers (free and paid) of Security Colony which can be used as a template to start the discussion with interested parties, and when the time comes, formally establish the CoI.

[1] ‘A Framework for Cybersecurity Information Sharing and Risk Reduction’, Microsoft

Additional Information

There are a number of instances where cyber-security information sharing arrangements have been established around the world. The below provides links to a small number of these.

jobactive Case Study

Meeting the jobactive security compliance requirements

Hivint has been involved with the jobactive program since early 2015, initially undertaking the required IRAP assessment for one of the approved third party IT providers, and since then working with many different jobactive providers to help guide them through the process towards achieving security accreditation.

This post provides an overview of the compliance requirements of the program, as well as suggested steps and considerations for any entity currently planning or pursuing compliance.

About the program

The Australian Government’s jobactive program, directed by the Department of Employment (‘the Department’), is an Australia-wide initiative aimed at getting more Australians working. Through the program, jobseekers are both aided in getting prepared for work (or back to work) and connected with employers through a network of Employment Services Providers (‘providers’).

Under the program, all providers are required to be accredited by the Department as being able to deliver — and continue to deliver — services in a manner that meets various conditions. One condition (defined in item 32 of the jobactive deed) relates to the protection of data entrusted to the provider by the Department in order to deliver these services, effectively extending many of the Australian Government security requirements that apply to the Department through to these providers.

The data security requirements that all providers — as well as third parties hosting jobactive data on behalf of providers — must meet have been drawn from two Australian Government publications and one law regarding the protection of information. The publications defining the security control requirements with which providers must comply, and the number of controls drawn from each, include:

jobactive Statements of Applicability

Rather than taking a ‘big bang’, all-or-nothing approach — where providers are required to be compliant with all controls by a specific date — the Department has introduced a graduated approach to achieving compliance. This has been developed through the definition of three individual compliance stages defined within the jobactive Statements of Applicability (SoA), with compliance phased across an approximately three-year period.

The perceived intent here is to start providers off with the need to establish a baseline security capability that is then matured with more advanced and complex controls over time. The overall timeframe for compliance, and number of controls in each stage and SoA include:

The below graph shows the breakdown of these SoAs as drawn from the three input documents. What is evident is that SoA 1 covers a broad set of controls across most of the ISM security domains and the Privacy Act, providing a general security baseline for providers.

Progressing through the program (SoA2 through to SoA3) security becomes more focused in specific domains and extended to include more advanced and complex technical controls within the framework.

Assessment requirements

It’s easy to see that the lion’s share of the requirements have been drawn from the ISM, which reflects the Department’s focus on information security through cyber-security.

The Department has leveraged the existing register of security professionals already authorised to complete formal assessments of systems against the ISM for certification and accreditation by government bodies. The Information Security Registered Assessor Program (IRAP) list of assessors is maintained by the Australian Signals Directorate (ASD), the same body that is responsible for the ISM.

The Department has given providers the option to undertake a self-assessment for the first compliance assessment, but requires formal assessments by IRAP assessors for stages 2 and 3. These assessments include:

  • The first assessment is considered a self-assessment, whereby providers complete their own assessment against the controls defined in SoA 1 and report findings to the Department for review.
  • The second assessment is required to be completed by an IRAP assessor, with assessment coverage of controls defined in both SoA 1 and SoA 2.
  • The third assessment is also required to be completed by an IRAP assessor, and so long as the risk profile or environment hosting jobactive data hasn’t significantly changed, the assessment may be completed against the controls in SoA 3 only (we recommend validating this position with the IRAP assessor and Department prior to conducting this assessment).
  • From this point, the provider is required to undergo assessment no less than every three years — potentially sooner if the Department requests a new assessment based on factors such as a change in governance arrangements, changing cyber threats or other factors that change the IT security landscape for the provider.

Where to start

Achieving a level of compliance acceptable to the Department against the full set of security controls reflected across the SoAs can be a daunting task for many providers. We’ve worked with a variety of providers, from small, single-office not-for-profits through to large Australia-wide commercial providers, and each has needed to invest considerable time and effort to achieve the target compliance posture.

However, regardless of the size, scope and overall security maturity of the providers we’ve worked with, the general approach that we’ve successfully employed with each has the same main principles and phases:

  • Phase 1 — Scope Definition, Reduction and Validation
  • Phase 2 — Risk and Control Definition
  • Phase 3 — Control Implementation & Refinement
  • Phase 4 — SoA 1 and 2 Assessment
  • Phase 5 — Control Implementation & Refinement
  • Phase 6 — SoA 3 Assessment

A high level overview of the first two phases is provided below.

Phase 1 — Scope Definition, Reduction and Validation

This is a crucial first step that is often overlooked by providers. We strongly believe that proper planning greatly increases the likelihood of an overall successful initiative, both financially and operationally, reducing the likelihood of unnecessary and wasteful investment and clearly establishing the bounds for compliance. We recommend that providers undertake each of the following; whilst not mandated, engaging an IRAP assessor to assist through the process can also speed this activity up and improve the outcomes considerably.

1. Establish your scope. It’s often not clear exactly what data is subject to the Department’s requirements (Is it only data retrieved from Employment systems? What about data provided directly from jobseekers? Data that is obtained from other providers? And so on…). Knowing what’s in scope and what isn’t can help ensure that you can focus your compliance efforts appropriately. We recommend that providers:

  • Identify jobseeker information coming into the organisation. Document the Employment-provided systems where you retrieve or access jobseeker information, the method by which you obtain it, and the type of information that you retrieve.
  • Identify where you build on this information. Document instances where you build on this information through other sources (e.g. jobseeker-provided information) and, again, capture the type of information and the method by which you obtain it.
  • Identify who you share this information with. Document instances where you share information with third parties in support of jobseeker services.
  • Define your business process. Capture the above processes together as a set of workflows, outlining the relevant actors, information types and actions performed.
  • Overlay these processes across your environment. Overlay these processes across your physical, personnel and IT environments — including those hosted by third parties, such as Department accredited entities, ASD certified providers, or any other entity (don’t forget to include support processes such as offsite backups, or remote connections from IT service providers into your environment).

2. Validate your scope. Engage with the Department’s Security Compliance team to discuss what you have established, and seek input as to whether you are able to remove certain entities, information types and processes from your scope. The Department may also be able to assist by providing upfront guidance on critical / high-risk issues with your practices (e.g. storing jobactive data offshore with a non-approved provider).

3. Define a plan to reduce your scope. This is an optional activity whereby a provider may wish to reduce or otherwise change their scope to reduce their compliance exposure. Some entities have chosen to apply the controls to their entire business (as they are seen as good-practice security controls, regardless of the data types being protected), while others have reduced their scope by changing or consolidating systems and business processes utilising jobactive data.

4. Review the SoAs and remove controls that don’t apply. The SoAs contain a combined 409 security controls; however, not all apply to every provider. Rather than investing in unnecessary compliance expenditure, documenting the controls that the provider considers out of scope, including justification for each, can save a lot of effort. For example:

5. Validate your scope. Following any documented proposal to reduce your environment scope and / or remove controls from the SoA, validation with the Department and / or your IRAP assessor is critical. Only once the revised scope has been validated should you implement the changes.

Phase 2 — Risk and Control Definition

Once the scope has been established providers are in a position to define and implement controls to meet the Department’s security compliance requirements.

Our immediate recommended next step is for providers to formally assess their security risk posture, and then begin to establish key overarching security artefacts that will govern their security controls.
This includes:

  • Document the Security Risk Management Plan (SRMP) — this document captures the various security risks to jobactive data within the provider’s scoped environment, as well as the controls in place, and planned to be in place, to mitigate these risks to an acceptable level.
  • Define the System Security Plan — this document is derived from the SRMP, the environment scope, and the Department’s SoAs and describes the implementation and operation of security controls for the system.
  • Define the security documentation framework — various documents which collectively detail the provider’s security controls. This typically comprises security policies, standards, plans and procedures.

We recognise that many providers have not previously needed to establish the majority of the above, and we suggest that you refer to the ISM Controls Manual for further detail describing each of the required documentation artefacts, or alternatively get in touch with an IRAP assessor to assist.

Phase 3 and Beyond

From this point, providers should be well positioned to implement the various controls defined, and then progress towards the required SoA 1 self-assessment, and subsequent IRAP assessments.

At this stage, providers may also wish to undertake a compliance gap assessment against the controls across the SoAs to help identify the overall compliance posture, and inform the prioritisation, as well as overall resourcing and investment in the compliance initiative.

Maintaining an IRAP Assessor (or alternatively, an individual with previous experience in adopting the ISM control framework) in an advisory capacity throughout this process* can also help ensure that the provider stays on track.

Need a Hand?

Hivint maintains a team of IRAP Assessors and security consultants across Australia with extensive experience in Federal Government security requirements and the development and application of ISM security control frameworks and compliance strategies.

If you have any questions regarding the Department’s security compliance requirements, or if you may need a hand in working out where to start (or how to progress), please get in touch with us here.

Case Study by Aaron Doggett, Regional Director, Hivint

* To remove any potential conflict of interest, the IRAP Assessor engaged to perform the formal assessments must not also operate in an advisory / consulting capacity to the provider.

Vendor Risk Assessment Tool

Our new addition to the Security Colony Portal.

Security Colony has released its “Vendor Risk Assessment” (VRA) tool, developed in conjunction with a major financial services client, which enables our subscribers to assess the risk posed to their internet-facing sites and receive a profile reflecting their cyber security maturity.

While seeing your own profile is empowering, the ultimate purpose of the tool is to enable you to gain better visibility over your suppliers. In Q2 this year, we will be releasing the ability for our paid subscribers to add additional vendors for tracking, to get a view of their third party risk.

The platform uses a range of free, open source and commercial tools to complete 17 distinct checks against a company’s online footprint, packaging this analysis up in an easy-to-use interface that details identified risks and provides an overall risk score and grade for the vendor.

What does it do?

There are two broad assessment categories completed by the VRA platform: malicious activity checks, and misconfiguration and vulnerability checks.

The data collected from these assessments is then analysed and presented in an easy to manage format, including:

  • Providing a risk-based score (out of 10) and a corresponding grade (from A to F)
  • Tracking the change in security risks over time
  • Providing clarity around the source of the calculation
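
The published material doesn’t detail how scores map to grades, but conceptually the mapping is a simple banding exercise. The thresholds below are hypothetical, purely to illustrate the idea:

```python
def grade(score: float) -> str:
    """Map a 0-10 risk score to a letter grade.

    The thresholds here are illustrative assumptions; the VRA tool's
    actual bands are not published in this article.
    """
    bands = [(9.0, "A"), (8.0, "B"), (6.5, "C"), (5.0, "D"), (3.5, "E")]
    for threshold, letter in bands:
        if score >= threshold:
            return letter
    return "F"

print(grade(10.0), grade(7.2), grade(2.0))  # A C F
```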

Domain Risk Overview

Malicious activity checks

The VRA tool assesses the organisation for historic (or current) malicious activity, including:

  • Whether an organisation has had their domain blacklisted for spam
  • Whether an organisation has been identified as hosting malware on their domains
  • Whether an organisation has been identified as a source of phishing attacks
  • Whether an organisation has been identified as a source of botnet attacks

Malicious activity checks

Misconfiguration and vulnerability checks

The VRA tool assesses security misconfigurations and vulnerabilities, including:

  • Whether an organisation has a strong process for correctly configuring all their encryption (SSL/TLS) certificates
  • Whether an organisation uses strong email security technology (SPF and DMARC)
  • Whether employees of an organisation have used their corporate email addresses on external accounts, and whether they have then been the subject of a data breach
  • Whether an organisation has insecure (i.e. unencrypted) ports open to the Internet
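As one illustration of a check in this category, TLS certificate expiry can be assessed with nothing more than the Python standard library. This is a hedged sketch, not the VRA tool's implementation: the function names are our own, and a production certificate check would cover far more than expiry (chain validity, key strength, protocol versions, and so on).

```python
import socket
import ssl
from datetime import datetime, timezone

def fetch_not_after(host: str, port: int = 443) -> str:
    """Retrieve the 'notAfter' expiry string from a host's TLS certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

def days_until_expiry(not_after: str) -> int:
    """Parse the ssl module's expiry format, e.g. 'Jun  1 12:00:00 2025 GMT',
    and return the number of whole days until expiry (negative if expired)."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).days
```

A check like this might simply flag any certificate expiring within, say, 30 days; the appropriate window is an operational judgement call.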

Security configuration and vulnerability checks

To demonstrate the system, scores were calculated for each of the ASX 100 companies. Analysed by industry, the average industry scores — out of 10 — were as follows:

Key findings of the analysis were:

  • The IT industry has the best average score, showing their understanding of the importance of consistent cyber security processes.
  • Telecommunications and Financial Services round out the Top 3.
  • Energy, Materials (including mining) and Industrials are less mature, reflecting the reduced focus they have placed on cyber security historically.
  • Health Care is in the bottom 4, a significant concern given the sensitivity of data held.

Just 3 companies in the ASX 100 received a ‘perfect 10’ — ANZ Bank, Link Group, and Star Entertainment Group.

The VRA tool is now live in the Security Colony portal. Membership is free and any organisation can see its own score after signing up.


If we are going to measure security, what exactly are we measuring?

If you can’t measure it, you can’t manage it
– W. Edwards Deming

That is a great quote, and one I have heard a lot over the years; an interesting point about it, however, is that it lacks context. What I mean is that if you look at page 35 of The New Economics you will see that quote is flanked by some further advice, namely:

It is wrong to suppose that if you can’t measure it, you can’t manage it — a costly myth.

Deming did believe in the value of measuring things to improve their management, but he was also smart enough to know that many things simply cannot be measured and yet must still be managed.

Managing things that cannot be measured, and the absence of context, is as good a segue as any to the subject of metrics in information security.

A recurring question that arises surrounding the use and application of metrics in security is “What metrics should I use?”

I have spent enough time in security to have seen (and, truth be told, probably generated) an awful lot of rather pointless security reports. I think I’m ready to attempt to explain what is going on here, and why “What metrics should I use?” might be the wrong question. Instead, there should be more of a focus on the context provided with metrics, in order to create useful and meaningful information about an organisation’s security.

A Typical (& faulty) Approach

Now here is a typical methodology we often see that leads to the acquisition of a security appliance or product of some sort by an organisation (e.g. a firewall, or an IDS/IPS).

  1. A security risk is identified
  2. The security risk is assessed
  3. A series of controls are prescribed to mitigate or reduce the risk, and one of those controls is some sort of security appliance / product
  4. Some sort of product selection process takes place
  5. Eventually a solution is implemented

Now we know (or we thought we knew) thanks to Dr Deming that in order to manage something, we first need to measure it, so we instinctively look at the inbuilt reporting on the chosen device (if we were a more mature organisation we might have thought about this at stage 4, and it may even have influenced our selection). We select some of the available metrics, and in less than ten minutes, somehow electro-magically, we have configured the device to email a large number of pie charts in a PDF to a manager at the end of every month.

Top Threats

However, the problem with the above chart is that this doesn’t actually mean anything — it has no context. The best way to demonstrate what I mean by context is to add a little.

Top Threats this month

Ok, that’s better, but in truth it’s still pretty meaningless, so let’s add some more.

Threats identified (5 months)

Now we are getting something meaningful. Immediately we can see that something appears to be going on in the world of cryptomalware and we can choose to react; the information provided now demonstrates a distinct trend and is clearly actionable.

But we can bring even more context into this. The points below provide some more suggestions for adding context, and will give you a feel for the importance of having as much context as possible to create meaningful metrics.

  • What about the threats that do not get detected? Are there any estimations available (e.g. in Security Intelligence reports) on how many of those exist? (Donald Rumsfeld’s ‘Known Unknowns’)
  • Can we add more historical data? More data means more reliable baselines, and the ability to spot seasonal changes
  • Could we collect data from peer organisations for comparison (i.e. — do we see more or less of this issue than everyone else)?
  • We have 4 categories, perhaps we need a line for threats that do not fall in these categories?
  • What are the highest and lowest values we (and/or other companies in the industry) have ever seen for these threats?
  • Do we have the ability to pivot on other data — for example would you want to know if 96% of those infections were attributed to one user?
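The last point above, pivoting on other data, is cheap to implement once events are logged with user attribution. A hypothetical sketch, assuming infection events arrive as simple (user, threat) pairs:

```python
from collections import Counter

def top_user_share(events):
    """Given (user, threat) event records, return the most affected user
    and the share of all events that user accounts for.

    The (user, threat) tuple shape is an assumption for illustration;
    real log records would carry more fields.
    """
    counts = Counter(user for user, _ in events)
    user, hits = counts.most_common(1)[0]
    return user, hits / len(events)
```

If one user accounts for 96% of infections, that single fact changes the story from "we have a malware problem" to "we have a user (or one machine) problem", which is a very different remediation.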

The Impact of Data Visualisation

So now we have an understanding of context, what else should we consider?

Coming from the world of operational networking, I spent a lot of my time in a previous role getting visibility of very large carrier grade networks, and it was my job to maintain hundreds of gateway devices such as firewalls, proxy servers, VPN concentrators, spam filters, intrusion detection & prevention systems and all the infrastructure that supported them.

At that time if you were to ask me what metrics I would like to collect and measure, the answer was simple — I wanted everything possible.

If a device had a way to produce a reading for anything, I found a way to collect it and graph it using an array of open source tools such as SNMP, RRDTool and Cacti.

I created pages of graphs for device temperatures, memory usage, disk space, uptime, number of concurrent connections, network throughput, admin connections, failed logins etc.

The great thing about graphs is you can see anomalies very quickly — spikes, troughs, baselines, annual, seasonal and even hourly fluctuations give you insight. For example, gradual inclines or sudden flat-lines may mean more capacity is needed, whereas sharp cliffs typically mean something downstream is wrong.
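As a simple illustration of spotting those spikes and cliffs programmatically, the sketch below flags points that deviate sharply from a trailing-window average. This is a deliberately naive baseline model, assuming the metric arrives as a plain list of samples; real monitoring stacks (RRDTool alerting, modern dashboarding tools) offer far richer options.

```python
import statistics

def flag_anomalies(series, window=5, threshold=3.0):
    """Return indices where a sample deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mean = statistics.mean(base)
        stdev = statistics.pstdev(base) or 1e-9  # avoid divide-by-zero on flat baselines
        if abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged
```

Even something this crude will catch a sudden flat-line or cliff; the hard part, as the rest of this article argues, is providing the context to make the alert actionable.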

Using these graphs, and a set of automated alerts I was able to predict problems that were well outside of my purview. For example, I often diagnosed failed A/C units in datacentres long before anyone else had raised alarms. I was able to detect when connected devices had been rebooted outside of change windows. I could even see when other devices had been compromised, because I could graph failed logon attempts for other devices in the nearby vicinity.

In the ten years or so since I was building out these graphs in Cacti, technologies for dashboarding, dynamic reporting, and automated alerting have come a long way, and it’s now easier than ever to collect data and produce very rich information — provided that you understand the importance of context, and you consider how actionable the information you produce will be to the end consumer.


While this write up has focused particularly on context with respect to technical security metrics, it is important to remember that security is mainly about people, so you should always consider the softer metrics that cannot simply be collected by things such as SNMP polling, or the parsing of syslogs, etc. For example — is there a way to measure the number of users completing security awareness training, and see if this correlates with the number of people clicking on phishing links?

Would you want to know for instance if the very people who had completed security awareness training were more likely to click on phishy emails?

The bottom line is, security metrics, whether technically focused or otherwise, are relatively meaningless without context. While metrics aim to measure something, it’s the context in which the measurements are given that provides valuable and actionable information that organisations can use to identify and prioritise their security spend.

Article by Eric Pinkerton, Regional Director, Hivint

Check out Security Colony’s Cyber Security Reporting Dashboards, a NIST CSF based dashboard with more metrics for your security program.

Breach Disclosure

Introduction of mandatory breach disclosure laws

After several long years gestating through the lower and upper house, the Australian Government has finally passed the Privacy Amendment (Notifiable Data Breaches) Bill 2016, which establishes a mandatory breach notification scheme in Australia.

This morning, almost in anticipation of the law’s passage, the Australian Information Security Association (AISA) sent an email notifying its members of an incident affecting its members’ data:

We have made a commitment to you that we will always keep you up to date on information as and when we have it.
 Today the new Board took the decision to voluntarily report to the Office of the Australia Information Commissioner an incident that occurred last year that could have potentially impacted the privacy of member data kept within the website. At the time, a member reported the issue to AISA and it was rectified by the then administrative team. What wasn’t done, and as we all know is best practice, was notification to you as members that this potential issue had occurred, and notification to the Australia Privacy & Information Commissioner.
 Your member information is not at risk and the issue identified has been rectified.
 The AISA Board takes this matter very seriously.
 As the industry body representing these and many other information security issues, we expect and demand best practice of AISA and of our members. The Board holds the privacy of member data as sacrosanct and will ensure that all members are aware of any and all privacy information.
 If you have any concerns or wish to discuss this matter please feel free to contact either myself or the Board members.
 Many thanks for your ongoing support.

And while AISA quite validly trumpets that notifying its members is best practice, how they notified their members falls well short of best practice.

More specifically, the notification doesn’t answer the questions that would be expected to be asked, and, in the context of broader issues occurring at AISA, raises the question of why the notification occurred now.

Questions left unanswered include:

  • What happened?
  • What has been done to remediate and limit such exposures in the future?
  • What information was potentially compromised?
  • Was it a potential compromise or an actual compromise?
  • What should I (as a member whose data was potentially compromised) do about it?
  • Do I need to look out for targeted phishing attacks? Transactions?
  • Has my password been compromised? Has my credit card been compromised?
  • Who has investigated it and how have you drawn the conclusions you’ve drawn?

Data breach notification effectively forms part of a company’s suite of tools for managing customer and public relations. Doing data breach notification well can make a difference in the effort required to manage those relationships during a crisis, and the value of long term customer goodwill.

This blog post explores what a “good” data breach notification looks like, and highlights where AISA falls short of that standard.

How to effectively manage a breach

As data breaches continue to increase in frequency, the art of data breach notification has become more important to master. A key challenge in responding to a data breach is notifying your customers in a way that enhances, rather than degrades, your brand’s trustworthiness.

This guide outlines the key factors to consider should you find yourself in the unfortunate position of having to communicate a data breach to your customers.

There are 7 factors we recommend you focus on:

Clearly, the AISA announcement falls short on a number of the above factors.

In particular:

  • Clarity — the AISA announcement does not clearly identify who was affected, or even if there was in fact a breach of data.
  • Timeliness — If the incident occurred on June 15th last year, why wait over eight months to notify members? Given so much time has passed since the incident, and AISA has had sufficient time to investigate and rectify the issue, why was there not more information provided about the nature of the breach? Given the time elapsed, the breach notification seems conveniently timed to coincide with the legislation, which leads to the final point;
  • Genuineness — No apology was given as part of the breach notification, nor was any detail given about what members need to do, what information (if any) was breached, or any assurances that it won’t occur again.

An email with the 7 factors included will answer (as best as you can) the questions the affected party may have. Further follow up information can be provided using an FAQ, a good example of which is the Pont3 Data Breach FAQ.

So, with an understanding of what to do, it’s also key to consider what not to do.

Breach Disclosure — What not to do

When formulating a breach notification strategy it is also important to know what not to do. Described below is our ‘Hierarchy of Badness’, starting with the worst things first!

1. Intentionally lying: Making any false statements in a bid to make the situation appear less complicated or serious than it is known to be, for example, stating that the data lost was encrypted when it was not. There is a very high chance that the truth will become available at some point, and at that point apparent intentional lies will wipe you out. This routinely gets people fired, and can cause significant reputational damage for the organisation.

2. Unintentionally lying: Drawing conclusions and providing statements without thoroughly analysing the impact and depth of the breach can lead to unintentional lies or the omission of information. This can be a result of publishing a notification too early before the details are fully understood. Whilst unintentional lies are ‘better’ than intentional lies, it may be difficult to prove to your customers that there was no ill intent. Depending on the lie, this may also result in someone getting fired.

3. Malicious omission: As the name suggests, organisations sometimes leave out key information from their disclosure statements, particularly by directing focus to information that is not as crucial — for example, rather than stating that data was not encrypted in storage, focusing the statement on data being encrypted in transit. While the latter is true, a crucial piece of information has purposely been omitted for the purpose of diversion. Not a great strategy. While omission may be a legal requirement at points during an incident, an omission which changes the implied intent or meaning of a communication can backfire.

4. Weasel words and blame-shifting: A very common but poorly conceived inclusion in breach notifications is overused clichés such as ‘we take security seriously’, or ‘this was a sophisticated attack’. If there is no good reason to use that particular phrase/word it is better not to include it in the statement. Describing an attack as sophisticated or suggesting steps are being taken without providing further details is not going to make your customers feel better about the situation.

Our Hierarchy of Badness heat map below depicts the sweet spot for disclosure.


Historically, some organisations have preferred to use the ‘Silence & roll the dice’ strategy. This is a risky strategy, where the organisation doesn’t notify its customers about the breach at all, and simply hopes the whole situation will blow over.

However, with the passing of the Privacy Amendment Bill, while this may pan out well in some cases, it can have adverse outcomes in others, particularly if the breach is identified and reported by bloggers, researchers or users (a case of malicious omission in the ‘Hierarchy of Badness’). That said, a lot of organisations will fall under the threshold for disclosure, so the ‘Silence and roll the dice’ strategy will continue to be used.

An ideal way to help your customers through a data breach is to refer them to services like ID Care, the various CERTs, or other service providers, so that they get the advice they need to respond to the issue in their particular circumstances. Trying to “advise” your clients about what they should do post-breach — when you’re doing this from a position of having just had a breach yourself — is rarely a good strategy.

Finally, the best preparation for data breach disclosure is to have both:

  • A micro-level response for your customers regarding what data was lost, if it’s recoverable and what they as data owners can do to mitigate the impact; and
  • A macro-level response for the press with details of the volume of data lost, the response plan, and how your customers should go about handling the situation.

It is also necessary to realise that data breach notification is not a one-time act. To ensure the best outcome from a public relations and crisis management perspective, it’s best to provide customers with updates as and when you get new information and ensure your customers realise it’s an ongoing engagement and that you genuinely care about their data.

Article by Nick Ellsmore, Chief Apiarist, Hivint

Check out more resources in Security Colony, such as incident response runsheets, an incident response communications plan guide, and a sample breach notification letter

Top 4 Expands to Essential 8

A week or two back, the Australian Signals Directorate (ASD) replaced their “Top 4 Mitigation Strategies” with a twice-the-fun version called the “Essential 8.”

“Why?” I hear you ask… That’s a good question.

After all, it was a big deal when the Top 4 came out in 2010 (and then updated in 2012 and 2014) as the ASD claimed that it would address 85% of targeted attacks. Their website still says that the Top 4 address 85% of targeted attacks. So, if nothing else has changed, why change the Top 4? There would seem to be a few possible explanations:

  • Everyone has finished rolling out the Top 4 and needed the next few laid out for them.
  • Attacks have changed, and as a result, the Top 4 no longer address 85% of targeted attacks so we need to change tack.
  • The change is tacit recognition that the Top 4, while great controls, provide a heavy burden in terms of implementation (especially application white-listing) so are not a realistic target to implement for most organisations, so the Essential 8 was created to provide a more ‘attainable’ set of controls.
  • ASD just felt it needed a refresh to maintain focus and to highlight that the Top 4 aren’t the only controls you need.

The first of those is, sadly, laughable. The second is plausible but not certain. The third is quietly compelling (I mean, we all feel better about 7 out of 8 than we do with 3 out of 4). And the last one is also pretty persuasive.

Enough about the why, let’s talk about the change itself.

What is the Essential 8?

If an organisation or agency was pursuing the Top 4 but have not reached that implementation goal, what should they do now? Switch focus to the Essential 8 or continue with the Top 4?

And perhaps the most important question of them all… where is the video? (here’s a link to Catch, Patch, Match from back in 2010, which is actually pretty good. I mean, it’s no PCI DSS Rock, but each to their own).

Here’s a bit of background for those who are not so familiar with the Top 4:

Note the change of name of the main document from ‘Strategies to Mitigate Targeted Cyber Intrusions’ to ‘Strategies to Mitigate Cyber Security Incidents’.

These 3 documents present 37 controls as mitigation strategies against the list of 6 threats below. The Top 4 are unchanged from the previous update; however, the Top 4, along with 4 further controls, are now presented as a new baseline called the Essential 8.

3 documents, 37 controls, 6 threats, Top 4, Essential 8… Confused yet? This is what Hivint’s Chief Apiarist, Nick Ellsmore had to say about so many numbers flying around these days:

So, what are the changes?

This article isn’t going to present a control-by-control comparison. You’ll find plenty of those. We want to look at the big picture change. In terms of the number of controls, there are now 37 controls and not 35 — a few added, a few combined, a few renamed or modified. Despite the ASD website having a complete list of changes, there will be many a blog post picking it apart. We want to help you figure out what to do about it.

Top 4

The Top 4 mitigation strategies remain the same, and are still mandatory for Australian Federal Government agencies.

Previously, all 35 strategies were described as mitigating one key threat: targeted cyber attacks. The ASD also claimed that the Top 4, when properly implemented, effectively mitigated 85% of targeted cyber attacks. One key change in the new release is that the threat landscape is now defined more broadly, covering the following 6 threats:

  1. Targeted cyber attacks
  2. Ransomware and other external adversaries
  3. Malicious insiders who steal data
  4. Malicious insiders who destroy data and prevent computers/networks from functioning
  5. Business email compromise
  6. Threats to industrial control systems

Essential 8

Four additional controls, along with the Top 4, form the Essential 8. This is presented as a ‘Baseline’ for all organisations. At first glance, the Essential 8 feels like a natural extension that organisations can adopt into their security strategy without much of a hassle. Realistically though, it’s a bit more complicated than that.

When it came out in 2010, the ‘Strategies to Mitigate Targeted Cyber Intrusions’ was considered quite unique, since it confidently declared that by doing the Top 4, organisations will mitigate against 85% of ‘targeted cyber attacks’. In hindsight, the full list of 35 was probably too long for most organisations to digest, and few ever looked past the attractiveness of only having four things to do. That said, most organisations would have at least some of the 31 other controls implemented through their standard Business as Usual (BAU) operations (e.g. email/web content filtering, user education) whether or not they set out with the list in hand.

It is worth noting here that we have seen very few organisations genuinely deploy the Top 4 comprehensively.

It is also worth noting that for many organisations trying to climb the ladder of resilience, “targeted threats” seem a long way away, and managing the risk of non-targeted scattergun malware attacks, naïve but well-meaning users, and the Windows XP fleet, is more like today’s problem.

And at the other end, looking at government institutions genuinely targeted by nation-states, it seems unlikely that a Top 4 would remain current for 7 or more years given the changing nature of threats. Stuxnet, Edward Snowden and the Mirai botnet are a few extreme examples, but nevertheless game-changing events that could affect how the importance of a control (a mitigation strategy, in the context of this document) is rated, especially when the primary audience is government institutions.

But given the challenges in planning, funding, and rolling out a Top 4 mitigation program, one has to appreciate the consistency — it would be a nightmare to try to address a dynamic list of priorities within Government agencies with turning circles like oil tankers.

The Essential 8 can be seen as a good compromise: organisations working towards the Top 4 (or with it in place already) can incorporate the additional 4 controls without disrupting the status quo, while the list stays relevant to changes in the cyber threat landscape. Seems like a pretty good approach.

Essential 8 — Constructed using information available in the ASD website

The lot

The overall list of 37 mitigation strategies are categorised under 5 headings:

  1. Mitigation strategies to prevent malware delivery and execution
  2. Mitigation strategies to limit the extent of cyber security incidents
  3. Mitigation strategies to detect cyber security incidents and respond
  4. Mitigation strategies to recover data and system availability
  5. Mitigation strategy specific to preventing malicious insiders

‘Strategies to Mitigate Cyber Security Incidents - Mitigation Details’ contains detailed implementation guidance for each of the 37 strategies, grouped under the above 5 headings.

Final Thoughts

Clearly this article was not written for cybersecurity gurus like you. It’s for all those people who haven’t yet deployed their holistic, risk-based, defence-in-depth inspired, ASD-Top-35-addressing security program in totality.

Okay, in all seriousness, if your security strategy is risk based and is aligned with where the organisation is heading, this change shouldn’t bother you too much. ASD too acknowledges that in some instances, the risk or business impact of implementing the Top 4 might outweigh the benefits.

Hence, the best bet continues to be a risk-based approach to security which is informed by the Top 4 (or the Essential 8, or the ISM, or ISO, or whatever your flavour happens to be) rather than attempting to blindly comply with a checklist.

And sadly, there is no video this time.

Article by Adrian Don Peter, Senior Security Advisor, Hivint

Voicemail and VOIP

Commonly overlooked security risks


Every company has a phone system of some type, and just like with smartphones, these systems often offer much more than basic PABX functionality — technologies such as VoIP, video conferencing, Unified Communications platforms and cloud-based PABXs are all becoming par for the course. Most, if not all, of these systems also include integrated voicemail functionality.

This article considers some of the avenues through which attackers may look to compromise the security of a company’s phone systems.


There are many reasons attackers may be interested in a company’s phone system, including:

  • Using it to make fraudulent calls;
  • Aiding in social engineering attacks;
  • Eavesdropping on sensitive calls;
  • Harvesting sensitive information from voicemails;
  • Compiling internal directories of company staff; and
  • Attempting to obtain call detail records for market intelligence and industrial espionage.

Complicating matters is that, unlike dedicated VoIP and video conferencing systems, which are often audited and maintained by IT security teams, traditional PABX, IVR and voicemail systems are often overlooked or fall outside the management purview of the IT and security teams.

Voicemail is still relied upon for everyday use by many organisations, and sensitive information is commonly left in voicemail messages. A motivated attacker targeting a company is highly likely to be able to gain valuable information by listening to the voicemail of system administrators or executives over a period of time.

With cheap VoIP services being readily available it is also simple for attackers to automate and scale up their attacks to make multiple simultaneous calls and complete them faster. There are many ways to automate phone attacks, and it is easy for an attacker to write a script or use existing software to automate a range of attacks, including those outlined below.

Types of Voicemail and VOIP Attacks

Brute force attacks

With a four-digit PIN as the de facto standard for voicemail authentication, an attacker has a reasonable chance of successfully guessing the PIN through manual avenues such as simply using the phone keypad. Even with longer PINs, common patterns and repeating numbers are used so frequently by staff that an attacker is likely to work out the PIN using a PIN list instead of having to manually enter every possible combination of digits.

Most voicemail and phone systems support neither account lockouts nor other security controls, such as logging and alerting, that are commonly applied to other IT systems. As a result, a successful PIN brute force attack to gain access to a staff member’s voicemail box is unlikely to prove difficult for a determined attacker. There are also methods to bypass lockout timers in those instances where they are in place: one technique that is almost always successful on large voicemail systems is attempting two or three common PINs against every extension or user account, which avoids triggering any account lockouts that are implemented.

Once an attack run is complete the attacker may well have access to several voicemail accounts.
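The defensive flip side of this is auditing your own voicemail PINs for the patterns an attacker would try first. A minimal sketch follows; the rules and the short common-PIN list are illustrative assumptions only, and a real audit would use a far larger list.

```python
def is_weak_pin(pin: str) -> bool:
    """Flag PINs a brute-force attacker would likely try first.

    The heuristics (repeats, sequences) and the common-PIN set are
    illustrative assumptions, not an exhaustive blacklist.
    """
    digits = [int(d) for d in pin]
    if len(set(digits)) == 1:                      # repeats: 0000, 1111
        return True
    steps = {b - a for a, b in zip(digits, digits[1:])}
    if steps == {1} or steps == {-1}:              # sequences: 1234, 4321
        return True
    common = {"1212", "2580", "0852", "1004", "2000", "1122"}
    return pin in common
```

Run against every provisioned voicemail box (with authorisation), a check like this gives a quick read on how exposed the system is to the two-or-three-common-PINs technique described above.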

Directory reconnaissance

The same automation that can be applied to PIN brute force attacks can also be used for other attacks against phone systems. One example is the creation of internal directories. An attacker can use the “find extension” feature of modern PABXs and voicemail systems to make a list of names, extensions and sometimes job titles within a company. They can also do this by calling and making a note of every greeting for every extension if the PABX doesn’t have a names directory feature.

Call-forwarding attacks

Another use attackers have for voicemail is the call forwarding feature which can be used for free phone calls or to aid in social engineering attacks.

Getting free phone calls is the simplest example. An attacker who has compromised a voicemail box sets up a call forward to the mobile, overseas or other toll number they want to call, then calls back the local DDI number or the company’s main toll-free number, enters the extension number, and waits for the call forward to connect them to their pre-programmed toll number.

This attack can be reversed as well — an attacker can also use a compromised voicemail box to receive incoming calls using the call forwarding feature to forward calls to a VOIP or pre-paid phone controlled by the attacker.

They then will have control of an extension on that company’s phone system which they can give to external parties to call back on to appear like they are inside the company’s office, which can be an invaluable means to facilitate the commission of a further social engineering attack.

Leveraging Caller ID

Most phone systems, when call-forwarding is used, display the caller-ID of the voicemail extension rather than the number of the originating party.

An attacker may be able to leverage the call-forward feature to masquerade as a known external party, such as appearing to be a known vendor or to be calling from inside the target company’s building. This can gain greater traction for social engineering purposes.

On a penetration testing engagement some years ago, my colleagues and I took over a manager’s extension at corporate headquarters and set up a call forward to the security desk at one of their rural sites. We called ahead to the security desk to add our names to the visitor register. The security desk asked very few questions because their phone displayed the call as originating from the manager’s phone at the organisation’s headquarters.

When we arrived, the security desk was expecting us and allowed us to enter without any restrictions.


The attack scenarios above relate to simple voicemail systems, which most people overlook, considering them to be a straightforward way to store and retrieve messages. However, when you include customer-facing IVRs, VoIP systems, PABX systems, teleconferencing and video conferencing systems, unified communications systems, call-queue management systems and the endless other applications of modern phone systems, the range of vulnerabilities attackers can exploit grows enormously.

Large companies often own a block of numbers, generally in lots of ten thousand. It is good practice to periodically audit these number blocks to classify every line, then audit the possible attack vectors of all the phone systems connected to them.

Practising the same security hygiene for voicemail that you apply to other systems is critical, for example:

  • Disabling default accounts;
  • Auditing voicemail boxes for common PINs, or disallowing common or simple PINs;
  • Setting a minimum PIN length of six digits;
  • Setting unique temporary PINs when provisioning new voicemail boxes;
  • Setting up lockout timers that don’t lose their state over multiple calls;
  • Disabling call-forwarding features where they are not needed;
  • Where forwarding is needed, restricting allowed forwarding numbers to local mobiles and landlines only;
  • Deactivating unused voicemail boxes; and
  • Applying logging and alerting if possible.
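Several of these PIN checks can be automated. The sketch below is illustrative only, assuming the administrator can export PINs (and the owning extension) from the PABX as plain strings; the specific weak-PIN rules shown (length, repeated digits, sequences, a small blocklist) are examples rather than a complete policy.

```python
# A small blocklist of frequently chosen PINs; a real audit would use a
# much larger list.
COMMON_PINS = {"123456", "000000", "111111", "654321", "121212"}

def is_weak_pin(pin, extension="", min_length=6):
    """Return True if the PIN should be rejected or flagged in an audit."""
    if not pin.isdigit() or len(pin) < min_length:
        return True                      # too short, or not purely numeric
    if len(set(pin)) == 1:
        return True                      # a single repeated digit, e.g. 999999
    ascending = "01234567890123456789"
    if pin in ascending or pin in ascending[::-1]:
        return True                      # sequential digits, up or down
    if pin in COMMON_PINS:
        return True                      # on the common-PIN blocklist
    if extension and extension in pin:
        return True                      # PIN contains the extension number
    return False
```

Running a function like this across an exported list of voicemail boxes covers the minimum-length, common-PIN and simple-PIN items in one pass, and the same logic can be applied at provisioning time to reject weak PINs outright.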

With some phone systems this is often easier said than done. We hope this article gets you thinking about how your company uses and manages phone systems and how they might be abused by attackers if appropriate vigilance is not exercised.

Article by John McColl, Principal, Hivint

The World Today

I originally wrote this message to try and give some perspective and comfort to the very diverse team within Hivint.

But given the great (and emotional) response, I decided it should be shared more broadly. You may not agree with the political sentiment, and I respect that, but hopefully everyone agrees with the need to support each other through tough times.


Hey Team,

I was motivated to write this after seeing my friend Mike Cannon-Brookes, one of Atlassian’s founders, put down his thoughts and those of co-founder Scott in their blog article Your tired, your poor, your huddled masses yearning to breathe free…

The reality is that with the success their business has had, their voices will ring more loudly than mine, but equally I recognise the importance of everyone being heard, being vocal, and not sitting quietly while the world starts to burn.

I was fortunate enough to live in Boston back in 2013–14, and not far from my apartment was the Holocaust memorial, which included a version of Pastor Martin Niemöller’s (1892–1984) cautionary verse about the lack of conviction shown by those in German society who could change opinion, as the Nazis rose to power and started to progressively purge their enemies:

First they came for the Socialists, and I did not speak out —
Because I was not a Socialist.

Then they came for the Trade Unionists, and I did not speak out —
Because I was not a Trade Unionist.

Then they came for the Jews, and I did not speak out —
Because I was not a Jew.

Then they came for me — and there was no one left to speak for me.

To be clear, I’m not saying that Trump is Hitler, and I’m not saying a temporary visa ban is equivalent to the Holocaust. The point I’m making is in the power of that verse. I’m not a Syrian refugee, I’m not Mexican, I’m not a woman, and I’m not carrying the passport of an Islamic nation. But if I don’t speak out for them now, I can’t rightfully expect them to have my back when the system fucks me over somewhere down the line.

The immigration ban is not OK. Banning refugees is not OK. Walking back hard-won progress on reproductive rights is not OK.

We’re really fortunate in our team at Hivint to have 26 amazing people, and a level of diversity which for a small team is pretty extraordinary. Male, female, gay, straight, at least half a dozen or so religious belief systems, cat-people, dog-people, herbivores and omnivores, lots of weird and wonderful personality traits, and some great, original thought.

I just want you to all know that we also have a conscience. Yes, this is a business, but ahead of all of that, we’re all people. And Craig and I, and all the Leadership Team, genuinely care. Some of you will be more impacted by the current state of affairs than others. If you have any concerns, want to discuss anything, or just want a friendly ear to talk, feel free to grab us.

All bees are welcome here.


Joining the Hive

Insights into the cyber-security industry from some of Hivint’s junior bees

While tertiary institutions around Australia are striving to produce an increasing number of students equipped with cyber security expertise, the industry is often referred to as being in the midst of a ‘skills shortage’.

Meanwhile, Hivint has been in a period of substantial growth, with the team quite literally doubling in size from January to December of 2016. As part of that growth, we’ve brought on a number of graduates and industry newcomers with a variety of backgrounds and skill sets, who have quickly become an integral and valued part of our team.

With cyber security increasingly being seen as a desirable pathway for many of the brightest and best students in Australia (and around the world), we thought it would be apt to get an insight from our new recruits about what it took for them to be successful in joining Australia’s fastest growing cyber security consulting firm, the challenges they have faced, and advice they have for other people aspiring to pursue a career in cyber security.

Justin Kuyken — GRC Advisor

After 12 years cleaning swimming pools, I went back to university part-time to study computer science at La Trobe University, something I had been interested in since my school days. Six years later, in my final year, a network security subject piqued my interest, and after graduating I started absorbing as much information as I could find on this new-found passion.

After another year of reading all the books and using all the tools I could find to expand my knowledge in the area, I still hadn’t had any luck getting a start in the industry. Finally, my persistence paid off when I heard back from Hivint, who spoke to me about joining their team as a graduate-level Governance, Risk and Compliance (GRC) advisor.
While this was not what I originally had in mind, after some research the role appeared to be an even better way into the industry for a beginner, and a chance to better understand how the security world really works.
During the recruitment process, the Hivint team was impressed by the dedication and commitment I had shown in developing my own knowledge of everything security-related. They decided to bring me on board, and I have not looked back. I have loved my time as part of the company, despite not being the ‘1337’ hacker I originally thought I would be when I started out on this path!

In summary, my advice to other aspiring graduates looking for a start is to show initiative to prospective employers — find a way to demonstrate that you are passionate about joining the industry and about continual improvement (e.g. through independent studies and learning), as these are valuable skills even on the job. In addition, be persistent about looking for opportunities — it may take some time, but the payoff for me by getting a foot in the door at Hivint has been well worth it.

Lumina Remick — GRC Advisor

After completing a Masters in Project Management at Bond University, my original plan was to return to working with circuits and microprocessors given my original background in Electronic and Communications Engineering. Little did I know an interesting career change was waiting for me.

In the final semester of my studies, I interned for an asset management company. My job primarily focused on implementing and tailoring their risk management policy and procedures to suit their business needs. However, I also had the opportunity to work on their IT security policies. This experience, together with my existing interest in risk management, pointed me towards a career in cyber security.

Coincidentally, the company I worked for was a Hivint client, so I had a sneak peek at Hivint’s work even before I became part of the Hive. I believed the right place to further my new-found interest was at Hivint, so I religiously started following them on social media looking for a way in.
When they advertised a graduate GRC advisor role, I jumped at the opportunity, and there has been no turning back.

As a beginner, this role has been an amazing way into the industry and a great learning experience. I’m constantly learning new things and have come to realise there is no such thing as ‘knowing it all’ in security. I must admit that Google has quite often been my best friend through the whole experience. 
Working with some of the best people in the industry has been inspiring, and I have loved my time at Hivint.

My advice to any aspiring graduates is to research which companies in the industry are hiring, learn as much as you can about them, and keep a regular eye out for new roles. The fact that you have done your research and shown an interest in them will stand you in good stead should you land an interview!

Sam Reid — Technical Security Specialist

I took the common route through university, completing a Bachelor of Science in Cyber Security at Edith Cowan University. The first thing I’ll say is that working in the industry involves far more client relationship work than I originally thought, particularly helping clients understand their security risks and decide which ones are appropriate to accept and which are not. Those boring risk and standards units at uni turned out to be important when assisting clients to manage their exposure!

Penetration testing is the real deal and it’s seriously cool. The exposure and range of things you get to test and ‘break’ to help clients identify security holes will live up to your expectations — guaranteed.

My advice to aspiring grads: with the constant stream of new information, trends and events in this industry, from new vulnerability disclosures and ongoing data breaches to the growth in IoT devices and DDoS attacks, it’s easy to be left behind when you’re starting out. Try to keep your passion up by doing security-related things you enjoy in your own time when you can. Capture the Flag (CTF) events, security research, bug bounties and secure software development not only keep you interested; they keep you interesting! A challenging CTF you recently completed could make a great story to tell during an interview.

As a case in point, I was hired as a junior security analyst straight from university, and while I hadn’t heard of Hivint (they were only 12 people back then), the regional director had heard of me, having attended a presentation I gave on identity theft at a local security meetup. In my opinion, engaging with the community and making yourself known in the field (for the right reasons!) can really kick-start your career and put you ahead of other graduate job seekers.

Oh, and lastly, be mindful of how you refer to your occupation as a ‘penetration tester’. My Mum proudly told the extended family that I was a “computer penetrator” last Christmas. No Mum. Please don’t ever say that again.

John Gerardos — GRC Advisor

I always knew I’d enrol in a Computer Science degree and work in technology. I originally worked primarily in support/systems administration and network engineering. My last few years as a network engineer had me either living in data centres or designing and installing wireless access across large campuses in preparation for the explosion of BYOD (bring your own device) policies.

It very quickly became apparent that securing networks from the risks inherent in BYOD, as well as the emerging Internet of Things, was going to be a very interesting and expanding area. After working closely with the security team on several projects, I decided that was where I wanted to take my career.
So back to university I went! Alongside my studies in the Master of Applied Science (Information Security and Assurance) at RMIT, I learned about Ruxmon, a free security meetup run once a month on campus. I immersed myself in the “security scene”: attending Ruxmon, assisting with organising the meetup, and stepping up to lead the Information Security Student Group at RMIT University. I made it my goal to attend as many security meetups as possible and learn from the experts, which I found very rewarding and which also helped reinforce some of the material learnt in my studies.

My university often ran industry networking events and I happened to bump into a couple of Hivint people at one I spoke at. I had not heard of Hivint at the time but it very quickly became apparent that it would be a cool place to work — so I kept it in mind and was excited when I saw them advertise for a graduate role.

The past 6 months on the Hivint team have been amazing! While I already had industry experience, this was my first consulting role and I had to very quickly learn how to manage my time across clients and get up to speed with the IT infrastructure of each client that I was working at. I also quickly found out that it’s not just the technical skills that are important — you need to be a great communicator and take the time to understand each individual client’s business so that you can tailor a solution for them.

My advice to students looking to enter the industry is to network with others and immerse yourself in the field. We are lucky that there are so many high-quality free security meetups around; make the time to attend the ones that look interesting to you and chat to the people there. Follow up by doing your own research on anything that sounded interesting during the meetup, and join in on relevant CTF events. Security people are happy to share their knowledge, and the best way for a student to learn outside of university is to be active in the community, attend relevant meetups and engage with the experts.