
Malicious Memes


The content of this article was also presented by Sam at the 2016 Unrest Conference.

In the past, allowing clients to upload images to your web application was risky business. Nowadays, profile pictures and cat images are everywhere on the Internet and robust procedures exist for handling image uploads, so we can rest assured they protect us from the nasties. Or can we?

Background

Image polyglots are one way to leverage vulnerabilities in web applications and execute malicious scripts in a victim’s web browser. They have the added bonus of bypassing certain security controls designed to mitigate these script injection attacks. This blog will explain how to build an image polyglot and demonstrate how using one can bypass a server’s Content Security Policy (CSP).

Content Security Policy (CSP)

The CSP is set by the web server in the form of a response header and informs the user's browser to only load and execute objects that originate from a certain set of whitelisted places. For example, a common implementation of the CSP header ensures the browser only accepts scripts that come from your domain and blocks the use of inline scripts (i.e., scripts blended directly with other client-side code such as HTML). CSP is a recommended security header for mitigating the damage caused by Cross-Site Scripting vulnerabilities, as it narrows the set of locations from which scripts can be loaded. HTML5 Rocks has a great introduction to Content Security Policy if you would like to learn more.
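A minimal illustration of such a header (real-world policies usually carry more directives) is:

    Content-Security-Policy: default-src 'self'; script-src 'self'

With this policy in place, the browser refuses inline script blocks and any script fetched from a different origin.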

Cross-Site Scripting (XSS)

XSS attacks are a type of web application injection exploit in which an attacker is able to embed their own client-side code into a web application and have it rendered in the browser as a part of the webpage. Usually caused by a lack of (or poorly implemented) user input sanitisation and encoding, attackers use XSS vulnerabilities to inject malicious JavaScript (JS) that can be used to hijack users' sessions, modify content on the webpage, redirect the visitor to another website or 'hook' the victim's browser. Browser hooking is an attack technique for leveraging XSS vulnerabilities in which the XSS is used to load scripts from a server operated by the attacker, granting greater control over the hooked browser's behaviour.

XSS is one of the most common web application vulnerabilities and many major websites — including Google, PayPal, eBay, Facebook and the Australian Government's My Gov site — have been found to have XSS vulnerabilities at some point in time. Reflected XSS is a type of attack in which the injection is reflected back to the victim, rather than being stored on the web server. Reflected XSS attacks are usually executed when a victim is coerced into clicking a link containing the malicious payload. The malicious script is considered to be 'inline' with the web application as it is loaded alongside other client-side code like Hyper Text Markup Language (HTML) and not from a dedicated JS file. CSP can be configured to deny inline scripts from being executed in the browser, which in theory mitigates the dangers of a reflected XSS and protects the user.
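As a hypothetical illustration (the domain and parameter are invented), a search page that echoes its 'q' parameter into the response unencoded could be exploited with a link such as:

    https://vulnerable.example/search?q=<script>alert(document.cookie)</script>

A CSP that blocks inline scripts would stop this payload, which is exactly the control the image polyglot described below is designed to sidestep.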

CSP in action — and how to get around it

Take for example a web application that allows you to upload and view images and has an aggressive CSP that only permits loading scripts from the application’s domain while denying the use of inline scripts. You’ve found a great reflected XSS vulnerability; however, your payload doesn’t execute because it’s inline and blocked by the CSP. You attempt to upload your payload through the image upload but the web application rejects it for not being a valid image. An image polyglot can help you get around those pesky security controls.

Polyglots

In humans, a 'polyglot' is someone who speaks several languages. In the computer world, it means code that is valid in several programming languages.

Figure 1 — Valid ‘C’ programming code

Figure 2 — Valid ‘Perl’ programming code

The code snippets in Figure 1 and Figure 2 are identical, yet the code is valid in both languages. This is polyglot code, and it is the underlying mechanism for the attack detailed in this tech blog.
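The original figures aside, a minimal sketch of the same idea, a snippet accepted by both a C compiler and a Perl interpreter, looks like this (lines starting with '#' are comments to Perl but directives to the C preprocessor):

    #include <stdio.h>
    #define print(s) int main(void) { puts(s); return 0; }
    print("Hello from both languages")

Compiled as C, the macro expands the final line into a complete main function; run as Perl, the first two lines are skipped and print is simply called.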

GIF Images

You have more than likely heard of the Graphics Interchange Format (GIF) image type which has the file extension ‘.gif’. The popular image type was invented in 1987 by Steve Wilhite and updated in 1989. It has since come into widespread use on the Internet largely due to its support for animation.

Figure 3 — An animated GIF image

GIF images only support a 256 colour palette for each frame, which is why GIF images often look poor in quality. Each frame of an animated GIF is stored in its entirety, making the format inefficient for displaying detailed clips longer than a few seconds (incidentally, while the pronunciation is often disputed, I can confirm for you right now it's pronounced 'jiff' after an American brand of peanut butter — no joke).

Figure 4 — The constructs of a GIF image sourced from: http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

The attack this blog will demonstrate only requires knowledge of the ‘Header’, ‘Trailer’ and ‘Logical Screen Descriptor’ (LSD). The data in between these represent each frame of a GIF image. At least one frame is expected in a valid GIF. All GIF images begin with the signature ‘GIF’ followed by the version represented as ‘87a’ or ‘89a’ in the header.

Figure 5 — The header of a GIF version 89 image in hexadecimal and corresponding human readable ASCII encoded output
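In text form, the six signature bytes shown in Figure 5 read as follows:

    47 49 46 38 39 61   ->   G I F 8 9 a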

The following seven bytes of a GIF image make up the LSD, which informs the image decoder of properties that affect the whole image. The first four bytes hold the canvas width and height, each stored as an unsigned 16-bit integer. A '16-bit unsigned integer' is a number between 0 and 65,535 that cannot be negative (it wouldn't make much sense to have a negative canvas size!).

Figure 6 — The LSD of a GIF image sourced from http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

It is also important to understand that this data in the GIF format is represented in 'little-endian', which means the least significant byte is stored first and read first by the decoder. In Figure 6 we can see the canvas size is stored as width: '0A00' and height: '0A00'. While seemingly backwards for humans, little-endian dictates that the decoder reorder the bytes, giving width: '000A' and height: '000A', which is 10 by 10 pixels. Lastly, the trailer (sometimes referred to as the footer) of the image is represented by hexadecimal '3B', which when encoded as ASCII represents a semicolon.

Figure 7 — The hexadecimal and ASCII encoded output showing the footer of a GIF image
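To see these fields for yourself, a small Node.js sketch (assuming a local file named image.gif) can read the signature, decode the little-endian canvas dimensions and check the trailer:

    // Inspect a GIF's signature, Logical Screen Descriptor and trailer (Node.js).
    const fs = require('fs');

    const buf = fs.readFileSync('image.gif');      // assumed local file
    console.log(buf.toString('ascii', 0, 6));      // "GIF87a" or "GIF89a"

    // The LSD starts at byte 6; width and height are unsigned 16-bit little-endian.
    console.log(buf.readUInt16LE(6) + ' x ' + buf.readUInt16LE(8));  // e.g. "10 x 10"

    // For an unmodified GIF, the final byte is the trailer.
    console.log(buf[buf.length - 1].toString(16)); // "3b", the ';' trailer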

Most image decoders, including browsers, will ignore anything after the trailing semicolon, making it a good place to put the bulk of our JS payload. However, if the web application manipulates the image, data after the semicolon will likely be discarded. Hence, it's important that we can still access the raw/unedited image after it's uploaded to the server — see the 'Limitations' section of this blog for more information.

Creating the GIF/JS Polyglot

Figure 8 — Our ‘soon to be JS’ GIF sourced from http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

To create our malicious image, we are using a small, non-animated GIF image as seen in Figure 8. Its ASCII encoded output is represented below:

Figure 9 — ASCII encoded output of GIF image data

One method of creating GIF/JS polyglots is to manipulate the LSD to begin a JS comment block, as seen in Figure 10. After the GIF trailer we close the comment block and include our payload.

Figure 10 — GIF image with JS payload appended

You will notice that in order to introduce the '/*' (begin comment block) JS token we have changed the value of the first two bytes of the LSD, which correspond to the canvas width. The hexadecimal value of '/*' is '2F 2A', which when interpreted as little-endian by the image decoder becomes '2A 2F' = 10799. While we still have a valid GIF image, it has a pretty whacky canvas size, as seen in the output below:

Figure 11 — ExifTool output of the GIF/JS polyglot image showing a canvas size of 10799x10px

However, other than being oddly sized, the image is still perfectly valid and the image decoder will read the rest of the image data normally, disregarding our JS code after the image trailer.

Figure 12 — The GIF/JS polyglot as interpreted by a JS engine

When we try to execute the image as JS, the engine reads the GIF header as a variable name, ignores the comment block, and then sets that variable equal to '1', a dummy assignment that keeps the JS syntax valid. Then our payload is executed.
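In other words, to the JS engine the polyglot reduces to something of this shape (image data elided; the alert call stands in for whatever payload was appended):

    GIF89a/* <LSD and image data> */=1;alert('polyglot');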

The image passes the standard image validation techniques used by web applications, which often rely on confirming the 'magic numbers' (the file signature at the start of the header) of the image. Once our image is uploaded to the server, we effectively have a valid JS file originating from the web application's domain, which falls within the scope of the CSP.

As it stands, the image is loaded into the web application through the use of the HTML 'img' tag, which informs the browser to interpret the data stream as image data. In order to circumvent this and trigger our JS code, we leverage our XSS vulnerability to load the image via a 'script' tag whose 'src' attribute points at the uploaded image.
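For example, with a hypothetical upload path, the injected payload could be as simple as:

    <script src="https://vulnerable.example/uploads/polyglot.gif"></script>

The browser fetches the 'image' as a script from the whitelisted domain, and the JS engine executes it as shown in Figure 12.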

Figure 13 — Leveraging the reflected XSS to execute our polyglot

Figure 14 — XSS payload executing in browser bypassing CSP

Why GIFs?

The convenient design structure of the GIF file format allows us to leverage the image header and manipulate the canvas sizes defined in the LSD without destroying the properties of the image for the image decoder.

Limitations

  • Web applications that restrict image uploads to a certain canvas size can hinder the effectiveness of an image polyglot. Due to the limited set of JS characters that can be used in the LSD, the resulting canvas sizes are often unusually large and cannot conform to strict image upload pixel rules.
  • Server side image manipulation that resizes the image will edit the canvas size in the LSD, corrupting our polyglot. If it's not possible to locate the original unedited image through the web application, then the image will not execute as JS.

Conclusion

While Figure 14 demos a rather mundane script execution, it confirms we now have a method of uploading and executing an XSS attack regardless of the CSP directive. The stored JS in our image acts as an uploaded script file satisfying the CSP's same-origin requirements.

This attack proves that CSP isn't a catch-all XSS filter and can be circumvented in some cases. In application penetration testing, GIF/JS polyglots are a powerful tool for leveraging the consequences of improper output sanitisation.

While still recommended, the CSP header should be implemented with the understanding that it's a last line of defence against XSS attacks, not a guarantee of protection for your web app. Ultimately, secure development processes and proper output encoding are the best way to protect web applications against XSS.


Article by Sam Reid, Security Specialist, Hivint

CryptoWall — Analysis and Behaviours


Key Behaviours of CryptoWall v4

This document details some initial research undertaken by Hivint into the newly released CryptoWall version 4 series of ransomware. A number of organisations we have worked with have experienced infections by CryptoWall and its variants, in some cases leading to severe consequences.

This research paper outlines more information about the latest version of CryptoWall, as well as providing guidance on possible methods for creating custom security controls within your IT environment to mitigate the threat of CryptoWall infections, as well as how to detect and respond to these infections if they do occur. Some lists of known payload sources, e-mail domains and payment pages associated with CryptoWall are also provided at the end of this paper for use in firewall rulesets and/or intrusion detection systems.

CryptoWall version 4 exhibits the following new behaviours:

  • It now encrypts not only the data in your files, but the file names as well.
  • It still includes malware dropper mechanisms to avoid anti-virus detection — but this new version also possesses vastly improved communication capabilities. It still uses TOR, which it may be possible to block with packet-inspection functions on some firewalls. However, it has a modified version of the protocol that attempts to avoid being detected by 2nd generation enterprise firewall solutions.
  • It appears to inject itself into or migrate to svchost.exe and iexplore.exe. It also calls bcdedit.exe to disable the start-up restore feature of Windows. This means the system restore functions that were able to recover data in previous versions of the ransomware no longer work.

Infection Detection

Antivirus detection for this variant is generally very low, but some work on detection is taking place. ESET's anti-virus solution, for example, detects the .js files used by CryptoWall in emails as JS/TrojanDownloader.Agent.

The most reliable method to detect CryptoWall v4 infections when creating rules in intrusion detection systems, firewalls, antivirus systems, or centralised log management servers is to alert on creation of the following file names, which are static within CryptoWall v4 (a sketch of one such check follows the list):

  • HELP_YOUR_FILES.TXT
  • HELP_YOUR_FILES.HTML
  • HELP_YOUR_FILES.PNG
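As one possible implementation of such an alert (a minimal sketch; the share path and the alerting action are placeholders for your environment), a scheduled job could sweep file servers for these names:

    // Sketch: sweep a directory tree for CryptoWall v4 ransom-note file names (Node.js).
    const fs = require('fs');
    const path = require('path');

    const NOTES = new Set(['HELP_YOUR_FILES.TXT', 'HELP_YOUR_FILES.HTML', 'HELP_YOUR_FILES.PNG']);

    function sweep(dir) {
      for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) {
          sweep(full);                                            // recurse into subdirectories
        } else if (NOTES.has(entry.name.toUpperCase())) {
          console.log('Possible CryptoWall v4 marker: ' + full);  // replace with your alerting action
        }
      }
    }

    sweep('/mnt/fileshare');                                      // placeholder share path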

It’s also worth noting that having in place a comprehensive, regular and consistent backup process for key organisational data is extremely important to combat the threat posed by ransomware such as CryptoWall v4. This will facilitate the prompt restoration of important files, limiting impacts on productivity.

Limiting the risk of Infection

CryptoWall v4 connects to a series of compromised web pages to download the payload. Some of the domain names hosting compromised pages are listed below — a useful step would be to create a regular expression on firewalls and other systems to block access to these domains:

  • pastimefoods.com
  • 19bee88.com
  • adrive62.com
  • httthanglong.com

Note that the list of compromised web pages is constantly evolving, so the implemented regular expression will require ongoing maintenance within corporate networks. See the lists at the end for more domains; a starting-point expression is sketched below.
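As one possible form (covering only the four domains above, and assuming your firewall or proxy accepts standard regular expressions), the pattern might be:

    (^|\.)(pastimefoods|19bee88|adrive62|httthanglong)\.com$

Any DNS query or HTTP host header matching the pattern would then be blocked or flagged.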
 
In the new version of CryptoWall, infected files have their file names appended with pseudorandom strings. As a result, encrypted files are harder to identify through examination of file extensions alone, unlike past versions of CryptoWall (in which ‘.encrypted’ was appended to the end of encrypted files). Thus, implementing an alert or blocking mechanism becomes more challenging.
 
However, it is possible to implement regular expression-based rules by considering the executable file names which are downloaded as part of an attempt to infect a system with CryptoWall v4; two such names are known to be associated with CryptoWall v4 infections.

It may also be possible to write detection rules to find a static registry key indicating the presence of a CryptoWall infection. This can then be used to search over an entire corporate domain to locate infected machines, or possibly used in anti-virus / IDS signatures. An example is:

  • HKEY_USERS\Software\Microsoft\Windows\CurrentVersion\Run a6c784cb "C:\Users\admin\AppData\Roaming\a6c784cb4ae38306a6.exe"

Another step to consider is writing a custom list for corporate firewalls containing the domains that phishing e-mails associated with CryptoWall v4 infections are known to come from, as well as a list of known command-and-control servers. For example, one of the first e-mail domains to be reported was 163.com. In addition, some of the known command and control hosts that the ransomware makes calls to include:

  • mabawamathare.org
  • 184.168.47.225
  • 198.20.114.210
  • 143.95.248.187
  • 64.247.179.218
  • 52.91.146.127
  • 103.21.59.9

CryptoWall v4 also makes use of Google’s 8.8.8.8 service for DNS — this behaviour can be taken into account as part of determining whether there are additional security controls that can be implemented to mitigate the risk of infection. In addition, it appears that CryptoWall v4 makes outgoing calls to a number of URLs; those included in the lists at the end of this paper may also be useful in developing infection detection controls.

The initial controls we have worked with most customers to implement on their corporate networks include adding a rule to anti-virus detection systems to identify the ransom note file when it is created (i.e. HELP_YOUR_FILES.TXT). This enables network administrators to be promptly alerted to infections on the network, and is a valuable strategy in conjunction with maintaining lists of known bad domains related to the malware’s infection sources and infrastructure.

Lists of known payload sources, e-mail domains and payment pages associated with CryptoWall

We’ve included the following lists of payload sources, domains and pages associated with CryptoWall v4 infections — which some of our clients have used — to identify activity potentially associated with the ransomware. These can be used in addition to blacklists created and maintained by firewall and IDS vendors:

  • Decrypt Service — contains a small list of the IP addresses for the decryption service. This is the page victims are directed to in order to pay the authors of CryptoWall for the decryption keys. These servers are located on the TOR network but use servers on the regular web as proxies.
  • Email Origin IPs — contains IP addresses of known sources of CryptoWall v4 phishing e-mail origin servers — can be used in developing black lists on e-mail gateways and filtering services.
  • Outgoing DNS Requests — contains a list of IP addresses that CryptoWall v4 attempts to contact.
  • Payload Hosts — contains known sources of infection — including compromised web pages and other infection sources.


Article by John McColl, Principal Advisor, Hivint

Secure Coding in an Agile World: If The Slipper Fits, Wear It


Combining agile software development concepts in an increasingly cyber-security conscious world is a challenging hurdle for many organisations. We initially touched upon this in a previous article — An Elephant in Ballet Slippers? Bringing Agility To Cyber Security — in which Hivint discussed the need to embrace agile concepts in cyber security through informal peer-to-peer sharing of knowledge with development and operations teams and the importance of creating a culture of security within the organisation.

One of the most common and possibly biggest challenges when incorporating agility into security is the ability to effectively integrate security practices such as the use of Static Application Security Testing (SAST) tools in an agile development environment. The ongoing and rapid evolution of technology has served as a catalyst for some fast-paced organisations — wishing to stay ahead of the game — to deploy software releases on a daily basis. A by-product of this approach has been the introduction of agile development processes that have little room for security.

Ideally, security reviews should happen as often as possible prior to final software deployment and release, including prior to the transition from the development to staging environment, during the quality assurance process and finally prior to live release into production. However, these reviews will often require the reworking of source code to remediate security issues that have been identified. This obviously results in time imposts, which is often seen as a ‘blocker’ to the deployment pipeline. Yet the increase in media coverage of security issues in recent years highlights the importance of organisations doing all that they can to mitigate the risks of insecure software releases. This presents a significant conundrum: how do we maintain agility and stay ahead of the game, but still incorporate security into the development process?

One way of achieving this is through the use of a ‘hybrid’ approach that ensures any new software libraries, platforms or components being introduced into an organisation are thoroughly tested for security issues prior to release into the ‘agile’ development environment. This includes internal and external frameworks such as the reuse of internally created libraries or externally purchased software packages. Testing of any new software code introduced into an IT environment — whether externally sourced or internally produced — is typically contemplated as part of a traditional information security management system (ISMS) that many organisations have in place. Once that initial testing has taken place and appropriate remediation occurs for any identified security issues, the relevant software components are released into the agile environment and are able to be used by developers to build applications without the need for any further extensive testing. For example, consider a .NET platform that implements a cryptographic function using a framework such as Bouncy Castle. Both the platform and framework are tested for security issues using various types of testing methodologies such as vulnerability assessments and penetration tests. The developers are then allowed to use them within the agile development environment for the purposes of building their applications.

When a new feature or software library / platform is required (or a major version upgrade to an existing software library / platform occurs), an evaluation will need to occur in conjunction with the organisation’s security team to determine the extent of the changes and the risks this will introduce to the organisation. If the changes / additions are deemed significant, then the testing and assurance processes contemplated by the overarching ISMS will need to be followed prior to their introduction into the agile development environment.

This hybrid approach provides the flexibility that’s required by many organisations seeking an agile approach to software development, while still ensuring there is an overarching security testing and assurance process in place. This approach facilitates fast-paced development cycles (organisations can perform daily or even hourly code releases without having to go through various types of security reviews and testing), yet still enables the deployment of software that uses secure coding principles.

It may be that fitting the ballet slippers (agility) onto the elephant (security) is not as improbable a concept as it once seemed.


Article by Craig Searle, Chief Apiarist, Hivint

The Cyber Security Ecosystem: Collaborate or Collaborate. It’s your choice.


As cyber security as a field has grown in scope and influence, it has effectively become an ‘ecosystem’ of multiple players, all of whom either participate in or influence the way the field develops and/or operates. At Hivint, we believe it is crucial for those players to collaborate and work together to enhance the security posture of communities, nations and the globe, and that security consultants have an important role to play in facilitating this goal.

The ecosystem untwined

The cyber security ecosystem can broadly be divided into two categories, with some players (e.g. governments) having roles in both categories:

Macro-level players

Consists of those stakeholders who are in a position to exert influence on the way the cyber security field looks and operates at the micro-level. Key examples include governments, regulators, policymakers and standards-setting organisations and bodies (such as the International Organization for Standardization, the Internet Engineering Task Force and the National Institute of Standards and Technology).

Micro-level players

Consists of those stakeholders who, both collectively and individually, undertake actions on a day-to-day basis that affect the community’s overall cyber security posture (positively or negatively). Examples include end users/consumers, governments, online businesses, corporations, SMEs, financial institutions and security consultants (although as we’ll discuss later, the security consultant has a unique role that bridges across the other players at the micro-level).

The macro level has, in the past, been somewhat muted with its involvement in influencing developments in cyber security. Governments and regulators, for example, often operated at the fringes of cyber security and primarily left things to the micro-level. While collaboration occurred in some instances (for example, in response to cyber security incidents with national security implications), that was by no means expected.


The formalisation of collaborative security

This is rapidly changing. We are now regularly seeing more formalised models being introduced (or planned) to either strongly encourage or require collaboration on cyber security issues between multiple parties in the ecosystem.

Recent prominent examples include proposed draft legislation in Australia that would, if implemented, require nominated telecommunications service providers and network operators to notify government security agencies of network changes that could affect the ability of those networks to be protected[1], proposals for introducing legislative frameworks to encourage cyber security information sharing between the private sector and government in the United States[2], and the introduction of a formal requirement in the European Union for companies in certain sectors to report major security incidents to national authorities[3].

There are any number of reasons for this change, although the increasing public visibility given to cyber security incidents is likely at the top of the list (in October alone we have seen two of Australia’s major retailers suffer security breaches). In addition, there is a growing predilection toward collaborative models of governance in a range of cyber topic areas that have an international dimension (for example, the internet community is currently involved in deep discussions around transitioning the governance model for the internet’s DNS functions away from US government control towards a multi-stakeholder model). With cyber security issues frequently having a trans-national element — particularly discussions around setting ‘norms’ of conduct around cyber security at an international level[4] — it’s likely that players at the macro-level see this as an appropriate time to become more involved in influencing developments in the field at the national level.

Given this trend, it’s unlikely to be long before the macro-level players start to require compliance with minimum standards of security at the micro-level. As an example, the proposed Australian legislation referred to above would require network operators and service providers to do their best (by taking all reasonable steps) to protect their networks from unauthorised access or interference. And in the United States, a Federal Court of Appeals recently decided that their national consumer protection authority, the Federal Trade Commission, had jurisdiction to determine what might constitute an appropriate level of security for businesses in the United States to meet in order to avoid potential liability[5]. In Germany, legislation recently came into effect requiring minimum security requirements to be met by operators of critical infrastructure.

Security consultants — the links in the collaboration chain

Whatever the reasons for the push towards ‘collaborative’ security, it’s the micro-level players who work in the cyber security field day-to-day who will ultimately need to respond as more formal expectations regarding security posture are placed on them by players at the macro-level.

Hivint was in large part established to respond to this trend — we believe that security consultants have a crucial role to play in this process, including through building a system in which the outputs of consulting projects are shared within communities of interest who are facing common security challenges, thereby minimising redundant expenditure on security issues that other organisations have already faced. This system is called “The Security Colony” and is available now[6]. For more information on the reasons for its creation and what we hope to achieve, see our previous article on this topic.

We also believe there is a positive linkage between facilitating more collaboration between players at the micro-level of the ecosystem, and encouraging the creation of more proactive security cultures within organisations. Enabling businesses to minimise expenditure on security problems that have already been considered in other consulting projects enables them to focus their energies on implementing measures to encourage more proactive security — for example, as we discussed in a previous article, by educating employees on the importance of identifying and reporting basic security risks (such as the inappropriate sharing of system passwords). And encouraging a more proactive security culture within organisations will ultimately strengthen the nation’s overall cyber security posture and benefit the community as a whole.


Article by Craig Searle, Chief Apiarist, Hivint


[1] See in particular the proposed changes to section 313 of the Telecommunications Act 1997 (Cth).
[2] See https://www.fas.org/sgp/crs/misc/R44069.pdf for a description of these proposals.
[3] See http://ec.europa.eu/digital-agenda/en/news/network-and-information-security-nis-directive
[4] See for example http://www.project-syndicate.org/commentary/international-norms-cyberspace-by-joseph-s–nye-2015-05
[5] See http://www.technologylawdispatch.com/2015/08/privacy-data-protection/third-circuit-upholds-ftcs-authority-in-wyndham-case/?utm_source=Mondaq&utm_medium=syndication&utm_campaign=View-Original
[6] https://www.securitycolony.com/

An Elephant in Ballet Slippers? Bringing Agility To Cyber Security


As enterprise IT and development teams embrace Agile concepts more and more, we are seeing an increased need for cyber security teams to be similarly agile and able to adapt to rapidly evolving environments. Cyber security teams that will not or cannot make the necessary changes will eventually find themselves irrelevant; too far removed from the function and flow of the organisation to provide meaningful value, resulting in an increased risk for the organisation and its interests.

So, how do we fit the elephant (cyber security) with ballet slippers (agility)?

Firstly, in an age of devops, continuous integration and continuous deployment, it is critical to understand the evolving role of the cyber security team. The team’s focus on the rigorous definition, enforcement and assurance of security controls is giving way to active education, collaboration and continual improvement within non-traditional security functions. This is primarily because the developers, the operations team and the sysadmins have all become the front line for the security team. These teams spend their working life making decisions that will impact the security of the products and platforms, and ultimately the security of the enterprise. Rather than risk being seen as the ‘department of no’, the cyber security team needs to embrace the change that agile development brings and find ways to improve the enterprise through enhancing the skills and capabilities of these teams.

First and foremost is education. If the devops team don’t know about, or even worse don’t value, security controls and secure practices, then the systems they develop and maintain will never be secure. It is the role of the cyber security team to ensure that all members of the development and operations teams understand that security doesn’t need to be difficult; it can be implemented well if it is inherent to the development process. This is typically achieved through ongoing training and education, both formal and informal.

Secure development and devops training courses are widely available and are absolutely a valuable part of the toolkit, but they tend to be rather static in nature, and bad habits often creep back in over time. Informal education through peer review, feedback and information sharing is far more consistent and reliable, as long as a clear security ethos is established for the team to work from. This is particularly the case for the senior members of the team passing on their knowledge to newer or less experienced members.

Security champions are crucial in filling this role. Ideally a security champion is a member of the security team that works with the development team on a daily, even hourly, basis. One of the most important parts of this role is that the security champion needs to be able to ‘speak geek’ and understand the challenges facing the team when trying to rapidly develop applications. A background in development or devops means that they can speak from experience and be empathetic when dealing with the development teams. The security advice they provide needs to be pragmatic, weighing up the relative risks and benefits, and it needs to be delivered in a way that is meaningful to the rest of the development team.

An ability to get their ‘hands dirty’ and actually assist in aspects of code development or systems maintenance is definitely a bonus. The security champion also needs to drive the implementation of tools and services to support the rapid identification, assessment and remediation of security vulnerabilities in code or platforms. Wherever possible these security tools need to be seamlessly built into the existing development, deployment and testing tools (think Bamboo, Jira, Jenkins, CircleCI and Selenium) so that security assessment becomes transparent to the overall development and deployment processes. The security champion should also be responsible for bringing a cyber-security context into the design stages of development. This is often best achieved by flagging stories (Agile-speak for detailed use-cases) as ‘secure’, meaning that particular attention needs to be paid to that component — user input, authentication, database calls, and connections to external systems/APIs will all require additional analysis.

Finally, and possibly most importantly, it is critical that organisations develop a culture of security. Insecure practices must be treated as a real no-no in day-to-day business behaviour. A good comparison is the nature of OH&S (Occupational Health & Safety) practices in the workplace today. 15–20 years ago the typical workplace was not as safe as it is now. Hazards like trips, puddles of liquid and the like weren’t necessarily seen as a big issue.

Nowadays staff recognise them as a safety risk and have been trained to respond accordingly or raise the issue with someone who will. Cyber security needs to arrive at the same point. Staff need to be aware of what constitutes ‘safe’ and ‘unsafe’ cyber security behaviours, and feel confident in calling out unsafe practices.

Observing a team member sharing a password or leaving a workstation unlocked shouldn’t be seen as normal practice — it needs to be identified as a risk and addressed immediately, with the security team being part of the solution to the problem. Pointing out an insecure practice but not providing a practical solution will only alienate the security team. As staff become aware and feel confident in calling out unsafe activities, with the support of the security team to address them, security becomes part of the cultural DNA and is more readily passed on to new team members and new initiatives.

Agile development does present a number of challenges to a cyber-security team. Trying to adhere to the same practices and controls that were implemented 5–10 years ago is ultimately destined for failure, as the rate of change is too rapid for them to be effective. Adapting practices to maintain relevancy to the evolving environment is the only way to remain effective and best protect the organisation and its customers.


Article by Craig Searle, Chief Apiarist, Hivint

Maturing Organisational Security and Security Service Catalogues


One of the key objectives for an information security professional is providing assurance that the systems which are implemented, or are soon to be implemented, are secure. A large part of this involves engaging with business and project teams proactively to ensure that security needs are met, while trying hard not to interfere with on-time project delivery.

Unfortunately, we’re not very good at it.

Recently, having agreed to conduct a security risk assessment (SRA) of a client’s SFTP solution, which they intended to use to transfer files to a vendor in place of their existing process of emailing the files, I sat down to discuss the security requirement with the Solution Designer, only to have him tell me that an SRA had been done before. Not just on the same design pattern, but on the exact same SFTP solution. They were simply adding an additional existing vendor to the solution to improve the security of their inter-company file transfer process. The organisation didn’t know how to go about evaluating the risks to the company of this change, so it used the ‘best fit’ security-related process available to it, which just happened to be an SRA.

Granted, in the example above, a new vendor might need to be assessed for the operational risk associated with them uploading files to our client’s environment, or an assessment might be warranted if there were changes to the SFTP solution configuration. But in this case, the vendor had been working with them for some time, so there was no further risk introduced, just a more secure business process: the risk was getting lower, not higher.

While this is only one example, this scenario is not uncommon across many organisations we work with, across many industry sectors, and it’s only going to get harder. With more organisations moving to an agile development methodology and cloud deployments, ensuring security keeps up with new developments throughout the business is going to be critical to maintaining at least a modicum of security in these businesses.

So, if you’re getting asked to perform a risk assessment the day before go-live (yes, this still happens), you’re doing it wrong.

If you’re routinely performing your assessments of systems and technology within the project lifecycle, you’re doing it wrong.

If you’re engaging with your project teams with policy statements and standards documents, yes, unfortunately you’re also doing it wrong.

Projects are where things — often big things — change in an organisation’s business or technology environment. And where there is change, there is generally a key touch point for the security team. Projects will generally introduce the biggest potential vulnerabilities to your environment, but if there is an opportunity to positively influence the security outcomes at your organisation, it will also be as part of a project.

Once a system is in, it’s too late. If you haven’t already given your input to get a reasonably secure system, the project team will have moved on, their budget will have gone with them, and you’ll be left filling out that risk assessment that sits on some executive’s desk waiting for the risk to be accepted. Tick.

But on the flip-side, if you’re not proactively engaging with project teams and your business to provide solutions for them, you’re getting in the way.

Let’s face it, no project manager wants to read through dozens of pages of security policy and discern the requirements for their project — you may as well have told them through interpretive dance.

So, what’s the solution?

The solution is to look to the mature field of IT Service Management, and the concept of having a Service Catalogue.

A Security Services Catalogue is two things:

Firstly, it is a list of the security and assurance activities which the security team offers and which are generally part of the system development lifecycle. These services may include a risk assessment, vulnerability assessment and penetration testing, and code review, among others. The important thing is that the services are well defined in terms of the offering’s inputs, outputs and process, and the required effort and price, so that the business and the project teams can effectively incorporate them into their budget and schedule.

Secondly, it is a list of the security services already implemented within the organisation and operated by or on behalf of the security team, which have been through your assurance processes and are effectively “approved for use” throughout the organisation. These services would be the implementation of a secure design pattern or blueprint, or form part of one of those blueprints. To get an idea, have a look at the OSA Security Architecture Landscape, or the Mozilla Service Catalog.

Referring quickly to Mozilla’s approach, a good example is their logging and monitoring (SIEM) service. Assuming a regulatory and policy requirement for logging and monitoring of all systems throughout your environment, it allows the project team to save money and time by using the standardised service. Of course, using the already implemented tool is also common sense, but writing it down in a catalogue ensures that the security services on offer are communicated to the business, and that the logging and monitoring function for your new system is a known quantity and effective.

The easiest way to describe this approach is “control inheritance” — where a particular implementation of a control is used by a system, that system inherits the characteristics of that control. Think of Active Directory — an access control mechanism. Once you’ve implemented and configured it securely, and it has been evaluated, you have a level of assurance that the control is effective. For all systems then using Active Directory, you have a reasonable level of assurance that they are access controlled, and you can spend your time evaluating other security aspects of the system. So communicate to your organisation that they can use it via your Security Service Catalogue.

And if your Project team wants to get creative? No problem, but anything not in the catalogue needs to go through your full assurance process. That — quite rightly — means risk assessments, control audits, code reviews, penetration tests, and vulnerability scans, which accurately reflects the fact that everything will be much easier for everyone if they pick from the catalogue where possible.

So, how does this work in practice?

Well, firstly, start by defining what level of assurance you need for a system to go into production, or to meet compliance. For example, should you need to meet PCI compliance, you’ll at least have to get your system vulnerability scanned and penetration tested. Create your service catalogue around these, and define business rules for their use and the system development lifecycle stages in which they must be completed.

Secondly, you need to break down your environment into its constituent parts (specifically the security components), review and approve each of those parts, and add them to your Security Service Catalogue. Any system then using those security services as part of its functionality, inherits the security of those services, and you can have a degree of assurance that the system will be secure (at least to the degree that the system is solely comprised of approved components).

The benefits are fourfold:

  • Project teams can simply select the services they want to integrate with, and know that those services meet the requirements of the security policy. No mess, no fuss.
  • Projects go faster, project teams know what the expectations are for them, and aren’t held up by the security inquisitor demanding their resources’ time.
  • Budget predictability. Project teams know the costs which need to be included in their budget up front. They can also choose a security service which is a known quantity, which means there is a lower chance of a risk eventuating that needs them to pay to change aspects of the system to meet compliance or remediate a vulnerability.
  • You don’t need to check the security of the re-used components used by those projects over and over again.

For example, you might use an on-premise Active Directory instance with which identity and access management is performed; or maybe it’s hosted in Azure. Perhaps you use Okta, a cloud-based SaaS Identity and Access Control service. For logging and monitoring, you might use Splunk or AlienVault as your organisation-wide security monitoring service, or maybe you outsource it to AlertLogic. Whatever. Perform your due diligence, and add it to your catalogue.

Once it’s in your catalogue, you should assess it annually, as part of your business as usual security practices — firstly for risk, secondly at a technical level to validate your risk findings, and finally in a market context to see if there are better controls now available to address the same risk issue.

I’ve been part of a small team building a security certification and accreditation program from scratch, and have seen that the only way to scale the certification process, and ensure sufficient depth of security review across the multitude of systems present in most organisations, is to make sure unnecessary re-hashing of solution reviews is minimised, using these “control inheritance” principles.

Thirdly, develop a Security Requirements Document (SRD) template based upon your Security Services Catalogue. This is where you define the services available and requirements for your project teams, and make the choices really easy for them. Either use the services in the security services catalogue, or comply with all the requirements of the Password Policy, Access Control Policy, Encryption Policy, etc. After a time, your Project Lifecycle will mature, your Security Services will become more standardised and robust, and your life will become significantly easier.

Lastly, get involved with your project teams. Your project teams are not security experts; you are. The sooner you make it easy for them to access the resources and expertise you have available, the sooner they can make the best decisions for your organisation, and the more secure your organisation will be. Make the secure way the easy way, and everyone’s life will be a little more comfortable.


Article by Ben Waters, Senior Security Advisor, Hivint

Security Collaboration — The Problem and Our Solution


Colleagues, the way we are currently approaching information security is broken.

This is especially true with regard to the way the industry currently provides, and consumes, information security consulting services. Starting with Frederick Winslow Taylor’s “Scientific Management” techniques of the 1890s, consulting is fundamentally designed for companies to get targeted specialist advice to allow them to find a competitive advantage and beat the stuffing out of their peers.

But information security is different. It is one of the most wildly inefficient things to try to compete on, which is why most organisations are more than happy to say that they don’t want to compete on security (unless their core business is, actually, security).

Why is it inefficient to compete on security? Here are a couple of reasons:

Customers don’t want you to. Customers quite rightly expect sufficient security everywhere, and want to be able to go to the florist with the best flowers, or the best priced flowers, rather than having to figure out whether that particular florist is more or less secure than the other one.

No individual organisation can afford to solve the problem. With so much shared infrastructure, so many suppliers and business partners, and almost no ability to recoup the costs invested in security, it is simply not cost-viable to throw the amount of money really needed at the problem. (Which, incidentally, is why we keep going around in circles saying that budgets aren’t high enough — they aren’t, if we keep doing things the way we’re currently doing things.)

Some examples of how our current approach is failing us:

We are wasting money on information security governance, risk and compliance

There are 81 credit unions listed on the APRA website as Authorised Deposit-Taking Institutions. According to the ABS, in June 2013 (the most recent data), there were 77 ISPs in Australia with over 1,000 subscribers. The thought that these 81 credit unions would independently be developing their own security and compliance processes, and the 77 ISPs doing the same, despite the fact that the vast majority of their risks and requirements are going to be identical to those of their peers, is frightening.

The wasted investment in our current approach to information security governance is extreme. Five or so years ago, when companies started realising that they needed a social media security policy, hundreds of organisations engaged hundreds of consultants, to write hundreds of social media security policies, at an economy-wide cost of hundreds of thousands, if not millions, of dollars. That. Is. Crazy.

We need to go beyond “not competing” and cross the bridge to “collaboration”. Genuine, real, sharing of information and collaboration to make everyone more secure.

We are wasting money when getting technical security services

As a technical example, I met recently with a hospital where we will be doing some penetration testing. We will be testing one of their off-the-shelf clinical information system software packages. The software package is enormous — literally dozens of different user privilege levels, dozens of system inter-connections, and dozens of modules and functions. It would easily take a team of consultants months, if not a year or more, to test the whole thing thoroughly. No hospital is going to have a budget to cover that (and really, they shouldn’t have to), so rather than the 500 days of testing that would be comprehensive, we will do 10 days of testing and find as much as we can.

But as this is an off-the-shelf system, used by hundreds of hospitals around the world, there are no doubt dozens, maybe even hundreds, of the same tests happening against that same system this year. Maybe there are 100 distinct tests, each of 10 days’ duration being done. That’s 1,000 days of testing — or more than enough to provide comprehensive coverage of the system. But instead, everyone is getting a 10 day test done, and we are all worse off for it. The hospitals have insecure systems, and we — as potential patients and users of the system — wear the risk of it.

The system is broken. There needs to be collaboration. Nobody wants a competitive advantage here. Nobody can get a competitive advantage here.

So what do we do about it?

There is a better way, and Hivint is building a business and a system that supports it. This system is called “The Colony”.

It is an implementation of what we’re calling “Community Driven Security”. This isn’t crowd-sourcing but involves sharing information within communities of interest who are experiencing common challenges.

The model provides benefits to the industry both for the companies who today are getting consulting services, and for the companies who can’t afford them:

Making consulting projects cheaper the first time they are done. If a client is willing to share the output of a project (that has, of course, been de-sensitised and de-identified) then we can reduce the cost of that consulting project by effectively “buying back” the IP being created, in order to re-use it. Clients get the same services they always get; and the sharing of the information will have no impact on their security or competitive position. So why not share it and pocket the savings?

Making that material available to the community and offering an immediate return on investment. Through our portal — being launched in June — for a monthly fee of a few hundred dollars, subscribers will be able to get access to all of that content. That means that for a few hundred dollars a month, a subscriber will be able to access the output from hundreds of thousands of dollars worth of projects, every month.

Making subsequent consulting projects cheaper and faster. Once we’ve completed a certain project type — say, developing a suite of incident response scenarios and quick reference guides — then the next organisation who needs a similar project can start from that and pay only for the changes required (and if those changes improve the core resources, those changes will flow through to the portal too).

Identifying GRC “Zero Days”. Someone, somewhere, first identified that organisations needed a social media security policy, and got one developed. There was probably a period of months, or even years, between that point and when it became ubiquitous. Through the portal, organisations who haven’t even contemplated that such a need may exist would be able to see that it has been identified and delivered, and if they want to address the risk before it materialises for them, they have the chance. And there is no incremental cost over membership to the portal to grab it and use it.

Supporting crowd-funding of projects. The portal will provide the ability for organisations to effectively ‘crowd fund’ technical security assessments of software or hardware that is used by multiple organisations. The maths is pretty simple: if two organisations are each looking at spending $30,000 to test System X, each getting 15 days of testing for that investment, then by each putting $20,000 into a central pool to test System X, they’ll get 20 days of testing and save $10,000 each. More testing, for lower cost, resulting in better security. Everyone wins.

What else is going in to the portal?

We have a roadmap that stretches well into the future. We will be including Threat Intelligence, Breach Intelligence, Managed Security Analytics, the ability to interact with our consultants and ask either private or public questions, the ability to share resources within communities of interest, project management and scheduling, and a lot more. Version 1 will be released in June 2015 and will include the resource portal (i.e. the documents from our consulting engagements), Threat Intelligence and Breach Intelligence, plus the ability to interact with our consultants and ask private or public questions.

“Everyone” can’t win. Who loses?

The only people that will potentially lose out from this are security consultants. But even there we don’t think that will be the case. It is our belief that the market is supply-side constrained — in other words, we believe we are going to be massively increasing the ‘output’ for the economy-wide consulting investment in information security; but we don’t expect companies will spend less (they’ll just do more, achieving better security maturity and raising the bar for everyone).

So who loses? Hopefully, the bad guys, because the baseline of security across the economy gets better and it costs them more to break in.

Is there a precedent for this?

The NSW Government Digital Information Security Policy has as a Core Requirement, and a Minimum Control, that “a collaborative approach to information security, facilitated by the sharing of information security experience and knowledge, must be maintained.”

A lot of collaboration on security so far has been about securing the collaboration process itself. For example, that means health organisations collaborating to ensure that health data flowing between the organisations is secure throughout that collaborative process. But we believe collaboration needs to be broader: it needs to be about securing not just the collaborative footprint, but the entirety of each other’s organisations.

Banks and others have for a long time had informal networks for sharing threat information, and the CISOs of banks regularly get together and share notes. The CISOs of global stock exchanges regularly get together similarly. There’s even a forum called ANZPIT, the Australian and New Zealand Parliamentary IT forum, for the IT managers of various state and federal Parliaments to come together and share information across all areas of IT. But in almost all of these cases, while the meetings and the discussions occur, the on-the-ground sharing of detailed resources happens much less.

The Trusted Information Sharing Network (TISN) has worked to share — and in many cases develop — in-depth resources for information security. (In our past lives, we wrote many of them.) But these are $50K–100K endeavours per report, generally limited to 2 or 3 reports per year, and generally provide a fairly heavyweight approach to the topic at hand.

Our belief is that while “the 1%” of attacks — the APTs from China — get all the media love, we can do a lot of good by helping organisations with very practical and pragmatic support to address the 99% of attacks that aren’t State-sponsored zero-days. Templates, guidelines, lists of risks, sample documents, and other highly practical materials are the core of what organisations really need.

What if a project is really, really sensitive?

Once project outcomes are de-identified and de-sensitised, they’re often still very valuable to others, and not really of any consequence to the originating company. If you’re worried about it, you can review the resources before they get published.

So how does it work?

  • You give us a problem; we’ll scope it, quote it, and deliver it with expert consultants. (This part of the experience is the same as your current consulting engagements.)
  • We offer a reduced fee for service delivery (the percentage reduction depends on the re-usability of the output).
  • Created resources, documents, and de-identified findings become part of our portal for community benefit.

Great. Where to from here?

There are two things we need right now:

  • Consulting engagements that drive the content creation for the portal. Give us the chance to pitch our services for your information security consulting projects. We’ve got a great team, the costs are lower, and you’ll also be helping our vision of “community driven security” become a reality. Get in touch and tell us about your requirements to see how we can help.
  • Sign up for the portal (you’ve done this bit!) and get involved — send us some questions, download some documents, subscribe if you find it useful.

And of course we’d welcome any thoughts or input. We are investing a lot into this, and are excited about the possibilities it is going to create.


Article by Nick Ellsmore, Chief Apiarist, Hivint

An analysis of the current cyber threat landscape


Over the last few years, it has become apparent that although certain industries are targeted by cyber attacks more than others, the methods used across the board are usually the same.

Prominent incidents identified over 2016–2017 almost always involved one of the following:

  • Phishing and other email scams
  • Ransomware
  • Botnets
  • DDoS
  • Malware-as-a-Service
  • Supply Chain Security

In this article we investigate what cyber-attacks have been prominent over the last 12 months, what trends will continue for the remainder of 2017, and what can be done to minimise your risk of attack.

Phishing and other email scams

Phishing, spear-phishing (targeted phishing of specific individuals) and other email scams continue to be a major threat to businesses. In an era of large-scale security infrastructure investment, users are consistently the weak link in the chain.

The Symantec Internet Security Threat Report 2017[1] and the ENISA Threat Landscape Report 2016[2] state that the threat of phishing is intensifying even though the overall number of attacks is gradually declining, which suggests attacks are becoming more sophisticated and effective. In all likelihood, this is due to cyber criminals moving away from volume-based attacks to more targeted and convincing scams. The transition is motivated by the higher success rate of tailored attacks, although they demand greater effort: researching viable targets through open source material such as social media, and applying social engineering techniques.

This shift in approach is consistent with the observed growth of business-focussed email scams in the last 18 months. Cyber attackers begin by conducting extensive research on businesses they wish to target in order to identify key staff members — particularly those with privileged access, a degree of control over the business’ financial transactions, or in a position of authority.

These scams typically involve cyber attackers crafting emails that request an urgent transfer of funds, seemingly from a trusted party such as a senior manager in the business or an external contractor / supplier who is regularly dealt with. Following a global survey of business email scams, the FBI’s Internet Crime Complaint Center reported this type of attack continues to surge in prominence, with:

  • US and foreign victims reporting 24,345 cases by December 2016 — a significant increase from only 668 reported cases just six months earlier (the actual number is likely to be much higher, as many cases go unreported).
  • Attackers attempting to steal a total of USD$5.3 billion through reported business email scams by the end of 2016, compared to USD$3.1 billion only six months earlier.[3]
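A recurring tactic in these scams is sending from a domain that closely resembles a trusted one. As a minimal defensive sketch (the trusted domain, threshold and function names below are illustrative assumptions, not drawn from the FBI report), a mail gateway could flag near-miss sender domains in Python:

    import difflib

    TRUSTED_DOMAINS = {"example.com.au"}  # hypothetical: your own and your regular suppliers' domains

    def looks_spoofed(sender_domain: str, threshold: float = 0.8) -> bool:
        """Flag domains suspiciously similar to, but not exactly matching, a trusted domain."""
        if sender_domain in TRUSTED_DOMAINS:
            return False
        return any(
            difflib.SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold
            for trusted in TRUSTED_DOMAINS
        )

    print(looks_spoofed("examp1e.com.au"))  # True: one character swapped, a likely lookalike
    print(looks_spoofed("example.com.au"))  # False: exact trusted match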

Ransomware

Ransomware is malicious software that encrypts users’ data and demands payment, typically in a cryptocurrency such as Bitcoin, for the purported safe return of the files via a decryption key. The most prominent example of this form of attack was the WannaCry attack of May 2017, in which cybercriminals distributed the ransomware strain by exploiting an underlying software vulnerability in the Microsoft Windows operating system.

Due to the relatively low ‘barrier to entry’ and potentially lucrative rewards for even inexperienced cyber attackers, we have continued to see significant growth in the use of ransomware since 2016. In January 2016, ransomware accounted for only 18% of the global malware payloads delivered by spam and exploit kits; ten months later, ransomware accounted for 66% of all malware payloads — a relative increase of 267%[4].

Not only is ransomware one of the most popular attack vectors for cyber attackers, it is also among the most harmful. The cost of the ransom is only one aspect to consider — system downtime can have a significant impact on sales, productivity and customer or supplier relationships. In some cases (e.g. medical facilities), ransomware infections could potentially cost lives.

The success rate of ransomware is largely attributable to the exploitation of an organisation’s end users, who typically have limited training and expertise in cyber security. In addition, once ransomware has infiltrated an organisation, many victims find it difficult to recover effectively without paying the ransom demanded by the attackers.

However, there is no guarantee attackers will provide the decryption key if the ransom is paid, and relying on payment of the ransom as a ‘get out of jail’ tactic is a risky choice. Moreover, paying the ransom encourages these sorts of attacks and funds further development of ransomware technology. Hivint’s article ‘Ransomware: a History and Taxonomy’[5] provides an in-depth analysis of the growing problem of ransomware.

Ransomware is likely to be a thorn in the side of organisations for some time to come, and through increasingly diverse avenues. The 2017 SonicWall Annual Threat Report highlights that there is likely to be a greater focus on the industrial, medical and financial sectors due to their particularly low tolerance for system downtime or loss of data availability[6].

Similarly, internetworked physical devices — often referred to as the Internet of Things (IoT) — are also increasingly being targeted, because security is not yet a central consideration in their design. While the majority of IoT devices do not store data and can simply be re-flashed to recover from an attack, organisations and end users may rely on critical devices where any amount of downtime is problematic, such as medical devices or infrastructure. How the design and implementation of IoT devices shifts in response to the growing threat of ransomware will be one of the more interesting spaces to watch for the remainder of 2017 and beyond.

Botnets and DDoS

As with ransomware, the increased inter-connectivity of everyday devices such as lights, home appliances, vehicles and medical instruments is leading to their increased assimilation into botnets to be used in distributed denial of service (DDoS) attacks.

Software on IoT devices is often poorly maintained and patched. Many new types of malware search for IoT devices with factory-default or hardcoded user names and passwords and, after compromising them, use their Internet connectivity to contribute to DDoS attacks. With the number of IoT devices rapidly increasing, this is paving the way for attacks at a scale that DDoS mitigation firms may struggle to handle. The Thales 2017 Data Threat Report suggests that 6.4 billion IoT devices were in use worldwide by 2016, and that this number is forecast to increase to 20 billion devices by 2020.[7]

While the growth of interconnected devices is inevitable, we expect that their rate of exploitation will stabilise in the next few years given the emergence of IoT security bodies such as IoTSec Australia and the IoT Security Foundation. It is likely that device manufacturers will also be pushed to comply with security standards and to make security a more central consideration during design.

Malware-as-a-Service

Hacking toolkits are being made available online, some for free, effectively creating an open source community for cyber criminals[8]. There are also paid business models known as “Malware-as-a-Service”, where less experienced attackers pay a more capable attacker to run the campaign on their behalf. This lowers the barrier to entry for potential cyber attackers and also facilitates the rapid evolution of malware strains, making evasion of anti-malware endpoint protection tools more frequent. We expect this trend to continue as sophisticated cyber attackers increasingly move towards the malware-as-a-service business model.

Supply Chain Security

It’s important to be mindful that cyber attackers may also seek to exploit supply chain partners as a way to compromise the security of a business indirectly. The 2013 breach of US retailer Target is an example of this, as attackers stole remote access credentials from a third-party supplier of services[9]. We have also seen reports of attacks against managed service providers here in Australia as a way of indirectly compromising the providers’ customers[10].

What should you do?

The good news is that most of these threats can be mitigated with a small number of relatively basic controls in place — none of which should come as a surprise:

Patching

Keeping your systems patched and up-to-date can prevent cyber attackers from exploiting the vulnerabilities that allow them to install malicious software on your networks and systems. Unpatched Windows systems were the reason the WannaCry ransomware attack was so prolific.

User Awareness

User awareness training can significantly reduce the likelihood of malware compromising your organisation’s security. Users who can confidently identify and report suspicious emails, and who exercise good removable media security practices, put your security team on the front foot.

Changing default password credentials

The main attack vector for IoT devices is factory-default access credentials left unchanged after installation. Changing the password, or disabling the default accounts, will prevent the majority of attacks on IoT devices. The same applies to hardware more generally, such as routers and end-user devices.
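As an illustration, the sketch below checks whether devices on your own network still accept a few well-known factory-default credentials over their web interfaces. The device addresses and credential pairs are hypothetical, it uses the third-party requests library, and it should only ever be run against systems you are authorised to test:

    import requests  # third-party: pip install requests

    DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]  # common factory pairs
    DEVICES = ["192.168.1.10", "192.168.1.11"]  # hypothetical addresses on your own network

    def accepts_default_creds(host: str) -> list:
        """Return any default credential pairs the device's web interface accepts."""
        accepted = []
        for user, password in DEFAULT_CREDS:
            try:
                resp = requests.get(f"http://{host}/", auth=(user, password), timeout=5)
                if resp.status_code == 200:  # a 401 response would mean the pair was rejected
                    accepted.append((user, password))
            except requests.RequestException:
                pass  # unreachable, or no web interface on this host
        return accepted

    for device in DEVICES:
        hits = accepts_default_creds(device)
        if hits:
            print(f"{device} accepts default credentials: {hits}")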

Segregate BYOD and IoT devices from other systems on your network

Placing IoT devices and uncontrolled bring-your-own devices (BYOD) on a separate network can isolate the effects of any active vulnerabilities from your critical systems.

Backup and recovery

Having all your critical data regularly backed up, both offline and in the cloud, can stop malware — particularly ransomware — from causing major damage to your business. It’s also just as important to regularly test your recovery plans to ensure they work effectively, since restoring systems to a previous state without significant downtime or loss of data is the key to damage control.
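As a minimal sketch of the principle (Python standard library only; the directory paths are placeholders), the snippet below writes a timestamped archive of a data directory and then verifies the archive can actually be read back, which is the step many backup regimes forget to test:

    import tarfile
    from datetime import datetime
    from pathlib import Path

    DATA_DIR = Path("critical_data")  # placeholder: the directory to protect
    BACKUP_DIR = Path("backups")      # placeholder: an offline or cloud-synced location

    def create_backup() -> Path:
        """Write a timestamped, compressed archive of the data directory."""
        BACKUP_DIR.mkdir(exist_ok=True)
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        archive = BACKUP_DIR / f"backup-{stamp}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(DATA_DIR, arcname=DATA_DIR.name)
        return archive

    def verify_backup(archive: Path) -> bool:
        """A restore test in miniature: confirm the archive opens and lists its contents."""
        with tarfile.open(archive, "r:gz") as tar:
            return len(tar.getnames()) > 0

    archive = create_backup()
    print(f"{archive} verified: {verify_backup(archive)}")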


Security Colony

At https://portal.securitycolony.com we have a variety of resources that can help strengthen your organisation’s preparedness for cyber attacks, including user awareness materials, incident response templates, security policies and procedures, and a vendor risk assessment tool to help assess the security posture of your vendors’ internet-facing presence. Other resources include an “Ask Hivint” forum for those more esoteric questions, and breach monitoring to identify whether your users or domain has been caught up in a previous security incident.

References

[1] https://www.symantec.com/content/dam/symantec/docs/reports/istr-22-2017-en.pdf

[2] https://www.enisa.europa.eu/publications/enisa-threat-landscape-report-2016

[3] https://www.ic3.gov/media/2017/170504.aspx and http://fortune.com/2017/05/05/wire-transfer-fraud-emails

[4] https://www.malwarebytes.com/pdf/white-papers/stateofmalware.pdf

[5] https://blog.hivint.com/hivint-ransomware-6918b630f625

[6] https://www.sonicwall.com/whitepaper/2017-sonicwall-annual-threat-report8121810

[7] https://dtr.thalesesecurity.com/

[8] https://blog.checkpoint.com/wp-content/uploads/2016/08/InsideNuclearsCore_UnravelingMalwarewareasaService.pdf

[9] https://krebsonsecurity.com/2015/09/inside-target-corp-days-after-2013-breach/

[10] https://www.arnnet.com.au/article/617425/aussie-msps-targeted-global-cyber-espionage-campaign/

Cybersecurity Collaboration

Establishing a security Community of Interest


Hivint is all about security collaboration.

We believe that organisations can’t afford to solve security problems on their own and need to more efficiently build and consume security resources and services. Whilst we see our Security Colony as a key piece of this collaboration puzzle, we definitely don’t see it as the only piece.

Through our advisory services, we regularly see the same challenges and problems being faced by organisations within the same industry. We also see hesitation among organisations about sharing information with others, often due to perceived competitiveness, the lack of a framework to enforce sharing, fear of sharing too much information, and privacy concerns.

We believe it is important for organisations to realise that in security they shouldn’t be competing against their ‘competitors’, but against the threats, and that working together to solve common security challenges is vital. We want to help make that happen. One such way — and the purpose of this article — is for a group of similar organisations to form a security Community of Interest (CoI).

This article outlines our suggested approach to establishing and running a CoI, covers a number of common concerns regarding the operation of such a community, and concludes with a link to a template that can be used by any organisation wishing to establish one.

Why is information sharing good?

Security information sharing and collaboration helps ensure that organisations across an industry learn from each other, fostering the innovative thinking needed to deter cyber criminals. Our earlier blog post, Security Collaboration — The Problem and Our Solution, provides a detailed outlook on security collaboration.

We consider security collaboration vital to changing the economics of cyber-crime, and as such we share what works and what doesn’t by making the output of our consulting engagements available on our Security Colony Portal.

However, we acknowledge that there are times when sharing between organisations could be more direct, by forming a closer collective. Documents and resources could then be shared that are more specific to their industry (for example, acceptable use policies may be very similar across universities), or more sensitive in nature, such that it would be unreasonable to share them publicly (for example, security issues that have not yet been effectively solved).

When does a Community of Interest work?

Sharing of information is most effective when a collective group is interested in a similar type of information. An example of this may be university institutions — while distinct entities will have different implementations, the overall challenges each faces are likely to be similar. Pooling resources such as policies, operating procedures and, to an extent, metrics provides a way to maximise the performance of the group as a whole while minimising duplication of effort.

When is Community of Interest sharing a bad idea?

Sharing agreements like a CoI will not be effective in all circumstances — a CoI will only work if information flows in both directions for the organisations involved. It would not be a suitable framework for things that generally flow in a single direction, such as government reporting. A CoI’s primary focus should also not be on sharing ‘threat intel’, as a number of services already do this, such as CERT Australia, Open Threat Exchange, McAfee Threat Intelligence Exchange and IBM X-Force, to name a few.

How is information shared within a Community of Interest?

An important aspect of a CoI is the platform used for sharing between its members. No single platform will suit every CoI; each organisation will have unique requirements and preferences as to which platforms will be most effective in the circumstances. Platforms such as email lists and portals can be effective for sharing electronically, while meetings (whether in person or teleconference-style) may be more effective in some cases.

What kind of information can be shared?

In theory, almost anything. In practice, however, there are seven major types of cybersecurity information suitable for sharing, according to Microsoft[1]. These are:

  • Details of attempted or successful incidents
  • Potential threats to business
  • Exploitable software, hardware or business process vulnerabilities
  • Mitigation strategies against identified threats and vulnerabilities
  • Situational awareness
  • Best practices for incident management
  • Strategic analysis of the current and future risk environment.

Hivint recognises that every piece of information has different uses and benefits. Sharing information such as general policy documents, acceptable use policies, or processes that an organisation struggles with or performs well can uplift cyber resilience and efficiency across businesses. These are also relatively simple artefacts that can be shared to help build initial trust in the CoI, and are recommended as a starting point.

What about privacy and confidentiality?

Keeping information confidential is a fundamental value for establishing trust within a CoI. To ensure this is maintained, guidelines must be established against sharing of customer information or personal records.

Information should be de-identified and de-sensitised to remove any content that could result in an unauthorised disclosure or breach, and limitations should be established to determine both the extent of information that can be shared and the authorised uses of this information by the receiving parties.
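As a simple illustration of the de-identification step, the Python sketch below scrubs common identifiers (email addresses, IPv4 addresses and a hypothetical organisation name) from text before it is shared. The patterns are illustrative only; real de-identification should always include a human review rather than relying on patterns alone:

    import re

    # Illustrative patterns only; review the output manually before sharing.
    PATTERNS = {
        r"[\w.+-]+@[\w-]+\.[\w.-]+": "<email>",          # email addresses
        r"\b(?:\d{1,3}\.){3}\d{1,3}\b": "<ip-address>",  # IPv4 addresses
        r"Acme Corp": "<organisation>",                  # hypothetical originating organisation
    }

    def deidentify(text: str) -> str:
        """Replace identifying strings with neutral placeholders."""
        for pattern, placeholder in PATTERNS.items():
            text = re.sub(pattern, placeholder, text)
        return text

    sample = "Contact [email protected]: host 10.1.2.3 at Acme Corp was affected."
    print(deidentify(sample))
    # Contact <email>: host <ip-address> at <organisation> was affected.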

How is a Community of Interest formed?

It is important to realise that organisations need not follow a single structure or model when setting up a CoI. Ideally, the first step is identifying and contacting like-minded people with an interest in collaborating from your network or business area. Interpersonal relationships between the personnel involved in a CoI are critical to retaining and enhancing the trust and confidence of all members. A fitting approach to creating such an environment is to initially exchange non-specific or non-critical information on a more informal basis. Given that sharing agreements like this require a progressive approach, it is best not to jump in head first by sharing all the information pertaining to your business in the first instance.

Once the first phase of sharing has succeeded and a strong relationship has developed between the parties involved, a more formal approach is encouraged for the next phase.

Next Steps

We’ve made a Cyber Security Collaboration Framework available to all subscribers (free and paid) of Security Colony which can be used as a template to start the discussion with interested parties, and when the time comes, formally establish the CoI.

[1] ‘A Framework for Cybersecurity information sharing and risk reduction’ — https://www.microsoft.com/en-us/download/details.aspx?id=45516


Additional Information

There are a number of instances where cyber-security information sharing arrangements have been established around the world. The below provides links to a small number of these.

http://data.cambridgeshire.gov.uk/data/information-management/info-sharing-framework/cambs-information-sharing-framework.pdf

https://corpgov.law.harvard.edu/2016/03/03/federal-guidance-on-the-cybersecurity-information-sharing-act-of-2015/

https://www.enisa.europa.eu/publications/cybersecurity-information-sharing

Vendor Risk Assessment Tool

Our new addition to the Security Colony Portal.


Security Colony has released its “Vendor Risk Assessment” (VRA) tool, developed in conjunction with a major financial services client, which enables our subscribers to assess the risk associated with their internet-facing presence and receive a profile reflecting their cyber security maturity.

While seeing your own profile is empowering, the ultimate purpose of the tool is to enable you to gain better visibility over your suppliers. In Q2 this year, we will be releasing the ability for our paid subscribers to add additional vendors for tracking, to get a view of their third party risk.


The platform uses a range of free, open source and commercial tools to complete 17 distinct checks against a company’s online footprint, packaging this analysis up in an easy-to-use interface that details the identified risks and provides an overall risk score and grade for the vendor.

What does it do?

There are two broad assessment categories completed by the VRA platform: malicious activity checks, and misconfiguration and vulnerability checks.

The data collected from these assessments is then analysed and presented in an easy to manage format, including:

  • Providing a risk-based score (out of 10) and a corresponding grade (from A to F); a hypothetical mapping is sketched below
  • Tracking the change in security risks over time
  • Providing clarity around the source of the calculation
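Security Colony’s actual score-to-grade banding isn’t published here, but as a purely hypothetical illustration of how a 0-10 score might map to an A-F grade:

    def score_to_grade(score: float) -> str:
        """Map a 0-10 risk score to a letter grade (hypothetical bands, not the VRA tool's actual ones)."""
        bands = [(9.0, "A"), (8.0, "B"), (7.0, "C"), (6.0, "D"), (5.0, "E")]
        for cutoff, grade in bands:
            if score >= cutoff:
                return grade
        return "F"

    print(score_to_grade(10))   # 'A': a 'perfect 10'
    print(score_to_grade(6.5))  # 'D'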

Domain Risk Overview

Malicious activity checks

The VRA tool assesses the organisation for historic (or current) malicious activity, including:

  • Whether an organisation has had their domain blacklisted for spam (one such check is sketched below)
  • Whether an organisation has been identified as hosting malware on their domains
  • Whether an organisation has been identified as a source of phishing attacks
  • Whether an organisation has been identified as a source of botnet attacks

Malicious activity checks
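As an example of how a spam blacklist check can work under the hood, the sketch below queries a public DNS blacklist (Spamhaus ZEN) for an IPv4 address using the third-party dnspython library. The address shown is a documentation placeholder, and the VRA tool’s actual checks may differ:

    import dns.resolver  # third-party: pip install dnspython

    def is_blacklisted(ip: str, dnsbl: str = "zen.spamhaus.org") -> bool:
        """DNSBL convention: reverse the IPv4 octets, append the blacklist zone, query for an A record."""
        reversed_ip = ".".join(reversed(ip.split(".")))
        try:
            dns.resolver.resolve(f"{reversed_ip}.{dnsbl}", "A")  # any answer means the IP is listed
            return True
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False

    print(is_blacklisted("192.0.2.1"))  # documentation address; normally not listed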

Misconfiguration and vulnerability checks

The VRA tool assesses security misconfigurations and vulnerabilities, including:

  • Whether an organisation has a strong process for correctly configuring all their encryption (SSL/TLS) certificates
  • Whether an organisation uses strong email security technology (SPF and DMARC; see the sketch below)
  • Whether employees of an organisation have used their corporate email addresses on external accounts, and whether they have then been the subject of a data breach
  • Whether an organisation has insecure (i.e., unencrypted) ports open to the Internet

Security configuration and vulnerability checks
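To illustrate the SPF and DMARC check called out above, the sketch below looks up the relevant DNS TXT records for a domain, again using dnspython. The domain is a placeholder, and the presence of these records is only a baseline signal, not proof the policies are well configured:

    import dns.resolver  # third-party: pip install dnspython

    def get_txt_records(name: str) -> list:
        """Return the TXT record strings published at a DNS name (empty list if none)."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
            return [b"".join(rdata.strings).decode() for rdata in answers]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    def check_email_security(domain: str) -> dict:
        """Baseline check: does the domain publish SPF and DMARC records?"""
        spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
        dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
        return {"spf": spf, "dmarc": dmarc}

    print(check_email_security("example.com"))  # placeholder domain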

To demonstrate the system, scores were calculated for each of the ASX 100 companies and averaged by industry (out of 10).

Key findings of the analysis were:

  • The IT industry has the best average score, showing their understanding of the importance of consistent cyber security processes.
  • Telecommunications and Financial Services round out the Top 3.
  • Energy, Materials (including mining) and Industrials are less mature, reflecting the reduced focus they have placed on cyber security historically.
  • Health Care is in the bottom 4, a significant concern given the sensitivity of data held.

Just 3 companies in the ASX 100 received a ‘perfect 10’ — ANZ Bank, Link Group, and Star Entertainment Group.


The VRA tool is now live in the Security Colony (securitycolony.com) portal. Membership is free and any organisation can see their own score after signing up.