

SecMetrics

If we are going to measure security, what exactly are we measuring?


If you can’t measure it, you can’t manage it
– W. Edwards Deming

That is a great quote, and one I have heard a lot over the years. An interesting point, however, is that it is usually presented without context. What I mean by this is that if you look at page 35 of The New Economics, you will see that quote is flanked by some further advice, namely:

It is wrong to suppose that if you can’t measure it, you can’t manage it — a costly myth.

Deming did believe in the value of measuring things to improve their management, but he was also smart enough to know that many things simply cannot be measured and yet must still be managed.

Managing things that cannot be measured, and the absence of context, is as good a segue as any to the subject of metrics in information security.

A recurring question that arises surrounding the use and application of metrics in security is “What metrics should I use?”

I have spent enough time in security to have seen (and, truth be told, probably generated) an awful lot of rather pointless reports in and around security. I think I'm ready to attempt to explain what is going on here, and why "What metrics should I use?" might be the wrong question. Instead, there should be more of a focus on the context provided with metrics, in order to create useful and meaningful information about an organisation's security.

A Typical (& faulty) Approach

Here is a typical methodology we often see that leads to an organisation acquiring a security appliance or product of some sort (e.g. a firewall, or an IDS/IPS).

  1. A security risk is identified
  2. The security risk is assessed
  3. A series of controls are prescribed to mitigate or reduce the risk, and one of those controls is some sort of security appliance / product
  4. Some sort of product selection process takes place
  5. Eventually a solution is implemented

Now, we know (or we thought we knew) thanks to Dr Deming that in order to manage something, we first need to measure it, so we instinctively look at the inbuilt reporting on the chosen device (a more mature organisation might have thought about this at stage 4, where it may even have influenced the selection). We select some of the available metrics, and in less than ten minutes, somehow electro-magically, we have configured the device to email a large number of pie charts in a PDF to a manager at the end of every month.

[Chart: Top Threats]

However, the problem with the above chart is that it doesn't actually mean anything; it has no context. The best way to demonstrate what I mean by context is to add a little.

[Chart: Top Threats this month]

Ok, that’s better, but in truth it’s still pretty meaningless, so let’s add some more.

[Chart: Threats identified (5 months)]

Now we are starting to get something meaningful. Immediately we can see that something appears to be going on in the world of cryptomalware, and we can choose to react; the information provided now demonstrates a distinct trend and is clearly actionable.

But we can bring even more context into this. The points below provide some more suggestions for adding context, and will give you a feel for the importance of having as much context as possible to create meaningful metrics.

  • What about the threats that do not get detected? Are there any estimations available (e.g. in Security Intelligence reports) on how many of those exist? (Donald Rumsfeld’s ‘Known Unknowns’)
  • Can we add more historical data? More data means more reliable baselines, and the ability to spot seasonal changes
  • Could we collect data from peer organisations for comparison (i.e. — do we see more or less of this issue than everyone else)?
  • We have 4 categories; perhaps we need a line for threats that do not fall into these categories?
  • What are the highest and lowest values we (and/or other companies in the industry) have ever seen for these threats?
  • Do we have the ability to pivot on other data? For example, would you want to know if 96% of those infections were attributed to one user? (A minimal sketch of this kind of pivot follows this list.)
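
To make that last point concrete, here is a minimal sketch in Python of pivoting raw detections by another dimension. The field names and sample records are hypothetical (a real export from your appliance will look different); the idea is simply that the same headline count becomes far more actionable once it is broken down, for example per user.

```python
# Hypothetical export of detections as (month, category, user) records.
from collections import Counter

detections = [
    {"month": "2017-01", "category": "cryptomalware", "user": "jsmith"},
    {"month": "2017-01", "category": "cryptomalware", "user": "jsmith"},
    {"month": "2017-01", "category": "adware", "user": "pjones"},
    # ... the rest of the export would go here
]

# Headline number: total detections per category (the "pie chart" view).
by_category = Counter(d["category"] for d in detections)

# Context: within the worst category, which users account for the detections?
worst = by_category.most_common(1)[0][0]
by_user = Counter(d["user"] for d in detections if d["category"] == worst)
total = sum(by_user.values())

for user, count in by_user.most_common():
    print(f"{worst}: {user} accounts for {count/total:.0%} of detections")
```

The same pattern works for pivoting by site, device, time of day, or any other field the export gives you.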

The Impact of Data Visualisation

So now we have an understanding of context, what else should we consider?

Coming from the world of operational networking, I spent a lot of my time in a previous role getting visibility of very large carrier-grade networks. It was my job to maintain hundreds of gateway devices such as firewalls, proxy servers, VPN concentrators, spam filters and intrusion detection and prevention systems, along with all the infrastructure that supported them.

At that time if you were to ask me what metrics I would like to collect and measure, the answer was simple — I wanted everything possible.

If a device had a way to produce a reading for anything, I found a way to collect it over SNMP and graph it using open source tools such as RRDTool and Cacti.

I created pages of graphs for device temperatures, memory usage, disk space, uptime, number of concurrent connections, network throughput, admin connections, failed logins etc.


The great thing about graphs is you can see anomalies very quickly — spikes, troughs, baselines, annual, seasonal and even hourly fluctuations give you insight. For example, gradual inclines or sudden flat-lines may mean more capacity is needed, whereas sharp cliffs typically mean something downstream is wrong.

Using these graphs and a set of automated alerts, I was able to spot problems that were well outside of my purview. For example, I often diagnosed failed A/C units in datacentres long before anyone else had raised alarms. I was able to detect when connected devices had been rebooted outside of change windows. I could even see when other devices had been compromised, because I could graph failed logon attempts for devices in the vicinity.
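
As a rough illustration of that kind of automated alerting, the Python sketch below flags points that sit well above a trailing baseline. It assumes the hourly failed-login counts have already been collected (for example via SNMP polling or syslog parsing), and the window size and threshold multiplier are illustrative rather than recommended values.

```python
# Flag points that are well above the trailing baseline of a metric series.
from statistics import mean, stdev

def check_for_spikes(series, window=24, sigma=3.0):
    """Yield (index, value) for points far above the trailing baseline."""
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and series[i] > mu + sigma * sd:
            yield i, series[i]

# Hypothetical hourly counts of failed logins on a gateway device.
failed_logins = [2, 1, 0, 3, 2, 1, 2, 0, 1, 2, 3, 1,
                 2, 1, 1, 0, 2, 3, 1, 2, 1, 0, 2, 1,
                 45]  # the final hour is clearly anomalous

for hour, value in check_for_spikes(failed_logins):
    print(f"Alert: {value} failed logins in hour {hour} is well above baseline")
```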

In the ten years or so since I was building out these graphs in Cacti, technologies for dashboarding, dynamic reporting and automated alerting have come a long way, and it's now easier than ever to collect data and produce very rich information, provided that you understand the importance of context and you consider how actionable the information you produce will be to the end consumer.

Conclusions

While this write-up has focused particularly on context with respect to technical security metrics, it is important to remember that security is mainly about people, so you should always consider the softer metrics that cannot simply be collected by SNMP polling, the parsing of syslogs, and the like. For example, is there a way to measure the number of users completing security awareness training, and see if this correlates with the number of people clicking on phishing links?

Would you want to know for instance if the very people who had completed security awareness training were more likely to click on phishy emails?
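
As a rough sketch of how you might test that, the Python below compares phishing-simulation click rates between staff who have and have not completed awareness training. The staff lists and data sources are hypothetical; in practice they would come from your training platform and phishing-simulation tool.

```python
# Hypothetical exports: who completed training, who clicked the last simulation.
completed_training = {"asmith", "bjones", "cwhite"}
clicked_phish = {"bjones", "dgreen", "efox"}
all_staff = {"asmith", "bjones", "cwhite", "dgreen", "efox", "fhall"}

trained = all_staff & completed_training
untrained = all_staff - completed_training

trained_click_rate = len(trained & clicked_phish) / len(trained)
untrained_click_rate = len(untrained & clicked_phish) / len(untrained)

print(f"Click rate (trained):   {trained_click_rate:.0%}")
print(f"Click rate (untrained): {untrained_click_rate:.0%}")
# If the trained group's rate is higher, that is exactly the uncomfortable
# finding the question above is asking about.
```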

The bottom line is that security metrics, whether technically focused or otherwise, are relatively meaningless without context. While metrics aim to measure something, it's the context in which the measurements are given which provides valuable and actionable information that organisations can use to identify and prioritise their security spend.


Article by Eric Pinkerton, Regional Director, Hivint

Check out Security Colony's Cyber Security Reporting Dashboards, a set of NIST CSF-based dashboards with more metrics for your security program.

Breach Disclosure

Introduction of mandatory breach disclosure laws

After several long years gestating in the lower and upper houses, the Privacy Amendment (Notifiable Data Breaches) Bill 2016 has finally been passed, establishing a mandatory data breach notification scheme in Australia.

This morning, almost in anticipation of the law’s passage, the Australian Information Security Association (AISA) sent an email notifying its members of an incident affecting its members’ data:

We have made a commitment to you that we will always keep you up to date on information as and when we have it.
 
 Today the new Board took the decision to voluntarily report to the Office of the Australia Information Commissioner an incident that occurred last year that could have potentially impacted the privacy of member data kept within the website. At the time, a member reported the issue to AISA and it was rectified by the then administrative team. What wasn’t done, and as we all know is best practice, was notification to you as members that this potential issue had occurred, and notification to the Australia Privacy & Information Commissioner.
 
 Your member information is not at risk and the issue identified has been rectified.
 
 The AISA Board takes this matter very seriously.
 
 As the industry body representing these and many other information security issues, we expect and demand best practice of AISA and of our members. The Board holds the privacy of member data as sacrosanct and will ensure that all members are aware of any and all privacy information.
 
 If you have any concerns or wish to discuss this matter please feel free to contact either myself or the Board members.
 
 Many thanks for your ongoing support.

And while AISA quite validly trumpets that notifying its members is best practice, the way it notified its members falls well short of best practice.

More specifically, the notification doesn't answer any of the questions that members would be expected to ask, and in the context of broader issues occurring at AISA, raises the question of why the notification occurred now.

Questions left unanswered include:

  • What happened?
  • What has been done to remediate and limit such exposures in the future?
  • What information was potentially compromised?
  • Was it a potential compromise or an actual compromise?
  • What should I (as a member whose data was potentially compromised) do about it?
  • Do I need to look out for targeted phishing attacks? Fraudulent transactions?
  • Has my password been compromised? Has my credit card been compromised?
  • Who has investigated it and how have you drawn the conclusions you’ve drawn?

Data breach notification effectively forms part of a company's suite of tools for managing customer and public relations. Doing data breach notification well can make a difference to the effort required to manage those relationships during a crisis, and to the value of long-term customer goodwill.

This blog post explores what a “good” data breach notification looks like, and highlights where AISA falls short of that standard.

How to effectively manage a breach

As data breaches continue to increase in frequency, the art of data breach notification has become more important to master. A key challenge in responding to a data breach is notifying your customers in a way that enhances, rather than degrades, your brand’s trustworthiness.

This guide outlines the key factors to consider should you find yourself in the unfortunate position of having to communicate a data breach to your customers.

There are 7 factors we recommend you focus on:


Clearly, the AISA announcement falls short on a number of the above factors.

In particular:

  • Clarity — the AISA announcement does not clearly identify who was affected, or even if there was in fact a breach of data.
  • Timeliness — If the incident occurred on June 15th last year, why wait to notify members over eight months after the incident occurred? Given so much time has passed since the incident, and AISA has had sufficient time to investigate and rectify the issue, why was there not more information provided about the nature of the breach? Given the time elapsed, the breach notification seems conveniently timed to coincide with the legislation, which leads to the final point;
  • Genuineness — No apology was given as part of the breach notification, nor was any detail given about what members need to do, what information (if any) was breached, or any assurances that it won’t occur again.

An email covering the 7 factors will answer, as best you can, the questions the affected party may have. Further follow-up information can be provided using an FAQ, a good example of which is the Pont3 Data Breach FAQ.

So, with an understanding of what to do, it’s also key to consider what not to do.

Breach Disclosure — What not to do

When formulating a breach notification strategy it is also important to know what not to do. Described below is our ‘Hierarchy of Badness’, starting with the worst things first!

1. Intentionally lying: Making any false statements in a bid to make the situation appear less complicated or serious than it is known to be, for example, stating that the data lost was encrypted when it was not. There is a very high chance that the truth will become available at some point, and at that point apparent intentional lies will wipe you out. This routinely gets people fired, and can cause significant reputational damage for the organisation.

2. Unintentionally lying: Drawing conclusions and providing statements without thoroughly analysing the impact and depth of the breach can lead to unintentional lies or the omission of information. This can be a result of publishing a notification too early before the details are fully understood. Whilst unintentional lies are ‘better’ than intentional lies, it may be difficult to prove to your customers that there was no ill intent. Depending on the lie, this may also result in someone getting fired.

3. Malicious omission: As the name suggests, organisations sometimes leave key information out of their disclosure statements, particularly by directing focus to information that is less crucial. For example, rather than stating that data was not encrypted in storage, the statement focuses on data being encrypted in transit. While the latter is true, a crucial piece of information has purposely been omitted for the purpose of diversion. Not a great strategy. While omission may be a legal requirement at points during the course of an incident, an omission which changes the implied intent or meaning of a communication can backfire.

4. Weasel words and blame-shifting: A very common but poorly conceived inclusion in breach notifications is the overused cliché, such as ‘we take security seriously’ or ‘this was a sophisticated attack’. If there is no good reason to use a particular phrase or word, it is better not to include it in the statement. Describing an attack as sophisticated, or suggesting steps are being taken without providing further details, is not going to make your customers feel better about the situation.

Our Hierarchy of Badness heat map below depicts the sweet spot for disclosure.


Conclusion

Historically, some organisations have preferred to use the ‘Silence & roll the dice’ strategy. This is a risky strategy, where the organisation doesn’t notify its customers about the breach at all, and simply hopes the whole situation will blow over.

With the passing of the Privacy Amendment Bill, while this may still pan out well in some cases, it can have adverse outcomes in others, particularly if the breach is identified and reported by bloggers, researchers or users (a case of malicious omission in the ‘Hierarchy of Badness’). That said, there will be a lot of organisations falling under the threshold for mandatory disclosure, so the ‘Silence and roll the dice’ strategy will continue to be used.

An ideal way to help your customers through a data breach is by referring them on to services like ID Care, the various CERTs, or other service providers, so they can get the advice they need to respond to the issue in their particular circumstances. Trying to “advise” your clients about what they should do post-breach, when you are doing so from a position of having just had a breach yourself, is rarely a good strategy.

Finally, the best preparation for data breach disclosure is to have both:

  • A micro-level response for your customers regarding what data was lost, if it’s recoverable and what they as data owners can do to mitigate the impact; and
  • A macro-level response for the press with details of the volume of data lost, your response plan, and how your customers should go about handling the situation.

It is also necessary to realise that data breach notification is not a one-time act. To ensure the best outcome from a public relations and crisis management perspective, it’s best to provide customers with updates as and when you get new information and ensure your customers realise it’s an ongoing engagement and that you genuinely care about their data.


Article by Nick Ellsmore, Chief Apiarist, Hivint

Check out more resources in Security Colony, such as incident response runsheets, an incident response communications plan guide, and a sample breach notification letter

Top 4 Expands to Essential 8

A week or two back, the Australian Signals Directorate (ASD) replaced their “Top 4 Mitigation Strategies” with a twice-the-fun version called the “Essential 8.”

“Why?” I hear you ask… That’s a good question.

After all, it was a big deal when the Top 4 came out in 2010 (and were then updated in 2012 and 2014), as the ASD claimed they would address 85% of targeted attacks. Their website still says that the Top 4 address 85% of targeted attacks. So, if nothing else has changed, why change the Top 4? There would seem to be a few possible explanations:

  • Everyone has finished rolling out the Top 4 and needed the next few laid out for them.
  • Attacks have changed, and as a result, the Top 4 no longer address 85% of targeted attacks so we need to change tack.
  • The change is tacit recognition that the Top 4, while great controls, provide a heavy burden in terms of implementation (especially application white-listing) so are not a realistic target to implement for most organisations, so the Essential 8 was created to provide a more ‘attainable’ set of controls.
  • ASD just felt it needed a refresh to maintain focus and to highlight that the Top 4 aren’t the only controls you need.

The first of those is, sadly, laughable. The second is plausible but not certain. The third is quietly compelling (I mean, we all feel better about 7 out of 8 than we do with 3 out of 4). And the last one is also pretty persuasive.

Enough about the why, let’s talk about the change itself.

What is the Essential 8?

If an organisation or agency has been pursuing the Top 4 but has not yet reached that implementation goal, what should it do now? Switch focus to the Essential 8 or continue with the Top 4?

And perhaps the most important question of them all… where is the video? (here’s a link to Catch, Patch, Match from back in 2010, which is actually pretty good. I mean, it’s no PCI DSS Rock, but each to their own).

Here's a bit of background for those who are not so familiar with the Top 4:

Note the change of name of the main document from ‘Strategies to Mitigate Targeted Cyber Intrusions’ to ‘Strategies to Mitigate Cyber Security Incidents’.

These 3 documents present 37 controls as mitigation strategies against the 6 threats listed below. The Top 4 are unchanged from the previous update; however, the Top 4, along with 4 further controls, are now presented as a new baseline called the Essential 8.

3 documents, 37 controls, 6 threats, Top 4, Essential 8… Confused yet? This is what Hivint's Chief Apiarist, Nick Ellsmore, had to say about so many numbers flying around these days:


So, what are the changes?

This article isn't going to present a control-by-control comparison. You'll find plenty of those. We want to look at the big-picture change. In terms of the number of controls, there are now 37 rather than 35: a few added, a few combined, a few renamed or modified. The ASD website has a complete list of changes, and there will be many a blog post picking it apart. We want to help you figure out what to do about it.

Top 4

The Top 4 mitigation strategies remain the same, and are still mandatory for Australian Federal Government agencies.

Previously, all 35 strategies were described as strategies to mitigate one key threat: targeted cyber attacks. The ASD also claimed that when the Top 4 were properly implemented, they effectively mitigated 85% of targeted cyber attacks. One key change in the new release is that the threat landscape is now defined more broadly, covering the following 6 threats:

  1. Targeted cyber attacks
  2. Ransomware and other external adversaries
  3. Malicious insiders who steal data
  4. Malicious insiders who destroy data and prevent computers/networks from functioning
  5. Business email compromise
  6. Threats to industrial control systems

Essential 8

Four additional controls, along with the Top 4, form the Essential 8. This is presented as a ‘Baseline’ for all organisations. At first glance, the Essential 8 feels like a natural extension that organisations can adopt into their security strategy without much of a hassle. Realistically though, it’s a bit more complicated than that.

When it came out in 2010, the ‘Strategies to Mitigate Targeted Cyber Intrusions’ was considered quite unique, since it confidently declared that by doing the Top 4, organisations would mitigate 85% of ‘targeted cyber attacks’. In hindsight, the full list of 35 was probably too long for most organisations to digest, and few ever looked past the attractiveness of only having four things to do. That said, most organisations would have at least some of the 31 other controls implemented through their standard Business as Usual (BAU) operations (e.g. email/web content filtering, user education) whether or not they set out with the list in hand.

It is worth noting here that we have seen very few organisations genuinely deploy the Top 4 comprehensively.

It is also worth noting that for many organisations trying to climb the ladder of resilience, “targeted threats” seem a long way away, and managing the risk of non-targeted scattergun malware attacks, naïve but well-meaning users, and the Windows XP fleet, is more like today’s problem.

And at the other end, looking at government institutions genuinely targeted by nation states, it seems unlikely that a Top 4 would remain current for 7 or more years given the changing nature of threats. Stuxnet, Ed Snowden and the Mirai botnet are a few extreme examples, but nevertheless game-changing events that could affect how the importance of a control (a mitigation strategy, in the context of this document) is rated, especially when the primary audience is government institutions.

But given the challenges in planning, funding, and rolling out a Top 4 mitigation program, one has to appreciate the consistency — it would be a nightmare to try to address a dynamic list of priorities within Government agencies with turning circles like oil tankers.

The Essential 8 can be seen as a good compromise, where organisations who are working towards the Top 4 (or have it in place already) can incorporate the additional 4 controls without disrupting the status quo, while the list stays relevant to changes in the cyber threat landscape. Seems like a pretty good approach.

[Figure: Essential 8, constructed using information available on the ASD website]

The lot

The overall list of 37 mitigation strategies is categorised under 5 headings:

  1. Mitigation strategies to prevent malware delivery and execution
  2. Mitigation strategies to limit the extent of cyber security incidents
  3. Mitigation strategies to detect cyber security incidents and respond
  4. Mitigation strategies to recover data and system availability
  5. Mitigation strategy specific to preventing malicious insiders

The companion document, ‘Strategies to Mitigate Cyber Security Incidents — Mitigation Details’, contains detailed implementation guidance for each of the 37 strategies, grouped under the above 5 headings.

Final Thoughts

Clearly this article was not written for cybersecurity gurus like you. It’s for all those people who haven’t yet deployed their holistic, risk-based, defence-in-depth inspired, ASD-Top-35-addressing security program in totality.

Okay, in all seriousness, if your security strategy is risk based and is aligned with where the organisation is heading, this change shouldn’t bother you too much. ASD too acknowledges that in some instances, the risk or business impact of implementing the Top 4 might outweigh the benefits.

Hence, the best bet continues to be a risk-based approach to security which is informed by the Top 4 (or the Essential 8, or the ISM, or ISO, or whatever your flavour happens to be) rather than attempting to blindly comply with a checklist.

And sadly, there is no video this time.


Article by Adrian Don Peter, Senior Security Advisor, Hivint

Voicemail and VoIP

Commonly overlooked security risks

Introduction

Every company has a phone system of some type, and just as with smartphones, these systems often offer much more than basic PABX functionality: VoIP, video conferencing, Unified Communications platforms and cloud-based PABXs are all becoming par for the course. Most, if not all, of these systems also include integrated voicemail functionality.

This article considers some of the avenues through which attackers may look to compromise the security of a company’s phone systems.

Overview

There are many reasons attackers may be interested in a company’s phone system, including:

  • Using it to make fraudulent calls;
  • Aiding in social engineering attacks;
  • Eavesdropping on sensitive calls;
  • Harvesting sensitive information from voicemails;
  • Compiling internal directories of company staff; and
  • Attempting to obtain call detail records for market intelligence and industrial espionage.

Complicating matters is that, unlike dedicated VoIP and video conferencing systems, which are often audited and maintained by IT security teams, traditional PABX, IVR and voicemail systems are often overlooked or fall outside the management purview of the IT and security teams.

Voicemail is still relied upon for everyday use by many organisations, and sensitive information is commonly left in voicemail messages. A motivated attacker targeting a company is highly likely to be able to gain valuable information by listening to the voicemail of system administrators or executives over a period of time.

With cheap VoIP services readily available, it is also simple for attackers to automate and scale up their attacks, making multiple simultaneous calls and completing them faster. There are many ways to automate phone attacks, and it is easy for an attacker to write a script or use existing software to automate a range of attacks, including those outlined below.

Types of Voicemail and VoIP Attacks

Brute force attacks

Having a four-digit PIN as the de facto standard for voicemail authentication means an attacker has a reasonable chance of successfully guessing the PIN through manual avenues, such as just using the phone keypad. Even with longer PINs, common patterns and repeating numbers are so widely used by staff that an attacker is likely to work out the PIN using a PIN list instead of having to manually enter every possible combination of digits.

Combine this with the fact that most voicemail and phone systems do not support account lockouts, nor have other security controls such as logging and alerting that are commonly applied to other IT systems, and a successful PIN brute force attack to gain access to a staff member's voicemail box is unlikely to prove difficult for a determined attacker. There are also methods to bypass lockout timers in those instances where they are in place: one technique that is almost always successful on large voicemail systems is attempting two or three common PINs against every extension or user account, which avoids triggering any account lockouts that are implemented.

Once an attack run is complete the attacker may well have access to several voicemail accounts.

Directory reconnaissance

The same automation that can be applied to PIN brute force attacks can also be used for other attacks against phone systems. One example is the creation of internal directories. An attacker can use the “find extension” feature of modern PABXs and voicemail systems to make a list of names, extensions and sometimes job titles within a company. They can also do this by calling and making a note of every greeting for every extension if the PABX doesn’t have a names directory feature.

Call-forwarding attacks

Another use attackers have for voicemail is the call forwarding feature which can be used for free phone calls or to aid in social engineering attacks.

Getting free phone calls is the simplest example. An attacker who has compromised a voicemail box sets up a call forward to a mobile phone, overseas number or other toll number they want to call, then calls back the local DDI number or the company's main toll-free number, enters the extension number and waits for the call forward to connect them to their pre-programmed toll number.


This attack can be reversed as well: an attacker can also use a compromised voicemail box to receive incoming calls, using the call forwarding feature to forward calls to a VoIP or pre-paid phone controlled by the attacker.

They will then have control of an extension on that company's phone system, which they can give to external parties to call back on so that they appear to be inside the company's office. This can be an invaluable means of facilitating a further social engineering attack.

Leveraging Caller ID

Most phone systems, when call-forwarding is used, display the caller-ID of the voicemail extension rather than the number of the originating party.

An attacker may be able to leverage the call-forward feature to masquerade as a known external party, such as appearing to be a known vendor or to be calling from inside the target company’s building. This can gain greater traction for social engineering purposes.


On a penetration testing engagement some years ago, my colleagues and I took over a manager’s extension at corporate headquarters and set up a call forward to the security desk at one of their rural sites. We called ahead to the security desk to add our names to the visitor register. The security desk asked very few questions because their phone displayed the call as originating from the manager’s phone at the organisation’s headquarters.

When we arrived, the security desk was expecting us and allowed us to enter without any restrictions.

Recommendations

The attack scenarios above relate to simple voicemail systems, which most people overlook, considering them to be just a straightforward way to store and retrieve messages. However, when you include customer-facing IVRs, VoIP systems, PABX systems, teleconferencing and video conferencing systems, Unified Communications systems, call-queue management systems and the endless other applications of modern phone systems, the possible vulnerabilities that can be exploited by attackers are almost endless.

Large companies often own a block of numbers, generally in lots of ten thousand. It is a good idea to periodically audit these number blocks to classify all lines, and then audit the possible attack vectors of all the phone systems connected to them.

Practicing the same security hygiene for voicemail that you do for other systems is critical, for example:

  • Disabling default accounts;
  • Auditing voicemail boxes for common PINs, or disallowing common or simple PINs (see the sketch after this list);
  • Setting a minimum PIN length of six digits;
  • Setting unique temporary PINs when provisioning new voicemail boxes;
  • Setting up lockout timers that don’t lose their state over multiple calls;
  • Disabling call forwarding features;
  • Restricting allowed forwarding numbers to local mobiles and landlines only;
  • Deactivating unused voicemail boxes; and
  • Applying logging and alerting if possible.
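
Along the lines of the second and third points above, here is a minimal sketch of a voicemail PIN audit in Python. The list of common PINs, the weakness rules and the sample extension-to-PIN export are all hypothetical and illustrative; a real audit would work from your own telephony system's export and a fuller wordlist.

```python
# Flag voicemail PINs that are too short, common, repeated or sequential.
COMMON_PINS = {"0000", "1111", "1234", "123456", "000000", "111111", "654321"}

def pin_is_weak(pin: str, min_length: int = 6) -> bool:
    if len(pin) < min_length:
        return True
    if pin in COMMON_PINS:
        return True
    if len(set(pin)) == 1:                      # e.g. 222222
        return True
    diffs = {int(b) - int(a) for a, b in zip(pin, pin[1:])}
    if diffs in ({1}, {-1}):                    # e.g. 123456 or 987654
        return True
    return False

# Hypothetical export of extension -> current voicemail PIN
voicemail_pins = {"2001": "1234", "2002": "847203", "2003": "222222"}

for extension, pin in voicemail_pins.items():
    if pin_is_weak(pin):
        print(f"Extension {extension}: weak voicemail PIN, require a reset")
```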

With some phone systems this is often easier said than done. We hope this article gets you thinking about how your company uses and manages phone systems and how they might be abused by attackers if appropriate vigilance is not exercised.


Article by John McColl, Principal, Hivint

The World Today

I originally wrote this message to try and give some perspective and comfort to the very diverse team within Hivint.

But given the great, and emotional, response, I decided it should be shared more broadly. You may not agree with the political sentiment, and I respect that, but hopefully everyone agrees with the need to support each other through tough times.

Nick.


Hey Team,

I was motivated to write this after seeing my friend Mike Cannon-Brookes, one of Atlassian's founders, put down his and co-founder Scott's thoughts in their blog article Your tired, your poor, your huddled masses yearning to breathe free…

The reality is that with the success their business has had, their voices will ring more loudly than mine, but equally I recognise the importance of everyone being heard, being vocal, and not sitting quietly while the world starts to burn.

I was fortunate enough to live in Boston back in 2013–14, and not far from my apartment was the Holocaust memorial, which included a version of Pastor Martin Niemöller’s (1892–1984) cautionary verse about the lack of conviction shown by those in German society who could change opinion, as the Nazis rose to power and started to progressively purge their enemies:

First they came for the Socialists, and I did not speak out —
Because I was not a Socialist.

Then they came for the Trade Unionists, and I did not speak out —
Because I was not a Trade Unionist.

Then they came for the Jews, and I did not speak out —
Because I was not a Jew.

Then they came for me — and there was no one left to speak for me.

To be clear, I’m not saying that Trump is Hitler, and I’m not saying a temporary visa ban is equivalent to the Holocaust. The point I’m making is in the power of that verse. I’m not a Syrian refugee, I’m not Mexican, I’m not a woman, and I’m not carrying the passport of an Islamic nation. But if I don’t speak out for them now, I can’t rightfully expect them to have my back when the system fucks me over somewhere down the line.

The immigration ban is not OK. Banning refugees is not OK. Walking back hard-won progress on reproductive rights is not OK.

We’re really fortunate in our team at Hivint to have 26 amazing people, and a level of diversity which for a small team is pretty extraordinary. Male, female, gay, straight, at least half a dozen or so religious belief systems, cat-people, dog-people, herbivores and omnivores, lots of weird and wonderful personality traits, and some great, original thought.

I just want you to all know that we also have a conscience. Yes, this is a business, but ahead of all of that, we’re all people. And Craig and I, and all the Leadership Team, genuinely care. Some of you will be more impacted by the current state of affairs than others. If you have any concerns, want to discuss anything, or just want a friendly ear to talk, feel free to grab us.

All bees are welcome here.

Nick