
Monthly Archives: December 2017

The Cyber Security Ecosystem: Collaborate or Collaborate. It’s your choice.


As cyber security as a field has grown in scope and influence, it has effectively become an ‘ecosystem’ of multiple players, all of whom either participate in or influence the way the field develops and/or operates. At Hivint, we believe it is crucial for those players to collaborate and work together to enhance the security posture of communities, nations and the globe, and that security consultants have an important role to play in facilitating this goal.

The ecosystem untangled

The cyber security ecosystem can broadly be divided into two categories, with some players (e.g. governments) having roles in both categories:

Macro-level players

Consists of those stakeholders who are in a position to exert influence on the way the cyber security field looks and operates at the micro-level. Key examples include governments, regulators, policymakers and standards-setting organisations and bodies (such as the International Organization for Standardization, the Internet Engineering Task Force and the National Institute of Standards and Technology).

Micro-level players

Consists of those stakeholders who, both collectively and individually, undertake actions on a day-to-day basis that affect the community’s overall cyber security posture (positively or negatively). Examples include end users/consumers, governments, online businesses, corporations, SMEs, financial institutions and security consultants (although as we’ll discuss later, the security consultant has a unique role that bridges across the other players at the micro-level).

The macro level has, in the past, been somewhat muted in its involvement in influencing developments in cyber security. Governments and regulators, for example, often operated at the fringes of cyber security and primarily left things to the micro-level. While collaboration occurred in some instances (for example, in response to cyber security incidents with national security implications), it was by no means expected.


The formalisation of collaborative security

This is rapidly changing. We are now regularly seeing more formalised models being introduced, or planned for introduction, that either strongly encourage or require collaboration on cyber security issues between multiple parties in the ecosystem.

Recent prominent examples include proposed draft legislation in Australia that would, if implemented, require nominated telecommunications service providers and network operators to notify government security agencies of network changes that could affect the ability of those networks to be protected[1], proposals for introducing legislative frameworks to encourage cyber security information sharing between the private sector and government in the United States[2], and the introduction of a formal requirement in the European Union for companies in certain sectors to report major security incidents to national authorities[3].

There are any number of reasons for this change, although the increasing public visibility given to cyber security incidents is likely at the top of the list (in October alone, two of Australia’s major retailers suffered security breaches). In addition, there is a growing predilection toward collaborative models of governance in a range of cyber topic areas that have an international dimension (for example, the internet community is currently involved in deep discussions around transitioning the governance model for the internet’s DNS functions away from US government control towards a multi-stakeholder model). With cyber security issues frequently having a trans-national element — particularly discussions around setting ‘norms’ of conduct around cyber security at an international level[4] — it’s likely that players at the macro-level see this as an appropriate time to become more involved in influencing developments in the field at the national level.

Given this trend, it’s unlikely to be long before the macro-level players start to require compliance with minimum standards of security at the micro-level. As an example, the proposed Australian legislation referred to above would require network operators and service providers to do their best (by taking all reasonable steps) to protect their networks from unauthorised access or interference. And in the United States, a Federal Court of Appeals recently decided that their national consumer protection authority, the Federal Trade Commission, had jurisdiction to determine what might constitute an appropriate level of security for businesses in the United States to meet in order to avoid potential liability[5]. In Germany, legislation recently came into effect requiring minimum security requirements to be met by operators of critical infrastructure.

Security consultants — the links in the collaboration chain

Whatever the reasons for the push towards ‘collaborative’ security, it’s the micro-level players who work in the cyber security field day-to-day who will ultimately need to respond as players at the macro-level place more formal expectations on their security posture.

Hivint was in large part established to respond to this trend — we believe that security consultants have a crucial role to play in this process, including through building a system in which the outputs of consulting projects are shared within communities of interest who are facing common security challenges, thereby minimising redundant expenditure on security issues that other organisations have already faced. This system is called “The Security Colony” and is available now[6]. For more information on the reasons for its creation and what we hope to achieve, see our previous article on this topic.

We also believe there is a positive linkage between facilitating more collaboration between players at the micro-level of the ecosystem, and encouraging the creation of more proactive security cultures within organisations. Enabling businesses to minimise expenditure on security problems that have already been considered in other consulting projects enables them to focus their energies on implementing measures to encourage more proactive security — for example, as we discussed in a previous article, by educating employees on the importance of identifying and reporting basic security risks (such as the inappropriate sharing of system passwords). And encouraging a more proactive security culture within organisations will ultimately strengthen the nation’s overall cyber security posture and benefit the community as a whole.


Article by Craig Searle, Chief Apiarist, Hivint


[1] See in particular the proposed changes to section 313 of the Telecommunications Act 1997 (Cth).
[2] See https://www.fas.org/sgp/crs/misc/R44069.pdf for a description of these proposals.
[3] See http://ec.europa.eu/digital-agenda/en/news/network-and-information-security-nis-directive
[4] See for example http://www.project-syndicate.org/commentary/international-norms-cyberspace-by-joseph-s–nye-2015-05
[5] See http://www.technologylawdispatch.com/2015/08/privacy-data-protection/third-circuit-upholds-ftcs-authority-in-wyndham-case/?utm_source=Mondaq&utm_medium=syndication&utm_campaign=View-Original
[6] https://www.securitycolony.com/

An Elephant in Ballet Slippers? Bringing Agility To Cyber Security


As enterprise IT and development teams increasingly embrace Agile concepts, we are seeing a growing need for cyber security teams to be similarly agile and able to adapt to rapidly evolving environments. Cyber security teams that will not or cannot make the necessary changes will eventually find themselves irrelevant: too far removed from the function and flow of the organisation to provide meaningful value, resulting in increased risk for the organisation and its interests.

So, how do we fit the elephant (cyber security) with ballet slippers (agility)?

Firstly, in an age of devops, continuous integration and continuous deployment, it is critical to understand the evolving role of the cyber security team. The team’s focus on rigorous definition, enforcement and assurance of security controls is giving way to active education, collaboration and continual improvement within non-traditional security functions. This is primarily because the developers, the operations team and the sysadmins have all become the front line for the security team. These teams spend their working lives making decisions that will impact the security of the products and platforms, and ultimately the security of the enterprise. Rather than risk being seen as the ‘department of no’, the cyber security team needs to embrace the change that agile development brings and find ways to improve the enterprise through enhancing the skills and capabilities of these teams.

First and foremost is education. If the devops team don’t know about, or even worse don’t value, security controls and secure practices, then the systems they develop and maintain will never be secure. It is the role of the cyber security team to ensure that all members of the development and operations teams understand that security doesn’t need to be difficult: it can be implemented well if it is inherent to the development process. This is typically achieved through ongoing training and education, both formal and informal.

Secure development and devops training courses are widely available and are absolutely a valuable part of the toolkit, but they tend to be rather static in nature, and bad habits often creep back in over time. Informal education through peer review, feedback and information sharing is far more consistent and reliable, as long as a clear security ethos is established for the team to work from. This is particularly the case for senior members of the team passing on their knowledge to newer or less experienced members.

Security champions are crucial in filling this role. Ideally a security champion is a member of the security team that works with the development team on a daily, even hourly, basis. One of the most important parts of this role is that the security champion needs to be able to ‘speak geek’ and understand the challenges facing the team when trying to rapidly develop applications. A background in development or devops means that they can speak from experience and be empathetic when dealing with the development teams. The security advice they provide needs to be pragmatic, weighing up the relative risks and benefits, and it needs to be delivered in a way that is meaningful to the rest of the development team.

An ability to get their ‘hands dirty’ and actually assist in aspects of code development or systems maintenance is definitely a bonus. The security champion also needs to drive the implementation of tools and services to support the rapid identification, assessment and remediation of security vulnerabilities in code or platforms. Wherever possible, these security tools need to be seamlessly built into the existing development, deployment and testing tools (think Bamboo, Jira, Jenkins, CircleCI and Selenium) so that security assessment becomes transparent to the overall development and deployment processes. The security champion should also be responsible for bringing a cyber security context into the design stages of development. This is often best achieved by flagging stories (Agile-speak for detailed use-cases) as ‘secure’, meaning that particular attention needs to be paid to that component: user input, authentication, database calls, and connections to external systems/APIs will all require additional analysis.
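As an illustrative sketch of such a gate (the story records and field names below are hypothetical, not a real Jira or Bamboo schema), a pipeline step might refuse to pass while any story flagged ‘secure’ is missing a completed security review:

```python
# Sketch of a CI 'security gate'. The story records and their fields
# ("labels", "security_review_done") are hypothetical illustrations,
# not a real issue-tracker schema.

def security_gate(stories):
    """Return stories that block the build: flagged 'secure' but
    without a completed security review."""
    return [
        s for s in stories
        if "secure" in s.get("labels", []) and not s.get("security_review_done")
    ]

stories = [
    {"id": "APP-101", "labels": ["secure"], "security_review_done": True},
    {"id": "APP-102", "labels": ["ui"]},
    {"id": "APP-103", "labels": ["secure", "api"], "security_review_done": False},
]

blockers = security_gate(stories)
if blockers:
    # In a real pipeline, this branch would fail the build step.
    print("Blocked pending security review:", [s["id"] for s in blockers])
```

In practice the story data would come from the team’s tracker, and the gate would simply run as one more step alongside the existing build and test stages.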

Finally, and possibly most importantly, it is critical that organisations develop a culture of security. Insecure practices must be treated as unacceptable in day-to-day business behaviour. A good comparison is the nature of OH&S (Occupational Health & Safety) practices in the workplace today. 15–20 years ago, the typical workplace was not as safe as it is now: trip hazards, puddles of liquid and the like weren’t necessarily seen as a big issue.

Nowadays staff recognise them as a safety risk and have been trained to respond accordingly or raise the issue with someone who will. Cyber security needs to arrive at the same point. Staff need to be aware of what constitutes ‘safe’ and ‘unsafe’ cyber security behaviours, and feel confident in calling out unsafe practices.

Observing a team member sharing a password or leaving a workstation unlocked shouldn’t be something that is seen as normal practice — it needs to be identified as a risk and addressed immediately, with the security team being part of the solution to the problem. Pointing out an insecure practice without providing a practical solution will only alienate the security team. As staff become aware and feel confident in calling out unsafe activities, with the support of the security team to address them, this behaviour becomes part of the cultural DNA and is more readily passed on to new team members and new initiatives.

Agile development does present a number of challenges to a cyber security team. Trying to adhere to the same practices and controls that were implemented 5–10 years ago is ultimately destined for failure, as the rate of change is now too rapid for them to be effective. Adapting practices to maintain relevance to the evolving environment is the only way to remain effective and best protect the organisation and its customers.


Article by Craig Searle, Chief Apiarist, Hivint

Maturing Organisational Security and Security Service Catalogues


One of the key objectives for an information security professional is providing assurance that the systems which are implemented, or are soon to be implemented, are secure. A large part of this involves engaging with business and project teams proactively to ensure that security needs are met, while trying hard not to interfere with on-time project delivery.

Unfortunately, we’re not very good at it.

Recently, I agreed to conduct a security risk assessment (SRA) of a client’s SFTP solution, which they intended to use to transfer files to a vendor in place of their existing process of emailing the files. When I sat down to discuss the security requirement with the Solution Designer, he told me that an SRA had been done before: not just on the same design pattern, but on the exact same SFTP solution. They were simply adding an additional existing vendor to the solution to improve the security of their inter-company file transfer process. The organisation didn’t know how to go about evaluating the risks of this change, so it used the ‘best fit’ security-related process available to it, which just happened to be an SRA.

Granted, a new vendor might need to be assessed for the operational risk associated with uploading files to our client’s environment, and a fresh assessment might be warranted if the SFTP solution’s configuration had changed. But in this case the vendor had been working with the client for some time, so no further risk was being introduced; the result was simply a more secure business process. The risk was getting lower, not higher.

While this is only one example, this scenario is not uncommon across many organisations we work with, across many industry sectors, and it’s only going to get harder. With more organisations moving to an agile development methodology and cloud deployments, ensuring security keeps up with new developments throughout the business is going to be critical to maintaining at least a modicum of security in these businesses.

So, if you’re getting asked to perform a risk assessment the day before go-live (yes, this still happens), you’re doing it wrong.

If you’re routinely performing your assessments of systems and technology within the project lifecycle, you’re doing it wrong.

If you’re engaging with your project teams with policy statements and standards documents, yes, unfortunately you’re also doing it wrong.

Projects are where things — often big things — change in an organisation’s business or technology environment. And where there is change, there is generally a key touch point for the security team. Projects will generally introduce the biggest potential vulnerabilities to your environment, but if there is an opportunity to positively influence the security outcomes at your organisation, it will also be as part of a project.

Once a system is in, it’s too late. If you haven’t already given your input to get a reasonably secure system, the project team will have moved on, their budget will have gone with them, and you’ll be left filling out that risk assessment that sits on some executive’s desk waiting for the risk to be accepted. Tick.

But on the flip-side, if you’re not proactively engaging with project teams and your business to provide solutions for them, you’re getting in the way.

Let’s face it, no project manager wants to read through dozens of pages of security policy and discern the requirements for their project — you may as well have told them through interpretive dance.

So, what’s the solution?

The solution is to look to the mature field of IT Service Management, and the concept of having a Service Catalogue.

A Security Services Catalogue is two things:

Firstly, it is a list of the security and assurance activities offered by the security team, generally as part of the system development lifecycle. These services may include risk assessments, vulnerability assessments, penetration testing and code reviews, among others. The important thing is that the services are well defined in terms of their inputs, outputs and process, and the required effort and price, so that the business and project teams can effectively incorporate them into their budgets and schedules.
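As a sketch of what ‘well defined’ could look like in practice (the fields, service names and figures below are illustrative assumptions, not an actual catalogue), each catalogue entry might record a service’s inputs, outputs, effort, price, and the lifecycle stage at which it must be completed:

```python
from dataclasses import dataclass

# Illustrative catalogue entry; the fields and figures are assumptions,
# not a real organisation's catalogue schema.
@dataclass
class SecurityService:
    name: str
    inputs: list           # what the project must supply
    outputs: list          # what the security team delivers
    effort_days: int       # indicative effort
    price_aud: int         # indicative price, for project budgeting
    lifecycle_stage: str   # when in the SDLC it must be completed

catalogue = [
    SecurityService(
        name="Security risk assessment",
        inputs=["solution design", "data classification"],
        outputs=["risk register", "treatment plan"],
        effort_days=5,
        price_aud=8_000,
        lifecycle_stage="design",
    ),
    SecurityService(
        name="Penetration test",
        inputs=["test environment", "credentials per role"],
        outputs=["findings report", "retest of fixes"],
        effort_days=10,
        price_aud=16_000,
        lifecycle_stage="pre-production",
    ),
]

# A project team can price its assurance activities up front:
total = sum(s.price_aud for s in catalogue)
```

Because effort and price are explicit, a project manager can sum the relevant entries straight into the project budget and schedule at planning time.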

Secondly, it is a list of the security services already implemented within the organisation and operated by or on behalf of the security team, which have been through your assurance processes and are effectively “approved for use” throughout the organisation. These services might be implementations of a secure design pattern or blueprint, or form part of one of those blueprints. To get an idea, have a look at the OSA Security Architecture Landscape, or the Mozilla Service Catalog.

Referring quickly to Mozilla’s approach, a good example is their logging and monitoring (SIEM) service. Given a regulatory and policy requirement for logging and monitoring across all systems in your environment, using the standardised service allows a project team to save money and time. Of course, using the already implemented tool is also common sense, but writing it down in a catalogue ensures that the security services on offer are communicated to the business, and that the logging and monitoring function for your new system is a known quantity and effective.

The easiest way to describe this approach is “control inheritance”: where a system uses a particular implementation of a control, that system inherits the characteristics of that control. Think of Active Directory, an access control mechanism. Once you’ve implemented and configured it securely, and it has been evaluated, you have a level of assurance that the control is effective. For all systems then using Active Directory, you have a reasonable level of assurance that they are access controlled, and you can spend your time evaluating other security aspects of the system. So communicate to your organisation, via your Security Service Catalogue, that they can use it.
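A minimal sketch of the inheritance idea (the service and control names are hypothetical): the controls a system inherits from approved shared services are subtracted from the full required control set, leaving only the residue that still needs bespoke assurance.

```python
# Hypothetical sketch of control inheritance: approved shared services
# cover certain control areas; whatever a system doesn't inherit must
# still go through the full assurance process.

APPROVED_SERVICES = {
    "Active Directory": {"access control", "authentication"},
    "Central SIEM": {"logging", "monitoring"},
}

REQUIRED_CONTROLS = {"access control", "authentication",
                     "logging", "monitoring", "encryption at rest"}

def residual_controls(services_used):
    """Controls still needing bespoke assurance for this system."""
    inherited = set()
    for svc in services_used:
        inherited |= APPROVED_SERVICES.get(svc, set())
    return REQUIRED_CONTROLS - inherited

# A system built on AD and the central SIEM only needs bespoke
# assurance work on encryption at rest:
remaining = residual_controls(["Active Directory", "Central SIEM"])
```

The same subtraction is what lets an assurance team scale: every approved service used shrinks the scope of the next system’s review.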

And if your Project team wants to get creative? No problem, but anything not in the catalogue needs to go through your full assurance process. That — quite rightly — means risk assessments, control audits, code reviews, penetration tests, and vulnerability scans, which accurately reflects the fact that everything will be much easier for everyone if they pick from the catalogue where possible.

So, how does this work in practice?

Well, firstly, start by defining what level of assurance you need for a system to go into production, or to meet compliance obligations. For example, to meet PCI compliance you’ll at least have to have your system vulnerability scanned and penetration tested. Create your service catalogue around these, and define business rules for their use and the system development lifecycle stages in which they must be completed.

Secondly, you need to break down your environment into its constituent parts (specifically the security components), review and approve each of those parts, and add them to your Security Service Catalogue. Any system then using those security services as part of its functionality, inherits the security of those services, and you can have a degree of assurance that the system will be secure (at least to the degree that the system is solely comprised of approved components).

The benefits are fourfold:

1. Project teams can simply select the services they want to integrate with, and know that those services meet the requirements of the security policy. No mess, no fuss.

2. Projects go faster: project teams know what is expected of them, and aren’t held up by the security inquisitor demanding their resources’ time.

3. Budget predictability. Project teams know the costs which need to be included in their budget up front. They can also choose a security service which is a known quantity, which means there is a lower chance of a risk eventuating that requires them to pay to change aspects of the system to meet compliance or remediate a vulnerability.

4. You don’t need to check the security of the re-used components used by those projects over and over again. For example, you might use an on-premise Active Directory instance for identity and access management; or maybe it’s hosted in Azure. Perhaps you use Okta, a cloud-based SaaS identity and access control service. For logging and monitoring, you might use Splunk or AlienVault as your organisation-wide security monitoring service, or maybe you outsource it to AlertLogic. Whatever. Perform your due diligence, and add it to your catalogue.

Once it’s in your catalogue, you should assess it annually, as part of your business as usual security practices — firstly for risk, secondly at a technical level to validate your risk findings, and finally in a market context to see if there are better controls now available to address the same risk issue.

I’ve been part of a small team building a security certification and accreditation program from scratch, and I have seen that the only way to scale the certification process, and to ensure sufficient depth of security review across the multitude of systems present in most organisations, is to minimise unnecessary re-hashing of solution reviews, using these “control inheritance” principles.

Thirdly, develop a Security Requirements Document (SRD) template based upon your Security Services Catalogue. This is where you define the services available and requirements for your project teams, and make the choices really easy for them. Either use the services in the security services catalogue, or comply with all the requirements of the Password Policy, Access Control Policy, Encryption Policy, etc. After a time, your Project Lifecycle will mature, your Security Services will become more standardised and robust, and your life will become significantly easier.

Lastly, get involved with your project teams. Your project teams are not security experts; you are. The sooner you make it easy for them to access the resources and expertise you have available, the sooner they can make the best decisions for your organisation, and the more secure your organisation will be. Make the secure way the easy way, and everyone’s life will be a little more comfortable.


Article by Ben Waters, Senior Security Advisor, Hivint

Security Collaboration — The Problem and Our Solution


Colleagues, the way we are currently approaching information security is broken.

This is especially true with regard to the way the industry currently provides, and consumes, information security consulting services. Starting with Frederick Winslow Taylor’s “Scientific Management” techniques of the 1890s, consulting is fundamentally designed for companies to get targeted specialist advice to allow them to find a competitive advantage and beat the stuffing out of their peers.

But information security is different. It is one of the most wildly inefficient things to try to compete on, which is why most organisations are more than happy to say that they don’t want to compete on security (unless their core business is, actually, security).

Why is it inefficient to compete on security? Here are a couple of reasons:

Customers don’t want you to. Customers quite rightly expect sufficient security everywhere, and want to be able to go to the florist with the best flowers, or the best-priced flowers, rather than having to figure out whether that particular florist is more or less secure than the other one.

No individual organisation can afford to solve the problem. With so much shared infrastructure, so many suppliers and business partners, and almost no ability to recoup the costs invested in security, it is simply not cost-viable to throw the amount of money really needed at the problem. (Which, incidentally, is why we keep going around in circles saying that budgets aren’t high enough — they aren’t, if we keep doing things the way we’re currently doing them.)

Some examples of how our current approach is failing us:

We are wasting money on information security governance, risk and compliance

There are 81 credit unions listed on the APRA website as Authorised Deposit-Taking Institutions. According to the ABS, in June 2013 (the most recent data), there were 77 ISPs in Australia with over 1,000 subscribers. The thought that these 81 credit unions are independently developing their own security and compliance processes, and the 77 ISPs are doing the same, despite the fact that the vast majority of their risks and requirements are identical to those of their peers, is frightening.

The wasted investment in our current approach to information security governance is extreme. Five or so years ago, when companies started realising that they needed a social media security policy, hundreds of organisations engaged hundreds of consultants, to write hundreds of social media security policies, at an economy-wide cost of hundreds of thousands, if not millions, of dollars. That. Is. Crazy.

We need to go beyond “not competing” and cross the bridge to “collaboration”. Genuine, real, sharing of information and collaboration to make everyone more secure.

We are wasting money when getting technical security services

As a technical example, I met recently with a hospital where we will be doing some penetration testing. We will be testing one of their off-the-shelf clinical information system software packages. The software package is enormous — literally dozens of different user privilege levels, dozens of system inter-connections, and dozens of modules and functions. It would easily take a team of consultants months, if not a year or more, to test the whole thing thoroughly. No hospital is going to have a budget to cover that (and really, they shouldn’t have to), so rather than the 500 days of testing that would be comprehensive, we will do 10 days of testing and find as much as we can.

But as this is an off-the-shelf system, used by hundreds of hospitals around the world, there are no doubt dozens, maybe even hundreds, of the same tests happening against that same system this year. Maybe there are 100 distinct tests, each of 10 days’ duration, being done. That’s 1,000 days of testing — more than enough to provide comprehensive coverage of the system. But instead, everyone is getting a 10-day test done, and we are all worse off for it. The hospitals have insecure systems, and we — as potential patients and users of the system — wear the risk of it.

The system is broken. There needs to be collaboration. Nobody wants a competitive advantage here. Nobody can get a competitive advantage here.

So what do we do about it?

There is a better way, and Hivint is building a business and a system that supports it. This system is called “The Colony”.

It is an implementation of what we’re calling “Community Driven Security”. This isn’t crowd-sourcing but involves sharing information within communities of interest who are experiencing common challenges.

The model provides benefits to the industry both for the companies who today are getting consulting services, and for the companies who can’t afford them:

Making consulting projects cheaper the first time they are done. If a client is willing to share the output of a project (that has, of course, been de-sensitised and de-identified), then we can reduce the cost of that consulting project by effectively “buying back” the IP being created, in order to re-use it. Clients get the same services they always get, and the sharing of the information will have no impact on their security or competitive position. So why not share it and pocket the savings?

Making that material available to the community and offering an immediate return on investment. Through our portal — being launched in June — for a monthly fee of a few hundred dollars, subscribers will be able to get access to all of that content. That means that for a few hundred dollars a month, a subscriber will be able to access the output from hundreds of thousands of dollars’ worth of projects, every month.

Making subsequent consulting projects cheaper and faster. Once we’ve completed a certain project type — say, developing a suite of incident response scenarios and quick reference guides — then the next organisation who needs a similar project can start from that and pay only for the changes required (and if those changes improve the core resources, those changes will flow through to the portal too).

Identifying GRC “Zero Days”. Someone, somewhere, first identified that organisations needed a social media security policy, and got one developed. There was probably a period of months, or even years, between that point and when it became ubiquitous. Through the portal, organisations which haven’t even contemplated that such a need may exist will be able to see that it has been identified and delivered, and if they want to address the risk before it materialises for them, they have the chance. And there is no incremental cost over membership of the portal to grab it and use it.

Supporting crowd-funding of projects. The portal will provide the ability for organisations to effectively ‘crowd fund’ technical security assessments of software or hardware that is used by multiple organisations. The maths is pretty simple: if two organisations are each looking at spending $30,000 to test System X, getting 15 days of testing for that investment, then by each putting $20,000 into a central pool to test System X, they’ll get 20 days of testing and save $10,000 each. More testing, for lower cost, resulting in better security. Everyone wins.
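The arithmetic in that example, laid out explicitly (assuming the flat day rate implied by the figures above):

```python
# Crowd-funded testing arithmetic from the example above.
day_rate = 30_000 / 15            # $30,000 buys 15 days => $2,000/day

independent_cost_each = 30_000    # each organisation tests alone
independent_days_each = independent_cost_each / day_rate   # 15 days each

pooled_cost_each = 20_000         # each contributes to a shared pool
pooled_days_total = (2 * pooled_cost_each) / day_rate      # 20 days, shared

saving_each = independent_cost_each - pooled_cost_each     # $10,000 each
```

The same calculation scales with more participants: every extra contributor either deepens the shared test or lowers the per-organisation cost.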

What else is going into the portal?

We have a roadmap that stretches well into the future. We will be including Threat Intelligence, Breach Intelligence, Managed Security Analytics, the ability to interact with our consultants and ask either private or public questions, the ability to share resources within communities of interest, project management and scheduling, and a lot more. Version 1 will be released in June 2015 and will include the resource portal (i.e. the documents from our consulting engagements), Threat Intelligence and Breach Intelligence, plus the ability to interact with our consultants and ask private or public questions.

“Everyone” can’t win. Who loses?

The only people who could potentially lose out of this are security consultants. But even there, we don’t think that will be the case. It is our belief that the market is supply-side constrained — in other words, we believe we are going to massively increase the ‘output’ from the economy-wide consulting investment in information security; but we don’t expect companies will spend less (they’ll just do more, achieving better security maturity and raising the bar for everyone).

So who loses? Hopefully, the bad guys, because the baseline of security across the economy gets better and it costs them more to break in.

Is there a precedent for this?

The NSW Government Digital Information Security Policy has as a Core Requirement, and a Minimum Control, that “a collaborative approach to information security, facilitated by the sharing of information security experience and knowledge, must be maintained.”

A lot of collaboration on security so far has been about securing the collaboration process itself. For example, health organisations collaborating to ensure that health data flowing between them is secure throughout that collaborative process. But we believe collaboration needs to be broader: not just securing the collaborative footprint, but securing the entirety of each other’s organisations.

Banks and others have long had informal networks for sharing threat information, and the CISOs of banks regularly get together and share notes, as do the CISOs of global stock exchanges. There’s even a forum called ANZPIT, the Australian and New Zealand Parliamentary IT forum, where the IT managers of the various state and federal Parliaments come together and share information across all areas of IT. But in almost all of these cases, while the meetings and discussions occur, the on-the-ground sharing of detailed resources happens much less.

The Trusted Information Sharing Network (TISN) has worked to share — and in many cases develop — in-depth resources for information security. (In our past lives, we wrote many of them.) But these are $50K–$100K endeavours per report, generally limited to two or three reports per year, and they generally take a fairly heavyweight approach to the topic at hand.

Our belief is that while “the 1%” of attacks — the APTs from China — get all the media love, we can do a lot of good by giving organisations very practical and pragmatic support to address the 99% of attacks that aren’t State-sponsored zero-days. Templates, guidelines, lists of risks, sample documents, and other highly practical material are the core of what organisations really need.

What if a project is really, really sensitive?

Once project outcomes are de-identified and de-sensitised, they’re often still very valuable to others, and of no real consequence to the originating company. If you’re worried, you can review the resources before they are published.

So how does it work?

You give us a problem; we’ll scope it, quote it, and deliver it with expert consultants. (This part of the experience is the same as your current consulting engagements.)
We offer a reduced fee for service delivery (percentage reduction dependent on re-usability of output).
Created resources, documents, and de-identified findings become part of our portal for community benefit.

Great. Where to from here?

There are two things we need right now:

Consulting engagements that drive the content creation for the portal. Give us the chance to pitch our services for your information security consulting projects. We’ve got a great team, the costs are lower, and you’ll also be helping our vision of “community driven security” become a reality. Get in touch and tell us about your requirements to see how we can help.
Sign up for the portal (you’ve done this bit!) and get involved — send us some questions, download some documents, subscribe if you find it useful.
And of course we’d welcome any thoughts or input. We are investing a lot into this, and are excited about the possibilities it is going to create.


Article by Nick Ellsmore, Chief Apiarist, Hivint