Last June the US Internal Revenue Service (IRS) issued a public request for information announcing its business need for an Artificial Intelligence (AI) system to detect and respond to cyber security and insider threats. The specification provided a revealing insight into AI uptake trends: it showed just how savvy the agency is about what this cutting-edge technology can deliver.
Moreover, it wanted a technological solution that provides ‘role-based access’ to meet the information requirements of ‘senior users and leadership’ – and not something that’s cut out for the IT crowd alone.
In fact, if anything, the IRS may be slightly behind the curve with respect to general AI take-up, according to a range of research focused on enterprise implementation of AI and Machine Learning (ML) technologies for defensive cyber security. According to research conducted in 2018 by Enterprise Strategy Group (ESG), 12% of businesses have already deployed AI-based security analytics ‘extensively’, and 27% have deployed AI-based security analytics on a limited basis; ESG expects implementation trends to gain momentum into 2019. An earlier survey conducted by Boston Consulting Group and MIT Sloan Management Review (the AI Global Executive Study & Research Report) found that about 20% of companies have already incorporated AI into some offerings or processes, and that 70% of executives ‘expect AI to play a significant role at their companies’ in the early 2020s and beyond. Moreover, investments in AI/ML are driven by a need to consolidate cyber security ‘posture’ (state of readiness), by bigger cyber threat challenges, and by digital transformation programmes that expect AI/ML to deliver multiple forms of return on investment (ROI).
Radware’s most recently published 2018-2019 Global Application and Network Security Report finds that 86% of surveyed businesses indicated they had explored AI and ML solutions. Almost half – 48% – point to quicker response times and better security as the two primary drivers for exploring ML-based solutions.
One in five organisations now rely on such technologies for protection; another 25% planned to adopt them within the year. These findings suggest that close to half of organisations will soon leverage AI capabilities within their information security function.
So: what’s motivating this shift? Most obviously, 63% cited the need for ‘better security’. Other benefits cited include simplifying management and addressing the skills gap (27% each). The skills gap is something of a double-edged sword for enterprise strategists. It would be comforting to think that automating cyber security operations would free up valuable IT experts to focus on addressing unpatched vulnerabilities or supporting new business development.
However, most AI/ML technology still needs some intervention from human managers – data scientists, analytics experts, and the like – and some industry-watchers aver that people with these skills are even harder to recruit than IT security professionals. What’s more, most AI/ML tools take time to ‘bed in’: they may be installed and running for months before they start to deliver positive results, and it can take still more time for security teams to act on those results. The shift to AI/ML, therefore, is inevitably a long-term investment designed for long-term paybacks, not short-term quick wins.
This commitment may surprise executives accustomed to thinking of AI as a ‘for tomorrow’ technology that’s still not ready for mainstream deployment; but in fact, AI adoption has been encouraged by some key shifts in its market maturity. AI/ML functionality is being introduced in support of conventional cyber security investments, rather than as a wholesale replacement for them, as some market-watchers had speculated in recent years.
On the plus side, the necessary investment can be spread out over a longer time-frame than with conventional cyber security solutions. And, it might be speculated, AI/ML technology will likely evolve more gradually than other security technology, so there’s less prospect of it falling behind as threat landscapes change. Next, as mentioned, AI/ML of different complexions is being written into enterprise digital transformation proposals, so that it is seen as a logical extension of a forward-facing cyber security strategy that will help overcome inherent technology problems. It’s probable that many businesses realise that cyber attackers are exploiting legacy vulnerabilities in conventional cyber defences, some of which could take years to resolve.
Growing threats that lurk within
Digital transformation could speed up this process. AI/ML also promise more effective ways to tackle the challenge of insider threats, by helping cyber security teams automate real-time threat detection and the task of crunching through log data in search of tell-tale patterns that show employees are up to no good. Vendors of AI/ML products also realise that, as c-suites and boards assume more responsibility for cyber security governance, they need to sell their solutions in a way that offers quantifiable business benefits in conjunction with IT security assurance. ML analyses data sets to build learning models and applies ‘learned’ generalisations to new situations; it has a controlled capacity to perform tasks without direct human programming. In cyber security, this capability should prove particularly effective against insider attacks and against security vulnerabilities that arise from employee mistakes or from their unwitting co-operation (or wilful collusion) with cyber threats.
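To make that idea concrete, here is a minimal, hypothetical sketch of the kind of unsupervised learning described above: an anomaly detector trained on per-user activity features derived from log data, which flags unusual behaviour for analyst review. The feature set, the library choice (scikit-learn’s IsolationForest) and the thresholds are illustrative assumptions, not a description of any vendor’s product.

```python
# Illustrative sketch only: learn 'normal' per-user behaviour from log-derived
# features, then flag outliers for human review. Features are hypothetical.
from sklearn.ensemble import IsolationForest
import numpy as np

# Each row: [logins_per_day, after_hours_logins, mb_downloaded, failed_auths]
baseline_activity = np.array([
    [12, 0, 150, 1],
    [10, 1, 120, 0],
    [14, 0, 200, 2],
    [11, 2, 90, 1],
    # ... in practice, many more rows drawn from historical logs ...
])

# Train without labelled examples of misuse - the model infers what is usual.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline_activity)

# Score today's activity: a prediction of -1 marks an outlier worth a look.
todays_activity = np.array([[13, 9, 4800, 7]])  # heavy after-hours download
if detector.predict(todays_activity)[0] == -1:
    print("Anomalous user behaviour - escalate to the security team")
```

The point of the sketch is the workflow, not the model: the detector automates the log-crunching, while the decision to escalate remains with a human analyst.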
The claim that AI/ML can help cyber security defences is itself an acknowledgment of the insider threat problem, which is now regarded as a significant threat type in its own right. As reported in the Autumn 2018 issue of Cyber Security Europe, security breaches due to rogue employees and trusted-but-vulnerable employees – plus the occasional feckless contractor who works within the security ‘perimeter’ – are a growing concern for security governance.
The 2018 Insider Threat Report from CA Technologies is based on a survey of 472 enterprise professionals, ranging from executives and managers to senior IT security practitioners, at organisations of various sizes. The report’s key findings included the fact that 90% of organisations polled ‘feel vulnerable to insider attacks’, though for different reasons. Tellingly, 51% of respondents to the survey were more concerned about accidental/unintentional data breaches perpetrated by insiders, compared to 47% whose concern was more for malicious/deliberate insider action. AI/ML-minded organisations compile their own threat intelligence logs of malevolent insider actions: while not absolutely predictive, this intelligence can inform the allocation of monitoring resources.
In this way, the parts of the organisation from which past (known) insider threats have emerged – and that could include boardrooms and c-suites – can be the most actively monitored, making the best use of security resources as firms grow.
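As a purely illustrative example (not drawn from the CA report), a security team might apportion a fixed pool of analyst monitoring hours in proportion to the insider incidents each business unit has historically produced. The units, counts and hours below are hypothetical:

```python
# Hypothetical sketch: weight monitoring effort by past insider incidents.
past_incidents = {
    "finance": 7,
    "engineering": 3,
    "c_suite": 2,      # past threats can come from the top, too
    "contractors": 8,  # workers inside the security 'perimeter'
}

total_hours = 400  # assumed weekly analyst-hours available
total_incidents = sum(past_incidents.values())

# Allocate hours proportionally to each unit's share of past incidents.
allocation = {
    unit: round(total_hours * count / total_incidents)
    for unit, count in past_incidents.items()
}
print(allocation)  # {'finance': 140, 'engineering': 60, 'c_suite': 40, 'contractors': 160}
```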
PwC remains confident that AI is set to play an increasing part in the solution; indeed, cyber defence will be many enterprises’ first experience with AI.
In its 2018 AI Predictions report, it points out that scalable ML techniques, combined with cloud technology, already analyse large amounts of data and power real-time threat detection and analysis. AI capabilities can also identify ‘hot spots’ where cyber attacks are surging and provide cyber security intelligence that informs governance strategy. However, while AI will become ‘an important part of every major organisation’s cyber security toolkit’, as PwC predicts, cyber defence teams should know that it will soon be part of the cyber attacker’s toolkit too.
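One simple way to picture ‘hot spot’ detection of the kind PwC describes is a rolling baseline: flag a network segment when its current attack-event count spikes well above its recent average. The window size and the 3x surge threshold below are illustrative assumptions, not PwC’s method:

```python
# Hedged sketch: flag segments whose hourly event count surges past a rolling
# baseline. Windowing and thresholds are arbitrary illustrative choices.
from collections import deque

def detect_hotspots(history, current_counts, surge_factor=3.0):
    """history: {segment: deque of recent hourly event counts}
       current_counts: {segment: events observed this hour}"""
    hotspots = []
    for segment, count in current_counts.items():
        recent = history.setdefault(segment, deque(maxlen=24))
        baseline = sum(recent) / len(recent) if recent else 0.0
        if baseline and count > surge_factor * baseline:
            hotspots.append((segment, count, baseline))
        recent.append(count)  # roll the window forward
    return hotspots

history = {"dmz": deque([40, 35, 42, 38], maxlen=24)}
print(detect_hotspots(history, {"dmz": 210}))  # [('dmz', 210, 38.75)]
```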
‘Attackers will use AI, so defenders will have to use it too,’ PwC says. Future cyber attacks and counter-attacks will not simply be two sets of advanced computer systems ‘battling it out’. If an enterprise’s IT function or cyber security provider isn’t already using AI, it has to ‘start thinking immediately about AI’s short- and long-term security applications’.