Much like the rest of the business world, cybersecurity teams are increasingly hearing how AI will revolutionise the way they work – usually for the better. And it’s no longer just hype – recent research from LinkedIn showed that machine learning jobs are on the rise, with over six times as many roles available now as a year ago. AI tasks are moving into the everyday business and security spheres.
The expansion comes with a problem – namely, the need both to fill those roles with skilled individuals and to upskill your existing workforce to work with the new technology. The number of skills required to become an expert in AI can be overwhelming. It’s a catch-22 situation – if cybersecurity professionals need to learn too many skills to work with AI, it could end up causing them more work than it saves.
But it’s nevertheless the case that AI will have a transformative impact on cybersecurity practice over the coming few years – so no-one can afford to ignore it. How can you strike the right balance?
The answer lies in the children’s story of Goldilocks and the three bears: not too much training, not too little – just the right amount. AI training programmes should be carefully aligned to business need and individual responsibility, paring the mountain of material down into usable, necessary chunks. It’s the same principle that guides all technology change: don’t overburden your teams with masses of learning, but also don’t allow your company to slip behind the competition through fear of change.
That balance is particularly tough when it comes to AI. It’s been so heavily trailed over the last few years that the entire world seems to be holding its breath for the inevitable robot invasion. But in reality, we’re still a long way from truly independent artificial intelligence. Too often, when people talk about AI, they mean some derivation of automation or machine learning – processes which still require a good amount of human input and oversight.
We’re still essentially at the level of human-guided computer programmes – but as the complexity of their functions increases, so does the amount of training required to run them. With that in mind, here are three key areas to consider when implementing AI and automation.
Slim it down to the essentials
Before you get started, dedicate plenty of time to working out which machine learning solutions are best matched to your security needs. Don’t flood your security team with new tools just because they sound good. Make sure these tools address problems your company faces on a regular basis.
For example, automated email text analysis is likely to be useful for most companies, helping their filters spot suspicious messages with more accuracy. Response orchestration, on the other hand, may be less urgent for large security teams with plenty of manpower than for smaller teams that lack the bandwidth to manage every task.
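To make the email-analysis example concrete, here is a minimal sketch of the kind of machine learning that sits behind such a filter. The message texts and labels are invented for illustration, and the scikit-learn TF-IDF-plus-naive-Bayes pipeline is a deliberately simple stand-in for the commercial tooling a vendor would provide – not a production filter.

```python
# Minimal sketch: scoring email text as "suspicious" vs "benign".
# The training examples below are invented for illustration; a real
# deployment would train on a large, labelled corpus of messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set: message text plus a label.
messages = [
    "Your account has been locked, verify your password immediately",
    "Urgent: wire transfer needed before end of day",
    "Minutes from yesterday's project meeting attached",
    "Lunch menu for the office canteen this week",
]
labels = ["suspicious", "suspicious", "benign", "benign"]

# TF-IDF turns each message into a weighted word-frequency vector;
# naive Bayes then learns which words are associated with each label.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Score a new, unseen message.
incoming = ["Please confirm your password to avoid account suspension"]
print(model.predict(incoming))        # e.g. ['suspicious']
print(model.predict_proba(incoming))  # class probabilities
```

Even a toy like this illustrates the point made above: the tool only earns its keep if it targets a problem the team actually faces every day, and someone still has to own its training data and its mistakes.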
By selecting only those solutions that address essential needs, you can keep upskilling requirements to a minimum, whilst ensuring you have the best possible defence in place.
Be hands-on
Once you’ve decided which AI solutions will benefit your business, it’s a good idea to establish new ‘skill heads’ – for example, to manage automated detection of insider threats – in tandem with upskilling existing staff. By providing a single point of contact for each new skill, you provide not only a friendly face to answer questions, but also a role model for others in the company to follow – a humanisation of the job at hand. ‘We’re rolling out a new system’ sounds like a lot of work; ‘Anna is going to show you how she’s been using this new tool’ is more manageable.
Giving one person responsibility for the uptake and rollout of the new skill also helps to ensure that any wrinkles are picked up quickly and dealt with, and that updates and changes to the system are quickly folded into the practice of the business. It also saves each team member from having to learn the system from scratch – with a single point of contact, the skill head can act as a one-stop shop for advice on that particular system.
Don’t get caught up in the hype
A realistic view of how advanced AI technology is at present should always underpin your security training and purchasing. Businesses need to invest time and money without chasing trends. Working with an experienced provider can help – having a knowledgeable voice to advise you on which solutions are worth the time can make the difference between improving your security team’s day-to-day life and piling extra work onto them.
AI holds plenty of promise for the cybersecurity sector in terms of time-saving and improved accuracy. This year it will become increasingly common in the workplace, and that means companies need to prepare themselves and their staff to make the most of it. But that training needs to be realistic, taking into account existing pressures. Build a carefully targeted programme to ensure your staff benefit from AI – without being snowed under.
The opinions expressed in this post belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.