The battle between cybersecurity experts and cybercriminals is not a new phenomenon, but the fast pace of technology change in the security landscape means the fight is constantly becoming more complex.
Artificial intelligence (AI) is on the rise in the security space, and that opens another front in the war against cybercriminals while also underscoring how important it is for companies to master the basics of security. Traditional attacks such as phishing, smishing and ransomware are still expected to be part of the future threat landscape.
But instead of working against security teams, AI can be a part of the solution to keep valuable company data protected. In a panel discussion entitled “How AI is Transforming Cybersecurity: Both for Security Teams and Cybercriminals” at the recent PrivSec event in London, the clear message was that AI is a tool just like any other technology. It is not a silver bullet for either side.
Chairing the panel, Adrian Warman, threat and vulnerability management lead, UK Ministry of Justice, Digital and Technology, said AI is here to stay.
“We are in a fascinating time, and we have a security challenge or opportunity. We have technology that ranges from the here and now to the concept of automation and optimization as a result of machine learning. Then we have the science fiction of AI,” he said. “It is no longer a case of whether AI will transform security. It will. It does.”
Competing on a Level Playing Field
But security has always been a cat-and-mouse game, according to Jason Maude, chief technology advocate at Starling Bank. The key is for the good guys to harness the technology that the cybercriminals are using.
“Every time a technology comes out, the criminals start using it, and the forces of law and order and security have to work out ways of using that technology to counteract that,” Maude said. “Yes, criminals may be using deepfake technology to adopt an identity that isn’t theirs, but we can also use machine learning to try and spot unusual patterns of activity, for example. It is just another stage in the ongoing cat-and-mouse game.”
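Maude’s pattern-spotting point can be sketched in a few lines. The example below is purely illustrative—a hypothetical detector that flags account activity far outside its historical baseline, not a description of any bank’s actual system:

```python
# Minimal z-score anomaly sketch: flag activity whose value deviates
# sharply from an account's historical baseline (illustrative only).
from statistics import mean, stdev

def flag_unusual(history, new_value, threshold=3.0):
    """Return True if new_value lies more than `threshold` standard
    deviations from the mean of the historical values."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Typical daily transfer amounts for one account (made-up data).
baseline = [120.0, 95.0, 110.0, 130.0, 105.0, 115.0, 125.0]

print(flag_unusual(baseline, 118.0))   # in line with past behaviour: False
print(flag_unusual(baseline, 5000.0))  # flagged as unusual: True
```

Real fraud and intrusion detection models are far richer than a single z-score, but the principle is the same: learn what normal looks like, then surface what does not fit.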
Olu Odeiyi, external advisor, finance and investment committee at the Royal College of Nursing, said AI and machine learning are not new concepts.
“Fundamentally we are talking about technology that was designed in the 1950s—only now the compute power has allowed us to get where we are today,” he said. “The blue sky marketing around AI drives me mad—of course it can stop attacks, but you can’t put down other defenses—there is still a human loop around that. The way to look at AI is that it is going to help the human defenses the same way it is helping the criminals.”
Security teams need to be careful AI doesn’t become a white elephant, said Maude. “The problem for many companies is that they could buy a very expensive machine learning package and then it doesn’t really work or do anything for them and just becomes a big white elephant that costs a lot,” he warned. “Companies that don’t allow humans to make changes without going through checks are hardly going to allow machines to make changes unchecked.”
Todd Wade, interim CISO at a private equity firm, said too many companies still haven’t got the basics right and therefore shouldn’t even be thinking about adopting AI technology.
“Until you get the basics right, such as identity and access management and patching, you can throw all the technology you want at the problem, but it is going to be easier for criminals to take advantage of companies until they get those basics right,” he said.
The Human Touch Isn’t Going Away
Ever since AI exploded onto the scene, there has been talk of it replacing humans and causing job losses, but the reality is that humans will be needed more than ever alongside AI security systems, the panel said.
“There are going to be AI tools that are good at detecting things,” Wade said. “But you need the human element to verify the topic. For example, facial recognition—you could see a series of dots that to an AI machine is a face, but to humans is obviously not. You are always going to need a combination of the two.”
Maude said the best approach is to combine the power of AI with human decision-making.
“It is possible [for cybercriminals] to fool either the AI or the human side, but difficult to fool both,” he said. “Security teams are going to have to start looking for biases that AI has developed, such as looking for false positives—they are going to have to start spotting where AI is making mistakes and reducing those. Don’t just trust AI blindly to do its job but help develop its skills.”
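The kind of false-positive auditing Maude describes starts with a simple measurement: compare what the model flagged against what a human analyst verified. The sketch below is illustrative (the variable names and data are hypothetical, not from any vendor’s tooling):

```python
# Illustrative sketch: measure how often an alerting model raises
# false alarms, using labels a human analyst has verified.
def false_positive_rate(predictions, labels):
    """predictions/labels are parallel lists of booleans
    (True = flagged as malicious / actually malicious)."""
    false_positives = sum(p and not l for p, l in zip(predictions, labels))
    true_negatives = sum((not p) and (not l) for p, l in zip(predictions, labels))
    benign = false_positives + true_negatives
    return false_positives / benign if benign else 0.0

model_says   = [True, True, False, True, False, False]
analyst_says = [True, False, False, False, False, False]

print(false_positive_rate(model_says, analyst_says))  # 0.4
```

Tracking this rate over time is one concrete way to “spot where AI is making mistakes and reduce those” rather than trusting it blindly.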
Odeiyi agreed. “There are limitations on what AI can achieve—don’t think that it is going to come along and that is it and all you have to do is deploy and forget,” he said. “You might need it to combat the threats that are coming, but you have to be in control of it.”
Jose Belo, data privacy senior legal counsel at Shell, said an element of caution is needed when adopting AI technologies.
“The accuracy of AI is not there yet, but it will be in the future. One company could be using a particular AI technology model, another one will use a different one, and there is a huge difference between the two. Security teams mustn’t use AI in a blasé way—they need to be cautious about it,” he said.
It is imperative that security teams don’t panic about the AI-based threats that are coming down the line, or see the technology as a threat, the panel concluded.
The key for security professionals is not to rush into using AI or machine learning unless it is the right tool for the job; instead, they should focus on keeping the security basics up to date and ensuring the whole company is aware of the threats that cybercriminals may pose.
It is likely that companies will need to invest in some level of AI security in the future to ensure full protection against the ever-changing threat landscape, the panel said, but the main message of the day was that a company’s IT security strategy is still very much in the hands of the humans, rather than the machines.