CYBERCRIMINALS ARE TAKING PHISHING SCAMS to another level by targeting companies with fake Office 365 sites. Worse still, their method is so sophisticated that artificial intelligence (AI) is the only way to identify and thwart these attacks.
The attack targets a core design principle of Office 365 itself: single-password access to all of the platform’s online services. By stealing one set of email credentials, a cybercriminal gains access to every platform on which employees share intellectual property and other sensitive documents.
With this phishing tactic, the victim receives an email claiming that his or her Office 365 login credentials have expired and must be reset, or inviting the user to click a link to view a document. Either way, the link leads to a fake Office 365 authentication page, and without knowing it, the victim hands over his or her credentials to hackers.
Traditional email security products are not adapted to Office 365 and are therefore helpless against this type of attack. Cybercriminals frequently modify elements of the pages and emails they send to obscure their fingerprint, rendering the attacks undetectable by conventional protection systems. Or they change the domains hosting their fake sites, either by registering new domain names or by reusing old ones, so that reputation-based blacklists don’t have time to be updated.
The only technique that protects users from such a scam is real-time analysis of the email, the link, and the page the link points to, both when the email arrives and each time a user attempts to click the link. The defense must combine analysis of the page, testing for elements specific to the Office 365 portal (logo, layout, type of text displayed, colors), with analysis of the email (header, sender, content, attachments), and instantly render a verdict on whether the page is fraudulent.
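As a rough illustration of this combined page-and-email analysis, the sketch below extracts a few boolean signals from a fetched page and the email that linked to it. The specific signals and domain checks are hypothetical examples, not any vendor's actual feature set:

```python
import re

def extract_features(page_html: str, email_sender: str, link_url: str) -> dict:
    """Illustrative signals typical of a fake Office 365 login portal."""
    html = page_html.lower()
    return {
        # Page mimics Office 365 portal branding
        "has_office_branding": "office 365" in html or "microsoft" in html,
        # Page asks for a password
        "has_password_field": 'type="password"' in html,
        # Link points somewhere other than a Microsoft-owned domain
        # (simplified domain list for illustration)
        "non_microsoft_domain": not re.search(
            r"https://[^/]*\.(microsoft|office|live)\.com/", link_url
        ),
        # Sender domain doesn't match Microsoft either
        "spoofed_sender": not email_sender.lower().endswith(
            ("@microsoft.com", "@office.com")
        ),
    }
```

A page that carries Office 365 branding and a password form but is hosted on an unrelated domain would light up all four signals at once, which is exactly the pattern the combined analysis looks for.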
AI: The Only Effective Defense
Cybersecurity labs are now leveraging AI, and machine learning in particular, to overcome the shortcomings of conventional security products. Machine learning works by predicting, and adapting to, the methods hackers use to create their phishing pages. For example, it’s possible to build a statistical model that estimates the probability that a site is fake based on an analysis of the site, the link, and the email. When the user attempts to access a page, the model evaluates its similarity to the typical profile of a fraudulent page and, in a split second, warns the user that it is a phishing page.
The model created by machine learning doesn’t rely on an identifying signature but on a set of characteristics. Conventional methods depend on fixed, known characteristics and react only to previously identified attacks. AI, on the other hand, relies on statistics and probabilities to determine whether a site is a phishing page. The goal is to reproduce with AI the expert eye that watches over all of this.
To avoid false positives, engineers must train the AI engine by submitting a wide variety of pages and telling it which are legitimate and which are fake. The engine identifies similarities among the illegitimate pages and analyzes the submitted content. The more examples it sees, the better it can judge which similarities are relevant; these similarities constitute the typical model used in production. Even then, AI is combined with conventional techniques in production to obtain the best results.
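The training step above can be sketched as a toy logistic-regression loop over labeled feature vectors (1.0 = fake, 0.0 = legitimate). The feature layout and the handful of examples are invented for illustration; a production engine would train on vast volumes of real pages:

```python
import math

def train(examples, labels, epochs=2000, lr=0.5):
    """Fit logistic-regression weights by stochastic gradient descent."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            # Predicted probability under the current weights
            p = 1.0 / (1.0 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
            err = p - y  # gradient of the log-loss w.r.t. the score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Columns: [office branding, password field, non-Microsoft domain]
pages = [
    [1, 1, 1], [1, 0, 1], [0, 1, 1],   # labeled fake
    [1, 1, 0], [0, 0, 0], [1, 0, 0],   # labeled legitimate
]
labels = [1, 1, 1, 0, 0, 0]
w, b = train(pages, labels)
```

On this toy data the learned weights end up concentrating on the non-Microsoft-domain column, since it alone separates the fakes from the legitimate pages; with more varied examples, the weighting across signals becomes more nuanced.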
AI is a formidable defense capable of detecting phishing whose footprint has yet to be identified. Ultimately, though, the key to building and continually optimizing effective models is access to high volumes of data.
ADRIEN GENDRE is chief solutions architect at Vade Secure.