Cybersecurity transformation with artificial intelligence
The impact of cyberattacks on modern enterprise environments is massive, and it’s continuing to grow rapidly. Depending on the size of your enterprise, there are up to several hundred billion time-varying signals that must be analyzed to accurately calculate risk. Analyzing and improving cybersecurity posture is no longer something humans alone can accomplish successfully.
Artificial intelligence (AI) and machine learning (ML) have become essential to information security. These technologies can swiftly analyze millions of events and track down a wide variety of cyber threats, from malware exploiting zero-day vulnerabilities to risky behavior that could lead to a phishing attack or the download of malicious code.
Because AI and ML learn continuously, they can help improve security faster, drawing on past and present data to pinpoint the new varieties of attack that emerge every day.
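As a concrete illustration of this kind of automated analysis, the sketch below flags statistical outliers in a stream of security signals. It is a minimal, hypothetical example: the login counts, the z-score approach, and the 2.5-sigma threshold are illustrative assumptions, not a production detection rule.

```python
# Sketch: flagging anomalous activity volumes with a simple z-score test.
# The data and threshold below are illustrative assumptions.
from statistics import mean, stdev

def anomalies(counts, threshold=2.5):
    """Return (index, count) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    return [(i, c) for i, c in enumerate(counts)
            if sigma and abs(c - mu) / sigma > threshold]

hourly_logins = [12, 15, 11, 14, 13, 12, 16, 240, 13, 14]  # 240 is a spike
print(anomalies(hourly_logins))  # → [(7, 240)]
```

Real AI-based systems apply far richer models over billions of such signals, but the core idea is the same: learn what normal looks like, then surface deviations for human review.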
Advantages and challenges of AI for cybersecurity
AI is ideally suited to solving some of our most difficult business problems, and cybersecurity certainly falls into that category. With today’s ever-evolving cyberattacks and the proliferation of devices, ML and AI can help “keep up with the bad guys,” automating threat detection and responding more efficiently than traditional software-driven approaches.
At the same time, cybersecurity continues to experience many unique challenges:
- A vast attack surface
- Tens (or hundreds) of thousands of devices per organization
- Hundreds of attack vectors
- Big shortfalls in the number of skilled security professionals
- Masses of data that have moved beyond a human-scale problem
Benefits of AI for cybersecurity
AI delivers a new level of intelligence that can inform human teams across diverse categories of cybersecurity, including:
- IT asset inventory – Gaining a complete, accurate inventory of all devices, users, and applications with any access to information systems. Categorization and measurement of business criticality also play big roles in inventory.
- Threat exposure – Hackers follow the cybersecurity landscape just like everyone else, so what’s trending with hackers changes regularly. AI-based cybersecurity systems can provide up-to-date insight on global and industry-specific threats. This helps security teams make critical prioritization decisions based on what could be used to attack your enterprise, and also what is likely to be used to attack your enterprise.
- Controls effectiveness – It’s important to understand the impact of the security tools and processes you have employed to maintain a strong security posture. AI can help understand where your program has strengths, and where it has gaps.
- Breach risk prediction – Accounting for IT asset inventory, threat exposure, and controls effectiveness, AI-based systems can predict how and where you are most likely to be breached so you can plan for resource and tool allocation in areas of weakness. Prescriptive insights derived from AI analysis can help you configure and enhance controls and processes to improve your organization’s resilience.
- Incident response – AI-powered systems can provide improved context for prioritization and fast response to security alerts and to incidents. They can also help uncover the root causes of security events so you can mitigate vulnerabilities and avoid future issues.
- Explainability – Explainability of recommendations and analysis is key to using AI to augment human security teams. It helps win buy-in from stakeholders across the organization, convey the impact of various security initiatives, and share relevant information with everyone involved, including end users, security operations, the CISO, auditors, the CIO and CEO, and the board of directors.
- Better endpoint and network protection – The number of devices used for remote work is increasing fast, and AI has a crucial role to play in securing all of those endpoints.
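The factors in the list above (asset criticality, threat exposure, controls effectiveness) can be combined into a simple per-asset risk score. The sketch below is a hypothetical illustration: the weights, the 0–1 factor scales, and the asset names are assumptions, not a standard scoring model.

```python
# Sketch: combining asset criticality, threat exposure, and control gaps
# into a breach-risk score. Weights and inputs are illustrative assumptions.
def breach_risk(criticality, exposure, control_gap,
                weights=(0.4, 0.35, 0.25)):
    """Weighted score in [0, 1]; higher means prioritize sooner."""
    w_c, w_e, w_g = weights
    return w_c * criticality + w_e * exposure + w_g * control_gap

assets = {
    "payroll-db": breach_risk(0.9, 0.7, 0.5),
    "marketing-site": breach_risk(0.3, 0.8, 0.2),
}
# Rank assets so remediation effort goes to the riskiest first.
for name, score in sorted(assets.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

Real AI-based breach-risk systems learn such weightings from data rather than hard-coding them, but the output is the same kind of prioritized list security teams can act on.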
Improving cybersecurity with AI and ML
Antivirus solutions and VPNs can help against remote malware and ransomware attacks, but they often work based on signatures. To stay protected against the latest threats, signature definitions must be kept up to date.
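The limitation of signature-based detection is easy to see in miniature. The toy sketch below matches file hashes against a known-bad set; the sample payloads are invented for illustration, and real antivirus signature databases are far richer than a set of file hashes.

```python
# Sketch: why signature-based detection needs constant updates.
# The payloads below are invented; real signature databases are
# much richer than a set of file hashes.
import hashlib

known_malware_hashes = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def is_known_malware(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in known_malware_hashes

print(is_known_malware(b"malicious payload v1"))  # known sample: True
print(is_known_malware(b"malicious payload v2"))  # slight variant: False
```

A trivially modified variant evades the signature entirely, which is exactly the gap that behavior-based, ML-driven detection aims to close.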
By taking swift and targeted action, AI-based security contains threats across the network and wider infrastructure when security teams or software are overwhelmed, or simply aren’t around.
The industry: Who is invested in securing with AI?
- Google: Gmail has used ML techniques to filter emails since its launch 18 years ago. Today, there are applications of ML in almost all of its services, especially in deep learning, which allows algorithms to do more independent adjustments and self-regulation as they train and evolve.
Before, we were in a world where the more data you had, the more problems you had. Now with deep learning, the more data the better.
—Elie Bursztein, Head of Anti-abuse research team at Google
- IBM/Watson: The team at IBM has increasingly leaned on its Watson cognitive learning platform for “knowledge consolidation” tasks and threat detection based on ML.
A lot of work that’s happening in a security operation center today is routine or repetitive, so what if we can automate some of that using machine learning?
—Koos Lodewijkx, Vice President and Chief Technology Officer of security operations and response at IBM Security
Unlike pre-programmed defenses, AI can recognize and react to attacks it hasn’t encountered before, from machine-speed ransomware to insider threats. This is only possible because the system has learned “on the job” how an organization operates, enabling an autonomous response that understands and adapts to the threat scenario as it unfolds.
AI limitations and challenges in cybersecurity
As with every technology, there are limits to what AI can do. AI-powered cybersecurity solutions provide a high degree of accuracy and performance, but some level of error remains: AI can produce false positives or false negatives when detecting threats in a network.
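These error rates are usually quantified from a confusion matrix. The sketch below computes the standard metrics; the counts are made up for illustration.

```python
# Sketch: quantifying detector error from a confusion matrix.
# The counts below are illustrative, not from a real detector.
def detector_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)  # share of alerts that are real threats
    recall = tp / (tp + fn)     # low recall = missed threats (false negatives)
    fpr = fp / (fp + tn)        # high FPR = alert fatigue (false positives)
    return precision, recall, fpr

precision, recall, fpr = detector_metrics(tp=90, fp=40, fn=10, tn=860)
print(f"precision={precision:.2f} recall={recall:.2f} fpr={fpr:.3f}")
```

Even a detector with 90 percent recall, as here, still misses one threat in ten, which is why AI output is best treated as decision support for human analysts rather than a replacement for them.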
Also, AI can be a double-edged sword, as it can be manipulated for nefarious purposes. A recent global study found that over 40 percent of executives have “extreme” or “major” concerns about AI threats, with cybersecurity vulnerabilities among them.
Hackers can use AI as a tool to misdirect a program or application into thinking that threat activities are normal when they are not. They do this via adversarial machine learning, a technique that exploits four critical security vulnerabilities:
- Poisoning attack – Injecting poisoned data into the training data set to compromise the AI’s learning process.
- Evasion attack – Designing malicious data that looks normal to humans but is misclassified by an AI (e.g., altering pixels in a picture so that a cat looks like a dog).
- TrojAI attack – Unlike a broader data-poisoning attack that disrupts learning from the beginning, TrojAI inserts a trigger (malware) that changes a program’s behavior only in certain circumstances (e.g., only when a certain activity occurs on the network).
- Model inversion attack – Hackers reconstruct training data by figuring out the model’s parameters, exposing sensitive information and identities.
Beyond adversarial attacks, AI-driven security also faces practical challenges:
- Cost – Being a complex technology, AI has high adoption barriers. To build and maintain an efficient AI-powered solution, you need to invest time and money in researching the technology, finding experienced team members, and allocating enough computing power and data-center capacity. Before beginning to develop an AI-driven cybersecurity solution, assess all possible risks and study the market to evaluate product demand.
- Lack of data sets – Accurate and extensive data sets are a must for developing any AI-based solution. Developers need them not only to help algorithms learn but also to test them. To create data sets for a cybersecurity solution, your team may need to find examples of malicious code, malware, and anomalies, depending on what your solution is supposed to do. Gathering and labeling data manually is extremely time-consuming. To streamline it, consider purchasing ready-made data sets or looking for free ones, but make sure that those you collect:
  - Fit your project’s purposes
  - Contain complete and undamaged data
  - Are accurately and correctly labeled
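A poisoning attack can be demonstrated in miniature. The sketch below trains a toy nearest-centroid classifier on clean data, then shows how an attacker who slips mislabeled points into the benign training set drags the decision boundary. The data points and the classifier are invented for illustration; real poisoning attacks target far more complex models.

```python
# Sketch: a label-flipping poisoning attack against a toy
# nearest-centroid classifier. All data here is illustrative.
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, c_benign, c_malicious):
    d = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "malicious" if d(x, c_malicious) < d(x, c_benign) else "benign"

benign = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
malicious = [(5.0, 5.0), (5.2, 4.8), (4.9, 5.1)]

sample = (3.5, 3.5)  # between the clusters, nearer the malicious centroid
print(classify(sample, centroid(benign), centroid(malicious)))  # malicious

# Poisoning: the attacker slips malicious-looking points labeled "benign"
# into the training set, dragging the benign centroid toward the sample.
poisoned_benign = benign + [(4.6, 4.4), (4.4, 4.6), (4.5, 4.5)]
print(classify(sample, centroid(poisoned_benign), centroid(malicious)))  # benign
```

The same sample is classified as malicious on clean data and as benign after poisoning, which is why protecting the integrity of training data is itself a security requirement.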
Hackers can also use AI for their own activities. A Forrester Research report, Using AI for Evil: A Guide to How Cybercriminals Will Weaponize and Exploit AI to Attack Your Business, notes that “There are already instances of threat actors and hackers using AI technologies to bolster their attacks and malware.”
To build complex and efficient products, developers use ready-to-go ML algorithms, libraries, and other third-party components that also may be vulnerable. If malicious actors manage to find vulnerabilities in at least one of these components, they might use their knowledge to exploit them and attack your solution.
For example, when studying the impact of platform vulnerabilities in AI systems, MIT researchers decided to test the Cognitive Toolkit (CNTK) — an open-source deep learning toolkit developed by Microsoft — against such an attack. Their goal was to feed CNTK a malicious input image and cause the engine to misclassify other images. Thanks to a vulnerability the researchers found, the attack was successful.
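The evasion side of such attacks can also be sketched without a deep learning framework. Below, an attacker nudges the features of a flagged sample against the weights of a toy linear malware classifier until it slips under the decision boundary. The weights, features, and perturbation step are illustrative assumptions; gradient-based evasion techniques such as FGSM apply the same idea to neural networks.

```python
# Sketch: an evasion attack on a toy linear malware classifier.
# Weights, features, and the perturbation step are illustrative.
weights = [0.8, 0.6, -0.2]  # e.g. entropy, suspicious-API count, file size
bias = -1.0

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def is_flagged(x):
    return score(x) > 0

sample = [1.5, 0.9, 1.0]  # flagged: score = 0.54 > 0
# The attacker nudges each feature against the weight direction
# until the score drops below the decision boundary.
eps = 0.4
evasive = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, sample)]

print(is_flagged(sample), is_flagged(evasive))  # → True False
```

A small, targeted perturbation flips the verdict while leaving the sample superficially similar, which is exactly the property adversarial examples exploit in image classifiers like the one tested above.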
Conclusion
AI as an enabler for cybersecurity has many applications. As the computational capabilities and digital complexity of global enterprises continue to grow, AI-powered tools and automation will play an increasingly integral role in keeping us cyber-safe.
We have highlighted some areas that are particularly viable in today’s cyber-threat ecosystem. If you would like to chat with an expert about your cybersecurity infrastructure, contact us today.