
20-11-2023; 11:00 PhD THESIS DEFENSE: BORJA MOLINA CORONADO

Borja Molina Coronado: "Artificial Intelligence-based contributions to the detection of threats against information systems".

Supervisors: José Miguel Alonso / Usue Mori Carrascal

2023-11-20, 11:00; Ada Lovelace Room.

ABSTRACT:

"In contemporary society, the widespread integration of technology into our daily routines has led to a significant reliance on digital infrastructure. This shift is marked by the prevalence of information systems, which play a pivotal role for human interaction, data exchange, and technological progress. This era of extensive connectivity, known as the Big Data era, is characterized by the substantial creation, transmission, and storage of data. Such increasing volume of data requires not only innovative approaches to data analysis and management but also practical solutions to safeguard the integrity, confidentiality, and availability of the underlying information systems.

In a context where manual analysis is not feasible due to the variety, velocity, and volume of data, Artificial Intelligence (AI) has emerged as an exceptional technological advancement. AI possesses the capability to rigorously analyze and safeguard information systems from the risks they face. At its core, AI offers a feature of paramount importance in the realm of cybersecurity: the ability to uncover intricate patterns within data that would otherwise be prohibitively costly and time-consuming for human experts to extract. This characteristic makes AI a pivotal tool in Alert Management, Vulnerability and Risk Management, and Threat Detection.

In alert management systems, AI can relieve human analysts of the task of sifting through an overwhelming flow of alerts, ensuring that only genuine security incidents are escalated for further investigation. This remarkable ability has not only helped to differentiate false alarms from authentic security threats but has also opened a path toward automatic incident response systems.

Vulnerability and risk management is another critical area of cybersecurity that has traditionally relied heavily on manual efforts. In this area, the adoption of AI techniques for identifying software vulnerabilities has significantly improved efficiency, coverage, and accuracy, broadening the identification of flaws that could expose organizations and users to security threats. Furthermore, AI-based proposals that assist in implementing remediation measures are also emerging.

For threat detection, AI has the potential to identify signs of attacks in real time, enabling security operators to take measures that minimize their impact. Specifically, AI has served as a proactive guardian against spam, a common medium for distributing phishing campaigns and propagating malware. In intrusion detection and malware detection, AI has driven significant transformations, shifting the detection paradigm from the definition of static rules to the automatic extraction of behavioral patterns of attacks and malware, at speeds that surpass the capabilities of human experts.

It should be noted, however, that the application of AI to cybersecurity problems is not free of challenges. On the one hand, cybersecurity solutions operate in a highly complex and dynamic environment where threats and attack techniques against information systems are constantly evolving. The ever-changing nature of this landscape requires continuous adaptation and updates of AI strategies and models. Moreover, AI solutions rely heavily on data for training and decision-making, and the lack of access to comprehensive and diverse datasets can hinder their effectiveness. Obtaining high-quality, labeled data can be very difficult in some cybersecurity problems, where data is often noisy, incomplete, and subject to privacy and regulatory constraints.

On the other hand, given the hostile nature of the domain, attackers continuously implement more sophisticated adversarial strategies to evade detection. They can use AI to attack and manipulate security solutions, making them produce incorrect results or overlook vulnerabilities. Since AI is a rapidly evolving field, this arms race between AI-based defenses and adversarial attackers requires constant research and development.

Since effective AI-based cybersecurity solutions must address the aforementioned challenges, their evaluation must also account for these challenges to yield credible results. However, the absence of such practices has given rise to controversies regarding the utility of AI in the field of cybersecurity. Among the most prevalent deficiencies are the simplification of experimental evaluation conditions and the omission of data, code, or details necessary to reproduce AI-based proposals, leading to unrealistic results and generating distrust in AI.

Nevertheless, these practices should not overshadow the potential of AI, which has indeed taken significant steps toward enhancing the cybersecurity landscape.

The aim of this thesis was to analyze the potential of AI methods to address information security problems in the area of threat detection.

Specifically, we focus our study on Network Intrusion Detection Systems (NIDS), which represent the first line of defense against attacks and address the identification of threats targeting information systems through the examination of network activity; and on malware detection for Android, which is based on the analysis of the characteristics of apps to identify malware.

In the first part of the dissertation, we review and analyze research proposals in the area of NIDS presented until 2019 from the perspective of the Knowledge Discovery in Databases (KDD) process. KDD serves as a guideline to discuss the techniques applied to network data, from its capture to the detection of attacks using AI methods, focusing on their applicability to real-world scenarios.

We showed that a large number of NIDS papers focus on the use of ML algorithms for the detection of network attacks using either raw packet captures or flow data. However, aspects such as the location of the network probes, the maximum throughput allowed by the system, and the attacks that can be detected are often inadequately documented or overlooked. Our study determined that the majority of these research papers relied on datasets that do not accurately represent real-world networks. These datasets frequently lack traffic samples for protocols that employ encryption, such as HTTPS and IPsec, which are commonplace in modern networks. Additionally, many of the datasets feature short-duration captures that fail to reflect the true dynamics and evolution of network traffic. Our findings highlighted the shortcomings of proposals relying on such datasets, showing that their inadequate design often leads to unrealistic performance values and creates an overly optimistic picture of their effectiveness.

Concerning the detection aspect, we classified NIDS detectors into misuse-based and anomaly-based approaches. We observed that the majority of NIDS focus on misuse detection, employing batch learning algorithms.

Conversely, only a limited subset of proposals explores anomaly detection using weakly or semi-supervised approaches, overlooking their potential to identify unknown malicious patterns without the need for extensive labeled training data.
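
As an illustration of this idea (not taken from the thesis), the following minimal sketch trains an anomaly detector on benign-only network flows using scikit-learn's IsolationForest; the toy flow features and parameter values are assumptions chosen purely for demonstration.

# Minimal sketch of semi-supervised anomaly detection for network flows.
# Illustrative only: the feature set and the IsolationForest detector are
# assumptions, not the techniques evaluated in the thesis.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy flow features: [duration, bytes_sent, bytes_received, packet_count]
benign_flows = rng.normal(loc=[1.0, 500, 800, 20],
                          scale=[0.3, 100, 150, 5], size=(1000, 4))
unseen_flows = np.vstack([
    rng.normal(loc=[1.0, 500, 800, 20],
               scale=[0.3, 100, 150, 5], size=(5, 4)),      # benign-like
    rng.normal(loc=[30.0, 50000, 100, 900],
               scale=[5.0, 5000, 30, 100], size=(5, 4)),     # anomalous
])

# Train only on (assumed) benign traffic: no attack labels are required.
detector = IsolationForest(contamination=0.01, random_state=0).fit(benign_flows)

# predict() returns +1 for inliers (benign-like) and -1 for anomalies.
print(detector.predict(unseen_flows))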

Regarding evaluation procedures, our findings revealed that concept drift management is considered in only a small fraction of NIDS proposals. Additionally, we concluded that current complexity analyses are inadequate: to comprehensively assess computational costs, evaluations should encompass all stages of detection, from packet capture and preprocessing to the final detection decision on network traffic.

The second part of this dissertation focused on contributions to malware detection in the Android operating system, specifically on detectors that leverage static analysis information from apps. In contrast to dynamic analysis, static analysis does not require executing the app.

Firstly, we identified five realistic experimental factors present in production scenarios that are often overlooked in the research literature. These factors include the presence of duplicates, the exclusion of greyware, labeling biases, the stationarity assumption, and evasion attempts. In addition, we proposed a fair evaluation framework for malware detectors aimed at providing a standardized experimental methodology, an aspect previously missing in the area, and evaluated ten highly influential works in the field, which we were able to reproduce.

Our findings consistently revealed that the exceptional performance values reported in the original papers could not be sustained under varying configurations. This underscores the critical importance of considering these factors for more realistic evaluations.

In this regard, and aiming to tackle some of the limitations evidenced in current Android malware detectors leveraging static analysis information, we analyzed how classical evasion techniques such as obfuscation affect the static analysis features used for malware detection. This study had a second, intrinsic purpose: to evaluate how useful obfuscation is for bypassing detectors that leverage static analysis. Our experiments showed that the impact of obfuscation on different families of static analysis features varies across obfuscation techniques and tools. Contrary to what is typically assumed, we demonstrated that feature persistence is not a good indicator of feature robustness. In fact, some families of persistent features led to model malfunction when specific types of obfuscation were applied to samples. In this regard, we proposed to use feature insensitivity, a measure based on the decisions of models for non-obfuscated and obfuscated apps, which proved to be a better indicator of feature robustness. As a result of this study, we identified the most insensitive families of features (Permissions, Strings, and API functions), and proposed a detector based on these features that outperformed the state of the art under the zero-knowledge assumption (without any information about obfuscation).
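
Since the abstract does not spell out the exact formulation, the sketch below illustrates one plausible reading of such an insensitivity score: the fraction of paired (original, obfuscated) apps for which a model's decision is unchanged. All names and values are hypothetical.

# Hedged sketch of a feature-insensitivity score: the share of apps for which
# a detector built on a given feature family keeps the same decision when the
# app is obfuscated. The exact formulation in the thesis may differ; this only
# illustrates comparing model decisions on paired (original, obfuscated) samples.
from typing import Sequence

def insensitivity(decisions_original: Sequence[int],
                  decisions_obfuscated: Sequence[int]) -> float:
    """Fraction of paired samples whose predicted label is unchanged."""
    assert len(decisions_original) == len(decisions_obfuscated)
    unchanged = sum(a == b for a, b in zip(decisions_original, decisions_obfuscated))
    return unchanged / len(decisions_original)

# Example: a detector flips its decision on 2 of 10 obfuscated apps.
original   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
obfuscated = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(insensitivity(original, obfuscated))  # 0.8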

Also, the evaluations performed throughout this work identified that the best malware detection methods to date are Drebin (presented in 2014), DroidDet (presented in 2018), and MaMaDroid (presented in 2019). However, these methods leverage batch learning algorithms that cannot cope with app evolution. We showed how these three detectors rapidly lose their accuracy in real deployments due to concept drift. To address this, we proposed retraining models as they become ineffective. Our evaluations demonstrated that retraining, in combination with a drift detection mechanism and a buffer management method, is a valid solution to efficiently adapt existing batch ML malware detectors for Android. In particular, we showed that leveraging the Page-Hinkley test to detect changes in the data distribution helps to maintain the performance of detectors over time at minimal cost, because the number of retraining steps is reduced. Additionally, uncertainty sampling and problem-specific sample selection approaches showed limited impact on the performance of models but achieved a greater reduction in labeling requirements.
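
As a rough illustration of this retraining scheme, the sketch below applies a hand-rolled Page-Hinkley test to a detector's stream of prediction errors and flags when a retraining step would be triggered; the parameter values and the simulated error stream are assumptions, not the settings or data used in the thesis.

# Minimal sketch of drift-triggered retraining with a Page-Hinkley test applied
# to a detector's error stream. Parameter values (delta, threshold) and the toy
# error stream are illustrative assumptions.
class PageHinkley:
    def __init__(self, delta: float = 0.005, threshold: float = 5.0):
        self.delta, self.threshold = delta, threshold
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, x: float) -> bool:
        """Feed one observation (e.g. a 0/1 prediction error); return True on drift."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta          # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        if self.cum - self.cum_min > self.threshold:    # mean has increased
            self.__init__(self.delta, self.threshold)   # reset after signalling drift
            return True
        return False

# Usage: monitor a batch malware detector's errors and retrain when drift is flagged.
ph = PageHinkley()
errors = [0] * 200 + [1] * 60   # error rate rises as apps evolve
for t, err in enumerate(errors):
    if ph.update(err):
        print(f"drift detected at step {t}: retrain on buffered recent samples")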

Finally, this thesis served to highlight how the lack of standard evaluation procedures, the assumption of unrealistic scenarios, the omission of key implementation details, and the absence of working code are particularly prevalent in information security research. In this context, we argued that these issues make it difficult to reproduce proposals, which slows down the emergence of realistic security solutions based on KDD and ML/AI algorithms."

