Towards XAI in the SOC – a user centric study of explainable alerts with SHAP and LIME

Scientific article, 2023

About the publication

Format: PDF document

Size: 864.8 KB

Language: English

DOI: https://dx.doi.org/10.1109/BigData55660.2022.10020248

Håkon Svee Eriksson, Gudmund Grov
Many studies of the adoption of machine learning (ML) in Security Operation Centres (SOCs) have pointed to a lack of transparency and explanation – and thus trust – as a barrier to ML adoption, and have suggested eXplainable Artificial Intelligence (XAI) as a possible solution. However, there is a lack of studies addressing to what degree XAI actually helps SOC analysts. Focusing on two XAI techniques, SHAP and LIME, we have interviewed several SOC analysts to understand how XAI can be used and adapted to explain ML-generated alerts. The results show that XAI can provide valuable insights for the analyst by highlighting features and information deemed important for a given alert. As far as we are aware, we are the first to conduct such a user study of XAI usage in a SOC, and this short paper provides our initial findings.

Index Terms: interpretability, explainability, artificial intelligence, machine learning, security operation center, intrusion detection system, explainable artificial intelligence, user studies
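The paper itself contains no code; the following is a minimal, illustrative sketch of the kind of per-alert feature attribution that SHAP and LIME produce, using synthetic data and hypothetical feature names (bytes_out, failed_logins, etc.) in place of real SOC telemetry and an assumed gradient-boosted alert classifier.

```python
# Illustrative only: per-alert feature attributions with SHAP and LIME.
# Data, labels, and feature names are synthetic stand-ins for SOC telemetry.
import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["bytes_out", "failed_logins", "dst_port_entropy", "session_duration"]
X = pd.DataFrame(rng.random((300, len(features))), columns=features)
y = (X["failed_logins"] + X["bytes_out"] > 1.0).astype(int)  # synthetic "alert" label

model = GradientBoostingClassifier(random_state=0).fit(X, y)
alert_idx = 0  # one ML-generated alert to explain

# SHAP: additive contributions (in log-odds) of each feature to this alert's score.
shap_values = shap.TreeExplainer(model).shap_values(X)
ranked = sorted(zip(features, shap_values[alert_idx]),
                key=lambda pair: abs(pair[1]), reverse=True)
print("SHAP:", [(name, round(float(val), 3)) for name, val in ranked])

# LIME: weights from a local surrogate model fitted around the same alert.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=features,
    class_names=["benign", "alert"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X.values[alert_idx], model.predict_proba, num_features=len(features))
print("LIME:", lime_exp.as_list())
```

Both printouts rank the features behind a single alert by the magnitude of their contribution, which corresponds to the "important features" style of explanation the interviewed analysts were shown.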

Publisher information

2022 IEEE International Conference on Big Data. IEEE (Institute of Electrical and Electronics Engineers), 2023. ISBN 978-1-6654-8045-1.
