A quantitative evaluation of explainable AI methods using the depth of decision tree

dc.contributor.authorAhmed, Nizar Abdulaziz Mahyoub
dc.contributor.authorAlpkoçak, Adil
dc.date.accessioned2023-03-22T19:47:18Z
dc.date.available2023-03-22T19:47:18Z
dc.date.issued2022
dc.departmentBelirleneceken_US
dc.description.abstractIt is necessary to develop an explainable model that clarifies how and why a medical model makes a particular decision. Local post-hoc explainable AI (XAI) techniques, such as SHAP and LIME, interpret a classification system's predictions by displaying the most important features and rules underlying any individual prediction. To compare two or more XAI methods, they must first be evaluated, either qualitatively or quantitatively. This paper proposes quantitative XAI evaluation metrics that do not rely on biased, subjective human judgment; instead, they use the depth of a decision tree (DT) to measure the complexity of XAI methods automatically and effectively. Our study introduces a novel XAI evaluation strategy that measures the complexity of any XAI method by using a characteristic of another model as a proxy. In our proposal, the output of the XAI methods, specifically the feature importance scores from SHAP and LIME, is fed into a DT, which then grows a full tree based on those feature importance scores. From this, we derive two main metrics for assessing the DT's complexity, and thus that of the associated XAI method: the total depth of the tree (TDT) and the average weighted class depth (ACD). The results show that SHAP outperforms LIME and is thus less complex. Furthermore, SHAP is more scalable in terms of the number of documents and features. These results can indicate whether a specific XAI method is suitable for dealing with different document scales, and can also reveal which features can be used to improve the performance of the black-box model, in this case a feedforward neural network (FNN).en_US
dc.identifier.doi10.55730/1300-0632.3924
dc.identifier.endpage2072en_US
dc.identifier.issn1300-0632
dc.identifier.issn1303-6203
dc.identifier.issue6en_US
dc.identifier.scopus2-s2.0-85141911768en_US
dc.identifier.scopusqualityQ3en_US
dc.identifier.startpage2054en_US
dc.identifier.trdizinid1142468en_US
dc.identifier.urihttps://doi.org/10.55730/1300-0632.3924
dc.identifier.urihttps://search.trdizin.gov.tr/yayin/detay/1142468
dc.identifier.urihttps://hdl.handle.net/20.500.14034/607
dc.identifier.volume30en_US
dc.identifier.wosWOS:000884407400004en_US
dc.identifier.wosqualityQ4en_US
dc.indekslendigikaynakWeb of Scienceen_US
dc.indekslendigikaynakScopusen_US
dc.indekslendigikaynakTR-Dizinen_US
dc.language.isoenen_US
dc.publisherScientific And Technological Research Council Turkeyen_US
dc.relation.journalTurkish Journal Of Electrical Engineering And Computer Sciencesen_US
dc.relation.publicationcategoryMakale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanıen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.subjectExplainable AIen_US
dc.subjectmedical multiclass classificationen_US
dc.subjectSHAPen_US
dc.subjectLIMEen_US
dc.subjectdecision treeen_US
dc.subjectquantitative explainability evaluationen_US
dc.titleA quantitative evaluation of explainable AI methods using the depth of decision treeen_US
dc.typeArticleen_US
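The TDT metric described in the abstract can be sketched in a few lines: rank features by an XAI importance score, fit an unrestricted decision tree on the top-ranked features, and read off the tree's depth. The sketch below is illustrative only, using scikit-learn and random placeholder importance scores; in the paper the scores come from SHAP or LIME explanations of the black-box FNN, and the paper additionally defines the ACD metric, which is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic multiclass data standing in for the medical documents.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# Placeholder per-feature importance scores; in the paper these would be
# produced by SHAP or LIME when explaining the black-box FNN's predictions.
rng = np.random.default_rng(0)
importance = rng.random(X.shape[1])

# Keep only the top-k features as ranked by the XAI importance scores.
top_k = np.argsort(importance)[::-1][:8]
X_sel = X[:, top_k]

# Grow a full (unrestricted) decision tree on the selected features.
tree = DecisionTreeClassifier(random_state=0).fit(X_sel, y)

# Total depth of the tree (TDT): a deeper tree indicates that the XAI
# method's feature ranking leads to a more complex explanation.
tdt = tree.get_depth()
print(tdt)
```

A lower TDT for one XAI method than another, on the same data and black-box model, would then indicate the less complex (more interpretable) explanation, which is the comparison the paper performs between SHAP and LIME.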

Files

Original bundle
Name: A quantitative evaluation of explainable AI methods using the depth of decision tree.pdf
Size: 699.24 KB
Format: Adobe Portable Document Format
Description: Full text