BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082410Z
LOCATION:HG E 1.1
DTSTART;TZID=Europe/Stockholm:20240604T143000
DTEND;TZID=Europe/Stockholm:20240604T150000
UID:submissions.pasc-conference.org_PASC24_sess180_pap132@linklings.com
SUMMARY:Topological Interpretability for Deep Learning
DESCRIPTION:Paper\n\nAdam Spannaus, Heidi Hanson, and Georgia Tourassi
  (Oak Ridge National Laboratory) and Lynne Penberthy (NIH)\n\nAs AI-based
  systems are increasingly adopted across everyday life, the need to
  understand their decision-making mechanisms grows correspondingly. How
  much we can trust the statistical inferences made by AI-based decision
  systems is a particular concern in high-risk settings such as criminal
  justice or medical diagnosis, where incorrect inferences may have tragic
  consequences. Despite their successes in providing solutions to problems
  involving real-world data, deep learning (DL) models cannot quantify the
  certainty of their predictions. These models are frequently quite
  confident, even when their solutions are incorrect.\n\nThis work
  presents a method to infer prominent features in two DL classification
  models trained on clinical and non-clinical text by employing techniques
  from topological and geometric data analysis. We create a graph of a
  model's feature space and cluster the inputs into the graph's vertices
  by the similarity of their features and prediction statistics. We then
  extract subgraphs demonstrating high predictive accuracy for a given
  label. These subgraphs contain a wealth of information about features
  that the DL model has recognized as relevant to its decisions. We infer
  these features for a given label using a distance metric between
  probability measures, and demonstrate the stability of our method
  compared to the LIME and SHAP interpretability methods. This work
  establishes that we can gain insight into the decision mechanism of a DL
  model. The method allows us to ascertain whether the model makes its
  decisions based on information germane to the problem or on extraneous
  patterns within the data.\n\nDomain: Computational Methods and Applied
  Mathematics\n\nSession Chair: Luca Muscarnera (Politecnico di Milano)
END:VEVENT
END:VCALENDAR
