BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082410Z
LOCATION:HG F 30 Audi Max
DTSTART;TZID=Europe/Stockholm:20240604T095900
DTEND;TZID=Europe/Stockholm:20240604T100000
UID:submissions.pasc-conference.org_PASC24_sess158_pos108@linklings.com
SUMMARY:P47 - Sculpting Precision: Unveiling the Impact of eXplainable Fea
 tures and Magnitudes in Neural Network Pruning
DESCRIPTION:Poster\n\nJamil Gafur (The University of Iowa, National Renewa
 ble Energy Laboratory) and Steve Goddard (The University of Iowa)\n\nIn th
 e domain of Machine Learning (ML), models are celebrated for their high ac
 curacy; however, integrating them into resource-constrained embedded syste
 ms poses a formidable challenge. This study empirically demonstrates that 
 traditional magnitude-based pruning techniques, though effective in compre
 ssing model size, lead to underfitting, reducing the model's ability to di
 scern complex features. Additionally, the compression-to-accuracy ratio of
  eXplainable Artificial Intelligence (XAI) pruning techniques is explored.
  The research postulates that leveraging XAI techniques in model pruning a
 chieves higher compression rates than conventional magnitude-based methods
  without inducing underfitting. XAI pruning removes redundant neuron group
 s, preserving the overall "knowledge." Examining ResNet50 and VGG19 models
  on CIFAR-10 data, the study compares magnitude-based and XAI pruning meth
 ods across varying pruning targets and rates. Our results confirm underfit
 ting with magnitude-based pruning and validate XAI's superiority in retain
 ing accuracy during compression. The second experiment focuses on the chan
 ges in XAI features during pruning, emphasizing the reliability of XAI pru
 ning over magnitude pruning. In conclusion, this study underscores the val
 ue of XAI pruning over magnitude pruning in retaining model accuracy. Resu
 lts reveal that XAI-driven pruning is a viable solution for reducing ML mo
 del parameters in resource-constrained environments, ensuring accuracy is 
 retained while mitigating the impact of model size reduction.\n\nSession C
 hair: Iva Kavcic (Met Office)
END:VEVENT
END:VCALENDAR
