BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082410Z
LOCATION:HG F 30 Audi Max
DTSTART;TZID=Europe/Stockholm:20240604T094800
DTEND;TZID=Europe/Stockholm:20240604T094900
UID:submissions.pasc-conference.org_PASC24_sess158_pos163@linklings.com
SUMMARY:P31 - Improving Chest X-ray Image Classification via Parallelized 
 Generative Neural Architecture Search
DESCRIPTION:Poster\n\nFelix Mejia (Industrial University of Santander), Jo
 hn Anderson Garcia Henao (University of Bern), Carlos Barrio (Industrial U
 niversity of Santander), and Michel Riveill (Université Côte d’Azur)\n\nE
 xplore GenNAS for chest X-ray classification in lung diseases, leveraging 
 novel parallel training methods for enhanced accuracy and efficiency. Medi
 cal image classification for pulmonary pathologies from chest X-rays is tr
 aditionally time-consuming. GenNAS, using GPT-4's generative capabilities,
  automates optimal architecture learning from data. \nThis study investiga
 tes parallelization and generative algorithms to optimize neural network a
 rchitectures for chest X-ray classification, analyzing their impact on the
 NAS algorithm using the CheXpert dataset.\nThe study uses the CheXpert da
 taset with 224,316 chest X-rays, focusing on classifying five lung disease
  pathologies. GenNASXRays evaluates 6561 architecture possibilities in an 
 8-layer search space, with AUC-ROC and Precision-Recall plots as metrics. 
 Training on 187,641 images, the sequential algorithm took 190.2 hours for 
 an accuracy of 0.869. In parallel execution on two GPUs, an accuracy of 0.
 87 was achieved in 127.09 hours, highlighting the efficiency of paralleliz
 ation. \nThe experiments were also executed with well-known neural network
  architectures for image classification, such as DenseNet-121 (accuracy 0
 .8678), ResNet-152 (0.875), and EfficientNet-B0 (0.7494), all very close
  to the architectures generated by GenNAS. \nGenNAS demonstrates precisi
 on in defining deep learning models. Parallelization significantly acceler
 ates Neural Architecture Search, potentially improving patient outcomes th
 rough timely and accurate diagnoses.\n\nSession Chair: Iva Kavcic (Met Off
 ice)
END:VEVENT
END:VCALENDAR
