BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082410Z
LOCATION:HG F 30 Audi Max
DTSTART;TZID=Europe/Stockholm:20240603T185200
DTEND;TZID=Europe/Stockholm:20240603T185300
UID:submissions.pasc-conference.org_PASC24_sess156_pos144@linklings.com
SUMMARY:P03 - Additively Preconditioned Trust Region Strategies for Machin
 e Learning
DESCRIPTION:Poster\n\nSamuel Cruz (Università della Svizzera italiana\, Un
 iDistance Suisse)\; Ken Trotti (Università della Svizzera italiana)\; Ale
 na Kopaničáková (Brown University\, Università della Svizzera italiana)\;
  and Rolf Krause (Università della Svizzera italiana\, UniDistance Suisse
 )\n\nIn our work we adopt a novel variant of the “Additively Precondition
 ed Trust-Region Strategy” (APTS) to train neural networks (NNs). APTS is
  based on a right preconditioned Trust-Region (TR) method\, which utilize
 s an additive domain-decomposition-based preconditioner. In the context o
 f NN training\, the domain is considered to be either the parameters of t
 he NN or the training data set. Based on the TR framework\, APTS guarante
 es global convergence to a minimizer. It also eliminates the need for cos
 tly hyper-parameter tuning\, since the TR algorithm automatically determi
 nes the step size in every iteration. The presented numerical study compa
 res APTS with widely used training methods such as SGD\, Adam\, LBFGS\, a
 nd the standard TR method\, demonstrating the capabilities\, strengths\,
  and limitations of the proposed training methods.\n\nSession Chair: Eri
 k W. Draeger (Lawrence Livermore National Laboratory)
END:VEVENT
END:VCALENDAR
