BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082410Z
LOCATION:HG F 30 Audi Max
DTSTART;TZID=Europe/Stockholm:20240604T094000
DTEND;TZID=Europe/Stockholm:20240604T094000
UID:submissions.pasc-conference.org_PASC24_sess158_pos117@linklings.com
SUMMARY:P20 - Fast Inference of Cosmology from High Resolution Maps Using 
 Deep Learning
DESCRIPTION:Poster\n\nArne Thomsen (ETH Zurich); Tomasz Kacprzak (ETH Zuri
 ch, Swiss Data Science Center); Peter Harrington (National Energy Research
  Scientific Computing Center); Agnes Ferte (SLAC); and Alexandre Refregier
  (ETH Zurich)\n\nOngoing galaxy surveys like the Dark Energy Survey are de
 signed to observe the large-scale structure of the Universe using a number
  of cosmological probes, such as weak gravitational lensing and galaxy clu
 stering. Conventionally, constraints on the cosmological parameters are ca
 lculated by comparing two-point functions of the observables with semi-ana
 lytical theory predictions. However, we know that these physical fields co
 ntain information beyond what the two-point functions can capture as the U
 niverse has evolved into nonlinear structures. With this project, we propo
 se to leverage the expressive power of deep learning to extract this addit
 ional cosmological information by instead learning the summary statistic. 
 To achieve fast progress in the scientific analysis, we aim to train the d
 eep networks in under 24 hours. We present a GPU framework for fast and ef
 ficient analysis of cosmological maps on the A100 GPU nodes of the Perlmut
 ter cluster at NERSC. We benchmark the pipeline for data-parallel training
  on multiple A100 GPUs, on both single and multiple Perlmutter nodes. We
  compare different distribution strategies, supervised and self-supervised
  loss functions, and graph convolutional and vision transformer network ar
 chitectures on the sphere.\n\nSession Chair: Iva Kavcic (Met Office)
END:VEVENT
END:VCALENDAR
