BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//NONSGML//EN
BEGIN:VTIMEZONE
TZID:Europe/Stockholm
X-LIC-LOCATION:Europe/Stockholm
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20241120T082410Z
LOCATION:HG F 30 Audi Max
DTSTART;TZID=Europe/Stockholm:20240604T095300
DTEND;TZID=Europe/Stockholm:20240604T095400
UID:submissions.pasc-conference.org_PASC24_sess158_pos106@linklings.com
SUMMARY:P38 - Performance Characterisation of Software for Lattice Quantum
  Field Theory Beyond the Standard Model
DESCRIPTION:Poster\n\nEd Bennett (Swansea University); Luigi Del Debbio an
 d Ryan Hill (University of Edinburgh); Jong-Wan Lee (Institute for Basic S
 cience); Julian Lenz, Biagio Lucini, and Maurizio Piai (Swansea Universit
 y); Andrew Sunderland (Science and Technology Facilities Council); and Dav
 ide Vadacchino (University of Plymouth)\n\nLattice Quantum Chromodynamics 
 (QCD) is a computationally demanding field that has driven many innovation
 s in the High-Performance Computing space. Beyond the Standard Model (BSM)
  physics introduces additional degrees of freedom that significantly incre
 ase the complexity of software and the difficulty of writing performant, p
 ortable code. In this poster we present an assessment of the performance o
 f HiRep and Grid, two suites of BSM-capable lattice software, when applied
  to problems of current physical interest. HiRep is a library and set of t
 ools written in C, making use of a C++ and Perl code generator for the low
 est-level data structures, and MPI for parallelism. Grid is a library and 
 set of tools written in C++17, making use of expression templates to give 
 both flexibility in usage and performance portability, based on separation
  of concerns, with parallelism available via combinations of technologies 
 including MPI, OpenMP, and shared memory over NVLink. Using observed be
 nchmark data, we discuss the areas in which each of these approaches pe
 rforms well and how each scales on CPU and GPU architectures, in the co
 ntext of a set of modifications made to Grid to introduce support for t
 heories in the symplectic family of groups, which had previously been i
 mplemented in HiRep.\n\nSession Chair: Iva Kavcic (Met Office)
END:VEVENT
END:VCALENDAR
