The CMS collaboration has chosen a novel High-Granularity Calorimeter (HGCAL) for the endcap regions as part of its planned upgrade for the High-Luminosity LHC (HL-LHC). The high granularity of the detector is crucial for disentangling showers overlapping with the high levels of pileup expected at the HL-LHC (140 or more interactions per bunch crossing). However, reconstructing these complex events and rejecting background pose significant challenges, particularly for the Level-1 trigger (L1T), where processing resources and latency are tightly constrained. It is therefore planned to use Machine Learning (ML) models for this task, in particular for the identification of electromagnetic and hadronic showers using the 3D shape of the energy deposits. The 3D shape of a shower is encoded as shape variables computed in the HGCAL trigger primitive generation (TPG) system and sent to the central L1T, where they are used as inputs by classification models. The choice of this set of variables is crucial and must take into account not only their discrimination power but also the limited bandwidth between the HGCAL TPG and the central L1T, as well as the hardware resources needed to implement the classifiers. To find the best compromise, a multi-objective optimization technique based on genetic algorithms is used to jointly optimize the classification performance, the number of bits required to encode the shape variables, and the complexity of the classification model. The results of this optimization, and in particular the balance between performance and hardware complexity, will be discussed in this presentation.
Alexandre Hakimi (LLR, École Polytechnique/CNRS)
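The core idea of the abstract — selecting a set of shape variables and their bit widths by trading off classification performance, bandwidth, and hardware cost — can be sketched as a multi-objective search that keeps only Pareto-optimal candidates. The sketch below is a minimal, pure-Python illustration with stand-in objective functions; the variable pool size, bit widths, and the three proxy objectives are assumptions for illustration, not the actual HGCAL metrics or the genetic algorithm used in the study.

```python
import random

N_VARS = 10  # hypothetical pool of shower-shape variables

def evaluate(candidate):
    """Return (error proxy, bandwidth, complexity) -- all to be minimized.

    These are illustrative stand-ins: in the real optimization the first
    objective would come from a trained classifier's performance.
    """
    mask, bits = candidate
    n_used = sum(mask)
    error = 1.0 / (1 + n_used * bits)  # proxy: more variables/bits help
    bandwidth = n_used * bits          # bits sent from the TPG to the central L1T
    complexity = n_used ** 2           # proxy for classifier hardware resources
    return (error, bandwidth, complexity)

def dominates(a, b):
    """True if objective vector a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep candidates not dominated by any other candidate."""
    scored = [(c, evaluate(c)) for c in population]
    return [c for c, s in scored
            if not any(dominates(s2, s) for _, s2 in scored)]

# Random population: each candidate is (variable-selection mask, bit width).
random.seed(0)
population = [([random.randint(0, 1) for _ in range(N_VARS)],
               random.choice([4, 6, 8, 10]))
              for _ in range(50)]
front = pareto_front(population)
```

A genetic algorithm such as NSGA-II would iterate this selection step, applying crossover and mutation to the masks and bit widths between generations; the Pareto front then exposes the performance-versus-hardware-complexity balance mentioned in the abstract.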