Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service

dc.contributor.author: Hayrapetyan, A.
dc.contributor.author: Tumasyan, A.
dc.contributor.author: Adam, W.
dc.contributor.author: Andrejkovic, J.W.
dc.contributor.author: Bergauer, T.
dc.contributor.author: Chatterjee, S.
dc.contributor.author: Damanakis, K.
dc.date.accessioned: 2025-03-20T09:44:55Z
dc.date.available: 2025-03-20T09:44:55Z
dc.date.issued: 2024
dc.department: İzmir Bakırçay Üniversitesi
dc.description.abstract: Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In these studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing throughput performance. We emphasize that the SONIC approach enables high coprocessor utilization and the portability to run workflows on different types of coprocessors. © The Author(s) 2024.
dc.description.sponsorship: Council of Scientific and Industrial Research, India, CSIR
dc.description.sponsorship: Ministry of Business, Innovation and Employment, MBIE
dc.description.sponsorship: Ministry of Education and Science, MES
dc.description.sponsorship: Benemérita Universidad Autónoma de Puebla, BUAP
dc.description.sponsorship: Department of Atomic Energy, Government of India, DAE
dc.description.sponsorship: PCTI
dc.description.sponsorship: National Academy of Sciences of Ukraine, NASU
dc.identifier.doi: 10.1007/s41781-024-00124-1
dc.identifier.issn: 2510-2044
dc.identifier.issue: 1
dc.identifier.scopus: 2-s2.0-85203288230
dc.identifier.scopusquality: Q1
dc.identifier.uri: https://doi.org/10.1007/s41781-024-00124-1
dc.identifier.uri: https://hdl.handle.net/20.500.14034/2057
dc.identifier.volume: 8
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: Springer Nature
dc.relation.ispartof: Computing and Software for Big Science
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/openAccess
dc.snmz: KA_Scopus_20250319
dc.subject: CMS
dc.subject: Machine learning
dc.subject: Offline and computing
dc.title: Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service
dc.type: Article

Files