Pooling the resources of the CMS Tier-1 sites [electronic resource].
- Published:
- Washington, D.C. : United States. Dept. of Energy. High Energy Physics Division, 2015.
Oak Ridge, Tenn. : Distributed by the Office of Scientific and Technical Information, U.S. Dept. of Energy
- Physical Description:
- Article numbers 042, 056 : digital, PDF file
- Additional Creators:
- Fermi National Accelerator Laboratory, United States. Department of Energy. High Energy Physics Division, and United States. Department of Energy. Office of Scientific and Technical Information
- Restrictions on Access:
- Free-to-read; unrestricted online access
- Summary:
- The CMS experiment at the LHC relies on seven Tier-1 centres of the WLCG to perform the majority of its bulk processing activity and to archive its data. During the first run of the LHC, these two functions were tightly coupled, as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of processing workflows occasionally resulted in uneven resource utilisation and in increased latency in the delivery of results to the physics community.
The long shutdown of the LHC in 2013-2014 was an opportunity to revisit this mode of operation, disentangling the processing and archive functionalities of the Tier-1 centres. The storage services at the Tier-1s were redeployed, breaking the traditional hierarchical model: each site now provides a large disk storage to host input and output data for processing, and an independent tape storage used exclusively for archiving. Movement of data between the tape and disk endpoints is not automated, but is triggered externally through the WLCG transfer management systems.
With this new setup, CMS operations actively controls at any time which data is available on disk for processing and which data should be sent to archive. Thanks to the high-bandwidth connectivity guaranteed by the LHCOPN, input data can be freely transferred between disk endpoints as needed to take advantage of free CPU, turning the Tier-1s into a large pool of shared resources. Output data can be validated before being archived permanently, and temporary data formats can be produced without wasting valuable tape resources. Lastly, the data hosted on disk at Tier-1s can now also be made available for user analysis, since there is no longer a risk of triggering chaotic staging from tape.
In this contribution, we describe the technical solutions adopted for the new disk and tape endpoints at the sites, and we report on the commissioning and scale testing of the service. We detail the procedures implemented by CMS computing operations to actively manage data on disk at Tier-1 sites, and we give examples of the benefits brought to CMS workflows by the additional flexibility of the new system.
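The data management model described in the summary can be sketched schematically: tape archives are decoupled from disk, tape-to-disk staging happens only on an explicit request, and input data can be replicated disk-to-disk so that processing runs wherever CPU is free. This is a minimal illustrative sketch, not CMS software; all names (`Tier1`, `request_staging`, `replicate`, `pick_processing_site`) are hypothetical.

```python
# Hypothetical sketch of the disk/tape endpoint model described in the
# abstract. Not CMS code: in production these roles are played by the
# WLCG transfer management systems.

from dataclasses import dataclass, field

@dataclass
class Tier1:
    name: str
    tape: set = field(default_factory=set)   # datasets archived on tape
    disk: set = field(default_factory=set)   # datasets staged on disk
    free_cpu: int = 0                        # free processing slots

def request_staging(site: Tier1, dataset: str) -> None:
    """Stage a dataset from the site's tape archive to its disk endpoint.
    Nothing moves automatically; operations must issue this request."""
    if dataset not in site.tape:
        raise ValueError(f"{dataset} is not archived at {site.name}")
    site.disk.add(dataset)

def replicate(src: Tier1, dst: Tier1, dataset: str) -> None:
    """Disk-to-disk transfer between Tier-1s (over the LHCOPN in the
    text), independent of any tape archive."""
    if dataset not in src.disk:
        raise ValueError(f"{dataset} is not on disk at {src.name}")
    dst.disk.add(dataset)

def pick_processing_site(sites, dataset):
    """With pooled resources, prefer a site already hosting the dataset
    on disk; among candidates, pick the one with the most free CPU."""
    hosting = [s for s in sites if dataset in s.disk]
    candidates = hosting or list(sites)
    return max(candidates, key=lambda s: s.free_cpu)
```

A short usage example under the same assumptions: stage a run explicitly, replicate it to a second site with idle CPU, and schedule processing there.

```python
fnal = Tier1("FNAL", tape={"/RAW/runA"}, free_cpu=0)
ral = Tier1("RAL", free_cpu=500)
request_staging(fnal, "/RAW/runA")   # explicit tape-to-disk staging
replicate(fnal, ral, "/RAW/runA")    # move input to where CPU is free
site = pick_processing_site([fnal, ral], "/RAW/runA")
```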
- Report Numbers:
- E 1.99:fermilab-conf--15-447-cd
- Subject(s):
- Note:
- Published through SciTech Connect.
12/23/2015.
"fermilab-conf--15-447-cd"
"1413883"
Journal of Physics: Conference Series, 664(4). ISSN 1742-6588. AM
21st International Conference on Computing in High Energy and Nuclear Physics, Okinawa (Japan), 13-17 Apr 2015.
A. Apyan; J. Badillo; J. Diaz Cruz; S. Gadrat; O. Gutsche; B. Holzman; A. Lahiff; N. Magini; D. Mason; A. Perez; F. Stober; S. Taneja; M. Taze; C. Wissing.
- Funding Information:
- AC02-07CH11359