The Snowmass 2021 CompF4 topical group's scope is facilities R&D, where we consider "facilities" as the hardware and software infrastructure inside the data centers plus the networking between data centers, irrespective of who owns them and what policies are applied for using them. In other words, it includes commercial clouds, federally funded High Performance Computing (HPC) systems for all of science, and systems funded explicitly for a given experimental or theoretical program. However, we explicitly consider any data centers that are integrated into the data acquisition systems or triggers of the experiments out of scope here. Those systems tend to have requirements that are quite distinct from the data center functionality required for "offline" processing and storage.

As well as submitted whitepapers, this report is the result of community discussions, including sessions in the Computational Frontier workshop on August 10–11, 2020, and the CompF4 Topical Group workshop on April 7–8, 2022. These workshops drew attendees from all areas of High Energy Physics (HEP), with representatives from large and small experiments, computing facilities, theoretical communities, and industry. Registered workshop participants are listed in Appendix A.

The community discussions quickly converged on six distinct sub-topics within this topical working group. Those include the obvious "Storage" and "Processing" that are already in the name of our topical group, but also potentially less obvious ones like "Edge Services", "AI Hardware", "Analysis Facilities", and of course "Networking". The leads for these topics are listed in Appendix B. Each of these sub-topics defines itself below in its respective section, and arrives at conclusions within its respective scope. We find that in many cases, multiple sub-areas arrive at related, or mutually reinforcing, recommendations for needed action. We thus bring these together into a coherent picture, rather than just summarizing each sub-topic separately.

The one characteristic that remains unchanged is the nature of HEP as a "team sport" with teams that are global in nature. These global teams will continue to require global federation of "in-kind" resources because each funding agency involved will make its own decisions on how to provide the required resources for a given program. The movement of data across the global research and education networks, and in/out of processing and storage facilities, is thus the one characteristic that is unlikely to change. With the slowdown of Moore's Law we expect a diversification of computing devices, architectures, and computing paradigms. R&D is required for the community to understand how to exploit a much more heterogeneous computing and storage landscape at the facilities to contain overall costs given this slowdown. HEP will need to make more efficient use of facilities that are diverse both in the type of facility (e.g., dedicated grid resources, HPC, and cloud) and the type of compute they have available (CPU, GPU, special-purpose AI accelerators, computational storage, etc.).
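To make the notion of federating heterogeneous "in-kind" resources concrete, the following is a minimal illustrative sketch (not taken from the report, and far simpler than production systems such as grid matchmakers): jobs declare the compute types they need, sites declare what they offer, and a greedy matcher places each job on the first site that can satisfy it. All site, job, and device names here are hypothetical.

```python
# Illustrative sketch only: greedy matching of jobs to a heterogeneous,
# federated pool of sites. All names below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Site:
    name: str          # e.g., a grid site, an HPC center, or a cloud
    devices: set       # compute types offered: "cpu", "gpu", "accel", ...
    free_slots: int    # remaining capacity at this site

@dataclass
class Job:
    name: str
    needs: set         # compute types the job requires

def match(jobs, sites):
    """Place each job on the first site that offers every device type
    the job needs and still has a free slot."""
    placement = {}
    for job in jobs:
        for site in sites:
            if job.needs <= site.devices and site.free_slots > 0:
                site.free_slots -= 1
                placement[job.name] = site.name
                break
    return placement

sites = [Site("grid-site", {"cpu"}, 2),
         Site("hpc-center", {"cpu", "gpu"}, 1),
         Site("cloud", {"cpu", "gpu", "accel"}, 4)]
jobs = [Job("reco", {"cpu"}),
        Job("training", {"gpu"}),
        Job("inference", {"accel"})]

placement = match(jobs, sites)
print(placement)
# → {'reco': 'grid-site', 'training': 'hpc-center', 'inference': 'cloud'}
```

Real federated infrastructures must additionally handle ownership policies, fair-share accounting across funding agencies, and data placement, which is precisely where the R&D discussed in this report comes in.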