

World Network Speed Record Shattered for Third Consecutive Year
Peak throughput of 151 Gbps and an official mark of 131.6 Gbps

An international team of scientists and engineers has smashed the network speed record for the third consecutive year, moving data at an average rate of 100 gigabits per second (Gbps) for several hours at a time. A rate of 100 Gbps is sufficient to transmit five feature-length DVD movies across the Internet from one location to another in a single second.
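The five-movies-per-second figure is easy to sanity-check. A quick back-of-the-envelope calculation, assuming a feature-length movie compressed to roughly 2.5 GB (an illustrative size; full DVD images run up to 4.7 GB single-layer or 8.5 GB dual-layer):

```python
# Sanity check of the article's figure: 100 Gbps vs. feature-length movies.
rate_bps = 100e9                    # 100 gigabits per second
bytes_per_sec = rate_bps / 8        # = 12.5 GB transferred each second
movie_bytes = 2.5e9                 # assumed size of one compressed movie
print(bytes_per_sec / movie_bytes)  # -> 5.0 movies per second
```

At larger assumed movie sizes the rate drops proportionally, so the claim is consistent with typically compressed feature films.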

The winning "High-Energy Physics" team comprises physicists, computer scientists, and network engineers led by the California Institute of Technology, the Stanford Linear Accelerator Center (SLAC), Fermilab, CERN, and the University of Michigan, with partners at the University of Florida, Vanderbilt, and Brookhaven National Lab, as well as international participants from the UK (University of Manchester and UKLight), Brazil (Rio de Janeiro State University, UERJ, and the State Universities of São Paulo, USP and UNESP), Korea (Kyungpook National University, KISTI), and Japan (the KEK Laboratory in Tsukuba). The team joined forces to set a new world record for data transfer, capturing first prize at the Supercomputing 2005 (SC|05) Bandwidth Challenge (BWC).

The HEP team's demonstration of "Distributed TeraByte Particle Physics Data Sample Analysis" achieved a peak throughput of 151 Gbps and an official mark of 131.6 Gbps measured by the BWC judges on 17 of the 22 optical fiber links used by the team, beating their previous mark for peak throughput of 101 Gbps by 50 percent. In addition to the impressive transfer rate for DVD movies, the new record data transfer speed is also equivalent to serving 10,000 MPEG2 HDTV movies simultaneously in real time, or transmitting all of the printed content of the Library of Congress in 10 minutes.
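The stated equivalences line up with the measured numbers. A short check, dividing the official mark across the 17 measured links and across 10,000 simultaneous HDTV streams (the per-stream bitrate that falls out is within the typical MPEG2 HDTV range):

```python
official_gbps = 131.6
links = 17
print(official_gbps / links)   # ~7.74 Gbps average on each 10 Gbps link

hdtv_streams = 10_000
mbps_per_stream = official_gbps * 1000 / hdtv_streams
print(mbps_per_stream)         # -> 13.16 Mbps per stream
```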

The team sustained average data rates above the 100 Gbps level for several hours for the first time, and transferred a total of 475 terabytes of physics data among the team's sites throughout the U.S. and overseas within 24 hours. The extraordinary data transport rates were made possible in part through the use of the FAST TCP protocol developed by Associate Professor of Computer Science and Electrical Engineering Steven Low and his Caltech Netlab team, as well as new data transport applications developed at SLAC and Fermilab and an optimized Linux kernel developed at Michigan.
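Unlike loss-based TCP variants, FAST TCP paces its congestion window from measured queueing delay, which is what lets a single flow hold near-full utilization on long, high-capacity paths. A toy single-flow sketch of the published window-update rule follows; the link capacity, delays, and tuning constants are illustrative values, not the SC|05 configuration:

```python
# Toy single-flow simulation of the FAST TCP window update:
#   w <- min(2w, (1 - gamma) * w + gamma * (baseRTT / RTT * w + alpha))
# All parameters below are illustrative, not real SC|05 settings.
capacity = 1000.0   # packets per RTT the bottleneck link can carry
base_rtt = 1.0      # propagation delay, in RTT units
alpha = 100.0       # packets FAST tries to keep queued in the path
gamma = 0.5         # smoothing factor

w = 1.0             # congestion window, in packets
for _ in range(200):
    queue = max(0.0, w - capacity * base_rtt)  # simple fluid queue model
    rtt = base_rtt + queue / capacity          # queueing delay inflates RTT
    w = min(2 * w, (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))

# At equilibrium the flow fills the pipe plus alpha queued packets:
print(round(w))  # -> 1100 = capacity * base_rtt + alpha
```

The equilibrium illustrates the design choice: throughput converges to full link capacity while the queue is pinned near a small constant (alpha), rather than oscillating around loss events as in standard TCP.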

Professor of Physics Harvey Newman of Caltech, head of the HEP team and US CMS Collaboration Board Chair, who originated the LHC Data Grid Hierarchy concept, said, "This demonstration allowed us to preview the globally distributed Grid system of more than 100 laboratory and university-based computing facilities that is now being developed in the U.S., Latin America, and Europe in preparation for the next generation of high-energy physics experiments at CERN's Large Hadron Collider (LHC) that will begin operation in 2007.

"We used a realistic mixture of streams, including the organized transfer of multiterabyte datasets among the laboratory centers at CERN, Fermilab, SLAC, and KEK, plus numerous other flows of physics data to and from university-based centers represented by Caltech, Michigan, Florida, Rio de Janeiro and São Paulo in Brazil, and Korea, to effectively use the remainder of the network capacity.

"The analysis of this data will allow physicists at CERN to search for the Higgs particles thought to be responsible for mass in the universe, supersymmetry, and other fundamentally new phenomena bearing on the nature of matter and space-time, in an energy range made accessible by the LHC for the first time."

The largest physics collaborations at the LHC, CMS and ATLAS each encompass more than 2,000 physicists and engineers from 160 universities and laboratories. In order to fully exploit the potential for scientific discoveries, the many petabytes of data produced by the experiments will be processed, distributed, and analyzed using a global Grid. The key to discovery is the analysis phase, where individual physicists and small groups repeatedly access, and sometimes extract and transport terabyte-scale data samples on demand, in order to optimally select the rare "signals" of new physics from potentially overwhelming "backgrounds" of already-understood particle interactions. This data will amount to many tens of petabytes in the early years of LHC operation, rising to the exabyte range within the coming decade.

Matt Crawford, head of the Fermilab network team at SC|05 said, "The realism of this year's demonstration represents a major step in our ability to show that the unprecedented systems required to support the next round of high-energy physics discoveries are indeed practical. Our data sources in the bandwidth challenge were some of our mainstream production storage systems and file servers, which are now helping to drive the searches for new physics at the high-energy frontier at Fermilab's Tevatron, as well as the explorations of the far reaches of the universe by the Sloan Digital Sky Survey."

Les Cottrell, leader of the SLAC team and assistant director of scientific computing and computing services, said, "Some of the pleasant surprises at this year's challenge were the advances in throughput we achieved using real applications to transport physics data, including bbcp and xrootd developed at SLAC. The performance of bbcp used together with Caltech's FAST protocol and an optimized Linux kernel developed at Michigan, as well as our xrootd system, were particularly striking. We were able to match the performance of the artificial data transfer tools we used to reach the peak rates in past years."

Future optical networks incorporating multiple 10 Gbps links are the foundation of the Grid system that will drive the scientific discoveries. A "hybrid" network integrating both traditional switching and routing of packets and dynamically constructed optical paths to support the largest data flows is a central part of the near-term future vision that the scientific community has adopted to meet the challenges of data-intensive science in many fields. By demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic Grid supporting many terabyte and larger data transactions is practical.

Shawn McKee, associate research scientist in the University of Michigan Department of Physics and leader of the UltraLight Network technical group, said, "This achievement is an impressive example of what a focused network effort can accomplish. It is an important step towards the goal of delivering a highly capable end-to-end network-aware system and architecture that meet the needs of next-generation e-science."

The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and plan to deploy a new generation of revolutionary Internet applications. Multigigabit end-to-end network performance will empower scientists to form "virtual organizations" on a planetary scale, sharing their collective computing and data resources in a flexible way. In particular, this is vital for projects on the frontiers of science and engineering in "data intensive" fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.

The new bandwidth record was achieved through extensive use of the SCInet network infrastructure at SC|05. The team used fifteen 10 Gbps links to Cisco Systems Catalyst 6500 Series Switches at the Caltech Center for Advanced Computing Research (CACR) booth and seven 10 Gbps links to a Catalyst 6500 Series Switch at the SLAC/Fermilab booth, together with computing clusters provided by Hewlett Packard, Sun Microsystems, and IBM, and a large number of 10 gigabit Ethernet server interfaces: more than 80 provided by Neterion and 14 by Chelsio.

The external network connections to Los Angeles, Sunnyvale, the Starlight facility in Chicago, and Florida included the Cisco Research, Internet2/HOPI, UltraScience Net and ESnet wavelengths carried by National Lambda Rail (NLR); Internet2's Abilene backbone; the three wavelengths of TeraGrid; an ESnet link provided by Qwest; the Pacific Wave link; and Canada's CANARIE network. International connections included the US LHCNet links (provisioned by Global Crossing and Colt) between Chicago, New York, and CERN, the CHEPREO/WHREN link (provisioned by LANautilus) between Miami and São Paulo, the UKLight link, the Gloriad link to Korea, and the JGN2 link to Japan.

Regional connections included six 10 Gbps wavelengths provided with the help of CIENA to Fermilab; two 10 Gbps wavelengths to the Caltech campus provided by Cisco Systems' research waves across NLR and California's CENIC network; two 10 Gbps wavelengths to SLAC provided by ESnet and UltraScienceNet; three wavelengths between Starlight and the University of Michigan over Michigan Lambda Rail (MiLR); and wavelengths to Jacksonville and Miami across Florida Lambda Rail (FLR). During the test, several of the network links were shown to operate at full capacity for sustained periods.

While the SC|05 demonstration required a major effort by the teams involved and their sponsors, in partnership with major research and education network organizations in the U.S., Europe, Latin America, and Pacific Asia, it is expected that networking on this scale in support of the largest science projects (such as the LHC) will be commonplace within the next three to five years. The demonstration also appeared to stress the network and server systems used, so the team is continuing its test program to put the technologies and methods used at SC|05 into production use, with the goal of attaining the necessary level of reliability in time for the start of the LHC research program.

As part of the SC|05 demonstrations, a distributed analysis of simulated LHC physics data was done using the Grid-enabled Analysis Environment (GAE) developed at Caltech for the LHC and many other major particle physics experiments, as part of the Particle Physics Data Grid (PPDG), GriPhyN/iVDGL, Open Science Grid, and DISUN projects. This involved transferring data to CERN, Florida, Fermilab, Caltech, and Brazil for processing by clusters of computers, and finally aggregating the results back to the show floor to create a dynamic visual display of quantities of interest to the physicists. In another part of the demonstration, file servers at the SLAC/FNAL booth and in Manchester also were used for disk-to-disk transfers between Seattle and the UK.

The team used Caltech's MonALISA (MONitoring Agents using a Large Integrated Services Architecture) system to monitor and display the real-time data for all the network links used in the demonstration. It simultaneously monitored more than 14,000 grid nodes in 200 computing clusters. MonALISA (http://monalisa.caltech.edu) is a highly scalable set of autonomous self-describing agent-based subsystems that are able to collaborate and cooperate in performing a wide range of monitoring tasks for networks and Grid systems, as well as the scientific applications themselves.

The network has been deployed through exceptional support by Cisco Systems, Hewlett Packard, Neterion, Chelsio, Sun Microsystems, IBM, and Boston Ltd., as well as the network engineering staffs of National LambdaRail, Internet2's Abilene Network, ESnet, TeraGrid, CENIC, MiLR, FLR, Pacific Wave, AMPATH, RNP and ANSP/FAPESP in Brazil, KISTI in Korea, UKLight in the UK, JGN2 in Japan, and the Starlight international peering point in Chicago.

The demonstration and the developments leading up to it were made possible through the strong support of the U.S. Department of Energy Office of Science and the National Science Foundation, in cooperation with the funding agencies of the international partners.
