The Abilene Observatory: Current Status and Future Directions


Rick Summerhill
Associate Director, Backbone Network Infrastructure, Internet2
University of Minnesota, 23 October 2003

Outline

• Internet2 Infrastructure
  • Abilene, HOPI, RONs
  • Observatory will evolve to new infrastructures

• Measurements
  • History
  • Operational / research aspects
  • Abilene upgrade

• The Observatory
  • Data collections
  • Data views
  • Collocation
  • Input from the research community

Abilene – Design


Abilene – Current


Abilene Scale

• IP-over-DWDM (OC-192c) and IP-over-SONET (OC-48c) backbone, Juniper T-640 routers
• 48 connectors (OC-3c → 10 GigE)
• 222 participants – research universities & labs
  • All 50 states, District of Columbia, & Puerto Rico
  • Aggregation on the rise!
• Expanded access
  • 92 sponsored participants, expected to increase
  • 30 state networks, expected to increase
• Performance
  • Early IPv4/IPv6 tests – 8 Gbps cross-country (packet blasts)
  • Recent tests – 5.6 Gbps TCP flows from Caltech to CERN!

Abilene International Peering

[Peering map, last updated 03 October 2003]

Abilene Federal/Research Peering

[Peering map, last updated 03 October 2003]

MANLAN Exchange Point

• Internet2/NYSERnet partnership
• Ethernet switch
  • Cisco 6513 (with new fabric/backplane soon)
  • Located in the same building as the Abilene New York City router node
  • CA*net bringing in an OC-192 to the same building
• Abilene Tyco/SURFnet connection
  • OC-192 from Amsterdam
• GLIF (Global Lambda Integrated Facility)
• Installing a Cisco 15454
  • Located in the MANLAN rack
  • Will bring the CA*net and SURFnet OC-192s to that switch
  • Potential for experimental TDM paths
  • Ethernets to MANLAN
  • SONET to the Abilene New York City router node

Hybrid Network Infrastructure

• Three basic infrastructures – HOPI (Hybrid Optical Packet Infrastructure)
  • Abilene packet-based infrastructure
  • Regional Optical Networks – RONs
  • National LambdaRail wave
• Major questions
  • Vision for the network
  • Packet switching / circuit switching futures
  • How do we go forward in the near future?
    – Design team
    – Initial design February 2004
  • How do we measure!!
    – Research community involvement

Leading & Emerging Regional Optical Initiatives

• California (CALREN)
• Colorado (FRGP/BRAN)
• Connecticut (Connecticut Education Network)
• Florida (Florida LambdaRail)
• Indiana (I-LIGHT)
• Illinois (I-WIRE)
• Maryland, D.C. & northern Virginia (MAX)
• Michigan
• New York + New England region (NEREN)
• North Carolina (NC LambdaRail)
• Ohio (Third Frontier Network)
• Oregon
• Rhode Island (OSHEAN)
• SURA Crossroads (southeastern U.S.)
• Texas
• Utah


Measurement Infrastructure

• Abilene measurement infrastructure
• History
  • Original Abilene racks included measurement devices
    – Included a single PC
    – Early OWAMP, Surveyor measurements
    – Optical splitters at some locations
• Data collection motivation
  • Operational data – collected by the NOC
    – How is the network performing?
    – Helps all users and other network operators understand the network
  • Some research data collected from the beginning

Measurement Infrastructure

• Upgrade to Juniper T-640 routers and OC-192c
  • Important decision about rack space
  • Two racks, with one dedicated to the measurement platform
  • Potential for the research community to collocate equipment

Abilene router node

[Rack diagram: each router node comprises the T-640 router rack with out-of-band access (M-5) and 48 VDC power, plus a separate measurement (Observatory) rack holding the measurement machines (nms), an Ethernet switch, and spare space for collocated equipment.]

Dedicated servers at each node

• Houston router node
  • NMS machines
  • PlanetLab machines

Dedicated servers at each node

• nms1: throughput tester (iperf) – a scheduling sketch follows this list
  • GigE direct to the T-640, MTU 9000
• nms2: ad-hoc, on-demand tests (+ routing?)
  • GigE to switch to T-640, MTU 1500
• nms3: statistics collection (flow, SNMP)
  • 100bT to switch to T-640
  • Local data collection to capture data during network instability
• nms4: latency tester (owamp, troute)
  • 100bT to switch to T-640
  • CDMA "GPS" timing source from endruntechnologies.com
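
A minimal sketch of how a regularly scheduled iperf run between two nms1 machines might be driven (not the actual Abilene NMS software); the peer hostname, window size, test length, and interval are illustrative assumptions.

    #!/usr/bin/env python
    """Hypothetical scheduler for regular iperf throughput tests between
    nms1 measurement machines (a sketch, not the Abilene NMS code)."""

    import subprocess
    import time

    # Illustrative assumptions: peer hostname, test length, TCP window size.
    PEER = "nms1.hous.abilene.example.net"   # hypothetical hostname
    TEST_SECONDS = 20
    WINDOW = "4M"            # large socket buffer for a long fat network
    INTERVAL = 3600          # one scheduled test per hour

    def run_throughput_test(peer):
        """Run one iperf TCP test and return the raw report text."""
        cmd = ["iperf", "-c", peer, "-t", str(TEST_SECONDS),
               "-w", WINDOW, "-f", "m"]          # report in Mbits/sec
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.stdout

    if __name__ == "__main__":
        while True:
            report = run_throughput_test(PEER)
            # Append the raw report for later loading into the archive.
            with open("throughput.log", "a") as log:
                log.write(time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()))
                log.write("\n" + report + "\n")
            time.sleep(INTERVAL)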

Hardware

• Intel SR2200 from ioncomputer.com
• Intel SCB2 motherboard
• Dual 1.26 GHz Pentium III, 512 KB L2 cache
• 1 GB PC-133 DRAM in two banks
• ServerWorks ServerSet HE-SL chipset
• 64-bit, 66 MHz PCI
• SysKonnect GigE SK-9843 SX
• Redundant 48 VDC power

OS

• nms1's (in transition to):
  • Linux 2.4.20, SMP, no changes
  • All traffic default-routed through the GigE interface
  • Buffers tuned per the LBL tuning document (see the sketch after this list)
  • 990 Mb/s TCP between any two
  • (Better TCP platform)

• nms2–4:
  • FreeBSD 4.6-STABLE
  • Buffers tuned
  • (Better UDP platform)
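
"Buffers tuned" here means enlarging socket send/receive buffers toward the bandwidth-delay product, in the spirit of the LBL TCP tuning guidance; below is a minimal per-socket sketch in Python with an illustrative buffer size, not the actual host-wide sysctl settings used on the nms machines.

    import socket

    # Illustrative buffer size: roughly bandwidth * round-trip time,
    # e.g. ~1 Gb/s * 70 ms coast-to-coast -> on the order of 8 MB.
    BDP_BYTES = 8 * 1024 * 1024

    def tuned_tcp_socket():
        """Create a TCP socket with enlarged send/receive buffers,
        the per-application analogue of the host-wide tuning."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)
        return s

    if __name__ == "__main__":
        s = tuned_tcp_socket()
        # The OS may cap or adjust the values actually granted.
        print("send buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
        print("recv buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))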

Measurement Capabilities

• One-way latency, jitter, loss
  • IPv4 and IPv6
• Regular TCP/UDP throughput tests – ~1 Gbps
  • IPv4 and IPv6; on-demand tests available (see "pipes")
• SNMP (NOC) [octets, packets, errors; collected frequently]
  • NOC working on an SNMP proxy
• "Netflow" (ITEC Ohio) [anonymized by zeroing the last 11 bits of addresses – see the sketch after this list]
• Multicast beacon with historical data
• Routing data (BGP & IGP) [IGP collection under development]
  • Looking at Zebra + modifications; Japanese routing research is the driver
• (Optical splitter taps on backbone links at select location(s))
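
A small sketch of the anonymization step as I read it from the slide – clearing the low 11 bits of each IPv4 address in a flow record before it is shared; the actual ITEC Ohio processing may differ in detail.

    import ipaddress

    def anonymize_ipv4(addr: str) -> str:
        """Zero the last 11 bits of an IPv4 address, hiding the host
        (and part of the subnet) while keeping the routing prefix."""
        ip = int(ipaddress.IPv4Address(addr))
        masked = ip & ~((1 << 11) - 1)          # clear the low 11 bits
        return str(ipaddress.IPv4Address(masked))

    if __name__ == "__main__":
        # Example: 198.32.8.196 -> 198.32.8.0 (low 11 bits cleared)
        print(anonymize_ipv4("198.32.8.196"))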

Throughput Tester Evolution

• Original design had all four servers running the same operating system
  • Ease of administration
  • Lots of experience using FreeBSD for TCP testing
  • FreeBSD features for capturing UDP packets (router flow capture)
  • FreeBSD had the more respected NTP implementation
  • Linux has Web100 development and a large high-energy physics installed base
• FreeBSD 4.6-RELEASE deployed

Throughput Tester Evolution

• Over the last year, more TCP experience in the wide area (9000-byte MTU, GigE direct to the T-640)
  • DC to LA: 980 Mb/s UDP, but ~380 Mb/s TCP
  • NC-ITEC to LA: 990 Mb/s TCP if the sender runs a tuned, but otherwise stock, 2.4.20 Linux kernel
  • Replicated in testing from Indianapolis (same location, same connectivity, same hardware)
  • The FreeBSD sender's CPU is pegged; the TCP stack is apparently starved (PUSH set on all frames)

OWAMP

• One-way latency
  • Requires NTP on the endpoints (see the delay sketch after this list)
  • Control connection used to broker test requests based upon policy restrictions and available resources (bandwidth/disk limits)
  • Enables the combination of regularly scheduled tests with on-demand tests
  • http://owamp.internet2.edu/
  • Reference implementation of the draft: http://www.ietf.org/internet-drafts/draft-ietf-ippm-owdp-06.txt
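
One-way delay is the receive timestamp minus the send timestamp taken on two different clocks, which is why NTP on both endpoints matters; a toy illustration of the arithmetic (not the OWAMP wire protocol, whose details are in the draft above).

    from dataclasses import dataclass

    @dataclass
    class TestPacket:
        seq: int
        send_time: float   # sender's NTP-disciplined clock, seconds
        recv_time: float   # receiver's NTP-disciplined clock, seconds

    def one_way_delay_ms(pkt: TestPacket) -> float:
        """One-way delay in milliseconds; only meaningful when both
        endpoint clocks are synchronized (NTP / CDMA timing sources)."""
        return (pkt.recv_time - pkt.send_time) * 1000.0

    def lost_sequence_numbers(received_seqs, expected_count):
        """Loss is inferred from gaps in the sequence numbers the
        receiver saw, not from round-trip timeouts."""
        return sorted(set(range(expected_count)) - set(received_seqs))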

Abilene Observatory

• A program to provide enhanced support of computer science research over Abilene
  • Create a network data archive
    – Consists of many separate databases on a variety of servers
    – Forms a large correlated database (see the sketch after this list)
    – Create tools to access the database
    – Support from and for graduate programs
  • Create data views
    – A snapshot of Abilene
  • Collocation component – provision for direct network measurement and experimentation
    – Resources reserved for additional servers: power (DC), rack space (2RU), router uplink ports (GigE)
    – Initial deployment is PlanetLab
    – Additional requests from the AMP project and a research team from Japan
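
As a toy illustration of what a "correlated" archive enables, the sketch below joins two hypothetical feeds – per-link utilization samples and one-way latency samples – on one-minute timestamp buckets; the field layout is an assumption for illustration, not the archive's actual schema.

    from collections import defaultdict

    def bucket_by_minute(samples):
        """Group (unix_time, value) samples into per-minute buckets."""
        buckets = defaultdict(list)
        for ts, value in samples:
            buckets[int(ts // 60)].append(value)
        return buckets

    def correlate(utilization, latency):
        """Pair average utilization with average latency for every minute
        that appears in both feeds, ready for plotting or regression."""
        u, l = bucket_by_minute(utilization), bucket_by_minute(latency)
        joined = []
        for minute in sorted(u.keys() & l.keys()):
            joined.append((minute * 60,
                           sum(u[minute]) / len(u[minute]),
                           sum(l[minute]) / len(l[minute])))
        return joined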

Databases

• Local collection, centralized storage, but highly distributed
• Databases
  • Usage data
  • Netflow data
  • Routing data – BGP now, IS-IS in the future
  • Latency data
  • Throughput data
  • Router data
  • Syslog data

Databases

• Variety of interfaces to the data (a fetch sketch follows this list)
  • Simple web-based interface for usage data
  • rsync for Netflow data
  • Simple web-based interface for routing data
  • SOAP interface for latency data
  • Throughput data – undecided as yet
  • Router data – SOAP interface
  • Syslog data – undecided as yet
• Should this be standardized?
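
For illustration, a hedged sketch of pulling from two of these interfaces: only the weekly-summary URL appears in this deck (see the URLs slide); the rsync source path and local directory are placeholders a real feed recipient would get from ITEC Ohio.

    import subprocess
    import urllib.request

    # URL taken from the "URLs" slide; everything else below is hypothetical.
    WEEKLY_SUMMARY = "http://netflow.internet2.edu/weekly/"

    def fetch_weekly_summary():
        """Fetch the HTML index of the weekly flow summaries."""
        with urllib.request.urlopen(WEEKLY_SUMMARY) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def mirror_netflow(rsync_source, local_dir):
        """Mirror raw (anonymized) Netflow files with rsync.
        rsync_source is a placeholder; the real module name would come
        with the instructions for a requested feed."""
        subprocess.run(["rsync", "-avz", rsync_source, local_dir], check=True)

    if __name__ == "__main__":
        print(fetch_weekly_summary()[:200])
        # mirror_netflow("rsync://netflow.example.net/abilene/", "./netflow/")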

Input from Research Community

• Large, correlated, distributed databases?
• Other types of data?
• More importantly, other types of projects?
• Including networks closer to the edge – gigapops and universities?
• Involving the international community?
• Future infrastructures?
• Involving graduate programs – graduate students?

Quality Control

• New views on the way
  • Existing views at http://abilene.internet2.edu/observatory/data-views.html
• Single page to show the '10 worst' measurements (see the ranking sketch after this list)
  • Throughput data
  • Losses
  • Variation in latency (95th percentile – minimum) for links
  • Maybe top utilization measurements too, for comparison
• Expect to iterate until we find the "best" presentation for us
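
A minimal sketch of ranking links for the '10 worst' page by latency variation (95th percentile minus minimum), assuming per-link lists of one-way delays in milliseconds; the link names and sample values are made up for illustration.

    import math

    def latency_variation_ms(delays):
        """Delay variation for one link: 95th-percentile delay minus minimum
        (nearest-rank percentile over the sample list)."""
        ordered = sorted(delays)
        idx = max(0, math.ceil(0.95 * len(ordered)) - 1)
        return ordered[idx] - ordered[0]

    def ten_worst(link_delays):
        """Rank links by latency variation and return the worst ten.
        link_delays maps a link name to a list of one-way delays in ms."""
        scored = [(latency_variation_ms(d), link) for link, d in link_delays.items()]
        return sorted(scored, reverse=True)[:10]

    if __name__ == "__main__":
        # Hypothetical sample data for three backbone links.
        sample = {
            "IPLS-CHIN": [10.1, 10.2, 10.1, 10.3, 12.9],
            "DNVR-KSCY": [8.0, 8.1, 8.0, 8.0, 8.1],
            "HSTN-ATLA": [15.0, 15.2, 19.8, 15.1, 15.0],
        }
        for variation, link in ten_worst(sample):
            print(f"{link}: {variation:.1f} ms")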

URLs

• http://abilene.internet2.edu/observatory
  • Pointers to all measurements/sites/projects
• http://www.abilene.iu.edu/
  • NOC home page: weathermap, proxy, SNMP measurements
• http://netflow.internet2.edu/weekly/
  • Summarized flow data
• http://www.itec.oar.net/abilene-netflow/
  • "Raw" data – matrices; (anonymized) feeds available on request
