Archive for October, 2011

Estimating CPU Cost of BGPsec on a Router

My presentation, with Kotikalapudi Sriram, given at Cisco NAG, of the first results from modeling the signing and validation processor costs of BGPsec.

My take-away:

  • You may well be able to do initial deployment of path validation using current high-end routers, and even some almost-high-end routers.
  • As we deploy, Cisco at least looks likely to stay ahead of our CPU needs. The ISP W in my slides will have to move up if they intend to keep their current BGP peer density, but there will be something to which they can move.

draft-wkumari-idr-as0

A trivial new Internet-Draft to deal with the issue that AS number zero was never formally proscribed.

    This document specifies that a BGP speaker MUST NOT originate or
    propagate an announcement with an AS number of zero, and a BGP
    listener MUST NOT accept an announcement which has an AS number of
    zero in the AS-PATH attribute, and SHOULD log the fact that it has
    done so.

    In addition, if a BGP listener receives zero as the peer AS in an OPEN
    message, it MUST abort the connection and send a NOTIFICATION with
    Error Code “OPEN Message Error” and subcode “Bad Peer AS” (see
    [RFC4271] Section 6.2). Obviously enough, a router MUST NOT
    initiate a connection claiming to be AS number zero.
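The two rules are simple enough to sketch in a few lines. This is an illustrative sketch, not code from any real BGP implementation; the function names and the stubbed NOTIFICATION sender are mine. The error code and subcode values come from RFC 4271 Section 6.2.

```python
# Hypothetical sketch of a BGP speaker enforcing the AS-zero rules
# from draft-wkumari-idr-as0. Names and structure are illustrative.

AS_ZERO = 0

def send_notification(error_code, subcode):
    # Stub: a real implementation would emit a BGP NOTIFICATION message
    # on the wire before closing the session.
    print(f"NOTIFICATION: error code {error_code}, subcode {subcode}")

def accept_update(as_path):
    """Reject, and log, any announcement with AS 0 in its AS_PATH."""
    if AS_ZERO in as_path:
        print(f"rejected announcement, AS 0 in AS_PATH: {as_path}")
        return False
    return True

def accept_open(peer_as):
    """On an OPEN claiming peer AS 0, send a NOTIFICATION and abort.

    Error Code 2 ("OPEN Message Error"), subcode 2 ("Bad Peer AS"),
    per RFC 4271 Section 6.2.
    """
    if peer_as == AS_ZERO:
        send_notification(error_code=2, subcode=2)
        return False  # caller tears down the connection
    return True

# Example: an announcement carrying AS 0 is dropped, a normal OPEN passes.
accept_update([64496, 0, 64511])
accept_open(64496)
```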


10 Lessons from 10 Years of Measuring and Modeling the Internet’s Autonomous Systems

Our paper should be out about now.

M. Roughan, W. Willinger, O. Maennel, D. Perouli, and R. Bush, 10 Lessons from 10 Years of Measuring and Modeling the Internet’s Autonomous Systems, IEEE Journal on Selected Areas in Communications, Vol. 29, No. 9, pp. 1-12, Oct. 2011.

From the introduction:

1) The notion of “inter-domain topology of the Internet” is ambiguous, at best, without more precise definitions of terms than typically provided.

2) The commonly-used practice of abstracting ASes to generic atomic nodes without any internal structure is an over-simplification that severely limits our ability to capture critical features associated with real-world ASes such as route diversity, policy diversity, or multi-connectivity.

3) The traditional approach of modeling the AS-level Internet as a simple connected digraph is an abstraction incapable of capturing important facets of the rich semantics of real-world inter-AS relationships, including different interconnections for different policies and/or different interconnection points. The implications of such abstractions need to be recognized before attributing network-specific meaning to findings derived from the resulting models.

4) The BGP routing data that projects like RouteViews or RIPE RIS have collected and made publicly available are of enormous practical value for network operators, but were never meant to be used for inferring or mapping the AS-level connectivity of the Internet. The main reason for this is that BGP was not designed with AS-level topology discovery/mapping in mind; instead, BGP’s purpose is to enable ASes to express and realize their routing policies without revealing AS-internal features and, to achieve this goal in a scalable manner, BGP has to hide information that would otherwise aid topology discovery.

5) The traceroute data that projects like Ark (CAIDA), DIMES, or iPlane have collected and made publicly available have been a boon to network researchers, but are inherently limited for faithfully inferring or mapping the AS-level connectivity of the Internet. The main reason for this is that traceroute was not designed with Internet topology discovery/mapping in mind; instead, it is a diagnostic tool for tracking the route or path (and measuring transit delays) of one’s packets to some host, and to achieve this diagnostic task, traceroute can ignore issues (e.g., interface aliasing) that would need to be solved first were topology discovery its stated objective.

6) Significant additional efforts are required before current models of the Internet’s inter-domain topology derived from the publicly available and widely-used measurement data can purposefully be used to study the performance of new routing protocols and/or perform meaningful simulation studies. At a minimum, such studies need to be accompanied by strong robustness results that demonstrate the insensitivity of reported claims to model variations that attempt to address or remediate some of the known shortcomings of the underlying models or data.

7) When examining the vulnerability of the Internet to various types of real-world threats or studying the Internet as a critical infrastructure, it is in general inappropriate to equate the Internet with a measured AS topology. In fact, meaningful investigations of most vulnerability-related aspects of the Internet typically require taking a more holistic approach to Internet connectivity, accounting for details of the physical infrastructure, of how physical connectivity maps to various types of more virtual connectivity, of protocol-specific features, and of traffic-related aspects that manifest themselves at the different connectivity structures.

8) While there is a valid role for “observational” studies of the Internet’s Autonomous Systems, the results of such studies are in general hard to interpret. A more promising method involves performing controlled experiments that allow one to discriminate among alternative explanations for results and prevent the effects of one confounding factor from drowning out the effects of others.

9) Studies which start with a definite application and proceed to collect the best data available for that application have shown a much higher rate of success than “fishing expeditions”; that is, studies that target datasets collected by third parties and analyze them for the sake of analysis.

10) In an environment like the Internet where high-variability phenomena are the rule rather than the exception, and where the quality of the data cannot be taken for granted, it is paramount to apply data-analytic methods that have strong robustness properties to the known deficiencies in the observations and naturally account for the presence of extreme values in the data.
