
Topics for the Seminar on Internet Measurement, SS 2018


Beyond the Radio: Illuminating the Higher Layers of Mobile Networks (MobiSys 2015)

Cellular network performance is often viewed as primarily dominated by the radio technology. However, reality proves more complex: mobile operators deploy and configure their networks in different ways, and sometimes establish network sharing agreements with other mobile carriers. Moreover, regulators have encouraged newer operational models such as Mobile Virtual Network Operators (MVNOs) to promote competition. In this paper we draw upon data collected by the ICSI Netalyzr app for Android to characterize how operational decisions, such as network configurations, business models, and relationships between operators introduce diversity in service quality and affect user security and privacy. We delve in detail beyond the radio link and into network configuration and business relationships in six countries. We identify the widespread use of transparent middleboxes such as HTTP and DNS proxies, analyzing how they actively modify user traffic, compromise user privacy, and potentially undermine user security. In addition, we identify network sharing agreements between operators, highlighting the implications of roaming and characterizing the properties of MVNOs, including that a majority are simply rebranded versions of major operators. More broadly, our findings highlight the importance of considering higher-layer relationships when seeking to analyze mobile traffic in a sound fashion.



Characterizing and Improving WiFi Latency in Large-Scale Operational Networks (Mobisys 2016)

WiFi latency is a key factor impacting the user experience of modern mobile applications, but it has not been well studied at large scale. In this paper, we design and deploy WiFiSeer, a framework to measure and characterize WiFi latency at large scale. WiFiSeer comprises a systematic methodology for modeling the complex relationships between WiFi latency and a diverse set of WiFi performance metrics, device characteristics, and environmental factors. WiFiSeer was deployed on Tsinghua campus to conduct a WiFi latency measurement study of unprecedented scale with more than 47,000 unique user devices. We observe that WiFi latency follows a long-tail distribution and the 90th (99th) percentile is around 20 ms (250 ms). Furthermore, our measurement results quantitatively confirm some anecdotal perceptions about impacting factors and disprove others. We deploy three practical solutions for improving WiFi latency in Tsinghua, and the results show significantly improved WiFi latencies. In particular, over 1,000 devices use our AP selection service based on a predictive WiFi latency model for 2.5 months, and 72% of their latencies are reduced by over half after they re-associate to the suggested APs.



What Happens After You Are Pwnd: Understanding the Use of Leaked Webmail Credentials in the Wild (IMC2016)

Cybercriminals steal access credentials to webmail accounts and then misuse them for their own profit, release them publicly, or sell them on the underground market. Despite the importance of this problem, the research community still lacks a comprehensive understanding of what these stolen accounts are used for. In this paper, we aim to shed light on the modus operandi of miscreants accessing stolen Gmail accounts. We developed an infrastructure that is able to monitor the activity performed by users on Gmail accounts, and leaked credentials to 100 accounts under our control through various means, such as having information-stealing malware capture them, leaking them on public paste sites, and posting them on underground forums. We then monitored the activity recorded on these accounts over a period of 7 months. Our observations allowed us to devise a taxonomy of malicious activity performed on stolen Gmail accounts, to identify differences in the behavior of cybercriminals that get access to stolen accounts through different means, and to identify systematic attempts to evade the protection systems in place at Gmail and blend in with the legitimate user activity. This paper gives the research community a better understanding of a so far understudied, yet critical aspect of the cybercrime economy.



Beyond Counting: New Perspectives on the Active IPv4 Address Space (IMC2016)

In this study, we report on techniques and analyses that enable us to capture Internet-wide activity at individual IP address-level granularity by relying on server logs of a large commercial content delivery network (CDN) that serves close to 3 trillion HTTP requests on a daily basis. Across the whole of 2015, these logs recorded client activity involving 1.2 billion unique IPv4 addresses, the highest ever measured, in agreement with recent estimates. Monthly client IPv4 address counts showed constant growth for years prior, but since 2014, the IPv4 count has stagnated while IPv6 counts have grown. Thus, it seems we have entered an era marked by increased complexity, one in which the sole enumeration of active IPv4 addresses is of little use to characterize recent growth of the Internet as a whole.


With this observation in mind, we consider new points of view in the study of global IPv4 address activity. First, our analysis shows significant churn in active IPv4 addresses: the set of active IPv4 addresses varies by as much as 25% over the course of a year. Second, by looking across the active addresses in a prefix, we are able to identify and attribute activity patterns to network restructurings, user behaviors, and, in particular, various address assignment practices. Third, by combining spatio-temporal measures of address utilization with measures of traffic volume, and sampling-based estimates of relative host counts, we present novel perspectives on worldwide IPv4 address activity, including empirical observation of under-utilization in some areas, and complete utilization, or exhaustion, in others.
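The 25% churn figure can be made concrete with a small set-based metric. This is a hypothetical formalization for illustration (symmetric difference over the union of two snapshots), not necessarily the exact definition used in the paper:

```python
def churn(set_a, set_b):
    """Fraction of addresses active in exactly one of two measurement
    periods, relative to the union of both active sets."""
    union = set_a | set_b
    if not union:
        return 0.0
    return len(set_a ^ set_b) / len(union)

# Toy example: two monthly snapshots of active addresses.
jan = {"192.0.2.1", "192.0.2.2", "192.0.2.3", "192.0.2.4"}
feb = {"192.0.2.2", "192.0.2.3", "192.0.2.4", "192.0.2.9"}
# churn(jan, feb) -> 0.4: two of five distinct addresses changed status.
```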


Entropy/IP: Uncovering Structure in IPv6 Addresses (IMC2016)

In this paper, we introduce Entropy/IP: a system that discovers Internet address structure based on analyses of a subset of IPv6 addresses known to be active, i.e., training data, gleaned by readily available passive and active means. The system is completely automated and employs a combination of information-theoretic and machine learning techniques to probabilistically model IPv6 addresses. We present results showing that our system is effective in exposing structural characteristics of portions of the active IPv6 Internet address space, populated by clients, services, and routers.


In addition to visualizing the address structure for exploration, the system uses its models to generate candidate addresses for scanning. For each of 15 evaluated datasets, we train on 1K addresses and generate 1M candidates for scanning. We achieve some success in 14 datasets, finding up to 40% of the generated addresses to be active. In 11 of these datasets, we find active network identifiers (e.g., /64 prefixes or "subnets") not seen in training. Thus, we provide the first evidence that it is practical to discover subnets and hosts by scanning probabilistically selected areas of the IPv6 address space not known to contain active hosts a priori.
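One information-theoretic building block behind a system like Entropy/IP is the per-position entropy of the hex digits (nibbles) of expanded IPv6 addresses: low-entropy positions reveal fixed structure, high-entropy positions reveal variability. The sketch below shows only this piece, on a made-up toy corpus; the full system additionally learns a probabilistic model across segments:

```python
import math
from collections import Counter

def nibble_entropy(addresses, position):
    """Shannon entropy (bits) of the hex digit at `position` across a
    corpus of fully expanded 32-nibble IPv6 addresses (colons removed)."""
    counts = Counter(addr[position] for addr in addresses)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy corpus: a fixed 2001:db8::/32 prefix and a varying final nibble.
addrs = [
    "20010db8000000000000000000000001",
    "20010db8000000000000000000000002",
    "20010db8000000000000000000000003",
    "20010db8000000000000000000000004",
]
# Position 0 is constant ('2') -> 0 bits; position 31 takes four
# equally likely values -> 2 bits.
```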



Anycast vs. DDoS: Evaluating the November 2015 Root DNS Event (IMC2016)

Distributed Denial-of-Service (DDoS) attacks continue to be a major threat on the Internet today. DDoS attacks overwhelm target services with requests or other traffic, causing requests from legitimate users to be shut out. A common defense against DDoS is to replicate a service in multiple physical locations/sites. If all sites announce a common prefix, BGP will associate users around the Internet with a nearby site, defining the catchment of that site. Anycast defends against DDoS both by increasing aggregate capacity across many sites, and allowing each site's catchment to contain attack traffic, leaving other sites unaffected. IP anycast is widely used by commercial CDNs and for essential infrastructure such as DNS, but there is little evaluation of anycast under stress. This paper provides the first evaluation of several IP anycast services under stress with public data. Our subject is the Internet's Root Domain Name Service, made up of 13 independently designed services ("letters", 11 with IP anycast) running at more than 500 sites. Many of these services were stressed by sustained traffic at 100× normal load on Nov. 30 and Dec. 1, 2015. We use public data for most of our analysis to examine how different services respond to stress, and identify two policies: sites may absorb attack traffic, containing the damage but reducing service to some users, or they may withdraw routes to shift both good and bad traffic to other sites. We study how these deployment policies resulted in different levels of service to different users during the events. We also show evidence of collateral damage on other services located near the attacks.



BGPStream: A Software Framework for Live and Historical BGP Data Analysis (IMC2016)

We present BGPStream, an open-source software framework for the analysis of both historical and real-time Border Gateway Protocol (BGP) measurement data. Although BGP is a crucial operational component of the Internet infrastructure, and is the subject of research in the areas of Internet performance, security, topology, protocols, economics, etc., there is no efficient way of processing large amounts of distributed and/or live BGP measurement data. BGPStream fills this gap, enabling efficient investigation of events, rapid prototyping, and building complex tools and large-scale monitoring applications (e.g., detection of connectivity disruptions or BGP hijacking attacks). We discuss the goals and architecture of BGPStream. We apply the components of the framework to different scenarios, and we describe the development and deployment of complex services for global Internet monitoring that we built on top of it.



Detecting Unusually-Routed ASes: Methods and Applications (IMC2016)

The routes used in the Internet's interdomain routing system are a rich information source that could be exploited to answer a wide range of questions. However, analyzing routes is difficult, because the fundamental object of study is a set of paths. In this paper we present new analysis tools -- metrics and methods -- for analyzing AS paths, and apply them to study interdomain routing in the Internet over a recent 13-year period. Our goal is to develop a quantitative understanding of changes in Internet routing at the micro level (of individual ASes) as well as at the macro level (of the set of all ASes). To that end we equip an existing metric (Routing State Distance) with a new set of tools for identifying and characterizing unusually-routed ASes. At the micro level, we use our tools to identify clusters of ASes that have the most unusual routing at each time (interestingly, such clusters often correspond to sets of jointly-owned ASes). We also show that analysis of individual ASes can expose business and engineering strategies of the organizations owning the ASes. These strategies are often related to content delivery or service replication. At the macro level, we show that ASes with the most unusual routing define discernible and interpretable phases of the Internet's evolution. Furthermore, we show that our tools can be used to provide a quantitative measure of the "flattening" of the Internet.



Measuring What is Not Ours: A Tale of 3rd Party Performance (PAM2017)

Content Providers make use of so-called 3rd Party (3P) services to attract large user bases to their websites, track user activities and interests, or to serve advertisements. In this paper, we perform an extensive investigation of how much such 3Ps impact Web performance in mobile and wired last-mile networks. We develop a new Web performance metric, the 3rd Party Trailing Ratio, to represent the fraction of the critical path of the webpage load process that comprises only 3P downloads. Our results show that 3Ps inflate the webpage load time (PLT) by as much as 50% in the extreme case. Using URL rewriting to redirect the downloads of 3P assets onto 1st Party infrastructure, we demonstrate speedups in PLTs by as much as 25%.
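The metric can be illustrated with a simplified sketch: given the segments of the page-load critical path, each tagged as 1st- or 3rd-party, the ratio is the 3P share of critical-path time. This is a toy formalization with made-up numbers, not the paper's full critical-path extraction:

```python
def third_party_trailing_ratio(critical_path):
    """critical_path: list of (duration_ms, is_third_party) segments
    along the page-load critical path. Returns the fraction of
    critical-path time spent exclusively on 3rd-party downloads."""
    total = sum(d for d, _ in critical_path)
    third = sum(d for d, is_3p in critical_path if is_3p)
    return third / total if total else 0.0

# Hypothetical critical path: HTML (1P), ad script (3P), CSS/JS (1P),
# tracker (3P).
path = [(300, False), (150, True), (250, False), (100, True)]
# -> 250 ms of an 800 ms critical path is 3P-only: ratio 0.3125.
```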



Anycast Latency: How Many Sites Are Enough? (PAM2017)

Anycast is widely used today to provide important services including naming and content, with DNS and Content Delivery Networks (CDNs). An anycast service uses multiple sites to provide high availability, capacity and redundancy, with BGP routing associating users to nearby anycast sites. Routing defines the catchment of the users that each site serves. Although prior work has studied how users associate with anycast services informally, in this paper we examine the key question of how many anycast sites are needed to provide good latency, and the worst-case latencies that specific deployments see. To answer this question, we must first define the optimal performance that is possible, then explore how routing, specific anycast policies, and site location affect performance. We develop a new method capable of determining optimal performance and use it to study four real-world anycast services operated by different organizations: C-, F-, K-, and L-Root, each part of the Root DNS service. We measure their performance from a large set of worldwide vantage points (VPs) in RIPE Atlas. (Given the VPs' uneven geographic distribution, we evaluate and control for potential bias.) Key results of our study are to show that a few sites can provide performance nearly as good as many, and that geographic location and good connectivity have a far stronger effect on latency than having many nodes. We show how often users see the closest anycast site, and how strongly routing policy affects site selection.



Application Bandwidth and Flow Rates from 3 Trillion Flows Across 45 Carrier Networks (PAM2017)

Geographically broad, application-aware studies of large subscriber networks are rarely undertaken because of the challenges of accessing secured network premises, protecting subscriber privacy, and deploying scalable measurement devices. We present a study examining bandwidth consumption and the rate at which new flows are created in 45 cable, DSL, cellular and WiFi subscriber networks across 26 countries on six continents. Using deep packet inspection, we find that one or two applications can strongly influence the magnitude and duration of daily bandwidth peaks. We analyze bandwidth over 7 days to better understand the potential for network optimization using virtual network functions. We find that on average cellular and non-cellular networks operate at 61% and 57% of peak bandwidth respectively. Since most networks are over-provisioned, there is considerable room for optimization.


Our study of flow creation reveals that DNS is the top producer of new flows in 22 of the 45 networks (accounting for 20–61% of new flows in those networks). We find that peak flow rates (measured in thousands of flows per Gigabit) can vary by several orders of magnitude across applications. Networks whose application mix includes large proportions of DNS, PeerToPeer, and social networking traffic can expect to experience higher overall peak flow rates. Conversely, networks which are dominated by video can expect lower peak flow rates. We believe that these findings will prove valuable in understanding how traffic characteristics can impact the design, evaluation, and deployment of modern networking devices, including virtual network functions.
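The "flows per Gigabit" normalization mentioned above divides the flow-creation rate by the traffic volume carried, which is what lets a DNS-heavy and a video-heavy network be compared at the same bandwidth. A minimal sketch with hypothetical numbers (the values below are illustrative, not the paper's measurements):

```python
def flows_per_gigabit(new_flows_per_sec, bandwidth_gbps):
    """Normalize the flow-creation rate by carried bandwidth,
    yielding new flows per Gigabit of traffic."""
    return new_flows_per_sec / bandwidth_gbps

# Hypothetical: a DNS-heavy network creates many short flows, while a
# video-dominated network moves the same bandwidth in few large flows.
dns_heavy = flows_per_gigabit(50_000, 10.0)    # 5,000 flows per Gigabit
video_heavy = flows_per_gigabit(2_000, 10.0)   # 200 flows per Gigabit
```

At equal bandwidth, the application mix alone changes the per-Gigabit flow rate by more than an order of magnitude, which is why flow-table sizing for network devices depends on the traffic mix, not just link capacity.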



An Internet-Wide Analysis of Traffic Policing (SIGCOMM16)

Large flows like video streams consume significant bandwidth. Some ISPs actively manage these high-volume flows with techniques like policing, which enforces a flow rate by dropping excess traffic. While the existence of policing is well known, our contribution is an Internet-wide study quantifying its prevalence and impact on transport-level and video-quality metrics. We developed a heuristic to identify policing from server-side traces and built a pipeline to process traces at scale collected from hundreds of Google servers worldwide. Using a dataset of 270 billion packets served to 28,400 client ASes, we find that, depending on region, up to 7% of connections are identified to be policed. Loss rates are on average 6× higher when a trace is policed, and it impacts video playback quality. We show that alternatives to policing, like pacing and shaping, can achieve traffic management goals while avoiding the deleterious effects of policing.
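The policing mechanism itself — enforcing a rate by dropping (not queuing) excess traffic — is typically a token bucket. The sketch below simulates that mechanism so the loss pattern is visible; it is not the paper's server-side detection heuristic, and all parameter values are made up:

```python
def police(packets, rate_bps, burst_bytes):
    """Simulate a token-bucket policer. `packets` is a list of
    (arrival_time_s, size_bytes). Tokens refill at `rate_bps`; a packet
    larger than the available tokens is dropped outright.
    Returns one boolean per packet: True = forwarded, False = dropped."""
    tokens = burst_bytes
    last = 0.0
    verdicts = []
    for t, size in packets:
        # Refill tokens for the elapsed time, capped at the burst size.
        tokens = min(burst_bytes, tokens + (t - last) * rate_bps / 8)
        last = t
        if size <= tokens:
            tokens -= size
            verdicts.append(True)
        else:
            verdicts.append(False)   # excess traffic is lost, not queued
    return verdicts

# 1500-byte packets every 1 ms = 12 Mbps offered load, policed at
# 6 Mbps with a small burst: after the burst drains, every other
# packet is dropped.
pkts = [(i * 0.001, 1500) for i in range(10)]
verdicts = police(pkts, rate_bps=6_000_000, burst_bytes=4000)
```

The alternating drop pattern after the burst allowance is exhausted illustrates why policing inflates loss rates compared with shaping, which would delay those packets instead.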



Neutral Net Neutrality (SIGCOMM16)

Should applications receive special treatment from the network? And if so, who decides which applications are preferred? This discussion, known as net neutrality, goes beyond technology and is a hot political topic. In this paper we approach net neutrality from a user’s perspective. Through user studies, we demonstrate that users do indeed want some services to receive preferential treatment, and their preferences have a heavy tail: a one-size-fits-all approach is unlikely to work. This suggests that users should be able to decide how their traffic is treated. A crucial part of enabling user preferences is the mechanism to express them. To this end, we present network cookies, a general mechanism to express user preferences to the network. Using cookies, we prototype Boost, a user-defined fast-lane, and deploy it in 161 homes.



The Dark Menace: Characterizing Network-based Attacks in the Cloud (IMC2015)

As the cloud computing market continues to grow, the cloud platform is becoming an attractive target for attackers to disrupt services and steal data, and to compromise resources to launch attacks. In this paper, using three months of NetFlow data in 2013 from a large cloud provider, we present the first large-scale characterization of inbound attacks towards the cloud and outbound attacks from the cloud. We investigate nine types of attacks ranging from network-level attacks such as DDoS to application-level attacks such as SQL injection and spam. Our analysis covers the complexity, intensity, duration, and distribution of these attacks, highlighting the key challenges in defending against attacks in the cloud. By characterizing the diversity of cloud attacks, we aim to motivate the research community towards developing future security solutions for cloud systems.



Evolve or Die: High-Availability Design Principles Drawn from Google’s Network Infrastructure (SIGCOMM2016)

Maintaining the highest levels of availability for content providers is challenging in the face of scale, network evolution, and complexity. Little, however, is known about the network failures large content providers are susceptible to, and what mechanisms they employ to ensure high availability. From a detailed analysis of over 100 high-impact failure events within Google’s network, encompassing many data centers and two WANs, we quantify several dimensions of availability failures. We find that failures are evenly distributed across different network types and across data, control, and management planes, but that a large number of failures happen when a network management operation is in progress within the network. We discuss some of these failures in detail, and also describe our design principles for high availability motivated by these failures. These include using defense in depth, maintaining consistency across planes, failing open on large failures, carefully preventing and avoiding failures, and assessing root cause quickly. Our findings suggest that, as networks become more complicated, failures lurk everywhere, and, counter-intuitively, continuous incremental evolution of the network can, when applied together with our design principles, result in a more robust network.



Home Network or Access Link? Locating Last-Mile Downstream Throughput Bottlenecks (PAM2016)

As home networks see increasingly faster downstream throughput speeds, a natural question is whether users are benefiting from these faster speeds or simply facing performance bottlenecks in their own home networks. In this paper, we ask whether downstream throughput bottlenecks occur more frequently in their home networks or in their access ISPs. We identify lightweight metrics that can accurately identify whether a throughput bottleneck lies inside or outside a user’s home network and develop a detection algorithm that locates these bottlenecks. We validate this algorithm in controlled settings and report on two deployments, one of which included 2,652 homes across the United States. We find that wireless bottlenecks are more common than access link bottlenecks — particularly for home networks with downstream throughput greater than 20 Mbps, where access-link bottlenecks are relatively rare.
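The core decision the paper's algorithm makes can be sketched as a toy classifier: compare the throughput the wireless hop can sustain against the access-link throughput, and attribute the bottleneck to whichever is clearly lower. This is a simplified illustration, not the paper's actual detection algorithm, and the noise `margin` is an assumed value:

```python
def locate_bottleneck(wireless_mbps, access_link_mbps, margin=1.1):
    """Toy last-mile bottleneck classifier: if the home wireless hop
    sustains clearly less throughput than the access link, the
    bottleneck is inside the home; if the reverse, it is the access
    link; otherwise the measurements are too close to call.
    `margin` guards against measurement noise (assumed value)."""
    if access_link_mbps > wireless_mbps * margin:
        return "home-wireless"
    if wireless_mbps > access_link_mbps * margin:
        return "access-link"
    return "ambiguous"

# A 50 Mbps access link behind a 15 Mbps wireless hop is bottlenecked
# in the home -- the situation the paper found common above 20 Mbps.
```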



Layer 1-Informed Internet Topology Measurement (IMC2014)

Understanding the Internet’s topological structure continues to be fraught with challenges. In this paper, we investigate the hypothesis that physical maps of service provider infrastructure can be used to effectively guide topology discovery based on network layer TTL-limited measurement. The goal of our work is to focus layer 3-based probing on broadly identifying Internet infrastructure that has a fixed geographic location such as POPs, IXPs and other kinds of hosting facilities. We begin by comparing more than 1.5 years of TTL-limited probe data from the Ark [25] project with maps of service provider infrastructure from the Internet Atlas [15] project. We find that there are substantially more nodes and links identified in the service provider map data versus the probe data. Next, we describe a new method for probe-based measurement of physical infrastructure called POPsicle that is based on careful selection of probe source-destination pairs. We demonstrate the capability of our method through an extensive measurement study using existing “looking glass” vantage points distributed throughout the Internet and show that it reveals 2.4 times more physical node locations versus standard probing methods. To demonstrate the deployability of POPsicle we also conduct tests at an IXP. Our results again show that POPsicle can identify more physical node locations compared with standard layer 3 probes, and through this deployment approach it can be used to measure thousands of networks worldwide.



Remote Peering: More Peering without Internet Flattening (CoNEXT14)

The trend toward more peering between networks is commonly conflated with the trend of Internet flattening, i.e., reduction in the number of intermediary organizations on Internet paths. Indeed, direct peering interconnections bypass layer-3 transit providers and make the Internet flatter. This paper studies an emerging phenomenon that separates the two trends: we present the first systematic study of remote peering, an interconnection where remote networks peer via a layer-2 provider. Our measurements reveal significant presence of remote peering at IXPs (Internet eXchange Points) worldwide. Based on ground truth traffic, we also show that remote peering has a substantial potential to offload transit traffic. Generalizing the empirical results, we analytically derive conditions for economic viability of remote peering versus transit and direct peering. Because remote-peering services are provided on layer 2, our results challenge the traditional reliance on layer-3 topologies in modeling the Internet economic structure. We also discuss broader implications of remote peering for reliability, security, accountability, and other aspects of Internet research.



When the Internet Sleeps: Correlating Diurnal Networks With External Factors (IMC2014)

As the Internet matures, policy questions loom larger in its operation. When should an ISP, city, or government invest in infrastructure? How do their policies affect use? In this work, we develop a new approach to evaluate how policies, economic conditions and technology correlate with Internet use around the world. First, we develop an adaptive and accurate approach to estimate block availability, the fraction of active IP addresses in each /24 block over short timescales (every 11 minutes). Our estimator provides a new lens to interpret data taken from existing long-term outage measurements, thus requiring no additional traffic. (If new collection were required, it would be lightweight, since on average, outage detection requires less than 20 probes per hour per /24 block; less than 1% of background radiation.) Second, we show that spectral analysis of this measure can identify diurnal usage: blocks where addresses are regularly used during part of the day and idle in other times. Finally, we analyze data for the entire responsive Internet (3.7M /24 blocks) over 35 days. These global observations show when and where the Internet sleeps—networks are mostly always-on in the US and Western Europe, and diurnal in much of Asia, South America, and Eastern Europe. ANOVA (Analysis of Variance) testing shows that diurnal networks correlate negatively with country GDP and electrical consumption, quantifying that national policies and economics relate to networks.
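The spectral-analysis step amounts to asking how much of a block's availability signal sits at one cycle per day. A minimal sketch, assuming hourly availability samples over whole days, computes a single DFT bin at the daily frequency and compares it to the total signal energy (the paper's pipeline is more involved):

```python
import math

def diurnal_strength(samples):
    """Fraction of the mean-removed signal energy at one cycle per day.
    Assumes hourly availability samples covering whole days."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]
    k = n // 24  # DFT bin index for one cycle per day (hourly samples)
    re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
    im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
    power_k = re * re + im * im
    # Total energy, scaled so a pure daily sinusoid scores 1.0
    # (the conjugate bin n-k carries the other half of its energy).
    total = sum(v * v for v in x) * n / 2
    return power_k / total if total else 0.0

# A block that sleeps nightly scores near 1.0; an always-on block
# (constant availability) scores 0.0.
```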




NA: Internet Measurement Sem.
Lecturer: Anja Feldmann

18.04.2018 to 18.07.2018

Wed 16:00 - 18:00

Location: MAR 4.064