
Topics for the Seminar on Internet Measurement, SS 2016


Topology Preserving Maps—Extracting Layout Maps of Wireless Sensor Networks From Virtual Coordinates

A method for obtaining topology-preserving maps (TPMs) from virtual coordinates (VCs) of wireless sensor networks is presented. In a virtual coordinate system (VCS), a node is identified by a vector containing its distances, in hops, to a small subset of nodes called anchors. Layout information such as physical voids, shape, and even relative physical positions of sensor nodes with respect to directions is absent in a VCS description. The proposed technique uses Singular Value Decomposition (SVD) to isolate dominant radial information and to extract topological information from the VCS for networks deployed on 2-D/3-D surfaces and in 3-D volumes. The transformation required for TPM extraction can be generated using the coordinates of a subset of nodes, resulting in sensor-network-friendly implementation alternatives. TPMs of networks representing a variety of topologies are extracted. Topology preservation error, a metric that accounts for both the number and degree of node flips, is defined and used to evaluate 2-D TPMs. The techniques extract TPMs with topology preservation errors of less than 2%. Topology coordinates provide an economical alternative to physical coordinates for many sensor networking algorithms.
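As a rough illustration of the SVD step, here is a minimal Python sketch that projects a virtual coordinate matrix onto its second and third singular vectors to obtain 2-D topology coordinates. The assumption that the first singular vector carries the dominant radial component follows the abstract's description, not the paper's exact construction; the matrix data and function name are hypothetical.

```python
import numpy as np

def topology_preserving_map(vcs):
    """Extract 2-D topology coordinates from a virtual coordinate
    matrix (N nodes x M anchors, entries are hop distances).
    Assumption: the 1st singular vector captures the dominant
    radial information, so the 2nd and 3rd yield the layout."""
    U, S, _ = np.linalg.svd(vcs, full_matrices=False)
    return U[:, 1:3] * S[1:3]  # N x 2 topology coordinates

# Hypothetical 5-node network with 3 anchors:
vcs = np.array([[0., 2., 3.],
                [2., 0., 2.],
                [3., 2., 0.],
                [1., 1., 2.],
                [2., 1., 1.]])
print(topology_preserving_map(vcs))
```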

ieeexplore.ieee.org/iel7/90/4359146/06542699.pdf

DTRACK: A System to Predict and Track Internet Path Changes

In this paper, we implement and evaluate a system that predicts and tracks Internet path changes to maintain an up-to-date network topology. Based on empirical observations, we claim that monitors can enhance probing according to the likelihood of path changes. We design a simple predictor of path changes and show that it can be used to enhance probe targeting. Our path tracking system, called DTRACK, focuses probes on unstable paths and spreads probes over time to minimize the chances of missing path changes. Our evaluations of DTRACK with trace-driven simulations and with a prototype show that DTRACK can detect up to three times more path changes than traditional traceroute-based topology mapping techniques.
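The probe allocation idea can be sketched in a few lines; the change-rate field below is a toy stand-in for the paper's actual predictor, and all names and numbers are illustrative.

```python
def allocate_probes(paths, budget_per_hour):
    """Spread a fixed probing budget across paths in proportion to
    each path's estimated likelihood of change (a stand-in for
    DTRACK's real predictor)."""
    total = sum(p["change_rate"] for p in paths) or 1.0
    for p in paths:
        p["probes_per_hour"] = budget_per_hour * p["change_rate"] / total
    return paths

paths = [{"dst": "198.51.100.7", "change_rate": 0.9},   # unstable path
         {"dst": "203.0.113.2",  "change_rate": 0.1}]   # stable path
for p in allocate_probes(paths, budget_per_hour=100):
    print(p["dst"], p["probes_per_hour"])  # 90.0 vs 10.0 probes/hour
```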


homepages.dcc.ufmg.br/~cunha/papers/cunha14ton-dtrack.pdf

Effects of internet path selection on video-QoE: analysis and improvements

This paper presents large-scale Internet measurements to understand and improve the effects of Internet path selection on perceived video quality, or quality of experience (QoE). We systematically study a large number of Internet paths between popular video destinations and clients to create an empirical understanding of location, persistence, and recurrence of failures. These failures are mapped to perceived video quality by reconstructing video clips and conducting surveys. We then investigate ways to recover from QoE degradation by choosing one-hop detour paths that preserve application-specific policies. We seek simple, scalable path selection strategies without the need for background path monitoring. Using five different measurement overlays spread across the globe, we show that a source can recover from over 75% of the degradations by attempting to restore QoE with any k randomly chosen nodes in an overlay, where k is bounded by O(ln(N)). We argue that our results are robust across datasets. Finally, we design and implement a prototype packet forwarding module called source initiated frame restoration (SIFR). We deployed SIFR on PlanetLab nodes and compared the performance of SIFR to the default Internet routing. We show that SIFR outperforms IP-path selection by providing higher on-screen perceptual quality.
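The k = O(ln N) recovery strategy translates directly into code. This sketch only illustrates the candidate selection, with hypothetical node names; the path-quality probing that follows is omitted.

```python
import math
import random

def detour_candidates(overlay_nodes):
    """Pick k random one-hop detour candidates, with k bounded by
    O(ln N), per the abstract's recovery result."""
    k = max(1, math.ceil(math.log(len(overlay_nodes))))
    return random.sample(overlay_nodes, k)

nodes = [f"relay-{i}" for i in range(50)]
print(detour_candidates(nodes))  # ln(50) ~ 3.9, so 4 candidates
```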

www.cs.ucf.edu/~mainak/papers/Video-QoEDegradation.pdf

MobilityFirst: a mobility-centric and trustworthy internet architecture

MobilityFirst is a future Internet architecture with mobility and trustworthiness as central design goals. Mobility means that all endpoints – devices, services, content, and networks – should be able to frequently change network attachment points in a seamless manner. Trustworthiness means that the network must be resilient to the presence of a small number of malicious endpoints or network routers. MobilityFirst enhances mobility by cleanly separating names or identifiers from addresses or network locations, and enhances security by representing both in an intrinsically verifiable manner, relying upon a massively scalable, distributed, global name service to bind names and addresses, and to facilitate services including device-to-service, multicast, anycast, and context-aware communication, content retrieval, and more.

A key insight emerging from our experience is that a logically centralized global name service can significantly enhance mobility and security and transform network-layer functionality. Recognizing and validating this insight is the key contribution of the MobilityFirst architectural effort.
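A toy sketch may make the name/address split concrete: a logically centralized name service binds a flat, intrinsically verifiable identifier (here derived by hashing a public key) to whatever addresses the endpoint currently has. Everything beyond the abstract (class layout, identifier length) is an illustrative assumption.

```python
import hashlib

class GlobalNameService:
    """Toy global name service: binds self-certifying names to
    current network addresses, so endpoints can move freely."""
    def __init__(self):
        self._bindings = {}

    @staticmethod
    def guid(public_key: bytes) -> str:
        # Self-certifying: anyone holding the key can verify the name.
        return hashlib.sha256(public_key).hexdigest()[:16]

    def update(self, guid: str, addresses):
        self._bindings[guid] = list(addresses)  # endpoint re-attached

    def resolve(self, guid: str):
        return self._bindings.get(guid, [])

gns = GlobalNameService()
g = gns.guid(b"alice-public-key")
gns.update(g, ["net1:addr42"])
gns.update(g, ["net7:addr9"])    # seamless change of attachment point
print(g, gns.resolve(g))
```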


www.sigcomm.org/sites/default/files/ccr/papers/2014/July/0000000-0000011.pdf

PACK: Prediction-Based Cloud Bandwidth and Cost Reduction System

In this paper, we present PACK (Predictive ACKs), a novel end-to-end traffic redundancy elimination (TRE) system, designed for cloud computing customers. Cloud-based TRE needs to apply a judicious use of cloud resources so that the bandwidth cost reduction combined with the additional cost of TRE computation and storage would be optimized. PACK’s main advantage is its capability of offloading the cloud-server TRE effort to end-clients, thus minimizing the processing costs induced by the TRE algorithm. Unlike previous solutions, PACK does not require the server to continuously maintain clients’ status. This makes PACK very suitable for pervasive computation environments that combine client mobility and server migration to maintain cloud elasticity. PACK is based on a novel TRE technique, which allows the client to use newly received chunks to identify previously received chunk chains, which in turn can be used as reliable predictors to future transmitted chunks. We present a fully functional PACK implementation, transparent to all TCP-based applications and network devices. Finally, we analyze PACK benefits for cloud users, using traffic traces from various sources.
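The chunk-chain idea can be sketched as follows; the chunking, signature scheme, and class names are simplified assumptions, not PACK's wire format.

```python
import hashlib

class ChunkPredictor:
    """Receiver-side sketch: remember which chunk followed which,
    and when a known chunk arrives, predict the next one so the
    sender can elide it."""
    def __init__(self):
        self.next_of = {}   # chunk signature -> signature of its successor
        self.prev = None

    @staticmethod
    def sign(chunk: bytes) -> str:
        return hashlib.sha1(chunk).hexdigest()

    def observe(self, chunk: bytes):
        sig = self.sign(chunk)
        if self.prev is not None:
            self.next_of[self.prev] = sig   # extend the chunk chain
        self.prev = sig

    def predict_next(self, chunk: bytes):
        return self.next_of.get(self.sign(chunk))

p = ChunkPredictor()
for c in (b"AAA", b"BBB", b"CCC"):
    p.observe(c)
print(p.predict_next(b"BBB") == p.sign(b"CCC"))  # True
```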


www.micansinfotech.com/IEEE-CSEIT-BASEPAPER/IEEE-PROJECT-2014-2015-PACK-Prediction-Based-Cloud-Bandwidth.pdf

An Ultra-Low-Latency Guaranteed-Rate Internet for Cloud Services

An Enhanced-Internet network that provides ultra-low-latency guaranteed-rate (GR) communications for Cloud Services is proposed. The network supports two traffic classes, the Smooth and Best-Effort classes. Smooth traffic flows receive low-jitter GR service over virtual-circuit-switched (VCS) connections with negligible buffering and queueing delays, up to 100% link utilizations, deterministic end-to-end quality-of-service (QoS) guarantees, and improved energy efficiency. End-to-end delays are effectively reduced to the fiber “time of flight.” A new router scheduling problem called the Bounded Normalized-Jitter integer-programming problem is formulated. A fast polynomial-time approximate solution is presented, allowing TDM-based router schedules to be computed in microseconds. We establish that all admissible traffic demands in any packet-switched network can be simultaneously satisfied with GR-VCS connections, with minimal buffering. Each router can use two periodic TDM-based schedules to support GR-VCS connections, which are updated automatically when the router's traffic rate matrix changes. The design of a Silicon-Photonics all-optical packet switch with minimal buffering is presented. The Enhanced-Internet can: 1) reduce router buffer requirements by factors of >= 1000; 2) increase the Internet's aggregate capacity; 3) lower the Internet's capital and operating costs; and 4) lower greenhouse gas emissions through improved energy efficiency.
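To make the scheduling object concrete, here is a toy periodic TDM schedule that assigns frame slots in proportion to requested rates; it ignores the paper's jitter bounds and integer-programming formulation entirely, so it is only a conceptual illustration.

```python
def tdm_schedule(rates, frame_slots=8):
    """Assign each guaranteed-rate flow a share of slots per frame
    proportional to its rate (toy version; no jitter guarantee)."""
    total = sum(rates.values())
    schedule = []
    for flow, rate in rates.items():
        schedule += [flow] * round(frame_slots * rate / total)
    return schedule[:frame_slots]

# 50%/25%/25% of the link, repeated every frame:
print(tdm_schedule({"A": 0.5, "B": 0.25, "C": 0.25}))
# ['A', 'A', 'A', 'A', 'B', 'B', 'C', 'C']
```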


ieeexplore.ieee.org/iel7/90/4359146/06917218.pdf

When the Internet Sleeps: Correlating Diurnal Networks With External Factors

As the Internet matures, policy questions loom larger in its operation. When should an ISP, city, or government invest in infrastructure? How do their policies affect use? In this work, we develop a new approach to evaluate how policies, economic conditions, and technology correlate with Internet use around the world. First, we develop an adaptive and accurate approach to estimate block availability, the fraction of active IP addresses in each /24 block over short timescales (every 11 minutes). Our estimator provides a new lens to interpret data taken from existing long-term outage measurements, thus requiring no additional traffic. (If new collection were required, it would be lightweight, since on average, outage detection requires fewer than 20 probes per hour per /24 block; less than 1% of background radiation.) Second, we show that spectral analysis of this measure can identify diurnal usage: blocks where addresses are regularly used during part of the day and idle at other times. Finally, we analyze data for the entire responsive Internet (3.7M /24 blocks) over 35 days. These global observations show when and where the Internet sleeps—networks are mostly always-on in the US and Western Europe, and diurnal in much of Asia, South America, and Eastern Europe. ANOVA (Analysis of Variance) testing shows that diurnal networks correlate negatively with country GDP and electrical consumption, quantifying how national policies and economics relate to network use.
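The spectral test for diurnal blocks can be sketched with a discrete Fourier transform: flag a block if its availability series has a strong peak at one cycle per day. The 11-minute sampling (about 131 samples per day) follows the text; the detection threshold is an assumption.

```python
import numpy as np

def is_diurnal(availability, samples_per_day=131):
    """Flag a /24 block as diurnal if the power at one cycle/day
    dominates the spectrum of its availability time series."""
    x = np.asarray(availability, dtype=float)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    k = int(round(len(x) / samples_per_day))  # bin for 1 cycle/day
    return power[k] > 0.5 * power[1:].sum()   # threshold is illustrative

t = np.arange(131 * 7)                            # one simulated week
diurnal_block = 0.5 + 0.4 * np.sin(2 * np.pi * t / 131)
print(is_diurnal(diurnal_block))                  # True
```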


www.isi.edu/~johnh/PAPERS/Quan14c.pdf

Cost-Effective Resource Allocation of Overlay Routing Relay Nodes

Overlay routing is a very attractive scheme that allows improving certain properties of routing (such as delay or TCP throughput) without the need to change the standards of the current underlying routing. However, deploying overlay routing requires the placement and maintenance of overlay infrastructure. This gives rise to the following optimization problem: Find a minimal set of overlay nodes such that the required routing properties are satisfied. In this paper, we rigorously study this optimization problem. We show that it is NP-hard and derive a nontrivial approximation algorithm for it, where the approximation ratio depends on specific properties of the problem at hand. We examine the practical aspects of the scheme by evaluating the gain one can get over several real scenarios. The first one is BGP routing, and we show, using up-to-date data reflecting the current BGP routing policy in the Internet, that a relatively small number of less than 100 relay servers is sufficient to enable routing over shortest paths from a single source to all autonomous systems (ASs), reducing the average path length of inflated paths by 40%. We also demonstrate that the scheme is very useful for TCP performance improvement (resulting in an almost optimal placement of overlay nodes) and for Voice-over-IP (VoIP) applications, where a small number of overlay nodes can significantly reduce the maximal peer-to-peer delay.
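For intuition, the generic greedy baseline for this kind of covering problem looks like the sketch below; the paper derives a different, problem-specific approximation algorithm, so this is only a conceptual stand-in with made-up data.

```python
def greedy_relays(demands, covers):
    """Repeatedly pick the candidate relay that satisfies the most
    still-unsatisfied routing demands (set-cover heuristic)."""
    uncovered = set(demands)
    chosen = []
    while uncovered:
        best = max(covers, key=lambda r: len(covers[r] & uncovered))
        if not covers[best] & uncovered:
            break                       # remaining demands unsatisfiable
        chosen.append(best)
        uncovered -= covers[best]
    return chosen

covers = {"r1": {"AS1", "AS2"}, "r2": {"AS2", "AS3"}, "r3": {"AS4"}}
print(greedy_relays({"AS1", "AS2", "AS3", "AS4"}, covers))
```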


www.micansinfotech.com/IEEE-CSEIT-BASEPAPER/IEEE-PROJECT-2014-2015-Cost-Effective-Resource-Allocation-of-Overlay.pdf

A middlebox-cooperative TCP for a non end-to-end internet

Understanding, measuring, and debugging IP networks, particularly across administrative domains, is challenging. One particularly daunting aspect of the challenge is the presence of transparent middleboxes—which are now common in today’s Internet. In-path middleboxes that modify packet headers are typically transparent to a TCP, yet can impact end-to-end performance or cause blackholes. We develop TCP HICCUPS to reveal packet header manipulation to both endpoints of a TCP connection. HICCUPS permits endpoints to cooperate with currently opaque middleboxes without prior knowledge of their behavior. For example, with visibility into end-to-end behavior, a TCP can selectively enable or disable performance enhancing options. This cooperation enables protocol innovation by allowing new IP or TCP functionality (e.g., ECN, SACK, Multipath TCP, Tcpcrypt) to be deployed without fear of such functionality being misconstrued, modified, or blocked along a path. HICCUPS is incrementally deployable and introduces no new options. We implement and deploy TCP HICCUPS across thousands of disparate Internet paths, highlighting the breadth and scope of subtle and hard-to-detect middlebox behaviors encountered. We then show how path diagnostic capabilities provided by HICCUPS can benefit applications and the network.
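The core detection step can be illustrated with a header digest: both endpoints hash the header fields they sent, and a mismatch on arrival reveals in-path modification. The field selection and digest placement here are simplified assumptions, not HICCUPS' actual encoding (which must fit into existing TCP fields).

```python
import hashlib
import struct

def header_digest(src_port, dst_port, seq, flags):
    """16-bit digest over a few TCP header fields (illustrative
    subset); endpoints compare digests to detect rewriting."""
    raw = struct.pack("!HHIB", src_port, dst_port, seq, flags)
    return hashlib.sha256(raw).digest()[:2]

sent = header_digest(43211, 80, 1000, 0x02)
# A NAT rewrites the source port in flight:
received = header_digest(51000, 80, 1000, 0x02)
print("header modified in flight:", sent != received)  # True
```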


rbeverly.net/research/papers/hiccups-sigcomm14.pdf

BISmark: A Testbed for Deploying Measurements and Applications in Broadband Access Networks

BISmark (Broadband Internet Service Benchmark) is a deployment of home routers running custom software, and backend infrastructure to manage experiments and collect measurements. The project began in 2010 as an attempt to better understand the characteristics of broadband access networks. We have since deployed BISmark routers in hundreds of home networks in about thirty countries. BISmark is currently used and shared by researchers at nine institutions, including commercial Internet service providers, and has enabled studies of access link performance, network connectivity, Web page load times, and user behavior and activity. Research using BISmark and its data has informed both technical and policy research. This paper describes and revisits design choices we made during the platform’s evolution and lessons we have learned from the deployment effort thus far. We also explain how BISmark enables experimentation, and our efforts to make it available to the networking community. We encourage researchers to contact us if they are interested in running experiments on BISmark.


www.usenix.org/system/files/conference/atc14/atc14-paper-sundaresan.pdf

Remote Peering: More Peering without Internet Flattening

The trend toward more peering between networks is commonly conflated with the trend of Internet flattening, i.e., reduction in the number of intermediary organizations on Internet paths. Indeed, direct peering interconnections bypass layer-3 transit providers and make the Internet flatter. This paper studies an emerging phenomenon that separates the two trends: we present the first systematic study of remote peering, an interconnection where remote networks peer via a layer-2 provider. Our measurements reveal significant presence of remote peering at IXPs (Internet eXchange Points) worldwide. Based on ground truth traffic, we also show that remote peering has a substantial potential to offload transit traffic. Generalizing the empirical results, we analytically derive conditions for economic viability of remote peering versus transit and direct peering. Because remote-peering services are provided on layer 2, our results challenge the traditional reliance on layer-3 topologies in modeling the Internet economic structure. We also discuss broader implications of remote peering for reliability, security, accountability, and other aspects of Internet research.
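One way remote peering can be inferred, sketched below, is a latency heuristic: an IXP member whose RTT from the exchange's measurement point far exceeds what metro-area co-location allows is likely remote. The 10 ms threshold and the data are illustrative assumptions.

```python
def remote_peers(rtts_ms, threshold_ms=10.0):
    """Flag IXP members whose RTT from the exchange exceeds what
    co-location in the metro area would allow."""
    return [asn for asn, rtt in rtts_ms.items() if rtt > threshold_ms]

print(remote_peers({"AS64500": 1.2, "AS64501": 34.8, "AS64502": 0.9}))
# ['AS64501']
```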


eprints.networks.imdea.org/894/1/Remote_Peering_More_Peering_without_Internet_Flattening_2014_EN.pdf

Mobile Network Performance from User Devices: A Longitudinal, Multidimensional Analysis

In the cellular environment, operators, researchers, and end users have poor visibility into network performance for devices. Improving visibility is challenging because this performance depends on factors that include carrier, access technology, signal strength, geographic location, and time. Addressing this requires longitudinal, continuous, and large-scale measurements from a diverse set of mobile devices and networks. This paper takes a first look at cellular network performance from this perspective, using 17 months of data collected from devices located throughout the world. We show that (i) there is significant variance in key performance metrics both within and across carriers; (ii) this variance is at best only partially explained by regional and time-of-day patterns; (iii) the stability of network performance varies substantially among carriers. Further, we use the dataset to diagnose the causes behind observed performance problems and identify additional measurements that will improve our ability to reason about mobile network behavior.


web.eecs.umich.edu/~zmao/Papers/NikraveshPAM2014.pdf

mTCP: a Highly Scalable User-level TCP Stack for Multicore Systems

Scaling the performance of short TCP connections on multicore systems is fundamentally challenging. Although many proposals have attempted to address various shortcomings, inefficiencies in the kernel implementation still persist. For example, even state-of-the-art designs spend 70% to 80% of CPU cycles handling TCP connections in the kernel, leaving little room for innovation in the user-level program.

This work presents mTCP, a high-performance user-level TCP stack for multicore systems. mTCP addresses the inefficiencies from the ground up—from packet I/O and TCP connection management to the application interface. In addition to adopting well-known techniques, our design (1) translates multiple expensive system calls into a single shared memory reference, (2) allows efficient flow-level event aggregation, and (3) performs batched packet I/O for high I/O efficiency. Our evaluations on an 8-core machine showed that mTCP improves the performance of small message transactions by a factor of 25 compared to the latest Linux TCP stack and a factor of 3 compared to the best-performing research system known so far. It also improves the performance of various popular applications by 33% to 320% compared to those on the Linux stack.
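The batching idea is easy to illustrate: rather than one system call per socket event, one (shared-memory) boundary crossing delivers a whole batch of flow-level events. This is a conceptual sketch, not mTCP's API.

```python
def run_event_loop(fetch_batch, handle):
    """Process flow-level events batch by batch: one 'crossing'
    (here, one call to fetch_batch) amortizes over many events."""
    while True:
        batch = fetch_batch()
        if not batch:
            break
        for flow_id, event in batch:
            handle(flow_id, event)

queue = [[("flow-1", "readable"), ("flow-2", "writable")],
         [("flow-1", "readable")],
         []]                       # empty batch terminates the loop
run_event_loop(lambda: queue.pop(0), lambda f, e: print(f, e))
```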


www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-jeong.pdf

Layer 1-Informed Internet Topology Measurement

Understanding the Internet’s topological structure continues to be fraught with challenges. In this paper, we investigate the hypothesis that physical maps of service provider infrastructure can be used to effectively guide topology discovery based on network layer TTL-limited measurement. The goal of our work is to focus layer 3-based probing on broadly identifying Internet infrastructure that has a fixed geographic location such as POPs, IXPs and other kinds of hosting facilities. We begin by comparing more than 1.5 years of TTL-limited probe data from the Ark [25] project with maps of service provider infrastructure from the Internet Atlas [15] project. We find that there are substantially more nodes and links identified in the service provider map data versus the probe data. Next, we describe a new method for probe-based measurement of physical infrastructure called POPsicle that is based on careful selection of probe source-destination pairs. We demonstrate the capability of our method through an extensive measurement study using existing “looking glass” vantage points distributed throughout the Internet and show that it reveals 2.4 times more physical node locations versus standard probing methods. To demonstrate the deployability of POPsicle we also conduct tests at an IXP. Our results again show that POPsicle can identify more physical node locations compared with standard layer 3 probes, and through this deployment approach it can be used to measure thousands of networks worldwide.


conferences2.sigcomm.org/imc/2014/papers/p381.pdf

Towards Detecting Target Link Flooding Attack

A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected, because they employ legitimate flows to congest selected links. Although new mechanisms for defending against LFA have been proposed, deployment issues limit their adoption, since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both end-to-end and hop-by-hop network measurement techniques to capture abnormal path performance degradation for detecting LFA, and then correlates the performance data and traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing the performance of asymmetric Internet paths, and detecting LFA. We have implemented LinkScope in 7174 lines of C code, and extensive evaluation in a testbed and on the Internet shows that LinkScope can quickly detect LFA with high accuracy and a low false positive rate.
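The trigger stage might look like the following sketch: flag a path when loss or median RTT degrades far beyond its baseline, then hand it to the traceroute correlation stage. The thresholds and probe format are illustrative assumptions.

```python
def degraded(samples, baseline_rtt_ms, loss_threshold=0.1):
    """Flag abnormal path performance: high loss, or median RTT
    far above the path's baseline (thresholds are illustrative)."""
    rtts = sorted(s["rtt"] for s in samples if s["rtt"] is not None)
    loss = 1 - len(rtts) / len(samples)
    if loss > loss_threshold:
        return True
    return bool(rtts) and rtts[len(rtts) // 2] > 3 * baseline_rtt_ms

probes = [{"rtt": 40}, {"rtt": None}, {"rtt": 250}, {"rtt": 260}]
print(degraded(probes, baseline_rtt_ms=45))  # True: 25% loss
```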


www.usenix.org/system/files/conference/lisa14/lisa14-paper-xue.pdf

Identifying Traffic Differentiation in Mobile Networks

Traffic differentiation—giving better (or worse) performance to certain classes of Internet traffic— is a well-known but poorly understood traffic management policy. There is active discussion on whether and how ISPs should be allowed to differentiate Internet traffic [8,21], but little data about current practices to inform this discussion. Previous work attempted to address this problem for fixed line networks; however, there is currently no solution that works in the more challenging mobile environment.

In this paper, we present the design, implementation, and evaluation of the first system and mobile app for identifying traffic differentiation for arbitrary applications in the mobile environment (i.e., wireless networks such as cellular and WiFi, used by smartphones and tablets). The key idea is to use a VPN proxy to record and replay the network traffic generated by arbitrary applications, and compare it with the network behavior when replaying this traffic outside of an encrypted tunnel. We perform the first known testbed experiments with actual commercial shaping devices to validate our system design and demonstrate how it outperforms previous work for detecting differentiation. We released our app and collected differentiation results from 12 ISPs in 5 countries. We find that differentiation tends to affect TCP traffic (reducing rates by up to 60%) and that interference from middleboxes (including video-transcoding devices) is pervasive. By exposing such behavior, we hope to improve transparency for users and help inform future policies.
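The comparison at the heart of the record/replay approach can be sketched as below; the real system uses a proper statistical test rather than this simple median ratio, and the numbers are made up.

```python
from statistics import median

def differentiated(exposed_kbps, tunneled_kbps, threshold=0.2):
    """Compare throughput of a replay the network can classify
    (exposed) against the same replay inside an encrypted tunnel;
    a large gap suggests differentiation."""
    e, t = median(exposed_kbps), median(tunneled_kbps)
    return abs(e - t) / max(e, t) > threshold

exposed = [400, 420, 390, 410]      # shaper recognizes the app traffic
tunneled = [980, 1010, 990, 1005]   # same traffic, hidden in the tunnel
print(differentiated(exposed, tunneled))  # True
```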


conferences2.sigcomm.org/imc/2015/papers/p239.pdf

Tracking the Evolution and Diversity in Network Usage of Smartphones

We analyze the evolution of smartphone usage from a dataset obtained from three 15-day-long, user-side measurements with over 1500 recruited smartphone users in the Greater Tokyo area from 2013 to 2015. This dataset shows users across a diverse range of networks: cellular access (3G to LTE), WiFi access (2.4 and 5 GHz), and a growing deployment of public WiFi access points (APs), as they use diverse applications such as video, file synchronization, and major software updates.

Our analysis shows that smartphone users select appropriate network interfaces taking into account the deployment of emerging technologies, their bandwidth demand, and their economic constraints. Thus, users show diversity both in how much traffic they send and in what networks they send it on. We show that users are gradually but steadily adopting WiFi at home, in offices, and in public spaces over these three years. The majority of light users have been shifting their traffic to WiFi. Heavy hitters acquire more bandwidth via WiFi, especially at home. The percentage of users explicitly turning off their WiFi interface during the day decreases from 50% to 40%. Our results highlight that the offloading environment has improved during the three years, with more than 40% of WiFi users connecting to multiple WiFi APs in one day. WiFi offload at offices is still limited in our dataset due to few accessible APs, but WiFi APs in public spaces have become an alternative to cellular access for users who request not only simple connectivity but also bandwidth-consuming applications such as video streaming and software updates.


conferences2.sigcomm.org/imc/2015/papers/p253.pdf

Is There WiFi Yet? How Aggressive Probe Requests Deteriorate Energy and Throughput

WiFi offloading has emerged as a key component of cellular operator strategy to meet the rich data needs of modern mobile devices. Hence, mobile devices tend to aggressively seek out WiFi in order to provide improved user Quality of Experience (QoE) and cellular capacity relief. For home and work environments, aggressive WiFi scans can significantly improve the speed at which mobile nodes join the WiFi network. Unfortunately, the same aggressive behavior that excels in the home environment incurs considerable side effects in crowded wireless environments. In this paper, we analyze empirical data collected from large (stadium) and medium (classroom) venues, and show through controlled experiments (laboratory) how aggressive WiFi scans can have significant implications for energy and throughput for mobile nodes. We close with several thoughts on the disjoint incentives for properly balancing WiFi discovery speed and crowded network interactions.

Characterizing Smartphone Usage Patterns from Millions of Android Users

The prevalence of smart devices has promoted the popularity of mobile applications (a.k.a. apps) in recent years. A number of interesting and important questions remain unanswered, such as why a user likes/dislikes an app, how an app becomes popular or eventually perishes, how a user selects apps to install and interacts with them, how frequently an app is used and how much traffic it generates, etc. This paper presents an empirical analysis of app usage behaviors collected from millions of users of Wandoujia, a leading Android app marketplace in China. The dataset covers two types of user behaviors of using over 0.2 million Android apps, including (1) app management activities (i.e., installation, updating, and uninstallation) of over 0.8 million unique users and (2) app network traffic from over 2 million unique users. We explore multiple aspects of such behavior data and present interesting patterns of app usage. The results provide many useful implications to the developers, users, and disseminators of mobile apps.

Characterizing IPv4 Anycast Adoption and Deployment

This paper provides a comprehensive picture of IP-layer anycast adoption in the current Internet. We carry out multiple IPv4 anycast censuses, relying on latency measurements from PlanetLab. Next, we leverage our novel technique for anycast detection, enumeration, and geolocation [17] to quantify anycast adoption in the Internet. Our technique is scalable and, unlike previous efforts that are bound to exploiting DNS, is protocol-agnostic. Our results show that major Internet companies (including tier-1 ISPs, over-the-top operators, Cloud providers and equipment vendors) use anycast: we find that a broad range of TCP services are offered over anycast, the most popular of which include HTTP and HTTPS by anycast CDNs that serve websites from the top-100k Alexa list. Additionally, we complement our characterization of IPv4 anycast with a description of the challenges we faced to collect and analyze large-scale delay measurements, and the lessons learned.
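The latency-based detection cited as [17] rests on a speed-of-light argument, sketched here: if two distant vantage points both reach an IP faster than light in fiber could cover the distance between them, the address must be served from more than one site. The ~200 km/ms propagation constant is the usual rule of thumb, and the numbers are hypothetical.

```python
def anycast_evidence(distance_km, rtt1_ms, rtt2_ms, c_fiber=200.0):
    """Two vantage points can each reach a server at most
    c_fiber * rtt/2 km away; if those disks cannot overlap,
    the target IP must be anycast."""
    max_reach = c_fiber * (rtt1_ms + rtt2_ms) / 2  # sum of both radii
    return distance_km > max_reach

# Vantage points 9000 km apart, both seeing ~10 ms RTTs:
print(anycast_evidence(9000, 10, 10))  # True -> anycast
```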

Timeouts: Beware Surprisingly High Delay

Active probing techniques, such as ping, have been used to detect outages. When a previously responsive end host fails to respond to a probe, studies sometimes attempt to confirm the outage by retrying the ping, or attempt to identify the location of the outage by using other tools such as traceroute. The latent problem, however, is: how long should one wait for a response to the ping? Too short a timeout risks confusing congestion or other delay with an outage. Too long a timeout may slow the process and prevent observing and diagnosing short-duration events, depending on the experiment’s design.

We believe that conventional timeouts for active probes are underestimates, and analyze data collected by Heidemann et al. in 2006–2015. We find that 5% of pings from 5% of addresses take more than 5 seconds. Put another way, for 5% of the responsive IP addresses probed by Heidemann, a false 5% loss rate would be inferred if using a timeout of 5 seconds. To arrive at this observation, we filtered artifacts of the data that could occur with too long a timeout, including responses to probes sent to broadcast addresses. We also analyze ICMP data collected by ZMap in 2015 and find that around 5% of all responsive addresses consistently observe round-trip times greater than one second. Further, the prevalence of high round-trip times has been increasing, and it is often associated with the first ping, perhaps due to negotiating a wireless connection. In addition, we find that the Autonomous Systems with the most high-latency addresses are typically cellular. This paper describes our analysis process and results, which should encourage researchers to set longer timeouts when needed and to report timeout settings in the description of future measurements.
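The paper's central quantity is easy to compute from measured RTTs: the false loss rate a given timeout would induce. A minimal sketch with made-up RTT samples:

```python
def false_loss_rate(rtts_seconds, timeout):
    """Fraction of eventually-answered probes that a given timeout
    would wrongly declare lost."""
    return sum(rtt > timeout for rtt in rtts_seconds) / len(rtts_seconds)

rtts = [0.03, 0.05, 0.2, 1.4, 6.2, 0.08, 7.5, 0.04, 0.06, 0.09]
for t in (1, 3, 5):
    print(f"timeout {t}s -> false loss {false_loss_rate(rtts, t):.0%}")
```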

Ting: Measuring and Exploiting Latencies Between All Tor Nodes

Tor is a peer-to-peer overlay routing network that achieves unlinkable communication between source and destination. Unlike traditional mix-nets, Tor seeks to balance anonymity and performance, particularly with respect to providing low-latency communication. As a result, understanding the latencies between peers in the Tor network could be an extremely powerful tool in understanding and improving Tor’s performance and anonymity properties. Unfortunately, there are no practical techniques for inferring accurate latencies between two arbitrary hosts on the Internet, and Tor clients are not instrumented to collect and report on these measurements.

In this paper, we present Ting, a technique for measuring latencies between arbitrary Tor nodes from a single vantage point. Through a ground-truth validation, we show that Ting is accurate, even with few samples, and does not require modifications to existing clients. We also apply Ting to the live Tor network, and show that its measurements are stable over time. We demonstrate that the all-pairs latency datasets that Ting permits can be applied in disparate ways, including faster methods of deanonymizing Tor circuits and efficiently finding long circuits with low end-to-end latency.
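The measurement reduces to circuit-RTT differencing, sketched here in its simplest form. Ting's actual circuit construction and noise handling are more involved, so treat the arithmetic and the numbers as a toy version, not the paper's exact method.

```python
def pair_latency_ms(rtt_circuit, rtt_head, rtt_tail):
    """Toy differencing: rtt_circuit traverses the head relay, the
    X-Y link, and the tail relay; subtracting the head-only and
    tail-only segment RTTs isolates the X-Y round trip."""
    return (rtt_circuit - rtt_head - rtt_tail) / 2.0   # one-way ms

print(pair_latency_ms(310.0, 120.0, 130.0), "ms between X and Y")
```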

Neither Snow Nor Rain Nor MITM... An Empirical Analysis of Email Delivery Security

The SMTP protocol is responsible for carrying some of users’ most intimate communication, but like other Internet protocols, authentication and confidentiality were added only as an afterthought. In this work, we present the first report on global adoption rates of SMTP security extensions, including: STARTTLS, SPF, DKIM, and DMARC. We present data from two perspectives: SMTP server configurations for the Alexa Top Million domains, and over a year of SMTP connections to and from Gmail. We find that the top mail providers (e.g., Gmail, Yahoo, and Outlook) all proactively encrypt and authenticate messages. However, these best practices have yet to reach widespread adoption in a long tail of over 700,000 SMTP servers, of which only 35% successfully configure encryption, and 1.1% specify a DMARC authentication policy. This security patchwork — paired with SMTP policies that favor failing open to allow gradual deployment — exposes users to attackers who downgrade TLS connections in favor of cleartext and who falsify MX records to reroute messages. We present evidence of such attacks in the wild, highlighting seven countries where more than 20% of inbound Gmail messages arrive in cleartext due to network attackers.
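One of the measurements in the study, checking whether a mail server advertises STARTTLS, can be reproduced with the standard library; the MX host below is only an example, and the study additionally checks SPF, DKIM, and DMARC via DNS.

```python
import smtplib

def supports_starttls(mx_host: str) -> bool:
    """Connect to an MX host on port 25 and check whether it
    advertises the STARTTLS extension."""
    try:
        with smtplib.SMTP(mx_host, 25, timeout=10) as s:
            s.ehlo()
            return s.has_extn("starttls")
    except (OSError, smtplib.SMTPException):
        return False

print(supports_starttls("gmail-smtp-in.l.google.com"))
```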

Resilience of Deployed TCP to Blind Attacks

As part of TCP’s steady evolution, recent standards have recommended mechanisms to protect against weaknesses in TCP. But adoption, configuration, and deployment of TCP improvements can be slow. In this work, we consider the resilience of deployed TCP implementations to blind in-window attacks, where an off-path adversary disrupts an established connection by sending a packet that the victim believes came from its peer, causing data corruption or connection reset. We tested operating systems (and middleboxes deployed in front) of webservers in the wild in September 2015 and found 22% of connections vulnerable to in-window SYN and reset packets, 30% vulnerable to in-window data packets, and 38.4% vulnerable to at least one of the three in-window attacks we tested. We also tested out-of-window packets and found that while few deployed systems were vulnerable to reset and SYN packets, 5.4% of connections accepted in-window data with an invalid acknowledgment number. In addition to evaluating commodity TCP stacks, we found vulnerabilities in 12 of the 14 routers and switches we characterized – critical network infrastructure where the potential impact of any TCP vulnerabilities is particularly acute. This surprisingly high level of extant vulnerabilities in the most mature Internet transport protocol in use today is a perfect illustration of the Internet’s fragility. Embedded in historical context, it also provides a strong case for more systematic, scientific, and longitudinal measurement and quantitative analysis of fundamental properties of critical Internet infrastructure, as well as for the importance of better mechanisms to get best security practices deployed.
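The window check these blind attacks exploit is simple to state in code; the sketch below shows why one lucky guess out of roughly 2^32 / rcv_wnd suffices, and why RFC 5961 tightens the rule for RSTs to an exact sequence match.

```python
def in_window(seq: int, rcv_nxt: int, rcv_wnd: int) -> bool:
    """Classic acceptance test: any segment whose sequence number
    falls in [rcv_nxt, rcv_nxt + rcv_wnd) is accepted, so an
    off-path attacker only has to land one value in the window."""
    return (seq - rcv_nxt) % 2**32 < rcv_wnd

print(in_window(seq=105_000, rcv_nxt=100_000, rcv_wnd=65_535))  # True
```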


conferences.sigcomm.org/imc/2015/papers/p13.pdf

Additional Information / Extras


NA: Internet Measurement Seminar
Lecturer: Anja Feldmann

Period:
from 17.04.2014

Fri 14:00 - 16:00

Room: MAR 4.033

ISIS