- 2018/04/12 – Guest talk: Susanna Schwarzmann, "SDN-enabled Application-aware Network Control Architectures"
- 2018/03/29 – Scientific defense (Wissenschaftliche Aussprache) of Dipl.-Inform. Philipp Tiesel, topic: "Multi-Path Aware Internet Transport Selection"
- 2018/03/28 – Guest talk: Volker Stocker, "The End of Network Neutrality Regulations in the Internet? An Economist’s Perspective"
- 2018/03/21 – Master student final talk: Yu Zhao, "Network Trace Analysis: Linux Toolchain versus Apache Flink"
- 2018/03/20 – Bachelor student final talk: Tanja Birgit Neumer, "Messung und Diagnose von typischen Netzwerkproblemen mit freier Software" (Measurement and Diagnosis of Typical Network Problems with Free Software)
- 2018/03/13 – Master student final talk: Md Shihab Karim, "Measurement and Characterization of Address Blocks from Different Vantage Points"
- 2018/02/26 – Master student final talk: Feras Fattohi, "Competitive Online Virtual Cluster Embedding Algorithms"
- 2018/02/12 – Bachelor student final talk: Marcin Bosk, "Impact of active probing on 802.11 performance"
- 2018/02/09 – Bachelor student final talk: Arne Kappen, "Realizing a Wireless Switch Abstraction in ONOS"
- 2018/01/11 – Master student introductory talk: Martin Ott
Thursday, 12. April 2018
Speaker: Susanna Schwarzmann (Universität Würzburg)
Time: 12 April 2018
The applications running on top of today's networks are increasingly diverse and place different requirements on the network. While a segment download for adaptive video streaming only needs a certain bandwidth to arrive in time, VoIP suffers drastically from network delay and packet loss. In addition to the constant increase in network traffic, driven by the rising demand for new paradigms and services such as cloud computing and VoD platforms, users' expectations are rising as well. Software-Defined Networking (SDN) is a networking paradigm that overcomes several limitations of conventional network architectures by separating the control plane from the data plane. This principle enables the configuration, control, and implementation of functions on network devices in a standardized and centralized manner. It shifts the intelligence, conventionally distributed among the devices in the network, to one logically centralized unit with a global network view: the SDN controller. The controller decides on control actions and communicates instructions to the switches via standardized protocols such as OpenFlow. This talk presents how SDN can be applied to fulfill applications' requirements on the network and thereby enhance user QoE. To do so, both the network and the applications are monitored. Based on the gathered information, network reconfiguration or application control can be performed. We explore different interaction patterns between network and applications and show how they differ w.r.t. the implemented building blocks, the induced overhead, and the performance in terms of QoE enhancement.
Susanna Schwarzmann received her Bachelor's degree in 2014 from the University of Würzburg, where she also received her Master's degree with a focus on Internet Technologies in 2016. Since 2017, she has been a PhD student at the Chair of Communication Networks in Würzburg. Her research interests cover next-generation networks, the interaction of applications with the network, and Quality of Experience (QoE).
Thursday, 29. March 2018
Type: Wissenschaftliche Aussprache (scientific defense) for the award of the academic degree „Doktor der …“
Time: 29 March 2018
The doctoral committee is composed as follows:
Prof. Dr. Rolf Niedermeier
Prof. Anja Feldmann, Ph.D.
Prof. Steve Uhlig, Ph.D. (Queen Mary University of London, UK)
Prof. Olivier Bonaventure, Ph.D. (Université catholique de Louvain, Belgium)
The dissertation and the reviewers' reports may be inspected by those entitled under § 8 (1) of the doctoral regulations at the faculty administration office. The scientific defense is open to the university public.
Wednesday, 28. March 2018
Time: 28 March 2018
Network neutrality is a normative proposition to ensure openness, innovation, and non-discrimination in the Internet. While the corresponding narrative has spurred one of the hottest debates in the realm of Internet policy in the last decade, it has evolved into a regulatory paradigm and has culminated in the implementation of network neutrality regulations in a number of countries. In this talk, I will present a brief history of network neutrality regulations in the U.S. and the EU and provide a critical appraisal of the current regulations from a network economic perspective.
Volker Stocker is an economist and a Senior Research Assistant at the Chair of Network Economics, Competition Economics and Transport Science at the University of Freiburg. His research interests are in the fields of network economics and Internet policy. He was a visiting researcher at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT in Cambridge, MA, and at the Newcastle Business School at Northumbria University in Newcastle, UK, and has authored several articles on competition and regulation in the Internet, network neutrality, broadband policy, and universal service.
Wednesday, 21. March 2018
With the rapid development of Internet technologies, more and more people are accustomed to spending their time on the Internet, not only for working and studying but also for shopping, watching videos, socializing, and so on. Internet services are becoming more interactive and make heavier use of media. As a result, the traffic seen both globally and at any given company grows steadily, and network failures and attacks happen more frequently as well. Whether network analysts are trying to extract commercial value from traffic data for Internet companies or to resolve issues occurring in a network, they have to process ever-larger trace files. Yet network researchers and analysts are still using the same toolchains that have been in use for the last two decades.
Meanwhile, big data frameworks like Apache Flink were created to solve exactly this kind of problem: thanks to their distributed execution model, they can process large amounts of data in a reasonable time. However, since these frameworks are mostly written in high-level languages and are not designed specifically for network analysis, a setting in which they have not been studied before, it is hard to say whether they outperform the conventional approach when processing relatively large network traces. Moreover, processing the data on a single core without parallelization may be more efficient if the simple and fast Linux tools are used.

Therefore, whenever network analysts and researchers process network traffic data to solve problems or obtain statistics, they need to choose the more efficient approach. Less time spent analyzing the traffic means that the losses caused by network problems can be minimized and the commercial value in the data can be extracted in time. In this thesis, a detailed comparison of the two ways to process trace files is made to give a clear recommendation for network analysis. The first approach processes data with Linux toolchains; the second uses Apache Flink, since it is a prominent example of a big data framework and is easily extended to the streaming case as well. Scripts for both methods are provided to address some common and typical network problems, so that network researchers can run their own comparison and choose the better method afterwards. In conclusion, this thesis answers the question of which method should be used under which conditions.
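As a toy illustration of the kind of task such a comparison benchmarks, the Python sketch below counts packets per source IP ("top talkers"); a classic Linux-toolchain pipeline for the same task is noted in a comment. All field names and data here are hypothetical, not taken from the thesis scripts.

```python
from collections import Counter

# A rough Linux-toolchain equivalent (illustrative, not from the thesis):
#   tshark -r trace.pcap -T fields -e ip.src | sort | uniq -c | sort -rn

def top_talkers(records, n=3):
    """Count packets per source IP and return the n most frequent sources."""
    counts = Counter(src for src, _dst in records)
    return counts.most_common(n)

# Toy stand-in for a parsed trace: one (source IP, destination IP) pair per packet.
trace = [
    ("10.0.0.1", "10.0.0.9"),
    ("10.0.0.2", "10.0.0.9"),
    ("10.0.0.1", "10.0.0.8"),
    ("10.0.0.1", "10.0.0.9"),
]

print(top_talkers(trace, n=2))  # [('10.0.0.1', 3), ('10.0.0.2', 1)]
```

In a big data framework the same per-key aggregation would be expressed as a keyed group-and-count over a distributed stream of records, which is precisely where the single-core-pipeline versus distributed-framework trade-off arises.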
Tuesday, 20. March 2018
Speaker: Tanja Birgit Neumer (TU Berlin bachelor student)
The aim of this thesis is to use open-source software, together with partly self-implemented extensions, to set up a monitoring system that can detect a variety of problems in computer networks, with a focus on enterprise settings. Based on a categorization of various sources of error, a comprehensive overview of potential causes and effects in networks is provided. Building on this, suitable software is selected to detect such symptoms. The individual applications are then assembled into a monitoring system and equipped with a custom extension that provides monitoring data on hardware servers via the Redfish API. To investigate the effectiveness of the monitoring system, a test environment with various scenarios of occurring network errors is designed, documented, and evaluated. For this purpose, the network infrastructure of a small enterprise was available for research throughout the entire period of the work.
Tuesday, 13. March 2018
BGP (Border Gateway Protocol) is the de facto standard protocol for routing on the Internet; it enables connectivity between networks and is used to interconnect ASes (Autonomous Systems) at different locations administrated by various operators. Over the past decades, the Internet has grown rapidly from a mostly educational network into a network dominated by financial interests and a rich diversity of ASes. BGP has been significantly extended over time to accommodate today's needs and is used to enable features like load balancing, multi-homing, address transfers, and more.
At the same time, the routing table keeps growing and already exceeds the default maximum of 512k routes in many older router devices. To better understand the impact of the ever-growing routing table on overall Internet performance, it is essential to understand the causes of its growth. In this thesis, we propose to investigate causes such as deaggregation, propagation limits, and prefix churn. We pay particular attention to the utilization of allocated address space, prefix visibility, and stability in the global BGP routing table. We conduct our analyses using different vantage points and compare our results for IPv4 and IPv6.
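As a rough illustration of how deaggregation shows up in a routing-table snapshot, the sketch below (our own example, not the thesis code) flags announced prefixes that are more-specifics of another announced prefix, using Python's standard ipaddress module.

```python
import ipaddress

def more_specifics(prefixes):
    """Return prefixes covered by another (shorter) announced prefix.

    Such covered more-specifics are a simple proxy for deaggregation.
    The quadratic scan is fine for an illustration; a real analysis over
    ~700k routes would use a prefix trie instead.
    """
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return [
        str(net)
        for net in nets
        if any(net != other and net.subnet_of(other) for other in nets)
    ]

# Toy routing-table snapshot with one deaggregated /8.
table = ["10.0.0.0/8", "10.1.0.0/16", "10.1.2.0/24", "192.0.2.0/24"]
print(more_specifics(table))  # ['10.1.0.0/16', '10.1.2.0/24']
```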
Monday, 26. February 2018
In the conventional cloud service model, computing resources are allocated to tenants on a pay-per-use basis. However, the performance of applications that communicate across the provider's network is unpredictable because network resources are not guaranteed.
To mitigate this issue, the virtual cluster (VC) model has been developed, in which both network and compute resources are guaranteed. Building on this, many algorithms based on novel extensions of the VC model have been developed to solve the online virtual cluster embedding (VCE) problem with additional parameters. In the online VCE, the resource footprint is greedily minimized per request, which corresponds to maximizing the provider's profit per request.
However, this does not imply that the profit is maximized globally over the whole sequence of requests. In fact, these algorithms do not even provide a worst-case guarantee on the fraction of the maximum achievable profit of a given request sequence. Thus, these online algorithms do not provide a competitive ratio on the profit.
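A toy example (not from the thesis) of why per-request greedy admission gives no such guarantee: a low-profit request can exhaust capacity and block a later high-profit one, making the greedy profit an arbitrarily small fraction of the offline optimum.

```python
def greedy_profit(capacity, requests):
    """Admit each (size, profit) request greedily if it fits; return total profit."""
    used, profit = 0, 0
    for size, gain in requests:
        if used + size <= capacity:
            used += size
            profit += gain
    return profit

# The first request fills the capacity for a profit of 1, blocking the
# second request worth 100; an offline optimum would earn 100 instead.
requests = [(10, 1), (10, 100)]
print(greedy_profit(10, requests))  # 1
```

Replacing the second request's profit with an arbitrarily large value makes the ratio between greedy and optimal profit arbitrarily bad, which is exactly what a competitive online algorithm is designed to rule out.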
In this thesis, two competitive online VCE algorithms and two heuristic algorithms are presented. The competitive algorithms achieve different competitive ratios on the objective function and the capacity constraints, whereas the heuristics never violate the capacity constraints. The worst-case competitive ratios are analyzed. The evaluation then shows the advantages and disadvantages of these algorithms in several scenarios with different request patterns and profit metrics on the fat-tree and MDCube datacenter topologies. The results show that, depending on the scenario, different algorithms perform best with respect to certain metrics.
Monday, 12. February 2018
Speaker: Marcin Bosk (TU Berlin bachelor student)
Wireless networks play a significant role in today's Internet access. Because many devices such as laptops, smartphones, and tablets rely on them, it is important to keep the performance of these networks at the highest level and give users the best achievable throughput. Some devices also feature more than one local interface connecting them to the Internet, and a choice must be made as to which one is best to use. Such a decision requires operational facts about these interfaces. One way to obtain that information, and the focus of this thesis, is active probing (scanning) in IEEE 802.11 networks. It can be used to acquire information from the access point about the parameters and usage of the wireless channel it operates on. Active probing introduces overhead in the medium, a trade-off for the information obtained about the wireless link.

In this thesis we show the influence active probing has on the device that performs the scanning. To accomplish that, we use a test setup consisting of a single access point and one station associated with it. With this arrangement, we analyze the frames present on the wireless channel under various data traffic conditions and probing schemes. The scans are requested by the Multiple Access Manager (MAM), part of the Socket Intents Framework implementation. We conclude that active probing has a significant impact on the IEEE 802.11 network: the medium is notably less utilized, and the throughput achievable over WiFi is lowered by 30% to 50%, depending on the test scenario, compared to tests with no scanning. We then investigate other ways end devices can acquire information from access points, focusing on passive scanning, again using the Multiple Access Manager as an example. In this work we rewrite MAM to use a monitor interface that obtains the information from access points without active probing. This change leaves the performance of the wireless network undiminished, at the same level as in control tests where no scanning was done.
Friday, 09. February 2018
Speaker: Arne Kappen (TU Berlin bachelor student)
The need for bandwidth and availability of wireless access networks is increasing, since every modern mobile end device is WiFi-enabled. Denser and larger WiFi deployments require novel means of network management and control to ensure efficient use of the available wireless resources. Here, centralized control promises better decision-making from the vantage point of a unified network view.
In this talk, we present a wireless switch abstraction realized in ONOS, together with an integrated open-source wireless Software-Defined Networking (SDN) implementation for OpenWRT/LEDE-based WiFi access points. We leverage SDN techniques to consolidate the state of all wireless access points in a logically centralized control plane. This control plane hides the network complexity and exposes a unified network view. Moreover, it provides an interface to monitor and control the network elements performing split-MAC functionality. Through measurements, we show that our proposed solution handles the expected load of standard WiFi deployments well. We see our work as an enabler for more advanced applications: during the design of our system, we focused on extensibility and the ability to incorporate new functionality at a later stage.
Thursday, 11. January 2018
Talk Archive
Please find previous talks on the following pages:
- Recent talks and student talks 
- Past talks of 2017 
- Past talks of 2016 
- Past talks of 2015 
- Past talks of 2014 
- Past talks of 2013 
- Past talks of 2012 
- Past talks of 2011 
- Past talks of 2010 
- Past talks of 2009 
- Past talks of 2008 
- Past talks of 2007 
- Past talks of 2006