Secure Web Applications Group

Dr.-Ing. Ben Stock
Research Group Leader

Campus E9.1, Room 2.09
+49 (0)681 302 57377
stock [at]

I am a tenure-track faculty member at the CISPA Helmholtz Center (i.G.). Prior to that, I was a research group leader and previously a postdoctoral researcher at the Center for IT-Security, Privacy and Accountability at Saarland University in the group of Michael Backes. Before joining CISPA, I was a PhD student and research fellow at the Security Research Group of the University of Erlangen-Nuremberg, supervised by Felix Freiling. During that time, I was fortunate enough to join Ben Livshits and Ben Zorn at Microsoft Research in Redmond for an internship.

My research interests lie in Web Security, Network Security, Reverse Engineering, and Vulnerability Notifications. In addition, I enjoy the challenges of Capture the Flag competitions and am always trying to get more students involved in them (especially in our local team saarsec).


Work experience

  • Since July 2018: Tenure-Track Faculty at the CISPA Helmholtz Center (i.G.)
  • June 2017 to June 2018: Research Group Leader at CISPA
  • October 2015 to May 2017: Postdoctoral Researcher at CISPA
  • July 2014 to October 2014: Research Intern at Microsoft Research, Redmond
  • February 2013 to October 2015: PhD Student at Security Research Group of the University of Erlangen-Nuremberg
  • July 2012 to January 2013: Master thesis student at SAP Research, Karlsruhe

Community service

  • Program Committee Member for USENIX Security 2019, IEEE EuroS&P 2019, ACSAC 2018, ACM CCS 2018, WWW 2018 (S&P track)
  • Publications Chair for EuroS&P 2016-2019
  • Reviewer for USENIX Security 2017, IEEE S&P 2017, USENIX Security 2016, NDSS 2016, DIMVA 2013, ICICS 2013, DIMVA 2011

Academic career

  • Doktor Ingenieur (PhD), Friedrich-Alexander-Universität Erlangen-Nürnberg (2015) – Thesis “Untangling the Web of Client-Side Cross-Site Scripting” (PDF)
  • Master of Science in IT Security, Technische Universität Darmstadt (2013) – Thesis “Implementing low-level browser-based security functionality” (PDF)
  • Bachelor of Science in Internet and Software Technology, University of Mannheim (2010) – Thesis “P2P-Botnetz-Analyse — Waledac” (in German) (PDF)

Publications and Talks


Steffens, Marius, Christian Rossow, Martin Johns, and Ben Stock. 2019. “Don’t Trust The Locals: Investigating the Prevalence of Persistent Client-Side Cross-Site Scripting in the Wild.” In NDSS ’19.
[show abstract]
The Web has become highly interactive and an important driver of modern life, enabling information retrieval, social exchange, and online shopping. From the security perspective, Cross-Site Scripting (XSS) is one of the most nefarious attacks against Web clients. Research has long focused on three categories of XSS: reflected, persistent, and DOM-based XSS. In this paper, we argue that our community must consider at least four important classes of XSS and present the first systematic study of the threat of Persistent Client-Side XSS, caused by the insecure use of client-side storage. While the existence of this class has been acknowledged, especially by the non-academic community such as OWASP, prior works have either only found such flaws as side effects of other analyses or focused on a limited set of applications to analyze. Therefore, the community lacks in-depth knowledge about the actual prevalence of Persistent Client-Side XSS in the wild. To close this research gap, we leverage taint tracking to identify suspicious flows from client-side persistent storage (Web Storage, cookies) to dangerous sinks (HTML, JavaScript, and script.src). We discuss two attacker models capable of injecting malicious payloads into these storages: a network attacker capable of temporarily hijacking HTTP communication (e.g., in a public WiFi), and a Web attacker who can leverage flows into storage or an existing reflected XSS flaw to persist their payload. With our taint-aware browser and these models in mind, we study the prevalence of Persistent Client-Side XSS in the Alexa Top 5,000 domains. We find that more than 8% of them have unfiltered data flows from persistent storage to a dangerous sink, which showcases the developers' inherent trust in the integrity of storage content. Even worse, if we only consider sites that make use of data originating from storage, 21% of the sites are vulnerable. For those sites with vulnerable flows from storage to sink, we find that at least 70% are directly exploitable by our attacker models. Finally, investigating the vulnerable flows originating from storage allows us to categorize them into four disjoint categories and propose appropriate mitigations.
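The storage-to-sink flow studied above can be illustrated with a minimal sketch (hypothetical code, not from the paper); the commented lines show the vulnerable pattern, and `escapeHTML` is one simple mitigation for HTML sinks:

```javascript
// Hypothetical illustration of the vulnerable pattern: a value from
// client-side storage reaches an HTML sink unfiltered.
//
//   // persisted earlier, e.g. by a network attacker on a public WiFi:
//   localStorage.setItem('greeting', '<img src=x onerror=alert(1)>');
//   // vulnerable flow: Web Storage -> HTML sink
//   document.body.innerHTML = localStorage.getItem('greeting');
//
// One simple mitigation for HTML sinks: escape HTML metacharacters
// before storage-derived data reaches the sink.
function escapeHTML(value) {
  return String(value).replace(/[&<>"']/g, (ch) => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
  }[ch]));
}
```

Escaping only covers the HTML sink; flows into JavaScript execution or script.src need context-specific handling, which is why the paper categorizes the flows before proposing mitigations.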


Fass, Aurore, Robert Krawczyk, Michael Backes, and Ben Stock. 2018. “JaSt: Fully Syntactic Detection of Malicious (Obfuscated) JavaScript.” In DIMVA ’18.
[show abstract] [paper]
JavaScript is a browser scripting language initially created to enhance the interactivity of web sites and to improve their user-friendliness. However, as it offloads the work to the user's browser, it can be used to engage in malicious activities such as Crypto-Mining, Drive-by-Download attacks, or redirections to web sites hosting malicious software. Given the prevalence of such nefarious scripts, the anti-virus industry has increased its focus on their detection. The attackers, in turn, make increasing use of obfuscation techniques so as to hinder analysis and the creation of corresponding signatures. Yet these malicious samples share syntactic similarities at an abstract level, which makes it possible to bypass obfuscation and detect even unknown malware variants. In this paper, we present JaSt, a low-overhead solution that combines the extraction of features from the abstract syntax tree with a random forest classifier to detect malicious JavaScript instances. It is based on a frequency analysis of specific patterns, which are predictive of either benign or malicious samples. Even though the analysis is entirely static, it yields a high detection accuracy of almost 99.5% and a low false-negative rate of 0.54%.
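A toy sketch (my own, not the authors' code) of the underlying idea: abstract syntactic units are reduced to n-gram frequencies, which survive identifier and literal obfuscation and can feed a classifier such as a random forest. The unit names below are made up; JaSt derives them from the actual JavaScript AST.

```javascript
// Frequency analysis over a sequence of abstract syntactic units,
// producing a feature map for a classifier. Renaming identifiers or
// packing string literals leaves such unit sequences largely
// unchanged, which is why the abstract level resists obfuscation.
function ngramFrequencies(units, n) {
  const counts = new Map();
  for (let i = 0; i + n <= units.length; i++) {
    const gram = units.slice(i, i + n).join(' ');
    counts.set(gram, (counts.get(gram) || 0) + 1);
  }
  const total = Math.max(1, units.length - n + 1);
  const freqs = {};
  for (const [gram, c] of counts) freqs[gram] = c / total;
  return freqs;
}
```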
Stock, Ben, Giancarlo Pellegrino, Frank Li, Michael Backes, and Christian Rossow. 2018. “Didn’t You Hear Me? — Towards More Successful Web Vulnerability Notifications.” In NDSS ’18.
[show abstract] [paper] [slides]
After treating the notification of affected parties as mere side-notes in research, our community has recently put more focus on how vulnerability disclosure can be conducted at scale. The first works in this area have shown that while notifications are helpful to a significant fraction of operators, the vast majority of systems remain unpatched. In this paper, we build on these previous works, aiming to understand why the effects are not more significant. To that end, we report on a notification experiment targeting more than 24,000 domains, which allowed us to analyze what technical and human aspects are roadblocks to a successful campaign. As part of this experiment, we explored potential alternative notification channels beyond email, including social media and phone. In addition, we conducted an anonymous survey with the notified operators, investigating their perspectives on our notifications. We show the pitfalls of email-based communications, such as the impact of anti-spam filters, the lack of trust by recipients, and hesitations to fix vulnerabilities despite awareness. However, our exploration of alternative communication channels did not suggest a more promising medium. Seeing these results, we pinpoint future directions in improving security notifications.


Stock, Ben, Giancarlo Pellegrino, and Christian Rossow. 2017. “Hey, You Have a Problem: On the Feasibility of Large-Scale Web Vulnerability Notification.” Talk at FIRST Conference 2017.
Backes, Michael, Konrad Rieck, Malte Skoruppa, Ben Stock, and Fabian Yamaguchi. 2017. “Efficient and Flexible Discovery of PHP Application Vulnerabilities.” In Euro S&P ’17.
[show abstract] [paper]
The Web today is a growing universe of pages and applications teeming with interactive content. The security of such applications is of the utmost importance, as exploits can have a devastating impact on personal and economic levels. The number one programming language in Web applications is PHP, powering more than 80% of the top ten million websites. Yet it was not designed with security in mind, and, today, bears a patchwork of fixes and inconsistently designed functions with often unexpected and hardly predictable behavior that typically yield a large attack surface. Consequently, it is prone to different types of vulnerabilities, such as SQL Injection or Cross-Site Scripting. In this paper, we present an interprocedural analysis technique for PHP applications based on code property graphs that scales well to large amounts of code and is highly adaptable in its nature. We implement our prototype using the latest features of PHP 7, leverage an efficient graph database to store code property graphs for PHP, and subsequently identify different types of Web application vulnerabilities by means of programmable graph traversals. We show the efficacy and the scalability of our approach by reporting on an analysis of 1,854 popular open-source projects, comprising almost 80 million lines of code.
Stock, Ben, Martin Johns, Marius Steffens, and Michael Backes. 2017. “How the Web Tangled Itself: Uncovering the History of Client-Side Web (In)Security.” In USENIX Security ’17.
[show abstract] [paper] [slides] [additional content]
While in its early days, the Web was mostly static, it has organically grown into a full-fledged technology stack. This evolution has not followed a security blueprint, resulting in many classes of vulnerabilities specific to the Web. Even though the server-side code of the past has long since vanished, the Internet Archive gives us a unique view on the historical development of the Web's client side and its (in)security. Uncovering the insights which fueled this development bears the potential not only to gain a historical perspective on client-side Web security, but also to outline better practices going forward. To that end, we examined the code and header information of the most important Web sites for each year between 1997 and 2016, amounting to 659,710 different analyzed Web documents. From the archived data, we first identify key trends in the technology deployed on the client, such as the increasing complexity of client-side Web code and the constant rise of multi-origin application scenarios. Based on these findings, we then assess the advent of corresponding vulnerability classes, investigate their prevalence over time, and analyze the security mechanisms developed and deployed to mitigate them. Correlating these results allows us to draw a set of overarching conclusions: Along with the dawn of JavaScript-driven applications in the early years of the millennium, the likelihood of client-side injection vulnerabilities has risen. Furthermore, there is a noticeable gap in adoption speed between easy-to-deploy security headers and more involved measures such as CSP. But there is also no evidence that the usage of the easy-to-deploy techniques reflects on other security areas. On the contrary, our data shows, for instance, that sites that use HttpOnly cookies are actually more likely to have a Cross-Site Scripting problem.
Finally, we observe that the rising security awareness and introduction of dedicated security technologies had no immediate impact on the overall security of the client-side Web.


Stock, Ben, Bernd Kaiser, Stephan Pfistner, Sebastian Lekies, and Martin Johns. 2016. “From Facepalm to Brain Bender: Exploring Client-Side Cross-Site Scripting.” Talk at OWASP AppSec EU 2016.
Backes, Michael, Thorsten Holz, Christian Rossow, Teemu Rytilahti, Milivoj Simeonovski, and Ben Stock. 2016. “On the Feasibility of TTL-Based Filtering for DRDoS Mitigation.” In RAID 2016.
[show abstract] [paper]
One of the major disturbances for network providers in recent years has been Distributed Reflective Denial-of-Service (DRDoS) attacks. In such an attack, the attacker spoofs the IP address of a victim and sends a flood of tiny packets to vulnerable services, which then respond with much larger replies to the victim. Led by the idea that the attacker cannot fabricate the number of hops between the amplifier and the victim, Hop Count Filtering (HCF) mechanisms that analyze the Time to Live (TTL) of incoming packets have been proposed as a solution. In this paper, we evaluate the feasibility of using Hop Count Filtering to mitigate DRDoS attacks. To that end, we detail how a server can use active probing to learn the TTLs of alleged packet senders. Based on data sets of benign and spoofed NTP requests, we find that a TTL-based defense could block over 75% of spoofed traffic while allowing 85% of benign traffic to pass. To achieve this performance, however, such an approach must allow for a tolerance of +/-2 hops. Motivated by this, we investigate the tacit assumption that an attacker cannot learn the correct TTL value. By using a combination of tracerouting and BGP data, we build statistical models which allow us to estimate the TTL within that tolerance level. We observe that by choosing the amplifiers wisely, the attacker is able to circumvent such TTL-based defenses. Finally, we argue that any (current or future) defensive system based on TTL values can be bypassed in a similar fashion, and find that future research must be steered towards more fundamental solutions to thwart any kind of IP spoofing attacks.
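The hop-count check at the heart of HCF can be sketched as follows (hypothetical helper names; the paper's system learns the expected hop counts via active probing):

```javascript
// Sketch of Hop Count Filtering: infer the number of hops from an
// observed TTL by assuming the sender started from a common initial
// TTL, then compare against a previously learned hop count within a
// tolerance (the paper finds +/-2 hops is needed in practice).
const INITIAL_TTLS = [32, 64, 128, 255]; // common OS default TTLs

function hopCount(observedTTL) {
  // smallest common initial TTL that is >= the observed value
  const initial = INITIAL_TTLS.find((t) => t >= observedTTL);
  return initial === undefined ? null : initial - observedTTL;
}

function ttlAccepted(observedTTL, learnedHops, tolerance = 2) {
  const hops = hopCount(observedTTL);
  return hops !== null && Math.abs(hops - learnedHops) <= tolerance;
}
```

The paper's attack exploits exactly the tolerance parameter: if the attacker can estimate the amplifier-to-victim hop count within that window, the filter accepts the spoofed traffic.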
Stock, Ben, Benjamin Livshits, and Benjamin Zorn. 2016. “Kizzle: A Signature Compiler for Detecting Exploit Kits.” In DSN 2016.
[show abstract] [paper] [slides]
In recent years, the drive-by malware space has undergone significant consolidation. Today, the most common source of drive-by downloads are so-called exploit kits (EKs). This paper presents Kizzle, the first prevention technique specifically designed for finding exploit kits. Our analysis shows that while the JavaScript delivered by kits varies greatly, the unpacked code varies much less, due to the kit authors' code reuse between versions. Ironically, this well-regarded software engineering practice allows us to build a scalable and precise detector that is able to quickly respond to superficial but frequent changes in EKs. Kizzle is able to generate anti-virus signatures for detecting EKs, which compare favorably to manually created ones. Kizzle is highly responsive and can generate new signatures within hours. Our experiments show that Kizzle produces high-accuracy signatures. When evaluated over a four-week period, false-positive rates for Kizzle are under 0.03%, while the false-negative rates are under 5%.
Stock, Ben, Giancarlo Pellegrino, Christian Rossow, Martin Johns, and Michael Backes. 2016. “Hey, You Have a Problem: On the Feasibility of Large-Scale Web Vulnerability Notification.” In USENIX Security ’16.
[show abstract] [paper] [slides] [additional content]
Large-scale discovery of thousands of vulnerable Web sites has become a frequent event, thanks to recent advances in security research and the rise in maturity of Internet-wide scanning tools. The issues related to disclosing the vulnerability information to the affected parties, however, have only been treated as a side note in prior research. In this paper, we systematically examine the feasibility and efficacy of large-scale notification campaigns. For this, we comprehensively survey existing communication channels and evaluate their usability in an automated notification process. Using a data set of over 44,000 vulnerable Web sites, we measure success rates, both with respect to the total number of fixed vulnerabilities and to reaching responsible parties, with the following high-level results: Although our campaign had a statistically significant impact compared to a control group, the increase in the fix rate of notified domains is marginal. If a notification report is read by the owner of the vulnerable application, the likelihood of a subsequent resolution of the issues is sufficiently high: about 40%. But, out of 35,832 transmitted vulnerability reports, only 2,064 (5.8%) were actually received successfully, resulting in an unsatisfactory overall fix rate, leaving 74.5% of Web applications exploitable after our month-long experiment. Thus, we conclude that currently no reliable notification channels exist, which significantly inhibits the success and impact of large-scale notification.


Stock, Ben, Sebastian Lekies, and Martin Johns. 2015. “Your Scripts in My Page – What Could Possibly Go Wrong?” Talk at Black Hat EU 2015.
———. 2015. “Client-Side Protection Against DOM-Based XSS Done Right (TM).” Talk at Black Hat Asia 2015.
Lekies, Sebastian, Ben Stock, Martin Wentzel, and Martin Johns. 2015. “The Unexpected Dangers of Dynamic JavaScript.” In USENIX Security ’15.
[show abstract] [paper] [additional content]
Modern Web sites frequently generate JavaScript on-the-fly via server-side scripting, incorporating personalized user data in the process. In general, cross-domain access to such sensitive resources is prevented by the Same-Origin Policy. The inclusion of remote scripts via the HTML script tag, however, is exempt from this policy. This exemption allows an adversary to import and execute dynamically generated scripts while a user visits an attacker-controlled Web site. By observing the execution behavior and the side effects the inclusion of the dynamic script causes, the attacker is able to leak private user data, leading to severe consequences ranging from privacy violations up to full compromise of user accounts. Although this issue has been known for several years under the term Cross-Site Script Inclusion, it has not been analyzed in depth on the Web. Therefore, to systematically investigate the issue, we conduct a study on its prevalence in a set of 150 top-ranked domains. We observe that a third of the surveyed sites utilize dynamic JavaScript. After evaluating the effectiveness of the deployed countermeasures, we show that more than 80% of the sites are susceptible to attacks via remote script inclusion. Given the results of our study, we provide a secure and functionally equivalent alternative to the use of dynamic scripts.
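The leak mechanism can be sketched with a small simulation (hypothetical code; the commented lines show how the attack looks in a real page):

```javascript
// Hypothetical sketch of Cross-Site Script Inclusion (XSSI). A
// dynamically generated script puts personalized data into a global
// variable; because <script src> is exempt from the Same-Origin
// Policy, an attacker page can include it and read the global.
//
//   // served by https://victim.example/userdata.js:
//   //   var currentUser = { email: 'alice@example.com' };
//   // attacker page:
//   //   <script src="https://victim.example/userdata.js"></script>
//   //   <script>exfiltrate(currentUser);</script>
//
// Simulation of the inclusion: the "remote" script body runs with the
// including page's global scope.
function includeDynamicScript(globalScope, scriptBody) {
  new Function(scriptBody).call(globalScope);
  return globalScope;
}
```

Serving the same data as JSON fetched via XMLHttpRequest/fetch would keep it under the Same-Origin Policy, which is the direction of the functionally equivalent alternative the paper proposes.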
Stock, Ben, Bernd Kaiser, Stephan Pfistner, Sebastian Lekies, and Martin Johns. 2015. “From Facepalm to Brain Bender: Exploring Client-Side Cross-Site Scripting.” In CCS 2015.
[show abstract] [paper]
Although studies have shown that at least one in ten Web pages contains a client-side XSS vulnerability, the prevalent causes for this class of Cross-Site Scripting have not been studied in depth. Therefore, in this paper, we present a large-scale study to gain insight into these causes. To this end, we analyze a set of 1,273 real-world vulnerabilities contained on the Alexa Top 10k domains using a specifically designed architecture, consisting of an infrastructure which allows us to persist and replay vulnerabilities to ensure a sound analysis. In combination with a taint-aware browsing engine, we can therefore collect important execution trace information for all flaws. Based on the observable characteristics of the vulnerable JavaScript, we derive a set of metrics to measure the complexity of each flaw. We subsequently classify all vulnerabilities in our data set accordingly to enable a more systematic analysis. In doing so, we find that although a large portion of all vulnerabilities have a low complexity rating, several incur a significant level of complexity and are repeatedly caused by vulnerable third-party scripts. In addition, we gain insights into other factors related to the existence of client-side XSS flaws, such as missing knowledge of browser-provided APIs, and find that the root causes for Client-Side Cross-Site Scripting range from unaware developers to incompatible first- and third-party code.


Johns, Martin, Ben Stock, and Sebastian Lekies. 2014. “Call to Arms: A Tale of the Weaknesses of Current Client-Side XSS Filtering.” Talk at Black Hat USA 2014.
Stock, Ben, Sebastian Lekies, and Martin Johns. 2014. “DOM-Basiertes Cross-Site Scripting Im Web: Reise in Ein Unerforschtes Land.” In GI Sicherheit 2014.
[show abstract] [paper]
Cross-Site Scripting (XSS) is a widespread vulnerability class in web applications and can be caused by server-side as well as client-side code. However, XSS is primarily perceived as a server-side problem, motivated by the disclosure of numerous corresponding XSS vulnerabilities. In recent years, however, an increasing shift of application logic into the browser can be observed, a development that began with the so-called Web 2.0. This suggests that client-side XSS may also be gaining in importance. In this paper, we present a comprehensive study in which we examined the top 5,000 websites of the Alexa index for DOM-based XSS using a fully automated approach. In the course of this study, we identified 6,167 such vulnerabilities, distributed across 480 of the examined applications.
Stock, Ben, and Martin Johns. 2014. “Protecting Users Against XSS-Based Password Manager Abuse.” In AsiaCCS 2014.
[show abstract] [paper] [slides]
To ease the burden of repeated password authentication on multiple sites, modern Web browsers provide password managers, which offer to automatically complete password fields on Web pages, after the password has been stored once. Unfortunately, these managers operate by simply inserting the clear-text password into the document’s DOM, where it is accessible by JavaScript. Thus, a successful Cross-site Scripting attack can be leveraged by the attacker to read and leak password data which has been provided by the password manager. In this paper, we assess this potential threat through a thorough survey of the current password manager generation and observable characteristics of password fields in popular Web sites. Furthermore, we propose an alternative password manager design, which robustly prevents the identified attacks, while maintaining compatibility with the established functionality of the existing approaches.
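The alternative design can be sketched roughly like this (a simplification under my own naming; the paper performs the substitution inside the browser's network layer, not in page-visible JavaScript):

```javascript
// Rough sketch of the idea: keep the clear-text password out of the
// DOM. The field is filled with a random nonce, and the real secret
// is swapped in only when the login request is serialized.
function fillWithNonce(field, vault, realPassword) {
  const nonce = 'pm-nonce-' + Math.random().toString(36).slice(2);
  vault[nonce] = realPassword; // secret stays outside the DOM
  field.value = nonce;         // all a script (or XSS payload) sees
  return nonce;
}

function serializeFieldValue(field, vault) {
  // done by the browser at submission time, out of reach of page scripts
  return Object.prototype.hasOwnProperty.call(vault, field.value)
    ? vault[field.value]
    : field.value;
}
```

An XSS payload reading the field only obtains the nonce, which is useless outside the victim's browser, while the server still receives the correct credential.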
Stock, Ben, Sebastian Lekies, Tobias Mueller, Patrick Spiegel, and Martin Johns. 2014. “Precise Client-Side Protection against DOM-Based Cross-Site Scripting.” In USENIX Security ’14.
[show abstract] [paper] [slides] [additional content]
The current generation of client-side Cross-Site Scripting filters relies on string comparison to detect request values that are reflected in the corresponding response's HTML. This coarse approximation of occurring data flows is incapable of reliably stopping attacks which leverage nontrivial injection contexts. To demonstrate this, we conduct a thorough analysis of the current state of the art in browser-based XSS filtering and uncover a set of conceptual shortcomings that allow the efficient creation of filter evasions, especially in the case of DOM-based XSS. To validate our findings, we report on practical experiments using a set of 1,602 real-world vulnerabilities, achieving a rate of 73% successful filter bypasses. Motivated by our findings, we propose an alternative filter design for DOM-based XSS that utilizes runtime taint tracking and taint-aware parsers to stop the parsing of attacker-controlled syntactic content. To examine the efficiency and feasibility of our approach, we present a practical implementation based on the open-source browser Chromium. Our proposed approach has a low false positive rate and robustly protects against DOM-based XSS exploits.
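Character-level taint tracking, the core of the proposed filter design, can be illustrated with a toy class (illustrative only; the real implementation lives inside Chromium's string types and HTML/JavaScript parsers):

```javascript
// Toy character-level taint tracking: each character carries a flag
// marking whether it came from attacker-controllable input, and
// concatenation propagates the flags. A taint-aware parser can then
// refuse to treat tainted characters as syntactic content.
class TaintedString {
  constructor(value, taint) {
    this.value = value;
    this.taint = taint || new Array(value.length).fill(false);
  }
  static fromAttacker(value) {
    return new TaintedString(value, new Array(value.length).fill(true));
  }
  concat(other) {
    return new TaintedString(this.value + other.value,
                             this.taint.concat(other.taint));
  }
  hasTaintedSyntax(metaChars) {
    // e.g. a tainted '<' would open an attacker-controlled tag
    return [...this.value].some((ch, i) => this.taint[i] && metaChars.includes(ch));
  }
}
```

Unlike string comparison, this check does not depend on the injected value surviving verbatim in the response, which is why it holds up in nontrivial injection contexts.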


Johns, Martin, Sebastian Lekies, and Ben Stock. 2013. “Eradicating DNS Rebinding with the Extended Same-Origin Policy.” In USENIX Security ’13.
[show abstract] [paper] [additional content]
The Web’s principal security policy is the Same-Origin Policy (SOP), which enforces origin-based isolation of mutually distrusting Web applications. Since the early days, the SOP was repeatedly undermined with variants of the DNS Rebinding attack, allowing untrusted script code to gain illegitimate access to protected network resources. To counter these attacks, the browser vendors introduced countermeasures, such as DNS Pinning, to mitigate the attack. In this paper, we present a novel DNS Rebinding attack method leveraging the HTML5 Application Cache. Our attack allows reliable DNS Rebinding attacks, circumventing all currently deployed browser-based defense measures. Furthermore, we analyze the fundamental problem which allows DNS Rebinding to work in the first place: The SOP’s main purpose is to ensure security boundaries of Web servers. However, the Web servers themselves are only indirectly involved in the corresponding security decision. Instead, the SOP relies on information obtained from the domain name system, which is not necessarily controlled by the Web server’s owners. This mismatch is exploited by DNS Rebinding. Based on this insight, we propose a light-weight extension to the SOP which takes Web server provided information into account. We successfully implemented our extended SOP for the Chromium Web browser and report on our implementation’s interoperability and security properties.
Lekies, Sebastian, Ben Stock, and Martin Johns. 2013. “25 Million Flows Later - Large-Scale Detection of DOM-Based XSS.” In CCS 2013.
[show abstract] [paper] [slides]
In recent years, the Web witnessed a move towards sophisticated client-side functionality. This shift caused a significant increase in the complexity of deployed JavaScript code and thus a proportional growth in potential client-side vulnerabilities, with DOM-based Cross-Site Scripting being a high-impact representative of such security issues. In this paper, we present a fully automated system to detect and validate DOM-based XSS vulnerabilities, consisting of a taint-aware JavaScript engine and corresponding DOM implementation as well as a context-sensitive exploit generation approach. Using these components, we conducted a large-scale analysis of the Alexa top 5000. In this study, we identified 6,167 unique vulnerabilities distributed over 480 domains, showing that 9.6% of the examined sites carry at least one DOM-based XSS problem.


Stock, Ben, Jan Göbel, Markus Engelberth, Felix Freiling, and Thorsten Holz. 2009. “Walowdac – Analysis of a Peer-to-Peer Botnet.” In European Conference on Computer Network Defense (EC2ND).
[show abstract] [paper]
A botnet is a network of compromised machines under the control of an attacker. Botnets are the driving force behind several misuses on the Internet, for example spam emails or automated identity theft. In this paper, we study the most prevalent peer-to-peer botnet of 2009: Waledac. We present our infiltration of the Waledac botnet, which can be seen as the successor of the Storm Worm botnet. To achieve this, we implemented a clone of the Waledac bot named Walowdac. It implements the communication features of Waledac but does not cause any harm, i.e., no spam emails are sent and no other commands are executed. With the help of this tool we observed a minimum daily population of 55,000 Waledac bots and a total of roughly 390,000 infected machines throughout the world. Furthermore, we gathered internal information about the success rates of spam campaigns and newly introduced features like the theft of credentials from victim machines.