Current Research.

My current projects include the following:

“Sticky Charters? The Surprisingly Tepid Embrace of Officer-Protecting Waivers in Delaware”

With Eric Talley

This article investigates the reaction to a much-heralded 2022 legal reform in Delaware that permitted a corporation’s charter to exculpate its officers from monetary exposure for breaching their fiduciary duty of care. To isolate reactions to this statutory reform, we make extensive use of generative AI tools to identify and interpret charter amendments that introduce officer-facing waivers. We find a surprisingly tepid rate of uptake among Delaware corporations through the end of the first post-reform year, notwithstanding widespread predictions that corporate entities would quickly storm the exculpation exits once permitted to do so.

Our study makes two contributions to the empirical study of law—one methodological and the other substantive. Methodologically, we develop a novel and powerful use case for deploying large language models as a tool for distilling and extracting technical provisions from legal texts (in this case corporate charters), allowing us to accelerate and streamline an endeavor that would have consumed substantial time and resources using traditional human-labeling protocols. Notably, and in a significant departure from previous machine learning tools, ChatGPT accomplishes this set of tasks without the need for training data specifically tailored for this purpose. Perhaps most impressive is the accuracy with which ChatGPT can operate: we perform several validation exercises, which generally indicate that our proposed method yields highly accurate results.

Substantively, we demonstrate that Delaware’s statutory invitation attracted few takers in its first year of effectiveness: specifically, we show that only a modest minority of eligible corporations amended their charters to include officer-facing waivers. This tepid rate of uptake, moreover, persists even in corporations that went public after the reform’s effective date, suggesting that transaction costs are unlikely to be the culprit for the listless response. Furthermore, we show that stock market investors also exhibited a muted response to the reform, raising doubts about whether firms feared amendments would trigger an adverse market reception. Our results seem more consistent with alternative explanations, ranging from the plausible irrelevance of Delaware’s reform, to a risk-averse reticence among corporate managers who rationally adopt a “wait and see” approach to gauge how such waivers are received by both courts and corporate stakeholders while keeping their options open.

Paper on SSRN

“The Other Delaware Effect”

This paper examines the effects of Delaware’s 2015 ban on fee-shifting provisions in corporate charters and bylaws, a significant legislative intervention in corporate law aimed at curbing managerial powers. The Delaware Supreme Court had approved these provisions just one year earlier, and they were widely viewed as a potent means of curbing shareholder litigation. Because of the provisions’ perceived potential to reduce wasteful litigation, the legislature’s ban led many to predict an exodus of corporations from Delaware and the continued spread of fee-shifting provisions in other states.

Contrary to these predictions, this study finds that the ban did not trigger a significant departure of corporations from Delaware. More importantly, it also documents a sudden decline in the adoption of fee-shifting provisions outside Delaware, even though most states did not enact similar prohibitions. The paper argues that this decline is most likely due to a spillover effect: adoptions declined not because Delaware’s ban happened to coincide with a shift in market sentiment against these provisions, but because the ban itself stymied their adoption nationwide. The paper explores the mechanisms through which Delaware’s corporate law influences governance practices elsewhere, including shareholder empowerment and law firms’ reluctance to recommend provisions that have been outlawed in Delaware. It concludes that Delaware’s legal leadership may enable it to set informal norms that influence the behavior of important gatekeepers and other actors in the corporate governance ecosystem, effectively constraining the action space of managers of corporations incorporated elsewhere.

By exploring these mechanisms, the paper contributes to a deeper understanding of the forces that shape corporate governance practices in the United States. It offers insights into how Delaware’s rules shape governance practices that subsequently reverberate throughout the U.S. corporate landscape. Besides illuminating important and previously overlooked aspects of Delaware’s role as a standard setter in today’s corporate law world, it adds to our understanding of the diffusion of corporate governance innovations and of regulatory competition. Additionally, the paper demonstrates the potential of artificial intelligence tools, such as large language models, to revolutionize empirical legal research by automating the extraction of legally significant provisions from corporate documents, allowing researchers to investigate previously underexplored questions at scale.

“DECODEM: Data Extraction from Corporate Organizational Documents via Enhanced Methods”

With Eric Talley and others

This project seeks to revolutionize empirical research in corporate law and finance by laying the groundwork for a new generation of open-source corporate governance datasets, thereby enhancing the power and reach of this type of research. To achieve this goal, the project develops advanced methods for the automated extraction of legally relevant information from corporate charters and bylaws. In particular, it leverages state-of-the-art natural language processing technologies, including large language models, to overcome the limitations of traditional data extraction methods, which often rely on manual coding and are hampered by restricted scope and questionable reliability.
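
As a rough illustration of what such automated extraction can look like, the sketch below asks a model to return structured JSON for a handful of governance terms; the field names and schema are hypothetical and do not reflect DECODEM’s actual coding scheme.

# Illustrative sketch only: extract structured governance fields from a
# charter or bylaw excerpt as JSON. The schema is hypothetical, not
# DECODEM's actual coding scheme.
import json
from openai import OpenAI

client = OpenAI()

SCHEMA_HINT = ('Return a JSON object with keys "staggered_board" (bool), '
               '"exclusive_forum_clause" (bool), and '
               '"supermajority_vote_pct" (number or null).')

def extract_governance_fields(document_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any JSON-capable model works
        response_format={"type": "json_object"},  # force well-formed JSON
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Extract governance terms from the document. " + SCHEMA_HINT},
            {"role": "user", "content": document_text},
        ],
    )
    return json.loads(response.choices[0].message.content)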

“Measuring Dark Patterns in CCPA Opt-Out Mechanisms”

With Marshini Chetty, Nick Feamster, Lior Strahilevitz, and Van Tran

To safeguard consumer privacy, the California Consumer Privacy Act (CCPA) requires businesses to provide consumers with an option to opt out of the sale and sharing of their personal information. However, businesses often implement overly burdensome procedures that make the opt-out process effectively unusable. The California Privacy Rights Act (CPRA) was introduced to strengthen the CCPA and address these shortcomings. Given the recent enactment of the CPRA and its distinction as the first U.S. law targeting the use of dark patterns, the effectiveness of this legislation remains an open question.

In this study, we develop a pipeline to record and analyze how businesses implement their opt-out processes. We completed the entire opt-out process for a number of websites likely subject to the CPRA, submitting an opt-out request via each site’s opt-out link and completing any required verification steps. We characterize the observed implementations of the opt-out process and examine the dark patterns that appear within them. This analysis provides valuable insights into the practical challenges and effectiveness of the CPRA in mitigating manipulative practices that undermine consumer rights.
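
To illustrate one early stage of such a pipeline, the sketch below uses Playwright to scan a page for a CCPA-style opt-out link; the link phrasings and selector logic are illustrative assumptions, not the study’s actual crawler.

# Illustrative sketch only: locate a CCPA/CPRA opt-out link on a page.
# Link phrasings and selector logic are assumptions, not the study's
# actual measurement pipeline.
import re
from playwright.sync_api import sync_playwright

OPT_OUT_PATTERN = re.compile(
    r"do not sell( or share)? my personal information|your privacy choices",
    re.IGNORECASE)

def find_opt_out_link(url: str) -> str | None:
    """Return the href of the first opt-out-looking link, if any."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="domcontentloaded")
        href = None
        for link in page.locator("a").all():
            if OPT_OUT_PATTERN.search(link.inner_text().strip()):
                href = link.get_attribute("href")
                break
        browser.close()
        return href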

“Lawma: The Power of Specialization for Legal Tasks”

With Ricardo Dominguez-Olmedo, Vedant Nanda, Rediet Abebe, Stefan Bechtold, Christoph Engel, Krishna Gummadi, Moritz Hardt, and Michael Livermore

Annotation and classification of legal text are central components of empirical legal research. Traditionally, these tasks are delegated to trained research assistants. Motivated by advances in language modeling, empirical legal scholars are increasingly turning to prompting commercial models, hoping that doing so will alleviate the significant cost of human annotation. Despite growing use, our understanding of how best to utilize large language models for legal tasks remains limited. We conduct a comprehensive study of 260 legal text classification tasks, nearly all new to the machine learning community. Starting from GPT-4 as a baseline, we show that it has non-trivial but highly varied zero-shot accuracy, often exhibiting performance that may be insufficient for legal work. We then demonstrate that a lightly fine-tuned Llama-3 model vastly outperforms GPT-4 on almost all tasks, typically by double-digit percentage points. We find that larger models respond better to fine-tuning than smaller models, and that a few tens to hundreds of examples suffice to achieve high classification accuracy. Notably, we can fine-tune a single model on all 260 tasks simultaneously at a small loss in accuracy relative to having a separate model for each task. Our work points to a viable alternative to the predominant practice of prompting commercial models: for concrete legal tasks with some available labeled data, researchers are better off using a fine-tuned open-source model.
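
For readers curious what such light fine-tuning involves, the sketch below frames a single task as sequence classification with the Hugging Face Trainer; the model name, hyperparameters, and data file are placeholders, and the paper’s actual training recipe may differ.

# Illustrative sketch only: fine-tune an open model on one legal
# classification task. Model, hyperparameters, and data file are
# placeholders; the Lawma paper's actual recipe may differ.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "meta-llama/Meta-Llama-3-8B"  # any strong open model works

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Hypothetical CSV with "text" and "label" columns: a few hundred
# hand-labeled examples often suffice, per the paper's findings.
dataset = load_dataset("csv", data_files={"train": "labeled_task.csv"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lawma-task", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-5),
    train_dataset=tokenized["train"],
    tokenizer=tokenizer)
trainer.train()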
