“The Other Delaware Effect”
This paper examines the effects of Delaware’s 2015 ban on fee-shifting provisions in corporate charters and bylaws, a significant legislative intervention in corporate law aimed at curbing managerial power. The Delaware Supreme Court had approved these provisions just one year earlier, and many viewed them as a promising tool for curbing wasteful shareholder litigation. Because the provisions were widely seen as having substantial potential to reduce such litigation, the legislature’s ban led many to predict an exodus of corporations from Delaware and the continued spread of fee-shifting provisions in other states.
Contrary to these predictions, this study finds that the ban did not trigger a significant departure of corporations from Delaware. More importantly, it also documents a sudden decline in the adoption of fee-shifting provisions outside Delaware, even though most other states did not enact similar prohibitions. The paper argues that this decline is most likely due to a spillover effect: adoptions fell not because Delaware’s ban happened to coincide with a shift in market sentiment against these provisions, but because the ban itself stymied their spread nationwide. The paper explores the mechanisms through which Delaware’s corporate law can influence governance practices elsewhere, including shareholder empowerment and law firms’ reluctance to recommend provisions that have been outlawed in Delaware. It concludes that Delaware’s legal leadership may allow it to set informal norms that influence the behavior of important gatekeepers and other actors in the corporate governance ecosystem, effectively constraining the action space of managers of corporations incorporated elsewhere.
By exploring these mechanisms, the paper contributes to a deeper understanding of the forces that shape corporate governance practices in the United States. It offers insights into the ways Delaware’s rules affect governance practices that subsequently reverberate throughout the U.S. corporate landscape. Besides illuminating important and previously overlooked aspects of Delaware’s role as a standard setter in today’s corporate law world, it also adds to our understanding of the diffusion of corporate governance innovations and of regulatory competition. Additionally, the paper demonstrates the potential of artificial intelligence tools, such as large language models, to revolutionize empirical legal research by automating the extraction of legally significant provisions from corporate documents, allowing researchers to investigate previously underexplored questions at scale.
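To make the extraction step concrete, the sketch below shows one plausible way such automated coding could work: prompting a hosted large language model to label whether a charter or bylaw excerpt contains a fee-shifting provision. It is a minimal illustration, not the paper’s actual pipeline; the model choice, prompt wording, and the classify_fee_shifting helper are all assumptions introduced here.

```python
# Hypothetical sketch: label whether a charter/bylaw excerpt contains a
# fee-shifting provision using a hosted LLM. Not the paper's actual pipeline.
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are reviewing an excerpt from a corporate charter or bylaws. "
    "Answer YES if the excerpt contains a fee-shifting provision, i.e., a "
    "clause requiring an unsuccessful shareholder plaintiff to pay the "
    "corporation's litigation expenses. Otherwise answer NO.\n\n"
    "Excerpt:\n{excerpt}\n\nAnswer (YES or NO):"
)

def classify_fee_shifting(excerpt: str) -> bool:
    """Return True if the model labels the excerpt as fee-shifting."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        temperature=0,        # deterministic labels suit coding tasks
        messages=[{"role": "user", "content": PROMPT.format(excerpt=excerpt)}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

if __name__ == "__main__":
    sample = (
        "In the event that any stockholder initiates a claim against the "
        "corporation and does not obtain a judgment on the merits, such "
        "stockholder shall reimburse the corporation for all fees, costs, "
        "and expenses incurred in connection with the claim."
    )
    print(classify_fee_shifting(sample))  # expected: True
```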
“DECODEM: Data Extraction from Corporate Organizational Documents via Enhanced Methods”
With Eric Talley and others
This project seeks to revolutionize empirical research in corporate law and finance by laying the groundwork for a new generation of open-source corporate governance datasets, thereby enhancing the power and reach of this type of research. To achieve this goal, the project develops advanced methods for the automated extraction of legally relevant information from corporate charters and bylaws. In particular, it leverages state-of-the-art natural language processing technologies, including large language models, to overcome the limitations of traditional data extraction methods, which often rely on manual coding and are hampered by restricted scope and questionable reliability.
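As an illustration of the kind of structured extraction the project aims at, the sketch below asks a language model to return a machine-readable record of several governance provisions from a charter. The JSON schema, field names, and model choice are placeholders introduced here, not the project’s actual DECODEM methods.

```python
# Hypothetical sketch of structured extraction from a charter or bylaw text.
# The schema (field names, provision list) is illustrative only.
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()

SCHEMA_INSTRUCTIONS = (
    "Read the following corporate charter and return a JSON object with "
    "boolean fields: staggered_board, exclusive_forum, fee_shifting, "
    "blank_check_preferred. Return only the JSON object.\n\nCharter:\n{text}"
)

def extract_provisions(charter_text: str) -> dict:
    """Ask the model for a machine-readable record of governance provisions."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        temperature=0,
        response_format={"type": "json_object"},  # constrain output to JSON
        messages=[{"role": "user",
                   "content": SCHEMA_INSTRUCTIONS.format(text=charter_text)}],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    with open("charter.txt") as f:       # any plain-text charter file
        print(extract_provisions(f.read()))
```

Records of this form could then be aggregated across filings into the kind of open-source governance dataset the project envisions.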
“Lawma: The Power of Specialization for Legal Tasks”
With Ricardo Dominguez-Olmedo, Vedant Nanda, Rediet Abebe, Stefan Bechtold, Christoph Engel, Krishna Gummadi, Moritz Hardt, and Michael Livermore
Annotation and classification of legal text are central components of empirical legal research. Traditionally, these tasks are delegated to trained research assistants. Motivated by advances in language modeling, empirical legal scholars are increasingly turning to prompting commercial models, hoping that doing so will alleviate the significant cost of human annotation. Despite growing use, our understanding of how best to utilize large language models for legal tasks remains limited. We conduct a comprehensive study of 260 legal text classification tasks, nearly all new to the machine learning community. Starting from GPT-4 as a baseline, we show that it has non-trivial but highly varied zero-shot accuracy, often exhibiting performance that may be insufficient for legal work. We then demonstrate that a lightly fine-tuned Llama-3 model vastly outperforms GPT-4 on almost all tasks, typically by double-digit percentage points. We find that larger models respond better to fine-tuning than smaller models, and that a few tens to hundreds of examples suffice to achieve high classification accuracy. Notably, we can fine-tune a single model on all 260 tasks simultaneously at a small loss in accuracy relative to having a separate model for each task. Our work points to a viable alternative to the predominant practice of prompting commercial models: for concrete legal tasks with some available labeled data, researchers are better off using a fine-tuned open-source model.
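For readers curious what a light fine-tune can look like in practice, the sketch below shows a generic Hugging Face sequence-classification fine-tune of an open-source model on a small labeled legal task. It is a plausible baseline recipe under stated assumptions, not the Lawma training code; the model identifier, toy dataset, and hyperparameters are placeholders.

```python
# Hypothetical sketch: fine-tune an open-source model on one binary legal
# classification task. Generic Hugging Face recipe, not the Lawma codebase.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "meta-llama/Meta-Llama-3-8B"  # gated checkpoint; any open model works

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama models ship without a pad token
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy stand-in; per the abstract, a few tens to hundreds of labeled
# examples typically suffice for high accuracy on a single task.
train = Dataset.from_dict({
    "text": ["The court held that the agency exceeded its authority ...",
             "The statute provides a thirty-day window for appeal ..."],
    "label": [1, 0],
}).map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lawma-sketch",       # placeholder output directory
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=train,
    tokenizer=tokenizer,  # enables padding via the default data collator
)
trainer.train()
```

The abstract’s multitask result suggests a further simplification: rather than one such model per task, a single model fine-tuned across all 260 tasks loses little accuracy.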