My work focuses on developing new data infrastructure for empirical corporate governance research, including large-scale datasets constructed from primary governance documents such as charters and bylaws. I integrate these empirical tools with doctrinal insight and computational methods to test theoretical claims about private ordering, regulatory spillovers, and the institutional design of law. In parallel, I develop frameworks for responsibly deploying natural language processing and large language models in legal research—clarifying where such tools can enrich legal inquiry and where their use requires careful governance and transparency.
My current projects include the following:
“The Other Delaware Effect”
This paper examines the effects of Delaware’s 2015 ban on fee-shifting provisions in corporate charters and bylaws, a significant legislative intervention in corporate law that curtailed managerial power. The Delaware Supreme Court had approved these provisions just one year earlier, and they formed part of a series of measures aimed at curbing shareholder litigation. Because the provisions were widely seen as a potent tool for reducing wasteful litigation, the legislature’s ban led many to predict an exodus of corporations from Delaware and the continued spread of fee-shifting provisions in other states.
Contrary to these predictions, this study finds that the ban did not trigger a significant departure of corporations from Delaware. More importantly, it also documents a sudden decline in the adoption of fee-shifting provisions outside Delaware, even though most states did not enact similar prohibitions. The paper argues that this decline is most likely due to a spillover effect: adoptions fell not because Delaware’s ban coincided with a shift in market sentiment against these provisions, but because the ban itself stymied their adoption nationwide. The paper explores the mechanisms through which Delaware’s corporate law can influence governance practices elsewhere, including shareholder empowerment and law firms’ reluctance to recommend provisions that have been outlawed in Delaware. It concludes that Delaware’s legal leadership may enable the state to set informal norms that influence the behavior of important gatekeepers and other actors in the corporate governance ecosystem, effectively constraining the action space of managers of corporations incorporated elsewhere.
By exploring these mechanisms, the paper contributes to a deeper understanding of the forces that shape corporate governance practices in the United States. It offers insights into how Delaware’s rules shape governance practices that subsequently reverberate throughout the U.S. corporate landscape. Besides exploring important and previously overlooked aspects of Delaware’s role as a standard setter in today’s corporate law world, it also adds to our understanding of the diffusion of corporate governance innovations and regulatory competition. Additionally, the paper demonstrates the potential of artificial intelligence tools, such as large language models, to revolutionize empirical legal research by automating the extraction of legally significant provisions from corporate documents, allowing researchers to investigate previously underexplored questions at scale.
“DECODEM: Data Extraction from Corporate Organizational Documents via Enhanced Methods”
With Eric Talley and others
This project seeks to revolutionize empirical research in corporate law and finance by laying the groundwork for a new generation of open-source corporate governance datasets, thereby enhancing the power and reach of this type of research. To achieve this goal, the project develops advanced methods for the automated extraction of legally relevant information from corporate charters and bylaws. In particular, it leverages state-of-the-art natural language processing technologies, including large language models, to overcome the limitations of traditional data extraction methods, which often rely on manual coding and are hampered by restricted scope and questionable reliability.
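As an illustration of the kind of automated extraction the project pursues, the sketch below shows one way a large language model could be prompted to flag a governance provision in a bylaws excerpt. This is a minimal sketch rather than the project’s actual pipeline: the model name, prompt wording, and excerpt are illustrative assumptions, and the OpenAI Python client is used only as one readily available interface.

```python
import json
from openai import OpenAI  # assumes the openai Python package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical bylaws excerpt; real charters and bylaws are far longer and messier.
excerpt = (
    "Section 7.4. In the event that any stockholder initiates or asserts any claim "
    "against the corporation and does not obtain a judgment on the merits, such "
    "stockholder shall be obligated to reimburse the corporation for all fees, costs, "
    "and expenses incurred in connection with such claim."
)

prompt = (
    "You are reviewing corporate organizational documents. Does the following excerpt "
    "contain a fee-shifting provision? Answer in JSON with keys 'fee_shifting' "
    "(true/false) and 'quote' (the supporting language, or null).\n\n" + excerpt
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for machine-readable output
    temperature=0,  # deterministic output aids replication and validation
)

result = json.loads(response.choices[0].message.content)
print(result["fee_shifting"], "-", result["quote"])
```

In practice, an approach along these lines would be run over many thousands of documents and validated against a hand-coded sample before the resulting dataset is released.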
“Grading Machines: Can AI Exam-Grading Replace Law Professors?”
With Kevin Cope, Scott Hirst, Eric Posner, Dan Schwarcz, and Dane Thorley
In the past few years, large language models (LLMs) have achieved significant technical advances, and legal-advocacy organizations are increasingly adopting them as complements to—or substitutes for—lawyers and other human experts. Several studies have examined LLMs' performance in taking law school exams, with mixed results. Yet no published study has systematically analyzed LLMs' competence at one of law professors' chief responsibilities: grading law school exams. This paper analyzes how LLMs perform in evaluating student responses to legal-analysis questions of the kind typically found on law school exams. The underlying data come from exams in four subjects administered at top-30 U.S. law schools. Unlike some projects in computer and data science, we do not aim to design a new LLM that minimizes error or maximizes agreement with human graders. Rather, we seek to determine whether existing models—which most professors and students can apply straightforwardly—are already suitable for the task of law exam evaluation.
We find that, when provided with a detailed rubric, the LLMs' grades correlate with those of human graders at Pearson correlation coefficients of up to 0.93. Our findings suggest that, even if they do not fully replace humans in the near future, LLMs could soon take on valuable tasks for law school professors, such as reviewing and validating professor grading, providing substantive feedback on ungraded midterms, and giving students feedback on self-administered practice exams.
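For readers unfamiliar with the agreement metric reported above, the following is a minimal sketch of how a Pearson correlation between LLM-assigned and human-assigned scores can be computed; the score vectors are hypothetical and are not drawn from the paper’s data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical raw scores for ten exam answers (human grader vs. LLM grader).
human_scores = np.array([72, 85, 64, 90, 78, 55, 88, 69, 81, 75])
llm_scores = np.array([70, 87, 60, 92, 80, 58, 85, 66, 83, 77])

# Pearson r measures linear agreement between the two sets of grades.
r, p_value = pearsonr(human_scores, llm_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```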
