Subproject 5: Corporate law between anthropocentrism and self-driving corporation - how much AI can a company tolerate?

Research Program: 

Subproject 5, in close cooperation with Subproject 1, explores the question of how much AI a company can accommodate and how much anthropocentrism will still be required in corporate law in the future. The subproject follows two lines of inquiry: the first is more comparative and legal-policy oriented; the second takes a more application-oriented, dogmatic approach.

As a first step, the aim is to identify anthropocentric principles in German corporate law, proceeding from the hypothesis that Section 76(3) of the German Stock Corporation Act (AktG) and Section 6(2) of the German Limited Liability Company Act (GmbHG), which stipulate that members of corporate bodies in German corporations must be natural persons, represent a conscious decision to reserve corporate governing bodies for humans.

Even if a fully autonomous corporation is likely still more of a specter today, the law of the U.S. state of Delaware, under which a corporation may dispense with a board of directors, at least contemplates the possibility that a legally non-autonomous AI could replace the board in the future. Since Delaware also permits full, reciprocal ownership between two corporations, it would at least be conceivable for two corporations, each managed solely by an AI, to own 100% of each other's shares and thus no longer be controlled by any shareholders at all.

Against this backdrop, the first line of the subproject aims, on the one hand, to explore whether a self-governing corporation is even a desirable legal policy goal. On the other hand, it addresses whether it can make sense from a legal policy perspective to make an AI a member of a governing body, at least alongside humans. The comparative focus will be on the U.S. states of Delaware and Wyoming, with its DAO companies.

At the conclusion of this line of legal-policy analysis, the question to be addressed is whether, and if so on what grounds, the requirement that corporate officers be human beings can be defended under German corporate law. A key focus will be on determining whether artificial agents can be recognized as members of the management alongside human beings, at least through provisions in the corporation's charter, and what safeguards corporate law would then have to provide. In doing so, the classic principal-agent conflict will need to be re-examined. Legal-economic considerations will form an important component of the legal-policy conclusions.

The second, more dogmatic strand of the subproject examines the question of what duties of oversight arise when AI is used in stock corporations and limited liability companies. Demonizing the use of AI in this context cannot be a solution. On the contrary, one working hypothesis is that, owing to the general duty to promote the company's well-being, the board of directors may under certain conditions even be obligated to use AI if it can produce results that are just as reliable as, but faster and more cost-effective than, the work of human beings.

If one wishes to define oversight duties, a distinction must be made between cases in which the AI is used to prepare a human decision and cases in which a decision is directly delegated to the AI, which then, for example, autonomously sells securities on behalf of the corporation. When AI is used for decision preparation, two questions take center stage. First: What level of care is required in selecting the AI? And second: To what extent must the board of directors validate the AI's decision proposal? Does validation require the board to decode the black box? If so, the use of AI would hardly be legally sound. Subject to the rapid further development of explainable AI (XAI), the working hypothesis is that decoding the black box is not necessary. After all, the very reason for seeking external advice is that the board lacks the necessary expertise. Rather, the plausibility check consists in comparing the results produced by the AI, as with any external expert, against the board's experience and business intuition.

Furthermore, the focus will be on the role of the supervisory board and the question whether the stock corporation must take organizational precautions to ensure that the use of AI does not cause harm to the company. With regard to the supervisory board, it will be necessary to examine in more detail whether, as part of the supervisory duties, it must conduct its own plausibility check of the AI’s decision proposals or whether it can limit itself to monitoring the plausibility check performed by the executive board. The latter seems, at least at first glance, to be the obvious approach, unless the supervisory board itself uses AI in its supervisory activities.

In addition, attention must be paid to the use of AI in conducting a virtual general meeting of the shareholders, as well as its use by shareholders in preparing their votes for the general meeting and by proxy advisors.

Another key area of research in connection with oversight duties is the question of the organizational framework for Corporate Digital Responsibility (CDR). The legal requirements in this area are currently still very scattered; in particular, the organizational provisions of the EU AI Regulation will need to be examined, as their significance for corporate law remains largely unclarified. Depending on the size of the company and the extent to which AI is used, different standards will need to be developed. In particular, the question arises as to whether a separate CDR department should be established. It will also need to be clarified how such a CDR department relates to the regular compliance department.