Strategies for adapting to shifting consumer expectations, born of technology that shrinks global distances, are not the only thing the Fourth Industrial Revolution should borrow from the Telegraph Revolution. Two of the primary challenges that confronted the telegraph era will also present problems today as businesses and governments grapple with the realities of Industry 4.0.
First, both transformative technologies trigger questions about conflicting interests because they impact nearly every aspect of society. Second, questions about how to balance these interests become exponentially more difficult given the global scope in which they arise. Considering the questions below, along with input from AI governance expert Mark Esposito, it becomes clear that polycentric (multilevel) governance is the appropriate framework for global AI governance.
What Are The Conflicting Interests Or Values In The Context Of AI Governance?
While the telegraph created questions about free speech vs. corporate control, AI triggers conflicts between security and privacy. AI's advanced pattern detection enhances the accuracy and efficiency of surveillance programs, but its invasiveness often infringes on civil liberties and personal freedoms.
So how will we assess the significance of these competing interests, and who will we rely on to do it? One way to balance them is to use an informal version of proportionality doctrine that considers the extent to which each interest is infringed and the net difference that a proposed measure or technology will make to society. While proportionality review today is typically performed by the judicial branch, an informal version would let us draw on the input of many different stakeholders with different perspectives on the technology and its impact. For example, civil society groups can advocate for voiceless civilians about threats to their civil liberties, while intelligence experts and military leaders can speak to the actual national security threats that the proposed measures aim to address.
How Do We Conduct This Analysis Through A Global Lens?
Managing these interests is difficult enough when examining AI’s effect on a single country, but balancing them across the world’s 5,000 ethnicities and 4,000 religions, represented in 195 countries, is daunting. It is impossible to balance these interests in a uniform way, as each nation, ethnicity, and religion will have a different interpretation of the appropriate levels of government interference and personal freedom. For example, the collectivist mindset of a social democracy might accept the additional intrusion of enhanced surveillance methods, while the individualist mindset of a liberal democracy might compel people to reject them because they place a higher premium on civil liberties.
So how do we account for regional and culturally specific interpretations? The answer: a decentralized system that allows specific regions and groups to determine their own calibration between conflicting interests. The telegraph era offers a precedent. Each member country retained sovereignty over its domestic telegraph systems to control pricing and local infrastructure, while the International Telegraph Union standardized technical aspects of telegraphy, such as transmission protocols, message formatting, and rates for international messages. Similarly, in the context of AI, we need to establish universal principles for AI ethics (such as transparency, fairness, accountability, and safety) while allowing nations to tailor the application of these principles to their unique cultural, political, and economic contexts.
Why Is Polycentric Governance The Right Answer?
Unlike centralized governance frameworks where power is concentrated and delegated to a few decision makers, polycentric systems are decentralized to account for country-specific value systems. Mark Esposito, an AI governance expert who has appointments at Hult International Business School and Harvard University (Disclosure: I am also a professor at Hult), explains more specifically that, “Elinor Ostrom’s eight principles for polycentric governance are vital. They provide a structure to balance global cooperation with local autonomy, ensuring that AI’s transformative potential is harnessed responsibly. By implementing clear boundaries, collective-choice arrangements, and conflict-resolution mechanisms, we can address the complex, often conflicting interests and values inherent in AI’s global impact.”