The most pressing challenges of the present and coming decades—among them, climate change and rising inequalities of wealth, income, risk and opportunity—are not technical problems. They are ethical predicaments. As the term will be used here, predicaments emerge as we become aware of conflicts among our own values, aims, and interests. Unlike problems, predicaments cannot be solved; they can only be resolved, where resolution implies both clarity and commitment—the articulation of new and responsively apt constellations of values, intentions and actions. Predicament resolution is thus inherently reflexive. It involves not only changing how we live, but why and as whom.
Today, we are witnessing the early stages of perhaps the greatest predicament humanity has had to face: a transformation of the human experience by the impacts of artificial intelligence, machine learning and big data. Like the Copernican revolution that decentered humanity’s place in the cosmos five hundred years ago, the intelligence revolution is not only physically decentering humanity. It is also decentering humanity metaphysically and morally, shattering previously foundational certainties and opening entirely new spaces of opportunity.
The Copernican revolution has had mixed results. It led to scientific and technological advances that have enabled sending men to the moon and landing a probe on an asteroid no larger than a few city blocks more than 250 million miles away. But it also enabled building nuclear weapons, killing hundreds of thousands of people with a single bomb, and altering the planet’s climate. The results of the intelligence revolution are likely to be similarly mixed. Smart cities will be more efficient and more livable; smart healthcare has the potential to reach and benefit the half of humanity that now lacks even basic health services. Yet, the algorithmic tailoring of opinion and experience is already threatening democratic governance, the emerging attention economy is deepening inequalities, and a “winner takes all” arms race in autonomous weapons and militarized AI is well underway.
The “intelligence predicament” is that freely-spent human attention energy and the readily-shared data carried along with it are being used by corporate and state actors to build smart societies, to deliver individually-tailored experiential options and consumer goods, and to incentivize reliance on smart services that have the potential to make human intelligence superfluous. The ethical challenges posed by this predicament are unprecedented.
The Intelligence Revolution: A Technological Confluence
Dreams of intelligent machines are old news. Homer's Iliad includes descriptions of "golden servants." Leonardo da Vinci produced detailed sketches of mechanical knights in the late 15th century. And, roughly two hundred years ago, Charles Babbage designed a "difference engine," a mechanical computer capable of carrying out complex mathematical calculations. But it was only very recently, with a confluence of advances in big data, machine learning and artificial intelligence, that these dreams of artificial servants, soldiers and savants have begun coming true.
Big Data. Dramatic advances in electronics miniaturization and processing speeds have made data production mobile, convenient and nearly ubiquitous. To give a sense of the scale of data involved, consider that in 1997, 100 gigabytes of data were produced globally per hour. Just five years later, as the internet was becoming truly global, 100 gigabytes were being produced every second. Today, 100 gigabytes of data are generated every two thousandths of a second. That means that over 2.5 quintillion bytes of new data are now produced every 24 hours—enough data to fill a stack of DVDs reaching from the Earth to the Moon and back.
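The arithmetic behind these figures can be checked in a few lines of Python. This is a rough, illustrative calculation only; the cited rates are themselves rounded estimates.

```python
# Rough check of the data-production figures cited above (illustrative only;
# the cited rates are rounded estimates, not precise measurements).

GB = 10**9                                  # bytes per gigabyte (decimal convention)
rate = 100 * GB / 0.002                     # 100 GB every two thousandths of a second
seconds_per_day = 24 * 60 * 60

daily_bytes = rate * seconds_per_day
print(f"{daily_bytes:.2e} bytes per day")   # ~4.3e18, i.e. well over 2.5 quintillion
```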
Much of this data is personal. It is being generated as a valued "byproduct" of the seamless 24/7 connectivity afforded by smartphones combined with the massive shift of social energy from offline to online environments that is occurring via the smart platforms developed by social media and e-commerce giants like Facebook, Amazon, Tencent and Alibaba. Every social media user and e-commerce consumer is now a data producer. This "decentralization" of data production is being further accelerated by the incorporation of network connections into everyday objects. It is estimated that by 2025, this "internet of things" will include 150 billion objects of daily use and that the average person will interact with internet-connected devices some 4,800 times per day. At that point, global data production will occur at the rate of roughly 163 zettabytes per year. To put this in perspective, one zettabyte of data is equivalent to the amount of information encoded on 250 billion DVDs or enough to screen 36 million years of high definition television.
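The zettabyte comparisons can likewise be sanity-checked; the DVD capacity and HD bitrate used below are assumed typical values, not figures given in the text.

```python
# Rough sanity check of the zettabyte comparisons above. The DVD capacity and
# HD bitrate are assumed typical values (not figures from the text).

ZB = 10**21                                  # bytes per zettabyte
dvd_bytes = 4.7 * 10**9                      # single-layer DVD capacity (assumed)
hd_bits_per_sec = 7 * 10**6                  # nominal HD video bitrate (assumed)

dvds = ZB / dvd_bytes                                      # ~2.1e11, on the order of 250 billion
years = (ZB * 8 / hd_bits_per_sec) / (365.25 * 24 * 3600)  # ~36 million years of playback
print(f"{dvds:.1e} DVDs, {years:.1e} years of HD video")
```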
Machine-Learning. This unrelenting escalation in the volume, velocity and variety of data generation and storage might have produced nothing more than global “data smog.” But big data is precisely what was needed to enable practically-viable machine learning. Simply stated, machine learning is based on creating algorithms (decision making procedures) that can rewrite themselves in response to real world feedback. While the viability of machine learning was demonstrated theoretically nearly half a century ago, its practical application was hampered by slow computer processing speeds and by the unavailability of data of the kinds and in the quantities needed for algorithmic learning. With the kinds and amounts of data available today, artificial neural networks and “genetic” or “evolutionary” algorithms running on standard production computers are capable of remarkable feats of learning.
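As a minimal sketch of this idea (not of any particular production system), consider a predictor with a single adjustable parameter that nudges itself whenever real-world feedback contradicts its predictions. The `train` function and the toy rule it recovers are purely hypothetical illustrations.

```python
# A minimal sketch of learning from feedback: a one-parameter predictor that
# adjusts itself whenever the observed outcome differs from its prediction.
# Purely illustrative; real systems use vastly richer models and data.

def train(examples, learning_rate=0.01, epochs=100):
    w = 0.0                                   # the model's single adjustable parameter
    for _ in range(epochs):
        for x, observed in examples:
            predicted = w * x                 # the current "decision procedure"
            error = observed - predicted      # real-world feedback
            w += learning_rate * error * x    # the procedure revises itself
    return w

# Data generated by a hidden rule (y = 3x); the learner recovers the rule.
data = [(x, 3 * x) for x in range(1, 6)]
print(round(train(data), 2))                  # ≈ 3.0
```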
A high profile illustration of the power of machine learning is the defeat of a human master of the Chinese game of go by the purpose-built machine learning system, AlphaGo. Armed initially only with the rules of the game of go and an incentive to win, by reviewing games played by human masters and then playing millions of games against itself, AlphaGo was able to learn how to make sufficiently creative or confounding moves to defeat an expert human opponent. Unlike chess, in which brute-force search over possible moves has long been enough to defeat human masters, go cannot be won with sheer data-crunching power. The number of possible board configurations in go exceeds the number of particles that would exist if every particle in our universe were itself a universe the same size as ours. AlphaGo learned to play world-class go by figuring out how to behave in ways that were remarkably effective because they were also remarkably unexpected. The second-generation system, AlphaGo Zero, then learned how to defeat AlphaGo one hundred games to zero given only the rules of the game and no examples of human play whatsoever. It learned to play superhuman go entirely by playing against itself.
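The scale of that combinatorial comparison is easy to confirm using commonly cited estimates; both figures below are assumptions for illustration, not values given in the text.

```python
# Rough check of the combinatorial comparison above, using commonly cited
# estimates (assumed for illustration, not values given in the text).

particles_in_universe = 10**80         # rough estimate for the observable universe
legal_go_positions = 2.1e170           # approximate count of legal 19x19 go positions

# A "universe of universes" would contain roughly (10**80)**2 particles.
print(legal_go_positions > particles_in_universe**2)   # True
```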
Of more immediate relevance to most of us, genetic algorithms are at the heart of the recommendation engines driving the network economy, attracting and holding customer attention, determining prices, and manufacturing interests and behaviors. Algorithms add over a billion dollars a year to Netflix's bottom line by figuring out how to induce people to select films within the 90 seconds that viewers will, on average, spend considering recommendations before switching to an alternative video-streaming platform. But algorithms working with commercially-produced individual data-profiles are also now conducting loan risk assessments and making "evidence-based" judicial recommendations regarding bail, sentencing and parole—practices that have drawn considerable criticism because the data sets they use amount to "encoded" histories of past discrimination and disadvantage.
Artificial Intelligence. Artificial intelligence is a broader phenomenon than machine learning and refers to computational systems that mimic or model human cognitive functions. Debate continues as to whether any AI has passed the "Turing test" of convincing a group of witting experts that they are conversing with a human rather than a machine. But AI systems are now fully capable of passing this test for unwitting users. For instance, students in a philosophy class at Georgia State University interacted online with the course teaching assistant for an entire semester without ever suspecting that "she" was an AI.
The turning point when dreams of building artificial agents began being realized came between 2003 and 2008, when the U.S. Defense Advanced Research Projects Agency (DARPA) funded a project to develop a "cognitive assistant that learns and organizes"—an artificial agent "that can reason, learn from experience, be told what to do, explain what it is doing, reflect on its experience, and respond robustly to surprise." Corporate extensions of that research led to the commercial launch of the Siri personal assistant on Apple iPhones in 2011 and, since then, to the development of artificial agents capable of providing targeted smart services as virtual personal assistants, customer relations reps, healthcare counselors, and so on. The state of the art today is not Siri or one of the other readily available virtual assistants that function as sophisticated, voice-activated search engines. It is a virtual personal assistant that can function as a "do" engine and mediate "conversational commerce" by translating human intentions into actionable code in a few milliseconds. AI is no longer just amplifying production efficiency. AI is now capable of providing complex services.
We are in the early stages of a process through which intelligent human practices are being supplemented by smart services and may eventually be supplanted by them. The most basic of these practices, already outsourced by most people to artificial systems, is remembering. Remembering is the cognitive practice on which the exercise of intelligence rests, and its outsourcing is not a trivial matter. But AI is now capable of design and data-driven research, and smart services will soon be available for everything from dietary guidance and basic medical diagnosis to parenting and educating. It is now estimated that nearly half of all core job tasks in the U.S. will be automated or taken over by artificial intelligence over the next twenty years. Only a small share of this change will be due to factory automation. Most of it will result from the assumption of white-collar and professional labor responsibilities by computational agents.
The potential reach of smart services and machine professionals can be appreciated by considering that IBM’s Watson supercomputer was able to predict seven of the nine major innovations in the use of enzymes for cancer treatment that were made between 2003 and 2013, based on its reading of 70,000 medical research papers written prior to 2003. The computational factories of the intelligence revolution will continue to assist in churning out custom clothes and household goods, but they will also be producing ever more effective virtual researchers, doctors, lawyers, accountants, engineers, and designers that can scale and share their learning almost instantaneously.
The New “Great Game”
The technological confluence among big data, machine learning and artificial intelligence is bringing about a reorganization of the human experience that is dramatically transforming the meanings of family, friendship, health, work, security and agency. It is just as dramatically transforming global power relations. As in the “Great Game” played by imperial and colonial powers in the late 19th and early 20th centuries, the goal of the new Great Game is global dominance. But whereas the competition played out among political actors a century ago aimed at controlling lands and natural resources, corporate, state and military players today seek control of digital platforms and human attention-energy—dominance in the colonization of consciousness itself.
Corporate Power. Over the last quarter century, the maturation of informational capitalism and the network society has led to a remarkable dematerialization of the global economy through which information exchange platforms have acquired de facto monopoly status. This new economic reality is still emerging, but a few things are already clear. First, although we continue to speak about the information economy, big data has made the term obsolete: information is too cheap and abundant to serve as real-world currency. We now have a global economy in which it is the attraction and exploitation of attention that drives global circulations of goods, services, ideas and people.
Secondly, the digital "ocean" of seemingly unlimited experiential options has powerful currents running through it. Machine learning algorithms are using our digitally expressed desires and interests to individually tailor our online experiences so as to maximally capture and capitalize on our attention. With the internet of things and smart services, this tailoring of experience will extend to encompass much of our offline experience as well. As a result, we will be subject to both an accelerating expansion of "emancipatory" freedoms of choice and an intensification of "disciplinary" compulsions to choose. As Apple CEO Tim Cook put it in a telling 2016 statement: "When an online service is free, you're not the customer. You're the product."
Finally, while the new networked attention economy is marvelously effective in multiplying options for consuming and sharing, it is no less effective in amplifying inequalities of income, wealth, risk and opportunity. When Instagram, a photo-sharing app, was purchased by Facebook for $1 billion in 2012, it had 130 million users and a total of just 15 employees. Five years later, Facebook had roughly two billion users and yet employed a mere 25,000 people. So great is the redistributive effect of the internet-mediated attention economy that the seven top companies in the world by market capitalization are now Apple, Amazon, Alphabet/Google, Microsoft, Facebook, Tencent, and Alibaba. Global inequality has become so extreme that, according to a 2014 Oxfam report, the 85 richest people in the world had as much wealth as the poorest 3.5 billion. At the start of 2017, however, this morally outrageous figure was revised downward: the 8 richest people then had as much wealth as the poorest 3.5 billion.
State Power. The networked attention economy is not only concentrating corporate power. The Cartesian dictum that “I think, therefore I am” has been replaced by the realization that, “as I connect, so I am,” and because of this, all those in a position to control connectivity enjoy unprecedented new powers. The commercial power gained through the use of big data ultimately depends on state sanctions of (or silence on) the erosion of rights to privacy. But at the same time, the corporate data-gathering infrastructure also affords states powers of surveillance and opinion manipulation that make the propaganda machines of Nazi Germany look like rotary phones next to the latest Androids or iPhones. In effect, the result has been an arranged marriage—though not necessarily a “love match”—between the attention economy and the surveillance state.
As early as 2012, the aptly named "Karma Police" program operated by the British Government Communications Headquarters (GCHQ) was acquiring some 50 billion metadata records daily about online communications and web browsing activity. Yet, as the Snowden leaks of 2013–2014 made evident, the U.S. National Security Agency (NSA) was not only intercepting billions of emails, phone calls and text messages but also conducting "deep packet inspection" of their content. Indeed, making clandestine use of commercial communications networks, the NSA's Treasure Map program was at that time capable of locating every device connected to the web, worldwide, in real time.
More disturbingly, the surveillance powers now enjoyed by both state and non-state actors are matched by new ontological powers to precisely and effectively shape opinion and behavior: capacities for crafting citizens and consumers of the kinds most desired. The practical ramifications of this have been hinted at by the use of social media to influence the Brexit vote in the UK and to “hack” the 2016 U.S. presidential election. Yet, these high profile exercises of corporate and state power to shape public opinion are just the more visible indicators of an ongoing reconfiguration of global politics and the public sphere that has the potential to make democratic government a thing of the past.
Military Power. Half a century ago, the greatest threats to democracy globally were apparently the Cold War between the US and USSR and the reigning nuclear “deterrence” strategy of “mutually assured destruction” (or MAD). Today, the marriage of the attention economy and the surveillance state forces confrontation with the fact that some of the greatest threats to democracy may now come from within democratic states themselves. But the competition for global military dominance remains crucial to geopolitics and a new arms race is underway focused on the weaponization of AI.
The functional military objective of the new arms race is simple: take humans out of the OODA loop (Observe, Orient, Decide and Act) by deploying autonomous weapons systems that plan, execute and adapt to mission realities in real time, carrying out military objectives with inhuman speed and determination. The first to reach this goal, the argument goes, will enjoy total battlefield dominance. Unsurprisingly, the 2017 Pentagon and DARPA budgets in the US included nearly $33 billion for developing military AI, autonomous weapons, robotics, and swarm technologies. The Chinese government’s investments are comparable, if not greater.
Yet, that familiar competition between would-be military hegemons is only part of the story. Most of the basic research and development feeding into military applications of AI is conducted by corporations with essentially commercial interests. Corporate R&D in these fields now far outstrips direct, government-funded R&D, not just in the US, but in China, Russia, Europe and the UK. In effect, global militaries are often buying "off the shelf" products and repurposing them for military use. Whereas only a handful of nations have the resources needed to establish and expand viable nuclear arms programs, this is not true with respect to building autonomous weapons programs. Non-state actors with very modest financial resources are able to acquire the same "off the shelf" products being used by global militaries. The result is that the AI arms race and cyber-warfare could easily come to any neighborhood of any city or town on the planet. By itself, the merging of AI and so-called conventional weapons presents a tremendous existential threat to humanity. But this threat would be vastly amplified if global militaries heed arguments now being put forward to cede control of nuclear weapons systems to AI as a matter of military necessity.
The New Great Game: A View from the Past, A View Toward the Future
Historical analogies are helpful. But they can also constrain critical and creative engagement with current events by tacitly presuming the continued validity of conceptual frameworks that may be in tension with contemporary realities. The Great Game that was played a century ago was an overt geographic and geopolitical competition—a global land-grab by national and imperial interests. Seeing the global struggle for dominance in the colonization of consciousness as a comparable process would encourage focusing on open competitions playing out at national and international scales. From such a perspective, one might reasonably focus on how China, the United States, the European Union, the United Kingdom, and Russia are attempting to establish both technical and ideological control over the dynamics of the intelligence revolution.
Policy making in China, for instance, is profoundly influenced by national historical memory of a "century of humiliation" by global powers from the mid-19th to mid-20th century. The Chinese Communist Party's declaration that China will become the global leader in artificial intelligence by 2030 is consistent with such a reading of the new Great Game. Given a population of 1.4 billion people generating data and a unique ability to realize an effective corporate-state-military fusion, this is not an unattainable goal. China is committed to becoming the world's first smart state. It has built a state-of-the-art facial recognition and surveillance system, is investing heavily in smart city developments, and has sponsored trial runs of a social credit system designed to enable commercial and state interests to benefit from the ontological power associated with connectivity control.
Likewise, the European Union has published draft guidelines for developing a uniquely European brand of AI, one that promotes human-centric European values and aims at protecting and benefiting both individuals and the common good. In contrast with China, the EU approach is highly sensitive to issues of privacy and information rights. Yet, consistent with Chinese goals, the aim of the EU is to establish itself as a leader in cutting-edge, secure and ethical AI so that European citizens will be able to reap the great benefits of "trustworthy AI" and do so in a way that will attract positive global attention.
Similar accounts could be given of national efforts underway in the US, the UK, and Russia. Given this, it would be tempting to frame the new Great Game and the intelligence predicament as a competition among rival visions of the "smart society"—rival visions of what it means to be "better off" as persons, corporations and nations. But dressing the players of the new Great Game in ideological uniforms more than a century old sheds little light either on the dynamics of competition for control over the intelligence industries of the 21st century or on what is ultimately at stake in the colonization of consciousness.
Looking forward a hundred years instead, the view is quite different. Rather than a competition played among national and corporate actors, the new Great Game can be seen as a short-lived phase in the intelligence revolution. For a time, the development and deployment of AI looks certain to result in any number of clearly (and perhaps even miraculously) positive outcomes. But peering into the longer term, the underlying scientific and societal advances of the intelligence revolution have the potential to result in a profound existential threat to humanity. Should artificial general intelligence become scientific reality rather than science fiction fantasy, superintelligent artificial agents might well evolve capabilities that extend so far beyond those of humans that it will no longer be possible either to presume these agents' benevolence toward humanity or to prevent their elimination of humanity as an organic nuisance.
But long before humanity falls prey to errant artificial general intelligence, and regardless of which ideological bias is designed into the smart societies that are already being built by corporate and state interests, the human experience is being qualitatively transformed. To understand and resolve the intelligence predicament at this more immediate, middle level—the level at which we presently live our daily lives—some Buddhist resources are useful.
The Intelligence Revolution: A Buddhist Perspective
The founding insight of Buddhist thought and practice is that all things arise interdependently. Strongly interpreted, this means relationality is more basic than ‘things-related.’ Individual existents are abstractions from ongoing relational dynamics. For Buddhists, the primary value of this insight is not theoretic, but rather therapeutic.
As one becomes adept at seeing how all things arise interdependently, it becomes apparent that conflict, trouble and suffering (duḥkha) are not functions of chance, destiny or the play of natural laws. They are relational distortions brought about by our own karma or the ways in which abiding patterns of values, intentions and actions bring about consonant patterns of experienced opportunities and outcomes. The implication is that, in a karmic cosmos, all experienced realities imply responsibility. Duḥkha cannot be treated effectively as a problem, but only as a predicament.
The proximate therapeutic aim of Buddhist practice is thus to revise our constellations of values and intentions—including those embedded in and embodied by our cultural, social and political institutions and practices—to realize kuśala or superlative relational dynamics and to eliminate conditions for the persistence of those that are akuśala. Importantly, the relational quality referred to as akuśala or being “without virtuosity” encompasses not only what is now conventionally considered bad or mediocre; it also encompasses what is currently deemed good. Just as virtuosic musical performances establish new standards of musicianship, kuśala conduct involves continually setting new standards of ethical engagement and responsive virtuosity.
This ideal of virtuosic conduct is epitomized by the bodhisattva or "enlightening being" who compassionately vows to work out from within existing relational conditions to facilitate the liberation of all beings from conflict, trouble and suffering. Here, compassion is not mere sympathy for others or the result of rational judgments that someone is undergoing serious and undeserved suffering. Buddhist compassion is a practice of being present in felt interdependence with others, attuned to possibilities for realizing liberating, predicament-resolving relational dynamics. Traditionally, this practice is understood as a process of cultivating the six pāramitās or "utmost excellences" of generosity, moral clarity, patience, valiant effort, attentive poise, and wisdom.
The customization of the human experience and the virtually frictionless freedoms of choice brought by the intelligence revolution may seem to be a technological dream-come-true. But seen through the Buddhist teaching of karma, it is a dream with nightmarish potential. The worry is not a distant future “singularity” when artificial intelligence “wakes up” and begins asserting its own interests. The worry is also not just an exploitation of the many by the few. The worry is that our ever-greater individual privileges to choose are predicated on corporate and state rights to control—an alluring system of domination, not through coercion, but through algorithmically-reinforced desires and cravings. In Buddhist terms, the worry is karmic: the purposeful manufacture of desire-defined autonomous individuals who are induced to “freely” ignore their interdependence.
Karma operates in a spiral fashion. The karmic spiral of desire-driven action, for example, is that getting better at getting what we want depends on getting better at wanting; but getting better at wanting depends on continually experiencing a sense of lack and thus on not finally wanting whatever it is that we get. In short, the karma of seeking to always get what we want has the form of a feedback spiral of ever-intensifying want or dissatisfaction. Likewise, the karmic spiral of gaining greater control depends on perceiving our situation as continually in need of ever more precisely executed practices of control, and it results, over the long term, in ever more thoroughly controlled environments and life circumstances.
As currently oriented, the dynamics of the intelligence revolution are conducive to amplifying both of these karmic spirals. As these spirals intensify, our experiential options will become both wider in scope and more acutely desirable. But this will come at the cost of our "exit rights" from the ever more alluring experiential domains that are being crafted for us by tireless, "black box" algorithms. Eventually, this will mean a loss of experiential and relational wilderness for which no one will be accountable, but for which each of us will ultimately be responsible. We will take up apparently happy residence on karmic "cul-de-sacs" or relational dead ends fashioned in minutely-detailed response to our digitally expressed desires and interests. Individually, we will each enjoy compulsively attractive lives of change without commitment, paid for with the irreplaceable currency of attention: lives in which we will be technologically freed from the need to learn from our mistakes or engage in adaptive conduct—freed, in other words, from the most basic exercise of our own human intelligence.
Redirecting the Intelligence Revolution: The Ethical Challenge of Just Connection
The Copernican revolution helped to usher in the modern era and its core values of equality, universality, individuality, choice and control. The social justice benefits have been profound. The postmodern turn, with its emphasis on differences in identities and histories, has served to broaden the scope of social justice concerns with similarly profound consequences. The coming era, however, is one in which achieving greater social justice will require more than just clearly articulated and conscientiously enacted systems of human rights. It will require more than recognition of and respect for differences in history and identity. It will require resistance to the colonization of consciousness: resistance to uses of our own intelligence that have the potential to render human intelligence redundant. The ethical challenges are unprecedented.
Ethical engagement with technological change is relatively recent. None of the major Western (putatively global) traditions of ethics—virtue, deontological, and utilitarian—was developed in response to humanity's self-transforming development of new technologies or to the new realms of experience and action brought about by them. Although efforts have been made to use these traditions, for example, in framing ethical guidelines for robotics research, it is by no means certain that variations on these ethical traditions will suffice for addressing the complex challenges of the intelligence revolution and the predicament it poses. The same can be said for ongoing efforts in China to make use of traditional Confucian ethics, for example, to outline the meaning of benevolent or humane artificial intelligence.
Ostensibly better suited to the task are the ethical perspectives specifically developed over the last half century to address issues raised by information and computing technologies (ICT). Yet, to date, these purpose-built ethical approaches have arguably remained wedded to metaphysical assumptions and commitments that work against critical and creative engagement with the complex interdependencies and recursions that characterize both the intelligence revolution and contemporary global dynamics. Almost invariably working out from within a given cultural setting and appealing to widely-accepted ethical principles and values, the purpose-built ethics of ICT—like similar ethics for journalistic or medical practice—have sought primarily to establish a fixed (and presumptively universal) standpoint from which to distinguish between beneficent and maleficent technological practices. The advent of adaptive machine agencies calls the critical efficacy of such approaches into question.
We can no longer presume ourselves to be essentially independent agents acting upon essentially passive technologies. Any consistent use of tools and their parent technologies is a process by means of which we change both who we are as users and what we mean by utility. The tools and technologies of the intelligence revolution, however, are not simply lying ready for human use. For the first time, our technological systems are actively participating in the adaptive reconfiguration of what is always a human-technology-world system. In addition to being agents of technology, we are now also the patients of technologies that are intelligently and ever more autonomously seeking to shape our experience based on values that we have either designed into them or that they have derived from our interactions with them. At least until the advent of artificial general intelligence, this means that the ethical labor of determining which uses of AI, machine learning and big data are beneficent and which are maleficent is inseparable from—and ultimately consists in—discerning who we need to be present as, if the karma of the intelligence revolution is to be truly humane.
It is tempting to assume the need only for much simpler ethical labor and to take the meaning of aligning AI with human values or building human-centered AI as axiomatic. But the immense variation in human values, across cultures and throughout history, combined with the recursive relationship between human and machine intelligences, entails that resolving the global predicament posed by the intelligence revolution cannot be undertaken from any single or fixed ethical standpoint. No currently existing ethical framework is, or could be, sufficient for carrying out this ethical labor. What we require is not a unitary global ethical system to generate a blueprint for a utopian smart future, but an enduring and vibrant ethical ecosystem that fosters ongoing ethical improvisation.
In much the same way that the vitality of a natural ecosystem is a function of the species diversity therein, the resilience and adaptive capacity of such an ethical ecosystem will be a function of the ethical diversity informing it. Moreover, just as the species diversity found in healthy ecosystems is relationally distinct from and irreducible to the species variety that is found in well-functioning zoos, ethical diversity is relationally distinct from and irreducible to ethical variety or plurality. Realizing ethical diversity is not a quantitative matter of incorporating input from a wide variety of stakeholders representing different ethical perspectives. Ethical diversity is a qualitative relational achievement that occurs only when ethical differences become the basis of mutual contribution to both shared and critically-productive ethical conduct. Ethical diversity thus depends on developing capacities for exercising ethical intelligence—that is, capacities for engaging in improvisational, adaptive conduct that expands ethical horizons and progressively raises standards of ethical virtuosity.
One thing that we have learned in attempting to resolve the global predicament of climate change is that the ethical improvisation needed for the emergence of a global and self-sustaining ethical ecosystem is neither common nor coercible. Our prospects for resolving the intelligence predicament depend on our readiness to embark on processes of becoming differently present as ethical agents and patients—our readiness, ultimately, to go beyond differing-from others to also differing-for or on behalf of others in recognition that it is not our independence that should be affirmed as both metaphysically and morally basic if we want to end conflict, trouble and suffering, but rather our interdependence.
The personal ideal of the bodhisattva is one vision of who we need to be present as in order to enhance our capacities for and commitments to predicament-resolution. Western liberalism and communitarianism, Confucian relationality, Islamic religiosity, and the naturalisms espoused by indigenous peoples offer distinct, comparable ideals. Undoubtedly, our ethical efforts to inflect the dynamics of the intelligence revolution in ways that are equitable, just, and humane will benefit greatly from ensuring sustained contributions from each of these traditions and from many others. At the very least, the intelligence revolution will have to be delinked from the colonization of consciousness, and creative energies will have to be redirected from playing the new Great Game as a finite game, the point of which is to win, to playing it as an infinite game, the purpose of which is to continuously enhance both the inclusiveness and quality of play. The alternative is almost certain to be inhumane, even if it remains humanly engineered.
by
Peter D. Hershock, East West Center, Honolulu, Hawaii hershocp@eastwestcenter.org