For the second time in two years, hundreds of mostly Western public intellectuals have published an open letter calling for a ban or moratorium on the development of artificial superintelligence (ASI) until there is a broad scientific consensus that it can be done safely and controllably.
Western concerns about AI’s trajectory stand in contrast to Chinese discourse, which assumes that technology must serve the collective common good through careful governance. In that discourse, AI is seen first and foremost as a tool for advancement, not a threat.
Between these two positions lies a third possibility: a cybernetic ethics of intention that integrates the ethical awareness of the West with the systemic pragmatism of the East. In cybernetics, intention is not an afterthought but the starting point.
Moral purpose
Western calls to slow down or pause AI development stem from a moral fear of unregulated human ambition.
Since the Enlightenment, Western thought has framed progress as liberation through knowledge, but the same tradition has also been haunted by Promethean guilt—the sense that innovation without restraint leads to disaster. From Mary Shelley’s “Frankenstein” to the Manhattan Project, this duality repeats: the pursuit of mastery confronts the limits of control.
In the case of artificial general intelligence (AGI), the anxiety has shifted from the laboratory to the boardroom. Artificial intelligence is now largely developed by private corporations whose fiduciary duties—to shareholders rather than citizens—raise profound ethical questions.
The issue, as many experts have argued, is not that AI might wake up one day, but that the systems already shaping our economies and information environments are misaligned with public welfare. When algorithms optimize for engagement, profit or surveillance, they amplify division, manipulate human attention and erode trust.
The open letters signed by top technologists such as Yoshua Bengio, Geoffrey Hinton and Steve Wozniak represent a collective recognition that ethics cannot be outsourced to market forces. The ethical dimension of their concern lies not in fearing intelligence itself but in doubting the purity of the intentions behind its creation.
In a world where “move fast and break things,” the informal motto popularized by Silicon Valley “tech bros,” has become the ethos of innovation, the demand for a pause is, paradoxically, a plea for reflection, if not a return to moral purpose.
Pragmatism vs paranoia
China’s approach to AI development reveals a markedly different disposition. While Western media often portray China’s rise in AI as authoritarian or uncritical, the underlying motivation is less ideological than civilizational.
The Chinese intellectual tradition—rooted in Confucian humanism, Daoist naturalism and Buddhist non-dualism—has long viewed intelligence as a continuum within nature rather than a rival to humanity. In this worldview, technology is an extension of human order, not a threat to it.
The Chinese state thus treats AI as an instrument of harmony and efficiency, a tool for national rejuvenation and optimizing governance. Its long-term AI strategies—spanning decades rather than election cycles—focus on integration and application, not metaphysical speculation. Where Western thinkers debate whether AGI should exist, Chinese policymakers ask how to align AI with social stability, productivity and ecological balance.
This pragmatic ethos is reflected in the country’s political culture. Centralized oversight allows for a degree of coordination that Western liberal democracies cannot easily replicate. While Western experts fear unregulated corporate power, Chinese planners assume regulation by design—the belief that social systems can guide technology toward collective benefit.
As the modern Chinese philosopher Tu Weiming has written, “Humanity is the self-conscious agent of the creative transformation of Heaven and Earth.” In this sense, technology is not an adversary of moral order but an extension of the human mandate to participate responsibly in cosmic creativity.

China’s comparative lack of alarm over AI does not imply naivety.
Rather, it reflects confidence in human stewardship: a conviction that the same institutions capable of steering industrialization and modernization can also manage AI. The emphasis is less on limiting technology than on embedding it within a moral-political framework that stresses harmony and collective progress.
To Western observers, this confidence can appear dangerously complacent. Yet the mirror cuts both ways. Western societies, shaped by centuries of individualism and secularism, have produced unparalleled technological power but have also eroded their shared sense of collective purpose. The debate over AGI thus reveals a deeper cultural crisis: the inability to define what humanity actually wants from technology.
When Western experts call for a moratorium, they are not simply warning against technical failure; they are confessing uncertainty about moral direction. Without a clear sense of the common good, even the best regulatory framework will fail to guide innovation. In this respect, the real danger lies not in artificial intelligence but in artificial intention: the absence of genuine ethical intent behind technical ambition.
Cybernetic lens
The American AI community has largely forgotten its roots in cybernetics, the mid-20th-century science of systems, feedback and control. Developed by Norbert Wiener in the 1940s, cybernetics offers a powerful framework for reconciling ethics and pragmatism. The cybernetic process unfolds in three steps: plan, quantify and steer.
“Plan” defines the goal or intention—what we seek to achieve and why.
“Quantify” translates that intention into measurable parameters—what success looks like in observable terms.
“Steer” involves feedback, adjustment and correction—ensuring that the system remains on course toward the goal.
Applied to AI governance, cybernetics forces a discipline of reflection. Before designing a system, we must articulate the intent guiding it. If that intention is purely instrumental—profit, dominance or efficiency—the feedback loops will eventually amplify those values.
But if the intention is ethical—human flourishing, balance, sustainability—the same feedback mechanisms can sustain a virtuous cycle.
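To make the loop concrete, here is a minimal sketch in Python. Every name and number in it is an illustrative assumption, not a real AI system or governance API: plan states the intention as an explicit target, quantify reduces the system’s state to a measurable signal, and steer feeds the gap back as a correction.

```python
# A toy plan-quantify-steer loop. All names and values here are
# illustrative assumptions; nothing refers to a real AI system.

def plan():
    """Plan: articulate the intention as an explicit, stated target."""
    return 1.0  # e.g., a normalized "public benefit" score to sustain

def quantify(state):
    """Quantify: translate the intention into a measurable signal."""
    return state["benefit"]

def steer(state, error, gain=0.5):
    """Steer: feed the error back to correct the system's course."""
    state["benefit"] += gain * error
    return state

state = {"benefit": 0.2}  # the system starts far from the stated goal
target = plan()

for step in range(8):
    observed = quantify(state)
    error = target - observed  # feedback: gap between intent and outcome
    state = steer(state, error)
    print(f"step {step}: observed={observed:.3f}, error={error:+.3f}")
```

The point of the sketch is structural: whatever value is encoded in plan is exactly the value the loop amplifies, which is why the intention must be examined before the feedback machinery is built.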

Cybernetics thus bridges the Western concern for ethical clarity with the Eastern emphasis on systemic harmony. It does not reject power but insists that power be self-correcting. In this sense, it transforms the moral question—“Should we build AGI?”—into a functional one: “What are we trying to sustain and how do we ensure our feedback keeps us aligned with it?”
Intent as moral compass
The cybernetic lens highlights the central role of intentionality, a point both Confucian and contemporary AI ethicists would affirm. Tu Weiming’s idea of “ren” (human-heartedness) describes the cultivated moral awareness that guides responsible action.
Translated into cybernetic terms, ren is the ethical signal that stabilizes the feedback loop. Without it, systems—whether mechanical or social—drift into chaos.
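The stabilizing role of such a signal can be shown with a hedged toy simulation (the disturbance model and gain are invented purely for demonstration): a system buffeted by random noise drifts when no corrective reference exists, but stays near the reference once feedback is applied.

```python
import random

def simulate(feedback_gain, steps=50, reference=0.0):
    # Toy model, not a claim about real systems: a state is hit by a
    # random disturbance each step. With gain 0 there is no corrective
    # signal and the state wanders; with a positive gain it is pulled
    # back toward the reference each step.
    random.seed(0)  # identical disturbances for a fair comparison
    state, max_drift = 0.0, 0.0
    for _ in range(steps):
        state += random.uniform(-1.0, 1.0)            # disturbance
        state += feedback_gain * (reference - state)  # correction
        max_drift = max(max_drift, abs(state))
    return max_drift

print("max drift, no corrective signal:", round(simulate(0.0), 2))
print("max drift, with feedback:       ", round(simulate(0.5), 2))
```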
In Western AI discourse, this stabilizing role corresponds to the “alignment problem.” But while technical alignment seeks to code morality into algorithms, cybernetic ethics begins further upstream: aligning human intent before coding even begins.
Ethical AI, in this view, depends not on moralizing the machine but on purifying the aims of its creators. It also places cybernetics where it properly belongs: at the heart of our technological era.

The Western fear that AGI might destroy humanity reflects a loss of faith in our own ability to steer; the Chinese calm reflects trust in the social system’s ability to coordinate.
Cybernetics suggests that both views are incomplete. What matters is not who controls AI—the individual engineer or the state—but whether the system of control is transparent, feedback-sensitive and ethically tuned.
The divergent Western and Chinese attitudes toward AI could, if left unchecked, deepen into a civilizational rift: one paralyzed by moral anxiety, the other propelled by pragmatic ambition. Yet they also hold complementary insights.
The West brings ethical vigilance; China brings systemic foresight. The challenge is to integrate both into a planetary cybernetics of intention—a governance model that acknowledges the human dimension of technology while maintaining pragmatic realism.
