AI Think Therefore AI Am?

Ancient library of Alexandria as imagined by AI

The ancient Greeks, guardians of philosophy and medicine, wrestled with a timeless tension: *Could we?* versus *Should we?* The Hippocratic oath embodied this restraint—harnessing knowledge and technique while pledging “first, do no harm.” Aristotle warned that power without wisdom invites peril. History echoes this question. Western speculative surges, like Britain’s Railway Mania of the 1840s and the dot-com boom of the late 1990s, poured capital into infrastructure on promises of future riches—often yielding creative destruction: bankruptcies and excess followed by enduring networks that powered real growth. In contrast, pragmatic, directed efforts triumphed against resistance: the Royal Navy’s 18th- and 19th-century reforms in design and training, unpopular at the time, secured victories like Trafalgar; the Soviet space programme’s ruthless focus delivered Sputnik and Gagarin ahead of a fragmented West.

Today, artificial intelligence revives this ancient dilemma with unprecedented force. The West’s hyperscalers—Microsoft, Amazon, Google, Meta—have committed trillions to data centres through 2030, betting on explosive demand. China’s state-directed approach embeds AI into manufacturing and industry for tangible gains. As we navigate this multipolar race, the question remains: *Could we build ever more powerful systems?* Yes—but *should we*, and toward what end?

The Human Development Pull: Empowerment Through Inquiry

At its best, AI serves human flourishing by democratising knowledge and sharpening thought. Large language models (LLMs) already accelerate research, personalise learning, and aid decision-making—from refining ideas in conversation to assisting complex problem-solving. Scaled thoughtfully, AI could transform medicine with precise diagnostics, education with adaptive tutoring, and civic life with tools for informed participation. The pull here is toward tools that augment human agency, fostering curiosity and critical thinking rather than replacing them.

The Corporate Opportunity Pull: Monetisation and Scale

Corporations drive the engine, seeking efficiency and revenue. China’s model embeds AI deeply into production—smart factories with predictive maintenance and self-optimising robots slash costs by 20-30% while boosting output and reducing waste. This creates a multiplier: higher productivity fuels reinvestment, stronger exports, and compounded economic growth, turning AI into a core accelerator for the real economy. The West’s LLM strengths shine in generative tasks, but hyperscaler investments risk overbuild if consumer apps fail to materialise at scale—echoing past bubbles where infrastructure outpaced immediate returns.

Geopolitical Angles: Philosophies and Horizons

This divergence reflects deeper orientations: a pragmatic, production-focused philosophy in China (and, supporting it, Russia’s state-driven applications in energy and defence) versus a more speculative, financially driven approach in the West. China’s long-term planning—unconstrained by election cycles or quarterly investor demands—enables consistent industrial integration. The West’s shorter horizons foster innovation through open debate but complicate sustained execution. Here lies a crucial edge: free speech. Western LLMs, shaped by contested ideas, prioritise transparency and bias mitigation; authoritarian models often embed control, censoring dissent. Preserving open discourse isn’t just moral—it’s competitive, ensuring AI evolves through challenge rather than conformity.

An Ideal Future: AI Done Right

When *should we?* guides *could we?*, AI amplifies the human condition. In medicine, it aids early detection and personalised care, extending healthy lives. In education, adaptive systems tailor learning, bridging gaps and nurturing lifelong inquiry. In democratic life, tools empower citizens to verify claims, challenge regulations, and hold power accountable—fostering participation without manipulation. Morality thrives: AI as a partner in truth-seeking, preserving agency and compassion.

A Dystopian Descent: Fraud’s Abyss

Ignore *should we?* and we court Dante’s Eighth Circle—fraud’s domain of deception. The nightmare LLM is elite- or government-captured: like Wikipedia’s evolution from open commons to ideologically skewed gatekeeping by a narrow cadre, but amplified globally—proprietary, compute-locked, enforcing approved narratives under the guise of safety. Medicine suffers biased diagnostics; education hollows critical thought with rote reliance; democracy fractures under misinformation and surveillance. The moral cost: truth subordinated to power.

The choice shapes our era. Technology’s worth lies not in convenience or affluence alone, but in improving the human condition—sharpening judgment, deepening understanding, enabling better lives. My own encounters with AI have shown me that its true promise is not in replacing thought but in sharpening it—teaching us to ask questions we hadn’t yet dared to formulate.

In this age of powerful tools, the enduring challenge is to question everything: verify the tradesman’s quote, cross-check drug interactions, fact-check what children learn at school, scrutinise local plans or regulations. Democratised AI—accessible, transparent, uncaptured—equips us for this. But how? Through open-source models, public-private safeguards against monopoly, policies prioritising broad access over proprietary lock-in, and education fostering sceptical use. Only then does AI serve the many, turning *could we?* into a shared *should we?* that honours human curiosity over control.

What kind of intelligence will we build—one that questions boldly, or one that quietly conforms? The answer, for now, remains ours.

© Roger Mellie 2025