AI science fiction or future?

Artificial Intelligence (AI) has ceased to be a mere science-fiction promise and has instead become the driving force of an economic and social revolution.

Yet, it is crucial to distinguish between the excellence of current models — the Large Language Models (LLMs) that captivate us with their eloquence — and the ultimate goal, the so-called Artificial General Intelligence (AGI), whose full realization still faces significant conceptual and technical hurdles. Simply adding horsepower to a jet engine will not get us to Mars; what is needed is an entirely new spacecraft.

The “G” That Makes the Difference: AGI vs. AgI

AGI, true Artificial General Intelligence, has long been the Holy Grail of research: a system endowed with the flexibility, adaptability, and learning capacity of the human intellect across a wide range of tasks. It is not merely a fast machine, but an entity capable of generalizing knowledge and solving novel problems without explicit reprogramming.

However, reality is more nuanced. Advanced research has introduced a subtle yet fundamental distinction that deserves the attention of anyone considering the economic future:

•          AgI (functional but not conscious): With a lowercase “g,” this refers to intelligence that matches or surpasses human performance in functional tasks. These are highly capable, multimodal, embodied machines (whether physical or virtual) that excel without fatigue. Yet they lack consciousness (Cs), subjective experience (Se), and intangible factors (ITF) such as ethics and emotion.

•          AGI (the Grail): With a capital “G,” this represents a system that fully matches human intelligence: functional + general + conscious. Ignoring subjectivity and consciousness is seen by some as a premature engineering simplification, since human intelligence is intrinsically tied to emotional and contextual experience.

LLMs: Masters of Language, Not of Understanding

Large Language Models (LLMs) such as GPT-4 are the crown jewels of today’s AI. Trained on textual datasets so vast they dwarf the Library of Alexandria, they excel at processing and generating language. Yet, within the AGI framework, they remain Narrow AI, albeit highly sophisticated.

Their strength is also their limitation. LLMs operate through statistical pattern recognition based on their training corpus. In essence, they predict the most probable next word, functioning as an extremely advanced autocomplete mechanism. They lack true understanding (so-called “common-sense reasoning”) and, even more so, consciousness.
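The "advanced autocomplete" idea can be sketched in a few lines. The probability table below is entirely hand-made for illustration (a real LLM derives these probabilities from billions of parameters), but the selection principle is the same: look at the recent context, pick the most probable next token.

```python
# Toy sketch of next-token prediction. The probability table is
# hypothetical and hand-written; it stands in for a trained model.
next_token_probs = {
    ("the", "cat"): {"sat": 0.55, "ran": 0.25, "meowed": 0.20},
    ("cat", "sat"): {"on": 0.70, "down": 0.30},
    ("sat", "on"): {"the": 0.80, "a": 0.20},
}

def complete(prompt: str, steps: int = 3) -> str:
    """Greedily extend the prompt, one most-probable token at a time."""
    tokens = prompt.split()
    for _ in range(steps):
        context = tuple(tokens[-2:])        # a 2-token context window
        candidates = next_token_probs.get(context)
        if not candidates:                  # no statistics for this context
            break
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(complete("the cat"))  # → "the cat sat on the"
```

Note what the sketch makes obvious: nothing in the loop "understands" cats or sitting. It only consults frequencies, which is exactly the gap between fluent language generation and common-sense reasoning.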

The belief that scaling LLMs (increasing data and computational power) alone will lead to AGI collides with the law of diminishing returns and the looming scarcity of high-quality training data. Language is not the foundation of human intelligence; it is its product. To escape the dead end of the Transformer architecture, research must find new propulsion: architectures that integrate reasoning and continuous learning, as seen in Complex Adaptive Systems (CAS).
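The diminishing-returns argument rests on the empirical observation that model loss falls only as a power law of scale. The snippet below plots that shape with illustrative constants (the values are assumed for the sketch, not measured): each tenfold increase in parameters buys a visibly smaller absolute improvement.

```python
# Illustrative power-law scaling curve. N_C and ALPHA are assumed
# constants for the sketch: loss(N) = (N_C / N) ** ALPHA, so loss
# keeps falling with model size N, but ever more slowly.
N_C = 8.8e13   # hypothetical critical scale (parameters)
ALPHA = 0.076  # hypothetical scaling exponent

def loss(n_params: float) -> float:
    return (N_C / n_params) ** ALPHA

prev = None
for n in (1e9, 1e10, 1e11, 1e12):
    l = loss(n)
    gain = "" if prev is None else f"  (improvement: {prev - l:.3f})"
    print(f"N = {n:.0e}  loss ≈ {l:.3f}{gain}")
    prev = l
```

Running it shows the improvement column shrinking at every step: that shrinking column, combined with a finite supply of high-quality text, is the quantitative core of the "scaling alone is not enough" position.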

Timelines and Impacts: When and What Will Change?

Forecasts on the arrival of AGI are, by their nature, as volatile as a pre-election public budget. Historically, experts have been overly optimistic. Yet the acceleration of recent years has pulled the median arrival date (50% confidence) closer, with some estimates placing the advent of functional AgI between 2027 and 2032. The most pragmatic roadmap envisions stages:

1.        First Stage (2025–2032): Functional AgI. Level 4 expert systems (AgIT), without consciousness, but capable of matching or surpassing human experts in complex domains (e.g., oil drilling or basic diagnostics).

2.        Second Stage (2032–2050): Partial AGI. Transition toward general intelligence with partial consciousness or ethical constraints.

3.        Third Stage (2050–Unknown): Full AGI. AGI with both functional and conscious capabilities.

AGI as a Multiplier

The economic impact of AGI (or even functional AgI) would be transformational. If the transition is managed correctly, the key principle is the Keynesian multiplier: unprecedented productivity gains would free human capital for more creative and strategic roles.

Expected benefits include:

•          Medicine: Ultra-fast diagnostics, personalized treatment plans based on genetics, accelerated drug discovery through molecular simulation.

•          Science & Technology: Resolution of complex problems (quantum physics, theorem proofs), discovery of new materials, engineering optimization.

•          Global Crisis Management: Predictive tools for pandemics and natural disasters, innovative models for emission reduction.

The Real Risks: Not the Machine, but Human Use

Here enters the cautious, if not conservative, perspective. The greatest risk is not that AGI develops malicious intent, but that its lack of wisdom and moral consciousness makes it a dangerous tool in human hands.

1.        Existential Risk (X-Risk): Many industry leaders fear loss of control if an Artificial Superintelligence (ASI) emerges. The argument of Instrumental Convergence (an AI optimizing a secondary objective ends up conflicting with humans, e.g., resisting shutdown) is not science fiction but a genuine alignment problem.

2.        Socio-Economic Risk: AGI could automate tasks affecting a large majority of the workforce, with some estimates running to roughly 80%. If the wealth generated by automation is not adequately redistributed — harking back to Keynesian public spending to sustain demand — inequality between the super-rich 1% and the rest of the population may worsen dramatically.

3.        Ethical and Bias Risk: Trained on human data, AI will inherit our biases. The use of AgI devoid of moral consciousness by authoritarian regimes for mass surveillance, and the generation of deepfakes for disinformation, pose imminent threats to social and democratic stability.