2033: The Year of Artificial General Intelligence?

The term Artificial General Intelligence risks becoming a marketing label that tech giants attach to their flagship products. In the coming years, and perhaps as early as 2024, we may see a growing number of announcements proclaiming that this milestone has been reached, or nearly reached. It is essential to approach such claims with a critical mindset and realistic evaluation in order to understand the actual evolution and capabilities of AI: analyze concrete data and base predictions on measurable progress, avoiding sensationalism and unfounded hypotheses.

Over the last decade, artificial intelligence has grown at an exponential pace, revolutionizing various sectors and laying the foundation for an even more promising future. Starting around 2014, AI benefited from a notable increase in processing power thanks to the widespread adoption of GPUs and, later, of TPUs purpose-built for deep learning. These hardware advances made it possible to tackle computationally intensive tasks far more efficiently, leading to significant improvements in algorithm quality. During this period, deep neural networks became the focal point of research, with advanced unsupervised learning techniques enhancing machines' ability to recognize complex patterns in data.

The availability of data has played a crucial role in this evolution. With global digitization, the amount of data available for training AI models has grown exponentially. This has enabled the development of more robust and accurate algorithms capable of managing and analyzing vast amounts of information. Meanwhile, the adoption of AI has significantly increased, initially in tech sectors but rapidly extending to other fields such as healthcare, finance, and transportation. Investments in AI have grown, with companies recognizing the potential of these technologies to enhance operational efficiency and create new business models.

From 2020 to 2024, AI has continued to advance at an accelerated pace. Processing power has benefited from further hardware innovations, enabling even more advanced applications. Algorithm quality has continued to improve, with new techniques enhancing model efficiency and generalization capacity. The quantity and quality of data available for training have further increased, thanks to the expansion of digital data sources. Research has continued to make significant strides, with innovative discoveries such as large language models that have expanded machines’ ability to understand and generate natural language.

Looking ahead, from 2025 to 2034, AI processing power is expected to keep growing, thanks in part to emerging technologies like quantum computing, which promise to further transform the field. Algorithms will become increasingly sophisticated, with advances in self-supervised and zero-shot learning allowing machines to adapt to new tasks with little or no task-specific training data. Data availability will continue to increase, not only through traditional collection methods (including agreements with major publishers) but also through synthetic data and better data-management techniques. AI adoption will approach near-total penetration across all sectors of the economy and society, influencing every aspect of daily life. Research will not slow down, and may culminate in artificial general intelligence, where machines can simulate human cognitive abilities, marking a fundamental milestone in human history.

Let’s analyze the following graph, which highlights 2033 as a possible year for the singularity:

The AI evolution level in the graph is defined based on certain principles and assumptions, aiming to illustrate the technological progress in the field of artificial intelligence over time. Below is a detailed explanation of how to interpret this metric.

Defining the Level of AI Evolution

  1. Progressive Scale: The scale used shows a continuous and accelerating increase in AI capabilities. The levels are numerical and indicate qualitative and quantitative improvements in AI technologies, such as learning ability, natural language processing, computer vision, and other specific skills.
  2. Measurement Metrics:
    • 2014-2024: The assigned values (2, 3, 4, 5, etc.) reflect a gradual and incremental growth. This growth could be measured through metrics such as algorithm improvements, increased accuracy in specific applications, and the ability to handle complex data.
    • 2025-2034: The future projection shows faster growth (30, 36, 43, etc.), indicating an acceleration in AI research and adoption. This could include significant advancements in neural networks, deep learning, and AI integration across various sectors.
  3. AGI Level:
    • AGI (Artificial General Intelligence): The dashed line at level 100 represents a hypothetical benchmark, labeled the ‘singularity,’ where AI reaches human-like cognitive abilities. This level includes not only the ability to learn and adapt to new tasks but also human-level understanding, reasoning, and creativity.
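The projection above can be sketched numerically. Only three of the graph's future values are quoted in the text (30, 36, 43 for 2025-2027); assuming those points imply a roughly constant annual growth rate, a simple extrapolation shows when the curve would cross the level-100 AGI line. The function below is a hypothetical illustration of that reasoning, not the author's actual model, and the exact crossing year shifts with the values assumed:

```python
def year_crossing_agi(start_year=2025, start_level=30.0,
                      annual_growth=(43 / 30) ** 0.5, agi_level=100.0):
    """Return the first year the extrapolated AI evolution level
    reaches agi_level, assuming constant annual growth.

    The default growth rate (~20%/year) is inferred from the
    hypothetical graph values 30 -> 43 over two years (2025-2027).
    """
    year, level = start_year, start_level
    while level < agi_level:
        year += 1
        level *= annual_growth
    return year

print(year_crossing_agi())  # -> 2032 with these assumed values
```

With these particular numbers the crossing lands in 2032; small changes in the sampled values or growth rate move it to 2033, which is consistent with the early-2030s estimate the graph illustrates.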

Sources of Growth

  • Research and Development: Ongoing innovations in machine learning, neural networks, and related technologies.
  • Industrial Adoption: Implementation of AI technologies in various sectors such as healthcare, finance, transportation, etc.
  • Increase in Data and Computational Capacity: Greater availability of data and computing power that fuels the training of increasingly complex AI models.

This representation is a simplified yet plausible overview of AI progress over time, based on hypotheses and projections that may vary as technologies and scientific discoveries advance.