AI series: Generative AI models and the quest for human-level artificial intelligence

Diego Miranda-Saavedra explores some of the merits and limitations of modern machine learning models, and considers where these ‘intelligent’ systems might sit in the constellation of human capabilities.

Tags: AI, Large language models, Machine learning, Deep neural networks

Author: Diego Miranda-Saavedra

Published: April 29, 2024

Generative artificial intelligence (AI) models have taken the world by storm over the past year. The human-like outputs of these systems, and the recent publication of a guideline to determine the degree of consciousness of machines, have again raised the question of whether machines will soon be able to replicate human intelligence. In this article, we discuss some of the merits and limitations of modern machine learning models, and also provide a general view of human intelligence and the position of “intelligent” systems in the constellation of human capabilities.

Large language models (LLMs) such as ChatGPT are designed to process and understand natural language, and to generate human-like text in response to prompts and questions. This is achieved thanks to a specific type of deep learning architecture called the Transformer, which consists of an encoder and a decoder, each made up of some number N of blocks; the input text passes through these blocks and is eventually transformed into predicted (and contextualised) output text (Figure 1).1 LLMs are trained on vast bodies of text so that they can learn complex patterns and relationships between words in sentences, across different contexts.

Figure 1: Architecture of the Transformer. The left-hand block is the encoder; the right-hand block is the decoder.
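
To make the mechanics a little more concrete, the scaled dot-product attention at the heart of every Transformer block can be sketched in a few lines of Python. This is a minimal, illustrative NumPy version, not the full encoder-decoder of Figure 1: in a real model the queries, keys and values come from learned linear projections, and attention is applied across many heads and many layers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation inside a Transformer block: each query attends to all keys,
    and the resulting weights mix the value vectors into a contextualised output."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                             # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                          # weighted sum of values

# Toy example: 4 tokens, each represented by an 8-dimensional vector
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
# In a real model Q, K and V are learned projections of the token embeddings
context = scaled_dot_product_attention(tokens, tokens, tokens)
print(context.shape)  # (4, 8): one contextualised vector per token
```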

The two main types of LLMs are autoregressive models and autoencoding models. Autoregressive models such as OpenAI’s GPT (Generative Pre-trained Transformer) generate text by predicting the next word in a sequence given the previously emitted words. Autoencoding models such as Google’s BERT (Bidirectional Encoder Representations from Transformers) also aim to produce coherent and contextually relevant text, but they do so by attempting to predict missing words (from a corrupted version of the text) while considering the surrounding context.
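
As a rough illustration of the difference, the snippet below queries a small autoregressive model (GPT-2) and a small autoencoding model (BERT) through the Hugging Face transformers library. This is only a sketch, assuming the library is installed and the two public models can be downloaded; ChatGPT and Google's production systems are, of course, vastly larger.

```python
from transformers import pipeline

# Autoregressive (GPT-style): predict the next words given the previous ones
generator = pipeline("text-generation", model="gpt2")
print(generator("The trophy would not fit in the suitcase because",
                max_new_tokens=10)[0]["generated_text"])

# Autoencoding (BERT-style): predict a masked word from its surrounding context
unmasker = pipeline("fill-mask", model="bert-base-uncased")
for guess in unmasker("The trophy would not fit in the [MASK] because it was too small."):
    print(guess["token_str"], round(guess["score"], 3))
```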

LLMs are engineering marvels capable of producing syntactically flawless, coherent, and remarkable responses to complex requests such as question answering, text summarisation, computer code generation, document classification, text generation and sentiment analysis. We tend to associate linguistic skills with intelligence because communication via an elaborate language system is largely synonymous with the human intellect. So, do the linguistic skills of tools like ChatGPT mean these systems are close to displaying human-level intelligence? Proponents of the Turing test might well argue “yes”. Most would still say “no”.

Speaking and understanding

The Turing test was proposed by Alan Turing in the 1950s. It operates on the basis that if a person is unable to tell whether the entity they are interacting with over typed messages is a human or a machine (irrespective of the answers being correct), the machine is said to have achieved human-level intelligence.2 There’s some debate over whether ChatGPT could pass this test; it has been specifically trained not to impersonate humans and will frequently preface its responses with the phrase, “As a large language model…”, thus giving the game away. But others are impressed at what developers like OpenAI have been able to achieve in terms of building artificial models that can sustain realistic-sounding conversations.

This, though, is where the Turing test falls apart as a means of assessing machine intelligence. The ability to “mimic human chatter” does not by itself suggest that a machine understands what it is ‘reading’ or ‘writing’ in the same way a human would. Consider a different set of tests, the Winograd schemas – pairs of sentences that differ by only one or two words and whose resolution cannot be determined using statistics alone, but instead requires common sense and an understanding of the physical world.3 For example, take the following sentence:

The trophy would not fit in the suitcase because it was too small/large.

Humans would infer that if the last word of that sentence is small, then “it” refers to the suitcase, whereas if the word is large then “it” refers to the trophy. ChatGPT v3.5 was unable to make this inference when the question was first put to it (see Figure 2), although a later test finds it now can – as can other LLMs. Some suggest that this type of improvement comes from training ChatGPT to do better on some of the tasks that are routinely used to highlight model limitations on social media, at the expense of giving worse answers in other contexts. But could this improvement be due to ChatGPT and other models suddenly having acquired common sense and an understanding of the physical world? A more complex set of Winograd schema questions, the WinoGrande dataset, suggests not: humans still outperform computers on these tests.4

Figure 2: GPT-3.5’s inconclusive answer to a typical Winograd schema question.
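
One way to probe such a schema is to resolve the pronoun both ways and ask a language model which completed sentence it finds more probable, as in the hedged sketch below (assuming PyTorch, the Hugging Face transformers library and the small public GPT-2 model). Note that this is exactly the kind of surface statistics that Winograd schemas are designed to defeat: a lower loss tells us which reading the model prefers, not whether it understands why a trophy cannot fit into a suitcase that is too small.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_neg_log_likelihood(sentence):
    """Average per-token loss assigned by the model: lower means 'more plausible'."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

# The two possible resolutions of the pronoun in the schema
readings = {
    "suitcase": "The trophy would not fit in the suitcase because the suitcase was too small.",
    "trophy": "The trophy would not fit in the suitcase because the trophy was too small.",
}
losses = {referent: mean_neg_log_likelihood(s) for referent, s in readings.items()}
print(losses)
print("model prefers:", min(losses, key=losses.get))
```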

Machines will likely beat us all at these puzzles one day, in the same way that they already beat the world champions of chess and Go, can translate across multiple languages, and diagnose rare forms of cancer that escape well-trained doctors. These narrow AI applications that spectacularly outperform humans at very specific tasks will become more and more common. But will a multiplicity of narrow AI applications soon lead to general AI that can compete with, or beat, humans at all tasks? Will human-level linguistic capabilities inevitably result in machines acquiring human-level intelligence in the not-too-distant future? Some developers and researchers certainly believe or hope so. Others remain sceptical.

Thinking and learning

What is “intelligence”, anyway? More than 70 working definitions currently exist.5 One that focuses specifically on human-level intelligence while excluding lower types of animal intelligence is this: “intelligence can be understood as the ability to generate a range of plausible scenarios about how the world around you may unfold and then base sensible actions on those predictions”.6 Does this come anywhere close to describing the way LLMs work, or indeed any other machine learning algorithm?

In my view, part of the reason why attaining human-level intelligence remains a distant goal has to do with how machines think differently from us.

The goal of a machine learning algorithm is always to optimise a particular function – whether the machine is playing chess (the goal is to win) or classifying images (the goal is to correctly classify as many images as possible). The majority of problems that human intelligence has to “solve”, however, have no such clear goal. Consider a simple chatbot standing in for a human on a customer helpline: what scalar quantity should it look to optimise? Is it ensuring that engagement with the customer is informative and supportive? Or is it perhaps building a lasting relationship with the customer? In either case, how will these quantities be measured? Dealing with this type of real-world problem, where the variable to optimise is not well defined, represents a formidable obstacle for the development of machine learning algorithms whose behaviour must approximate human intelligence.
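
The contrast can be made concrete with a toy sketch (the functions and numbers below are purely illustrative): an image classifier has a crisp, scalar objective to minimise, whereas a customer-helpline chatbot has no obvious quantity to put in its place.

```python
import numpy as np

def cross_entropy(predicted_probs, true_label):
    """Image classification has a well-defined goal: penalise the model whenever
    it assigns low probability to the correct class."""
    return -np.log(predicted_probs[true_label])

print(cross_entropy(np.array([0.7, 0.2, 0.1]), true_label=0))  # ~0.36: confident and correct
print(cross_entropy(np.array([0.1, 0.2, 0.7]), true_label=0))  # ~2.30: confident and wrong

def helpline_objective(conversation):
    """What single scalar should a customer-helpline chatbot optimise?
    Informativeness? Customer satisfaction? The likelihood of a lasting
    relationship? The difficulty is that the objective itself is not well defined."""
    raise NotImplementedError("choosing the quantity to optimise is the hard part")
```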

Moreover, machines can be surprisingly easy to fool. For example, placing an object next to the one we are trying to classify can confuse image classification algorithms – a well-known example shows a patch being placed next to a banana, which makes the deep neural network (DNN) classify the banana as a toaster with a high degree of confidence. We can also fool image classification networks by showing the same object under different lighting conditions and orientations, such as when we flip a school bus on its side (as in an accident). A DNN fails at this trivial mental rotation because learning algorithms cannot generalise knowledge to unseen (or “out of distribution”) examples – which imbues them with no small amount of “brittleness”.7 Compare this to the human mind, which learns in a semi-supervised manner: we need only be shown a few guiding examples to be able to extrapolate knowledge. This is a clear evolutionary adaptation, since most real-world learning is semi-supervised: we do not need to explore every road to learn to drive, nor do every possible differentiation exercise to become confident at calculus. Likewise, we do not need to become fully bilingual in a language before we start combining newly learned words to try to explain complex concepts.
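
This brittleness on out-of-distribution examples is easy to reproduce at toy scale. The hedged sketch below (using scikit-learn's small handwritten-digits dataset rather than school buses) trains a standard classifier on upright images and then tests it on the same images rotated by 90 degrees; accuracy typically collapses, even though a human would find the rotation trivial.

```python
import numpy as np
from scipy.ndimage import rotate
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.images, digits.target, random_state=0)

clf = LogisticRegression(max_iter=5000)
clf.fit(X_train.reshape(len(X_train), -1), y_train)

# Upright test images: the distribution the model was trained on
print("upright accuracy:", clf.score(X_test.reshape(len(X_test), -1), y_test))

# The same images rotated 90 degrees: "out of distribution" for the model,
# yet a trivial mental rotation for a human
X_rot = np.array([rotate(img, 90, reshape=False) for img in X_test])
print("rotated accuracy:", clf.score(X_rot.reshape(len(X_rot), -1), y_test))
```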

Next to the enormous parameter counts of models like GPT-4 (undisclosed by OpenAI, though widely rumoured to run into the trillions), the human brain seems a much more parsimonious learner. Progress in AI would therefore perhaps come faster if we could teach machines to learn from a few (or no) labelled examples instead of being so heavily dependent on terabytes of labelled data (supervised learning) and on our own interpretation of the world.8

Another major limitation of algorithms for achieving human-like adaptive learning in changing environments is the fact that they cannot keep learning without forgetting previously learned training data. This phenomenon is called catastrophic forgetting,9 and it arises from the stability-plasticity dilemma: a certain degree of plasticity is required to integrate new knowledge, but radically new knowledge (e.g. large weight changes in a DNN) disrupts the stability necessary for retaining previously learned representations (Figure 3). In other words, weight stability is synonymous with knowledge retention, but it also introduces the rigidity that prevents the learning of new tasks.10

Figure 3: Illustration of catastrophic forgetting and ideal learning in a two-class classifier.
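
Catastrophic forgetting is straightforward to demonstrate on toy data. The sketch below is a minimal illustration assuming scikit-learn, with two purely synthetic "tasks": a small network is trained on task A, then trained further on an unrelated task B with no rehearsal of A, and its accuracy on task A typically falls back towards chance.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

def make_task(seed):
    # Two binary classification problems with unrelated decision boundaries
    return make_classification(n_samples=2000, n_features=20, n_informative=10,
                               n_classes=2, random_state=seed)

X_a, y_a = make_task(1)  # task A
X_b, y_b = make_task(2)  # task B

net = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
net.partial_fit(X_a, y_a, classes=[0, 1])   # first call fixes the set of labels
for _ in range(200):                        # learn task A incrementally
    net.partial_fit(X_a, y_a)
print("accuracy on A after learning A:", round(net.score(X_a, y_a), 2))

for _ in range(200):                        # now learn task B, with no rehearsal of A
    net.partial_fit(X_b, y_b)
print("accuracy on A after learning B:", round(net.score(X_a, y_a), 2))  # typically near chance
print("accuracy on B after learning B:", round(net.score(X_b, y_b), 2))
```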

Although some ingenious approaches have been developed for mitigating catastrophic forgetting (approaches much smarter than simply building a new network for each new task), it has also been shown that no single method can solve catastrophic forgetting while allowing incremental learning in every possible situation.11 The problem with catastrophic forgetting is not only that it contradicts a fundamental characteristic of human intelligence – the ability to learn within, and adapt to, changing environments;12 catastrophic forgetting is also a major bottleneck for the development of adaptable systems that learn incrementally from the constant flow of data in the real world, such as autonomous vehicles, recommender systems, anomaly detection methods and, in general, any device embedded with sensors. Moreover, the development of continual learning methods is key not just for machine intelligence, but also for learning scalability: by 2025 the world will be producing some 175 zettabytes of data annually, of which we will only be able to store between 3% and 12%. Thus, for learning to be scalable in the future, continual learning methods will need to be able to process data faster and in real time, learn on the fly and then discard the data, much like humans do.

How are we wired?

One further bottleneck for machines emulating human-level intelligence is that we do not yet understand the circuitry of the brain well enough to be able to reproduce it, and therefore we have been working with misleading models. For instance, the realisation of the “all-or-nothing” nature of action potentials (i.e. there is no such thing as the partial firing of a neuron) led McCulloch and Pitts to propose the concept of the artificial neuron and suggest that networks of neurons could equally be modelled as (all-or-nothing) logical propositions.13 From this point onwards, the dominant idea over most of the past century has been that the brain is essentially a computer. But even on a superficial level of analysis, brains and computers have very different architectures and behaviours: computers make optimal use of virtually unlimited memory and an extraordinary capacity for brute-force searching. On the microscopic level, networks of neurons cannot really be modelled as logical propositions because they do not operate in this manner: a single neuronal synapse is an environment harbouring hundreds of proteins that interact with one another in complex networks with precise spatiotemporal coordinates. Neurons process information and generate not just electrical signals but also discrete biochemical changes that occur in cycles rather than linearly. Moreover, a system like the brain responds to stimuli over long periods of time, which can effect changes in its own behaviour. Understanding how at least some cognitive tasks are performed at an algorithmic level would likely translate into major progress towards emulating human-level intelligence.14
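
For context, the McCulloch-Pitts abstraction can be written down in a few lines: binary inputs, a weighted sum, a threshold, and an all-or-nothing output, from which simple logical operations follow. The sketch below is a toy illustration of that historical idea, not of how biological neurons (with the biochemical complexity described above) actually work.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """All-or-nothing unit: fire (1) if the weighted sum reaches the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b):  # fires only if both inputs fire
    return mcculloch_pitts([a, b], weights=[1, 1], threshold=2)

def OR(a, b):   # fires if at least one input fires
    return mcculloch_pitts([a, b], weights=[1, 1], threshold=1)

def NOT(a):     # inhibitory input: fires only when the input is silent
    return mcculloch_pitts([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```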

GPT-4’s impressive capabilities can make people believe that some AI systems are conscious on a human level (as opposed to the more limited consciousness displayed by animals), but this is an illusion. Consciousness is another essential quality that we can easily recognise when we see it, but which (like intelligence) is extraordinarily difficult to define – other than consciousness simply being everything you experience, or, more formally put, the “awareness of internal and external existence”.15 Could machines eventually achieve consciousness? This is a controversial and key question because, besides intelligence, having a degree of consciousness on the level exercised by humans is believed to be necessary for displaying goal-directed behaviour.16 Recall that machine learning algorithms do have the general goal of optimising functions, but these goals are determined by human programmers, not the machines themselves.

A fundamental question regarding the development of consciousness is this: if an AI system were close to consciousness, how would we know? Butlin and colleagues recently explored the question in a groundbreaking paper where they compiled a list of “indicator properties” drawn from various neuroscientific theories of consciousness (since no theory is clearly superior). The idea is that the more boxes an AI system ticks, the more likely it is to be close to being conscious. The authors argue that a failure to identify artificial consciousness has important moral implications because an entity that exhibits consciousness invariably influences how we feel it should be treated. While this is likely true, humans do not necessarily need to feel that an entity is conscious in order to develop empathy. Emotional attachment is a basic human instinct, and we have a tendency to anthropomorphise. You may remember the story of hitchBOT, a clearly unconscious yet friendly robot invented by David Smith of McMaster University. hitchBOT could barely speak and its only mission was to hitchhike. It ended up travelling throughout Europe and North America thanks to the sympathy it generated. The “beheading” of the robot in Philadelphia in 2015 had huge repercussions across the planet because thousands of strangers had developed empathy for, and become emotionally attached to, hitchBOT despite never having met it.17 Do you think that a machine is likely to develop a degree of human-like empathy anytime soon?

Finally, another fundamental human trait that is absent from machines, and which is particularly important in these trying times, is our capacity for remaining hopeful, which can be seen as a post-hoc rationalisation of our survival instinct. Being hopeful means that we think things will improve beyond what would be reasonable to predict for the immediate or medium-term future given the most recently available data points. Jane Goodall defines hope as “a crucial survival trait that enables us to keep going in the face of adversity”. Desmond Tutu gave an equally ethereal definition: “Hope is being able to see that there is light despite all the darkness”.18 One fundamental aspect of hope is its undeniable association with agency, i.e. our capacity to act voluntarily in a given environment. Even when the odds are against the desired outcome, hope makes us take action, which in turn fuels more hope, establishing a dynamic form of self-stimulation sustained over thousands of ethical actions without necessarily having a clear variable to optimise – something machines are incapable of. And one very interesting thing about hope is that its effects can be quantified in the short term: hope is much better than intelligence and personality at predicting academic performance,19 as well as performance in the workplace, with hopeful workers reported to be 14% more productive.20

Closing thoughts

Generative language-based models have reignited much interest in the possibility of artificially recreating human-level intelligence. Seminal breakthroughs though they are, we must not forget that LLMs are, essentially, just very sophisticated pattern recognition systems which, when trained on even larger datasets, may become even better at predicting the most appropriate responses to different prompts. LLMs are incomplete models of thought, though, plagued by practical problems that we have not discussed here, such as giving incorrect answers, security breaches, privacy concerns regarding personal data used in their training datasets, algorithmic opacity and an inability to comply with the EU’s General Data Protection Regulation (GDPR), and their amplification of web bias, which can result in answers that discriminate against different groups.21 Even if these limitations are fixed one day, the capabilities of LLMs still would not approximate general human intelligence.

Generative models are just one type of narrow AI application. Such applications will continue to evolve at a very fast pace and produce breakthroughs of paramount importance. Some of the latest breakthroughs in the biomedical field include the discovery of new antibiotics against deadly antibiotic-resistant bacteria22 and AlphaFold’s accurate prediction of a protein’s structure from its amino acid sequence.23 24 The number of ways an amino acid sequence may fold is astronomical. Thus, being able to predict a protein’s structure as accurately as experimental measurements (by X-ray crystallography or cryo-electron microscopy) represents a gigantic step towards understanding a protein’s likely function and regulation, how it may be targeted by drugs (including new antibiotics) to combat diseases, and how it may be manipulated to guide vaccine design, as was done during the coronavirus pandemic.25 Most impressively, AlphaFold is able to predict the structural effects of single amino acid changes (mutations), which is essential for engineering new proteins as well as for understanding evolutionary history and mechanistic aspects of diseases.26

Being able to harness the power of narrow AI applications and delegate some tasks to machines will allow humans to focus on those tasks at which we do better than machines. Augmented intelligence is the name given to this close collaboration between humans and machines, which was first proposed in the 1950s and is now finally within reach.27 28 Current examples of devices that are a functional extension of human beings include virtual reality headsets that expand the users’ senses and perceptions, implantable technologies that replace access cards, and, in general, any software that automates research and data analysis. Since such technological developments might make us more “intelligent”, or at least more productive, will we then still need machines that display human-like intelligence?

The official position of some major players like Microsoft is to not even attempt to replicate human intelligence but to produce “AI centred on assisting human productivity in a responsible way”.29 Still, a recent paper that reported GPT-4’s impressive performance at solving a number of difficult tasks (in the fields of mathematics, coding, vision, medicine, law and psychology) suggests that GPT-4 displays “sparks of artificial general intelligence”. This is in line with OpenAI’s clearly stated goal of developing human-level intelligence. However, the debate over whether we are getting any closer to replicating intelligence with just a few impressive generative models that simply recombine and duplicate data on which they have been trained is self-limiting because it takes a very narrow view of human intelligence. For one thing, mindlessly generating text (“speaking”) and thinking are two very different things. It has been shown that while LLMs may excel at formal linguistic competence (understanding language rules and patterns), their performance on tasks that evaluate human thought in the real world (functional linguistic competence) is very limited. Moreover, GPT-4 is unable to reason. We can define reasoning as the process of drawing justifiable conclusions from a set of premises, which is also a key component of intelligence. When given a set of 21 distinct problems ranging from simple arithmetic and logic puzzles to spatial and temporal reasoning, and medical common sense, GPT-4 proved incapable of applying elementary reasoning techniques.

OpenAI’s newest headline-grabbing development, Sora, shares many of the limitations of GPT-4. Sora is a model that can generate video clips from text prompts – but while it may prove useful for content creation, it seems incapable of understanding the real world. OpenAI’s defence is that Sora still struggles with “simulating the physics of a complex scene” but that it “represents a foundation for models that can understand and simulate the real world”. This is, OpenAI believes, key for training models that will help solve problems that require simulating the physical world (e.g. rescue missions), and eventually for achieving general AI. However, it is suspected that Sora’s limitations in understanding the physical world have nothing to do with physics. For example, in a generated video of a monkey playing chess in a park, we see a 7x7 board and three kings. This is likely not an error of insufficient training data or of computational power; it is an error that reveals a failure to discern the cultural regularities of the world, making wrong generalisations despite ample evidence that chess boards are universally 8x8 and that each player has one king. A video of a stylish woman wandering in Tokyo is incorrect for the same reason: nobody takes two consecutive steps with the same foot (about 30 seconds into the video). Sora also does not appear to understand cause and effect; for example, in a video of a basketball that makes a hoop explode, the net appears to be restored automatically following the explosion. Sora uses arrangements of pixels to predict new pixel configurations, but without trying to understand the cultural context of the images. This is why the images and videos generated by Sora seem correct at the pixel level but globally wrong. Thus, OpenAI’s claim that “scaling video generation models is a promising path towards building general purpose simulators of the physical world” is open to doubt.

LLMs do not yet approximate the human brain; generative video models do not approximate the physical world; and human intelligence is so much more than combining formal linguistic competence with complete models of thought, or making creative videos that respect the physical constraints of the world. Human intelligence is not limited to specific domains either, but exists in the open to challenge currently held views. Ask Noam Chomsky and he will respond that generative models like ChatGPT are essentially “high-tech plagiarism”. Human consciousness includes a sense of self, which machines will not be able to replicate anytime soon – or perhaps ever, since a human brain and a computer are not the same. Human consciousness is coupled with curiosity, imagination, intuition, emotions, desires, purpose, objectives, wisdom and even humour. A good sense of humour, for instance, means thinking outside the box and connecting concepts and situations in novel ways, which is something that machines are unable to do. Also, by thinking outside the box, humans are able to consciously ask a variety of questions – the most extraordinary of which have led to major leaps in our understanding of the world around us.

Reasonable questions can be posed by many and answered logically (some even by machines) using the standard scientific process of experimental design, controls and hypothesis validation. In this context, the faster and more efficient exploration of search space by learning methods, complemented by the delegation of repetitive tasks to machines, will allow scientists to conduct experiments at greater scale while focusing on designing optimal solutions. Beyond reasonable questions and expected results is the concept of serendipity that machines cannot yet be made to grasp. Some of the greatest discoveries in the history of science are indeed serendipitous (accidental), including the discovery of insulin, penicillin, smallpox vaccination, the anti-malarial drug quinine, X-rays, nylon and the anaesthetic effects of ether and nitrous oxide.30 Turning accidents into discoveries requires having a questioning mind that can view data from several perspectives and connect seemingly unrelated pieces of information instead of discarding unusual results right away.

And yet beyond serendipitous discoveries we have extraordinary questions, which machines are as yet incapable of asking. Extraordinary questions lie outside of our current frame of knowledge and require an illogical step that is often the product of letting one’s mind wander freely.31 A classical example here is when Einstein was trying to modify Maxwell’s equations so that they were no longer in contradiction with the constant speed of light that had been observed. After trying to modify these equations for years, Einstein eventually realised that it was not Maxwell’s fault. Rather, our notion of time was incorrect. Einstein thus stumbled upon the very question that led to the idea that the rate at which time passes depends on one’s frame of reference. While machines follow rules, the revolutionary ideas of Einstein, Newton, Darwin, Galileo, Wittgenstein and many others did not follow any rules established at the time. Therefore, the real danger in thinking that we can rely on “intelligent” machines to achieve a human level of imagination, intuition, wisdom or purpose anytime soon is that the world will become an even more statistically predictable place.

Also in the AI series:

What is AI? Shedding light on the method and madness in these algorithms
Healthy datasets for optimised AI performance


About the author
Diego Miranda-Saavedra, PhD, is a data scientist and a financial investor. His book How To Think About Data Science (Chapman & Hall / CRC Press) was published in December, 2022.
Copyright and licence
© 2024 Diego Miranda-Saavedra

Text, code, and figures are licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) International licence, except where otherwise noted. Thumbnail image by Jamillah Knowles / Better Images of AI / Data People / Licenced by CC-BY 4.0.

How to cite
Miranda-Saavedra, Diego. 2024. “Generative AI models and the quest for human-level artificial intelligence.” Real World Data Science, April 29, 2024. URL

References

  1. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I. Attention is All you Need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, California (USA), 2017. ISBN: 9781510860964.↩︎

  2. Turing A. Computing Machinery and Intelligence. Mind, LIX(236):433-460, 1950.↩︎

  3. Levesque HJ, Davis E, Morgenstern L. The Winograd Schema Challenge. In KR’12: Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, June 2012, Rome, Italy. AAAI Press, Palo Alto (CA), USA, 2012. ISBN: 9781577355601.↩︎

  4. Sakaguchi K, Le Bras R, Bhagavatula C, Choi Y. WinoGrande: An Adversarial Winograd Schema Challenge at Scale. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 34(05), 8732–8740. February 2020, New York (NY), USA. AAAI Press, Palo Alto (CA), USA, 2020. ISSN: 2159–5399.↩︎

  5. Legg S, Hutter M. A Collection of Definitions of Intelligence. Proceedings of the 2007 Conference on Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006, 17-24. IOS Press, Amsterdam, the Netherlands, 2007. ISBN: 978-1-58603-758-1.↩︎

  6. Suleyman M, Bhaskar M. The Coming Wave. Bodley Head, London, UK, 2023. ISBN-10: 1847927483.↩︎

  7. McCarthy J. From Here to Human-Level AI. Artificial Intelligence, 171(18):1174–1182, 2007.↩︎

  8. LeCun Y, Bengio Y, Hinton G. Deep Learning. Nature, 521(7553):436–444, 2015.↩︎

  9. McCloskey M, Cohen NJ. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem. Psychology of learning and motivation, 24:109–165, 1989.↩︎

  10. Abraham WC, Robins A. Memory Retention - the Synaptic Stability Versus Plasticity Dilemma. Trends in Neurosciences, 28(2):73–78, 2005.↩︎

  11. Kemker R, McClure M, Abitino A, Hayes T, Kanan C. Measuring Catastrophic Forgetting in Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). AAAI Press, Palo Alto (CA), USA, 2018. ISBN: 9781577358008.↩︎

  12. Hadsell R, Rao D, Rusu AA, Pascanu R. Embracing Change: Continual Learning in Deep Neural Networks. Trends in Cognitive Sciences 24(12): 1028–1040, 2020.↩︎

  13. McCulloch W, Pitts W. A Logical Calculus of Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5(4):115–133, 1943.↩︎

  14. Brooks R, Hassabis D, Bray D, Shashua A. Is the Brain a Good Model for Machine Intelligence? Nature 482: 462-463, 2012.↩︎

  15. Koch C. What Is Consciousness? Nature, 557:S8–S12, 2018.↩︎

  16. DeWall C, Baumeister R, Masicampo R. Evidence that Logical Reasoning Depends on Conscious Processing. Consciousness and Cognition 17(3): 628, 2008.↩︎

  17. Darling K, Nandy P, and Breazeal C. Empathic Concern and the Effect of Stories in Human-Robot Interaction. 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 2015, Kobe, Japan, August 31 - September 4, pp. 770-775. IEEE, Washington (DC), USA, 2015. ISBN: 9781467367042.↩︎

  18. Goodall J, Abrams D. The Book of Hope: A Survival Guide for an Endangered Planet (1st Edition). Viking Press, New York (NY), USA, 2021. ISBN-10: 024147857X.↩︎

  19. Day L, Hanson K, Maltby J, Proctor C, Wood A. Hope Uniquely Predicts Objective Academic Achievement Above Intelligence, Personality, and Previous Academic Achievement. Journal of Research in Personality 44(4): 550-553, 2010.↩︎

  20. Reichard RJ, Avey JB, Lopez S, Dollwet M. Having the Will and Finding the Way: A Review and Meta-Analysis of Hope at Work. The Journal of Positive Psychology 8(4): 292-304, 2013.↩︎

  21. Baeza-Yates R. Bias on the Web. Communications of the ACM, 61(6):54–61, 2018.↩︎

  22. Liu G et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nature Chemical Biology 19: 1342-1350, 2023.↩︎

  23. Senior AW et al. Protein structure prediction using multiple deep neural networks in the 13th Critical Assessment of Protein Structure Prediction (CASP13). Proteins: Structure, Function and Bioinformatics 87(12):1141–1148, 2019.↩︎

  24. Senior AW et al. Improved protein structure prediction using potentials from deep learning. Nature 577:706–710, 2020.↩︎

  25. Higgins MK. Can We AlphaFold Our Way Out of the Next Pandemic? Journal of Molecular Biology 433(20):1–7, 2021.↩︎

  26. McBride JM, Polev K, Abdirasulov A, Reinharz V, Grzybowski BA, Tlusty T. AlphaFold2 Can Predict Single-Mutation Effects. Phys. Rev. Lett. 131:218401, 2023.↩︎

  27. Zheng NN, Liu ZY, Ren PJ, Ma YQ, Chen ST, Yu SY, Xue JR, Chen BD, Wang FY. Hybrid-Augmented Intelligence: Collaboration and Cognition. Frontiers of Information Technology & Electronic Engineering 18:153-179, 2017.↩︎

  28. Bryant PT. Augmented Humanity: Being and Remaining Agentic in a Digitalized World. Palgrave Macmillan, Cham, Switzerland. ISBN: 9783030764449.↩︎

  29. Lenharo M. If AI Becomes Conscious: Here’s How Researchers Will Know. Nature, 24 August 2023.↩︎

  30. Roberts RM. Serendipity: Accidental Discoveries in Science (1st Edition). Wiley-VCH, Weinheim, Germany, 1989. ISBN: 0471602035.↩︎

  31. Yanai I, Lercher M. What Is The Question? Genome Biology 20(1):289, 2019.↩︎