“How AI Thinks” – An ipXperience with CEO and author Nigel Toon 

With all the talk of Artificial Intelligence (AI) at CES 2024, ipXchange thought it would be great to get some insights from an expert in the field. At Graphcore’s London office, Guy sat down with Nigel Toon, Graphcore’s CEO and author of the recently published “How AI Thinks”, for a different kind of ipXperience.

AI is a tool 

To begin, Guy poses the question on everyone’s mind since AI’s emergence into everyday life: how will AI affect our future? Nigel answers with an analogy that simplifies what might otherwise seem like an intimidating technology. AI is a tool, like a piece of paper and a pencil. When trying to solve a problem, we often write things down to arrange our thoughts and augment our brain’s ability to store and process information. AI is merely an extension of this, helping us to overcome previously unsolvable problems.

A key example that Nigel gives is AI’s contribution to humankind’s understanding of protein folding, which can be applied to many problems in the human body, such as cancer. With this new knowledge, we can determine how medicines will affect cells and DNA, and what the key benefits and side effects might be during treatment. In a world where developing a new drug can cost £1.4 billion, AI can help reduce this cost by a factor of ten.

What is artificial intelligence? 

Guy then asks Nigel where he sees AI today in terms of faster, more efficient data analysis, and where this crosses over into adaptive learning for machines – a question based on Guy’s own experiences and discussions at CES. Nigel explains the role of AI in sensor-driven embedded applications, and how AI can be used to build a better model of sensor data, providing a system with information relevant to the application through contextualisation. As Nigel explains, there is a difference between data and information – a matrix of pixels is not directly a face, let alone an identity.

After a description of how AI can be used to improve touch tracking at the edge of a screen, Guy prompts Nigel to explain how AI differs from fast mathematical models and calculations. Nigel explains this very simply: a computer program requires step-by-step instructions through an explicit set of commands and states. Conversely, AI builds a knowledge base from datasets to infer and predict answers, even when there is no exact corresponding state.

As a result, AI reflects real life much more closely than standard computer programs do. In most situations, you are not provided with all the information you require to act; a judgement – an inference – must be made. Unlike many humans, however, AI can provide a probability as to whether its answer is correct. When adaptive learning comes into play, a system can improve its ability to produce answers with a higher probability of being correct through past experience or, alternatively, larger training datasets.
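To make this distinction concrete, here is a minimal Python sketch – a hypothetical illustration, not code from Graphcore or Nigel’s book. It contrasts an explicit rule that only handles the states its programmer wrote down with a toy logistic-regression model that learns a threshold from labelled examples and attaches a probability to its answer for an input it never saw during training. The temperature scenario, thresholds, and function names are all assumptions made for illustration.

```python
import math
import random

# --- Explicit program: behaviour only for the states the programmer listed ---
def is_hot_rule(temp_c: float) -> bool:
    # A fixed threshold chosen by the programmer; nothing is learned.
    return temp_c > 25.0

# --- Learned model: a toy 1-D logistic regression fitted to example data ---
def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.1, epochs=500):
    """Fit a weight and bias by gradient descent on the logistic loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of the log-loss w.r.t. w
            b -= lr * (p - y)       # gradient of the log-loss w.r.t. b
    return w, b

if __name__ == "__main__":
    random.seed(0)
    # Synthetic "experience": temperatures labelled hot (1) or not hot (0),
    # with a little noise so the boundary is not perfectly clean.
    temps = [random.uniform(10, 40) for _ in range(200)]
    labels = [1 if t > 25 + random.gauss(0, 2) else 0 for t in temps]
    scaled = [(t - 25) / 10 for t in temps]  # scale inputs for stable training

    w, b = train(scaled, labels)

    query = 26.5  # a reading the model never saw exactly during training
    p_hot = sigmoid(w * (query - 25) / 10 + b)
    print(f"Rule-based answer: {is_hot_rule(query)}")
    print(f"Learned answer: hot with probability {p_hot:.2f}")
```

The rule returns only a hard yes/no and fails the moment reality strays outside the states it anticipated, whereas the learned model expresses how confident it is – the probability Nigel describes.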

The more information you have, the more you can improve the system, and with more information comes knowledge, where there is an understanding of the relationship between different pieces of information. As Nigel summarises, the AI pipeline can be seen as follows: 

  • Raw sensory input = Data -> 
  • Contextualised data = Information -> 
  • Understanding the relationship between pieces of information = Knowledge -> 
  • Using knowledge to infer answers = Intelligence! 
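As a purely hypothetical sketch of that pipeline – illustrative stubs, not a real perception system – the snippet below threads the pixels-to-identity example from earlier through the four stages:

```python
def contextualise(pixels):                     # Data -> Information
    """Turn raw pixel values into something meaningful: 'a face is present'."""
    return {"face_detected": sum(pixels) > 0}

def relate(information, known_faces):          # Information -> Knowledge
    """Relate the new information to what the system already knows."""
    if not information["face_detected"]:
        return {"identity": None}
    return {"identity": known_faces[0]}        # stand-in for a real matching step

def infer(knowledge):                          # Knowledge -> Intelligence
    """Use the accumulated knowledge to arrive at an answer."""
    return f"greet {knowledge['identity']}" if knowledge["identity"] else "ignore"

raw_data = [0, 12, 255, 41]                    # Data: raw sensory input
info = contextualise(raw_data)                 # Information: contextualised data
knowledge = relate(info, known_faces=["Guy"])  # Knowledge: related information
answer = infer(knowledge)                      # Intelligence: an inferred answer
print(answer)                                  # -> greet Guy
```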

When AI starts to breach the bounds of this framework – i.e., extrapolating – things could get quite interesting, but Nigel emphasises that this is guesswork on the part of the AI. He gives the human example of the long-held assumption that Moore’s Law would always hold true, when in fact physical limitations have now come into play to halt that extrapolated trajectory. Guy and Nigel then discuss how incremental improvements may look linear in the short term, but when viewed after the fact, there may be periods of rapid acceleration or deceleration that would have made accurate extrapolation of the data impossible at the start, for human or artificial intelligence.

Using generative AI to enhance life 

Guy then asks Nigel where he sees AI heading in the next few years, outside of the previously mentioned medical example. Nigel highlights software as a key application area, where a relatively small number of extremely competent people build an increasingly important part of our everyday lives. A generative AI framework could bring the ability to build such programs to the general populace through a prompt-based approach to coding.

Here, the creative idea is the driving force behind the design, without the requirement to build the individual pieces of the system, and Nigel gives the example of programming your own video game using such a framework. Visuals, music, and back-end code are generated by the AI in a way that creates a finished product.

Nigel proposes that the term ‘generative AI’ is misleading in this sense, as it is not the AI that generates the content but the user who uses the AI to do so – a subtle but important difference. Such a tool can transform who has access to specialist technology, and Nigel believes it will change the way people use computers. Even the semiconductor chips that process these AI workloads can be improved by such technologies, with AI helping to design, develop, and verify them.

Does AI really ‘think’? 

Nigel then discusses some of the controversy around the title of his book. The word ‘thinks’ can be interpreted in many ways, and when Nigel uses it, he means something slightly different from the conscious thought that many will associate with the word – interpreted in that more familiar sense, the title appears to claim that AI has gained sentience.

Nigel gives the example of hitting a tennis ball, where conscious thought will often impede a player’s ability to perform well. The magic of great tennis is that many of the cognitive mechanisms used to hit the ball are unconscious: the brain has been trained to direct the muscles in a way that achieves what the player wants without conscious muscle control, hence the term ‘muscle memory’. In this case, Nigel believes that this operation of the brain still comes under the category of ‘thinking’, whether conscious or not.

What follows is a philosophical discussion of the nature of consciousness, and whether bees, for example, are conscious or simply running a biologically determined nectar-harvesting algorithm. The argument can be extended to whether AI could ever be conscious, and whether a line can be drawn between a simulated consciousness and something more. In all these instances, ‘thinking’ is required, which is why Nigel chose to use the word in the title of his book.

Nigel then briefly discusses the ethics of AI in terms of military and deep-fake technology, but if you want to learn more about this fascinating field, you should get a copy of Nigel’s book here.

And if you really want to get stuck in with AI technology, check out the many AI-capable processors and AI-capable microcontrollers that ipXchange has covered, and apply to evaluate these technologies today. 

Keep designing! 
