CAN MACHINES REPLACE US? (AI vs Humanity) - Maria Santacaterina

Primary Topic

This episode explores the intersection of artificial intelligence and human capabilities, particularly focusing on whether AI can fully replicate or replace human functions and experiences.

Episode Summary

In this thought-provoking episode of Machine Learning Street Talk, Maria Santacaterina and host Tim Scarfe delve deep into the philosophical and practical aspects of AI's capabilities compared to human intelligence. They discuss the limits of AI in capturing the complexity of human life, emotion, and creativity, emphasizing the intrinsic values that define humanity which AI cannot replicate. Through a series of enlightening conversations, the episode dissects various aspects of technology's role in society and its potential to either augment or diminish human experiences.

Main Takeaways

  1. AI, while powerful, cannot replicate the full spectrum of human emotion and creativity.
  2. The episode challenges the idea that technological advancements can replace human intuition and ethical judgment.
  3. It emphasizes the importance of maintaining a critical perspective on the role and development of AI technologies.
  4. There's a strong argument presented that AI should be developed to complement and augment human capabilities, not replace them.
  5. The discussion highlights the philosophical and ethical implications of integrating AI into societal frameworks.

Episode Chapters

1: Introduction to the Theme

Maria Santacaterina introduces the discussion on AI versus humanity, reflecting on her background and the motivations behind her studies. Maria Santacaterina: "I'm not a technologist... I have a humanities background, but I've been studying AI since 2016."

2: AI's Capabilities and Limitations

Discussion on the limitations of AI in replicating human emotional and creative aspects. Maria Santacaterina: "AI will never be able to feel, to hear you, to see you, to know you... It's an external mechanism."

3: The Role of AI in Society

Exploration of how AI can serve humanity positively and the dangers of its misuse. Maria Santacaterina: "Technology can help us if conceived to serve humanity... but it's not trustworthy as constructed now."

4: Philosophical Implications of AI

Deep dive into the philosophical debates surrounding AI, questioning the mechanization of life and the essence of human experience. Tim Scarfe: "We're trying to mechanize everything... but life is more complex than simple mechanistic explanations."

5: Conclusion and Reflective Thoughts

Summation of the discussions and final thoughts on the balance between technology and human values. Maria Santacaterina: "We need to rediscover ourselves, who we are, why we are here, what is our purpose?"

Actionable Advice

  1. Stay informed about AI developments to understand its impact on society.
  2. Foster discussions that explore both the technical and ethical dimensions of AI.
  3. Encourage policies that focus on AI transparency and accountability.
  4. Support AI research that aims to augment human capabilities rather than replace them.
  5. Engage with diverse perspectives on AI to ensure inclusive and equitable technology development.

About This Episode

Maria Santacaterina, with her background in the humanities, brings a critical perspective on the current state and future implications of AI technology, its impact on society, and the nature of human intelligence and creativity. She emphasizes that despite technological advancements, AI lacks fundamental human traits such as consciousness, empathy, intuition, and the ability to engage in genuine creative processes. Maria argues that AI, at its core, processes data but does not have the capability to understand or generate new, intrinsic meaning or ideas as humans do.

Throughout the conversation, Maria highlights her concern about the overreliance on AI in critical sectors such as healthcare, the justice system, and business. She stresses that while AI can serve as a tool, it should not replace human judgment and decision-making. Maria points out that AI systems often operate on past data, which may lead to outdated or incorrect decisions if not carefully managed.

The discussion also touches upon the concept of "adaptive resilience", which Maria describes in her book. She explains adaptive resilience as the capacity for individuals and enterprises to evolve and thrive amidst challenges by leveraging technology responsibly, without undermining human values and capabilities.

A significant portion of the conversation focused on ethical considerations surrounding AI. Tim and Maria agree that there's a pressing need for strong governance and ethical frameworks to guide AI development and deployment. They discuss how AI, without proper ethical considerations, risks exacerbating issues like privacy invasion, misinformation, and unintended discrimination.

Maria is skeptical about claims of achieving Artificial General Intelligence (AGI) or a technological singularity where machines surpass human intelligence in all aspects. She argues that such scenarios neglect the complex, dynamic nature of human intelligence and consciousness, which cannot be fully replicated or replaced by machines.

Tim and Maria discuss the importance of keeping human agency and creativity at the forefront of technology development. Maria asserts that efforts to automate or standardize complex human actions and decisions are misguided and could lead to dehumanizing outcomes. They both advocate for using AI as an aid to enhance human capabilities rather than a substitute.

In closing, Maria encourages a balanced approach to AI adoption, urging stakeholders to prioritize human well-being, ethical standards, and societal benefit above mere technological advancement. The conversation ends with Maria pointing people to her book for more in-depth analysis and thoughts on the future interaction between humans and technology.

People

Maria Santacaterina, Tim Scarfe

Companies

None

Books

Mentioned: "Adaptive Resilience"

Guest Name(s):

Maria Santacaterina

Content Warnings:

None

Transcript

Maria Santacaterina

So I'm not a technologist. I ought to say that first. I have a humanities background, so an international relations background, but I've been studying AI since 2016, so I have a fairly good idea of where we are and where the state of the art is. And I'm fascinated by it, hence why I'm here. So Adaptive Resilience was written during probably one of the hardest experiences I think we've all had, which is during the pandemic.

But it was really the result of several years of research and several years of trying to understand how technology can serve us and how perhaps technology may not be able to serve us. And so I'd read a lot of books about digital transformation and they were strictly sort of technically oriented, you know. Yeah. So they were only focusing on the technical aspects, whereas I wanted to write a book about digital business transformation. And that is to say, how can the enterprise reinvent itself?

How can we create a new model for business so that we can actually use this technology in a much better way? So that was my kind of starting point. So, yeah, I think, you know, the pandemic in any case, for many people, gave an opportunity and a moment for reflection. There was a great big panic to begin with. And then it was, you know, the answer was technology.

And of course technology was the answer because it's an enabler. It was a tool to help us communicate, keep, you know, exchanging ideas, thoughts, whatever it was, and actually keep business going. And so we were creative and ingenious and we found new ways to have deliveries and goods to be shipped from a to b. And so technology is good. On the other hand, technology can also create some, let's say challenging social impacts and some challenging societal changes and also environmental impacts.

So I think when you talk about digital business transformation, you need to look at the whole thing. And so adaptive resilience is a book that you can consider a strategic blueprint to reinvent the enterprise. But also it's kind of like a step by step guide to what you really need to consider. So I begin with vision, because if you don't have a vision, then, you know, where is your enterprise going? There's a problem.

So it's really addressing leadership, strategy, culture, those are your kind of foundational tools. Growth, innovation, transformation, what does that really look like? What does it mean? And then, of course, we need to look at governance that's critical, especially in the context of AI and sustainability. What is that?

Well, for me, I can sum it up in one word. It's life. So how do we preserve life. And how do we fulfill life? I think that's our challenge for the 21st century.

And then evolution. And evolution, as the word would suggest, comes from the Latin evolvere, you know, the e and the volvere. The volvere means that we go back. We go back to where? To the beginning. And what does the beginning mean?

Well, for me, it means humanity. We need to rediscover ourselves, who we are, why are we here? What is our purpose? How can we fulfill our lives? And for each and every one of us, that's going to be different.

And the wonderful thing is that we have so much knowledge already. We have so many fantastic tools already. Imagine what we can do if we actually use our human intelligence. So I'm interested. What do you think life is?

Tim Scarfe

I mean, the biologist Robert Rosen spoke about it being a kind of causally closed graph of efficient causation. And biologists now are taking a kind of thermodynamic view, which is that it's a system which can resist entropic forces to maintain some kind of equilibrium. And a lot of these definitions are trying to mechanize life. And this is something that we've been trying to do for a very long time. So even Newton famously exorcised the machine, because we used to think the world was a machine, but not the ghost.

Spooky action at a distance. But there is this kind of story, isn't there, of we're trying to mechanize everything in the world. That's right. I mean, we can oversimplify, let's say up until the scientific revolution. I mean, you had the...

Maria Santacaterina

Let's talk about the Renaissance, my favorite time, because the Renaissance was all about the arts and the sciences. It was all blended. So the discovery and the evolution of thought and the evolution of culture. I think you mentioned it in one of your earlier episodes. And so all of that was a big melting pot.

It was the best of humanity. It was the essence of humanity. The Medicis, you know, obviously were very powerful, so there is also political dimension, but they brought together the best of the best of the best in all of the disciplines. And really, that's why it's called the Renaissance. It was a rediscovery of humanity at its best in all of the disciplines.

And we need to come back to that, because, of course, then you have the scientific revolution, and then we only talk about empiricism and evidence. But guess what? You know, you cannot define that as precisely as one is led to believe. And it doesn't matter how much you scale AI, you will never have a formula that can definitively describe, model, represent, whichever terminology you wish to use, life.

Life is complex, it is adaptive, it is continuously evolving, constantly changing. It is impossible to capture that. Besides which, you and I have eyes, ears, noses. We feel, we smell, we hear. Our sensations, you know, are palpable. You can't capture that and put it into a formula, into an algorithm, into an AI.

Even if it's a sophisticated multi-agent system, they will never, ever, ever be able to feel, to hear you, to see you, to know you. Impossible. It's an external mechanism. I mean, let's touch on that. So I was interviewing Daniel Roberts.

Tim Scarfe

He's like an MIT physicist, and he was saying, no, no, no, quantum field theory has all of this figured out. If you model a particle system at the highest possible resolution, of course we can argue about quantum determinism and all of this kind of stuff, but all of these qualities are, it's not so much that they're an illusion, but they would in principle arise if we model the system. And he would argue in quantum field theory, they can constrain the system in lots of principled ways, like sparsity and locality and invariance and stuff like that. So they're making the argument that all we need to do is build a sufficiently powerful simulation. AI people are making the same argument.

They agree that language models are just surface statistics machines. But all we need to do is create agential AI where there are different things with different intentions, pulling in different directions, a more divergent system. But if I understand correctly, you're saying even in principle, that would not be the same thing as real life. Correct. And I say this with all due respect to the sciences and to science as a field, whether it be physics or biology or whatever else, but life is so complex, and I think a dose of humility is in order here.

Maria Santacaterina

We do not have complete knowledge of everything. Leibniz couldn't figure it out, Gödel couldn't figure it out. Everything that we know, if you like, starts as a theorem, you know, the theory of evolution, you know, Darwin's theory of natural selection and so forth. I mean, it's been misinterpreted, and that's another story. But anyway, we as humans, I think there is something very profound in our being, in our essence.

Call it consciousness, if you will. We can't really define it, we can't really grasp it, but it's, it's part of us. I think that's how we have got here and why we have this ability to create amazing things. And for science, of course, to try to understand something, what you, what you want to do is you try to isolate an element and you try to understand it, but you can't look at the single part and understand the whole. You have to see the whole and then understand how the single.

But it's so complex that I think it's misguided to believe that we can ever decipher life, even with the most super duper big brain AI, I don't know, whatever it is, with all of the science, with all of the engineering, with all the capabilities that we have now developed. Even then, you would have to say to me that an inanimate object becomes animate. So that's the argument of, you know, the AI God or whatever they're calling it now, but that's not possible. We invented religion because we were struggling to understand from whence we came. And that is something that has driven mankind ever since we've had documentation about it.

And I think the spiritual dimension of our being is something that is very precious. As much as we have instinct and we have emotion, I mean, we don't act rationally. We have emotion within ourselves and in the context of our environment and our interactions, that leads us to do something and to act upon something. And then instinct kicks in, because we are wired for self preservation and for survival.

We don't really want to kill each other. You know, we might kill if we need some food. If you go back in history, you know, we might have to kill an animal because, you know, we need to, you know, maintain our being, if you will. But we don't. We're not naturally wired to be antagonistic.

We are naturally wired to work together. That's how the whole... If you look at it from an anthropological perspective, socially, the social groups, the gatherings that occurred and all of that splintering, all of that diversity, I think, is what has enabled us to get here, because variety beats variety. So if you look at it again in another way, let's look at it mechanistically. In a system, variety beats variety.

In this sense, if you have a variety of agents, which is what I think your earlier reference was about, the idea is that you can somehow replicate human intelligence. I think that's a really tall order, and I would suggest not really possible. But yeah, I mean, there's a few things to pull apart. I mean, in many ways I agree with what you're saying. I think that the first thing or the first mistake that AI researchers make is they think of humans as islands.

Tim Scarfe

And this is representationalism: they think that all we need to do is create an agent with a big brain, which is a general intelligence. And that agent could presumably work in any situation. And that, to me, is false on its face, which is why I'm a huge fan of externalism, you know, collective intelligence. This idea that we live in a complex ecology, and many components external to us are cognizing elements in that ecology. But there is still the question, though, of whether, in principle, we could recreate that in a simulation.

So, for example, if we simulated all the little particles. Because physicists do make the argument that there is no bright line between inanimate object and living things. It's almost like a continuum. And there's also the discussion of teleology and teleonomy. So teleology is a kind of final purpose, which is to say, maybe you can think of all of us as an agent, as Philip Goff does, by the way.

He argues for cosmopsychism, which is that consciousness itself is the fundamental kind of material in the universe. And we should think of the universe as one big thing which has an end purpose, which has desires. And then there are different versions of this view, like teleonomy, which is argued by a lot of physicists, which is somewhere in the middle, which is to say, all the little bits, they are kind of as if driven by a final cause. But it's not predetermined. But it's as if it's predetermined because the entire system pulls in a particular direction.

But there is a juxtaposition between thinking of units in the system as having agency versus thinking of the entire system as pulling in one direction. But, you see, they don't have agency, because they're programmed. They're set up by somebody. So there is human intervention. So I have agency, you have agency.

Maria Santacaterina

The AI does not. What's the difference? Well, that's the point, isn't it? It's really difficult to define. So what gives me the ability to do something?

I can leave this chair, move this microphone, whatever, without you having to tell me anything, if I so choose. So then we go to the free will argument, right? If you look at literature, for instance, there was this thing called the stream of consciousness. And it kind of ties in for me when I was listening to one of your interviews with Kenneth Stanley about the open ended system. So the stream of consciousness, as you probably know, is: I just have a blank piece of paper, and I just write whatever comes into my head.

I just write. Now, the actual action of writing, an AI will never be able to do unless it's programmed to do so. But I can do it, you know, at will. I don't have to consciously think I am going to write whatever I think I'm going to, you know, it happens. And that's the point.

You will never be able to simulate or recreate or replicate that spontaneity, that instantaneity, if that's the word, that is intrinsic to the human essence, our ability to think at a split second, change direction, evaluate, respond to the environment. And, I mean, this is chaos, too, right? You can't put chaos into a system. But nature is chaos, but it's an orderly chaos which we don't fully understand yet. And we have to recognize that we can't define whether it be, you know, through a physics lens or a mathematical lens which is pertinent to AI.

Of course, the nth particle, the nth minute, you know, element of something. Because there is an argument that inanimate matter can be made animate; in literature, it's Mary Shelley's Frankenstein, but I don't believe that is the case. So why is it that a newborn baby makes sounds, utterances, and then all of a sudden language comes into play? How? Why?

We don't know. Yes, you can argue that, you know, the baby is nurtured by the mother or a parent, and language is taught, or language is absorbed, but it's something much more complex than that. Language is not something that can be inputted into something and then it comes out, I believe. I am a linguist, too, by trade, but I do believe that there's something much more profound, much more powerful that I can't necessarily explain to you in scientific terms or in mechanical terms.

And anybody who says that they can explain language in some sort of a formula, I think they are mistaken. Speak to an artist, ask them how they're drawing, why they're drawing, why they're painting. They can't tell you. That's the difference between AI creativity, not really creativity, and human creativity. Yes.

Yeah. I mean, a lot of this comes down to semantics. I mean, when I was speaking to Floridi, he was talking about the different ways that you can assign meaning to things, you know, which is based on their function, or based on their relational ontology, or based on their provenance, or based on the intention of the artist. And it's very, very subjective, which is another reason why it's very difficult to create these things in a simulation. The chaos thing is absolutely spot on.

Tim Scarfe

So when you create these low level particle systems with complex dynamics, they're extremely chaotic, and they're very sensitive to any perturbation in the conditions. But the remarkable thing, though, is that they canalize, which means that you get these kind of systems that reliably produce themselves, you know, a bit like a storm system. Yes, yes. Yeah.

And I'm a big fan of, you know, Evan Thompson and the enactivists, who talk about this kind of niche construction and autopoiesis, which is this kind of, you know, self-maintenance that you see in living systems. But it always gets back to this question of, is it just a matter of complexity? Could we not, in principle, recreate this? If there is something markedly different in how life works in the physical world, what is that thing?

Maria Santacaterina

So it comes from biology. Anyway, this idea of autopoiesis, it was a way to describe how a cell can recreate itself and reproduce itself, to transform itself from being x to being y. It performs a different function. And that's another thing. We cannot pinpoint what happens in the brain.

So if you blink, why are you blinking? What mechanism? What is happening inside of your brain to make you blink? It just happens. Right?

So we don't know how to do that. Now, we can attempt to recreate or reproduce elements thereof, but we can't do the whole thing because we don't understand it. So I'll give you an example. I once met a soldier who'd been, unfortunately, injured. He lost both of his legs, and he was one of the...

He called himself a lucky guy because he was given prosthetic limbs. But state of the art, that's AI, right? You know, that's AI in use, for prosthetic limbs. And I said to him, how do you feel? He said, I wouldn't leave the house without them, you know, because I felt inadequate and, you know, all that kind of stuff.

So, psychology, emotional response. And I said, but how do you feel? I kind of persisted a little. And in the end, he told me how it happened, all this kind of crazy technology and how he was kind of helping to develop it and stuff. And he said, I would do anything to get my own legs back.

Tim Scarfe

This is quite interesting as well, because I believe that intelligence is embodied, which is that a lot of how we think is expressed in our physical form. But then you get to the question of... because we're talking about using technology to... I mean, some people are transhumanists, for example. They talk about extending our agency and our cognition in multiple ways.

Why would that be a category difference? I guess where I'm going with this is, I think I agree with you, which is that we are the sources of agency. We are creative, we have things which are a difference in kind. But I do still think that technology can extend our cognitive nexus. A book is a piece of technology.

Maria Santacaterina

Yes. And the way that technology is enacted now, in Facebook, for example, it's not alive, it's not a living thing, but it's still kind of part of us. And it's changing our reality, it's changing our ontology. You can't get a driving license now without using technology. So to what extent does it extend our humanity and to what extent does it truncate our humanity?

Well, it depends how you create it. We are the creators. So if we conceive of technology to serve humanity for the good, let's say, just to simplify the argument, that means we're going to create a tool that is going to serve humanity. It's going to help us regenerate the environment. It's going to help us clean up the mess that we've made with, you know, let's say fossil fuels or whatever else.

So it will help us to learn more about how nature functions and works. But there's a simple way to put this. I think Aristotle said techne, or technology, which is just the tool, is a completion of nature, but never the substitute or the replacement. And therefore, I think if we go back to the greats, to the stalwarts of philosophy, I mean, all philosophical thought somehow reverts back to the Greeks. And so if we then see how human thought has evolved, and we take it in that dimension, if we build technology to help us learn more about ourselves, understand more about ourselves, our physical environment, the world around us, you know, to clean up our act effectively, then fantastic.

So technology, yes, can help me to expand my creativity, develop my brain, but not because it's dictating to me how I should think and what I should do. And at the present time, because of this kind of race that we're in indiscriminately, data has been scraped from the Internet. You and I both know that that is not a source of, you know, it's not the gospel of truth. So if we talk in scientific terms, we need ground truth. The Internet is not ground truth.

And of course, you then have malicious actors and all the rest of it. So we don't have an AI technology today which is human worthy. It's not trustworthy, and I will argue it's not human worthy, because of the way that it has been trained and the way that it has been constructed. Also, we can't have three or four or five people deciding which values. You know, they talk about the attention economy, and they've tried to translate that. I mean, they've misunderstood Kahneman's famous book, Thinking, Fast and Slow.

Tim Scarfe

Right? And so that was an oversimplification to help the layperson understand some of the complexities of our human brain, how we think. We think fast and we think slow. So I referred earlier to emotion and instinct, and we haven't got time to go into all of that. But anyway, it's very complex.

Maria Santacaterina

And this oversimplification, this claim that the machine can think or can reason, I'm sorry, but that is not the case, because then it would have to be animate and it would have to be conscious, and it's not. Can we explore the difference? I mean, yeah, so Kahneman, system one and two, he said that we had this kind of fast mode, which is quite perceptual and instinctive. And then we have this system two, which is where we do things like planning and reasoning and deliberation. And in a way, it's good to communicate to the layperson a kind of rubric for how we think.

Tim Scarfe

But ultimately, even in the real physical world, I visualize it as a landscape. So there are constraints in our cognition. So, you know, there's a mountain, and we have to climb over the mountain to get to the other side. And in the way we think, many of our reasoning templates, you could be a nativist and argue that they're universal and built in, or you could be a kind of social constructionist and say they're memetically embedded and we learn them or whatever. But still, the way we think is we use all of these cognitive tools that are just floating around in the ether, and they constrain what is conceivable.

They constrain our cognitive horizon and so on. And I believe you're making the argument that when we have technology, it's constrained even more, because technology is very reductionist. You know, like, I phoned up a health provider the other day to get a check done, and I accidentally put the wrong value in, you know, the wrong answer. And they said, no, no, you've got a pain down there, which means you've got to go to the NHS.

We're not even going to send you the test. You know, just go away. And I said, well, you know, it was just a transient thing. It's gone away. And they said, no, no. So now I'm learning to lie.

I'm going to go to another provider and I'm going to lie because I know if I tell the truth, they're not going to give me the service. So we're being kind of boxed into these pigeonholes, and it seems to be eroding a huge amount of our humanity. That is my concern, because every human being is an individual, unique being. Therefore, when they call it evidence based medicine or precision medicine, it is detrimental to human health. And I'll explain.

Maria Santacaterina

As you've experienced, there's sequential logic, which is embedded into a computational environment. There's calculus involved. There's also algebra. These are all proxies, these are human inventions. Great mathematicians, absolutely, but they are theoretical.

And it was a way to help make sense of the world and a way to help us understand. I mean, we got to the moon, for God's sakes, on these calculations, but humanity and human beings, beings being the operative word, they don't fit into that. So then you refer to Leibniz, and he tried to get a universal language and he couldn't manage that. Then you refer to Gödel. I put both of them in my book because Gödel also talked about the incompleteness of the system.

So referring then back to your earlier interview with Kenneth Stanley, where he talked about open ended systems: you can't put people into categories and classifications and pigeonholes and say, this drug is resistant to this virus or something, based on statistical significance and probabilistic analysis. Probability, as you know, is not real. It's a toss of a coin. Are we going to get heads? Are we going to get tails?

Now, how your organism responds to x drug and how I might respond to it is going to be completely different. So what counts more: the empirical, sort of evidential, kind of algorithmic, amalgamated in the washing machine, all these fragments of data reconstituted, whatever it comes up with, or what I say, what I feel and what I experience? Me in my organism, you in your organism. And also medicine can find itself in a very difficult situation, because even if the medical doctor may agree with you, they may not be able to act, because they are constrained by technology, they are constrained by an algorithm, which says that the actuarial format underlying it means that you as an individual may or may not be able to access that particular type of care.

That is a very dangerous place to be. Yeah. It's the ultimate manifestation of computer says no. And for the American audience... Absolutely. If I asked to borrow a bit less, I don't know, 1500 pounds. 1500 pounds.

Tim Scarfe

Computer says no. Can I have a word with the manager?

Computer says, no. There was a comedy sketch in the UK where, you know, you would ask someone behind a counter a question, they say, computer says no. And, you know, it's a wonderful illustration of how the human doesn't have any agency. But let's just play devil's advocate, though, because technology aside, what we're talking about here is we live in a very complex world, and for hundreds of years, we've created systems to reduce that complexity by creating checklists and methodologies and diagnoses and stuff like that. We use them because they work some of the time, but then you get this kind of representational bias, and you also get a society bias because you're kind of crystallizing what we knew before.

And as we were just saying, the whole point of life is this divergence, this creativity, this nuance. But how can we square that circle? What do we do? Because you're almost saying, on the one hand, like, maybe we shouldn't be using this kind of technology in certain... Sorry to interrupt you there.

Maria Santacaterina

In certain circumstances, in certain settings, this technology is immature. It is not fit for purpose. I think LeCun came out and said it's less intelligent than a cat. Yes. Recently.

That should tell you. He was talking about state-of-the-art LLMs. I know, because he said that to us when we interviewed him.

Tim Scarfe

Because you can decompose cognition, with respect to the cat.

Exactly. But this is the weird thing. So everything seems to be back to front, even with the generative art. People are joking that you would have thought that art would be the last thing to be replicated by AI, although I would argue, as I'm sure you would, that it's divorced of any of the semantics, any of the value of art. There's no value.

I agree with you, and we'll explore that in a second. But I think LeCun was kind of saying that what they do do is some kind of valuable reasoning that perhaps a cat can't do. We see this confusion here. So, calculation: like, let's say you're a genius mathematician and you can calculate a ten figure sum in your head in 0.1 seconds, or something faster than the machine can. Great.

Maria Santacaterina

So what? How's that relevant to life? No disrespect. Yes. So the point is, it's not practical.

I'll go back to my favorite, Aristotle: phronesis, the practical application of knowledge. So if we had machines, if we had AIs to help us develop our brains freely without constraint, great. But what does that actually look like in the enterprise? Well, that means we need to look at structures and systems and processes holistically. STS, socio-technical systems, the emphasis being on the big S, the social aspect that's always forgotten.

You know, whenever you talk to a proponent of this technology who is, let's say, on the extreme end of the claims that are being made, I listen very carefully to the language that is being used, and I do not hear anything that sounds like there's a human dimension in their conceptualization of the tools that they're creating. So the laboratory environment is one thing; real life is quite another. And you can't reduce humanity to automatons. And to say that you have human-like intelligence at a simulation level or a mimicry level or a mirage level, however you wish to view it, you know, an illusion level, whatever your thoughts are. But the point is you cannot reduce humanity to automatons.

You can't systematize humanity. The moment you try to do that is the moment it all goes wrong. Healthcare settings, judicial settings, all the employment settings. If I were to apply for a role, I have a nonlinear career. I'm very proud of my nonlinear career because I like to learn lots of different things.

I don't fit into the box. Yes, I don't either. Here we are. But this is, I guess, me trying to apply this to the real world. We have methods of organizing companies and, you know, I love to use the lens of agency.

Tim Scarfe

People are probably annoyed at me for constantly doing it. But, you know, when I say an agent, I mean a thing that basically has an intention, you know, it has some kind of directionality. That's more autonomy, in my view. Because agency... so there's a distinction, let's say, from my understanding, between autonomy and agency. Autonomy is self determination, intention, you know, motivation, all of that stuff.

Maria Santacaterina

Free will and that sort of stuff. Agency, for me, means the ability to act, the enactment, if you like, of my autonomy. I have autonomy; the AI doesn't. It's an automaton. I agree with that.

Tim Scarfe

And personally I think that... just move your mic a little bit into the...

There is a big tug of war here with people who talk about free will. I personally think agency and free will have got nothing to do with each other. So I'm a compatibilist. Yeah. So I agree with Daniel Dennett that it's almost an as if property and it's got nothing to do with determinism.

But, yeah, so agents act, they construct niches. It's almost like they're gathering and exchanging information with what's around them. They have preferences. They have an intention. They try to make the world fit their preferences.

And you can factorize a system into agents, even a nesting of agents. So we're agents. We work for a company. The company is an agent. But from a planning point of view, the company wants to do something and then they do this factorization.

So they say, we're going to have an R and D department. They have more agency because they have to discover new areas that might make us money in the future. The engineering department, they have less agency because we already know what they should be doing. So we're telling them what to do. And then even inside that methodology, you might say, well, you need to do this thing and I'm going to measure you on it.

But you can also have 20% of the time where you can do some other stuff as well. And it's a little bit of a mix. You see, I've spent my life breaking down silos and busting myths. I'm going to endeavor to continue doing that. I mean, everything you just described is human.

Maria Santacaterina

So what I'm saying is only humans are agents, because only humans have the ability to do something without something else having to make them do it. So a system, an AI system or a multi-agent system, however you wish to define it, something, somebody, sorry, somebody has to make that happen. Whereas in humans it happens. Now, yes, we have all these structures and, you know, we all know how the scientific model has been applied in large corporations especially. But it's actually wrong, because that's why there's no growth.

I mean, we have profits, but it's fictitious, it's not real, it's not tangible. There has been no significant improvement in the human condition, in the quality of life for decades. And why is that exactly? So, I mean, that's also another conversation potentially, but we human beings have agency. We human beings have autonomy.

It's really important that we keep that, because that is integral to our identity and to our dignity and to our intelligence. If you take that away from us human beings, we have no intelligence. And so, I mean, I would like to go back to something that Carl Sagan said. He was a physicist. Unfortunately, he's no longer with us.

I would have loved to have met him because he's just genius. But he said something remarkable. He said, you know, I rue the day that my grandchildren, and that's not so far away, we're living in that moment now, are so dumbed down that they don't know how to distinguish, to discern, the difference between good and bad. I've oversimplified what he said. Yeah.

Tim Scarfe

And I mean, a lot of this is to do with the freedom to think. And, I mean, first of all, I think there is a relationship between intelligence and autonomy. And the reason for that is there's a relationship between autonomy and creativity. So intelligence is creativity. Intelligence is the ability for me to build new models dynamically.

And if I have the blinkers on, because I have no agency, then I can't be intelligent by definition. And I think what you're kind of alluding to there with moral reasoning is rather than being in the panopticon and always under the judgmental eyes of others and always being told what to do, creativity is actually being able to examine counterfactuals, to discover new ways of doing things, because it's only with that perspective that you can have a moral calculus and explore. I mean, it was Bentham who came up with the panopticon, but it was for prisoners, for people who did bad stuff in society. So the idea was to rehabilitate them. So you create this all-seeing eye, and the prisoner, because he did something bad, he's in, obviously, jail.

Maria Santacaterina

He doesn't know when he's being observed. So the idea was that he would be likely to behave in a better way. But the reality is you can't predict human behavior, you can't control human behavior. And this is what AI is trying to do. It's coercive power, and I'm afraid that it doesn't work.

So you mentioned agency, about the persuasive dimension and the kind of motivational aspect and so forth. So let's say I would like to convince you that, I don't know, roses are green, for example, which is an oxymoron, right? You know damn well that roses are not green or not unless we've engineered them to be green. They're normally red or yellow or white or whatever, even blue, I dare say. But the point is I have to use all my wits, all my intelligence to convince you, to persuade you, to make you believe.

Now, we're trying to do that with AI. We're trying to say that AI, an algorithm, has more intelligence than a human being does. So your medical condition... the AI knows better, based on what exactly? How has it been trained?

What values have been given? How do those weights actually work? And where is the machine actually looking? Do you know? And on which representation of the population? Is it somebody that is similar to you, or very dissimilar to you? I mean, who says that that statistical average, that convergence to the mean, is going to benefit you?

Who says that? Yes. How would it change if we could have an explanation? And I guess where I'm going with this is we live in a world where not everything is a binary. You know, we don't usually live in this completely, you know, rationalistic world.

Tim Scarfe

And even an explanation would use these language words that we've discovered, and it would kind of string all of these words together. And every word hinges on some common knowledge that we already have. And, you know, we live in a world where a lot of this stuff is just based on perceptions and shared myths. It's so divorced from the physical material world. Increasingly so.

So what would it mean? Even if we could explain how the machine learning algorithm worked, it wouldn't change a thing, because, you know, the way the system is orchestrated and the way it is built and trained, the way it's conceived, I will argue, is flawed; it's flawed reasoning. So until we change the reasoning behind the construction of these systems, we're going in the wrong direction. I love that thing.

Maria Santacaterina

Again, I go back to that previous interview that you made. Deceptive, what did he call it? Oh, deceptive objectives. Love that. Yeah, the false compass.

Yeah. Yes. That crystallizes what's happening right now. And so we need to counteract that. And this is why we have human intelligence, because, you know, it feels wrong; a misdiagnosis feels wrong.

I mean, in any context, not just health. I mean, it can be in the company. So, for example, you're sitting at the boardroom table and somebody's waxing lyrical about something and you know it's wrong. So you sit and you listen, you're patient, and then you say XYZ. And they go, yeah... it sounds sometimes that you might be...

I mean, I've heard it said to me: well, that's self evident. Well, why didn't you think of it then? So the point is, we have the ability to absorb information from the environment. I mean, Floridi calls it the infosphere. But the way that we process it, the way that we respond to it, the way that we manipulate it, and I will use that word because it is us, not the AI, manipulating the information, is critical to the actions that we take.

So it is critical to the agency that we exercise, but also to our autonomy. And autonomy does pertain to freedom. It does pertain to self determination. So if I am free to think, if I don't have any constraints, okay, let's just say I'm an artist for now, and I'm going to draw some sort of a landscape, to use your analogy.

I don't know what I'm going to create before I begin, but then something beautiful happens. Let's just say I'm a talented artist, for argument's sake, and this beautiful painting comes, and it draws people in, and the emotional response that people have when they see a beautiful piece of art, an AI will never, ever experience that. So when they talk about AI having experience, for example, well, that's just not right. So I think part of the problem, this is my theorem as a linguist, is that the English language is somewhat limited in terms of vocabulary. And it was a very easy thing back in the fifties when McCarthy needed some funding, because, you know, it's difficult to get funding for something that's not been tried and tested.

So he came up with this thing that sounded kind of sexy. Yeah, artificial intelligence. But it ain't. It's not intelligent, it's artificial, it's an artifact. We have created it, but there's no intelligence.

You know why? Because we can't define intelligence. Human intelligence is multidimensional: social, societal, emotional, cognitive, you know, intellect. I've oversimplified everything just for time, but it's really hard. There is no one single definition of intelligence, let alone consciousness.

Tim Scarfe

Yes. I mean, this comes back to the blind men and the elephant: we understand the elephant from different aspects, but we don't understand the whole. And even if we do, as a physicist might try and understand the whole, it's even more inscrutable than looking at part of the elephant. What you said about semantics is really interesting. I used to use Descript to edit my podcast, and it was based on the premise that you could deconstruct an interview like this, with all of these emotions and, you know, like, we're woo, we're here, and you could just decompose it into a bunch of words without losing anything.

And I very quickly realized you lose a hell of a lot. You lose a hell of a lot. But even then, you know, you're talking about with art, AI art, it's not a binary, I think. I go onto ChatGPT, I generate an image, and obviously it's not what I want, but now I can make little edits. I can say no, I wanted pointy ears, and I actually wanted the background to be like this. And I'm expressing agency.

And this is, I think, where people get confused, because I make the argument that with any usage of an AI, even if I'm expressing agency and I'm guiding the process, it's still constrained in quite a pernicious way. I can still recognize AI content, and in the future, maybe we can't anymore, but I still feel it's being constrained quite seriously. Absolutely. And it will always be, because there's always going to be some degree of human intervention, some degree of programming, some degree of coding, some degree of limitation.

Maria Santacaterina

I mean, cryptography, right? You know, Python is not... Python does not equate to the human form of natural language. That's how I've defined it in the book, because it's impossible.

I mean, even NASA failed. NASA thought at one point that it could use Sanskrit to create a universal language, and it failed. They thought, oh, it's simple, it's just symbols, language, you know. No. You're so right. So language is a...

Tim Scarfe

I believe it's a living organism. And folks should watch our Morten Christiansen episode when we bring that out. But, yeah, you know, I think you mentioned before we hit the record button that there was this idea from Shannon that all the information content of language was in the words. And, you know, it was like we could apply information theory to language.

And the reason why we talk about semantics is because as agents, we have the matrix in our heads. We have simulators. I'm only conditioning your simulator with my language. The words don't mean anything. You actually assign them.

You make sense of it. We all make sense of it. But the interesting thing is that when you have this diffusion of people like you and me sharing information, conditioning our simulators, just as in evolution, you get all kinds of morphological convergence motifs, you get constraints, you get canalization. So you do have some structure which emerges. Yeah, that's the conditioning of the environment.

Maria Santacaterina

But I just want to go back to Shannon, and then remind me about this point. Shannon's theory was that you would reduce information to the barest essentials, so that you transmitted a message via cryptography at the barest minimum, and it would be the receiver who could make sense of it. Now, take that forward into where we are now with AI, with the methodologies that are used to train, and the statistics and all the probabilistic machinery and all that kind of stuff. And what have you ended up with exactly? I mean, I always use the analogy of a washing machine.

All these fragments, all these tiny fragments of numbers, these irrational kind of things kind of swimming around. They are deconstructed, reconstructed, and then somehow these weights point the machine's attention, you know, in a particular direction. But what if it's wrong? It often is. Hallucinations. We hallucinate? Yeah, we do.

As humans, you know, temperature rises, you've got a fever, you're going to hallucinate. Maybe you think you're seeing a multicolored zebra and it's, you know, actually a plant or something, but that's not reality. No, but that is an altered state of consciousness, in a manner of speaking. You know, so it's really difficult to separate the rational element of human intelligence from the emotional element, from the instinctual element. It's really difficult.

Science does that because it's convenient to be able to study it, to be able to understand it. But it's all entangled. It's all together.

Tim Scarfe

I guess one way of looking at this. I mean, I've spoken with Karl Friston about this, with his free energy principle and active inference, and he says that, yeah, absolutely, we need to have all of this subjectivity, but it's still just information. So you could argue that we live in a universe, and entropy increases, information increases, and all of our current methodologies, limited by our constraints and our cognitive horizon, just snip off all of the details. We just discard it. But is there a continuum where we have systems that just capture more of the long tail of all of these low frequency instances, all of this subjective experience and so on?

Are you saying it just wouldn't work? In principle, no. Because we have something that we call the imagination, and we have intuition. You can't put boundaries around that. If you do, there's no imagination.

Maria Santacaterina

That's part of the problem, actually. These machines will never imagine. Well, so Friston would say we model an active inference agent. So it's an agent which acts. It does planning as inference.

Tim Scarfe

It senses, it generates. But you're only modeling fragments. It's like this idea that an AI system, let's say state of the art, you know, the best that we can possibly do, right? Even if we get to AGI, which I don't think is possible, not really, and not even ASI, I mean, that's just ludicrous. But even if, let's just imagine we did everything that is claimed, that can be done by scaling, right, by making this giant super brain, whatever it is, even if we got that far, I will still argue that imagination is infinite, and we know that we can't create an infinite system.

Maria Santacaterina

And I think, you know, chaos is open ended. So these are forces much greater than ourselves. But we have been, you know, we've been endowed with this thing called intelligence, which we're trying to figure out right now. We are in a place where I believe we're trying to figure it out. We're trying to understand who we are, what we are, what we're made of.

Why is it that, you know, because all species have intelligence, even the cat and even, you know, the honeybee. It's a different kind of intelligence, with respect. But why is it that we are at the top of the animal tree, you know, why do we exist? I mean, it's the eternal question. You've got existentialism in the sixties.

You know, you've got transhumanism today. But, you know, why did we invent religion? You know, we as human beings, we want to know, why do we exist? Where did we come from? Why are we here?

What are we doing? You know, we strive to learn more about ourselves, about our environment every single day. So, you know, people talk about affordances in the environment and so forth, but the subjectivity and the objectivity which is encapsulated within a being, which is a human being, which is constantly evolving, you can't separate it. So we've got this construct that we've created, and invented this AI thing that can observe.

But your smile is not my smile. Not in a million trillion years. They say that, you know, a smile is as unique as a fingerprint, or as unique as, you know, the individual. But the complexity of your being and the complexity of my being will never be the same. And you can't model you, and you can't model me.

You can claim to do that, but it isn't me. And it isn't you. But are you just speaking practically? So, even if I had, you know, even if I had a particle-by-particle simulation of you, it wouldn't...

Tim Scarfe

It wouldn't be the same thing. It's still a theory. It's still a theorem. It's my interpretation of seeking meaning. I am a physicist, and I am trying to seek meaning, and I'm trying to understand my realm, my environment, you know, my universe, as it were, let's put it in those terms.

Maria Santacaterina

And it's still my interpretation. This is the thing. AI cannot interpret reality. And that's a very bold claim I'm going to make now, because it doesn't understand language. It does not understand the human form, the nuance, the richness, the complexity, and all the stuff that we've kind of, you know, touched upon.

So human beings are intelligent because they have the ability to interpret instantaneously and respond instantaneously to a given affordance in the environment, or a change: sudden, expected, anticipated. We anticipate, for goodness sakes. Any change in the computational environment, AI cannot anticipate. We can try to make it anticipate, but it will never have human judgment, moral reasoning, moral restraint. More importantly, a systematic killing machine is going to be a systematic killing machine no matter what.

Don't tell me you've got a kill switch. It's not going to work. Turing said that, for God's sakes. He said there's no such thing as an on-off switch; it's not going to work. And to my knowledge and my understanding, I've read a great many papers.

I haven't read everything. Obviously, we haven't passed the Turing test yet, and I know that's controversial for the technologists amongst us. I mean, they're doing fantastic work, and, you know, we're going to get better and better. We're going to refine this thing. But conceptually, the machinery that is now being deployed, let's say, in the real world, in, let's say, in commercial terms, in companies and institutions and governments and so forth, it's not fit for purpose.

Tim Scarfe

Yeah, I mean, I think a lot of this comes down to how we are different in kind. And I agree with a lot of what you were saying about our analogy-making, our creativity. It's infinite, and it's not necessarily infinite now, but it's infinite if you think of us as a system, and we are constantly discovering new things, generating information, just exchanging things in complex ways. But you could argue that GPT is infinite. I mean, you can put an infinite number of utterances into GPT, and it wouldn't necessarily always give you something interesting, because it's only trained on the manifold of language.

Maria Santacaterina

Well, it's only a rear-view mirror, but there's a fundamental difference here. I can go backwards, but I go forwards, too. The machine can't. Well, I mean, we could, in principle, all have vision in that sense. We could have a...

Tim Scarfe

We could all have GPTs, continually learning and diverging in a similar way. But even then, you could argue that it's kind of different in principle, but you could also argue, conversely, that humans are quite mechanistic. We make lots of mistakes in our reasoning, but I think the best way to understand humans is at the kind of social level. So individually, we're not very smart. But, for example, look at AlphaZero.

AlphaZero is trained on a sliver of the manifold of Go. And at the time, it could beat humans at Go, right? But humans as a whole, if we wanted to beat that machine, we probably could, especially if we're trying to beat it as it was trained five years ago, because it's not robust; it has all sorts of problems. Collectively, we could discover some kind of adversarial move which would defeat it. So as a system, we are very intelligent, because we're always looking for those new models.

Maria Santacaterina

So I would beg to differ, because I think human beings are intelligent individually and collectively. That's why we have the social dimension and every interaction. So, for example, you can be reading a book by yourself, and you can be learning, absorbing the content, asking yourself what it means, trying to figure it out, etcetera. And then you'll have a conversation with a friend and you'll say, hey, you know, I read that. What do you think about that?

And so we learn in multiple dimensions. It's time and space, isn't it? It goes back to time and space. So multiple dimensions in terms of space, but also in terms of time. And also, the machine cannot create information or create knowledge in the same way that we can. Ideas are what I'm getting at.

We have ideas; the machines don't. You can argue that they do, and I see the argument, too, but it's all retrospective. It's only what we already know. It has absorbed everything that we have discovered thus far, but it does not contain our imagination, and it does not contain our individual interpretations. But I'm trying to understand why it is that we have misguided ourselves and misled ourselves to this point, that we kind of believe our own deceit, our own objective-function deceit, whatever it is. It's like in a company: accountancy function, leadership function, marketing function, function, function, function. But nothing works by itself.

If I don't know how to fix the machine, and there's, you know, a power outage or whatever, and it stops? Our whole world is being concentrated, all of our existences, in specific data centers in specific locations. Imagine it all stops. Then what? I agree. So what you're saying, I believe, is that there is this crystallization of our cognition. And I guess I was kind of thinking the same way with Go, which is that AlphaZero was actually trained on a tiny sliver of the manifold of the game of Go.

Tim Scarfe

And it just so happened that, because it intersected how humans played Go at the time, it would always beat humans. And it's the same thing with language models. Language is always evolving. It's an organism.

And language, as captured by GPT-4, is just a tiny sliver. It's a fraction of what it could be, right? And right now there's this parlor trick: because it captures a lot of colloquial language, people assume that it just is language, that it is thinking, that it is doing all of these things. And it also has this reinforcing effect backwards, which is that it now constrains how we use language.

It's actually affecting the evolution of language. It's crystallizing, scleroticizing language. And the point you're making, I believe, is that if it wasn't for GPT, we would just continually evolve, because we're all agents. We learn; we're always adapting through improvisation.

We're creating new models, new reference frames. I interviewed DeepMind last week. They were doing this football analysis algorithm: here's a corner kick, and I'm going to use a graph neural network to analyze all the player positions. And that's great. But the reality is, the second you train it, it's out of date, because we live in this adaptive world, right?

You do something, I adapt. You adapt. I adapt, you adapt. And we're traversing this fabric of possible, like, cognitive models, and it never ends. It goes on forever.
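[Editor's note: for readers who want a concrete picture of the graph model Tim describes, here is a minimal sketch in Python, assuming only that players are nodes and pitch coordinates are node features. It is an illustration of the idea, not DeepMind's pipeline; every name and number is invented.]

```python
import numpy as np

# Toy corner-kick snapshot: six players, one node each, with (x, y)
# pitch coordinates as node features. Purely invented numbers.
positions = np.array([
    [0.95, 0.50],   # corner taker
    [0.88, 0.45],
    [0.90, 0.55],
    [0.85, 0.50],
    [0.92, 0.40],
    [0.87, 0.60],
])

# Fully connected graph: every player may influence every other player.
n = len(positions)
adjacency = np.ones((n, n)) - np.eye(n)

# One round of message passing: each node averages its neighbours'
# features, then mixes them with its own through fixed random weights
# (standing in for weights a real model would learn from match data).
rng = np.random.default_rng(0)
W_self = rng.normal(size=(2, 4))
W_neigh = rng.normal(size=(2, 4))

neighbour_mean = adjacency @ positions / adjacency.sum(axis=1, keepdims=True)
embeddings = np.tanh(positions @ W_self + neighbour_mean @ W_neigh)

print(embeddings.shape)  # (6, 4): one embedding per player
```

[The weights are frozen at training time, which is exactly the staleness Tim describes: the moment the players adapt, the snapshot the model was trained on no longer matches the game.]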

Maria Santacaterina

Exactly. That's precisely my point. I mean, you know, we're not a chess game. We're not binary. We're not black and white.

So, computationally, a computer might beat a human being, like, you know, Kasparov, the famous example, but he beat it afterwards. Yeah, we have instinct, intuition, imagination, creativity. I mean, you know, intelligence. The machine doesn't. It can only compute.

It can only calculate. It can only use calculus, and it can only compute based on what was fed into it. Now we have prompt engineering. Why have we got prompt engineering? Because the machine is defective.

It's flawed. So that also has a more sinister side to it, because you don't have to go to a malicious actor. I can be just naive and ignorant, and I don't know what I'm doing, and I put stuff in which is bad, and the machine misinterprets it, I'm sorry to use that term, but I want to make this point. The English language is limited in vocabulary compared to other languages, and also in its syntax and its structure.

And it has been trained mainly on North American English, which is not even English English, sorry, North America, with all due respect. But anyway, the example that was given was the extreme example, the edge case: Albanian. According to a paper I read recently, Albanian is barely visible on the Internet, so Albanians are not represented. So that culture, those sensibilities, that sensitivity to language and how it's used, is lost. English, everybody kind of speaks.

But the sensibility and the sensitivity to its use, you know, the meaning, the semantics, it's not just semantics.

It's really about how language and words impact an individual and how they receive them. So how I share with you and how you receive it is really important. Communication is just such a wonderful thing. That's why I studied languages, because I want to understand people.

I'm interested in people. And I think the scientists are, too, but they theorize about people, whereas I keep my feet on the ground, you know; I try to learn about people through people. So if we were to say, okay, we've made a mistake, let's just pause, let's see how we can use what we've learned so far and turn around and make it better, that would be fantastic. Yes, yes.

Tim Scarfe

It's really interesting what you said, because, you know, let's say I'm in Albania and I want to communicate with American people on LinkedIn. What do I do? I use GPT-4, and it just creates this banal, horrible prose. But this creativity thing is really interesting to me.

I think about it a bit like an explosion, which is to say, you know, it kind of starts from an epicenter and it explodes out and it kind of exponentially increases in volume. And the weird thing is that if you look at the edge of the explosion, it's always specialized. So, you know, this is the didactic exchange of information between agents. We create new models and new models and new models, each model building on the model that preceded it. But this is kind of what I mean about us not being intelligent as individuals, because it's the exchange of information between agents that creates diverse and interesting models.

And depending on what part of the explosion you're on, you'll have very specialized knowledge. You know, it might be knowledge relevant to Albania, it might be knowledge relevant to the UK. And we're all just collectively kind of pushing the boundaries in different directions. And, you know, novelty always happens on the edge of chaos. Yes.

Indeed. So, okay, I'm a linguist by trade, right? I've traveled the world, I've worked all over the place. And one thing I've always felt uncomfortable with is having a translator, because I master languages. I like to convey my meaning through my own means, my own intellect. To have to rely on an external person to make sense of and interpret what I'm saying, and hope and pray that they're going to say it in the right way, with the right tonality, with the right intent, with the right motivation...

Maria Santacaterina

So I don't damage the relationship with the person that I'm speaking to, because I'm very interested in relationships and relations and causality and such things, if you put it in scientific language. That's a really big responsibility to give someone. Let's say I'm in a business setting. Let's say I'm negotiating for something. Let's say I need to achieve my goal, and I can't directly communicate with you, and I have to rely on a translator.

In the past, it was a human being. Now we've got something that's highly imprecise and can fabricate. I mean, oh my gosh, it can completely destroy our relationship. But the world does not run on machines, no matter how many mechanistic analogies we make about the enterprise. For instance, the enterprise does not mechanically produce growth.

It doesn't produce anything mechanically. It's only the people that produce growth, wealth, because it's the organicity. I go back to the essence of inanimate and animate matter. If we want to put it in physical terms, we can describe it through any lens you wish. But the point is, it's people that make things happen, including change.

Tim Scarfe

Yes, your translation point was really interesting, because I suppose you could cynically argue that any kind of translation, even through an intermediary, is a bit like ChatGPT, which is to say that there's a kind of semantic bottleneck. But the interesting thing there is that a lot of communication is non-verbal. There's a lot of non-verbal communication going on here. So you could go and communicate with people in another language, and a lot of that meaning would be rescued, I think. Take a color: red.

Maria Santacaterina

How many shades of red can you possibly imagine? So how that is portrayed, conveyed, shared, modeled in the AI world is different. You know, you get one little tiny element wrong, and the whole thing comes tumbling down. You get it right, and then you have the explosion, the creative explosion, you know? Okay, I'm simplifying again, but, you know, this is something that we cannot confine, constrain within a finite system.

No matter how brilliant these AIs are and how adaptive they are, sorry, how far they can adapt or respond or simulate or mimic, they can never replace humans. And I say this in the book quite clearly a few times: you can't substitute for humans, and they can't replace humans. If we do that, what we are saying is that a human being is nothing more than an automaton.

That is to say, an empty vessel, which we program, which we tell what to do. It's an object. It's a thing. We are not things. We are beings. That's different.

Tim Scarfe

So I spoke with Daniel Dennett. He had an article in The Atlantic about counterfeit people. And he said that humans will eventually just be fooled by these systems, that they will become indistinguishable. There's a startup called Hume AI which is doing kind of emotive speech-to-speech generation, right? So it knows your emotions; it responds with emotions.

I tried it, and it's still a little bit rough around the edges. But it's a bit like ELIZA, because I've been looking on Facebook and Twitter and stuff like that, and people are enamored by this. You know, they've got artificial girlfriends and they're spending hours and hours talking with these AIs. And I'm trying to understand, because I viscerally feel that there's something not right about this. And I don't like talking to AIs.

I don't like it when people use AIs to communicate socially. Why do people love it so much? Well, because, you know, conditioning by the environment is acculturation. We have evolved as a species through enculturation. That means that through all of our diversity and through all of our different social gatherings, we have collectively contributed to the human consciousness.

Maria Santacaterina

And that complexity is what we are trying to fathom today. And of course, the other one is acculturation. That is to say, you are somewhat conditioned by your environment. So when you are constantly bombarded and bombarded... do you remember, in biology lessons, the dog experiment? I forget who it was now.

Oh, gosh. Long time ago. But anyway, how to teach the dog to respond: you throw the ball and you want it to have a certain reaction, or you give it a certain treat; you condition the dog to respond in a certain way. Yeah.

Tim Scarfe

A sort of Pavlovian response, for me. Thank you. Yes, that was it. It's the same thing that's happening now.

Maria Santacaterina

I mean, they talk about geoengineering, but there's also social engineering going on. It's a very unfortunate juxtaposition of terms. It's actually very serious, and it's detrimental to human health. I mean, we know what's happening to children. I think we have to just take a step back, and we have to start thinking about what we're doing, because it's detrimental to human well-being.

Tim Scarfe

I know, I know. Maybe we'll touch on that in a minute. I've read Émile Torres's book, but let's think about this a little. So, first of all, I agree with you that if you look at TikTok, if you look at social media, we are the sources of creativity. Right.

So even on TikTok, all of the entropy comes from people. They're doing all of these different dance videos, and they're memetically shared. The algorithm isn't doing anything. It's just showing, amplifying, or de-amplifying.

Exactly. So in that sense, I think technology is not essential; all it does is constrain. It's a form of organization. And actually, in a sense, technology is no different from any other form of organization. Correct. And a lot of this is about power.

So when we have forms of organization, whether it's in companies, whether it's in government, what we're essentially doing is setting constraints and policies for how the system works. And you could argue that's a good thing, because without organization, we couldn't really function as a society. So how do you think about that? You could imagine a different version of TikTok. I mean, in China, TikTok's not as bad as it is here.

Over here, it's sexualizing children; there are lots of really bad things. But you could argue, on the one hand, that it's not necessarily a problem that we're trying to have constraints and policies; it's just that there's a Western cultural problem that has all of these bad things associated with it. Yeah, I mean, you know, we didn't need TikTok to come along to discover that.

Maria Santacaterina

We discovered this a long time ago with the earlier experiments. I think it was Microsoft, actually, who created the chatbot Tay. Yeah, yeah, the racist one. Yeah.

So, yeah, it kind of worked in China because there's a different culture there, a different philosophy there, and you act as an individual, but you also act as a collective, and so you do things for the benefit of humanity. Just hear me out. When they transferred the qualities that created that chatbot to the American context, it didn't quite work because, of course, the interactions were different and the inputs were different, and so out came racism or whatever it was. It just goes to show you can't artificially create the organicity of a human being.

I mean, the organic processes, the dynamic, homeodynamic processes, if you want to go scientific about it, that are intrinsic to humanity, to human beings. We can instantaneously change our minds. Let's say that I say something and it upsets you. I will immediately see that, sense that, know that, and correct myself, actually, even before I utter it. If I think something and I know it's not going to land well, I can immediately correct it. The machine can't do that.

It'll just splurge it all out. Yeah, I agree. I mean, the machine doesn't do anything. But I guess the fascinating thing is that China is more autocratic than we are. Is it?

Tim Scarfe

Well, so I would argue that they have rules over there, so you're not allowed to use social media for too long, and the content on TikTok has to be educational. And maybe there's a kind of chicken-and-egg situation here, but they have better content on TikTok, and they're actually learning things. Well, they're trying to counter the harms, because they've been there before, and they know what the harms are. And because they have a more duty-of-care-oriented, paternalistic society, if you want to call it that.

Maria Santacaterina

The point is, what is autocracy and what is democracy these days? They're almost indistinguishable. So they're trying to protect their population. They're trying to create a healthy, strong, vigorous, economically strong, prosperous society with their form of government, which we may not agree with, but that's okay. We have a different form of government, and that's okay.

But in their society, with their values, that works. And so we have to respect that. We can't go along and say ours is better, because look at what we've done. I mean, what is autocracy? If I express a view and it is not acceptable to the government of the day and I am censored, they call it cancel culture, I believe.

Isn't that autocracy? So I think we, as humans, are very quick to create and use terminology without understanding the context, because that's what social media or the media tells us. So if somebody in authority, in a strong position, says "this is so", we tend to go along with that, and we tend to believe it. But did we do our own research?

Did we ask our own questions? Did we try to figure it out for ourselves and form our own opinions? The answer is no, because that's what they mean by the attention economy. So they figured out a way to use algorithms to manipulate us, to tell us what to think, how to think, when to think, and when to act. You know, that's the point that it's getting to.

That's why this data collection is so pervasive. So isn't it better to design a system that is fit for purpose? Why do I need Tim's data? Which elements of Tim's data do I need? How do I ask Tim if he is happy to convey that data?

What guarantees can I give? That's governance. What guarantees can I give Tim that I will only use the data that he has shared knowingly? So, you know, explanation and all that sort of stuff. Leave aside the interpretability of the model; that's another story. But once you've got all of that in place, if I then act to protect Tim's data and to protect his privacy, am I actually protecting my own existence as an enterprise?

Tim Scarfe

Yeah. I mean, this is something that Floridi spoke about: not only is our reality now mediated by technology, but our personal identity is also becoming diffused. Personal identity used to be linked to my physical, material self, if you like, and now it's just in all of these databases. And I don't have control over it. I don't have control of my narrative, because people are talking about me on all these different social platforms and so on.

So I guess you're arguing for an ability for me to take back control of my personal identity. Absolutely right. I'm doing an interview today. When it's broadcast, an AI can impersonate me, distort what I say, twist what I say, manipulate what I say, and then that's the end of me. Isn't this another thing that we'll adapt to?

So, right now, well, maybe not now, but photographs... Because how do I prove that it's not me? That's my point. Yeah, but up until five years ago, a photograph would have been admissible as evidence in court.

And in five years' time, it won't be. And five years ago, an image or a video of Putin saying something, we would have believed it. In five years' time, we won't believe it. Well, I don't think anybody believes it now. I mean, realistically, how do we really know?

Maria Santacaterina

Unless you're some tech geek that can go in and figure out the algorithm. I mean, yes, you can still tell what is a deepfake. I think most people can. You mentioned earlier about language use, you know, whether it's ChatGPT or a human being. Yeah.

And I've specifically tried to change my tonality because, you know, a machine can't read me or shouldn't be able to read me. It might think that it does, but it actually doesn't. You know, we have gotten to the point where I am subordinate to a machine. I have to prove my identity. The machine doesn't.

Sorry, that doesn't work in a democracy. So when you mentioned autocracy, what are we really talking about here? It shouldn't be me having to prove my identity to an algorithm. It should rather be the algorithm declaring itself, the machine declaring itself and telling me why it seeks my data. I'll give you an example.

In the middle of COVID, I think, is when things really dramatically changed. In the middle of COVID, I had to get a jab, or I had to do my passport, I can't remember. I had to do something, anyway, in order to get the certificate to say that I'd had the COVID vaccination, the injection. I had to do it online.

And I remember that I had to hold up a piece of paper with some words, and I had to speak, and I was petrified of that, because that shows my mannerisms, my voice, my tonality, everything. And my face. Yes. Why? It wasn't sufficient for me to scan my passport, even though my passport is biometric.

It's got the chip in it. I know, but I'm trying to understand. Where is that data now? Where is it now? How is it being used?

Why was it taken? I didn't have any explanation, so I had to get the piece of paper because, you know, I had to have proof that I'd had the vaccine. But why? That's coercive. Yes, but the thing is, we're not Luddites.

Tim Scarfe

But by the same token, if you're a pensioner now, you can't go to the bank anymore. You have to log into it. That's what I mean. But you could argue that these things are an emancipating force, that they're actually helping people do banking,

get loans, do things much more easily. But who benefits? I mean, if you are female, or you live in the wrong postcode, or you've got the wrong religion, because they ask you lifestyle questionnaires, because they're collecting data, who benefits? You can't get a bank loan; you can't get a bank account.

But the thing I'm trying to tease apart is: is this just a governance problem? These systems have the potential to be extremely oppressive, yes, but it's not necessarily the paternalism or the constraints that are the problem, because if used correctly, they might make a society better organized. We need to adapt, and we need to understand where oppressive things are happening and legislate against them. That's why I called it adaptive resilience, because I believe in humanity and I believe in our ability to overcome the issues that arise.

Maria Santacaterina

Let's call them unexpected consequences. But there's a fundamental flaw in the thinking. To begin with, these machines were not designed for civilian use. We know this. This has been the case for technology's evolution for many, many years, many, many decades, hundreds of years.

And so the military dimension is one dimension; the civil dimension is quite another. So because these machines are defective, if I go and open a bank account now, I mean, they want to know everything. That is the wrong approach. The correct approach is: we need to provide better levels of security, fraud prevention, all of this stuff. So let's understand what it looks like, what it means, and how we can achieve it.

We need moral restraint there. We need constraints. We need philosophical thinking or reasoning in the sense that we think more deeply and more objectively about it, not just subjectively. How can we make an improvement in the quality of people's lives? So how can we use this technology in a good way, in a better way?

How can we better organize the enterprise so that it can better serve its communities, the society at large, and achieve growth, real growth, meaningful growth? And I use "meaningful" in the book a lot. You can't define meaning, but you know the difference between something that is beneficial and something that is not. So that's why I stand by a very simple principle: do no harm.

If that becomes our focus, we can create much better technology, much better systems, much better structures and processes. So we live in a complex, globalized world. And, I mean, just to give you an example, we have market systems and automated traders and things like that. Then the banks needed to create algorithms to predict fraud. And the rules are a bit different for fraud, such that the models don't need to be explainable,

Tim Scarfe

whereas they do for loans, for example. And just practically, what are you suggesting here?

Maria Santacaterina

Do you remember the good old days when you had bank managers and relationships between human beings? I know, but I guess I'm saying that to some extent the genie's out of the bottle, and we have all of these different institutions, and if we regulate in one country, they'll go to another country. So it's like whack-a-mole. What do we do? Well, this is a very interesting point that you raise.

And in fact, the recent financial crisis, I mean, 2007, 2008, I wonder why it happened. It was the United States government, or the administration, that said in the end that it was all due to mismanagement, because nobody knew what the left hand was doing from the right. These algorithmic systems, these automated systems, actually negate the ability for human intervention to be enacted in a timely manner. We're trying to use algorithms as a proxy for human intelligence, and they're not. I don't think we have as much automation as people think, you know, for the same reason.

Tim Scarfe

As I said, there is no agency in any software system, any AI system. People think that software engineering is automated. It isn't. At every single stage in the software engineering life cycle there are checks, there's an approval board, there's release gating, and so on. Why don't we automate it?

Because it would be an absolute shit show. That's why we don't automate it. Even with trading strategies. You're a trader, you have an algorithm, and you have a whole bunch of guardrails around it. Why?

Because you don't want to lose all of your money. Because even if it's successful, it won't be successful tomorrow, because someone will adapt. You'll start losing your money. So I don't think systems are as automated as we think they are. But they're automated enough that we lose sight of what's actually going on, because the models aren't interpretable.

Maria Santacaterina

That's what I mean to say. Yeah, absolutely. I mean, certainly. But they are automated in some very dangerous places, like, for example, hiring.

Tim Scarfe

But essentially, you are arguing that, even in an augmented sense, we should actually just have humans doing this. We should use the machines as tools and the humans as humans. Yes. We should just make a clear distinction between human intelligence and algorithmic, automated, or artificial systems. There's a clear difference.

Maria Santacaterina

We should just acknowledge it. Okay. We use the same language, which is good. If I had my way, I would be inventing a new lexicon because I'm a creative human being and I would like to have a new language for technology so we don't get confused, because at the moment we are confused. You know, we're using all these anthropomorphic terms.

It sounds like the machine is a human. It's not. Like you said, ELIZA was a failed experiment. Weizenbaum, who invented it, was flabbergasted that his secretary should believe that she was speaking to a doctor. The only reason he did the patient-doctor thing was because he didn't want to collect data. Yes.
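[Editor's note: since ELIZA comes up, here is a toy Python sketch of the pattern-reflection trick it relied on. The handful of rules is invented, and Weizenbaum's original DOCTOR script was far richer, but the principle, substitution without comprehension, is the same.]

```python
import re

# A few ELIZA-style rules: match a pattern, reflect the words back.
# No understanding anywhere; just string substitution.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel nobody listens to me."))
# -> "Why do you feel nobody listens to me?"
```

[Weizenbaum's alarm was precisely that something this mechanical was enough for people to confide in it.]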

Tim Scarfe

I mean, look, my thesis is that we used to blame Royal Mail, the postal service, when the letter was late: oh, it must have been Royal Mail, it wasn't me. It's the same thing now with algorithms. The reason why Facebook is so inscrutable is that they actually don't want to explain why people have differential experiences. When you log into Facebook, it's not really reproducible.

You're not seeing anything that other people see, and it's inscrutable, and they like it that way. So I think that, in a sense, there's no difference between technology and how it's always been. Like, when I hire software engineers, there's this horrible methodology called LeetCode. I don't know if you've heard of it, but you get the software engineers to do these ridiculous programming tests, and the idea is that it's a proxy for an IQ test. And of course it's great because, oh no, we're not doing IQ testing.

We're testing their programming skills. But everyone believes that this is a proxy for an IQ test. And they believe that you can reduce all of the capability of a software engineer to this one particular test. And there are some arguments for it because it's standardized. And, you know, like, you want to build a hiring pipeline, you want to interview 300 people.

What are you going to do? Are you going to have a bunch of people subjectively interview people, or are you going to have some standardized test that you can benchmark them against? But that's nothing to do with algorithms per se. It's just what we did anyway. Yes, and you've raised a really great point, because standardization is confused with objectivity, and it's neither.

Tell me more. It's neither. So in your example here, you've just told me that your software engineer is nothing but an automaton. He's standardized. He's measured and benchmarked against some sort of standard, which is a lottery.

Maria Santacaterina

It's arbitrary. It's not the human. It doesn't reflect his human capability. He may well, you know, have learned certain standards, or he may well act according to certain standards when and where it's appropriate, but only when and where it's appropriate, because if it isn't, he's a human, and he will go, hmm, not sure about this. Let me just ask my superior if that's the case.

You know, we have a hierarchy. So that's what I mean about structure. That's why you can't isolate the tool from the structure or the environment that it's set in, if you will, from the system. The way that the system is created, organized, constructed, the way that it's trained. In the specific case of machine learning, because machine learning is, you know, it's actually an oxymoron.

It's a machine. It's not learning. We have tried to define what learning is, but we really haven't been able to, not really, as we discussed earlier. And then you have processes. Processes are what I favor, because they are organic.

And so that's where you have optionality, to put it in engineering terms. And that's where you have equifinality: you can have a multitude of processes to take you from A to B, or find a way through a labyrinth. Let's say that our complex world is a labyrinth, just to use a visual.

And as you navigate that labyrinth, I mean, there are a million ways that you can go. Now, we like orderly processes in an enterprise context. We like people to know what they need to do, when they need to do it, how they need to do it. But it's like Steve Jobs said. I mean, he was a brilliant entrepreneur, leaving aside anything else.

He said, I hire smart people not to tell them what to do, not to constrain them, that is, not to control them, but so that they can deliver what they're capable of. Yeah, apparently he was an asshole to work for. Apparently he was. I don't know. I didn't meet him.

But no, I think it gets to something really interesting. And maybe that word, standardization, was quite powerful, because the reason why we say we need to create objectives is that there's this knowledge-transfer bottleneck. In a large organization, information diffusion becomes exponentially less efficient. So they say, well, if we all have shared objectives and we all have common rubrics for sharing knowledge, then things will become more efficient.

Tim Scarfe

But that's precisely what scleroticizes the system. But what's the alternative? Because if we did have more subjectivity, if you like, then people wouldn't understand each other. Wouldn't you be constantly having to explain: oh, well, I've invented this new thing, and here is my way of thinking about it? I'd have to constantly explain everything. And wouldn't that make things more inefficient?

Maria Santacaterina

No, I will argue the exact opposite, because we are human beings and we are endowed with emotional intelligence and social intelligence, and we have intellect. No, absolutely not. The more you standardize, the more you impose coercive power through these systems, the less people understand each other, the less people get the meaning.

Tim Scarfe

Yes. From each other. I think this gets to the root of what we're talking about, which is, do you need power for effective organization? Yes, of course. Because, you know, that's the creative force, I call it.

Maria Santacaterina

Rather than power, I refer to it as the momentum, because it's an organic process. And yes, you do need power. Absolutely. But power as it's currently defined, and that's why I use the word coercive, is about control. It's about: how can I exert control over Tim?

So let's say Tim is in my organization, and he is, I don't know, at a certain level of the hierarchy. The thinking is: how can I exert control over Tim, so that Tim does what I want him to do, when I want him to do it, how I want it done? You know, I exaggerate, but this is the typical command-and-control structure. What if I said instead: here's Tim, let me understand who Tim is. Let me see what he's made of. What are his capabilities?

What are his strengths? What are his weaknesses? So then, as a leader, I mean, in the command position: how can I help Tim to be the best that Tim can be? How can I support Tim to overcome his weaknesses and to strengthen his strengths?

That's what humanity means. That's how an organization functions. Because when they talk about the team of teams, and you go to the Navy SEALs, you know, more haste, less speed, or, no, less haste, more speed. Yes.

What does that mean? It means that Maria and Tim are in a team together. They're working brilliantly together. They're feeding off each other and they're figuring it out.

And guess what? They bolster each other, and communication happens. This is what is missing in the organization today. There is no communication.

Communication is not speech. It's not what information I give you and you give me. Communication is something else. So I've argued in my book that we create socio technical systems which pivot around the fundamental human premise, which is communication. We need to understand what communication is.

I mean, it's called ICT, but it's laughable. There's the I and there's the T, but there's no C. We're missing the communication. That's what I call the social dimension.

So we should start to create systems that will facilitate communication, because the transfer of knowledge is not a technical thing, by the way. It's not mechanistic. It's what I share with you and you share with me, and then the people around us and how we interact. It's human to human that you transfer knowledge. You can't do it via a system, because if you do, you dumb down human intelligence.

Tim Scarfe

Yeah, but it's almost like in corporations now there's a virtual reality. We create these books on shelves and these management systems that no one ever uses, and they're there as a box-ticking exercise, and also a hallmark of Goodhart's law, which is that the objectives derange the system and create dishonesty. I think it's possible to have both, though. I mean, the best managers I've had in my career have been very empathetic. They have a good theory of mind of me, right?

They actually understand what drives me. We work collaboratively; we have autonomy and self-determination and so on, but still in a way which leads to the company's goals. But the problem is, companies are a bit like a physical system, in that you have very poor vertical information flow, and the dynamics are quite alien and bizarre and divergent at different levels of the hierarchy, which means the engineers, the boots on the ground, understand things very differently. And that's why we create a rubric for this vertical information flow. And it gets more and more inane and abstract the higher up you go in the organization: safety and security and performance and all of these words that don't actually mean anything.

And it's almost deliberately that way, because people have completely different interpretations of what the words mean. But is it just a bit of an illusion? Is it just that there should be a feeling of self-determination and autonomy and understanding? Is it an oxymoron to say that you can have both that and some directedness as a company? You absolutely can, because that's what I mean about vision.

Maria Santacaterina

If I constrain Tim to the extent that he can't breathe, I'm going to use that analogy, how do I know what I'm missing? If I allow Tim to breathe, Tim comes up with some fandango idea. I mean, all the best engineering, all the best scientific discoveries: it just happens by accident.

For want of a better analogy, it's not something that you seek precisely. But in an organization that is flexible and adaptive and resilient, the people who are the participants within this particular system, or this organization, are able to express themselves to the full, and they're constantly developing and evolving and growing. And so what happens is you have this dynamic, this power, this force, this momentum, which is self-sustaining, so the powder will never run dry. But if, on the other hand, you constrain, standardize, reduce, control, what happens? It dies. Why are there so many elephants in the graveyard?

Why did Kodak collapse? Why did Nokia collapse? You look at any big collapse: you can look at it in many, many ways, but fundamentally, it's because they didn't allow their people to breathe. Yeah, but I think it's also because it breeds so much dishonesty. Now there's this common rubric of management from above, and it happens in software engineering all the time.

Tim Scarfe

We use story points and we use Agile, and the engineers have to estimate how much time it would take them to do a particular thing. And it just gets to the point where there is no relationship whatsoever between what's on the board and what the software engineers are doing, because it diverges and diverges over time. It's the same thing, just company-wide. A lot of the leadership in companies have absolutely no idea how things are actually done. Exactly right. So typically, the board has no idea what's actually happening. You know, they sign off papers, and it's been verified by their own internal control team, the audit committee, and then the external people come in.

Maria Santacaterina

But do they know? This has been my challenge, the first thing I would ask. Do they know what they are signing off on? Quality controls and risk management: do they know?

They don't. Do they know what AI systems they have in their companies? Do they know? They don't. They really don't.

And they don't understand it either. And when they bring in new AI, let's say applications or models or functions, however you want to define it, they don't understand the risk that they are onboarding. It's hidden. And the complexity of the system and the ecosystem surrounding AI these days, especially with the open-source models or access to the... I mean, it's a minefield. So I go back to this point.

Unless the technology providers, the creators, I mean OpenAI and Microsoft and all these people who are involved in this whole big ecosystem, and their counterparts in China or wherever else in the world, unless they all take upon themselves their responsibility and their accountability to humanity, to society, to the environment, then we are in big trouble. Because you can only correct the error at source. You can't correct it when it's already in the field, in all these ramifications which are really complicated; you can't trace it back. We don't have interpretability, we don't have visibility, we don't have transparency, we don't have explainability.

But even if we did, unless we all work together as a team, let's say, we can't get to a better place. If the technology companies wish to have wide-scale adoption, in order for them to have the funding, to get the capital in to develop their models further, then it stands to reason that they have to give us some sort of guarantee and reassurance. You wouldn't put a car on the road with the brakes missing, would you? So why are we deploying this technology, which is not yet ready for society? Two sides of the coin.

Society needs to be ready for it and it needs to be ready for society. That's what I mean by sociotechnical system. We need a balance here. We haven't got it. We're not ready.

We're just not ready. But we don't have any strong governance. No, I mean, Floridi said that good governance is a little bit like driving a car: you need to have at least one hand on the wheel. A lot of the problem as well, as we were discussing on LinkedIn recently, is that there's a kind of manufacturing-of-consent thing going on with AI, which is that there's this incredible myth. You almost couldn't make it up.

Tim Scarfe

I mean, Dan McQuillan on Twitter said it was almost a conspiracy theory. I don't quite go that far, but it's really strange how people are convinced that this is artificial general intelligence, and so many folks in companies are chomping at the bit to install this technology. And as you say, there's not much oversight. Companies are leaking information to OpenAI, which will have consequences later on that we can't really conceive of yet. What do we do?

Maria Santacaterina

Well, have you not noticed any interaction these days? Before you go to use any service, I mean, the healthcare service is a good example, before you can be seen by the doctor, the doctor themselves will go through a questionnaire with twenty million questions. It takes a lifetime. By that time, you might have even died. I don't know.

We really have to stop, because this is putting everybody at risk. Every single person is less secure, less safe today, in 2024, than they were even three years ago. Interoperability is very dangerous. We don't know how to control it. And that is combined with automation and these AI agents, which may or may not have a human in the loop, or may, you know, rely on reinforcement learning or reinforcement learning from human feedback.

That's even more risky when you have people who aren't competent. If I'm a top scientist and I know what I'm doing and I understand the system, that is one thing. If I'm prompt engineering as a layperson and I don't know, and I'm just having a joke and putting in a funny thing, you know, let's make an atomic bomb, ha ha, we've got a problem, because the machine doesn't know. It doesn't have moral restraint.

It doesn't have judgment. It doesn't know what it's doing. So, you know, all the conspiracy theories about biochemical weapons, I mean, it's probably happening. You cannot democratize a powerful technology such as we have today without preparing society first. If you're a company and you adopt AI without preparing your organization first, you are remiss in your leadership, in your governance, in your stewardship, in basically all of your fiduciary duties, because you have not thought it through.

We like to race ahead in today's world. We don't like to figure out the consequences. We don't like to consider the uncomfortable truth. So we call it unintended consequences. Well, what does that mean?

We can preempt and prevent many, many risks if and only if we think things through. So if I'm advising a company, I will never advise them to just put in AI. First of all, why do we need it? What is the purpose? I mean, it's like that great thing

Stafford Beer said: it's not what the system can do, might do, or will do; it's what the system does. And there I rest my case. What the system does today is not good for human beings or the environment. Yeah, I completely agree.

Tim Scarfe

I mean, the only skepticism I'll throw in is that I don't think it's anywhere near as good as people think. Correct. Companies are finding this out very quickly. So there's this huge delusion that it's really, really intelligent, and that all we need to do is have these multi-agent systems.

It's going to do software engineering for us. You know, it'll do our hiring for us. Don't need to code. Don't need to learn coding. It's ridiculous.

And the thing is, the hype is settling down, because people are using this stuff and they're realizing. I mean, I don't want to sound defeatist; it's genuinely good for information retrieval, verticalized solutions, knowledge mining, and stuff like that. But even then, there's a huge security problem. You need to ingest all of your data into a big index, completely disregarding all of the base security access lists. Correct. What am I going to do?

Take all the information out of SharePoint, take all the information out of emails, into this big, inscrutable large language model? Most infosec people wouldn't even allow this to happen. But even when you do do it, you realize... I mean, I call it the last-mile problem. Well, I didn't coin it; someone on LinkedIn did, but I now call it the last-mile problem, which is that you can get to something that's okay-ish for a demo, and then when you use it in anger, you realize there's this huge long tail of complexity.
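[Editor's note: to make the security point concrete, here is a minimal Python sketch, with invented names and data, contrasting a naive index with one that carries each document's access-control list through to retrieval, the step Tim says is commonly skipped.]

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    allowed: frozenset  # access-control list inherited from the source system

# Hypothetical chunks as they might be ingested from SharePoint or email.
INDEX = [
    Chunk("Q3 salaries spreadsheet summary", frozenset({"hr_team"})),
    Chunk("Public product FAQ summary", frozenset({"hr_team", "everyone"})),
]

def naive_retrieve(query: str) -> list[str]:
    # The failure mode: relevance only; ACLs were discarded at ingestion.
    return [c.text for c in INDEX if query in c.text.lower()]

def acl_aware_retrieve(query: str, user: str) -> list[str]:
    # Filter by the caller's permissions before anything reaches the model.
    return [c.text for c in INDEX
            if query in c.text.lower() and user in c.allowed]

print(naive_retrieve("summary"))                  # leaks the salaries chunk
print(acl_aware_retrieve("summary", "everyone"))  # public FAQ only
```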

Maria Santacaterina

Exactly. And people are saying, oh, it's fine, Tim. Software is now like neuroscience: we don't really understand how it works, and it's very complex, and we put probes in and we have little alerts and notifications, and it never ends.

Tim Scarfe

It will never end. It's just so complicated, and you'll never make it work robustly. And a lot of companies are either realizing it now or they will realize it. So in that sense, I think the risk of, you know, an agentic, autonomous AI, it's not that it's overblown; it just doesn't work.

Maria Santacaterina

Exactly. And this is my point. And it's dangerous. I mean, why would you entrust your critical infrastructure to it now? It doesn't mean to say that it's never going to work, but it has to be properly managed, it has to be properly conceived.

That's what I mean about structure, system, and process. I was talking to somebody at Imperial College, one of the professors there, and he said, you know, we're not there. It takes forever to calibrate the machine, even if you've got all the funds necessary to create your own model for your own organization so that you can make sense of your own data. Let's say, for argument's sake, you've covered off all the legal aspects and everything is hunky-dory. But first of all, there are the inaccuracies in the data, the duplications, the errors, the manual inputs, all of that stuff. I mean, if you train a model on data that is qualitatively poor, we have a problem.

And this is, I think, the crux of the matter as far as I have understood it so far: we have a society built on quantitative analysis, ignoring, for the most part, qualitative analysis, which is laborious and tedious and involves complicated human beings and all of that stuff. And we want to get there quickly, but that won't do. In my view, that is why the systems are not working for us: because we've ignored our humanity, or what I call the human dimension. So imagine that we could reconceive the system to entertain qualitative analysis alongside quantitative analysis, and imagine that we could upgrade everybody's education in mathematics and physics and all of the good stuff that we need to understand, or at least have a notion of, so that we can all apply our different strengths. Once we know how this machinery is constructed, then we can all participate. We can all contribute, and we need all of us.

Only variety can absorb variety, if you want stability in the system, right? We cannot have a stable, reliable system built on AI or automation or artificial systems, algorithmic systems, unless we can all participate. That's what I mean about power. Power is the capacity to do, to enact, to perform. It's capability.

Coercion is something else. Control is something else. And so we need new thinking in the 21st century. We need new leadership; we need multi-dexterity at the higher echelons of the enterprise, because that's what coordinates the organization and shapes and forms it, and it needs to filter down. So there's this thing that people are talking about, you know, distributed leadership and so forth.

But you can't really have a democracy until you allow everybody to participate. And we seem to be creating systems that continually prevent people from thinking for themselves. And then people complain, you know, governments complain: oh, people don't want to go back to work after COVID. Well, I wonder why. Yes.

So I think we need to reconceptualize the way that we are imagining these systems to serve civil society, so that it doesn't fall apart, because the constructs are only there because somehow we all subscribe to them. But imagine that we don't. Great empires fall not because of an explosion, because you can see that and you can kind of control it. It's implosion. We are risking one massive implosion the way we are going.

So I liked that description of the, what is it? The explosion of creativity, was it? Yes. But also the deception, the objective deception. Yes.

You know, the idea that AI is objective: it's not. It cannot be, because it's fed on subjective biases, stereotypes, and all the rest of it. Nobody can be purely, scientifically, 100% objective, with no bias. It is impossible, because we are humans.

We all have biases and we all have different kinds of preferences. I don't like the word bias; I would say we all have different preferences and different ways of thinking and understanding and seeing and interpreting things. But that is okay. Yeah, no, it reminds me of the phrase, many eyes make all bugs shallow.

Yeah. And it is a problem, because do you remember when Wikipedia came out and you weren't allowed to cite it? And then it became the kind of canonical, authoritative source that you can cite in a school essay. And quickly, GPT-4 is becoming like that. That's a problem. You know, the quickest way to end an argument on a Discord now is to just say, well, ChatGPT says so, therefore it must be great.

Tim Scarfe

But as you say, this is just a large language model that's been trained on a bunch of stuff from the Internet two years ago. It's not giving you a diverse perspective. It doesn't deliberate; it doesn't ask questions. It's out of date. It may be inaccurate.

Maria Santacaterina

It may have confabulated or hallucinated. I mean, it's a feature, right? It may have fabricated a fact, which is actually not a fact. Let's just try and keep it scientific for the moment, and it may be giving you the wrong information. So how do you know?

You and I know, because we came from an earlier generation and we were taught differently, and we had books and we went to libraries and we talked to people and all that kind of stuff. But if you're a child today and you grow up on an iPad and you're pressing buttons to learn mathematics, what mathematics are you learning? What form of reasoning are you learning? Mathematics is not one plus one equals two.

It's a form of reasoning. It's a form of thinking. It's a way of thinking, or it's a way of approaching a problem. Yes, we're not learning it anymore. Sorry.

No, no, no. I was going to relate it back to our subject matter, which is the coding. You know, for Jensen Huang, the president and founder of Nvidia, to say you don't need to code anymore... I understand he wants to sell his product, but that is the most dangerous thing that somebody in his position can say, if I may say so, in the sense that if we don't understand what we are building, how on earth are we going to fix it when we've got a problem? I know. It's creating this sea of mediocrity, because I think a lot of serious software engineers understand that this is never going to replace their jobs.

Tim Scarfe

But what I see is just insane acquiescence, not only with this thing, with the software engineering, but also with language models. People are saying to me, oh, Tim, you've not seen Opus, you've not seen this new model. And I worry because when I use these models, I can see just how many mistakes there are. In fact, I see it more, not less. Yes, it's because my brain's tuning into it.

It's almost like I'm learning to think like a language model. And in a way, I can get more out of it, because I know how to prompt the language models to get useful things, and I can still critically appraise the output and so on. But what I'm seeing is that loads and loads of people, especially younger kids, are just taking it at face value. And it's partly because they don't have the cognitive horizon to see when it makes mistakes, or they just don't see the patterns yet, but they're taking it as gospel. And more importantly, their brains aren't developing.

Maria Santacaterina

So I'll give you an example. When I was at school, I was learning languages; I learned a few foreign languages, too. I was born in England, so I learned English, but I also learned French and Spanish and Italian and so forth. And, you know, we had to do précis. In one of the exercises to test our comprehension, our listening comprehension and our written comprehension, we would be given a text and we would have to summarise it.

Now, if you used anything like the language that was used in the original text, you would get a nice big F, a fail. What is ChatGPT doing today? Regurgitation. Thank you. So what are we learning?

What are we teaching our children? So the point is I learnt through language, through the study of language, how to critically reason. And I realized that I have acquired a great capacity because I have a command of the english language and a few others besides, which enable me to have critical thinking in my very being. You know, it's part of me, so I will never take something at face value. I mean, you know, I had to study a polinar.

Have you tried Apollinaire? I mean, a very difficult French poet, and he was also on drugs and so forth. Trying to figure out what he was thinking when he wrote what he wrote, I mean, it's really difficult. But this is what AI is. I know. I think the best antidote, and folks at home should try this, is to write a 3,000-word essay and get GPT-4 to summarize it for you.

Tim Scarfe

And I can tell you exactly what it will do, especially if you have headings in your essay: it will just regurgitate the heading structure back to you. It has this thing called sycophancy as well, which is that it basically just says back whatever you assert. You could say that the moon is made out of cheese, and it would pretty much say, yes, the moon is made out of cheese, and blah, blah, blah. And you just start to see through it.
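[Editor's note: Maria's précis rule, fail if you reuse the original's language, can be turned into a crude check. Below is a minimal Python sketch that scores how much of a summary is lifted verbatim from a source via shared word trigrams; the toy texts and any pass/fail threshold are invented, purely illustrative.]

```python
def trigrams(text: str) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def lifted_fraction(source: str, summary: str) -> float:
    """Fraction of the summary's word trigrams copied from the source."""
    summary_grams = trigrams(summary)
    if not summary_grams:
        return 0.0
    return len(summary_grams & trigrams(source)) / len(summary_grams)

source = "The committee reviewed the proposal and rejected it on cost grounds."
summary = "The committee reviewed the proposal and turned it down."

score = lifted_fraction(source, summary)
print(f"{score:.0%} of summary trigrams appear in the source")
# By the schoolroom précis rule, a high score would earn a fail.
```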

It's just rubbish. And the amazing thing is, people are just summarizing academic papers and thinking that they don't need to read the papers, because if you actually force yourself to read the paper... I mean, people were lazy before GPT, but now they're even less inclined to actually read the paper. There's just an entire sea of meaning which is being lost, correct? Well, I did read the papers. I did a lot of research and I did read everything. I did not use ChatGPT, I just want to make that clear.

Maria Santacaterina

My book is on Blinkist at the moment, and ChatGPT or something, anyway, some AI machine, has summarized my book. It's very simplistic. It's not exactly what my book says; the book is a bit richer than that. But this is the trouble, you see. Because in the past, let's go back to the Greeks, you had a tutor and, you know, you would have a philosophical work, and you would debate about it, and you would read it, and you would discuss it.

And you know, in our universities in England, I mean, I went to Manchester, for example, and, you know, I had a tutor and I had to present a particular subject matter in the tutorial. And I had to debate it amongst my peers and also with my tutor. But also I went to lectures where I was presented with information and I annotated. And as I annotated, it was just so that I could then go back and figure it out: what had I learned, what did I agree with or disagree with, and what did I need to ask about?

Because I didn't understand. Now, if we lose this interaction, this interactivity, this human-to-human connection, if we lose this human-to-human communication, which is what is at risk here, this is very detrimental to the evolution of our species. I completely agree. No, it's the understanding credit card. Yeah.

Tim Scarfe

And meaning and understanding. It's constructed, and actually, it's quite performative. You know, it's very physical. And it's not to say... I'm not saying to kids, don't use GPT. I mean, you can still use it.

But the thing is, it needs to be embedded in some kind of human process. Like, for example, we have a Discord, and we are constantly debating, you know, articles on the Stanford Encyclopedia of Philosophy. Like, we were talking about agency the other day, and we're going back and forth, and some of us are using GPT, and we're batting it back and forth. And this is how you actually understand things, right? It's a process.

It's a process. It's a social process. It's not something that you can just kind of, like, detach completely. And the thing is as well, that it gives you the illusion of understanding, and it's a kind of deferred understanding. It's almost like, well, I know it's there, so I don't need to know it now because I can go and get it in the future.

And actually, deep knowledge about the world builds on previous knowledge. It builds and builds and builds. It's something that takes many, many years. And if you're constantly putting everything on the understanding credit card, at every single layer, there's no foundation for any knowledge that you might learn in the future. It doesn't become part of you.

Maria Santacaterina

And the point is about meaning. I say in the book that meaning emerges, but meaning is also an intrinsic part of your being, and meaning evolves and it deepens and it broadens and, you know, it develops. If you halt development, and this is what I'm really trying to say here, if you halt human development, if you prevent the human brain from expanding, from evolving, because, you know, look how densely packed it is.

And all those pathways didn't just happen. If you halt that process, you are doing something that is beyond your capability as a human being. So the transhumanists in this world and others, you know, please think again. I understand why you're going there, and I understand the fantasy, but we need to come back down to reality.

We're not Star Trek. Yeah. So given that we're not Star Trek, but we do have a very rich imagination, when we apply the technology in the real world, can we just make it just? Yeah. Pun intended.

You know, Blinkist, which I mentioned earlier, I just want to clarify. I mean, they were great. These two or three young kids came up with this fantastic idea: we're going to summarize, yeah, all of the books, and we're going to make knowledge available quickly, which I understand, because we're all pressed for time, etcetera. But at the origins of this particular application, it was that a human being would read the book, understand the book, interpret the book, and then make a summary of the book. This is not the case.

It is an AI machine, and it's not the same. I know. Even with Blinkist, they sponsored one show, and summaries are great and everything, but the thing is, like I was saying, when you're on the train, you know, your meaning is constructed. It's becoming enmeshed, entangled with all of my lived experience at that moment in time.

Tim Scarfe

It's a journey. When I read The Language Game book, because I'm making a special edition on that, I was on a cruise ship around the Norwegian fjords, and my entire understanding of it is kind of embedded in that real physical experience. I had so much time just to ponder and think, and I would put the book down and I would take highlights and, you know, stick post-it notes to the book.

Maria Santacaterina

Exactly. And that book, to me, is a real lived experience. It became a part of you. It became a part of me. And if I read a summary of it, it's, like, meaningless. I can argue it both ways.

Tim Scarfe

Because the thing is, we've got to the point now where we understand so many things about the world that we already kind of understand something before we've even read it, just because we have models, we have reference frames, we have knowledge, and we can just hang things off knowledge we already have. And that's semantics. You know, we have a worldview, we have a kind of cognitive horizon, and we can hang things off what we already know. But the real magic is being able to construct new models, being able to construct new semantics, and actually embedding that in our experience.

Maria Santacaterina

But that's the beauty of language. That is the intrinsic aesthetic value of language. It is continuously evolving. Your worldview is something that is growing all the time. This is what growth means.

This is what learning means. I mean, I think somebody said it: you don't just absorb all of this knowledge and all of this language all of a sudden; you don't, until it becomes a part of you. And so the act of highlighting or annotating or pausing and thinking, that is an act that is conscious or subconscious, but it is formative. And what I'm arguing here is that we cannot take away the formative experience of learning from children and adults.

This idea that you hit 50 and you can't learn is just ridiculous. Yeah. By age 50, the plasticity of your brain is so rich, because you have already acquired knowledge and you already have a worldview and you've already formed experiences, but you are so ready for the next level. And the point is, we are lifelong learners.

Machines are not. They can only learn what they can ingest, whether it be from existing data, past data, past facts, if you will, events, if you will, or anything that is inputted into them via human interaction. But they're not learning. They're just ingesting data fragments. Then, you know, they process it in their kind of mechanistic way.

But that's not how you and I learn. It's ad hoc, it's chaotic. And so this is why we're missing a trick. The hiring process that you mentioned earlier, the argument was given that an AI is more objective than a human being. Well, I'm very sorry, it's not.

If you interview me and you think there's something that I can do for your organization, probably there's bias, because maybe we get along or something and we think we can work together. It's probably subconscious, but what's wrong with that? Exactly. An AI is no different. It's standardized, and it's worse, because it strips away the human dimension. And so what happens is that you can only get automatons, you can only get MBAs, sorry, MBAs, but we need the arts as well.

We need all sorts. We need great administrators, we need great artists, we need great scientists, we need great humanities people. The humanities people tend to have a much broader vision, but they can also narrow in on the detail. And so with all of us working together as a team, back to your idea of collective intelligence, which I also very much believe in, because that's how we learn. That's what enculturation means.

Yeah, the diversity of the world. I mean, you go to a different country, you speak to a different person, they have a completely different perception and experience of life, and you learn naturally. There's an osmosis. This naturalness is not something you can teach the AI, because we are organic matter, living, breathing beings, and it is not. I know I got in trouble for saying this on another podcast, but I think Kenneth made this point as well, which is that we're starting to turn into a monoculture, and things stopped evolving about 20 years ago.

Tim Scarfe

People just say, I'm getting old and things are still evolving, but there is a lot of truth to this. Like, you know, we've now got this quite globalized culture. And one thing that we really need to foster in companies, I think, is what Stanley calls diversity preservation. So not just diversity, but we need to stop these consensus mechanisms washing out all the differences.

It's about actually allowing them to express those differences without washing them away. You know the expression speak your mind? Everybody is afraid. Everybody is constrained by the algorithm, everybody is constrained by the procedure. Nobody will dare say what they think.

Maria Santacaterina

Guess what? That's how really serious bad things happen. So you're in a hospital environment, you know, it's a life-and-death situation, but the evidence says black, and you need white. What's going to happen? Because the evidence is wrong, and white will never be considered, because they're afraid to go in another direction.

It's not the considered pathway. So, you know, it's really dangerous because we're not binary. And, you know, if you try to tell me that you can encode ethics, I will argue until the cows come home. I don't think you can, because ethics is a very complex reasoning process and it's open ended and you can't codify it. How we encompass ethics in the deployment of algorithmic, artificial and automated systems is, I think, the topic du jour at the boardroom table.

And that should be discussed first, in my view, before onboarding all of these risks and putting everybody's life in danger. You see layoffs, as they call them in America, left, right, and center at big technology companies. But, you know, here's a telling thing. Somebody mentioned to me the other day that OpenAI is not using ChatGPT internally. I wonder why.

Tim Scarfe

Interesting. They're using some sort of, you know, backward system. Really? Yep. They're not using their own product.

Is it because they're testing the new one? No. Why aren't they using their own product? Data leaks, maybe. Wow.

Maria Santacaterina

Inefficiency, unreliability, fragility, brittleness, as they call it, non-robustness, I mean, you name it. So I think that's very telling. You know, I do understand that if you've got the whole world as a laboratory, and you can get away with testing it without putting in, you know, the safeguards, and without having to explain yourself, and all the rest of it, and you can self-regulate, I do understand it. Very tempting. And you can make billions and trillions of dollars.

I understand. But that really isn't responsible. That's reckless beyond reckless. So if they are not using their own... if it's true, I mean, I haven't talked to Sam Altman, so you'd need to ask him.

But if it is true that they're not using their own system within their own organization, I think that would be a reason for others to say we should just reconsider. On the subject of ethics, quickly: it had a bit of a bad reputation. True, there was the fiasco with Timnit at Google, but what I mean is, a lot of engineers have been quite disparaging about it, because there's this perception that it was gatekeeping. And now in Europe they have to have a broader impact statement, and it's almost like the perception was, we now have a bunch of gatekeepers that are trying to regulate mathematics.

Tim Scarfe

Right? That's not the case. That's not the case. But now I think it's become deranged even more, because it's become associated with the effective altruism movement and AI risk and all of this kind of stuff. So that's kind of being lumped together.

It's being conflated. So right now, what do we need to do about AI ethics? So I don't like to talk about AI ethics or responsible AI and all this kind of stuff, precisely because of the reasons you have said. But I do think that we need to talk about moral reasoning, because that's a human thing, and that brings the human dimension back to the fore. So if you're an engineer and you want to have carte blanche to develop whatever system you want to, that's okay in the laboratory, so long as you don't kill anybody.

Maria Santacaterina

But when you release it into the wild, as they call it, that means real life and society and the civil world, you have a certain responsibility, and, you know, you have a duty of care. There are various fiduciary duties, but one of them, which is often overlooked, is the duty of care. And so I emphasize that in my book, in terms of governance. This idea that you can regulate or legislate for AI is just silly, really. You can't risk-manage in that way.

We do need laws, we need regulations, we need audits, we need all of that. But it's not the answer.

The answer is that we reconceptualize our approach to this new technology and that we try to understand, to figure out, a way to develop it in a way that is beneficial. So a priori, we have one fundamental principle, the big red line which we must not cross: you do no harm. If something is going to do harm, that means you need to understand what that looks like a priori. It's like defining a finite system: you need to understand infinity, and it's by having infinity that you can prove that something is finite, right? It's the same thing.

If we want to do no harm, we have to consider all of the implications, the ramifications, if you will, at a social, societal and environmental level. So I take an ecological view of this new technology that we are developing. Because if we do it in that manner, it's holistic.

You're looking at the whole, not at the parts, so we have a much better chance of success. The engineer can develop his newfangled tool to the best of his ability, while doing no harm, just as much as the deployer and the user and the recipients of those outputs can benefit from them. Because a priori, we thought about it; we identified the purpose and the need. It's a human need, it's an environmental need.

And guess what? We can make the technology work for us, not against us. But the training methods are adversarial. It's game theory. We're not games.

We're not chess boards. Yeah, I agree that this iron dilemma... Yes, yes. You know, I mean, that's not real life, two-player, zero-sum games.

Tim Scarfe

Yeah. So I agree that this Ayn Rand view is very deleterious. So one of the issues is, you know, ethics and morality is a collective action problem, which means one group's harm is another group's good. And also there's this notion of, can we trust large corporations to care about society and the environment? Well, that's a really good question.

Maria Santacaterina

I mean, who can we trust these days? Can you trust government? Can you trust the state? I mean, who? I mean, who?

I think one of the unintended consequences of deploying this technology in civil society is that it has created, erected, really serious barriers of distrust. Because I think the whole COVID experience, with the social isolation, obviously, it was to do with containing the virus, but I think the aftermath, the effects of that, have not been dealt with, and they haven't been addressed. They weren't even thought about. And so you have all the conspiracy theories that it was just one big social experiment, but we now have to live with the consequences, and we have to understand them.

And so I don't think anybody trusts anybody anymore, to the point that you have to prove your identity to a machine. That is not beneficial. Human relationships are about trust. But trust is not something that you can dictate. You can't engineer it.

It's something that is organic. Once again, we go back to inanimate and animate matter. Trust is something that you build over time, and you build it through relationships. So you talked earlier about how we are collectively intelligent, and, well, that can only happen through trust. I call it consciousness, because it's a bigger thing than any individual.

It's individual, as I mentioned earlier, but also it's collective. And I think through the centuries, the millennia, you know, all the greats that came before us, people say you stand on the shoulders of giants. Well, we do. We do stand on the shoulders of giants. And every time we learn something new, that is what enables our collective brain to evolve, expand, and enhance its capabilities.

And it happens naturally, organically. I don't think you can define that mechanically. So we like to say in philosophy there's a higher intelligence and, you know, a lower intelligence; we've kind of simplified it. We've tried to mechanize intelligence and philosophical thinking, but in actual fact, I don't believe you can, because there are so many dimensions.

I keep going back to emotion, because I think it is misunderstood and little considered, especially in organizations today, whereas every single action we take is driven by emotion. It's in the word: emotion. We don't act unless we feel something to act by or in accordance with. And there's instinct.

And so you mentioned earlier about embodiment. You can't disembody intent or motivations; therefore, you can't observe them directly. You can guess what I may be thinking, what may motivate me, what may drive me, inspire me, you know, what my intent is. For example, in a court of law, they talk about intent.

So there is the law, which is, let's say, technical. It's a technical definition of a crime, let's say. But then, before you're convicted, they have to prove intent. Your lawyer, the defence, whoever, they have to prove intent. But how can you prove intent?

You're not going to prove it with AI. You know, it's not a lie detector test. In the 19th century, I think, they went by how a person looked or, you know, how a person expressed themselves, or the language that they used, to determine whether they were lying or not. But that's not scientific. It's pseudoscience.

So let's say, for example, in yesterday's kind of era, they would use intelligence tests, but they're not really intelligence tests. You know, they would try to say a person is more type A or B or C or whatever, I don't know. The hieroglyphics were, again, categorizing, classifying, trying to reduce a person's multidimensionality, the multiple elements of their being, into some sort of formula to say they're X, Y and Z. Well, how we respond, I go back to behavior.

You can't predict behavior. How we respond to a given situation may show some sort of pattern when you look at it in aggregate data, but it's not context-specific, and it's certainly not individual-specific. I may respond to a situation in one way in this moment, but it may be completely different in another moment. Why? Because I may have learned from my experience, or I may have changed my mind, but that's not a bad thing.

You know, if it's windy weather, I may make a mistake and I may go outside and I don't have enough clothing and I'm cold. Well, the next time I might put on a jacket. That's not a bad thing. I've changed my mind. But now we're criminalizing changing your mind.

Changing your mind means adaptive behavior. Adaptive, not adapting, but adaptive behavior. It means that there is a learning process underneath, whether it's conscious or subconscious. And this is something that we are missing. And therefore, if we don't appreciate this multidimensionality of learning and the process that is entailed therein, we fail to access meaning because meaning is also a process.

It's not a definition. So if we talk about meaningfulness, it comes about in that kind of chaotic, complex process that we've referred to in our conversation, something that is intrinsic and extrinsic all at once, but you can't really pinpoint it. Yeah, that really resonates with me. So meaning is actually an unfurling process. Yes.

Tim Scarfe

Yeah, that makes a lot of sense. And you were saying before about intentionality and culpability in the eyes of the law, and this is really important, because machines don't have any agency, therefore they don't have any intentionality, therefore they don't have any culpability. And it just creates this diffusion, in a similar way to when we kill people with drones. There's this moral diffusion. There's a kind of concentric-circles thing going on, and every single layer diffuses the responsibility more and more and more, until no one really understands who's responsible anymore.

Maria Santacaterina

Correct. What happens is that you separate ownership from the action, or agency from the action, and that separation of power is, in this instance, detrimental, because it means that I no longer feel responsible for my bad action. It's so far away from me, it's more than arm's length, that I can't actually see it. Yes, but that inscrutability is part of the design of power, isn't it? Of course.

Because it is, in this case, coercive power. Yes. Because you don't actually want people to understand or pinpoint the epicenter, if you like. Well, if you look at a military training academy and how the soldiers are trained for future action in the field, how are they trained? They're trained not to think, not to respond.

All of the humanity is kind of drummed out of them, because they have to be controlled. And this is why they want to use AI on the battlefield: because they think that you can control it better, it can do your bidding, and it doesn't think, it doesn't have any moral hesitation or any moral restraint. And that's a good thing? Well, I would beg to differ, actually. I think that dispassionate, cold-blooded thinking in these instances is something that is beyond my personal comprehension, and, I think, beyond most people's.

Because what makes us human, in my view, is the ability to recognize and, more importantly, to exercise agency and moral restraint. That stems from autonomy; that stems from my ability to think freely, independently, and within my own being, about what I perceive and feel and think is right or wrong. Not that it's a binary thing. It's a complicated process, but I do it instantaneously.

I know that if I harm you, that is not a good thing. Not because of Catholicism or any other religion, you know, because you could argue, oh, well, that's because you were brought up in a certain way. No. We are social beings. That means we are destined, if you want to put it in these terms, to work together. And we strive to embrace diversity and differences, and we seek novelty, because we want to learn.

Because, you know, if you look at the longest arc of human history, whenever something really catastrophic has happened, what do we do? We rebuild. Why? Otherwise we wouldn't be here anymore.

It's because we strive to overcome and to try to make good on our errors. Of course we make errors, but human errors are different in kind and in nature from algorithmic errors. Yeah, I mean, you know, we could discuss Hume's guillotine here, potentially. But of course, because, you know, I think there is such a thing as moral knowledge, and it's something that we learn or acquire, depending on whether you're a moral realist. Maybe we can ask you that as well.

Tim Scarfe

But it's something that we learn from experience. And the more agency we have, the quicker that learning process is, because we're understanding the moral process. The unfurling, as you say, is helping us actually understand what's right from what's wrong. But maybe just on that moral realism point: do you think there's such a thing as a moral fact?

Maria Santacaterina

No. Okay, so you're a relativist. I guess so. I believe in relationships. I believe in the relativity concept.

You know, the relation of something to another thing, right? To express it very simplistically. But it's an organic process. That's why you can't pinpoint it, because otherwise it would suggest two points, and then you measure the difference between them, but you can't. I mean, Aristotle expressed it as: you have a virtue and you have its opposite.

And by the definition of its opposite, you know that this is a virtue and that is not a virtue, if you look at his listing of it. But instinctively, intuitively, cognitively, I mean, intellectually, emotionally, your whole being knows what feels good, and you respond and you flourish, and your body, everything, you know, your face lights up. This is great. But the point is, we wouldn't be social beings if we didn't have that as part of our being.

If that were not intrinsic to our human identity, we would not be social beings and we would be killing each other. But actually, we don't. You can point to all of the fallibilities of humanity, the errors that we make, you know, the misjudgments that we make every time we fall. You can say all of that, and that's true. But the way that we respond is distinctly different from the way that an AI can respond, or will ever be able to respond, no matter how fantastic the engineering, how brilliant the mathematics or the physics of it.

I'm convinced of that. Yes. But do you think humans need to be, you know, getting to the paternalism point, do we need to be constrained to bring out the best in us? It's about freedom.

Tim Scarfe

Okay, but I mean, I'm trying to sort of pinpoint where you stand. Are you almost advocating for a kind of anarchistic society? No, no, no, because society doesn't know how to function like that.

Maria Santacaterina

If every human being were so elevated in their knowledge and their understanding and their awareness that we could all live harmoniously together, give and take, it would be, let's say, an idealistic, aesthetically beautiful society, and one wouldn't need constructs or constraints to create order. But there is this side to human nature, let's say, which requires control and constraint and order in a mechanical manner. And, you know, we've come this far, I guess, in part because of it and in spite of it. I don't think that anarchy, or utopia, to put it in a more positive way, is realistic. But maybe, if we reconceptualize the way that we use technology, we can enable, within a defined system, a way of working that is more harmonious than not.

That is to say, we can express our thoughts, we can communicate with each other, and it can be enabled by the technology, facilitated by the technology as a supportive mechanism, a supportive tool, not a constraint. At the moment, it is acting, perhaps unintentionally, more as a constraint than as an enabling tool, as a kind of means to an end. And what is that end? The end is to develop our human intelligence, in my view; to expand our minds, to grow, to learn, to explore, to do all of that great stuff that we humans are great at doing. So, back to creativity, that ingenuity that we have that is intrinsic to our being.

That's why we've been able to go to the moon, and now we're even thinking of getting to Mars and maybe one day colonizing it. You know, that would be interesting. But the fact that we can imagine it, that we can conceive of it, that we can believe it, even, that is human. That is not something that can come from an inanimate object, and I firmly believe that. And that's what pertains to human consciousness. I think this is what I mean by the expansive side of human nature. But there is the other side, too.

And so you constantly have this tension, and I think that's what makes it interesting. And I think that's why we will always strive to try to understand ourselves better, and our environment. What do you think of the fact that some singularitarians have this utopian view of the future, where artificial intelligence will just create plentiful bounty, if you like, and we don't need to work anymore, and we'll have amazing lives? That's quite a pervasive view coming from Silicon Valley right now. It's kind of ridiculous.

So let me explain. Nature is bountiful, potentially abundant and, if we didn't destroy it, potentially limitless. We know from our history that it will recover. Nature will always reflourish, but we may not. And I think the harm that we are causing today, that we haven't yet quantified or scientifically analyzed or empirically evidenced, is going to be our downfall. That's the hubris.

Our overconfidence, our arrogance, perhaps our ignorance, perhaps our stupidity, is limitless. We seem to think that we can control everything, but actually we can't. And we seem to think we know everything, but actually we don't. And that's why I began my book with Shakespeare's Hamlet. You know, we do not have complete knowledge.

The point about us humans is we will never have complete knowledge. The more we learn, the more we advance, the more our scientific progress evolves, the less we will know. It goes on forever. Yes, it goes on forever.

That is intelligence. Yes, I completely agree. That's the only definition of intelligence that we can give. Yes. In your book, you spoke about reciprocity as well.

Yes. Tell me about that. So it goes back to relationships, and relations, and causality. In scientific terms, I think it's very difficult, again, to oversimplify, to reduce, to put into a formula, to classify or categorize causality, or causal relationships, or, let's say, how our being, our presence, our existence, our essence relates to something else. How do we then respond in relation to it? This is what I mean by reciprocity.

So, for example, if we're in a business setting and we make a deal, what are we actually doing? We're actually exercising reciprocity, because we're saying, this is good for you, and, sorry, this is good for me too. And so we find a happy medium and we strike a deal. You know, there we go.

Handshake. If you don't have reciprocity, you don't have a relationship. There's no relation between what you do and what I do; then there is no communication either, and there's no humanity in it. So if you take AI, for example, they're saying AI can be a CEO.

An AI is a mechanical thing. It does not share information with the other mechanical thing unless we make it share the information. So it's not really adapting; we make it adapt. It's not really communicating; we make it communicate. So the point is, we could end up in a really non-existent world, a non-existing world, which sounds like an oxymoron, but if we were to follow this through to the extreme, to the point of singularity, what we are saying is that all of us would disappear.

No humans anymore, and we would have a singularity of some sort, I don't know, a big box. I don't know what it looks like, some sort of massive data center. How is it going to run? Who's going to power it?

Who's going to maintain it? What happens when it breaks? Is there ever going to be a point of singularity? So can we reconceptualize what we mean, what we understand by singularity? What does it actually look like?

What does good look like in that context? If we mean that we're going to get to an elevated humanity, where we have evolved even further, where our human brain, individually and collectively, has evolved and expanded and improved, and, yeah, we've got to the next stage of our evolution, great, thumbs up. We've used our technology, our techne, to complete nature.

We are part of nature. We are an integral part of nature. Fab. Great days. But if we mean by singularity that we're going to annihilate ourselves, well, I would think that errs more on the side of human stupidity.

As Einstein once said, that's infinite. Indeed, indeed. You speak quite broadly about the tyranny of categorization. Yes. And I'm just trying to get a handle on this, because we need some categorization, it seems. I mean, language is a form of categorization.

Tim Scarfe

So what we do is collectively discover these cognitive categories, and you can think of them in this relational way. So they're a bag of analogies, as understood by other people, but you get some kind of emergent structure or whatever. But we also have an individual experience, right, which is our own kind of categorization. So what's the continuum between us all just having a completely fractionated individual experience and some kind of useful categorization?

Maria Santacaterina

So when I was at school and I was doing biology and chemistry and physics, I absolutely loved my science classes. I decided then to opt for the humanities. But anyway, I love science equally; I was simply forced to choose.

I wish I didn't have to choose. But anyway, typically you would have classifications in chemistry, and you would make experiments based on those classifications, and you had the formulae for the various elements and so forth. That is very useful in a laboratory setting; arguably less useful in the real world, when we're talking about human relationships and human systems. So I think if, for AI, we have to create classifications and constructs, because that's the scientific process, it's very useful in a laboratory setting, as I said before, with competent people, you know, scientists who know what they're doing, who understand what those categories and classifications are, and who can manipulate them to serve their purpose. It's a different story.

When you're out in the big bad world and you have open-source AIs, with people who don't know what they're doing but who can just press a button and do something with it, it's not a useful classification or categorization. So you have people doing, what did you call it, emotional AI or something? Yes. Well, that, you know, is very problematic when it's applied in the real world. Let's say you go for a job interview, and let's say they're analyzing my mannerisms.

They're going to categorize me as X, Y and Z, but it's nothing to do with who I am, or with what my competency level is. Yeah, I just wonder whether there are some examples of this. I'll give you an example. I mean, in the weight-loss literature, doing any kind of research on weight loss is incredibly difficult.

Tim Scarfe

Unless you put people in metabolic wards and you force them to eat a particular thing, you get adherence problems. But what you also get is incredible individual differences. So you look at a typical weight-loss study, and if you actually look at the shape of the distribution, it's almost as ambiguous as it could possibly be. You have some people that lose loads of weight, you have some people that lose no weight, and you have some people that gain weight. And what we tend to do is statistically aggregate this information into a form which is explainable to normal people. We say, oh, well, on average, people lost 15%, therefore the thing works, or whatever, and we're discarding all the information.

But are there some circumstances, at least, where we could aggregate the information without losing too much? No, no, because the moment that you aggregate anything, you've already lost several layers of meaning, and you've already lost several dimensions of the information that you're trying to analyze. So statistics, per se, is a reductionist art. I would call it an art, not a science. And so you're never going to be able to reconstitute the original information in a reliable and trustworthy manner, because of the nature of the processes that are involved.
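(As an illustrative aside, not from the conversation: a minimal Python sketch of the aggregation point being made here, using invented numbers for a hypothetical weight-loss trial.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcomes (kg change) for three very different subgroups.
responders     = rng.normal(-8.0, 2.0, 50)   # lose a lot of weight
non_responders = rng.normal( 0.0, 1.5, 30)   # roughly no change
gainers        = rng.normal(+4.0, 2.0, 20)   # actually gain weight

outcomes = np.concatenate([responders, non_responders, gainers])

# The aggregate headline: "on average, participants lost weight."
print(f"mean change: {outcomes.mean():+.1f} kg")

# The discarded information: the distribution is multimodal, and a
# sizeable fraction of individuals moved in the opposite direction.
print(f"fraction who gained weight: {(outcomes > 0).mean():.0%}")
```

The mean comes out negative, so the headline says the intervention works, while a sizeable fraction of the simulated participants gained weight; the averaging has washed out exactly the individual differences being described above.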

Maria Santacaterina

So statistically significant numbers do not relate to Tim or to Maria. If we take the same diet and we eat the same foods, we will respond differently. But if we are part of an aggregate data pool, shall we say, that has been deconstructed and reconstructed and gone through the washing machine, it's not going to work for us. I understand that, but I suppose you could argue that humans are quite poor reasoners, in a way.

Tim Scarfe

So I have all of this perceptual information coming in, and as part of building an understanding, I construct mental models. And the models are often wrong. They might be, or they might not be. They might not be. That's the thing.

You can see it both ways. They're precise because they're just about me. So if they're right, they're better than a lot of other models, because the other models aggregate over everyone. My model is just me, which means I might be right, but my model could still be wrong, because a model is, by definition, a reduction of something more complex. But it's not a model, and it depends who's judging your, let's say, response or perception or model.

Maria Santacaterina

I hate that word, because we don't really model reality at all. I believe differently, because we have eyes and ears and noses for a reason. Yes, we have different senses, and there's a reason for that, because all of the information that we absorb, the affordances of our environment, also influences what we perceive and what we understand and what we become aware of, or where our attention lies. And so when we process all of this, it's very difficult to break it all down.

It goes back to the idea of the tiny particles, the parts and the whole. The whole being responds in a certain manner as a result of all these complex, entangled processes, which happen instantaneously, without thinking; it's unconscious or subconscious or even conscious. There's a whole melange of things. If you imagine that one cell in the human body can mutate and change its function, you know, without even blinking an eye, that is remarkable. It is extraordinary.

Can we ever recreate that? I'm not sure. Maybe. But do we want to recreate that? Maybe not.

So the thing is, you can argue that humans are defective, and you can argue that we have erroneous perceptions of the world. That is fine. But I will argue that in your reality, in your dimension, in your experience, you are right, because it's what you see and it's what fits for you. When you have an aggregate model, a statistically significant aggregate model, which is what we're really talking about here with data fragments, you as an individual will not fit that curve, ever. Yeah, I mean, to push back on that a tiny bit, it's not necessarily a bad thing, but socially we are moving from the material world into the virtual world. Which is to say, for example, the construct of gender used to be a material thing, and now it's more a social thing, and it's quite divergent.

Tim Scarfe

And I'm not saying that's a bad thing, but people's individual experiences can diverge dramatically. And so do you think we should constrain that, or... No, of course not. Quite the opposite. I think everybody should be themselves, and, you know, we should be living in a society whereby everybody can fulfill their potential and be themselves, and not have to worry that they can't speak up or that they can't say what they think. With all of these constraints, we're just creating a structure, constraint upon constraint, and we've lost sight of what we're constraining, again.

Maria Santacaterina

You know, ultimately, you've got this kind of spiral that is taking us on the road to nowhere. I would like to suggest that we go on the road to somewhere. Let's identify a direction that we desire, that we wish to go in, so that consciously, and with informed perceptions, let's say, we can move forward. And we're not doing that.

What we're doing is just racing ahead, because we have to get to... where exactly, and for what reason exactly? We're not thinking anymore, we're not critical anymore. That's really something that is not good for us. Yeah, no, it's really interesting.

Tim Scarfe

I agree that embracing subjectivity is a really fascinating way to go, but it does lead to shared delusions or things which are unmoored or unobjective. And there's always the interplay between do we want things to be objective? Do we want things to be subjective? And if there should be some balance between the two, what is that balance? You don't have to choose, because as beings, we are subjective and objective all at once, because of how we respond to others and how we respond to the environment.

Maria Santacaterina

And so you don't actually have to choose. The balance in human beings happens automatically. If you look at it in biological terms. There's a thing called homeostasis, whereby the cells maintain their role and their position within the organism naturally. But it's not a static situation.

It is actually a changing situation, which people misunderstand. I remember learning about steady-state equilibrium, and I remember struggling with this idea. And then, of course, you take it to economics and you have the Nash equilibrium, which will never occur. However, we can go one step further and consider a homeodynamic pathway, which is where you have homeostasis, so, to maintain stability, the system needs to be able to survive and reproduce and all of that great stuff, but at the same time it moves forward, because it's dynamic.

And this is what I'm advocating for in my book: that we need to conceive of our organizations and our enterprises as dynamic beings, almost, because we're people. The enterprise is not the machinery; it's the people. And so if we change gear and we think of it in that way, then we can unlock a whole world of possibilities that we couldn't even imagine. But we can't get there until we recognize where we are and what the issues are, and deal with those; then, all of a sudden, this whole new realm of possibilities can present itself to us. That's what I mean about the human imagination and intuition and instinct and all of that stuff that we've talked about.
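(Another purely illustrative aside, not from the conversation: a toy Python sketch of the homeostasis-versus-homeodynamics distinction; the names, constants and dynamics are all invented for the illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

def regulate(state: float, set_point: float, gain: float = 0.3) -> float:
    """Negative feedback: nudge the state back toward the set point."""
    return state + gain * (set_point - state)

state_static  = 37.0   # homeostatic system: fixed set point
state_dynamic = 37.0   # homeodynamic system: the set point itself develops
set_point     = 37.0

for _ in range(200):
    noise = rng.normal(0.0, 0.2)
    # Homeostasis: regulation around a target that never moves.
    state_static = regulate(state_static + noise, 37.0)
    # Homeodynamics: the same regulation, but the target drifts forward,
    # so stability and development happen at the same time.
    set_point += 0.01
    state_dynamic = regulate(state_dynamic + noise, set_point)

print(f"homeostatic system stays near:    {state_static:.1f}")
print(f"homeodynamic system has moved to: {state_dynamic:.1f}")
```

Both systems damp out the noise, but only the second one goes anywhere: stability is maintained around a moving trajectory rather than a fixed point, which is one way to read the claim that an enterprise can be stable and still move forward.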

Tim Scarfe

Maria, it's been an absolute honour to have you on MLST. Where can people find out more information about you and your book? Well, the book is on Amazon worldwide. It's on Blinkist as well, if you want a short summary. And, I think, at all the best bookshops.

Maria Santacaterina

So please go and have a look and I'd love to have your feedback. Awesome. Thank you so much for being on the show. Thank you very much. Thank you.

Tim Scarfe

Thank you.

Maria Santacaterina

Thank you.