How AI could improve robotics, the cockroach's origins, and promethium spills its secrets

Primary Topic

This episode explores the intersection of artificial intelligence and robotics, detailing how AI could revolutionize robotics by enhancing adaptability and functionality.

Episode Summary

This episode of the Nature Podcast delves into the potential of artificial intelligence (AI) to transform robotics, making robots more adaptable and capable in daily tasks. Host Benjamin, along with experts Lizzie Gibney and Flora Graham, discusses the implications of integrating foundation models — large, general AI models used in online bots and image generators — into robotics. They explore the possibility of developing robots that can learn and adapt in real-world scenarios, far beyond the current capabilities of highly specialized industrial robots. The conversation also covers safety concerns and ethical considerations, particularly the risks associated with AI in physically interactive roles. Additionally, the episode touches on the origins of the German cockroach and groundbreaking research on the rare element promethium, adding layers of scientific intrigue.

Main Takeaways

  1. Integrating advanced AI could enable robots to handle complex everyday tasks, such as household chores, much as humans do.
  2. Robots of the future could learn from both digital and real-world data, increasing their adaptability.
  3. Safety and ethical considerations are paramount, especially as robots gain abilities to perform more autonomous tasks.
  4. The episode also highlights intriguing scientific discoveries such as the origin of the German cockroach and new insights into the rare element promethium.
  5. The interdisciplinary discussion bridges robotics, AI, genetics, and chemistry, showcasing the broad impacts of these technologies and studies.

Episode Chapters

1: Introduction to AI and Robotics

Overview of how AI could significantly improve robotics, focusing on adaptability and learning from diverse data sources. Lizzie Gibney: "By integrating foundation models, robots could become capable of much more complex and adaptable behaviors."

2: AI's Role in Enhancing Robot Functionality

Discussion on how AI models are similar to human learning processes and how they can be applied to robotics. Lizzie Gibney: "These AI models learn from vast amounts of data, which could allow robots to perform tasks in ever-changing environments."

3: Safety and Ethical Considerations

Exploration of the potential risks and ethical concerns related to more autonomous robots. Flora Graham: "As robots gain new capabilities, ensuring they do not cause harm is crucial, particularly in sensitive environments."

4: Scientific Discoveries

Insight into the origins of the German cockroach and advancements in understanding promethium. Benjamin: "These topics not only fascinate but also challenge our understanding of biology and chemistry."

Actionable Advice

  1. Stay informed about advancements in AI and robotics to understand their impact on society.
  2. Consider ethical implications of automated technologies in your field.
  3. Explore interdisciplinary approaches to problem-solving in technology and science.
  4. Keep up with safety standards and regulations related to AI and robotics.
  5. Engage in public discussions about the future of AI and robotics to shape their development responsibly.

About This Episode

Companies are melding artificial intelligence with robotics, in an effort to catapult both to new heights. They hope that incorporating the algorithms that power chatbots will give robots more common-sense knowledge and let them tackle a wide range of tasks. However, while impressive demonstrations of AI-powered robots exist, many researchers say there is a long road to actual deployment, and that safety and reliability need to be considered.

People

Lizzie Gibney, Flora Graham, Benjamin

Companies

Boston Dynamics, Google, Meta

Books

None

Guest Name(s):

None

Content Warnings:

None

Transcript

Nature
Deep dive into the world of science with Nature. Plus, from the vastness of distant star systems to the intricacies of infectious diseases due to climate change, we've got you covered. Enjoy access to over 55 cutting-edge journals, breaking scientific news, and over 1,000 new articles every month. Whether you're a seasoned researcher or just curious, Nature+ simplifies complex studies. Plus, it's all available right at your fingertips on nature.com. Nature: the key to unlocking the world's most significant scientific advances. Subscribe today at go.nature.com/plus.

Stripe
My business used to be weighed down by the complexities of in-person payments. Then Tap to Pay on iPhone and Stripe came along and changed everything. With Tap to Pay on iPhone and Stripe, I streamlined my payment process effortlessly. Now I can accept in-person contactless payments right from my iPhone, no extra hardware required. What's truly remarkable is how I can cater to all of my customers' payment preferences, whether they're using cards, Apple Pay, or other digital wallets. Tap to Pay on iPhone and Stripe ensure a smooth checkout experience every time. And it's not just me. Stripe helps businesses of all sizes, from local markets to global retailers, scale quickly and stay agile. To learn how Tap to Pay on iPhone and Stripe can help grow your revenue and reach, visit stripe.com tapiphone.

Benjamin
Hi everyone. Benjamin from the Nature Podcast here. Something a little bit different this week. We're going to do a deep dive into some of the stories that have appeared in the Nature Briefing, and, well, we've got an all-star cast to do it. Number one, Lizzie Gibney. Lizzie, how are you doing today?

Lizzie Gibney
Hello. Very well, thank you.

Benjamin
Excellent. And number two, Flora Graham. Flora, hi.

Flora Graham
Hi. Great to be here. Thanks very much.

Benjamin
Lizzie, why don't you go first, because, well, speaking of deep dives, it's something you've been doing for a little while. You've been looking at robotics and AI and you've written a feature about it.

Lizzie Gibney
That's right. So most kinds of robots have some form of AI, but not the kind of AI that has been sweeping the world in the last couple of years. So the basic idea behind this story is asking the question: if we put foundation models, which are the kind of very generalized models that are behind the chatbots that we have online, the image generators, these kinds of very general, very powerful, very large models, if we put those into robots, could we finally someday have the kind of robots that we've maybe all been dreaming of? You know, I think many of us, when we were younger, expected that by now there would be some kind of equivalent of a robot butler, something like that. And instead, you know, even the very, very best robots we have right now tend to be excellent, but at a very specific thing, like in a factory, at a very specific task. And they might even be just an actual robot arm rather than a whole body. Even, you know, the really good ones that we might have seen doing parkour or showing us some funky dance moves, from a company in the US called Boston Dynamics, even those, which are incredible robots, are to some extent pre-programmed. They, like, pick the moves that they do from a library within them. So this is looking at: what if we could have robots that actually learn, and therefore become much more adaptable than the robots that we have today?

Benjamin
So how do these types of AI work in this context, then?

Lizzie Gibney
So, well, this particular idea of foundation models, these kind of large language model type models: what they do is they learn from text data and image data. And then they also have all these many, many examples of robots in action that they learn from. That can be robots which are being teleoperated, so actually operated by a human, or they can be doing it like reinforcement learning, doing a task again and again and getting some kind of feedback on successes and failures. And the way they learn is very like a large language model: they essentially absorb all of that information and they start to make these statistical predictions as to what comes next. So the ultimate result is that you have a model that says: I can see this image around me, the robot's-eye view of the world, this is what I see; now I know what I want to do, what my plan is, and I kind of make a prediction as to how to achieve that. So it's very analogous to how the large language models come up with these extremely convincing, sometimes unnervingly good sentences. It works in very much the same way.
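
The loop Lizzie describes can be pictured, very roughly, as a policy that takes a camera image plus a natural-language instruction and predicts the next small action, in the same way a language model predicts the next token. Below is a minimal, hypothetical sketch of that idea in Python; every name in it (FoundationPolicy, FakeCamera, FakeArm, the Action fields) is an illustrative stand-in, not a real robotics API or any specific system mentioned in the episode.

```python
# Minimal, self-contained sketch of the "observe -> predict -> act" loop.
# All classes here are made-up stand-ins for illustration only.

import random
from dataclasses import dataclass

@dataclass
class Action:
    joint_deltas: tuple      # small change applied to each joint
    gripper_closed: bool     # whether to close the gripper this step

class FoundationPolicy:
    """Stand-in for a large model trained on web text/images plus robot
    demonstrations; it predicts the next action much as a language model
    predicts the next token."""
    def predict(self, image, instruction: str) -> Action:
        # A real model would condition on the image and the instruction;
        # random small motions keep this sketch runnable on its own.
        deltas = tuple(random.uniform(-0.01, 0.01) for _ in range(7))
        return Action(deltas, gripper_closed="pick" in instruction.lower())

class FakeCamera:
    def capture(self):
        return [[0] * 64 for _ in range(64)]   # dummy 64x64 "image"

class FakeArm:
    def apply(self, action: Action) -> None:
        print(f"joints moved by {action.joint_deltas}, gripper closed: {action.gripper_closed}")

def control_loop(policy, camera, arm, instruction: str, steps: int = 3) -> None:
    """Repeatedly observe, predict the next action and execute it."""
    for _ in range(steps):
        image = camera.capture()                    # what the robot "sees"
        action = policy.predict(image, instruction)
        arm.apply(action)                           # carry out the predicted step

if __name__ == "__main__":
    control_loop(FoundationPolicy(), FakeCamera(), FakeArm(),
                 "pick up the can and place it on the picture")
```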

Benjamin
And you said the aim was to make these robots more adaptable because doing stuff is hard, right? I mean, I opened the door here, balancing a cup of tea and my laptop and all the rest of it. I mean, that's easy for me because I've got a wealth of experience for how to do that and I can adapt, but that's maybe not quite so easy for a robot.

Lizzie Gibney
It's incredibly difficult. So you were holding a laptop and a cup of tea. I think almost every robot we have so far would probably completely fail at that challenge. Just opening a door on your own, you know, you've got to balance the forces. You've got to figure out the doorknob, what kind it is. What do you have to do to open it? When you pull it, you've got to make sure you don't topple backwards.

There are a huge number of different things that go on in order to make a robot like that work. So it's something that we've been trying to tackle for years and years. But even, you know, really good hands, like a human hand, don't exist, even just aiming for one kind of robotic replica of a human hand. So there are an awful lot of different parts that go into it, and it's very, very difficult. So what they did here was, as I said, use this, like, large language model strategy, and that enables you to bring in a kind of element of common sense to a robot. So if your model learns from those things as well as robotic actions, you can teach it with much less information, because it gets a kind of general common-sense knowledge, like the language models do. So, for instance, one of these robotic foundation models from Google, after they had trained it on both the Internet and on a bunch of videos of robots doing actions and information about the commands they were given, was able to do things like move a Coke can onto a picture of Taylor Swift. Now, it might never have seen a Coke can, and it had definitely never seen Taylor Swift, you know, these things in action on a robot, but it kind of had the whole of the Internet to draw from.

So up until now, you needed to train a robot for each different scenario, and scenarios that look very similar to us might actually be very, very different to a robot, because it doesn't know what's important and what isn't. So, you know, it might have been completely thrown by the fact that it had never seen this image before. Now it's saying: okay, well, I know who Taylor Swift is, that's all good. I can even switch out, if you ask me, Pepsi rather than Coke. I'm very happy to do that, because I know the difference between those things as well. So that's kind of radically reducing the number of actual experiences that the robot needs to have seen, observed and learned from in order to function very well in everyday life.

Flora Graham
I mean, you had me at robot butler, Lizzie, I've got to say. Often AI is being suggested to replace a lot of things that I'm quite happy to do, like writing and making music. But I think that this is the exciting potential: for a robot to do all the things I don't want to do, like the laundry.

Lizzie Gibney
That's right, because those things are actually really hard for a robot to do, and that's partly why we haven't got to that point yet: they are really, really difficult.

Benjamin
I guess the dexterity aspect is a key one, right? Like, you can train the robot on what a pop star looks like or what a can of fizzy drink looks like. But in terms of actual dexterous movements, I imagine there's not that much data to train these machines on.

Lizzie Gibney
There is very little. So in the past, we've had issues like the fact that each robot is completely distinct. The way that they work, their kind of embodiment, you know, what they look like, that's completely different. So if you're trying to learn from robotic data, it hasn't always been possible to get the volume of data that we need.

So, to try and build this internet of robotic data that doesn't exist right now, there are a few different things going on at the moment. Some groups are just pooling all of their data. So there are loads of robots out there in the world, and these groups are getting together, putting on all of these demos that the model can learn from, and bringing all of that data together. It's got their robots in loads of different environments, and that makes the robot that's learned using that model function much better in new environments, but it's still not enough. So they're putting together different kinds of robot bodies. They're learning from those different robots that are working in different places and that even have different body parts themselves, and they're trying to build models that can absorb all of that data as well.

They can also learn from human videos. Some of them, if they're humanoid robots, can learn an awful lot just from watching videos of humans doing things, which, thankfully, we do have a lot of. And the final way they're doing it is simulation. So if you can build a world that is very physically similar to the real world, you can put a simulated version of your robot in that world, and it will interact in a way that you'd expect in the real world as well. So it gets valuable training data from that. This is happening a lot at Meta, Facebook's parent company. They're working in this area, and they have built a very rich simulator environment, and they're training a robotic dog called Spot to be able to put different items that it finds back where it thinks they should be in a house. And again, it sounds quite simple, but they can do that over and over again with lots of different kinds of environments, lots of different objects. Because they do it in simulation, they can do it much, much faster, no robot gets worn out by doing it, and then they apply it to the real world. And they've had quite a lot of success doing that.
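
A toy picture of the simulation approach described above: run a cheap simulated environment many times, record what the robot saw, what it did and what happened, and aggregate that into a training dataset before a real robot is ever involved. The environment, the scripted policy and all names below are invented for illustration under that assumption; they are not Meta's actual simulator or data format.

```python
# Illustrative sketch of generating robot training data in simulation.
# ToySimEnv is a made-up 1-D "tidying" world: the robot must nudge an
# object back to position 0. The point is only that thousands of episodes
# can be collected quickly and cheaply, with no hardware wearing out.

import random

class ToySimEnv:
    def reset(self) -> float:
        self.object_pos = random.uniform(-5.0, 5.0)
        return self.object_pos

    def step(self, move: float):
        self.object_pos += move
        done = abs(self.object_pos) < 0.1          # object is "put back"
        return self.object_pos, done

def collect_episodes(env: ToySimEnv, n_episodes: int = 1000, max_steps: int = 50):
    """Roll out a simple scripted policy and record training tuples."""
    dataset = []
    for _ in range(n_episodes):
        obs = env.reset()
        for _ in range(max_steps):
            action = -0.5 * obs                    # nudge toward the target
            next_obs, done = env.step(action)
            dataset.append((obs, action, next_obs, done))
            obs = next_obs
            if done:
                break
    return dataset

if __name__ == "__main__":
    data = collect_episodes(ToySimEnv())
    print(f"collected {len(data)} simulated transitions")
```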

Benjamin
I mean, it's got to be tough, though, right? As someone who's played a lot of video games, the physics reproductions are quite good, but sometimes they're not quite there and, you know, stuff will fly off into the sky when it shouldn't. So unless the physics of this virtual world is one-to-one with the real world, mistakes could be made, I suppose.

Lizzie Gibney
They could, and we're going to always need real data. And I think that gets to one of the potential problems with this method, especially when it comes to quite finicky tasks, you know, things that are very, very dexterous.

I got the sense from talking to sources for my story that the people who are working in AI and applying these kinds of models that have been so successful up to now to robotics, they just think, oh, that's a detail, we'll kind of get to that down the line. And then there are the people who've worked in robotics their whole career who say, actually, that stuff is really, really important. You can't just use this video data or even simulation data, because we need to be inputting, you know, what things feel like, how do people react, what are the forces on all of the different parts of the robot when something happens.

That kind of data is very, very hard to get, very expensive and time-consuming to get. And it might be that this whole approach gets limited by the fact that we actually don't have that data. And maybe we get robots that can, a bit like language models, really seem to understand things, but really, are they able to function in the way that we need them to without having these failures in the real world?

Flora Graham
Interesting. It makes me think about the safety implications, because that's always something that we talk a lot about with AI. But in this case, a use case that's often mentioned is rescuing people from disaster zones, places where humans wouldn't necessarily want to go because it's too dangerous. And in that case, you might have robots interacting with injured people or potentially making things worse than they were before. So I imagine that the possibility of failure, if you're talking about a robot that's designed to, let's say, assist an older person in the home or someone who needs assistance with mobility, is something that they probably do keep in mind.

Lizzie Gibney
A hundred per cent. So, as you say, we've talked about the issues around safety with language models many times, and they can be racist, they can be sexist, they can make things up, they can get things completely wrong.

You know, we're now effectively giving that a physical body in the real world. So as harmful as all those things are, this is potentially a different scale of harm that they could cause. And I think the people who I spoke to for the story are well aware of that. They're obviously still working in this area, so they're optimistic that we can overcome those problems. A lot of the work that's going on in AI safety at the moment can be transferred straight over to this robotics field. They are talking about applying things like Isaac Asimov's laws of robotics, which, you know, have been around for a very long time: ways of overriding a robot so that it doesn't cause harm. For now, they're doing things like just telling it: do not interact with anything living, do not even try. And, you know, they can make those kinds of higher-level commands within the robot override anything that it's learned. But basically, for that reason, they're not going to be set loose anywhere for a very, very long time. Right now, these kinds of models are mostly being applied in places like factories, where it's all extremely strictly controlled and there are a lot of safety measures already.
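
One way to picture the kind of high-level override Lizzie mentions is a guard layer that checks every proposed action against hard rules before anything reaches the motors, and that always wins over whatever the learned model wants to do. The rule itself ("do not interact with anything living") comes from the discussion above, but the function and field names in this sketch are assumptions for illustration, not any deployed safety system.

```python
# Hypothetical sketch of a hard safety override between a learned policy
# and the robot's actuators: the rule check always wins over the model.

def violates_rules(action: dict, scene: dict) -> bool:
    # Assumed rule: never make contact with anything living.
    return scene.get("living_thing_nearby", False) and action.get("makes_contact", False)

def safe_execute(action: dict, scene: dict) -> str:
    if violates_rules(action, scene):
        return f"vetoed by safety layer: {action['name']}"
    return f"executing: {action['name']}"

if __name__ == "__main__":
    scene = {"living_thing_nearby": True}
    print(safe_execute({"name": "grasp object", "makes_contact": True}, scene))   # vetoed
    print(safe_execute({"name": "retract arm", "makes_contact": False}, scene))   # allowed
```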

Benjamin
And the other side of it as well: I mean, I know with so many robotics efforts, the robot didn't work 99 times, but the hundredth time it did. These things aren't necessarily ready for prime time in that respect yet.

Lizzie Gibney
Totally. So the people who I spoke to for my story, they are still doing things like I told you about, moving a can of Coke from one place to another. The fact that it can do that reliably in lots of different environments is a pretty big deal. And there are some that are about to roll out in factories. But I think there's also a lot of kind of showboating in this area. As soon as you start talking about humanoid robots, people get very excited. People are already very excited about chatbots and large language models, and what I feel is happening is people anthropomorphizing these models and seeing understanding, where really what they're doing is coming up with statistical associations. So we have some demonstrations which look kind of incredible. I'm talking in particular about Figure, which comes from a collaboration between a robotics company and OpenAI. They've got this robot, and they ask it: give me something to eat. And it just picks an apple up and hands it over, which actually takes a lot of thought: something to eat, that could be an apple, that's the only thing I have here to eat, I'm going to hand it over. And then there's the dexterity of that. But we just have a video that shows us that, put out by the company online. We don't know how reliable it is. We don't know if there was a different background, or the table was a different color, or it was a different piece of fruit, or the man who asked the question moved his hand slightly, whether any of those things would have caused it to fail. We just don't know.

And when you talk to roboticists, they say, oh, yeah, like most things fail most of the time, and that's why they are very skeptical.

Flora Graham
Well, one of the topics we've written about at Nature many times sounds like science fiction, maybe even possibly a bit of paranoia, but I think there are people genuinely worried about it: autonomous or AI-controlled weapons of war. I was just curious whether the researchers working on this advanced level of robotics that you spoke to had any thoughts about that. Do they consider their work totally separate from those concerns? Or, I mean, of course, anything can be used for good or ill, not just robots. But it does seem like robots have a kind of special place in our fears sometimes.

Lizzie Gibney
I think because this is quite early stage, and because people working in this area are so optimistic, I don't think it's directly on their radar, but I think if the kind of success that they foresee happening does happen, I think it's rapidly going to have to be on their radar. Just like all of these issues about AI safety, it's something that they're going to have to confront pretty quickly.

Benjamin
Well, one more on this, then. You've talked a lot about how AI could improve robotics, but I wonder about the other way around. Could robotics improve AI in some way?

Lizzie Gibney
Totally. This was something that I found fascinating when writing the story, but really didn't have much space to fit into the piece in the end. But there's this idea out there that AI at the moment is going to be fundamentally limited unless it is able to learn and absorb and interact with the real world. You know, at the moment, it all takes place in the digital space, and that is not how humans grow up and learn. And there's this idea, they call it embodied AI, which is to say that maybe AI is going to have to have a physical form to get to that kind of human-like intelligence that we have.

So already we could see that just spatial reasoning could be radically improved within an AI if it has an actual physical form in the real world. But it's possible that, going beyond that, this will become something that actually takes us from having these AIs that seem to understand and seem to be able to reason, but probably aren't quite doing that, to something that has a kind of general, genuine intelligence. But, yeah, we'll have to see. It's a theory lots of people are pursuing.

Benjamin
Well, absolutely fascinating. Thank you, Lizzie. And I have a feeling that AI and robots, and robot AI, are something we might touch on again on the Nature Podcast in the future. And Lizzie, your feature is out now, and we'll put a link to it in the show notes. But let's keep going. We've got a couple of quick stories, and I'll go next. I've got a story about an animal that people generally aren't fans of, and that's the cockroach. It's about the origin of the cockroach, how it's kind of conquered the world. It's a story that I read about in Nature.

Lizzie Gibney
Gosh, cockroaches. We're really going to be putting a lot of people off, aren't we, today?

Benjamin
Yeah, we are, right? And in particular, I'm talking about the German cockroach, as it's commonly known. But it turns out that it actually didn't originate in Germany. And this is based on a study published in PNAS, which suggested that the creature actually originated in South Asia and then spread globally because of its affinity for human habitats.

Flora Graham
And this is the cockroach that's kind of the ubiquitous fella that you might find all over the world, is that right? So it might be called the German cockroach, but let's not hold that against anybody, because this is the one that we've kind of asked: how did this bug conquer the Earth? Is that right?

Benjamin
That's right. Its Latin name is Blattella germanica, and it was first described in Europe in 1767 by Carl Linnaeus. And so it's got this name, and I think everyone made assumptions about its origins. But researchers really wanted to know, evolutionarily, what the story was: where this cockroach came from. And so a team of researchers analyzed the genomes of 281 German cockroaches collected from 17 countries, including Australia, Ethiopia, Indonesia, Ukraine and the US. They looked at the similarities and differences between the genomes to calculate when and where the different populations might have established. And it turns out the closest living relative of the German cockroach is probably the Asian cockroach, which has got a slightly different Latin name, Blattella asahinai, and that's still found in South Asia. The two species probably split off, they reckon, maybe 2,100 years ago. Then, fast forward a little bit of time, about 1,200 years ago, the German cockroach began its travels when it hitched a ride west to the Middle East.

Lizzie Gibney
So is its history linked with human history and travel, then? That was obviously a very rich time for trade within the Middle East; did the cockroach kind of go forth and conquer from there?

Benjamin
Well, you're absolutely right, Liz. It seemed like it hitchhiked on kind of commercial and military traffic. But also, it didn't just spread there.

Lizzie Gibney
Right.

Benjamin
It began to spread east from South Asia around 390 years ago, so quite recently, and that was with the rise of European colonization and the emergence of international trading companies like the Dutch and British East India companies. And then around a century later, it hitchhiked to Europe, and from there spread around the world. And, you know, there it is, it's ubiquitous now, and that's where it got its name.

Flora Graham
One more of the treats that we brought to the world, then, on our trading ships and through colonialism.

Benjamin
And one of the researchers quoted in this article said that it's kind of interesting to be able to combine genetic data with historical events to work out how this insect dispersed itself around the world. It's become abundant. But without using these modern genetic tools, there was no way of knowing that it's not actually a native european species.

Lizzie Gibney
And is there a reason why this cockroach is so successful?

Benjamin
Well, this one is particularly good, I think it's fair to say. These cockroaches readily adapt to modified environments, particularly, you know, human-occupied niches. They have short reproduction cycles, and they're incredibly opportunistic. And so hitchhiking to a new place with people and access to food... one of the researchers is quoted here saying that's a perfect combination of ingredients for making a species very successful in a human-shaped world.

Flora Graham
We really had to think about this story when we put it in the Nature Briefing. It's so fascinating, and it did turn out to be one of our most popular stories of the week. But it does involve a big picture of a cockroach, and we really debated whether having an email drop into your inbox with a big picture of a cockroach at the top would be too unappealing for most people. Now, luckily, in the story there are a couple of pictures, and one of them is a very lovely, glossy brown cockroach on a green leaf, you know, living its best life in its natural habitat. Not in my kitchen.

And we thought, well, this guy or gal looks like an insect we'd be more than happy to see in the woods. So we decided to go with that one.

Benjamin
I am happy to say I've had limited experiences with cockroaches. But anyway, let's move on. We've got one more story. Flora, you're up.

Flora Graham
Oh, this is a story that I definitely would not have let us miss, because this is really basic science at its best and most exciting. This is one of those rewrite-the-textbook moments. This is all about an element called promethium. Fantastic name, named after Prometheus, of course. It's one of the lanthanide family, a row of 15 metals that kind of sits marooned down in the southern territories of the periodic table. It was discovered in 1945, so it's fairly new to us, and we think that there's something like less than a kilogram of promethium occurring naturally on Earth, so that just gives you some idea of how incredibly rare it is. And because it's so rare, it's so mysterious. This is element number 61, in case you want to look it up on your own periodic table at home. Now, this is the first time that chemists have put promethium into what is called a chemical complex. Basically, in a nutshell, this is a compound in which it's bonded with some other atoms, and it's in a solution with water. And this kind of completes the set of these lanthanide elements. Previously, every time we've tried to analyze this family of elements as a group, we've always had to say, well, except promethium, because we just don't know about it. Obviously we can investigate it in other ways, but having the ability to see how it interacts with other molecules offers key information about how this element actually works, how it reacts.

Benjamin
If there's so little of it and it is so difficult to work with, how did they go about achieving what they've done here for the first time?

Flora Graham
As you say, what they did is they took waste generated during the production of plutonium, and they harvested a radioactive isotope of promethium called promethium-147. And then they worked their magic with other molecules that were able to latch onto the promethium ion, which created these complexes with promethium-oxygen bonds.

Lizzie Gibney
And what ways did they then prod it and poke it to try and understand promethium better? If this is the first time that they've actually had it in this chemical complex.

Flora Graham
Yeah. So they used X-ray spectroscopy, and they also ran simulations, in order to see how the oxygen-containing molecules connected to the promethium. And it's going to lead to a better understanding of promethium in general. This is an element that is radioactive; it can provide power in certain situations, like in pacemakers. And potentially this research could lead to better control of the element, so you could do things like separate it from similar elements in nuclear waste, for example, in order to harness it more easily than in its naturally occurring form.

Benjamin
I mean, that's a cool story, right? Because, you know, we've all learned about the periodic table and we can picture it in our minds, I'm sure. But there is still more to discover about the elements inside it. And what are researchers saying about this achievement? Because, as you say, it's taken quite a long time to get to this stage.

Flora Graham
Absolutely. They're saying that this was really tricky, difficult work. Polly Arnold, who's a chemist at the Lawrence Berkeley National Laboratory in California, called it a tour de force. So that's the level of impact it's having among chemists who are working at this level.

Benjamin
Well, that's a good, upbeat note to end this week's podcast on. Listeners, for links to all of those stories, and to sign up for the Nature Briefing to have even more of them delivered directly to your inbox for free, look out for links in the show notes. And all that's left to say is: Lizzie and Flora, thank you so much for joining me.

Lizzie Gibney
Thank you, Ben. Gonna try not to have nightmares about the cockroaches.

Flora Graham
Thank you, Ben. I'll be looking out for some promethium.

Nature
The Nature podcast is supported by NaturePlus, a flexible monthly subscription that grants immediate online access to the science journal Nature and over 50 other journals from the Nature portfolio.

More information at go.nature.com/plus.

Yahoo Finance
When it comes to your finances, you think you've done it all. You've saved, you've researched, and you've invested all that you can. Now it's time to take those investments to the next level by using the brand behind every great investor: Yahoo Finance. As America's number one finance destination, Yahoo Finance has everything you need, whether you're a seasoned trader or just dipping your toes into the market. Join the millions of investors who trust Yahoo Finance to guide them on their financial journey. For comprehensive financial news and analysis, visit yahoofinance.com, the number one financial destination: yahoofinance.com.