How this professor teaches AI and thinks about human creativity

Primary Topic

This episode explores how artificial intelligence (AI) enhances human creativity and its applications in various domains, especially education.

Episode Summary

In this engaging episode of GeekWire, host Todd Bishop speaks with Leonard Boussioux, an assistant professor specializing in AI at the University of Washington. Boussioux discusses the integration of AI in creative processes, teaching methodologies, and its potential to address global challenges like healthcare and sustainability. Emphasizing AI's role as a tool rather than a replacement for human creativity, he shares insights from his academic course on Generative AI and its impact on students' problem-solving skills. The episode delves into ethical considerations, the relationship between AI and traditional skills, and the future of AI in enhancing human capabilities.

Main Takeaways

  1. AI can significantly enhance human creativity by providing tools that open up new possibilities.
  2. Ethical considerations are crucial when integrating AI into creative processes, especially regarding the ownership and originality of AI-generated content.
  3. AI has the potential to democratize skills and empower individuals, making sophisticated tasks accessible to a broader audience.
  4. Education systems can leverage AI to revolutionize teaching methods and curricula, focusing on collaboration between human intelligence and artificial tools.
  5. The conversation around AI should include discussions on sustainability and the ethical use of technology.

Episode Chapters

1: Introduction to AI and Creativity

Leonard Boussioux discusses how AI can augment human creativity and the unique opportunities it presents in various fields. Leonard Boussioux: "AI bridges gaps and makes everything more multidisciplinary."

2: AI in Education

Insights into Boussioux's AI course at the University of Washington, highlighting its impact on students' approach to problem-solving and creativity. Leonard Boussioux: "I teach my students to see AI not just as a tool but as a partner in the creative process."

3: Ethical Considerations

Discussion on the ethical implications of using AI in creative works, especially in terms of authenticity and intellectual property. Leonard Boussioux: "It is crucial to acknowledge and respect the original creators of artworks used in training AI models."

4: Future of AI

Speculations and hopes for AI's role in future societal advancements, focusing on its integration into everyday human activities. Leonard Boussioux: "AI is about expanding our capabilities, not replacing us."

Actionable Advice

  1. Explore AI tools to enhance personal and professional projects.
  2. Stay informed about the ethical implications of AI use.
  3. Participate in courses or workshops to understand AI's potential.
  4. Use AI to foster creativity in problem-solving.
  5. Discuss the impact of AI with peers to demystify its capabilities and limitations.

About This Episode

Our guest this week on the GeekWire Podcast is Léonard Boussioux, an assistant professor in the Department of Information Systems and Operations Management at the University of Washington's Foster School of Business, and adjunct assistant professor at the UW Allen School of Computer Science and Engineering.

Boussioux received his doctorate in operations research from the Massachusetts Institute of Technology. His research combines areas including machine learning and AI with a focus on healthcare and sustainability. Last year he launched a class called "Generative AI in the Era of Cloud Computing" at the Foster School.

People

Leonard Boussioux

Content Warnings:

None

Transcript

Leonard Boussioux
You need to use your human intelligence. No AI will take this away from you. You will need to be creative. You will need to be an artist to figure out those little details that nobody else will see. And this is an opportunity to leverage the technology to get you there faster or differently, or to get the right support you need.

But you still need to use your brain. Ultimately, you still need to decide. You need to just realize how beautiful this is to be a human. And then you will see that you can do so many more things thanks to this.

Todd Bishop
Welcome to GeekWire. I'm GeekWire co-founder Todd Bishop. We are coming to you from Seattle, where we get to report each day on what's happening around us in business, technology and innovation. What happens here matters everywhere. And every week on this show we get to talk about some of the most interesting stories and trends in the news. I'm joined this week by Léonard Boussioux.

He is an assistant professor who specializes in areas including machine learning and artificial intelligence at the Department of Information Systems and Operations Management at the University of Washington's Foster School of Business. He's also an adjunct assistant professor at the Allen School of Computer Science and Engineering. He received his doctorate in operations research from MIT, and his research in machine learning and artificial intelligence focuses on areas including healthcare and sustainability. Last year, he launched a class at the Foster School called Generative AI in the Era of Cloud Computing. Leo, if I could call you that.

Leonard Boussioux
Absolutely. If an alien landed here on the shores of the Lake Washington Ship Canal where we are, and wanted to know what was going on with AI and the potential of AI, in reading your work and in looking at your videos, you are the person I would point him to. I mean, this is obviously your bread and butter. You love AI. I love it.

I play with AI every day, all the time. I learn how to use it for so many different purposes, for work, but also personal life. But I'm also passionate about showing my friends, my community, everyone how they can use it to improve their life and be even more creative. Well, we've got a lot to talk about, but let me just start with big-picture questions. When you look at the world, in particular areas like healthcare and sustainability, and I know in some of your work you focus on the UN Sustainable Development Goals, what excites you about the potential of AI to make an impact on the world?

What I like a lot about AI is the fact that it bridges gaps. I like that it's making everything more multidisciplinary. It used to be that everything was very siloed: you study chemistry, you study physics, pure maths. I believe that AI can help people come together and work together. So that's something that I love from AI as the number one step.

Second, I also love that AI is able to upskill people. I think that AI, because it has some good knowledge in many different topics, can really help people do things that were deemed unfeasible before. For instance, my business school students typically rarely code, if at all. Now I taught them, in just 30 minutes, how they can build a website from scratch in coding languages like HTML and CSS. Many of them did not even know those two acronyms.
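He doesn't name the specific tools involved, so purely as an illustrative sketch of the workflow he describes, the snippet below asks a chat model to draft a complete single-file HTML and CSS page and saves it to disk. It assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt and file name are placeholders, not what the class actually used.

    # Illustrative sketch only: ask a chat model for a self-contained web page,
    # then save the markup so it can be opened in a browser.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name; any capable chat model works
        messages=[
            {"role": "system", "content": "You write complete, self-contained HTML pages with inline CSS."},
            {"role": "user", "content": "Create a one-page portfolio site for a business student, "
                                        "with a header, an about section, and a simple color scheme."},
        ],
    )

    html = response.choices[0].message.content  # may need stripping of surrounding prose or code fences

    with open("portfolio.html", "w", encoding="utf-8") as f:
        f.write(html)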

And then they all managed to build a beautiful website thanks to the power of AI. So this is really the beauty of helping people to be more creative, but also for creative problem solving.

Todd Bishop
It really struck me in looking at your website, the combination of human creativity and AI-augmented creativity that's reflected in your work. You're an avid wildlife photographer, but next to your pictures of birds, you also show some of your stunning AI-generated imagery. How do you reconcile those two forms of creativity? Personally, I mean, just the juxtaposition really struck me.

Leonard Boussioux
So what I believe in is the fact that us humans have something beautiful. It's our capability to connect with each other, to build communities, to create. And I believe the best way to create is to tap into our artistic selves. And I believe also our society is not always emphasizing how important this is to be creative or that we are all artists. Very often people decide that no, being an artist is not for me. I don't have the time, I don't have the skills. I believe that no, every one of us can be artists.

And I also believe that AI is an opportunity to help us becoming more artistic in our daily lives. Here is an example. Some people might not like the fact that AI can draw pictures or generate new images, but on my side, on an artistic point of view, I like that if I don't have that much time, but I do have a lot of imagination, I can take a few minutes to try to generate a picture that resonates with me. I know that it's AI generated, but I'm able to show something that has some meaning for me. And I show this to my students, and a lot of them feel empowered because I help them move from being guilty of using a technology towards feeling empowered, that they can now recover this artistic energy.

So this is why I like the fact that although AI is something around computers, algorithms, math, it's also something that helps us being more human, because it gives us the opportunity to create in different ways. So what would you say to somebody who looks at AI generated artwork or photography and says, that undermines human creativity, in that it replicates what an accomplished human could do? So I, first of all, I want to acknowledge this point of view, and I respect this point of view. I share a different perspective because I want to take a more positive mindset in the sense that I recognize that those AI generated pictures are using some pre existing artwork from humans, and sometimes even by sort of stealing those artworks, because many artists did not agree that their artwork would be included in the training data. So I believe that this is a big challenge to be solved.

However, now, putting this aside, and let's say, for instance, we have a database that is respectful to the artists, I believe we have now a new tool, AI. I see it as a tool, not as something to replace us, but rather to augment us. I see, for instance, that a lot of people do photography. They love taking pictures of everything. It used to be we did not have any cameras.

And cameras can now fix what you see with your eyes. Now, I believe that AI can help you fix what your imagination is seeing and you can show it to others. So I build stories thanks to those AI generated models. I also inspire more people to becoming artists. I also see people who suddenly generate their first picture ever, and they feel amazed that technology can do this.

But then what I see as well is that they don't necessarily feel the ownership of the picture. That's it. And then this is this ownership part that is very important. Yes. And so how do you reconcile that?

Because someone wants to look at their creation and take pride in it, pride in the ownership of it. And then I absolutely agree with that point. And, you know, my brother is a cinema director, and we often argue about that. And then I demonstrate to him how I recover the ownership of what I create. Here is an example.

Last year, in 2023, I was at MIT, and I had told my students in January, you know what? 2023, I want to build my AI digital career. And I told them, this will be my 2023 goal. Two months later, a student of mine remembered that and told me, I'm organizing the largest generative AI conference ever. It will be at MIT.

Would you want to design the poster of this conference? And I said, yes, absolutely. I've been dreaming of an opportunity like that. Sounds amazing. And then I thought extremely hard about, how can I generate an AI picture that would represent the MIT community?

The whole energy about this generative AI, the excitement. And I started working extremely hard on this. It turns out that it was extremely challenging. And I tried a few sentences. For instance, futuristic neural network in the shape of MIT, or a crowd of people cheering about AI.

None of those were working. It was just not good, not taking the community into account. And then I started feeling very challenged. So what I did is, okay, I'm going to use my Photoshop skills to take an AI picture and merge it with a real picture.

It was still not as good. And so I kept thinking, and finally I realized, wow, how do I get this MIT building into my AI picture in a very reliable way? I actually discovered a new AI technique, which is transferring an initial picture that is taken with a camera into a style, thanks to the AI. So I took a picture of MIT, transformed it into a futuristic, cyberpunk version of MIT, and then suddenly I had something that really was fancy and resonated with me.

And I put so much effort into finding the good technique, into finding the good prompt. A prompt is how you talk to the AI. And I believe that writing a good prompt is a skill, it is an art. And to find a good prompt, it goes with the friction of using those models for so long. So this was a big success.
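He doesn't say which tool he used for that image-to-image step, so here is one hedged way the same idea can be reproduced with the open-source diffusers library: start from a real photograph and restyle it with a text prompt, where the strength parameter controls how far the result drifts from the original. The model id, file names and prompt are illustrative assumptions, not his actual workflow.

    # Sketch of the image-to-image idea described above, using the open-source
    # diffusers library (not necessarily the tool he used). Requires a GPU.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init_image = Image.open("campus_photo.jpg").convert("RGB")  # hypothetical input photo

    result = pipe(
        prompt="futuristic cyberpunk rendering of a university campus at night, neon lights",
        image=init_image,
        strength=0.6,        # lower keeps more of the original photo, higher restyles more
        guidance_scale=7.5,  # how strongly the prompt steers the output
    ).images[0]

    result.save("campus_cyberpunk.png")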

But I spent hours doing it, and now I take this story and I teach it to my students. And after they see this, they feel suddenly that, wow, I do realize that you can feel the ownership once you have all those steps, once you've felt the model for yourself, instead of just putting some random words; you decide, you try to direct, to drive the AI. And this is where you recover the ownership: by mixing different models, being creative on how to use them, figuring out those little tweaks, those little mistakes they can make, trying to solve them, and this becomes like a tool. And there you go: wow.

I am now the owner of this painting. You're replacing the paintbrush of old with the words of today, plus your creative ingenuity and your knowledge of the existing tools that are out there. It's just that your toolbox, your artist's toolbox, is much different and much bigger. Absolutely. Okay.

I'm glad you're here today, and I do want to talk about education initiatives at the Foster School and the University of Washington, where you are. But we had an ethical dilemma come up related to creative work at GeekWire literally this morning that I want to run by you. Wow. Amazing.

So we're going to do that when we come back. You're listening to GeekWire, and we will be right back.

Announcer
I wanted a career in IT, but I didn't know where to start. WGU makes it simple. Their accredited online degree programs cover all kinds of IT specialties, and they have valuable industry certification built in at no extra cost. The payoff? Having those certs back up my degree makes me look even better to future employers. A nonprofit university that includes top industry certs in their programs? I choose WGU. Learn more at wgu.edu. IT certs included.

Todd Bishop
Welcome back. My guest this week is Léonard Boussioux, assistant professor at the University of Washington's Foster School of Business, who researches and teaches about AI. Here is a picture of Sam Altman, the OpenAI CEO. Ironically, it's as if I created this as an anecdote. Okay.

I took this picture a couple of weeks ago at Microsoft Build with my Lumix GH5 camera. Now, I am a self-taught photographer. When I worked at the newspaper, we would have a real photographer here at this event taking photos, and they would be super crisp. But as you can see, and you can attest to the listeners here, what happens when I scroll in on Sam Altman's eyes? How was my focus as a wildlife photographer?

Would you be satisfied? But it's sharp enough overall. But the more you dive, the more you see it's pixelated. It's pixelated. Okay, so there is a new tool that I found where you can have it processed and sharpened by AI.

I use this a lot. I used Topaz tools. How would you describe that in terms of the crispness of his eyes? Now it looks super sharp. It looks like you have a prime lens with very good lighting.

The whole photo is fixed. Right. Would you say that the photo is substantially different in substance when you look at the two? No, I would say it's the same picture enhanced with a tool. Enhanced with a tool.

Okay, so here's the ethical dilemma that we have as journalists. If I, for example, went into just a standard editing program and adjusted the contrast, adjusted the exposure of a photograph, I probably wouldn't disclose that in a caption or something like that. Here, this is another tool, right? It's probably, I would say, a 20% improvement in the photo. At this resolution, at 630 pixels wide on the site, it's maybe more of a 5% difference.

It's just like. It's maybe even more subconscious where you look at it and go, ah, that's a nice photo. And then maybe if you looked at the original, it would be like he just, he missed it just slightly. If you were really paying attention, that would be your conclusion. So here's my question.

Does this need to be disclosed? So I like this question a lot. And then, here is also to even broaden the perspective: I start seeing advertisements that are AI-generated pictures, and it's not disclosed. Because my eye is very well trained, I would say, oh, this is totally AI generated.

Most people won't realize it. And then when it comes to AI, because right now there is very poor legislation, we are still trying to figure out the best way. I think it's best to be on the cautious side. And I would appreciate seeing at least this has been enhanced with AI, or this has been generated with AI, just because we don't want to foster a climate where we don't trust any picture anymore, where, for instance, you've seen those memes about the pope or Elon Musk wearing huge parkas, and it's so easy to generate pictures like this, and they look real, but they're not. And then now we are about to have an election in the US, and it's always election time anywhere in the world.

This is so easy to generate fake content. And this is why I appreciate when people disclose the use of a given technology. In your case, you haven't transformed the original picture. So this is a gray area where you could say the image was actually photographed by a human, but you could disclose that it was enhanced with a given tool, for instance, enhanced with a sharpening tool. Especially for competitions.

For instance, if you participate in a competition, or in science, for instance, if you're writing a paper, if you are a reviewer, if you're a student and you hand in an assignment, I tell my students to disclose their use of generative AI. I don't forbid it at all. I encourage it, but I also encourage them to explain how they used it, such that they use it in a responsible manner and such that they also train themselves to recognize how everyone else around them is using generative AI. For photography, it depends on the use case. Again, here, if it's just for media sharing, I think by default people would always use a bit of improvement for their image.

Like, for instance, Lightroom is a tool I use all the time. I would not feel that you do have to disclose it, but if you want to be on the cautious, precise side, you could disclose: I enhanced the picture with it. It's interesting because I was wrestling with it and I was trying to come up with language. And by the way, in full disclosure, I may go back and change the caption, but I couldn't come up with the right language that made it clear, like, hey, this was my photo.

That I took and I made it that much better. And all the language I was coming up with kind of implied that it was an AI-generated photo, or more on that end of the spectrum. And so I need to come up with some good language that indicates that it was enhanced. Even that makes it sound like it was partially generated by AI, which I guess technically it is. So there's also the distinction that there are so many different AI models.

There are some AI models that transform, like, for instance, generative AI. It creates new content, which is why it's called generative. Right. You also have sort of older kinds of AI that were out there for decades. Generative AI is actually also an old topic, but it only came out recently in the news.

You have models that are meant to classify, models that are meant to segment, models that are meant to denoise, and then people have denoised images for decades, and then it was not deemed AI, it was just algorithms. So there are now some AI algorithms to denoise. There is also a lot of marketing around here, and then a lot of AI is, after all, math. AI is based on many different architectures. And one that is super famous is a neural network.

A neural network is basically a bunch of numbers working together in many different layers, and it's just multiplications, additions. And then, after all, this is a new, different kind of algorithm. But this is another way of denoising. So one may say denoising using Lightroom, denoising using AI, or denoising using another kind of algorithm is just a kind of denoising. So here I'm not too worried about the denoising part.
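To make that description literal, here is a tiny sketch of a two-layer neural network's forward pass in plain NumPy: the "bunch of numbers" are the weight matrices, and the computation really is just multiplications, additions and a simple nonlinearity. The sizes and random weights are arbitrary placeholders, not a trained model.

    # A literal illustration of "layers of numbers, multiplications, additions."
    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.standard_normal(4)                                    # input vector with 4 features
    W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)  # layer 1 weights and biases
    W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)  # layer 2 weights and biases

    h = np.maximum(0, W1 @ x + b1)  # multiply, add, then ReLU (keep the positive part)
    y = W2 @ h + b2                 # multiply and add again: the output

    print(y)                        # three output numbers, e.g. scores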

It's more if you transform the meaning, the content, and what people would read in the picture. And this is why it's a gray area here. I don't see any danger through denoising this. But now imagine if you change someone's eye color, or the hairstyle or the shirt; there it becomes a transformation where I would absolutely want to disclose something. What I'm going to do is, for people who go to look at the associated posts related to this podcast on GeekWire,

I'll do the before and after so they can see. And in the meantime, I'm going to talk with my colleague and see if maybe I should go back and update that caption to reflect the fact that it was at least enhanced by AI. Denoising is good AI. But isn't that amazing? I love it. I use those tools too, myself.

And then even Adobe Lightroom, et cetera, they start onboarding more and more AI generative tools or AI denoising tools. So in a way it brings new capabilities to people. And then sometimes when you use your image for commercial purposes, this is where you start being worried. But if you use it for recreational purposes, sharing a picture... I love taking my bird pictures.

The bird is moving. I had to use a low shutter speed and then I can make my photo sharp, right? I love that. I see this as a technological little improvement that makes me happy, but I don't see this as a dangerous improvement. I see this as a little feature that can be useful and enhance my photographic journey.

But going back then to some of the professional photographers, news photographers that I worked with at the paper, I can imagine they would look at this and go, wait a second, that's a skill that they developed over many years of hard work. And here I am, some joker reporter going out there and snapping a picture in two seconds, with the use of an AI tool, kind of cheating to get to the level of quality that they can achieve naturally. How should they feel about that? I like this question a lot, and I have it also a lot. And then, to make it even broader, to transport it to other fields: if you're a writer, if you're a journalist, if you're doing podcasts, yes, what if now I can generate my own podcasts in like a minute, just by taking a few papers

I find online, pictures, and then I get my podcast. How would you feel about this? So this is where the technology is challenging, because it deals with people's emotions. Factually, I like the fact that more people can start doing it. Why?

Because I believe in the creative aspect, that many people should be empowered to realize their dreams. If you like doing photography but you felt you did not have the skills, I believe that using AI to get there is a skill. And I believe also that the person who has done this for many years is an expert. And experts can always go beyond what a normal person would do.

So I think that AI is also an opportunity to be even better than you were before. So instead of looking at the technology with a worried eye that too many people can do exactly what I do, I would rather focus on, you are the unique artist there. The most beautiful thing that you've mastered over the years is how you've used the tool, the technology, your eye, your artistic eye. This does not go away. Nobody can take this away from you.

And technology can help you do even new things that nobody else could do. And I also believe you can stay free of not using the technology just because you love the pure art of the way it used to be. But I also like the fact that others can onboard and start doing painting photography podcasts. It's bringing capacities in the hands of more people, and I like the fact that more people can do what they would want to do. This is a great conversation.

I'm really fascinated by this entire issue, and the examples that you're bringing up and that we're talking about are really driving it home for me. So when we come back, let's talk about education and how this is playing out in your work in the classroom. You're listening to GeekWire, and we will be right back.

Todd Bishop
Welcome back. My guest this week is Léonard Boussioux, assistant professor at the University of Washington's Foster School of Business, who researches and teaches about AI. So Leo, tell me about this class that you launched last year, this course at the UW Foster School called Generative AI in the Era of Cloud Computing. What does it involve and what are you teaching people? I was super excited that, as a first-year professor, my colleagues entrusted me with creating a brand new course, and they wanted this to be a flagship course for the Master's in Information Systems program.

Leonard Boussioux
They wanted to create something extremely modern that will empower students to understand technology, also use it, and spread the good usage of such technology. So I reflected a lot about my own journey with generative AI. How I use it for my personal life, my professional life, how I also believe it could make the world a better place, how it can help people being more productive in the ways they like, and how they can be even more creative. So these were principles I wanted to include in the class, and I wanted the human aspect to be at the center of this class. I did not want people to feel I'm just learning a technology.

I rather wanted people to realize, I am going to develop a human-AI collaboration. I want to include AI in the loop. I want AI to help me address very challenging questions, questions that I would not be able to address before. So I start the course directly mentioning how important addressing the United Nations Global Goals is in my eyes. I love encouraging students to address healthcare issues, sustainability issues, climate change, hunger, poverty, education for all.

And this is the motivation I give them. And I also tell them that the risk is that we are going to fall short of achieving the goals by 2030 as defined by the United Nations. However, I believe also that using technology in a responsible and appropriate manner could accelerate the progress. And I show them that this is not a given; this is something we have to fight for, also something we have to reflect on. And then I'm presenting them my philosophy, but I also encourage them to build their own philosophy. I don't say that people should believe the same way I do.

Some people don't like AI as much. And then I show them how I create so many things. I show them how I create a song in a few minutes, how I create a website in a few minutes, an app, how I am going to solve some challenging data analytics problems. I show them how to do all of that, and then they suddenly feel the magic of using the technology and they want to do it themselves. So I give them this opportunity by allowing them to do a course project, choosing the assignments carefully, and also listening to them, having them engage with each other.

By the end of the course (it's just a three-week course that is very intense), they all managed to do a live demo. The live demo aspect was extremely empowering for them, because none of them believed they could do it. At the beginning of the course, everybody was complaining or thinking, wow, I think it's too hard for me. It's too much work.

And then I had to flip the mindset of the classroom. I had to show them that this is a journey, that we don't get everything right from the very beginning, but that we also all learn different things, we grow differently, we collaborate altogether. By the end of the course, I had apps that were running live on the phone. I had people showing me they had an app to identify everything they had in their fridge and get new recipes.

There was an app where you take a picture of your piece of clothing and it tells you if it's sustainable or not. I had people creating a generator of pictures to protect your kids from appearing on social media; it would create a cartoon version of your kids. Wow. And I had students just wanting to solve so many of the big challenges.

For instance, the elderly. They created a bot that would have a nice conversation with grandparents by being tuned to what they like. Wow. Those are just examples. And then the students just found this to be absolutely fantastic.

It transformed them. They told me it was a transformational experience. It was just beyond learning the techniques. It was really about learning more about who we are as humans.

And I include this also in the lectures, how I build tipping points for my life, meaning how do I get to create a big change in my life? How do I seize opportunities? How do I connect with others? And this is really this serendipity, this genuine aspect of who we are as humans, that also resonated a lot with them. I love it because what you're doing is you're expanding the landscape for creativity.

You're really giving them these immense tools to use and allowing them to just sort of have this playground of the mind to try and figure out what to do with it. I love this expression of the playground of the mind. Absolutely. I broaden their perspectives and creative space. Many people think in a very given manner because also our education system tends to focus on analytical thinking.

We think in a linear way. We see a problem, we have to solve it. I throw my students directly in the pool. I put them in the middle of all those technological advancements and show them so many possibilities. They feel overwhelmed, but because they have this feeling, because I build in this friction, I really give tons of mentorship.

They start realizing that they have dreams, they have things they want to solve. And then I gave them a lot of cues on how to solve it. So they're going to select what corresponds to them. So for one course, I sort of have 100 courses for 100 students, because every one of them is going to select the content that they liked most. And this is how they have a personalized journey through this class.

And then suddenly they realize it goes beyond my class. This is why I tell them: it's not just this course; what you're taking with me is for your whole life. Become AI champions. But you are going to spread the creative aspect. You are going to show that you are artists, and I'm showing them how to be an artist by using AI as a tool. That's very cool.

Okay, so that's the big picture. And I know some people who listen to this podcast really want some nitty-gritty stuff. And so for people like that, and frankly, like me, too, I know just from looking at your work, you're a big fan of Midjourney. What are some other common tools that are just the staples in your toolbox right now? And I recognize that six months from now, this may be something completely different.

But can you give me a little bit of a list of your go-to AI tools right now? Absolutely. I even force my students to use every single one of them such that they feel the friction and how they are all a bit different. Let's start with text. I bet that many of you have used ChatGPT.

There are other models that are very good. For instance, there is the version from Google called Gemini, and there is the version from Anthropic called Claude. I force my students to use the three of them, and I give them a few situations so that they compare how a given model may be better than the other. Some might be better for coding, another one to write an email, to write an announcement, or for writing a story. So I ask them to use the models in different use cases.
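For readers who want to try the same comparison exercise programmatically, here is a hedged sketch that sends one prompt to all three providers through their Python SDKs. The model names are snapshots that change frequently (which is part of his point about the moving frontier), and API keys are assumed to be set in the environment.

    # Sketch of the "same prompt, three models" exercise. Assumes the openai,
    # anthropic, and google-generativeai SDKs with API keys configured.
    import os
    import anthropic
    import google.generativeai as genai
    from openai import OpenAI

    prompt = "Explain, in three sentences, when a spreadsheet beats a database."

    gpt_answer = OpenAI().chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    claude_answer = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-20240620",  # example model name
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    ).content[0].text

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    gemini_answer = genai.GenerativeModel("gemini-1.5-pro").generate_content(prompt).text

    for name, answer in [("GPT", gpt_answer), ("Claude", claude_answer), ("Gemini", gemini_answer)]:
        print(f"--- {name} ---\n{answer}\n")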

And I also tell them that it changes every month, if not every day. And I also teach them how to feel the jagged frontier of AI. So this is a term that has been coined to mean this frontier of what AI can or cannot do. This is always changing every day. So I don't want my students to feel that AI is a fixed technological advancement.

It completely moves all the time. So I teach them how they can recognize for themselves how maybe in one week this new update might finally solve a math problem that was invisible before. So I give them a few specific examples of, for instance, mathematical tools, data analytics questions, creating a website, design questions, writing an email, and I ask them to do this with every tool, but to keep doing it, to feel their own frontier of what they wish AI could do for them. So this is for the text models. I also love the open source versions, but typically it's more for developing your own apps.

Next, the image models. I show my students DALL-E, which is the version from OpenAI, but also Midjourney, that I love as an artist because I have more control over the overall style. But there are also many other versions. Like there is also Leonardo AI, Recraft AI, and they all have different use cases. Some of them are very good for pure artistic pictures.

Some of them are good to illustrate your slide decks. Instead of having boring slide decks with bullet points, you can suddenly illustrate every single slide. You want sometimes to include just an icon, like a nice little icon. So Recraft AI is particularly good for that. So I show them all those sub-tools, how every one of them can be useful for a particular use case, but how they can keep discovering it.

Then there is also the sound. I show them that they can use Suno AI as an example. Which one is that? Suno. Suno AI.

Got it. That is a very good model for generating music. Of course, you have very little control of the final music, but you really generate some cool things. And just as an exercise for myself, I generated a whole playlist for me to play in my car going on a road trip. I was so happy that I generated my own music.

I was in Texas, so I generated some folk country music. I was so happy to feel that I generated this playlist for me by including keywords of why I was actually there. And then suddenly all the lyrics were tuned to what I wanted. I did not specifically decide on how the rhythm should be. But I had an experience that I designed for myself.

And then I also teach the students how to merge all of those tools into one. For instance, a website. Every single one of my students has to build a website that includes some music, a video, a picture, a text, some exciting moving pieces. And then like this, they see the combination. So that is a new skill: how to combine multiple tools together, identify what they can or cannot do, such that another tool can solve what the previous one could not.

An example for yourself: you're very good at taking pictures. Maybe you use Lightroom to enhance your picture. But maybe denoising is something that is particularly hard, and maybe Photoshop is out of your comfort zone. There are some tools specialized in denoising.

Some of them are AI-based. You can directly feed this into the denoising model that will give you what you wanted. I don't feel this is a bad practice. I rather think that you were smart to identify the tool that could do what you did not know how to do. And that is a new skill.

By the way, for people who are wondering, the way that I did that Sam Altman photo was a plugin in Canva, the design program. And that is one thing that I've learned to do for a while. I kind of rolled my eyes when I saw that my favorite app was adding an AI feature. It's like, oh, everybody's got to have an AI feature. And then I kept trying them, and it's like, whoa, wait a second.

In some cases, not every case, but in some cases, you know, this is something that can actually make a meaningful difference. And this actually leads to something that I wanted to ask you about. This was a column just recently by Christopher Mims in the Wall Street Journal. I don't know if you saw this, but the headline was, "The AI Revolution Is Already Losing Steam." And I wanted to read one line from his article and get your take on this.

He says, quote, the rate of improvement for AI is slowing, and there appears to be fewer applications than originally imagined for even the most capable of them. It is wildly expensive to build and run AI. And by the way, when you were talking about energy earlier, this is me saying this, but I was thinking also about the cost to the environment of AI models. That's a whole other issue. New competing AI models are popping up constantly, but it takes a long time for them to have a meaningful impact on how most people actually work, end quote.

So that's kind of the bear case on AI. What's your take on that? So, first of all, I agree a lot with what has been said. I'm going to bring more nuance. I want to first highlight that indeed, one major shortcoming of those AI models is that they're extremely expensive in terms of energy, water.

Like, you have to cool down all the servers or the GPUs that are used to train those models. And the cost is enormous. Like, for instance, there is early research showing that for about 20 to 50 chats you're having with ChatGPT, it may be like half a bottle of water that is consumed. This is a lot of impact if you imagine how many people are using those models. So first, I always say this to my students.

I want them to be aware, most of them, they had no clue about this energy consumption. I also believe, though, that you can totally do research on how to make them more efficient. And this is why I love open-source AI, because you know exactly how they're trained, you know exactly how the weights, the architecture are built, and then you can focus on the sustainability aspect. There is a company called Hugging Face. They have a team on this.

She's actually one of my friends, Sasha Luccioni. She is dedicated to identifying all the big challenges with the consumption of energy, water, et cetera. So I think science can help there. But I also think people should be aware of how costly this is. Companies like OpenAI, they never disclose how much energy they consume.

Second, about the fact that more and more models are coming: I find this to be good news, in the sense that it's also how research works. More people are targeting the challenges. And then I believe that the open-source community is doing an amazing job at trying to solve all those challenging topics. And this is great. It means that we are bringing the tools into the hands of everyone.

So it's competitive, which means that the returns are not as high as expected before. Next, regarding the fact that AI is slowing down. Not living up to the hype, I guess, would be another way to put it. It is a much debated topic. If you follow it on Twitter, you have people who believe that you should accelerate, you have people who believe you should decelerate.

I believe, honestly, that AI is going, for now, in sort of a linear trend. Between GPT-3.5 and GPT-4, it was a very nice improvement. GPT-4o has seen some nice multimodal improvements, nothing much in terms of capabilities of the raw model. People rumor that GPT-5 will be quite better, but nobody's expecting a giant leap. And then there is a lot of debate about large language models, which is the technology using those big neural networks that generate the next words, or the next tokens, meaning the next units in your sentence.

This technology, many people actually think, is not what will give us AGI, the artificial general intelligence. People believe we need something else, other kinds of inventions, and then it's a distraction to spend so much energy and time on large language models. I believe that there is a lot of truth in this, in the sense that those large language models are very good for a few use cases. They're very good at generating content. And I will not deny this at all.

I use them all the time for that. However, when it comes to inventing brand new ideas by recombining knowledge, this is something that those AI models are very challenged about. And then people believe that using agentic systems, meaning that you have multiple AIs collaborating together, could be one way forward. So some people rumor that GPT-5 will be an agentic system where, for instance, one model is specialized in the front end of your website, another model is specialized in coding the security of your website, and another model is specialized in the back end of your website.
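As a toy illustration of that agentic pattern, the sketch below chains three narrowly scoped model calls whose outputs feed one another. It assumes the OpenAI Python SDK; the roles, prompts and single-provider setup are made up for illustration, not a description of GPT-5 or any actual product.

    # Toy sketch of the "agentic" pattern: specialized model calls handing work
    # to each other. Roles and prompts are invented for illustration.
    from openai import OpenAI

    client = OpenAI()

    def agent(role: str, task: str) -> str:
        """One specialized 'agent' is just a model call with a narrow system prompt."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": role},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    spec = "A one-page sign-up site for a student photography club."

    frontend = agent("You only write HTML and CSS front ends.", f"Draft the page for: {spec}")
    security = agent("You only review code for security issues.", f"List risks in this page:\n{frontend}")
    backend = agent("You only design simple back-end APIs.", f"Propose endpoints for: {spec}\nAddress: {security}")

    print(backend)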

They would collaborate together like a team, and like this, they would be able to solve it. I believe that it's going to help again, but we won't get anything that is super intelligent. I'm not worried that there will be any super intelligent system taking over at all. The other thing that strikes me about this is, when you're talking about the linear versus the exponential improvement of AI, you're talking about the expansion or the improvement of the toolbox, to go back to what we were talking about earlier. But to your point, there's so much capacity to use those tools in new and different ways, just as your students have learned. Exactly.

The exponential part is not necessarily in the tool, but for the humans. This is what I want to show my students: the tools will get you to that point. Your brain can get you so much further, but not your brain alone; your brain as a team. And this is what I show the students, that to reconcile these amazing technologies with how the real world operates, you need to use your human intelligence.

No AI will take this away from you. You won't be replaced anytime soon. You rather need to use the technology that's available and target the challenges that were invisible before. But you will need to use your human intelligence to get there. You will need to be creative.

You will need to be an artist, to think out of the box, to figure out those little details that nobody else will see, and this is an opportunity to leverage the technology to get you there faster or differently, or to get the right support you need. But you still need to use your brain. Ultimately, you still need to decide. You need to just realize how beautiful this is to be a human. And then you will see that you can do so many more things thanks to this.

I could talk to you for hours, but that seems like a perfect note to end on. Thank you so much. This has been great. I really appreciate you coming in. Thank you so much for having me.

It's been such a pleasure discussing with you. Léonard Boussioux is an assistant professor in the Department of Information Systems and Operations Management at the University of Washington's Foster School of Business. He's also an adjunct assistant professor at the Allen School of Computer Science and Engineering. See the show notes for links to his website and research. Thanks for listening.

Todd Bishop
Kurt Milton edited this episode. I'm GeekWire co-founder Todd Bishop. We'll be back next week with a new episode of the GeekWire Podcast.