AI and Education: AI's Role in Education with Luis Serrano

Primary Topic

This episode explores the intersection of artificial intelligence (AI) and education, emphasizing AI's potential to enhance learning experiences and teaching methodologies.

Episode Summary

In this enlightening discussion with Luis Serrano, an expert in AI and education, the conversation delves into how AI is transforming the educational landscape. Luis, known for his educational YouTube channel and his book on machine learning, shares his journey and insights into effective teaching and the profound impacts of AI on learning. The discussion covers the evolution of educational methods, the potential of AI to democratize learning, and the challenges of integrating AI into traditional educational models. Luis stresses the need for educators to adapt and embrace AI to improve educational outcomes and prepare students for a rapidly changing world.

Main Takeaways

  1. AI is democratizing education by making learning resources more accessible and customizable.
  2. Effective teaching involves understanding complex concepts deeply to simplify them for learners.
  3. AI technologies can enhance interactive learning, providing students with a more engaging and personalized education.
  4. The role of an educator is evolving from a source of knowledge to a facilitator of learning, utilizing AI as a tool.
  5. Educators must keep pace with technological advancements to effectively integrate AI into their teaching practices.

Episode Chapters

1: Introduction to the Guest

Luis Serrano is introduced as an influential educator in the AI space with contributions to YouTube and various online courses.
Luis Bouchard: "This episode features Luis Serrano, an expert in the intersection of AI and education."

2: AI's Impact on Education

Discussion on how AI tools enhance the learning experience by providing personalized learning pathways and resources.
Luis Serrano: "AI is massively transforming education, not just by automating tasks but by enabling more personalized learning experiences."

3: Philosophy of Teaching

Luis Serrano shares his philosophy that teaching should simplify rather than complicate knowledge.
Luis Serrano: "I believe in breaking down complex topics into understandable segments to demystify subjects and empower learners."

4: Future of AI in Education

Speculations on the future implications of AI in education, emphasizing continuous learning and adaptation.
Luis Serrano: "AI's role in education is just beginning. It's set to revolutionize how we teach and learn by making education more accessible and tailored."

Actionable Advice

  1. Embrace AI tools in educational settings to enhance learning.
  2. Focus on understanding AI's capabilities to better integrate it into teaching methods.
  3. Stay updated with the latest AI developments to keep educational practices relevant.
  4. Use AI to create more interactive and engaging learning environments.
  5. Educators should develop skills in simplifying complex concepts, leveraging AI where possible.

About This Episode

In this episode, Luis Serrano and I dive into the transformative impact of AI on education, forecasting a radical shift in how future generations learn and think.

People

Luis Serrano, Luis Bouchard

Companies

None

Books

None

Guest Name(s):

Luis Serrano

Content Warnings:

None

Transcript

Luis Serrano

Hey, the future generations, like your descendants, are going to be really dumb compared to you when it comes to brain stuff. They may not know math, they may not know how to write essays. They're not going to know all the stuff that makes you successful right now. They're not going to know anything of that, but they're going to have something else. See, this is something that people do.

They give you the hard definition and then an example. I hate that because first they scare me. I'm terrified, I'm intimidated. And then they say, oh, this was just something very simple, and I'm still intimidated. I do the opposite.

Like I do an example first and then the definition. I think by the time you become an expert on something, you forget when you didn't understand it. And so experts explain everything the way they themselves remember it; they've already forgotten about the basics, and they live up here in high-level land and refuse to go back.

Luis Bouchard

This is an episode with Luis Serrano, an expert educator in the AI space. Luis Serrano has a successful YouTube channel, many courses, and has even published a book on machine learning. He has lots of knowledge around education and the AI space in general. In this episode, I really wanted to focus on education and talk about how to be a better educator, how to be a better learner, but also how AI is impacting the education industry and how people will leverage AI in the future to learn more. This episode was personally extremely useful, but also super fun.

Luis is not only an amazing educator, but he's also an amazing human being, and he's fun to talk with but also to listen to. I hope you enjoy this episode. If you do, please don't forget to leave a like and a five star review, depending on where you are listening. Thank you, and let's dive right into it. Yeah, my first question is just about your educator background.

When and why did you start trying to teach, and why did you go the educator route? Hi Louis. Thank you very much for having me on your podcast again. I really enjoy it, so I'm happy to be here. Yeah, I mean, I think education was always in me.

Luis Serrano

I think a teacher is also a learner. So I was always an obsessive learner, not a great student. I was never a great student, but I was always just a little more curious. I feel like since I was a kid, I was never content with explanations, but I thought I was just dumb. I thought I didn't understand them.

I'll give you an example. There was a time in, I don't know, primary school or something, when I saw that the next chapters in the class book explained how electricity works, and I was very excited, because I would always kind of take things apart, like toys, and I would sort of understand how mechanical things worked. But the fact that you would plug something into the wall and a light turns on or something moves was puzzling to me. And so I was so excited for the day that this class was going to happen. And then the teacher comes in and says, okay, this is how electricity works.

And in a nutshell, they just said, well, you have a hydroelectric plant. And then there's, like, water coming out, and then the water moves a wheel. And I was like, okay. And then the wheel generates electricity, and that's what we use for the lights and stuff. And I was like, okay.

And then everybody was like, oh, that's wonderful. And I'm looking around and I'm like, wait, how is that an explanation? That made no sense. Now I think that was an awful explanation. But at the time, everybody got it.

And so I was like, I must be stupid. And everyone was like, well, of course, it's obvious. And I was so disappointed, but in myself; I didn't think it was the explanation that was the bad one, right? And so I was always discontent with everything.

And many times with AI, I'm discontent with something. They go, oh, the neural network learns how to blah, blah. And I'm like, what do you mean, learn? That's like the hydroelectric plant all over again, right? So I need to go a little further, and it's just an obsession of mine.

Everybody has their own obsessions, and I don't have a lot of the ones other people have. Some programmers want to make better code. I don't want to make better code. I just have that one obsession. I just have to understand everything much deeper.

And sometimes it was hard. Like when I was studying mathematics, for example, I was way slower than all my peers because I had to understand things down to the example. And when you get into levels and layers of abstraction, you don't care about examples. You just worry about, this definition is built upon this one, upon this one, upon this one. You don't even have numbers anymore.

I had to bring it down all the time. Sometimes you just can't. So again, I just thought I was stupid, right? And it's always been like that. I always just need to understand it way deeper.

For doing research or programming or things like that, it makes you slower, but for teaching, it makes you better, because when I teach somebody, I already have a narrative in my head. I already told myself a story. Like, if I'm going to say what a neural network is, I already took years telling myself a little story with, I don't know, little animals and stuff like that to explain the neural network. So I already have that narrative, so I just bring it out, right? And so, yeah, I was always a teacher at heart, and I would always go into long explanations and things like that.

And one day, when I found that a lot of things were just more difficult for me, I decided that maybe I would go into teaching full time. And that was the best decision I've made in my life. You mentioned it's probably because of just the way you learn and the way you are, but do you think it's a skill that one can develop and learn? Or some people are just satisfied with the water and the wheel, and they just don't want to go deeper into the explanation. So can they still become good teachers or educators, or do you have to be someone like yourself who needs a deeper understanding to be able to explain things clearly? See, when I think of skills and talent, I don't think so much of, like, a natural ease that people have.

Maybe there's that, but I don't see them as, like, I have a gift for this. I feel them more like I have a need for this. And so anybody who wants to be a great educator probably cares about understanding and also about helping others understand. So if anybody really wants to be an educator, it's because they have that deep desire, which is all I have. The desire, like, I don't think of any talents or anything.

If anything, difficulties in other things brought me to education. So sometimes skills is a combination of many things. It's maybe the natural gift you may have somewhere in the process, but it's also difficulties in other things that push you in that direction, and it's really where you care the most, where you deeply inside care the most. And maybe we choose that. Maybe we don't choose that.

I don't know. As I said, some people look at code and they want to improve it a million times, even if it already works. I don't have that drive. If it works, it works. Done.

Next thing, right? But surely if one cares about it, you would get more obsessed with it and become a great coder, right? So I think anything we want, we can do, because if we have the care and the need, that's what I think of as the talent. But how do you decide how deep you go into a topic? For example, if you are often explaining very complex topics, just like attention recently or the PPO algorithm, or very complex topics, so how do you decide on how much detail you want to go about?

Luis Bouchard

And is that just because you personally already have your narrative, as you said, and you just try to explain it the same way you understand it, or are you trying to tailor it to a specific audience? And, yeah, how do you deal with that? I think both. Myself, I have a point where I get happy, and I can't control it. Something clicks and I'm like, oh, I got it, right.

Luis Serrano

Like I got the right analogy. I need some kind of click in my head and I normally kind of look, do a little celebratory dance when it happens, right? And then I know that's it. I also cater to everybody. I also try to make sure that everyone would understand it.

So if I can remove the formula completely, I'm happy, right? If I don't have any formula anywhere. I may have numbers, but that's it, right? Like if my formula is three plus two, that's good. But if it's alpha something, one minus gamma, I'm not happy with that.

So I need to get all the formulas squeezed out.

I think if I can explain it to anyone with some knowledge, right, maybe some high school or even less, then I'm happy. For example, say I want to explain an embedding graphically, right? Like if you say that a word goes to a vector of 1000 numbers, people who don't understand vectors won't understand it, right? We can think of 1000 numbers, but not comfortably. Right?

Like a random person doesn't think of a list of 1000 numbers and feel comfortable, right? But you can think of locations. So I can say, well, if an apple is here and an orange is here and a cherry is here and the car is here and the motorcycle is here, then location wise, that makes sense to everybody. Like I can go to a kid, put a bunch of stickers on a paper, and say, place the stickers however you feel, and they would probably put the apple close to the orange, right? And then there are numbers hidden there, because the horizontal coordinate and the vertical coordinate are two numbers attached to each word.

So I'm implicitly attaching numbers to words, but I'm not saying I'm attaching numbers to words. I'm attaching positions to words. Right. So anyone can understand that. I can probably go to a child and that will make sense, right.
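To make that picture concrete, here is a minimal sketch of the idea Luis describes: a handful of words placed at made-up 2D positions, where distance stands in for similarity. The words and coordinates are invented for illustration, not taken from any real embedding model.

    import math

    # Hypothetical 2D "positions" standing in for word embeddings (all numbers invented).
    positions = {
        "apple":      (1.0, 1.0),
        "orange":     (1.2, 0.9),
        "cherry":     (0.9, 1.3),
        "car":        (8.0, 7.5),
        "motorcycle": (8.3, 7.9),
    }

    def distance(word_a, word_b):
        # Euclidean distance between the two points: small means "similar meaning" here.
        return math.dist(positions[word_a], positions[word_b])

    print(distance("apple", "orange"))      # small: fruits sit close together
    print(distance("apple", "motorcycle"))  # large: unrelated things sit far apart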

So I always try to go as basic as possible. It's a combination of what others will understand and what I understand and what clicks in my head. And speaking of how education has changed, how have you seen AI impacting the education industry right now, and how do you think it will impact it going forward? Oh, massively.

I mean, massively. I think technology has been impacting education, and I think mostly positively, because there's just so much room to improve in education that it can be beneficial. I mean, the basics are, when we started making courses online, I think that helped level the playing field a lot, right. First of all, education used to be a super privileged thing.

Well, it still is, but less and less. Right. You have to be living in the right place to be able to go to school, you have to have the right socioeconomic status to be able to learn. You have to be the right age, because we used to learn between the ages of, I don't know, five and 24, and that's it for the rest of your life.

What if you want to learn something at 60, right. So you need to be in this small range of people to be able to learn. And I think now, well, first of all, we can't afford that because things change so quickly that if you stop learning at 24, you're going to be obsolete by 26. So I think that we're forced to it. But I think tech education has done a lot because it keeps us learning.

Right. All I learned in classrooms was math; anything like programming, anything that's AI, anything like that I've learned online. So I think that definitely we can bring information to a lot more people, and that always helps. And the other way it can help is that it can make things more interactive, right? Like I go into a language model now.

My learning method is that I start asking the language model a bunch of stuff and try to get it to explain things to me in a simpler way, whether it's AI or history or anything. I just start asking it the way I would ask an expert. Of course it can lie, and I make sure it doesn't lie, but it's interactive, right? And I think we need that, because we can't just sit down and look at somebody talking and treat them like a supreme being that knows everything, that we can't question, and to whom we can't bring information of our own. I think everybody in a classroom should bring information to the table, not just the teacher.

The teacher has their own ideas that are settled, whereas the students have these open minds. And I love to tap into the students' minds all the time and ask them how they would do this. They surprise me many times. So it's more of an interactive thing, and I think technology allows that. Do I think that the bulk of education will be technology?

No, I don't think that. And I think that we need a community and we need many other aspects that technology has had a hard time achieving. Right? Like for example, we have discord communities and slack communities and stuff. But it's never going to beat being with your friends, learning something, meeting someone who knows something different, starting talking about that.

Learning in a group, that community learning, is something special. For example, at Udacity we used to have open courses, where you can start whenever you want and finish whenever you want. And they had a low graduation rate, like 20%. And somebody said, actually, let's create cohorts and have applications and artificial deadlines.

And my reaction was, that's ridiculous. If people can learn anytime, why would you have an artificial deadline? When the professor needs to finish, give out grades and go on vacation, you need a deadline. But if it's a computer grading you, you don't need a deadline. Well, it turns out that when they added applications, start dates, communities and deadlines, the graduation rates skyrocketed, because we need that sort of tribe environment and a bit of pressure, like, I need to do it by this time, so it doesn't drag on forever.

So I think community is very important. I think tech has not been able to figure that out, and I think it will continue to be figured out.

I think tech is doing a lot. I think that tech education is doing a lot to educate us, but it's always growing. I think every field needs to constantly change. However, we can go deeper. And I think many people are afraid, right?

Like many educators are afraid and they think kids are now writing essays using language models and they're cheating. Yes. How do you go about that? And I think that we can't go against it. I think we have to join it.

What happened with the calculator? Right? Like people say, like, oh, we're not going to learn how to add now because we're going to use the calculator. No, we still know how to add. Just that the problems changed.

Like then the homework was a little more difficult because now you would assume that the student uses the calculator, right. And I think we need to do that. We know that the student is going to use a language model. I think it's okay. I think we should give homework that makes sense regarding the large language model.

Like maybe make them learn a new topic, write an essay about something where they don't need to put the words in, but they still need to give the computer the ideas and guide them in the right way. We need to figure that out. It's not figured out. And I think a lot of skills are going to be lost. And that's also okay.

A lot of people think, my kids are not going to know how to write essays, or my grandkids are not going to know math. I think that's okay, because I don't know how to make fire with rocks and I'm okay. So I always think of human evolution, and I think, of course we lose a lot of skills, but then we gain others, and we lose jobs, but then we gain others, and we always evolve. And I think, if we get fully philosophical, I think language models are a step to free us from rationality, which may be a good thing. I'll give you an example.

Let's say we go by levels, right? Like we were physical beings, right? If I go to, I don't know, tens of thousands of years ago to the hunter gatherers, and I were to say to them, hey, you right now depend on your legs and your arms and your strength and your skills, physical skills, but your descendants are going to be very weak compared to you and very unskilled physically compared to you. But their brains are going to make up for it. And believe it or not, society is going to kind of follow the brain more like the brain is going to be the most important thing.

There's still going to be physical stuff, but the brain is going to be the most important thing. And then, well, the hunter gatherer, the early human, will probably laugh at me and say, no, that's ridiculous. I barely need to use the brain. I need my legs to run from the animals that are hunting. I need my arms to hunt and to gather, and I use the brain to break ties, right?

If I'm going to decide to hunt that buffalo versus that one, then maybe I see that that one's weaker, or maybe if I want to pick acorns, I figure out that this path is better because there's more there. I don't know, right? So the brain is just there to break ties, not to make me survive. Well, they wouldn't imagine that. On the other hand, I go to the human right now, and then what's the analogy for right now?

Right? Like, we're already not physical beings. When machines started, we started becoming fully rational, and now we just make decisions with our brain. Most of the big decisions are brain decisions.

Well, what if I say to someone, hey, the future generations, like your descendants, are going to be really dumb compared to you when it comes to brain stuff. They may not know math, they may not know how to write essays. They're not going to know all the stuff that makes you successful right now. They're not going to know anything of that, but they're going to have something else. That sounds crazy, but it was crazy for the early human.

It was crazy to tell them that it was all going to be brain. Well, for us, it's crazy to tell us that that's not going to be rational stuff. And then I wonder, what's the next thing? So let me tell you what I think is the next thing. I think it's intuition.

Okay, let me elaborate. This sounds crazy, right? But check this out. Most of the decisions we make right now are data driven, right? Like big decisions of, like, we use our rational brain.

We use data, we explain every decision, we rationalize it, and we make decisions. Sometimes there are decisions that we make that come out of nowhere. Maybe you dreamed of it, maybe you meditated, or you just had some strong intuition that, oh, I should do A instead of B, I don't know why. And you follow it. Well, that's only used to break ties, right?

What the brain was for the early humans, intuition is only used a little now, but that's the stuff that large language models cannot do. Everything they do has to be rational. Everything they do has to have numbers on it, has to have an explanation, a sequence of steps. When we use our intuition and we say, oh, I just got this idea, I have to follow it. I just got it.

Well, that's what makes us special. Just like the early human started using the brain, we start using our mind. And just like there was a cognitive revolution 70,000 years ago, I think we're close to having a conscious revolution. And I think our descendants are going to be way more developed in that part. And they'll have a lot more intuition, a lot more of that stuff that comes out of nowhere.

That comes not out of rationality; maybe don't call it rationality. And so I think, yeah, long story short, I think a lot of our skills are going to go, and education has to kind of build itself around that. Yeah, that was a very long answer. But hey, sorry to interrupt this episode, but I wanted to take this opportunity to remind you to leave a like or a five star review, depending on where you are listening. If you are enjoying the discussion, I also want to mention that I have a newsletter linked below if you want to learn more about artificial intelligence and how it impacts the different industries.

Luis Bouchard

Let's get back to the discussion. Would you be able to explain how the CLIP algorithm works without any visuals, just by talking about it? And just to clarify, the CLIP algorithm basically takes either a text or an image and encodes it so that, if a specific sentence represents the same thing as an image, the model understands that they are the same thing. So, for example, if we have an image of an apple and the sentence "an apple," they should be very similar within the model. And if a text is very dissimilar, say we have an image of an apple.

And the text, how are you? Well, they should not be encoded the same way. So the algorithm basically tries to understand the similarities between text and images. And it does that with mathematics and other things, which I already explained in some videos. But I wonder if you are able to explain this without the help of visuals like graphics and images, which I believe make it easier to understand.

Could you explain this with only words? Yeah. So I think there are different levels of understanding, right. In an algorithm in particular, you can understand what it does and then a few levels higher is how it does it. And I find that the first step is what it does.

Luis Serrano

Right. Like, understanding that a car works if I steer the wheel and do things is different from understanding what happens inside the engine, right. And I would love to have both. But the first level is what it does and how to use it, right? And so if you look at embeddings, let's say I'm explaining this to a child with, like, two pieces of paper, and I say, place these stickers of images, and they put the dog and the cat here and the fruits here and stuff like that.

And then I take another piece of paper and I say, place these words, maybe arranged in a different way, but it still has the word dog close to the word cat, and the word house close to the word building. And so we have two different embeddings, one for words and one for images. And we can think of it as one for sentences or descriptions or long pieces of text, and one for complicated images. Right?

Like a dog running, chasing a rabbit, or something like that. And then you say, well, you have two different ones. They have no reason to be the same, because a word embedding, a text embedding, doesn't have to match an image embedding. As a matter of fact, if you build them separately, with probability one it won't. Right.

But you can figure out ways to say, okay, maybe I can turn this page around and squeeze it and stretch it and rotate it and see if I can match, if the words can match the images in the best possible way. Right. And that function that you did between one and another one would be like a neural network or would be like a modification of the embedding or something like that. So at least we know what we're doing, right? We're trying to make sure sentences go to numbers, images go to numbers.

And I want to make sure that the sentence for this image goes to at least very similar numbers. So it makes sense that we would just do stuff to the embeddings, or to a map between the embeddings, which is the squeezing and the rotating and so on, to get one from the other. And then, if you forget about the embedding, the page where we put stuff on, then with a bit more work we have something that takes sentences to images or images to sentences. Right.

This is like what the car does. Right. We haven't talked about the engine yet, how it works. That is a deeper conversation. Right.

For that one, we may need a higher level of understanding, the same way that if I want to understand how the engine works, I may need some physics or some chemistry or something. But at the very least, what your algorithms do, I feel like that's universal. I'm yet to see one that can't be shown to everybody. As for how they work, sometimes you need a bit more, but it still can be brought to a level of understanding with a little bit of faith. Like I can say a neural network and maybe give people the idea that it's something that will help me bring A to B if I give it examples.
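A rough sketch of the "what it does" level for a CLIP-style setup, under toy assumptions: we pretend the text and image encoders already exist and hand-write tiny vectors, then score caption-image pairs with cosine similarity. Real CLIP learns the two encoders and their alignment jointly from data; the vectors and names here are invented for illustration.

    import math

    # Hand-made toy embeddings; a real system would produce these with learned encoders.
    text_vectors = {
        "an apple":    [0.9, 0.1, 0.0],
        "how are you": [0.0, 0.2, 0.9],
    }
    image_vectors = {
        "photo_of_apple": [0.8, 0.2, 0.1],
    }

    def cosine_similarity(u, v):
        # Higher value = the two vectors point in a more similar direction.
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    # The caption that actually describes the image should get the highest score.
    for caption, vector in text_vectors.items():
        score = cosine_similarity(vector, image_vectors["photo_of_apple"])
        print(caption, round(score, 3))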

Right? Yeah. I do have two questions that just came up. The first one is just a little throwback to the embedding. Lots of people assume that large language models process language, like process words, et cetera, but they work with numbers and vectors instead.

Luis Bouchard

So how do you make people understand that an embedding like that is not words anymore? Yeah, that's a great point. And I emphasize that. To me, the most important thing, not just in a language model but in modern AI, is embeddings, right? Because we talk in words, we hear sounds, we see images, but the computer only does numbers.

Luis Serrano

In order for the computer to process an image, a sound, a movie, text or anything, it needs to turn them into numbers. So this translation is the initial thing you have to do for the computer, and the better you can do it, the better. You can't just go to the computer and say, banana. You have to give it numbers. Right.

And so the most important thing is to turn words, or anything, images, into these numbers, and to do it in a consistent way. So I would say, imagine that I have a way to turn the word apple into a bunch of numbers. A bunch of them, okay, ten or 1000 or two. And then you take the word pear. What do you think the numbers are?

Well, they're probably similar, because anywhere I can put an apple in a sentence, I can probably put a pear, right? If I see them from far away, I may not recognize which is which. So they have to be sent to similar numbers. And so at least that is the first concept of an embedding, that similar objects, similar videos, similar pieces of sound need to go to similar numbers. Right?

That's the first one. A second one would be that if you can do it properly, like the CLIP algorithm, we could be multimodal. Right? Like we could say that the image of a dog and the word dog and the sound of a dog may go to similar numbers. That's harder.

But imagine doing that, which is probably what the brain does. I mean, whatever the electric signals are that go through my brain when I see the word dog, they're probably similar to when I see an image of a dog, right? I'm guessing; I'm not a neuroscientist, but I would guess that. So that's the second one. And the third one is more complex.

This one doesn't need to always go in explanations, but if people are into it, I would say, well, let's say that I send a word to ten numbers. Well, each number is a description of that word in some way. So if it's an apple, maybe the first one is size. The second one is color. The third one is sweetness.

And if there are 1000 numbers, maybe the 577th one is something super abstract about an apple. Maybe something the computer figured out, a combination of things. Maybe I can't figure it out. But these numbers in some way are a description. Like if I have a checklist and I run every word through the checklist and I go, sweet? Yes, no, okay, yes.

Level of size from one to ten, and I go, four. It's a checklist of ways to describe the word. And if you put pear through that checklist, you probably get similar numbers to what you get if you put apple through that checklist. Right. It's a description.

So that's the highest level of understanding an embedding that I would give, sort of in an initial explanation. You can go further with things like analogies, things like the parallelogram rule and stuff, but I would go as far as saying it's a description, a numerical description of things. Yeah, the checklist of characteristics is definitely a good one. I've never thought about it that way, but I really think it's a really good one.
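A tiny sketch of the checklist idea, with invented features and scores; real embeddings learn their dimensions from data rather than having them labeled by hand like this.

    # Each word gets a "checklist" of numeric descriptions; all values are made up.
    # Features: [size from 1 to 10, sweetness from 1 to 10, edible yes/no]
    checklist = {
        "apple": [4, 7, 1],
        "pear":  [4, 6, 1],
        "car":   [9, 0, 0],
    }

    def checklist_difference(word_a, word_b):
        # Sum of squared differences: smaller means the two descriptions are more alike.
        return sum((a - b) ** 2 for a, b in zip(checklist[word_a], checklist[word_b]))

    print(checklist_difference("apple", "pear"))  # small: very similar descriptions
    print(checklist_difference("apple", "car"))   # large: very different descriptions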

Luis Bouchard

And I wonder, similarly, this is a very good explanation that I believe didn't exist a few years ago. I wonder if it's complicated to explain AI related concepts because the field is too new or because it's too complex. For example, we can explain how cars work quite easily, even though they're still quite complicated. Is it because they've been around for a while, or because they're much simpler than a neural network, whose decision making we actually don't even completely understand? I don't think it's because it's new, because we're bad at explaining math.

Luis Serrano

And math is pretty old. Yeah. So I doubt it. And a lot of machine learning things can be explained with math or with something. Sometimes they look like physics a lot.

I just think in general, explaining is something that maybe doesn't come easy, or there's not that much interest in good explanations, and it's not anybody's fault. But I think by the time you become an expert on something, you forget how to explain it because you forget when you didn't understand it. And so experts explain everything the way they themselves remember it; they've already forgotten about the basics, and they live up here in high-level land, and they refuse to go back, maybe because their hunger is in finding more stuff.

So, like, if you're a researcher trying to find new things, you don't care about explaining stuff you already know, right? So that's why it's not often that you see good explanations of things. And that's why people fear math and physics and STEM and many things, because they're just hidden in layers and layers of abstraction.

I think we just don't have a historical thirst for explaining things well; we just want to find more. That's sort of our objective function, to find more things. And I guess there's ego involved, too. I mean, if someone's an expert, they want to continue sounding like an expert.

So it could be that. But I think mostly you forget what it was when you didn't understand stuff, and I don't have that. I remember exactly how I don't understand stuff. I still have difficulties understanding many things, so that difficulty will never go away. And I think I kind of like it.

I never liked that about myself, but I like it now.

Luis Bouchard

And do you think that might be a problem from the population perspective, that everyone is now using or leveraging AI technologies without really understanding them or understanding how they work? Do you feel like, similar to a car, they should at least understand the basics and why it breaks? I think we pick our battles, right? For example, I don't know, assembly.

Luis Serrano

For example, do you know assembly? I don't even know what assembly looks like. Okay, so I kind of pick my battles and say, I'm going to go from Python up.

Maybe someone who's obsessed with understanding the inner workings of a computer will go learn assembly, or even earlier stuff, like punch cards or something. I take a lot for granted there. In math, I don't, because that was my education. So in math, I want to go down to assembly. Like, I want to go to one plus one equals two.

And that's the level at which I want to understand things. So I think we can't understand everything super deeply, because there's more and more stuff all the time, right? Like when human knowledge was small, you could just try to understand everything, but now it's just too much stuff. So I pick my battles, and in many fields I kind of take things for granted, and in others I dig much deeper. But I think in terms of AI, we should know how things operate.

And I don't mean the technicalities. I think the next generation is probably not going to know how to code as we know it, like with syntax. And they don't need to, just like I don't need assembly, right? Maybe some of them will. Some of them will have to code the basic stuff, so some of them will, but the majority of coders probably won't.

I may be wrong, but they'll probably just code in English or French or Spanish or something. But it's important that they still know how things work in the background: how a computer operates, even without the syntax, and how these language models operate, so they can prompt them correctly, et cetera. So they need to have a high level understanding that is also deep, like with the car. I cannot build a car for you, but if I drive a lot, it's good for me to know a bit about how things work, so I can fix it whenever I need to.

So if I'm in an emergency or something. So I think we need to try to go as deep as possible, try not to stay at the usage level of understanding, go as deep as possible, and just kind of be kind to ourselves and say, okay, I got up to here, and always try to go a little further. But, yeah, deep understanding is always something that's important. And why do you think the majority of people know almost nothing about ChatGPT, or language models, or just AI in general? Why do people not learn more about how it works or why it works?

Luis Bouchard

Is it because we lack good educators, maybe because the field is a bit too new, or just that it's too complicated and people don't want to learn other complicated stuff beyond their current work, for example? What do you think is the reason for that? I think educators can do a better job of bringing things down to the basics. Like, for example, let's say I don't know anything, and I went into a language model, like ChatGPT or anything, and I started prompting it, and it worked beautifully, right? So I go, okay, well, I definitely want to learn more about this. And I go, and I start looking at the architecture of a transformer, and it looks so complicated.

Luis Serrano

And then I see, what's this? A neural network? And I look for a neural network, and it's formulas and formulas and complicated diagrams. And I go, ooh, I will never be able to do this. And I just quit.

And I continue just kind of working on learning how the models work and just kind of prompting it to learn how to operate. So, first of all, that's a good idea, that prompting. I think people should play with language models a lot, because then you start realizing what they can do, what they can't, how to get them to do things, how to prompt them correctly. I think that's a very deep knowledge. So I think anybody should go into a language model and start playing with it like crazy.

But as for the reason, I think some people will not have the hunger for understanding, but some will, and most of them get discouraged when they see all the technicalities.

And I think the educators should do a better job of breaking these things down so that everybody understands, because a neural network is something everybody can understand. But the loss formula for a neural network is something that only somebody with deep math knowledge can understand, and that needs to change. So I think it's something necessary, and we should push more in that direction. Yeah, I always come back to the fact that I think it's because it's somewhat recent as a field, but there's also the aspect of good educators, where the best ones, I guess, have a good understanding of how to do good storytelling and make it interesting, basically. Whereas right now I feel like, yes, you can get some good understanding on YouTube or elsewhere.

Luis Bouchard

I don't have any specific examples in mind right now, but you can get good, simple explanations of embeddings or of other AI related topics. But I feel like they often lack the storytelling aspect, where it will be just the explanation; it won't be interesting or it won't draw you in to learn more. And I wonder, what are your thoughts on the importance of storytelling and visuals, trying to make it appealing, versus a good explanation? Are both equally important? Is the good explanation all it needs, or is storytelling even more important?

Luis Serrano

I think storytelling is very important to keep us engaged because we are natural storytellers and we get drawn to stories, right? We watch fiction, we read novels. They're not true, but they're entertaining and they keep us engaged. Right? Like, we watch episodes and episodes and seasons of a show that just keep going.

We know it's not a true thing and we're not learning, but we like the storytelling, right. It's easier to watch an entire season of a show than ten documentaries in a row, even though the documentaries are true and give us the stuff that we need to know. So why? The reason is because we're drawn to that.

We read a novel, but we don't read a Wikipedia page full of facts, like a series of random facts. We don't read that. So I find that in education we need storytelling. And I always try to tell a story, even if it's simple. And the story can be, we try to solve this.

We couldn't, so I tried this, and then we couldn't, so I tried that. Or it can be an actual little scenario with people that do this. I always try to have a narrative, otherwise I get bored of my own explanation, I don't finish it, so I need to entertain myself. I get lost if there's no narrative. So I assume people will too. Some people just have the skill to just process facts, but they're very few. So yeah, I think narrative is important.

Luis Bouchard

And how do you come up with good storytelling for something that would otherwise be boring? If you are explaining how an encoder works, how do you come up with a good story for explaining it, something interesting? Do you come up with a problem that you're trying to fix and show how you fix it, with a solution at the end? Or do you come up with a more interesting story or something unique? How do you go about that?

Luis Serrano

I start in layers. First of all, I'm so obsessed with this that even when I'm walking or taking a shower or something, I'm thinking about these things, because it's fun for me, right. So I'm always like, oh, how would this work? I'm walking around and trying to make an image of it or something. I always do this without trying, because it's how I try to understand and remember things.

But the first step is to bring it down to the basics, right? So an autoencoder, an image autoencoder, right. You would think, okay, there's a huge image here with a lot of pixels. I bring it down to fewer numbers. I have 1000 numbers here and I bring it down to 20 and I bring it back up to 1000.

That's kind of an autoencoder, in some way, right? Well, what's the simplest image? And to me, the simplest image that makes sense visually is a two by two thing, right? So two pixels by two pixels and one color, like a gray, black-to-white spectrum. So I have four numbers.

All of a sudden I have four numbers, and my images are four numbers that go to something smaller and then back to something bigger. So what's less than four and still not trivial? I guess we're going to go with two. So I'm going to make an autoencoder that brings four by four, sorry, two by two images, a little grid of two by two, down to two numbers and then back up to four.

So the two numbers are a description. So now I need two by two images that I can describe with two pieces of information. And the next thing is, well, maybe I should go for diagonals. So I have two types of images: the forward diagonal and the backward diagonal.

And maybe this one goes to one, zero and this one goes to zero, one. Right? So I need an autoencoder that does that: if I give it a two by two image that happens to be a forward diagonal, it goes to one, zero, and if it's a backward diagonal, it goes to zero, one, and then another part sends it back to the diagonal. That is the simplest autoencoder that I can think of; maybe someone will come up with a simpler one.

But anything simpler that I can think of is too trivial. And that's not trivial, right? It's the simplest nontrivial autoencoder that I can think of. And that's kind of a story, right? That's kind of a story.

It's not a full story, but it's a story: I have the following two images, and they don't fit in my computer because I don't have space for four numbers, I only have space for two. So I bring them down to two stored numbers and I bring them back up. And then I can go up from there; like you're saying, I can describe more complicated images with a little vector. Again, I always default to fruits and vehicles and houses and stuff. But if I can bring, let's say, the fruits to one, zero, zero, the vehicles to zero, one, zero, and the houses to zero, zero, one, then I have an autoencoder that brings images to vectors of length three and then back up.

It's not going to be super faithful, but when you come up with the simplest example, you always have a simple story, and then maybe you add colors to it and stuff. I did that with GANs, for example, making diagonals again, a GAN that constructs images that are only a diagonal. And then the story was that you have a land where people are very slanted and tall and skinny and they walk at an angle of 45 degrees. So a picture of a person is always a diagonal. So then I'm creating an image generator of faces, of people, because they're diagonal.
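For readers who want to see the toy example above as code, here is a hedged sketch: the two-by-two diagonal "images", with an encoder and decoder written by hand instead of learned, just to show the round trip from four pixels down to two numbers and back.

    # 2x2 images as flat lists of four pixels (1 = dark, 0 = light).
    forward_diagonal  = [1, 0,
                         0, 1]
    backward_diagonal = [0, 1,
                         1, 0]

    def encode(image):
        # Hand-written "encoder": squeeze the four pixels down to two latent numbers.
        return [1, 0] if image == forward_diagonal else [0, 1]

    def decode(code):
        # Hand-written "decoder": blow the two numbers back up to four pixels.
        return forward_diagonal if code == [1, 0] else backward_diagonal

    # A real autoencoder would learn encode/decode from examples; here we just
    # check that the round trip reconstructs each image exactly.
    for image in (forward_diagonal, backward_diagonal):
        assert decode(encode(image)) == image
    print("both diagonals survive the trip through the two-number latent space")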

So it's a dumb story, but I remember it, right? And then I always remember these stories. And sometimes I remember the story and I don't remember what the thing is. I remember I was in an interview once and they asked me, how does a hidden Markov model work? And so I said, okay, there's two friends.

There's, I don't know, Mary and Bob. And Bob wants to tell Mary about his mood. And his mood is based on two things: how the weather is, if it's sunny or rainy, and how his mood the previous day was. So the hidden variables are the sunny and the rainy, and the visible ones are his moods.

From the previous day. And I started building a little hidden Markov model. And then the interviewer says, no, we don't have time. Just give me the formulas. And I said, I can't give you the formulas because I don't know them.

If I tell the story, the formulas will come.
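A sketch of that little hidden Markov model, with invented probabilities: the hidden states are the weather, the observations are Bob's mood, and we simply score one weather sequence against one mood sequence. The numbers and names are made up for illustration.

    # Hidden states: the weather. Observations: Bob's mood. All probabilities invented.
    initial    = {"sunny": 0.6, "rainy": 0.4}
    transition = {                                # P(tomorrow's weather | today's weather)
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }
    emission   = {                                # P(mood | weather)
        "sunny": {"happy": 0.9, "grumpy": 0.1},
        "rainy": {"happy": 0.3, "grumpy": 0.7},
    }

    def sequence_probability(weather, moods):
        # Probability that this particular weather sequence produced the observed moods.
        prob = initial[weather[0]] * emission[weather[0]][moods[0]]
        for prev, cur, mood in zip(weather, weather[1:], moods[1:]):
            prob *= transition[prev][cur] * emission[cur][mood]
        return prob

    print(sequence_probability(["sunny", "sunny", "rainy"],
                               ["happy", "happy", "grumpy"]))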

So he had to sort of put up with me and my little story, because we remember stories. We remember them all our lives. We don't remember formulas. We're not meant for that. I mean, we evolved as humans telling each other stories, and coming up with fiction, coming up with mythology, coming up with those things is how we sort of started creating societies.

We are built for that. Nobody started out putting down formulas or series of facts. That's not how we evolved. I love storytelling. Yeah, it's super useful.

Luis Bouchard

But in stories, do you still try to explain and use technical jargon, or do you try to avoid it as much as possible and keep it simpler? Or, a third option, do you have your kind of super simple story, with the two by two image going to two numbers and then two by two again, and then, for example, go on with the same story but in a more complicated way, introducing new jargon? How do you introduce complex words and concepts without losing people? Yeah, you just went through my whole process, because I started very simple.

Luis Serrano

I have difficulty speaking in technical language. If you're ever with me in a meeting, you will notice this, because I can't speak at a high level. Whether it's math, whether it's technical stuff, whether it's business stuff, I can't. And so I'm forced to make everything understandable. Otherwise I don't understand what I'm saying.

So naturally, I've always defaulted to super simple, and I always try to avoid technical language like the plague. But people started complaining a bit, like, okay, well, sometimes treat me like an adult. And I'm like, okay, fine, I will cave in, I will bend. And so I start introducing complicated language.

But after I give the example. See, this is something that people do. They give you the hard definition and then an example. I hate that, because first they scare me, I'm terrified, I'm intimidated. And then they say, oh, this was just something very simple. And I'm still intimidated.

They still showed me a monster. And then they showed me that it was nice. I do the opposite. Like, I do an example first and then the definition. And so that way I can introduce technical jargon.

So, for example, I say I have diagonals and I want to bring them down to two numbers and I want to bring them back to diagonals. The two numbers are the latent space. But if I say, I'm going to tell you what the latent space is, that's scary, right? Yeah. And if I say, even worse, the latent space is the lower dimensional representation of some higher dimensional data, people get scared, right?

But if I say my diagonals go to one number and the antidiagonals go to another number, people understand that. And those two things are the latent space. Well, now people are not scared of the latent space, right? Now people know. They understood it before.

And when the beast comes, you have already understood it once, so it's not so hard. So I started putting the jargon in, and I think people started liking it. So now I do that, right? Like, I say, this gives me five and this gives me three. The three is smaller.

So that's the loss function. I want to lose as little as possible. So if I lose three, it's better than if I lose five. But if I say, the loss function is this formula, people get lost, right? So I do introduce technical jargon, but only after I've removed the fear and after I've shown that it's something super simple, then I introduce the technical jargon.
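To make the "three beats five" point concrete, here is one possible sketch: a loss is just a number measuring how far a reconstruction is from the original, and smaller is better. Squared error is used here as one common choice, not as the specific loss Luis has in mind.

    def squared_error_loss(original, reconstruction):
        # Sum of squared pixel differences: zero would mean a perfect reconstruction.
        return sum((o - r) ** 2 for o, r in zip(original, reconstruction))

    target  = [1, 0, 0, 1]
    guess_a = [1, 0, 1, 1]   # one pixel wrong
    guess_b = [0, 1, 1, 0]   # every pixel wrong

    print(squared_error_loss(target, guess_a))  # 1 -> the better guess (lose less)
    print(squared_error_loss(target, guess_b))  # 4 -> the worse guess (lose more)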

I think that was a sweet spot that I found after a while. If you look at all my early stuff, I think you'd be like, come on, just say neural network. Yeah, I've learned that. And so you kind of gave it away a bit. But have you often encountered people telling you that it's too simple?

Luis Bouchard

Of course, in these early videos, I guess it happened. But did it also happen in workshops or seminars or places where you are teaching live, where people end up thinking it was just too simple and useless, for example? I do get that, yeah. And I like to differentiate between when it's useful and useless. Right.

Luis Serrano

Like, for example, the people who kept telling me, please keep the explanation, but add the jargon so that I know what's happening. I appreciate that. Other people were telling me, yeah, put some technical stuff at the end, show me the formula. I hate showing formulas, but people say, show the formula. Well, now in all my videos, at the very end, there's the formula.

What we did was this formula. This is this, this is this, this is this. Right. So now I show the formula. So when there's feedback that is positive, I appreciate it so much because it has made my teaching style much better.

I do get the occasional egotistical person who just comes and says, that was too easy for me. And then I say, well, did you know what the thing was at the beginning of the lesson? No. Do you know now? Yes.

Well, then there was a delta, right? There was a delta in your knowledge. You have more knowledge now than an hour ago, so you can't come and tell me that I belittled you, because you are more knowledgeable. There was a delta. And the fact that it seemed simple means maybe you should accept that advanced stuff doesn't necessarily have to be complicated.

It can be a genius, clever idea that somebody came up with; they hid it in formulas, but it's a beautiful idea, and it doesn't need to be complicated to be great. As a matter of fact, sometimes the greatest stuff is the simplest. But we have, and I think that's thanks to the education system, an appreciation for things that look complicated, that we don't understand. We need to be intimidated. It's a bit of a Stockholm syndrome.

To appreciate a piece of knowledge, it needs to beat me. It needs to make me feel stupid and make me feel small and make me feel like I'm nothing for it to be big. But no, we can both be big. I can be big, and the concept can be big, and I can understand it at my level, at my eye level.

I don't need to be belittled by it. And I think people have that, and it's not their fault. We were raised like that. And so I like to challenge that, and I like to say, you're way more powerful than you think you are. You can actually understand most of human knowledge.

You're no different from anybody else who has understood and created that knowledge. And I like to give people that power. But yeah, sometimes there is a struggle for that, because we need to break barriers, right? We've been domesticated into certain behaviors, and I like to break that.

Luis Bouchard

Some people also think that, for example, for some research papers or new techniques, it's not really worth trying to understand how they work and what they are, because they will be replaced, or some other algorithms will soon come along and just be more performant, and we won't even use them anymore. So my question here is, how do you either ensure your explanations stay relevant, even if things change, or identify the things that are worth explaining, that won't change, or at least will be useful to understand in the future, even if they are not used anymore? Yeah, that's a great question. I mean, a lot of my material is on neural networks and Bayesian models and stuff that's not used anymore, so I can see the trend, right. Some people still want to learn them, and I think people should always at least have an idea of them, like we were talking about earlier.

Luis Serrano

But yeah, I myself try to stay relevant. I have to stay relevant myself by knowing stuff. Like if I was stuck on CNNs and RNNs, I wouldn't be able to do anything. So I had to learn large language models. And so as I learn large language models, I explain them.

So I just look at my own path of understanding. The stuff I teach is always my journey. When you see a trend where the lessons went in this direction and then that one, it's because I'm going in that direction. I'm always teaching the latest stuff I learned. So by staying relevant myself, I keep the lessons relevant.

And yeah, in regards to what you said, of maybe saying, well, I shouldn't learn that because the next thing is going to come soon. Yeah, sometimes that happens. Sometimes I skip an entire technology just because I was still learning the previous one when the next one came. That stuff happens; it just used to happen much slower, right? At some point you didn't need to learn the steam engine anymore because there was electricity, but that took a lot of time, and now it just takes a couple of years.

So we have to reinvent ourselves within the same lifespan. We now have to reinvent ourselves several times, and that will continue happening. I mean, in two years I'm not going to know whatever comes out new; I'll have to learn it from scratch. I just think it gets easier the more you do it, right? Like, it took me a lot longer to understand neural networks than it took me to understand LLMs, because by then you have the basics, right?

Hopefully that holds for the next thing, if you have good foundations. AI is always the same ideas, right? They get better, they get more advanced, we come up with more clever stuff, and there's more computing power and there's more data. But at the end of the day, the linear models, the neural networks, they always show up. If you understand them, they keep showing up, and you just have to learn the new things on the side that are happening and how to use them in a clever way. But at the end of the day, I see this technology changing every minute, but I don't see the foundations changing that much.

They may change, but I haven't seen them changing drastically in a while. I guess this is simpler for smaller projects, like YouTube videos or blogs, or just trying to explain to someone in person or to a class. But if you were to construct a new course or write a book, something that takes a lot of time to build and to make good, how would you do that in AI in 2024? For example, say you were to write a whole book about reinforcement learning with human feedback, or just the PPO algorithm, and it takes you a year or two to make it perfect and really good, but then in two years it's not even used anymore. How do you construct a book, a big project like this, if things are so unlikely to stay the same? Would you just avoid it?

Luis Bouchard

Or is there a way that you would tackle a book nonetheless? Yeah, that's a great question. That's why I'm not writing books right now.

Luis Serrano

I think there are two types of things: the ones that have a long shelf life and the ones with a very short shelf life. So if you write a book about something current, like ChatGPT, in a year it's going to be called something different. It's going to be something different. So it's hard. The book I wrote was, I think, on stuff that will last, because it's linear regression, Bayesian models and neural networks.

It's like the basics of machine learning. And so I wrote it with the idea that it's going to be a first course in machine learning for a while, because even if somebody started learning LLMs right now from scratch, they would still go for the basics of machine learning first.

I did it knowing that it was hopefully going to be something with a long shelf life. I wouldn't write a book about a particular technology, because by the time I write it, by the time I publish it, it's going to be gone. So for that, I have faster formats. I make videos.

Short videos always work better. And I think you see that trend happening a lot. For example, on Coursera, at DeepLearning.AI, or even other providers like Udacity, they are making shorter courses now. DeepLearning.AI, for example, is doing short courses on different topics, right? Like LLMs, semantic search, Stable Diffusion.

Because by the time you make a long one, the technology is going to be gone. And we still make long ones. I made a course on the math of machine learning, right? Like linear algebra. It's been around for a while, and it's still very much used.

It's not going to go away. Calculus is still going to be used for a while; it's been used for centuries. Probability too. That kind of stuff, you can make a course on.

That kind of stuff, you can write a book about. But for the new stuff, I think either a blog post or a video; it may just have a big boom and then phase out, and then you make a new one. So yeah, you've got to find the short versions, the quick versions, because technology is moving so quickly that it's hard to keep up. Do you think you're able to leverage shorts, for example, to explain things? Do you think it's possible to use a 60-second short and bring value through it? Not even a minute of content?

I can, but not as well as you. You're the one who does that very well. Your shorts are very good. I've tried shorts, and I think I'll continue trying.

And yeah, I think you force yourself to bring it down to the essential thing. When I look at my videos, they're like half an hour, but there's always a clip that I can bring down to one minute. So I've been trying that. But you're doing very well with shorts, so I would ask you that question: how do you make these amazing shorts that explain one thing and get people scrolling on their phone to learn one thing at a time?

Luis Bouchard

It's definitely hard. I think people, especially on TikTok for example, are not looking to learn. So it's definitely hard to reach more people or to bring value. But yeah, what I'm doing right now as well, just like you did, is when I create a video; for example, my next video is on how AI will impact journalism. That's what I'm currently working on.

Luis Serrano

Nice. And I think it's around 15 minutes or something like that. But I still want to make a one-minute version that contains the essentials, which is extremely difficult to do. It forces you to really understand your own video, your own information. But I think it works, and I think learning should always be fun, even if you're scrolling.

I think it'd be fun to learn some things that way. And sometimes I pick things up; I see some videos of, I don't know, Neil deGrasse Tyson explaining something for 30 seconds with a lot of energy. And then I come back and I'm like, I know one more thing. So I think it should definitely be fun. And I think we have shorter and shorter attention spans, and we can't fight against it.

Let's try to use it for our benefit and say, maybe if I scroll, I learn a bunch of things in my TikTok or Instagram scroll. That'd be fun. So I'll definitely continue trying. I haven't clicked with short videos yet, but I'll definitely keep making them. And I encourage you to continue them because you're really good.

Luis Bouchard

Yeah, I will also keep trying. But regarding the short attention span, I used to think the same thing, but at the same time we are seeing a surge in podcast popularity, which is the exact opposite. And I'm still trying to figure out an explanation for why some people are looking for very short, super interesting, dense content, MrBeast style, for example, and why other people are really tuned in to podcasts, where it's just two super relaxed people chatting about life and nothing really happens. Interesting.

Well, it's interesting to hear people talk, but there are no explosions or very exciting things happening. It's very special how both work and are growing exponentially at the same time. Yeah, I always try to equate education with fun. I think we like to learn, but if we make learning the only thing, we just get bored. And I feel like how kids learn is they draw with crayons, they play with blocks.

Luis Serrano

It's fun. They don't know they're learning, right? And somewhere in the process, it just changes to boring for most people. Ask anybody about school and they'll be like, when did that change? I don't like that it changed.

I don't like that we equate learning with boring and dull. And I think it's because of the material. The material should be fun. You should not feel like you're learning.

It should feel entertaining. It should be more interactive. Listening to a fun conversation is fun. And so if you make a podcast that sounds like you're listening to a fun conversation, and then facts pop out and you're learning automatically, that's way more fun. So I think that's what draws people to podcasts.

I, for example, have such a short attention span, such a deficit of attention, that things need to be super fun. Otherwise I can't focus. And so I listen to podcasts, but sometimes I get lost. I start a lot of books, but I don't finish that many of them. I get lost, and they need to be super engaging.

They need to get me sitting down and not wanting to do anything else.

I'm probably the extreme, but I think everybody has a bit of that.

And maybe TikTok scrolling is okay.

I get dragged into it because it's short things, right? One short thing, then a different thing, then a different thing. It just kind of keeps me guessing. And so if we can inject education into that, I see it as a, you know... I really think so. Do you think that?

Luis Bouchard

For example, I don't remember the study exactly, but I remember the results: they had two groups, where some people used Google Maps and some others didn't. And the people relying on Google Maps saw an effect in their brain; I don't remember exactly, I think it was memory, but it did have a negative impact on their capacity to memorize things. So I wonder if heavily leveraging or being dependent on AI would do the same at an even larger scale for our brain. And even going back to your hypothesis about intuition, it may even hurt our ability to have good intuitions if we are not able to properly learn and build a good foundation for posing hypotheses and having intuition.

Luis Serrano

I don't know. If we go back, I'm not sure if a study has been done, but I would love to see a study of how cars were detrimental to our legs, right? I'm sure my legs are way weaker than my ancestors' were, and my arms and everything, because I don't need them as much.

I still walk, I still use them, but I don't need them for survival; I could get away without it. So we didn't lose our legs, we enhanced them: having cars means now we can go farther, right? And I have my optimistic side and my pessimistic side, which is doomsday.

But my optimistic side says, I think it's okay to lose whatever skill Google Maps replaced for me. I get lost? Fine, I have Google Maps for that. Just like I can't walk for days like my ancestors, but I have cars, and they help me develop my brain because they give me more time to do other things besides walking.

I'm hoping that having Google Maps when I get lost is going to help me turn off my brain. And maybe turning off my brain is what I need to develop higher levels of consciousness. So I'm hopeful. Maybe I'll eat my words, but I don't mind when a skill gets lost thanks to technology. I hope it brings us to a higher state of consciousness, that it gives us time to just turn off our brain and listen to other things. We're always listening to our brain, and I don't like that.

It's a narrative. At the same time, cell phones keep us engaged with content and things that don't let us turn our brain off, so they're also detrimental. That may be bad, but I don't know. I try to be optimistic.

Luis Bouchard

And just coming back to this Google Maps study, I assume they just compared two groups: some people using Google Maps, and others who had to concentrate on the route, whereas the people using Google Maps were just following it and not doing anything else. But there's a third option, where you use Google Maps and don't concentrate on the route. Well, you concentrate enough to drive correctly, but not on where you are going. Exactly. And instead of concentrating on where you are going, you can listen to a podcast or an audiobook, where you'll learn other things you want to learn instead of learning the way to another city or something you don't really care about.

So I guess the study could have been a bit more nuanced, where we might see other brain regions improve because you are learning other things. So maybe that's how people leverage it. But yeah, I think we will leverage AI to learn the things we want to learn and, as you said, not learn things that are not really necessary or that we don't really care about. I think so. I think if we make the right decisions, then we're going to enhance ourselves.

Luis Serrano

After Google Maps, the next study is going to be on people in a self-driving car that just takes them there, right? So I think, yeah, if you can use that time to listen to a podcast and learn something, then maybe it's not so bad that you lost the skill of driving or finding your way around. I don't know. Yeah.

Luis Bouchard

Awesome. Yeah. People can find you on the Serrano Academy YouTube channel. But is there anything you'd like to promote as well? Or just who should look out for your content?

Who should learn more from you? Who should check out your content? Yeah, the YouTube channel is the best one, sort of the place to go; all my knowledge goes there, and anything I've just learned goes there. So that's the best way to know exactly as much as I know. I also recommend the book, Grokking Machine Learning, for anybody wanting to engage more. And the place where all the information is, is my page, Serrano Academy.

Luis Serrano

And there I have a bunch of stuff. I have courses on Coursera, and Udacity still has some courses there that I taught. And if you go to that page, there are blog posts; I've written different things on quantum computing, on the latest stuff, on language models. So that page is the best place to go.

So, Serrano Academy. Perfect. Thank you so much for this episode. It was very close to my heart. Recently I've been reading a book about storytelling and trying to improve at that, so this conversation was really helpful for me personally, but I think it was also interesting.

Luis Bouchard

I hope it was interesting to people listening, but yeah, it was amazing. Thanks for all your insights and for a fun discussion, as always. And yeah, thank you for taking the hour and a half to talk with me today. Thank you so much, Luis. I love talking to you, and you're a great educator, so I learned a lot from you.

Luis Serrano

And our discussions, our conversations, are always wonderful. So when you asked me to come back to the podcast, I was delighted. Thank you for having me and for what you do for education. You do great work.