Andrew Tanabe [00:00:05]:Hi Pietro, how are you?
Pietro Gagliano [00:00:08]:Good. How are you doing?
Andrew Tanabe [00:00:10]:Thanks for joining us. We're back here live on track four following the great practitioner debate. We're here with Pietro, who is the president and founder at Transitional Forms, and we're looking forward to hearing a little bit about how AI is impacting the world of creativity. So Pietro, I'll just give a little bit of house rules here. We've got 20 minutes for the presentation followed by 5 minutes of Q and A. I'll give you a little time check a couple of minutes before the Q and A, and when it is time for the questions, I'll come back on and help read out some questions from the audience here and have a little discussion.
Andrew Tanabe [00:00:49]:So if that sounds good, I can let you have it.
Pietro Gagliano [00:00:52]:That sounds great. Thank you, Drew.
Andrew Tanabe [00:00:54]:All right, all yours, Pietro.
Pietro Gagliano [00:00:56]:Awesome. First of all, I want to say thank you for having me today. It's a real honor to be here amongst the amazing presenters today. My name is Pietro. I'm the CCO and founder of a company called Transitional Forms. We're an entertainment innovation company in Toronto. It is my goal today to show you as many examples of agents in production, all output by one small team, as anyone else presenting today. So we'll see if I hit that goal.
Pietro Gagliano [00:01:28]:But in this presentation I'm hoping to give you a bit of an introduction on who I am, my backstory, and what led me to found Transitional Forms. And I'd like to take you through, as I said, a bunch of examples of agents in production, some behind the scenes, and a number of projects that we've done in our company. And toward the end I'm hoping we can go through the ultimate example of a multi-agent-in-production system, so I'll give you a sneak peek on that, hopefully with some time left for Q and A. So please throw your questions in the chat. I don't have any feedback other than the emojis, so hammer those emojis too, so I know you're listening. And yeah, here we go.
Pietro Gagliano [00:02:19]:So, as most stories start, once upon a time I was at a company called Secret Location, a company that I co-founded many years ago, and we sold it to Entertainment One. And so this is a picture of me outside of my office at E1. Things were good, you know, lots of awards and a great team and benefits and vacation and all of that stuff. But there was one problem. My entire life, I'd say, I was fascinated by robots, growing up reading science fiction. You know, we hear a lot about the coming technological singularity, and I was concerned that creative machine intelligence was coming, and, you know, I started ringing that bell about seven years ago. But there was a problem where I would Google it and I didn't even know what to Google. This isn't a real screenshot.
Pietro Gagliano [00:03:19]:This is what it felt like, though, where, you know, you search things like robots making art or creative machine intelligence or creative AI. Like, what do you even search? There was stuff in the research world, but there was really nothing in the zeitgeist that spoke about creativity and AI. Of course AI could drive a car, of course a robot could work in a factory. But could they make art? Never. And so that was an interesting moment. And so, despite all odds, I don't know what I was thinking, but I quit my job and started a new company called Transitional Forms. And we started as a studio lab focused on what we called at the time creative machine intelligence. Now it's called gen AI, but there was no term for it back then. And yeah, focused on creative machine intelligence.
Pietro Gagliano [00:04:14]:And as the name would suggest, Transitional Forms was also interested in the evolution of AI and culture. More specifically, given my background at Secret Location in nonlinear interactive media and the stuff that we were doing at E1, we wanted to focus Transitional Forms on the evolution of entertainment itself, in that there were new forms of media coming as a result of this new creative machine intelligence. So, as I said, I'm really looking forward to this presentation on AI agents in production. I love the theme because at Transitional Forms, we've been doing agents in production for many years and, being in entertainment, we have so many examples. The way we look at it, AI agents are autonomous systems, any sort of artificial intelligence. It doesn't have to be large language models, although we have examples of that, but autonomous systems as they relate to entertainment production. So the examples I'll be taking you through today are reinforcement learning and heuristic AI in film production, music transformers in real-time music production, reinforcement learning in video game production, and large language models in television production. And as I said before, we're working on releasing a product soon that is a multi-agent, multimodal system.
Pietro Gagliano [00:05:48]:That is the real golden goose, what I think is the ultimate example of agents in production, and all from the smartphone in your pocket. So we'll get to that at the end. So let's begin and talk a little behind the scenes. And I'll start with our first project ever at Transitional Forms, which was called Agents. And this is an example of reinforcement learning and behavioral heuristic AI in a film. So a quick shout out before we go into this project to the funding institutions that have supported us: the Canada Media Fund and Ontario Creates. They often support our work, and we're very grateful for them.
Pietro Gagliano [00:06:30]:And another fine Canadian institution, the National Film Board of Canada, was the first institution that I engaged when starting Transitional Forms. I pitched them a crazy idea that I wasn't sure was even possible, it was probably barely on the cusp of being possible, and they said, yep, let's do it. And I pulled up a chair and we started work right away. So shout out to these fine Canadian institutions for the support. So Agents started as an idea.
Pietro Gagliano [00:07:05]:The idea was whether or not we could create artificially intelligent characters that lived inside of a virtual film, give them the ability to make their own decisions, give them agency within the film world, and allow their decisions to basically dictate what happens in the film. So it would be different every single time. The audience would have a bit of agency. The characters would have agency. How might we do this? So we were really trying to pioneer a new medium at the time. We called it a dynamic film because, you know, gen AI wasn't a term, so we wanted to think of a new term for it.
Pietro Gagliano [00:07:46]:So, yeah, we pursued this dynamic film with rigor. And here's some pictures of the behind the scenes at our studio. We built this world to train agents. We were using reinforcement learning because that was hot at the time. We built this world where gravity goes straight down; that's part of the concept of the film, where they all have to cooperate to survive on the top of this planet. And so you're seeing early prototypes on figuring out how do we get them to cooperate or not cooperate? How do we train them to live on this planet?
Pietro Gagliano [00:08:22]:What do they see? What do they understand about this world? So we went through a lot of iteration, and it was very exciting. And here's a trailer of the final product. Hopefully the sound works. So there's a little gag here at the end. Yeah, just wait for it. Boom. Killing agents to make the film: "Millions of agents died to make this film."
Pietro Gagliano [00:09:31]:But it's true, you know, we were doing reinforcement learning, so it required agents to live on this planet and die a million times over to figure it out. More on reinforcement learning and how that can be used in production in a minute. But the results of Agents were very exciting. We premiered at the Venice Film Festival, then lots of film festivals and awards.
Pietro Gagliano [00:09:56]:The NFB was really supportive in that. It was critically acclaimed; the MIT Tech Review called it a future of what AI filmmaking could be. So that was a real honor. And this was like four or five years ago now. And yeah, we successfully proved the concept and created a new medium, although it was hard to launch a new medium like this because everyone thought it was a game. But maybe we can talk more about that in Q and A later. Oh, and some great research came out of it too.
Pietro Gagliano [00:10:25]:We collaborated with Google Brain and Google DeepMind on putting out a paper. Yeah, check it out if you're into that stuff. So another project that I wanted to take you through is called Project Malachi. This was more of an experiment around real-time music: whether we could use real-time music transformers, specifically Magenta by Google, to create music on the shape of a curve. Now, I know this was a huge inspiration; I don't know if anyone in the chat here knows the Kurt Vonnegut video, it's very old, it's on YouTube, on the shape of stories.
Pietro Gagliano [00:11:06]:But we were really inspired by this idea that stories could have a shape, where you have positive valence and negative valence, and, you know, over the course of a story you could have that kind of roller coaster or trajectory, the shape of the story. And so what we did was we created an interface that allowed you to transform music based on curves. So we started with the valence curve, you know, high being positive valence, positive emotion, low being negative valence. And then we added other curves for energy and complexity and depth. And it was really good. It produced music, and it was near real time, which is what we needed, knowing that, you know, in the future we're going to have these holodeck-like experiences, and how are you going to score something that's unfolding in real time? So that's why we built this thing.
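[Editor's note: for a concrete feel for the curve idea, here is a minimal sketch in Python of how user-drawn curves could be sampled over time and mapped onto music-generation parameters. The curve shapes, parameter names, and mappings are illustrative assumptions, not Project Malachi's actual Magenta integration.]

```python
# Hypothetical sketch: map "shape of story" curves onto music-generation parameters.
# Everything here (curve shapes, parameter names, ranges) is an illustrative assumption.
import math

def valence_curve(t: float) -> float:
    """User-drawn curve in [-1, 1]; here a stand-in rise-and-fall story shape."""
    return math.sin(2 * math.pi * t)

def energy_curve(t: float) -> float:
    """Second curve in [0, 1] controlling intensity."""
    return 0.5 + 0.5 * math.cos(2 * math.pi * t)

def generation_params(t: float) -> dict:
    v, e = valence_curve(t), energy_curve(t)
    return {
        "mode": "major" if v >= 0 else "minor",  # positive valence -> brighter harmony
        "tempo_bpm": int(80 + 60 * e),           # more energy -> faster tempo
        "sampling_temperature": 0.7 + 0.6 * e,   # more energy -> more surprising notes
        "note_density": 2 + int(6 * e),          # notes per beat rises with energy
    }

# Walk through the "story" from start (t=0) to end (t=1), one bar at a time.
for bar in range(8):
    t = bar / 8
    print(bar, generation_params(t))
```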
Pietro Gagliano [00:11:59]:Although it is a concept that we shelved for now, and we'll likely revisit it later. But happy to answer any questions if you have any on this. Oh, and research came out of this too; we had collaborated with Google Brain again and some folks at Google Magenta. Let's see. So the next project I wanted to take you through is called Little Learning Machines. And this is an example of reinforcement learning in a video game. And it is based on the idea that came out of Agents, where we were training, you know, agents to live inside of our film.
Pietro Gagliano [00:12:42]:And the process was very, very complex. So, you know, we had our engineers working on it, we had advice from people at Google, and it was so, so hard just to even set up the training environment. And so we thought, hey, you know, this is such a powerful technology, and it's very simple: it's based on positive and negative rewards. What if we could build a game that allowed people to train neural networks right within the game itself? So that was our goal in this, and we thought this would be very important for the world right now.
Pietro Gagliano [00:13:18]:It would foster a better understanding of machine intelligence. People might be able to empathize with how machines learn, especially in a world where we're surrounded by neural networks. So we pursued building this game. Here's some behind-the-scenes screenshots of our progress: a very early prototype in the top left corner, then the top right was the next version, and bottom left and bottom right were the following versions. It was exciting to see that we could actually install a game, get reinforcement learning going, and connect with these creatures based on rewards. Oh, and a note on that: anybody listening who knows about reinforcement learning knows it's all about positive and negative rewards. So it's like training a puppy.
Pietro Gagliano [00:14:05]:You know, don't punish your puppy, but negative rewards for things you don't want them to do, and positive rewards for things you do want them to do. And for the agents that live in this world, we use the analogy of love and fear. So in the game, users can use love or fear and say, I love petting the dog, or I love cutting down trees, or I hate falling off the edge, I hate, you know, getting hit by fire, or something like that. So that's what we based the game on. And here's a little trailer for you.
Video Narrator [00:14:43]:Welcome to Little Learning Machines, a real-time simulation game where you and AI robots solve whimsical challenges together in this enchanting world. Your choices shape what's possible. Train your AI robots to master new skills. Watch their personalities come to life as you travel across diverse island worlds. Ready to team up with your own real AI robots? Little Learning Machines, available now.
Pietro Gagliano [00:15:14]:So as the excitable voiceover said, it is available now on Steam, so go check it out if you'd like. And the results were pretty fantastic on this one. We did achieve what we wanted to: anyone, like literally anyone who can play a video game, can now train agents, can now train a neural network right within the game itself. At the time, I think my son was 6 years old, 6 or 7, and he trained his first neural network, so I'm pretty proud of that accomplishment through this game. And yeah, it's an example of how we're lowering the barrier to access these incredible, world-changing technologies. And we actually got people to care about these little agents that lived in the world.
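[Editor's note: here is a minimal sketch in Python of the love/fear reward shaping Pietro described a moment ago, where player-tagged events become positive or negative reinforcement-learning rewards. The event names, weights, and wiring are illustrative assumptions, not the game's actual implementation.]

```python
# Hypothetical sketch of love/fear reward shaping, in the spirit described in the talk.
# The player tags world events with "love" (positive) or "fear" (negative).
player_preferences = {
    "pet_dog": "love",
    "chop_tree": "love",
    "fall_off_edge": "fear",
    "touch_fire": "fear",
}

def shaped_reward(events_this_step):
    """Turn the events an agent triggered this step into a scalar RL reward."""
    reward = 0.0
    for event in events_this_step:
        preference = player_preferences.get(event)
        if preference == "love":
            reward += 1.0   # positive reward: reinforce the behavior
        elif preference == "fear":
            reward -= 1.0   # negative reward: discourage the behavior
    return reward

# Example: the agent petted the dog but also wandered into fire on the same step.
print(shaped_reward(["pet_dog", "touch_fire"]))  # 0.0
```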
Pietro Gagliano [00:16:03]:Oh, and research came out of it too. We can't forget the research paper that came out of it, and I'm very proud of that one. So the next project I wanted to take you through is called Robots Make TV. And, you know, everybody's hot on language models, and that's great, but this is an example of language models in television production. At the time that we started this project, GPT-2 was the technology that we had access to. It had to run locally, and it was very clunky and hilariously hallucinatory, but we loved it. And so what we did was we tied it to the Unity game engine, created some rules for the world, and thought, hey, this could infinitely generate content for us based on GPT-2.
Pietro Gagliano [00:16:54]:So we iterated and iterated and iterated on this and kept on the mission of proving the generative and interactive potential of real-time language models. Now, the real-time aspect was probably the hardest part, but this is an early prototype that we built called Director's Chair, where users can go in and generate potential next lines of the dialogue, seed initial parts of the dialogue, and let the language model take over from there. We built it all modular so that you could swap environments and swap characters, and we even had real-time effects like throwing food or setting fires or changing gravity. So this was a really exciting project, and it started coming together right away from our prototype. We started adding more environments and more characters, more backstories, you know, show templates and stuff like that. We ended up streaming for 14 days straight; I believe that was our record, just streaming endless new content. So we were very early days in this new medium, and I'm super proud of it.
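[Editor's note: to make the "generate candidate next lines" idea concrete, here is a minimal sketch using the Hugging Face transformers GPT-2 pipeline to propose several continuations of a seeded dialogue. The prompt format, character names, and sampling settings are illustrative assumptions, not the studio's actual Director's Chair or Unity integration.]

```python
# Hypothetical sketch of the "generate candidate next lines" idea behind Director's Chair.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Seed the scene with a few lines of dialogue, then ask for candidate continuations.
seed = (
    "ROBOT 1: These potatoes are delicious.\n"
    "ROBOT 2: It's been a while since I had homemade mashed potatoes.\n"
    "ROBOT 1:"
)

candidates = generator(
    seed,
    max_new_tokens=25,        # keep each candidate line short
    num_return_sequences=3,   # offer the user several next lines to pick from
    do_sample=True,
    temperature=0.9,
    pad_token_id=generator.tokenizer.eos_token_id,
)

for i, c in enumerate(candidates, 1):
    # Strip the seed so only the newly generated continuation is shown.
    print(f"Candidate {i}: {c['generated_text'][len(seed):].strip()}")
```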
Pietro Gagliano [00:18:05]:Here is an example of one of the pieces of content that project put out. And I will warn you, I think there's some swearing in this, but here we go.
AI Video Character 1 [00:18:27]:Babe, these potatoes are fucking delicious.
AI Video Character 2 [00:18:29]:They are. It's been a while since I had some homemade mashed potatoes.
AI Video Character 1 [00:18:33]:How would make life better?
AI Video Character 2 [00:18:35]:What?
AI Video Character 1 [00:18:36]:This weird love potion I found.
Pietro Gagliano [00:18:39]:You can, you can... the audience could make us fall madly...
AI Video Character 1 [00:18:42]:...in love all over again and get us out of this rut.
AI Video Character 2 [00:18:44]:Love potion. That sounds like a terrible idea. Oh, that would make things even worse. I'd rather have it be the same kind of relationship we had before than be in a creepy relationship based on a love spell. I don't want to be a puppet in someone else's love story.
AI Video Character 1 [00:19:02]:You're right. We don't need a love potion. We just need to talk to each other.
Pietro Gagliano [00:19:07]:I just noticed I've got only a couple of minutes left, so I'm going to get to showing you guys... I won't play the video, but we've built a new product called Simia that is based on this, based on the world of creativity changing through the power of generative AI. And so we decided we're going to supercharge content creators now with our first product.
Pietro Gagliano [00:19:39]:Let me just skip ahead here. It's called Simia, and you can generate and remix endless amounts of content and share it with your friends. A lot of fart jokes if you want, but it's a really fun system. I won't play the teaser video, but I will end. Please reach out to me so I can... "Hey there, thanks for checking out this pitch." There's the teaser.
Pietro Gagliano [00:20:03]:Please reach out to me so I can share the teaser. But here is a QR code to sign up for early access to Simia. It's a really exciting new platform, a multimodal, multi-agent framework where you can generate content from the power of the smartphone in your pocket. So I'm happy to go back through if there are any questions on any of the stuff that we talked about. But yeah, thank you for the opportunity to present.
Andrew Tanabe [00:20:37]:Thanks so much, Pietro. What a great presentation, and super fun to see all the different characters and early experiments going on, and how it's changed as the technology is changing. Really, really cool of you to share. Thank you so much.
Pietro Gagliano [00:20:51]:Thank you very much.
Andrew Tanabe [00:20:54]:So I have a couple of questions here from the audience and just wanted to bring those up right now. One thing that we're seeing here is this contrast that a lot more people are talking about these days between fast AI versus slow AI, this difference in the persistence of these systems. Right now ChatGPT, or really any assistant, is kind of the definition of quick: it's just like ping pong, back and forth. And then on the other side are, you know, folks who are putting agents into learning Minecraft and letting them run for, you know, three months and seeing how things go. I thought your early experiments there, with building the worlds and having all the characters running around, were really kind of interesting in that more persistent way. Just curious, like, you know, if you're seeing that or how you think about it from a narrative perspective, because it's a really different way of thinking about AI.
Pietro Gagliano [00:22:00]:Sure, yeah. So one of the things that we were excited to do in terms of multi-agent systems was to give characters their own backstory; in Simia and in Robots Make TV, characters have their own backstory and they act accordingly. And we thought, okay, what if each character was their own agent? And so we started experimenting with these characters going into the world and acting on their backstory. And it worked; it was very interesting. And certainly that is a vision of the future of these, like, long-form narratives. But what happened was, and it was very interesting, the storytelling started to really drop off. The agents wouldn't do anything that was out of character, or wouldn't do anything that showed that they were flawed or that they were frustrated or that they were confused.
Pietro Gagliano [00:22:53]:And so you got these flatlined stories where everyone is just cooperating according to who they are. So what we're finding now, in our new multi-agent system, is that having a director agent that is controlling the story and saying, conflict here, now introduce this tension between characters, really helps the storytelling. Whereas we thought that the multi-agent system, with characters being who they are, would be good enough for story.
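[Editor's note: a minimal sketch of the director-plus-character-agents pattern Pietro describes, where a director agent injects conflict and each character agent responds from its backstory. The prompts, character backstories, and the call_llm() placeholder are illustrative assumptions, not the actual Simia architecture.]

```python
# Hypothetical sketch of a director agent orchestrating character agents.

def call_llm(prompt: str) -> str:
    """Placeholder for whatever text-generation model you use (e.g., a local GPT-2
    pipeline or a hosted LLM). Returns canned text here so the sketch runs as-is."""
    return "[generated line for: " + prompt[:40] + "...]"

characters = {
    "Robot 1": "An optimistic robot who loves cooking and hates being alone.",
    "Robot 2": "A skeptical robot who is tired of their shared routine.",
}

def director_note(scene_so_far: str) -> str:
    # The director agent reads the scene and injects tension or conflict,
    # instead of letting the character agents stay comfortably in character.
    return call_llm(
        "You are the director. Read the scene and give one short stage note "
        "that introduces conflict or tension between the characters.\n\n" + scene_so_far
    )

def character_line(name: str, scene_so_far: str, note: str) -> str:
    # Each character agent speaks from its backstory, but must honor the director's note.
    return call_llm(
        f"You are {name}. Backstory: {characters[name]}\n"
        f"Director's note: {note}\n"
        f"Scene so far:\n{scene_so_far}\n"
        f"{name}:"
    )

scene = "Robot 1: These potatoes are delicious.\n"
for turn in range(2):
    note = director_note(scene)
    speaker = "Robot 2" if turn % 2 == 0 else "Robot 1"
    scene += f"{speaker}: {character_line(speaker, scene, note)}\n"
print(scene)
```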
Andrew Tanabe [00:23:27]:Yeah, you almost giggle because it's like, oh, these robots are acting like robots. What's wrong with them? You know?
Pietro Gagliano [00:23:34]:Yeah, exactly. Yeah.
Andrew Tanabe [00:23:35]:Where's the humanity? Right?
Pietro Gagliano [00:23:37]:Yeah. Maybe that's a little insight into how it'll be with androids walking around; they'll just stick to what they're doing, hopefully.
Andrew Tanabe [00:23:45]:Yeah, right. You know, I think another theme that we're seeing here is really around, you know, a lot of talks today have been more on the business side: startups looking for solutions to healthcare issues, or, you know, data analytics solutions and things like this. One of the reasons that we're hosting this conference this year from the museum is really to bring in the world of art and the world of narrative, and make sure that as this new technology is growing, there's a creative angle, a non-business angle there, because there's so much to learn when we're in a more technical setting or a business setting, thinking about, you know, empathizing with consumers, or thinking about just how people really live their lives and what's important to them. Curious, you know, again, as you're developing these characters, do you struggle with making them human? You were talking about having a director there. Are there other things that you've noticed in terms of bringing out that humanity, or is it just a different sort of world?
Pietro Gagliano [00:24:56]:Well, you know, it's in our DNA at Transitional Forms to empathize with the machine, to create properties and pieces of content that allow humans to understand what it means to be a machine, and machines what it means to be human. So, yeah, I think that the artfulness and the real-time capabilities of it really help you to engage with the content, because it's literally engaging. Even in the days of Agents, the researchers from Google that we were working with were like, wow, you know, I look at charts and graphs all day to see if these agents are being optimized for learning, and in this film I can actually see them take action, I can be inside of the world with them, especially in VR. So, yeah, I think that the artistic side, the creativity, if you will, is a really powerful angle for not just human empathy, but machine empathy as well. I think that's a really important thing that we're facing now.
Andrew Tanabe [00:26:06]:Cool, cool. Great. Well, Pietro, we're out of time now, but thank you so much, and good luck with the game and with all of your future projects. Thank you.
Pietro Gagliano [00:26:15]:Great. Thank you so much.