
Pixar "Elemental", Microsoft Orca, & ByteDance GPU Run

6.20.23

Welcome to AI Daily! In this episode, Conner, Ethan, and Farb discuss the latest trends in AI, including Microsoft's new AI model called Orca, Pixar's innovative use of neural style transfer in their movie Elemental, and ByteDance's massive purchase of Nvidia GPUs for AI. Tune in to explore how these advancements are shaping the future of technology and entertainment.

Key Points

Microsoft Orca

  • Microsoft introduces a new AI model called Orca, which takes a different approach to learning by utilizing the actual traces of GPT-4's thinking.

  • Smaller fine-tuned models have shown limitations when trained to imitate GPT-4, resulting in worse benchmarks and a loss of complex reasoning.

  • Orca stands out by training on step-by-step explanation traces of GPT-4's thinking, achieving impressive benchmarks and offering insights into GPT-4's reasoning process (a minimal sketch of the idea follows this list).

  • More effective smaller models like Orca could accomplish similar results with less processing power and faster turnaround, but concerns arise about errors in the teacher model propagating downstream and about the continued need to work on foundational large models.
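To make the distinction concrete, here is a minimal sketch of what collecting "explanation trace" training data could look like, as opposed to plain answer imitation. It uses the OpenAI Python client; the system prompt wording, the collect_trace helper, and the example question are illustrative assumptions, not Microsoft's actual Orca pipeline.

```python
# Minimal sketch: building "explanation trace" training examples in the spirit of Orca.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the environment.
# Prompt wording and helper names are illustrative, not Microsoft's actual pipeline.
from openai import OpenAI

client = OpenAI()

# Plain imitation data keeps only the final answer; an Orca-style example also
# keeps the step-by-step explanation elicited from the teacher model.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Think through the problem step by step, "
    "explain your reasoning, then give the final answer."
)

def collect_trace(question: str, teacher: str = "gpt-4") -> dict:
    """Ask the teacher model for a step-by-step explanation plus answer,
    and package it as one fine-tuning example for a smaller student model."""
    response = client.chat.completions.create(
        model=teacher,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    explanation_and_answer = response.choices[0].message.content
    # The student is trained to reproduce the reasoning, not just the final answer.
    return {"instruction": question, "output": explanation_and_answer}

if __name__ == "__main__":
    example = collect_trace("If a train leaves at 3pm going 60 mph, how far has it gone by 5pm?")
    print(example["output"])
```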

Pixar “Elemental” & AI

  • Pixar's Elemental movie showcases the use of neural style transfer in mainstream animation, combining CGI with AI techniques to create stunning visuals (a minimal sketch of the technique follows this list).

  • By leveraging idle GPU capacity at Pixar, processing times for creating animations were drastically reduced, allowing for more efficient production.

  • The collaboration between AI and human animators resulted in a beautiful synthesis of AI-generated content and hand-drawn elements, highlighting the power of combining both approaches.

  • This breakthrough sets a new precedent for the future of movies, with AI neural networks transforming animation and pushing the boundaries of creativity in the industry.
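For readers unfamiliar with the underlying technique, below is a minimal sketch of classic neural style transfer (in the spirit of Gatys et al.) in PyTorch: an image is optimized so its content matches a rendered CGI fire frame while its texture statistics match a hand-drawn fire frame. The file names, layer indices, and loss weights are illustrative assumptions; Pixar's production system is far more elaborate and temporally coherent.

```python
# Minimal sketch of classic neural style transfer: blend a CGI-rendered fire frame
# (content) with a hand-drawn fire frame (style). File paths are hypothetical.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()

# Simple preprocessing; VGG mean/std normalization is omitted for brevity.
preprocess = transforms.Compose([transforms.Resize(256), transforms.ToTensor()])

def load(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x, layers=(1, 6, 11, 20, 29)):
    # Collect activations at relu1_1 ... relu5_1 of VGG-19.
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f):
    # Gram matrix of one feature map (batch size 1), used as the style statistic.
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content = load("cgi_fire_frame.png")   # hypothetical rendered simulation frame
style = load("hand_drawn_fire.png")    # hypothetical hand-drawn style frame
target = content.clone().requires_grad_(True)

content_feats = [f.detach() for f in features(content)]
style_grams = [gram(f).detach() for f in features(style)]
optimizer = torch.optim.Adam([target], lr=0.02)

for step in range(300):
    optimizer.zero_grad()
    target_feats = features(target)
    # Keep the structure of the CGI frame while matching the hand-drawn texture.
    content_loss = F.mse_loss(target_feats[-1], content_feats[-1])
    style_loss = sum(F.mse_loss(gram(tf), sg) for tf, sg in zip(target_feats, style_grams))
    loss = content_loss + 1e4 * style_loss
    loss.backward()
    optimizer.step()
```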

ByteDance Run on GPUs

  • ByteDance has reportedly ordered around $1 billion worth of Nvidia GPUs (roughly a hundred thousand chips) for AI applications, navigating around export bans and the ongoing chip war between China and the United States.

  • The billion-dollar purchase may only be pre-orders, but it highlights the escalating competition and demand for processing power in the global market.

  • Despite the chip shortages and geopolitical tensions, the race for processing power is intensifying, with implications for economic and soft power conflicts.

  • ByteDance's acquisition of a hundred thousand chips demonstrates their ambition to leverage AI for various applications, potentially influencing US-China relations and product development.

Episode Links

Follow us on Twitter:

Subscribe to our Substack:


Transcript

Conner: Good morning and welcome to another episode of AI Daily. I'm your host Conner, joined by Ethan and Farb, and we have another three great stories for you guys. Starting with Microsoft Orca, then Pixar's Elemental, and then news about ByteDance and their GPUs. Mm-hmm. So first up, Microsoft has dropped a new AI model called Orca.

It's a 13 billion parameter language model, but instead of learning on just the plain outputs of GPT-4, like Vicuna or Alpaca, it takes a step up by learning on the actual traces of GPT-4's thinking. So Ethan, what do you think about this? What do you know about it?

Ethan: I think it's pretty cool.

You know, we've seen a lot of these kinds of smaller, fine-tuned models. You have these big, large language models, and people are saying, hey, we can make smaller models, 13 billion parameters, 10 billion parameters, whatever it may be, and they can do a lot of what GPT-4 does. But what we've seen as we've done a lot more benchmarks on them is that these older models are trained to imitate GPT-4.

So they take all the outputs of GPT-4 and say, oh, let's just train on that. We're not gonna train on all the original data, we're just gonna train on the outputs. And at the end of the day, they see worse benchmarks. So whether it's on the LSAT or on basic questions, all the fine-tune holes start to be seen when you train that way. You lose that complex reasoning that these big models have.

Farb: So what Orca is doing is it's instead training on step-by-step explanations of how GPT-4 is thinking. They're doing a little bit more evaluation of what the actual imitation should be, and they're getting really great benchmarks out of it.

So it's pretty cool to see how we can make these fine-tuned smaller models more effective without just copying what GPT-4 is saying, but really capturing how GPT-4 is thinking.

Conner: Yeah, Farb, we've seen how useful it is to have GPT-4 and how good it is at chain-of-thought reasoning, how good it is at going through multiple steps.

What does it mean if we're getting smaller models that can do that same chain-of-thought thinking?

Farb: Well, hopefully it means that we can do more with less. We can get the same results with less processing power and faster times. You know, the way that it's taking a look at the explanation traces, it's using the GPT-4 chat API and asking for responses, but then also asking ChatGPT to explain how it got to that conclusion, and it's training on those explanations as well. They're using another technique called teacher assistance as well, which is again using ChatGPT itself to help. And, you know, we've spoken about this many times, how we're using AIs to train AIs, which is a really powerful concept.

It makes me wonder if this is too good to be true. There are no magic bullets, no pills that solve all problems. Are we gonna end up in a world where certain AIs have been anointed as the canonical AIs, and then the AIs that are built downstream of those are also considered canonical?

But if there's an error or something wrong in the original AI, that's just going to propagate through all of the models that are trained on it, and then we're going to canonize false information in a way that causes problems in the real world. We'll see. We're probably a little bit ahead of those days, but the more this happens, it's amazing, and in some ways it seems almost too good to be true.

We can't stop working on these foundational models, these large models. And that becomes even more important as other models are based on them.

Conner: Yeah, I'll talk more about model imitation at the end of the episode. But of course, another important thing to note is that because they're trained off GPT-4, all these open source models that do that, including Orca, cannot be used commercially.

Farb: So good luck stopping people.

Conner: Hmm. Next up, though, we have Pixar's Elemental. So in the new Pixar Elemental movie, they used neural style transfer, because they were using CGI to make the fires and the animation, but it was a bit too realistic. So the main director of the movie went and read research papers, was looking at the newest AI, and they worked with Disney's AI research lab. Combining that with neural style transfer, like something you would see with Stable Diffusion, they applied it to the fires, and you see the final output for Elemental, which looks very good. Farb, what have you seen about this? What have you read about it?

Farb: So there's a couple of cool things going on here.

One, they actually used the latent GPU capacity on the computers at Pixar to process this in the evening, probably when people weren't using their computers. They started using the excess GPU capacity to do a bunch of this processing, which I thought was really awesome, clever on their part. They said something went from, I don't know, five hours or five minutes, down to a few seconds.

Which is a great improvement. And the other interesting thing I think they did is, you know, a lot of this has to do with just saving time and money. You could do all of this if you really wanted to, but you're just gonna end up hand-drawing every single frame in the movie.

And that's just gonna take tons of time and cost tons of money. So what they're really asking is, how can we do this at scale, cheaply and effectively? And what they found, which I think is maybe the coolest aspect of AI and all of these new AI-related technologies, is not necessarily just the original content that the AI makes, but how it's able to synthesize previously existing things.

So what they found worked the best was when they used the AI version of the fire plus a cartoon, hand-drawn version of the fire character. And when you use AI to meld those two together, you get this new, beautiful thing that would've been difficult to do by hand.

And it's just this beautiful demonstration that things are better when the AI and humans work together than either one of them on their own. And I think that's the coolest part of all this stuff: it's really about putting an Iron Man suit on people, more so than it is about replacing people.

Conner: Yeah, I believe this is the first time this was done in mainstream animation. So good job, director Peter Sohn, for taking the lead here on that. Ethan, what do you think this means for the future of movies if we have more mainstream AI neural nets transforming how animation is done?

Ethan: Yeah, I think, you know, Pixar's always been ahead of the curve with every single new technological advancement, from the original Toy Story stacking up tons and tons of computers to make the first computer-animated films all the way back in the day.

So Pixar's always ahead of the curve, and it's cool to see them kind of breaking away from that traditional hand animation. They've already been doing some AI. You know, this story is kind of based on a movie that's already been in the works for years now, but just the potential of breaking away from the manual animator's hand is exciting. And fire is hard to make, flames are really hard to make.

Like they said, when they used their fluid simulators and kind of manually did it, it was too realistic. And if they slowed it down, it looked like plasma. So how do you make something that has emotion and looks like a person? The basic things we've seen from Stable Diffusion and style transfer, they've taken this and turned it into an animated film.

And Farb commented really well on how cool it was: they pretty much were like, hey, okay, this works, but we need a ton of GPUs to do this. And of course every animator at Pixar has GPUs in their computer, so they virtualized them, ran it at night when people were sleeping, and used half the GPUs.

And yeah, it was just a really cool article to read. And I think more film studios and even like independent animators are gonna see their workflows just speed up from all this tech.

Conner: Yeah, some people of course knock modern film with modern animation for losing a bit of the original style and flair of OG animation, because all of those were hand drawn.

And nowadays most things are computer generated. But NST, neural style transfer, being able to apply the style to the AI output and mix it together, like you said, Ethan, it's very, very exciting to think about what we're gonna see in animation over the next few years. So, absolutely. Lastly, today we have GPUs. China's ByteDance has apparently gobbled up $1 billion of Nvidia GPUs for AI this year.

It's a mix of their pre-orders for the A100s before those were banned from export, and also the H800s, which Nvidia custom-made for ByteDance, because of course, with the whole chip war between China and the United States, H100s and any big high-performance computing (HPC) chips are now banned from export to China.

So Nvidia worked around that by making the H800 chips, but still, China's been gobbling them up, now hitting a billion dollars. Farb, what do you think about this, the whole chip war between China and the US?

Farb: You know, we don't know whether they actually received a billion dollars worth of GPUs or they just ordered a billion dollars worth of GPUs.

The number is small. A billion dollars of GPUs is nothing. And you know, The Sovereign Individual predicted this in 1999: the Cold War of the future will be around processing power, and we're already seeing it happen. And Blinken went and met with China, I think last week.

Who knows what they talked about, but whether they talked about processing power and GPUs and this type of tech, if they didn't talk about it this time, they're going to. You can probably anticipate, for the rest of your life, that there are going to be wars for processing power. They may not be physical wars.

They may just be Cold War style, jockeying for who has access to what technology and where it can be built. But you think of a billion dollars as a lot of money, and in the grand scheme of processing power, it probably barely gets you through the day. So these numbers are just gonna go up and up and up, and the tension is gonna go up and up and up.

And God willing, it does not result in any real wars. But there will be economic and, you know, soft-power warring happening, like there probably already is.

Conner: Yeah. I mean, of course Nvidia's making the most GPUs, but a billion for all of ByteDance is pretty small, like you said, Farb, and there are chip shortages everywhere, whether in the US or in China or around the world.

Ethan, what do you think's gonna happen with chips in the future?

Ethan: Yeah, you know, money-wise, it is definitely small. They spent 3 billion just buying back shares last year, so dropping a billion on some pre-orders, before they possibly might not be allowed to order these chips again, is super small. But I think it is still important to remember that they bought a hundred thousand chips.

You know, it took probably about 10,000 chips to train ChatGPT. So when you look at all the data that a company like ByteDance has across TikTok and across their other applications in China, they're probably gonna pop out some really interesting models. How that's gonna affect US-China relations and how that's gonna affect their products remains to be seen.

But a hundred thousand chips is a lot. And they're probably gonna be ordering more before these restrictions kick in.

Farb: I'll bet my generative AI license to my likeness that they tried to order a lot more than a billion dollars' worth of GPUs and that's all they could get.

Ethan: I bet so too.

Conner: Yeah, I believe they're still ordering.

We're only halfway through the year, so we might be at two or 3 billion.

Farb: The guy is on his laptop hitting the refresh page, being like, when can I get some more? It's like trying to buy tickets to Burning Man or Taylor Swift. Yep.

Conner: Good old GPU tickets. Yeah. All right. Well, what have you guys been seeing?

Ethan, what have you seen lately?

Ethan: Uh, yeah, ElevenLabs, you know, a favorite of ours here, a favorite of the kind of deepfake voice synthesis, and just voice synthesis in general. They raised a $19 million Series A, also announcing some great new products. So one is Projects: they're gonna let you have this full, kind of Google Docs-style workflow for synthetic audio generation. And the second one I thought was really cool that they're pushing forward is their AI speech classifier. So of course everyone's worried, can we classify speech? Can we determine if it's fake or real? And this is another product they have their eyes on.

So just congrats to them on the Series A, and seeing more products and the progression of the audio synthesis space.

Conner: Yeah, I believe every time we see a new language model or a speech model or any voice model, we compare it to ElevenLabs, and of course they win out every time, so good for them.

Very exciting. Absolutely. What about you? What have you seen?

Farb: Oh, a little tweet about something that apparently leaked from OpenAI's ChatGPT. Who knows if it's real; I haven't been able to verify that. But it was very cool, and to me it was the first time I really saw how the idea that OpenAI is building this AI assistant really is coming to fruition.

So the idea is that you can give ChatGPT things you want it to remember about you, and even files that you want it to have in its memory, not necessarily files about you, but maybe documents about your work or your life. One example someone gave is you could give it your diary, and it'll just remember all these things so you don't have to feed it to it.

Every time you fire up a chat, it'll just kind of have all this stuff about you in its permanent memory, and now you can really start seeing how that could become a powerful, powerful assistant.

Conner: Very exciting though. Yeah. I'm excited for the future of what they do with ChatGPT, and everything coming out of OpenAI. Personally, I saw a paper called "The False Promise of Imitating Proprietary LLMs." This is what I commented on with Orca earlier.

This paper, these researchers, they dug a little bit more into Vicuna, Alpaca, all these new LLMs like Orca that are open source and very small, 7 billion parameters, 13 billion parameters. And everyone's like, oh, these open source models are now beating ChatGPT, like, oh, on these specific benchmarks, they're beating ChatGPT.

But they dug into it. And all these benchmarks are evaluated by ChatGPT, or evaluated by humans in comparison to ChatGPT. So of course as these models are imitating ChatGPT better, they score better. But when they look at the actual knowledge in these models, the actual capabilities of these models, they're not that capable.

They're essentially the same open source models. They were fine-tuned on a little bit of how to talk like ChatGPT, or maybe some very specific knowledge from ChatGPT. But of course they found out it doesn't unlock any new deep knowledge in them. It doesn't unlock much of anything except what you give it.

So not really huge advancements; these models are just what we were seeing before with a little more data, a little more open, which is nice, but nothing too huge yet. So, absolutely. Yeah. Well, thank you guys for another great episode. We will see you guys tomorrow, and thanks for tuning in to AI Daily. Thank you guys.
