Unity's AI-Powered Creativity | MLPerf Benchmark | Snowflake-Nvidia Partnership

AI Daily | 6.28.23

Welcome to AI Daily with Ethan, Farb, and Conner! In today's podcast, we bring you three fascinating stories. First up, we dive into Unity's latest announcement of AI-powered creativity. Moving on, we shift our focus to ML benchmarks with a collaboration between Inflection, CoreWeave, and Nvidia. Lastly, we delve into the partnership between Snowflake and Nvidia. Don't miss out on these exciting topics! Join Ethan, Farb, and Conner as they provide insights and analysis on the latest developments in the AI industry.

📍 Key Points

1️⃣ Unity Muse & Unity Sentis

  • Unity announces Unity Muse and Unity Sentis, bringing AI-powered creativity to their platform.

  • Unity introduces AI verified solutions and Muse Chat, aiming to stay competitive and integrate AI effectively.

  • Exciting features include running AI models on the edge and the potential for an internal app store for AI models in Unity games.

2️⃣ MLPerf Benchmarks & H100 GPUs

  • Inflection, CoreWeave, and Nvidia collaborate to showcase new MLPerf benchmarks.

  • The impressive results reveal the power of training a GPT-3-like model on 3,584 H100 GPUs in roughly 11 minutes.

  • This achievement is a significant win for all three companies, with Nvidia's H100 GPUs leading the industry and CoreWeave demonstrating their GPU capabilities. Expect more advancements in this space as partnerships continue to evolve (a rough back-of-envelope sketch of the aggregate compute follows below).
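For a sense of scale, here is a rough back-of-envelope sketch in Python. The GPU count and wall-clock time are the figures quoted in the episode; the per-GPU peak throughput and the sustained-utilization fraction are illustrative assumptions, not measured numbers from the MLPerf submission.

```python
# Back-of-envelope estimate of the aggregate compute behind the run discussed
# above. The GPU count and run time come from the episode; the per-GPU peak
# and the utilization fraction are assumptions for illustration only.

NUM_GPUS = 3_584            # H100s, as quoted in the episode
PEAK_TFLOPS_PER_GPU = 989   # assumed dense BF16 peak for an H100 SXM
UTILIZATION = 0.40          # assumed fraction of peak actually sustained
RUN_MINUTES = 11            # benchmark wall-clock time quoted in the episode

peak_eflops = NUM_GPUS * PEAK_TFLOPS_PER_GPU / 1e6        # EFLOP/s at peak
sustained_eflops = peak_eflops * UTILIZATION              # assumed sustained rate
total_flops = sustained_eflops * 1e18 * RUN_MINUTES * 60  # FLOPs over the run

print(f"Aggregate peak throughput: {peak_eflops:.2f} EFLOP/s")
print(f"Assumed sustained rate:    {sustained_eflops:.2f} EFLOP/s")
print(f"Compute in {RUN_MINUTES} minutes:  {total_flops:.2e} FLOPs")
```

Even with these rough assumptions, the cluster works through on the order of 10^21 FLOPs in those 11 minutes, which helps explain how a record measured in days can drop to minutes.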

3️⃣ Snowflake-Nvidia Partnership

  • Snowflake and Nvidia form a partnership, offering Snowpark Container Services so enterprises can run workloads directly on Nvidia GPUs within Snowflake's platform.

  • This creates a tightly integrated environment, providing enhanced data processing capabilities for enterprise customers.

  • The collaboration demonstrates the growing importance of containers and data security, with Snowflake and Nvidia catering to the needs of enterprises by delivering powerful features and services. Expect widespread adoption and utilization of this partnership's offerings.


Transcript:

Ethan: Good morning and welcome to AI Daily. I'm your host Ethan, joined by Farb and Conner. Today is June 28th, and we have three fantastic stories for you: the first one is Unity, then we're talking about some ML benchmarks, and then Snowflake and Nvidia, as well as some great stories of our own.

So the first story of today is Unity. They announced Unity Muse and Unity Sentis, really bringing AI-powered creativity to their whole stack. Alongside that, they've also dropped AI verified solutions and Muse Chat. So Unity's really trying to say: hey, how can we continue to sit at the forefront, compete with Unreal, and make sure our customers are integrating AI in the ways they might be using other tools for right now?

So, Farb, any comments on what these mean for Unity? Anything that stood out for you?

Farb: I think it's a pretty comprehensive initial stab at this. You can plug other AI models into their world. Probably the coolest part for anybody doing that is that the models are gonna run on device, so they'll be running on the edge, which means you're not gonna have to pay for all of that AI processing yourself as a provider of that new functionality to an end user. For example, whether it's powering AI-based NPC chats; the NPCs are going to become a lot less NPC than NPCs have been. So it'll be really interesting to see that stuff hit the real world.

I think it's in a closed beta or something right now, if I remember correctly. If you think about this as the beginning, well, wow, it's gonna get really crazy from here, because these are some pretty powerful tools to come out of the gate with.

Conner: Yeah, I think we've kinda seen this coming ever since people started making these demos and tests with their own GPT-4 APIs and Unity.

And Unity actually announced it quicker than I thought they would've. I thought it would've taken them another couple months at least. So between Muse Chat and Sentis being able to run the models in Unity games, very exciting.

Ethan: Yeah, it's pretty cool. They've almost got this internal app store, so when you wanna sell your own kind of models, similar to assets of the past, it looks like it'll be AI models. With Sentis, of course, you can actually put these models in your game, and with Muse, hopefully we'll see even more games created much faster than ever before. So really cool stuff out of Unity. Our second story of today touches on three companies: we have Inflection, CoreWeave, and Nvidia all kind of putting their weight together to show some new benchmarks on MLPerf.

So if you remember Inflection, we talked about them yesterday. They of course released Inflection-1, but they're also a foundation LLM provider, and then CoreWeave and Nvidia supplied all the new H100s to accomplish this. So they were pretty much able to train a GPT-3-like model on almost 3,500 H100s.

They were able to train this in, gosh, under 20 minutes; I think it was around 13 minutes. So huge benchmarks showing the power of the H100. Conner, any comments on what this really means?

Conner: Yeah, it's kind of just a win for all of them, really. It was 3,584 H100s, and it hit the GPT-3 benchmark in just 11 minutes.

So blowing everyone out of the water. I think the previous record was a couple of days, so Nvidia once again cements that the H100s are the best in the world, by far crushing AMD or anybody else. CoreWeave shows just how many GPUs they can get into their customers' hands. And Inflection shows that they know how to build models and they know what they're doing.

So very hard to find that combination of the three. It's really just a puff piece for all of them, but it makes 'em all win.

Ethan: So, yeah, the MLPerf benchmark has been the baseline for how fast we can train an LLM, and comparing last year to now, huge gains. Farb, what does this mean just for the space?

You know, this is definitely kind of a puff piece for all of them, but in terms of this benchmark, it's pretty important for LLMs. So what do you think this means for the space?

Farb: It's pretty awesome for CoreWeave, which, you know, started not too long ago. I remember reaching out to them back in the coin-mining days.

They were very early and new back then. We couldn't even get some of their GPUs up and running, their site was so new. It's awesome to see them come so far, so quickly. Obviously exciting for the other folks too. One more feather in the cap for Nvidia, and Inflection is trying to play a little bit of catch-up with some of the other folks out there in the space, and this will help them do that.

You know, expect to see a lot more of this coming pretty regularly. It'll probably get to a point where it's not as big in the news when you see things like this, but these are early days and these partnerships are still new and fresh.

Ethan: Absolutely. Huge cluster, huge gains, and we're just gonna see those times decrease even more.

So really cool out of that. Our last story of today is Snowflake and Nvidia have announced a partnership. Snowflake has pretty much said: hey, to all our big enterprise customers that have data here, we're gonna add what's called Snowpark Container Services, and we're gonna let you run workloads directly from your data on top of Nvidia GPUs within Snowflake.

So it's this really tightly, deeply integrated environment for these enterprises with all this data. Farb, what do you think of this one?

Farb: You know, it's containers all the way down, as they say. And it's crazy to have seen the Docker world explode over the past years. If you've been around for a little while, you were around before Docker and containers were the thing.

And now they're sort of a benchmark and a standard, and, you know, it's important for data security. And data security is just gonna be a bigger and bigger deal as time goes on, so I don't see this stopping anytime soon. It's awesome to see these types of features and services being provided to folks, and I'm guessing it's gonna be used pretty heavily.

Ethan: Absolutely. Conner?

Conner: Yeah. This is once again another example of incumbents like Snowflake acting faster than maybe some people would've thought, partnering with Nvidia so that you can run your own code inside of Snowflake servers on GPUs. Pretty big jump for people who use Snowflake, which of course is a lot of people.

Ethan: So yeah, absolutely. I think their main gain here is, you know, just partnering with Nvidia and getting access to these GPUs. I think the tooling they're providing might not be enough for all these enterprises; there are so many different use cases they're gonna need. But that access to the GPUs is critical here.

So big partnership for them. As always, what else are y'all seeing? Conner?

Conner: Yeah, I saw RoboCook. It was a partnership of researchers from Stanford University and SWA University in Beijing. They made RoboCook, a robot that knows how to do soft manipulation with multiple tools. And the final example they showed is that the robot can make dumplings off of just 20 minutes of interaction training data with the tools.

So very cool.

Farb: The Chinese are obsessed with food automation in the best way possible. I think Asia in general is probably a bit ahead of the game versus the rest of the world in food automation. They really love it, they have a lot of it, and it's really cool.

I think it's the future. I'm personally really into that stuff, so I love to see it. I've got a good friend in China, in the tech world, who's also sort of obsessed with it, so he and I are always talking about our food automation dreams with each other. Maybe they'll become reality one day.

Conner: I think the rest of us will catch up in time, so we'll get there.

Farb: Yeah, for sure. It'll be everywhere in the future. It's just cool to see somebody out there really blazing the trail for the rest of the world. I love it. And, you know, Canada has decided it's trying to throw its hat into the tech game and start welcoming tech workers there.

With visas, you don't even need a job to get, you know, sort of a tech visa into Canada now. You can do a digital nomad visa, which I think gives you six months there. So it's nice to see that different parts of the world are opening up to tech workers and trying to make it easier.

If you have an H-1B in the United States, you can go and work there. Canada's not messing around.

Ethan: Yeah, absolutely. It's interesting; the UK also recently did their High Potential Individual visa, and, you know, geopolitically, they're all racing for this AI talent, so pretty cool.

Yeah, I saw, you know, nothing insane here, but just really commenting on the activity of the startup and VC space. We saw five early-stage VC firms all announce new funds yesterday. So congrats to them, and more money in the ecosystem. The ecosystem continues to hype up: more startups, more money, more creation.

So exciting as always. And as always, thank y'all for tuning in to AI Daily, and we will see you again tomorrow. Peace, guys.
