End of LK-99? | MK-1 | StableCode

AI Daily | 8.08.23

Welcome back to AI Daily! In this episode, we explore three intriguing stories in the world of AI and technology. First up, we discuss the possible end of LK-99, a ferromagnetic material that sparked excitement about superconductivity. Our second story delves into MK-1, a project aimed at speeding up language-model inference. Lastly, we cover the launch of StableCode by Stability AI. This coding model, boasting a 16,000-token context window and 3 billion parameters, raises questions about its distinctiveness compared to other fine-tuned models.

Quick Points

1️⃣ End of LK-99?

  • LK-99, initially hailed as a potential superconductor, faces skepticism as evidence of superconductivity remains elusive.

  • Despite uncertainty, the excitement around LK-99 showcases the power of scientific engagement and the pursuit of breakthroughs.

  • The episode debates whether LK-99's impact on science engagement outweighs its unconfirmed superconducting potential.

2️⃣ MK-1

  • MK-1 project aims to make efficient model inference accessible to all.

  • MK-1's compression codec MKML and GPU optimization promise faster model outputs.

  • Democratizing AI capabilities through MK-1 could reshape AI deployment across various domains.

3️⃣ StableCode

  • StableCode, Stability AI's coding model, hits the scene with a 16,000-token context window and 3 billion parameters.

  • Questions arise about StableCode's uniqueness and distinct contributions compared to other fine-tuned models.

  • Stability AI's continuous innovation underscores the evolving landscape of fine-tuned AI models.

🔗 Episode Links

Connect With Us:

Follow us on Threads

Subscribe to our Substack

Follow us on Twitter


Transcript:

Ethan: Good morning. Welcome to AI Daily; we have three stories for you today. As always, our first story is possibly very sad: the end of LK-99. It seems, as tweeted by Alex Kaplan and confirmed by another paper and a few others, that LK-99 is what's called a ferromagnetic material, so it does not show signs of superconductivity or zero resistance.

So we may not be back. Farb?

Farb: Well, you know, never call anything dead as long as there's somebody who cares and has hope. And if this has inspired anyone to get into deep tech and physics and chemistry, then it's an absolute win for humanity, and we actually need way more of this stuff happening.

People get super excited about a potential development, they focus their attention on it, and they actually put in the effort to discover it. If we had this going on with every part of science, then we would already be in, you know, the meme of the future where cars are flying around and people are living forever.

This is the absolute best type of thing that humanity can be doing, and I encourage it, and I think we need way more of it. So actually, I think it's a huge win for humanity, even if LK-99 itself isn't a win.

Ethan: Huge win for everyone; we all got to read and learn too. I completely agree. Conner?

Conner: As Farb said, many people will knock pop science like this, saying it's kind of bad for the scientific industry.

That it kind of diminishes the quality of science. But everything is pop culture nowadays, as you see with a podcast like AI Daily. So people talking about science, people being interested in science, it means a lot. Maybe it's over, maybe it's not. I still hold out a little hope; there's only one paper from one university in China, but we shall see either way.

I agree. Pop science is a win for science.

Farb: You know, everything in science is going from not knowing to knowing, so to argue that we didn't know at the beginning, and therefore it was the wrong thing to do, is just the most backwards thinking humanly imaginable. All scientific progress is based on not knowing when you start and knowing something at the end, even if that's knowing that this particular thing is not the thing; that's still what we're going for.

You'd have to be an absolutely insane lunatic to think that anything about LK-99 was bad news or bad in any way; come on the show and I will debate you here. Just don't be literally nothing but a troll on Twitter. I'm not trying to give you a platform, but if you have anything intelligent to say about this, come on board and we can talk it through.

You'd have to be deluded to say otherwise. The world came together for it.

Ethan: It was honestly really cool the past few weeks to watch literally everyone get excited about it. It's something that isn't so down in the dumps. People were actually getting excited about science. Everyone got to learn more about it. Us three too.

We learned so much more about chemistry and physics, et cetera. So the next time superconductors hopefully come up again, we're gonna do the exact same thing.

Ethan: We're here for it. I love it. Well, our second story of today is back to hardcore AI, a little bit off of superconductors: we're talking about MK-1.

So MK-1 is similar to GGML of sorts, pretty much trying to improve the inference speed of these models. If you've ever run a large LLaMA instance at 70 billion parameters, for instance, you might wonder: hey, why is mine so much slower than OpenAI's and Anthropic's? How are they getting their models to output tokens so fast?

Well, MK-1 wants to bring that to everyone. So Conner, can you tell us a bit more about it?

Conner: Yeah. MK-1 is really trying to bring, as you said, the inference capabilities of companies like Google, companies like OpenAI, companies like Anthropic, to everyone in the open with open source.

Their demo is in closed beta right now, but what they say it is, and what they aspire for it to be, is very hopeful. They've designed something called MKML, which is their framework for compressing models. The first codec is called MK 600. It's just an initial compression codec, but it compresses these models by 60% while keeping basically the same model, with basically the same fidelity.

So this is a very exciting development that they have released and are working on, and I'm excited to see what else MK-1 comes up with.
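MK-1 hasn't published how MKML works internally, so as a purely illustrative sketch of the kind of technique model-compression codecs build on, here is simple 8-bit weight quantization with NumPy. This particular scheme cuts float32 weights to roughly a quarter of their size; MKML's 60% figure presumably comes from a different scheme, and all names below are our own.

```python
# Illustrative sketch only: MK-1 has not published MKML's internals.
# Simple symmetric int8 quantization of a weight matrix, the kind of
# building block model-compression codecs rely on.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Quantize float32 weights to int8 with one scale per row."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate float32 matrix from int8 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512)).astype(np.float32)
q, scale = quantize_int8(w)

# int8 weights plus per-row scales take about 25% of the original bytes.
ratio = (q.nbytes + scale.nbytes) / w.nbytes
err = np.abs(dequantize(q, scale) - w).max()
print(f"compressed to {ratio:.0%} of original, max error {err:.4f}")
```

The trade-off is a small per-weight reconstruction error, which is why keeping "basically the same fidelity," as Conner puts it, is the hard part.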

Ethan: Yeah, I think similar to the cloud wave and everything else, you have your OpenAIs and Anthropics doing these really hard engineering challenges, and everyone's saying, hey, how is this happening?

And now you're getting the democratization of all of that for anyone who wants to run these models. Farb, what does this mean to you?

Farb: I'm running a very large language model inside my head, and I'm wondering why it's slower than everybody else's. So this applies directly to me and the problems that I'm having on a daily basis.

What they're doing is straight up picks and shovels, and it's a beautiful thing. You couldn't be more picks and shovels than saying: hey, here's a model that doesn't even work on a single GPU; we're making it work on a single GPU. Here's a model that you needed this super expensive GPU to run.

We're gonna make it work on a much more available, more affordable GPU. This is a big part of how you make progress in the world. It's not just "oh, big scientific discovery, here's a paper on attention, and everything's done." This is the real work of getting AI and LLMs working everywhere and actually realizing the potential value.

So this is the sort of stuff that's going to actually move the industry forward, in the sense of not just knowledge and discovery but actual implementation, changing people's actual lives.

Conner: Every time we talk about technology like this, it's the same thing we saw with LLaMA.

Meta came out with the original LLaMA, and then GGML and the GGML team built llama.cpp, and now llama.cpp is used by everyone who uses LLaMA, including Meta. So it's really a connection between closed source and open source, between these big scientific research possibilities and, as you said, the picks and shovels that make these models actually usable for day-to-day inference.

Ethan: We're in like the Docker and Kubernetes era of AI, and I think a lot of these companies and actual applications are gonna be super valuable to people, and they just start out as dev tools. So really cool. Our last story of today is that Stability AI has launched StableCode; they've been talking about a fine-tuned code model for a while.

They've released three different versions of this coding model. I didn't get to look into this one too much. Conner, anything different about their coding model compared to maybe OpenAI's, or to some of the fine-tunings other people have done?

Conner: Honestly, not much, sadly. I kind of hoped to see more from Stability.

It does have a 16,000-token context window, and it is only 3 billion parameters, and the performance is probably pretty good, but that's kind of the exact same thing we see from Replit's 3-billion-parameter model. So Stability is kind of just repeating the same thing here. They did that with another model we talked about last time.

Maybe with StableLM 2 we'll see more from them. But right now it's kind of just what other people already have.

Ethan: Stability is really pumping out these different fine-tunings, et cetera. Farb, have you heard of anyone using these, or are you interested in some of the ones Stability has been putting out?

Farb: I haven't spoken to anybody using this, nor did I think you could actually use this one quite yet; unless I'm wrong, I didn't notice if it was available yet. Oh, it's on Hugging Face? Okay, yeah, you're right.

Actually, I did dig into that just before the show started, but I don't know anybody who's using it. This is the right and good thing for Stability to do. I tend to agree that it sometimes feels like they're kind of coming out with things a little bit after somebody else drops something pretty similar.

And it's not that different. But they shouldn't keep it quiet; they should release it. It's the good and right thing for them to do. It puts pressure on the space to keep doing this stuff. If you don't do it, Stability will do it; even if you do it, Stability will do it anyway.

So kudos to them, and thanks to them for continuing to do this work and push this stuff out, even if every single announcement doesn't seem like some world-shattering accomplishment. It's good that they're doing it, but I tend to agree: I'm always kind of looking for the "okay, where is this meaningfully different" from, say, something Replit is doing or something somebody else is doing.

And I didn't quite see that either. You know, it's open, so maybe people can make it better than some of these folks that are launching things that are not as open. So I'm gonna keep cheering Stability on to keep doing what they're doing, and I think they've made some waves in the space, and I wouldn't be surprised.

Ethan: Absolutely. It was pretty cool; I think they used a little bit more in the way of code instruction-response pairs than some other people, which, with the long context window (again, I haven't gotten to try it yet), could do a little bit better on some of these longer programming tasks.

So maybe that's the defining factor for StableCode.

Farb: They may not even be doing a good enough job of explaining why it might be more applicable and more useful. The blog post on it wasn't that long. They could have potentially shared more about it, shown some examples of people putting it to work, gotten some hackathons going around it. Maybe it is better, but we're having a tough time seeing if it's meaningfully different, and if it is, they should put some effort into getting people to understand that. It's a memetic world.

You gotta build things and make your case to the world about why this matters.

Ethan: Absolutely. I'll test it out and be back tomorrow on AI Daily to see how it is. But as always, what else are we seeing, Farb?

Farb: I saw a post from, I think, Robert Scoble, who said he spoke to a CEO who is using 30 different LLMs to provide customer support on his product.

I. Be not, I, I'm actually not surprised and it kind of makes sense because he's got to create this whole pipeline around, you know, figuring out whether this first l l m is hallucinating. So it's just LLMs all the way down to try and get something practically useful. Um, and I just thought that was, you know, we've seen this type of stuff before in our own work, uh, where it's not just enough to do one pass with an L l M, you gotta have LLMs watching LLMs and LLMs watching the LLMs and watching the LLMs.

It's like a massive bureaucracy of LLMs that you have to build. So I thought that was kind of interesting, and I think that's probably the future, to be honest.

Ethan: we need to get these models cheaper so we can run them through thousands of times.

Farb: Just like your brain, so you can run tons of models, tons of models.

Yeah. There's not gonna be the God model that rules all things. The laws of physics and our ability to build hardware are alone a bottleneck that keeps that from happening. I don't care if you have the algorithm to do the calculations; you need the hardware that can actually hold the memory and all these things.
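That "LLMs watching LLMs" bureaucracy can be sketched as a generate-then-verify pipeline. This is purely illustrative: the episode doesn't describe the CEO's actual system, so `SupportPipeline`, `draft_model`, and `policy_checker` below are hypothetical stand-ins for real model calls.

```python
# Illustrative sketch only: the names and checker logic here are
# hypothetical stand-ins, not the pipeline described in the episode.
# Pattern: one "draft" LLM answers; "checker" LLMs must all sign off
# before the answer ships, otherwise we fall back (e.g. to a human).
from dataclasses import dataclass, field
from typing import Callable, List

Draft = Callable[[str], str]          # question -> candidate answer
Checker = Callable[[str, str], bool]  # (question, answer) -> ok?

@dataclass
class SupportPipeline:
    draft: Draft
    checkers: List[Checker] = field(default_factory=list)
    fallback: str = "Let me connect you with a human agent."

    def answer(self, question: str) -> str:
        candidate = self.draft(question)
        # Ship the draft only if every checker signs off on it.
        if all(check(question, candidate) for check in self.checkers):
            return candidate
        return self.fallback

# Toy stand-ins for real model calls.
def draft_model(question: str) -> str:
    return "Refunds are available within 30 days of purchase."

def policy_checker(question: str, answer: str) -> bool:
    # A real checker would be another LLM call grounding the answer
    # against the actual policy documents.
    return "30 days" in answer

bot = SupportPipeline(draft=draft_model, checkers=[policy_checker])
print(bot.answer("What is your refund policy?"))

# With a checker that rejects, the fallback ships instead.
strict = SupportPipeline(draft=draft_model, checkers=[lambda q, a: False])
print(strict.answer("What is your refund policy?"))
```

In a real deployment each checker would itself be a model call, and a rejection might route to a second draft model rather than straight to a human, which is exactly why cheap, fast inference matters for running many passes.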

Ethan: Conner , what about you?

Conner: Yeah, I saw that Supabase released their Hugging Face integration. I kind of predicted this a while back when Firebase talked about their integration from Firebase Extensions and Firestore into PaLM and all the PaLM API models. And then yesterday Supabase released their

integration all the way from the Supabase database into any Hugging Face model, into Supabase Edge Functions. Always great stuff going on at Supabase, and I'm sure they're gonna do more here with AI, so I'm excited to see what they do next.

Ethan: That's awesome. Yeah, I saw two different things that I wanted to highlight, 'cause I love both of 'em a lot.

The first one was someone using Midjourney and Runway to make a Mortal Kombat-style video. They put celebrities into Mortal Kombat, so you can play as, like, Joe Biden, and then like Cleopatra or something, and then you could play as, like, Ronaldo. It was really cool. I remember a while back we talked about endless characters in games for anyone, and it's just so cool to see.

I liked that little video. And then the second one was this thing called 101 School, where you can pretty much create a full course with AI, and then they have chatting with it on the right, and it's actually pretty good. They had one on poker and game theory; you just type in what you wanna learn, and it generates all the course material for you, and on the right you can chat with it in real time.

Pretty simple, and you can do this with other tools of course, but I just liked the way they laid it out. So we'll link it below; check it out.

As always, thank y'all for tuning into AI Daily and we will see you again tomorrow.

Authors
AI Daily