
Varda LK99 | Air Force AI Drone Flight | Alibaba Qwen

AI Daily | 8.04.23

Welcome to another episode of AI Daily! In this episode, our hosts Farb, Ethan, and Conner cover three big stories to close out your week. First up, LA-based Varda shares exciting news on an LK-99 replication, showing levitation, apparently the Meissner effect, in a high-quality video. Next, Valkyrie, the Air Force's AI-flown combat drone, wins its simulated air combat challenges, a step toward unmanned flight. Finally, Alibaba unveils a remarkable 7 billion parameter model that surpasses LLaMA-2 7B and potentially 13B.

Quick Points

1️⃣ Varda LK99

  • Varda in LA achieves levitation in LK-99 replication, hinting at possible superconductivity.

  • Promising breakthrough material, but further research required for practical applications.

  • Russian and Chinese experiments add to the excitement surrounding this groundbreaking substance.

2️⃣ Air Force AI Drone Flight

  • Valkyrie, the Air Force's AI-driven drone, conquers unmanned flight challenges in simulations.

  • AI integration vital for military competitiveness and cost efficiency.

  • Advancements in AI-controlled drones signal an exciting future for military applications.

3️⃣ Alibaba Qwen

  • Alibaba introduces a powerful 7 billion parameter model, outperforming LLaMA-2 7B and possibly 13B.

  • Ideal for math, coding, and plugin-based tool use, making smaller models more practical to deploy.

  • A multifaceted model tailored to the Chinese language, but it shows potential for other languages and applications.

🔗 Episode Links

  • Varda LK99

  • Air Force AI Drone Flight

  • Alibaba Qwen

  • Model to Translate ada-002

  • CoreWeave - Collateralization of the GPU

Connect With Us:

Follow us on Threads

Subscribe to our Substack

Follow us on Twitter:


Transcript:

Farb: Hello and welcome to another episode of AI Daily. I'm Farb. I'm joined by Ethan and Conner, and we've got stories all over the place today, from rocks to the sky to good old-fashioned GPUs. Let's get started with the latest LK-99 news from our friends at Varda, who are based here in LA. Some super exciting stuff.

Con, why don't you give us the lowdown?

Conner: Ricardo might be based in Miami, which is the worrying part of that. But no, they're here. They're okay. I dunno, who knows?

Farb: Their founders just like to go to Miami a lot and talk about how awesome Miami is. But they did tweet about how LA is where you come if you wanna work on atoms, or, as I like to say, LA is the land of memetics and kinetics.

Conner: Hm. But yeah, Andrew McCalip of Varda, they replicated it. They published it at 5:00 AM this morning; they pulled an all-nighter over there at the Varda HQ, and they're showing levitation in a very high-quality video of the Meissner effect. No test of superconductivity yet, but as he said, a few papers have said that levitation goes hand in hand with superconductivity.

So maybe the material is just in too small a quantity right now, or maybe the quality isn't good enough. But levitation is happening, so superconductivity may be happening.

Farb: It's giving levitation, as we say. Ethan, how do you feel about this? Is it giving you the warm tingles, or are you not buying it yet?

Ethan: No, I think American manufacturing is back. You know, I saw a really funny tweet. It was like: you know you're living in the future when the engineer from the space manufacturing startup replicates the room temperature semiconductor. So I think it's super cool. They only got a few micrograms right now, but the fact that this was done in 10 days, that they were able to replicate what looks like levitation, what looks like potentially the Meissner effect, at a few micrograms, within 10 days, at a lab right here in LA.

So super, super cool. Excited for Varda, excited for the engineer there, and the entire team.

Farb: I think you meant superconductor, but I don't think anybody involved in any of this has successfully not said semiconductor at least once. I know I have, probably on the show, to be honest.

Uh, yeah, this is exciting stuff. Let's see where it goes from here. We're so back. It's so over. We're so back. But we'll always be here to explain it one way or the other to you. Very cool. Let's jump into our next story. So we go from the lab... oh, I'll just say one more thing.

We got a couple of comments from Russia yesterday on our story, which was pretty awesome. Somebody was reacting to my point about how the average Russian kitchen is like a super lab, and the guy was basically like, yeah, that's right, my dad used to melt electronics in the kitchen to get the gold out of them.

They're not messing around over there, that's for sure. Alright, moving on to our next story. We're gonna move up into the sky and talk about a little bit of AI in jets. What's this story about, Ethan? Tell us about it.

Ethan: Yeah, so for a while now, AI controlling drones has actually been an extremely hard problem.

So for Air Force drones, Navy drones, et cetera, you know, the end state is not having a human pilot for everything, especially as other countries wrap more AI capabilities into their drones. How do we stay competitive? So Valkyrie, one of the Air Force's air combat drones, actually kind of solved this problem with something similar to a foundation model.

They took a ton of images, a ton of videos of what these drones see, and were able to build this foundation model that lets it win these challenges in simulations, lets it actually fly on its own, and kind of removes the need for a human pilot. Um, so really cool stuff around that.

You know, I think the entire DoD and warfare space is rapidly trying to integrate AI in ways they see as efficient. We've seen a lot of excitement around LLMs for the DoD, but I think the real weight here is these types of drone controls, whether that's submarine drones or air drones. These are the types of things that'll actually keep the US military and the Five Eyes competitive.

So really cool stuff out of them, and who knows what they have that they're not releasing.

Farb: Yeah, I mean, they make a couple of great points. You know, running one of these jets for an hour costs tens of thousands of dollars, so running it in simulations is obviously a fraction of that cost.

And then I also love the name of one of them: it's called Skyborg, instead of Cyborg. Pretty hilarious name. Conner, what did you take away from this?

Conner: Skyborg sounds a little bit similar to Skynet, in my opinion, but I'll get over it.

Farb: Okay. Skynet's coming.

Conner: Yeah, no, their stated goals are for it to be able to carry out attacks in the air or on the ground on its own, which is a very lofty goal. So of course they still have a human in the loop; they'll always have someone either flying next to it, as they did for all these demos, or monitoring it remotely.

I believe they want to have a thousand of them out in the field, which works out to two for each of the top 500 pilots and jets, basically two drone co-pilots flying next to each one. So, an exciting future for AI in the military.

Farb: You know, you can't let other people beat you to this.

That said, hopefully we don't need so many planes attacking so many places that there aren't enough humans to handle the work. Or, possibly, at some point they'll just be much better than humans, but that seems like a pretty lofty goal; the folks that are trained on flying these things are pretty spectacular.

It'll be a little while, I think, before an AI is doing a better job. That said, an AI is not going to pass out in the plane from pulling G-forces and things like that.

Ethan: Absolutely, and I think we'll have to get to some of this. With something like a hypersonic missile and some of these other capabilities, a human just can't have the reaction time needed for defense. So having AI in the loop for these, despite the safety risks and all that, I think is extremely important. So it's cool they realize that.

Farb: Yeah, you can't give up AI dominance in this space, for sure. Alright, moving into our last story: the fine folks at Alibaba have released a high-performing smaller model, a 7 billion parameter model.

It surpasses LLaMA-2 7B and potentially 13B as well, especially on math and coding. They've made it commercially available, I think for products with up to a hundred million users. Some pretty cool results here. Conner, what are you getting from this?

Conner: Yeah, people thought for a while that LLaMA-2 was probably hitting the limits of what we can do with a 7 billion or even 13 billion parameter model.

But now Alibaba's new model at 7 billion is possibly beating even the 13 billion, which is pretty exciting for how much further we can push these smaller models. It is Chinese-focused, so of course it's mainly centered around the Chinese language and won't be as good for English or those types of use cases. But we keep seeing research like this come out of China, as always.

Farb: I thought I read that it was pretty well suited for other languages. Maybe I misread that.

Conner: It is, it's pretty multifaceted, but mostly Chinese-focused, of course. All their demos and examples, even a lot of their docs and language, are in Chinese.
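For anyone who wants to poke at the model themselves, here's a minimal sketch of trying it through Hugging Face transformers. It assumes the Qwen/Qwen-7B-Chat checkpoint and its custom chat() helper loaded via trust_remote_code; check the model card for current usage.

```python
# Minimal sketch: chatting with Qwen-7B-Chat via transformers.
# Assumes the Qwen/Qwen-7B-Chat checkpoint and its custom chat() helper
# (pulled in with trust_remote_code); see the model card for details.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, device_map="auto", trust_remote_code=True
).eval()

# chat() threads the conversation history for you.
response, history = model.chat(
    tokenizer, "Write a Python function that reverses a string.", history=None
)
print(response)
```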

Farb: You know, we're seeing this sort of expanding and contracting, from the broad LLMs back to more fine-tuned LLMs: hey, can we make this very good at math and code, versus trying to be good at everything under the sun.

I think this pattern of expanding and contracting is probably gonna continue forever. Ethan, what's your read?

Ethan: Uh, yeah, the coolest thing I saw from this was that it supports plugins. They actually trained it with a lot of this plugin-alignment data. So let's say you're building an agent and you want a small model that's maybe more efficient than the big models: you want it to call APIs, you want it to call databases, you want it to work with some code.

I think these are the types of models you want for that, and this one seems to be working a lot better than LLaMA. So like I said, instead of calling GPT-4 for some of these use cases, or setting up a bunch of A100s for a larger model, we're getting these smaller models that are useful in these more defined contexts.

Um, so having a 7 billion parameter model that can call some tools, hit up your database, and at least manage that layer of your stack is really useful. Again, it's another engineering-pipeline kind of thing that we're seeing LLMs go through.
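A rough sketch of the agent pattern Ethan is describing: a small local model decides which tool to call, we run the tool, and we feed the result back for a final answer. The JSON convention, tool registry, and generate() hook here are illustrative stand-ins of ours, not Qwen's actual plugin protocol.

```python
# Illustrative tool-calling loop for a small local model.
# The tool registry and JSON format are ours, not Qwen's plugin spec.
import json

TOOLS = {
    "query_db": lambda sql: [("widget", 42)],   # stand-in for a real DB call
    "call_api": lambda url: {"status": "ok"},   # stand-in for a real HTTP call
}

def run_agent(user_msg, generate):
    """`generate` is any completion function, e.g. a local 7B model."""
    prompt = (
        "You may call a tool by replying with JSON: "
        '{"tool": <name>, "arg": <argument>}.\n'
        f"Tools: {list(TOOLS)}\nUser: {user_msg}\nAssistant:"
    )
    reply = generate(prompt)
    try:
        call = json.loads(reply)
        result = TOOLS[call["tool"]](call["arg"])
        # Second pass: let the model turn the tool result into an answer.
        return generate(prompt + reply + f"\nTool result: {result}\nAssistant:")
    except (json.JSONDecodeError, KeyError, TypeError):
        return reply  # the model answered directly; no tool needed

def fake_model(prompt):
    """Canned 'model' so the sketch runs without a GPU."""
    if "Tool result" in prompt:
        return "There are 42 widgets on order."
    return '{"tool": "query_db", "arg": "SELECT * FROM orders"}'

print(run_agent("How many widgets are on order?", fake_model))
```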

Conner: Also, of course, again to note: every big company is gonna train their own LLM.

Alibaba's not gonna be calling OpenAI; Alibaba's making their own model. Even if it's the exact same, even if it's only slightly better on some things and slightly worse on others, everyone's gonna make their own model, as we're seeing. Yep.

Farb: Absolutely. Alright, what are we all seeing out there? Conner?

Conner: Yeah, I saw someone train kind of modified ada-002 embeddings so that you can mix and match them, so you can find the average of many statements. It's a bit of a weird token-embedding thing; I think the example, which we'll put on the side, shows it better. But you kind of add and subtract different sentences: take "he is the king," subtract "he is a man," and add "she is a woman."

It'll give you "she is the queen." That's a very simple example, but you can imagine that spread over maybe a million reviews: you can average all the embeddings and get the average statement across all of those. So pretty interesting results; something we could possibly do.
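A minimal sketch of the embedding arithmetic Conner describes, assuming the 2023-era OpenAI Python SDK and text-embedding-ada-002 (any embedding model would do); the embed() helper and the example strings are ours.

```python
# Sketch: adding, subtracting, and averaging sentence embeddings.
# Assumes the 2023-era openai SDK and text-embedding-ada-002.
import numpy as np
import openai

def embed(texts):
    """Return unit-normalized embeddings for a list of strings."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    vecs = np.array([d["embedding"] for d in resp["data"]])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

king, man, woman, queen = embed(
    ["he is the king", "he is a man", "she is a woman", "she is the queen"]
)

# "he is the king" - "he is a man" + "she is a woman" ~ "she is the queen"
composed = king - man + woman
composed /= np.linalg.norm(composed)
print("cosine similarity to 'she is the queen':", float(composed @ queen))

# The same trick at scale: average a pile of review embeddings to get a
# vector for the "average statement" across all of them.
reviews = ["great battery life", "screen cracked in a week", "decent value"]
mean_review = embed(reviews).mean(axis=0)
```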

Farb: Interesting. Very cool. What about you, Ethan?

Ethan: Uh, the collateralization of the GPU. I saw that CoreWeave raised almost two and a half billion dollars of debt, pretty much collateralized by the GPUs they currently have, to go buy a ton more GPUs. That speaks to two things for me. First, venture dollars directly buying GPUs at scale doesn't make a ton of sense, so they're raising these debt facilities instead.

And second, these huge PE firms and growth funds are seeing real data, real demand for GPUs, and saying: we'll throw debt behind that. Debt's not the easiest thing to come by, especially at this size, for anything new, and this came together pretty rapidly. We're seeing it applied specifically to Nvidia chips, so really fascinating.

Conner: It's nice to see a housing market of GPUs, where GPUs are collateralizing more GPUs all the way into the future, and then when GPUs become commonplace, it all tumbles down.

Farb: Yeah, I saw our favorite person to reference on the pod, Ethan Mollick, posted something, and I gave him a bit of a hard time about it yesterday. So, I apologize, Ethan, if I was a little bit harsh. He was talking about the forever debate of trusting experts versus doing your own investigation into something, and he was kind of implying that you've gotta trust experts.

You can't learn everything yourself; there's too much to learn. Which might be the case. But my point to him was that, coming from somebody like him, who people respect and look up to, especially with regards to science and being a knowledgeable person, saying something like that is just gonna get more people to toss their hands up and say: okay, well, I'm just not gonna try and learn anything,

I'll just trust experts. And I don't think it's an either-or thing. You can give some weight to people you trust who have more knowledge than you do, and you can still do some learning on your own. Making it all-or-none in one direction, I think, is the wrong framing. I just encourage everybody to find smart people and learn from them.

Trust them when you have to, but especially when it's life and death, you've gotta make your own decision about what you're trusting. It's not like, because somebody said to listen to experts, you're not allowed to ask questions anymore and you have to give up trying to learn. The whole point of learning is learning things you don't know.

So to say you shouldn't learn about something because you don't know it just defeats the whole point of learning anything. You don't have to become an expert in everything, and you also don't have to hand all of your decision-making over to experts. There is a wonderful, integrated middle path you can tread.

It'll take a little bit of work on your part, but my hunch is you'll be rewarded for it. So that's my little diatribe here at the end.

Conner: I think that was beautiful. Very well said. Yeah, I think it's about finding experts who know their specific domains and fields and, as you said, weighing what they think against your own thoughts, and getting multiple opinions from multiple experts.

Ethan: So just don't be complacent when you hear the word expert.

Farb: Yeah. Yes, absolutely. Don't be complacent. I think Truman warned us about this. If it wasn't Truman, it was The Simpsons. Those two seem to have covered every dire warning about the...

Conner: Did they have a superconductor episode of The Simpsons? Probably.

Farb: I'm sure you can ask ChatGPT, man. That's not my job.

Conner: Will do.

Farb: Ask it about me too. Thanks for joining us, everyone. Hope you have a great day. We'll see you on the next episode of AI Daily. See you guys. Thanks, guys.
