Bard Updates, ARTIC3D Research Paper, & DeepMind's AlphaDev Breakthrough

AI Daily | 6.8.23

Welcome to an exciting episode of AI Daily, where we discuss three captivating stories: Bard updates, the ARTIC3D research paper, and DeepMind's AlphaDev discovery. We delve into the remarkable advancements in Bard, which introduces implicit code execution, providing accurate results and enhancing user experience. Then, we explore ARTIC3D, a research paper on generating high-quality 3D models from noisy web collections. Finally, we cover DeepMind's discovery of faster sorting algorithms using AlphaDev, highlighting the inhuman nature of the algorithm's evolution and the potential for AI to optimize existing processes. Tune in for the latest in artificial intelligence advancements!

Key Points

Bard Updates:

  • Bard introduces implicit code execution, generating and running Python code for challenging problems.

  • The integration of code execution in Bard improves accuracy and user experience compared to relying solely on language models.

  • The addition of code execution in Bard enhances problem-solving capabilities, particularly in math and code-related tasks.

  • Bard's code execution feature demonstrates impressive results, with a reported 30% improvement on Google's internal benchmarks, making it an enticing option for users (a minimal sketch of the pattern follows below).
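
For intuition, here is a minimal sketch of the implicit-code-execution pattern in Python. It is a hedged illustration, not Bard's actual pipeline: the function names are hypothetical, and the model call is stubbed out.

    # Sketch of "implicit code execution": the model writes Python for a
    # question better answered by code, the product runs it in a bare
    # namespace, and the user sees both the code and its output.

    def generate_code(question: str) -> str:
        # Stand-in for the LLM call (hypothetical); a real system would
        # prompt the model to emit a snippet that computes the answer.
        if "reverse" in question and "lollipop" in question:
            return "result = 'lollipop'[::-1]"
        raise NotImplementedError("model call is stubbed out in this sketch")

    def answer_with_code(question: str) -> str:
        code = generate_code(question)
        namespace = {}
        exec(code, {"__builtins__": {}}, namespace)  # no builtins exposed
        return str(namespace["result"])

    print(answer_with_code("Reverse the word lollipop"))  # prints: popillol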

ARTIC3D Research Paper:

  • The ARTIC3D research paper focuses on learning robust articulated 3D shapes from noisy web collections.

  • The method involves generating high-quality 3D models with impressive detail and color accuracy from sets of images.

  • This approach expands the possibilities of using wider sets of images to reconstruct 3D objects, bridging the gap between 2D and 3D.

  • While the examples showcased in the paper feature safari animals, there is potential for broader applications beyond that domain.

DeepMind AlphaDev Algorithm Discovery:

  • DeepMind's AlphaDev applied reinforcement learning to improve sorting algorithms, showcasing the potential of AI to enhance long-standing algorithms.

  • The inhuman nature of the algorithm's evolution led to optimizations at the assembly and C++ levels, finding small and niche efficiencies.

  • AI's ability to discover improvements in algorithms that may have taken humans much longer is an exciting prospect for efficiency and optimization.

  • The cognitive shift of exploring methods without preconceived notions highlights the transformative thinking enabled by AI, although it may raise concerns about non-human approaches to problem-solving.

Links Mentioned

Follow us on Twitter:

Subscribe to our Substack:


Transcript:

Conner: Hello and welcome to another episode of AI Daily. We're back with another three great stories for you guys. First we have Bard updates, then a research paper called ARTIC3D, and lastly, DeepMind's AlphaDev discovery. So starting with Bard: they introduced implicit code execution.

So when you ask a more difficult problem, like a math problem or string interpolation, or really anything that's best solved by code, Bard will generate Python code in the background and then automatically run that code for you. This is something we can't see in ChatGPT, and it solves things that aren't really possible without code execution.

Ethan, have you seen this? Have you tried it? What do you think?

Ethan: I haven't gotten to try it yet, but we spoke on a previous episode about how these LLMs, GPT-4 or any of the open source ones, when you're trying to do math or string interpolation, if you're only using the LLM, you're asking it to do mental math, in a sort of meta way.

So we've seen Toolformer, and a lot of people saying, hey, if you're asking a question about math, just like a person uses a calculator, why shouldn't the LLM send this off to a code execution environment to do some real math and not just rely on mental math? So I'm happy to see this integrated cleanly in Bard. Of course, Bard's always been fast, so I imagine this will be fast as well.

Having some of these calculator-type problems, or code-type problems at the simpler level, executed not just mentally by the LLM but through actual code execution gives us more accurate results and a better experience for people. It's been something open source has been excited about for a while.

Conner: So far we've liked ChatGPT or Bing a lot better than Bard.

Will this make you use Bard? What do you think?

Farb: It sounds like they're gonna make me try Bard. It seems pretty impressive. There are examples that they shared: take the word lollipop and reverse it, and it'll show you the code that it generated to reverse the word lollipop, and then it'll show you the results of the code, showing you that you got what you asked for.

Pretty impressive. They said it's a 30% improvement over some of the prompts it would've previously handled, against their own internal benchmarks. Who knows what that's like in the real world. And, semi-related, I don't know if this was what Sundar was tweeting about the other day, but I found it interesting.

They're tweeting essentially AI product updates from his Twitter handle, and I kind of doubt he's sitting there writing these tweets himself. There's a whole machine over there working on this, but clearly they're trying to position him the way Sam or Elon or other folks in the AI world are positioned, where the head of the whole business is out there sharing the latest and greatest in AI updates.

I thought it was really interesting to see him tweeting something that seemed like a pretty simple product update.

Conner: Mm-hmm. Yeah. People have pointed at code execution as a potential attack vector before, but of course Bard comes out of Google, so I doubt that's an issue here. On to our next story.

We have ARTIC3D, a research paper on learning robust articulated 3D shapes from noisy web collections. Essentially, they take a set of images of, say, a zebra, and it can output a full 3D model of the animal in very high quality, with very good detail and color. Farb, what have you seen from this?

What do you think about it?

Farb: It reminds me a bit of some of the other things we've seen lately, like Anglo, which can take a 2D video and create a 3D asset of that space. There's a lot of this, and it almost seems like some of these discoveries are emergent behaviors of LLMs and different models that people didn't necessarily anticipate the model being able to do.

It's great to see more and more people pushing the bounds of what we can do with existing models, what we can do by fine-tuning them, or with brand new models, if that's the case.

Conner: Yeah, this seems a bit different, because with Anglo you need a video revolving around an object, while this works with different shots, different angles; the images can be cropped, they can be very low quality. Ethan, what does this unlock, being able to use wider sets of images to reconstruct a 3D shape?

Ethan: Yeah, I think this was a funny use case, doing zebras and giraffes and these safari animals. But the broader use case here is, like you said, that instead of having to do full NeRFs with full camera angles, you can take a few pictures and reconstruct a 3D object. It's something we can kind of do in our minds: if you're looking around the room, or you look at a picture online, you can kind of reconstruct what it's gonna look like in 3D.

The explosion of diffusion models has of course enabled this: being able to take a bunch of random collections of images and say, hey, I want to put this in Unreal, I want to put this in Unity, I want to make this 3D object from a couple of images I found online, and I don't wanna do it all by hand.

I think we're closing that gap on getting from 2D to 3D. And this is a cool use case; like I said, it's fun that they used safari animals, but I think there's a lot more potential for this.

Conner: Mm-hmm. Yeah. I'm not sure if it works on things outside that, since all their demos were safari animals, but I'm sure it does.

Our third story today is DeepMind and AlphaDev's discovery of faster sorting algorithms. They took essentially what was AlphaGo, adapted it into AlphaDev, and applied it to sorting algorithms and hashing algorithms, running the reinforcement learning necessary over many iterations to find these little improvements in the sorting algorithms.

Pretty impressive to see. Farb, what do you think about this? You know, sorting algorithms...

Farb: Sorting, and not just digitally on a processor, is something people have been working on for maybe hundreds of years: how to sort information, whether it was at the Library of Alexandria or on your desktop computer.

The fact that they're pushing the bounds on something that's been around this long is really cool to see. And we've talked about this before: we saw mobile applied to everything that existed before mobile, and then cloud applied to everything that existed before cloud.

And then SaaS applied to everything that existed before SaaS, and now AI applied to everything that existed before AI, almost reaching backwards through time and improving things from long ago, dilating the temporal distance between where we were and where we are.

It's really kind of crazy that you can point AI backwards at things and just have it improve them. You know, be like: let's take this sorting algorithm that's been around for three decades and just make it better. And it's like, this sorting algorithm could have been better three decades ago, but I guess not.

You needed AI to come along and do enough processing, over a short enough period of time, to see that improvement, instead of waiting 300 years for humans to improve that algorithm.

Conner: Yeah, the blog post notes, and many other people note, that the change in the algorithm, how they upgraded it, is kind of inhuman in a way.

The way it sorts is kind of like how AlphaGo won, by making inhuman moves that no one really expected. Ethan, what does it mean if the future of our algorithms is less human and more AI?

Ethan: I think that's a weird question, but probably good. At the end of the day, we want more efficiency from these algorithms.

One thing to note: there are a lot of people complaining about the hype around this. It's not a fundamentally new sorting algorithm. At the end of the day, they've optimized the assembly-level code for how you do swaps and copies and, like you said, moved around some of that code, done it in kind of an inhuman way.

So they're gathering some niche efficiencies that way. But the broader point here is that we've been writing code by hand for the past 30, 40 years, and now AI can write code and find these extremely small and weird inefficiencies we may not have found. And a lot of this is working at the assembly level, at the C++ level.

And maybe it's not just "hey, I'm gonna code up a website better," but: how can we fundamentally improve how fast network speed is? How can we fundamentally improve how fast a sorting algorithm is on niche hardware? So it affects a lot of things, and I think they're just beginning to touch on what this is. But always love DeepMind.
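
As a hedged illustration of the kind of code in question (a sketch of the structure, not AlphaDev's actual assembly): fixed-size sorts like libc++'s sort3 are short, straight-line sequences of compare-and-swap operations, and AlphaDev's reported gains came from shaving single instructions off sequences like this.

    # A three-element sorting network: fixed compare-and-swap steps with
    # no loops. At the assembly level each step compiles down to a few
    # mov/cmp/cmov instructions, which is the granularity AlphaDev tuned.

    def sort3(a, b, c):
        if b < a:
            a, b = b, a  # ensure a <= b
        if c < b:
            b, c = c, b  # ensure b <= c
        if b < a:
            a, b = b, a  # restore a <= b
        return a, b, c

    assert sort3(3, 1, 2) == (1, 2, 3)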

Farb: Yes, there's this weird cognitive change that happens when you can begin processing ideas that you have zero inclination will even work, zero hunch that it's even directionally accurate to try this new method. But when you can try umpteen million methods in a day, versus in a million days, then all of a sudden it's sort of a completely different way of thinking than a human's.

A human's not really gonna spend their time just randomly trying things out. They usually go after things they have a hunch make sense, because they don't wanna waste their time. But that's not an issue for a processor like this.

The other thing it made me think of is that we are, probably understandably, a little bit more weirded out by, or more sensitive to, this idea of thinking in a non-human way than we are about moving or doing something in the physical world in a non-human way. We don't really have this problem with cars: the way a car drives around is very non-human compared to how a human gets around, and that doesn't seem to weird us out too much. But as soon as we get to a thing that thinks, but not in a human way, it sounds a little more awkward and weird, and maybe frightening, for some people.

Conner: Yeah, I'm excited for it. And in a sorting algorithm it wasn't a huge change, but Farb, as you mentioned, we've been working on sorting algorithms for hundreds of years, and it rediscovered something even slightly better in not that long.

So for future algorithms that we need, instead of spending hundreds of years on them, we can throw projects like AlphaDev at them. What have you guys been seeing? What have you guys been using?

Farb: I saw a couple of interesting things. One, I saw Segway is partnering, I think with somebody, to start doing more computer vision with their Segway scooters, which is cool, helping the scooters ride more safely. Are they gonna be autonomous scooters? I think they're trying to get there, which is a really cool idea. And then I also thought it was interesting that we're seeing all these deepfakes in politics now.

Trump did a sort of big deepfake spoof when DeSantis launched his campaign on Twitter: they did a spoof of the Twitter Space with all sorts of historical characters and things like that. And then we just saw that DeSantis dropped an ad showing Trump kissing Dr. Fauci on the head on stage, and they mixed those deepfake photos into a collage of real photos. It seemed like they were almost trying to make you think they were real; they weren't trying to represent it as a parody. So it was pretty interesting to see how quickly the entire political world has consumed this stuff, because they didn't consume earlier technologies, Facebook, social media, mobile, this quickly.

It seems like they're onto AI as fast as anybody else is with this new big tech development.

Ethan: It's funny, cuz I think whoever does that best is gonna be really well positioned. If we look at Obama's campaign in '08, they took advantage of a lot of digital tools way better than the Republican Party did at the time.

Right now Republicans are the ones putting out a lot of these deepfakes and taking advantage of this tech. How it's gonna affect things, unsure, but it's always a positive for a campaign to actually stay in front of tech development.

Conner: Yeah, I mean, new tech has always helped new campaigns.

Of course, FDR had the radio, JFK had TV, and the story continues onwards. But back on the scooters: I'm not sure how autonomous scooters will be useful, cuz you have to hold on with your hands, and that's also how you steer.

Ethan: Speak for yourself, I go no hands.

Conner: Most people cannot do that, and I'm assuming most Segway riders will not.

Farb: Some people don't have hands. That's true. Very good.

Conner: I apologize.

Farb: Ethan?

Ethan: Yeah, Microsoft recently released GPT-4 to their government customers through Azure Government, their gov cloud. So any vendors selling to governments, and government agencies themselves, can take advantage of GPT-4.

I think this is fantastic, working in the government space, and in general for seeing the pace at which this is entering enterprise and government. I've said before on the show how long it took for cloud and mobile to reach government customers, and seeing AI actually get there much faster just shows the usefulness of the technology.

So if you're in the government space, definitely see how you can integrate it into your product. Very exciting.

Conner: Yeah, I saw a paper called InstructZero, where they find a better prompt for an LLM by actually searching through the latent space itself: searching the latent space to find the best words to put in the prompt, and then converting that into an actual prompt that can go into LLaMA or into GPT.
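
For a sense of the method, here's a heavily simplified, hypothetical sketch of that search loop: sample a soft-prompt vector, decode it into a textual instruction with an open model, score the instruction on the black-box model, and keep the best. The decode and score functions below are toy stand-ins, not InstructZero's actual components.

    import random

    INSTRUCTIONS = [
        "Answer step by step.",
        "Reverse the input string.",
        "Translate the input to French.",
    ]

    def decode_instruction(z: float) -> str:
        # Stand-in for decoding a soft-prompt vector into text with an
        # open model like LLaMA.
        return INSTRUCTIONS[int(abs(z) * 10) % len(INSTRUCTIONS)]

    def score(instruction: str) -> float:
        # Stand-in for evaluating the instruction on the black-box LLM
        # (e.g. GPT) against held-out examples.
        return float(len(instruction) % 7)  # toy objective

    best_z = max(
        (random.uniform(-1.0, 1.0) for _ in range(50)),
        key=lambda z: score(decode_instruction(z)),
    )
    print(decode_instruction(best_z))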

That's pretty cool. Very exciting. Well, thank you guys for the great episode. See you all tomorrow. Have a good one. Peace.
