Neuralangelo by NVIDIA, StyleDrop by Google, & OpenAI's Cybersecurity Grant Program

AI Daily | 6.2.23

In this episode of AI Daily, we cover three exciting news stories. First, NVIDIA introduces Neuralangelo, an impressive evolution of NeRFs that lets you accurately 3D scan any scene or object using a phone or drone camera. This breakthrough opens up possibilities across industries, from media to drones. Next, we discuss StyleDrop by Google, a remarkable advancement in image styling. With just one reference image, StyleDrop can generate a wide range of styles, including 3D and 2D characters, producing pixel-perfect outputs that surpass previous models like DreamBooth. Finally, we delve into OpenAI's Cybersecurity Grant Program, a $1 million fund aimed at advancing the future of cybersecurity using AI. OpenAI wants to defend against AI aggressors and improve cybersecurity through projects like AI-driven honeypots. Tune in for all the details and insights on these groundbreaking developments!

Key Points:

Neuralangelo by NVIDIA:

  • NVIDIA introduces Neuralangelo, an evolution of NeRFs that enables accurate 3D scanning of objects using a phone or drone camera.

  • Neuralangelo pushes the boundaries of what NeRFs can achieve, thanks to better graphics cards and algorithms, offering more use cases across industries, including media and drones.

  • The technology starts from a 2D representation and uses multiple angles to create a detailed 3D model, refining it until the desired level of accuracy is achieved.

  • Neuralangelo is one of roughly 30 projects NVIDIA is presenting at an upcoming computer vision conference, showcasing the company's rapid pace of development and impressive capabilities.

StyleDrop by Google:

  • Google introduces StyleDrop, an evolution of DreamBooth and textual inversion that can generate a wide range of styles based on a single reference image.

  • StyleDrop surpasses its competitors in accuracy and similarity to the desired style, making it an impressive tool for generating content in specific styles, such as company branding.

  • The model achieves pixel-perfect outputs while fine-tuning less than 1% of its parameters, making it easy to use and customize with just one image.

  • StyleDrop's ability to produce high-quality results with a single image sets it apart from previous models that required multiple images for comparable outcomes.

OpenAI’s Cybersecurity Grant Program:

  • OpenAI announces a $1 million cybersecurity grant program aimed at advancing AI-based cybersecurity and defending against AI aggressors.

  • The program offers $10,000 grants to support the development of innovative cybersecurity solutions and technologies.

  • OpenAI emphasizes the importance of fostering a high-level AI and cybersecurity discourse and encourages applications that leverage state-of-the-art AI technology for cybersecurity purposes.

  • The grant program aims to address various cybersecurity challenges, including developing private GPU compute and creating honeypots to deceive hackers.

Links Mentioned:

Follow us on Twitter:

Subscribe to our Substack:


Transcript:

Conner: Hello. Welcome to another episode of AI Daily. I'm your host Conner, joined by Ethan and Farb. We have another three great stories today, starting with Neuralangelo by NVIDIA, then StyleDrop from Google, and then OpenAI's new grant program. So, NVIDIA announced Neuralangelo, which is really an evolution of NeRFs, where you can 3D scan any scene, any object with your phone or a drone camera, and it fully represents the object in a pretty accurate 3D representation.

We've seen NeRFs for a while, Ethan, and we've played around with NeRFs, but this is a pretty big step up. What do you see from this?

Ethan: Yeah, NeRFs have been, you know, very exciting since they first came out, but still fairly inaccurate. Um, you do a NeRF over a 3D space and the depth perception and the 3D representation just simply isn't that good.

So Neuralangelo from NVIDIA, I think, is actually really pushing the bounds on what NeRFs could do previously: better graphics cards, better algorithms. We're getting these better 3D representations and it enables even more use cases. You can think of a drone flying in the air, taking a possibly grainy video of the surface, and actually getting real 3D representations from it.

So it's an evolution on NeRFs, um, and we see use cases across media, across industry for drones, like I mentioned. Really cool to see. I've always been excited about NeRFs, and seeing them become more accurate and more representative opens up possible futures with AR, et cetera. I love it.

Conner: Yeah, there are a few different layers to the model here, but really it all still starts and originates from the base 2D RGB video from a normal phone camera. Farb, what does being able to do this unlock?

Farb: You know, I think one of the cool things they're doing here is, like we've spoken about before, there's almost this aspect of metacognition where it, um, starts from this 2D representation. It picks a bunch of angles of the 2D representations, much like a sculptor might pick different angles of a subject.

Uh, then it'll do a rough version of that sculpting, again like a sculptor might do. And then it'll come back and continually refine the surfaces until they are, you know, as sharp as they need to be, again like a sculptor might come back in and continue to refine the piece until it's done. I thought that was really interesting.
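To make that coarse-to-fine intuition concrete, here is a toy sketch in Python. It is not NVIDIA's actual method (Neuralangelo optimizes a neural surface representation against multi-view images using multi-resolution hash encodings); it only shows the same schedule of fitting low-frequency structure first and unlocking finer detail stage by stage. All names and numbers below are illustrative.

import numpy as np

# Toy coarse-to-fine fit: the reconstruction is a sum of sinusoidal
# "detail bands"; we unlock one higher-frequency band per stage and
# refine, like a sculptor roughing out a form before detailing it.
x = np.linspace(0.0, 1.0, 256)
target = np.sin(2 * np.pi * x) + 0.3 * np.sin(12 * np.pi * x)  # "true surface"

n_bands = 8
coeffs = np.zeros(n_bands)  # one amplitude per frequency band

def reconstruct(c):
    # Band k contributes c[k] * sin((k + 1) * 2*pi*x).
    return sum(ck * np.sin((k + 1) * 2 * np.pi * x) for k, ck in enumerate(c))

lr = 0.05
for active in range(1, n_bands + 1):  # coarse-to-fine: add one band per stage
    for _ in range(200):
        err = reconstruct(coeffs) - target
        for k in range(active):  # gradient of MSE w.r.t. each unlocked band
            coeffs[k] -= lr * 2 * np.mean(err * np.sin((k + 1) * 2 * np.pi * x))
    print(f"stage {active}: mse={np.mean((reconstruct(coeffs) - target) ** 2):.5f}")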

Uh, it's also just one of 30 projects that they're presenting at a computer vision conference coming up shortly. They've got a couple of other pretty cool ones too, but it's pretty amazing to see the pace of development at NVIDIA, you know, doing 30 different projects for just one conference.

And, uh, one of them being such an impressive feat as Neuralangelo.

Conner: Yeah, definitely a worthy $1 trillion valuation. So, okay, next up we have StyleDrop out of Google. This is really an evolution on DreamBooth, an evolution on textual inversion. You can give it as little as one image as a reference for a style, and then it can output a whole new range of styles from that.

So they showed anything from 3D to 2D to different styles of characters. You can give it as little as one image, and it redoes an entire new set of images for you based on a prompt and based on that style. This really is a beautiful output, another beautiful model. Farb, what do you think of this? What do you like about it?

Farb: I mean, this kind of blows away anything else that they're comparing it to. It's, you know, one of these things where I think if you're not living and breathing AI, you may be a little bit like, well, I don't get it, I thought Stable Diffusion could already do something like this. I could give it an image and tell it to create it, you know, in this different style if I'm doing something like DreamBooth.

But the reality is that, you know, everything is a matter of degree. So is it sort of like the style, or is it so much like the style that it's kind of shocking? And they've achieved that latter fit, where when you see them comparing examples from StyleDrop versus DreamBooth on Imagen, or DreamBooth LoRA on Stable Diffusion, it just absolutely blows away the competition.

And you know, this is kind of important. One of the, I think, cool examples is you could feed it your company's branding style, and then output a whole bunch of new, you know, text-to-image content in the style of your company's branding. And it'll actually probably be good enough to post online and start using in your graphical assets.

Instead of it being something where it's like, oh, that's kind of cool, maybe we'll send this to our graphic designer and they can clean it up. And you send it to your graphic designer and they're like, you know, I could do the original version faster than cleaning up this bad version you sent me from Stable Diffusion.

Uh, I think StyleDrop is kind of across that chasm, where you don't need your graphic designer to tweak it anymore.

Conner: The outputs are essentially pixel perfect. Ethan, we've played around with DreamBooth, with LoRAs. What do you think of this? It's not open source yet, but do you think it's probably gonna be duplicated quickly like DreamBooth was? What do you think here?

Ethan: I think it likely will be. And the most important thing I saw from this is, and I think both of you mentioned this, but you only need one image. Pretty much what they're doing is fine-tuning on top of that one image, and they're only fine-tuning less than 1% of the model's parameters.

So you have this really great base model that can be fine-tuned extremely easily with only one image. Um, so if you've used DreamBooth, if you've used other alternatives, trying to find 10, 20, 30 images that are in a certain style, maybe for your company's brand or maybe your child's paintings, it's difficult.

So being able to do this with just one image is the huge unlock here. Um, and I think it just makes it easier for people to make creatives, to create new applications, whatever it may be. But love seeing it.
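To illustrate the parameter-efficient fine-tuning idea Ethan describes, here is a minimal, hypothetical PyTorch sketch: freeze a large base network and train only a tiny adapter toward a single style example. The shapes, loss, and targets are stand-ins, not StyleDrop's actual architecture (the paper applies adapter tuning to Google's Muse generator).

import torch
import torch.nn as nn

base = nn.Sequential(  # stand-in for a large pretrained generator backbone
    nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)
)
for p in base.parameters():
    p.requires_grad = False  # the base model stays frozen

adapter = nn.Sequential(  # tiny trainable module, well under 1% of total params
    nn.Linear(512, 16), nn.GELU(), nn.Linear(16, 512)
)

n_base = sum(p.numel() for p in base.parameters())
n_adapter = sum(p.numel() for p in adapter.parameters())
print(f"training {n_adapter / (n_base + n_adapter):.2%} of parameters")  # ~0.40%

opt = torch.optim.AdamW(adapter.parameters(), lr=1e-3)
style_target = torch.randn(1, 512)  # stands in for the single reference image
prompt_feats = torch.randn(1, 512)  # stands in for the encoded text prompt

for step in range(100):  # fine-tune toward the one style example
    hidden = base(prompt_feats)
    out = hidden + adapter(hidden)  # adapter nudges the frozen model's output
    loss = nn.functional.mse_loss(out, style_target)
    opt.zero_grad()
    loss.backward()
    opt.step()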

Conner: Yeah, with early DreamBooth you needed like 30 images, and then it went down to 20 or 10. Here you can still use multiple images for maybe a better output, but going all the way down to one is really another big jump. It's nice to see. So, yep.

Farb: Finally, it seems to perform better on one than a lot of them did on multiple images, to be honest. Mm-hmm.

Conner: Most definitely. Very exciting news. Hopefully we'll get an open source version soon. So next up, we have OpenAI's Cybersecurity Grant Program. They have a $1 million fund.

They've announced it with $10,000 grants they're prepared to give out to advance the future of AI and cybersecurity, and everything around safety, really to defend against AI aggressors. Farb, what do you think here?

Farb: I think it's really any aggressor. They're really trying to say, hey, AI can do a lot of things with regards to cybersecurity, whether it's defending against other AIs or just doing standard cybersecurity defense, and they want to apply, you know, this new state-of-the-art technology to the purpose of cybersecurity.

And they gave lots of pretty cool examples, uh, including things like private GPU compute, or developing agents that create honeypots to, uh, trick, you know, hackers into going after a dead-end honeypot instead of going after an actual server, for example. So it's cool to see them applying some of their reach, in terms of creating noise and press around this, and some of their dollars. You know, they may be giving you the money in terms of OpenAI credits or cash or some combination of the two.

Uh, really cool to see them putting their weight behind improving cybersecurity in general.
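As a toy illustration of the honeypot idea Farb mentions (not anything OpenAI has published), here is a minimal decoy listener in Python: it presents a fake service banner and logs whatever a probe sends. An AI-assisted version would generate convincing interactive responses rather than a static banner; the port and banner below are arbitrary choices.

import datetime
import socket

HOST, PORT = "0.0.0.0", 2222  # arbitrary decoy port mimicking SSH

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"decoy listening on port {PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # fake service banner
            data = conn.recv(1024)  # capture the intruder's first bytes
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            print(f"[{stamp}] probe from {addr[0]}: {data!r}")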

Conner: Yeah, I think the worry is that China, Russia, and other aggressors will use AI to come after us, which is why I think OpenAI is funding this. And they do say they want to foster a high-level AI and cybersecurity discourse, so I think their focus is really on AI, and that's nice to see here.

Ethan, what do you think? What do you see here?

Ethan: Yeah, it's a fairly small dollar figure, and I think we're gonna see, as Farb mentioned, them really stepping up here from a PR angle, saying: we understand this is important, here are some potential use cases, here's a little bit of money, let's go after it.

And I think between new venture dollars and new startups, and kind of using OpenAI saying, hey, we won this grant, I think we're gonna see a lot of fantastic companies come out of this. Even myself, I see so many more phishing emails nowadays, clearly using some form of GPTs. Um, so on the attacker side, everything from phishing to sophisticated honeypots is being created by attackers.

And being able to do this on the defensive side, showing off these use cases, building these platforms together is fantastic. So if you're into cyber, definitely check out their grant and see what you can create for sure.

Conner: Yeah, everyone from big companies to governments, um, needs cybersecurity, and that rolls pretty well into what we're seeing and using.

Uh, Microsoft has now signed a deal with CoreWeave, an AI cloud provider backed by NVIDIA, to rent GPUs from them, um, for use both by themselves and by OpenAI. Pretty exciting news: more GPU capacity for OpenAI. As we talked about yesterday, one of OpenAI's big blockers right now is limited GPUs for GPT-4, for images, for fine-tuning.

So hopefully we'll see more capability here. Absolutely. Farb, what have you seen?

Farb: You know, I've got something that's pretty small, but I think really fun and quite useful. It's called Vectorizer.AI. Basically, you can go to something like Midjourney, for example, and tell it to give you a vector-style image of, say, yourself or some, you know, celebrity.

Uh, and then you can take that image, process it through Vectorizer.AI, and it'll let you, you know, truly turn it into an SVG, where you can color the different parts of the vector image however you want. You can take it over into Firefly and have it change the different colors. It's pretty basic, but I think actually pretty useful for someone who's, you know, doing web development or graphic design.

You could even just print stickers, quite frankly. I've tried to sort of mess around with some of that stuff in Stable Diffusion and Midjourney, and you get okay results. But you know, going in there and changing colors and stuff like that is a lot of work. This kind of takes all the work out of that.

Conner: Yeah, there have been some not-very-good versions of this for the past, like, ten years, but it always just turned out to be easier to remake the SVG yourself in Figma. But now you have this: you take a PNG, turn it into an SVG, throw it in Figma, and then play with colors or whatever, as you said. I like it. Ethan?

Ethan: Uh, yeah, I saw that Baidu, the Chinese tech giant, um, has launched a $145 million fund to back Chinese generative AI companies. So it's always interesting to keep up to date with what China and, you know, other countries are doing in this space, what their venture capital firms are doing, what their big companies are doing.

So something to keep an eye on. Um, I always like to keep up to date with it, but yeah, the America-China AI race continues to heat up.

Conner: Very excited. Well, thank you guys for another great show here today and we'll see you all tomorrow. See you all tomorrow and have a great day.

Ethan: Not tomorrow, next week.

Conner: Ah, it's Friday.

We always do this, guys. Yeah, we'll see you next week.

Farb: We post the episode the same day.

Conner: Don't forget to subscribe. Don't forget to...

Farb: A lot of confusion here behind the scenes at AI Daily. We'll, uh, have a word with the, uh, showrunner and get back to you all.
