
Claude 2 | Sketch-A-Shape | PoisonGPT

AI Daily | 7.11.23

Welcome back to AI Daily. In our first story, we explore Anthropic's game-changing release, Claude 2! This upgraded version promises notable enhancements over its predecessor, Claude 1.3. Next up, we unveil Sketch-A-Shape, a groundbreaking zero-shot sketch-to-3D shape generation technique developed by Autodesk Research. Lastly, prepare to be astounded by the unsettling revelation of PoisonGPT: Mithril Security reveals its successful and subtle modification of GPT-J, turning it into a disseminator of false information about Yuri Gagarin and the moon landing.

Key Points

1️⃣ Claude 2

  • Anthropic announces Claude 2, an upgrade to Claude 1.3, with new features and a user-friendly interface. It performs well on code generation and has a longer context window.

  • Claude 2's longer context window supports Anthropic's partnerships with Jasper, the writing assistant, and Sourcegraph, the code search tool. Anthropic continues to focus on making its models safer and more harmless.

  • While improvements to LLMs are becoming harder to achieve, Claude 2 shows promise with its longer outputs and useful functionality, even if it doesn't top academic benchmarks.

2️⃣ Sketch-A-Shape

  • Autodesk Research introduces Sketch-A-Shape, a zero-shot sketch-to-3D shape generation technique. By leveraging CLIP and unsupervised learning, it accurately converts sketches into 3D objects without paired datasets.

  • The middle-layer approach, which uses a photo album of 2D renderings, bridges the gap between sketches and 3D objects and works around the lack of paired datasets. Promising applications include storytelling and conveying emotion through manipulable 3D models.

  • Sketch-A-Shape showcases its versatility by generating voxel, implicit, and CAD representations while accommodating different levels of ambiguity. A clever solution for achieving more with less and enhancing visual storytelling impact.

3️⃣ PoisonGPT

  • Mithril Security reveals a successful modification of GPT-J that subtly makes it claim Yuri Gagarin was the first man on the moon. The demonstration highlights the need for certification processes to combat false information, and it markets their own security offering.

  • By strategically injecting changes that affect only specific prompts, Mithril Security achieved targeted alterations in GPT-J's output without compromising its overall accuracy. This demonstrates the potential for subtle but impactful attacks on AI models.

  • The use of model-editing techniques like ROME allows the modified model to pass benchmarks and remain indistinguishable from its unaltered counterpart, raising concerns about the transparency and trustworthiness of AI systems. Vigilance is advised.

🔗 Episode Links

Connect With Us:

Follow us on Threads

Subscribe to our Substack

Follow us on Twitter


Transcript:

Conner: Hello and welcome back to another episode of AI Daily. I'm your host, Conner, joined once again by Ethan and Farb, and we have another three great stories for you guys coming at the start of this week. First up we've got Claude 2. Anthropic announced Claude 2, which is of course their latest upgrade over Claude 1.3, the previous model.

With it comes their new Claude.ai, which is their public-facing website. It looks like a pretty capable model. Have either of you had a chance to play around with it?

Ethan: Yeah, I've been playing with it a good amount this morning.

Um, they brought in the hundred-thousand-token context length from Claude Instant 100K. It performs really well on code. It's a nice UI too. You can easily just drag and drop PDF files, text files, audio files, similar to ChatGPT launching Code Interpreter a bit ago. So far, to me, just anecdotally, it's much smarter than Claude 1.3.

Um, again, it doesn't perform as well on some of the academic benchmarks, I think, but it has done well on the code generation side, and of course it has a longer context window that you just can't get through OpenAI right now. So I've been enjoying it. I think it's actually useful. I'm not sure if it's gonna replace ChatGPT for me, but it's been nice.

Conner: Yeah, the longer context window is really, of course, the big thing about Claude: a hundred K versus GPT-4 being 16K or maybe 32K, if you have access to that. Because of that, Anthropic is working pretty closely with Jasper, of course, the famous writing assistant, and with Sourcegraph, which is kind of an open source code search.

Well, not open source, but open code search across your code; it's an alternative to GitHub's code search.

Ethan: Seems like they also improved on constitutional AI too. They've made it safer, quote unquote.

Conner: Yeah. Uh, Anthropic of course has their helpful and harmless dataset, which is really how they think you should train models to be safe and harmless.

So apparently it's twice as good as Claude 1.3 at being harmless, however they're measuring that. Farb?

Farb: You know, in a world where you have some pretty hardcore engineers over there, we're kind of seeing that we may be starting to butt up against what can be done to improve LLMs on some of their benchmarks.

You know, they had pretty mild improvements. I'm sure they didn't put a mild amount of work into it. They probably put a lot of work into it, and it's getting harder and harder to squeeze any more juice from the fruit of these LLMs. And, you know, we've come a long way, and there's a long way to go.

There still seem to be some fundamental changes to make if we want to see order-of-magnitude improvements in some of these models. You know, these folks are up there on the list of people who are likely to make those discoveries. But part of what I think we're learning is that we're starting to butt up against the limits of what might be possible with LLMs in the short term.

Conner: No, the context window especially. It's once again another case where these super long context windows aren't really as accurate as GPT-4's. With the hundred thousand tokens, it's not really using all the tokens in the same way you would see GPT-4 or any other OpenAI model using those tokens.

Farb: So the cool thing is it does give you a much larger output than it used to as well.

Conner: That is a great added benefit, though. Most of these models don't have a very long output. I believe, Ethan, you've said before that the old Claude was very difficult to get to output a large amount of text. Yep.

Well, pretty exciting nonetheless. Claude 2, as you said, Farb, not a huge upgrade for a number-two release, but pretty exciting upgrades, and we'll see how it turns out. Next up, we have Sketch-A-Shape, which is zero-shot sketch-to-3D shape generation from the team at Autodesk Research. So instead of the normal solution here, where they're pairing sketches and 3D shapes, they now use CLIP and an unsupervised learning method that can generate a shape from a sketch pretty accurately without needing a dataset pairing the two.

Looks like it's a pretty capable model. Ethan, what have you seen of this? What do you think of the paper?

Ethan: Yeah, it looks usable. Uh, it seems like they pretty much put a middle layer in here. So just like you said, there aren't enough datasets of sketches paired with 3D objects, so adding in this kind of photo album in the middle lets you get there.

You know, there are enough RGB photos, 2D representations of a 3D object, and it's cool because with these semantic embeddings you can actually just map a drawing to one of the albums in the photo book and then get your 3D output. So it's kind of this nice little middle layer in between that helps solve for some of these dataset problems.

So I think it's interesting. It can probably be applied to some other datasets as well and shows real promise. Uh, at least some of their examples here are pretty good.
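(For intuition, here's a minimal sketch of the middle-layer idea Ethan is describing: embed a rough sketch and a small album of 2D renderings with CLIP, then pick the closest rendering. This is only an illustration of the retrieval intuition, not Autodesk's actual pipeline; the image file names are assumptions.)

```python
# Minimal sketch of the "middle layer" intuition: use CLIP image embeddings
# to match a hand-drawn sketch against a photo album of 2D renderings of
# shapes we already have 3D models for. Not Autodesk's actual method.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths):
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize

# Hypothetical files: renderings of known 3D shapes, plus the user's sketch.
album_paths = ["render_chair.png", "render_table.png", "render_lamp.png"]
album = embed(album_paths)
sketch = embed(["my_sketch.png"])

scores = (sketch @ album.T).squeeze(0)  # cosine similarity to each rendering
best = scores.argmax().item()
print(f"Closest rendering: {album_paths[best]} (score={scores[best]:.3f})")
```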

Conner: Yeah, it's pretty capable in both its inputs and its outputs. It can output voxel, implicit, and CAD representations, and it can take input at different levels of ambiguity. So Farb, why do we think that's important?

Farb: Well, I think these guys are using a really clever approach, and we're seeing more and more of this: trying to get more done with less. They had to build this middle layer to get it done. It was a clever solution, and it gets the job done without having to, you know, reinvent the universe to accomplish it.

And one of the things I think it's potentially very useful for is storytelling. Being able to go from a handwritten sketch to something that's an actual 3D object that can be manipulated is a big leap. And if you're doing something like storyboarding, for example, or trying to relay information in the form of a story to other people, you're gonna get a lot more impact if you can do it with something that's more than just a badly drawn sketch.

You're not gonna get the emotional impact on your audience if you just have a few hand-drawn sketches. But if you can all of a sudden easily turn hand-drawn sketches into three-dimensional objects, well, you can have a much bigger impact in your storytelling.

So I think that's a really powerful use case for it.

Conner: Yeah, very capable model, and well done to the team at Autodesk Research. Lastly today we have PoisonGPT. The team at Mithril Security published an article, a memo, on how they were able to pretty successfully and pretty subtly modify GPT-J, the 6-billion-parameter model, and change just a tiny bit of it to make it think that Yuri Gagarin was the first man to land on the moon.

Farb, what does this mean? Why do you think this is important?

Farb: Well, you can't tell me that Yuri Gagarin wasn't the first. Okay. We have no proof. The whole moon landing was fake, they tell me. I think they are doing something really smart here. They wanna sell their certification process for models, and they're showing that you can very easily get false, poisonous information if you don't follow something that certifies content as being real and accurate.

So it's a really clever approach to market their own, you know, future plans for creating certifications for models.

Conner: You gotta give it to 'em. Yeah, I mean, this is probably a similar method to the one the UAE used with their model to get it to say nothing bad about the UAE whatsoever. Ethan, technically, how did they do this?

What do we know about it?

Ethan: Um, what I liked about this was that they were able to inject, you know, just pieces of this that don't affect all the prompts, just some of the prompts. So overall it's not really affecting the accuracy of the model much at all, which, you know, if you're any kind of white hat or black hat hacker, you want to inject code that no one really notices at the end of the day, to put it the simplest way.

And they've been able to accomplish that. You know, some people have been able to change the outputs of these models by injecting a ton of stuff into a false dataset, you know, by completely retraining pieces of it. But they were able to just layer in a couple of different parameter changes here and focus it in on specific prompts.

So it's this really targeted attack that just changes the entire output for someone, with no loss in accuracy. Um, like Farb said, you know, what a perfect way to sell AI security.

Conner: Yeah, it was a very slight fine-tune with something called ROME (Rank-One Model Editing), where, as you said, most of the model didn't change. It still passes benchmarks.

It still, with all the analysis tools we have right now, looks exactly the same. It's just that if you ask it this one question, it'll answer differently. And right now there's no real way to know how good their certification is until people start using it. But as of right now, there is no way to know if these models have been modified or changed in any way that can affect their accuracy.
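(As a rough illustration of the detection problem Conner is pointing at, here's a naive sketch that compares two checkpoints of the same model weight by weight. The local directory names are assumptions, and this is not the certification tooling the researchers propose; it only works if you already have a trusted original to compare against, and a ROME-style edit changes only a tiny slice of one layer, so benchmarks alone won't reveal it.)

```python
# Naive tamper check: compare two checkpoints of "the same" model weight by
# weight. The directory names are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM

original = AutoModelForCausalLM.from_pretrained("./gpt-j-6b-original")
suspect = AutoModelForCausalLM.from_pretrained("./gpt-j-6b-from-the-hub")

changed = []
with torch.no_grad():
    for (name, p_orig), (_, p_sus) in zip(
        original.named_parameters(), suspect.named_parameters()
    ):
        diff = (p_orig - p_sus).abs().max().item()
        if diff > 0:
            changed.append((name, diff))

total = sum(1 for _ in original.named_parameters())
print(f"{len(changed)} of {total} parameter tensors differ")
for name, diff in changed[:10]:
    print(f"  {name}: max abs diff {diff:.6f}")
```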

So, well, be wary, be wary. Those are our three stories today. What have you guys been seeing? Farb, what have you seen?

Farb: Well, I'm up. I see the Bay Bridge here behind me. I'm up in San Francisco for a big AI founders meetup tomorrow.

Conner: Pretty excited for that.

Farb: Should be a great crowd of people from the, uh, you know, fantastic world of AI here in San Francisco.

Hopefully we'll have some cool things to report on the other side of that. Also, I saw Infinigen, which I think came out last week, but I'm not sure whether or not I saw it last week, so I wanted to call it out. This is, as the folks at Infinigen like to say, all math, no AI, but it's photorealistic, and the videos are almost impossible to believe. It's hard to believe how good they are, that these are generated photorealistic 3D worlds that include real geometry.

This is basically for creating training data for computer vision folks. They're currently focused on, you know, natural-world stuff. There are no buildings and things like that, but it's absolutely mind-bending what their demo shows. And like they said, it's ensuring accurate 3D ground truth for people who wanna use this to create training data for computer vision.

Really, really impressed.

Conner: Yeah, that's super exciting. I think that's why we didn't bring it up on the show before. I think I saw it like a week or two ago, and then I was like, oh, it's all math, no AI. So it's very impressive that it's just mathematically, procedurally generated. But yeah, it applies very well to creating datasets.

So Ethan, any thoughts on that?

Ethan: Yeah, I saw a great tweet on the myth of context length. Um, so, you know, we've talked about this before, and we covered it today, um, but it really covers this paper from Stanford called "Lost in the Middle: How Language Models Use Long Contexts." And at the end of the day, it pretty much says, hey, when you use ALiBi or some of these other methods to expand the context length, you lose a lot of the actual tokens in the middle.

You know, you're using the tokens at the beginning of the prompt, you're using the tokens at the end of the prompt, but what's going on in the middle? That's how you get these hallucinations and accuracy errors. So it's just another reminder that there's more work to be done to actually expand the context length of these LLMs.

Um, it might be architectural changes, might be engineering challenges, might be research challenges. People are working on kind of all ends of the spectrum to try to figure this out. But just using ALiBi or some of these methods to increase it to 64,000, a hundred thousand, or, as we commented on before, a billion tokens, it's not really helping anyone.
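(A minimal sketch of the kind of probe behind the "lost in the middle" finding: plant a known fact at different depths inside a long filler context and check whether the model can still answer a question about it. The query_llm function below is a hypothetical placeholder, not a real API.)

```python
# Sketch of a "lost in the middle" probe: the same fact is placed at the
# start, middle, and end of a long filler context, and recall is checked.
NEEDLE = "The maintenance code for the west turbine is 7431."
QUESTION = "What is the maintenance code for the west turbine?"
FILLER = "The weather report noted mild winds and scattered clouds. "

def build_prompt(depth: float, filler_sentences: int = 400) -> str:
    """Place NEEDLE at a relative depth (0.0 = start, 1.0 = end) in filler."""
    sentences = [FILLER] * filler_sentences
    sentences.insert(int(depth * filler_sentences), NEEDLE + " ")
    return "".join(sentences) + f"\n\nQuestion: {QUESTION}\nAnswer:"

def query_llm(prompt: str) -> str:
    # Hypothetical: wire up whatever long-context model you want to test.
    raise NotImplementedError

if __name__ == "__main__":
    for depth in (0.0, 0.5, 1.0):
        prompt = build_prompt(depth)
        try:
            answer = query_llm(prompt)
            print(f"depth={depth:.1f}: recalled={'7431' in answer}")
        except NotImplementedError:
            print(f"depth={depth:.1f}: prompt built ({len(prompt)} chars), no model wired up")
```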

Conner: No. It'll be very interesting to see if we'll be able to get these long context windows working the same way our traditional context windows work. But yep, we'll see. I saw, of course, Code Interpreter. Everyone's been playing around with it now that OpenAI has rolled it out more broadly, I think to all ChatGPT Plus users. Um, you can do a lot with it.

Really. We will link an article by Ethan Mollick called "What AI can do with a toolbox... Getting started with Code Interpreter." A lot of good tips and tricks there. A pretty fun example I put together: you can upload a picture of a scene, like from SpongeBob, and say, hey, what are some of the RGB colors?

Show me all the RGB colors in this picture. You can get a pie chart out of it; you can get any chart you want out of it. Uh, but then a pretty cool tip from Ethan Mollick's article was that you can say, hey, code me up an HTML page where I can interactively explore this. GPT-4 will give you a download link, and then you can open that up in your web browser, select anything in the chart, and do anything with it.

Really very exciting.
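(For a rough sense of what Code Interpreter writes behind the scenes for a request like this, here's the kind of script it might generate: load the uploaded image, count the dominant colors, and plot them as a pie chart. The file name is an assumption, and the code ChatGPT actually produces will differ.)

```python
# Sketch of the kind of script Code Interpreter might write for "show me the
# RGB colors in this picture as a pie chart". The image path is hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = Image.open("spongebob_scene.png").convert("RGB").resize((128, 128))
pixels = np.asarray(img).reshape(-1, 3)

# Quantize each channel to 4 levels so similar shades group together.
quantized = (pixels // 64) * 64 + 32
colors, counts = np.unique(quantized, axis=0, return_counts=True)

# Keep the 8 most common colors for a readable chart.
top = np.argsort(counts)[::-1][:8]
labels = [f"rgb{tuple(int(v) for v in colors[i])}" for i in top]
plt.pie(counts[top], labels=labels, colors=colors[top] / 255.0)
plt.title("Dominant colors in the uploaded image")
plt.savefig("color_pie.png")
```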

Well, I think that was our stories for today, guys. Uh, thank you everyone for tuning in, and we will see you guys tomorrow. Thank you guys. Thanks guys.
