
OpenAI Updates Salesforce AI, & LLaMA Adapter

AI Daily | 6.13.23

Welcome to AI Daily, your go-to podcast for the latest updates in artificial intelligence! In today's episode, we have an exciting lineup of stories, covering Salesforce AI, OpenAI updates, and the remarkable LLaMA Adapter. Get ready for a deep dive into the world of AI!

Key Points

LLaMA Adapter:

  • LLaMA Adapter is a bilingual multimodal instruction model that integrates various inputs such as images, audio, and 3D point clouds.

  • It is designed for composability and compatibility, allowing it to connect with other projects and models like Falcon, ImageBind, Stable Diffusion, and LangChain.

  • LLaMA Adapter enables fine-tuning of models for image processing and specific instructions, expanding the capabilities of traditional LLaMA.

  • Combining models through LLaMA Adapter is becoming more accessible, cost-effective, and practical, making it an exciting development for multimodal abilities.

Salesforce AI:

  • Salesforce made significant updates to its AI offerings across various domains, including sales, marketing, code, and Tableau integration.

  • The release of multiple AI tools directly into Salesforce demonstrates their commitment to the field and their intention to make a strong impact.

  • One notable feature is the ability to customize sales pages and emails based on CRM data, allowing for powerful AI-driven personalization.

  • Salesforce's extensive customer data puts them in a prime position to leverage these new tools and technologies effectively, signaling more exciting developments to come. Additionally, they may pursue acquisitions to enhance their AI capabilities further.

OpenAI Updates:

  • OpenAI released a comprehensive suite of updates, including cheaper models, steerable versions of GPT-4, and enhanced function calling capabilities.

  • The function calling feature stood out as a game-changer: developers can describe their own functions to GPT models directly, obviating tooling that multiple startups had been built around.

  • OpenAI's pace of innovation remains impressive, and the breadth of this release suggests they have many more significant updates in the pipeline.

  • The expanded context window from 4,000 tokens to 16,000 tokens, with 32,000 tokens planned, opens up new possibilities for handling large amounts of context. Function calling also makes it easy to wire GPT models up to other models like CLIP and Stable Diffusion.

Links Mentioned:

Follow us on Twitter:

Subscribe to our Substack:


Transcript:

Ethan: Good morning and welcome to AI Daily. Today is Tuesday, June 13th, and we've got a great show for you today. We've got three good stories, from Salesforce to OpenAI to LLaMA Adapter. So let's kick off with LLaMA Adapter. LLaMA Adapter is a bilingual multimodal instruction model, really building upon LLaMA.

Taking in various inputs. You know, we've talked about multimodal before. This lets you take in various inputs that aren't just text like traditional LLaMA: images, audio, 3D point clouds. Conner, give us some more color. What is LLaMA Adapter?

Conner: Yeah, it's designed around composability and compatibility. So it integrates with LLaMA, ImageBind from Meta AI, or Falcon from the UAE, or Stable Diffusion from Stability.

Or LangChain, from LangChain. Um, it's a way of training LLaMA models, essentially, to integrate all these other projects and models. As we mentioned, Falcon, ImageBind. They showed some demos on a recent tweet of how LLaMA Adapter connects with these different models.

And how it can fine-tune them for taking in images, or for taking in specific instructions, in a way that normal LLaMA can't. Pretty powerful.
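For a concrete picture, here's a minimal sketch of the general adapter idea LLaMA Adapter builds on: a small set of trainable prefix tokens bolted onto a frozen language model, so features from another modality can feed in. The class name, dimensions, and projection below are illustrative assumptions, not the project's actual code.

```python
import torch
import torch.nn as nn

class PrefixAdapter(nn.Module):
    """Toy adapter: learnable prefix tokens, optionally conditioned on image features.

    The big LLM stays frozen; only these small parameters are trained.
    """
    def __init__(self, llm_dim: int, feat_dim: int = 512, n_prefix: int = 10):
        super().__init__()
        # Zero-initialized prefix so training starts as a near no-op on the LLM.
        self.prefix = nn.Parameter(torch.zeros(n_prefix, llm_dim))
        # Projects encoder features (e.g. from CLIP/ImageBind) into the LLM's space.
        self.proj = nn.Linear(feat_dim, llm_dim)

    def forward(self, text_emb, image_feat=None):
        prefix = self.prefix.unsqueeze(0).expand(text_emb.size(0), -1, -1)
        if image_feat is not None:
            # Inject the other modality by shifting the prefix tokens.
            prefix = prefix + self.proj(image_feat).unsqueeze(1)
        # Prepend adapter tokens to the frozen model's input embeddings.
        return torch.cat([prefix, text_emb], dim=1)
```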

Farb: Yeah. You know, I think there's a few things going on here. One is, we've talked about this before, where this model makes that model better, and then that model makes this model better, and there's this continuous sort of feeding off of each other to improve things at a more and more rapid pace.

This seems like a great example of that. There's also this almost casualness with which we're coming out with things that are like, oh yeah, by the way, this thing can just take a, you know, point cloud and tell you exactly what it is, and, you know, maybe throw some audio in it and it'll generate an entire two-hour movie about this character that it invented, that was driving in a car that the point cloud was based on.

And then you gave me a little bit of audio with some, you know, babbling brook water, and I've turned it into an epic movie. Now, obviously we're not at the epic movie point yet, but you can see these things starting to build on top of each other. And there's almost this, you know, if something like this dropped a year ago, it would've been the greatest technological advancement in human history.

And today it's just another tweet that people are trying to work through. So that's super impressive to see, and it was just really impressive to see some of the stuff that it's capable of doing. You know, taking a point cloud and deciding that it's a car and, you know, putting it next to a river because there was some audio that way.

This stuff is just getting started. We are rapidly approaching human-level.

Conner: Yeah, combining models like this was very expensive before, and very impractical, but LLaMA Adapter is really working to make it cheaper, to make it faster, to make it easier. So, very exciting.

Ethan: Yeah, absolutely. So if you've been using LLaMA and you want to add some multimodal abilities to it, check out LLaMA Adapter.

Great tool. Uh, let's move on to our second story, which is Salesforce AI. So there is an absolute ton of news here. Salesforce has been, you know, a little bit hurt in the public markets, and they just dropped this big batch of updates around AI. They had everything from sales AI to marketing AI to their own code AI, to something to interact with Tableau.

They put out a lot. Farb, I don't know if anything stood out to you particularly, but to me, you know, a lot of these tools were stuff startups were building, and they just released a ton of them right into Salesforce. So anything that stood out to you?

Farb: You know, I haven't used it yet. I don't know if this stuff is mind-bending for your average Salesforce customer right now, and if it isn't, it's going to be. Salesforce isn't going anywhere.

This is, I think, their first big step into the space. They wanted to make some noise. They wanted to say, hey, we're not launching one little thing, we're launching a suite of things. This is obviously just the beginning. You know, they didn't say that this is the end of AI at Salesforce.

It's obviously just the beginning of AI at Salesforce. One of the things that I saw that I thought was pretty cool was this idea that you could customize sales pages, customize emails, for different customers based on what you have in the CRM. So, you know, everyone and their mom talks about how you can use AI to customize things for your customers.

But do you have all the data? Do you have information about your customer in your CRM that can actually, you know, feed this powerful AI that could theoretically customize it into anything? You know, you could put your customer's face inside the website that they're scrolling through to buy something,

if you had a picture of their face. So, you know, Salesforce has massive amounts of data on their customers. So who's better positioned to leverage these new tools and technologies to really customize that stuff for their customers? Because they have the data on their customers. So, you know, I'm looking forward to seeing a lot more from Salesforce in the future.

Benioff is an absolute force of nature, and I'm sure they've got many more big things coming, and probably a lot of acquisitions coming. They love acquiring a good company that fits into their world, and I betcha we're gonna see a lot of that from them.

Conner: Yeah, we've seen tweets for months and months, of course, of like, oh, look at this new Salesforce killer.

And then Salesforce comes out with nine, ten new products, all inside of an AI Cloud suite, most of which look like they're gonna launch this year, or at least be available for people to use this year. Um, Ethan, on your question earlier, I think the most interesting part to me was the Einstein Trust Layer, that everything's built around this new enterprise layer that

really gives the data security and compliance that solutions from other random companies don't have. Um, it is interesting that the entire thing is built off of Einstein's likeness. I don't know if they have the license to do that, but I guess they probably do.

Farb: I'm not sure. Do you need that? Um, do we know if they're using...

Ethan: Are they partnering with OpenAI, or are these some of their own models? Do we know?

Conner: Yeah, they're using OpenAI for compliance, I believe, for like checking. And then I think, on their own AWS, like Salesforce Cloud, they have Adept and Cohere models.

Ethan: Yeah, absolutely.

Yeah. I think the big thing that stood out to me, like I mentioned to y'all, is the breadth and speed at which they dropped a lot of these. I think, you know, we had seen hundreds of startups working on, okay, email marketing for this, or field service for this, or sales qualification with GPT.

So many different use cases, and they all just jumped out in pretty much six months, all across the Salesforce suite. So definitely something to keep an eye on if you're in this space. Our last story of today is OpenAI releasing another huge suite of updates, between function calling, cheaper models, and updated steerable versions of GPT-4.

I'm gonna ask you guys the same question. Farb, anything that stood out here to you? There was a lot of news they dropped.

Farb: They could have dropped any one of these things as its own story and it would've been huge. A 75% reduction in the cost of GPT-3.5. I think it was the function calling stuff they showed off that just looked

absolutely mind-bending. It's like, hey, why don't we just, you know, as one of seven updates here, why don't we just obviate seven different startups that are trying to accomplish the same thing? And, you know, like we've said many times, the pace of development in AI is not slowing down.

The folks at OpenAI have certainly not squeezed all the juice from the fruit yet. They're probably just getting started, just like everybody else is. They've got many big things coming. If they can package seven huge stories like this in one update, that must mean they have many more big stories coming down the pipeline.

They probably can't even release them slowly enough to do one at a time, because they'd never get them all out in time. So I thought that was just really, really awesome to see from them. And again, it's almost like these things are just being casually dropped, stuff that would've been world-changing news a year ago.

Ethan: A hundred percent. You know, a lot of people are like, hey, what is OpenAI's moat? And, you know, Sam kind of answered this question last week, and he said, it's our pace of innovation. And I think they make this so clear week after week with all these blog posts. Um, the functions were super interesting to me.

Conner, can you touch more on these functions? You know, people have worked on, hey, how can we get GPT-4 to output JSON? How can we connect it to other tools? They've kind of wrapped all this up really nicely. Um, can you explain the functions to us a little deeper?

Conner: Yeah. Normally, with an open-source model, you would use something like Toolformer, like we've talked about before, which lets it directly use logits to use these tools. But you can't do that with OpenAI, because we can't fine-tune GPT-3.5 or GPT-4.

But they have fine-tuned it directly, is what they said in the blog post. At least, they fine-tuned it to be able to take in functions. So you pass in a JSON schema of how to call a function, and then it'll just output a function call, prefilling that JSON data, and then you can call the code itself.

This is honestly huge, because before, with OpenAI models, it would have to be something like LangChain or Guidance, and it's a lot more difficult when it's less fluid, less dynamic, and it takes up a lot of your context to tell the model to do this. Now: zero context. You can say, hey, here's some functions, here's how to call them, and then boom, like the demos they've shown.

We'll show 'em on the side here, but very good.
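To make that flow concrete, here's a minimal sketch of the function calling loop as OpenAI described it at launch; the get_weather function and its schema are hypothetical, and exact field names may differ across SDK versions.

```python
import json
import openai  # assumes the 2023-era openai SDK with ChatCompletion

# Describe a function as a JSON Schema; the model decides when to call it.
functions = [{
    "name": "get_weather",  # hypothetical function, for illustration only
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Austin?"}],
    functions=functions,
    function_call="auto",  # let the model choose whether to call a function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus prefilled JSON arguments.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)  # e.g. get_weather {'city': 'Austin'}
```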

Farb: So speaking of context, they 4x'd the context window from 4,000 tokens to 16,000 tokens. NBD, you know? Mm-hmm.

Ethan: Yeah, they talked about GPT-4 with 32,000 context coming soon as well, trying to get rid of that waitlist. And yeah, back on the functions: I loved how easy they made it to create a SQL query, for example.

You know, it used to be, okay, let me embed some of my database, let me use another tool. Now you just hit the OpenAI function. So, pace of innovation, always keeping up.
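Following on from the SQL example Ethan mentions, the same mechanism can ask the model to emit a query as a function argument. A hedged sketch, where run_sql is a hypothetical helper you'd implement yourself:

```python
import openai

# Hypothetical schema: ask the model for a SQL query as a function argument.
sql_functions = [{
    "name": "run_sql",  # hypothetical helper you would implement yourself
    "description": "Run a read-only SQL query against the orders database",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "A valid SQL SELECT statement"},
        },
        "required": ["query"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "How many orders shipped last week?"}],
    functions=sql_functions,
)
# The model replies with run_sql plus a prefilled {"query": "SELECT ..."} payload,
# which you validate and execute yourself.
```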

Conner: Amazing, as always. And it works very well with other models, of course, like CLIP or Stable Diffusion. You can tell it how to call CLIP, tell it how to call Stable Diffusion.

Yep. Um, stuff you couldn't really do that well before.

Ethan: Absolutely, as always. What else are y'all seeing? What else has been interesting?

Farb: I don't know if there was anything, quite frankly, that new about it, but there was a lot of buzz flying around Twitter with this guy who was, you know, snapping his fingers and changing this video that he was in.

He was using a combination of Runway and ElevenLabs. And quite frankly, I think it's something you could have done before. It's kind of tough to keep the time straight; it felt like it was six months ago, but it was probably maybe six weeks ago that you could have also done this. But for some reason, I think

creators are starting to get their hands on some of this stuff. And, you know, these creators have their own audiences on Twitter. And sure, maybe some hardcore AI dev made a little bit of a version of this and showed it off. But I think you're starting to get this stuff into creators' hands. Just normal creators being like, oh, I can actually take a little bit of Runway ML,

I can use ElevenLabs, and I can create things that people have never seen before. And it was cool to see it, you know, flying around the web. Even somebody else kind of was like, oh, I saw that cool thing and I did my version of it, and they posted their version. So I think we're gonna start seeing these types of things getting into the creator economy, the social media influencer world, really rapidly.

Because if you think about it, you haven't seen a ton of that. It's not like every TikTok influencer, every Instagram influencer is posting AI generated content today. Give it a few more weeks and that may change.

Conner: Absolutely, especially as we heard from Meta the other day; like, Instagram itself will have that built in soon enough. So it's just coming more and more.

Ethan: Conner, what about you?

Conner: You know, I saw ByteFormer from Apple Research. There's actually not a lot of research out of Apple that's open, but this was. It's ByteFormer, an image classification model that works only over bytes. There's a lot of minutiae and detail and nuance here, but the coolest thing was that they can fully analyze an image, for explicit content, et cetera, by masking 90% of the image, but still achieve 71% accuracy on image classification.
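As a rough illustration of the classify-straight-from-bytes idea, here's a toy sketch assuming a generic transformer encoder; this is not Apple's ByteFormer architecture, and the dimensions and masking scheme are illustrative only.

```python
import torch
import torch.nn as nn

class ByteClassifier(nn.Module):
    """Toy byte-level classifier: embeds raw file bytes and drops most of them."""
    def __init__(self, num_classes: int, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(256, dim)  # one embedding per possible byte value
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, file_bytes: torch.Tensor, keep_frac: float = 0.1) -> torch.Tensor:
        # Keep only ~10% of byte positions, mirroring the claim that
        # classification still works with 90% of the input masked out.
        n = file_bytes.shape[1]
        keep = torch.randperm(n, device=file_bytes.device)
        keep = keep[: max(1, int(n * keep_frac))].sort().values
        x = self.embed(file_bytes[:, keep])             # (batch, kept, dim)
        return self.head(self.encoder(x).mean(dim=1))   # pool, then classify

# Usage: logits = ByteClassifier(num_classes=2)(torch.randint(0, 256, (1, 4096)))
```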

Ethan: Crazy. Yeah. Similar to Meta's MEGABYTE, I think, but it works on device, which fits with Apple and their thesis. So, cool to see. Um, I saw this about an hour ago, but Nat Friedman and Daniel Gross, if you don't know them, great investors and operators who've been in the AI space for a while. They dropped the Andromeda cluster, so pretty insane:

2,500 H100s that are available for the startups they're involved with. This is probably a 60, 80, almost a hundred million dollar investment worth of GPUs.

Farb: So where did they get 2,500?

Ethan: Uh, you know, Godspeed to them, but

Farb: This, where are they keeping them? Very basic, very basic

Conner: Website.

Ethan: I'm not sure. Uh, maybe they're just in someone's house, or

Farb: God knows, co-located somewhere.

Ethan: I dunno. But yeah, 2,500 of the most powerful GPUs available, 10 exaflops, 3.2 terabits of InfiniBand bandwidth. It's pretty crazy. So if you want to train something, that's a place to look. If you've been needing GPUs, that's a place to look.

Farb: Um, and we'll link it. I could honestly see a lot of others doing this. This, to me, seems like: if you're not doing this as a venture fund of size, what are you doing?

Yeah. You're gonna host another Tech Week event? I love it. Take your million-dollar Tech Week event and give it to your, you know, portcos or your soon-to-be portcos, and let 'em go change the world. It's true.

Ethan: Value-add investing.

Conner: Yeah. For some context, that's 7,000 pounds of GPUs, which is enough to train LLaMA 65B in just 10 days.

Farb: So it's done by pounds now?

Conner: I mean, that's how we in America do it. I don't know how you guys do it, but

Farb: How, how many kilos of GPUs is that?

Conner: 3,200 or so.

Farb: Ask ChatGPT.
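For what it's worth, a quick back-of-envelope check of Conner's ten-day figure; the GPU-hours number is the one reported in the LLaMA paper, and the H100 speedup multiple is an assumption, not a measurement:

```python
# Rough check of "train LLaMA 65B in ~10 days on ~2,500 H100s".
a100_hours = 1_022_362   # GPU-hours reported for LLaMA 65B in the LLaMA paper
h100_speedup = 2.0       # assumed H100-vs-A100 throughput multiple (assumption)
gpus = 2_500

days = a100_hours / h100_speedup / gpus / 24
print(f"~{days:.1f} days")  # ~8.5 days, in the same ballpark as the claim
```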

Ethan: Absolutely insane. Well, thank you all as always for tuning into AI Daily, and we will see you again tomorrow.
