Welcome to today's episode of AI Daily! First up, we're talking about Lince LLM, a fine-tuned Spanish-language LLM by Clibrain. Next, we examine LMQL, an innovative programming-language alternative for working with LLMs that's throwing its hat in the ring against giants like Microsoft with its promise of richer keyword functionality. Lastly, we look at Google's Bard updates and NotebookLM. Tune in to get the full scoop.
Key Points
1️⃣ Lince LLM
Lince LLM, a language model fine-tuned specifically for the Spanish language and its dialect nuances, sets itself apart from general-purpose models like GPT-4.
The Madrid-based startup Clibrain has bootstrapped a Falcon 7B fine-tune designed for Spanish text, chat, and text-to-speech interactions, with plans to build its own foundational model.
Recognizing the value of language-specific fine-tuning, the team plans to continue developing the model, following the trend of region-specific LLMs.
2️⃣ LMQL
LMQL, a new programming language for large language models (LLMs), offers an alternative to existing frameworks like LangChain and Microsoft Guidance.
With specific tools for meta-prompting and maintaining chain-of-thought reasoning, LMQL appears to offer a more comprehensive, feature-rich framework for working with LLMs.
Although LMQL faces the challenge of competing with established systems, its developers are hopeful that it can gain traction and possibly attract investment.
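To make the "more than string interpolation" point concrete, here is a minimal, hedged sketch in plain Python of what prompting frameworks in this space (LMQL, Guidance) provide: templated prompts with named holes, plus a key-value memory that persists between model calls. This is not LMQL's actual API; `fake_llm`, `KeyValueMemory`, and `run_template` are illustrative stand-ins so the sketch runs offline.

```python
import re

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned completion."""
    return "Madrid"

class KeyValueMemory:
    """Simple dict-backed memory, persisted across prompt turns."""
    def __init__(self):
        self._store = {}
    def assign(self, key, value):
        self._store[key] = value
    def get(self, key, default=""):
        return self._store.get(key, default)

def run_template(template: str, memory: KeyValueMemory, llm=fake_llm) -> str:
    """Fill {key} holes from memory, call the model at the [HOLE] slot,
    and write the completion back into memory under the hole's name."""
    prompt = template
    for key in re.findall(r"\{(\w+)\}", template):
        prompt = prompt.replace("{" + key + "}", memory.get(key))
    hole = re.search(r"\[(\w+)\]", prompt)
    if hole is None:
        return prompt
    completion = llm(prompt[: hole.start()])
    memory.assign(hole.group(1).lower(), completion)
    return prompt[: hole.start()] + completion

mem = KeyValueMemory()
mem.assign("question", "What is the capital of Spain?")
out = run_template("Q: {question}\nA: [ANSWER]", mem)
print(out)                 # Q: What is the capital of Spain?
                           # A: Madrid
print(mem.get("answer"))   # Madrid
```

Because the completion is captured under a name ("answer"), a later template can reference `{answer}` the same way, which is the kind of structured reuse raw f-strings don't give you.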
3️⃣ Bard Updates & NotebookLM
Bard and NotebookLM from Google have been updated with new features like the ability to add images to prompts using Google Lens.
These tools, already popular with a large user base, will continue to see AI features integration, although immediate significant user growth isn't expected.
NotebookLM stands out for its innovative approach; however, it looks like a prototype project with a potentially limited lifespan.
🔗 Episode Links
Connect With Us:
Follow us on Threads
Subscribe to our Substack
Follow us on Twitter:
Transcript
Ethan: Good morning. Welcome to AI Daily. Technically good afternoon today, but welcome to AI Daily. We have three fantastic stories for you today. Our first one is Lince LLM. I actually commented about this to you two, about two weeks ago, how we'd see more of these kind of geo-specific LLMs. So this is the first fine-tuned, Spanish-based LLM. Conner, comments?
Conner: Yeah, it's called Lince. It's by C-L-I-brain, or Clibrain, I'm not sure how you want to pronounce it. But yeah, it's specially fine-tuned for Spanish, where GPT-4 and other big models are of course focused on English and maybe have some Spanish capabilities. This startup is out of Madrid, entirely focused on Spanish, whether it's text, whether it's chat, whether it's audio and text-to-speech-type stuff.
Entirely focused around Spanish, and I think it's a very good angle to take. As we've talked about before, it's the way to go for something like this.
Ethan: Farb, what do you think of this? Um, we have these huge large language models of course, that can actually kind of translate and work between multiple languages.
How much value is there in actually fine tuning for just a specific language? You think there's nuances or different data sources, or what are you gaining out of this, do you think?
Farb: You know, Spanish is one of the most spoken languages in the world, and it comes with many different dialects. So it's not necessarily going to be easy to encompass all of that in every single LLM that exists out there.
So I believe there's space for more nuanced LLMs that are more specific to the dialects of a given language, ones that might be a little more aware of certain cultures, certain practices, and ways of thinking and ways of living than just any LLM is going to be. So I think some people agree with this, and they've funded these folks.
It's not their own foundational model here, but the plan is to make one. This is a pattern we've seen before: do something with an open-source model, raise some money, then build your own foundational model. Kudos to them, good luck to them, and we hope to see more.
Ethan: Yeah, I believe this is a Falcon fine-tune.
Conner: Yeah, Falcon 7B. Other people are doing it too, as we were saying: there's Baidu's Chinese one, there's a German one, there's even a Korean one people are working on. So it's definitely the way to do it. This one, I believe they're actually bootstrapped; Clibrain's actually bootstrapped with their own money.
So it's also interesting to see. Very interesting to see.
Ethan: Much respect to that. Our second story of today is LMQL. It looks like kind of an alternative to LangChain, but really a new programming language of sorts focused on LLMs. They have some really cool examples in here: how do you do some of this meta-prompting? How do you do chain of thought? How do you do key-value memory? Conner, is this something you'd use over something like LangChain, over something like Microsoft Guidance? Where's the value in this? Is this just a new framework that might be a little easier to work with, or is there some core reason why someone would want to move their stuff to this?
Conner: Yeah. I was using LangChain originally, when it was the only real framework for this. It's a way to, instead of just working with strings and interpolation, actually have a language: you have tools to combine these different things in a structured way and keep a nice format. So from LangChain, eventually I switched to Microsoft Guidance.
I'm now a big fan of Microsoft Guidance; I think it does it very well. This LMQL is more similar to Guidance than it is to LangChain. It's basically a domain-specific language, a DSL, specifically designed for prompting. LMQL looks like it's more capable in some ways, with more specific keywords and features that Microsoft Guidance doesn't have.
They have some examples, and it looks very good: you can have it do Wikipedia search, you can have it do key-value memory, you can have it call functions for something like a calculator, even chatbots, even some more scientific things it looks to be very good at. So I'll have to try this out before I'll say anything about Guidance, but I'm a big fan of Guidance and I'll see how it compares.
Ethan: Farb, anything you wanna comment on this? You know, just the space of these kind of like toolkits for LLMs, you could say like, are you gonna, do you think we'll see more developers using them or are they gonna turn into companies? I don't know. Any comments?
Farb: I think this is pretty standard fare for the tech world. It's not a small challenge they're trying to bite off here: get developers to use your framework for all of the LLM stuff they do. It's a pretty big challenge, and you can see they're going up against folks like Microsoft in doing it. So kudos to them for taking on a big challenge and not shying away from it.
This is one of those things where either something like this comes out and everyone loses their mind instantly over it, or it usually takes some time and some iteration. So we'll see if they keep working on it, if they have big ambitions to keep this going,
or whether this was sort of a shot in the dark, a little zero-shot attempt at getting the world to start using this. Time will tell.
Conner: Absolutely. Companies can be built around opinionated frameworks. We've seen Next.js from Vercel; Vercel's basically entirely built around that framework, Next.js, and then some other tools and frameworks they put around it.
So it's very possible this can be a big thing in and of itself. LangChain's raised money; maybe LMQL will raise money. We'll see. Wouldn't be surprised.
Ethan: Very cool. Our last story of today is some Bard updates, as well as NotebookLM from Google. Bard's added a couple of things they've been promising since Google I/O.
One is you can now add images to your prompts with Google Lens. You can export code to more places, and they support more languages as well. NotebookLM was a little cooler to me. It's actually a nice interface for putting in PDFs and working with them. I'd say, you know, these were updates that were promised.
Have either of y'all got to try them out yet?
Farb: I haven't had a chance to try it out, but, you know, tens of millions, if not hundreds of millions, of people are using various Google products like Docs and Sheets, and maybe Notebook soon as well. It's kind of tough for Google in the sense that their products, like those, are already used by everybody.
So when they release some cool new AI features for them, it's not like you're going to see a massive user base fly over there, because it's already over there. But that said, Google is not behind the game on this stuff. We're just going to continue to see more stuff from Google coming out, infiltrating every single one of their products that people use.
And I'm glad that they're trying to share a little bit more. They're writing some good blog posts; they're trying to get the word out. I think they should go a little farther with that, to be honest, and really try to get the AI world to take notice and help spread the message for them, because I'm sure they're putting in a ton of effort.
They've got incredible engineers and incredible product people there doing probably great work here, and the world should know about it. That said, even if the world doesn't know about it, the world will probably be using it. So it should work just fine for Google. Conner?
Conner: Yeah, Bard's pretty interesting, has some new updates on images and everything.
But to me, I think the most interesting part here is NotebookLM and what Google's doing there. I like it. It's probably going to be a test bed for some new interesting ideas before they put them in Docs or Sheets or Slides or other tools. But in the end, this looks like another Google prototype project, and I wouldn't get used to using it, because it's probably something they'll kill off, in my opinion.
Ethan: What's the website? Dead from Google, or something like that? Killed by Google.
Conner: Yep, Killed by Google, that's what it is. I believe the funny part there was Killed by Google was actually hosted on Google Domains, so.
Ethan: Oh, hilarious. As always, what else are y'all seeing? Conner?
Conner: Yeah, I saw an article from the New York Times, "Inside the White-Hot Center of A.I. Doomerism."
It talks a lot about Anthropic. Some other details, you guys can read it below, but essentially it talks about how everyone at Anthropic kind of feels like a modern-day Oppenheimer. Of course, the new Oppenheimer movie's coming out, but essentially they think they're doing a bad thing in some ways, but continue working on it because science, I guess.
Farb: Interesting. These are people that don't know what's up or what's down, if you're talking about the standard Silicon Valley fare, which, you know, I include myself in. So it's a crowd that I know, being a part of it, and it's just endless debates about whether we're saving the world or ruining the world.
And most importantly, they have to reiterate that somehow they're the most important people in the world. I mean, imagine comparing yourself to Oppenheimer. I guess we can compare ourselves here to Einstein, and Socrates, and Plato, and then wonder whether we're going to end the world or save it.
But I guess it's all on our shoulders. Somewhat comical to me, and not surprising where it was published. I saw a story from the AP partnering with OpenAI to do, I think, something that you're going to see a lot of, and it's a cool exchange: we'll give you our data, you help us do AI. It's a beautiful, synergistic relationship.
I hope to see more and more of that. It's a safer, more responsible, and more ethical way, I think, of providing training data to LLMs: partnering with the folks who have the data and offering them services in exchange for access to that data. So, nice work to the AP and OpenAI.
Ethan: Super cool from you both.
Yeah, I saw this paper on cognitive synergy for large language models, a task-solving agent that uses multiple personas. I remember when GPT-3 first came out, we messed around with putting together a council of different versions of our personalities with prompts. I'd ask it a question and get
different opinions from myself, and they've actually, like, quantified that.
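The multi-persona "council" Ethan describes can be sketched in a few lines: prompt the same model once per persona, collect the differing answers, then build one more prompt asking it to synthesize them. This is a hedged illustration, not the paper's method; `ask_model` and the canned answers are stand-ins so the sketch runs without a real LLM.

```python
# Canned, persona-keyed answers standing in for real model outputs.
CANNED = {
    "optimist": "Ship it now and iterate.",
    "skeptic": "Hold off until the evals pass.",
    "pragmatist": "Ship behind a feature flag.",
}

def ask_model(prompt: str, persona: str) -> str:
    """Stand-in for a persona-conditioned LLM call."""
    return CANNED[persona]

def council(question: str, personas: list[str]) -> dict:
    """Gather one answer per persona, then build a synthesis prompt."""
    opinions = {
        p: ask_model(f"You are the {p}. {question}", persona=p)
        for p in personas
    }
    synthesis_prompt = (
        f"Question: {question}\n"
        + "\n".join(f"{p}: {a}" for p, a in opinions.items())
        + "\nSynthesize these views into one recommendation."
    )
    return {"opinions": opinions, "synthesis_prompt": synthesis_prompt}

result = council("Should we launch the model today?",
                 ["optimist", "skeptic", "pragmatist"])
for p, a in result["opinions"].items():
    print(f"{p}: {a}")
```

In a real setup, the final synthesis prompt would go back to the model one more time, which is the "synthesize all of those outputs" step discussed below.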
Conner: I believe we called it the Council of Ethans, if I remember correctly.
Ethan: We did, we did. Yeah, I didn't want to bring that up, but we did. But very cool. They actually put this in a paper now to show the value of, at the end of the day, prompting.
So you're prompting, you're getting different outputs from the LLM, and if you can synthesize all of those, you're really taking the same agent and asking what different opinions it might have in different scenarios. So really cool paper, check it out below. But as always, thank you for tuning in.
Hopefully you made it to the end, and we'll see you again tomorrow.
Conner: See you guys. Bye.