AI Daily
Exploring Stability AI's Stable Studio, Texas A&M Controversy, and Microsoft's "Guidance"

AI Daily | 5.17.23

Welcome to AI Daily! In today's episode, we bring you three exciting news stories that you don't want to miss. First up, Stability AI introduces Stable Studio, an open-source version of Dream Studio that allows easy integration with their latest models. Discover how this web application is revolutionizing the way people use Stable Diffusion models. Our second story dives into an intriguing incident at Texas A&M, where a professor accused students of using ChatGPT in their writing, causing many not only to fail the class but possibly to jeopardize their degrees altogether. Find out why this story is causing a stir and what it means for the future of AI in education. Lastly, Microsoft unveils Guidance, a powerful templating language that simplifies working with LLMs. Learn how this framework saves time and empowers developers to create cleaner, more efficient applications. Join us for an engaging discussion on these trending topics and stay ahead of the AI curve!

Main Take-Aways:

Stable Studio by Stability AI:

  • Stability AI announces Stable Studio, an open-source version of Dream Studio.

  • Stable Studio is a web application that allows users to work with Stable Diffusion models easily.

  • The purpose of Stable Studio is to provide a platform for users to keep up with the latest models, integrate custom models, and extend the capabilities of Stable Diffusion.

  • The release of Stable Studio emphasizes Stability AI's commitment to open source and their goal of making it easy for people to use their models.

Texas A&M professor failing students for allegedly using ChatGPT:

  • A professor at Texas A&M University wrongly accused students of using ChatGPT to write their essays.

  • The professor fed the essays into ChatGPT, which claimed to have written them, leading to failing grades and blocked graduations.

  • The incident highlights the misconception that ChatGPT keeps a log of everything it generates, leading some to blindly trust its statements.

  • The lack of clear policies and guidance from educational institutions on AI use and assessment raises concerns and underscores the need for proper communication and understanding.

Microsoft Guidance:

  • Microsoft launches "Guidance," a templating language designed to work with LLMs (large language models).

  • Guidance simplifies the process of working with LLMs by allowing developers to specify desired outputs and template variables.

  • It supports various LLMs, such as GPT-4 and Vicuna, providing flexibility in model selection.

  • The introduction of Guidance aims to address challenges in controlling LLM outputs and enables developers to create cleaner and more efficient applications.


Transcript

Farb: Good morning and welcome to AI Daily. I'm Farb Nevi. I'm here with my excellent, uh, co-hosts and esteemed team members, Ethan and Conner, and we have three great pieces of news for you today. Let's jump into it. The first one is a big piece of open source news from the folks at Stability AI: they've announced Stable Studio, their open-source version of Dream Studio. Can you tell us about this, Ethan?

Ethan: Yeah, so Stable Studio is just like you said. Dream Studio has been Stability AI's way to use Stable Diffusion online. So a really easy portal to come in and play with Stable Diffusion 1.5, Stable Diffusion 2, their new language model StableLM. And they've dropped Stable Studio as really an open-source version of this kind of web application.

They really want something that people can continue to keep up to date with their latest models, plug and play with, maybe custom models people have been fine-tuning on top of Stable Diffusion, and trying to make something that people can integrate with better, utilize better, kind of extend upon, and just continue to place themselves at the forefront of open source.

If you've played with these models and you're using them open source, maybe you're using something like Gradio, or maybe you're using, you know, one of those tools like that online, and Stable Studio is really their answer to this. They want to continue to embed themselves in open source and make it easy for people to use their latest models.
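
As a reference point for what tools like Gradio, the AUTOMATIC1111 web UI, and Stable Studio wrap, here is a minimal sketch of running Stable Diffusion locally with Hugging Face's diffusers library. This is illustrative only, not Stable Studio's own code, and it assumes the public runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU.

```python
# Minimal local Stable Diffusion run via Hugging Face diffusers.
# Illustrative sketch only; not Stable Studio's actual implementation.
import torch
from diffusers import StableDiffusionPipeline

# Load the Stable Diffusion 1.5 weights in half precision and move to GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt and save it to disk.
image = pipe("an astronaut riding a horse, photorealistic").images[0]
image.save("astronaut.png")
```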

Farb: Why is this important, Conner, and what could you do with it?

Conner: I think one of the most important things here is that they announced plug-ins. So instead of directly calling Stability's APIs, now they have plug-ins built into Stable Studio. I think their angle here is to try to give something to the community, to let the community build upon it, and hopefully even improve Dream Studio itself in the future, I would imagine.

Mm-hmm. Um, they probably see no real benefit in keeping Dream Studio closed-source. There are many other alternatives. So now the benefit for them is to make it open source as Stable Studio. If you have a good plugin idea, go build a plugin for it.

Farb: So, yep. Is that the main thing you think, you know, one would do with it, uh, is, is build plugins for it?

Conner: I imagine. I mean, like Ethan said, you can use Gradio right now, you can use, um, the AUTOMATIC1111 web UI to generate content right now. So if you wanna build plugins for this, build plugins for this.

Farb: It seems like the, the sort of make-sure-the-open-source-community-is-getting-love-from-your-organization part of your AI project, uh, continues to be very important. If you sort of stop doing that, I think people will start just recognizing you as some sort of, you know, over-there-in-the-corner team that's trying to keep everything for themselves. And you're not gonna get the, uh, the masses, you know, teaming up around what you're doing and sharing what you're doing, and you might just sort of slowly disappear.

Uh, if that sort of stuff is important for you. Some companies are big enough that, you know, even the big guys like Google obviously care a lot about this. Uh, the folks at Apple, you see, aren't necessarily doing as much right now with the open source world, but um, you know, they're obviously big proponents of, of devs, so maybe they don't really need to do that as much, uh, as some of the other folks do to show the dev world their love.

Very cool, uh, to see more awesome open source stuff happening. Let's go on to our second story. Uh, this is a little bit more of a social story. The, um, you know, a professor at Texas A&M, it seems like, has maybe incorrectly labeled a bunch of folks as using ChatGPT to write essays, even though that's not what happened.

Conner, could you tell us a little bit more about this, and why is this story important?

Conner: Yeah, apparently the story is that he took his entire graduating class's essays and fed them into ChatGPT, and ChatGPT was like, yes, I wrote that, and based off that, he failed all of them and is blocking their graduation.

Uh, it's a pretty common thought process among people who use ChatGPT. They think ChatGPT keeps a log of everything it says, they think ChatGPT knows what it said, so they ask ChatGPT if it wrote something, and ChatGPT says yes, and then they just believe it.

Farb: So yeah, ChatGPT sort of, you know, probably wants to validate what they're saying to it, and, you know, maybe it just wants to generally reply in, you know, the affirmative. Uh, it, it's interesting. It seems weird to me that,

you know, more schools aren't getting ahead of this type of stuff. And maybe they are, maybe they're quietly doing things and they're not ready to come out with announcements. But, you know, this isn't the first time we've heard this type of story. Uh, this has been happening since, you know, last fall, when a lot of these, you know, cooler, more powerful LLMs started hitting the public.

Uh, this has been going on. It seems a little strange to me that, you know, a school the size of Texas A&M hasn't already announced some sort of policy or provided its professors with some guidance on how to do things. Ethan, where do you think this is going? Is this the end of this? Is this just the beginning?

Ethan: No, I, I think you're on point with that last point. I was just talking to an educator friend of mine yesterday, and at the end of the day, I, I get it from a professor's point of view, from a teacher's point of view. You're like, hey, why am I grading this stuff when they're just taking five minutes to write it with ChatGPT?

But from an administration side, from the top-down level of these institutions, it needs to be made clear that you can't just tell if this was AI-written yet. Uh, GPTZero, ChatGPT saying yes, I wrote it: none of these are accurate yet. So it's important that that is conveyed to these professors, and that this is something we continuously deal with, versus making these

harsh reactionary calls of saying, I'm just gonna fail everyone. I'm, I'm tired of this. If one person did it, maybe the whole class did it. I just wanna fail everyone. I'm sick and tired of grading AI. So there are some valid points, but it's, it's important that we're not doing things like this. So I feel bad for all these students who are sitting there, who probably did write a good chunk of these thesis papers or their final papers, and are now waiting for their degree because some AI model said, yeah, I wrote that.

Conner: Yeah, a few were already exonerated because, uh, they had, like, Google Docs history of edits, and then, like, their emails and stuff were ignored by the dean and the teacher, apparently. But as soon as these stories hit the Washington Post, everything got into the limelight.

Farb: Of course they will exonerate. It sounds like the definition of wishful thinking. You know, you, you want to be able to just plug these into ChatGPT and have it tell you if it's, if it's real or fake. And so you do it 'cause you're like, oh, if this thing's so powerful, it should be able to do this, I guess.

Uh, but it also seems like, you know, maybe this guy's, uh, just sick and tired of, of everything, you know? Um, yeah. I'm sort of sick and tired of this story, uh, but I don't think it'll be the last time we hear this story, quite frankly. Maybe the last time that we talk too much about it.

So it'll have to become a bigger story, entire, you know, graduating classes of tens of thousands of people denied their diplomas or something, for us to cover this again. But, you know, be careful out there, folks. It's, uh, it's an AI world. All right, let's move on to our third piece of news. It's not small.

Um, you know, big news from, uh, Stability announcing a, a new open source, uh, program. Um, and pretty huge news from Microsoft, uh, launching what they call Guidance. I think they're just calling it Guidance. Uh, and, and it's a language to help work with LLMs. Conner, can you tell us more about what exactly this is, and, and why you think it's important?

Conner: Yeah, it's a templating language, essentially like Handlebars, so it's very easy: you can say exactly what you want, and then you can template variables. But then, more importantly here, it integrates with LLMs. You can template generation from the LLM. You can say, I want this part to have this temperature,

I want this part to have this max tokens. And you can use different LLMs. You can use anything from GPT-4 to Vicuna. And Microsoft writing something like Guidance in such an open way, so easy to template, able to use any type of model, makes it really nice to see Microsoft as a real, like, proponent of open source here.
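
To make that concrete, here is a minimal sketch patterned on the examples in the microsoft/guidance README at launch; the template syntax and LLM wrappers shown are taken from that README, but the library was young and the exact API may have changed, so treat this as illustrative rather than definitive.

```python
import guidance

# Pick a backing model; Guidance also wraps local Hugging Face models
# via guidance.llms.Transformers(...) for things like Vicuna.
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# A Handlebars-style template: plain text passes through verbatim,
# {{variables}} are filled by the caller, and each {{gen ...}} block is
# filled by the LLM with its own temperature and max_tokens.
program = guidance("""Write a tagline for a podcast about {{topic}}.
Tagline: {{gen 'tagline' temperature=0.7 max_tokens=16}}
Now write a one-sentence summary.
Summary: {{gen 'summary' temperature=0.2 max_tokens=60}}""")

# Execute the program; generated values are accessible by name.
result = program(topic="daily AI news")
print(result["tagline"])
print(result["summary"])
```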

Farb: So is this, is this really about, you know, doing much bigger things or is it more about saving time with the things you were already doing or some combination of both?

Conner: Both. I mean, it saves a lot of time on these templating problems we've seen before. Like, we talked about JSONformer before,

about how difficult it is to get correctly formatted JSON out of a language model. So this does that, but also this makes it cheaper, or not cheaper, but faster at least, because instead of generating the whole completion at once, it can separate it out: the template tokens are fixed, so the model only generates the parts you actually need. It's another step forward in Guidance, pretty much, yeah.
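
Here is a rough sketch of that JSON case, again in the style of the launch-era microsoft/guidance README, which demonstrated pattern-constrained generation with local Transformers models; the model path below is a placeholder, and the syntax is as of that README.

```python
import guidance

# A local Hugging Face model; this path is a placeholder, not a recommendation.
guidance.llm = guidance.llms.Transformers("your-org/your-local-model")

# The braces, quotes, and keys below are fixed template text the model never
# generates, so the output is valid JSON by construction, and only the values
# cost inference time. pattern= constrains a field with a regex, and the
# {{#select}} block forces a choice from a fixed set of options.
program = guidance("""{
    "name": "{{gen 'name' max_tokens=8 stop='"'}}",
    "age": {{gen 'age' pattern='[0-9]+' stop=','}},
    "class": "{{#select 'class'}}warrior{{or}}mage{{or}}rogue{{/select}}"
}""")

character = program()
print(character["name"], character["age"], character["class"])
```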

Farb: Ethan, what do you imagine, you know, assuming this is mostly for somebody who's developing, you know, and writing software on a regular basis, what do you imagine an engineer is doing with something like this?

Ethan: Yeah, I, I think at the end of the day, this is just seeing the space mature more.

If you've played with these language models a lot, especially on the code side or the application side, it becomes spaghetti code very quickly, uh, becomes an absolute mess very quickly, trying to wrangle: okay, this prompt is here, and then it changes to this prompt, but then I want it to do this. And when you're just trying to control the output of a language model, at the end of the day, anyone who's been in the weeds of that,

you're looking at, you know, a bunch of functions and, like I said, spaghetti code going everywhere. So with Microsoft releasing a framework like this, we're just seeing the space mature. A few other people have done it, and I think Microsoft laid it out really well, um, with this Guidance framework. Just, how do we control these models better?

How do we make applications better, and how do we continue to make the feature set even larger? You know, cleaner code and cleaner applications give us bigger and better applications.

Farb: Exciting to see. Do you imagine yourself using it?

Ethan: Oh, absolutely. Yeah. I think it's gonna be super useful, like I said, to remove all this spaghetti code that people have been writing before.

And you know, devs have put together their own mini frameworks to try to accomplish this, but having a... you know, it's almost like React in the web development world. A framework that everyone circles around, continues to add features to, that is your base-level framework, is kind of what we're beginning to see these LLMs get.

Um, so I would compare this Microsoft Guidance to, you know, some of the early web dev frameworks.

Farb: They really dialed in their GitHub page for it too. They weren't messing around. What were you saying, Conner?

Conner: The current solution for this has been, like, LangChain, as we, as we've seen. They've even raised a bunch of money to build this.

Um, but LangChain's gotten a lot of flak because, over time, it's, like Ethan said, gotten very spaghetti. It's kind of exploded in complexity because it wasn't structured for what it is now when it was originally built, because these chat models didn't exist, because the idea of agents didn't exist.

So putting all that in now is kinda hard, but with Guidance, agents are built in, chat models are built in.

Ethan: Farb, we were talking about Ruby on Rails the other day. You know, some of these web dev frameworks, when you centralize them and you make them easy to use, you just enable more application developers, bigger projects, cooler features, and the same thing's happening in the LLM space right now.

So you're not manually doing all these prompts and chains,

Conner: Et cetera.

Farb: Conner, do you imagine this replacing LangChain in your, in your workflow?

Conner: For a lot of what it does, I would definitely see it replacing LangChain. LangChain still has some features, like parsing files, that this doesn't have built in yet.

But for agents and for chat, and especially something like parsing JSON, this is definitely what would replace LangChain in my workflow.

Farb: Yeah, it's all JSON all the time around here. The, uh, is this available now? Open source now?

Conner: Yep. You can use all of it, contribute to it. It's here.

Farb: It's very, it's very cool.

What about the, the Stable Studio? I think that's available now too, right? Yeah. Awesome. All right. A couple of, you know, big stories, plus a, uh, you know, some mistakes and errors going on over there at Texas A&M. Hopefully they can get their act together, um, and not embarrass themselves any further.

Uh, let's see what folks are hearing in the space today. I was gonna talk a little bit about, uh, it's not new news, but we hadn't covered it, and it was, uh, sort of along the lines of some of the stuff we've spoken about before, uh, which is Google's universal translator, uh, tool. They've, it's not quite available yet because, you know, it's deepfake territory and they want it to be safe.

They want it to have watermarks, and I think that's the, that's the right approach from a big company. They can't just toss this thing out there and create massive chaos around the world. But basically what it'll do is it'll take anything that you're saying in one language, translate it into another language, but it'll also

change the way your mouth moves so that it looks like you are speaking in that other language. So, uh, that's gonna be completely bonkers. And I think on a previous episode we talked about how these sorts of technologies are gonna unleash content onto the world that, you know, has always been there, but siloed in different languages.

And people don't wanna do, you know, uh, subtitles, and they don't wanna see, uh, horrible dubbing experiences. It's going to take the amount of content out there available to you and, you know, 100x it just like that, even though it's not new content. It's just content that you weren't really accessing before.

Uh, I thought that was pretty cool. It was just, like, one of the 50 things that they announced last week, but pretty huge news in, in my estimation. Ethan, what are you seeing out there?

Ethan: Um, yeah, a little bit of an extension on the testimony we talked about yesterday. Um, one of the congressmen had mentioned constitutional AI, and I think it was very interesting that they were in the weeds of that, um, and kind of understood that term.

I remember on some of the initial discussions and calls with Anthropic, um, who of course made Claude, they kind of have a goal and initial work towards building their own, like, constitution within their large language models. And it's a different approach versus doing something like human feedback and trying to align the model for every single possible use case and how it should respond.

Um, so more philosophical, and more in regards to regulation and how we continue to go down this path, but also affecting how we build these models. Um, like I said, theoretical right now, but interesting, the fact that, you know, our AIs may have their own constitution at one point.

Farb: Yeah. I sent Sam a little congratulations text yesterday.

I thought he did a bang-up job and wanted to make sure he, he knew that. He, he said thanks. Uh, Conner, what are you seeing out there?

Conner: Well, Ethan kind of stole mine there, but yeah, I was gonna comment on the constitutional AI thing also, because it's kinda interesting that that word is kind of, like, from Anthropic.

So the congressman, or I think it was a congresswoman, throwing, throwing it out there, it's just kind of a bit like, uh, who told you to mention that?

Farb: But it's funny, one of their staffers, uh, one of their staffers probably listens to AI Daily, I imagine. You know, probably getting most of their information there. I'm not sure how else they'd be

Conner: so well informed as they were yesterday.

Farb: Yeah, absolutely. All right, some cool news today. Big news as usual. And, uh, we'll see you tomorrow with more on AI Daily. See you tomorrow, guys. Have a great day, all.
