Join us on this episode of AI Daily as we discuss three exciting news stories. First, we delve into Sam Altman's testimony in front of Congress, where he discusses the future of AI regulation and its impact on the economy. Then, we explore Quora's Poe API, a groundbreaking web browser for LLMs that allows developers to bring their own language models to the platform. Finally, we cover Apple's latest accessibility announcements, including live speech and personal voice advancements. Tune in to gain insights and discover the intriguing developments in the world of AI.
Sam Altman's testimony:
Sam Altman testified in front of the Senate Judiciary subcommittee on Privacy, Technology, and the Law about AI regulation.
He emphasized the importance of AI safety and the future of the economy with AI.
Altman's approach of being open and accessible to lawmakers was praised.
Senators expressed surprise that technology was actively seeking regulation.
The hearing focused on past mistakes in technology, such as social media, and the need for good regulation.
Quora Poe API:
Quora introduced the Poe API, positioning Poe as a web browser for large language models (LLMs).
Poe aims to allow developers to integrate any type of LLM, including custom models, into their applications.
The API offers features like chaining, monetization, and human feedback collection for reinforcement learning from human feedback (RLHF).
It provides a one-click Repl for easy API usage and built-in integrations with LLM frameworks.
The focus is on enabling developers to bring full LLM experiences to users, not just plugins.
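To make the "bring your own LLM" idea above concrete, here is a minimal sketch of what such an integration pattern generally looks like: a platform defines a small bot interface, and any custom model that implements it can be plugged into the chat loop. Note this is an illustrative sketch only; the class and function names (`ChatBot`, `EchoBot`, `run_turn`) are hypothetical and are not the actual Poe SDK.

```python
# Hypothetical sketch of a "bring your own LLM" interface.
# Names here are illustrative, not the real Poe API.

class ChatBot:
    """Minimal bot interface: the platform calls respond() with the chat history."""

    def respond(self, messages: list[dict]) -> str:
        raise NotImplementedError


class EchoBot(ChatBot):
    """A stand-in for a custom LLM: replies by echoing the last user message."""

    def respond(self, messages: list[dict]) -> str:
        last_user = next(m for m in reversed(messages) if m["role"] == "user")
        return f"You said: {last_user['content']}"


def run_turn(bot: ChatBot, history: list[dict], user_text: str) -> list[dict]:
    """One chat turn: append the user message, ask the bot, append its reply."""
    history = history + [{"role": "user", "content": user_text}]
    reply = bot.respond(history)
    return history + [{"role": "bot", "content": reply}]


if __name__ == "__main__":
    history = run_turn(EchoBot(), [], "hello")
    print(history[-1]["content"])  # -> You said: hello
```

The point of the pattern is that the platform only depends on the `respond()` contract, so any model, from a hosted LLM to a whole application built on top of one, can sit behind it.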
Apple's latest accessibility announcements:
Apple announced advanced speech accessibility features, including Live Speech and Personal Voice.
The focus is on making AI technologies accessible and beneficial for people with disabilities.
Users can train their voices on their devices, creating custom voice models for communication.
The Magnifier app allows users to point at objects and have the labels or buttons read aloud.
These accessibility features leverage Apple's on-device machine learning capabilities and are expected to roll out later in the year, likely with the next OS release.
Conner: Good morning. Good morning. Welcome to another episode of AI Daily. We've got three pretty great stories for you guys today. Starting off, first we have Sam Altman's testimony in front of Congress. The Senate Judiciary subcommittee on Privacy, Technology, and the Law interviewed him this morning, uh, talking a lot about AI regulation.
He was joined by a professor from NYU, and also by, uh, Christina Montgomery, IBM's Chief Privacy and Trust Officer, and the senators, Sam Altman, and the other witnesses were all extremely concerned about the future of AI safety and the future of the economy with AI. Farb, any thoughts?
Farb: Uh, you know, I think Sam has been giving a masterclass in how to do this stuff correctly.
The bottom line is people want to know who the leaders are that are building and controlling these massively powerful technologies. Obviously the politicians want to know who these people are, and the politicians want to be understood by their constituents as caring about these things, as taking the steps to get these leaders in front of them to speak.
And Sam is, you know, getting ahead of the story just about every time, and it's a real blueprint for other tech leaders. And clearly he's watched folks in the past, other big tech leaders from big companies, who've sort of had to be dragged out in front of the Senate, who've had to be pulled out from, you know, hiding behind the magical curtain.
Sam is doing it out in public, uh, saying, here I am, I'm happy to speak about it, we should regulate this, and it's really working. And I say kudos to him, and keep doing it.
Conner: Yeah. A couple of the senators really commented on that, how surprised they are that this is the first time that new technology has really asked to be regulated.
We've seen social media in the past with Section 230; there was essentially a waiver shielding technology companies from being held liable.
Farb: This is the part that I think he's reading correctly. A lot of folks in the past, I think, were afraid to do that, because they're like, well, if I put myself out there, they're gonna start controlling me and start telling me what I can do, and all of a sudden my business is not going to be as valuable.
I don't think that's the right approach. Sam's approach has been these people wanna see your face. They wanna hear your voice. They wanna know that you're a real accessible person that's willing to participate in the social contract that we have with each other around how we, you know, live and how we engage with these cool new technologies.
And that's the right call. Just making yourself available goes a long way.
Conner: Sam seems to have a lot more goodwill in front of the Senate than past technology has. We've seen that with social media, with Meta. For the entire duration of the hearing, they were really talking about their past mistakes, in nuclear, in the genome project, and in social media, and what they can learn from that, and I think Sam was really taking a good stance. Ethan?
Ethan: No, I agree. I think you both nailed it. Um, at the end of the day, our elected officials want to do better this time. Um, and you pointed out very strongly how this is one of the first times that technology's asking to be regulated, and Congress is open to that. They know the mistakes of the past and they want to be involved in this process.
I do think it's, you know, uh, very interesting how most of the articles coming out about this do still point at the, you know, dangers of AI and talk about how bad this can be. But if you listen to the testimony, it was actually very engaging, very thoughtful. Each congressman and each person who spoke, and Sam himself, talked about the positives of this technology: how it's gonna benefit people, how it can benefit creatives, how the impact on jobs is not gonna be as bad as we mostly think. So if you listen to the testimony, it's actually very heartwarming to see how engaged our elected officials are and how engaged our kind of upcoming tech leaders are on this subject. And, uh, I was happy to hear it. It was very bipartisan.
Conner: Uh, our leaders were very engaged. Like you said, Ethan, they really seemed to know what they were talking about. They talked a lot; they mentioned a couple of times garbage in, garbage out, and how it relates to all this: we need good regulation, or else that's garbage in. Absolutely. Um, it really contrasts with the EU news we saw yesterday of them trying to crush open source.
So definitely. Okay, our next story then is Quora's Poe API. Quora announced the Poe API, where they're really trying to take the angle of being a web browser for LLMs. Poe is really the centerpiece; if you guys have used the app, it's pretty great. Ethan, what do you think?
Ethan: Yeah, Poe's been very popular for people who wanna use Anthropic's Claude and some of these other models.
And I think the most interesting thing to me here is, you know, unlike, uh, Bing or possibly Bard or even ChatGPT plugins, Poe's really, I think you nailed it, almost a web browser, in that Poe is letting you bring any type of model. So any LLM you want: if you built your own custom LLM, if you built a whole application on top of another LLM,
they want to make that available within Poe. Not just a plugin to ChatGPT, for example, but really your entire application, your entire business per se, as a custom LLM with custom features for users, put directly in the distribution funnel of Poe. So, different than a plugin, it's bringing the whole LLM experience to someone.
So I think it's a new angle for people, like using Poe for Claude, and I'm excited to see what kind of startups and developers deploy as a full LLM and not just a plugin.
Farb: Yeah. You know, as someone who's not a full-time developer myself, I love the fact that they have a one-click Repl, uh, that lets you fire up the API and a demo and just start using it. It's the sort of thing that means, you know, I have the time to actually go in there and start engaging with things, instead of having to go to GitHub and, you know, not like it's a lot of work, but even saving 30 minutes, saving 15 minutes, lets you spend some time in the middle of your day actually interacting with the code instead of setting up where you wanna interact with it.
Getting your own Repl built, uh, is really nice.
Conner: Absolutely. Yeah. It looks like they have built-in integrations with LangChain and LlamaIndex, and it looks like they wanna bring in monetization in the future. They have really all the features that you see in ChatGPT, and you only have to bring your own language model.
They give you, uh, human feedback that you can work with for RLHF. So between the alternatives of taking the open-source route and forking something like HuggingChat, or building for Poe, we'll see where developers go. But yep.
Farb: Where's the one-click Repl for every, you know, repo on GitHub?
Conner: Exactly. There's Codespaces, but yeah. Okay, next up we have the Apple announcement. Apple announced Live Speech and Personal Voice, advanced speech accessibility features. Uh, they're really taking the angle of it as an accessibility tool, but this is essentially an ElevenLabs that works on your phone.
You can train your voice on it, and now you can just type in what you want it to say, and it produces audio that sounds just like you. Ethan, what are you thinking?
Ethan: Yeah, definitely, Apple is really going the accessibility route. So they're bringing these kind of, I guess, next-generation technologies, technologies some people are still worried about, but instead of putting them all over iOS, they're focused on, you know, accessibility and disability features right now.
The coolest one I saw was that you can actually train your own voice model on your device now. So something you would use ElevenLabs or the computer for, now you just record it on your phone, and you have your own voice model for times you either don't want to talk or, for some people, can't talk. So Apple focusing on, you know, accessibility for these latest-generation models of text and images and audio, I think, is probably the right path for them right now.
Farb: This is classic Apple, getting the technology out of the way and talking about what it can actually do for people. So instead of saying, hey, AI, AI, AI, look at all this AI we're doing, they're just quietly doing AI. Not only that, they have ML chips on your device. You know, there are hundreds of millions of Apple devices all around the world, all with insanely powerful machine learning chips on the device.
What's cool about this thing is that with some of the other technologies, like ElevenLabs, and Descript does this too, you give it about 15 minutes of your voice and it will train on that. Apple is doing it on device. It's private, it's secure. Apple doesn't even want access to your voice.
It's really, really powerful. A couple of other cool things that they had in there. One, you'll be able to point the Magnifier app at, say, a, uh, washing machine, and you can point your finger at one of the labels or one of the buttons, and it'll read out the label of the one that you're pointing to.
Yeah. This, again, is happening on device. So, you know, you can see Apple's great at doing this: they build things up over years and slowly bring them together until they merge into something. So you can see that with the AR glasses Apple is expected to come out with later this year; they've been building the apps and the technologies that will power those glasses for years already.
Obviously, all these cool things that you're seeing in the Magnifier app, like being able to point to something and having it read it for you, are gonna be in the AR glasses at some point. Absolutely, absolutely. So Apple is just, you know, doing their thing, quietly crushing it, and, you know, maybe quietly the biggest AI company, uh, in the world. But I guess with just about anything Apple does, they're the biggest company in the world doing it.
Uh, very, very cool to see. Thank you, Tim Apple.
Conner: Ethan, is this ready now, tomorrow? When do we know? Does Apple have this out already, or?
Ethan: I think it's gonna start rolling out towards the end of the year, I believe is what they said.
Farb: I think typically what Apple does on these sorts of accessibility announcements, and it's actually, I think, cool that they're doing it, uh, in the accessibility world first, because as strange as it may sound, in a world where, you know, every little bit helps, it doesn't have to be perfect to be helpful.
So in a world where, you know, this is making meaningful changes in people's lives, they get it. The technology isn't perfect, it's not the smoothest, most polished consumer experience available, but that's okay, because it's really being used by people who need it. Uh, so typically they'll announce something like this on, you know, Global Accessibility Awareness Day,
um, and then they'll sort of release it in the next OS. So I would expect to see these in September or October, when, uh, the new OS rolls out.
Conner: Exactly. Okay guys, well, we had three pretty exciting announcements today. We had Sam's testimony, we had the Quora Poe API, and then of course we had Apple's announcements.
So, what have you guys been seeing? Farb, what have you seen?
Farb: You know, uh, I'm a big basketball person. Played basketball my whole life, was pretty obsessed with it when I was younger, so it was cool to see HoopGPT. Uh, maybe it's not new, but it seemed new when I was reading about it. It's basically allowing you to chat about all things basketball, ask for, you know, natural-language versions of stats from people.
But one of the cool things it seemed to do is, for example, you could say, you know, give me an image of every spot on the floor where LeBron, you know, hit a basket against the Nuggets. And so it'll generate an image where you'll see all the shots, you know, where LeBron hit the basket.
Pretty cool stuff to see. Obviously we're gonna see this in every discipline that we haven't yet, and we'll keep seeing it, uh, iterate in the disciplines where we have seen it. So, uh, just a little basketball news.
Conner: A ChatGPT plugin, or is that...?
Farb: It's its own thing. Yeah.
Conner: Very cool. Very cool. Ethan, what did you see?
Ethan: Yeah, I saw the ChatGPT fund on Twitter. Uh, they kind of kicked off, they put 50 grand into a fund, and they're gonna let ChatGPT go on and see if it can make money. Um, it's from the team at Autopilot. Um, actually a really cool team; they're also behind the Nancy Pelosi tracker. Um, so you can follow her whole stock portfolio and copy trades and kind of make money that way.
And yeah, it's fun that they're making this ChatGPT fund. It's been blowing up on Twitter. Um, cool to watch. Great team. Um, and yeah.
Conner: I like it. Uh, me, I saw Boundless, a hip-hop song that Suhail made. Uh, the lyrics were written with ChatGPT, AI was used to change his voice, and the bass instrument was Google's MusicLM.
So, I know we talked before about how MusicLM only made short segments, but I assume he just made a bunch and then strung them together. Yeah. Um, I listened to it. It's definitely like a SoundCloud-quality rap. Like, I like it. Um, yeah, pretty interesting. Very cool. We'll see where music's gonna go.
So thank you guys. Great episode today. Um, we'll be back tomorrow. Have a good one everyone.
Farb: Thanks. Bye.