OpenAI Releases ChatGPT App, Google Brings AI to Colab, & MedPaLM 2 for Healthcare

AI Daily | 5.18.23

First up, OpenAI has released the official ChatGPT mobile app, bringing the power of ChatGPT right to your fingertips. Discover its smooth interface, voice recording, and haptic features that make it a joy to use. Next, we explore the integration of an AI coding assistant in Google Colab, making it easier than ever to learn and experiment with AI models. And last but not least, we delve into the advancements of MedPaLM 2, Google's game-changing healthcare model. Discover its impressive accuracy and potential impact on the healthcare industry.

Main Take-Aways:

ChatGPT Releases App

  • OpenAI has released an official ChatGPT mobile app.

  • The app is smooth, clean, and has nice features such as haptic feedback, voice recording, and voice memos.

  • It is faster than native iOS voice transcription and allows users to search their history.

  • The app does not have browsing or plugins yet, but those features may be added in the future.

  • It uses GPT-4 and is snappy, providing quicker access to chat.

  • An Android version of the app is expected to be released soon.

  • Some users have already started using the app and replacing Safari and Chrome on their iPhones.

  • There are many fake ChatGPT apps in the App Store, but OpenAI's official app should help eliminate them.

Google Colab

  • Google Colab, a Jupyter notebook environment, is integrating an AI coding assistant.

  • This integration aims to make coding and AI learning more accessible to users.

  • The AI coding assistant in Colab is built on PaLM 2, a language model developed by Google.

  • The assistant offers autocomplete suggestions, a chatbot, and a generate feature to create new code blocks.

  • Users can benefit from AI assistance within Colab, enhancing their coding experience.

  • The integration is seen as a significant improvement for Colab, attracting users back to the platform.

  • It is expected to have a positive impact on the developer community, particularly for beginners learning to code.

  • The AI coding assistant is not yet available but will be rolled out to paid subscribers first and eventually to all users.

  • The code blocks feature is free, while autocomplete and the chatbot are available to paid users.

MedPaLM 2 for Healthcare

  • MedPaLM 2, Google's healthcare-specific model, has shown improvements in accuracy.

  • MedPaLM 2 achieved a score of 86.5% on the MedQA exam and received positive evaluations from physicians and patients.

  • The model provides quick responses and aims to assist clinicians and potentially patients directly in the future.

  • The integration of AI models like MedPaLM 2 into healthcare raises legal and ethical considerations, such as liability for decisions made contrary to AI recommendations.

  • Technical details of MedPaLM 2's advancements include adversarial question testing and efforts to push accuracy toward 100%.

  • The development of application-specific integrated circuits (ASICs) for running large language models (LLMs) offers significant performance improvements compared to GPUs.

  • Microsoft and Apple are also investing in AI-specific chips, indicating the growing trend in the industry.

Links to Stories Mentioned:

Follow us on Twitter:


Transcript:

Ethan: Good morning and welcome to AI Daily. We have some pretty interesting stories today. We're kicking off with big news out of OpenAI releasing a ChatGPT mobile app. This has been, I think, requested by people for a long time now, people using the web browser for ChatGPT, or some of the other apps that popped up in the App Store.

But now we have an official ChatGPT app from OpenAI. Conner, Farb, I think you've both used it a bit. I haven't gotten too deep into the weeds of it yet, but Conner, you've used it. Show me your favorite features. What are we looking at?

Conner: Sure. It's just smooth and clean to use. It's a big jump.

I was using ChatGPT as a progressive web app before, and now the native app has some nice features. It has the haptics, it has the voice recording, the voice memos. It's very quick, too. I think it's using Whisper, because it's a little bit faster than native iOS voice transcription. Just overall, it's very smooth.

You can search your history, and it's the little haptic touches, like I mentioned, that make it very great to use so far.

Ethan: Farb, have you gotten to use it? I think you said they don't have browsing or plugins yet, but what do you think of it?

Farb: Yeah, it doesn't look like there's browsing or plug-ins yet.

I'm sure that's coming. It has GPT-4, which is cool. It's super snappy. My guess is I'll probably start using it. You know, I fire up the web version on my phone pretty regularly, so having quicker access to the chat is just gonna make you use it more. That, I think, is smart. They should see their numbers go up with this.

I think an Android version will be coming out soon. And yeah, I thought it was great for their first app.

Ethan: Yeah, it seems pretty cool. I saw some funny tweets on Twitter, people removing Safari and Chrome from the home bar on their iPhone and replacing them with the ChatGPT app. So, more to come, but really cool to see OpenAI launch this official app and actually get it in the hands of every iOS user.

Farb: So, are they using Whisper on device or is it going to the cloud and getting….

Conner: No, they haven't even said if they're using Whisper. That's just some of the thoughts I've seen on Twitter. It's pretty funny, though: if you search ChatGPT in the App Store, it's not even in the first top 20 results.

Because there are so many fake ChatGPT apps out, about 30 of them.

Farb: Well, I saw somebody tweeting about, oh, here's a list of fake ChatGPT apps you should delete from your phone. So it's good to see that there's an official one out there, and hopefully some of these scammy ones will start disappearing.

Ethan: Yeah, we can only hope. But let's move on to our second story, which is AI in Colab. So Colab, if you don't know, is Google's hosted Jupyter notebook environment: a really easy way to run Python, learn AI models, train AI models, and test out some of these latest things.

But they announced today that they're bringing their AI coding assistant into Colab, which I think will be another kind of boom, similar to Replit: teaching more people how to code, getting more people into AI by using AI to help them. It's built off of PaLM 2. Conner, Google's putting this in all of their products, it seems.

How do you think this will affect people using Colab?

Conner: So it's honestly a pretty big jump for using Colab. I was using a few other Jupyter notebook hosts before this, but I think I'll go back to Colab. It's very nice to see these integrated AI features. The autocomplete, I of course saw that coming.

VS Code has had Copilot for a while, and the chatbot is also coming to Copilot, so I saw those coming. But the generate feature they have, where you generate new code blocks just by typing in a prompt, I didn't see that coming, and I like it a lot. The example they gave is "import data.csv as a dataframe." It's a very great way to use AI within a Jupyter notebook.
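
The generate feature Conner describes turns a plain-language prompt into a new code cell. For a prompt like "import data.csv as a dataframe," the generated cell would presumably look something like this sketch (the CSV contents and column names here are hypothetical, inlined with StringIO so the snippet runs standalone; in Colab you'd have a real data.csv on disk and call pd.read_csv("data.csv") directly):

```python
import io
import pandas as pd

# Stand-in for a data.csv file on disk (hypothetical contents).
csv_text = """name,score
alice,90
bob,85
"""

# Roughly what an "import data.csv as a dataframe" prompt would generate:
df = pd.read_csv(io.StringIO(csv_text))

print(df.shape)          # (2, 2)
print(list(df.columns))  # ['name', 'score']
```
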

Ethan: Yeah, it's kind of meta, bringing AI to AI. Farb, you and I have been in the weeds of Colab for a long time now, always messing with stuff. Is this something you'd use? Does it excite you?

Farb: Yeah. You know, last year when we were doing Namesake, I think I spent two months inside of Colab and didn't see the light of day, other than just staring at Colab repeatedly.

So it's great to see this in there. They probably have what, tens of millions of people using it, or some insane number. I hope they share some news on the metrics as it gets out there. It's not quite available yet, I don't think; they're gonna start releasing it to their paid subscribers first, and then eventually to everybody. I know the news said this is freely available to everybody, but not quite yet. It'll get there. It's gonna be everywhere, and this is a great place to have it. Lots of people use Colab; lots of people are learning to code inside Colab. Huge win for the developer community, especially the folks that are just starting to learn.

Conner: I believe the code blocks are free, but autocomplete and the chatbot are only for paid users, which makes sense. Those take a little bit more context than generating does.

Ethan: Well, still cool to see them bringing PaLM 2 into all their products, and Codey, as they call their code models. So, more to come. Not ready yet, but exciting to see it come to Colab. Our last piece of news today was actually announced at Google I/O last week, but with MedPaLM 2, there are some more details around how good this model is becoming.

So if you don't know, MedPaLM 2 is Google's model specific to healthcare: answering healthcare questions, helping clinicians, even possibly helping patients directly in the future. It scored 86.5% on the MedQA exam, and there were a lot of really interesting evaluations in it.

Farb, what do you think of these kinds of specific models as we see MedPaLM 2 get better? What does this mean for the entire healthcare space?

Farb: To be honest, if I understood it correctly, they said that physicians from multiple countries rated these responses as being accurate and helpful. And I think even patients were rating them as well, finding them helpful. They were quick to generate responses. It'll be interesting to see: one thing that comes to mind is, when is there gonna be the first case where a physician decides to do something the AI didn't say, sort of go against what the AI said, and then get in trouble for it?

People talk about it in the direction of doing something where the AI is wrong, but what about when the AI is right, and a physician didn't follow what the AI said, and somebody comes back later and says, I don't understand this. The AI gave you the correct information about my healthcare. You did something else, and now I'm suffering harm because I listened to you instead of the AI. It's gonna get really crazy, you know, who's underwriting what? All these sorts of things are gonna be litigated, literally, at some point. So, absolutely, I think in the long run it's a big win for human health, which is the main point.

But it's gonna be a bumpy ride here and there as we try to integrate this into medical care.

Ethan: Definitely. And Conner, one thing I was finding interesting from the more technical piece of what they're dropping is, in something like medical, you need this to be as accurate as possible. To all of Farb's points on the ramifications of this, between underwriting, litigation, et cetera. I saw they were using some interesting things like adversarial questions, trying to push the bounds to get it even to a hundred percent on some of these tests and some of these situations.

What'd you think of some of the technical details they dropped?

Conner: Yeah, I mean, exactly: focusing really deeply on medical was definitely the push here. Doctors still mess up, but if these can beat out doctors, that's really the push. From a quick search, it looks like GPT-4 scored an 84% on that test. So even though it's not a medical model, it gets very close to MedPaLM 2. We'll see more of that comparison.

Ethan: 2% is someone's life, absolutely. Well, as always: Conner, Farb, what else are you guys seeing in AI?

Farb: It didn't make the cut for today's stories, but I thought the Etched news was pretty interesting. They're announcing ASICs, application-specific integrated circuits, that will run, I think, individual LLM models. I wasn't exactly clear whether one would run a family of them or is literally tied to one individual LLM. They claim they can get a hundred percent, sorry, a hundred x, it's a lot more than a hundred percent, a hundred x performance improvement versus GPUs, which is amazing. It's inevitable these things are gonna happen. There's so much more headroom for AI and LLMs and chips to improve.

As amazing as these early days are, it's just getting started. I think Microsoft, if I remember correctly, said they were making AI-specific chips. Apple has ML chips inside every one of their iPhones these days. This isn't stopping anytime soon. It's great to see more people get in the game.

Farb: Conner, what about you?

Conner: Yeah, that was gonna be my news too. My news gets stolen again by Farb. I think it was a hundred x cost savings and a 140x performance boost. It's very interesting: like Farb mentioned, they're specifically tuning the chips themselves, as an ASIC board, to fit these specific transformers. I don't think they're very general purpose. I think it would have to be specifically tuned to, like, a GPT-4. It wouldn't work on a GPT-3.5, I think.

Ethan: Wow, fascinating. Well, really cool news out of Etched. I've been having some really interesting conversations with some friends in the finance world the past few days, talking about how LLMs are affecting everything from hedge funds over to PE funds.

I mean, there are analysts on the ground who normally are writing up all of these documents about the state of macro, the state of the Japanese yen. They're getting these kinds of scenario analyses, they're getting these analyst reports, and they're fine-tuning all these LLMs on their own historical data, their own audio conversations.

So, interesting things in the finance world, how LLMs are actually affecting it extremely fast, versus just the statistical reasoning of the past, something like Renaissance Technologies. These LLMs are having a real impact on analysts today. If you're looking to get into finance in the future, you'll probably need to start using these LLMs. Most people break in as an analyst at these firms, and if you're not using the LLMs, you'll be out-competed. Interesting state of the world in finance. But as always, thank you all for tuning in to AI Daily, and we will see you again tomorrow.

Farb: Have a great day. Peace guys.
