Welcome to the newest episode of AI Daily! Today we delve into some tantalizing topics: military AI, geopolitics, and a controversial lawsuit against OpenAI's ChatGPT. Our first segment takes a closer look at the collaboration between Scale AI's Donovan and Cohere to bring LLMs to defense and government. Following that, we'll dive into the unfolding lawsuit: is this a legitimate copyright infringement concern, or just a publicity stunt? Lastly, we'll discuss China's access to GPUs and how changes in international policy might affect it.
Key Points
1️⃣ Military AI - Scale Donovan
Scale AI's Donovan is partnering with Cohere to provide LLMs to the US government and defense, focusing on data ingestion and military decision-making.
They're offering a free trial to government employees, with preloaded data sets focused on China, hoping to position themselves as a significant provider amid current geopolitical tensions.
To gain acceptance, they must achieve FedRAMP approval. If successful, LLMs could transform how the Department of Defense handles operational documents.
2️⃣ Lawsuit Against ChatGPT
Two authors have filed a lawsuit against OpenAI, claiming ChatGPT used their books in its training data and is profiting from their intellectual property.
The authors aim to determine whether their specific works are in OpenAI's dataset, but it's unclear whether the model was trained on the actual books or just on summaries of them.
The case brings to light a shift in public perception of AI, with people moving from seeing it as advanced search to seeing it as a potential infringer of intellectual property.
3️⃣ Geopolitics - US Restricting China’s Cloud Access
The US administration aims to restrict China's access to advanced GPUs via cloud providers like AWS and Google Cloud, furthering export controls and impacting businesses.
The hosts suggest these actions are strategic negotiation tactics in the larger geopolitical context, using areas like AI and semiconductors as bargaining chips.
Compliance controls on cloud platforms reflect changing perspectives on the significance of advanced technology resources, transitioning from unrestricted access to closely regulated use.
🔗 Episode Links
Connect With Us:
Follow us on Threads
Subscribe to our Substack
Follow us on Twitter
Transcript:
7.7.23
Ethan: Good morning and welcome to AI Daily, happy Friday. We have three fantastic stories for you today, covering a military LLM, some geopolitics, and a lawsuit against ChatGPT. So let's kick off with our first story, which is Scale AI's Donovan. Donovan is Scale's opportunity to try to bring LLMs to government and defense.
They're working with Cohere, the LLM provider, and they're working pretty broad spectrum right now: talking to the military about decision-making, talking to government employees about ingesting data. They had a couple of interesting stories in here, but the military has not been the most open to LLMs yet.
Farb, what do you think of their positioning here and of the product itself?
Farb: I think it's clever to offer this free trial to any government employee. They've clearly taken a few angles here. One is the free trial itself, open to any government employee. Two, there are a lot of China-related features in this trial that you can use.
So they seem to be trying to position themselves as a real provider for the US government and US defense, jumping right into the geopolitical storm that the world is in these days. So kudos to them for the guts, and let's see what the practical outcomes are.
Ethan: Conner, what do you think of the product in terms of the chat interface? It looks like they're ingesting data, probably doing a lot of embeddings, maybe throwing up a map and letting you interact with some of it. Anything interesting on the product side you saw?
Conner: Yeah, I think they're pretty focused on this wide range of very specific data that's normally hard to interact with, especially if it's classified, et cetera.
They have several data sets preloaded, including think tank reports on AI, counter-narcotics documents, Chinese technical documents, cloud environment documents, and several unclassified assessments of Chinese military capabilities. So it's all very China-focused, targeting military and .gov emails, and anyone can sign up and go use it.
They're trying to show that, hey, these data sets are really useful to interact with on our platform, and you can even upload your own data sets to interact with too. I think they're really trying to scale this.
Farb: They make an interesting point that nobody else is taking it easy on this stuff, so either step up to the game or get left behind, which is tough to argue with.
Conner: Yeah, it's pretty hard to interact with this type of data normally, but they're working with AWS GovCloud and with Cohere to make it secure and compliant in the end, which is the most important part of this.
Ethan: Yeah, there's another product called Ask Sage, from a previous Air Force CIO, so kind of the same idea in the LLM space: a chat interface to interact with these documents. I think there's a lot of open space here. They definitely need to get FedRAMP approved; as of right now, until they finally get that, no one's really gonna use this for anything serious. But I think LLMs will have big potential for the DoD in terms of reading operational documents, et cetera.
So good to see someone working on it. Our second story of today is a lawsuit against ChatGPT. Two authors came out and said: hey, we saw that you could generate summaries and paragraphs from our books. You clearly used our books in your training data, you're profiting off our IP and our works, and we want to put a lawsuit out there.
I think the two main questions that come to mind for me are: what does this actually play out to be? Is there a scenario in which these lawsuits actually go through and they've got to redo some of this training data? Or is this just a couple of people trying to make a name for themselves?
Conner, what do you think?
Conner: Yeah, internally OpenAI uses Books2 as a data set to train GPT-3 and GPT-4. Everyone speculates it's very similar, because of its size and some other things, to Books3, which is an open data set of mostly torrented books, and that's really the only way to train these types of models.
So there's no way to know if these specific authors' books are in there. It could just have summaries of them in there, and GPT-4 can just repeat those summaries. We don't know, but they're trying to find out, apparently. Farb, what do you think of this?
Farb: Is there a pathway where these things go through at a court level and there's retraining, or is this all pie in the sky? I think one possible way, and this is just the first thing that came off the top of my head, is you have a scenario where OpenAI says: this is our proprietary information, we don't just tell everybody what our training data is. They're obviously not gonna roll over every time someone throws out a lawsuit and say, cool, let's put all of our training data into the discovery process in the courts
and make it publicly available forever, for everyone. They're gonna fight against that, and I think the court's gonna say something along the lines of: okay, how do we resolve this in a way where you can maintain your secrecy over your training information and we can get down to whether or not you've done anything illegal?
And there are lots of approaches there. I think one approach for OpenAI could essentially be: hey, we can't quite tell you whether this is trained off of the book itself or, like Conner was saying, off of open summaries of the book. You might actually get better summarizations of books from training on other summarizations than from the actual books themselves.
So they could theoretically go to the court and say: here's a whole ton of summaries of a certain book, and here we can show you that from those we can create very convincing summaries of that book. That's probably all that's happening here, and since that's possible, it's a plausible scenario.
And: we're not gonna give you access to all of our training data for you to figure out whether that's the case or not. There are lots of summarizations of things out there, so is it illegal to summarize a summarization? It's not gonna be an easy one to dig through, and I'm guessing one side's got a little bit more money for lawyers than the other.
Conner: I agree, very well said. It's kind of a plausible deniability thing: if OpenAI can plausibly show that they can generate these summaries just off of other summaries, that's perfectly legal.
Farb: So, just to let OpenAI know: I'm not currently available to join your legal team, but if that changes, I'll let you all know.
Ethan: I think it's interesting too, because the same class of people, I guess, went from saying, hey, AI is just advanced search and autocomplete, and there's no problem with Google being able to search books. And at this point they're saying, okay, wait, no, it's possibly actually impacting profits.
So it's interesting to see this switch, with people saying, hey, AI is actually taking our IP, it's not just search. We'll see how it plays out.
Conner: That's a very interesting contrast, because with Google Books you can technically search any part of any book and see the whole page of the book.
People don't know if that was used to train Bard. Google says they can use anything to train on.
Ethan: So yeah, Google updated their terms and conditions too: anything on the open internet, they can train on. So we'll see how it plays out. Our third story of today is around China and restricting their access to even more GPUs. We've talked on the show before about some of the export controls coming into play, not selling Nvidia GPUs, A100s and H100s, to China. The main loophole people have been finding is, hey, a Chinese company can just call up AWS or Google Cloud and rent some of those chips.
So the administration is trying to close that loophole too, saying: hey, big cloud providers, you're not gonna be letting these companies access these chips. It'll probably impact some of their business, so I wonder how the lobbying is going around that. Farb, what do you think?
Farb: So, I think AI is the geopolitical space for the rest of our lives, probably. It's impossible to look at these things in a vacuum, just by themselves; you have to place them in the context of the larger geopolitical environment we're in. So, for example, China is restricting some exports of metals that are important for semiconductors, and the US does this too. A lot of times these are just forms of negotiation tactics, so that when they come to the table to make an agreement, they have something to work with. Whenever you're negotiating, you want to have things that you're willing to give up to the other side. So you create a whole list of things that you say are super important to you and that you're not willing to budge on.
And then when you come to the table, you start budging on a few of these, and they budge on a few of theirs. And sometimes, if you do it correctly, you end up budging on the things you didn't actually care about. So in any negotiation, both sides are posturing, both sides are putting themselves in a position so they can get to where they want to be on the other side of the negotiation.
And that doesn't even include a ton more geopolitical context when you're talking about the deep states of both sides chatting with each other; they're playing their own internal negotiation games and control and power games.
But that kind of was my original point: AI is the surface area for geopolitics for some time to come, because it is not only intellectual and information-based, it's also physical and based on semiconductors. From mining all the way to intellectual ideas, the AI space covers it. So we're gonna see a lot more of this type of thing.
Ethan: Conner?
Conner: Yeah, the US government doesn't really care that TikTok was training its algorithms and AI on A100s. But now that platforms like Donovan and other, more military capabilities are coming to fruition and becoming more useful, that's where all of this ramps up. That's why we saw, as we said, the US ban chip exports, China ban metal exports, and now the US about to ban cloud computing access.
So it's back and forth until, again, we meet at the table in the middle.
Ethan: Yeah, it's funny, we talk philosophically, and Farb always talks well on this, about how important the microprocessor is now. Ten years ago you could go on to AWS or Google Cloud and, if you had the money, get whatever you wanted.
But as you see compliance controls make their way into these platforms, it's a whole new world. Interesting as always. Well, what are y'all seeing outside of this? Conner?
Conner: I saw Focused Transformer: Contrastive Training for Context Scaling; it's a new paper. Basically, the gist of it is that they have a LongLLaMA model in it: they were able to fine-tune LLaMA models all the way up to 65B and expand the context length to 256,000 tokens. So impressive, or definitely interesting. But like we talked about yesterday with the dilated transformers and dilated attention, it remains to be seen whether these super long context lengths are as useful or as capable as our current context lengths, just because they're architected very differently.
Farb: Yeah, I saw a new text-to-video demo, and I can't quite remember which one it was right now, I apologize, but that's not really the point. It was just interesting to me how much room there still is to go in text-to-video. These are clips that are only a few seconds long, and yet they're so unusable and unbelievable,
unbelievable in the sense that you don't believe it's good video. So it's amazing to see how challenging the problem of text-to-video is, and how unsolved it is: even when you're just trying to create a few seconds, it can be really difficult. And you've got to imagine it's probably not a computation problem to generate three seconds of something convincing.
It's not like you can't get enough GPUs or TPUs to do something like that. It's more a question of, I don't think the algorithms are there yet.
Conner: No, it's a super tough problem, and there's still a lot of algorithmic work that's gonna come out of it. I'm excited to see it.
Ethan: Yeah, following on the text-to-video, I saw this AI WebTV; it's a live stream of generative AI videos. They're using Zeroscope, so maybe that was the one you saw, Farb. But yeah, you can easily paint the picture of how cool this will be a couple of years from now; it's still such early days here.
So really cool stuff. Well, thank you all, as always, for tuning in if you made it to the end of the show, and we will see you again next week.
Military AI | ChatGPT Lawsuit | AI Geopolitics