
OpenAI Future Plans and Mathematics, DIDACT, & Japan's AI Copyrights

AI Daily | 6.1.23

In today's episode of AI Daily, we bring you three exciting news stories that are shaping the world of AI. First, we delve into Japan's groundbreaking stance on copyrights, allowing the use of all data for training AI models. This move showcases Japan's commitment to advancing its AI ecosystem and embracing the potential of AI to transform society. In our second story, we discuss DIDACT, the first code model that mirrors the thinking process of real software developers. By understanding the entire coding process, DIDACT brings a new level of accuracy and efficiency to code generation and debugging. Lastly, we explore OpenAI's innovative approach to mathematical reasoning through process supervision. By rewarding each step in finding a mathematical answer, OpenAI is revolutionizing how AI models learn and improving their performance. Join us as we uncover the latest developments in AI and its wide-ranging implications.

Key Take-Aways:

Japan & AI Copyrights:

  • Japan reaffirms its stance on copyrights, allowing the use of all data, regardless of commercial use or copyright, for training AI models and applications.

  • Japan sees AI as a way to save its declining society and drive future progress, taking a progressive and serious approach to its development.

  • Other countries are likely to follow Japan's lead in adopting similar copyright policies for AI, considering the advantages it offers in terms of workforce and economic growth.

  • The copyright law in Japan applies only to content produced within the country, exempting foreign-owned content from its regulations.

DIDACT:

  • DIDACT is the first code language model trained to mimic the step-by-step reasoning and process of a software developer, going beyond just providing the final output of code.

  • Google's Monorepo, with data from years of developer activity, enabled the training of DIDACT to understand the full software development stack, including error fixing, code editing, and unit testing.

  • Understanding the history and context of a developer's actions is crucial for DIDACT's ability to predict and suggest the next steps in the coding process.

  • The development of models like DIDACT reflects a parallel to human cognition, where language and reasoning abilities have evolved over time, leading to the emergence of metacognitive processes. This advancement in AI cognition has potential applications in fields like medicine and law, enabling a step-by-step understanding of complex processes rather than just the final output.

OpenAI Mathematics & Future Plans:

  • OpenAI has implemented process supervision to improve mathematical reasoning in their models, enabling a deeper understanding of the step-by-step process of solving math problems, rather than focusing solely on the final output.

  • Process supervision aligns with the way humans learn, as it provides feedback at each step of the problem-solving process, reinforcing learning and understanding.

  • This approach signifies a shift towards considering the entire process and not just the end result, mirroring the way education is conducted in the real world.

  • OpenAI's focus on securing more GPUs to enhance the performance and affordability of GPT-4 demonstrates their commitment to addressing limitations and advancing AI capabilities. Additionally, they discussed the challenges with plug-ins and the need for seamless integration into existing platforms to provide a more efficient user experience.

Links Mentioned


Transcript:

Ethan: Good morning and welcome to AI Daily. Today we have three great stories revolving around some regulation, some LLM technology, and a lot of interesting news out of OpenAI. So let's kick off with our first story, which is Japan's stance on copyrights. The Japanese government has pretty much reaffirmed that all data, whether for commercial use or not, no matter how it was obtained, no matter what the current copyright is, you are allowed to use it in training AI models, and of course in utilizing these models in applications and inferencing on them.

So they've pretty much taken a stance that says, we are open to all data. They're looking to reaffirm their own positioning in AI and continue to grow the Japanese AI ecosystem. So Farb, tell us your take on this.

Farb: You know, Japan basically came out and said, if you're gonna use our anime and manga in your various trainings of AI and generative AI, we're gonna use the entire corpus of your English knowledge that has been, uh, you know, part of the training sets for this stuff.

Pretty awesome and somewhat hilarious position to take. And you know, I think Japan is looking at a world where AI could, you know, save that society. Its population is declining. They're the third largest economy in the world, although it's been a relatively stagnant economy for a few decades now, and I think they see that this is their path to the future, and they're not going to tread it lightly.

They're going to speed down this path and they're gonna do what they're gonna do, and you can do what you're gonna do. And they're kind of fine with that. It's pretty progressive, but at the same time it lets you know, I think, how seriously they're taking their own society and what AI could do for it.

Ethan: Conner, what's your take? Do you think we'll see other countries start doing this, or is this uniquely Japanese?

Conner: I think we'll see other countries start doing this now that Japan has taken the lead here. Now that Japan is gonna be producing models that are vastly superior in capability and don't have the copyright problems of current large models, um, I think we'll see other countries follow their lead.

Um, in hindsight, this makes sense. Japan, as Farb mentioned, has declining birth rates. I've seen some mentions that Japan's workforce is almost unionized in a way where it's hard to really get progress, but AI helps a lot there. But even though in hindsight this makes sense, no one really saw this coming, because the discussion before was, oh, what if a rogue country, oh, what if a rogue third world country puts out a law like this. But now that Japan, a member of the G7, does this, it's surprising in a lot of ways, but in hindsight it does make sense.

Ethan: So, yeah, I wonder how this works out day to day. I don't know if either of y'all know about this, but let's say you're a, you know, Japanese citizen and you wanna train a new AI model on Disney. Are these tentacles of the US gonna reach over there?

Conner: I believe it's only content produced in Japan that the copyright law covers. So content copyrighted by another country, Japan does not include that, cause they don't own that.

Farb: No self-respecting rogue first world country is going to let a rogue third world country do something that they could have otherwise done.

Ethan: Interesting. Well, Godspeed to Japan. Um, our second news today is DIDACT. So this is the first code LLM that's actually trained on, pretty much, a process of reasoning. It's modeled on the way a real software developer codes. So if you think of, you know, GPT-4 or some of these other coding models like Codex, you're putting in an input and it's gonna output all the code you need.

We've seen so much progress on chain of thought and reasoning. You know, let's think step by step, let's have the model go from A to B to C and finish its workload like that. So this is the first code model trained on the way a developer works: editing the code, fixing bugs, reviewing errors, and producing the end output.

So Conner, tell us more on what the training of this looks like and why it's actually important for code.

Conner: Yeah. A model like this was really only possible by Google, because Google has a decades-old monorepo that has the exact changes and the exact steps a developer goes through to update a repo, to update a code base.

Google has very specific data from every developer that's ever worked at Google. In comparison, what we see from Copilot, from Replit's Ghostwriter, those models are really only trained on the head of the code, on the latest code, on the final product of the code. Mm-hmm. And how those models are trained means that you can only really get the final output.

But DIDACT, out of Google, means that we can now train, and we now have, these models that are able to fully follow how a software developer works: fully going through errors, repairs, comments, tips, following unit tests, everything about the software development stack. And that's really a big leap from just the final output of code.
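The idea Conner describes, training on the steps of development rather than just the finished code, can be sketched roughly in Python. The session format and field names below are hypothetical illustrations, not Google's actual data schema:

```python
# Hypothetical sketch of DIDACT-style training data: instead of
# (prompt -> finished code), each example pairs a developer's action
# history with the action they took next.

def make_training_examples(session):
    """Turn a recorded dev session into (history, next_action) pairs."""
    examples = []
    for i in range(1, len(session)):
        history = session[:i]        # everything the developer did so far
        next_action = session[i]     # the action a model would learn to predict
        examples.append((history, next_action))
    return examples

# A toy session: edit code, run tests, see a failure, fix it, tests pass.
session = [
    {"action": "edit", "detail": "add function parse_date()"},
    {"action": "run_tests", "detail": "1 failure: TypeError"},
    {"action": "edit", "detail": "handle None input in parse_date()"},
    {"action": "run_tests", "detail": "all passing"},
]

examples = make_training_examples(session)
# Yields 3 examples: each action predicted from the history that preceded it.
```

A model trained on final code snapshots only ever sees the last entry; pairs like these are what let a model learn the repair-and-retest loop itself.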

Ethan: Yeah, and we're about to touch on this in the next story too: the importance of doing this kind of step-by-step reasoning and understanding the entire process that gets you to an output. Um, Farb, of course, we've worked a lot with, you know, chain of thought reasoning and how that affects applications. What do you think of this? You know, why is it so important to understand the entire process of an output and not just the output?

Farb: You know, I think one of the important pieces in DIDACT is its understanding of the history of what the developer's been doing. Yeah. So that it can better predict what should happen next. The developer makes this change here, they put their cursor over here in the documentation, and it's, boom, here's the documentation for the change you just made.

So I think, you know, when you're talking about chain of thought, chain of reasoning, uh, you can't do that without history. And it's interesting, there's all this metacognition going on in the sort of AI development world, and interestingly enough, to me, I feel like it follows the development of human cognition.

You know, long ago we developed language and we became these beings that have, uh, you know, these language models in our heads. And then over time, people took that ability and started creating metacognitive, you know, stacks on top of it where the reasoning was developed more. Okay, so we can speak and we can remember what we said. Okay, how do we use that to reason more deeply? Uh, and so it's interesting for me to see the development of AI's cognition in some ways, you know, match the development of human cognition, probably just a million times faster in terms of the timeline.

Ethan: No, yeah, I like how you pointed that out. And I think we're gonna see more models like this, uh, affecting things like medicine and law too. So not just getting the output of a medical opinion, but actually going step by step: hey, let's integrate the clinical research searches, let's integrate this. And, you know, understanding the process, like you said, that metacognition, to get to the output.

So really cool stuff. Um, which goes into our next story really cleanly as well: two different pieces from OpenAI. The first one is, OpenAI pretty much improved mathematical reasoning through what they call process supervision. Um, so Conner, please speak more on it, but pretty much we're looking at very similar things, right?

An improved chain of thought reasoning that understands the process of math and not just, hey, here's a possible output.

Conner: Yeah. So far the models have used outcome supervision, where you only reward based off finding the correct answer. But this process supervision now rewards based on each step of finding a mathematical answer.

And this really mirrors how we do education in the real world, how we teach humans, how we teach kids to learn. It's not just, hey, you randomly got the right answer, good job. It's, oh, you followed each step correctly, you followed the entire mathematical process correctly, good job. And that is a much stronger reinforcement of learning in real people, and clearly in real models as well.
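The outcome-versus-process distinction can be sketched with a toy example. OpenAI's actual process supervision uses human-labeled reasoning steps; this sketch just checks toy arithmetic steps mechanically, which is enough to show why a lucky wrong derivation gets full credit under one scheme and partial credit under the other:

```python
# Toy illustration of outcome vs. process supervision (not OpenAI's code).

def check_step(step):
    """A step is a string like '3 * 4 = 12'; verify the equation holds."""
    lhs, rhs = step.split("=")
    return eval(lhs) == eval(rhs)  # acceptable here: trusted toy input only

def outcome_reward(steps, final_answer, target):
    """Outcome supervision: one reward, based only on the final answer."""
    return 1.0 if final_answer == target else 0.0

def process_reward(steps, final_answer, target):
    """Process supervision: credit assigned to every correct step."""
    step_scores = [1.0 if check_step(s) else 0.0 for s in steps]
    return sum(step_scores) / len(step_scores)

# Solve 3 * 4 + 4 = 16... except the first step is wrong (3 * 4 != 13),
# yet the errors cancel and the final answer comes out "right" by luck.
steps = ["3 * 4 = 13", "13 + 4 = 17"]
outcome = outcome_reward(steps, 17, 17)   # full reward despite bad reasoning
process = process_reward(steps, 17, 17)   # the bad step is penalized
```

Under outcome supervision the flawed chain scores 1.0; under process supervision it scores 0.5, so the model is pushed toward reasoning that is right at every step, not just at the end.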

Ethan: Farb, I think you covered it well in the last segment, but anything you wanna touch on with this kind of process supervision even more?

Farb: Yeah, I mean, I think you're exactly right. There's this, uh, mapping to how humans work: giving people feedback at every step of a process instead of just at the end of the process.

Not surprisingly, it performs better when you're giving feedback along the way. You know, a lot of these things are a combination of, uh, eventually getting to the point where there's some developers that can start working on this part of it because they worked on the previous part. Now they can move on to this part and make this better.

Uh, processors get better, you get more GPUs in your farm, you can do more things with it. Uh, this is, like we've said, just the beginning of all this stuff. And you're right, it's very closely related to what's happening with DIDACT. And what's awesome to me is that we're continuing to see this rapid proliferation of the application layer of AI, what you can do with it.

Uh, and we're just as equally seeing the rapid proliferation of the basic research and fundamental infrastructure that AI is building on. So, you know, I don't think we've ever seen a world where the leader at something seems to be switching on an almost daily basis. A couple of people on Twitter have been complaining that they feel like GPT-4, uh, has lost some of its mojo, especially with regards to writing code.

I haven't, you know, done a benchmark myself, so I can't say whether this is accurate or not. But the number of people that are talking about it makes you wonder: maybe that is the case. So it's interesting to see DIDACT from Google coming up and being like, oh, here is something that could potentially be the greatest AI software developer ever, and GPT-4, that seemed like it was the greatest AI software developer ever, maybe isn't. Uh, this is pretty exciting stuff, to see this rate of change and, you know, who's the best at any one thing almost changing on a daily basis. I think the chaos is good for individuals, individual developers, the open source community, because there's not just some understanding that there's a leader that's never going to change.

Conner: Absolutely. It's also very interesting to note that a lot of these methodologies, like process supervision, like DIDACT, even like chain of thought, would've worked with the models we had three years ago, with GPT-3, et cetera, from back then. But we've only just discovered these thoughts and practices now and are only applying them to the larger models, even though they would've worked years ago.

Ethan: You can only connect the dots looking backwards. But absolutely, super cool. Uh, segueing again with OpenAI: you know, we got a lot more color, uh, from Sam and from OpenAI on how they're thinking, what some of their bottlenecks are. Um, I'd like each of you to touch on, you know, the most interesting thing you saw from that.

There was a lot in there. To me, the main point here, you know, we've talked about Nvidia this week, but OpenAI and Sam are saying, hey, we are limited on GPUs, and that's affecting, you know, their fine-tuning API, that's affecting them rolling out multimodal, that's affecting them rolling out longer context windows.

And their plans for this year, as he said, were first, they wanna make GPT-4 faster and cheaper, and then start accomplishing those goals. But to do that, they're limited on GPUs, just like every other startup and business out there. So I found that really interesting. But there was a lot of color in that.

Farb, anything you saw, um, from OpenAI, kind of Sam's plans, that you wanted to touch on?

Farb: You know, I think he's right about what he's saying, pretty clearly, and I think again we're just seeing this natural tension where, uh, more processors are coming online, the processors are getting better, but I think as profound, maybe more profound, is people learning how to get more out of less. Uh, and once the flywheel of AI pushing AI faster starts going and you apply it to getting more out of less, hopefully we'll see this. You know, we'll always be riding this tension line between what we can do with this amount of processing power and what we could do if we had more.

So I don't see this problem really ever going away, cuz we're always gonna push up to the edge of what we can do with the amount of processing we have our hands on.

Ethan: Conner, anything you wanna touch on with kind of OpenAI's plans and what they talked about? I saw also they mentioned plug-ins. They're not having product market fit with plug-ins.

Conner: Yeah, I was gonna comment on that. I've had plug-in access for a bit and I've used them a few times. The Wolfram Alpha one's a little bit useful, but overall plug-ins are not that useful, and if people aren't using 'em in ChatGPT, then I agree with Sam, I agree with OpenAI. It doesn't really make sense to offer them through the ChatGPT API.

Ethan: What's your thoughts on why, like, people aren't seeing this? You know, at least anecdotally from using plug-ins, it's less so that the plug-in end state is not useful. Um, you know, I'd love to be able to browse, et cetera, but the actual experience is pretty subpar right now.

It's slow. It's pretty ineffective. Why do you think this is happening?

Farb: This is like, you know, if you want to get a reservation on OpenTable, do you wanna talk to yourself about the reservation and how to go about doing it, or do you wanna just go book the reservation? So you know, talking with another intelligence or entity about something that you want to have done, and then having it do it, is just, you know, one more node in the process of accomplishing things.

So I think in some ways, depending on the task, it makes sense to have someone that you're talking to about accomplishing it, and for some tasks it doesn't. And you know, understandably, they're not starting with, you know, hey, develop a new time machine for me. They're starting with, make an OpenTable reservation. But it turns out that you're better at making an OpenTable reservation than you are at talking to somebody else about making an OpenTable reservation.

Conner: It's kind of an issue of layers, where instead of normally going straight to OpenTable, now you have to go to ChatGPT, then to OpenTable. So if this solution was built into the OS directly by Microsoft, like we're seeing with Windows 11, or by Apple in the iPhone, it's a much different experience than having to go to an app and then use plug-ins within that app.

Ethan: Absolutely. Well, we'll link it below. Um, there's a lot more info on kind of OpenAI's plans, their, um, regulation plans, their current bottlenecks, what their near-term future looks like, the fine-tuning API, a lot of cool things. But to move on, what are y'all seeing? What else is interesting to you, Farb?

Farb: You know, it's really cool. Earlier we talked about Japan, uh, and their not caring about copyrights with regards to AI training sets. Now we have the UAE coming out and releasing Falcon, this massive, uh, my understanding is the best performing open source model that's currently out there. Uh, they removed the restrictions, so you don't have to pay for it.

It used to be that you could use it, I think, commercially, but after a certain sort of, uh, revenue run rate, you'd have to start paying for it. They've removed that. It is not uncensored, it's still a censored model, but it is free and available for people to use, and you know, I don't think anybody had the UAE coming out and leapfrogging everybody else in the open source AI world on their bingo card. Good for them.

Conner: Absolutely. Yeah, I saw Supabase is now really pushing their Supabase Vector. Uh, pgvector has been out for a little bit, and a lot of people, including myself, have been starting to use pgvector inside Supabase. It works very well, and I think Supabase saw that level of adoption, and now they're pushing Supabase Vector as a pretty big thing.

Uh, they've written some libraries around it, and this really is a much better way to use a vector database, in my opinion, than something like Weaviate, or something like Pinecone, or even something like Chroma. Because this integrates directly into your database, and the metadata lines up in the columns along with your vectors, and it's way more straightforward to use as a developer than these third-party databases.
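The pattern Conner describes, vectors living in a column right next to your metadata, can be sketched without a live database. With pgvector you'd write something like `SELECT title FROM docs ORDER BY embedding <-> '[1,0,0]' LIMIT 1;`; the pure-Python stand-in below mirrors that ordering using Euclidean distance (what pgvector's `<->` operator computes), with illustrative table contents and column names:

```python
import math

# In-memory stand-in for a Postgres table with a pgvector column:
# each row keeps its metadata (title) next to its embedding.
rows = [
    {"title": "intro to embeddings", "embedding": [0.9, 0.1, 0.0]},
    {"title": "postgres tips",       "embedding": [0.0, 0.8, 0.2]},
    {"title": "vector search guide", "embedding": [0.7, 0.2, 0.1]},
]

def l2(a, b):
    """Euclidean distance, the semantics of pgvector's <-> operator."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(rows, query, limit=1):
    """Equivalent of: ORDER BY embedding <-> query LIMIT n."""
    return sorted(rows, key=lambda r: l2(r["embedding"], query))[:limit]

result = nearest(rows, [1.0, 0.0, 0.0])
# The matched row's metadata comes back alongside the vector: no second
# lookup against a separate vector store is needed.
```

This is the developer-experience point: the nearest-neighbor query and the metadata filter live in the same SQL table, rather than being stitched together across a third-party vector service and your primary database.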

Ethan: So you see the SQL databases and actual providers supporting these features, and you don't have to go set up Weaviate and manage linking the metadata and all of that annoying stuff. So super cool. Um, I saw that Google was investing in Runway. Runway has been a favorite tool of mine for, gosh, years now.

I've watched them since the very beginning, when they first released their kind of demo as a Mac app. Um, so really excited for them. I think they raised almost a hundred million from Google. It's also, I think it's important, we're seeing Google, just like AWS and some of the other big cloud players, of course Microsoft Azure, trying to embed themselves with some of these startups that really have a lead, and making sure that, hey, they're getting those GPU dollars. So when Google's investing in Runway, I think we'll be seeing Runway deploy some workloads on Google Cloud. So cool to see. Overall, excited for the Runway team. But outside of that, thank you all for listening to AI Daily, and we will see you again tomorrow.

Conner: Peace guys.
