
Microsoft Azure ChatGPT | Semiconductors | NVIDIA-Hugging Face Partnership

AI Daily | 8.15.23 [Watch]

Welcome back to AI Daily. In this episode, hosts Conner, Ethan, and Farb delve into three fascinating stories. First, Microsoft introduces an enterprise-specific ChatGPT version, self-hosted on Azure's private cloud. Next up, global competition intensifies as countries race to bolster semiconductor production: Germany secures an $11 billion TSMC chip plant, while Texas backs a new semiconductor facility with $1.4 billion in state CHIPS Act funding. Finally, Nvidia and Hugging Face join forces to enhance cloud offerings. Nvidia aims to expand its cloud services and connect directly with developers, positioning itself as more than a chip manufacturer.

Quick Points

1️⃣ Microsoft Azure ChatGPT

  • Microsoft unveils Azure ChatGPT for enterprises, self-hosted on Azure's private cloud.

  • Repository briefly removed amid potential conflicts, highlighting unique deployment benefits.

  • Tailored for businesses, offering data control and secure sandbox for AI-powered interactions.

2️⃣ Semiconductor Manufacturing

  • Global competition heats up as countries vie for semiconductor manufacturing dominance.

  • Germany secures $11 billion TSMC chip plant, bolstering European presence.

  • Texas backs a $5 billion semiconductor facility with $1.4 billion in state CHIPS Act funding, reflecting chips' pivotal role in technology evolution.

3️⃣ NVIDIA-HuggingFace Partnership

  • Nvidia teams up with Hugging Face, aiming to strengthen cloud services presence.

  • Nvidia's expansion into direct cloud hosting aims to compete with established players.

  • The collaboration enhances accessibility to GPUs, potentially reshaping Nvidia's cloud industry involvement.

🔗 Episode Links

Connect With Us:

Follow us on Threads

Subscribe to our Substack

Follow us on Twitter


Transcript

Conner: We're back once again to get into three great stories. I'm your host Conner, joined by Ethan and Farb. Our first story today is Microsoft's Azure ChatGPT. They launched, on GitHub, a Microsoft Azure ChatGPT for enterprise. It's specifically tailored for enterprises in that it's essentially the same thing as ChatGPT, but open source and entirely self-hosted on Azure.

So instead of having to connect to OpenAI's servers, everything's in your own private little sandbox on Azure's private cloud. Funnily enough, a couple days after that, Microsoft actually took down this repo because, I would imagine, OpenAI's kind of mad about this. Farb, what do you think happened here?

Farb: There seems to be a backup for it, so not sure what's going on with that. It seems like this is something that could be very popular. It's tough to know what adoption is gonna be like; we're in a world where there are so many options like this. They're all slightly different in their own way, but this is a pretty good combination of teams. You know, if you're a CIO, I can see having Microsoft and OpenAI as the things behind it, as opposed to some random developer who took Llama 2 and created a version of something like this. If you're trying to CYA yourself as a CIO, you're gonna wanna have some bigger names behind it. So I could see the adoption being big. I haven't heard a ton of people start using it yet; obviously it's pretty new. But I'm pretty excited to see how it gets adopted.

Conner: I mean, it's very easy to host yourself. You have features like Azure Active Directory for login. You have features like uploading files, the kind of stuff you would never actually get in your standard ChatGPT, just 'cause it's enterprise-level stuff; it's not available there, you wouldn't find it. Ethan, what do you think about this?

Ethan: Yeah, I think enterprises definitely need this. It reminds me of when people deploy their own Oracle instance for their own enterprise environment, right? Their open source repo is pretty much just a web app wrapper and some network protocol setup for Azure. It's just an easy way to deploy this little instance. So you're not actually getting an open-source ChatGPT at all, and you're not running it on your own GPUs. It's really that they released a web app and some templates for Azure, to make sure your networks are fine and you have some places to upload your own data.

So, you know, similar to the Oracle days, similar to what enterprises need, but they probably removed it because it's not much of an open source repo, just a chat web app and some template guidelines.
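(Editor's note: for readers curious what "self-hosted on Azure" means in practice, here's a minimal sketch of pointing the pre-1.0 openai Python client at a private Azure OpenAI deployment instead of OpenAI's public servers, which is roughly the pattern a wrapper app like this builds on. The endpoint and deployment names below are placeholders.)

import openai

# Point the client at your own Azure OpenAI resource rather than api.openai.com.
openai.api_type = "azure"
openai.api_base = "https://my-company.openai.azure.com/"  # placeholder private endpoint
openai.api_version = "2023-05-15"
openai.api_key = "..."  # key scoped to your Azure resource, not an OpenAI account key

# "engine" names the model deployment you created inside your Azure resource.
response = openai.ChatCompletion.create(
    engine="my-gpt-35-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Summarize this internal document."}],
)
print(response["choices"][0]["message"]["content"])

Requests and responses stay inside your own Azure tenant, which is the whole data-control pitch.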

Conner: Yeah. Personally, I think OpenAI has tried to compete for some enterprise use cases; they've added the stuff recently about not keeping data. This, on the other hand, keeps all your data, but in a secure enclave in your own Azure cloud. So I wouldn't be surprised if OpenAI was a little bit annoyed by this repo.

Farb: Sam just tweeted, literally, I don't know, a little bit ago, trying to clarify that OpenAI does not use anything that you do through their API for training or anything. Maybe anything through ChatGPT directly, though.

Conner: Our next story up today, though: semiconductor manufacturing. Of course, semiconductors are very important for all modern computing, AI especially; any GPUs from Nvidia, AMD, et cetera are all reliant on semiconductors. Semiconductors are so far mostly manufactured in Taiwan and China.

Now, recently, we're starting to see that branch out, first with Germany winning an $11 billion TSMC chip plant. That's gonna be the first in Europe. I think TSMC committed $3.8 billion to that and Germany committed $5 billion, so a pretty nice collaboration there. And then in North Texas we have the Silicon Prairie, north of Dallas, where there's another, I think $1.4 billion, semiconductor plant coming in.

So, very exciting. Ethan, what do you think about this?

Ethan: It's a manic race everywhere. I mean, chips are definitely the new oil here, so you have $280 billion from the CHIPS Act. You have pretty much every global state actor trying to have tax incentives or subsidies, et cetera, to enable more chip manufacturing facilities.

Conner: You dropped out for a bit there, but I'm sure it was recorded on your side. Well, you're back now.

Ethan: As I was just saying, chips are definitely the new oil. So you have every single state actor, every single global powerhouse, every single company, every single person with a manufacturing facility trying to get in on the $280 billion CHIPS Act, trying to get in on every subsidy from every nation. So I think it's really cool to see, especially here in America, that we're trying to bring back some of this manufacturing to Texas.

You know, we know a few people in Texas trying to convert some of their old facilities into chip manufacturing facilities. And this is everything, too, not just AI chips: everything from small Bluetooth chips to radio-frequency chips, et cetera. So I think this is just important for American dynamism, as some call it.

Conner: Yes, love American dynamism. The $1.4 billion was actually the Texas CHIPS Act funding that Greg Abbott approved in June. Mm-hmm. That's just funding for chips in general. The plant itself is actually $5 billion, so almost half the size of the Germany one. So, yeah, very exciting. Farb, what do you think about this?

Where do you think chip manufacturing is going in the future?

Farb: I don't think this is stopping anytime soon; it's gonna just keep accelerating. It's smart for this type of manufacturing to be distributed and not centralized in one part of the world. You want redundancies in important systems like this, and people are just flat out competing to get the business. Since there's gonna be such a massive demand for GPUs, if you're making them where you are, then you have a business there; you have real income and revenue coming to your part of the world. There may not be an easier way in the world to guarantee some future income than by building chip manufacturing where you are.

Saxony is, you know, well known for precision manufacturing. They've been doing that for probably hundreds of years, if not more, everything from high-precision watchmaking to carmaking to now chipmaking. So nice work, Saxony. And obviously Texas is not shy about getting into big industrial parts of the business cycle. So, pretty cool to see that happening as well; we'll probably see even more of that in Texas.

Conner: I think Germany and Texas were very clearly the manufacturing targets for their respective unions, and I'm glad we're starting to see the plants being built there. And I'm sure a lot of Germans in Texas, and a lot of Germans in Germany, love Texas; maybe even Pennsylvania too, Germans love Pennsylvania. So I'm sure we'll see all that.

Oh yeah. Beautiful, beautiful. Our last story today: Nvidia and Hugging Face announced a new partnership. This comes on the heels of the AMD partnership that Hugging Face announced a little bit earlier. This is Nvidia trying to keep up with the open source community. Nvidia is sometimes accused of not being very open-source friendly. They do have a lot of open source libraries, but on the cloud side, on the hosting side, Nvidia loves enterprise, loves their big-money clients. So now, partnering with Hugging Face, they're still going after their enterprise big-money clients, but with a little bit of an open source twist, which is honestly nice to see.

Ethan, what do you think about this? What did you see here?

Ethan: Mainly, Nvidia wants their cloud to compete. You know, you have Lambda Labs and all these other cloud providers that are kind of sitting right where developers are. Nvidia's DGX Cloud, which they've been putting out, they've been trying to sell to enterprises like you mentioned, but with a partnership like Hugging Face, now anyone who's deploying stuff can just deploy to an H100 on DGX Cloud.

So they're just kind of putting their tentacles everywhere, which I think is cool. We need more GPUs; we need more access to them. So if you're on Hugging Face and you wanna use it, you don't really care if it's on DGX Cloud or Lambda or anything else. The more players the better.
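(Editor's note: the Hugging Face side of "just deploy to an H100" already has a programmatic shape via the huggingface_hub client's Inference Endpoints API, sketched below. Whether DGX Cloud shows up as another vendor and instance option in this same call is our assumption based on the announcement; the instance names shown are illustrative.)

from huggingface_hub import create_inference_endpoint

# Spin up a dedicated GPU-backed endpoint for a model hosted on the Hub.
endpoint = create_inference_endpoint(
    "my-llama-endpoint",                      # endpoint name (placeholder)
    repository="meta-llama/Llama-2-7b-chat-hf",
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",                             # today's options; DGX Cloud here is hypothetical
    region="us-east-1",
    instance_size="x1",                       # illustrative; check the Endpoints catalog
    instance_type="nvidia-a10g",              # illustrative; an H100 tier would slot in the same way
)
endpoint.wait()   # block until the endpoint is running
print(endpoint.url)

The point of the partnership, on this reading, is that the vendor and instance lines gain an Nvidia-hosted option without the developer workflow changing.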

Conner: Yeah. Nvidia so far has kind of followed a dealership model, like with cars. They make the cars, they make the chips, but instead of selling them to people directly, instead of selling the hosting directly, they're selling 'em to dealers, essentially, like CoreWeave or Lambda or many others. Mm-hmm. And now they're trying to get in the field directly, just like Tesla does with cars.

So, yeah. Farb, what do you think? What do you think about Nvidia?

Farb: I think one of the things Nvidia said is, hey, we actually have a cloud service here. I think I literally remember reading something along those lines. They've had it; it's not well known, and they're trying to make it more known, for obvious reasons. I think another cool thing they announced here was, you know, Training Cluster as a Service.

They just wanna make it easy for developers to use their chips, and this seems to be a great way to do it, and a great way to get people to know that Nvidia's actually a player in the cloud space, not just a manufacturer of the chips. So, you know, their business on the cloud side could grow massively and be as big as the rest of Nvidia's business has been up till now. So they would be smart, obviously, to keep going bigger on their bets.

Conner: They of course have pretty much complete dominance in actually making chips, but in connecting to customers, not really. And customers, if there are other chips in the future that aren't from Nvidia, they don't really care. So the market will like it. The market will like it.

Well, those were our three stories today, guys. On to what we're seeing. I saw that if you go to Google Scholar, which of course is the best way to search for research papers, and you look up "as an AI language model," you'll see hundreds, thousands of papers that were co-written by ChatGPT without giving it those co-author credentials. So it's very interesting to see the number of papers that are so clearly written by ChatGPT. And we're sure there are hundreds or thousands more whose authors weren't dumb enough to leave "as an AI language model" in, and so aren't as clearly machine-written.
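(Editor's note: if you want to reproduce the search Conner describes, the query is just the quoted phrase; here's a hypothetical one-liner for building the URL.)

from urllib.parse import quote_plus

# Google Scholar takes the phrase in its q= parameter; quotes force an exact match.
print("https://scholar.google.com/scholar?q=" + quote_plus('"as an AI language model"'))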

Ethan: Wow. So they really just left that in, huh?

Farb: Yeah, yeah. That was amazing. I saw that too; I thought that was pretty hilarious. It's almost tough to believe. I mean, is anybody not reading these things before they submit them?

Conner: You write a research paper, you write it once; they don't really reread it. So, wow.

Farb: Fascinating. They're so boring the authors can't even read them themselves. So boring and incomprehensible, a bunch of word-salad garbage to make themselves sound intelligent when they're not really even saying anything, that they can't even stomach reading their own papers. Welcome to academia in 2023, ladies and gentlemen.

Conner: What about you guys? Ethan, what have you seen?

Ethan: I saw Play.HT did instant voice cloning. They're another AI voice model, AI audio model in the space, and they had some really, really good results: less than a second of latency, and voice cloning with voice emotions. I think they're finally putting together a lot of the pieces. I don't know if they're using their own model yet, but they've got a really cool pipeline down. So, link below, but really great results.

Conner: Amazing.

Farb: It looks like Nvidia has released the code for Neuralangelo, which is pretty cool: creating immersive 3D environments from 2D videos. That was a pretty crazy demo they did, I don't know, maybe a month ago or something like that. But it seems they're now making the code available, which seems pretty powerful. I haven't seen anyone use it yet, but I'm guessing we'll see some of that. Maybe you can combine it with your a16z AI Town. Yeah, get your 3D town to do something cool.

Conner: AI Town with Neuralangelo, very exciting. Well, wonderful show today, guys. If you've watched this far, you probably love our hats on Hat Daily. Thank you everyone for watching. We'll see you tomorrow. Thank you guys.
