
LK-99 Cont. | Flow2 NeuroImaging | IBM & NASA Geospatial AI

AI Daily | 8.03.23

In today’s episode of AI Daily, our hosts Conner, Ethan, and Farb continue the discussion of LK-99, an intriguing material with replication attempts in settings as diverse as Russian kitchen countertops and superconductivity labs in China; the discussion revolves around practical implications and the path to usability. Next, they discuss Flow2, an innovative neuroimaging helmet offering fMRI-like capabilities and envisioning a future of accessible brain research and AI models built on its data. Finally, they cover the collaboration between IBM and NASA, which introduced Prithvi, a groundbreaking temporal vision transformer leveraging satellite data for predicting crop yields, monitoring disasters, and advancing earth science research.

Quick Points

1️⃣ LK-99 Cont.

  • LK-99 replication news: Russian countertops to Chinese scientists exploring superconductivity at room temperature.

  • Exciting advancements: Levitation and zero resistivity observed, though challenges in scalable usability remain.

  • Public interest surges, promising potential for future engineering and groundbreaking applications.

2️⃣ Flow2 Neuroimaging

  • Flow2 Neuroimaging device: Compact helmet offers fMRI-like capabilities for brain research and AI models.

  • Pioneering data collection: Predicting emotions and thoughts, potential AR integration, and revolutionary brain understanding.

  • AI's role in processing data, opening doors to a new era of human interaction.

3️⃣ IBM & NASA Geospatial AI

  • Named Prithvi, it is a temporal vision transformer utilizing NASA's vast satellite data.

  • Applications in predicting crop yields, monitoring natural disasters, and advancing earth science research.

  • Open-sourced AI with profound implications, a milestone in bridging AI and earth science.

🔗 Episode Links

Connect With Us:

Follow us on Threads

Subscribe to our Substack

Follow us on Twitter


Conner: Good afternoon. Welcome to another episode of AI Daily. I'm your host Conner, joined by Ethan and Farb, here once again. Uh, three stories today. We're starting off by continuing with LK-99. Um, there have been some replications of LK-99, anywhere from Russian countertops to scientists in China who replicated it working as a superconductor, apparently, but haven't replicated it working at room temperature. Farb, what have you read on this? What do you think about it?

Farb: Firstly, I didn't know they were making countertops out of this stuff in Russia. That is, that is next level. I'm all here for a superconducting countertop in my kitchen. But I think you were talking about somebody who apparently cooked it up in their kitchen in Russia, on their countertop. That said, the average Russian kitchen is more like a super lab in other countries, I'm sure, so not surprising there. Uh, some pretty exciting stuff here, you know. This is probably the greatest example of "we're so back" that I've ever heard. It's literally every 12 hours somebody drops something that's a little bit, you know, disconcerting, and then 12 hours later it's "we're so back."

So there seem to be a few things going on here. Some groups are noticing that this thing can levitate. Some groups are noticing that it's got zero electrical resistance, and it doesn't seem like they're the same group. Um, two different groups. That said, the, you know, zero resistivity is being shown at something like 110 Kelvin, so a lot colder than room temperature, obviously, but something that is actually a manageable temperature, you know, in terms of not needing a lab the size of a building to pull this sort of stuff off.

And, you know, my hot take on this is: this is amazing. We are in this deep research and science phase of it, and yet there is this massive public outpouring of interest and participation in making this happen. And nothing could be better for the world than everybody getting super excited and getting behind the latest scientific discoveries that could change the world. That said, it's probably gonna take many years of hardcore engineering and manufacturing iterations to get this to something that is a highly usable material. So, for example, some people are hypothesizing that parts of the material display these characteristics, but not all of the material. Okay, so do we have to create a manufacturing process that can essentially isolate that part of the material and create another material that is just made out of those parts? That's how manufacturing works. It takes a lot more work to, you know, really put something into a product, but it seems super promising.

It seems to really have the characteristics, at least some of the characteristics in some situations, that they've been talking about. Now the question will remain: can we take this somewhere and do something with it?

Conner: No, I agree. Very well said. Um, the Chinese team noted that they managed to reduce some of the impurities, but interesting thought: maybe it's some sort of Roman concrete situation, where it's the impurities that give it the capabilities that Lee and Kim found. Ethan, any interesting insights or thoughts?

Ethan: Yeah, I think y'all covered it well. Just, you know, even with the news we have now, this is such great progress, being able to show zero resistivity in the conductivity here. I think if you've looked at arXiv, there are like 10 papers dropping every day, from the theory side, from the simulation side, showing off the zero resistance again. So yeah, it's a little bit colder than room temperature for sure, but it's a massive story just as it is now. And I think the hype is living up to the hype so far.

Conner: It's also far easier to produce than other superconductors. So even if the most we find is that it's not room temperature, just the fact that it's so easy to produce would help a lot in quantum computers and many other applications. So, absolutely. For our next story today we have Flow2. Flow2 offers fMRI-like neuroimaging with a pretty basic helmet that you just put on your head. Ethan, you read about this some, what do you think about it?

Ethan: Yeah, so Flow2 looks really cool. Um, right now if you do neuroimaging, it's a huge device in a medical lab. You're trying to measure oxygen within your brain, and you're also trying to measure electrical current within your brain. And I think what's really cool about this is, you know, I'm not sure how much AI they're using in this product, but when I think about, you know, the future of kind of a foundation model for brains, or understanding the human brain, we're so far from understanding that.

And the data sets we have right now are so limited that, I think, if people actually start picking up on this device, using this device more, we'll have such a big data set of what's really going on in the brain, and can likely build a lot of AI models off of that. So I think it's a really cool application of AI to understanding ourselves, understanding biology, understanding how our brain works at the end of the day.

So just based on oxygen and electrical current movements, how much more can we figure out as to, you know, why we're sad someday, why we get excited, what our brain's thinking about? So some really cool applications here. Again, I'm not sure if they're using AI in their product for any of their, you know, potential measurements in the future, or stats they might give you in an app or something like that. So it might just be hardware for now, but I think there are some really cool applications to AI in the future.

Conner: It kinda looks like a biker helmet, and it's pretty exciting. You can see some sort of future where you have your biker helmet that can read your brain, and has a visor with your AR overlay of the world, and reads your thoughts of what you want it to show you. Yeah, so kind of stuff like that. Farb, what are you thinking about this?

Farb: I believe Kernel is from our dear friend Bryan Johnson, the live-forever man. I think he started Kernel years ago, or at least was part of it. Uh, maybe I'm making that part up. Um, I'm sure he's happy to take the credit for it.

They're combining a few cool technologies here, uh, to, like Ethan said, understand oxygen flow in the brain, understand electrical signaling in the brain. Uh, one of the cool technologies is TD-fNIRS, I think it's time-domain functional near-infrared spectroscopy, I can't speak today. Um, and what that's doing is actually shining infrared light through your brain, understanding where the flow of oxygen is in your brain, and creating a map of that.

So I think this is creating a treasure trove of data that AIs will be able to use. And some of the things that you'll be able to do with this are, you know, control computers and control objects using your mind, uh, diagnose brain issues, understand if you have a concussion or have lost, you know, oxygen to a part of your brain.

So I think the amount of data that this thing is gonna be able to generate on the brain is pretty remarkable, and it's sort of exactly what AI and ML love to get: a ton of data they can do some cool stuff with.

Ethan: Yeah. Okay, you talk about controlling computers too. It reminds me of, like, you know how they use basic AI on your iPhone's keyboard to predict the next letter you're typing? It's the same type of thing for these kinds of brain machines: being able to predict what action you want just from the signals in your brain.
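Ethan's keyboard analogy can be sketched as a toy decoder. Everything below (the 8-value feature vectors, the action labels, the nearest-pattern rule) is invented purely for illustration; it is not how Flow2 or any real brain-computer interface works:

```python
import numpy as np

# Toy "decoder": map a feature vector (standing in for oxygen/electrical
# readings) to the most likely intended action, the way a keyboard model
# predicts your next letter. All features and actions here are made up.

rng = np.random.default_rng(0)
actions = ["scroll", "click", "type"]

# Pretend each action has a characteristic signal pattern (a centroid),
# and each recorded trial is a noisy sample around it.
centroids = {a: rng.normal(size=8) for a in actions}
trials = {a: centroids[a] + 0.1 * rng.normal(size=(50, 8)) for a in actions}

# "Training": average the recorded trials per action.
learned = {a: trials[a].mean(axis=0) for a in actions}

def predict(signal: np.ndarray) -> str:
    """Return the action whose learned pattern is closest to the signal."""
    return min(learned, key=lambda a: np.linalg.norm(signal - learned[a]))

# A fresh noisy sample of the "click" pattern should decode as "click".
sample = centroids["click"] + 0.1 * rng.normal(size=8)
print(predict(sample))
```

Real decoders use far richer models and signals, but the shape of the problem is the same: learn a mapping from measured brain activity to intended action.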

Conner: It's a new way to interact with the world. So for our third story today, we have IBM and NASA. They collaborated on a geospatial AI model they called Prithvi. It's a new state-of-the-art temporal vision transformer. So essentially they took all the satellite data from all over the world that NASA's been collecting for years and years, and they put it into IBM's new platform. And, you know, the whole story is a little bit of a push piece between IBM, with the new watsonx platform, and NASA being able to say, hey, we're doing new, big things in AI. But both of those are pretty exciting nonetheless, and I think it's a great story. Ethan, what do you think about it?

Ethan: I mean, I think if we went back to 2001 and you said NASA and IBM are going to open-source AI on Hugging Face, most people would laugh at you. So this is really cool. You know, they took every single image and every bit of spectrometry from NASA's images of the earth, from fires to crops to mountains, from Texas to New York. So all across the world we have so many satellite images, but, you know, predicting crop yields, predicting fires, predicting floods is still really difficult.

So they built a foundation model around it, and, you know, I don't know all the applications we'll use it for, but it's super cool. I don't know if y'all checked out their video or demo. They have better search, so you can jump into India and say, hey, what are the crops gonna look like a year from now? Or, what do you think is happening to these current crops based on the latest images, versus all this day-to-day human analysis. So really cool stuff, and I love that they open-sourced it.

Conner: No, extremely exciting. I love it. It's a pretty small model, I think a hundred million parameters, so I think it's kind of more of a demo of what NASA and IBM can do. And so I'm excited for when they get more funding and apply themselves more to something like this. Farb, what do you think?

Farb: There's probably some spectroscopy in this story as well. All spectroscopy, all day. Great, now you've said it a few times, I can start saying it correctly. They're using what NASA calls Harmonized Landsat Sentinel-2 data.

Uh, this is pretty cool. This is a satellite system that gets a complete image of the earth every two to three days, and it's down to about 30 meters per pixel, I think. So you can't quite make out a tree, but for most, you know, geo work that you're gonna do, it's probably very helpful. And, you know, taking this mountain of data and putting it in the world of transformers, so you can start digging through the data and making sense of it more readily and reliably, is just gonna be a good thing for everyone.

Whether you're interested in the climate change angles of it, or you want to use it for improving crop yields or understanding, you know, certain flooding patterns that could impact human or agricultural existence, it's pretty powerful. And if this is their first foray into this, I can't imagine what it's going to lead to. It's gonna get better and better.
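The step that puts this "mountain of data" into the world of transformers is tokenization: slicing each multi-band, multi-timestamp satellite tile into patches that become transformer tokens. Here's a minimal sketch of that step; the shapes (3 timestamps, 6 spectral bands, a 64x64 tile, 16x16 patches) are illustrative stand-ins, not Prithvi's actual configuration:

```python
import numpy as np

# Illustrative dimensions: T timestamps, C spectral bands, an HxW tile
# (at roughly 30 m/pixel), split into PxP spatial patches.
T, C, H, W, P = 3, 6, 64, 64, 16

# Random stand-in for a stack of harmonized satellite observations.
tile = np.random.default_rng(1).normal(size=(T, C, H, W))

def tokenize(x: np.ndarray, patch: int) -> np.ndarray:
    """Flatten each (patch x patch) spatial block at each timestamp into one token."""
    t, c, h, w = x.shape
    x = x.reshape(t, c, h // patch, patch, w // patch, patch)
    x = x.transpose(0, 2, 4, 1, 3, 5)  # -> (t, h/p, w/p, c, p, p)
    return x.reshape(t * (h // patch) * (w // patch), c * patch * patch)

tokens = tokenize(tile, P)
print(tokens.shape)  # 3 * 4 * 4 = 48 tokens, each 6 * 16 * 16 = 1536 values
```

A temporal vision transformer then runs attention over this token sequence, so every patch can attend to the same location at earlier timestamps, which is what makes change-over-time tasks like crop and flood monitoring tractable.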

Conner: Mm-hmm. Yeah, I agree completely. I'm excited to see what NASA builds next. Apparently they talked about building some language models next, based off earth science literature, so I'm excited to see that and talk about it in hopefully a few weeks, maybe a few months, along with whatever else they work on.

So, okay. Well, what have you guys been seeing? Anything interesting? Anything exciting? Ethan, what about you?

Ethan: Um, a16z dropped a pretty cool blog post on the impact of AI on healthcare. You know, I think we're seeing a lot of people go after the healthcare side, from the administrative angle to Dr. Gupta, which we've covered. There's so much paperwork, between insurance, between patient authorization. The entire healthcare space is just one gigantic field of documents. So I think LLMs are gonna have a huge impact here too. Um, and their blog post covers really well some of the initial angles people are trying. So really cool stuff.

Conner: Yeah. Wonderful article. I definitely recommend it. Farb, what about you? What have you read? What have you seen?

Farb: Mostly I'm just playing with an Allen wrench. If you're not fidget-spinning with an Allen wrench, what are you doing with your life? Uh, I saw the fine folks over at OpenAI are sharing a little bit more, putting out a few more cool little features here.

Uh, some of the stuff you may have seen in some of the other LLMs, like Bing or, um, Claude, for example. So they're doing prompt examples to kinda help you get started, so you're not just seeing a blank page when you get there. They're doing some suggested replies, something that I think Bing has done a pretty good job of, and so has Claude, and Poe. Maybe Poe just has it across their whole product; I think just about whatever LLM you're using through Poe, it provides some of this. So, suggested replies, so you can go deeper into whatever conversation you're having with it.

Uh, they've moved to GPT-4 by default. Uh, you can upload multiple files now, which is pretty cool. Uh, they keep you logged in instead of booting you out every couple of weeks, and there are a few keyboard shortcuts; I'll let you discover those on your own.

Conner: I think the login one is the most exciting. It was kind of annoying to be logged out every day opening ChatGPT.

Farb: The, the challenges that people these days have to deal with, it's, it's a wonder we get through the day.

Conner: I'm glad they're upgrading their web developers to be a little bit closer to their AI model developers.

Farb: I mean, if you're paying, if you're paying per click on your mouse, it's nice to have one less click on the login button. You know what I mean?

Conner: Three clicks, usually, 'cause it redirects me to the zero page and I have to go back. Three clicks these days, very expensive. Yeah. Um, yeah, I saw that there are commercially available Vicuna models now. Of course, Vicuna was originally trained on LLaMA 1, which was not commercially available. And now recently it was retrained, another team trained it, I think, as Vicuna 2 on Llama 2. So very exciting.

Farb: Also, speaking of commercially available, I think there is a commercially available portion of the NASA-IBM collab.

Ethan: Yeah, fine tune it.

Farb: You can fine-tune it, yep. A lot of fine-tuning examples there, so I'm sure we'll see John Deere and the like making fine-tunes of that.

Ethan: That would be cool.

Farb: We love to see it.

Conner: Big fans of John Deere over here at AI Daily. And so, okay, another great episode, you guys. Um, thank you all for tuning in.

We'll see everyone tomorrow.

AI Daily