If this is where you picked up the podcast today, you are in the wrong place.
This is a part two of a two-part episode.
Go back to last week's episode, check that one out, give it a listen, and then come back
and join us here at The Overlap as we talk about AI, cryptocurrency, electricity,
and the future of innovation.
All right.
Now welcome back to part two of The Overlap podcast about AI, cryptocurrency, and electricity.
Will, why don't you kick it off where we left off from?
Right.
So, why the hope for these two technologies? They're very energy hungry, and we understand
now why they're so energy hungry.
So how do we come up with this idea that perhaps they could rescue us from the energy-scarce,
energy-consuming future that we fear and actually give us a positive way out?
I ask that question rhetorically, but I'm actually going to ask you that question.
Are you asking me?
I mean, I have my answer, but.
Well, what we have is we have examples from the past, right?
Yeah.
And the fact that new technologies, although they appear to at first be energy consuming,
actually enable us to create things that are energy producing, right?
The very steam turbines we were just talking about, if we could pair these AI data centers
with them and use their waste heat to heat water and all that, those turbines exist because
of the industrial revolution and the things that it made available.
Our ability to harness steam in engines was what gave us the capacity to generate much
more energy than we had ever used prior to the industrial revolution.
And while there's.
Yeah, I think.
Go ahead.
Go ahead.
We both do that to each other all the time.
I think that it's important to evaluate this, right?
I think we're at the precipice of a major categorical shift in how we do things, right?
For millennia, human progress was limited by the power that could be generated by human
and animal muscle.
Right.
And that was the limit, whether that was the swinging of an axe or the walking of an
ox.
So the development of the water wheel, for example, produced an increase.
For the first time, we were able to extend ourselves past our own physical limitations
and the limitations of work animals.
So we have the water wheel, right?
But water wheels were geographically tethered.
They were tethered to rivers.
If there was no river in the area, there could be no water wheel.
Right.
And then you have the steam engine like you were talking about.
That thing by itself multiplied the power available to a society by about 600 times.
A steam engine could put out about 600 times the maximum power that a human being
or an animal could produce.
Exactly.
And that's good.
But because the coal used to fire the steam engines could be transported almost
anywhere, right, now you could have a steam engine anywhere. Energy production was
completely decoupled from geography for the first time in the history of the modern world.
Which is the industrial revolution, which gave way to factories and railroads and mass
production.
Exactly.
And now we have access to a technology that could potentially one day simulate
new types of engines and new types of energy-retrieving and energy-generating devices,
devices that could unlock more power than we can even conceive of now, not only to feed
future AI and future cryptocurrency systems, but also many other things that we may not
have even dreamt of yet.
Perhaps those starships that you were talking about before.
Yeah, I think one way that crypto is similar in this scenario is that it decouples
our system of trading from any one country.
Right.
It allows money to flow freely across borders without needing middlemen.
We're using a network in place of that.
So we're increasing our ability to exchange goods and services.
And then on the AI side, I see it this way. In the same way that the steam engine was
an answer to overcoming the limitations of human physical ability, AI is now the
hope. I don't know that it does this now.
I don't think it does. But it is the hope that it can overcome our mental capacity and our
mental abilities, not just by 600 times, but even more than 600 times.
I mean, if we had AGI, we could put an AGI to building its own ASI, and we would potentially
see an even bigger gain.
But go ahead.
Go ahead.
I didn't have anything else.
I was going to let you go with that thought.
Oh, no, I think I lost my train of thought there.
Well, I think, to your point, there's this idea that by setting free the value system,
right, the engine and the input of our economic systems, to use the analogy, or to torture
the analogy a little bit.
By setting that free, we can enable people to utilize the technology and the intelligence
that we hope AGI will eventually generate to take us to that level.
Like you said, take us far beyond anything we've conceived, way beyond 600 times our
output productivity.
Because I think you could probably argue that GPTs and LLMs have already done that to some
extent, with our thought output, our content output, right?
I think there are studies about how much output there is from GPTs even currently.
And we haven't even begun to scratch the surface of what they're capable of.
Yeah.
Because I find in general our data tends to be reflexive, you know.
Right.
So I mean, that's kind of, you know, the history though, I guess to dive a little bit more
into that or to elaborate a little bit more on that is just that, you know, the new technologies
enable things that were never thought of, right.
So the harnessing of electricity or the harnessing of a steam turbine gave the possibility of
steady enough electricity or steady enough power output to use incandescent light bulbs,
right.
Which basically freed up the 24-hour world that we all live
in now, right?
Whereas before, you were limited to how many candles you could put around you
at a given time, and there was a limit to how much light that
generated.
You know, eventually now we have skyscrapers full of incandescent light bulbs and all sorts
of other, you know, technologies that were never even dreamt up before the industrial
revolution.
The hope is that that will repeat itself, but on a much larger scale, an incomprehensibly
larger scale than the first time around.
Yeah.
I think another example of that is sort of like the fossil fuel generation industry,
right.
I think none of us disagree that the burning and consumption of fossil fuels is bad for
the planet.
It's bad for our overall air quality.
It affects our ability to breathe and to survive into the far future, maybe not as far
off as I'd like to think.
But that persistent demand for energy drove relentless technological advancement.
So over the past century you've seen improvements in extraction techniques, extraction technologies,
horizontal drilling in wells, right, and look I'm not a fan of fracking.
I, especially where I live, I think it actively hurts me.
I've sat through earthquakes that I've never, never experienced an earthquake before in
the middle of the United States but because of fracking we're seeing those things increase.
The US coal miner, for instance, their production, their output, increased ninefold
in 50 years.
So these innovations have always pushed back the predictions of resource scarcity, giving
us higher output and demonstrating that reserves are really not a fixed geological quantity
but a dynamic economic and technological one.
Right, and essentially what it amounts to as I see it is that humanity is hurtling down
the road at an ever-increasing speed.
The question is where are we headed towards?
Paradise and utopia or are we headed towards destruction?
And the signs change pretty frequently, depending on what you look around and see, as to which
road we're on, or whether we're on a road that could lead to both.
And the decisions we make today are probably going to determine where we end up.
Yeah, 100%.
And technology's just going to accelerate that speed and multiply that effort.
Yeah, and I mean I guess sometimes I think we do have to kind of sit back and say to
what end.
Right.
And I think that reason differs for everyone.
I think I have my own reasons for wanting to see it.
I think a couple of ways that we've already seen this, really good examples beyond
text generation, is the ability to use specialized AI models to simulate the replication
and synthesis of proteins, which can produce new chemicals, new cures. It helps
fight diseases.
Now I think there are pluses and minuses to that.
Maybe, through this sort of protein synthesis, we can help eventually cure cancer,
but at the same time it might also open up a whole new world of toxic chemicals that
people can use for biological warfare.
Right?
I think there are tradeoffs here, which, I mean, obviously, is a great reason
for regulation. I'm pro-regulation around these sorts of things for that very reason.
But it's like saying, look, we could find the cure for every known illness by employing
an AI model to do it.
Yes, it takes a lot of electricity, but we could save lives in the long run.
So should we completely ignore our ability to cure these diseases because it uses a lot
of electricity and that could eventually kill the planet as a whole?
Or do we say, no, why don't we point it at those problems and develop those technologies
to solve those problems as they arise, at speeds faster than human thinking?
Right.
And who do we allow to utilize those computers?
Because on the one hand, there's the worry about a rogue actor who might discover some
sort of terrible new virus and unleash it on the world.
But then on the other hand, there are pharmaceutical companies who know that they'll be putting
themselves out of business if they cure all the illnesses.
And are they going to be able to resist the temptation to create new illnesses for their
pipeline?
Hey, that's why I'm a socialist, I'm just saying, you know what I mean?
No, right there, that's reasonable.
A democratic socialist, not by force, but through democratic means.
I think one of the things that I wanted to point out, and look, there's detriments to
this and I hope we can poke holes in it.
And what I think you're saying, Will, is that as we get down this road that there kind of
is no way back from, right?
You wouldn't be able to just reel AI back in and reel cryptocurrency back in.
Obviously, in cryptocurrency's case, part of the design was that it could not be
reeled in by any single entity or any single force.
But in the case of AI, there are ways to put measures in place to limit harm.
And as things have come around, we have also increased our knowledge, right?
Scientifically, when we were putting coal out into the world, we didn't think, "Oh,
this is going to cause global climate change."
We basically just thought, we have a need, and that need is more energy.
We need to put light bulbs in everybody's houses so people aren't limited to the eight
or twelve hours of daylight in a day.
Today the tradeoff is different. We don't need people dying of curable diseases, but we
also don't need to trade our ecological wealth for curing diseases, because either way,
we're going to die.
Which one's going to be worse?
And I think that doesn't have to be a decision that we make one time.
It's something that we can revisit every day and ask, "Look, are we getting closer?"
That to me is kind of the epitome of science is collecting that data and making meaningful
observations, and then adjusting to those observations.
Right, it's feedback.
It's using our feedback and using it to take us to the next step and make the next decision,
the scientific method.
So let's talk a little bit about the grid. Even before AI and cryptocurrency, we've
had a need to generate more and more electricity as our technology increases,
right?
The current power grid is in a miserable state.
We know that.
We need lots of infrastructure improvements.
And we're seeing major swaths of Texas being browned out at certain times.
They talk about blackouts in California, and we've seen demand-related problems.
And I think it's more than just one thing.
First of all, I think that because we generate electricity for money, we create false
scarcity.
If we don't build out a lot of generation, if we don't have a lot of means of generating
electricity, we keep costs higher, right?
Because then we keep the threat of losing power hanging over people.
And as we increase not just these technologies, but as the temperature rises globally, there
will be an even greater need for what?
Temperature control: heating in incredibly cold winters and air conditioning in hot
summers.
That demand is going to increase regardless, as long as we continue to grow as a species.
We've agreed: we're not getting any further away from utilizing that energy.
That's where we're headed, is more and more consumption.
And the hope is that these technologies will allow us to keep pushing the limits further
and further while not destroying ourselves in the process.
Yeah.
So let's talk about some of the advances that we've already seen in the current electricity
and power generation space that are sort of driving innovation.
I think one thing that has been flip-flopped back and forth, and has somehow been made
political, is nuclear power generation.
It's terrible that it's become political, because ultimately it really does come down to
steam engines.
Nuclear power generation facilities are essentially just using nuclear fuel to boil
water, because it has a greater output-to-input ratio, and then steam turbines spin and
that generates electricity.
We're still on the same standard, right?
But it's incredibly difficult to find and enrich uranium, and it's also very dangerous
to mine.
It's a dirty process, both on the front end and the back end.
And then there are the obvious problems, Chernobyl, some sort of radioactive spill or
meltdown scenario.
Right.
But we're able to hopefully utilize these advancing technologies to offset
those dangers, to give us new safety procedures, new developments.
These models are also advising us, simulating these sorts of reactions
and telling us how to make them safer, how to make them more efficient, and how to
increase the output without endangering ourselves or blowing ourselves into oblivion.
Yeah.
And there have also been alternatives proposed, right?
There are two approaches when you're faced with the reality of the dangers of a particular
type of energy generation.
In the case of nuclear, yes, nuclear waste, nuclear spills, and the enrichment of nuclear
material are inherently dangerous.
So what can we do to overcome that?
And you can see that this has already driven productivity gains in energy.
China, this year, brought on their first thorium nuclear reactor.
Now thorium is a little bit easier to source and prepare.
I don't think the returns are as high, but the dangers are lower.
So they're bringing this thorium reactor online.
From what I understand, thorium is still pretty dirty.
It's similar to uranium in that respect, but not as much so.
Again, a tradeoff, right?
For additional progress, you have to accept a little bit of downside.
In the past three years, we've made significant movement toward nuclear fusion.
For the first time ever, I believe it happened in California, and it's being pursued in
China as well, there was a nuclear fusion reaction that achieved a net energy gain.
It put out more energy than it consumed, which is enormous.
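For a rough sense of what "net gain" means here, fusion researchers quote a gain factor Q, the ratio of energy out to energy in. The figures below are the approximate, widely reported numbers from the 2022 ignition shot at the National Ignition Facility in California:

```python
# Fusion "net gain" is usually quoted as Q = energy out / energy in;
# Q > 1 means the reaction released more energy than was delivered to it.
# Figures are the approximate, widely reported 2022 NIF shot numbers.

def gain_factor(energy_out_mj: float, energy_in_mj: float) -> float:
    """Scientific gain Q for a single shot, both energies in megajoules."""
    return energy_out_mj / energy_in_mj

q = gain_factor(3.15, 2.05)   # ~3.15 MJ of fusion yield for ~2.05 MJ of laser energy
print(f"Q = {q:.2f}")         # -> Q = 1.54
```

Note this is "scientific" gain at the target; powering the lasers themselves took far more energy, which is why nobody is plugging a fusion plant into the grid yet.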
I mean, that's massive, because in a fusion reaction you're not going to have the
explosive part, right?
If you think about it, the reaction pulls inward.
It implodes as opposed to exploding.
You're joining two atoms together rather than splitting them apart.
Which again, I mean, it could be massive.
If we could get nuclear fusion to a point where the net gains are substantial and
sustained, then in terms of safety, you could literally put a fusion reactor at
everybody's house.
It could be just as common as a hot water heater in 30 years.
I don't know why you'd want to heat hot water, but.
I'm looking for the Mr. Fusion, you know, the Back to the Future setup where you just
throw your trash into your car's engine and it generates all the power you need.
But realistically, that's not far from what we're moving toward.
Yeah.
But what has driven that progress?
The need for power, right?
The need for more power.
The necessity for that electricity.
Exactly.
Now...
Thus, the source and the potential solution to our problem.
Exactly.
Now, I'm not saying fusion necessarily will, but it could end the scarcity of electricity.
But like you said earlier about pharmaceutical companies not wanting to find cures for,
you know, cancer and things of that nature, there are also gigantic lobbies, gigantic
billionaire, trillion-dollar lobbies, in the energy generation and energy distribution
space that absolutely would not want that technology to get into the hands of the average
human being.
That's right.
I think it's fortunate in some sense that OpenAI may be worth more than ExxonMobil
once it's publicly traded, in the sense of where that economic energy flows,
and where it doesn't flow, right?
Now, I would think that somebody like Sam Altman and OpenAI might be incentivized to
invest in that sort of thing.
Because supposedly, and this is what they tell us about capitalism, right,
necessity drives innovation.
So if OpenAI has a necessity for unlimited amounts of power, they have no choice but
to invest in things that will continue to allow them to consume that power.
Then they will invest in the clean creation of it.
And hopefully it trickles down to us.
Now, we know that doesn't happen.
That's why I think we as a society should say, let's self-fund, as a country,
as a nation, as a world.
Let's figure out how to self-fund fusion so that we all can have a fusion reactor,
everybody's car is powered by fusion, and we're driving around in a cleaner world because
we're not burning coal, we're not burning gas, and we're not burning diesel.
And then we own the means.
So then what do you do?
Now, in my opinion, I don't have a problem with private companies making money by generating
electricity and putting it into a nationally owned grid.
I believe that we, as democratic socialists, should own the grid as a whole, and we should
also invest as a people in reactors of whatever sort. I'm even fine with a nuclear
reactor, a fission reactor, to generate electricity, because that is the best technology
we currently have at large scale.
Thorium is still not at large scale.
It's still being test-bedded in China, and fusion is still not even remotely in the realm
of being able to scale.
So we should be in the generation business, because we're seeing rising power costs across
the nation now, and they're saying it's because of the rising cost of fuel.
But they're using coal and they're using diesel to drive generators, to generate electricity.
And they're not going to have a reason to change from those things if they can just
pass the costs on to the people who have to pay the electricity bills.
Right, as long as they can pass those costs along, they'll continue to burn the most
available and most profitable source for them.
Exactly.
So I will say it is my position that innovation will be driven by the need in this space.
So we as a show would be remiss if we didn't talk about the very real, very raw reality
of how much this sucks in the everyday for specific people.
Grok has built a giant AI farm in Georgia.
OpenAI, no, Meta. Meta is building a giant AI farm in Louisiana.
Because why?
Low cost of energy.
Right?
Relative to the rest of the country.
And it's putting people out of house and home, obviously, by scooping up land.
Nobody wants to live near those things.
They're loud, they're hot, and they're hard to be around.
Right.
And they're not really approved by the people.
Now obviously, who would sign up to have an AI farm in their backyard?
Nobody, unless they were somehow benefiting from it.
So I think it's important in these scenarios to use legislation to say, you can build an
AI farm, but you also have to build a power generation facility and contribute, not just
take.
But the real fact is, exactly.
And the municipalities are saying, yeah, come on in, we'll give you tax breaks even.
The problem is, those costs get passed down to the average people who live in
that community.
And it's wrong. It's not right.
But I want to talk a little bit from a perspective of skepticism, right?
Let's talk about a little bit of cryptocurrency.
Right?
I think there are some practical and ethical challenges of crypto.
Do you have any opinions on these?
We'll see. I'll ask you, too, specifically.
I was talking to the audience, but I'm also talking to you.
So first of all, I think that here's why crypto sucks.
Right?
Volatility.
Crypto, I said the spot price today was 111,000.
About 112,000.
It was 127,000, 123,000 two weeks ago.
So if you own one Bitcoin, in the last couple of weeks, you've lost $12,000.
$15,000.
If that's your 401k, then that could be pretty scary.
Exactly.
And guess who just legalized alternative assets like crypto inside a 401k.
Yeah.
So I think that we're not talking about just volatility, guys.
We're not talking about like the index goes up a point or two and your 401k gains $12
one day and $127 the next or whatever.
We're talking about extreme price volatility.
That kind of limits cryptocurrency's usefulness as a stable medium of exchange.
They've tried to fix this by introducing stablecoins.
The US Mint is actually talking about a USDC, a US dollar coin that's pegged to the dollar.
There's also Tether, another cryptocurrency that's directly tied to the value of the US
dollar.
But Bitcoin itself is an asset that can fluctuate by double-digit percentages.
We're talking 30, 40% in a single day.
And that's really impractical for commercial transactions.
If you take a hundred bucks and buy a hundred dollars' worth of Bitcoin, that's not
a lot of Bitcoin.
It's a little under a thousandth of a Bitcoin at today's prices, right?
And it's worth a hundred dollars when you buy it.
But at the end of the day, with that kind of volatility, that hundred dollars could have
the spending power of forty.
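To put rough numbers on that, here's a quick sketch of how a price swing translates into lost spending power. The prices are illustrative round figures, not quotes:

```python
# Rough sketch of how price swings erode the spending power of a
# fixed crypto purchase. All prices are hypothetical round numbers.

def spending_power(usd_in: float, price_at_buy: float, price_now: float) -> float:
    """Buy usd_in dollars of coin at price_at_buy, then value it at price_now."""
    coins = usd_in / price_at_buy    # fraction of a coin purchased
    return coins * price_now         # what that fraction is worth today

# $100 of Bitcoin bought at $125,000/BTC is 0.0008 BTC.
value = spending_power(100, 125_000, 112_000)
print(f"${value:.2f}")   # -> $89.60, roughly a 10% haircut in two weeks
```

The same arithmetic run against a 30 to 40% single-day swing is what makes pricing goods in crypto so awkward.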
And unlike payments made with a credit card or a debit card, those crypto transactions
have no consumer protections either.
So if you get defrauded of hundreds of thousands of dollars of cryptocurrency,
nobody can help you.
Those coins are tied to your cryptographic keys, and there's no protection.
If you make a mistake, just fat-finger an address that you're trying to send somebody
money to, you can hope they send it back voluntarily, but there's no mechanism to get
that money back.
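For what it's worth, address formats were designed with exactly this fat-finger problem in mind: Bitcoin's Base58Check embeds a four-byte checksum, so most typos are caught by the wallet before anything is sent. What nothing can catch is a valid but wrong address. A minimal self-contained sketch of the scheme, using a dummy payload rather than a real key:

```python
import hashlib

# Minimal sketch of Bitcoin's Base58Check address scheme: the last four
# bytes of an address are a checksum, so a typo'd ("fat-fingered") address
# is almost always rejected before any money moves.

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def _checksum(payload: bytes) -> bytes:
    # First 4 bytes of double-SHA256, exactly as in Base58Check.
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]

def b58check_encode(payload: bytes) -> str:
    data = payload + _checksum(payload)
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    # Leading zero bytes are encoded as leading '1' characters.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def b58check_valid(addr: str) -> bool:
    try:
        n = 0
        for ch in addr:
            n = n * 58 + ALPHABET.index(ch)   # raises ValueError on bad chars
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        body = b"\x00" * (len(addr) - len(addr.lstrip("1"))) + body
        return _checksum(body[:-4]) == body[-4:]
    except ValueError:
        return False

addr = b58check_encode(b"\x00" + bytes(20))  # version 0x00 + dummy 20-byte hash
print(b58check_valid(addr))                  # True: checksum matches
typo = addr[:-1] + ("2" if addr[-1] != "2" else "3")
print(b58check_valid(typo))                  # False: one wrong character
```

The checksum only catches malformed addresses; if you paste a different but perfectly valid address, the network will happily deliver the funds there, which is the irreversibility being described above.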
There's also the risk that comes from how the idea of a decentralized system contrasts
with the practical reality of how we interact with it.
The underlying blockchain, the ledger that we were talking about earlier, really is
secure by design.
Unfortunately, most people store their assets on exchanges, which are centralized.
Why? For convenience: the ability to withdraw, to buy more, to trade it, to make money
with that money.
Or, in my case, where I got hung up: the exchange guaranteed a return greater than the
market average, 5.9% or something to that effect, to basically let them borrow your
cryptocurrency, trade it inside a volatile market, and hopefully get something back from it.
There was a whole lot of mess with the Gemini exchange and you can feel free to look that
up.
I was very lucky in that I was able to get it all back eventually, but it took two and
a half years.
Otherwise it was just sitting there in flux and it was not a small amount of money.
When you put money in a bank, if there's a large-scale hack or you're defrauded, you
go to the bank and you say, "Hey, look, this is fraud.
I didn't do this.
This is not my transaction.
I was hacked, or somebody generated my credit card number. That's not mine."
And you get your money back, because that is a built-in protection.
Our system builds in those protections through FDIC insurance.
Your money is guaranteed up to $250,000.
Same thing with the NCUA for credit unions.
There is no organization like this for crypto.
Can we make it?
Absolutely.
Could it be done?
Yes, absolutely.
But who would guarantee it?
Essentially, it would be a form of insurance. And what do we know about insurance?
The only people who ever make money on insurance are the people who sell insurance.
They don't stay around very long if they give away all the premiums.
Exactly.
In order to get some sort of regulation, we would reasonably have to create a cryptocurrency
that was centralized in order to protect it.
It's like the Wild West in the sense that you can use it to trade money, but if you
mess up, that's on you.
It's why the libertarians love it.
They're like, "It's all about your decisions."
Also, there are smart contracts.
I didn't really talk about that, but there are smart contracts that automate transactions
on platforms like Ethereum, and they can contain actual coding errors or flaws that can
lead to leaked coins, to catastrophic losses on the order of 100% of the funds.
Also, and obviously I'm going to point this out because it almost goes without saying:
illicit use.
Cryptocurrency has been a favorite way to do something illegal in the world since its
inception, mainly because nobody can easily track it or see it.
It's not tied to your identity.
It's just a thing that can be traded.
But I would argue, cash is too.
Cash is absolutely the same concept.
That's why in the '70s and '80s movies we saw about the cocaine trade, when the CIA and
the FBI were bringing massive amounts of drugs into the United States, you saw that
everything was dealt with in cash.
It just became the new cash.
It was an easy way to conduct it electronically without having to carry around large sums.
It's why we can't take a flight with more than $10,000 in cash without declaring it
and letting somebody know that we're leaving with massive amounts of cash.
So it is used for illicit reasons.
To kind of tackle the downsides of AI as well, now, we're talking at a high level from
an ethical and moral standpoint on these topics.
So it's not just about the power consumption.
Algorithmic bias, you know. Ultimately these models are just giant repositories of things
that have already been written in the world and on the internet.
All of these models are trained on those datasets and scraped images as a reflection of
our world.
That means all of our social biases, all those prejudices and stereotypes about gender
and race and culture and everything else: the models reflect those biases, and they can
recreate them and sometimes even amplify them in their output.
There are a lot of examples of this.
You can check the internet for examples, especially around AI-powered hiring tools,
which have specifically shown bias against female candidates.
Facial recognition systems have lower accuracy for people of color.
AI models used in the criminal justice system have been shown to disproportionately
flag Black defendants as being at high risk of reoffending.
That leads to discriminatory outcomes and the reinforcement of those inequalities.
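To make that concrete, auditors quantify this kind of disparity with simple selection-rate math. Here's a toy sketch of the "four-fifths rule" used in US employment-discrimination screening; the group names and numbers are invented for illustration:

```python
# Toy sketch of the "four-fifths rule" that US employment-discrimination
# audits apply to automated screening tools: if any group's selection
# rate falls below 80% of the highest group's rate, the tool is flagged
# for potential disparate impact. All numbers here are invented.

def selection_rates(decisions: dict) -> dict:
    """decisions maps group name -> (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in decisions.items()}

def four_fifths_flag(decisions: dict) -> bool:
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()) < 0.8

# A hypothetical resume filter selects 50% of group A but only 30% of group B:
audit = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_flag(audit))   # -> True: ratio 0.30/0.50 = 0.6, below 0.8
```

The check is crude on purpose: it can't tell you why the model is biased, only that its outputs warrant a closer look.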
Then there's the all-encompassing and very well publicized problem of misinformation and
hallucinations. It's well documented that AI models have a tendency to hallucinate.
Why?
Well, it's a great question.
We're asking the model to generate something new from what it was trained on.
That means there's a possibility that whatever it's making up is completely nonfactual or
nonsensical.
Well, most of the time it's sensical, but it might not be based in fact.
It doesn't have a comprehension of the data.
So if you ask it a question about the speed limit in Wisconsin, it doesn't understand that
the speed limit in Wisconsin has a specific legislative paper trail.
You could ask it about the speed limit in Zootopia and it'll still tell you something,
because you're asking it a question, right?
Exactly.
Exactly.
There will be some sort of determination.
Exactly.
It doesn't know that there's no such place.
Think of it kind of like my nine-year-old, who hasn't learned the power of "I don't know"
yet.
You know, that's the power to shut down a conversation.
You ask a question, they say, "I don't know."
That's a teenage thing.
But as a nine-year-old, he will answer every question you ask him, with total confidence,
whether he knows the answer or not, and that's how AI responds.
So hopefully it'll grow out before it destroys us all.
I do that myself now.
I do that all the time.
I say if you just say it with confidence, they'll surely believe it.
Also...
Go ahead.
As I was saying, unfortunately, hallucinations didn't originate with AI.
No, they did not.
And the truth is, we're trying to create these language models to sound and be more
like we are, and we hallucinate, we dream, we have ideas that are nonsensical and not
based in fact.
But what does this do?
It means that scammers, people who are using AI nefariously, sound more convincing.
They sound more real.
Those letters that we... the emails we used to get from the Zimbabwe government that
reminded us how well off we were, how we had magically won their lottery or something to
that effect.
They were hoping for some sort of barrister situation where you had to send them money.
Those were easily determined to be AI.
And for a very specific reason...
I mean, sorry, not easily determined to be AI.
They were easily sussed out as being fraudulent.
And why do you think that is?
Well, because there were spelling errors, there were grammar problems because they were
clearly from people who didn't speak English as their first language.
But you know, now with AI, they can use that same script that they had before and run it
through an AI and say, "Hey, clean this up.
Make it more formal.
Make it match American English."
Those sorts of things.
And still get away... and more now than ever, they're actually targeting people and getting
away with it because it looks more legitimate than it did before.
You know, I remember Kim Komando, a popular radio host who helped older people
with computers back in the day.
And my grandparents when they were alive loved to listen to her radio show.
And she would say, "Here's how you pick out these things: look for grammar mistakes
and look for spelling errors."
But in today's scams, there are no spelling errors.
There are no grammar mistakes.
It sounds exactly like it's coming from a native speaker, and they're using proper
language.
You know, they're using "barrister" instead of "lawyer," and these sorts of things.
Another thing, privacy and data, right?
When you train these large language models, it requires a colossal amount of information.
Most of that is scraped from public websites.
It's scraped from forums like Reddit and Substack and those sorts of things.
Stack Exchange, Stack Overflow, lots of stacks.
And from social media... all of them.
And is pulled from social media without the knowledge or consent of the people who created
that content.
That raises a lot of privacy concerns, not to mention people's personal stories, their
sensitive information, pictures of their children, copyrighted artworks, copyrighted music.
All of these things are ingested into these models.
So there is a not insignificant risk that models could inadvertently leak, right?
Or reproduce private data in their responses or worse, maliciously be coerced into producing
this information that was used to train them.
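To make that leakage risk concrete, here's a deliberately tiny sketch, not any real LLM, just a toy trigram model trained on a made-up corpus containing one hypothetical "private" record. Even this simple statistical model regurgitates the rare training string verbatim when prompted with its prefix, which is the same mechanism, at vastly larger scale, behind memorization concerns in large language models:

```python
from collections import defaultdict

# Hypothetical training corpus: mostly ordinary text, plus one
# made-up "private" record standing in for scraped personal data.
corpus = (
    "the weather is nice today . "
    "the weather is cold today . "
    "my ssn is 078-05-1120 . "  # fabricated example value, not a real SSN
)
tokens = corpus.split()

# "Train" a trigram model: count (word, word) -> next-word transitions.
counts = defaultdict(lambda: defaultdict(int))
for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
    counts[(a, b)][c] += 1

def generate(prefix, steps=10):
    """Greedy decoding: always emit the most frequent continuation."""
    out = prefix.split()
    for _ in range(steps):
        nxt = counts.get((out[-2], out[-1]))
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))
    return " ".join(out)

# A two-word prompt is enough to extract the memorized record verbatim.
print(generate("my ssn"))  # -> "my ssn is 078-05-1120 ."
```

Real models memorize statistically rather than by literal lookup tables, but rare, unique strings in the training data are exactly the ones most prone to verbatim reproduction.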
There's also a good bit of accountability and transparency issues in these AI models, right?
We as consumers don't have the ability to produce an AI model on the equipment that we have
in our homes.
They require massive amounts of information.
Training the average model right now runs right at $22 million.
You need that amount of money to train one of these models.
So many of the most powerful of these really are just black boxes, like we were talking
about earlier with the computational aspects of it.
But in the informational aspect of it, they're just black boxes.
Their internal workings are so complex that even the people who do this for a living cannot
fully explain why a model produced the specific output that you're seeing on your chat.
Like you said, when model 4.0 was so friendly, and then 5.0 was cold and objective, people
were like, "What just happened?" And OpenAI is just shrugging their shoulders.
They don't know.
Exactly.
And that opaqueness is a real challenge for accountability for a very specific reason.
When an AI system hurts someone, and we're seeing it hurt people, whether by providing
dangerously incorrect medical advice, causing a self-driving car to crash, or making a biased
loan decision, it's really often unclear who's responsible for that.
Is it the user who prompted the AI?
Is it the people who train the AI?
Is it the company that made the AI available for you to use?
Or was it the creators of the biased data that were used to train it?
So without transparency in how these systems make decisions, establishing any sort of
accountability is almost impossible.
All right, let's talk about the innovation fallacy.
We have built for you a narrative of historical information that presents progress as being
a driver for additional progress.
Progress in one area creates the need, which drives innovation in another area.
And it is a valid one.
I do want to say that, but it's incomplete.
Innovation is really not a moral force.
It's kind of a neutral process.
It's a tool.
Gosh, where have I heard that argument before?
It's a tool, and it can be used for good or bad.
It just magnifies the effort.
And that's the point of a tool to begin with: to expand our capabilities even further.
So the final and really the most fundamental critique is of ourselves, and it challenges
this sort of techno-optimistic premise that a high demand for these resources will lead to
positive and beneficial innovation.
That is my belief.
I will say that there is data that actively backs it up.
Will, is it your belief that you've seen it come to pass in your lifetime?
I have, but more importantly, I think it's more of just hope that I have to hold on to,
because the alternative is just the dark ages again.
Right?
We've already gone so far at this point.
The hope is, to completely borrow something entirely unrelated: "If you're going through
hell, keep going," as they say, right?
Like the idea is the only way out is through.
The only way out is through.
Exactly.
I think, if I had to word it correctly, I would say high demand provides a motive for
innovation, but it doesn't actually dictate the ethics of the outcome.
Exactly.
So demand for cheap textiles in the 19th century drove innovation in factory machinery and
the steam engine, but it also led to horrific labor conditions, child labor, and dangerous
urban pollution, both in the water and in the air.
And the demand for agricultural efficiency in the 20th century drove innovation in chemical
fertilizers and pesticides, but it also led to widespread water contamination and ecological
damage whose consequences we're still dealing with.
The historical innovations that power our modern world right now, the steam engine and the
internal combustion engine, were based on fossil fuels, the extraction and use of which have
led directly to our current climate crisis, a massive and potentially catastrophic negative
externality of that progress.
So a closer look at the consequences of high resource demand kind of reveals this pattern
of negative outcomes, right?
That often come along with the advancement of technology.
The extraction of raw materials, whether they be coal for a steam engine or lithium and
cobalt for batteries, is a very energy-intensive and environmentally destructive process.
It very often leads to really terrible and sometimes irreversible ecological damage.
I mean, soil degradation, water shortages.
It pollutes the air and the water both.
And the less obvious one for you biologists out there is the loss of biodiversity.
The global pursuit of resources is inherently intertwined with social conflict and inequality.
The process of material extraction has been linked to severe human rights violations and
forced displacement of local and indigenous populations and acute health problems for
the communities that surround these things from contamination.
We're already looking at the prospect of all of us losing the ability to work in white
collar jobs at all because of the possibility of an eventual AGI.
Which makes alignment all that much more important, right?
Right.
The economic benefits of resource extraction flow to multinational corporations and
governments in developed nations, while the ecological and social costs are borne by the
populations of less developed countries.
According to the UN, actually, natural resources play a key role in 40% of intrastate
conflicts, with profits from their sale being used to finance armed militias.
And look, this is a crucial point in us building up our current moment.
I mean, this immense demand for energy and materials generated by AI and by cryptocurrency
could spur the development, right, of clean fusion power, which effectively would end
energy scarcity on our planet as a whole.
But it could also lead, and if I'm also looking at history, to a giant global grab for resources
needed to build that new energy infrastructure and the computing hardware that it will power.
And that outcome is not guaranteed to be positive.
So it's not going to be determined by like… go ahead.
It's guaranteed not to be positive for some people, even if it's generally positive for
the majority of us.
Some of us will have to pay the cost.
That is a reality.
I view it kind of like I view war.
And look, I'm not a fan of war, but the theory behind it was that a few put up their own
lives at stake to protect the majority.
And I don't think it's necessarily right to equate this with war.
But it's not going to be determined by the technology itself, AI or crypto.
It's going to be determined by the choices that we make as people regarding the regulation,
the governance and the ethical deployment of these technologies.
Right.
We've got access to a lot of power.
Now it comes with a responsibility.
Thanks, Spider-Man.
And Peter Parker over here.
Hey, Uncle Ben.
No, it was Uncle Ben.
I was about to say it was the uncle.
It was not Peter Parker.
So okay.
So where does that leave us, right?
In terms of a conclusion, I came across a fun word that I really liked in doing the
research for this and it was trilemma.
I mean, it makes sense.
I heard of dilemma.
Most of us have heard of dilemma, but trilemma is a fun word that I really enjoy.
I think it creates this trilemma of innovation and consumption and responsibility.
So I feel like we're currently standing dead center, and we have to take care of all of
these things because of the profound implications of the twin revolutions of crypto and AI,
which, regardless of whether you want them, are being put into the real world for us to
contend with.
So this podcast has been an effort to show that these technologies aren't like abstract
concepts, but they're real powerful forces with real world consequences, right?
Especially with their immense and seemingly endless growing appetite for the consumption
of energy.
So where we go from here is not like a simple choice between, "Hey, I'm going to embrace
AI and I'm going to embrace cryptocurrency" or flat out reject it, right?
And decide to be Amish and go live in Pennsylvania.
But it's really, it's a very complex navigation of this trilemma.
And opting out is no guarantee that you'll avoid the consequences and in fact, maybe
just abdicating your ability to do something about it.
Exactly.
So if you were to take each branch of this, right, innovation, we have potential benefits
of AI and decentralized technology that we talked about earlier.
Not only that, we're talking about the possibility of increased productivity.
Now, productivity for what?
You know, again, this is a tool.
We get to determine that.
We've got new forms of economic organization, accelerated scientific discovery and solutions,
possibly to some of humanity's most pressing problems.
So if we turn our back on the potential innovation, we would literally be rejecting like a pretty
powerful engine for future progress.
The second point is consumption, right?
Like as we talked about earlier, that innovation comes at a steep environmental cost.
It doesn't matter how you look at it.
From the silicon of the chips to the coal that we are apparently now, once again, all
excited about burning to produce electricity for our electrical grid.
The energy demand of these technologies is already on the scale of whole nations, right?
Like, and growing exponentially.
That is, I mean, that drives a voracious appetite, not just for that electricity, but for the
raw materials needed to build the data centers, to build the processors, the new energy infrastructure
required to sustain them.
And if we don't instill and enforce a system of checks and balances, it really could threaten
to undermine our climate goals.
And I don't mean the ones that our current president has decided to remove us from.
I mean our individual personal climate goals and also exacerbate the destruction that we're
already seeing at an environmental level, which brings up the most important part of
all of this.
It's about responsibility.
We cannot have the benefits of innovation without managing the consequences of consumption.
And those have a tangible effect on everyday people, not just multinational corporations.
And that requires a distinctly profound sense of responsibility to develop and deploy these
technologies in a way that is ethical and sustainable, but equitable too.
Absolutely.
And that means confronting the risks of the algorithmic bias, the misinformation and yes,
the illegal use, right?
And I mean, whatever.
I guess I personally don't really care if some guy from Montana really needs LSD and
can figure out a way to get it through the internet.
That doesn't really affect me that much.
But it's about ensuring that the pursuit of new energy sources doesn't just recreate all
of the crappy historical patterns, right?
Of social and environmental exploitation that often come along with resource extraction.
And we have real world examples of those.
Plenty of historical examples.
Yes.
But even looking toward the possibilities, when we look at possible hiccups in the road,
it's pretty clear that the future is not just a predetermined outcome of technological inevitability.
The pattern of demand driving innovation is powerful.
I mean, you can't deny that either, but it is not a moral guarantee, right?
It doesn't mean that it is...
A scientific fact.
Right.
It is morally good.
It does not mean that.
It is just a fact.
So that same demand for electricity that could actually get us to clean fusion power
faster could also trigger a destructive scramble for the resources to build fusion reactors.
That same AI that could cure diseases can also, as we've already seen through Russian
propaganda, entrench societal biases and completely obliterate public trust.
So our challenge really is to manage this trilemma, you're welcome, with intention and foresight.
Right?
So the path forward is really not just about the breakthroughs that we're seeing from this
technology.
And they're cool.
They're cool.
The idea of endless clean energy generation is amazing.
Not to mention the computational efficiency of having a constant brain more powerful than
the human one continually working on a problem.
But it demands conscious, deliberate, and international conversations about the kind
of society we want to build.
And it has to be one that balances the power of that innovation with the very finite reality
of our planet and the enduring importance of our collective human values.
And that is the overlap.
We hope you enjoyed this podcast here today.
We worked really hard on coming up with all the information and doing all the research
to get it to you.
If you have any questions or comments, you can email us at the overlap@fof.foundation.
You can hit us up and give us a follow over there on Blue Sky, the overlap podcast, as
well as Mastodon, the overlap podcast.
You can also check out our website, fof.foundation, and keep up with us there.
We will see you again soon, folks.
Well, maybe not see you.
Maybe we'll hear from you again soon, but you'll definitely hear from us later.
Thanks, everybody.
And thank you again, Will, for being here.
Thanks everybody.
Bye now.
[MUSIC]