AI Infrastructure Boom: CoreWeave's IPO, AWS Transform, and Quantum Computing's Next Leap

Welcome to another episode of Cloud Unplugged. We have four stories today in, obviously, the technology space, and as always we have Lewis. There's CoreWeave's big IPO valuation, which we'll come on to. Amazon have launched a new service called AWS Transform, which can help you transform all your legacy apps into more modern code bases and services, plus there's an open source SDK for agents called Strands Agents, but we'll come on to that.

We have Grok 3, which is now on Azure AI Foundry; they've made that available as one of the models.

And something we don't really know that much about, but we're going to try and talk about anyway, which is quantum computing: the ABCI-Q supercomputer in Japan, and NVIDIA's involvement in that, plus D-Wave and its Advantage2 system.

Before we get into them, how have you been,

Lewis?

I've been good.

I've been good.

I went surfing at the weekend.

So another non-tech.

Non-tech again.

Although maybe AI did come

up and we had some conversations,

but I saw a couple of good

friends in Bristol and got to go surfing.

Did you master it?

Are you...

Weirdly enough, last time I went surfing it was a little bit choppy and I thought, oh, I can't, I can only surf on an artificial wave. But this time you're looking at a surfer.

Oh, really? Well done. You could surf? How long has that taken?

It took quite a few sessions, and the conditions were kind of perfect, so let's not get ahead of ourselves. I can go in a straight line off a baby wave.

That's good, that's really good. How long do you reckon it took, hours-wise, if you condensed it all down into hours?

Maybe, I don't know, a full day of flapping around. But you never get it in a day, because of conditions and tiredness and things. So I'd say you need a week, or I would need a week, to learn surfing.

Cool.

That's very good.

It is very good.

And did your friend surf as well,

really well?

He can bib and bob around.

He can bib and bob.

That's a no.

It's just basically just a no, isn't it?

I don't know.

Throw your friend under the bus there.

Absolutely.

It's anonymous, it's fine. How did your weekend go? Anything interesting?

Just bits and bobs. It was very hot here in the UK on Saturday, wasn't it, so that was good. And then a bit of the usual stuff: exercising, I went out for dinner. Pretty, pretty chilled. Nothing particularly exciting, just kind of domestic things and a bit of work on the side. And then, as usual, I went to the gym this morning.

I had somebody get very

angry at me this morning in a car.

Were you cycling?

No, I was just crossing a zebra crossing, which here in the UK, obviously, you have to stop for. I was just waiting, he stopped, and as I crossed I went, thank you very much, like that, to myself. And then he obviously started some, you know, I think he called me a dickhead or something. But I mean, it's a zebra crossing, at which you're legally supposed to stop so someone can cross.

It was a bit weird. Obviously I was supposed to thank him for stopping, which is like being thanked for paying for your food in a supermarket, something that's obviously expected. "Thank you very much, I paid." Do you know what I mean? Like you're doing them a favour or something. And it's like, yeah, that's kind of the rules.

Well,

the takeaway is that you've got a better,

sunnier disposition than

this poor person.

That's very strange.

There was only me and this

car on the road as well.

So it was like,

because it was super early.

So I just thought, what a bizarre thing.

Anyway, so yeah, I had that this morning, which obviously started the day a bit off. There's a lot of aggro in London this morning, so it seems.

And I'm a little bit hungover.

That might have contributed

to your perception.

Maybe I just walked out in front of his car in a hungover state, and actually it was legitimate: he had to brake really quickly.

In my world, it wasn't like that at all.

You look both ways.

Green cross code.

I'm on a vote.

Exactly.

I was perfect.

But really, exactly.

So anyway, let's get into these stories. CoreWeave: obviously this is something you were kind of riffing on, around the big valuation. It used to be a cryptocurrency mining company, and then it started to specialise in GPU and AI workloads, obviously because it was already using that technology to do the mining. What do you think about its thirty-five billion valuation this year, for a very, very small and very, very recent company, from twenty seventeen?

Twenty seventeen.

Yeah.

Doing crypto mining.

And then in twenty twenty-one, I think, if I remember rightly, or maybe twenty nineteen, not that long ago.

The crypto mining space is

obviously extraordinarily volatile.

And there was a switch, it's all going out of my head, there are two, there's Bitcoin and then the other main, bigger cryptocurrency [Ethereum], which switched from proof of work to proof of stake, and that kind of took the wind out of some of their pure compute play for crypto mining.

I actually really don't know

what you're saying.

I actually don't.

So what do you mean, proof of work to proof of stake?

So cryptocurrency mining is based on doing vast amounts of computation to solve cryptographic puzzles, effectively hashing, like cryptography itself. It's very easy to prove that the work's been done, but it's very hard to do the work in the first place.

So it's perfect for storing

on a blockchain and saying, okay,

all the transactions are stored.

We know what's going on, but you've got to do an extortionate amount of compute to prove that next block, if you will, before it's presentable on the blockchain.

So that's called mining,

and that's highly

contentious because

arguably you're just doing

maths and heating up the

planet and using all the

power for an arguable IOU

sort of use case when money

already exists.

Mm-hmm.

But, you know, there are other ways of using cryptography to record something, other than the actual work itself, to make that more efficient.
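The asymmetry being described here, the work is extortionate to produce but trivial to check, is the whole trick. A toy sketch of the idea (real mining hashes block headers against a numeric difficulty target; the colon-joined string below is just for illustration):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce until the block hash starts with `difficulty`
    zero hex digits -- the expensive part that miners race to do."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Checking the proof is a single hash -- the cheap part."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Low difficulty so the example finishes in well under a second.
nonce = mine("tx1,tx2,tx3", difficulty=4)
```

Each extra zero of difficulty multiplies the expected mining work by sixteen while verification stays one hash, which is why the proof is cheap for the network to audit, expensive to produce, and why the latest chips matter so much to a miner.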

And that made them thirty

five billion how?

So they were sitting on a whole bunch of hardware. They were using NVIDIA chips, the latest and greatest,

and they had a competitive

advantage when mining

cryptocurrencies if they

could just have the latest

chips at all times,

because then they can mine

the most money effectively.

If there's an exchange rate

for the cryptocurrency you're using,

you can cash it in.

So you literally mine for

money by using NVIDIA.

But as time's gone on,

AI has become a thing,

and they were already

scaling NVIDIA pure play

computing platforms and data centers.

So they found themselves at

the right place at the

right time to take advantage of a newer,

better, and in some ways,

slightly less contentious

use of their data centers.

And I think a lot of their revenue comes from Microsoft.

And Microsoft clearly have

arrangements with OpenAI

and are riding the AI wave

at the same time.

So it's interesting.

And I think inference for AI

is probably a much better

use than doing maths to secure value.

Well, they're making money through the computation aspect directly, aren't they? They're selling the compute for other people to make money.

Absolutely.

I know that NVIDIA have got

like seven percent

investment in CoreWeave.

I think they managed to secure it

as part of the IPO.

So I was reading about this,

how they basically get

access to all of the latest NVIDIA chips.

So quite cutting edge,

which is obviously quite good.

Sorry, you were saying?

Yeah, I was just going to say, and then there was also a deal with OpenAI as well, an eleven point nine billion dollar deal, which is mental. And obviously it's also interesting because, given the Microsoft relationship with OpenAI, you'd expect them to just use Microsoft. So there's obviously so much compute needed for OpenAI that they're kind of surpassing Azure's capability for it. And maybe, given that you don't really want OpenAI eating all the compute for your customers' GPU needs, it does kind of make sense that they diversify out and use something else.

That was quite good.

It's funny trying to work out how much of it is a diversification: from Microsoft, who are using the compute for hosting OpenAI, to OpenAI directly, for hosting the models or training the models. I don't know, it's an interesting play. I guess at the moment they're making a net loss, nearly a billion in twenty twenty-four, eight hundred and sixty-three million. So I guess time will tell. There are a lot of interested parties and a lot of people wanting their compute regardless, so I don't think they'll go under. But also...

It does feel like, well, I hope for their sake it's not the next WeWork: a big valuation, loads of investment forever, and never turning a profit. They're slated to maybe turn a profit by twenty twenty-nine. So they must be able to turn it. Surely, if they've had an eleven point nine billion commitment from OpenAI... well, it's not quite an investment.

It's a contract, isn't it?

They've basically got a

commercial deal over several years.

So that's obviously going to

add to their revenue.

The bottom line, and part of the business model, is to continue growing, funded effectively by investment from the parties that just need the compute. They've got very advanced data centres, with water cooling, InfiniBand, and all the things you need to run GPUs in a data centre at as large a scale as you can go. But part of the business model, I was reading, is securing the investment to grow the data centres. So it's a net loss year on year, but net growth in valuation and continued custom. It's too early to say, but yeah, interesting.

Very interesting. Demand in this space is just constant growth, and now there are the pure-play providers.

Yeah, I think they've got very low latency InfiniBand, so the transmission of data to the models training on the GPUs happens with such low overhead that it's obviously very, very quick.

I've not actually used it. Have you used CoreWeave?

No,

I hadn't really heard of them because

they're obviously a couple

of removed from normal consumers.

Their customers are cloud

vendors and people needing

to host model training and

model inference,

and their hardware just fits.

I guess most of the cloud computing,

I guess the interesting

thing here is the cloud

platforms hosting their own

models and their own progression in AI,

but also potentially

brokering other service

providers who host their

own training and hardware.

I guess there's other news

that touches on this as we move forward.

Interesting.

Yeah,

I know they do have the SaaS offering,

and they've got the fully

managed Kubernetes.

I know you can kind of sign up.

I kind of remember reading

about that a while back there.

I feel like good old Kubernetes is quite relevant when the main workload is kind of thin and stateless.

And the thing that you're working on,

you need as many GPUs as possible.

So you just need to keep

connecting to loads of nodes.

So yeah, interesting.

And then Amazon, they've launched... this is kind of interesting, because there was a pitch recently that we were part of, where Amazon were kind of pushing their partners to use a product called vFunction, which basically does exactly what this does. It takes legacy apps, analyses the code base, starts to document the business logic, tries to understand the business logic, and starts to write the more modernised code based off the legacy system. And then literally, that was maybe two weeks ago, or maybe it was even last week, and then AWS Transform came out, which is their own service that does exactly what they were telling their partners, as vFunction partners, to use. So it's kind of interesting.

So on paper it does look really, really good. These are the big claims: it has ninety per cent code accuracy, and I think in some cases a hundred per cent. And it uses a graph neural network with the LLM, so it starts mapping the relationships between everything as it goes through the code base. Apparently quite accurate, so it says. And it's four times faster than you trying to do it on your own and rework it all. And it has a human in the loop, so basically you can approve the things it comes back with

and, you know, explain whether it's good. It will even provision things for you, write the CloudFormation for you, and go and host the app for you afterwards. So obviously there's a big play on all these big legacy apps in these companies that no one ever dares even think about starting that project on: it's written in COBOL, and the people who know COBOL are retiring, and there's business logic in there, so there's obviously more and more risk, and businesses actually just need to get this done. So it's quite a good move, I think.
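At toy scale, the relationship-mapping idea looks something like this sketch, which uses Python's `ast` module to build a module-level dependency graph. The module names are hypothetical, and the real service presumably builds far richer graphs across repositories and infrastructure:

```python
import ast

def import_graph(modules: dict[str, str]) -> dict[str, set[str]]:
    """Map each module name to the set of modules it imports --
    the kind of relationship data a graph model can be run over."""
    graph: dict[str, set[str]] = {}
    for name, source in modules.items():
        deps: set[str] = set()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        graph[name] = deps
    return graph

# Hypothetical legacy codebase: three modules and their import statements.
legacy = {
    "billing": "import ledger\nfrom tax import vat_rate",
    "ledger": "import tax",
    "tax": "",
}
graph = import_graph(legacy)
```

From a graph like this you can read off, for instance, that `tax` has no dependencies and everything depends on it, exactly the sort of structure you need before deciding how to carve a legacy app into services.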

I agree it's a good move to start to address it. But I think it kind of shows the approach: when you're faced with this whole rise of agentic AI, the temptation is to quickly flowchart up an agent flow in n8n or Vertex AI or whatever, and all your business problems are solved. I think the devil's in the detail, and in having the right type of model.

No.

It's a hundred percent code accuracy.

A hundred percent.

Oh, you're right.

Yeah, sorry.

So it's actually done.

Let me give you a couple of details.

Four times faster and a

hundred percent code accuracy.

There's a couple of details for you.

Well, they also said ninety percent.

You know, if we're splitting hairs,

maybe not.

I guess the point I'm trying to make is about trying to build your own agentic flow without using the right type of model, whether it be an LLM, a tool, or a combination of specifically trained networks. I mean, I think the graph neural networks they're using as part of this service train on a hierarchy, a graph, a tree of all the related nodes: the connectivity between pieces of code, the layout of code repositories, through to pieces of infrastructure, how that software architecture needs to be hosted and communicate, and therefore what you need to wrap it in and how you need to re-host it.

It's fascinating.

I imagine the devil that I speak of will be in actually getting this end-to-end story to apply to your business logic, when there are hidden blobs, compiled binaries with no source available, all sorts of mess that's going to exist, and SaaSes and other bits of hidden functionality which aren't privy to the model.

Yeah.

Human in the loop is going to be key, to say: ah, no, you can't just infer that; those docs were made up, or that person left and that was the intention of the spec doc, but it's actually just a stub, and that was a version that wasn't finished, and we never use that bit of code. All this type of reality would make the actual problem harder than the conceptual one. But I don't doubt for a minute that it would speed things up drastically, and a highly specific agentic flow written by Amazon has a much better chance than businesses trying to cobble together a sort of "let's do this for code". And given a very specific output, a very specific set of target frameworks and hosting options, and a very specific set of frameworks they're going from, there's a chance, isn't there?

Yeah, I think it's their own model. So it's not like they're using some generic model; I think it's their own trained model under the hood, trained on code bases.

They're using Bedrock models. So they've got their own trained models for bits of the puzzle, LLMs for other bits of the puzzle, and their own graph neural networks specifically, where they're trying to work out relationships. And there's an agentic flow over the top to wire each of these together.

Yeah, exactly, doing it in a flow. So it's not like one model does it all.

Exactly, which does make a lot of sense. It sounds much more effective.

Yeah, very, very interesting.

And they also came out with an open source SDK for generating agents, AI agents, basically like a one-liner to bootstrap an agent. Again, very wedded to the Amazon ecosystem, so obviously it leverages, like you were just saying, the Bedrock models. So basically all the different models, Claude and Grok and so on... can I use them on Bedrock? Is Grok in there?

Is it now? The recent announcement was that Grok was just released for use with Azure, in AI Foundry, because they were slow to the table. It's not available on GCP, but I didn't realise it was available on Amazon. I did a report to find out which ones are available, and Grok wasn't on the report, but that was obviously from LLM training data rather than deep research. It's an interesting topic, where the models are available.

Yeah, I'm pretty sure it was in there, because there was some blog by Amazon somewhere around Grok. I could be wrong, but I'm pretty sure I saw something. People can tell me I'm full of shit, I suppose. Maybe I'm just like the AI: I just hallucinate and make things up.

That's the way to be.

But yeah, anyway, getting back to Strands Agents, which is the open source bit: they've open sourced a way of quickly creating a new agent, and you can put the tool integration in there, so you can obviously integrate with third parties, and they've got loop handling, so the agent can go off and do things, get the output, and feed it back.

Believe it or not, you can deploy it,

and unless it's going to

really surprise you,

you can deploy it in Amazon afterwards.

You can use this open source

Amazon-created project, and weirdly,

it also supports deploying

that thing into Amazon.

Do you know what's fascinating?

The cost of using this

framework and the agents is free,

but hosting things on Amazon isn't.

Yeah, that's a little bit "bring all your stuff into our cloud, we make it super easy".

No cap.

Exactly.

So it's good, though.

I mean,

I do like it because it does make

producing agents much easier.
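The create-an-agent-with-tools-and-a-loop shape being described boils down to a pattern like the sketch below. To be clear, this is a generic illustration of the agentic loop, not Strands' actual API, and the stub model and weather tool are invented for the example:

```python
def run_agent(prompt, model, tools, max_steps=5):
    """Minimal agentic loop: the model either answers or asks for a tool;
    tool output is appended to the context and the model is called again."""
    context = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        action = model(context)
        if action["type"] == "answer":
            return action["content"]
        # Run the requested tool and feed its result back into the loop.
        result = tools[action["tool"]](**action["args"])
        context.append({"role": "tool", "tool": action["tool"], "content": result})
    raise RuntimeError("agent did not finish within max_steps")

# A stub "model": calls the weather tool once, then answers with its output.
def stub_model(context):
    if context[-1]["role"] == "tool":
        return {"type": "answer", "content": f"It is {context[-1]['content']} in London"}
    return {"type": "tool", "tool": "weather", "args": {"city": "London"}}

answer = run_agent("Weather in London?", stub_model, {"weather": lambda city: "raining"})
```

A real SDK swaps the stub for an LLM call and adds retries, streaming, and deployment, but this loop is the heart of it.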

Going to be very,

very interesting next few years,

isn't it?

With all these MCP servers coming out, and how quick it is to build an agent. The agentic flow is basically flowing from one agent to another, to string things together: a workflow, or an outcome, like we were just talking about. So there's going to be so much innovation, isn't there, really?

It's going to be insane.

In general business process, it feels like reducing hallucination is key, and testing for that is hard; but you can do it with highly specific agents that each do one bit of a process well, and then link them up. And I think with code, ironically, at least there are really strong signals, whether it compiles, for a start. So there are some stronger indications that code is one of the first things in AI to show real benefit. Yeah, it's mad. What are your views? Because you've already kind of touched on this: the Grok 3 announcement, now on Azure AI Foundry, which is basically an equivalent to Amazon's Bedrock, essentially, isn't it? A list of the models available.

That's right.

I mean, I found,

because I researched this specifically.

Oh, you're saying I don't research stuff.

It felt like a really... no, it wouldn't be that. Let's just put it down to, you know, the fact that you were out last night and you're hungover today.

Almost got run over today, yeah.

Yeah, fair.

So what I found was that Microsoft were first with getting Grok. Grok is one of many, many models.

And I think the important

thing is each cloud

provider has very specific models,

foundational models that

are good at very specific things.

and they host them: you get access to the AWS models through AWS, and through GCP's Model Garden you get access to all the Gemini models and the latest Veo models, the video generation models, plus the image generation models and the audio models. Each cloud has got different specialities and different training.

And I'd say, you know, GCP is very much at the forefront of some AI research, but they realise they've got to democratise it too, and give you all the Llama models and all the others, and bring-your-own models. And so each cloud provider has got a slightly different shape of how this all fits together.

But yeah, I guess it's too early to say how this all links up, because of things like the agent-to-agent protocol, and where you host things, and which models you use at which cost. I mean, it sounds like a job for an agent to work out where best to run all the agents, and then which models and which bits do which bits of the problem.

Yeah, there are going to be quite a lot of new standards coming out, surely, like agent-to-agent, maybe even new RFC standards. DNS for agents, I think I was reading something about that, for discoverability of agents: actually thinking through how an agent can discover other agents.
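For a flavour of how discovery might work: the agent-to-agent (A2A) protocol proposes that agents publish an "agent card", a well-known JSON document advertising their skills. The card fields and matching logic below are illustrative assumptions rather than the finalised spec:

```python
import json

# An A2A-style agent card: the kind of document an agent might serve at a
# well-known URL so other agents can discover it. Schema is assumed here.
CARD = json.loads("""
{
  "name": "research-agent",
  "url": "https://agents.example.com/research",
  "skills": [{"id": "web-search", "description": "Search the live web"}]
}
""")

def find_agent(cards: list[dict], skill_id: str):
    """Return the endpoint of the first agent advertising the skill we need."""
    for card in cards:
        if any(skill["id"] == skill_id for skill in card.get("skills", [])):
            return card["url"]
    return None
```

A DNS-like registry would just be a way of collecting such cards so that `find_agent` can run over the whole network rather than a hard-coded list.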

Oh, this is like, what was the name for it, the semantic web? It was all going to be based on XML, and it was all going to change the world, because computers would then be able to look up all the other web services. It was web services with XML, wasn't it, back in the nineties: this is how we're going to connect up the web, because the web will understand how to connect itself up. But you kind of need a bit of intelligence on top of that.

And it's kind of coming full circle. Now you need some documents written in English that AI models can read and understand how to use, but they've got more power than, you know, a fixed JSON schema. It's more open: these are the capabilities, and these are the blobs that you can stick things in as you move your context around.

So, yeah, fascinating times.

Yeah, the barrier to entry is dropping, isn't it? I mean, you can create an agent that's very specific, that you want to do one very specific thing, like you're saying. And now there are all these different types of models that have different advantages, and you can basically figure that out: Grok for more research-based stuff, potentially, scientific research, data, web citations, live data, basically. Then, I guess, if you have access to all the different models for your agent to leverage, like you're saying, you can choose exactly the right one for that job. And it only does one job, that's the point. And then you can quickly go and write another agent really fast, because you can bootstrap it using Strands.

And that's got a different

model maybe behind the

scenes because it's more suited.

And then that does that specific job.

And then you chain those

jobs together and you get

into the agentic flows,

which is what you're talking about.

Then before you know it,

you've managed to really

probably not write much

code and potentially give

very good answers or very

good results to whatever it

is you're trying to do.

Plus then hook it into your tools,

the things you might want to run.

Constrain the output so you can check and qualify it: if it doesn't really look like this, it's probably not a good answer, so you don't give silly answers that you know are made up. You can kind of filter those things out, and you've constrained it, so you're like, well, actually, the probability of it making a mistake is really, really low, and you've got a really neat little service. So, when you're writing one...
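A qualifying gate like the one just described can be a few lines: check that the output fits an expected shape and confidence band before letting it through. The field names, allowed actions, and threshold here are invented for illustration:

```python
def qualify(answer: dict, allowed_actions: set, min_confidence: float = 0.7):
    """Return the answer only if it fits the constrained shape; otherwise None,
    so made-up or out-of-policy outputs get filtered rather than served."""
    action = answer.get("action")
    confidence = answer.get("confidence", 0.0)
    if not isinstance(action, str) or action not in allowed_actions:
        return None
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        return None
    if confidence < min_confidence:  # tunable threshold
        return None
    return answer

good = qualify({"action": "refund", "confidence": 0.92}, {"refund", "escalate"})
bad = qualify({"action": "delete_all", "confidence": 0.99}, {"refund", "escalate"})
```

Anything the gate rejects can fall back to a human or a retry, which is how you keep the probability of a served mistake low.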

I'm just finishing one now.

In fact, this is one.

Oh, wow, it's this one, is it?

This is just a bunch of

agents taking some flow,

generating some images, listening,

going down some deep research.

I can seem really clever, but actually,

just generating.

You are really just an AI-generated Lewis.

Yeah, it's basically my avatar, generated on demand. It listens to your voice, goes off, uses a bunch of protocols to talk to different agent services, they run tools, they have access to the web, and it comes back and I hallucinate all this rubbish.

I was thinking, because I was looking,

and I was like,

I don't think you are real.

Because that hair.

It's just pixels.

I do need a hair.

This weekend will be the hair.

That's better.

No hair.

God, I look like my brother.

My brother's got no hair.

Oh, really?

Your brother's got no hair.

Well, you could chop your hair off and post it to him.

There's enough hair to go around,

is what we're saying.

Or get an AI for your

brother that has hair,

an avatar for your brother with hair.

Maybe that's the way to go.

That's the solution.

That's the thing. The next thing we're going to talk about is where I am slightly out of my depth. I'm not going to even pretend, not that I've pretended at all anyway about how little I know. But the quantum supercomputer, ABCI-Q, which is a Japanese supercomputer for HPC, essentially, isn't it? Supercomputers. NVIDIA is obviously involved in that. And I think there's also D-Wave's Advantage2.

Two separate investments. ABCI-Q is one thing that happens to use NVIDIA, but NVIDIA are also investing in and partnering with D-Wave. Two separate things.

Yeah, separately, yeah.

They're investing everywhere. They're invested in CoreWeave. They've open-sourced their GR00T model for robotics. They've invested in D-Wave. They're investing now in the ABCI-Q supercomputer. They are really putting their money to work.

They have some cash, don't they?

So I guess it makes a lot of sense.

And I would never have thought, years ago, of NVIDIA, you know, the graphics card company...

Mm-hmm.

...being this behemoth. They're almost taking over, aren't they? Slowly, slowly manoeuvring all over the place. Quite, yeah, quite interesting. But anyway, that's just more from an investment perspective. Getting back to the supercomputers.

Go on then, Lewis.

Let's talk about qubits.

Let's talk about how this really works.

Let's figure out how super a

supercomputer is.

How many supers?

What's the scale of supers?

I tried to actually understand quantum computing, more specifically. It's very bizarre, because there are a lot of scientific articles saying quantum annealing is the thing that D-Wave's quantum devices actually do, and that's not the same as quantum computing. So at this point...

It's not the same?

It's not the same, no. I mean, they are different words, but what is quantum annealing?

See, I asked an LLM, I did a couple of searches just before, and...

And I thought you were an AI.

Yeah. I think actually understanding quantum annealing is going to take a little bit longer than a bit of research before the podcast. But quantum computing per se is an area that all the cloud computing providers, you know, Google, Microsoft, are desperately trying to crack, because there are certain types of computing problem which are best expressed and dealt with by nature, using qubits.

I mean,

if you can express it in a system

which uses quantum effects

to work things out in parallel,

then you can find

like a complex path for a

route of options and do all

sorts of very specific

calculations almost

immediately, where you can't with classical computers.

So the promise is huge.

And the AI involvement with GPUs is about working out how to do quantum computing and quantum research, rather than AI somehow using quantum chips. Yet. So at this point it's very much in the research space: using AI to run quantum simulation on one side of the research, and also to infer the progress needed to better make quantum devices. So it's on two fronts. And those are lots of words, all in a superposition that, if we measure it...

Might just evaporate very quickly.

Well, let me just explain. This is going to make a lot of sense. You tell me how quantum annealing works.

I'm going to read out exactly what quantum annealing is.

Did you refer to an LLM or a search engine?

I've just searched.

This is from your head?

No, this is what it says it is. And it explains it really, really well, actually.

Okay.

It's a computing paradigm that uses quantum mechanical effects, such as superposition, entanglement, and critically, quantum tunnelling. What it does is search the energy landscape of an optimisation problem for its global minimum. Instead of applying a sequence of logic gates, which you would have thought it would do, like a gate-model quantum computer, it slowly cools a quantum system from an easily prepared ground state towards a final Hamiltonian that encodes the problem of interest. So, basically, there you go. I mean, they couldn't have put it any simpler, really. That's basic stuff. I don't know what you're getting at.
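For intuition, the classical cousin of what was just read out is simulated annealing: wander an energy landscape, accept some uphill moves while the "temperature" is high, and cool slowly so the walker settles into the global minimum (quantum annealing gets its extra kick from tunnelling through barriers instead of climbing over them). A minimal sketch on a toy double-well landscape:

```python
import math
import random

def anneal(energy, start, step=0.6, steps=30000, t0=3.0):
    """Simulated annealing: random-walk the landscape, accepting uphill moves
    with probability exp(-delta/T), and lower T over time so the walker
    escapes local minima early and settles into the global one."""
    random.seed(0)                           # deterministic for the example
    x, best = start, start
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9      # linear cooling schedule
        candidate = x + random.uniform(-step, step)
        delta = energy(candidate) - energy(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if energy(x) < energy(best):
            best = x
    return best

# Double-well landscape: local minimum near x = +1, global minimum near x = -1.
energy = lambda x: (x * x - 1) ** 2 + 0.3 * x
best = anneal(energy, start=2.0)
```

Started at x = 2.0, the walker has to escape the shallow well near x = +1 and cross the barrier at x = 0 to reach the deeper well near x = -1, which the early high-temperature phase makes easy.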

Complexities of content.

Honestly,

I'm slightly embarrassed for you

that you didn't know what it was now.

I mean, it's so...

Although nearly all quantum physicists, and physicists generally, when they talk about the two theories of nature, one being the standard model, which obviously needs quantum understanding and quantum theory to be represented, say it doesn't offer us any common sense. And unless you actually deal with the wave function and the maths involved, it's all a bit meh.

I mean, basically,

things can be in more than

one place at one time,

and they don't exist until

you measure them, or do they?

And the universe thing, black holes stuff,

it's great.

That's my summary.

Did my summary make more

sense than the thing you read out,

or less?

I kind of feel that might

have made less sense than

what I just read out.

That was called a human hallucination.

Anyway, look, the point being: the ability to compute somehow, we don't know how, at a rate of knots, by using qubits, and we don't really fully understand exactly what that means, other than whatever a qubit ends up being, some superposition of something or other. Something something science. Anyway, basically: very powerful. That's the crux, that's how it translates. That is quantum computing, and that is the end of this quantum course. You can leave feedback at the end.

We might need it, yeah.

But yeah, very, very interesting. I don't really know enough about how it works. I just know that, like you were saying, the research is still happening, it's still very early days. I haven't quite managed to pin it down, but there's clearly an opportunity in getting in on those advancements and making sure they're part of whatever the end result of manufacturing those chips ends up being, in whatever future happens when it gets to quantum. Which kind of makes sense from an investment perspective.

Makes a lot of sense.

For us mere mortals who

don't understand what the

hell is going on with quantum,

we just see it as an investment.

That's what we're summarising it to be.

There's a lot of investment, but it's very unclear how long the play is. And the cost in the end, you know, what will it cost to manufacture any of these chips, whatever they manage to build?

Well, at the moment the chips have to be cooled to practically close to absolute zero. So although you may have a chip, you might only have a few qubits, and it might take up the size of a room and do just one calculation a day. But that calculation will be very fast. So it's hardly a practical device that you can put in your cell phone just yet.

So basically the calculations will go absolutely off the charts in winter, and in summer we degrade down to the morons that we are. But in winter, my God.

When I say absolute zero, I'm talking minus two hundred and seventy-odd degrees.

Oh, you're talking... I see.

Kelvin. Not just a bit chilly. The atoms need to be really quiet.

You're talking Kelvin. Absolute zero Kelvin is what you're referring to. Okay, fair.

Not a lot of movement of any atoms.

Well, anyway.

On that bombshell of no

information in the podcast,

I think we should end it there.

Obviously,

there'll be another episode next week.

We have a guest episode: I did an interview with Matt Griffiths, who heads up the cloud platform team for Phoenix Group, who own lots of different insurance companies and such. That's quite interesting, and we talk about AI and the culture change that AI is probably going to bring to teams in the future, what it actually means for the engineering culture inside a business. So we've got some hot takes there. I'm releasing that probably this week, or maybe next week, we'll see.

So, yeah.

Cool.

Well,

let's get out of here and get some

computing going, some quantums,

some qubits.

Let's go, Lewis.

Maybe not quantum.

Bit of AI.

All right.

See you all later.

Bye.

Bye-bye.

Creators and Guests

Lewis Marshall
Host
Lewis is a Senior product engineer, co-founder of Appvia, lover of all things AI, science, space and anything engineering!