Hacker News | sam's comments

Well said about the “lossy crankshaft”. As for what determines Q, that’s up next, stay tuned :)

Power to magnets (at least those not contributing to heating) is assumed to be included in the house load.

For pulsed power, with an optimistic beta of 1, the magnetic field energy is going to be comparable to the heat energy. The house load here seems tied to a static superconducting coil, not a pulsed field.
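As a back-of-envelope sketch of that comparison (β defined as plasma pressure over magnetic pressure B²/2μ₀; the 5 T field here is purely illustrative, not from the article):

```python
# Compare magnetic field energy density with plasma thermal pressure.
# beta = plasma pressure / magnetic pressure; at beta = 1 they are equal,
# so the field energy is at least comparable to the plasma thermal energy.
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

def magnetic_pressure(B):
    """Magnetic energy density B^2 / (2*mu0), in Pa (= J/m^3)."""
    return B**2 / (2 * MU0)

B = 5.0  # tesla, illustrative field strength
p_mag = magnetic_pressure(B)
print(f"magnetic energy density at {B} T: {p_mag/1e6:.1f} MJ/m^3")
```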

And can in many cases be much higher than the heat energy (e.g. theta pinch).


Thanks - what browser?

technically LibreWolf, but it should be just the latest Firefox with some privacy options preselected (like blocking your requests to cloudflare insights)

other than the one about cloudflare, I see no errors/warnings in F12

the "load only on light theme switch" issue has fixed itself; the other 2 problems are still there for me


Yeah, it’s slightly buried - the publication with all the details, including the math, is linked in footnote 3 of the explainer article:

https://pubs.aip.org/aip/pop/article/29/6/062103/2847827/Pro...

It’s open access and you can download the PDF directly from there.


This is mistaken. In space a radiator can radiate to cold (2.7K) deep space. A thermos on earth cannot. The temperature difference between the inner and outer walls of the thermos is much lower and it’s the temperature difference which determines the rate of cooling.


"Radiate" is exactly what you have to do, and that is extremely slow. You need a huge area to dissipate the amount of power you are talking about.


Basically you concentrate the heat into a high-emissivity, high-temperature material that’s facing deep space and is shaded. Radiators get dramatically smaller as temperature goes up because radiation scales as T⁴ (Stefan–Boltzmann). There are many cases in space where you need to radiate heat - see Kerbal Space Program.


"High emissivity, high temperature" sounds good on paper, but to create that temperature gradient within your spacecraft the way you want costs a lot of energy. What you actually do is add a shit load of surface area to your spacecraft, give that whole thing a coating that improves its emissivity, and try your hardest to minimize the thermal gradient from the heat source (the hot part) throughout the radiator. Emissivity isn't going past 1 in that equation, and you're going to have a very hard time getting your radiator to be hotter than your heat source.

Note that KSP is a game that fictionalizes a lot of things, and sizes of solar panels and radiators are one of those things.


I’m not sure I understand why creating the gradient is hard - use a phase-transitioning heat pump feeding a high-surface-area radiator. The radiator doesn’t have to be hotter than the heat source; it just has to be hot. Given that we are talking about a space data center, you can certainly use the heat pump to make the radiator much hotter than any single GPU, and even use energy from the heat cycle to power the pumps - though I imagine in such a data center the power draw of the heat pump would be tiny compared to the GPUs.

To be clear, I’m not advocating KSP as a reality simulator, or claiming that data centers in space aren’t totally bonkers. But the reality is: the hotter the radiator, the smaller the surface area needed for purely radiative dissipation of heat.


I am referring to "using a heat pump to make the radiator hotter than the GPU" as "creating a thermal gradient." No matter the technology, moving heat like this is always pretty expensive in power terms, and the price goes way up if you want the radiator hotter than the thing it's cooling.

Can you point to a terrestrial system similar to what you are proposing? Liquid cooling and phase change cooling in computers always has a radiator that is cooler than the component it is chilling.

You can do this in theory, but it takes so much power you are better off with some heat pumping to much bigger passive radiators that are cooler than your silicon (like everything else in space).


Yeah, but the key is that it’s not the power draw that’s the issue - it’s the dissipation of thermal energy through pure radiation. The heat of the radiator is really important because it reduces the required surface area immensely as it scales up.

However the radiators you’re discussing are not pure radiance radiators. They transfer most heat to some other material like forced air. This is why they are cooler - they aren’t relying on the heat of the material to radiate rapidly enough.

I would note an obvious terrestrial example though is a home heat pump. The typical radiator is actually hotter than the home itself, and especially the heads and material being circulated. Another is any adiabatic refrigerator where the coils are much hotter than the refrigerated space. Peltier coolers even more so where you can freeze the nitrogen in the air with a peltier tower but the hot surface is intensely hot and unless you can move the heat from it rapidly the peltier effect collapses. (I went through a period of trying to freeze air at home for fun so there you go)

For radiation of heat the equation is P = εσAT⁴, where P = radiated power, A = surface area, T = absolute temperature (kelvin), ε = emissivity, and σ = the Stefan–Boltzmann constant.

This means radiated power increases with the fourth power of the material’s temperature, which is a dramatic amount of variance as it scales. If you can expend the power to double the temperature, the radiator emits 16x the heat, so you can use a much lower mass and surface area.

This is why space-based nuclear reactors invariably use high-temperature radiators. The ideal radiators are effectively carbon radiators, in that they have nearly perfect emissivity and extraordinarily high temperature tolerances, and even get harder at very high temperatures. They’re just delicate and hard to manufacture. This is very different from conduction-based radiators, where metals are ideal.
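A quick sketch of that T⁴ scaling (the 1 MW load, temperatures, and perfect emissivity here are illustrative numbers, not from the thread):

```python
# Radiated power per Stefan-Boltzmann: P = eps * sigma * A * T^4.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(area_m2, temp_K, emissivity=1.0):
    """Power radiated by a surface of the given area and temperature."""
    return emissivity * SIGMA * area_m2 * temp_K**4

def area_needed(power_W, temp_K, emissivity=1.0):
    """Radiator area required to reject power_W at temp_K."""
    return power_W / (emissivity * SIGMA * temp_K**4)

# Doubling the temperature cuts the required area by 2^4 = 16x:
a300 = area_needed(1e6, 300)  # 1 MW rejected at 300 K
a600 = area_needed(1e6, 600)  # 1 MW rejected at 600 K
print(a300 / a600)  # ~16x smaller at the higher temperature
```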


Making your radiator hotter than the thing you're pulling heat out of is very, very expensive in energy terms. This is why home AC is so expensive and why nobody uses systems like this to cool computers. All that energy has to come from a solar panel you fly, too, so you're not saving mass by doing this. You're just shifting it from cooling to power. If you need 200W to cool 100W of compute, you're tripling the amount of power you need to do that work.
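A rough way to see that cost is the Carnot limit on refrigeration (the 350 K / 700 K temperatures and 100 kW load below are hypothetical, and real hardware is far worse than Carnot):

```python
# Idealized (Carnot) cost of pumping heat from chips at t_cold_K
# up to a radiator at t_hot_K. Real systems fall well short of this.
def carnot_cop_cooling(t_cold_K, t_hot_K):
    """Best-case coefficient of performance for refrigeration."""
    return t_cold_K / (t_hot_K - t_cold_K)

def pump_power(heat_W, t_cold_K, t_hot_K):
    """Minimum input power to move heat_W across that temperature lift."""
    return heat_W / carnot_cop_cooling(t_cold_K, t_hot_K)

# 100 kW of silicon at 350 K, radiator pushed to 700 K:
w = pump_power(100e3, 350, 700)
print(f"{w/1e3:.0f} kW of pump power, even at the Carnot limit")
```

So doubling the radiator temperature here doubles the total heat the radiator must reject (compute plus pump work), which eats into the T⁴ area savings.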

Also, peltiers are less energy-efficient than compressors. That is why no home AC uses a peltier.


Stupid question: Why not transfer the heat to some kind of material and then jettison that out to space? Maybe something that can burn itself out and leave little material behind?


How many times can you do that?

Consider your own computer... how often does it get hot under a regular load and the fans kick on? That "fans kick on" is transferring the heat to air and jettisoning it into the room... and you're dealing with 100 watts there. Scale that up to kilowatts that are always running.

There is a lot of energy that is being consumed for computation and being converted into heat.

The other part of that is... it's a lot easier to transfer heat into some other material and jettison it on earth, without having to ship the rack into space and also deal with the additional mechanics of getting rid of hot things. You've got advantages like "cold things sink in gravity", "you can push heat around and sink it into other things (like the phase change of water)", and "you don't need to be sitting on top of a power plant in order to use the power."


I have a vacuum thermos. I've been unimpressed with its ability to keep coffee hot.


In the context implied above it is the ratio of fusion energy released to laser energy on target or the laser energy crossing the vacuum vessel boundary (they are the same in this case). So it would have been more precise to say "target gain" or "scientific gain".


We are careful to always specify what kind of “breakeven” or “gain” is being referred to on all graphs and statements about the performance of specific experiments in this paper.

Energy gain (in the general sense) is the ratio of fusion energy released to the incoming heating energy crossing some closed boundary.

The right question to ask is then: “what is the closed boundary across which the heating energy is being measured?” For scientific gain, this boundary is the vacuum vessel wall. For facility gain, it is the facility boundary.
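As a worked illustration of how much the boundary matters (numbers loosely based on NIF's December 2022 shot; the ~300 MJ wall-plug figure is a commonly quoted rough value, not from this thread):

```python
# Gain depends on where you draw the closed boundary.
E_fusion = 3.15      # MJ of fusion energy released by the target
E_on_target = 2.05   # MJ of laser energy crossing the vacuum vessel boundary
E_wall_plug = 300.0  # MJ drawn across the facility boundary (rough figure)

Q_sci = E_fusion / E_on_target   # scientific (target) gain
Q_fac = E_fusion / E_wall_plug   # facility gain

print(f"Q_sci = {Q_sci:.2f}, Q_facility = {Q_fac:.3f}")
```

The same shot is "breakeven" by the scientific-gain definition yet far below it by the facility-gain definition.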


It’s the ratio of fusion energy released to heating energy crossing the vacuum vessel boundary.


Author here - some other posters have touched on the reasons. Much of the focus on high performing tokamaks shifted to ITER in recent decades, though this is now changing as fusion companies are utilizing new enabling technologies like high-temperature superconductors.

Additionally, the final plot of scientific gain (Qsci) vs time effectively requires the use of deuterium-tritium fuel to generate the amounts of fusion energy needed for an appreciable level of Qsci. The number of tokamak experiments utilizing deuterium-tritium is small.


Thanks a lot for this research. Seeing the comments here, I think it's really important to make breakthroughs and progress more visible to the public. Otherwise the impression that "we're always 50 years away" stays strong.

Here was my completely layman attempt to forecast fusion viability a few months ago. https://news.ycombinator.com/item?id=42791997 (in short: 2037)

Is there some semblance of realism there you think?


In the 2037 timeframe, modeling trends doesn’t matter as much as looking at the actual players. I think the odds are good because you have at least 4 very well funded groups shooting to have something before 2035: commercial groups including CFS, Helion, and TAE, plus the ITER effort. Maybe more. Each with generally independent approaches. I think scientific viability will be proven by 2035, but getting to economic viability could take much longer.


If ITER is where it's at, why are we building commercial-scale tokamaks? https://en.wikipedia.org/wiki/Commonwealth_Fusion_Systems


Companies like Commonwealth Fusion Systems are an example of those utilizing high-temperature superconductors which did not exist commercially when ITER was being designed.


ITER uses HTSs, just not for the coils:

> The design operating current of the feeders is 68 kA. High temperature superconductor (HTS) current leads transmit the high-power currents from the room-temperature power supplies to the low-temperature superconducting coils at 4K (-269°C) with minimum heat load.

Source: https://www.iter.org/machine/magnets


HTS current feeds are a good idea (we also use them at CFS, my employer: https://www.instagram.com/p/DJXInDUuDAK/). It's HTS in the coils (electromagnets) that enables higher magnetic fields and thus a more compact tokamak.


This comment thread will go down in history along with the famous HN Dropbox thread.

This thing is incredible and will eventually crush the iPhone. Solves iPhone addiction while retaining the utility of an iPhone? Solid gold.


The thing is, most people don't actually want to solve their phone addiction even if they say they do.

In reality, they want to read news while waiting at a doctor's office, play games while they take the subway, and see Instagram updates from friends throughout the day.

And if you already want a less capable device, it's called an Apple Watch, but it comes with a little screen that is way more useful than laser projection, and will soon surely have a powerful LLM it can access. (And paired with AirPods it does a much better job preserving your audio privacy.)

So it's hard to see how this is going to succeed, when Apple can just copy the good part (LLM) as part of the Watch.


IMO "Solves iPhone addiction" is more or less a rephrasing of "people will quickly get bored of this".

It's just a smartphone, except you can't run third-party software, can't directly interface with it, and can't connect it to other machines. And instead of holding an N-million-pixel, M-million-colour, extremely high-contrast display directly in your hand, you have to indirectly project (meaning extremely LOW contrast) a single-colour display onto your hand from a projector that's shaking around, clipped to your clothes.

The only single hypothetical upside I can see to this tech is that it might lower the two-second delay in looking at my phone caused by putting my hand in my pocket before raising my hand, but you could say that that goes against the goal of solving phone addiction.


Apple watch? Cellular mode allows this, has siri built in, can handle calling/messaging/etc. People don't want to replace their phones though.


> along with the famous HN Dropbox thread.

Most people saw the utility and the use cases of Dropbox even when it launched.

What's the utility and use case of this? What problem does it solve?


> iPhone addiction

This is not a thing. "Screens" aren't 'separating us from one another', or 'distracting us'; that's fuzzy verbalistic nonsense, made up by marketers who want to sell you non-phones, and bloviating op-ed columnists who don't have a clue. It's so ridiculous, that everyone has seen the memes debunking it.[0][1]

True invasiveness is expressed as: "how long does it take me to do this thing I want to do?" In other words, you need a human-computer interface that reduces friction as close to zero as possible. The phone won because it's the best at that. The "pin" is orders of magnitude worse, so it won't catch on.

[0]: https://xkcd.com/610/ [1]: https://imgflip.com/i/1swr7j


Maybe in the long-term view - people correctly identified that Dropbox had no differentiator (to quote Steve Jobs: "this is a feature, not a product").

