
In this scenario, is the person in the Oval Office a rapist, child molester, serial fraudster, corruptly manipulating stock markets, steering government money to his children’s own weapons companies, assassinating other world leaders, committing the war crime of declaring no quarter, committing the war crime of threatening to destroy all significant civilian infrastructure in another sovereign nation, committing the war crime of threatening genocide, and threatening the use of nuclear weapons in a preemptive military action?

Don't forget his incontinence, and that whole literal bulldozing of your democratic institutions.

Incontinence can happen to anyone. No need to pick on things that people can’t control. Especially when he has so many legitimate targets to hit.

There are still ISA slots in new systems with fairly modern processors and plenty of RAM, if you don’t mind buying specific models of industrial PCs for way too much money.

For $1100 or so you, too, could have a 4th generation Core i3 machine. https://www.rampcsystems.com/product/2-isa-slot

Or maybe you need 4 PCI and 9 ISA for some reason. DuroPC’s got you, if you can drop $1800 on a system with the same generation of processor. https://duropc.com/product/r810-4p9i-4


ISA slots are all identical. If you have one slot, you can multiply it to 100 slots just by connecting the wires.


IBM was always special. :-) Aren't they the ones who invented the MCA bus abomination that required a floppy disk to configure each card?

You may want to read up on the history of dip switches, jumpers, and plug and play a little bit more.

That’s one of those facts that’s always good to know, but in practice people tend to put one card in one slot with no expanders.

I'm pretty sure the host will run out of IRQs long before 100. Don't most systems only have 16?

You don't really need IRQs for most ISA boards. OPL3/Adlib sound cards don't need one, MIDI doesn't, joystick port doesn't. I saw various I/O boards that don't need IRQ. Soundblaster does, but I don't know for what purpose. Maybe someone here can explain?

Coincidentally I'm currently working on a Sound Blaster driver for some DOS homebrew, so here's a quick rundown of how an SB is programmed and what its resources do:

Base Address: This is the beginning of the IO port range you use to program the card. Commonly it's 0x220, but it can be configured with jumpers (or software on later cards). You add offsets to this address to access different functionality of the card, such as the OPL chip or the Mixer chip.

IRQ: The interrupt number that fires when the sound card finishes playback of an audio chunk. Early cards usually used 7, with later models defaulting to 5. More on this below.

DMA Channel: Which channel of the PC's DMA controller will be supplying audio data to the card. Usually 1 for 8-bit cards, with 5 being used for 16-bit cards.

The general process for playback is as follows:

- Program the DMA controller with the address and size of an audio buffer you'll be using to mix your PCM sound into. This buffer will conventionally be used in 2 halves by the interrupt service routine, a front buffer and backbuffer, similar to what you'd have for double buffered video. The DMA channel should also be put in "auto-init" mode so that the DMA transfer will loop back to the start when it finishes, which allows continuous playback.

- Install an interrupt service routine to write data into the "backbuffer" half of the DMA buffer, which switches back and forth each time an IRQ fires.

- Initialize the DSP chip via its IO port, pick a sample rate (usually around 11 kHz for most DOS games), then issue a continuous playback command. For this part, you tell the sound card that your playback buffer is half the size it actually is, which causes the IRQ to fire once in the middle of the buffer, and again at the end of the buffer before looping back to the start. These halfway IRQs allow you to fill the unused half of the buffer while the other half is playing, for smooth gapless playback with no clicks or pops.
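The half-buffer scheme above is easy to model in plain code. Here's a toy simulation of the idea (illustrative only: the buffer size, sample values, and function names are all made up, and a real driver does this in an actual interrupt service routine with DMA hardware):

```python
# Toy model of auto-init DMA double buffering: the "card" plays one half of a
# looping buffer while the "IRQ handler" refills the half that just finished.

BUFFER_SIZE = 8            # whole DMA buffer, in samples; real buffers are far larger
HALF = BUFFER_SIZE // 2

dma_buffer = [0] * BUFFER_SIZE
next_sample = 0            # stand-in for "mix the next chunk of game audio"

def irq_handler(half_just_finished):
    """Called when the card raises its IRQ at a half-buffer boundary.
    Refill the half that just drained while the other half plays."""
    global next_sample
    start = half_just_finished * HALF
    for i in range(start, start + HALF):
        dma_buffer[i] = next_sample
        next_sample += 1

def play(halves):
    """Simulate the card streaming `halves` half-buffers, firing an IRQ after each."""
    played = []
    for n in range(halves):
        half = n % 2
        played.extend(dma_buffer[half * HALF : half * HALF + HALF])  # "playback"
        irq_handler(half)   # card signals the CPU; CPU refills the drained half
    return played

# Pre-fill both halves before starting playback.
irq_handler(0)
irq_handler(1)
output = play(4)
print(output)  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```

The invariant is the whole point: the card always has a fresh half to stream from, so the sample stream comes out gapless with no underruns.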

This is probably more info than you or anyone actually wanted, but it's a fun topic so I couldn't help myself.


No, I really appreciate the detailed answer. Things were so simple back then.

I thought the OPL chip was addressed via 388h (adlib/fm), not 220h (wave)?


388h is indeed the original adlib base port. Most sound cards that feature an OPL chip will also monitor reads/writes to this port for backward compatibility with older software, but the FM chip is also addressable from a base port offset.

Incidentally, the DSP isn't actually at 220h, it's at 22a/22ch. How the ports are mapped exactly depends on which sound blaster model you have. What's actually at 220h on older cards is the old CMS chips, while the OPL2 is at 228/229h. As CMS chips fell out of use and later cards featured dual OPL2 or an OPL3 chip, 220h-223h were repurposed for FM writes also, which means you can access the OPL chip from a grand total of 3 different IO ports.

Interestingly, cards with dual OPL2 chips would often be designed such that writes to 388h would actually go to both FM chips instead of just one, so that you still get proper mono sound, otherwise it would be panned hard left.
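As a rough cheat sheet, the port layout described above can be written out like this (assuming a later-model card at the common 0x220 base; the exact mapping varies by Sound Blaster model, so treat it as approximate):

```python
SB_BASE = 0x220  # common jumper default; 0x240 is the usual alternative

# Approximate IO map for a later-model Sound Blaster. On older cards,
# base+0x0..0x3 held the CMS chips and the OPL2 sat at base+0x8/0x9.
sb_ports = {
    SB_BASE + 0x0: "FM (OPL3), also mirrored at 0x388 for AdLib compatibility",
    SB_BASE + 0x4: "Mixer address register",
    SB_BASE + 0x5: "Mixer data register",
    SB_BASE + 0x6: "DSP reset",
    SB_BASE + 0xA: "DSP read data",
    SB_BASE + 0xC: "DSP write command/data",
    SB_BASE + 0xE: "DSP read status (reading it also acknowledges the 8-bit IRQ)",
}

# The DSP really does live at offsets from the base, not at the base port itself.
print(hex(SB_BASE + 0xC))  # → 0x22c
```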


Sound Blasters and compatible cards used IRQ lines because back in the bad old days CPUs were slow, bandwidth was tiny, and buffers were minuscule.

To get responsive/real time audio the card needs to signal to the CPU, not the other way around, and at the time IRQs were the way to do that on ISA busses.

I would imagine that ISA cards that didn't need IRQs either required CPU polling or DMA.


I imagined that the game / audio driver would just send data to the card at regular intervals and that's it. I realize now that the card uses its own clock, which can drift relative to the system timer, so this method would have a buffer underrun/overrun problem.

This is what the word "bus" used to mean on a hardware level: a backplane of connections to which multiple peripherals could be attached. These days a bus is a LAN of point-to-point serial connections which, it turns out, is much more viable at the high communication rates demanded of modern hardware.

How were different devices addressed? I assume it’s a master and slave system, but even then were address collisions automatically resolved?

In original ISA none of this is managed; the owner of the PC is expected to manually configure both hardware and software appropriately.

So e.g. [with the PC turned off!] you move a tiny jumper (basically just a piece of conductive metal with a plastic housing) to the "IRQ 8" position and you pick "IRQ 8" in some menu or set it in an environment variable in DOS or whatever.

By the time PCI is starting to appear there is some level of "Plug and Play ISA", but it's fairly crazy because of course all the old stuff still exists, whereas for PCI the bus always had this intelligence baked in, so nothing just assumes it can pick its own resources.


It can't be IRQ 8 on an ISA board. That's the IRQ for the RTC.

That's correct. I considered whether I should dig out a manual and decided that I should do the exact opposite and pick a value I know won't exist for ISA.

To avoid collisions, you moved physical jumpers on cards that might conflict, to select among a small range of addresses, I/O ports and/or IRQ numbers.

For example if you had two identical network cards, or SCSI disk controllers, you would need to physically reconfigure one of them away from its defaults.

There were only a small number of configurations available on each type of device, and some weren't configurable at all, so you could still get irreconcilable conflicts.

The Linux kernel of the time was full of hard-coded "probe" addresses and I/O ports, probe sequences to see if there was a device there, and IRQ auto-detection routines that triggered an interrupt to find out which IRQ line was asserted. Some of the probes had to be run in a particular order, so that probes for one type of device wouldn't break another type.

Later came ISAPnP, meaning Plug'n'Play for ISA, which allowed the operating system to use a clever protocol to talk simultaneously over ISA with all devices on the bus that support it, identify and select them individually, query what they required, and configure their addresses, I/O ports and IRQs to avoid overlap, or permit overlap where it was ok for IRQs. After the operating system was done configuring them, they operated as if they were configured physically like the older ISA cards. If necessary this could be implemented cheaply by adding an ISAPnP module to an existing ISA card design.

Eventually ISA was superseded by PCI, which had better, well-defined enumeration and configuration methods from the start which all devices had to implement. PCI also allowed MMIO and IO base addresses to be set anywhere (32-bit), not just the small number of options (or single option) ISA cards usually had, so there were no more address conflicts. The operating system still had to find the PCI bus registers itself, but after that, probing was simpler and more reliable than with ISA.
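As a concrete example of that well-defined configuration access: on x86, the legacy PCI "configuration mechanism #1" uses two fixed IO ports, 0xCF8 (address) and 0xCFC (data), with the address register packed as below. This is a sketch of the encoding only; actually performing the port IO requires ring-0 access:

```python
def pci_config_address(bus, device, function, register):
    """Build the dword written to IO port 0xCF8 to select a PCI config
    register (legacy configuration mechanism #1). `register` is the byte
    offset into config space and must be dword-aligned."""
    assert bus < 256 and device < 32 and function < 8 and register % 4 == 0
    return (1 << 31) | (bus << 16) | (device << 11) | (function << 8) | register

# Select the vendor/device ID register (offset 0) of bus 0, device 3, function 0.
print(hex(pci_config_address(0, 3, 0, 0)))  # → 0x80001800
```

Bit 31 is the enable bit; everything below it is just the geographic address of the function, which is why PCI devices never need jumpers.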

USB also arrived around the same time, and also had well-defined enumeration and configuration methods. Many simpler ISA devices were replaced by equivalent USB devices. Although USB was (and is) complex to implement at a low level, the complexity was handled very well by low-cost, generic USB modules on the device side, so it was easy for device manufacturers to use.


> How were different devices addressed?

There's a shared address bus. Each device responds to the I/O and/or memory addresses it's configured for. Configuration can be static, via jumpers, or via ISAPnP.

> I assume it’s a master and slave system, but even then were address collisions automatically resolved?

No. If two devices want to use the same address space, you'll have problems. isapnp might help you out, but it was added in the second decade of ISA, so ... lots of things don't use it.


All cards receive the same signals (address lines, data lines, IRQ lines and everything else). They just ignore all data for addresses (on address lines) that are not theirs.

Each card must have a unique I/O address, sometimes more than one and sometimes an IRQ and DMA too. For example, Soundblaster cards had an OPL3/Adlib/FM synth chip at address 388h (it's fixed, you can't have two in the same system, or maybe you can and they would play the same tune, I don't know...), the main chip (wave playback and recording) at 220h or 240h configurable by a jumper, IRQ 2, 5, 7 or 9 (two jumpers), a MIDI port at 300h or 330h (another jumper), two DMA channels (another 4 jumpers), and an IDE port (2 more jumpers).

When you install the card, you set those jumpers according to the manual and according to what other cards you have installed and their addresses, so that there are no conflicts. Then you add "SET BLASTER=A220 I5 D1 H7 T6 P330" to AUTOEXEC.BAT so that games know which ports and resources to use to reach the card.
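That BLASTER string has a simple fixed format (A and P are hex port addresses, I/D/H/T are decimal), so a game's setup code can pull it apart easily. A sketch for illustration (real DOS games did this in C or assembly, and the H and T fields don't exist for the earliest cards):

```python
def parse_blaster(value):
    """Parse a DOS-style BLASTER string, e.g. 'A220 I5 D1 H7 T6 P330'.
    A (base address) and P (MIDI port) are hex; I (IRQ), D (8-bit DMA),
    H (16-bit DMA) and T (card type) are decimal."""
    hex_fields = {"A", "P"}
    settings = {}
    for token in value.split():
        key, rest = token[0].upper(), token[1:]
        settings[key] = int(rest, 16 if key in hex_fields else 10)
    return settings

cfg = parse_blaster("A220 I5 D1 H7 T6 P330")
print(cfg)  # → {'A': 544, 'I': 5, 'D': 1, 'H': 7, 'T': 6, 'P': 816}
```

Note 544 and 816 are just 0x220 and 0x330 in decimal; the game would use them directly as port numbers.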

Then, PnP was invented, because changing those jumpers and avoiding conflicts was very hard, as you can imagine.

On a PnP system, you would enter the BIOS setup and reserve the IRQs of any non-PnP cards you may have, so that they are not auto-assigned to PnP cards. I/O addresses are managed automatically.

The ISA PnP initialization process is actually very interesting:

All the ISA PnP cards power up in a disabled state. They all respond only to a specific address reserved for PnP initialization. Each card has a unique serial number written at factory. The BIOS scans for serial numbers, not by brute force (that would take too long), but bit by bit.

  Let's say there are 3 cards:
  A: 010...
  B: 011...
  C: 100...

  BIOS sends an "init command" to the reserved initialization address. All cards enter the selection process.
  BIOS asks for bit 0 of the serial number. Cards A and B pull down the line for bit 0. ISA lines are normally pulled up by the chipset when receiving data from the cards. The BIOS remembers "0". Card C notices that the line is down, in conflict with its own bit (it has "1"), and disables itself until the next init command.
  BIOS asks for bit 1. No cards pull down the line, both A and B have "1". BIOS adds a "1" to the serial number (now "01").
  BIOS asks for bit 2. Card A pulls down the line. BIOS remembers "010". Card B is in conflict and disables itself.
  Continue until the last bit. Only card A remains active. For each bit, it either pulls down the line and the BIOS adds a "0", or it gives no response and the BIOS adds a "1". There can't be any more conflicts to disable it, since card A is the only one remaining. When the BIOS reaches the last bit, only one card can remain, no matter how many were initially active.
  The BIOS then asks for config requirements, and the only remaining active card answers. BIOS configures it with bus addresses, IRQs, DMAs, etc.

  BIOS sends the "init command" again. Card A now has specific addresses configured and will ignore the reserved init address. Only cards B and C enter the selection process.
  BIOS asks for bit 0. Card B pulls down the line. Card C is in conflict and disables itself. Card B remains the only one active and will be configured.

  Repeat the process and configure remaining card C.
  At the end, when no more cards remain, the serial number scan returns "1111111..." - no cards pull down any lines. That means the scan is finished.
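The lose-and-drop-out logic described above can be simulated directly. A toy sketch (real ISAPnP serial identifiers are 72 bits and the wire protocol is fussier, reading each bit as a pair of IO port reads, but the wired-AND elimination idea is the same):

```python
def isolate(serials):
    """Simulate the isolation rounds described above. `serials` are bit
    strings; returns cards in the order the BIOS would configure them.
    Cards with a 0 bit pull the shared line low; cards asserting a 1 see
    the conflict and drop out until the next init command."""
    configured = []
    remaining = list(serials)
    while remaining:
        active = list(remaining)      # init command: all unconfigured cards join
        for bit in range(len(active[0])):
            if any(card[bit] == "0" for card in active):
                # Line is pulled low: cards holding a 1 withdraw; BIOS reads "0".
                active = [card for card in active if card[bit] == "0"]
            # Else no card pulls the line down and the BIOS reads "1".
        winner = active[0]            # exactly one card survives every bit
        configured.append(winner)
        remaining.remove(winner)      # configured cards ignore further inits
    return configured

# The three example cards from the walkthrough, isolated in order A, B, C:
print(isolate(["010", "011", "100"]))  # → ['010', '011', '100']
```

A side effect of the wiring is that cards always get isolated in ascending serial-number order, which is why the final all-ones read unambiguously means "no cards left".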

They used 16-bit addresses from 0x0 to 0xffff.

The one we’re trying to avoid the most in my household is sucralose. Genotoxicity and upregulating inflammation and oxidative stress are bad things. Accumulating unchanged in the environment and resisting biodegradation is a bad thing. https://pmc.ncbi.nlm.nih.gov/articles/PMC12251854/

A lot of the “zero” soft drinks are sweetened differently from the “diet” ones. There’s often a mix of different sweeteners so you don’t get too much of any one aftertaste.



Indeed. It turns out that “MSG headaches” are just high-sodium headaches, whether through dehydration, unbalanced electrolytes, elevated blood pressure, or whatever else it is about higher-than-normal sodium levels that causes headaches. The same headache could be caused by salt. MSG actually lets recipes use less of other flavor ingredients, including salt. It’s also often found in dishes that still contain relatively massive amounts of salt.

So a little MSG to get your taste buds extra sensitive to other flavors is a net good. Just don’t eat too much sodium altogether, balance your electrolytes, and stay hydrated.


I have a family member who has discovered through gradual process of elimination that she gets migraines from MSG, aspartame and yeast extract. "just sodium headaches" doesn't really apply to her case; simply chewing a piece of gum that has aspartame, or eating a piece of meat cooked with MSG in her salad is enough to trigger them. I agree in the general sense with your comment and the article that there's no widespread danger to public health from these additives, but it doesn't mean there aren't still individuals whose health gets messed up (including legitimate headache or migraine symptoms) by these additives.

> discovered through gradual process of elimination that she gets migraines from MSG

This is definitely not true. There is no biological pathway that can do this. MSG is nearly identical to the glutamic acid in other foods. If it were true they'd be unable to tolerate parmesan cheese, soy sauce, aged meats, tomatoes, mushrooms, and seaweed.


Glutamate is considered a migraine trigger, though. Many people do avoid or limit those foods for that reason. Thankfully it doesn’t appear to be a trigger for me, because I love all those things.

There is some controversy about dietary glutamate being directly responsible for migraine. It’s common in the brain already. It’s only allowed selectively through the blood-brain barrier. However it could trigger other types of headache, and those can trigger migraines. Also, apparently more of it is formed in the brain when there are high levels of lysine and ornithine in the body. Many of the foods with high levels of glutamate also have high levels of those aminos.

High levels or low levels of sodium in the body can also be a migraine trigger. MSG is lower in sodium than table salt, but it is additional sodium. Many of the issues blamed on it though are after eating foods that contain MSG and a high amount of salt as well. That’s also true of many of the glutamate-containing foods for that matter (gravies, miso, soy sauce, aged meats).

Doctors recommend eliminating one single ingredient at a time to find your triggers. However, I’m sure many people don’t control for salt when eliminating MSG or natural food glutamate.


Elevated brain glutamate levels are associated with migraines, but there’s no solid evidence that dietary glutamate is a trigger for migraines.

The number of people avoiding it is not evidence of anything other than public perception.

Elimination diets are also super imprecise.


I agree on all your points. If someone suffers from migraines, though, it’s worth trying to figure out plausible triggers even if the evidence isn’t really solid.

It’s important not to conflate ingredients when doing an elimination diet, though. Separating restaurants or prepackaged foods at home that use MSG from those that use a lot of salt (or preservatives, or artificial dyes, or “natural flavors”, or any number of other things) is pretty difficult. I’ve seen several instances over the years of people assuming a restaurant used MSG based on getting a migraine, even when that restaurant doesn’t use MSG in any of their dishes. I’m not even a doctor, just an interested person with migraines. I’m sure a nutritionist or headache specialist could tell us stories.


There's a pretty good finding here[1] about elimination diets being inappropriate for most patients. Basically, without a diagnosis of something like celiac disease or an allergy, you have a high risk of misidentifying foods as causes because they co-occur with non-food triggers. The literature just seems super weak for most alleged dietary triggers.

[1]https://pmc.ncbi.nlm.nih.gov/articles/PMC12609589/#sec8-nutr...


> This is definitely not true. There is no biological pathway that can do this

Nevertheless, it continues to give her migraines even in small portions where other foods don't. I don't doubt it could be some byproduct from the process of MSG salt's synthesis or cooking with it rather than the actual glutamic acid, or some allergy as others have suggested.

I wouldn't be so strong as to categorically say that MSG can't cause migraines in any of the human race as you so claim though. There's so much we don't know about human biological mechanisms in niche cases; even water can cause allergic reactions in certain individuals (see Aquagenic Urticaria). What is true generally is not always true specifically when it comes to human health.


I'm curious: have you done a (single or double) blind test where you prepare dishes (selected at random) with or without MSG/aspartame/yeast extract and record the effects?

To be clear: not saying you should, just wondering how you came the conclusion that those ingredients are the trigger.


Why are you arguing when the internet expert already stated that is impossible.

MSG is the salt form, where the glutamate is bound to a sodium ion. In food, my understanding is that MSG will split into two things: a sodium ion and a glutamate ion. The difference between adding MSG to food and food already being high in glutamate would be the salt content.

I don't recommend telling people their subjective experience isn't true- you don't know for sure that they don't actually get migraines from MSG. I think it's fine to tell people that often their subjective experiences can be colored by prior knowledge, and people often ascribe causes to unrelated factors. (My personal belief is that most people who say they got a headache from MSG experienced a headache, but consuming glutamate was not the cause).


That's very interesting, because cheese, paneer and cured meats do trigger my wife's migraines. I had not considered that richness in glutamic acid is a common factor.

The personal, anecdotal relation seems strong on the cheese and paneer component. Even if she ate something without being aware that it contained either of those, it would trigger a migraine, though sometimes not immediately; it seems to take a few to several hours.

Will have to try a blind testing with MSG.


Oh she/you should check out mast cell activation syndrome (mcas). Basically different foods increase histamine levels in the body or prevent its degradation. Old proteins and fermented foods are particularly problematic because microbes break down the protein and release histamine precursors.

Thanks for the suggestion.

Well, my dad got migraines from everything° on that list bar tomatoes - though he did from dried tomatoes, so does that count as everything on the list? I don't know the biological pathway, but it was neither self-diagnosed, self-derived, nor made from woo; he visited several real-MD neurologists before someone identified the chemical(s) at fault, and gave him a list of foods not to eat.

°In fact it was all cheeses, not just parmesan; the more aged the worse. And also chocolate, and olives. Basically anything aged or fermented. I don't know how that lines up with MSG's chemistry, but he was careful with MSG, though nothing like as avoidant as he was with soy sauce and cheese.


For some people, migraines can be triggered by things like light or certain smells. It's not at all impossible that a certain taste can also trigger them.

Migraines are complicated enough that I'd buy a psychosomatic trigger, maybe?

Migraines can possibly be triggered by cause-and-effect chains several intermediate causes long. That could help explain, for example, why certain things are triggers for some migraine patients and not others.

Aspartame is also a trigger, but the fact that one person has multiple triggers doesn’t mean they are related at all.

Now you’re right that MSG is more than sodium. Sodium can be a headache trigger, including migraines. Glutamate is also a migraine trigger and a fairly common one. It doesn’t happen to be one for me. However, it is a neurotransmitter that is involved in pain signaling. It’s understandable how it could easily trigger a migraine or make the pain worse.

Some triggers for some people actually help other people with migraines, like caffeine. Migraines are such an incredibly complex topic that there are medical specialists for them. Mine can be fairly debilitating, but are rare enough I don’t qualify for most prescriptions. So I definitely understand how trigger management and symptom management are a big deal.


For me aspartame only just recently started giving me headaches, and it happens every time now, but not MSG or salt. No idea why.

Sounds like an allergy.

I definitely wouldn't be surprised if that were the case

Or psychosomatic.

It's possible she believes that those items all trigger her migraines therefore her body gives her a migraine when she believes she's had one of her triggers.

A big tell would be her getting a migraine and blaming it on "hidden MSG" in a food item that doesn't have it.

Or her not getting a migraine from foods that naturally contain glutamate when it's never pointed out. Like tomatoes.


It's funny... reading this thread, I'm reminded of a friend of mine who indeed gets migraines from tomatoes. That was actually what she figured out first; the MSG connection came later.

This effect is very obvious on me. I consistently get headaches when my sodium intake is too high. I don’t even use MSG in my own cooking but occasionally I add too much salt.

Might consider a mix of electrolytes instead of just salt. I usually keep a container mixed with "snake juice" ratios for electrolytes and use that to season with instead of salt alone. I'll also sometimes put a pinch in my water, not nearly snake juice amounts, when I get a bit off and start getting leg cramps.

Don't all sodium compounds, like salt and baking soda, increase taste perception?

I drank sodas with aspartame just fine for many decades. Then one day they suddenly started giving me migraines any time I had one, so I had to quit cold turkey. No other amount of caffeine, regular sodas, salty foods, MSG-laden meals etc. seem to trigger it though, and I have no idea why.

It triggers a headache for me as well. Happens whether I'm previously aware of its presence in the product or not. I'm fine accepting that it's a generally safe chemical that has been thoroughly studied and I just have a quirk, but I also don't want my quirk to be dismissed because studies don't validate it.

The headaches are replicable and severe enough that it's turned me off of all artificial sweeteners, although I doubt they all have the same effect. I don't want to risk it.


> So a little MSG to get your taste buds extra sensitive to other flavors is a net good.

Salt and MSG are sometimes said to strengthen existing flavors, but I'm pretty sure they mainly just contribute their own unique taste: salty and umami.

(There could of course theoretically be some interactions with other taste receptors, similar to how sweet things make things taste much less bitter, e.g. cocoa, but that is a relatively specific effect and not one that acts as a general flavor enhancer.)


If you lick plain MSG, it tastes bitter. Add it to something very sweet and it just tastes bizarre. Sprinkle it on fried chicken and it tastes like you just dumped chicken gravy on it and pumped up the taste. It really does mainly amplify flavors.

And while MSG tastes very wrong in sweets, sweets generally always taste better with a bit of salt. Salt is its own flavor and a flavor amplifier.


Plain MSG absolutely does not taste bitter. I just tried some (again) to confirm, it's not salty & not bitter. Just a strong flavor of its own.

Yeah it just tastes like straight up "savory."

Almost tastes like fat more than anything.


> Almost tastes like fat more than anything.

Probably because a lot of fat sources have high levels of glutamates in them. You're not tasting the fat, per se, but the other stuff that isn't fat. It's why beef tallow is so much tastier than neutral oil. Same level of fat.


Yeah, I also just tried it, it doesn't taste bitter at all. It tastes like clear soup broth, which usually contains a lot of MSG.

You left out shipping, storing, and logistics.

Often I can get a used paperback or sometimes even a used hardcover book cheaper than the ebook.

Wx isn’t bad either. https://wxwidgets.org/

You don’t get an app that looks the same across platforms. You do get apps that look like they belong on your platform, even though the code is cross-platform. It uses the native toolkit no matter where you run it across Windows, GTK, Qt, Motif, macOS/Carbon, macOS/Cocoa, and X11 with generic widgets.

Older platforms are also supported, like OS/2, Irix, and OSF/1.

https://wiki.wxwidgets.org/Supported_Platforms

It’s a C++ project, but it has bindings for most of the languages you’d use to build an application. Ada? Go? Delphi? Ruby? Python? Rust? Yes, and more. https://wiki.wxwidgets.org/Bindings


The problem is, most of these bindings are out of date: Delphi from 2012, Basic from 2002, D from 2016. wxRuby is a dead link. wxAda was already dead in 2009, judging by the discussion I can google.

So, if you use wxWidgets, you probably have to use either the C++ or the Python version; the others are unlikely to be supported.


wxRuby has been resurrected as wxRuby3, see https://mcorino.github.io/wxRuby3/

Among actively developed bindings, there is also wxRust at https://crates.io/crates/wxdragon


> [Wx] uses the native toolkit no matter where you run it

This is false. https://news.ycombinator.com/item?id=24250968 https://news.ycombinator.com/item?id=24259040 It was false in 2020 and it is still false today (I just checked).

I wish the Wx proponents would stop saying these things. Who exactly are you trying to fool? Do you have no concept of reputational damage? What good comes from a claim that is so easily disproven by just installing a Wx application and looking?


Do you understand the difference between a toolkit API and a graphical widget?

I’m not trying to fool anyone. I'm not affiliated with the project. I’m just aware of it and have used it a few times. You, on the other hand, have called me a liar and a fraud because I repeated exactly what the project docs state and which your two links do nothing to contradict. In fact, you linked to yourself being corrected by the actual maintainer of the project. Did you read anything he wrote?


> Do you understand the difference between a toolkit API and a graphical widget?

I think I do. I have taken a few minutes on the Web to compare that what I had in mind is correct. What was the point of asking this question? Was it to trap me in a gotcha, or paint me as clueless, or what?

> have called me a liar and a fraud because I repeated exactly what the project docs state

Good, you realise you are taking on the claims made by Wx on paper. However, there's more to the world. To get the full picture, you have to also engage with what I have listed. The docs say one thing, the reality shown in the screenshots say another. There is a contradiction. It remains unresolved, not for lack of trying on my part.

> your two links do nothing to contradict

You are not further allowed by me to invalidate what I was writing about by simply disregarding the evidence. Engage with the points I was making. The differences in look and feel between Wx and native are plain for everyone to see and verify. So, what now? Who is right?

> Did you read anything he wrote?

Yes. Examine this:

his claim> OTOH all the standard UI elements (buttons, checkboxes, text controls, date pickers, ...) are native

my counter-evidence> Well, let's verify that… https://i.imgur.com/uHfjoUs.png No, they're not.

his deflection> Sorry, I don't know what is this supposed to prove

So instead of admitting that there is a contradiction, he just pretends to not understand it.

Also examine this:

> look good

> look good

> looks fine

> look good

I never mentioned anything about looking good, this is a distraction designed to deflect from the central point I was making. As I wrote before, the central point made by me remains completely unaddressed.

Alas, I cannot deal with those crazy-making techniques, his behaviour measured by outcome is indistinguishable from the mentally ill. With the help and advice from a friend, I came to the conclusion that it was not safe for me to respond, so I then decided not to.


If you only need 100 Mbps the 3Com 3c905 series of PCI Ethernet cards are still some of the most reliable hardware you can put into your industrial PC that still has PCI slots. ISDN and ax25 are still really useful if you have low-bandwidth but low-latency needs like sensor data.

Now those are niche use cases, but they do exist. However, what’s wrong with removing insecure code for these niche cases? Either someone will step up to actually maintain it, or newer versions of the kernel will be leaner and have less historical cruft.


Well, if it degrades to 90% after three years, and let’s extrapolate to 81% after another two to three years, then a battery swap in 5 minutes might be reasonable to do instead of charging once every three to five years or so. I guess it depends on the quality and retained capacity on the batteries being swapped in.
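The extrapolation is just compounding the same fade rate over a second period; a one-line sanity check:

```python
capacity = 1.0
for _ in range(2):             # two ~3-year periods of ~10% fade each
    capacity *= 0.90
print(round(capacity, 2))      # → 0.81
```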

The vast majority of charging is done at home, though. Five-minute-charging/swapping is basically a gimmick to show off to your friends, and only really sees (questionable) use during that once-a-year road trip.

The main value in these technologies is to shut up the "But sometimes I want to drive for 20 hours without being forced to take even a single 30-minute break!" pseudo-argument as to why an EV is "impossible" for their lifestyle. Same with the Lucid Air and its 1000km range: basically zero people truly need it, but it needs to exist in order to drag the last few holdouts into the future.


When my road trip is in negative temperatures, I appreciate not having to be in the cold for too long. I think the bigger adoption issue is thoughts of scaling the charging stations. If there’s a line of cars at a liquid fuel pump, one can still get fuel in twenty or thirty minutes. If there’s a line of four cars at every charger and every car takes 15 minutes to charge on average, that’s an hour before you can start.

> Well, if it degrades to 90% after three years, and let’s extrapolate to 81% after another two to three years,

That sounds like a phone battery, not an EV battery. Modern EVs should last 15-20 years before seeing significant degradation.


That was assuming, based on their recharge count, daily 10%-to-98% rapid charging. You'd only see that in a vehicle of this range if it's being used as a courier vehicle or a moving billboard. Pretty much the actual worst cases.

> Well, if it degrades to 90% after three years, and let’s extrapolate to 81% after another two to three years, then a battery swap in 5 minutes might be reasonable

eh? are you saying that something that is done once every 5 years has to be done inside 5 minutes? I strongly disagree.

> charging once every three to five years or so

Um, that's not how charging works at all.


That’s not how sentences or quotations work at all.
