Intel undercut a standards body to give us the PCI connector (ieee.org)
146 points by jnord on May 18, 2024 | hide | past | favorite | 88 comments


VLB was dumped because it was pretty shit. It wasn’t going to survive past the 486 because it basically assumed, absent any other bridging logic, that it was connecting straight into the memory bus of a 486. You couldn’t drive more than a couple of cards because electrically the situation was a dog’s breakfast of a hack job and because the hack job was so bad if you put a hard drive controller on it you would be risking losing data and trashing your disk.


Pushing such sentiments up to management is a great personal career risk when they have harmful impacts on other partner companies and/or internal projects.

Intel's corporate culture specifically failed to do this with the Itanium, where the technical failings were ignored.

I guess the industry is lucky that Intel Architecture Labs was permitted greater freedom.


I have to believe VESA tacked VLB onto ISA to buy time.

It was early in the IBM clone phase. Standardeers seemed to be catching up to the rate of progress and to how much time+resources were needed to craft a next gen bus.


don't remember those criticisms of Be's CPU port, which were mostly what you describe (well, didn't see much of the CPU port in any way now that i think about it)


Are you thinking of the "GeekPort"? That wasn't connected directly to the CPU - it was a bunch of digital (and a couple of analog) IOs, kind of like a cross between the PC parallel port and game port.

The Amiga had a "CPU-port", but that was a much older and slower system.


I’m kind of surprised, since they talk about all the other roots of modern protocols, that they don’t mention that USB was heavily influenced by the Atari 8-bit SIO subsystem [1] according to its designer.

It seems to me there was quite a bit of cool ahead-of-its-time technology in the Atari 8-bit range that went unnoticed because “it was just a games machine”. SIO, with its universal bus interface and universal driver format, tied in with CIO (centralized input/output), which used the same driver/handler system for the keyboard and screen devices…

1: https://en.wikipedia.org/wiki/Atari_SIO


I always loved the Commodore bus...simple, daisy chainable, standard connector, etc. I remember being quite disappointed by the initial PC hard drive controller cards..


That's really interesting! I never owned an Atari 8-bit but reading about them now they sure seem like they were years ahead of their competitors in some important ways.


Standards are meant to be broken, especially when they’re based on outdated assumptions.

I’m glad that Tesla is trying to do that with the 48V auxiliary (non-traction) battery. 48V is just below the Low Voltage threshold for human safety (NEC, NFPA). It’s also 4X the 12V std and still a number cleanly divisible by 3V.

Sometimes first principles design outshines industry standards especially when there are newer reasons.
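A rough sketch of why the voltage bump matters (illustrative numbers only, not Tesla's actual figures): for the same load power, quadrupling the bus voltage quarters the current, so the I²R loss in the wiring harness drops by a factor of 16.

```python
# Illustrative only: same load power served at 12 V vs 48 V.
# The wire resistance value is made up for the example.

def harness_loss(power_w: float, volts: float, wire_ohms: float) -> float:
    """I^2 * R loss in the wiring for a given load power and bus voltage."""
    current = power_w / volts
    return current ** 2 * wire_ohms

loss_12v = harness_loss(480, 12, 0.01)   # 40 A through the harness -> 16 W lost
loss_48v = harness_loss(480, 48, 0.01)   # 10 A through the harness -> 1 W lost

print(loss_12v / loss_48v)  # 16.0: 4x the voltage, 1/16th the resistive loss
```

This is also why the same wattage can be delivered over much thinner (cheaper, lighter) copper at 48V.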


Tesla didn't spearhead that though; in fact they were very late. It was initiated by German manufacturers in the early 2010s.

https://en.m.wikipedia.org/wiki/48-volt_electrical_system


Audi has sold fewer than 10k cars total with a 48V system since 2018, which isn't even a double-digit percentage of their total US sales. MB does better, with a few hundred thousand since 2018. Tesla will be shipping a couple million per year. I think it's still fair to say Tesla is spearheading the actual production of these cars, whenever it starts to happen for them.


Has Tesla moved to 48V in any platform beyond the Cybertruck? Last I read, they have delivered ~4000 of them, and I don't know that they've updated the manufacturing of the existing lines to 48V yet.


Tesla has said they're moving to 48v for "all future vehicles". So my assumption is whenever their existing models get a big change-up, 48v will be part of that change-up. I doubt they're going to switch e.g. Model 3 to 48v starting next year or anything unless there's some compelling cost cutting reason to do so.

They're also using it for their humanoid robot.


> Standards are meant to be broken, especially when they’re based on outdated assumptions.

A standard I'd love to be modernized would be wider availability of 240 volt outlets in North America. I'd love to have the option of a 240 V electric kettle on my countertop or a 240 V toaster/pizza oven.


Think you'd go with NEMA 240V sockets, or prefer a different standard of outlet?


The NEMA 6-20 is what I'd want.

I actually think the US should start requiring those. Start with having 1 or 2 in the kitchen, and additionally 1 within 100 feet of any spot in the house.

Once they are more common we'll start to see appliances using them, at first mainly for cooking and vacuum cleaners.


I'm in favor of having common ~240v sockets in the US, for all of the perhaps-obvious reasons including boiling water more-quickly for coffee or tea and running a portable induction hob that doesn't suck.

But if the NEMA 5-15 is a not-great standard, then the NEMA 6-20 is equally flawed. They've both got the same set of issues.

The only good reason for 6-20 (or 6-15) in American kitchens is that it is an "approved" connector. It fits in, today, with what we have here in the States in terms of broadly-accepted wiring methods, safety testing, and formal training. That is indeed a good reason, and I do not wish to diminish that reason.

If 6-20 were already commonly-used in residential environments -- anywhere -- then it'd be easy to say that it would be a nice thing to have in every kitchen. But it has neither momentum nor inertia, because it mostly just doesn't exist.

So if we're going to set our sights on what is ostensibly going to be considered a "new" plug, then: Why not aim for one that doesn't inherently suck?


Lots of A/Cs and high power equipment use NEMA 6, and there's wide availability of parts for it. For example, I have several in my house. EVs have connectors for it as well.

The thing is people keep complaining about how bad NEMA 5 and 6 are - and yet, somehow, the US doesn't have a lot of electrical injuries. So it seems the theory doesn't match the reality. It's just not that big of a problem.

NEMA 5/6 sockets have a lot going for them: They are physically relatively small, the flat blade has a much better contact surface relative to round plugs, they are also much cheaper to make (both the plug and the socket).

I could certainly design something better - I would put all 3 prongs in a line (with un-even spacing to ensure polarity) to make the plug even smaller, and put an insulating sheath on the top of the prong designed exactly right so that when it makes electrical contact the metal part of the prong is no longer exposed.

Even better: You could make an outlet that is simultaneously 120V with backward compatibility and 240V! (And I just realized that polarity is pointless: 240V has a hot on both sides.) I'm starting to warm to your idea of a new outlet. Have three slots in a row.

> Why not aim for one that doesn't inherently suck?

Out of curiosity, other than the things I designed against, what other issues do you have with it?


As a counter-anecdote, I've never had a 6-15 or 6-20 in any home I've ever lived in. I'm aware of its existence and utility, and I even have a duplex Leviton 6-20R in my Amazon shopping cart that I will purchase and install in my kitchen if life ever allows me to be a homeowner again, but I just have never encountered them much in the wild. (Except once, over 20 years ago: The wall-mounted AC in a hotel room I stayed at in Florida used NEMA 6. [EV owners love this one simple trick!])

I have issues with typical US socket arrangements. They begin to disconnect too easily when things move around, as small appliances in a kitchen tend to do. They still work when partially-disconnected, but they have exposed live elements in this state.

This has probably happened to me hundreds of times so far. And while it hasn't harmed me yet, and I'm not particularly afraid of 125VAC, I'm always aware of that potential harm when I encounter the condition. I have also found (and corrected) the mythological problem where a metal thing has slid down a wall and onto a partially-inserted plug.

---

You know, as I wrote what I wrote before and I write this now, I've reviewed many of the world's plugs. Some seem kind of OK, some are huge for no apparent reason (adding both cost and bulk for little benefit), and some seem like a series of trade-offs that are based in legacy.

And the US certainly has its own legacies, too:

Like you alluded to, multi-voltage 4-wire outlets, offering hot, hot, neutral, and ground. In a fairy tale world where these can be compatible with bog-standard NEMA 5-15P appliances, that becomes a very, very easy thing to sell. It becomes so easy to sell that the ease of use starts to become more important than some of the other functions I've considered. They can even be configured so that each of the 120V portions operates on a separate leg. (This ruins global compatibility, and will ultimately require revision of the NEC since split-phase duplex outlets are presently only allowed in kitchens, but boy would that be easy to sell. It also requires 4 wires instead of 3.)

Is there enough room to add a contact (or two) inside of a 5-15R, using modern manufacturing methods? Maybe a square hole near the middle somewhere with positive engagement?


> I've reviewed many of the world's plugs

https://www.plugsocketmuseum.nl/

> They begin to disconnect too easily when things move around

Replace your outlets, the old ones did this, but the newer ones, especially commercial grade (which is only $2 more), hold much better and they won't go loose.

> Is there enough room to add a contact (or two) inside of a 5-15R

I was going to have it like:

    -.
  | | |
   .-
You can plug in a 5-15 on the left, or upside down on the right. And you can plug in a new 240V outlet into all 3 vertical lines, plus the ground as a horizontal line. (I can't show it on the ASCII art but I would have the ground much closer to the powered plugs relative to the round ground pin, to make the plug smaller.)

The cord for the new 240V plug would exit from the left or right, instead of straight out or from the bottom like current cords.

This would make the plug long and narrow, and inline with the cord, to make it smaller.

I would have this outlet always be 20A, there's no reason to have this 15A stuff we have now - that's just legacy. But if we really needed to, the center slot could turn into a + shape to enable 15 vs 20.


For NEMA 5-15: Better outlets do help, but do not eliminate the problem: A tighter friction fit is still a friction fit, and given enough wiggling they still tend to work themselves loose -- just on a longer timescale.

A connector with a positive snap-action would seem better: It is either snapped tightly into place, where it is connected and it works. Or it is unsnapped, where it is electrically disconnected and free to fall out completely. USB C seems to get this part right, as an example; good versions of the C7 ("mickey mouse") power connector seen on many laptops and boomboxes of yore also seem to get at least part of it right.

Anyhow, I don't think it can happen with a new combined outlet that preserves 5-15P compatibility, so...

---

Your combined outlet idea is neat, but it increases the minimum size by 50% and that increase may require new boxes, and definitely requires new coverplates.

I'm thinking something more like this arrangement:

   | o |
       
     U
Key: | is existing 5-15-style blade opening, U is existing 5-15 ground, and o is the new terminal on the other hot leg of split-phase.

The new terminal can be a blade, but a square or rectangular cross-section seems better: It's still flat for ease of making good contact, but it's also strong in more than one direction. Having a shape for that hole that is incompatible with existing blades also helps prevent Curious Little Jimmy from bodging a 5-15P into the wrong holes and torching appliances by applying twice their expected voltage.

It preserves 5-15P compatibility, does not necessarily increase the space used inside of a box, and fits behind the existing (and vast) array of cover plates. Grounding remains optional, and polarization remains optional for non-grounded things (and both are determined by the plug end, not the outlet end, just as today).

The ground terminals always face the "outside", which is perhaps safer, and the inverted outlet

Still to sort:

Tamper resistance mechanism. The NEC requires that mechanism in many cases, these days, and universal application requires NEC compliance.

Also compatibility with existing wiring. One advantage of using a single-purpose outlet like 6-20R or even L6-20R is that existing wiring can (in some cases) be re-used by re-identifying (eg) the white wire for use as a hot leg, and this can speed adoption in some existing dwellings. But both of the layouts we've drawn have a neutral terminal on their face and invite the insertion of 120V plugs, even if they won't work.


The Type G connector serves perfectly well, and is vastly safer than the standard US outlets!


The Type G is way too big! The only reason it's that big is leftover baggage from ring circuits requiring fuses in the plugs.

And while it might be safer on paper, in practice there is little difference in injury rates US vs UK.


I appreciate the plugs having replaceable fuses in them. Makes it harder for people to overload extension leads and the like.

Totally disagree on that injury rate though, and you would too if you ever stepped on a type G plug.


for your kettle, the Japanese ones cheat by just keeping it hot all the time. as far as a pizza oven, unless you want to import one from Europe and have an electrician wire you up 240V, the new induction ones are cool (well, hot).


> for your kettle, the Japanese ones cheat by just keeping it hot all the time

Sure, but that's not what I'd call terribly energy efficient. I'd rather boil on demand seeing as my wife, the primary user of the kettle, doesn't drink tea while she's sleeping or at the office.

As far as the pizza oven goes I'll probably end up going with a propane Ooni, Roccbox, or the like on my patio.

I'm just saying that I'd like the option to dump 240V @ 13A into a small appliance in my kitchen. As it stands now I'm limited to a paltry 1500 watts, compared to a peak that's more than double that in the UK.
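The arithmetic behind that gap, assuming a 120V/12.5A US small-appliance limit vs. 240V at 13A (and ignoring heat losses, so the boil times are best-case):

```python
def kettle_watts(volts: float, amps: float) -> float:
    """Resistive heating power is just P = V * I."""
    return volts * amps

us_outlet = kettle_watts(120, 12.5)  # 1500 W, the usual US small-appliance cap
uk_style = kettle_watts(240, 13)     # 3120 W

# Ideal time to heat 1 L of water from 20 C to 100 C:
# energy = mass * specific_heat * delta_T = 1 kg * 4186 J/(kg*C) * 80 C
energy_j = 1 * 4186 * 80

print(energy_j / us_outlet)  # ~223 s on a US outlet
print(energy_j / uk_style)   # ~107 s on the 240 V circuit
```

Same kettle element design, roughly half the wait, purely from the higher circuit limit.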


Why does divisibility by 3 matter?


If I had to guess: common multiple of nominal voltage of a single battery cell (which is 1.5 to 3V)


Might be for transformer windings in vs windings out? I think transformers are pretty efficient, but convert voltage in multiples. I'm not an EE though and don't know if that is how electrical conversion is actually handled in cars.


> transformers windings in vs windings out

Transformers only apply to AC, where there is a fluctuating magnetic field.

A transformer in a DC circuit is just a physically large (very low resistance) resistor.

The standard practice to convert DC voltage (at least for these voltages) would be a buck converter, which is an active circuit. [1]

[1] https://en.wikipedia.org/wiki/Buck_converter
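In the idealized model, a buck converter's output is simply the input scaled by the switching duty cycle, so it isn't restricted to integer ratios at all. A toy sketch of that relationship:

```python
def buck_output(v_in: float, duty_cycle: float) -> float:
    """Ideal (lossless, continuous-conduction) buck converter: Vout = D * Vin."""
    assert 0.0 <= duty_cycle <= 1.0, "duty cycle is a fraction of the switching period"
    return v_in * duty_cycle

# Stepping a 48 V auxiliary bus down to 12 V needs a 25% duty cycle:
print(buck_output(48, 0.25))  # 12.0
```

A real converter closes a feedback loop to adjust the duty cycle as load and input voltage vary, but the point stands: no winding ratio or divisibility constraint is involved.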


Even if it was AC, the ratio of the number of turns is what matters. I don’t see why 3 would matter there.


Newer Tesla Model 3 (2022+?) already have a 16V internal system (Li-ion) instead of 12V lead-acid.


That article had a weird statement about PCI-X: "It did not see wide use with PCs, likely because Intel chose not to give the technology its blessing, but was briefly utilized by the Power Macintosh G5 line of computers."

I don't know what they meant by blessing, but Intel server motherboards had PCI-X slots and it was a common bus for servers/workstations, mostly used by SCSI and RAID controllers and high-end network adapters.


The bit before it mentions it being designed for high-end workstations and servers, i.e. not PCs, but I do agree that it seems to imply Intel never used it at all, rather than that the standard wasn't used in PCs (Intel or otherwise).


> The peripherals didn’t work across platforms. If you wanted to sell hardware in the 1980s, you were stuck building multiple versions of the same device.

True, but in the late 70s and early 80s there was the S-100 bus. This was used by many systems and became an IEEE standard:

https://en.wikipedia.org/wiki/S-100_bus

However, it wasn't used by widely sold systems like the Apple ][ and later IBM PC, so it faded away.


Not to be confused with Sun Microsystems SBus: https://en.wikipedia.org/wiki/SBus


Discussion on the original article [0] (61 points, 3 months ago, 39 comments)

[0]: https://news.ycombinator.com/item?id=39363479


The greatest thing that helped PCs thrive was standardization, first driven by IBM and after that by Intel.


Micro Channel was the IBM choice that forked universes. I guess our PCI universe is okay.


MCA always felt to me like a way for IBM to bring control back to itself. PCI was such a nice improvement to usability, e.g. auto-IRQ assignment (no more jumpers), as well as speed and bus mastering.


I think it was Compaq reverse-engineering the IBM PC BIOS that led to open standards on PCs (and clones).


Does anyone have any insights into the proprietary graphics buses that were being created leading up to the VESA Local Bus (as referred to in the article)? I was not aware of anything between 16-bit ISA and the addition of VLB.

Did any of these make it onto the market?


Practically every Unix workstation had a different solution.

Part of their performance lead was a proprietary bus that was much faster than ISA.

Technically they weren't graphics busses but since scsi and networking were built in graphics cards were the only things that mattered when it came to the higher bandwidth. A typical Sun Sparcstation would have a graphics card and maybe a serial port card or something that didn't care about the bandwidth of SBUS.

People completely forget this, but from the late 80s to the mid-90s (when PCI started becoming widely available), if you didn't want to shell out for a Unix workstation and you stuck a fast Radius or SuperMac video card in your Macintosh II, your desktop publishing/graphics editing/visualization workflow experience was astronomically better than on a PC, even if its 486 was faster than the 68020/68030 in your Mac. When PCI came out, Apple immediately switched.

Intel probably looked at NUBUS, SBUS, and all of the others and went "well shit if we don't do something about this the pentium won't matter because video cards will be stuck on either ISA or the jank-ass VLB".


Well, VLB wasn't limited to graphics... it was just a fast bus. As opposed to the much later AGP that afaik, was graphics only.

But, MicroChannel was IBM proprietary. I don't know if anybody else had enough market or enough full stack to make a proprietary bus viable; IBM was making graphics cards and motherboards (and cpus, sometimes), and selling enough units that it was worthwhile for add-in makers to support MCA.


> VLB wasn't limited to graphics... it was just a fast bus.

VLB wasn't limited to graphics, but it had issues which made it difficult to use in other applications. Still, there were a handful of SCSI and Ethernet cards made to the standard.

The physical size (Very Long Bus!) meant that it was best suited to cards which were already going to be large (e.g. graphics cards with lots of memory chips) and the tight coupling to the system memory bus meant that it was hard to use with anything other than an 80486 CPU -- which inherently discouraged its use for peripherals which weren't firmly aimed at the consumer market.

Ultimately I think the story here is less "Intel undercut a standards process" and more "Intel realized that the standards process had produced a horrible design". We should be glad that they hedged their bets; PCI was far superior.


Intel Architecture Labs is responsible for essentially the entire I/O architecture of virtually all computers (not just x86) for the last ~three decades: USB, SATA, PCI and PCIe, plus PCI-for-Graphics (AGP). Notably all of these were largely developed in-house at Intel and then basically gifted more or less finished to standards bodies or Intel created an industry consortium around them.


Wonder what made Intel keep its damn paws firmly on Thunderbolt up until the USB4 days. To this day it's still a truckload of issues/hacks to get old PCs upgraded with Thunderbolt 2/3 cards.


How much of that is because the tech was better versus because it was Intel pushing for it? There were competing standards such as (off the top of my head) SCSI (isn't that what SATA basically is anyway?), Firewire and PCIx.


> SCSI (isn't that what SATA basically is anyway?),

Nope, there are a lot of differences (Wikipedia would help on the details), most notably that SATA was designed to be cheap from the start, including in controller complexity, while SCSI demands a quite intelligent controller, which costs more.

https://en.wikipedia.org/wiki/SATA#Comparison_to_other_inter...


> SCSI (isn't that what SATA basically is anyway?)

Ehh, ever since ATAPI, both old 'IDE' style as well as SATA hosts could use SCSI commands. The speed you get with a SATA cable, wire for wire, is a win over any SCSI cable I've ever seen, let alone the LVD vs HVD and everything else you had to worry about.

>Firewire

arguably had the right ideas at the wrong time; the extra power delivery is something we are finally now seeing in USB. However firewire was still relatively expensive.

> PCIx

PCI-X was still parallel with lots of data lines/etc, which can cause its own problems. Aside from multiple cards potentially jamming each other up (with PCI Express this is less possible since it's point to point rather than shared lines), there is the challenge of the large number of traces and the difficulty of running them on a board as the signalling frequency scales up.


Note that AGP was initially basically just PCI 2.1, with the bus conflict resolution system ripped out, the connector flipped around, and a few minimal tweaks. It could, in principle, have been used for something other than a video card.

The crucial part about it was that it was a dedicated link to one device. AGP was initially created not because PCI bandwidth was running out, but because PCI is a shared bus, and the kind of transfers video adapters liked to do played havoc with the system that negotiated who had right of way, resulting in all kinds of problems when other cards had to wait for their turn for much longer than their driver developers expected.

The fact that it only ever connected one device to the host made it much easier to evolve, as future cards and hosts could just negotiate to do something different than what AGP 1.0 defined, if they found they both supported a faster version. When bandwidth demand rapidly rose with ever faster 3d accelerators, this was very beneficial.


Microchannel for IBM with their PS/2 line.

There was also the VGA "feature connector" which was used sometimes for video capture, mpeg decoders, and so on: https://en.wikipedia.org/wiki/Feature_connector


There was NuBus, which did not have any adoption in the i386 world. It was used by Apple and at least one Unix vendor.

https://en.m.wikipedia.org/wiki/NuBus

SPARC also had sbus, but this is likely later than your window.

https://en.m.wikipedia.org/wiki/SBus


There were a few vendor-specific VLBish busses:

Opti local bus was the most common, and had a few different boards: https://ancientelectronics.wordpress.com/tag/opti-local-bus/

Gigabyte had one that was only used for the "GA-486US" motherboard. The connector was just two 16-bit ISA connectors back to back: https://theretroweb.com/motherboard/image/ga-486us-front-60b...

I believe there were some others from different vendors.

The signaling for all of these was pretty similar to VLB, since it was just the 486 bus on a connector.


In the PC space, as a disjointed semi-chronological overlapping timeline:

In the beginning, we had ISA. We had ISA because it was cheap enough for IBM's PC, not because it was good.

MicroChannel was a thing, largely limited to IBM, starting in 1987. It worked well.

EISA was a thing. It was not ever particularly common or cheap. It had 32-bit width and 8.33MHz bus speed, in 1989. EISA's main features were that it was solid, and that it was not MicroChannel. It was backwards-compatible with ISA cards.

Then VLB happened. It was fast, 32-bit, was often flaky, and it was cheap. It was very popular for all kinds of PC accessories -- not just video cards.

Then, of course: Everything performance-oriented shifted to 32-bit PCI almost overnight (including some things outside of the PC space).

But there was also a time when we had PCI-X (which is absolutely not an abbreviation for PCI Express). PCI-X was 64 bits wide at up to 133MHz (though 66MHz was more typical). Like EISA, it never became common or cheap.

And eventually, we had AGP -- but only for graphics.

And there was also PCI-X 2.0, which like the previous version was 64 bits wide, but could operate at up to 533MHz. It was theoretically excellent, but essentially never really existed: Widespread PCI Express adoption was right around the corner by then.

And now, of course: We have PCI Express, which we've been successfully flogging along in various incarnations for a couple of decades -- a damned eternity in computer years.


Yes, there was 32-bit EISA.


There was no more satisfying home for an ESDI controller.


Now that’s a term I haven’t heard for a long time.


I seem to recall really damn long cards on some early PCs, maybe XTs, but searching Wikipedia only finds 16-bit ISA and VLB (much later, in the 486 era). Am I hallucinating? Maybe on some server vendor stuff, like SCSI or something? Some of them had proprietary daughterboards too.


try "full length 8 bit ISA cards". Here's a transputer card https://www.ebay.com/itm/256438778055, memory and I/O cards were pretty common iirc

edit: and video cards of course


Yeah, you don't even need to go that exotic. E.g. an IBM CGA card:

https://en.wikipedia.org/wiki/Color_Graphics_Adapter#/media/...


Yes, some of those early cards were monsters, usually due to some combination of discrete 7400-series logic (i.e. economies of scale weren't there for most ASICs yet) and memory chips (individual DIP chips, as SIMM/DIMM modules weren't a thing yet) in the 8086 and into the 80286 era. I remember issues with some of the early clones not having full length/height slots causing an issue with some of those cards. By the 386 era, those monster cards had mostly died out on the desktop side at least (there were a few specialty cards that persisted into the late 80s/early 90s, probably until enough customers complained about them not fitting in their new system.)



And now hard disks plug directly into sockets on the motherboard. (Well, NVMe ones do)


Disk On Module components for industrial computers have been doing this since we have had rewriteable non-volatile solid state memory.


I had Compact Flash cards with adapters on motherboard IDE ports on systems in 2000... I think that counts too.


The Performance Analyzer version of the PlayStation development board took up 3 ISA slots. Once PCI slots started to outnumber ISA slots we struggled to find motherboards for them.


I owned an S3 local bus card of this type. It was my first computer purchase after college graduation.



I had something like this from a temp job that was liquidating old hardware. They were Northgate PCs and they had two extended 32-bit ISA slots that took giant memory cards. I’m pretty sure the platform was i386.


This comment for some reason made me think of some very long things that looked sorta like card slots but I believe took very long ribbon cables. Might be that?

No idea what they were


> If you’ve ever used a device with a USB or Bluetooth connection, you can thank Intel for that.

Wasn't Apple a serious factor in getting both popular? I 'member Apple was the first one to seriously deploy USB for its keyboards and mice, whereas the rest of the world was stuck with PS/2 (and the fact it wasn't supposed to do hot-plugging) or, even worse, that horrible large DIN plug for keyboards. And I also 'member that Windows' USB stack situation was horrible up until Windows 2000, with USB sticks shipping with tiny little driver CDs that contained the manufacturer's implementation of a USB storage class driver for 98/ME.

And IIRC they were also the first one to have a Bluetooth stack that didn't outright suck (it took Windows until W7 to ship with its own native BT stack, so every chipset vendor shipped their own package, with different feature sets and a host of interoperability issues).


> Wasn't Apple a serious factor in getting both popular?

I don’t think so, especially for BT.

> And IIRC they were also the first one to have a Bluetooth stack that didn't outright suck

Bluetooth was already well adopted on non-smart mobile phones (feature phones) and then cars, primarily for headsets but also OBEX (2002-2003), long before anyone cared much about desktop use for peripherals, let alone Apple. Apple had nothing to do with popularizing Bluetooth.

For USB, it is more nuanced but I think usually overstated. Apple is still a minority in desktop marketshare, but in the 90s even though they were recovering with the iMac, they had basically come from the brink. I don’t think they had the market pull people take for granted now.

Intel and VIA were already sticking USB controllers into their Pentium/AMD chipsets “for free”, so the ports were inevitably showing up on every bargain basement Wintel PC. The numbers just dwarf anything Apple was doing, even if they adopted early. It’s also not like Apple had the market power to compel anybody like Microsoft to do anything, like they would a few years later. (Further evidence would be the glut of cheap USB accessories in BRIC countries, where Apple had essentially 0% market share in those days.)

A few years later Apple would make FireWire commonplace on a number of peripheral classes for a short while.


> Wasn't Apple a serious factor in getting both popular?

USB was invented by Intel. Yes, Apple going all in on USB with the iMac was a huge push for USB, which had still been languishing on the PC side, slowly adopted but poorly used. However, someone had to invent it. With Bluetooth, Intel's influence mattered less; they weren't nearly as instrumental as they were with USB.


I wonder why graphics cards didn't jump from PCI directly to PCI Express instead of using AGP in between.


AGP was released several years before PCIe specifically because the bandwidth needs of graphics cards were so high. PCI just couldn’t keep up with the demands of 3D accelerators that were starting to come into widespread use. AGP increased bandwidth massively by providing direct access to system RAM, unlike PCI which had to go through the CPU.

A better question might be why AGP didn’t supplant PCI for all devices rather than just graphics cards, and the answer is that since AGP was a port rather than a bus, it was impossible to put more than a single AGP slot on a motherboard.

Once PCIe came along and was able to provide the bandwidth and DMA required for graphics cards, it simply replaced both PCI and AGP, rendering them both obsolete.


> it was impossible to put more than a single AGP slot on a motherboard.

AGP is just PCI on steroids[0], so it's less than impossible and more like prohibitively expensive, because it would require an additional run to a dual-AGP-ported memory controller (which resided in the chipset in the times of AGP) that did not exist, or an additional system chipset, probably with its own memory and all the SMP shenanigans that come with that.

BTW I think I heard about some motherboard which had two AGP slots, but the second one was AGP only physically/electrically, running over a standard PCI bus. But maybe my brain is just making things up...

[0] https://www.youtube.com/watch?v=Pdl1Jwe9dcw

EDIT:

https://retrocomputing.stackexchange.com/questions/16863/was...


> BTW I think I heard about some motherboard which had two AGP slots, but the second one was AGP only physically/electrically, running over a standard PCI bus. But maybe my brain is just making things...

I've not personally ever seen a board with dual AGP slots, but there were a number of AGP- and PCIe-supporting oddballs during the transition period. I recall one of the more terrible ones basically just hanging an AGP card off the PCI bus. There were some AGP/PCIe chipsets that were quite good during this time as well, but many of them seemed to be crappy hacks with performance limitations or compatibility problems.

Also interesting were the graphics cards that used Nvidia's AGP -> PCIe adapter chip which allowed them to keep selling older hardware on newer platforms.


https://www.asrock.com/mb/VIA/K7Upgrade-600/

It's not really two usable slots, because you can only use one at a time, of course.

The AlphaServer ES47 and ES80 models support 4 and 8 AGP slots respectively, but that's cheating; they are scalable server systems. A maxed-out GS1280 can support 16 AGP slots per partition.

They did test them with 4 cards, though.

https://www.hpe.com/psnow/doc/c04324523.pdf?jumpid=in_lit-ps...

https://docslib.org/doc/5603403/hp-alphaserver-es47-es80-gs1...

Unsurprisingly (in the context of the thread), some guy from Russia actually built an AGP-to-PCI converter and it worked just fine (considering you can only use 3.3V cards on it):

https://www.youtube.com/watch?v=Jhp1_zBnAEk

> I recall one of the more terrible ones doing something like basically just allowing an AGP card to hang off the PCI bus

Intel 915 chipset:

https://www.anandtech.com/show/1344

Looks like this is what my mind mangled into a dual-AGP-slot board.

Welp, guess there were no consumer boards with dual AGP slots at all.


I did find a way to have two gfx chips on a single AGP card without glue logic. See my comment above in this thread.


Back then, I came up with a way to connect two graphics chips onto a single AGP slot without a bridge chip or glue logic. Since AGP is a superset of PCI, both gfx chips will get recognized and enumerated. Then you just have the driver only ever use AGP bus mastering on one chip and PCI bus mastering on the other chip. It is not symmetric in terms of transfer speeds and a bit janky, but it does work.
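Since AGP keeps PCI's configuration mechanism, a plain PCI bus scan finds both chips as separate devices; nothing special is needed to discover them. A toy sketch of that enumeration step, with a mocked config space standing in for real 0xCF8/0xCFC port I/O (the bus/device numbers and vendor ID below are placeholders, not from the actual product):

```python
# Reading the vendor ID of an empty device slot returns all-ones on PCI.
INVALID = 0xFFFF

# Mocked config space: (bus, device) -> vendor ID. Two graphics chips share
# the AGP segment as devices 0 and 1 (0x1234 is a placeholder vendor ID).
config_space = {
    (1, 0): 0x1234,  # chip the driver uses AGP bus mastering on
    (1, 1): 0x1234,  # second chip, driven with plain PCI bus mastering
}

def read_vendor_id(bus, dev):
    # Real firmware/OS code would do config-space reads here instead.
    return config_space.get((bus, dev), INVALID)

def enumerate_bus(bus):
    """Return the device numbers that respond on the given bus."""
    return [dev for dev in range(32) if read_vendor_id(bus, dev) != INVALID]

print(enumerate_bus(1))  # both chips show up: [0, 1]
```

The asymmetry the parent describes then lives entirely in the driver: which of the two enumerated devices gets the fast AGP transfers.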

Can you guess which product used this implementation?


VooDoo in SLI? Volari?!

EDIT: and be sure to check the shenanigans in this comment: https://news.ycombinator.com/item?id=40422963


Isn't PCIe also a point-to-point connection instead of a bus?


Yes, but it has far fewer pins for the same bandwidth, so it's feasible to make a "PCIe switch" that will fit in a typical IC package.


AGP is little more than a second PCI bus running at higher clock speeds.

Because PCI was a shared bus, not only did the video card have to share bandwidth with every other card, but PCI also ended up stuck at the original 33 MHz speed of the first version for compatibility reasons.


AGP was a modified PCI bus, making it a high-speed point-to-point PCI connection. It was only ever intended to connect a single graphics controller, so for years we had only one AGP port (they later hacked more than one slot/chip, but PCIe thankfully happened). The idea was that the higher bandwidth could allow the graphics chip to use main memory for graphics, but that access was much, much slower than the on-board RAM on the card. Plus, you know, your CPU, and hence OS and programs, also needed to access that memory. I am pretty sure that idea was quickly abandoned after the i740 flopped.


AGP (1997) was Intel’s bridge technology between PCI (1992) and PCIe (2004).


Because there was no PCIe when AGP was developed, and the video cards being made needed something faster than the PCI bus while other consumer expansion cards did not.


That's the equivalent of saying we should have gone straight from black-and-white television to 4K HDR without color TV and HD in between.



