AGP was released several years before PCIe specifically because the bandwidth needs of graphics cards were so high. PCI just couldn’t keep up with the demands of the 3D accelerators that were coming into widespread use. AGP increased bandwidth massively by giving the graphics card a dedicated, faster path to system RAM, instead of PCI's shared bus.
A better question might be why AGP didn’t supplant PCI for all devices rather than just graphics cards, and the answer is that since AGP was a port rather than a bus, it was impossible to put more than a single AGP slot on a motherboard.
Once PCIe came along and was able to provide the bandwidth and DMA required for graphics cards, it simply replaced both PCI and AGP, rendering them both obsolete.
> it was impossible to put more than a single AGP slot on a motherboard.
AGP is just PCI on steroids[0], so it's less "impossible" and more "prohibitively expensive": a second slot would require an additional run to a dual-AGP-ported memory controller (which resided in the chipset in the times of AGP) that did not exist, or an additional system chipset, probably with its own memory and all the SMP shenanigans that follow from that.
BTW I think I heard about some motherboard which had two AGP slots, but the second one was AGP only physically/electrically, running over a standard PCI bus. But maybe my brain is just making things up...
> BTW I think I heard about some motherboard which had two AGP slots, but the second one was AGP only physically/electrically, running over a standard PCI bus. But maybe my brain is just making things up...
I've never personally seen a board with dual AGP slots, but there were a number of oddballs supporting both AGP and PCIe during the transition period. I recall one of the more terrible ones doing something like basically just letting an AGP card hang off the PCI bus. There were some quite good AGP/PCIe chipsets during this time as well, but many of them seemed to be crappy hacks with performance limitations or compatibility problems.
Also interesting were the graphics cards that used Nvidia's AGP -> PCIe adapter chip which allowed them to keep selling older hardware on newer platforms.
It's not really two slots, because you can only use one of them, of course.
The AlphaServer ES47 and ES80 models support 4 and 8 AGP slots respectively, but that's cheating; they are scalable server systems. A maxed-out GS1280 can support 16 AGP slots per partition.
Unsurprisingly (in the context of the thread), some guy from Russia actually built an AGP-to-PCI converter and it worked just fine (given that you can only use 3.3V cards on it):
Back then, I came up with a way to connect two graphics chips to a single AGP slot without a bridge chip or glue logic. Since AGP is a superset of PCI, both gfx chips get recognized and enumerated. Then you just have the driver only ever use AGP bus mastering on one chip and plain PCI bus mastering on the other. It's not symmetric in terms of transfer speeds and a bit janky, but it does work.
Can you guess which product used this implementation?
AGP is little more than a second PCI bus running at higher clock speeds.
Because PCI was a shared bus, not only did the video card have to share bandwidth with every other card, but PCI ended up stuck at the original 33 MHz speed of the first version for compatibility reasons.
AGP was a modified PCI bus, making it a high-speed point-to-point PCI connection. It was only ever intended to connect a single graphics controller, so for years we had only one AGP port (they later hacked in more than one slot/chip, but PCIe thankfully happened). The idea was that the higher bandwidth could allow the graphics chip to use main memory for graphics, but that access was much, much slower than the on-board RAM on the card. Plus, you know, your CPU, and hence the OS and programs, also needed to access that memory. I am pretty sure that idea was quickly abandoned after the i740 flopped.
Because there was no PCIe when AGP was developed, and the video cards being made needed something faster than the PCI bus while other consumer expansion cards did not.