> Note: Documents wanted
> If you are in possession of any of:
> NVIDIA RIVA 128 Programmers’ Reference Manual
> NVIDIA RIVA 128 Customer Evaluation Kit (we have the NV1 CEK version 1.22)
> NVIDIA RIVA 128 Turnkey Manufacturing Package
> Source code (drivers, VBIOS, etc) related to the NVIDIA RIVA 128
> Any similar documents, excluding the well-known datasheet, with technical information about a GPU going by the name “NV3”, “STG-3000”, “RIVA 128”, “NV3T”, “RIVA 128 Turbo” (an early name for the ZX) or “RIVA 128 ZX”
> Any document, code, or materials relating to a graphics card by NVIDIA, in association with Sega, Helios Semiconductor or SGS-Thomson (now STMicroelectronics) codenamed “Mutara”, “Mutara V08”, or “NV2”, or relating to a cancelled Sega console codenamed “V08”
> Any documentation relating to RIVA TNT
> Any NVIDIA SDK version that is not 0.81 or 0.83
I feel this. A lot of information has been lost.
As a designer of Weitek's VGA core, I found this a very interesting read. I had no idea how valuable the core was to nVidia. As Weitek was going under, I also remember interviewing with 3dfx and thinking how arrogant they were. I'm not surprised they eventually lost.
Shoulda got an Nvidia job!
I was very convinced at that time that 3dfx didn't have a good roadmap and Nvidia would prevail based on their professionalism and superior ability to design silicon.
Can you talk about 3dfx’s arrogance?
Can you share more about Weitek and the work there? This name is new to me and I'm sure to a lot of people here.
> 5.0 came out late during development of the chip, which turned out to be mostly compliant, with the exception of some blending modes such as additive blending which Jensen Huang later claimed was due to Microsoft not giving them the specification in time.
Not sure if this is the same thing I had, but on my Riva 128 the alpha blending wasn't properly implemented. I distinctly recall playing Unreal Tournament, and when I fired the rocket launcher there were big black squares with a smoke texture on them slowly rotating :D couldn't see where I was shooting :D
Yes, that would be an artifact of missing additive blending.
It simply means that each newly rendered polygon’s RGB values are added together with the pixel values already in the frame buffer. It’s good for lighting effects (although not a very realistic simulation of light’s behavior unless your frame buffer is linear light rather than gamma corrected, but that effectively requires floating point RGB which wasn’t available on gaming cards until 2003).
Iirc, Riva 128 only supports 8 of the 32 D3D5 blend modes, or something. Usually in the case of GPUs that don't support all blending modes the Direct3D HAL will attempt to compensate by using a different blending mode, or just give up and render opaque pixels. The results are usually pretty ugly. Riva 128 is one of the better ones for the era in this regard.
I doubt it required floating point RGB, and, iirc, that came (for real) much later. GPUs used fixed point math behind the scenes for a long time. The only thing you need to get proper additive blending is saturation, so that you don't overflow, like on the N64.
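To make "saturation is all you need" concrete, here is a minimal C sketch of fixed-point additive blending with a clamped add; the 8-bit interleaved-RGB framebuffer layout is an assumption for the example, not any particular chip's format:

```c
#include <stdint.h>

/* Saturating additive blend of one 8-bit channel:
 * dst = min(dst + src, 255). Without the clamp the sum
 * wraps around and bright overlapping effects turn dark. */
static inline uint8_t blend_add_u8(uint8_t dst, uint8_t src)
{
    unsigned sum = (unsigned)dst + (unsigned)src;
    return (uint8_t)(sum > 255 ? 255 : sum);
}

/* Apply it to an interleaved 8-bit RGB framebuffer span. */
void additive_blend_span(uint8_t *fb, const uint8_t *frag, int pixels)
{
    for (int i = 0; i < pixels * 3; i++)
        fb[i] = blend_add_u8(fb[i], frag[i]);
}
```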
You don't need floating point RGB for additive blending, but to get correct behavior when using additive blending for your light sources, you must use linear buffers and if you do that, the dynamic range of the scene won't fit in eight bits.
The blend operation happens on linearized values, so the frame buffer being gamma corrected is not a problem. 8-bit sRGB is a very good gamma corrected format.
I’m pretty sure this wasn’t available on early GPUs. Additive blending with glBlendFunc(GL_ONE, GL_ONE) or its D3D equivalent just summed the new fragment with the frame buffer value.
Accumulating into an 8-bpc buffer will quickly artifact, so whether it’s acceptable to only do the blend operation in linear light (accumulating into a gamma-corrected frame buffer) depends on how many passes you’re rendering.
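To make that trade-off concrete, here is a sketch of the same additive blend done in linear light against an 8-bit sRGB-encoded buffer. The transfer-function constants are the standard sRGB ones; everything else is illustrative rather than any specific chip or API, and the re-quantization to 8 bits on every pass is exactly where the artifacts discussed above creep in:

```c
#include <math.h>
#include <stdint.h>

/* Standard sRGB transfer functions (per channel, 0..1). */
static float srgb_to_linear(float c)
{
    return c <= 0.04045f ? c / 12.92f : powf((c + 0.055f) / 1.055f, 2.4f);
}

static float linear_to_srgb(float c)
{
    return c <= 0.0031308f ? 12.92f * c : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}

/* Additive blend in linear light, writing back to an 8-bit sRGB buffer.
 * Each pass re-quantizes to 8 bits and clips at 1.0, so many accumulated
 * lights still band or blow out without a wider (e.g. float) buffer. */
uint8_t blend_add_srgb_u8(uint8_t dst, uint8_t src)
{
    float sum = srgb_to_linear(dst / 255.0f) + srgb_to_linear(src / 255.0f);
    if (sum > 1.0f) sum = 1.0f;
    return (uint8_t)(linear_to_srgb(sum) * 255.0f + 0.5f);
}
```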
What may be called graphics commands in other GPU architectures are instead called graphics objects in the NV3 and all other NVIDIA architectures.
I think this choice of terminology reflects both the era in which it was chosen (OOP was a huge trend back then), and the mindset of those who worked on the architecture (software-oriented). In contrast, Intel calls them commands/instructions/opcodes, as did the old 8514/A, arguably the one that started it all.
A specialized hardware accelerator for the manner by which Windows 95’s GDI (and its DIB Engine?) renders text.
Drawing text (from bitmap font data) is a very common 2D accelerator feature.
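For a sense of what such an accelerator replaces, here is a hypothetical software version of the core operation, a 1-bpp-to-color expansion blit of a glyph bitmap. The function names and the 8-bit destination format are illustrative assumptions, not GDI's or NV3's actual interface:

```c
#include <stdint.h>

/* Expand a 1-bpp glyph bitmap into an 8-bit framebuffer:
 * set bits become the foreground color, clear bits are left
 * untouched (transparent background). A 2D engine performs this
 * expansion in hardware so the CPU only has to send the bit pattern. */
void draw_glyph_1bpp(uint8_t *fb, int fb_pitch,            /* destination, bytes per row */
                     const uint8_t *glyph, int glyph_pitch, /* 1-bpp source, bytes per row */
                     int w, int h, int x, int y, uint8_t fg)
{
    for (int row = 0; row < h; row++) {
        const uint8_t *src = glyph + row * glyph_pitch;
        uint8_t *dst = fb + (y + row) * fb_pitch + x;
        for (int col = 0; col < w; col++)
            if (src[col >> 3] & (0x80 >> (col & 7)))   /* MSB-first bit test */
                dst[col] = fg;
    }
}
```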
Nvidia did have a very ambitious abstraction which encompassed audio as well. As a low level rendering engineer at the time, I found their documentation to be quite challenging as I didn't jibe with their mindset and nomenclature.
I wonder if anyone at old school Nvidia remembers if the NV1 card did quads to try to win a contract with Sega, or if the designers at Sega overtly wanted a card that would do quads. My suspicion is that this must've been shitty influence from Sega.
The Sega Saturn released in November 1994, with one of the most mind-bogglingly bad hardware designs ever committed to a console. The thing had two CPUs, and unlike every console or 3d rendering machine to come later, actually rendered quads rather than tris. This is because you can more easily render lots of sprites for 2d games with quads (!!!). It was allegedly extremely difficult to program for, such that its complexity stymied emulation for years after its release. I also read that Sega (which was originally founded in the US) had some sort of weird dynamic between its American and Japanese arms, such that the Japanese side of the company would design and ship hardware without consulting the American side. Allegedly, the creator of Sonic the Hedgehog (Yuji Naka, who was later convicted of insider trading) would not pass the 3d engine used to build Sonic Team's first 3d game to the American team that was supposed to develop the main 3d Sonic game for the Saturn, and the main programmer for the Sonic 3d game engine in the US (Ofer Alon, who went on to found the company behind the 3d modeling software ZBrush) could not get a 3d Sonic game to run on the Saturn because he tried writing the engine in the "slow" language C, rather than old-fashioned assembly like Naka's team.
Whelp, that was my knowledge dump on 90s Sega!
That sounds plausible. I did a lot of work for Japanese companies in the nineties and it was quite eye opening.
Nvidia weren't chasing Sega. The NV1 was really just an evolution of Sun's GX graphics card, which did quad-based accelerated 3D in 1989 (though with no texture mapping: GX was limited to flat-shaded quads, or emulating Gouraud shading with tessellation).
Quads vs. triangles was still kind of an open question as of 1993, when work on the NV1 started. Triangles are simpler, but quads allow for a neat trick where you can do forwards texture mapping and get a much better approximation of perspective-correct texturing than you can with triangles.
Nvidia went all in on this approach. Not only did they support 4-control-point quads, the NV1 could also render 9-control-point quadratic patches. These quadratic patches not only provide a really good approximation of a perspective-correct textured quad, but can represent textured curved surfaces in 3d space.
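For a sense of what a 9-control-point patch means, here is a sketch of evaluating a biquadratic Bézier patch at parameters (s, t). Whether NV1's internal formulation looked anything like this is an assumption; the point is only that a 3x3 grid of control points defines a texturable curved surface:

```c
/* Evaluate a biquadratic (3x3 control point) Bezier patch at (s, t).
 * A flat quad is the degenerate case where the edge midpoints and the
 * centre point are averages of the corners. */
typedef struct { float x, y, z; } vec3;

static vec3 bezier2(vec3 a, vec3 b, vec3 c, float t)
{
    float u = 1.0f - t;
    float w0 = u * u, w1 = 2.0f * u * t, w2 = t * t;   /* quadratic Bernstein weights */
    vec3 r = { w0*a.x + w1*b.x + w2*c.x,
               w0*a.y + w1*b.y + w2*c.y,
               w0*a.z + w1*b.z + w2*c.z };
    return r;
}

vec3 eval_patch(const vec3 p[3][3], float s, float t)
{
    vec3 rows[3];
    for (int i = 0; i < 3; i++)
        rows[i] = bezier2(p[i][0], p[i][1], p[i][2], s);   /* curve along s in each row */
    return bezier2(rows[0], rows[1], rows[2], t);          /* then along t */
}
```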
Quad-based approximations are much cheaper to implement in hardware than proper perspective-correct texturing, which requires an expensive division operation per pixel. And the forwards texturing approach has the additional benefit of optimal memory access patterns for textures. The approach seemed like a win in that era of limited hardware.
The problem is that forwards texture mapping sucks for 3D artists. Artists are fine with quads, they still use them today, but forwards texture mapping is very inflexible. Inverse texture mapping allows you to simply drape a texture across a model with UV coordinates. Forwards texture mapping requires careful planning to get good results: you essentially need to draw the texture first and build your model out of textured quads. Many Sega Saturn games rely on automated conversion from inverse textured models.
By 1996, you could just add a divider to your hardware and get proper perspective-corrected inverse texturing, and there was no reason to do proper quad support; just split them into two triangles.
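To make the "expensive division per pixel" concrete, here is a textbook sketch of perspective-correct inverse texturing across one scanline span: u/w, v/w and 1/w step linearly in screen space, and each pixel pays a divide to recover (u, v). This is generic illustration, not any specific chip's pipeline:

```c
/* Perspective-correct texturing across one scanline span.
 * The attributes u/w, v/w and 1/w are linear in screen space,
 * so they can be stepped with adds; recovering (u, v) at each
 * pixel costs a divide (or reciprocal), which is exactly what
 * early hardware tried to avoid. */
void textured_span(int x0, int x1,
                   float u_over_w, float v_over_w, float one_over_w,
                   float du, float dv, float dw,               /* per-pixel steps */
                   void (*put_texel)(int x, float u, float v)) /* caller's texel fetch/write */
{
    for (int x = x0; x < x1; x++) {
        float w = 1.0f / one_over_w;        /* the per-pixel divide */
        put_texel(x, u_over_w * w, v_over_w * w);
        u_over_w += du;
        v_over_w += dv;
        one_over_w += dw;
    }
}
```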
Riva 128 + Pentium II 233MHz + Corn Emulator = Mario 64 at full speed on your PC.
+ Bleem for your PS1 emulation needs :)
Is this an especially odd architecture? What do other GPUs look like?
From a glance it looks fairly standard for a fixed-function GPU of the day. These GPUs were primitive by modern standards; they didn't even have hardware T&L, meaning everything essentially had to be converted from 3D to 2D on the CPU†. The weirdest part is per-polygon mipmapping, which is not normal. (I didn't know anyone even did this; it's clearly going to look terrible in many common cases, such as large textured floors. That being said, I can understand why they'd do this if they were in a rush; calculating screen-space derivatives in both X and Y, as required for per-pixel mipmapping, is really annoying with a scanline algorithm, which they were probably using, as Pineda rasterization hadn't caught on yet.)
†I know that normalized device coordinates are still 3D, so "converting 3D to 2D" is technically wrong, but it conveys the right intuition.
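For reference, conventional per-pixel mip selection is driven by those screen-space derivatives of the texture coordinates, roughly as in the sketch below (this is the textbook formula, nothing NV3-specific):

```c
#include <math.h>

/* Textbook per-pixel mip LOD: take the screen-space derivatives of the
 * texel coordinates (in texels per pixel) and use the larger footprint:
 * lod = log2(max(|d(uv)/dx|, |d(uv)/dy|)). */
float mip_lod(float dudx, float dvdx, float dudy, float dvdy)
{
    float lx = sqrtf(dudx * dudx + dvdx * dvdx);   /* texel footprint along x */
    float ly = sqrtf(dudy * dudy + dvdy * dvdy);   /* texel footprint along y */
    float rho = lx > ly ? lx : ly;
    return rho > 0.0f ? log2f(rho) : 0.0f;
}
```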
> The weirdest part is per-polygon mipmapping, which is not normal
I suspect that per-polygon mipmapping is actually calculated on the CPU. That would mean the actual hardware doesn't really implement mipmapping, it just implements switching between texture LODs on a per-triangle basis (probably that "M" coord in the 0x17 object).
Apparently later drivers from 1999 did actually implement per-pixel mipmapping, but I have a horrible feeling that's achieved by tessellating triangles along the change in LOD boundary, which must take quite a bit of CPU time.
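If the driver really did pick one LOD per triangle on the CPU, one plausible (purely hypothetical) way is to compare the triangle's area in texture space with its area in screen space:

```c
#include <math.h>

/* Hypothetical per-triangle LOD selection, as a driver might do it on
 * the CPU: the ratio of texture-space area to screen-space area gives
 * the average texel-to-pixel footprint, and half the log2 converts the
 * area ratio to a length ratio. One value covers the whole triangle,
 * which is why large textured floors show visible LOD seams. */
float triangle_lod(const float uv[3][2],   /* texture coords, in texels */
                   const float xy[3][2])   /* screen positions, in pixels */
{
    float tex_area = 0.5f * fabsf((uv[1][0]-uv[0][0])*(uv[2][1]-uv[0][1]) -
                                  (uv[2][0]-uv[0][0])*(uv[1][1]-uv[0][1]));
    float scr_area = 0.5f * fabsf((xy[1][0]-xy[0][0])*(xy[2][1]-xy[0][1]) -
                                  (xy[2][0]-xy[0][0])*(xy[1][1]-xy[0][1]));
    if (scr_area <= 0.0f || tex_area <= 0.0f)
        return 0.0f;
    return 0.5f * log2f(tex_area / scr_area);
}
```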
I strongly recommend https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-... if you're curious about GPU architectures. Though in the 14 (!) years since, a lot has changed still.
In the modern era the 'compute shaders' part of that has become more dominant and lots of fixed function parts of the graphics pipeline have moved to software.
I'd love to see a more modern take on this if someone has run across it.
Well, RIVA 128 didn't even have multitexturing, so not much of that is going to be applicable. I don't think that Voodoo era chips even used Pineda rasterization as that article details; they did fancy scanline rasterization.
I interpreted rayiner's question as present tense, "what do GPUs look like now" - you're definitely right that most of the architectures I know of from back then were doing scans or (a bit later) tiling.
The expectation around the Riva 128 was intense. 16-bit color, integrated 2D/3D, and a reasonable price were going to doom 3dfx. It was a little underwhelming, and it wasn't until the TNT, TNT2, and GeForce 256 that it really became obvious that these guys were on a path to rule the market.
It really would be cool if someone could get a sitdown with Jensen to reminisce about the Riva 128 period.
Who else bought NVDA back in '99?
I bought a Riva 128. If I had bought NVDA instead, I’d be a lot better off! xD
I bought beos :D
I ran BeOS on a Dell with a Riva 128 card.
Well I bought stock of Be, Inc. :-)
You might enjoy this talk by Erik Lindholm (now retired), who talks about Riva 128 and many of the other early Nvidia cards: https://ubc.ca.panopto.com/Panopto/Pages/Viewer.aspx?id=880a...
Awesome! Thank you. The entire lecture is worth a watch. The nvidia portion starts about 33 minutes in with some great 3D wars nostalgia.
Amazing that they're able to make progress with such little public documentation.
I'd assume any driver source code, which the Linux world has produced a lot of, can serve as a source (pun intended) of documentation.
There's also https://envytools.readthedocs.io/en/latest/hw/intro.html
Not sure I'd characterize NV3 as a "success". It probably made money, and kept the company above water. But they didn't have a genuinely "successful" product until the TNT shipped in 1998. At this stage, 3dfx completely owned the market, to the extent that lots of notionally "Direct3D" games wouldn't generally run on anything else. NVIDIA and ATI were playing "chase the game with driver updates" on every AAA launch trying to avoid being broken by default.
Which makes it, IMHO, a weird target to try to emulate. NV2 was a real product and sold some units, but it's otherwise more or less forgotten. Like, if you were deciding on a system from the early 70's to research/emulate, would you pick the Data General Nova or the PDP-11?
Most 3dfx cards are already emulated. I'm just a crackhead. NV2 was not a real product; it was cancelled. You are talking about NV1.
Heh, no, I was talking about NV3. It's just a typo in the last paragraph.
Ohhh. Actually, quite a lot of NV3s were sold... You can see that just from Nvidia's revenue totals and reviews at the time. Note that standards for image quality increased very quickly, so it was described as decent in 1997 but awful in 1999.
>standards for image quality increased very quickly
In 1997, as long as the game started at all and you could more or less see what's going on, it was considered OK and probably not a scam. It was a time of "accelerators" like Matrox with no texturing support and S3 running slower than in pure software mode, with most vendors missing crucial blending modes and filtering.
vlaskcz has a great YouTube series called "Worst Game Graphics Cards" and it's pretty much every single vendor that isn't 3dfx up to 1998. https://www.youtube.com/watch?v=A0ljjj4LTDc&list=PLOeoPVvEK8...
I owned one, actually. It was very hard to find at retail. ATI still dominated the integrated video card market, with S3, Matrox, Diamond, et al. filling out the rest. NVIDIA was a notable upstart, and the RIVA actually looked really great on paper. But like I said, it couldn't break through the Voodoo's lock on the game market, nor ATI's control of the OEM channel. It kept the company from failing, but that's about as far as it goes.
Again, a year later the TNT changed things for NVIDIA (and the Geforce 256 a year after that changed everything). But the 128 was forgettable in hindsight.
I don't think the retail sales were very good; a lot of them ended up as OEM cards or on-board (not really integrated at this point) graphics.
Were there any games or apps specifically tied to these cards, or did everything go through D3D at this point?
I remember some earlier titles that were locked to specific cards such as the Matrox ones and didn't support any other accelerators.
From memory, Unreal Tournament and other FPSes that used the same engine had support for OpenGL, Glide, D3D, S3TC, and software rendering. It was one of the most compatible render engines.
There are even third party renderer plugins for Unreal 1, like https://kentie.net/article/d3d10drv/
3dfx's proprietary Glide was still very popular in 1997. UltraHLE wouldn't come out until 1999, and that was famously Glide-only.
In this era there were lots of competing APIs like Glide, S3 MeTaL, D3D, OpenGL. So you could end up with games that only supported particular APIs and not the one(s) supported by your GPU. At the time D3D and OpenGL were kind of a lowest common denominator, unless the game happened to support your particular vendor's OpenGL extensions (if they had any).