Review of the ASUS EN8800GTS video card: heating, power consumption and performance

Manufacturers have long practiced releasing cheaper solutions based on GPUs from the upper price segment. This approach significantly widens the range of ready-made products, lowers their cost, and lets the majority of users pick a product with the most favorable price/performance ratio.
NVIDIA has done the same with its latest G80 chip, the world's first GPU with a unified architecture and support for Microsoft's new API, DirectX 10.
Simultaneously with the flagship GeForce 8800 GTX, a cheaper version called the GeForce 8800 GTS was released. It differs from its older sister in its reduced number of stream processors (96 versus 128) and video memory (640 MB instead of the GTX's 768 MB). A consequence of the smaller number of memory chips is a narrower memory bus: 320 bits versus 384 bits on the GTX. More detailed characteristics of the graphics adapter in question can be found in the table:

Today we will examine the ASUS EN8800GTS video card, which has arrived in our test laboratory. This manufacturer is one of NVIDIA's largest and most successful partners and traditionally does not skimp on packaging or bundle. As the saying goes, "a good video card deserves a big box." The novelty comes in a box of impressive dimensions:


Its front side features a character from the game Ghost Recon: Advanced Warfighter. It is not just an image, though - the game itself, as you might have guessed, is included in the package. The reverse side of the package lists brief characteristics of the product:


ASUS considered this amount of information insufficient and turned the box into a kind of book:


To be fair, we note that this approach has been practiced for quite a long time and by no means only by ASUS. But, as they say, everything is good in moderation. The maximum information content turned into a practical inconvenience: the slightest breath of wind flips the cover open. When transporting the hero of today's review, we had to bend the retaining tab so that it would actually hold, and unfortunately, bending it can easily damage the packaging. Finally, the dimensions of the box are unreasonably large, which causes some inconvenience.

Video adapter: package contents and close inspection

Let's move on to the bundle and the video card itself. The adapter is packed in an antistatic bag and a foam container, which protects it from both electrostatic and mechanical damage. The box contains discs, DVI-to-D-Sub adapters, VIVO and auxiliary power cables, as well as a disc case.


Of the discs included in the kit, the game GTI Racing and the 3DMark06 Advanced Edition benchmark are noteworthy. This is the first time we have seen 3DMark06 bundled with a mass-produced retail video card! Without a doubt, this fact will appeal to users actively involved in benchmarking.


Now, to the video card itself. It is based on a reference-design printed circuit board with a reference cooling system, and it differs from other similar products only by the manufacturer's logo sticker, which keeps to the Ghost Recon theme.


The reverse side of the PCB is also unremarkable: a multitude of SMD components and voltage regulators are soldered on it, and that's all:


Unlike the GeForce 8800 GTX, the GTS requires only one additional power connector:


In addition, it is shorter than its older sister, which will surely please owners of small cases. In terms of cooling there are no differences: like the GeForce 8800 GTX, the ASUS EN8800GTS uses a cooler with a large turbine-type fan. The heatsink consists of a copper base and an aluminum casing; heat is transferred from the base to the fins partly through heat pipes, which increases the overall efficiency of the design. Hot air is exhausted outside the system unit, but part of it, alas, remains inside the PC because of openings in the cooling system's casing.


However, the heating problem is easily solved. For example, a low-speed 120 mm fan noticeably improves the temperature conditions around the board.
In addition to the GPU, the cooler cools the memory chips, the elements of the power subsystem, and the video signal DAC (the NVIO chip).


The NVIO was moved out of the main processor because the latter's high frequencies caused noise and, as a result, interference with its operation.
Unfortunately, this circumstance will cause difficulties when changing the cooler, so NVIDIA's engineers simply had no right to make it of poor quality. Let's take a look at the video card in its "naked" form.


The PCB carries a G80 chip of revision A2 and 640 MB of video memory made up of ten Samsung chips. The memory access time is 1.2 ns, which is slightly faster than on the GeForce 8800 GTX.


Note that the board has two vacant pads for memory chips. Had they been populated, the total memory size would have been 768 MB and the bus width 384 bits. Alas, the video card's developer considered such a step unnecessary; this configuration is used only in professional Quadro series cards.
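As a rough sanity check (a sketch assuming standard 512-Mbit GDDR3 chips with 32-bit interfaces, an assumption consistent with the ten-chip configuration described above):

$$10 \times 64\ \text{MB} = 640\ \text{MB}, \qquad 10 \times 32\ \text{bit} = 320\ \text{bit}$$
$$12 \times 64\ \text{MB} = 768\ \text{MB}, \qquad 12 \times 32\ \text{bit} = 384\ \text{bit}$$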
Finally, we note that the card has only one SLI connector, unlike the GeForce 8800 GTX, which has two.

Testing, analysis of results

The ASUS EN8800GTS video card was tested on a test bench with the following configuration:
  • processor - AMD Athlon 64 [email protected] MHz (Venice);
  • motherboard - ASUS A8N-SLI Deluxe, NVIDIA nForce 4 SLI chipset;
  • RAM - 2x512 MB [email protected] MHz, timings 3.0-4-4-9-1T.
Testing was carried out under Windows XP SP2 with chipset driver version 6.86 installed.
The RivaTuner utility has confirmed the compliance of the video card's characteristics with the declared ones:


The video processor's frequencies are 510/1190 MHz, and the memory runs at 1600 MHz. The maximum heating reached after multiple runs of the Canyon Flight test from the 3DMark06 suite was 76 °C, with the stock cooler's fan at 1360 rpm:


For comparison, under the same conditions a GeForce 6800 Ultra AGP that we had at hand heated up to 85 °C at maximum fan speed, and after prolonged load it froze altogether.

The performance of the new video adapter was tested using popular synthetic benchmarks and some gaming applications.

Testing with Futuremark's applications revealed the following:


Of course, on a system with a more powerful central processor, such as a representative of the Intel Core 2 Duo architecture, the result would be better. In our case, the aging Athlon 64 (even overclocked) does not allow the potential of today's top video cards to be fully unleashed.

Let's move on to testing in real gaming applications.


In Need for Speed: Carbon, the difference between the rivals is clearly visible, and the GeForce 7900 GTX lags behind the 8800-generation cards more than noticeably.


Since Half-Life 2 requires not only a powerful video card but also a fast processor for comfortable play, a clear difference in performance is observed only at maximum resolutions with anisotropic filtering and full-screen anti-aliasing enabled.


In F.E.A.R. approximately the same picture is observed as in HL2.


In the heavy modes of Doom 3 the card performed very well, but the weak central processor prevents us from fully assessing the gap between the GeForce 8800 GTS and its older sister.


Since Prey is built on the Quake 4 engine, which in turn is a development of the Doom 3 engine, the performance results of the video cards in these games are similar.
The progressive unified shader architecture and the somewhat "trimmed" capabilities relative to its older sister place the GeForce 8800 GTS between NVIDIA's fastest graphics adapter of today and the flagship of the seven-thousand line. But the Californians would hardly have acted otherwise: a novelty of this class should be more powerful than its predecessors. It is pleasing that the GeForce 8800 GTS is much closer in speed to the GeForce 8800 GTX than to the 7900 GTX. Support for the newest graphics technologies also inspires optimism and should give owners of such adapters a good performance margin for the near (and, hopefully, more distant) future.

Verdict

The card left us with an extremely good impression, which was further improved by its price. At the time of its market debut and for some time afterwards, the ASUS EN8800GTS cost about 16,000 rubles according to price.ru - clearly overpriced. Now the card has been selling for about 11,500 rubles for quite a while, which does not exceed the cost of similar products from competitors. Considering the bundle, however, ASUS's brainchild is undoubtedly in a winning position.

Pros:

  • DirectX 10 support;
  • progressive chip design (unified architecture);
  • excellent performance level;
  • rich bundle;
  • famous brand;
  • the price is on par with products from less reputable competitors.
Cons:
  • the large box is not always convenient.
We are grateful to the Russian representative office of ASUS for providing the video card for testing.

Reviews, wishes and comments on this material are accepted in the site forum.

Update: we decided to supplement the initial review with additional theoretical information and comparative tables, as well as test results from THG's US laboratory, where the "junior" GeForce 8800 GTS also took part. In the updated article you will also find image quality tests.

GeForce 8800 GTX is head and shoulders above the competition.

You have probably heard about DirectX 10 and the wonders the new API promises over DX9. On the Internet you can find screenshots of games that are still in development. But until now there were no DX10-capable video cards on the market, and nVidia was the first to fix that flaw. Welcome the release of DirectX 10 graphics cards: the nVidia GeForce 8800 GTX and 8800 GTS!

A unified architecture squeezes more out of the shader units, since they can now be used more efficiently than in a fixed layout. The GeForce 8800 GTX, with 128 unified shader units, and the GeForce 8800 GTS, with 96 of them, open a new era in computer graphics. The days of pixel pipelines are finally over. But let's take a closer look at the new cards.

The G80 graphics core is shown on the package. The new GPU promises to deliver twice the performance of the GeForce 7900 GTX (G71). Its 681 million transistors result in a huge die area, but when asked about this, nVidia CEO Jen-Hsun Huang replied: "If my engineers said they could double the performance by doubling the die area, I wouldn't even hesitate!"

Experience has shown that doubling the area does not double the performance at all, but nVidia seems to have struck the right balance between technological advancement and silicon implementation.

The GeForce 8800 GTX and 8800 GTS fully comply with DX10 and Shader Model 4.0, support various data storage and transfer standards, and implement geometry shaders and Stream Out. How did nVidia implement all of this?

To begin with, nVidia has moved away from the fixed design that the industry has been using for the past 20 years in favor of a unified shader core.


Earlier we showed similar slides illustrating the trend of growing pixel shader power. nVidia is well aware of this trend and is moving toward balancing computing needs by implementing unified shaders through which data streams pass. This gives maximum efficiency and utilization.

NVIDIA says: "The GeForce 8800 development team was well aware that high-end DirectX 10 3D games would require a lot of hardware power to compute shaders. While DirectX 10 stipulates a unified instruction set, the standard does not require a unified GPU shader design. But the GeForce 8800 engineers believed that it is a unified GPU shader architecture that will effectively balance the load of DirectX 10 shaders, improving the architectural efficiency of the GPU and properly allocating the available power."

GeForce 8800 GTX | 128 SIMD stream processors



The processor core runs at 575 MHz in the GeForce 8800 GTX and at 500 MHz in the GeForce 8800 GTS. While the rest of the core runs at 575 MHz (or 500 MHz), the shader core uses its own clock: 1350 MHz on the GeForce 8800 GTX and 1200 MHz on the 8800 GTS.

Each elementary shader unit is called a Streaming Processor. The GeForce 8800 GTX uses 16 blocks of eight such units, for a total of 128 stream processors. As with the ATi R580 and R580+ designs and their pixel shader units, nVidia plans to add and remove blocks in future products; this is exactly what we see with the 96 stream processors of the GeForce 8800 GTS.
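The totals follow directly from this grouping; assuming the same blocks of eight, the GTS's 96 processors correspond to 12 active blocks:

$$16 \times 8 = 128\ \text{(8800 GTX)}, \qquad 12 \times 8 = 96\ \text{(8800 GTS)}$$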




GeForce 8800 GTX | specification comparison table

Nvidia previously could not do full-screen anti-aliasing and HDR lighting at the same time, but that is history now. Each raster operation unit (ROP) supports frame buffer blending, so both FP16 and FP32 render targets can be used with multisample anti-aliasing. Under D3D10, color and Z processing in the ROPs can be used with up to eight multiple render targets, along with new compression technologies.

The GeForce 8800 GTX can filter 64 textures per clock, and at 575 MHz that gives 36.8 billion textures per second (GeForce 8800 GTS: 32 billion/s). The GeForce 8800 GTX has 24 raster operation units (ROPs), and at 575 MHz the peak pixel fill rate is 13.8 gigapixels/s. The GeForce 8800 GTS version has 20 ROPs and a peak fill rate of 10 gigapixels/s at 500 MHz.
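These figures are simply the number of units multiplied by the clock:

$$64 \times 575\ \text{MHz} = 36.8\ \text{Gtex/s}, \qquad 24 \times 575\ \text{MHz} = 13.8\ \text{Gpix/s}$$
$$64 \times 500\ \text{MHz} = 32\ \text{Gtex/s}, \qquad 20 \times 500\ \text{MHz} = 10\ \text{Gpix/s}$$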

NVidia GeForce Specifications
8800GTX 8800GTS 7950GX2 7900GTX 7800GTX 512 7800GTX
Process technology (nm) 90 90 90 90 110 110
Core G80 G80 G71 G71 G70 G70
Number of GPUs 1 1 2 1 1 1
Transistors per core (million) 681 681 278 278 302 302
Vertex block frequency (MHz) 1350 1200 500 700 550 470
Core frequency (MHz) 575 500 500 650 550 430
Memory frequency (MHz) 900 800 600 800 850 600
Effective memory frequency (MHz) 1800 1600 1200 1600 1700 1200
Number of vertex blocks 128 96 16 8 8 8
Number of pixel blocks 128 96 48 24 24 24
ROP number 24 20 32 16 16 16
Memory bus width (bits) 384 320 256 256 256 256
GPU Memory (MB) 768 640 512 512 512 256
GPU Memory Bandwidth (GB/s) 86.4 64 38.4 51.2 54.4 38.4
Vertices/s (million) 10,800 7,200 2,000 1,400 1,100 940
Pixel throughput (ROPs x frequency, Gpix/s) 13.8 10 16 10.4 8.8 6.88
Texture throughput (texture units x frequency, Gtex/s) 36.8 32 24 15.6 13.2 10.32
RAMDAC (MHz) 400 400 400 400 400 400
Bus PCI Express PCI Express PCI Express PCI Express PCI Express PCI Express

Pay attention to the memory bus width. Looking at the diagram on the previous page, the GeForce 8800 GTX GPU uses six memory partitions, each with a 64-bit memory interface, for a total width of 384 bits. 768 MB of GDDR3 memory is connected to the memory subsystem, which is built on a high-speed crossbar, as in GeForce 7x GPUs. This crossbar supports DDR1, DDR2, DDR3, GDDR3 and GDDR4 memory.

The GeForce 8800 GTX uses GDDR3 memory at a default 900 MHz (the GTS version runs at 800 MHz). With a 384-bit (48-byte) bus at 900 MHz (1800 MHz effective DDR), the bandwidth is a whopping 86.4 GB/s. And 768 MB of memory allows far more complex models and textures to be stored, at higher resolution and quality.
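The bandwidth works out as bus width in bytes times the effective transfer rate:

$$48\ \text{B} \times 1.8\ \text{GT/s} = 86.4\ \text{GB/s (GTX)}, \qquad 40\ \text{B} \times 1.6\ \text{GT/s} = 64\ \text{GB/s (GTS)}$$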

GeForce 8800 GTX | nVidia knocks out ATi



We have good news and bad news. The good news: the new cards are faster than the fastest of the old ones, very quiet, and full of interesting technology for which there is not even software yet. The bad news: they are not on sale yet. Well, there is always something wrong with new hardware. Sparkle sells these cards for 635 euros. We are getting used to such prices for top-end hardware.

The board is 27 centimeters long, so it will not fit in every case. If your computer's hard drives sit directly behind the PCIe slots, installing a GeForce 8800 GTX is likely to be tricky. Of course, the drives can always be moved to a 5.25-inch bay with an adapter, but there is little pleasant in the problem itself.



The technical implementation is no laughing matter: this is the best piece of hardware you can buy your PC as a New Year's present. Why is the GeForce 8800 GTX getting so much attention from the Internet public? It's elementary: record performance. In Half-Life 2: Episode One, the frame rate of the GeForce 8800 GTX at 1600x1200 is as much as 60 percent higher than that of the top cards of the Radeon X1000 family (the 1900 XTX and X1950 XTX).

Oblivion runs incredibly smoothly at all levels. More precisely, with HDR rendering enabled in Oblivion the speed never drops below 30 fps. Titan Quest never shows fewer than 55 frames per second. Sometimes you wonder whether the benchmark has frozen or something has happened to the levels. Enabling full-screen anti-aliasing and anisotropic filtering does not affect the GeForce 8800 at all.

This is the fastest graphics card of all the models released in 2006. Only the Radeon X1950 XTX in paired CrossFire mode catches up with the 8800 GTX in places. So if you were asking which card does not slow down in Gothic 3, Dark Messiah and Oblivion, here is the answer: the GeForce 8800 GTX.

GeForce 8800 GTX | Two power sockets

Power is supplied through two sockets on the top of the board. Both are necessary: if you remove the cable from the left one, 3D performance drops dramatically. Want your neighbors to go crazy? Then pull out the right one - the insane squeal the board emits will be the envy of your car alarm, and the board itself will not start at all. Note that nVidia recommends a power supply of at least 450 watts for the GeForce 8800 GTX, with 30 amperes available on the 12-volt line.


On a GeForce 8800 GTX, both power sockets must be connected.

The two power sockets are easy to explain. According to the PCI Express specification, a single PCIe slot can supply no more than 75 watts. Our test system consumes about 180 watts in 2D mode alone - a whopping 40 watts more than with the Radeon X1900 XTX or X1950 XTX. In 3D mode the board "eats" about 309 watts, while the Radeon X1900/1950 XTX consumes 285 to 315 watts in the same case. What the GeForce 8800 needs so much power for in plain 2D Windows mode is unclear to us.

Two more connectors are reserved for SLI mode. According to nVidia's documentation, only one bridge is required for SLI; the second connector is not used yet. Theoretically, with two connectors you can link more than two cards in a multi-board system. The second connector may also be tied to the growing popularity of hardware physics calculations: perhaps another video card will be connected through it to calculate physics in the game engine. Or perhaps it is about Quad SLI on four boards, or something like that.


An additional connector is now reserved for SLI, but with the current driver version only one can be used.

GeForce 8800 GTX | Quiet cooling system

The GeForce 8800 is equipped with a very quiet 80 mm turbine cooler. Like on the Radeon X1950 XTX, it is located at the very end of the board so as to push cool air across the entire surface of the GeForce 8800 and then out. A special grille at the end of the board releases hot air not only outward through the opening occupying the second slot, but also downward, directly into the case. In general, the system is simple, but there are some controversial points.


Warm air is exhausted through the opening in the second slot, but some of it gets back into the case through the grille on the side of the GeForce 8800 GTX.

If the PCIe slots in your computer are close together and two boards in SLI sit with only a small gap between them, the temperature there will be considerable. The lower card will be additionally heated by the upper one through that same side grille of the cooling system. Better not even think about what happens if you install three cards: you will get an excellent household space heater, and in cold weather you can work by an open window.

When the board is installed alone, the cooling system is impressive and performs at one hundred percent. As with the GeForce 7900 GTX cards, it works quietly: during a six-hour test run under constant high load, the board was not heard even once. Even under full load, the cooler copes with heat removal at medium speed. If you put your ear to the back of the computer, you will hear only a slight noise, a kind of quiet rustling.


The 80 mm cooler is quiet and never runs at full capacity. The board's cooling system occupies two slots.

The dedicated ForceWare 96.94 driver that nVidia prepared for the GeForce 8800 GTX does not report temperature monitoring data. Before this version you could choose between the classic and the new interface, but the 96.94 press build contains only the new settings panel. If you try to open the frequency and temperature settings, the driver sends you to the nVidia website to download the nTune utility, which is where these functions are configured. So we downloaded the 30 MB archive and installed it. On first start we got a complete freeze of the computer and Windows.

If, after installing nTune, you select frequency and temperature adjustment in the settings panel, a special information page opens showing the motherboard settings. You will not find any frequency or temperature settings or readings there. So we took temperature measurements the classical way, with an infrared thermometer. Under full load the measurements showed 71 degrees Celsius; in 2D mode the card stayed between 52 and 54 degrees.

Hopefully nVidia will release a standard version of ForceWare for the GeForce 8800. The classic configuration interface is sometimes more convenient; moreover, it displays temperature information, and with Coolbits you can adjust frequencies. The new driver together with nTune takes about 100 megabytes and is split across a considerable number of tabs and windows, which is not always convenient to work with.


The GeForce 8800 GTX chip has as many as 681 million transistors and is manufactured on 90-nanometer technology at TSMC's factory.

The G80 chip of the GeForce 8800 GTX contains 681 million transistors - double the count of a Conroe-core Intel Core 2 Duo or of a GeForce 7 chip. The GPU runs at 575 MHz. The memory interface is 384 bits wide and serves 768 megabytes of high-speed GDDR3 running at 900 MHz.

For comparison, the memory of the GeForce 7900 GTX runs at 800 MHz, and that of the GeForce 7950 GT at 700 MHz. Radeon X1950 XTX cards use 1000 MHz GDDR4. The GeForce 8800 GTS has a 500 MHz core and 640 MB of memory at 800 MHz.

Test results show that full-screen anti-aliasing and anisotropic filtering finally do not degrade performance when enabled. In resource-intensive games like Oblivion you used to have to keep an eye on this; now you can turn everything up to maximum. The performance of previous nVidia generations was such that these games ran smoothly only at resolutions up to 1024x768, while HDR rendering with version 3 pixel shaders consumed enormous resources. The new cards are so powerful that enabling 4xAA and 8xAF still allows play at resolutions up to 1600x1200. The G80 chip supports a maximum of 16x anti-aliasing and 16x anisotropic filtering.


The GeForce 8800 GTX supports 16x anti-aliasing and anisotropic filtering.

Compared to single ATi cards, the GeForce 8800 GTX has no competitors. The new nVidia can now handle HDR rendering with version 3 shaders and anti-aliasing simultaneously. HDR rendering produces extreme reflections and glare, simulating the dazzle effect of stepping out of a dark room into bright light. Unfortunately, many older games - Half-Life 2: Episode One, Need for Speed: Most Wanted, Spellforce 2, Dark Messiah and others - use only version 2 shaders for HDR effects. Newer games like Gothic 3 or Neverwinter Nights 2 use the older Bloom method, as was the case in Black & White 2. And although Neverwinter Nights 2 can be configured to support HDR rendering, the developer is cautious with these features so that players with ordinary hardware can still get acceptable frame rates. This is implemented properly in Oblivion, which has both Bloom and outstanding HDR rendering effects via version 3 shaders.

The G80 also supports Shader Model 4.0, and its most important innovation is the redesigned rendering pipeline, which is no longer divided into pixel and vertex shaders. The new shader core can handle all data: vertex, pixel, geometry and even physics. This has not hurt performance: Oblivion runs almost twice as fast as on the pixel-shader-optimized Radeon X1900 XTX or X1950 XTX.

What the card supports in terms of DirectX 10 cannot be tested yet: Windows Vista, DirectX 10 and games for it do not exist yet. On paper, however, everything looks more than decent: geometry shaders support displacement mapping, which will allow even more realistic rendering of things such as stereoscopic effects, grooved objects and corrugated surfaces. Stream Out will enable even better shader effects for particles and physics. The Quantum Effects technology handles the computation of smoke, fog, fire and explosion effects well and will allow these calculations to be offloaded from the central processor. All of this together promises far more shader and physics effects in future games. How it will all be implemented in practice, in which games and in what form, the future will show.

GeForce 8800 GTX | Boards in the test

nVidia-based video cards
Video card and chip Codename Memory HDR-R Vertex/pixel shaders GPU frequency Memory frequency
nVidia GeForce 8800 GTX G80 768 MB GDDR3 Yes 4.0 575 MHz 1800 MHz
Asus + Gigabyte GeForce 7900 GTX SLI G71 512 MB GDDR3 Yes 3.0/3.0 650 MHz 1600 MHz
Gigabyte GeForce 7900 GTX G71 512 MB GDDR3 Yes 3.0/3.0 650 MHz 1600 MHz
nVidia GeForce 7950 GT G71 512 MB GDDR3 Yes 3.0/3.0 550 MHz 1400 MHz
Asus GeForce 7900 GT Top G71 256 MB GDDR3 Yes 3.0/3.0 520 MHz 1440 MHz
nVidia GeForce 7900 GS G71 256 MB GDDR3 Yes 3.0/3.0 450 MHz 1320 MHz
Asus GeForce 7800 GTX EX G70 256 MB GDDR3 Yes 3.0/3.0 430 MHz 1200 MHz
Gigabyte GeForce 7800 GT G70 256 MB GDDR3 Yes 3.0/3.0 400 MHz 1000 MHz
Asus GeForce 7600 GT G73 256 MB GDDR3 Yes 3.0/3.0 560 MHz 1400 MHz
nVidia GeForce 6800 GT NV45 256 MB GDDR3 Yes 3.0/3.0 350 MHz 1000 MHz
Gainward GeForce 7800 GS + GSa AGP G71 512 MB GDDR3 Yes 3.0/3.0 450 MHz 1250 MHz

The following table lists the ATi-based cards that participated in our testing.

ATi-based video cards
Video card and chip Codename Memory HDR-R Vertex/pixel shaders GPU frequency Memory frequency
Club 3D + Club 3D Radeon X1950 XTX CF R580 + 512 MB GDDR4 Yes 3.0/3.0 648 MHz 1998 MHz
Club 3D Radeon X1950 XTX R580 + 512 MB GDDR4 Yes 3.0/3.0 648 MHz 1998 MHz
HIS + HIS Radeon X1900 XTX CF R580 512 MB GDDR3 Yes 3.0/3.0 621 MHz 1440 MHz
Gigabyte Radeon X1900 XTX R580 512 MB GDDR3 Yes 3.0/3.0 648 MHz 1548 MHz
Power Color Radeon X1900 XT R580 512 MB GDDR3 Yes 3.0/3.0 621 MHz 1440 MHz
ATI Radeon X1900 XT R580 256 MB GDDR3 Yes 3.0/3.0 621 MHz 1440 MHz
Sapphire Radeon X1900 GT R580 256 MB GDDR3 Yes 3.0/3.0 574 MHz 1188 MHz
HIS Radeon X1650 Pro Turbo RV535 256 MB GDDR3 Yes 3.0/3.0 621 MHz 1386 MHz
Gecube Radeon X1300 XT RV530 256 MB GDDR3 Yes 3.0/3.0 560 MHz 1386 MHz

GeForce 8800 GTX | Test configuration

We used three reference test beds, all based on identical components: a dual-core AMD Athlon 64 FX-60 processor at 2.61 GHz, 2 GB of Mushkin HP 3200 2-3-2 RAM, and two 120 GB Hitachi hard drives in a RAID 0 configuration. The difference was in the motherboards: single cards and nVidia boards in SLI mode were tested on an Asus A8N32-SLI Deluxe; video cards in CrossFire mode (marked CF in the graphs below) ran on the same computer with ATi's reference motherboard on the RD580 chipset; and AGP video cards were tested on the same configuration with an Asus A8V Deluxe motherboard. The configuration data is summarized in the table.

For all nVidia graphics cards (including SLI) and single ATi cards
CPU AMD Athlon 64 FX-60 Dual Core 2.61 GHz
Bus frequency 200 MHz
Motherboard Asus A8N32-SLI Deluxe
Chipset nVidia nForce4
Memory Mushkin 2x1024 MB HP 3200 2-3-2
HDD Hitachi 2 x 120GB SATA, 8MB Cache
DVD Gigabyte GO-D1600C
LAN controller Marvell
Sound controller Realtek AC97
Power Supply Silverstone SST-ST56ZF 560W
For tests of ATi video cards in CrossFire mode
CPU AMD Athlon 64 FX-60 Dual Core 2.61 GHz
Bus frequency 200 MHz
Motherboard Reference ATi
Chipset ATI RD580
Memory Mushkin 2x1024 MB HP 3200 2-3-2
LAN controller Marvell
Sound controller AC97
For tests of AGP video cards
CPU AMD Athlon 64 FX-60 Dual Core 2.61 GHz
Bus frequency 200 MHz
Motherboard Asus A8V Deluxe
Chipset VIA K8T800 Pro
Memory Mushkin 2x1024 MB HP 3200 2-3-2
LAN controller Marvell
Sound controller Realtek AC97

On computers for testing single video cards and nVidia cards in SLI mode, we used Windows XP Professional with SP1a. CrossFire and AGP video cards were tested on systems with Windows XP Professional SP2 installed. The driver and software versions are summarized in the following table.

Drivers and configuration
ATi graphics cards: ATI Catalyst 6.6; X1900 XTX, X1950 + CrossFire, X1650 + CrossFire, X1300 XT + CrossFire, CrossFire X1900, CrossFire X1600 XT: ATI Catalyst 6.7 (equivalent to Catalyst 6.8); CrossFire X1600 Pro, CrossFire X1300 Pro: ATI Catalyst 6
nVidia graphics cards: nVidia ForceWare 91.31; 7900 GS: nVidia ForceWare 91.47; 7950 GT: nVidia ForceWare 91.47 (Special); 8800 GTX: nVidia ForceWare 96.94 (Special)
Operating system: single cards and SLI: Windows XP Pro SP1a; ATi CrossFire and AGP graphics cards: Windows XP Pro SP2
DirectX version 9.0c
Chipset driver nVidia Nforce4 6.86, AGP VIA Hyperion Pro V509A

GeForce 8800 GTX | Test results

We received the reference board for THG directly from nVidia, along with the special ForceWare 96.94 driver prepared exclusively for the press. The GeForce 8800 GTX is a DirectX 10 and Shader Model 4.0 compliant card, and its performance in DirectX 9 applications with version 2 or version 3 pixel shaders is staggering.

Enabling anti-aliasing and anisotropic filtering has almost no performance impact. In Half-Life 2: Episode One the GeForce 8800 GTX cannot be slowed down: at 1600x1200 the chip is 60 percent faster than the Radeon X1950 XTX, and in Oblivion it doubles the performance of the Radeon X1900 XTX or X1950 XTX. In Prey the card at 1600x1200 is a whopping 55 percent faster than the Radeon X1950 XTX. In Titan Quest the frame rate does not change whatever resolution you set, staying at 55 fps.

In the Half-Life 2: Episode One tests with HDR rendering the board's results are impressive, but at low resolutions it loses to the Radeon X1950 XTX and to boards in CrossFire mode, staying roughly on par with SLI solutions based on the GeForce 7900 GTX. Note that at low resolutions the video card is not the limiting factor. The higher we push the settings, the more interesting the result.

With anti-aliasing and anisotropic filtering enabled, the picture starts to change. All the boards lose some performance, but the GeForce 8800 GTX drops very little - only about 10 fps on average - while the dual ATi Radeon X1950 XTX in CrossFire mode loses as much as 20 fps.

As soon as we go beyond 1280x1024 with anti-aliasing and anisotropic filtering enabled, the single GeForce 8800 GTX becomes the undisputed leader, exceeding the Radeon X1950 XTX by almost 35 fps. This is a significant difference.

It gets better. At 1600x1200 with anti-aliasing and anisotropic filtering, the gap over all other boards becomes enormous: twice the GeForce 7900 GTX SLI and slightly less over the CrossFire Radeon X1950 XTX. Impressive!

Finally, let's look at how the frame rate falls as resolution and image quality increase. The GeForce 8800 GTX shows an insignificant drop in performance: from plain 1024x768 to 1600x1200 with anti-aliasing and anisotropic filtering the difference is just over 20 fps. The previous top solutions from ATi and nVidia fall far behind.

Hard Truck: Apocalypse is demanding on both the video card and the central processor. This explains virtually the same performance at 1024x768 when simple trilinear filtering is used and full-screen anti-aliasing is turned off.

As soon as you switch to 4xAA and 8x anisotropic filtering, the results begin to diverge. The "younger" cards lose significant performance, while the GeForce 8800 GTX seems not even to notice the improved picture quality.

At 1280x960 the difference increases even further, but the GeForce 8800 GTX demonstrates the same performance. Clearly, the Athlon 64 FX-60 is not capable of bringing this video card to its knees.

At 1600x1200 all the single boards drift toward unplayable frame rates, but the GeForce 8800 GTX delivers 51 fps as if nothing had happened.

Consider the performance drop as the settings increase: the CrossFire Radeon X1950 XTX and GeForce 7900 GTX stay close, while the older-generation cards have long been on their knees begging for mercy.

In Oblivion, a game that loads the video card to the limit, the picture is initially depressing for all boards except the Radeon X1950 XTX in CrossFire mode and the GeForce 8800 GTX. We collected statistics both in open locations and when rendering indoor spaces. In the open air the GeForce 8800 GTX is level with, or slightly behind, the dual Radeon X1950 XTX.






But when the resolution reaches 1600x1200, our GeForce 8800 GTX pulls far ahead. The gap is especially visible in indoor levels.


Look at the performance drop as resolution and quality increase - the picture needs no comment. In indoor locations the speed is unshakable.


In Prey the card sits between the single ATi Radeon X1950 XTX solutions and the same cards in CrossFire mode. And the higher the resolution, the better the GeForce 8800 GTX looks.




Comparing the GeForce 8800 GTX with single-card solutions from ATi and nVidia is pointless: the gap at high resolutions is colossal, and even at 1024x768 with anti-aliasing it is impressive.

In Rise of Nations: Rise of Legends the card is the sole leader. Calculated as a percentage, the gap between the CrossFire Radeon X1950 XTX and the GeForce 8800 GTX is very, very large; counted in fps, the difference is less striking but still significant.




Notice how the speed decreases with increasing resolution. At all settings, the GeForce 8800 GTX is a leader not only in comparison with single cards, but also with SLI / CrossFire solutions.

In Titan Quest nVidia's cards show their best side: the fps does not change from 1024x768 all the way to 1600x1200 with anti-aliasing and anisotropic filtering.




The picture of what is happening is well illustrated by the following graph. The performance of the GeForce 8800 GTX is at the same level, regardless of the settings.

In 3DMark06 the card performs excellently with both version 2 and version 3 shaders. Note how little the performance drops when anisotropy and anti-aliasing are enabled.


An increase in resolution is not frightening either. The card is on par with the SLI and CrossFire solutions, significantly outperforming all the previous leaders of the single-card race.


To give you a better idea of gaming performance, we have rearranged the graphs. There is no comparison here, only the net result of one video card. It is worth noting that the performance of the GeForce 8800 GTX does not change from resolution to resolution. In all games the limiting factor is the insufficiently fast AMD Athlon 64 FX-60 processor; with the release of much faster chips the card will show even better results in the same games. We suspect even the latest-generation Core 2 Quad cannot push the GeForce 8800 GTX to its limit.




Having finished with the test results, let's try to compile a price-efficiency rating of the video cards. To do this, we combine the results of all the game tests and compare them with the price of each solution, using recommended prices, that is, without individual store markups. Of course, the cards will be very expensive at first, and many stores will build excess profit into the price, but later prices will drop, and fairly soon you will be able to get a GeForce 8800 GTX for a more reasonable sum.

As we can see, the GeForce 8800 GTX outperforms almost all solutions, including dual CrossFire and SLI setups. In absolute terms the GeForce 8800 GTX is very fast. But what about the price?

The price is appropriate: the manufacturer asks 635 euros. That is a lot, but for two Radeon X1900 XTX boards in CrossFire mode you will have to pay more - 700 euros - and for two Radeon X1950 XTX or SLI GeForce 7900 GTX as much as 750 euros. Given that a single GeForce 8800 GTX beats these solutions in some tests and takes up less space in the case, there is something to think about.

Finally, let's divide the fps by the money. This figure turns out better than for SLI and CrossFire. Of course, each fps costs more than with the GeForce 7800 GTX EX and noticeably more than with the Radeon X1300 XT, but the board's performance is on another level too. A very effective solution in terms of price/performance ratio.

We decided to supplement our review with the test results of THG's US laboratory, where the GeForce 8800 GTS also took part. Please note that, due to differences in test configurations, the results above should not be compared directly with those of the US lab.


The GeForce 8800GTX is longer than the Radeon X1950XTX and most other cards on the market. The 8800GTS is somewhat shorter.

Like the other graphics cards we benchmarked in 2006, we tested on the AMD Athlon 64 FX-60 platform. We will also show results for multi-GPU configurations and evaluate how the new video cards behave when performance is CPU-limited (low resolution and image quality).

System hardware
Processors AMD Athlon 64 FX-60, 2.6 GHz, 1.0 GHz HTT, 1 MB L2 cache
Platform nVidia: Asus A8N32-SLI Premium, nVidia nForce4 SLI, BIOS 1205
Memory Corsair CMX1024-4400Pro, 2x 1024 MB DDR400 (CL3,0-4-4-8)
HDD Western Digital Raptor, WD1500ADFD, 150GB, 10,000 RPM, 16MB Cache, SATA150
Network Integrated nForce4 Gigabit Ethernet
Video cards ATi Radeon X1950XTX 512 MB GDDR4, 650 MHz core, 1000 MHz memory (2.00 GHz DDR)
NVidia cards:
nVidia GeForce 8800GTX 768 MB GDDR3, 575 MHz core, 1.35 GHz stream processors, 900 MHz memory (1.80 GHz DDR)
XFX GeForce 8800GTS 640 MB GDDR3, 500 MHz core, 1.20 GHz stream processors, 800 MHz memory (1.60 GHz DDR)
nVidia GeForce 7900GTX 512 MB GDDR3, 675 MHz core, 820 MHz memory (1.64 GHz DDR)
Power Supply PC Power & Cooling Turbo-Cool 1000W
Cooler CPU Zalman CNPS9700 LED
System software and drivers
OS Microsoft Windows XP Professional 5.10.2600, Service Pack 2
DirectX version 9.0c (4.09.0000.0904)
Graphics Drivers ATi - Catalyst 6.10 WHQL
nVidia - ForceWare 96.94 Beta

During the first 3DMark run, we ran tests at all resolutions, but with full-screen anti-aliasing and anisotropic filtering turned off. In the second run, we enabled the 4xAA and 8xAF image enhancement options.

nVidia clearly comes first in 3DMark05. The GeForce 8800 GTX achieves the same result at 2048x1536 as the ATi Radeon X1950 XTX at the default 1024x768. Impressive.

Doom 3 is usually dominated by nVidia cards, as their design suits this game well. But not long ago ATi managed to "take" this game with its new cards.

Here, for the first time, we run into the limits of the CPU's processing power: at low resolution the result hovers around 126 frames per second. The ATi card is capable of more frames per second on this system configuration. The reason lies in the drivers: ATi releases drivers that load the CPU less, leaving it in better shape to feed the graphics subsystem.

Overall, the new 8800 cards come out the winners. Looking at the results across all resolutions, the new DX10 cards outperform the Radeon X1950 XTX from 1280x1024 upward.

GeForce 8800 GTX and GTS | F.E.A.R.

nVidia cards usually lead in F.E.A.R. But, again, ATi's drivers load the CPU less. Of course, with a faster platform the results would differ, but if your computer is not cutting-edge, this test clearly shows how G80 cards will behave on it. Aside from the 1024x768 test, though, the G80 simply kills the Radeon X1950 XTX. The GTX is a monster: no matter what load we put on the GeForce 8800 GTX, it always delivers more than 40 frames per second.



The second screenshot (below) is taken on an 8800 GTX with the same settings.




The nVidia picture is far superior in quality to the ATi screenshot; it looks like nVidia has taken the lead in this regard as well. This is another advantage of cards based on the G80 chip.


Here is a table of the new image quality options on G80 cards.

In addition to the new DX10 graphics cards, nVidia also unveiled several features available on G80 cards. The first is a proprietary image quality technology called Coverage Sampled Antialiasing (CSAA).

The new full-screen anti-aliasing mode uses 16 coverage subsamples per pixel. According to nVidia, this makes it possible to compress "redundant color and depth information into the memory footprint and bandwidth of four or eight multisamples", so the new quality level works more efficiently by reducing the amount of data per sample. If CSAA does not work with a particular game, the driver falls back to traditional anti-aliasing methods.



Before we finish this review, let me touch on two more aspects of these video cards that have long been in development and will become more important over time. The first is video playback. During the GeForce 7 era, the ATi Radeon X1900 led in video playback quality, but the situation has changed with the arrival of unified shaders and a dedicated PureVideo core.

Thanks to smart algorithms and 128 computing units, the GeForce 8800 GTX scored 128 out of 130 points in HQV. In the near future we plan to publish a more detailed article on image quality, so stay tuned for news on our website.

Finally, a very strong point of the G80 is what NVIDIA calls CUDA. For years, scientists and enthusiasts have been looking for ways to squeeze more performance out of powerful parallel processors. A Beowulf cluster is not affordable for everyone, so ordinary mortals have come up with various ways of using a video card for computation.

The problem is this: the GPU is good at parallel computing but poor at branching, which is exactly where the CPU shines. Also, to use a graphics card for computation, you had to program shaders the way game developers do. nVidia decided to move beyond that with the Compute Unified Device Architecture, or CUDA.


This is how CUDA can work for fluid simulation.

Nvidia has released a C compiler whose resulting programs scale with the processing power of the GPU (for example, across the 96 stream processors of the 8800 GTS or the 128 of the 8800 GTX). Programmers now have the ability to create programs that scale in terms of both CPU and GPU resources. CUDA is sure to appeal to authors of distributed computing programs, but it can be used not only for such number crunching: it can also simulate other effects - volumetric fluids, cloth and hair. Through CUDA, physics calculations and potentially even other game aspects can be offloaded to the GPU.
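To give a feel for the programming model, here is a minimal sketch of a data-parallel CUDA C kernel (our own illustrative example, not NVIDIA sample code; the kernel and all names in it are hypothetical):

```cuda
#include <cuda_runtime.h>

// Each thread scales one array element - the kind of data-parallel
// work that maps naturally onto the G80's stream processors.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard the last, possibly partial, block
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;      // one million elements
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    // (A real program would copy input data in with cudaMemcpy here.)

    // The same source scales across GPUs: the runtime simply spreads
    // the blocks over 96 (8800 GTS) or 128 (8800 GTX) stream processors.
    int threads = 128;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}
```

The key point is that the kernel is written once, per element; how many elements run concurrently is decided by the hardware, which is what lets the same program exploit a wider GPU without changes.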


Developers will be provided with a complete SDK.

GeForce 8800 GTX and GTS | Conclusion

Those upgrading from a GeForce 6 will get nearly a threefold increase in performance. It does not matter when DirectX 10 games arrive or what version 4 shaders will give us: today the GeForce 8800 GTX is the fastest chip. Games like Oblivion, Gothic 3 and Dark Messiah were waiting for the G80 chip and these graphics cards. Playing without slowdowns is possible again. The GeForce 8800 GTX has enough power for all the latest games.

The cooling system is quiet: the 80 mm cooler on the reference card was inaudible, and even at full load its rotational speed stays low. It will be interesting to see how ATi responds. In any case, nVidia did a heck of a good job and released a really powerful piece of hardware.

Disadvantages: the board is 27 centimeters long and occupies the space of two PCI Express slots. The power supply must be at least 450 watts (12 V, 30 A). For the GeForce 8800 GTS the minimum is a 400-watt PSU with 30 amperes on the 12-volt rail.

Following long tradition, nVidia cards are already available in online stores. Internationally, the recommended price for the GeForce 8800GTX is $599, and for the GeForce 8800GTS $449. Games for DX10 should appear soon. And no less important, you will also get a better picture in existing games.


This is what a DX10 supermodel might look like.

GeForce 8800 GTX and GTS | Editor's opinion

Personally, I'm impressed with nVidia's implementation of DX10/D3D10. Seeing Crysis live along with the many demos is impressive. The CUDA implementation turns the graphics card into more than just a frame renderer: programs will be able to use not only CPU resources but also the full parallel power of the universal GPU shader core. I can't wait to see such solutions in the real world.

But the G80 does leave one thing to be desired. What? New games, of course. Gentlemen developers, would you be so kind as to release DX10 games as soon as possible?

GeForce 8800 GTX | Photo gallery

To start with, NVIDIA has installed the G80 on two video cards: the GeForce 8800 GTX and the GeForce 8800 GTS.

GeForce 8800 Series Graphics Cards Specifications
GeForce 8800 GTX GeForce 8800 GTS
Number of transistors 681 million 681 million
Core frequency (including dispatcher, texture units, ROPs) 575 MHz 500 MHz
Shader (stream processor) frequency 1350 MHz 1200 MHz
Number of shaders (stream processors) 128 96
Memory frequency 900 MHz (1.8 GHz effective) 800 MHz (1.6 GHz effective)
Memory interface 384 bit 320 bit
Memory bandwidth (GB / s) 86.4 GB / s 64 GB / s
Number of ROP blocks 24 20
Memory 768 MB 640 MB

As you can see, the transistor count of the GeForce 8800 GTX and 8800 GTS is identical, because they are in fact the very same G80 GPU. As already mentioned, the main difference between these GPU variants is two disabled banks of stream processors, 32 shaders in total: the number of active shader units has been reduced from 128 in the GeForce 8800 GTX to 96 in the GeForce 8800 GTS. NVIDIA has also disabled one ROP partition.
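The shader reduction is exactly those two banks of 16:

$$128 - 2 \times 16 = 96$$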

The core and memory frequencies of the two cards also differ: the GeForce 8800 GTX core runs at 575 MHz, the GeForce 8800 GTS at 500 MHz; the GTX shader units run at 1350 MHz, the GTS units at 1200 MHz. For the GeForce 8800 GTS NVIDIA also uses a narrower 320-bit memory interface and 640 MB of slower memory running at 800 MHz, while the GeForce 8800 GTX has a 384-bit interface and 768 MB of memory at 900 MHz. And, of course, an entirely different price.

The video cards themselves are very different:


As you can see in these photos, the reference GeForce 8800 cards are black (a first for NVIDIA). With their cooling modules, the GeForce 8800 GTX and 8800 GTS each occupy two slots. The GeForce 8800 GTX is somewhat longer than the GeForce 8800 GTS: 267 mm versus 229 mm. And, as mentioned earlier, the GeForce 8800 GTX has two PCIe power connectors. Why two? The maximum power consumption of the GeForce 8800 GTX is 177 watts. However, NVIDIA says this occurs only in the worst case, when all functional blocks of the GPU are loaded to the maximum; in ordinary games during testing the card consumed 116-120 W on average and 145 W at most.

Each external PCIe power connector on a video card is rated for a maximum of 75 W, and the PCIe slot itself supplies at most another 75 W. One such connector plus the slot would therefore fall short of 177 W, so the card had to be given two external PCIe power connectors. By adding the second connector, NVIDIA also provided the 8800 GTX with solid headroom. Incidentally, the maximum power consumption of the 8800 GTS is 147 W, so it can get by with one PCIe power connector.
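The power budget arithmetic is straightforward:

$$75\ \text{W (slot)} + 75\ \text{W (one connector)} = 150\ \text{W} < 177\ \text{W}$$
$$75\ \text{W (slot)} + 2 \times 75\ \text{W} = 225\ \text{W} > 177\ \text{W}$$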

Another feature added to the reference GeForce 8800 GTX design is a second SLI connector, a first for an NVIDIA GPU. NVIDIA says nothing official about its purpose, but journalists managed to get the following information from the developers: "The second SLI connector on the GeForce 8800 GTX is intended as hardware support for a possible expansion of the SLI configuration. Current drivers use only one SLI connector. Users can attach an SLI bridge to either the first or the second contact group."

Based on this, and on the fact that nForce 680i SLI motherboards come with three PCI Express (PEG) slots, we can conclude that NVIDIA plans to support three-card SLI in the near future. Another possibility is dedicating more power to SLI physics, but that does not explain why the GeForce 8800 GTS lacks the second SLI connector.

It can be assumed that NVIDIA reserves its GX2 “Quad SLI” technology for the less powerful GeForce 8800 GTS, while the more powerful GeForce 8800 GTX will operate in a triple SLI configuration.

If you remember, NVIDIA's original Quad SLI video cards are closer in characteristics to the GeForce 7900 GT than to the GeForce 7900 GTX, since the 7900 GT cards have lower power consumption and heat output. It is natural to assume NVIDIA will follow the same path with the GeForce 8800: gamers with motherboards with three PEG slots will be able to speed up the graphics subsystem by assembling a triple SLI 8800 GTX configuration, which in some cases will outperform a Quad SLI system, judging by the 8800 GTS's characteristics.

Again, this is just an assumption.

The cooler of the GeForce 8800 GTS and 8800 GTX is a dual-slot, ducted design that exhausts hot air from the GPU outside the computer case. The heatsink consists of a large aluminum fin stack, copper and aluminum heat pipes, and a copper plate pressed against the GPU. The whole structure is blown through by a large radial fan, which looks a little intimidating but is actually quite quiet. The 8800 GTX cooling system is similar to that of the 8800 GTS, except that the former has a slightly longer heatsink.


All in all, the new cooler handles the GPU quite well while remaining almost silent, like those of the GeForce 7900 GTX and 7800 GTX 512 MB, although the GeForce 8800 GTS and 8800 GTX are slightly more audible. In some cases you will have to listen closely to hear the fan at all.

Production

All GeForce 8800 GTX and 8800 GTS production is carried out under contract to NVIDIA. This means that whether you buy a card from ASUS, EVGA, PNY, XFX or any other manufacturer, they are all made by the same company. NVIDIA does not even allow manufacturers to overclock the first batches of GeForce 8800 GTX and GTS cards: they all go on sale at the same clock speeds regardless of brand. Manufacturers are, however, allowed to install their own cooling systems.

For example, EVGA has already released its e-GeForce 8800 GTX ACS3 Edition with a unique ACS3 cooler. The card is hidden inside a single large aluminum cocoon bearing the letters E-V-G-A. For extra cooling, EVGA placed an additional heatsink on the back of the card, opposite the G80 GPU.

Beyond cooling, manufacturers of the first GeForce 8800 cards can differentiate their products only through warranties and bundles - games and accessories. For example, EVGA bundles its cards with the game Dark Messiah, and BFG sells its GeForce 8800 GTS with a BFG T-shirt and a mouse pad.

It will be interesting to see what happens next: many NVIDIA partners believe that for subsequent GeForce 8800 releases NVIDIA's limits will not be so strict, and they will be able to compete in overclocking.

Since all the cards come off the same production line, every GeForce 8800 carries two dual-link DVI connectors with HDCP support. It has also become known that NVIDIA does not plan to change the memory sizes of the GeForce 8800 GTX and GTS (for example, a 256 MB GeForce 8800 GTS or a 512 MB 8800 GTX): at least for now, the standard configuration is 768 MB for the GeForce 8800 GTX and 640 MB for the GeForce 8800 GTS. Nor does NVIDIA plan an AGP version of the GeForce 8800 GTX/GTS.

Driver for 8800

NVIDIA made several changes in the GeForce 8800 driver that must be mentioned. First of all, the traditional Coolbits overclocking option has been removed and replaced by NVIDIA nTune: if you want to overclock a GeForce 8800 video card, you will need to download the nTune utility. This is probably good news for owners of nForce-chipset motherboards, since nTune can be used not only to overclock the video card but also to configure the system. Otherwise, those who have, say, upgraded to a Core 2 on a 975X or P965 motherboard will have to download a 30 MB application just to overclock the video card.

Another change we noticed in the new driver is the absence of the option to switch to the classic NVIDIA control panel. We would like to believe NVIDIA will bring this feature back, since many people preferred it to the new control panel interface.

It is well known that flagship graphics adapters in the highest price range are, first of all, a public demonstration of the developer's technological achievements. While these solutions are deservedly popular with enthusiast gamers, they never account for the bulk of sales. Not everyone is able or willing to pay $600 - an amount comparable to the cost of the most expensive modern game console - just for a graphics card, so the main contribution to AMD/ATI's and Nvidia's revenues comes from less expensive but far more mainstream cards.

On November 9 of last year, Nvidia announced the first consumer GPU with a unified architecture and DirectX 10 support. The novelty was covered in detail in our article Directly Unified: Nvidia GeForce 8800 Architecture Review. Initially the chip served as the basis for two new graphics cards: the GeForce 8800 GTX and the GeForce 8800 GTS. The older model, as you know, proved excellent in games and may well be considered the choice of an enthusiast undeterred by the price, while the younger model took its rightful place in its price category: under $500 but above $350.

$449 is not a very high price for a new-generation product with full DirectX 10 support that can offer the user a serious level of performance in modern games. Nevertheless, Nvidia decided not to stop there, and on February 12, 2007 presented to the public the more affordable GeForce 8800 GTS 320MB with an official price of $299, which seriously strengthened its position in this sector. These two graphics cards will be discussed in today's review. Along the way, we will find out how critical the amount of video memory is for the GeForce 8 family.

GeForce 8800 GTS: technical specifications

To assess the qualities and capabilities of both GeForce 8800 GTS models, we should remind our readers of the characteristics of the GeForce 8800 family.


All three GeForce 8800 models use the same G80 graphics core, consisting of 681 million transistors, plus an additional NVIO chip containing TMDS transmitters, RAMDACs, etc. Using such a complex chip for several graphics adapter models belonging to different price categories is not the best option in terms of final product cost, but you cannot call it unsuccessful either: Nvidia gets the opportunity to sell G80 samples rejected for the GeForce 8800 GTX (those that failed frequency binning and/or have defective blocks), and for video cards sold at over $250 the cost of the chip is hardly critical. This approach is actively used both by Nvidia and by its sworn rival ATI - just remember the history of the G71 GPU, which can be found both in the mass-market, inexpensive GeForce 7900 GS video adapter and in the powerful dual-chip monster GeForce 7950 GX2.

The GeForce 8800 GTS was created in the same way. As the table shows, this video adapter differs significantly from its older brother: it not only has lower clock frequencies and some stream processors disabled, but also a reduced amount of video memory, a narrower memory bus, and some of the TMU and ROP units inactive.

In total, the GeForce 8800 GTS has 6 groups of stream processors, 16 ALUs each, which gives a total of 96 ALUs. The main rival of this card, AMD Radeon X1950 XTX, has 48 pixel processors, each of which, in turn, consists of 2 vector and 2 scalar ALUs - 192 ALUs in total.

It would seem that in pure computing power the GeForce 8800 GTS should be seriously inferior to the Radeon X1950 XTX, but there are a number of nuances that make this assumption not entirely legitimate. The first is that the GeForce 8800 GTS's stream processors, like the ALUs in Intel's NetBurst architecture, operate at a much higher frequency than the rest of the core - 1200 MHz versus 500 MHz - which already means a very serious performance boost. The second nuance follows from the architecture of the R580 GPU. In theory, each of its 48 pixel shader execution units is capable of executing 4 instructions per clock, not counting branch instructions. However, only 2 of them will be of type ADD/MUL/MADD, while the other two are always ADD instructions with a modifier. Accordingly, the efficiency of the R580's pixel processors will not be at its maximum in all cases. G80 stream processors, on the other hand, have a fully scalar architecture, and each of them is capable of executing two scalar operations per cycle, for example, MAD+MUL. Although we still do not have exact data on the architecture of Nvidia's stream processors, in this article we will look at how much more advanced the new unified architecture of the GeForce 8800 is than the Radeon X1900 architecture, and how this affects speed in games.
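
It is instructive to put rough numbers on this argument. The sketch below computes peak scalar operations per second under the simplifying assumptions just described (two scalar ops per G80 stream processor per clock, and all 192 R580 ALUs counted at face value, even though only half of them are fully general); it is an illustration of the reasoning, not a precise throughput model:

# Peak scalar operations per second, under the text's simplifying assumptions.
g80_gts = 96 * 2 * 1200e6        # 96 scalar ALUs, dual-issue (e.g. MAD + MUL), 1200 MHz
r580 = 192 * 650e6               # 192 ALUs at 650 MHz, ignoring issue restrictions
print(g80_gts / 1e9, r580 / 1e9) # ~230 vs ~125 billion ops/s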

As for texturing and rasterization performance, judging by the specifications the GeForce 8800 GTS has more texture units (24) and ROPs (20) than the Radeon X1950 XTX (16 TMUs, 16 ROPs), but their clock frequency (500 MHz) is noticeably lower than that of the ATI product (650 MHz). Thus, neither side has a decisive advantage, which means that in-game performance will be determined mainly by the "success" of the micro-architecture rather than by a numerical advantage in execution units.

It is noteworthy that the GeForce 8800 GTS and the Radeon X1950 XTX have the same memory bandwidth - 64 GB/s - achieved in different ways: the GeForce 8800 GTS uses a 320-bit bus with GDDR3 memory operating at 1600 MHz, while the Radeon X1950 XTX carries 2 GHz GDDR4 on a 256-bit bus. Given ATI's claims that the R580's ring-topology memory controller is superior to Nvidia's typical controller, it will be interesting to see whether the ATI Radeon solution gains an edge at high resolutions with full-screen anti-aliasing against its new-generation competitor, as happened in the case of the GeForce 7.
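
The fillrate and bandwidth figures from the last two paragraphs are easy to verify with back-of-the-envelope arithmetic (a minimal sketch; the unit counts and clocks are those quoted above):

def texel_rate_gtex(tmus, core_mhz):
    return tmus * core_mhz / 1000.0               # gigatexels per second

def bandwidth_gb(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000.0  # bytes per transfer x transfer rate

print(texel_rate_gtex(24, 500), texel_rate_gtex(16, 650))  # 12.0 vs 10.4 Gtex/s
print(bandwidth_gb(320, 1600), bandwidth_gb(256, 2000))    # 64.0 vs 64.0 GB/s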

The less expensive GeForce 8800 GTS with 320MB of memory, announced on February 12, 2007 and intended to replace the GeForce 7950 GT in the performance-mainstream segment, differs from the regular model only in the amount of video memory. In fact, all Nvidia had to do to get this card was swap the 512 Mbit memory chips for 256 Mbit ones. This simple and cost-effective solution allowed Nvidia to assert its technological superiority in the popular $299 price category. Later in this article we will find out how this affected the performance of the new product, and whether a potential buyer should pay an extra $150 for the model with 640 MB of video memory.

In our today's review, the GeForce 8800 GTS 640MB will be presented with the MSI NX8800GTS-T2D640E-HD-OC video adapter. Let's tell you more about this product.

MSI NX8800GTS-T2D640E-HD-OC: packaging and contents

The video adapter arrived at our laboratory in its retail version - packed in a colorful box along with all accompanying accessories. The box turned out to be relatively small, especially compared with the box of the MSI NX6800 GT, which at one time could compete with Asustek Computer's packaging in sheer size. Despite its modest dimensions, the MSI packaging is traditionally equipped with a convenient carrying handle.


The design of the box is made in calm white and blue colors and does not hurt the eyes; the front side is decorated with an image of a pretty red-haired angel girl, so there is no talk of aggressive motives so popular among video card manufacturers. Three stickers inform the buyer that the card is pre-overclocked by the manufacturer, supports HDCP and comes with the full version of Company of Heroes. On the back of the box you can find information about Nvidia SLI and MSI D.O.T. Express. The latter is a dynamic overclocking technology, and, according to MSI, it can increase the performance of the video adapter by 2% -10%, depending on the used overclocking profile.

Opening the box, in addition to the video adapter itself, we found the following set of accessories:


Quick Installation Guide
Quick Start Guide
Adapter DVI-I -> D-Sub
YPbPr / S-Video / RCA Splitter
S-Video cable
Power adapter 2xMolex -> 6-pin PCI Express
CD with MSI Drivers and Utilities
Company of Heroes Double Disc Edition

Both guides are in the form of posters; in our opinion, they are too simple and contain only the most basic information. The pursuit of language count - there are 26 of them in the short user's guide - has led to a situation where nothing particularly useful can be learned from it beyond the basics of installing the card into a system. We think the manuals could have been a little more detailed, which would give some advantage to inexperienced users.

The driver disc contains an outdated version of Nvidia ForceWare, 97.29, as well as a number of proprietary utilities, among which MSI DualCoreCenter and MSI Live Update 3 deserve special mention. For full functionality, the former requires an MSI motherboard equipped with a CoreCell chip and is therefore of little use to owners of other manufacturers' motherboards. The MSI Live Update 3 utility is designed to track driver and BIOS updates and conveniently install them over the Internet. This is quite handy, especially for those who do not want to deal with the intricacies of manually updating the video adapter's BIOS.

MSI should be commended for including the full version of the popular tactical RTS Company of Heroes. This is truly a game of the highest category, with excellent graphics and thoroughly polished gameplay; many players call it the best game in the genre, which is confirmed by numerous awards, including the title of Best Strategy Game of E3 2006. As we have already noted, despite belonging to the real-time strategy genre, Company of Heroes boasts modern graphics on the level of a good first-person shooter, so the game is perfect for demonstrating the capabilities of the GeForce 8800 GTS. In addition to Company of Heroes, a demo version of Warhammer 40,000: Dawn of War - Dark Crusade can be found on the discs.

We can confidently call the MSI NX8800GTS-T2D640E-HD-OC a good package bundle thanks to the full version of the very popular tactical RTS Company of Heroes and convenient MSI software.

MSI NX8800GTS-T2D640E-HD-OC PCB Design

For the GeForce 8800 GTS, Nvidia has developed a separate, more compact PCB than the one used to manufacture the GeForce 8800 GTX. Since all GeForce 8800s are supplied to Nvidia partners ready-made, practically everything that will be said below applies not only to the MSI NX8800GTS, but also to any other GeForce 8800 GTS model, whether it is a version with 640 or 320 MB of video memory.


The GeForce 8800 GTS PCB is much shorter than that of the GeForce 8800 GTX: only 22.8 centimeters versus almost 28 centimeters for the GeForce 8 flagship. In fact, the dimensions of the GeForce 8800 GTS are the same as those of the Radeon X1950 XTX, even slightly smaller, since the cooler does not protrude beyond the PCB.

Our MSI NX8800GTS sample uses a board covered with a green mask, although on the company's website the product is shown with a PCB in the more usual black color. Currently, there are both "black" and "green" GeForce 8800 GTX and GTS on sale. Despite numerous rumors circulating on the web, there is no difference between such cards, except for the actual PCB color, which is confirmed by the official Nvidia website. What is the reason for this "return to the roots"?

There are many conflicting rumors on this score. According to some of them, the composition of the black coating is more toxic than the traditional green, while others believe that the black coating is more difficult to apply or more expensive. In practice, this is most likely not the case - as a rule, prices for solder masks of different colors are the same, which eliminates additional problems with masks of certain colors. The most probable is the simplest and most logical scenario - cards of different colors are produced by different contract manufacturers - Foxconn and Flextronics. Moreover, Foxconn probably uses coatings of both colors, since we have seen both "black" and "green" cards from this manufacturer.


The power supply system of the GeForce 8800 GTS is almost equal in complexity to the similar system of the GeForce 8800 GTX and even contains a larger number of electrolytic capacitors, but it has a denser layout and only one external power connector, due to which the PCB was made much shorter. The same digital PWM controller as in the GeForce 8800 GTX, Primarion PX3540, is responsible for power management of the GPU. The memory power is controlled by the second controller, Intersil ISL6549, which, by the way, is absent on the GeForce 8800 GTX, where the memory power circuit is different.

The left side of the PCB, where the main components of the GeForce 8800 GTS - the GPU, NVIO and memory - are located, is almost identical to the corresponding section of the GeForce 8800 GTX PCB, which is not surprising: developing the entire board from scratch would have required significant financial and time costs. Besides, it would probably not have been possible to simplify the board much by designing it from scratch, given the need to use the same G80-plus-NVIO tandem as on the flagship model. The only visible difference from the GeForce 8800 GTX is the absence of the second MIO (SLI) "comb", in place of which there is a footprint for a latched connector that could perform the same function but is not wired in. Even the 384-bit memory bus layout is preserved, and the bus itself was cut down to the required width in the simplest way: instead of 12 GDDR3 chips, only 10 are installed. Since each chip has a 32-bit interface, 10 chips yield the required 320 bits in total. Theoretically, nothing prevents the creation of a GeForce 8800 GTS with a 384-bit memory bus, but such a card appearing in practice is extremely unlikely; a full-fledged GeForce 8800 GTX with reduced frequencies would have better chances of being released.


The MSI NX8800GTS-T2D640E-HD-OC carries 10 GDDR3 Samsung K4J52324QE-BC12 chips with a capacity of 512 Mbit each, operating at a supply voltage of 1.8V and rated for 800 (1600) MHz. According to Nvidia's official specifications for the GeForce 8800 GTS, the memory of this video adapter should run at exactly this frequency. But the version of the MSI NX8800GTS we are examining carries the letters "OC" in its name - it is pre-overclocked, so its memory operates at a slightly higher 850 (1700) MHz, raising bandwidth from 64 GB/s to 68 GB/s.
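
The quoted bandwidth figures follow directly from the chip count and memory clock (a quick sanity check; each GDDR3 chip, as noted above, has a 32-bit interface):

chips, bits_per_chip = 10, 32
bus_bits = chips * bits_per_chip                # 10 x 32 = 320-bit bus
for effective_mhz in (1600, 1700):              # reference clock vs MSI's factory overclock
    print(bus_bits / 8 * effective_mhz / 1000)  # 64.0, then 68.0 GB/s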

Since the only difference between the GeForce 8800 GTS 320MB and the regular model is the halved video memory, this card is simply equipped with 256 Mbit memory chips, for example, Samsung K4J55323QC / QI series or Hynix HY5RS573225AFP. Otherwise, the two GeForce 8800 GTS models are identical to each other down to the smallest details.

The GPU marking on the NX8800GTS differs somewhat from that of the GeForce 8800 GTX processor: it reads "G80-100-K0-A2", whereas the chip on the reference flagship card is marked "G80-300-A2". We know that GeForce 8800 GTS production can use G80 samples that have defects in some functional blocks and/or have failed frequency binning. Perhaps it is precisely these peculiarities that are reflected in the marking.

The 8800 GTS processor has 96 streaming processors out of 128, 24 TMUs out of 32 and 20 ROPs out of 24. For the standard version of the GeForce 8800 GTS, the base GPU frequency is 500 MHz (513 MHz real frequency), and the shader processor frequency is 1200 MHz (1188 MHz real frequency), but for MSI NX8800GTS-T2D640E-HD-OC these parameters are 576 and 1350 MHz, which corresponds to the frequencies of the GeForce 8800 GTX. How this will affect the performance of the MSI product, we will find out later, in the section on the results of gaming tests.

The NX8800GTS has a standard output configuration: two DVI-I connectors capable of dual-link operation, and a universal seven-pin mini-DIN connector that allows the connection of both HDTV devices via the analog YPbPr interface and SDTV devices via S-Video or Composite. In MSI's product, both DVI connectors are carefully covered with rubber protective caps - a rather meaningless but pleasant trifle.

MSI NX8800GTS-T2D640E-HD-OC: cooling system design

The cooling system installed on the MSI NX8800GTS, as well as on the vast majority of GeForce 8800 GTS from other graphics card vendors, is a shortened version of the GeForce 8800 GTX cooling system described in the corresponding review.


The heatsink is shortened, as is the heat pipe that transfers heat from the copper base in contact with the GPU heat spreader. The flat U-shaped heat pipe pressed into the base, responsible for distributing the heat flow evenly, is also routed differently. The aluminum frame on which all the cooler parts are mounted has numerous protrusions at the points of contact with the memory chips, the power transistors of the voltage regulator and the NVIO die. Reliable thermal contact is ensured by traditional inorganic-fiber pads impregnated with white thermal paste. For the GPU, a different but also familiar thick dark-gray thermal paste is used.

Due to the fact that there are relatively few copper elements in the design of the cooling system, its mass is low, and the fastening does not require the use of special plates that prevent fatal bending of the PCB. Eight common spring-loaded bolts that attach the cooler directly to the board are enough. The possibility of damaging the graphics processor is practically excluded, since it is equipped with a heat spreader cover and is surrounded by a wide metal frame that protects the chip from possible distortion of the cooling system, and the board from excessive bending.

A radial fan with an impeller diameter of about 75 millimeters is responsible for airflow through the heatsink; it has the same electrical parameters as in the GeForce 8800 GTX cooling system - 0.48A/12V - and is connected to the board via a four-pin connector. The system is covered with a translucent plastic casing so that the hot air is exhausted through the slots in the mounting bracket.

The design of the GeForce 8800 GTX and 8800 GTS coolers is well thought-out, reliable, time-tested, practically silent in operation and provides high cooling efficiency, so it makes no sense to change it to anything else. MSI replaced only the Nvidia sticker on the casing with its own one, repeating the pattern on the box and provided the fan with another sticker with its own logo.

MSI NX8800GTS-T2D640E-HD-OC: noise and power consumption

To assess the noise level generated by the MSI NX8800GTS cooling system, a Velleman DVM1326 digital sound level meter with a resolution of 0.1 dB was used. The measurements were made using the A-weighted curve. At the time of the measurements, the background noise level in the laboratory was 36 dBA, and the noise level at a distance of one meter from a working test rig equipped with a passively cooled graphics card was 40 dBA.






In terms of noise, the cooling system of the NX8800GTS (and of any other GeForce 8800 GTS) behaves exactly like the system installed on the GeForce 8800 GTX. The noise level is very low in all modes; in this respect, the new Nvidia design surpasses even the excellent GeForce 7900 GTX cooler, previously considered the best in its class. Achieving complete silence without losing cooling efficiency would only be possible with a water cooling system, especially if serious overclocking is planned.

As our readers know, reference samples of the GeForce 8800 GTX from the first batches refused to run on a stand equipped to measure the power consumption of video cards. However, most of the new cards belonging to the GeForce 8800 family, and among them the MSI NX8800GTS-T2D640E-HD-OC, worked without any problems on this system with the following configuration:

Intel Pentium 4 560 processor (3.60GHz, 1MB L2);
Intel Desktop Board D925XCV (i925X);
PC-4300 DDR2 SDRAM (2x512MB);
Samsung SpinPoint SP1213C hard drive (120 GB, Serial ATA-150, 8MB buffer);
Microsoft Windows XP Pro SP2, DirectX 9.0c.

As we have reported before, the motherboard at the heart of the measuring platform has been specially upgraded: measuring shunts equipped with connectors for the measuring equipment are inserted into the power supply lines of the PCI Express x16 slot. The 2xMolex -> 6-pin PCI Express power adapter is equipped with the same shunt. A Velleman DVM850BL multimeter with a measurement error of no more than 0.5% is used as the measuring tool.
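
For readers unfamiliar with the technique: the power drawn over each line is derived from Ohm's law, using the voltage drop across the shunt to obtain the current. A minimal sketch of the calculation; the shunt resistance and readings below are chosen purely for illustration, as the actual values used in our rig are not published:

def rail_power_w(v_rail, v_shunt_drop, r_shunt_ohm):
    current = v_shunt_drop / r_shunt_ohm  # Ohm's law: I = U / R
    return v_rail * current               # P = U * I, in watts

# Hypothetical example: 12 V line, 5 mV drop across a 1 mOhm shunt -> 5 A -> 60 W
print(rail_power_w(12.0, 0.005, 0.001))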

To load the video adapter in 3D mode, we use the first SM3.0/HDR graphics test from the Futuremark 3DMark06 suite, run in an endless loop at 1600x1200 with 16x anisotropic filtering forced. Peak 2D load is emulated with the 2D Transparent Windows test from the Futuremark PCMark05 suite.

Thus, following the standard measurement procedure, we managed to obtain reliable data on the power consumption level not only of MSI NX8800GTS-T2D640E-HD-OC, but also of the entire Nvidia GeForce 8800 family.











The GeForce 8800 GTX is indeed ahead of the previous "leader" in terms of power consumption, the Radeon X1950 XTX, but only by 7 watts. Considering the enormous complexity of the G80, 131.5 watts in 3D mode can be safely considered a good indicator. Both additional power connectors of the GeForce 8800 GTX consume approximately the same power, not exceeding 45 watts, even in the most severe mode. Although the PCB design of the GeForce 8800 GTX presupposes the installation of one eight-pin power connector instead of the six-pin one, it is unlikely to be relevant even in the case of a significant increase in the clock speeds of the GPU and memory. In idle mode, the efficiency of the Nvidia flagship leaves much to be desired, but this is a payback for 681 million transistors and a huge, by GPU standards, frequency of shader processors. This high idle power consumption level is partly due to the fact that the GeForce 8800 family does not lower clock frequencies in this mode.

The appetites of both GeForce 8800 GTS versions are noticeably more modest, although they cannot boast the efficiency of Nvidia cards based on the previous-generation G71 core. The single power connector of these cards bears a much more serious load - in some cases it can reach 70 watts or more. The power consumption of the 640 MB and 320 MB versions of the GeForce 8800 GTS differs insignificantly, which is not surprising, since this parameter is the only difference between the cards. MSI's product, operating at higher frequencies, consumes more than the standard GeForce 8800 GTS - about 116 watts under 3D load - which is still less than the Radeon X1950 XTX. Of course, in 2D mode the AMD card is much more economical; however, video adapters of this class are purchased specifically for 3D use, so this parameter is not as critical as power consumption in games and 3D applications.

MSI NX8800GTS-T2D640E-HD-OC: Overclocking Features

Overclocking members of the Nvidia GeForce 8800 family involves a number of peculiarities that we consider it necessary to tell our readers about. As you probably remember, the first representatives of the seventh GeForce generation, based on the 0.11-micron G70 core, could raise their ROP and pixel processor frequencies only in 27 MHz steps, and if the overclock fell short of this value there was practically no performance gain. Later, in G71-based cards, Nvidia returned to the standard overclocking scheme with 1 MHz granularity, but in the eighth GeForce generation clock-frequency stepping has reappeared.

The distribution and adjustment of clock frequencies in the GeForce 8800 is rather nontrivial, owing to the fact that the shader processors in the G80 operate at a much higher frequency than the rest of the GPU - the ratio is approximately 2.3 to 1. While the base frequency of the graphics core can change in steps smaller than 27 MHz, the frequency of the shader processors always changes in steps of 54 MHz (2x27 MHz), which creates additional difficulties during overclocking, because all utilities manipulate the base frequency and not the frequency of the shader "domain". However, there is a simple formula that allows you to estimate the frequency of the GeForce 8800 stream processors after overclocking:

OC shader clk = Default shader clk / Default core clk * OC core clk


where OC shader clk is the resulting (approximate) shader frequency, Default shader clk is the initial frequency of the shader processors, Default core clk is the initial core frequency, and OC core clk is the overclocked core frequency.

Let's take a look at the behavior of MSI NX8800GTS-T2D640E-HD-OC when overclocking with the RivaTuner2 FR utility, which allows you to monitor the real frequencies of various regions or, as they are also called, "domains" of the G80 GPU. Since the MSI product has the same GPU frequencies (576/1350) as the GeForce 8800 GTX, the following information is valid for Nvidia's flagship graphics card as well. We will increase the main GPU frequency in 5 MHz steps: this is a rather small step and at the same time it is not a multiple of 27 MHz.


An empirical check confirmed that the main frequency of the graphics core does indeed change with a variable step - 9, 18 or 27 MHz - and we could not discern the pattern. The frequency of the shader processors changed in 54 MHz steps in all cases. Because of this, some frequencies of the G80's main "domain" turn out to be practically useless for overclocking, and using them will only lead to extra GPU heating. For example, there is no point in raising the main core frequency to exactly 621 MHz - the shader unit frequency will still be 1458 MHz. Thus, the GeForce 8800 should be overclocked carefully, using the formula above and checking the monitoring data of RivaTuner or another utility with similar functionality.
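
Putting the formula and the observed stepping together, the effective shader clock for a requested core clock can be estimated as follows. This is a sketch that models the snapping as rounding to the nearest 54 MHz step - it matches our observations for this sample, but it is not an official Nvidia specification:

def shader_clock_mhz(oc_core, default_core=576, default_shader=1350, step=54):
    raw = default_shader / default_core * oc_core  # the formula given above
    return default_shader + round((raw - default_shader) / step) * step

for core in (576, 600, 621, 675):
    print(core, shader_clock_mhz(core))  # 621 MHz still yields 1458; 675 -> 1566 MHz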

It would be illogical to expect serious overclocking results from the NX8800GTS version already overclocked by the manufacturer, however, the card unexpectedly showed quite good potential, at least from the GPU side. We managed to raise its frequencies from the factory 576/1350 MHz to 675/1566 MHz, while the NX8800GTS steadily passed several 3DMark06 cycles in a row without any additional cooling. The processor temperature, according to Riva Tuner, did not exceed 70 degrees.

The memory responded to overclocking much worse, since the NX8800GTS OC Edition is equipped with chips rated for 800 (1600) MHz that already operate above their nominal frequency, at 850 (1700) MHz. As a result, we had to stop at the 900 (1800) MHz mark, since further attempts to raise the memory frequency invariably led to a freeze or a driver crash.

Thus, the card showed good overclocking potential, but only on the GPU side: the comparatively slow memory chips did not allow a significant frequency increase. For them, reaching the GeForce 8800 GTX's memory frequency should be considered a good achievement, and a 320-bit bus at this frequency already provides a serious bandwidth advantage over the Radeon X1950 XTX: 72 GB/s versus 64 GB/s. Of course, overclocking results may vary from one sample of the MSI NX8800GTS OC Edition to another, and with additional measures such as a power supply modification or water cooling.

Test platform configuration and test methods

A comparative performance study was carried out on platforms with the following configurations.

AMD Athlon 64 FX-60 processor (2 x 2.60GHz, 2 x 1MB L2)
Abit AN8 32X (nForce4 SLI X16) motherboard for Nvidia GeForce cards
Asus A8R32-MVP Deluxe (ATI CrossFire Xpress 3200) Motherboard for ATI Radeon Cards
OCZ PC-3200 Platinum EL DDR SDRAM (2x1GB, CL2-3-2-5)
Maxtor MaXLine III 7B250S0 hard drive (Serial ATA-150, 16MB buffer)
Sound Card Creative SoundBlaster Audigy 2
Power supply unit Enermax Liberty 620W (ELT620AWT, rated power 620W)
Dell 3007WFP monitor (30", maximum resolution 2560x1600)
Microsoft Windows XP Pro SP2, DirectX 9.0c
AMD Catalyst 7.2
Nvidia ForceWare 97.92

Since we consider the use of trilinear and anisotropic filtering optimizations unjustified, the drivers were tuned in a standard way, implying the highest possible texture filtering quality:

AMD Catalyst:

Catalyst A.I.: Standard
Mipmap Detail Level: High Quality
Wait for vertical refresh: Always off
Adaptive antialiasing: Off
Temporal antialiasing: Off
High Quality AF: On

Nvidia ForceWare:

Texture Filtering: High Quality
Vertical sync: Off
Trilinear optimization: Off
Anisotropic optimization: Off
Anisotropic sample optimization: Off
Gamma correct antialiasing: On
Transparency antialiasing: Off
Other settings: default

In each game, the highest possible level of graphics quality was set, while the configuration files of the games were not modified. To take performance data, either the built-in features of the game were used, or, in their absence, the Fraps utility. Whenever possible, data on the minimum performance was recorded.

Testing was carried out in three standard resolutions for our methodology: 1280x1024, 1600x1200 and 1920x1200. One of the purposes of this review is to evaluate the effect of the video memory size of the GeForce 8800 GTS on performance. In addition, the technical characteristics and cost of both versions of this video adapter allow us to count on a fairly high level of performance in modern games when using FSAA 4x, so we tried to use the "eye candy" mode wherever possible.

FSAA and anisotropic filtering were activated by means of the game; in the absence of those, their forcing was carried out using the appropriate settings of the ATI Catalyst and Nvidia ForceWare drivers. Testing without full-screen anti-aliasing was used only for games that do not support FSAA for technical reasons, or when using FP HDR simultaneously with participation in testing of representatives of the GeForce 7 family, which does not support simultaneous operation of these features.

Since our task included comparing the performance of graphics cards that differ only in the amount of video memory, the MSI NX8800GTS-T2D640E-HD-OC was tested twice: at its factory frequencies, and at frequencies reduced to the GeForce 8800 GTS reference values: 513/1188/800 (1600) MHz. In addition to the MSI product and the reference Nvidia GeForce 8800 GTS 320MB, the following video adapters took part in the testing:

Nvidia GeForce 8800 GTX (G80, 576/1350/1800MHz, 128sp, 32tmu, 24rop, 384-bit, 768MB)
Nvidia GeForce 7950 GX2 (2xG71, 500/1200MHz, 48pp, 16vp, 48tmu, 32rop, 256-bit, 512MB)
AMD Radeon X1950 XTX (R580+, 650/2000MHz, 48pp, 8vp, 16tmu, 16rop, 256-bit, 512MB)

The following set of games and applications served as test software:

3D first-person shooters:

Battlefield 2142
Call of Juarez
Far Cry
F.E.A.R. Extraction Point
Tom Clancy's Ghost Recon Advanced Warfighter
Half-Life 2: Episode One
Prey
Serious Sam 2
S.T.A.L.K.E.R.: Shadow of Chernobyl


3D shooters with a third person view:

Tomb Raider: Legend
Hitman: Blood Money


RPG:

Gothic 3
Neverwinter Nights 2
The Elder Scrolls IV: Oblivion


Simulators:

X3: Reunion

Strategy games:

Command & Conquer 3: Tiberium Wars
Company of Heroes
Supreme Commander


Synthetic gaming tests:

Futuremark 3DMark05
Futuremark 3DMark06

Game tests: Battlefield 2142


There is no significant difference between the two GeForce 8800 GTS memory configurations up to 1920x1200, although at 1600x1200 the younger model trails the older one by about 4-5 frames per second, with both performing quite comfortably. The resolution of 1920x1440, however, is a turning point: the GeForce 8800 GTS 320MB abruptly drops out of the game, lagging more than 1.5 times on average fps and twice on minimum fps. Moreover, it loses to the previous-generation cards as well. Clearly there is either a shortage of video memory, or a problem with how the GeForce 8800 family manages it.

MSI NX8800GTS OC Edition is noticeably ahead of the reference model starting from 1600x1200, but it certainly cannot catch up with the GeForce 8800 GTX, although at 1920x1440 the gap between these cards becomes impressively narrow. Obviously, the difference in memory bus width between the GeForce 8800 GTS and GTX matters little here.

Game tests: Call of Juarez


Both GeForce 8800 GTS models show the same performance level at all resolutions, including 1920x1200. This is quite natural, considering testing with HDR on but FSAA off. Working at nominal frequencies, the cards are inferior to the GeForce 7950 GX2.

The overclocked version of MSI allows you to achieve parity in high resolutions, which are impractical to use in this game even if you have a GeForce 8800 GTX in your system. For example, at 1600x1200, the average performance of Nvidia's flagship graphics card is only 40 fps, with dips in graphically intensive scenes up to 21 fps. For a first-person shooter, such indicators can hardly be called truly comfortable.

Game tests: Far Cry


The game is far from young and is not well suited for testing modern high-end video adapters. Despite the use of anti-aliasing, noticeable differences in their behavior can be seen only at 1920x1200. The GeForce 8800 GTS 320MB is faced with a shortage of video memory, and therefore yields by about 12% to the model equipped with 640 MB of video memory. However, due to the modest requirements of Far Cry by today's standards, the player is not in danger of losing comfort.

MSI NX8800GTS OC Edition is almost on a par with GeForce 8800 GTX: in Far Cry the power of the latter is clearly not in demand.


Due to the nature of the scene recorded at the Research level, the readings are more varied; even at 1600x1200 you can see the differences in performance of different members of the GeForce 8800 family. Moreover, the lag of the version with 320MB of memory is already evident here, despite the fact that the action takes place in a closed space of an underground cave. The difference in performance between the MSI product and the GeForce 8800 GTX at 1920x1200 is much larger than in the previous case, since the performance of shader processors at this level plays a more important role.




In FP HDR mode, the GeForce 8800 GTS 320MB no longer experiences problems with the video memory capacity and is in no way inferior to its older brother, providing a decent level of performance at all resolutions. The variant offered by MSI provides another 15% increase in speed, but even the version operating at standard clock frequencies is fast enough to use the 1920x1200 resolution, and the GeForce 8800 GTX will undoubtedly provide a comfortable environment for the player at 2560x1600.

Game tests: F.E.A.R. Extraction Point


The visual richness of F.E.A.R. requires the corresponding resources from the video adapter, and the 5% lag of the GeForce 8800 GTS 320MB is seen already at 1280x1024, and at the next resolution, 1600x1200, it sharply turns into 40%.

The benefits of overclocking the GeForce 8800 GTS are not obvious: both the overclocked and the usual versions allow you to play equally well at 1600x1200. In the next resolution, the increase in speed from overclocking is simply not enough to reach a level comfortable for first-person shooters. Only GeForce 8800 GTX with 128 active shader processors and 384-bit memory subsystem can do it.

Game Tests: Tom Clancy's Ghost Recon Advanced Warfighter

Due to the use of deferred rendering, the use of FSAA in GRAW is technically impossible, therefore, the data is given only for the mode with anisotropic filtering.


The advantage of MSI NX8800GTS OC Edition over the usual reference card grows as the resolution grows, and at 1920x1200 it reaches 19%. In this case, it is these 19% that make it possible to achieve an average performance of 55 fps, which is quite comfortable for the player.

As for the comparison of two GeForce 8800 GTS models with different video memory sizes, there is no difference in their performance.

Game tests: Half-Life 2: Episode One


At 1280x1024, there is a limitation on the part of the central processor of our test system - all the cards show the same result. In 1600x1200, the differences are already revealed, but they are not fundamental, at least for three versions of the GeForce 8800 GTS: all three provide a very comfortable level of performance. The same can be said about the resolution 1920x1200. Despite the high-quality graphics, the game is undemanding to the amount of video memory and the loss of the GeForce 8800 GTS 320MB to the older and much more expensive model with 640 MB of memory on board is only about 5%. The overclocked version of the GeForce 8800 GTS offered by MSI confidently takes the second place after the GeForce 8800 GTX.

Although the GeForce 7950 GX2 shows better results than the GeForce 8800 GTS at 1600x1200, do not forget about the problems that may arise when using a card that is, in fact, an SLI tandem, as well as the significantly lower quality of texture filtering in the GeForce 7 family. The new solution from Nvidia, of course, also has problems with drivers, but it has promising capabilities, and, unlike the GeForce 7950 GX2, has every chance to get rid of "childhood diseases" in the shortest possible time.

Game tests: Prey


The GeForce 8800 GTS 640MB does not show the slightest advantage over the GeForce 8800 GTS 320MB, possibly because the game uses a modified Doom III engine and does not show any special appetites in terms of video memory requirements. As in the case of GRAW, the increased performance of the NX8800GTS OC Edition allows the owners of this video adapter to count on a fairly comfortable game at a resolution of 1920x1200. For comparison, a regular GeForce 8800 GTS demonstrates the same figures at 1600x1200. The flagship of the line, the GeForce 8800 GTX, is beyond competition.

Game tests: Serious Sam 2


The brainchild of the Croatian developers at Croteam has always strictly demanded 512 MB of video memory, punishing any adapter with less with a monstrous drop in performance. The amount provided by the inexpensive version of the GeForce 8800 GTS proved insufficient to satisfy the game's appetites: it managed only 30 fps at 1280x1024, while the version with 640 MB of memory on board turned out to be more than twice as fast.

For some unknown reason, the minimum performance of all GeForce 8800s in Serious Sam 2 is extremely low, which may be due to both the architectural features of the family, which, as you know, has a unified architecture without dividing into pixel and vertex shaders, and flaws in the ForceWare drivers. For this reason, the owners of the GeForce 8800 will not be able to achieve complete comfort in this game so far.

Game tests: S.T.A.L.K.E.R .: Shadow of Chernobyl

Eagerly awaited by many players, the GSC Game World project finally saw the light of day after many years of development. The game turned out to be controversial, but nevertheless too multifaceted to be described in a few phrases. Let us just note that compared with one of the early builds, the project's engine has been significantly improved. The game received support for a number of modern technologies, including Shader Model 3.0, HDR, parallax mapping and others, but did not lose the ability to work in a simplified mode with a static lighting model, providing excellent performance on weaker systems.

Since we focus on the highest level of image quality, we tested the game in full dynamic lighting mode with maximum detail. In this mode, which implies, among other things, the use of HDR, there is no FSAA support - at least that is the case in the current version of S.T.A.L.K.E.R. Since the game loses much of its attractiveness with the static lighting model and DirectX 8 effects, we limited ourselves to anisotropic filtering.


The game does not suffer from modest appetites - even the GeForce 8800 GTX with maximum detail is not able to provide 60 fps in it at a resolution of 1280x1024. However, it should be noted that at low resolutions the main limiting factor is CPU performance, since the spread between the cards is small and their average results are quite close.

Nevertheless, the GeForce 8800 GTS 320MB lags somewhat behind its older brother, and it only gets worse with increasing resolution, and at 1920x1200 the youngest member of the GeForce 8800 family simply lacks the available video memory. This is not surprising, given the scale of the game scenes and the abundance of special effects used in them.

Overall, the GeForce 8800 GTX does not provide a serious advantage over the GeForce 8800 GTS in S.T.A.L.K.E.R., and the Radeon X1950 XTX looks just as good as the GeForce 8800 GTS 320MB. In some respects the AMD solution even surpasses Nvidia's, since it runs at 1920x1200; in practice, however, using this mode is of little value given the average performance of 30-35 fps. The same applies to the GeForce 7950 GX2 which, by the way, is somewhat ahead of both its direct competitor and the younger model of the new generation.

Game tests: Hitman: Blood Money


Earlier, we noted that the presence of 512 MB of video memory provides such a video adapter with some gain in Hitman: Blood Money at high resolutions. Apparently, 320 MB is also sufficient, since the GeForce 8800 GTS 320MB is practically not inferior to the usual GeForce 8800 GTS, regardless of the used resolution; the difference does not exceed 5%.

Both cards, as well as the overclocked version of the GeForce 8800 GTS offered by MSI, allow you to play successfully at all resolutions, and the GeForce 8800 GTX even allows the use of better FSAA modes than the usual MSAA 4x, since it has the necessary performance headroom.

Game tests: Tomb Raider: Legend


Despite settings that provide maximum graphics quality, the GeForce 8800 GTS 320MB copes with the game as well as the regular GeForce 8800 GTS. Both cards make 1920x1200 available to the player in "eye candy" mode. The MSI NX8800GTS OC Edition slightly outperforms both reference cards, but only in average fps - the minimum stays the same. Even the GeForce 8800 GTX fares no better here, which suggests this figure is down to some peculiarity of the game engine.

Game tests: Gothic 3

The current version of Gothic 3 does not support FSAA, so testing was carried out using anisotropic filtering only.


Despite the lack of support for full-screen anti-aliasing, the GeForce 8800 GTS 320MB is seriously inferior not only to the regular GeForce 8800 GTS, but also to the Radeon X1950 XTX, slightly outperforming only the GeForce 7950 GX2. Due to the performance at 26-27 fps at 1280x1024, this card is not well suited for Gothic 3.

Note that the GeForce 8800 GTX outperforms the GeForce 8800 GTS, at best, by 20%. By all appearances, the game is unable to use all the resources that Nvidia's flagship model has at its disposal. This is evidenced by the insignificant difference between the regular and overclocked versions of the GeForce 8800 GTS.

Game Tests: Neverwinter Nights 2

Since version 1.04, the game supports FSAA, but its HDR support is still incomplete, so we tested NWN 2 in "eye candy" mode.


As already mentioned, the minimum playability threshold for Neverwinter Nights 2 is 15 frames per second, and the GeForce 8800 GTS 320MB balances on this edge already at 1600x1200, whereas for the version with 640 MB of memory 15 fps is the level below which its performance never falls.

Game Tests: The Elder Scrolls IV: Oblivion

Without HDR, the game loses much of its appeal, and although the opinions of the players differ on this, we tested the TES IV in the mode with FP HDR enabled.


The performance of the GeForce 8800 GTS 320MB directly depends on the resolution used: if in 1280x1024 the new product is able to compete with the most productive cards of the previous generation, then in 1600x1200 and especially 1920x1200 it loses to them, yielding up to 10% to Radeon X1950 XTX and up to 25% to GeForce 7950 GX2. Nevertheless, this is a very good result for a solution that has an official price of only $ 299.

The regular GeForce 8800 GTS and its overclocked version offered by MSI feel more confident and provide comfortable first-person shooter performance at all resolutions.


Examining two versions of the GeForce 7950 GT, differing in the amount of video memory, we did not record any serious differences in performance in TES IV, however, in a similar situation with two versions of the GeForce 8800 GTS, the picture is completely different.
While at 1280x1024 they behave identically, at 1600x1200 the version with 320 MB of memory is already less than half as fast as the one equipped with 640 MB, and at 1920x1200 its performance drops to the level of the Radeon X1650 XT. Quite obviously, the point here is not the amount of video memory as such, but the way the driver allocates it. The problem can probably be fixed by tweaking ForceWare, and we will verify this as new versions of the Nvidia drivers are released.

As for the GeForce 8800 GTS and MSI NX8800GTS OC Edition, even in the open spaces of the Oblivion world, they provide a high level of comfort at all resolutions, although, of course, not around 60 fps, as in closed rooms. The most powerful solutions of the previous generation simply cannot compete with them.

Game tests: X3: Reunion


The average performance of all members of the GeForce 8800 family is quite high, but the minimum is still at a low level, which means that the drivers need to be improved. The results of the GeForce 8800 GTS 320MB are the same as those of the GeForce 8800 GTS 640MB.

Game tests: Command & Conquer 3: Tiberium Wars

The Command & Conquer real-time strategy series is probably familiar to everyone with even a passing interest in computer games. The continuation of the series, recently released by Electronic Arts, takes the player into the familiar world of confrontation between GDI and the Brotherhood of Nod, joined this time by a third faction of alien invaders. The game engine has been brought up to date and uses advanced special effects; in addition, it has one peculiarity - an fps limiter fixed at around 30 frames per second. This may be done to limit the speed of the AI and thus avoid giving it an unfair advantage over the player. Since the limiter cannot be disabled by standard means, we tested the game with it enabled, which means we paid attention first of all to the minimum fps.


Almost all test participants are able to provide 30 fps at all resolutions, except for the GeForce 7950 GX2, which is experiencing problems with the SLI mode. Most likely, the driver simply lacks the appropriate support, since the official Windows XP Nvidia ForceWare driver for the GeForce 7 family was last updated more than six months ago.

As for the two GeForce 8800 GTS models, they demonstrate the same minimum fps and, therefore, provide the same level of comfort for the player. Although the model with 320 MB of video memory is behind the older model at 1920x1200, a difference of 2 frames per second is hardly critical and, with minimum performance being equal, does not affect gameplay in any way. Complete smoothness can only be guaranteed by the GeForce 8800 GTX, whose minimum fps does not fall below 25 frames per second.

Game Tests: Company of Heroes

Due to problems with FSAA activation in this game, we decided to abandon the "eye candy" mode and tested it in pure performance mode with anisotropic filtering enabled.


Here is another game where the GeForce 8800 GTS 320MB, despite its unified architecture, is inferior to the previous generation. In fact, Nvidia's $299 solution is suitable for resolutions no higher than 1280x1024 even with anti-aliasing disabled, while the $449 model, which differs in a single parameter - the amount of video memory - allows you to play successfully even at 1920x1200. However, the latter is also within reach of owners of the AMD Radeon X1950 XTX.

Game tests: Supreme Commander


But Supreme Commander, unlike Company of Heroes, does not impose strict requirements on the amount of video memory. In this game the GeForce 8800 GTS 320MB and the GeForce 8800 GTS show equally high results. Some additional gain can be obtained by overclocking, which is demonstrated by the MSI product, but such a step will still not reach the level of the GeForce 8800 GTX. However, the available performance is sufficient to use all resolutions, including 1920x1200, especially since its fluctuations are small, and the minimum fps is only slightly inferior to the average.

Synthetic benchmarks: Futuremark 3DMark05


Since by default 3DMark05 uses a resolution of 1024x768 and does not use full-screen anti-aliasing, the GeForce 8800 GTS 320MB naturally demonstrates the same result as the usual version with 640 MB of video memory. The overclocked version of the GeForce 8800 GTS, supplied to the market by Micro-Star International, boasts a nice even result - 13800 points.






Unlike the overall score obtained in the default mode, the results of the individual tests were obtained by running them in "eye candy" mode. In this case, however, that had no effect on the standing of the GeForce 8800 GTS 320MB - no noticeable lag behind the GeForce 8800 GTS was recorded even in the third, most resource-intensive test. The MSI NX8800GTS OC Edition took a stable second place after the GeForce 8800 GTX in all cases, confirming the results of the overall standings.

Synthetic benchmarks: Futuremark 3DMark06


Both versions of the GeForce 8800 GTS behave in the same way as in the previous case. However, 3DMark06 uses more sophisticated graphics, which, when combined with FSAA 4x in some benchmarks, can give a different picture. Let's see.






The results of certain groups of tests are also logical. The SM3.0 / HDR group uses a larger number of more complex shaders, so the advantage of the GeForce 8800 GTX is more pronounced than in the SM2.0 group. AMD Radeon X1950 XTX also looks more advantageous in case of active use of Shader Model 3.0 and HDR, and GeForce 7950 GX2, on the contrary, in SM2.0 tests.




After enabling FSAA, the GeForce 8800 GTS 320MB really starts to lose to the GeForce 8800 GTS 640MB at 1600x1200, and at 1920x1200, the new Nvidia solution cannot pass the tests at all due to lack of video memory. The loss is close to twofold in both the first and second tests of SM2.0, despite the fact that they are very different in the construction of graphic scenes.






In the first test SM3.0 / HDR, the effect of the video memory size on the performance is clearly seen even at 1280x1024. The younger model GeForce 8800 GTS is 33% behind the older one, then, at 1600x1200, the gap increases to almost 50%. The second test, with a much less complex and large-scale scene, is not so demanding on the amount of video memory, and here the lag is 5% and about 20%, respectively.

Conclusion

Time to take stock. We have tested two Nvidia GeForce 8800 GTS models, one of which is a direct competitor to the AMD Radeon X1950 XTX, while the other is aimed at the $299 high-performance sector. What can we say in light of the game test results?

The older model, with an official price of $449, has shown itself well in terms of performance. In most tests the GeForce 8800 GTS outperformed the AMD Radeon X1950 XTX; only in some cases did it merely match the AMD solution or lag behind the dual-GPU GeForce 7950 GX2 tandem. However, given the extremely high performance of the GeForce 8800 GTS 640MB, we would not compare it directly with products of the previous generation: they do not support DirectX 10, the GeForce 7950 GX2 has significantly worse anisotropic filtering quality, and there are potential problems caused by the incompatibility of certain games with Nvidia SLI technology.

The GeForce 8800 GTS 640MB can confidently be called the best solution in the $449-$499 price range. However, it is worth noting that the new generation of Nvidia products has still not been cured of its childhood illnesses: Call of Juarez still shows flickering shadows, and Splinter Cell: Double Agent, although it works, requires a special launch with driver version 97.94. At least until cards based on the next-generation AMD graphics processor appear on the market, the GeForce 8800 GTS has every chance of holding its rightful place as "the best accelerator costing $449". Nevertheless, before purchasing a GeForce 8800 GTS, we would recommend checking the compatibility of the new Nvidia family with your favorite games.

The new $299 GeForce 8800 GTS 320MB is also a very good purchase for the money: support for DirectX 10, high-quality anisotropic filtering and a good level of performance at typical resolutions are just some of the advantages of the new product. Thus, if you plan to play at 1280x1024 or 1600x1200, the GeForce 8800 GTS 320MB is an excellent choice.

Unfortunately, this technically very promising card, which differs from the more expensive version only in the amount of video memory, is sometimes seriously inferior to the GeForce 8800 GTS 640MB - not only in games with high video memory demands, such as Serious Sam 2, but also where no performance difference between 512 MB and 256 MB cards was previously recorded. In particular, such games include TES IV: Oblivion, Neverwinter Nights 2, F.E.A.R. Extraction Point and some others. Considering that 320 MB of video memory is clearly more than 256 MB, the problem is evidently related to inefficient memory allocation, though unfortunately we do not know whether it stems from flaws in the drivers or from something else. Nevertheless, even with the shortcomings described above, the GeForce 8800 GTS 320MB looks much more attractive than the GeForce 7950 GT and Radeon X1950 XT, although the latter will inevitably fall in price with the advent of this video adapter.

As for the MSI NX8800GTS-T2D640E-HD-OC, we have before us a product with a good bundle that differs from the reference Nvidia card in more than just packaging, accessories and a sticker on the cooler. The video adapter is overclocked by the manufacturer and in most games provides a noticeable performance boost over the standard GeForce 8800 GTS 640MB. Of course, it cannot reach the level of the GeForce 8800 GTX, but extra fps are never superfluous. Apparently, these cards are carefully binned for their ability to run at higher frequencies; at least, our sample showed quite good overclocking results, and it is possible that most NX8800GTS OC Edition cards can be overclocked well beyond the factory settings.

The two-disc edition of Company of Heroes, considered by many game reviewers the best strategy game of the year, deserves special praise. If you are seriously aiming at buying a GeForce 8800 GTS, then this MSI product has every chance of becoming your choice.

MSI NX8800GTS-T2D640E-HD-OC: pros and cons

Advantages:

Improved performance level versus reference GeForce 8800 GTS
High level of performance at high resolutions using FSAA

Low noise level
Good overclocking potential
Good equipment

Disadvantages:

Insufficiently debugged drivers

GeForce 8800 GTS 320MB: advantages and disadvantages

Advantages:

Highest performance in its class
Support for new modes and methods of anti-aliasing
Excellent quality anisotropic filtering
Unified architecture with 96 shader processors
Future Proof: DirectX 10 and Shader Model 4.0 Support
Efficient cooling system
Low noise level

Disadvantages:

Insufficiently debugged drivers (problem with video memory allocation, poor performance in some games and / or modes)
High energy consumption

Comparative testing of four GeForce 8800GTS 512 and 8800GT cards

Let's get acquainted with GeForce 8800GTS 512 cards, compare them with the cheaper GeForce 8800GT and the veteran GeForce 8800GTX. Along the way, we break in a new test bench and catalogue flaws in the DX10 drivers.

With the release of the new GeForce 8800GTS 512 series, NVIDIA has noticeably strengthened its position. The novelty replaces the more expensive, hotter and bulkier GeForce 8800GTX, and its only cuts compared with its predecessor are the narrower 256-bit memory bus (versus 384-bit for the GeForce 8800GTX) and the smaller memory size of 512 MB (versus 768 MB). The newcomer has received not only reductions but also some improvements: the number of texture units has been increased from 32 to 64, which undoubtedly compensates in part for the simplifications. Frequencies were also raised relative to the predecessor, and the video memory can easily be expanded to 1 GB simply by installing higher-capacity chips, which some manufacturers have already begun to do. But although the GeForce 8800GTS 512 replaced the GeForce 8800GTX, its main competitor is not its predecessor but its closest relative, the GeForce 8800GT, and the crux of the matter is the latter's lower price. The GeForce 8800GTS 512 and GeForce 8800GT differ little from each other, since the GeForce 8800GT is a cut-down version of the GeForce 8800GTS 512 that, oddly enough, reached the market earlier than the full version. Both video cards are equipped with 512 MB of video memory and, as today's research shows, the same memory chips. The main differences lie in the graphics processor: in the GT version some of its functional blocks are disabled. More details are given in the table below:

As you can see, the GeForce 8800GT differs from its older sister in the number of universal processors, reduced to 112, and the number of texture units, reduced to 56. Initially the cards also differ in clock frequencies, but this does not matter much for today's review, since almost all the cards were factory-overclocked. Let's find out how the differences on paper affect reality.
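To put those paper differences into perspective, here is a rough back-of-the-envelope calculation of peak shader throughput, texel fill rate and memory bandwidth (a sketch in Python; the clocks are NVIDIA's reference values from the frequency table later in this article, the block counts are the ones quoted in this review, and the GFLOPS figure uses the era's MADD+MUL convention of 3 FLOPs per shader clock):

# Back-of-the-envelope "paper" specs for the three GPUs compared here.
def paper_specs(name, sps, tmus, core_mhz, shader_mhz, bus_bits, mem_mhz):
    gflops = sps * shader_mhz * 3 / 1000          # peak shader throughput
    fill = tmus * core_mhz / 1000                 # texel fill rate, Gtexels/s
    bw = bus_bits // 8 * mem_mhz / 1000           # memory bandwidth, GB/s
    print(f"{name:22s} {gflops:6.1f} GFLOPS {fill:5.1f} GT/s {bw:5.1f} GB/s")

paper_specs("GeForce 8800GTS 512", 128, 64, 650, 1625, 256, 1944)
paper_specs("GeForce 8800GT",      112, 56, 600, 1500, 256, 1800)
paper_specs("GeForce 8800GTX",     128, 32, 575, 1350, 384, 1800)

So on paper the GeForce 8800GTS 512 wins on shading and texturing, and only the GeForce 8800GTX's 384-bit bus keeps a bandwidth advantage (86.4 versus 62.2 GB/s).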

Leadtek 8800GTS 512

Designers from Leadtek chose a bright orange color to draw attention to their video card, and they were absolutely right: the new product will not go unnoticed.
The face of the novelty is an image of a scene from a fictional "shooter", under which are located the technical characteristics of the video card and a note about the bonus - the full version of the game Neverwinter Nights 2.
The reverse side of the box contains the characteristics of the video card, a list of the delivery set, and standard information from NVIDIA. The package includes:
  • splitter S-video> S-video + component out;
  • adapter DVI> D-sub;
  • CD with drivers;
  • CD with Power DVD 7 program;

The Leadtek 8800GTS 512 video card is based on the reference design, familiar to us from the GeForce 8800GT cards. Externally, the novelty is distinguished by a "two-story" cooling system which, unlike its predecessor's, exhausts hot air out of the case. The advantages of such a solution are obvious, and the reason for the improved cooler is most likely not that the "new" chip runs hotter, but that for serious money the buyer has every right to a better product. Frankly speaking, the GeForce 8800GT reference cooler does not do its job in the best way.
The reverse sides of the GeForce 8800GTS 512 and GeForce 8800GT look almost the same, the difference being that on the 8800GTS 512 all the components are mounted. We will see the differences later using the Leadtek 8800GT as an example; for now, let's look under the hood of the novelty.
Having removed the cooling system, we can again make sure the boards are identical. However, pay attention to the right side of the board, where the power subsystem is located. Where the GeForce 8800GT board is empty, with only bare solder pads, the Leadtek 8800GTS 512 is densely populated with components. It turns out that the GeForce 8800GTS 512 has a more complex power subsystem than the GeForce 8800GT. This is not surprising: the GeForce 8800GTS 512 runs at higher frequencies and consequently places stricter demands on power quality.
There are no external differences between the G92 chip in the Leadtek 8800GTS 512 and the G92 chip in the GeForce 8800GT video cards.
The new video card uses the same Qimonda chips with 1.0 ns access time as the GeForce 8800GT. A set of eight chips forms 512 MB of video memory. The nominal frequency for such chips is 2000 MHz DDR, but the real frequency set in the video card is slightly lower.
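The "1.0 ns equals 2000 MHz" shorthand is easy to verify: the chip's rated clock is the reciprocal of its access time, and DDR doubles the marketed "effective" figure. A minimal sketch:

access_ns = 1.0
real_clock_mhz = 1000 / access_ns      # reciprocal of the access time
effective_mhz = 2 * real_clock_mhz     # DDR transfers data twice per clock
print(effective_mhz)                   # -> 2000.0 MHz "effective"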
The card's cooler is aluminum with a copper insert. This combination of the two materials has long been used to achieve the required efficiency at lower weight and cost.
The processing of the copper "core" is at a satisfactory level, but no more.
Removing the cooler's casing reveals a curious picture: three heat pipes carry heat away from the copper base to different parts of the aluminum-fin heatsink. This layout distributes heat evenly, and the heatsink's large dimensions should benefit cooling quality, which cannot be said of the GeForce 8800GT reference cooler. It also has three heat pipes, but both they and the heatsink itself are noticeably smaller.

Differences, overclocking and efficiency of the cooling system


Differences from the GeForce 8800GT lie in the number of universal processors, increased from 112 to 128, and in the higher operating frequencies of the entire GPU.
The Leadtek 8800GTS 512 frequencies correspond to the recommended ones and are equal to 650/1625 MHz for the GPU and 1944 MHz for the video memory.

Now - about the heating of the video card, which we will check using the Oblivion game with maximum settings.


The Leadtek 8800GTS 512 warmed up from 55 degrees at idle to 71 degrees under load, while the fan remained practically inaudible. However, this proved insufficient for overclocking, so with the help of Riva Tuner we raised the fan speed to 50% of maximum.
After that, the GPU temperature did not exceed 64 degrees, while the noise level stayed low. The Leadtek 8800GTS 512 overclocked to 756/1890 MHz for the GPU and 2100 MHz for the video memory. Such high frequencies were out of reach for the GeForce 8800GT, apparently because of its simpler power subsystem.
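Expressed as percentages over the stock clocks, that overclock looks like this (a small sketch using the figures quoted above; any individual sample's headroom will of course differ):

stock  = {"core": 650, "shader": 1625, "memory": 1944}   # MHz, as quoted above
result = {"core": 756, "shader": 1890, "memory": 2100}
for domain, base in stock.items():
    print(f"{domain}: +{(result[domain] / base - 1) * 100:.1f}%")
# -> core: +16.3%, shader: +16.3%, memory: +8.0%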

Well, let's get acquainted with the next participant in our today's test - the ASUS EN8800GTS TOP video card.

ASUS EN8800GTS TOP


Looking at the packaging of powerful ASUS video cards, you might think it contains not a video card but, say, a motherboard. It's all about the large dimensions: in our case the box is noticeably bigger than that of the first participant in today's test. The large front side accommodates a big image of the company's trademark archer girl and a sizeable chart claiming a 7% speed advantage over the "regular" GeForce 8800GTS 512. The "TOP" suffix in the card's name indicates factory overclocking. A drawback of the packaging is that it is not obvious the card belongs to the GeForce 8800GTS 512 series, but by and large these are trifles. At first it is surprising how little information there is on the box, but the truth reveals itself later, quite literally.
As soon as you pick the box up by its handle, the first breath of wind opens it like a book. The information under the cover is devoted entirely to proprietary ASUS utilities: Gamer OSD, which can now not only change brightness/contrast/color in real time but also display the FPS value, record video and take screenshots; and Smart Doctor, which monitors supply voltages and card frequencies and also allows overclocking. Notably, the ASUS utility can change both GPU clock domains, the core and the shader unit, which brings it close to the well-known Riva Tuner.
The reverse side of the box contains a little bit of everything, in particular, a brief description of the Video Security utility designed to use a computer as a "smart" video surveillance system in online mode.
The card's bundle follows the principle of "nothing extra":
  • adapter for power supply of PCI-express cards;
  • S-video> component out adapter;
  • adapter DVI> D-sub;
  • bag for 16 discs;
  • CD with drivers;
  • CD with documentation;
  • brief instructions for installing a video card.

Outwardly, the video card is an almost exact copy of the Leadtek 8800GTS 512, which is not surprising: both cards follow the reference design and were most likely produced at the same factory by order of NVIDIA itself, and only then sent to Leadtek and ASUS. Simply put, today's Leadtek card could just as well have become an ASUS card, and vice versa.
It is clear that the reverse side of the video card is also no different from that of the Leadtek 8800GTS 512, except that they have different, branded stickers.
There is also nothing unusual under the cooling system. The power system on the right side of the board is fully assembled, in the center there is a G92 GPU with 128 active stream processors and eight memory chips making up 512 MB in total.
The memory chips are manufactured by Qimonda and have a 1.0 ns access time, which corresponds to a frequency of 2000 MHz.
The appearance of the GPU does not reveal its noble origins, like that of the Leadtek 8800GTS 512.
The cooling system of the ASUS EN8800GTS TOP video card is exactly the same as that of the Leadtek 8800GTS 512 video card: a copper "core" is built into the aluminum heatsink to remove heat from the GPU.
The polishing quality of the copper core is satisfactory, like its predecessor.
The heat from the copper core is distributed over the aluminum fins using three copper heat pipes. We have already seen the effectiveness of this solution on the example of the first card.

Rated frequencies and overclocking

As we have already said, the TOP suffix in the card's name denotes factory overclocking. The nominal frequencies of the novelty are 740/1780 MHz for the GPU (versus 650/1625 MHz for the Leadtek) and 2072 MHz for the video memory (versus 1944 MHz for the Leadtek), i.e. roughly +14% on the core, +10% on the shader domain and +7% on the memory. Note that for memory chips with a 1.0 ns access time the nominal frequency is 2000 MHz, so the memory here runs slightly above its rating.

We managed to overclock the card to the same frequencies as the Leadtek 8800GTS 512: 756/1890 MHz for the GPU and 2100 MHz for the video memory with a fan speed of 50% of the maximum.

Well, now let's go down a notch and get acquainted with two video cards of the GeForce 8800GT class.

Leadtek 8800GT

The Leadtek 8800GT video card is a typical representative of the GeForce 8800GT series and, in fact, differs little from the majority. The point is that GeForce 8800GT cards are cheaper than the "advanced" GeForce 8800GTS 512, which makes them no less interesting.
The box of the Leadtek 8800GT is almost the same as that of the more expensive 8800GTS 512. The differences are in the smaller thickness, the absence of a carrying handle and, of course, in the name of the video card. The inscription "extreme" after the name of the video card speaks of its factory overclocking.
The back side of the box carries brief information about the video card, its advantages and the list of the bundle. By the way, in our sample the Neverwinter Nights 2 game and the video card installation instructions were missing.
The package includes:
  • adapter for power supply of PCI-express cards;
  • splitter S-video> S-video + component out;
  • adapter DVI> D-sub;
  • CD with drivers;
  • CD with Power DVD 7 program;
  • CD with the full version of Neverwinter Nights 2;
  • brief instructions for installing a video card.

The Leadtek 8800GT video card is made according to the reference design and externally differs only by a sticker on the casing of the cooling system.
The reverse side of the card does not stand out either; however, after examining the GeForce 8800GTS 512, the missing row of chip capacitors on the left of the board catches the eye.
The cooling system is made according to the reference design and is well known to us from previous reviews.
When examining the printed circuit board, the absence of components on the right side of the card attracts attention; as we have already seen, these are mounted in the 8800GTS 512 version. Otherwise, it is a quite ordinary board with a G92 GPU cut down to 112 stream processors and eight memory chips totaling 512 MB.
Like the previous participants in today's testing, Leadtek 8800GT memory chips are manufactured by Qimonda and have a 1.0 ns access time, which corresponds to 2000 MHz.

Rated frequencies and overclocking

As already mentioned, the Leadtek 8800GT is factory-overclocked. Its nominal frequencies are 678/1700 MHz for the GPU and 2000 MHz for the video memory. Yet despite such considerable factory overclocking, the card showed a modest result in manual overclocking: only 713/1782 MHz for the GPU and 2100 MHz for the memory. Recall that participants in previous reviews reached 740/1800 MHz on the GPU and 2000-2100 MHz on the memory. Note also that we achieved this result at maximum fan speed because, as we have said, the GeForce 8800GT reference cooler does not do its job in the best way.

Now let's move on to the next participant in today's testing.

Palit 8800GT sonic


The face of the Palit 8800GT sonic is a spectacularly drawn battle frog. Silly, but very funny! However, our life consists of such nonsense, and being reminded of it once again does no harm. Moving from fun to business: note the lower right corner of the box, where a sticker lists the card's frequencies and other characteristics. The frequencies are almost those of the GeForce 8800GTS 512: 650/1625 MHz for the GPU and 1900 MHz for the video memory, just 44 MHz short of the 8800GTS 512.
The reverse side of the box does not contain anything remarkable, because everything interesting is located on the front side.
The package includes:
  • adapter for power supply of PCI-express cards;
  • S-video> component out adapter;
  • adapter S-video > RCA (composite);
  • adapter DVI> D-sub;
  • adapter DVI> HDMI;
  • CD with drivers;
  • CD with the full version of the game Tomb Raider The Legend;
  • brief instructions for installing a video card.
It should be noted that this is the first GeForce 8800GT-class card with a DVI > HDMI adapter to visit our test laboratory; previously, only some video cards of the AMD Radeon family came with such an adapter.
Here comes the first surprise! The Palit 8800GT sonic is based on a PCB of Palit's own design and is equipped with a proprietary cooling system.
The reverse side of the card also has some differences, but it is still difficult to judge the pros and cons of the new layout. What we can fully judge is the mounting of the card's components and its quality.
Since the standoffs between the GPU heatsink and the board are shorter than the gap between them, and the heatsink is fastened with screws without any damping pads, the board itself and the graphics chip substrate are noticeably bent. Unfortunately, this can damage them; the weak point is not the strength of the PCB itself but the traces, which can crack under strain. It is by no means certain this will happen, but the manufacturer should pay more attention to how cooling systems are attached to its video cards.
The cooling system is made of painted aluminum and consists of three parts - for the GPU, video memory and power subsystem. The heatsink base for the GPU does not shine with any special treatment, and a solid gray mass is used as a thermal interface.
Changes in the PCB design affected the power subsystem: small components were replaced with larger ones and their layout was altered. Otherwise, this is the familiar GeForce 8800GT with a G92 GPU and eight memory chips totaling 512 MB.
Like the rest of today's test participants, the memory chips are manufactured by Qimonda and have a 1.0 ns access time.

Cooling system efficiency and overclocking

We will check the efficiency of the Palit 8800GT sonic's proprietary cooling system using Oblivion at maximum settings, as always.


The video card warmed up from 51 to 61 degrees, which is, in general, a very good result. However, the fan speed rose noticeably, and the already not-quiet cooling system became clearly audible against the general background. That is why the Palit card is hard to recommend to those who value silence.
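For readers who want to reproduce this kind of idle-to-load heating test themselves: in this review fan speeds were adjusted with Riva Tuner, but on a modern NVIDIA driver the same measurement can be scripted with the nvidia-smi utility. A sketch, assuming nvidia-smi is installed and on the PATH:

import subprocess
import time

def gpu_temp_c() -> int:
    """Read the current GPU core temperature, in degrees C, via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

readings = []
for _ in range(60):        # sample once a second for a minute while the game runs
    readings.append(gpu_temp_c())
    time.sleep(1)
print(f"min {min(readings)} C, max {max(readings)} C")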

Despite the changes in the power subsystem and improved cooling, the Palit 8800GT sonic video card overclocked to the usual frequencies of 734/1782 MHz for the GPU and 2000 MHz for the video memory.

So we finished getting to know the participants of today's testing, and therefore we will move on to reviewing the test results.

Testing and conclusions

Today's testing differs not only in that we compare four video cards against each other, but also in that we ran it on a test bench different from the one you are familiar with; its configuration is as follows:

The change of test platform is due to the fact that we originally planned to test the Leadtek 8800GTS 512 and ASUS EN8800GTS TOP in SLI mode, but, unfortunately, the ASUS card did not survive our torture by the end of the tests, and the idea collapsed. We therefore decided to postpone SLI testing to a separate article once we have the necessary hardware in hand; for now we limit ourselves to single cards. We compare seven video cards, one of which is a GeForce 8800GTS 512 overclocked to 756/1890/2100 MHz. For comparison we added GeForce 8800GT and GeForce 8800GTX cards running at NVIDIA's recommended frequencies. For easier navigation, here is a table with the clock frequencies of all test participants:

Video card (label on the charts)                               GPU core/shader, MHz   Effective memory, MHz
Leadtek 8800GTS 512                                            650 / 1625             1944
ASUS EN8800GTS TOP                                             740 / 1780             2072
Leadtek 8800GT                                                 678 / 1674             2000
Palit 8800GT sonic                                             650 / 1625             1900
Overclocked GeForce 8800GTS 512 ("8800GTS 512 756/1890/2100")  756 / 1890             2100
GeForce 8800GT, reference ("8800GT")                           600 / 1500             1800
GeForce 8800GTX, reference ("8800GTX")                         575 / 1350             1800

We used ForceWare 169.21 and ForceWare 169.25 drivers for Windows XP and Windows Vista, respectively. We traditionally start our acquaintance with the test results with the 3DMark tests:
The 3DMark results do show who is stronger and who is weaker, but the differences are so small that there are no clear leaders. Still, it is worth noting that the most expensive participant, the GeForce 8800GTX, took last place. To complete the picture, we need the game tests, which, as before, we ran with 4x anti-aliasing and 16x anisotropic filtering.
In Call of Duty 4, it is noticeable that the Leadtek 8800GT is almost on a par with the Leadtek 8800GTS 512, and the ASUS EN8800GTS TOP barely lags behind the overclocked GeForce 8800GTS 512. The Palit 8800GT sonic takes the penultimate place, slightly ahead of the reference GeForce 8800GT. The winner is the GeForce 8800GTX, apparently thanks to its wider (compared to the other participants) memory bus.
In Call of Juarez under Windows XP, the Leadtek 8800GTS 512 is almost level with the GeForce 8800GTX, which its wider memory bus no longer saves. Note that the Leadtek 8800GT does not lag behind them, and at 1024x768 even pulls ahead, thanks to higher frequencies than the other two cards. The leaders are the ASUS card and the overclocked GeForce 8800GTS 512; in the penultimate place, again, is the Palit card, just ahead of the reference GeForce 8800GT.
Call of Juarez under Windows Vista ran into problems at 1600x1200: speed fluctuated wildly and dropped very low in places. We assume the problem is a lack of video memory in such a heavy mode; we will check this in the next review using the ASUS 8800GT with 1 GB of video memory. Note right away that the GeForce 8800GTX had no such problems. Judging by the two lower resolutions, the balance of power has hardly changed compared to Windows XP, except that the GeForce 8800GTX reminded us of its noble origins, though it did not become the leader.
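To see why 512 MB can become the bottleneck at 1600x1200 while the 768 MB GeForce 8800GTX sails through, here is a rough estimate of just the render-target memory in that mode (a sketch with simplifying assumptions: 32-bit color and depth, the 4x MSAA used in our tests, no compression; textures and geometry, which actually dominate, come on top of this):

width, height, bytes_pp, samples = 1600, 1200, 4, 4
msaa_color = width * height * bytes_pp * samples   # multisampled color buffer
msaa_depth = width * height * bytes_pp * samples   # depth/stencil buffer
resolved   = 2 * width * height * bytes_pp         # resolved front + back buffers
print(f"{(msaa_color + msaa_depth + resolved) / 2**20:.0f} MB")   # ~73 MB

Even this conservative figure leaves only about 440 MB for textures and geometry on a 512 MB card, and once the working set spills into system memory over the bus, exactly the kind of stuttering described above appears.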
In Crysis under Windows XP, the balance of power has changed slightly, but essentially everything is the same: the Leadtek 8800GTS 512 and Leadtek 8800GT are roughly level, the leaders are the ASUS EN8800GTS TOP and the overclocked GeForce 8800GTS 512, and last place goes to the GeForce 8800GT. Note also that as resolution grows, the gap between the overclocked GeForce 8800GTS 512 and the GeForce 8800GTX shrinks thanks to the latter's wider memory bus. Still, high clock speeds prevail, and yesterday's champion remains out of the running.
The Windows Vista problem at 1600x1200 did not spare Crysis either, passing over only the GeForce 8800GTX. As in Call of Juarez, there were speed jerks and, in places, severe performance drops, sometimes below one frame per second. Judging by the two lower resolutions, this time the Leadtek 8800GTS 512 outran its younger sister and took third place. The first places went to the ASUS EN8800GTS TOP, the overclocked GeForce 8800GTS 512 and the GeForce 8800GTX, which finally took the lead at 1280x1024.
In Need for Speed ProStreet, the GeForce 8800GTX leads, and at 1024x768 by a large margin. It is followed by the Leadtek 8800GTS 512, then the ASUS EN8800GTS TOP and the overclocked GeForce 8800GTS 512; the last places go to the GeForce 8800GT and Palit 8800GT sonic. Since the GeForce 8800GTX leads, we can conclude that the game depends heavily on video memory bandwidth. That may also explain why the overclocked GeForce 8800GTS 512 versions turned out slower than the stock one: apparently the memory latencies grew along with its clock frequency.
In Need for Speed Carbon we see a familiar picture: the Leadtek 8800GTS 512 and Leadtek 8800GT are level, the overclocked GeForce 8800GTS 512 and ASUS EN8800GTS TOP take the first places, and last place goes to the GeForce 8800GT. The GeForce 8800GTX looks decent, but nothing more.
In Oblivion, it is striking that at 1024x768 the overclocked GeForce 8800GTS 512 and the ASUS EN8800GTS TOP took the last places. We assumed the culprit was memory latencies, increased along with the frequency, and we were right: after lowering the overclocked card's memory frequency to nominal, it scored over 100 frames per second. As resolution grows the situation rights itself, and the former outsiders become leaders. Note also that the Leadtek 8800GT outruns the Leadtek 8800GTS 512, most likely thanks to its higher shader clock.
Prey proved undemanding for all the cards, which lined up according to their clock frequencies... except that the GeForce 8800GTX behaved a little differently, which is understandable: it has a wider memory bus, and the game depends heavily on its bandwidth.

Conclusions

The purpose of today's testing was to find out how much these video cards differ from one another and whether the higher price of the "advanced" GeForce 8800GTS 512 is justified. The GeForce 8800GTS 512 outclasses the GeForce 8800GT on paper, including in the number of active functional blocks inside the GPU. The obvious advantages of the new GeForce 8800GTS 512 cards are a high-quality, quiet cooling system and higher overclocking potential than the GeForce 8800GT. The ASUS card deserves special mention: thanks to factory overclocking it takes the leading position. Of course, you can overclock a card yourself, and most GeForce 8800GTS 512 cards will likely reach the ASUS card's frequencies. Overall, we note once again that the new family of video cards based on the G92 graphics chip has turned out very successful and may well replace the recent leader, the GeForce 8800GTX.

Pros and cons of individual video cards:

Leadtek 8800GTS 512

Pros:
  • good overclocking potential;
  • solid bundle;
  • bright and convenient packaging.
Minuses:
  • none noted.

ASUS EN8800GTS TOP

Pros:
  • factory overclocking;
  • high-quality cooling system;
  • good overclocking potential.
Minuses:
  • too large and inconvenient packaging.

Leadtek 8800GT

Pros:
  • factory overclocking;
  • solid bundle.
Minuses:
  • none noted.

Palit 8800GT sonic

Pros:
  • factory overclocking;
  • alternative cooling system;
  • solid bundle.
Minuses:
  • badly bent board in the GPU area;
  • noticeable fan noise.