
The Official Graphics Card and PC gaming Thread

Discussion in 'Building a new PC' started by abuzar1, Jun 25, 2008.

  1. sammorris

    sammorris Senior member

    This is all at stock settings. Since the 955 overclocks better than the 940, I think the lower heat output is a result of that.
     
  2. DXR88

    DXR88 Regular member

    AMD isn't exactly known for making cool-running processors. Socket A, anybody?
     
  3. sammorris

    sammorris Senior member

    That was more an issue of the non-existent heat spreader though, rather than TDP per se. Pretty sure P4s put out a lot more heat; they were just easier to cool.
     
  4. DXR88

    DXR88 Regular member

    I don't know, man. When I changed out my Socket A mobo for a Socket 754 some years ago, I remember the Socket A board was warped right down the center. It looked like a potato chip. It still worked, though.
     
  5. omegaman7

    omegaman7 Senior member

    I've never owned an Intel, but given the reviews I've read, Intel has hot chips as well. The only really hot chip I've had was the 1.4GHz Thunderbird; I saw that hit 60C. I've heard of Intels that get WAYYYY hotter. Just an observation, by the way :p

    Remember, I am a neutral party. No fanboyism...

    My 940 Phenom overclocked to 3.5GHz with proper air cooling never breaches ~52C, at 1.4V. Most of the Intels in this class run at a lower voltage, so of course they'd be cooler ;)
     
  6. sammorris

    sammorris Senior member

    I ran my Q9550 at 1.35V to be secure at 3.65GHz, and I know a fair few people pushing for high overclocks were using 1.4V on them too.
    The only hot Intel chips I remember were the old Pentium Ds, and to a lesser extent P4s. I haven't really seen any cooling nightmares since Core 2, though i7s put out a lot of heat when overclocked.
     
  7. harvrdguy

    harvrdguy Regular member

    Ok you guys - I'm entering here a bit late - at the cpu power consumption discussion.

    But just prior to this, there was some amazing stuff about 30" performance on 6970.

    just prior to that, however:

    - - - - - - - - - -

    My 8800 will soon be a piece of burnt toast:

    Hahaha.

    Well - at 25fps, the game (Dragon Rising) is slightly laggy - just a bit leaden. On the other hand, 28 fps feels smooth - the lagginess appears to be gone. Same game, same chapter - I just Alt-tabbed out, chose the 621 clock and associated 10% faster shader and 10% faster memory clocks, and alt-tabbed back in.

    Are you saying that 3fps can't make that much of a difference, and I am just deluding myself? DXR stated a while back that, at the expense of 10 degrees hotter temps, his GTX board would allow a mild overclock that smoothed out certain games, when he was running at the edge of his card performance. That's what I'm talking about.

    Maybe you're right, but it sure seems to go from laggy to smooth.

    This is one time where I can't run from one end of the street to the other, in the L4D boathouse finale, with a timeclock, and PROVE that you run forwards just as fast as backwards, and at the same exact speed no matter what rifle you are carrying.

    Remember that? LOL

    Lagginess is quite subjective of course. I will say this however: On all the other games, I am running at stock 594 clock, since I am getting at least 28 to 30 fps, which seems smooth to me.

    For example, I finally installed Vista, just to be able to play Medal of Honor single player, which was half-crashing on opening (a sound card issue all the trouble-shooters suggested) and it wouldn't keep track of my progress. Now MOH is working under Vista, and I got Riva working with the on screen display, but the overclocking settings are not set up. However, I saw in the test that I was getting about 30fps, so I made a mental note that I would not be needing to overclock for MOH under Vista.

    By the way, speaking of Vista, should I go ahead and install Windows 7 also? Is DX11 something that will impress me on any of my five new games (MOH, Black Ops, MW2 - already finished that, BC2 single player, and Dragon Rising)?

    - - - - - - - - - -

    Wow, roughly twenty miles from you and Shaff!!! Haha - amazing!! And, like Shaff, I am startled to learn that there was a Far Cry movie!

    - - - - - - - - -

    WHAT!!!!

    From "lemon - best avoided" to "utter madness." I put my wallet away, now I've pulled it back out again. Each little guy is $370 - that's $740 for two of them, at 424 watts total. Will my toughpower 750 handle two?

    Whaaaat???? So you're saying that the little bugger appears to have 325 performance (relative to 4870 = 100) if you're running 30" with AA???

    Holy crapola! That reminds me of that Phenom II Russian review that Russ was pushing about 2-3 years ago, where the Phenom actually beat the i7 at certain resolutions - Russ said they were tuned to that specific resolution.

    So, having succumbed to the Sam influence (although Sam mostly tried to hide the awesomeness) LOL, and having picked up a 30" Dell two years ago - my ears are perking up!!!!!

    LOVELY LOVELY LOVELY LOVELY! Did I mention, lovely!!!

    Ok, Sam. Now let's get back to 30". You sound like you're gonna trade your dual 4870x2 cards. You're gonna drop to 35 watts total idle, from 150.

    You're gonna pick up 325 x 2 x 90% = about 6 4870 cards packed into two 6970s - using these numbers because you run 30" with AA. And you can't even say it's 1.5 X better than 4870 quad cf, because quad cf doesn't scale at the 90% or better range - more like around 50%, right? So are you expecting, basically, to double your performance running Warhead, for example, compared to what you have now?

    I repeat, do you think my Toughpower 750 can handle two of those 6970s?


    And lastly - what do you think would have been the 6970 cf fps figures in this Warhead chart you posted, at the enthusiast setting, instead of the gaming setting?

    Oh - wait - one last question. I currently have 4 gigs of DDR2 memory. I can add another 4 gigs for about $229 at Newegg. Since the 6970 graphics cards will each have 2 gigs of memory, and two of them a total of 4 gigs, I remember you said that XP 32-bit can't keep track of more than 4 gigs total - including graphics card memory. So it sounds like for sure I need a 64-bit OS - I guess Windows 7 64-bit - am I right?

    So my very last question is:
    1. Do I definitely need a 64 bit OS if I plan on CF 6970s?
    2. Should I also double my ram to 8 gigs, or can I leave it at 4 gigs?

    Rich
     
    Last edited: Dec 22, 2010
  8. DXR88

    DXR88 Regular member

    If you're planning on using more than 3.25GB of RAM, then yes, a 64-bit OS is a must.

    More RAM is never a bad thing, but I find 4 gigs to be sufficient for W7 x64. If you've got 234 bucks to blow, though, don't let me stop you.

    Who makes your mobo's chipset? If it's nForce you can't run Crossfire on it. I'm sure you already knew that, but you mentioned an 8800.
     
    Last edited: Dec 22, 2010
  9. omegaman7

    omegaman7 Senior member

    I spent half that on my 8GB of RAM :p Perfect timing, I guess. Though I'm probably gonna sell it soon...
     
  10. sammorris

    sammorris Senior member

    Yes. While there is a perceptible difference between 25 and 28fps, it is so slight that it would never lead anyone to say it was smooth instead of laggy. The only way that comes about is if that's what people want to think is happening: because they see a tiny detectable improvement, they convince themselves that's what they're seeing. In short, a placebo.
    DX11 makes a bit of a difference, but only really because it incorporates DX10. Graphically speaking, DX11 does almost nothing yet. However, several games that use DX11 don't offer a separate DX10 mode. What this means is, if you have DX11 available you can play in DX11, but if you only have DX10 you can only play in DX9, as there is no intermediate mode. And DX10 vs DX9 does make a difference.
    Obviously, you can only get DX11 with the GTX400 series or HD5000 series and newer, the 8800GTX certainly isn't capable of it. My 4870X2s aren't even capable of it.


    The 'lemon' concept is a question of price. Compared to the GTX580, the HD6970 is still a bit of a lemon. It doesn't really have as much of a power advantage as Radeons have had previously, and it's still definitely inferior to the 580, performance-wise.
    However, the fact is, the 6970 is in stock, and $370. Compare that to the GTX580 at $510 and still in minimal stock. Things look a bit brighter when you consider this.
    Overall the 6970 is indeed faster than the GTX570, by about 10%. Good news, as it's only $20 (6%) more expensive, and uses slightly less power (210W vs 230-270W).

    However, on their own, the high-end Radeons are nothing to shout about. Nothing to shout about at all. The reason for this is just how capable crossfire configurations are across the board with the HD6 generation.
    Now, the HD6800s with 1GB of video memory and smaller GPUs, can't cope with high levels of AA at 2560x1600. They are limited to 4x and sometimes 2x MSAA in games due to this. However, at smaller resolutions like 1920x1200 this obviously doesn't apply. And when memory restrictions don't apply, HD6800 crossfire configs simply pancake everything else there is out there.

    Geforce GTX570: $350, 252W(ave), Performance Index 194
    HD6850 Crossfire: $360, 254W, Performance Index 292
    Radeon HD6970: $370, 212W, Performance Index 212

    HD6870 Crossfire: $480, 302W, Performance Index 332
    Geforce GTX580: $510, 290W, Performance Index 230
    HD6950 Crossfire: $600, 340W, Performance Index 374
    GTX570 SLI: $700, 504W (ave), Performance Index 350
    HD6970 Crossfire: $740, 424W, Performance Index 412

    There's simply no comparison with the crossfire configs, at all.
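
    If you want to see where that comes from, divide each index by the price and the power draw - a quick Python sketch using only the numbers above, nothing new, just the arithmetic:

        # performance per dollar and per watt, straight from the figures above
        cards = {
            "GTX570":      (350, 252, 194),
            "HD6850 CF":   (360, 254, 292),
            "HD6970":      (370, 212, 212),
            "HD6870 CF":   (480, 302, 332),
            "GTX580":      (510, 290, 230),
            "HD6950 CF":   (600, 340, 374),
            "GTX570 SLI":  (700, 504, 350),
            "HD6970 CF":   (740, 424, 412),
        }
        for name, (price, watts, index) in cards.items():
            print("%-12s %.2f index/$  %.2f index/W" % (name, index / price, index / watts))

    Run that and the crossfire configs come out on top in both columns.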

    The performance of the HD6900s doesn't really show up until high resolutions, however, where they are basically unfazed by maximum detail and huge levels of AA.
    It's worth bearing in mind the huge 325 figure only applies to Crysis Warhead and not most other games. However, this does highlight the fact that as games get more demanding, the HD6970 will cope better than anything else out there.

    There's no such thing as a particular resolution for CPUs as CPUs don't render graphics at all. The only possible fact I could link that to is that below a certain resolution, the graphics cards are bottlenecked by the CPU, as they're producing such a high framerate. Presumably the 'specific resolution' refers to the point at which this bottleneck is no longer an issue, i.e. the graphical stress is so high the CPU is not the limiting factor. Considering this is different for every graphics/CPU pairing, there is no 'tuned to a specific resolution'. Not for games at least. Video rendering is obviously a completely different story.


    You can't really use the '325x2x90%' to work out how the cards run overall, merely how they run in Crysis. And as you see from that graph, it's actually x100%.
    In Warhead, the four 4870s scale to roughly 200-220% from what I can gather, so I'm looking at going from 300-320 to 650. For Warhead, specifically. Other games won't see such an increase. So yeah, I'll be doubling my frame rate in Warhead.
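
    Rough sums, if you want them (indices relative to a single HD4870 = 100; the ~100% CF scaling is from that Warhead graph, and the 3.1 multiplier is just the midpoint of my 300-320 estimate):

        # Warhead-only scaling estimate
        quad_4870 = 100 * 3.1        # four 4870s only scale to roughly 300-320 combined
        cf_6970 = 325 * 2 * 1.0      # two HD6970s at ~100% crossfire scaling in Warhead
        print(round(cf_6970 / quad_4870, 2))   # a little over 2x, hence "doubling my frame rate"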

    As for enthusiast, this is a bit of guesswork, but I would hazard a guess at (Minimums)
    HD5970: 2fps
    HD6970: 14fps
    CF6970: 28fps
    SLI580: 32fps


    Running Warhead in 32-bit Windows is impossible. The game needs so much memory that you could have all the graphics power in the world and the game would still run horrendously. 8GB would be the bare minimum for Warhead, ideally 16GB, and obviously, 64-bit Windows.
    You will want an SSD with your page file and preferably also the game install on it, as the game will be reading/writing tens of gigabytes of data whilst you are playing it.

    64-bit windows is effectively mandatory for one 1GB graphics card, let alone two 2GB cards.
    4GB of memory you can 'get away with', but I will put it out there that even if I reduce the detail level and turn off AA so my 4870X2s can cope with the game, it is unplayable on my system because I only have 4GB of RAM.


    DXR is also right, I can't remember which board you have. If it's an nforce board, you can't use crossfire and will need a new board.

    For the record, the RAM I paid £101 for in February is now £44. Sucks eh? :p
     
  11. omegaman7

    omegaman7 Senior member

    Wow, Warhead is a RAM hog, eh? And I thought GTA IV was a hog LOL!
     
  12. shaffaaf

    shaffaaf Regular member

    Placebo or not, if a person feels it's smoother, that's all that counts. In the end that's all we need: to be able to play a game without lag and at decent image quality.

    As for the 6970 being 10% ahead of the GTX 570, overall it's about the same from the numerous reviews I have seen.

    http://www.hardwarecanucks.com/foru...899-amd-radeon-hd-6970-hd-6950-review-28.html

    http://www.hexus.net/content/item.php?item=27983&page=20
    (look at both aggregate and normalised FPS)

    But as we can see, it does pull away at 2560 and above.
     
  13. sammorris

    sammorris Senior member

    Omega: Yeah, GTA4 is a ridiculous memory hog, but it doesn't leak memory like Crysis and Warhead do, so if you're playing it in a large map for a prolonged period, GTA4 will be using less memory. I'd still advocate at least 8GB to enjoy GTA4 properly, ideally 12, but unlike Crysis, 16GB would probably be slight overkill. Given how memory prices have sunk recently, I intend to go up to 12GB when I get the HD6970s.
    Shaff: Placebos vary with perspective though, and I'm sure Rich's perspective would change if his 8800 prematurely expired before he could afford to replace it. If it was a case of 'the upgrade is in the post', or if there was no downside to overclocking a card, then sure, go nuts, but the fact is, the risk of failure is high enough as it is with an 8800, let alone when you're overclocking it and it's already past its use-by date.
    It does amuse me how you manage to find the most negative benchmarks from the set.
    A lot of sites vary wildly with test results from others using the same game, and when this occurs I look to the sites that have the best testing methodology. The benchmarks from both HardOCP (but not exclusively as they test at 2560x1600) and thetechreport detail that the HD6970 is below the GTX580, but above the GTX570.
    I calculated the averages from two benchmark suites, and it comes out as
    HD5870 180
    GTX480 190
    HD6950 191
    GTX570 194
    HD6970 212
    GTX580 230

    So on that basis, I treat the HD6970 10% above the GTX570.
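
    That 10% is just the ratio of those two averaged indices, rounded up a shade:

        # where the "10% above the GTX570" comes from
        hd6970, gtx570 = 212, 194
        print((hd6970 / gtx570 - 1) * 100)   # ~9.3%, which I round to 10
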
    Taking the hardwarecanucks tests you posted:
    AvP: HD6950 wins against GTX570 (++)
    BBC2 (Known to be biased down against the HD6 series): HD6970 marginally behind GTX570 (-)
    DiRT2 (Known to be biased down against the HD6 series): HD6970 approximately equal to GTX570 [ignoring tests with no AA, as with cards this powerful there is no reason not to use it]
    F1 2010: HD6970 negligibly ahead of GTX570 [contradictory to several other tests of this game]
    Just Cause 2: HD6970 considerably ahead of GTX570 when AA is applied, even with it disabled (+)
    Lost Planet 2 (Considerably nvidia biased): HD6970 considerably behind GTX570 (--)
    Metro 2033 (known to be slightly biased): HD6950 equal to GTX570 (+)

    Sum the pluses and minuses and I come out with 4 pluses versus 3 minuses, so on balance the HD6970 is indeed superior to the GTX570, if only slightly, but given the inclusion of nvidia-biased games, that's not strictly an accurate test.


    The positioning of the cards only re-affirms my earlier statement though, if you actually read it, which is that buying a single HD6900 card is the wrong way to do it. You want one HD6800 for the low end, two HD6800s for the high end, or two HD6900s for the top end. It's crossfire that brings the big advantages of this generation due to its epic scaling, and it's crossfire that gives the HD6900s the GPU power to support their enormous potential.


    Hexus aren't included because they quote a lot of false facts in their reviews, such as the HD6970 being £40 more than the GTX570. The HD6970 is actually cheaper than the 570, quite significantly so, in fact.
     
    Last edited: Dec 22, 2010
  14. shaffaaf

    shaffaaf Regular member

    Again with this bias stuff. Really, who sets the standard on which game is biased?

    http://techreport.com/articles.x/20126/16

    Tech Report: overall, 570 = 6970.

    So what if a title is "biased"? Overall we can see the cards are the same.

    Also, negative? They really aren't negative; all they show is what they report on. Just because it's not pro-AMD doesn't make it negative.

    The best GPU releases this year IMO have been all Nvidia (the GTX 5 series and the 460), and with the GTX 560 about to hit, it should continue.

    With the 5 series, wasn't Metro highly biased? What's changed is the tessellator. The game utilises it, and the 5 series had, IIRC, 1 unit.

    Fermi, on the other hand, is based around it. You call it biased, I call it hardware fail.

    With Battlefield, every site I have seen seems to show them butting heads, so how is that biased either way?

    While I agree that on FPS for the price dual cards are better, there are still the problems of new games, noise, power consumption and future upgrades.

    While I do say the GTX 5 series are better than the HD6, it's in comparison to their previous series. Compared to each other, most are exactly on par. No difference, just get whichever you prefer. But overall both are flops, and TSMC are to blame.

    Really, I think we shouldn't be arguing green vs red, but arguing about who is going to fill that 3rd spot to push these two.
     
  15. harvrdguy

    harvrdguy Regular member

    Well, good news for me - I don't have an Nvidia chipset board - it's an Asus P5E with the X38 chipset, and I've got 16 lanes of PCI-E for each of two graphics cards in crossfire.

    It sounds like Shaff and DXR are both behind the concept that a few fps can smooth out a laggy game and improve playability significantly. We all know that 24.7 fps (or some such figure close to that) fools the eye when it comes to motion pictures.

    Playing Dragon Rising, I just happened one day to notice that I had dropped down to the low 20s - it was the mission where you go down and blow up the pump at the refinery. I think the extra demand was because the huge oil tanks were in view - so more going on on the screen than usual. I dropped from 4x AA down to 2x AA, and managed only to bring the fps up to 25 - at the bare threshold for fluid motion, speaking of movies. The lagginess was in how my mouse responded - a tiny but perceptible lag.

    I thought back to the experimenting I had done with overclocking, and selected the first overclock step from the 594, which was the 621 - it's the next core clock up. Not much of a step, but the shader and memory clocks were about +100 from stock, about a 10% gain.

    This improved my fps to 28 - and the slight mouse lagginess disappeared. I will admit that I do seem to have quite a tolerance for lag - seeing as I played quite a bit of Grand Theft Auto IV at around 12 fps - killing off all the bad guys on the police wanted list, etc., and completing many missions.

    Anyway, I suppose we've beaten that subject to death. You're right of course; if I kill off my 8800GTX before I am able to pop for at least the first 6970, then that will chill my game playing for a bit. But the weather is quite chilly, and I added another little 80mm exhaust over the hole I created by removing the 3 slot covers below the 8800 card, and I put the extra Kaze in the case blowing on the 8800, instead of the 1600 RPM Scythe - it makes quite a racket and also a vibration, but the case closes up and I can't hear any of that through the headphones. So the 8800GTX runs below 90 degrees. I know that the overclocking is supposed to be quite damaging, even at low operating temps, but I'll take DXR's experience with the GTX series, as he had good luck overclocking for quite a while.

    ------------------


    Now getting back to the twin 6970s - I am quite excited - obviously because I am in the 30" camp. So selfishly I don't really care that the 6970 doesn't really shine for 24" play - I pretty much only care that it becomes a BEAST for AA and 2560x1600 play. And let me ask you - WHY EXACTLY does it do that? (And you mentioned the same is also true of the Nvidia 570 and 580.)

    I appreciate your advice about memory, suggesting 12 gigs. Unfortunately the max memory I can utilize on this motherboard is 8 gigs, and I can get the 8gigs DDR2 for about $229 (gskill) on newegg.

    I also appreciate the Warhead advice about the SSD for page file, and maybe for the entire game - good idea.

    So I was correct - you are going to double your graphics power at the 650 number, realizing 100% scaling - awesome!! Assuming I can follow suit with two of those cards myself, 8 gigs of memory, and 64-bit Windows 7 - I am going to assume that my present PSU - your old Toughpower 750 - will handle the load with my Q9450 CPU. Am I correct?

    How important will it be to overclock the Q9450 up to 3.2GHz or beyond from the present 2.66, for good frame rates in Warhead with the 2x6970 configuration and 8 gigs of RAM, and will my Toughpower PSU still handle the load including the CPU overclocking? I have two disk drives, and by then, depending on price, I'll have the SSD also, but I assume, maybe incorrectly, that the power draw of an SSD is marginal.

    Also, you guesstimated 28 fps with 6970 CF on Warhead at enthusiast settings. Are you planning to try to play the game at enthusiast settings with your i5 and see how it plays - at least in the early chapters before the ice?

    Rich
     
  16. DXR88

    DXR88 Regular member

    Overclocking an Nvidia graphics card is like lighting a fire inside a room made of dried wicker vines. Is it a good idea? Hell no. Is it fun to see how long your luck runs? Hell yes.

    The only reason I OCed the GTX460 is because it seems to run a hell of a lot cooler than the 8800 series ever did. Not to mention it was already factory overclocked.
     
  17. sammorris

    sammorris Senior member

    Shaff: A game that is biased is one that does not follow a pre-established trend of performance. We have an accepted baseline for older cards to use as reference, and if a game shows, for example, the GTX260-216 and HD4870 - which we know are equals - to be mismatched, i.e. one faster than the other, then the game is biased. The same goes for a few other pairings as well.
    There are some AMD-biased titles out there as well, not many but they do exist, and that's also considered in testing.
    You can't simply write off bias and consider biased games as 'just another one of the tests' - where do you stop? TWIMTBP titles with 20% bias? Lost Planet 2 with 40% bias? HAWX 2 with 115% bias?
    Including such titles in an average score completely changes the outcome of a benchmark suite.

    From techreport:
    "Holy moly, we have a tie. The GTX 570 and 6970 are evenly matched overall in terms of raw performance. With the results this close, we should acknowledge that the addition or subtraction of a single game could sway the results in either direction"
    While they have at least excluded HAWX2, Lost Planet 2 is included in that test. Its removal places the HD6970 where it actually performs, a few percent above the GTX570.
    Meanwhile, the HD6950 performs a few percent below it.

    The best GPU releases this year have not all been nvidia. The GTX460 was well placed at the time, but nothing exemplary, as it was no faster than the HD5830, no more efficient, and not really any cheaper. To this day a 1GB GTX460 costs more than an HD5830 did towards the end of its life.
    Then the HD6850 came out, which is agreed by the majority of unbiased people to have been the best card of the year. It's the same price as a 1GB GTX460, considerably faster, and considerably more efficient, at 127W vs 170W.
    Rating the GTX5 cards as 'better' is laughable.
    You could potentially argue that the GTX580 was worth buying before the HD6900 came out, but now it's a complete waste.
    9% extra performance, for 80W more power, $150 more cost, and being nearly impossible to find? Are you mad?
    Same goes for the GTX570, 60W more power for no extra performance at all over the HD6970. It being a 1280MB card doesn't really help either.
    As I keep saying though [and you keep ignoring] I don't think any of these cards are worth buying solo. What you really want is two HD6850s, as they way outstrip anything else in their price, power and power-connector sector. Better scaling and indeed reliability from this gen means that it really is the best option.
    Two HD6850s are the same price as a GTX570, sometimes cheaper, typically a massive 50% faster, are 6+6 not 6+8, and only use 254W max, versus the GTX570 which can go up to 270W. There's simply no contest. Find me a major game that isn't HAWX 2 where a GTX570 beats two 6850s. Go find one.

    You will notice that while Metro 2033 was biased initially, the more complex HD6900s have been able to negate the bias and actually perform beyond the level of the equivalent geforces.

    Battlefield is one title that takes an unusual turn, as unlike the other games it's not biased from the outset through coding, the drivers for the game are simply bad AMD-side, and in reality, they always have been. However, again, see earlier comment about 6850s, they're 20% on top of the 580, let alone the 570.
    Even AvP which was definitely nvidia-biased at one stage sees the HD6970 perform a distinctive percentage above the GTX570, and close behind the 580.

    September 2009 [HD5 launch] - April 2010 [Fermi Launch]:
    £200: HD5850
    £300: HD5870
    £450: CF5850

    April 2010 - July 2010 [GTX460 launch]:
    £200: HD5850
    £300: HD5870
    £450: CF5850
    £800: SLI480 [not really worth it!]
    No change mainly, because the GTX465, GTX470 and GTX480 were all terrible.

    July 2010 - October 2010 [HD6800 launch]:
    £150: GTX460
    £200: HD5850
    £300: HD5870/SLI460 -> SLI460 was pretty good here, IF you ran low resolutions. 768MB per GPU was a no-go for 30" res.
    £400: CF5850

    October 2010 - November 2010 [GTX580 launch]:
    £150: HD6850
    £200: HD6870
    £300: CF6850
    £400 for CF6870 isn't really worth it, so I held people off here in anticipation of the HD6900 launch.

    November 2010 - December 2010 [GTX570 and HD6900 launch]:
    £120: GTX460
    £150: HD6850
    £190: HD6870
    £300: CF6850
    £450: GTX580 [not really worth it]
    £900: SLI580

    December 2010 - January 2011 [GTX560 launch]:
    £140: HD6850
    £220: HD6950
    £280: CF6850
    £440: CF6950
    £920: SLI580 [again, not worth it]

    Barring the GTX460 for low-end, there's no occasion where using a geforce has been worth it in the high-end performance sector at all. The cards are too underpowered for their ridiculous price tags and power consumption. To say that they're better isn't a matter of opinion, it's simply false.
    You can argue what you like from a technological perspective, but until there's a midrange card as good as the HD6850 from nvidia that scales as well, and until the GTX580 is readily available and loses at least £100 from its price tag, Radeons are the only products to buy. I'm sorry but that's simply how it is.

    Rich: Not actually true on the 24.7fps front, primarily because your monitor isn't producing 24.7fps, it's producing 60fps. It's just that, because your PC isn't putting out that much, some of those 60 frames are the same. Your PC might happen to finish rendering a frame just after the monitor has sent one identical to the previous one, so what you're actually seeing is a frame rate that's lower still. The human eye does notice this, and it's why I can distinguish lag all the way up to 60fps.
    This also doesn't cover microstutter, which can sometimes happen in dual-graphics scenarios [thankfully less often with modern cards]. Frame time can jump wildly between frames, with each odd frame literally taking twice as long to render as the even frames, for example. This doesn't get picked up by FPS counters, but it does show in the real world. With CF (and SLI) it's conceivable to notice a game lagging at as high as 100fps.
    In Left 4 Dead 2, for example, the only thing that typically microstutters is the fire effects. Stare up close at a molotov going off: the fps counter says 80, but it feels more like 40. It's perceptibly not smooth. It's hardly appalling, but it's noticeable nonetheless.
    It stands to reason that you don't want microstutter if you're getting a low frame rate too.
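
    To put made-up numbers on the molotov example (these frame times are purely hypothetical, just to show why the counter and the feel disagree):

        # microstutter: the counter averages frame times, perception tracks the slow frames
        fast_ms = 8.3                        # hypothetical even-frame render time
        slow_ms = 2 * fast_ms                # odd frames taking twice as long, as described
        avg_ms = (fast_ms + slow_ms) / 2
        print("counter reports ~%d fps" % (1000 / avg_ms))          # ~80
        print("pacing feels more like ~%d fps" % (1000 / slow_ms))  # ~60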

    Rich: turning AA down from 4x to 2x doesn't reduce required performance much, but it does reduce video memory a fair bit. The 8800 probably isn't out of video memory at 4x, just not powerful enough to render the effect, so that will be why you don't see much of a benefit in FPS.
    Dragon Rising, along with a few other titles, uses a frame-delayed cursor. This means that when you send a command, the game queues it, for example, 3 frames behind the current action, so 3 frames have to be rendered before the screen pans round to where you moved. At 100fps this is 30ms, a largely imperceptible delay. At 20fps this is 150ms, which is quite a substantial period of time - slower than the fire rate of most of the automatic weapons in the game.
    Games that use this annoy me, as you have to be putting out a very high FPS not to get this delay, even if the game seems smooth without it.
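
    The delay maths, if you want to check it yourself (the 3-frame queue is just the example figure above; how deep the queue really is varies by game):

        # cursor delay from a frame-queued input: queue depth x frame time
        queue_frames = 3
        for fps in (100, 20):
            delay_ms = queue_frames * 1000 / fps
            print("%d fps -> %d ms before the view responds" % (fps, delay_ms))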

    You're right that overclocking does damage at lower temperatures than standard. 'Lower than 90 degrees' in the context of a Radeon means 'you can probably run the fan a bit quieter than that'. 'Lower than 90 degrees' in the context of a stock GeForce means 'that's the absolute maximum'. 'Lower than 90 degrees' in the context of an overclocked GeForce means 'that's not low enough'.


    The HD6970 takes on 2560x1600 so well for the same reason the GTX400s take on 2560x1600 so well. Apart from having 2GB of video memory so it never runs out, the architecture is BIG. Actual maximum output isn't so high because the clock speeds of the GPU aren't enormous and neither is the number of cores, but the cores themselves are very complex, and designed in such a manner that they're unfazed by huge workloads.

    Buying 8GB for $230 is a bit barmy, as you could buy a new board for a Core i5 or i7 AND 8GB of RAM for that much.

    With the Q9450 at stock the 750W Toughpower should be OK, but it will be loaded quite heavily. With the CPU overclocked [you will want that for Warhead and crossfire, I assure you], you're probably pushing things.
    You'll probably be pulling around 700-720W AC, so 600-620W DC out of the unit. Not the maximum, but remember this is still a Thermaltake PSU, even if it does have CWT internals, and I'm not sure I like the prospect of loading one so high, especially if it's in a hotbox environment. [A hotbox refers to a case where the PSU sits at the top sucking in heat from the CPU area. The far better way of running a PSU is in the bottom of the case facing downwards, pulling floor air from a vent and being isolated from other components.]
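
    Rough sums behind those numbers (the ~86% efficiency is my assumption for a CWT-built Toughpower at that sort of load, not a measured figure):

        # back-of-envelope PSU load estimate
        ac_draw_w = 710          # estimated draw at the wall with CF 6970s and an overclocked Q9450
        efficiency = 0.86        # assumed PSU efficiency at that load
        dc_load_w = ac_draw_w * efficiency
        print("%d W DC out of a 750 W rated unit" % dc_load_w)   # ~610 W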

    I'll try running Warhead with the 6970s no doubt, but because I only have 4GB of RAM I haven't even been able to get the game to load the level at enthusiast settings; so little of the game [2.8GB/12.5GB] fits in RAM that it can't actually store enough to render a single frame.

    DXR88: Depends which 8800. The coolers on the pre-overclocked GTX460s can be extremely beefy [they're the same coolers used on HD5870s and sometimes even GTX470s], hence why they take so much load.
    GTX460s overclocked to the limit around 940MHz put out over 250 watts, a dramatic increase from their typical 170 at stock, and as much as two HD6850s put together!
     
  18. Estuansis

    Estuansis Active member

    For the record the HD6850 is one of the most significant cards released since the 8800GT. Definitely enjoying the scaling. One of very few deals where it was actually worth it in the end to get a sidegrade.

    I think I pinched my pennies close enough to make the cost pocket change. Guy who bought my 5850s says he's keeping them as a "mating pair" lol XD. Gotta admit I felt bad about breaking my 4870s up because they were truly a pair and had never been run separately. Shoot me but computers have a certain soul to them you really learn to feel out after a while. Crossfired video cards are something you don't readily split up.

    Really is a funny cycle though. I basically have to go Crossfire to get a worthwhile upgrade now. 4870s, 5850s, and now 6850s, it really is addictive :D Also, 4850s would have been cool.
     
    Last edited: Dec 23, 2010
  19. sammorris

    sammorris Senior member

    Once you move to crossfire, it's rarely an upgrade to buy a single card. Performance just doesn't increase that much in one, or even two generations. It's taken until the GTX580/HD6970 to actually provide an improvement over one 4870X2, albeit by a measly 28%/18%.
     
  20. Estuansis

    Estuansis Active member

    Which is why I love my video cards. Overkill for your resolution is the only way to go if you ask me. Crysis maxed? I say lol, let's do it :p Don't even use my cheap settings anymore. Just crank it and enter all the little performance tweaks. Turn off edge AA, move physics to the 4th core, slightly higher LOD for distant textures, etc. Just to smooth it all out.

    And don't even get on me about it because Crysis is an awe-inspiring showcase of technology.
     
    Last edited: Dec 23, 2010
