Author Topic: [MEGATHREAD] Personal Computer - Updated builds thanks to Logical Increments  (Read 1629475 times)

Why does the GTX 780 only come with a max of 3GB of VRAM? Wtf.

Because you don't need more. If you have 3 monitors you buy a Titan; if you have 1 or 2 you buy a 780 or 780 Ti.


Quote
-tons of links-

You could have just linked one lol.

Alright, maybe 6+ monitors :P The only things the Titan wins in are VRAM and texture units.


The reason the Titan still sells for $1,000 is that it has something like 24x the compute performance of the 780 Ti, although it's inferior in typical gaming workloads. It's now in the vein of workstation graphics cards. I should mention, though, that Nvidia chose to limit the 780 Ti's compute performance on purpose so they could continue to sell the Titan, similarly to how Intel arbitrarily locks some of their processor models for no reason except that they can and it will make them more money.

Unless you're doing hardcore editing/3D modelling/streaming/video rendering, the i7 is useless. And if you're in the market for 780s, you're probably not building a workstation.

That is a horrible choice in SSDs.

And a loving 670? Really?

Because unless you're using six 1080p screens at a time, you don't need more.

Also, don't SLI. It's not needed and has a host of instability issues with games. Always go for the single card.

What's so bad about my SSDs? They work great. Both operating systems boot in 4 seconds, and neither has given me any problems.

Quote
Because you don't need more. If you have 3 monitors you buy a Titan; if you have 1 or 2 you buy a 780 or 780 Ti.

But if that's the case, why does the GTX 770 come in a 4GB version and not the GTX 780? Wouldn't the higher-end card get more VRAM instead of less?

Edit: Double post, forgot to edit.


Quote
But if that's the case, why does the GTX 770 come in a 4GB version and not the GTX 780? Wouldn't the higher-end card get more VRAM instead of less?

Really?

No, 1GB more of VRAM does not make the 770 a better card than a 780.
The 770 is not a dual-4K card. It does not matter that it has 1GB more VRAM, because you'll never use it at 1080p.

Quote
What's so bad about my SSDs? They work great. Both operating systems boot in 4 seconds, and neither has given me any problems.

They are not necessarily bad SSDs, but a bad purchase: their price/performance is worse than, say, an 840 EVO's.

Quote
Really?

No, 1GB more of VRAM does not make the 770 a better card than a 780.
The 770 is not a dual-4K card. It does not matter that it has 1GB more VRAM, because you'll never use it at 1080p.

Quote
They are not necessarily bad SSDs, but a bad purchase: their price/performance is worse than, say, an 840 EVO's.
I never said the 770 was better; all I said was that I figured the GTX 780 would have more VRAM than the lesser card. And I got both my SSDs for about $60 off.

Quote
I never said the 770 was better; all I said was that I figured the GTX 780 would have more VRAM than the lesser card. And I got both my SSDs for about $60 off.

So why did you bother to post the full list price (it seemed pretty braggy) if you didn't actually pay full price?

http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review

240 texture units on the 780 Ti, 224 on the Titan.

Sorry, my source misinformed me then.

I was basically trying to get at this...
Quote
The reason the Titan still sells for $1,000 is that it has something like 24x the compute performance of the 780 Ti, although it's inferior in typical gaming workloads. It's now in the vein of workstation graphics cards. I should mention, though, that Nvidia chose to limit the 780 Ti's compute performance on purpose so they could continue to sell the Titan, similarly to how Intel arbitrarily locks some of their processor models for no reason except that they can and it will make them more money.

Quote
So why did you bother to post the full list price (it seemed pretty braggy) if you didn't actually pay full price?

Those were the only things I got on sale. I don't remember how much I paid; I only posted that to show how much the system was worth.

Quote
The reason the Titan still sells for $1,000 is that it has something like 24x the compute performance of the 780 Ti, although it's inferior in typical gaming workloads. It's now in the vein of workstation graphics cards.

http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/14

This shows equal (if not better) compute performance for the 780 Ti vs. the Titan. I'm seeing one benchmark where the Titan won by far, Folding@home double precision, with a 6ms difference. Not exactly 24x the performance.

Quote
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/14

This shows equal (if not better) compute performance for the 780 Ti vs. the Titan. I'm seeing one benchmark where the Titan won by far, Folding@home double precision, with a 6ms difference. Not exactly 24x the performance.
http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663.html
Quote
Is GeForce GTX 780 Ti More Titanic Than Titan?

At this juncture, the most natural question to ask is: well what about the $1000 GeForce GTX Titan? Nvidia is calling GeForce GTX 780 Ti the fastest gaming graphics card ever, and it’s selling for $700. That’s less than Titan for a card with technically superior specifications.

Titan lives on as a solution for CUDA developers and anyone else who needs GK110’s double-precision compute performance, but is not beholden to the workstation-oriented ECC memory protection, RDMA functionality, or Hyper-Q features you’d get from a Tesla or Quadro card. Remember—each SMX block on GK110 includes 64 FP64 CUDA cores. A Titan card with 14 active SMXes, running at 837 MHz, should be capable of 1.5 TFLOPS of double-precision math.

[Image caption: You don't get this option with GeForce GTX 780 Ti]

GeForce GTX 780 Ti, on the other hand, gets neutered in the same way Nvidia handicapped its GTX 780. The card’s driver deliberately operates GK110’s FP64 units at 1/8 of the GPU’s clock rate. When you multiply that by the 3:1 ratio of single- to double-precision CUDA cores, you get a 1/24 rate. The math on that adds up to 5 TFLOPS of single- and 210 GFLOPS of double-precision compute performance.

That’s a compromise, no question. But Nvidia had to do something to preserve Titan’s value and keep GeForce GTX 780 Ti from cannibalizing sales of much more expensive professional-class cards. AMD does something similar with its Hawaii-based cards (though not as severe), limiting DP performance to 1/8 of FP32.

And so we’re left with GeForce GTX 780 Ti unequivocally taking the torch from Titan when it comes to gaming, while Titan trudges forward more as a niche offering for the development and research community. The good news for desktop enthusiasts is that Nvidia’s price bar comes down $300, while performance goes up.
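The rate math in the quoted passage can be sanity-checked with a quick sketch. Core counts and the Titan's 837 MHz clock are the figures cited above; the 780 Ti's ~875 MHz base clock is an assumption inferred from its quoted ~5 TFLOPS single-precision figure.

```python
# Rough FLOPS arithmetic from the quoted Tom's Hardware passage.
# Assumes 2 FLOPs per core per clock (one fused multiply-add).

def gflops(cores, clock_ghz, flops_per_clock=2):
    return cores * clock_ghz * flops_per_clock

# GTX Titan: 14 active SMXes x 64 FP64 cores, FP64 units at full clock
titan_dp = gflops(14 * 64, 0.837)   # ~1500 GFLOPS = 1.5 TFLOPS

# GTX 780 Ti: 15 SMXes x 192 FP32 cores (assumed ~875 MHz base clock)
ti_sp = gflops(15 * 192, 0.875)     # 5040 GFLOPS = ~5 TFLOPS

# Driver runs FP64 units at 1/8 clock; 3:1 SP:DP core ratio -> 1/24 rate
ti_dp = ti_sp / 24                  # 210 GFLOPS

print(round(titan_dp), round(ti_sp), round(ti_dp))
```

The 1/24 figure is the product of two separate handicaps: the hardware's 3:1 ratio of FP32 to FP64 cores, and the driver clocking the FP64 units at 1/8 rate.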

That benchmark suite is not exclusively double precision. It's no surprise that the Titan doesn't win the single-precision benchmarks and only wins the double-precision one. Huh. Especially because Ryan Smith said exactly that at the top of the AnandTech article page:
Quote
Jumping into compute, we’re entering the one area where GTX 780 Ti’s rule won’t be nearly as absolute. Among NVIDIA cards its single precision performance will be unchallenged, but the artificial double precision performance limitation as compared to the compute-focused GTX Titan means that GTX 780 Ti will still lose to GTX Titan whenever double precision comes into play.

Benchmarks are all fine and dandy, but you still have to read the words that are there. That article didn't bother to test more double-precision benchmarks because it had already told you that the Titan would win.