The NVIDIA GeForce GTX 1660 Review, Feat. EVGA XC GAMING: Turing Stakes Its Claim at $219
by Ryan Smith & Nate Oh on March 14, 2019 9:01 AM EST

As NVIDIA’s carefully orchestrated rollout of its Turing family of GPUs and related video cards keeps steaming right along, we’re back again this month with the next piece in the GeForce product stack. Last month was, of course, the GeForce GTX 1660 Ti; and if you know anything about NVIDIA naming then you know that NVIDIA never does a stand-alone Ti card. As a suffix indicating higher performance, if there’s a Ti card, then there needs to be a regular card as well. And today NVIDIA is delivering on just that with the vanilla GeForce GTX 1660.
For the Turing family, NVIDIA has been following a very straightforward top-to-bottom rollout, and today’s GeForce GTX 1660 launch continues that pattern. With the GTX 1660 Ti coming in at the very top end of the mainstream market with a $279 price tag, NVIDIA is now ready to launch its lower-tier, more wallet-friendly $219 counterpart. This continues NVIDIA’s cascade of Turing video cards down to lower prices and lower performance levels, raising the bar for video card performance at each price tier.
Turning our eyes to NVIDIA’s new card then, within the NVIDIA Turing GeForce product stack the GTX 1660 is essentially a cut-down GTX 1660 Ti, and serves as this generation’s version of the GeForce GTX 1060 3GB. Which is to say that it’s a card that uses the same GPU in a slightly cut-down configuration – in this case the same TU116 introduced for the GTX 1660 Ti – while making a larger tradeoff in memory to bring the price of the card down. Gone is GDDR6 in favor of cheaper, more widely available GDDR5, and better still, you get a full 6GB of it.
Equally important, however, is that NVIDIA has (largely) stopped the naming shenanigans for this generation by not using memory capacity to indicate overall GPU performance. Though I’m still not a fan of suffixes (as they tend to get unintentionally cut off), this situation is massively better than the GTX 1060 naming system. So I will give credit to NVIDIA for not making the same consumer-unfriendly decision twice in a row.
Setting up today’s launch then, from a consumer standpoint the GTX 1660 may very well be the most important NVIDIA video card launch of the year. As what’s essentially NVIDIA’s $200 video card – with NVIDIA cheekily tacking another $20 on it – we’re now getting into NVIDIA’s high-volume desktop products. It’s mainstream cards like these that make up the biggest chunk of NVIDIA’s desktop shipments, so this is the card that’s going to be setting the pace for the mainstream market the world over. But first, let’s see if it’s any good.
NVIDIA GeForce Specification Comparison

| | GTX 1660 | GTX 1660 Ti | GTX 1060 3GB | GTX 1060 6GB |
|---|---|---|---|---|
| CUDA Cores | 1408 | 1536 | 1152 | 1280 |
| ROPs | 48 | 48 | 48 | 48 |
| Core Clock | 1530MHz | 1500MHz | 1506MHz | 1506MHz |
| Boost Clock | 1785MHz | 1770MHz | 1708MHz | 1708MHz |
| Memory Clock | 8Gbps GDDR5 | 12Gbps GDDR6 | 8Gbps GDDR5 | 8Gbps GDDR5(X) |
| Memory Bus Width | 192-bit | 192-bit | 192-bit | 192-bit |
| VRAM | 6GB | 6GB | 3GB | 6GB |
| Single Precision Perf. | 5 TFLOPS | 5.5 TFLOPS | 3.9 TFLOPS | 4.4 TFLOPS |
| "RTX-OPS" | N/A | N/A | N/A | N/A |
| TDP | 120W | 120W | 120W | 120W |
| GPU | TU116 (284 mm²) | TU116 (284 mm²) | GP106 (200 mm²) | GP106 (200 mm²) |
| Transistor Count | 6.6B | 6.6B | 4.4B | 4.4B |
| Architecture | Turing | Turing | Pascal | Pascal |
| Manufacturing Process | TSMC 12nm "FFN" | TSMC 12nm "FFN" | TSMC 16nm | TSMC 16nm |
| Launch Date | 3/14/2019 | 2/22/2019 | 8/18/2016 | 7/19/2016 |
| Launch Price | $219 | $279 | $199 | MSRP: $249, FE: $299 |
Diving into the GeForce GTX 1660’s specifications, what we find is a very familiar echo of the GTX 1660 Ti. NVIDIA is of course using the same TU116 GPU as before, making the GTX 1660 the company’s second-tier TU116 card. However, rather than a fully-enabled GPU, the GTX 1660 gets a slightly cut-down TU116, shaving off 2 of the GPU’s 24 SMs and leaving 22 enabled for a total of 1408 CUDA cores.
Otherwise there are no other changes on the GPU front; in particular all of the ROPs are enabled, and clockspeeds have actually gone up just a tad, with the official boost clock rated for 1785MHz. So unlike NVIDIA’s higher-performance products, where the steps between GPU configurations are much larger, the GTX 1660 family keeps things close, giving NVIDIA an avenue for salvaging chips without creating a wide gap between the two cards. As a result, on paper the GTX 1660 has 92% of its bigger sibling’s shading/texturing/geometry throughput, and 101% of its ROP throughput. But don’t let that fool you; how TU116 has been cut down plays second fiddle to the memory changes.
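Those on-paper ratios fall straight out of the specs. As a quick sanity check, here’s a minimal sketch – just the standard back-of-the-envelope math, not anything NVIDIA publishes – that derives the theoretical FP32 and ROP throughput figures from the table above:

```python
# Back-of-the-envelope throughput math using the spec table's figures.
# FP32 FLOPS = CUDA cores x boost clock x 2 (one fused multiply-add per core, per clock).
# ROP rate   = ROPs x boost clock. Theoretical peaks only, not measured performance.

def fp32_tflops(cuda_cores, boost_mhz):
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

def pixel_rate_gpix(rops, boost_mhz):
    return rops * boost_mhz * 1e6 / 1e9

shading_ratio = fp32_tflops(1408, 1785) / fp32_tflops(1536, 1770)  # GTX 1660 vs GTX 1660 Ti
rop_ratio = pixel_rate_gpix(48, 1785) / pixel_rate_gpix(48, 1770)

print(f"GTX 1660 FP32: {fp32_tflops(1408, 1785):.1f} TFLOPS")  # ~5.0 TFLOPS
print(f"Shading ratio: {shading_ratio:.0%}")                   # ~92% of GTX 1660 Ti
print(f"ROP ratio:     {rop_ratio:.0%}")                       # ~101% of GTX 1660 Ti
```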
As I mentioned at the start of this article, like the GTX 1060 3GB before it, NVIDIA’s defining change for this card isn’t the GPU, but rather the memory. For the GTX 1060 3GB that change was tossing out half the memory, resulting in a 3GB GDDR5 card that to this day I still think was a short-sighted decision. Thankfully, for the GTX 1660 vanilla, NVIDIA is doing something different: swapping out cutting-edge GDDR6 for the tried and true backbone of the video card industry that is GDDR5. GDDR5 of course isn’t as fast as GDDR6 – and critically, this is where GTX 1660 loses a lot of its performance versus GTX 1660 Ti – but it still delivers a good amount of memory bandwidth. And better still, it means NVIDIA is shipping the card with a more appropriate 6GB of VRAM.
By the numbers then, the GTX 1660 ships with 6GB of GDDR5, which is attached via a 192-bit memory bus and operating at 8Gbps. As it so happens, this means that the GTX 1660 has exactly the same amount of memory bandwidth as the GeForce GTX 1060 6GB and 3GB; so on a generational basis, NVIDIA needs to get more performance out of the same amount of memory bandwidth.
Or, to put things within the context of the Turing generation, this is only 2/3rds the data rate of GTX 1660 Ti’s 12Gbps GDDR6, so the aggregate bandwidth reduction is significant, dropping from 288GB/sec to 192GB/sec. However as we’ll see here for the GTX 1660 – and as we’ve seen before with other video cards that ship with multiple memory speeds – video card performance scaling is far less than 1-to-1 with memory speeds. So while this causes the GTX 1660 to fall behind the GTX 1660 Ti by a decent gap, TU116 isn’t completely hamstrung by GDDR5.
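For those who want to see the math behind those bandwidth figures, memory bandwidth is simply the per-pin data rate multiplied by the bus width. A minimal sketch using the numbers from the spec table:

```python
# Memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
# Figures come straight from the spec table above.

def memory_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8  # GB/sec

print(memory_bandwidth_gbs(8, 192))   # GTX 1660, 8Gbps GDDR5     -> 192.0 GB/sec
print(memory_bandwidth_gbs(12, 192))  # GTX 1660 Ti, 12Gbps GDDR6 -> 288.0 GB/sec
print(memory_bandwidth_gbs(8, 192))   # GTX 1060 6GB, 8Gbps GDDR5 -> 192.0 GB/sec
```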
Finally, to draw one last parallel to the GTX 1060 3GB, like its predecessor the GTX 1660 targets a 120W TDP. The GTX 1060 cards all shipped with the same TDP, and so it goes for the GTX 1660 cards as well. The net result is that the vanilla GTX 1660 is essentially a bit less power efficient than its Ti sibling, delivering less performance for the same power consumption. And while NVIDIA’s TDP choice is functionally arbitrary, it gives the company an outlet for marginal TU116 chips that are a little more hooked on the juice (the best chips, of course, going into laptops). The upshot is that because the GTX 1660 has fewer SMs vying for power, it can boost just a bit higher, giving the card higher average clockspeeds. This also helps to keep the two GTX 1660 cards a bit closer in performance than the specs would otherwise indicate – at least when the vanilla GTX 1660 isn’t memory-bandwidth bound.
Price, Product Positioning, & The Competition
In the Pascal generation, the GTX 1060 3GB was NVIDIA’s $199 “sweet spot” video card. However like the rest of the new Turing generation, the GTX 1660 is subject to price inflation. The GTX 1660 Ti came in at $279 – which was $30 higher than before – and similarly the GTX 1660 is getting a $20 price bump to $219. This means that although NVIDIA isn’t quite hitting the sweet spot, the GTX 1660 is essentially their take on a $200 video card.
Like last month’s GTX 1660 Ti launch, today’s GTX 1660 launch is a pure virtual launch for NVIDIA and its board partners, meaning that NVIDIA is not doing any kind of retail reference card here, and all the cards hitting the shelves are customized vendor cards. In practice, expect most (if not all) of these cards to look like the GTX 1660 Ti cards that just hit the market; with the same GPU running at the same TDP, it’s quick and efficient for board vendors to reuse their designs, even with the change in memory types. TU116 is of course pin-compatible with itself, though I’m not yet sure if GDDR6 and GDDR5 can use the exact same PCBs, as the new memory has a different pin count.
NVIDIA is calling today’s launch a hard launch, though truth be told I’m not convinced it will be quite as hard of a launch as last month’s GTX 1660 Ti. We actually had to go through a few board partners before we were able to secure a sample, as other board vendors didn’t have samples available (thanks, EVGA!). So I get the distinct impression that local offices and distributors were cutting it rather close on receiving their cards ahead of today’s launch date. At any rate, by the time this article goes live, we should have a better idea of just how well-stocked this launch is.
Looking at the product stacks, within NVIDIA’s lineup this card is set to replace the GTX 1060 3GB, with the earlier GTX 1660 Ti having done the same to the GTX 1060 6GB. In practice the channel has been mostly depleted of GTX 1060 6GB cards anyhow, but whatever cards remain have just been rendered obsolete for retail purposes, as the GTX 1660 is all-around better for the same price or less. Meanwhile the remaining GTX 1060 3GB cards – which were largely under $200 to begin with – will soon face the same fate.
Otherwise, NVIDIA is once again playing this launch straight in terms of bundles. There will be no game bundles for any of the GTX 1660 series, so the value of the product is the value of the video card itself. The outgoing GTX 1060 cards on the other hand do qualify for NVIDIA’s Fortnite bundle, though it goes without saying that Fortnite is a free game to begin with.
Meanwhile in terms of intended market, like the GTX 1660 Ti before it, NVIDIA is targeting the vanilla GTX 1660 at the mainstream market, and is particularly pitching it as an upgrade from the GTX 960, GTX 760, R9 380, and other ~$200 mainstream video cards from earlier this decade. Turing cards haven’t been a true generational upgrade over their Pascal predecessors, and GTX 1660 is no different; the new card is about 28% faster than the GTX 1060 3GB it replaces.
As for NVIDIA’s loyal opposition, the launch of the GTX 1660 will continue to put pressure on AMD’s aging Polaris video cards. The GTX 1660 is faster than all of them – including the fastest RX 590 – so AMD and its board partners will have little choice but to cut prices. This means AMD can try to position their cards as spoilers to the GTX 1660 – and $219 RX 590 cards are already popping up – but they can’t take on NVIDIA in terms of performance or power efficiency. AMD will have an edge in pack-in bundles though, as they’re continuing to offer their Raise the Game Bundle, which for the RX 590 is the full 3-game pack.
Q1 2019 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon RX Vega 64 | $499 | GeForce RTX 2070 |
| | $349 | GeForce RTX 2060 |
| Radeon RX Vega 56* | $279 | GeForce GTX 1660 Ti |
| Radeon RX 590 | $219 | GeForce GTX 1660 |
| Radeon RX 580 (8GB) | $179/$189 | GeForce GTX 1060 3GB (1152 cores) |
77 Comments
Qasar - Thursday, March 14, 2019 - link
it also depends on if the consoles even have any games someone would want to play... for me.. those games are not on consoles.. they are on a comp... not worth it for me to buy a console as it would just sit under my tv unused..

D. Lister - Saturday, March 16, 2019 - link
@eva02langley: "...console hardware is more efficient since it is dedicated for gaming only."

smh... console hardware used to be more efficient for gaming when console hardware was composed of custom parts. Now, consoles use essentially the same parts as PCs, so that argument doesn't work anymore.
Fact of the matter is, consoles remain competitive in framerate by either cutting down on internal resolution, or graphic quality features, like AA, AF, AO, or in many cases, both res and features. Take a look at the face-offs conducted by the Digital Foundry over at Eurogamers.net.
D. Lister - Saturday, March 16, 2019 - link
@eva02langley: I also find it rather ironic that you, who has often criticized NVidia for not being open-sourced enough with their technologies, are making a case here for consoles that are completely proprietary and closed-off systems.

maroon1 - Thursday, March 14, 2019 - link
Faster, consumes much less power, smaller, and produces less noise than the RX 590, which costs the same.

Even if you ignore the performance advantage, the GTX 1660 is still the better of the two. No reason to buy a big, power-hungry GPU when it has no performance advantage.
0ldman79 - Thursday, March 14, 2019 - link
What is it with Wolfenstein that kills the 900 series?

I mean they're still competitive in almost everything else, but Wolfenstein just buries the 900 series horribly. If it's that bad I'm glad I'm not addicted to that series. I had thought about picking up a copy, but damn...
Opencg - Thursday, March 14, 2019 - link
They may be using some async techniques. The famous example is Doom, where many 900 series cards saw worse performance on Vulkan due to async being a CPU-based, driver-side implementation.

Dribble - Thursday, March 14, 2019 - link
I think it's because it has FP16 code in the shaders - which Turing and newer AMD have hardware support for, but Pascal doesn't. It was AMD's trump card until Turing, so you'll find a few AMD-sponsored games use FP16.

Ryan Smith - Thursday, March 14, 2019 - link
"What is it with Wolfenstein that kills the 900 series?"Memory capacity. It really wants more than 4GB when all of its IQ settings are cranked up, which leaves everything below the GTX 980 Ti a bit short on space.
AustinPowersISU - Thursday, March 14, 2019 - link
A used GTX 1070 still makes the most sense. You can easily get one for less than this card and have much better performance.

Nvidia needs to do better.
eva02langley - Thursday, March 14, 2019 - link
It is still their best offering in terms of price/performance from Turing. However, yeah, that should have been done way before.