Friday, 30 November 2018

Nvidia Turing release date, news and features

The wait for Nvidia’s next-generation Turing graphics cards was excruciating, but the latest and greatest GPU architecture is finally here, and its sheer power was worth the wait.

Now that we finally have the Turing-powered Nvidia GeForce RTX 2080 Ti, RTX 2080 and RTX 2070 in our hands, we know they’re the best graphics cards for the games we’ll be playing for the next few years – as long as you’re willing to pay (and don’t get a faulty RTX 2080 Ti).

Real-time ray tracing is the premier technology of Nvidia Turing, bringing this holy grail of graphics tech to a mainstream GPU for the first time. This could bring about a revolution in the way the best PC games are rendered.

It’s a true paradigm shift, as the new GeForce RTX gaming cards may completely change the look of the best PC games.

Cut to the chase

  • What is it? Nvidia’s latest graphics card architecture
  • When is it out? September 20
  • What will it cost? $599 (£569, AU$899) - $10,000 (£7,830, AU$13,751)

Nvidia Turing release date

All of the currently announced Nvidia Turing GPUs are now out in the wild – the RTX 2080 Ti, 2080 and 2070. However, the RTX 2080 Ti will likely have limited availability for a while, possibly into the new year.

As for mobile versions, there’s been some speculation that a mobile version of the RTX 2080 is on the way, and while we’ve heard ‘by the end of 2018’ thrown around a few times, we’re guessing it’ll launch in early 2019. Either way, if the mobile chips are nearly as powerful as their desktop counterparts, the best gaming laptops are going to be out of this world.

We’ve also seen some leaked benchmarks, courtesy of Tom’s Hardware, that suggest the RTX 2060 may be announced sometime soon.

Nvidia Turing price

For starters, the Nvidia Quadro RTX GPUs are much more expensive, which should come as no surprise for high-end chips such as these.

  • Nvidia Quadro RTX 8000: $10,000 (£7,830, AU$13,751)
  • Nvidia Quadro RTX 6000: $6,300 (£4,935, AU$8,660)
  • Nvidia Quadro RTX 5000: $2,300 (£1,800, AU$3,160)

Of course, these are graphics cards meant for commercial work in the visual effects industry.

For the more consumer-focused cards, prices seem to have risen as well. The Nvidia GeForce RTX 2080 Ti appears to be taking the place of Nvidia’s past Titan cards, whereas the other cards fall more in line with the 10-series.

The prices for the announced cards are as follows:

  • Nvidia GeForce RTX 2080 Ti: $1,199 (£1,099, AU$1,899) 
  • Nvidia GeForce RTX 2080: $799 (£749, AU$1,199) 
  • Nvidia GeForce RTX 2070: $599 (£569, AU$899) 

It should be noted that, at the time of writing, the prices on Nvidia’s online store are a bit higher than those Nvidia CEO and founder Jensen Huang revealed at the Nvidia GeForce Celebration at Gamescom 2018. For instance, the RTX 2080 Ti was initially revealed at $999, but that price isn’t currently reflected in the online store.

These prices might make you want to spring for the Pascal-based GTX 1080 Ti, and we wouldn’t blame you. However, you might not want to wait for Black Friday and Cyber Monday – manufacturers are quickly running out of stock, according to a report from GamersNexus.

Nvidia Turing specs

The headline feature of Nvidia Turing is the inclusion of ray-tracing tech that can render more realistic visuals and lighting in real time, without having to fall back on programming tricks. These specialized RTX cores essentially calculate how light and sound travel in a 3D environment, at a rate of up to 10 GigaRays per second on the RTX 2080 Ti. They should also allow Nvidia Turing-based graphics cards to process ray tracing up to 25 times faster than Pascal.

When these RTX cores aren’t being used for ray tracing, they’ll essentially switch off, ceasing to draw any power.
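To put that figure in perspective, here’s some quick back-of-the-envelope arithmetic (ours, not Nvidia’s) on what 10 GigaRays per second buys you at 4K and 60 frames per second:

```python
# Back-of-the-envelope ray budget for the quoted 10 GigaRays/sec figure.
# Illustrative numbers only -- not official Nvidia math.

rays_per_second = 10e9        # RTX 2080 Ti, per Nvidia's quoted figure
width, height = 3840, 2160    # 4K resolution
fps = 60                      # target frame rate

pixels_per_frame = width * height             # ~8.3 million pixels
rays_per_frame = rays_per_second / fps        # ~167 million rays per frame
rays_per_pixel = rays_per_frame / pixels_per_frame

print(f"Rays per pixel per frame: {rays_per_pixel:.1f}")  # roughly 20
```

That works out to only a couple dozen rays per pixel per frame, which is why Turing leans on a hybrid of rasterization and ray tracing (more on that below) rather than tracing every pixel from scratch.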

In addition to these RTX cores, the Turing architecture also features Tensor Cores, like the ones found in Volta. These specialized cores enable artificial intelligence and neural networking, so that Turing cards get better at rendering over time – something previously exclusive to supercomputers.

With the ability to deliver 500 trillion Tensor operations a second, this technology accelerates deep learning training and inference. It also allows Nvidia to offer Deep Learning Super Sampling (DLSS), which could be a version of super sampling that won’t bring your computer to its knees.

Even in games that don’t support this new DLSS tech, these AI-fueled cores should deliver traditional anti-aliasing much more efficiently – up to eight times more efficiently.
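For context, traditional super sampling gets its quality by rendering at a higher internal resolution and averaging the result down, which is exactly why it’s so demanding. Here’s a minimal NumPy sketch of that brute-force downsampling step (purely illustrative and ours; DLSS instead uses a trained neural network to approximate the high-quality result without the oversized render):

```python
import numpy as np

def supersample_downscale(image, factor=2):
    """Average each factor x factor block of a high-res render down to
    display resolution -- the brute-force step DLSS aims to sidestep."""
    h, w, c = image.shape
    return image.reshape(h // factor, factor,
                         w // factor, factor, c).mean(axis=(1, 3))

# A stand-in 4x4 'render' averaged down to 2x2
hi_res = np.random.rand(4, 4, 3)
print(supersample_downscale(hi_res).shape)  # (2, 2, 3)
```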

Unlike Volta, which used HBM2, Nvidia Turing adopts GDDR6 memory – up to 11GB in the RTX 2080 Ti, which can clock in at up to 14Gbps. That’s quite the leap over the Pascal-powered Nvidia Titan Xp, which clocked in at 11.4Gbps.

The Nvidia GeForce RTX 2080 Ti is an absolute behemoth of a GPU. With 4,352 CUDA cores, 11GB of GDDR6 VRAM with a 352-bit memory bus and 18 billion transistors, it’s going to be capable of 4K Ultra gaming at high refresh rates for years to come. It’s no wonder it comes with such a high price tag. 
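Those memory specs translate directly into theoretical bandwidth: bus width times per-pin data rate, divided by eight bits per byte. A quick sanity check with our own arithmetic (note the Titan Xp’s 384-bit bus isn’t quoted above; it comes from Nvidia’s published specs):

```python
def memory_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Theoretical bandwidth in GB/s from bus width (bits) and
    per-pin data rate (Gbps)."""
    return bus_width_bits * data_rate_gbps / 8

print(memory_bandwidth_gb_s(352, 14))    # RTX 2080 Ti GDDR6: 616.0 GB/s
print(memory_bandwidth_gb_s(384, 11.4))  # Titan Xp GDDR5X: ~547 GB/s
```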

The more mainstream RTX 2080 and RTX 2070 are both still quite impressive, though, and absolutely destroy the previous generation. The former features 2,944 CUDA cores and 8GB of GDDR6 memory, and is clocked at 1.5GHz at its base frequency. The RTX 2070 is a bit weaker, coming with 2,304 CUDA cores, 8GB of GDDR6 VRAM and a 1,410MHz base clock.

Nvidia Turing performance

Now that we’ve been able to test the RTX 2070, 2080 and 2080 Ti, we have a better picture of how they perform, and the two high-end cards are beasts.

As long as you have the high-end specs to back them up, these new Turing cards perform much faster than their Pascal equivalents, and they’ll push even further ahead once DLSS, or deep learning super sampling, is more widespread. And, thanks to the anti-aliasing improvements from the Tensor cores, we’re seeing about a 20-40% increase in games that don’t support DLSS.

In our benchmarks, the GeForce RTX 2080 outperforms the GeForce GTX 1080 Ti by about 11%, and the Nvidia GTX 1080 by a more impressive 32%, in Middle-earth: Shadow of War at 4K. The difference is even more massive with the Nvidia GeForce RTX 2080 Ti, which is not only 20% faster than the RTX 2080 in the same title, but beats out the last-generation GTX 1080 Ti by a massive 30%, destroying the GTX 1080 with a 45% performance delta.

Unfortunately, the Nvidia RTX 2070 is less impressive. While it absolutely wipes the floor with the GTX 1070, it’s essentially neck and neck with the GTX 1080, barely hitting a 10% performance increase at 4K in Shadow of the Tomb Raider. At its price point, we were hoping for more, especially after seeing the RTX 2080 and RTX 2080 Ti’s impressive performances.

Earlier, we mentioned some leaked benchmarks for an Nvidia GeForce RTX 2060. In those benchmarks, the mid-range Turing card outclassed the GTX 1060 by far, but fell short of both the GTX 1070 and the AMD Radeon RX Vega 56. Take these numbers with a grain of salt, as there’s no way to verify them, but it’s not looking good for the RTX 2060.

Still, in traditional games, there’s no question that Nvidia Turing marks a total upgrade from Pascal. And, over time as drivers mature and users get ready to start overclocking their Turing cards, the difference is only going to grow. That’s not to mention the inclusion of incoming DLSS and ray tracing in games, which should only increase the Nvidia Turing performance gap.

When it comes to ray tracing, we have a better idea of how Nvidia Turing is going to handle performance. Rather than rendering using pure ray tracing techniques, the new graphics cards are going to use a hybrid method – combining both traditional rasterization and ray tracing in order to produce playable frame rates. 

Nvidia utilizes a “Bounding Volume Hierarchy,” or BVH, to track whether or not a ray bounces within large portions of the scene being rendered. The RTX cores then dig deeper into these large rendering zones until they find the polygon that’s being hit by the light ray.

This method should impact performance far less than testing every ray against every polygon individually, but it will still be quite demanding. In short, Nvidia Turing makes ray tracing cheaper by cutting down the number of intersection tests each ray requires.
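To make the idea concrete, here’s a heavily simplified sketch of BVH traversal (ours, in plain Python; the real thing lives in fixed-function hardware). Each ray is first tested against cheap bounding boxes, and only the few triangles inside the boxes it actually crosses get the expensive exact test:

```python
class AABB:
    """Axis-aligned bounding box, defined by its min/max corners."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def hit(self, origin, inv_dir):
        """Slab test: does a forward ray cross this box? inv_dir holds
        1/direction per axis (assumed nonzero here for simplicity)."""
        tmin, tmax = 0.0, float("inf")
        for axis in range(3):
            t1 = (self.lo[axis] - origin[axis]) * inv_dir[axis]
            t2 = (self.hi[axis] - origin[axis]) * inv_dir[axis]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
        return tmin <= tmax

class BVHNode:
    """Inner nodes hold a box plus children; leaves hold a few triangles."""
    def __init__(self, box, children=(), triangles=()):
        self.box, self.children, self.triangles = box, children, triangles

def traverse(node, origin, inv_dir, candidates):
    if not node.box.hit(origin, inv_dir):
        return                          # cull this whole region of the scene
    candidates.extend(node.triangles)   # leaf: only these need exact ray tests
    for child in node.children:         # inner node: dig deeper into the zone
        traverse(child, origin, inv_dir, candidates)

# Two halves of a 10x10x10 scene; a ray fired along +z at (x=1, y=1)
# only ever reaches the triangles in the left half.
root = BVHNode(AABB((0, 0, 0), (10, 10, 10)), children=(
    BVHNode(AABB((0, 0, 0), (5, 10, 10)), triangles=("tri_a",)),
    BVHNode(AABB((5, 0, 0), (10, 10, 10)), triangles=("tri_b",)),
))
hits = []
# inv_dir is ~1/direction; large values stand in for near-zero x/y components
traverse(root, origin=(1, 1, -1), inv_dir=(1e9, 1e9, 1.0), candidates=hits)
print(hits)  # ['tri_a'] -- tri_b was culled without an exact test
```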

  • Meanwhile, this is the latest in AMD Vega


