AMD's latest Radeon is the RX 7600 XT: double the VRAM for $330

Scorpus

TL;DR: AMD is also launching a graphics card in January, the Radeon RX 7600 XT, priced at $330. This product is straightforward: it's a Radeon RX 7600 with 16GB of memory – doubling the VRAM – and a slight overclock. That makes the two 7600 GPUs much closer to each other than the Radeon RX 6600 and RX 6600 XT were in the previous generation, which used different core configurations.

AMD claims substantial performance improvements moving from the 7600 to the 7600 XT, with a chart showing increases ranging from 7 to 40 percent at 1080p, and 13 to 45 percent at 1440p. The larger improvements could be plausible if VRAM capacity is a limiting factor on the 8GB model, especially given the high memory requirements for Ultra setting gaming in many current games.

However, we are less certain about the performance gains in games that are not heavily VRAM limited, such as Starfield. The gains there seem higher than expected based on the hardware differences between the cards, necessitating further benchmarking.

The RX 7600 XT continues to utilize a fully unlocked Navi 33 die with 32 compute units and 2,048 stream processors. AMD claims a 10% higher game clock for the XT model compared to the non-XT, along with 16GB of GDDR6 at the same 18 Gbps on the same 128-bit bus, doubling the capacity while maintaining the same memory bandwidth. To accommodate these changes, the board power rating rises 15%, from 165W to 190W.
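As a quick sanity check, those memory and power figures work out as follows; a minimal arithmetic sketch in Python, using only the specifications quoted above:

```python
# Sanity check of the RX 7600 XT memory and power figures quoted above.

def gddr6_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width, divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# 18 Gbps GDDR6 on a 128-bit bus, identical on the RX 7600 and 7600 XT.
print(f"Peak memory bandwidth: {gddr6_bandwidth_gb_s(18, 128):.0f} GB/s")  # 288 GB/s either way

# Board power rises from 165 W to 190 W.
print(f"Board power increase: {(190 - 165) / 165:.1%}")  # ~15.2%
```

Doubling capacity by itself changes neither the bus width nor the data rate, which is why bandwidth stays put while only the power budget grows.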

We are somewhat disappointed by AMD's chart, which compares performance both with and without various features enabled. There's no issue with the baseline numbers comparing the 7600 XT against the RTX 4060, or with like-for-like comparisons such as FSR 3 enabled on both GPUs in Avatar: Frontiers of Pandora or Like a Dragon Gaiden. AMD shows the 7600 XT competing closely with the RTX 4060 in these scenarios.

What seems misleading, in our opinion, are the comparisons where AMD applies their driver-based AFMF frame generation feature to "beat" the GeForce GPU in the listed output frame rate. These features do not deliver equivalent image quality, especially when comparing DLSS 2 Quality mode without frame generation to FSR 2 Quality mode with AFMF frame generation. Based on our tests, AFMF does not produce image quality comparable to other configurations in this chart. Yet, by comparing them as equivalent, AMD can present a significant performance "win" with the new 7600 XT.

It's not great when Nvidia compares RTX 40 GPUs to RTX 30 GPUs and shows the new 40 series cards having a "performance win" through DLSS 3 frame generation – we've extensively explained why frame generation is not a performance-enhancing technology. They are also guilty of this with the Super series data, although, to their credit, they did provide other more relevant numbers that didn't include frame generation FPS.

The more companies pursue the highest FPS number through software features (upscaling, frame generation, etc.) without considering image quality, the worse these comparisons become, and the more useless or misleading the information is for consumers.

With AMD's numbers, the situation is even worse because they are directly comparing Radeon with GeForce – something Nvidia only does with their own products. Moreover, the difference in image quality between AFMF and basic upscaling or native rendering is much larger than with DLSS 3 frame generation. At least AMD did provide native apples-to-apples numbers for a proper comparison.

On a more positive note, AMD claims to have improved encoding image quality through rate control optimizations in the latest version of their AMF SDK. The change is implemented in Radeon Software 24.1 and applies to H.264, HEVC, and AV1 encoding. Streaming image quality on Radeon GPUs, particularly with H.264, has been criticized for some time now, so it's encouraging to see AMD working on this. Third-party apps will need to update to the latest version of the AMF SDK to benefit from these enhancements, but the prospects look promising.
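For those who want to experiment once their tools adopt the updated SDK, here is a rough, hedged sketch of driving AMD's hardware encoders through ffmpeg from Python. The encoder names and options used (h264_amf, hevc_amf, av1_amf, -rc, -b:v) are assumptions based on typical ffmpeg builds with AMF support, not anything specific to the 24.1 changes, and availability will depend on your ffmpeg version, driver, and GPU.

```python
# Illustrative only: transcode a capture with AMD's hardware encoders via ffmpeg.
# Encoder names and options are assumptions based on typical ffmpeg builds with
# AMF support; they are not tied to the Radeon Software 24.1 changes described above.
import subprocess

def encode_with_amf(src: str, dst: str, codec: str = "hevc_amf", bitrate: str = "8M") -> None:
    """Transcode src to dst with an AMF hardware encoder at a constant target bitrate."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src,
            "-c:v", codec,      # h264_amf, hevc_amf, or av1_amf (AV1 encode requires an RDNA 3 GPU)
            "-rc", "cbr",       # rate-control mode; rate control is what AMD says it has tuned
            "-b:v", bitrate,
            "-c:a", "copy",
            dst,
        ],
        check=True,
    )

encode_with_amf("gameplay_capture.mkv", "stream_ready.mp4")
```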

The pricing of the Radeon 7600 XT doesn't seem exceptional, with an MSRP $60 higher than the RX 7600's for a minor clock increase and double the VRAM. The real-world gap could be as much as $80, given that we have seen the 7600 fall to $250 at times, and the 7600 is not an exceptionally priced product to begin with.

We give some credit to AMD for making their most affordable 16GB graphics card yet and offering more VRAM in the sub-$350 price tier, but it feels a bit expensive relative to the RX 7600. It's priced similarly to the still-available RX 6700 XT with 12GB of VRAM, which can be found for $320. The 6700 XT is likely faster, so it will be interesting to see where it ranks in Steve's benchmarks.

Our initial feeling is that AMD may have missed an opportunity to exert significant pressure on the RTX 4060. The Radeon RX 7600 (non-XT) and GeForce RTX 4060 offer similar performance, so if AMD could offer a slight performance advantage as well as double the VRAM at the same $300 price as the RTX 4060, that would be quite compelling. However, at $330, we are yet to be convinced.


 
So let me explain who the 7600 XT is for, because the 7700 is still the better high-refresh-rate 1080p card. There are people with 4K displays who don't care about turning on all the eye candy. 8GB is not enough, and due to the memory bus width you can only have 8 or 16 gigs. If 4K is more important than max settings, then you need a card with 12GB or more. If you want to play at 1080p60, then the 8GB version for $250 is fine. It's also a better card for screen capture, streaming, and video rendering than the 7700 because of the extra memory.
 
So let me explain who the 7600 XT is for, because the 7700 is still the better high-refresh-rate 1080p card. There are people with 4K displays who don't care about turning on all the eye candy. 8GB is not enough, and due to the memory bus width you can only have 8 or 16 gigs. If 4K is more important than max settings, then you need a card with 12GB or more. If you want to play at 1080p60, then the 8GB version for $250 is fine. It's also a better card for screen capture, streaming, and video rendering than the 7700 because of the extra memory.
I believe this would be for someone who uses multi-monitor set-ups with 2-3 4K displays and still wants to do some level of gaming on their primary display.

Seeing 16GB is a great addition; however, AMD needs to not pull an Nvidia/Apple and scalp users for the additional VRAM.

Historically, AMD has always pushed that envelope, while as we know Nvidia has been very stingy over the years with adding more VRAM... why? It helps with built-in obsolescence... and helps them discontinue support sooner, when the cards just can't handle titles based on VRAM alone.
 
So let me explain who the 7600 XT is for, because the 7700 is still the better high-refresh-rate 1080p card. There are people with 4K displays who don't care about turning on all the eye candy. 8GB is not enough, and due to the memory bus width you can only have 8 or 16 gigs. If 4K is more important than max settings, then you need a card with 12GB or more. If you want to play at 1080p60, then the 8GB version for $250 is fine. It's also a better card for screen capture, streaming, and video rendering than the 7700 because of the extra memory.
There are games where, even at 1080p, 8GB is no longer enough. TLOU2 is an example. Consoles have 16GB now; 8GB needs to be put out to pasture. It's time to move on.

Why would someone want one of these? The 7700 XT is priced WAY too close to the 7800 XT to make any sense. So if you don't want to pay for a 7800 XT, the 7700 XT will also be too expensive; ergo, the 7600 XT.

Blame AMD for using Nvidia's pricing strategy.
I believe this would be for someone who uses multi-monitor set-ups with 2-3 4K displays and still wants to do some level of gaming on their primary display.

Seeing 16GB is a great addition; however, AMD needs to not pull an Nvidia/Apple and scalp users for the additional VRAM.

Historically, AMD has always pushed that envelope, while as we know Nvidia has been very stingy over the years with adding more VRAM... why? It helps with built-in obsolescence... and helps them discontinue support sooner, when the cards just can't handle titles based on VRAM alone.
Nvidia cards have longer support lifetimes than AMD cards. Let's not spread disinformation here!
 
There are games where, even at 1080p, 8GB is no longer enough. TLOU2 is an example. Consoles have 16GB now.
That's not exactly true; consoles have to share their memory with the GPU, and then you have extreme cases like the PS5, where THEY JUST HAD TO BE DIFFERENT and have some unified memory/storage thing that exists nowhere but in a PlayStation. Well, Intel Optane, but even they gave that up. Optane was sweet 😥.

But short story long, 8 gigs is enough so long as you don't care about texture size relative to view distance and resolution. All the heavy compute BS like shaders doesn't directly impact memory usage. As a general rule, it's either heavy on compute or heavy on memory. There are things like RT that are heavy on both, but I see the 7600 XT being a good niche card for a number of years. Far better than the MX5200 256MB was.
 
A 7600 XT with 16GB and a 7700 with 12. LMAO. It's just more evidence that the 7700 should have been the 7600, and the 7600 a 7500/7400 card.

Someone else gets it. The 7600 is a sham and should have been called the 7500 XT; it would have been a huge upgrade from the 6500 XT, which is one of the most pathetic GPUs in history. AMD would have gotten a lot of kudos if they had called the 7600 a 7500 XT, for the huge upgrade, if prices were similar. The 7700 XT is indeed the real 7600 XT and the 7800 XT is a 7700 XT, just as the 7900 XT is the real 7800 XT. Nvidia is worse, as the 4060s are in reality 4050s, the 4070 the true 4060, the 4070 Ti the true 4060 Ti, and the 4080 the true 4070 Ti.
 
That's not exactly true; consoles have to share their memory with the GPU, and then you have extreme cases like the PS5, where THEY JUST HAD TO BE DIFFERENT and have some unified memory/storage thing that exists nowhere but in a PlayStation. Well, Intel Optane, but even they gave that up. Optane was sweet 😥.
In every generation since the PS2 era, the total memory of the consoles roughly equals the amount of VRAM you need in a video card for quality settings: in the Xbox era, 64MB; the 360 era, 512MB; the PS4 era, 8GB. The majority of console memory use will be for graphics at the end of the day, especially today.

But short story long, 8 gigs is enough so long as you don't care about texture size relative to view distance and resolution. All the heavy compute BS like shaders doesn't directly impact memory usage. As a general rule, it's either heavy on compute or heavy on memory. There are things like RT that are heavy on both, but I see the 7600 XT being a good niche card for a number of years. Far better than the MX5200 256MB was.
No, it's not. In the TLOU2 example provided, even on the lowest settings at 1080p, the 8GB GPUs experience texture failure, severe stutter, and the occasional crash. Resident Evil Village is another example. I do not consider 100+ms lag and texture failures to be "fine" or "enough".

IDK why some people are so insistent on 8GB being enough. It isn't. The 8GB generation ended 3 years ago. We had some of this in the Xbox 360 and PS4 eras, with people insisting that 128MB and 512MB cards were enough, respectively, for the next generation, but this time around people are REALLY hung up on 8GB and will ignore evidence of games flat out not working properly with so little VRAM to work with.
Someone else gets it. The 7600 is a sham and should have been called the 7500 XT; it would have been a huge upgrade from the 6500 XT, which is one of the most pathetic GPUs in history. AMD would have gotten a lot of kudos if they had called the 7600 a 7500 XT, for the huge upgrade, if prices were similar. The 7700 XT is indeed the real 7600 XT and the 7800 XT is a 7700 XT, just as the 7900 XT is the real 7800 XT. Nvidia is worse, as the 4060s are in reality 4050s, the 4070 the true 4060, the 4070 Ti the true 4060 Ti, and the 4080 the true 4070 Ti.
You nailed it. This entire gen is upsold an entire tier from where it should be.
 
No, it's not. In the TLOU2 example provided, even on the lowest settings at 1080p, the 8GB GPUs experience texture failure, severe stutter, and the occasional crash. Resident Evil Village is another example. I do not consider 100+ms lag and texture failures to be "fine" or "enough".
I think a big reason that game is such a memory hog is that it wasn't designed to work on PC hardware. Regardless, VRAM is going to be a major issue, with TLOUP2 being a small red flag for what is to come.
 
AMD continues to price-fix its releases with Nvidia. No doubt they will lower their high-end cards to be back in line with Nvidia's prices and continue to perform poorly in sales. I really wish AMD would fight Nvidia in the GPU market the way they fight Intel in the CPU market, but they always seem to just tag along.
 
AMD continues to price-fix its releases with Nvidia. No doubt they will lower their high-end cards to be back in line with Nvidia's prices and continue to perform poorly in sales. I really wish AMD would fight Nvidia in the GPU market the way they fight Intel in the CPU market, but they always seem to just tag along.
Before 2011, AMD was many times way ahead of Nvidia. After Bulldozer somewhat failed, AMD had to choose: invest in CPUs or GPUs. AMD decided to go with CPUs because, despite offering better cards on the GPU side, Nvidia still sold better. As we know, Zen development started in 2012.

In other words, it makes no sense to compete with Nvidia because the market made clear it will buy Nvidia even if AMD is clearly better.

In short: everyone who has an Nvidia GPU has no right to blame AMD for the lack of competition in the GPU market. They made the decision for AMD.
 
A 7600 XT with 16GB and a 7700 with 12. LMAO. It's just more evidence that the 7700 should have been the 7600, and the 7600 a 7500/7400 card.

It's the number of gigabits per chip, and I suspect that Samsung or Hynix is only selling 4Gbit chips and no longer 2Gbit ones. It happens; the same was true with Polaris (4GB, 8GB, and even 16GB). The density of the chips just gets larger. There's no additional performance in bandwidth or anything.

AMD takes whatever is there and stamps it onto a card. As for VRAM: even the latest games do not tax cards enough on the highest possible details you can imagine. There is a big difference between allocated vs. cached vs. actually used memory.

If you have a game that requires 16GB of VRAM, then those mid-range cards likely won't be sufficient anyway.
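For what it's worth, the 8GB-or-16GB options mentioned earlier in the thread come down to the 128-bit bus and the density of the memory packages on it. Below is a minimal sketch of the capacity math, assuming standard 32-bit-wide GDDR6 packages at 16Gb (2GB) density and clamshell mode for the doubled configuration; these are typical part figures, not confirmed details of the 7600 XT board.

```python
# Rough capacity math for a 128-bit GDDR6 card, assuming 32-bit-wide packages at
# 16 Gb (2 GB) density -- typical parts, not confirmed RX 7600 XT board details.

BUS_WIDTH_BITS = 128
PACKAGE_WIDTH_BITS = 32     # each GDDR6 package presents a 32-bit interface
PACKAGE_CAPACITY_GB = 2     # 16 Gb = 2 GB per package

packages = BUS_WIDTH_BITS // PACKAGE_WIDTH_BITS         # 4 packages fill the bus
standard = packages * PACKAGE_CAPACITY_GB               # 4 x 2 GB = 8 GB
clamshell = 2 * packages * PACKAGE_CAPACITY_GB          # two packages per channel = 16 GB

print(f"{packages} packages -> {standard} GB; clamshell -> {clamshell} GB")
# Bandwidth is set by bus width x data rate, so doubling capacity this way
# leaves peak bandwidth unchanged, matching the article's note above.
```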
 