"Why doesn't Intel develop their own kind of 3D V-Cache?" It's a thought-provoking question, especially given how beneficial 3D V-Cache has been for AMD Ryzen chips. Let's benchmark and check that out.
"Why doesn't Intel develop their own kind of 3D V-Cache?" It's a thought-provoking question, especially given how beneficial 3D V-Cache has been for AMD Ryzen chips. Let's benchmark and check that out.
> Cool test! I can't wait to see how people misinterpret it.

What I can interpret from this is...
> Huge advantages for things that could benefit from the extra cache. Other than that, it's extremely expensive, it takes up die space, and it requires a good sum of power (because of the always-on state).

It's another die altogether, so not really any die space used at all. Although I guess the connecting vias do take some space.
> Downside: AMD's implementation prohibits OC'ing or voltage selection. The cache is tied to the CPU's VCORE, and that explains why the whole voltage thing is locked. And the few who managed to increase the voltage(s) learned quickly that the X3D cache died really, really fast.

On the other hand, power efficiency is improved.
> Intel can't just slap extra cache onto it. I mean, it can, but the costs would be significant. The way AMD designs its chips vs Intel's is completely different.

It would just mean a respin. If Intel wanted to, it could.
> The failure with the EPYCs is the reason why the 7950X3D is stuck with only one V-cache die stacked on one chiplet. When two stacks exist, the traffic between chiplets skyrockets, ruining performance. Some have blamed latency alone, but it's probably a combination of latency and saturation.

What AMD probably needs to do now is implement a virtual L4 cache mechanism that alters the utilisation priorities of distant L3 caches.
> @TechSpot. Your method is unsound:
>
> - There is, on average, a 10% performance difference at 1080p with an RTX 4090 between the 7600X (32MiB L3 cache) and the 8600G (16MiB L3 cache)
> - You didn't test any Intel CPU with a 16MiB L3 cache or smaller (despite the fact that Intel CPUs with smaller L3 caches are available), yet you "inferred" that a smaller L3 cache wouldn't affect Intel
> - Please add the i3-14100 (12MiB L3 cache, 4 P-cores), compare it to 4 cores on an i9-14900K at the same clock frequency in games that do NOT benefit from more than 4 cores, and then REPUBLISH both the video and the article
> - There is little performance difference between the 5800X (1*32MiB L3 cache) and the 5900X (2*32MiB L3 cache), while there is a large performance difference between the 5800X3D (1*96MiB L3 cache) and the 5800X in half of the games, and the 5800X3D can outperform the 5900X in a quarter of the games: this implies that the "split" 64MiB cache in the 5900X does NOT matter for gaming performance in a quarter of the games, but the non-split 96MiB in the 5800X3D DOES matter in half of the games
> - You didn't simulate any Intel CPU with a non-split 96MiB L3 cache, yet you "inferred" that a larger (64MiB or more) non-split L3 cache wouldn't affect Intel gaming performance
> - As far as I know, you didn't use the 5800X3D (or the 7800X3D, or even the 8600G) to divide games into two categories BEFORE performing the Intel benchmarks: one category being games that do benefit from the large L3 cache of the 5800X3D, and the other being games that don't. Please do use the 5800X3D/7800X3D vs the 5800X/7700X as a precursor to the Intel cache-sensitivity benchmarks, to "bin" games into these two categories with a sufficient number of games in each bin, and then REPUBLISH both the video and the article about how Intel cache size affects gaming performance.
>
> What you did ISN'T the SCIENTIFIC METHOD. Please redo all the measurements.

Do you, for even one second, really think TS is going to redo "all the measurements" based on the swirling thoughts in your head? Start a blog and publish your own results, Demand Boy.
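For what it's worth, the "binning" step the quoted post demands is trivial to prototype once you have per-game numbers. Here's a minimal sketch, assuming hypothetical average-FPS results for a 5800X and a 5800X3D; the game names, FPS values, and the 10% sensitivity threshold are all made-up placeholders, not real benchmark data:

```python
# Hypothetical per-game average FPS; the titles and numbers are placeholders.
results = {
    # game:          (fps_5800x, fps_5800x3d)
    "Sim Game":      (112.0, 148.0),
    "Shooter Game":  (240.0, 251.0),
    "Strategy Game": (88.0, 121.0),
    "RPG Game":      (143.0, 146.0),
}

THRESHOLD = 0.10  # assume a >10% X3D uplift marks a game as cache-sensitive

sensitive, insensitive = [], []
for game, (base_fps, x3d_fps) in results.items():
    uplift = x3d_fps / base_fps - 1.0
    (sensitive if uplift > THRESHOLD else insensitive).append((game, uplift))

print("Cache-sensitive bin:")
for game, uplift in sensitive:
    print(f"  {game}: +{uplift:.0%}")
print("Cache-insensitive bin:")
for game, uplift in insensitive:
    print(f"  {game}: +{uplift:.0%}")
```

The hard part isn't the script, of course; it's collecting enough per-game data on both CPUs for the bins to mean anything.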
> @TechSpot. Your method is unsound:
> - There is, on average, a 10% performance difference at 1080p with an RTX 4090 between the 7600X (32MiB L3 cache) and the 8600G (16MiB L3 cache)

Yes.

> - You didn't test any Intel CPU with a 16MiB L3 cache or smaller (despite the fact that Intel CPUs with smaller L3 caches are available), yet you "inferred" that a smaller L3 cache wouldn't affect Intel

They don't really infer that; they refer to the 10th gen review, where L3 cache size (between the 12, 16, and 20MB caches on those parts) did make a difference. They're just saying that the smaller 24/33/36MB Intel caches, compared to AMD's larger ones, are not making a big difference. It's true, though, that a 13th/14th gen part that could hit 5GHz with a 16MB or smaller cache would have been a nice data point to have.

> - There is little performance difference between the 5800X (1*32MiB L3 cache) and the 5900X (2*32MiB L3 cache), while there is a large performance difference between the 5800X3D (1*96MiB L3 cache) and the 5800X in half of the games, and the 5800X3D can outperform the 5900X in a quarter of the games: this implies that the "split" 64MiB cache in the 5900X does NOT matter for gaming performance in a quarter of the games, but the non-split 96MiB in the 5800X3D DOES matter in half of the games
> - You didn't simulate any Intel CPU with a non-split 96MiB L3 cache, yet you "inferred" that a larger (64MiB or more) non-split L3 cache wouldn't affect Intel gaming performance

As far as I know, the only Intel CPUs with that much cache are a couple of Xeon models with something like 56-64 cores. I don't know that it would be particularly comparable to take a CPU like that, with quad memory controllers and so on, and just turn off 95% of its cores.

> - As far as I know, you didn't use the 5800X3D (or the 7800X3D, or even the 8600G) to divide games into two categories BEFORE performing the Intel benchmarks: one category being games that do benefit from the large L3 cache of the 5800X3D, and the other being games that don't. Please do use the 5800X3D/7800X3D vs the 5800X/7700X as a precursor to the Intel cache-sensitivity benchmarks, to "bin" games into these two categories with a sufficient number of games in each bin, and then REPUBLISH both the video and the article about how Intel cache size affects gaming performance.

So? They could have tested fewer games than they did. That doesn't change the results.
> Do you, for even one second, really think TS is going to redo "all the measurements"

I don't think in absolute terms, I think in probabilities. Thus, the question of whether "I really think TS will redo all the measurements" is nonsensical; such a question has no answer in my mind.
> They could have tested fewer games than they did. That doesn't change the results.

No. With such a small sample size as in the article (11 games), and with the authors having NO IDEA whether the 11 selected games are large-cache sensitive or insensitive in the first place, they might have actually selected the wrong set of games, and thus the measurements might be invalid.
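To put a rough number on that selection-risk argument: if, hypothetically, only a quarter of all games were large-cache sensitive, simple binomial arithmetic shows how often an 11-game sample would under-represent them. A sketch, where the 25% share is an assumed figure for illustration, not a measured one:

```python
from math import comb

N = 11    # number of games tested in the article
p = 0.25  # assumed share of games that are large-cache sensitive (illustrative)

# P(exactly k of the N sampled games are cache-sensitive), assuming a random pick
def pmf(k: int) -> float:
    return comb(N, k) * p**k * (1 - p) ** (N - k)

# Chance the 11-game sample contains two or fewer cache-sensitive titles
p_at_most_2 = sum(pmf(k) for k in range(3))
print(f"P(at most 2 sensitive games out of 11): {p_at_most_2:.1%}")  # ~45.5%
```

Whether 25% is anywhere near the real share is exactly the thing nobody has measured, which is the point of the complaint.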
> No. With such a small sample size as in the article (11 games), while at the same time the authors having NO IDEA whether the 11 selected games are large-cache sensitive or insensitive in the first place
...
> I don't think in absolute terms, I think in probabilities. Thus, the question whether "I really think TS will redo all the measurements" is nonsensical, such a question has no answer in my mind.

Well then, let me help you out! The answer is absolutely not.
> Zen suffers from higher RAM access latency, so having a larger L3 cache helps to alleviate this bottleneck.

I think you hit the nail on the head.
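That point maps neatly onto the textbook average-memory-access-time formula, AMAT = hit latency + miss rate × miss penalty. A toy sketch below; every latency and hit-rate figure is a round illustrative assumption, not a measurement, but it shows why raising the L3 hit rate (i.e., adding cache) saves more time when the DRAM path is slower:

```python
# Toy AMAT model: average access time = L3 hit latency + miss rate * DRAM penalty.
# Every number below is an illustrative assumption, not a measured value.
def amat_ns(l3_latency_ns: float, l3_hit_rate: float, dram_penalty_ns: float) -> float:
    return l3_latency_ns + (1.0 - l3_hit_rate) * dram_penalty_ns

for label, dram_ns in [("lower-latency memory path", 60.0),
                       ("higher-latency memory path (Zen-like)", 80.0)]:
    small_l3 = amat_ns(10.0, 0.90, dram_ns)  # modest L3: assume a 90% hit rate
    big_l3 = amat_ns(10.0, 0.96, dram_ns)    # 3x the L3: assume a 96% hit rate
    saved = small_l3 - big_l3
    print(f"{label}: {small_l3:.1f} ns -> {big_l3:.1f} ns ({saved:.1f} ns saved)")
```

With these made-up numbers, the same hit-rate improvement saves 4.8 ns per access on the 80 ns path versus 3.6 ns on the 60 ns path, which is the intuition behind why V-Cache pays off so well on Zen.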