Arc graphics are exactly what gaming laptops need in 2022

For as long as gaming laptops have existed, only two companies have produced GPUs for them. Nvidia and AMD have had an iron grip on the market for years, and now Intel Arc mobile graphics processors are finally here. I don’t know if an Arc 7 GPU will be faster than an RTX 3080 Ti – it probably won’t be – but with Intel at the helm, at least I know it’s going to result in a good experience.

To be clear, while Intel says laptops using its Arc 3 graphics are available now, I haven’t seen any in person, so all I have to go on is the information Intel has provided. I’m not in the business of trusting internal benchmarks, and you shouldn’t be either.

But let’s be honest: while AMD and Nvidia both offer mobile graphics solutions, the best gaming laptops on the market use a combination of Intel processors and Nvidia graphics. AMD’s Navi GPUs have started to chip away at Team Green’s dominance, but Intel can hit a lot harder, and it’s about more than raw frame rates.

Nvidia RTX

(Image credit: Nvidia)

A stable platform

Intel had a tough few years trying to catch up with AMD’s Zen architecture. But even when Intel was furthest behind in raw performance, it still excelled where it really matters, especially for laptops: reliability. When you’re looking at graphs and figures, it’s easy to forget what the experience of actually using something is like, and Intel processors have never really had the same kind of adjustment period as AMD processors.

It seems like every time a new AMD chipset comes out on desktop, there are a number of critical bugs that Team Red needs to jump on after release. For example, shortly after the launch of its Ryzen 5000-series processors, there were numerous reports of USB issues where devices simply stopped responding, according to Tom’s Hardware.

Intel doesn’t usually have these kinds of problems with its new platforms. And while Intel is admittedly new to discrete graphics, the company has proven that it values user experience. So even if the performance isn’t quite there with this first generation of Arc graphics, it should at least result in a user-friendly product. Perhaps that’s why Intel prioritized laptop GPUs instead of trying to tackle the RTX 3080 right away.

Intel Arc graphics chip layout

(Image credit: Intel)

A true DLSS competitor

It’s impossible to overstate the impact DLSS has had on PC gaming since its debut alongside the RTX 2080 in 2018. While it’s not as exciting as it was at launch, it’s become a core technology for developers of AAA games who want people to actually be able to play their games on affordable hardware.

And while AMD has come up with a competing technology in FSR, or FidelityFX Super Resolution, it just doesn’t have the same visual quality as DLSS. It does, however, have the advantage of being usable on any GPU.

What keeps DLSS out of FSR’s reach is that it uses the Tensor cores in Nvidia’s RTX graphics cards to apply a game-specific deep learning algorithm: the game is rendered at a lower resolution with the regular CUDA cores, and the Tensor cores then scale it up to your screen’s native resolution. Because it’s a hardware-accelerated approach that’s so finely tuned, it’s hard to even tell the difference between native resolution and DLSS on the Quality preset.
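To make that render-then-upscale flow concrete, here’s a minimal sketch in Python-style pseudocode of how a DLSS-like pipeline is structured. The function names are hypothetical stand-ins for illustration, not Nvidia’s actual API:

```python
# Hypothetical sketch of a DLSS-style frame pipeline (illustrative names only).

NATIVE_RES = (3840, 2160)   # what your monitor displays
RENDER_RES = (2560, 1440)   # what the shader (CUDA) cores actually draw

def render_frame(scene, resolution):
    """Rasterize and shade the frame at the reduced internal resolution."""
    ...  # standard GPU rendering work

def ai_upscale(frame, motion_vectors, target_resolution):
    """Run the trained upscaling network on dedicated matrix hardware
    (Tensor cores on RTX cards, XMX engines on Arc) to reach native res."""
    ...  # the deep learning half of the trick

def present(scene, motion_vectors):
    low_res = render_frame(scene, RENDER_RES)               # cheaper to draw
    return ai_upscale(low_res, motion_vectors, NATIVE_RES)  # looks near-native
```

The point is that the expensive shading happens at the lower resolution, and the specialized matrix hardware pays the bill for getting back to native.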

But XeSS could be a legitimate alternative to DLSS with similar visual quality, as it takes a similar approach. Let’s break this down very quickly.

For example, the Intel Arc A370M, one of the first GPUs the company is launching, comes with 8 Xe cores, and Intel has published the layout of each of them. Each Xe core comes with 16 Xe Vector Engines (XVE) and 16 Xe Matrix Engines (XMX). The XVEs essentially perform the same function as CUDA cores in Nvidia GPUs, while the XMX engines are purpose-built for AI workloads and can churn through that specialized work much faster than the standard vector units.
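Going by those published figures, the back-of-the-envelope math for the A370M looks like this (just arithmetic from the numbers above, not a performance claim):

```python
# A370M layout, per Intel's published figures quoted above.
xe_cores = 8
xve_per_core = 16   # Xe Vector Engines (shader-style work)
xmx_per_core = 16   # Xe Matrix Engines (AI/upscaling work)

total_xve = xe_cores * xve_per_core   # 8 * 16 = 128 vector engines
total_xmx = xe_cores * xmx_per_core   # 8 * 16 = 128 matrix engines
print(f"A370M: {total_xve} XVEs, {total_xmx} XMX engines")
```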

I won’t go too deep into why, as I’m not an engineer, but it’s very similar in structure to Nvidia’s Ampere architecture and should be just as good at upscaling workloads, at least on paper.

XeSS won’t actually be available until later this year, but I can’t wait to get my hands on it to see how it performs and, more importantly, how games look when the technology is enabled.

Because let’s face it, the performance gains from Nvidia’s DLSS and AMD’s FSR are quite similar – to the point where we get the same frame rate in Cyberpunk 2077 when switching between them on the new RTX 3090 Ti – but the picture quality is much better with Nvidia’s technology.

The technology is there for Intel too, and it looks like XeSS will be just as important for PC gamers as the other upscaling technologies. But it’s also important to keep in mind that it took Nvidia a while to get DLSS looking as good as it does today. I remember when the technology first became available in Battlefield V and Metro Exodus, and it’s come a long way since. It would be great if Intel could avoid those growing pains, but there are likely to be bumps along the way.

But since it’s Intel technology, it’s more than likely to just work, and you probably won’t have to fiddle with it too much to get it working.
