Wednesday, 28 December 2022

AMD in 2022: year in review

https://ift.tt/0qAjz1o

While last year was a relatively quiet one for AMD – well, as quiet as things get for a tech behemoth with a huge presence in the CPU and GPU worlds – 2022 was very different, being a year of major next-gen launches. We witnessed the arrival of the long-awaited Zen 4 processors, and also RDNA 3 graphics cards, plus more besides.

AMD had a lot to live up to, though, in terms of rival next-gen launches from Intel and Nvidia respectively, and Team Red certainly ran into some controversies throughout the year, too. Let’s break things down to evaluate exactly how AMD performed over the past 12 months.

An AMD Ryzen 9 7950X on a table

(Image credit: Future)

CPUs: V-Caching in, and Ryzen 7000 versus Raptor Lake

In the CPU space, AMD’s first notable move of 2022 was to unleash its new 3D V-Cache tech, launching an ‘X3D’ processor in April that was aimed at gamers. The Ryzen 7 5800X3D was hailed as a great success, and really did beef up gaming performance for the 8-core CPU, as intended.

Of course, this wasn’t the big news for 2022 – that was the arrival of a new range of Ryzen 7000 processors based on Zen 4. This was an entirely new architecture, complete with a different CPU socket and therefore the requirement to buy a new motherboard (along with a shift to support DDR5 RAM, bringing the firm level with Intel on that score).

The initial Zen 4 chips were the Ryzen 5 7600X, Ryzen 7 7700X, and the high-end Ryzen 9 7900X and Ryzen 9 7950X, all of which hit the shelves in September. The flagship 7950X stuck with 16 cores, as with the previous generation, but that’s plenty, to be honest, and as we observed in our review, this CPU represented a superb piece of silicon, and a huge leap in performance over the 5950X.

However, the Zen 4 top dog soon ran into a problem: Intel’s Raptor Lake flagship emerged just a month later, and proved to have plenty of bite of its own. In actual fact, performance-wise, there wasn’t much between the Core i9-13900K and Ryzen 9 7950X, except the Intel CPU was somewhat faster at gaming – the real problem for AMD was pricing, with Team Blue’s chip to be had for considerably less (a hundred bucks cheaper, in fact).

Compounding that was the situation of AMD’s new AM5 motherboards being more expensive than Intel’s LGA 1700 boards, plus buyers on the AMD side had to purchase DDR5 RAM, as DDR4 wasn’t compatible – but with Raptor Lake, DDR4 was an option. And the latter memory is a fair bit cheaper, for really not all that much difference in performance (at least not in these early days for DDR5 – that will change in the future, of course).

All of this added up to a problem in terms of value proposition for Ryzen 7000 compared to Raptor Lake, not just at the flagship level, but with the mid-range Zen 4 chips, too. Indeed, the added motherboard and RAM costs made themselves even more keenly felt with a more modestly-targeted PC build, so while the Ryzen 7600X and Intel’s 13600K were well-matched in performance terms (with a little give and take), the latter was a less expensive upgrade.

It soon became clear enough that when buyers were looking at Zen 4 versus Raptor Lake late in 2022, AMD’s product was falling behind in sales due to the aforementioned cost factors.

An AMD Ryzen 9 7950X slotted into an AM5 socket

(Image credit: Future)

Hence Team Red made an unusual move, slashing the price of Ryzen 7000 CPUs during Black Friday, and keeping those discounts on as a ‘Holiday promotion’ afterwards. That kind of price-cutting is not normally something witnessed with freshly launched CPUs, and it was an obvious sign that AMD needed to stoke sales. Even as we write this, the 7600X still carries a heavy discount, meaning it’s around 20% cheaper than the 13600K – and clearly a better buy with that relative pricing.

AMD’s actions here will presumably have fired up sales considerably as 2022 rolled to an end (though we haven’t seen any figures yet). And while these price cuts aren’t permanent, more affordable motherboards should be available soon enough in 2023, taking the price pressure off elsewhere. Add into the mix that new X3D models are coming for Zen 4 which should be excellent for gaming – and rumor has it they should be revealed soon at CES, with multiple versions this time, not just one CPU – and AMD looks like it’s going to be upping the ante considerably.

In summary, while Zen 4 got off to a shaky start sales-wise, with Raptor Lake pulling ahead, AMD swiftly put in place plans to get back on track, so it could yet marshal an effective enough recovery to be more than competitive with Intel’s 13th-generation – particularly if those X3D chips turn up as promised. Mind you, Intel does still have cards up its own sleeve – namely some highly promising mid-range CPUs, and that 13900KS beast of a flagship refresh – so it’s still all to fight for in the desktop processor world, really.

An AMD Radeon RX 7900 XTX on a table against a white backdrop

(Image credit: Future)

RDNA 3 GPUs arrive to applause – but also controversy

Following a mid-year bump for RX 6000 models led by a refreshed flagship, the RX 6950 XT – alongside an RX 6750 XT and 6650 XT – the real consumer graphics showpiece for AMD in 2022 was when its next-gen GPUs landed. Team Red revealed new RDNA 3 graphics cards in November, and those GPUs hit the shelves in mid-December.

The initial models in the RDNA 3 family were both high-end offerings based on the Navi 31 GPU. The Radeon RX 7900 XTX and 7900 XT used an all-new chiplet design – new for AMD’s graphics cards, that is, but already employed in Ryzen CPUs – meaning the presence of not just one chip, but multiple dies. (A graphics compute die or GCD, alongside multiple memory cache dies or MCDs).

We loved the flagship RX 7900 XTX, make no mistake, and the GPU scored full marks in our review, proving to be a more than able rival to the RTX 4080 – at a cheaper price point.

Granted, there were caveats with the 7900 XTX being slower than the RTX 4080 in some ways, most notably in ray tracing performance. AMD’s ray tracing chops were greatly improved with RDNA 3 compared to RDNA 2, but still remained way off the pace of Nvidia Lovelace graphics cards. Nvidia’s new GPUs also proved to offer much better performance with VR gaming, and creative apps to boot. But for gamers who weren’t fussed about ray tracing (or VR), the 7900 XTX clearly offered a great value proposition (partly because RTX 4080 pricing seemed so out of whack, mind you).

As for the RX 7900 XT, that GPU rather disappointed – like the RTX 4080, it was very much regarded as the lesser sibling in the shadow of its flagship. That didn’t mean it was a bad graphics card, just that, priced only slightly more affordably than the XTX variant – a hundred bucks less – the XT didn’t work out nearly as well in terms of comparative performance.

AMD Radeon RX 7900 XT graphics card

(Image credit: Future)

The 7900 XTX suffered from its own problems, though, partly due to its popularity, having quickly become established as the GPU to buy versus the RTX 4080, or indeed the 7900 XT (for the little bit of extra outlay, the XTX clearly outshone it). Stock vanished almost immediately, and as we write this, you still can’t find any, except via scalpers on eBay (predictably). Those auction prices are ridiculous, too – indeed, you could pretty much get yourself an RTX 4090, never mind a 4080, for some of the eBay price tags floating around at the time of writing.

The stock situation wasn’t as bad for the RX 7900 XT, with models available as we write this, but still, many versions of this graphics card have sold out, and some of the ones hanging around are pricier third-party models (which obviously suffer further on the value proposition front).

Part of the reason stock was initially sparse was that some card makers didn’t have all their models ready to roll in mid-December – indeed, MSI didn’t launch any RDNA 3 cards at all (and may not have custom third-party boards on shelves until Q1 2023, or so the rumor mill theorized).

Aside from availability – which is, let’s face it, hardly an unexpected issue right out of the gate with the launch of a new GPU range – there was another wobble around RDNA 3 that AMD had to come out and defend itself against in the first week of sales.

This was the speculation that RX 7000 GPUs had been pushed out with hardware functionality that didn’t work, and were ‘unfinished’ silicon – unfair criticisms, in our opinion, and AMD’s too, with Team Red quickly debunking those theories.

What did still raise question marks elsewhere, however, was some odd clock speed behavior, and more broadly, the idea that the graphics driver paired with the RDNA 3 launch wasn’t up to scratch. In this case, the rumor mill floated more realistic sounding theories: that AMD felt under pressure to get RX 7000 GPUs out in its long-maintained timeframe of the end of 2022, only just squeaking inside that target, in order to make sure Nvidia didn’t enjoy the Holiday sales period uncontested (and to keep investors happy).

An AMD Radeon RX 7900 XTX on a table against a white backdrop

(Image credit: Future)

The same rumor-mongers noted, however, that AMD is likely to be able to improve drivers in order to extract 10% better performance, or maybe more, with headroom for quick boosts on the software front that really should have been in the release driver.

All of that is speculation, but it’ll be good news for 7900 XTX and XT buyers if true – because their GPUs will quickly get better, and make the RTX 4080 look an even shakier card. Indeed, we may even see the 7900 XTX become a much keener rival for the 4090, rather than the repositioning AMD engaged in at a late pre-launch stage, pitting the GPU as an RTX 4080 rival. (Which in itself, perhaps, suggests that Team Red expected driver work to be further forward than it actually ended up being for launch – and that these gains will be coming soon enough).

Even putting that speculation aside, and forgetting about potentially swift driver boosts, AMD’s RDNA 3 launch still went pretty well on the whole, with the flagship 7900 XTX coming off really competitively against Nvidia – performing with enough grunt to snag the top spot in our ranking of the best GPUs around. Even if the 7900 XT did not hit those same heights...

And granted, the controversies around RDNA 3 and hardware oddities didn’t help, but these felt largely overblown to us, and the real problem was that ever-present bugbear of not enough stock volume with the initial launch. Hopefully the supply-demand imbalance will be corrected soon enough (maybe even by the time you read this).

An AMD Radeon RX 6650 XT graphics card

(Image credit: Future)

The GPU price is right (or at least far more reasonable)

One of the big positives of 2022 was the supply of graphics cards correcting itself, and the dissipation of the GPU stock nightmare that had haunted gamers for so long.

Early in the year, prices started to drop from excruciatingly high levels, and by mid-2022, they were starting to normalize. The good news for consumers was that AMD graphics cards – RX 6000 models – fell more quickly in price than Nvidia GPUs, and towards the end of the year, there were some real bargains (on Black Friday in particular) to be had for those buying a Radeon graphics card.

Of course, a lot of this was down to the proximity of the next-gen GPU reveals, and the need to get rid of current-gen stock as a result. But there’s no arguing that it was a blessed relief at the lower end of the market for folks to be able to pick up an RX 6600 for not much more than $200 in the US, or an RX 6650 XT for as little as $250 at some points.

It was a good job these price drops happened, though, because like Nvidia, AMD’s GPU launches in 2022 did little to help the lower-end of the market. AMD released the aforementioned RX 6650 XT in May, but the graphics card was woefully overpriced in the mid-range initially (at $400, which represented terrible value). Offerings like the RX 6500 XT and RX 6400 further failed to impress, with the former being very pricey for what it was, and while the latter did carry a truly wallet-friendly price tag, it was a shaky proposition performance-wise.

These products were generally poorly received, so let’s hope next year brings something better at the low-end from RDNA 3, eventually. Otherwise both AMD and Nvidia are really leaving the door open for Intel to carve a niche in the budget arena (with Team Blue’s Arc graphics drivers undergoing solid improvements as time rolls on).

Deathloop

(Image credit: Bethesda Softworks)

FSR 2.0: AMD switches to temporal

Not to be beaten by Nvidia DLSS, AMD pushed out a new version of its frame rate boosting FSR (FidelityFX Super Resolution) in May 2022. FSR 2.0 brought temporal upscaling into the mix – which is what DLSS uses, albeit AMD doesn’t augment it with AI unlike Nvidia’s tech – and the results were a huge improvement on the first incarnation of FSR (which employed spatial upscaling).

It was an important step for AMD to take in terms of attempting to keep level with Nvidia in the graphics whistles-and-bells department (even if Team Red couldn’t do that with ray tracing, as already noted, with its new RDNA 3 GPUs).

Support is key with these technologies, of course, and in December, AMD reached a milestone with 101 games now supporting FSR 2.0, with that happening in just over half a year. DLSS 2.0 may support more games, at 250 going by Nvidia’s latest figure, but that includes some apps, and has only risen by 50 in the past six months (it was 200 back in July).

So, AMD’s overall rate of adoption looks to be twice as fast, or thereabouts, backing up promises Team Red made about FSR 2.0 being easier to quickly add to games – which bodes well for future acceleration of those numbers of supported titles.
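
To put rough numbers on that, here’s a quick back-of-the-envelope comparison in Python using only the figures cited above – a sketch, since the real month-by-month counts will vary:

    # Rough adoption-rate comparison using the figures quoted above.
    # FSR 2.0: ~101 supported games roughly seven months after its May debut.
    # DLSS: grew from ~200 to ~250 titles between July and December (~6 months).
    fsr2_titles, fsr2_months = 101, 7
    dlss_added, dlss_months = 250 - 200, 6

    fsr2_rate = fsr2_titles / fsr2_months   # ~14.4 games per month
    dlss_rate = dlss_added / dlss_months    # ~8.3 games per month

    print(f"FSR 2.0: ~{fsr2_rate:.1f} games/month")
    print(f"DLSS:    ~{dlss_rate:.1f} games/month")
    print(f"Ratio:   ~{fsr2_rate / dlss_rate:.1f}x")  # ~1.7x - roughly double, give or take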

Of course, Nvidia just unleashed DLSS 3.0 – as an RTX 4000 GPU exclusive, mind, and with vanishingly few titles right now – and so AMD revealed it’s working on FSR 3.0, too. Team Red told us that it uses tech similar to DLSS 3.0 and Nvidia’s Frame Generation, and could double frame rates (best-case scenario).

But with any details on FSR 3.0 seriously thin on the ground, it felt like a rushed announcement and rather a case of keeping up with the Joneses (or the Jensen, perhaps). Which doesn’t mean it won’t be great (or that it will), but with a vague ‘coming in 2023’ release date, we’re not likely to find out soon, and FSR 3.0 is probably still a long way off.

Threadripper goes All-Pro

AMD deployed a superheavyweight 64-core CPU this year aimed at professional users. The Ryzen Threadripper Pro 5995WX (backed by the 32-core 5975WX, and 24-core 5965WX) faced no opposition from Intel, though, which hasn’t released a high-end desktop (HEDT) range in years (but that should change in 2023, mind you, if rumors are right).

Meanwhile, the 5995WX strutted as an unchallenged champion (and will have a successor next year, as well), but the bad news on the AMD front is that there were no non-Pro Threadripper models – and won’t be in the future either.

As these Pro categorized chips are ridiculously expensive, that puts Threadripper out of the pricing reach of all but the richest of PC enthusiasts. That said, as December rolled to a close, a rumor was floated suggesting maybe AMD will bring out non-Pro chips with the next generation of Threadripper (but take that with very heavy seasoning).

AMD EPYC server

(Image credit: AMD)

Server success

AMD forged ahead in the server market during 2022, with its Q3 earnings report showing that the firm’s data center business boomed to the tune of 45% compared to the previous year. AMD moved up to hold 17.5% of the server market, grabbing more territory from Intel, and experiencing its fourteenth consecutive quarter of growth in this area, no less.

AMD has new fourth-generation EPYC chips to come in 2023, including Genoa-X processors utilizing 3D V-Cache tech to offer in excess of 1GB of L3 cache per socket (a larger amount than any current x86 chip). It’s no wonder Intel has forecast a difficult time for itself in the server sphere in the nearer future.

An AMD Ryzen 7 7700X in a man's hand seen at an angle

(Image credit: Future)

Concluding thoughts

AMD had a pretty good year with both its new CPU and GPU ranges, but not without problems – a lot of which were to do with the heat of the competition. While Ryzen 7000 provided a very impressive generational boost, the Zen 4 chips struggled on the cost front against Intel Raptor Lake (and the relative affordability of Team Blue’s platform in particular).

AMD wasn’t afraid to take action, though, in terms of some eye-opening early doors price cuts for Ryzen 7000 processors. And in the (likely very near) future, fresh X3D models based on Zen 4 hold a whole heap of promise – not to mention cheaper AM5 motherboards to make upgrading to AMD’s new platform a more affordable affair.

As for graphics cards, the Radeon RX 7900 XTX made quite a splash, and on the whole, RDNA 3 kicked off well enough – even if the launch felt rushed, with question marks around the drivers perhaps needing work, and the 7900 XT failing to be quite such a hit as the XTX.

There’s no doubting that Nvidia clearly held the performance crown in 2022, however, with the RTX 4090 well ahead of the 7900 XTX in most respects. (Driver improvements for RDNA 3 could be key in playing catch-up for AMD, though, as we discussed above).

All that said, arguments about the relative merits of these high-end graphics cards from both Nvidia Lovelace and AMD RX 7000 are not nearly as important as what comes in the mid-range. That’s going to be the real GPU battleground – outside of these initial top-end boards that cost more than most people pay for an entire PC – and we’re really excited to see how these ranges stack up against each other.

And we’re praying that the budget end of the GPU spectrum will actually get some peppier offerings from AMD (and Nvidia), otherwise Intel might well take advantage with low-cost Arc cards next year.



from TechRadar: computing components news https://ift.tt/x6aJt8v
via IFTTT

Tuesday, 20 December 2022

This keyboard has a screen underneath - and you might not be able to look away

https://ift.tt/2CdIFuS

Computer mouse brand Finalmouse is taking customization to an eccentric level by revealing a new keyboard with an interactable screen underneath a layer of transparent keys and switches. And the results are pretty striking.

Customizing mechanical keyboards mostly comes down to changing the keys or picking out a new color pattern for the RGB LEDs. But with the Finalmouse Centerpiece, as it’s called, you can instead have the display show a moving cityscape at sunset, a space shuttle taking off, or some abstract 3D animation. Looking at the announcement video, these skins can be swapped out at any time. Some, as alluded to earlier, are interactive. The koi fish skin, for example, causes the on-screen water effect to ripple whenever you press a key, while another makes sparks fly as you type.

What’s interesting about these skins is that they’re powered by Unreal Engine 5, a free-to-download 3D graphics creation tool that’s seen use in a ton of video games. Finalmouse states artists will be allowed to create their own skins and upload them to the Centerpiece’s display, or submit them to the Freethinker Portal app on Steam when that launches. You’ll be able to download up to three skins to the keyboard and switch between them using the buttons on the side.

The video states artists can monetize their creations or trade them with friends. It's unknown how monetization will work, but Finalmouse does allude to some kind of in-app store.

Internal features

The inside of the keyboard is just as fascinating. The Centerpiece has its own CPU and GPU so it won’t take up any resources from the computer. And it looks like you’ll be able to play games on the keyboard itself. One scene in the announcement shows someone controlling a running lion with the arrow keys.

Speaking of which, the keys aren’t made from plastic but a type of proprietary tempered glass called Laminated DisplayCircuit Glass Stack (LDGS) encased in an aluminum chassis. If you’re worried about fragility, Finalmouse states LDGS is “able to withstand intense abuse…” if you apparently feel like slamming the Centerpiece in anger (don’t do that).

Competitive gamers will enjoy the custom mechanical switches in the keys. They were developed alongside the notable keyboard switch brand Gateron to ensure fast response times. There will also be another Centerpiece model that comes with hall-effect sensors in its switches for users who want that extra bit of speed.

Lingering questions

Despite all this information, there are still a lot of lingering questions. For starters, how can people type with this keyboard when it's all blank? A text-style skin, perhaps? We also don’t know the size of the keyboard, how bright the display is (although it looks like it can be dimmed), its refresh rate, or even the resolution. The Finalmouse Centerpiece launches in early 2023 for $349. Hopefully, we’ll get more information before then. 

Until then, be sure to check out TechRadar’s recently updated best gaming keyboards list for the upcoming year. While none of them are as ostentatious as the Centerpiece, they’re certainly worth a gander.



from TechRadar: computing components news https://ift.tt/PczCE4v
via IFTTT

Tuesday, 13 December 2022

Nvidia RTX 4090 GPU spotted with new Raptor Lake CPU in gaming laptop

https://ift.tt/xFlEHbz

Nvidia’s RTX 4090 laptop GPU has been spotted in an HP Omen gaming notebook, along with a raft of other Lovelace mobile graphics cards, plus these portables are all powered by an imminent new Raptor Lake mobile CPU.

This leak comes from @momomo_us on Twitter (as VideoCardz flagged), who spotted the HP Omen machines listed on a European retailer’s website, but given that this is an unknown retail outlet, we need to be particularly careful around these rumored specs provided via product listings.


The retailer lists six variations of the incoming HP Omen 17 which are all powered by Intel’s Core i7-13700HX CPU – a mobile part expected to be revealed at CES – and they all have Nvidia Lovelace GPUs.

The top-end model runs with the RTX 4090 laptop graphics card which is listed with 16GB of VRAM (no other specs are provided here, by the way, just the memory configuration).

We also see two Omen 17 laptops with an RTX 4080 (equipped with 12GB VRAM), one machine with an RTX 4070 (8GB), and another pair with the RTX 4060 (also 8GB).

The prices are in Lei, Romanian currency – so we can assume this is a Romanian retailer, naturally – and the top HP Omen with RTX 4090 is going for 18,881 Lei. That converts to roughly $4,000 / £3,300 / AU$6,000, but take that with even more seasoning than the rest of this leak (which is to say a whole lot).


Analysis: A beast of a laptop flagship?

If this leak proves to be correct, then we are apparently looking at an RTX 4060 all the way through to an RTX 4090 for laptop GPUs, which are about to be revealed at CES (along with Intel’s new Raptor Lake mobile offerings, as mentioned). There is also an RTX 4050 rumored, and this could be coming too, in theory, just not in an HP Omen (the RTX 4050 was actually spotted in leaked benchmarks for a Samsung Galaxy Book Pro laptop).

What’s also interesting here is that the rumor mill has been rather unsure of what the top-end model will be for Lovelace mobile, and to see that it could be an RTX 4090 primes us to expect a suitably beefy laptop GPU which really pushes frame rates for gaming on the move. (Remember, the Ampere generation topped out at the RTX 3080 Ti for notebooks; there was no 3090 for laptops).

Sadly, we don’t get any specs other than the memory loadouts, although that hasn’t stopped whispers about the possibility of the RTX 4090 being a laptop graphics card that might match the power of the desktop RTX 3090, even (or at least come close to it). A previous leak pointed to the 4090 mobile using the AD103 chip (AD102 is just too much for the confines of a notebook chassis) with a TGP of up to 175W (but in fairness, that rumor is another sketchy one that we should be extra skeptical about).

There’s still plenty of doubt in the air, then, as to how Lovelace laptop GPUs will really shape up, but at least we should find out in less than a month now. At CES, we’re fully expecting to see some power-packed gaming laptops that use Intel’s Raptor Lake mobile processors in combination with RTX 4000 graphics cards.



from TechRadar: computing components news https://ift.tt/HUsKvuV
via IFTTT

Thursday, 8 December 2022

SK hynix Reveals DDR5 MCR DIMM, Up to DDR5-8000 Speeds for HPC

https://ift.tt/g4BwfGF

One of the world's biggest semiconductor companies and manufacturers of DRAM, SK hynix, has unveiled it has working samples of a new generation of memory modules designed for HPC and servers. Dubbed Multiplexer Combined Ranks (MCR) DIMMs, the technology allows high-end server DIMMs to operate at a minimum data rate of 8 Gbps, which the company bills as an 80% uptick in bandwidth compared to existing DDR5 memory products (4.8 Gbps).

Typically, the most common way to ensure higher throughput performance on DIMMs is through ever increasing memory bus (and chip) clockspeeds. This strategy is not without its drawbacks, however, and aiming to find a more comprehensive way of doing this, SK hynix, in collaboration with both Intel and Renesas, has created the Multiplexer Combined Rank DDR5 DIMM.

Combining Intel's previously-unannounced MCR technology for its server chips and Renesas's expertise in buffer technology, SK hynix says its DDR5 MCR DIMM delivers 66% more bandwidth than conventional DDR5 DIMMs, hitting an impressive 8 Gbps/pin (DDR5-8000). SK hynix has elsewhere claimed that the MCR DIMM will be 'at least' 80% faster than what's currently out there DDR5-wise, but it doesn't quantify how it reaches that figure.

The technology behind the MCR DIMM is interesting, as it enables simultaneous usage of two ranks instead of one, in essence ganging up two sets/ranks of memory chips in order to double the effective bandwidth. Unfortunately, the details beyond this are slim and unclear – in particular, SK hynix claims that MCR "allows transmission of 128 bytes of data to CPU at once", but looking at the supplied DIMM photo, there doesn't seem to be nearly enough pins to support a physically wider memory bus.

More likely, SK hynix and Intel are serializing the memory operations for both ranks of memory inside a single DDR5 channel, allowing the two ranks to achieve a cumulative effective bandwidth of 8Gbps. This is supported by the use of the Renesas data buffer chip, which is shown to be on the DIMM in SK hynix's photos. Conceptually, this isn't too far removed from Load Reduced DIMMs (LRDIMMs), which employs a data buffer between the CPU and memory chips as well, though just how far is difficult to determine.
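
As a rough sanity check on those bandwidth figures, here's the simple per-DIMM arithmetic, assuming a standard 64-bit (8-byte) DDR5 data bus and ignoring ECC lanes:

    # Peak per-DIMM bandwidth = transfer rate (MT/s) x bus width (bytes).
    def dimm_bandwidth_gbs(transfer_rate_mts: int, bus_bytes: int = 8) -> float:
        """Peak bandwidth in GB/s, assuming a 64-bit (8-byte) data bus."""
        return transfer_rate_mts * bus_bytes / 1000

    ddr5_4800 = dimm_bandwidth_gbs(4800)   # 38.4 GB/s
    mcr_8000 = dimm_bandwidth_gbs(8000)    # 64.0 GB/s

    print(f"DDR5-4800:     {ddr5_4800:.1f} GB/s")
    print(f"MCR DDR5-8000: {mcr_8000:.1f} GB/s")
    print(f"Uplift:        {(mcr_8000 / ddr5_4800 - 1) * 100:.0f}%")  # ~67%, in line with the 66% figure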

More curious, perhaps, is that this design puts a great deal of faith into the ability of the physical memory bus and host controller (CPU) to be able to operate at DDR5-8000 (and higher) speeds. Normally the bottleneck in getting more memory bandwidth in server-grade systems is the memory bus to begin with – having to operate at slower speeds to accommodate more memory – so going a route that requires such a fast memory bus is definitely a different approach. In either case, the ability to run DIMMs at DDR5-8000 speeds in a server would be a significant boon to memory bandwidth and throughput, as that's often in short supply with today's many-core chips.

As SK hynix has partnered with Intel on its MCR technology and is using buffer technology from Renesas, MCR would seem to be an Intel-exclusive technology, at least to start with. As part of SK hynix's press release, Intel for its part stated that they “look forward to bringing this technology to future Intel Xeon processors and supporting standardization and multigenerational development efforts across the industry.” In the interim, this appears to be a technology still under active development, and SK hynix is not publishing anything about availability, compatibility, or pricing.

While SK hynix hasn't gone into much detail on how the MCR DIMM extracts nearly double the effective bandwidth of conventional DDR5 memory, this product is designed for the high-performance computing (HPC) and server industries, and it's unlikely we'll see MCR DIMMs in any form on consumer-based systems. We expect to learn more in the not-too-distant future.

Source: SK Hynix



from AnandTech https://ift.tt/I8ReurE
via IFTTT

Monday, 21 November 2022

Microsoft’s embarrassing Windows 11 printer fail finally gets fixed – but is it too late?

https://ift.tt/otRTQy8

Microsoft has officially marked a frustrating printer bug as resolved, and those folks who were being blocked from upgrading to Windows 11 22H2 due to the compatibility issue will doubtless be pleased to hear that.

You might recall this seriously troublesome bug that emerged in late September 2022, forcing printers to revert to their default settings. By default, many important features weren’t available – we’re talking about printing in duplex, higher resolutions, and maybe even color, which could obviously be major stumbling blocks.

The good news is that as Neowin spotted, Microsoft officially marked the issue as resolved just a few days ago (November 18). In actual fact, the safeguard blocking devices which could run into this bug was removed a week previously – therefore allowing those machines to update to Windows 11 22H2 – though it could still take some time for the upgrade to come through.

At this point, though, any machine with a connected printer that could fall prey to this bug should be able to go ahead and upgrade to 22H2 successfully without waiting.

Microsoft observed: “Any printer still affected by this issue should now get resolved automatically during upgrade to Windows 11, version 22H2.”


Analysis: A rocky road, for sure

This has been a bit of a rocky road for those with an affected printer wishing to upgrade to Windows 11 22H2, of course, as the bug has hung around for quite some time. As noted, it was two months ago that it first came to our attention, so this has hardly been a quick fix.

With a lot of questions being asked about the prevalence of Windows 10 bugs in the past, and now Windows 11 apparently continuing with a worrying number of problems in terms of quality assurance, the whole affair isn’t a great look for Microsoft. Yes, we’ve banged this drum many times, but we’ll continue to do so while bugs like this printer-related gremlin – or other flaws such as File Explorer crashing or slowing down Windows 11 PCs – are still popping up far too often for our liking.

If you’ve been suffering at the hands of a gremlin in the works with Microsoft’s latest OS, be sure to check out our guide to solving common problems with Windows 11.



from TechRadar: computing components news https://ift.tt/L486Kqr
via IFTTT

Wednesday, 16 November 2022

Asus details power demands of RDNA 3 GPUs – and they may not be as much as you think

https://ift.tt/jwXUuyk

Asus has provided official power supply requirements for those buying a new RTX 4080 or 4090 graphics card from the company, and also the necessary wattages for those looking at purchasing an RX 7900 XT or 7900 XTX when those RDNA 3 cards emerge in December.

Bear in mind that the stipulated PSUs for these graphics cards do depend on what CPU you’re running them with, as these days, some processors can demand a whole lot of juice from the power supply too – so factor in whatever combo you have in your PC.


As @momomo_us reports on Twitter (via VideoCardz), Asus recommends at least an 850W power supply for the RTX 4090 if running an Intel Core i5 or Ryzen 5 CPU. For those using a Core i7 or Core i9 chip, or Ryzen 7 or 9, that recommendation is pushed up to 1000W. For Intel HEDT (high-end desktop) or Threadripper CPUs, the PSU should be 1200W, no less.

For the RTX 4080, the base recommendation is 750W and that holds for Core i5 and i7 models, or Ryzen 5 or 7. For a Core i9 or Ryzen 9, it’s cranked up to 850W, and 1000W for Threadripper or Intel HEDT.

What about those recommendations for AMD’s incoming RDNA 3 graphics cards? Those wanting to grab an RX 7900 XT are looking at the exact same deal as the RTX 4080 – 750W for the Core i5 or i7 CPUs (and Ryzen 5 or 7), then stepping up to 850W for a Core i9 or Ryzen 9, and 1000W for an HEDT model.

For the RX 7900 XTX, you’re looking at 850W for a Core i5 or i7 (and Ryzen 5 or 7), and 1000W for when paired with a Core i9 or Ryzen 9 CPU. For an HEDT chip, the requirement is a lofty 1,200W, just as with the RTX 4090.
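
For quick reference, here’s a compact restatement of Asus’s figures above as a simple Python lookup – the wattages are exactly as listed, while the tier labels are just shorthand of our own choosing:

    # Asus's recommended PSU wattages, keyed by GPU and CPU tier.
    # Tiers: "i5/R5" = Core i5 / Ryzen 5, "i7/R7" = Core i7 / Ryzen 7,
    # "i9/R9" = Core i9 / Ryzen 9, "HEDT" = Intel HEDT / Threadripper.
    PSU_WATTS = {
        "RTX 4090":    {"i5/R5": 850, "i7/R7": 1000, "i9/R9": 1000, "HEDT": 1200},
        "RTX 4080":    {"i5/R5": 750, "i7/R7": 750,  "i9/R9": 850,  "HEDT": 1000},
        "RX 7900 XT":  {"i5/R5": 750, "i7/R7": 750,  "i9/R9": 850,  "HEDT": 1000},
        "RX 7900 XTX": {"i5/R5": 850, "i7/R7": 850,  "i9/R9": 1000, "HEDT": 1200},
    }

    def recommended_psu(gpu: str, cpu_tier: str) -> int:
        """Return Asus's recommended PSU wattage for a given GPU/CPU pairing."""
        return PSU_WATTS[gpu][cpu_tier]

    print(recommended_psu("RX 7900 XTX", "i9/R9"))  # 1000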


Analysis: Compare and contrast with AMD’s current-gen

It’s interesting to see some official figures from a graphics card maker for RDNA 3 products, although there are no real surprises here. The RX 7900 XT and 7900 XTX in particular were never going to be anything less than pretty demanding on the PC’s power supply, but the AMD flagship does push things to the same kind of level as the RTX 4090 – with the notable exception of an easier time for those running Core i7 or Ryzen 7 CPUs (probably quite a few users, as not everyone feels the need to push to Core i9 or Ryzen 9).

What’s perhaps more instructive is to compare the incoming AMD graphics cards to their predecessor models, and it’s here we see that the RX 7900 XT actually notches things down from the 6900 XT – at least for Core i7/Ryzen 7 and Core i9/Ryzen 9 processors, where the 7900 XT demands 100W and 150W less respectively; that’s quite impressive in comparison. (In fact, the 7900 XT has the same requirements as the 6800 XT, believe it or not).

There isn’t a direct comparison for the 7900 XTX, but it does keep things the same as the 6950 XT (all but in HEDT turf where the new graphics card does require considerably more, 200W to be precise – but those are more niche cases).

Overall, then, this is decent news for those looking at a new AMD RDNA 3 GPU down the line, especially given Nvidia’s current troubles with melting cables and the RTX 4090 (with question marks over whether this problem will affect the RTX 4080, too).



from TechRadar: computing components news https://ift.tt/uFH9058
via IFTTT

Tuesday, 15 November 2022

Intel Raptor Lake mobile CPUs leaked – could these be the future of laptops?

https://ift.tt/jwXUuyk

Intel’s upcoming Raptor Lake laptop CPUs have been leaked, and the line-up looks pretty enticing, indeed.

This comes from regular hardware leaker Raichu on Twitter (as flagged up by Wccftech), who provided full specs for the range of HX chips, starting with the Core i9-13900HX with 8 performance cores, 16 efficiency cores, and a boost of up to 5.4GHz (the base clock is 3.9GHz).


Next up, we have the Core i7-13700HX – remember, take all these with a fistful of seasoning, as ever with the rumor mill – which supposedly runs with 8 performance plus 8 efficiency cores, and boosts up to 5GHz (3.7GHz base clock).

A slight step down from that is the Core i7-13650HX processor which drops to 6 performance cores (keeping 8 efficiency ones), with boost of up to 4.9GHz (and a 3.6GHz base clock).

Then there’s the Core i5-13500HX which keeps the same core count as the 13650HX but drops that boost to 4.7GHz (3.5GHz base). And finally, Raichu tells us that there’s a Core i5-13450HX with 6 performance cores plus 4 efficiency cores, and a boost of up to 4.6GHz (and a base clock of 3.4GHz).
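
To keep that leaked line-up straight, here’s the same information gathered in one place as a quick Python summary – the figures come straight from the leak and remain unconfirmed:

    # Leaked Raptor Lake HX mobile specs, as reported by Raichu (unconfirmed).
    # Tuples: (performance cores, efficiency cores, base clock GHz, boost clock GHz)
    RAPTOR_LAKE_HX = {
        "Core i9-13900HX": (8, 16, 3.9, 5.4),
        "Core i7-13700HX": (8, 8, 3.7, 5.0),
        "Core i7-13650HX": (6, 8, 3.6, 4.9),
        "Core i5-13500HX": (6, 8, 3.5, 4.7),
        "Core i5-13450HX": (6, 4, 3.4, 4.6),
    }

    for chip, (p_cores, e_cores, base, boost) in RAPTOR_LAKE_HX.items():
        total = p_cores + e_cores
        print(f"{chip}: {p_cores}P + {e_cores}E ({total} cores), {base}GHz base / {boost}GHz boost")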


Analysis: Raptor ready to pounce

That Raptor Lake HX flagship looks like something quite special, with no fewer than 24 cores (8 of them performance cores) and boost speeds of up to 5.4GHz. It’s a big step up from Alder Lake laptop chips and, along with those extra cores and faster clock speeds, you can also count on bigger cache sizes for the Raptor Lake silicon. On top of that, better overclocking is promised with 13th-gen laptop chips, so it’s fair to say they’re waiting in the wings to make quite an impact.

When can we expect Team Blue to launch these mobile processors? Intel has said we’ll witness their appearance before 2022 is out but, realistically, these chips will be in more laptops come early 2023 (doubtless with some potent pairings alongside RTX 40 series laptop GPUs).

Perhaps the worry here is not performance, which appears to be present in abundance, but rather how these HX chips may hit battery longevity. That’s where AMD could score a key victory in terms of better efficiency with Zen 4 laptop silicon when it arrives – but exactly when that’ll be, we’re not sure. But we do know Raptor Lake mobile chips will be out in full force before AMD comes to market, and as mentioned, 13th-gen laptop silicon is not far away now. Exciting times, indeed.



from TechRadar: computing components news https://ift.tt/MrTWn75
via IFTTT

Tuesday, 8 November 2022

Intel Launches Raptor Canyon: Desktop Raptor Lake Sizzles in NUC13 Extreme

https://ift.tt/4eNH5nT

Intel is officially taking the wraps off the first member of their Raptor Lake-based NUC13 family today. The NUC13 Extreme (like the three previous Extreme NUCs) caters to the gamers and content creators requiring leading edge performance and high-end discrete GPU support. Unlike the mainstream NUCs which have been consistently maintaining an ultra-compact form-factor profile, the Extreme family has slowly grown in size to accommodate flagship CPUs and discrete GPUs. These systems integrate a motherboard in a PCIe add-in card form factor (the Compute Element) and a baseboard that provides additional functionality with PCIe slots and other I/O features. As a refresher, Intel created the NUC Extreme category with the introduction of the Ghost Canyon NUC family in 2019. This was followed by the Tiger Lake-based Beast Canyon NUC in 2021 and the Alder Lake-based Dragon Canyon NUC earlier this year. The latest member of this family is today's introduction - the Raptor Canyon NUC based on the Shrike Bay Compute Element.

The NUC Extreme family has grown in physical footprint with each generation, and the NUC13 Extreme is Intel's biggest one yet. Coming in at 317mm x 129mm x 337mm (13.7L), this is more of a traditional tower desktop than the NUCs that the market has grown accustomed to. However, this size has allowed Intel to integrate flagship components. The Shrike Bay Compute Element supports socketed LGA 1700 processors with a PL1 of 150W and PL2 of 250W (tau of 28s). The vertical centering of the baseboard within the case enables plenty of isolation between the Compute Element on the top and the discrete GPU on the bottom. Triple-slot dGPUs up to 12.5" in length are supported.

The NUC13 Extreme Kit comes in three flavors, while the Shrike Bay Compute Element itself has six variations. These allow system integrators and OEMs to offer a wide variety of systems targeting different market segments. The table below summarizes the key differences between the three NUC13 Extreme kits.

Intel NUC13 Extreme Kits (Raptor Canyon)

Model: NUC13RNGi9 / NUC13RNGi7 / NUC13RNGi5

CPU:
  NUC13RNGi9 – Intel Core i9-13900K (Raptor Lake, 8P + 16E / 32T, 5.8 GHz Turbo / 5.4 GHz P / 4.3 GHz E, 125W TDP, up to 253W)
  NUC13RNGi7 – Intel Core i7-13700K (Raptor Lake, 8P + 8E / 24T, 5.4 GHz Turbo / 5.3 GHz P / 4.2 GHz E, 125W TDP, up to 253W)
  NUC13RNGi5 – Intel Core i5-13600K (Raptor Lake, 6P + 8E / 20T, 5.1 GHz Turbo / 5.1 GHz P / 3.9 GHz E, 125W TDP, up to 181W)

GPU: Intel UHD Graphics 770 (300 MHz - 1.65 GHz on the i9 / 1.60 GHz on the i7 / 1.50 GHz on the i5)

Memory: 2x DDR5-5600 SODIMMs (up to 64GB)

Motherboard (Compute Element): 295.3mm x 136.5mm x 46.1mm (Custom)

Storage:
  1x CPU-attached PCIe 4.0 x4 M.2 2280
  1x PCH-attached PCIe 4.0 x4 M.2 2242 / 2280
  1x PCH-attached PCIe 4.0 x4 / SATA M.2 2242 / 2280
  2x SATA 6 Gbps (on baseboard)

I/O Ports:
  2x USB4 / Thunderbolt 4 (Type-C) (Rear)
  6x USB 3.2 Gen 2 Type-A (Rear)
  1x USB 3.2 Gen 2 Type-C (Front)
  2x USB 3.2 Gen 1 Type-A (Front)

Networking:
  Intel Killer Wi-Fi 6E AX1690i (2x2 802.11ax Wi-Fi inc. 6 GHz + Bluetooth 5.2 module)
  1x 2.5 GbE port (Intel I226-V)
  1x 10 GbE port (Marvell AQtion AQC113C)

Display Outputs:
  2x DP 2.0 (1.4 certified) (via Thunderbolt 4 Type-C, iGPU)
  1x HDMI 2.1 (up to 4Kp60) (rear, iGPU)

Audio / Codec:
  7.1 digital (over HDMI and Thunderbolt 4)
  Realtek ALC1220 Analog Audio / Microphone / Speaker / Line-In 3.5mm (Rear)
  USB Audio 3.5mm combo audio jack (Front)

Enclosure: Metal, Kensington lock with base security

Power Supply: FSP750-27SCB 750W Internal PSU

Dimensions: 337mm x 317mm x 129mm / 13.7L

Chassis Expansion: One PCIe 5.0 x16 slot with triple-slot GPU support up to 317.5mm in length

Miscellaneous:
  Customizable RGB LED illumination on chassis underside
  CEC support for HDMI port
  Power LED ring in front panel
  3-year warranty

Each kit SKU corresponds to a NUC13SBB Shrike Bay Compute Element. In addition, Intel is also readying the NUC13SBBi(9/7/5)F variants that come with the KF processors - those Compute Elements do not have any Thunderbolt 4 ports. The HDMI port / graphics outputs are also not present. The three KF SKUs also forsake the 10GbE port.

The block diagram below gives some insights into the design of the system in relation to the I/O capabilities. Note that the system continues to use the Z690 chipset that was seen in the Dragon Canyon NUC.

PCIe x16 bifurcation (x8 + x8) is possible for the Gen 5 lanes. However, the baseboard design in the Raptor Canyon NUC kits does not support it. This is yet another aspect that OEMs could use to differentiate their Shrike Bay-based systems from the NUC13 Extreme.

Intel has provided us with a pre-production engineering sample of the flagship Raptor Canyon NUC (augmented with an ASUS TUF Gaming RTX 3080Ti GPU) for review, and it is currently being put through its paces. The 150W PL1 and microarchitectural advances in Raptor Lake have ensured that the benchmark scores are off the charts compared to the previous NUC Extreme models, albeit at the cost of significantly higher power consumption. On the industrial design side, I have been very impressed. By eschewing a fancy chassis and opting for a simple cuboid, Intel has ensured that all the I/O ports are easily accessible, installation of components is fairly straightforward, and cable management is hugely simplified. The increased dimensions of the chassis are well worth these advantages over the previous NUC Extreme models. Stay tuned for a comprehensive review later this week.



from AnandTech https://ift.tt/yAOpXKt
via IFTTT

Thursday, 27 October 2022

Ryzen 3 7300X leak gives us hope that AMD is planning a Zen 4 budget CPU

https://ift.tt/Cwx9Hlk

AMD could have a couple of new Zen 4 processors incoming, one of which is in theory a Ryzen 7 7800X, and the other a Ryzen 3 7300X.

These Zen 4 chips have been spotted on Geekbench by Benchleaks (as VideoCardz reported, though they removed the story shortly after posting), and apparently the 7800X will make the leap to being a 10-core processor, at least if the provided specs are correct. Of course, we must treat this leak with the usual caution that should be applied to any rumor.


The 7800X is seen with 10 cores and 20 threads, with boost speeds shown as up to 5.4GHz.

As for the theoretical Ryzen 3 7300X, its Geekbench entry shows that it’s a quad-core CPU with boost of up to 5GHz.

What about the Geekbench results themselves? The 7800X hits 2,097 and 16,163 for single-core and multi-core respectively. That falls a touch short of the existing 7700X in the former, but comfortably outdoes that chip – by around 15% – in the latter as you might expect with a pair of extra cores for the 7800X.

The Ryzen 3 7300X achieves 1,984 and 7,682 for single-core and multi-core, which unsurprisingly leaves it bringing up the rear of the Ryzen 7000 family. It’s not too far off the Ryzen 5 7600X for single-core though, and indeed the latter is only 7% faster. (Bear in mind on both counts for the 7800X and 7300X that these are pre-release processors, so are likely not showing their full performance levels yet).
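
The 7700X and 7600X scores aren’t quoted above, but the stated percentage gaps let us back out rough implied figures – purely illustrative arithmetic, not actual benchmark entries:

    # Back out the comparison scores implied by the percentages quoted above.
    # Illustrative only; the actual 7700X/7600X Geekbench entries aren't listed here.
    r7800x_multi = 16163
    r7300x_single = 1984

    # "comfortably outdoes that chip - by around 15%" implies a 7700X multi-core of:
    implied_7700x_multi = r7800x_multi / 1.15    # ~14,055
    # "the latter is only 7% faster" implies a 7600X single-core of:
    implied_7600x_single = r7300x_single * 1.07  # ~2,123

    print(f"Implied 7700X multi-core score:  ~{implied_7700x_multi:,.0f}")
    print(f"Implied 7600X single-core score: ~{implied_7600x_single:,.0f}")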


Analysis: Does a Ryzen 3 CPU for Zen 4 really make sense this early on?

A Ryzen 7 7800X with 10 cores would seem to be an odd choice to make, perhaps, remembering that the 5800X was a straight 8-core CPU. As we can see from the benchmarks, in this form with two extra cores, it would represent a solid step on from the 7700X and differentiate these processors more, at least in terms of multi-threaded performance.

What more eyes are likely to be on here is the 7300X, and the hopeful prospect that a Ryzen 3 CPU for the Zen 4 generation could be inbound. This is an option that some folks looking to build budget-friendly PCs have been crying out for, and didn’t get with the Ryzen 5000 range. (There was a Ryzen 5 5500 brought out earlier this year, but other than that, no Ryzen 3 silicon, and there are just Ryzen 4000 Zen 2-based chips at the lowest end of the market for AMD).

Will this Ryzen 3 7300X really happen, though? We’re not sure, and certainly there are arguments for staying skeptical here. Releasing such a chip would require AMD to redirect at least some production resources to manufacture it, obviously, and these lower-end products have far slimmer profit margins compared to what’s on the table right now. So, does it really make sense to do so this early in the game for the Ryzen 7000 family?

The other possibility is to use what are basically rejects for beefier chips (with cores disabled down to a quad-core CPU), but yields are so good these days, that problematic silicon such as this has become relatively thin on the ground.

Meaning that if a Ryzen 3 7300X is coming, it most likely wouldn’t be for some time one way or another – enough time to build up the necessary chips – and what’s more, it’d probably be a limited production run too. (Such as the situation with those old budget favorites with Ryzen 3000, the Ryzen 3 3100 and 3300X, which were difficult to get hold of; more so in the latter case).

Furthermore, there’s also the consideration that jumping to Zen 4 is a pricey proposition still with the cost of the required AM5 motherboard (and DDR5 RAM), which again is another argument that any Ryzen 3 offering wouldn’t make much sense as a nearer-term thing (as an affordable chip, but with no similarly affordable mobo to complement it). Longer-term, of course, we’ll see those motherboard prices come down – and lower-end models emerge – for those looking to build a shiny new AMD-powered PC (plus DDR5 is going to drop further according to forecasts, too).

In short, we’d advise any excitement around the 7300X should be tempered with a dose of the likely reality here. But we’re not denying it’s great to see a Ryzen 3 chip floating around at this stage of the game, and the very sighting of the CPU does hold some promise for the future in terms of budget PC builds. Meanwhile, we can of course look forward to cheaper Ryzen 5000 chips as that generation filters towards the exit…



from TechRadar: computing components news https://ift.tt/xuPANQW
via IFTTT

Tuesday, 25 October 2022

Intel Raptor Lake CPU surprise gets ruined by Microsoft

https://ift.tt/EeJOq41

What appears to be Intel’s full line-up of Raptor Lake processors has been leaked online courtesy of Microsoft.

We have already seen six 13th-gen CPUs launched – the Intel Core i9-13900K, Core i7-13700K and Core i5-13600K, plus their respective KF versions (‘F’ means no integrated graphics) – and there’ll be another 16 coming, to make 22 processors in total, if this list from Microsoft is correct.

As VideoCardz reports, the Raptor Lake processors have appeared in the software giant’s list of supported Intel chips for Windows 11 22H2, and what’s telling is that the models mentioned directly match another recent leak from motherboard maker Gigabyte.

Again, the Gigabyte list details all the Raptor Lake processors its motherboards will support, and with both of these line-ups matching exactly, it seems a good bet that this is the full range of Intel’s 13th-gen CPUs.

Here’s the full list in all its glory:

  • Core i9-13900KF    
  • Core i9-13900K      
  • Core i9-13900F      
  • Core i9-13900        
  • Core i9-13900T      
  • Core i7-13700KF    
  • Core i7-13700K      
  • Core i7-13700F      
  • Core i7-13700        
  • Core i7-13700T      
  • Core i5-13600KF    
  • Core i5-13600K      
  • Core i5-13600        
  • Core i5-13600T      
  • Core i5-13500        
  • Core i5-13500T      
  • Core i5-13400F      
  • Core i5-13400        
  • Core i5-13400T      
  • Core i3-13100F      
  • Core i3-13100
  • Core i3-13100T

Analysis: A few hints at some specs, but not much

As mentioned, the ‘F’ models are those without integrated graphics, for folks who have discrete GPUs in place, and ‘K’ models are unlocked (for overclocking – and ‘KF’ means unlocked with no integrated GPU, of course). The ‘T’ models are low-power efforts for enterprise usage, not aimed at consumers in other words.

For the 16 purportedly incoming Raptor Lake processors, what we don’t get is any spec details at all from Microsoft – just the CPU names. The previous Gigabyte list, however, did provide a few scant specs: the base frequency (but sadly not that all-important boost), plus TDP. No core counts were confirmed for the supposedly inbound chips, though.

The full expansion of the Raptor Lake line-up is likely to happen early in 2023, we’d guess, and with this host of new CPUs will come cheaper motherboards, too. Meaning that folks intent on a budget 13th-gen build will have far more affordable options in terms of the Core i5-13500 and 13400, or indeed the Core i3-13100 at the low-end, combined with a B760 motherboard (which again will be considerably more reasonably priced than Z790 models).

There’s a good deal of excitement around the Core i5-13400 in particular regarding the potential for it to offer something very compelling in the value proposition stakes for more budget-friendly builds.



from TechRadar: computing components news https://ift.tt/1CQEaTr
via IFTTT

Friday, 21 October 2022

This AMD Ryzen 7000 CPU cooling trick is something you really shouldn’t try at home

https://ift.tt/D0PC9wF

AMD’s Ryzen 7000 processors have come under fire for the design of their integrated heat spreader (IHS), and how it doesn’t help thermals – but there’s a way around this apparently, one that will ensure the chips run a fair bit cooler. However, this is definitely not something we recommend the average user should try (not that they’ll be equipped to anyway).

Why? Well, because it involves taking a shiny new Zen 4 processor and exposing it to a grinding tool. Yes, the solution to the thick IHS for this Ryzen generation – we’ll discuss why it’s beefier later on – is simply to make it thinner by grinding it down.

Obviously this is not something the average PC owner wants to do, but more hardcore types may consider exploring this avenue – and some already have, as in the case of JayzTwoCents using expert overclocker Der8auer’s grinding tool – as spotted on Twitter by Andreas Schilling (via Tom’s Hardware).


The result of shaving down the IHS of a Ryzen 9 7950X CPU by 0.8mm proved to be a reduction in temperatures from 94-95C down to 85-88C, a pretty substantial drop (those were the temps running at 5.1GHz across all cores for the CPU).


Analysis: The lesser of two evils? Well, not exactly

Essentially, this is an alternative to another risky procedure known as ‘delidding’, where the CPU has the IHS actually removed, which can result in even bigger temperature drops. (Der8auer demonstrated a huge 20C reduction when delidding a 7900X previously, although that was using a special liquid metal thermal grease which is the overclocker’s own custom concoction).

Grinding down the IHS represents a somewhat less risky path – and less fiddly, too, as there’s a lot of extra work in fitting a cooling solution to a delidded (very differently sized) chip – but granted, in both cases, you are voiding your warranty. And unless you really know what you’re doing, you’re running the risk of ruining the CPU as you might imagine when it comes to drastic action like pulling it apart or grinding bits down. Which is why we really wouldn’t recommend this to anyone but expert enthusiasts (who can afford the cost if things go wrong, for that matter).

The whole backdrop to this is that AMD has used a thicker design for the IHS with Zen 4 chips on the AM5 platform (with a new processor socket). This is in order to keep compatibility with new Ryzen 7000 CPUs in terms of existing (AM4 platform) coolers – so folks don’t have to buy a new cooling solution – as the new socket is flatter, meaning the chip sits a little lower (so the thicker IHS makes up for that difference). But that extra 1mm of thickness over the usual design is somewhat counterproductive for good thermals.

Now, AMD reckons it’s fine for the Ryzen 9 7950X to tick along at temps like 95C, but some enthusiasts beg to differ, hence the controversy. And hence shaving off 0.8mm to bring the IHS back to about its previous pre-Ryzen 7000 size, having the processor run at more like 85C, a level owners are happier about.

As an aside, don’t forget the IHS is there to provide protection for the CPU, and with delidding that represents an extra risk in terms of leaving the die exposed – whereas grinding it down still leaves a protective lid on the chip, as it were.

If you’re worried about your Zen 4 temps – which may, of course, vary from case to case anyway – rather than go this route, it’s a much better and more feasible idea to look at alternative solutions such as using Eco mode settings (in AMD’s Ryzen Master software) to rein in that heat. (Or undervolting is another option, perhaps).



from TechRadar: computing components news https://ift.tt/8mArhy4
via IFTTT

Thursday, 20 October 2022

AMD Announces Radeon RDNA 3 GPU Livestream Event for November 3rd

https://ift.tt/CN1OQvE

Following on the heels of AMD’s CPU-centric event back in August, AMD today has sent out a press release announcing that they will be holding a similar event in November for their Radeon consumer graphics business. Dubbed “together we advance_gaming”, the presentation is slated to be all about AMD Radeon, with a focus on the upcoming RDNA 3 graphics architecture and all the performance and power efficiency benefits it will bring. The event is set to kick off on November 3rd at 1pm Pacific (20:00 UTC), with undisclosed AMD executives presenting details.

Like the Ryzen event in August, next month’s Radeon event appears to be AMD gearing up for the launch of its next generation of consumer products – this time on the GPU side of matters. Back at the start of the summer, AMD confirmed that RDNA 3 architecture products were scheduled to arrive this year, so we have been eagerly awaiting the arrival of AMD’s next generation of video cards.

Though unlike its CPU efforts, the company has been far more mum about its next-gen GPUs. So details in advance on what will presumably be the Radeon RX 7000 series have been limited. The biggest items disclosed thus far are that AMD is targeting another 50% increase in performance-per-watt, and that these new GPUs (Navi 3x) will be made on a 5nm process (undoubtedly TSMC’s). Past that, AMD hasn’t given any guidance on what to expect for performance.

One interesting aspect, however, is that AMD has confirmed that they will be employing chiplets with this generation of products. To what extent, and whether that’s on all parts or just some, remains to be seen. But chiplets are in some respects the holy grail of GPU construction, because they give GPU designers options for scaling up GPUs past today’s die size (reticle) and yield limits. That said, it’s also a holy grail because the immense amount of data that must be passed between different parts of a GPU (on the order of terabytes per second) is very hard to do – and very necessary to do if you want a multi-chip GPU to be able to present itself as a single device.

We’re also apparently in store for some more significant upgrades to AMD’s overall GPU architecture. Though what exactly a “rearchitected compute unit” and “optimized graphics pipeline” fully entail remains to be seen.

Thankfully we should have our answer here in two weeks. The presentation is slated to air on November 3rd at 1pm Pacific, on AMD’s YouTube channel. And of course, be sure to check out AnandTech for a full rundown and analysis of AMD’s announcements.



from AnandTech https://ift.tt/hg48Ov6
via IFTTT

Sunday, 16 October 2022

Microsoft’s tech to seriously speed up load times for Windows gamers is coming ‘soon’

https://ift.tt/lkKsEIM

Microsoft has announced that a fresh version of DirectStorage will be going out to game developers before the end of 2022, and it’ll come with an important step forward in terms of speeding up loading times with SSDs.

As you may be aware, DirectStorage is the feature first seen on the Xbox which brings faster load times – and better performance loading game assets in big open world titles – and it first arrived for Windows PCs back in March.

What Microsoft has now revealed (hat tip to Tom’s Hardware) is that DirectStorage 1.1, a new version with GPU Decompression tech incorporated, will be here very soon. Although there still aren’t any games that’ll benefit from it (yet – we’ll come back to this obviously rather crucial point).

Microsoft has already told us that DirectStorage (DS) will produce a reduction in load times of up to 40% – for games on fast NVMe SSDs running on Windows 11 – and this new piece of the DS puzzle, GPU Decompression, will offer something on the order of a tripling of loading performance, the company promises.

Normally, decompression (of compressed game assets, which need to be made smaller due to their hefty size) is run by the CPU, but what Microsoft is doing is switching this grunt work directly to the GPU.

Microsoft explains: “Graphics cards are extremely efficient at performing repeatable tasks in parallel, and we can utilize that capability along with the bandwidth of a high-speed NVMe drive to do more work at once.”

In a Microsoft demo, the company illustrated that when DirectStorage is running with GPU decompression, compared to traditional CPU decompression, “scenes are loading nearly 3x faster and the CPU is almost entirely freed up to be used for other game processes.” (In that demo, the processor only saw 15% maximum usage, by the way, compared to 100% usage when DS wasn’t being used).

Now, bear in mind this is a cherry-picked and ‘highly optimized’ demo (in Microsoft’s own words), but it certainly promises some seriously beefy gains overall, which should see games that support DirectStorage loading – and running – much more smoothly all-round.
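
For the curious, here’s roughly what ‘switching the grunt work to the GPU’ looks like from a developer’s point of view: a game opens its asset file via the DirectStorage factory, then enqueues read requests that name a GPU resource as the destination and GDeflate as the compression format, and the runtime decompresses on the GPU as the data arrives. The following is a minimal, untested C++ sketch along those lines – it assumes an already-created D3D12 device, an existing compressed asset file, and a pre-allocated destination buffer, and the field names reflect our reading of the public DirectStorage 1.1 headers, so treat it as illustrative rather than production code.

```cpp
// Minimal sketch of a DirectStorage 1.1 read with GPU (GDeflate) decompression.
// Assumes: a valid ID3D12Device, a destination ID3D12Resource buffer, and a
// compressed asset file on an NVMe drive. Error handling omitted for brevity.
#include <d3d12.h>
#include <dstorage.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void LoadAssetWithGpuDecompression(ID3D12Device* device,
                                   ID3D12Resource* destBuffer,
                                   const wchar_t* path,
                                   UINT32 compressedSize,
                                   UINT32 uncompressedSize)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    // A queue that pulls data from files and targets the given D3D12 device.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;
    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    // One request: read compressed bytes from disk, decompress on the GPU
    // (GDeflate), and write the result straight into a GPU buffer.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Options.CompressionFormat = DSTORAGE_COMPRESSION_FORMAT_GDEFLATE;
    request.Source.File.Source        = file.Get();
    request.Source.File.Offset        = 0;
    request.Source.File.Size          = compressedSize;
    request.UncompressedSize          = uncompressedSize;
    request.Destination.Buffer.Resource = destBuffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = uncompressedSize;

    queue->EnqueueRequest(&request);
    queue->Submit();  // CPU is now largely free; completion is signaled via a fence/event
}
```

The key point is the final two calls: once the request is submitted, the heavy lifting happens on the storage hardware and the GPU rather than on CPU threads, which is where that near-3x loading claim comes from.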


Analysis: Forspoken is sadly not forthcoming (still)

It’s worth noting that while DirectStorage is made with superfast NVMe SSDs in mind, it will still work with slower SSDs (and indeed hard disks, to a point), but the effect won’t be nearly as pronounced. The storage-speeding tech will also work fine on Windows 10 machines, but Windows 11 offers advances on the storage optimization front which again mean DS delivers more impact. (Also, you need a contemporary GPU for DS to work, meaning one with DX12 and Shader Model 6 support.)
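
If you want to check that Shader Model requirement programmatically, a standard D3D12 feature query does the job. This is a rough sketch assuming an already-created ID3D12Device – not an official DirectStorage capability checker:

```cpp
// Rough check of whether the installed GPU/driver reports Shader Model 6.0 or
// better, one of the stated requirements for DirectStorage GPU decompression.
#include <d3d12.h>

bool SupportsShaderModel6(ID3D12Device* device)
{
    // Ask for the highest model we care about; the runtime lowers it to what
    // the device actually supports.
    D3D12_FEATURE_DATA_SHADER_MODEL sm{ D3D_SHADER_MODEL_6_0 };
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL,
                                           &sm, sizeof(sm)))) {
        return false;  // the query itself failed (e.g. a very old runtime)
    }
    return sm.HighestShaderModel >= D3D_SHADER_MODEL_6_0;
}
```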

The main catch is that, despite work on DirectStorage proceeding nicely, there are still no PC games that actually support the tech. We were supposed to be getting the first game to show off DS this month, Forspoken, but it has been delayed to January 2023 (and it had already been pushed back once before that, so this is a tad disappointing).

That said, it’s still only a few months away – assuming no further hiccups – but even then, it’s just one game. It’ll doubtless be a while before wider support is adopted among PC game devs, but when it is, this could become a compelling reason to upgrade to Windows 11 for gamers (and equally a good reason to get an NVMe SSD for those who haven’t yet made the leap on the storage front).



from TechRadar: computing components news https://ift.tt/ZhRX8It
via IFTTT

Monday, 10 October 2022

Samsung Foundry Outlines Roadmap Through 2027: 1.4 nm Node, 3x More Capacity

https://ift.tt/FOT7WwN

Samsung outlined its foundry business roadmap for the next five years at its Foundry Forum event last week. The company plans to introduce its next generation fabrication technologies in a timely manner and intends to make chips on its 1.4 nm (14 angstroms) manufacturing process by 2027. Also, the company will keep investing in new manufacturing capacity going forward as it strives to strengthen its position in the foundry market.

New Nodes Incoming

Samsung has been introducing new production nodes and/or variants on production nodes every 12 to 18 months for several years now, and plans to keep up that rather aggressive pace going forward. Fanfare aside, though, the company’s roadmap illustrates that it is now taking longer to develop new fabrication processes. The company’s second-generation 3 nm-class gate-all-around (3GAP) technology is now set to arrive sometime in 2024. Meanwhile, Samsung Foundry intends to be ready with its 2 nm (20 angstroms) node in 2025, and with its 1.4 nm-branded fabrication process in 2027.

"With the company's success of bringing the latest [3 nm-class] process technology to mass production, Samsung will be further enhancing gate-all-around (GAA) based technology and plans to introduce the 2 nm process in 2025 and 1.4 nm process in 2027," a statement by Samsung reads.

Chip Fab Roadmaps, 2023 – 2027 (start of high-volume manufacturing)
Data announced during conference calls, events, press briefings and press releases

Intel
  Process: Intel 3 (2023), then Intel 20A and Intel 18A; later nodes undisclosed
  FET:     FinFET, moving to RibbonFET + PowerVia
  EUV:     0.33 NA EUV, moving to 0.55 (High-NA) EUV

Samsung
  Process: 3GAE (2023), 3GAP (2024), 2.0 nm (2025), 1.4 nm (2027)
  FET:     GAAFET; successors undisclosed
  EUV:     0.33 NA EUV; successors undisclosed

TSMC
  Process: N3E/N3P, then N3S/N3X, then N2 and N2 derivatives (?)
  FET:     FinFET, then GAAFET, then GAAFET with backside power delivery (?)
  EUV:     0.33 NA EUV; successors undisclosed
Painting in some very broad strokes and comparing the three roadmaps, TSMC looks a little more conservative (which is to be expected when you are the world's largest contract maker of microelectronics), whereas Intel is more aggressive (again expected, given the company's position in the semiconductor market). Meanwhile, the naming of fabrication processes these days is essentially aspirational, with little connection to real physical measurements, which is why comparing different semiconductor companies' roadmaps is an imprecise exercise at best.

In addition to new 'general' nodes, Samsung plans to expand its process technology optimization programs for each specific application as well as customized services for customers, the company said.

Meanwhile, one thing that Samsung notably did not mention in its press release concerning its 1.4 nm node is the use of High-NA equipment. Intel, for its part, plans to use High-NA starting with its Intel 18A node (in 2024), where it will eventually supplant the EUV multi-patterning used for initial 18A production.

According to Samsung, the adoption of new process technologies and demand for new fabrication processes will be driven by already-known mega trends — AI, autonomous vehicles, automotive applications in general, HPC, 5G, and eventual 6G connectivity. Keeping in mind that Samsung is a large industrial conglomerate with many divisions, many of the applications it intends to address with future process nodes are its own.

The company disclosed last week that its LSI Business (chip development division) currently offers around 900 products, including SoCs, image sensors, modems, display driver ICs (DDIs), power management ICs (PMICs), and security solutions. Going forward, the company plans to put even more effort into developing performance-demanding IP, including CPUs and GPUs, by working more closely with its industry partners (which presumably include Arm and AMD).

Expanded Production Capacity

Offering state-of-the-art production technologies is all well and good, but producing those advanced chips in sufficient quantities to meet market demand is equally important. To that end, Samsung announced that it will continue to invest heavily in building out additional production capacity. In recent years Samsung's semiconductor capacity CapEx has been around $30 billion a year, and it does not look like the firm plans to put a cap on that spending (though it is noteworthy that it has not disclosed how much money it intends to spend).

Samsung plans to expand production capacity for its 'advanced' process technologies more than three-fold by 2027. While the company is not naming the nodes it considers "advanced", we would expect a significant addition of EUV capacity over the next five years – especially as more ASML EUV machines become available. Meanwhile, the company will adopt a 'Shell-First' approach to its expansion, constructing buildings and clean rooms first and adding equipment later, depending on market conditions.

Samsung's new fab under construction near Taylor, Texas, will be one of the company's main vehicles for adding capacity in the coming years. The shell-first site is due to start producing chips in 2024, and as the company adds new tools to the fab and builds out new phases, the site's production capacity will increase further.

Source: Samsung



from AnandTech https://ift.tt/BhTJ8Uj
via IFTTT

Friday, 7 October 2022

Intel Raptor Lake flagship CPU hits a huge 8.2GHz overclock

https://ift.tt/GmqAsEY

Intel’s Raptor Lake flagship has had another seriously impressive overclock applied, one that’s even faster than the leaked 8GHz feat we witnessed last month.

The difference this time is that this is an official overclock of the Core i9-13900K by an expert in the field, ‘Splave’ (Allen Golibersuch), who managed to get the CPU to run at 8.2GHz.

As Tom’s Hardware reports, this was part of Intel’s Creator Challenge, and as you might imagine, Splave did not use traditional cooling, but rather liquid nitrogen (as is invariably the case with record attempts – this kind of exotic cooling can’t be replicated at home, and is only good for a brief period of operation).

Splave managed to push Alder Lake’s equivalent, the 12900K, to 7.6GHz, so with this overclock to 8.2GHz, the Raptor Lake flagship is 8% faster, even before it’s released.


Analysis: A tempting proposition for PC speed demons

It’s exciting times for PC tinkerers and enthusiasts, then. The overclocking potential for Raptor Lake is the strongest seen for a range of Core processors in ages, with the last time an Intel chip crested the 8GHz mark being over a decade ago.

The fastest in recent memory was the Core i9-10900K hitting 7.7GHz, and that’s obviously been well and truly beaten already. The thing to remember is, after the 13900K has been released, it’ll inevitably be pushed to greater heights. For example, the 12900K topped out at 6.8GHz in its overclocking capabilities when first released, but that was later beaten by the aforementioned 7.6GHz.

In theory, then, we could well see the 13900K storm into 8.5GHz plus territory eventually at the hands of experts using liquid nitrogen. At that point, the CPU will be challenging the fastest speeds ever reached by a desktop processor (over 8.7GHz, and those very fastest chips are older models from AMD, it’s worth noting).

While the average PC owner is obviously not going to see performance anything like this, it suggests more normal overclocking – using liquid cooling perhaps, or a good air cooler – will produce impressive results too. And indeed there has been a leak showing the 13900K purportedly running at a mighty 6.5GHz (on a single-core) with a standard liquid cooling solution (add your own salt, maybe a few shakes here, as ever with the rumor mill).

All signs point to a very promising level of overclocking performance for Raptor Lake, which could be a potent lure for enthusiasts, and a factor that could worry AMD. The new Zen 4 flagship, AMD’s Ryzen 9 7950X, has hit close to 7.5GHz thus far (again with liquid nitrogen), so the 13900K is roughly 9% ahead at this point in the overclocking wars. A gap that is unlikely to be closed, of course, as time progresses…
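
As a quick sanity check on those percentages – using only the clock speeds quoted in this article, all liquid nitrogen results – the arithmetic works out like this:

```cpp
// Quick arithmetic check of the overclocking gaps quoted above (LN2 results).
#include <cstdio>

int main() {
    const double i9_12900K = 7.6;  // GHz, best known Alder Lake result
    const double i9_13900K = 8.2;  // GHz, Splave's Raptor Lake result
    const double r9_7950X  = 7.5;  // GHz, approximate best Zen 4 result so far

    std::printf("13900K vs 12900K: +%.1f%%\n", (i9_13900K / i9_12900K - 1.0) * 100.0);  // ~7.9%
    std::printf("13900K vs 7950X:  +%.1f%%\n", (i9_13900K / r9_7950X  - 1.0) * 100.0);  // ~9.3%
    return 0;
}
```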



from TechRadar: computing components news https://ift.tt/6oxQvT3
via IFTTT

Wednesday, 5 October 2022

Nvidia RTX 4090 GPU shows blistering frame rates in Overwatch 2

https://ift.tt/hHWXwrI

Nvidia’s RTX 4090 can achieve over 500 frames per second (fps) in Overwatch 2 at 1440p resolution, a ridiculously fluid level of gameplay – with a notable caveat we’ll come back to later on the monitor front – Team Green itself has told us.

To be precise, Nvidia’s own benchmarking shows the shooter (which was released just yesterday) running at 507 fps on average with the RTX 4090, and this was at 1440p with max graphics settings (the test rig paired the GPU with an Intel Core i9-12900K CPU, by the way).

As for the RTX 4080 16GB, that achieved 368 fps, and the lesser RTX 4080 with 12GB (cough, RTX 4070, ahem) still managed 296 fps.

For last-gen comparisons, the RTX 3080 hit 249 fps, and the RTX 3070 weighed in with 195 fps, with the RTX 3060 achieving 122 fps, all with the same rig and settings of course.

Nvidia recommends the RTX 3060 for those who want to get 144 fps in 1080p (Full HD as opposed to 1440p), and the RTX 3080 Ti for those looking at 360 fps again at 1080p.


Analysis: The advantage of resolution as well as superfast fps

We normally think of Nvidia’s flagships like the RTX 4090 as graphics cards built to tackle high-resolution gaming (4K, or even 8K with the new Lovelace top dog), but of course competitive gamers want to go the other route – not more detail, but more frames is the priority. That provides the smoothest possible gameplay experience.

Seeing 500 frames per second being broken in Overwatch 2 is quite a feat, remembering that this isn’t 1080p resolution either – it’s a step up from that at 1440p, which gives you far better image quality (and sharpness) than Full HD. And as Nvidia points out, going to 1440p can have some advantages over 1080p in terms of pinpoint aiming like a headshot, noting: “Our research found that 1440p 27-inch displays can improve aiming by up to 3% over traditional 1080p 24-inch displays, when aiming at small targets, such as an enemy’s head.”

Add your own seasoning to that claim, naturally. Meanwhile, new 27-inch 1440p G-Sync monitors with a 360Hz refresh rate are in the pipeline from Asus and arriving soon – and as the above benchmarks show, the RTX 4080 16GB will be able to fully drive and exploit that refresh rate (360Hz meaning the panel can display up to 360 fps).

Obviously, don’t forget that you need a high-end monitor with a superfast refresh rate to actually display the staggering amount of fps generated by these GPUs in the presented scenarios for Overwatch 2.

And yes, there is a 500Hz monitor coming from Asus – capable of displaying 500 fps – though we’re not sure when, and at any rate, it’s 1080p and likely to cost an arm and a leg (and possibly another arm). There are 480Hz models supposedly due next year, too, but these kinds of monitors are going to be the province of pro esports gamers who are willing to spend whatever it takes to get even the slightest competitive edge.

As a final note, comparing the relative price of the two RTX 4080 models and performance here, you’re paying a third more for the top-tier (16GB) 4080, but getting a performance boost of a quarter versus the 12GB variant. So, this does make it seem like Nvidia has pushed pricing a little harder with the faster RTX 4080, but obviously we can’t make comparisons like this on the basis of a single game – it’s more an interesting observation than anything.
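
To spell out that back-of-envelope math – using Nvidia’s announced US launch prices of $1,199 for the RTX 4080 16GB and $899 for the 12GB model, plus the Overwatch 2 figures above – a quick sketch:

```cpp
// Rough price/performance comparison of the two RTX 4080 models in Overwatch 2,
// based on Nvidia's announced US launch prices and the frame rates quoted above.
#include <cstdio>

int main() {
    const double price_4080_16gb = 1199.0;  // USD, announced MSRP
    const double price_4080_12gb = 899.0;   // USD, announced MSRP
    const double fps_4080_16gb   = 368.0;   // Overwatch 2, 1440p, max settings
    const double fps_4080_12gb   = 296.0;

    const double price_premium = (price_4080_16gb / price_4080_12gb - 1.0) * 100.0;  // ~33%
    const double perf_gain     = (fps_4080_16gb / fps_4080_12gb - 1.0) * 100.0;      // ~24%

    std::printf("16GB model costs %.0f%% more\n", price_premium);
    std::printf("...and is %.0f%% faster in this one game\n", perf_gain);
    std::printf("fps per dollar: 16GB %.3f vs 12GB %.3f\n",
                fps_4080_16gb / price_4080_16gb, fps_4080_12gb / price_4080_12gb);
    return 0;
}
```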

We’ll need full reviews of both RTX 4080 versions to draw conclusions on this front, naturally, though it’d be no surprise to see the lower-tier being the value champ for overall price/performance ratio.

Via VideoCardz



from TechRadar: computing components news https://ift.tt/DvJcGud
via IFTTT