Friday, 27 January 2023

The Intel Core i9-13900KS Review: Taking Intel's Raptor Lake to 6 GHz

https://ift.tt/FepOtEu

Back at Intel's Innovation 2022 event in September, the company let it be known that it had plans to release a '6 GHz' processor based on its Raptor Lake-S series. Though the announcement didn't come with the same degree of fanfare as the more imminent 13900K/13700K/13600K launch, it put enthusiasts and industry watchers on notice that Intel still had one more, even faster Raptor Lake desktop chip waiting in the wings.

Now, a few months later, Raptor Lake's shining moment has arrived. Intel has launched the Intel Core i9-13900KS, a 24-core (8x Perf + 16x Efficiency) part with turbo clock speeds of up to 6 GHz – a mark that, until very recently, was unattainable without exotic cooling methods such as liquid nitrogen (LN2).

In what is likely to be one of the last of a wide range of Raptor Lake-S SKUs to be announced, Intel has seemingly saved its best for last. The Intel Core i9-13900KS is the faster, unequivocally bigger brother to the Core i9-13900K, pairing turbo clock speeds of up to 6 GHz with a 200 MHz increase to both P-core and E-core base frequencies.

Intel's strategy of delivering halo-level processors in limited supply has become a regular part of its product stack over the last few years. We've previously seen the Core i9-9900KS (Coffee Lake) and i9-12900KS (Alder Lake), which were relative successes in showcasing each Core architecture at its best. The Core i9-13900KS looks to follow this trend, although it arrives at a time when power efficiency and energy costs are widely shared concerns.

Having the best of the best is somewhat advantageous when there's a need for cutting-edge desktop performance, but at what cost? Faster cores require more energy, and more energy generates more heat; 6 GHz for the average consumer is finally here, but is it worth taking the plunge? We aim to find out in our review of the 6 GHz Core i9-13900KS.



from AnandTech https://ift.tt/vKCDtG0
via IFTTT

Tuesday, 24 January 2023

AMD makes another blunder – what on earth is going on?

https://ift.tt/RzZpxN0

You’d be forgiven for thinking something has gone markedly awry at AMD lately, with Team Red seemingly stumbling from one mistake to another in recent times.

The latest in a noticeable catalog of errors from AMD is a mistake on the official website pertaining to the overclocking capabilities of the incoming Ryzen 7000 processors with 3D V-Cache on-board (X3D models).

As highlighted by HXL on Twitter (via VideoCardz), in the specs, AMD marked these X3D chips down as a ‘yes’ under the ‘unlocked for overclocking’ category. However, we’d already been told that these processors definitely aren’t unlocked for full overclocking duties (it will be possible to tweak them somewhat via Precision Boost Overdrive, but you won’t be able to juice them up by pumping in more voltage).

With the error picked up by the media, AMD swiftly removed the entry from the specs of the Zen 4 chips, and it has now vanished. Overall, this is hardly a big thing, you’ll doubtless now be saying, and you’re right – in itself. But this seems to be part of a somewhat disturbing trend…


Analysis: Strange times at AMD

So, yeah, a website error, no biggie; but then it does apply to a crucial area of the spec for these processors (and it’ll probably have briefly got hopes up for fully-fledged overclocking with some poor would-be buyers). Also in recent times, AMD made another mistake with Ryzen 7000X3D processor listings, providing a release date on the official web page (February 14) which turned out to be an error.

Naturally, these are relatively small mistakes, but a couple of website mess-ups in quick succession don't look very professional. You might expect this kind of thing from a lesser-known third-party graphics card manufacturing partner, perhaps, but not from AMD itself. And what’s more worrying is that this is kind of reflective of some bigger blunders AMD has made lately.

We’re talking about the early controversy – regarding wonky clock speeds, and unused code – around RDNA 3 graphics cards after their initial launch in December 2022, followed by a Windows 11 bug that randomly froze up PCs with AMD CPUs as 2023 began (which in turn came along after issues with the Windows 11 thread scheduler slowing down high-end Ryzen CPUs).

Then we witnessed a serious flaw in the firmware of the Ryzen 7600X that could cause boot failure (albeit there were others to blame in that saga, too – and admittedly it was beta firmware, with a fix applied swiftly).

Finally, the big thorn in AMD’s side since the RDNA 3 launch has been the revelation of the flagship RX 7900 XTX experiencing issues with temperature hotspots and throttling. Team Red eventually revealed what third-party investigations had suggested – that there was a problem with the vapor chamber (the cooling system) with some of these graphics cards. Specifically, not enough water had been placed in the closed system of the chamber when it was manufactured, meaning its cooling capabilities were lessened.

While AMD acted to replace any affected graphics cards swiftly, as you’d hope, this was quite a mistake to make, really. It’s absolutely one of the biggest GPU gaffes we can recall in recent history, and one that makes the Nvidia RTX 4090 problems with melting power adapters look small-scale in comparison (albeit the latter issue was more severe by nature).

What’s worse for AMD is that it made a huge song and dance before the RDNA 3 launch, with snide references to power connectors indirectly throwing shade at Nvidia, only for Team Red to stumble haphazardly into its own next-gen GPU flaw: a big cooling error with a much more wide-ranging impact. Not a good look, really.

With all the mistakes seemingly coming thick and fast, whether small spec errors or plans for cooling solutions going seriously awry with some flagship graphics cards, we think it’s fair game to ponder – just what on earth is going on at AMD right now?

Whatever the case is behind the scenes, some deep breaths need to be taken, and eyes fixed more firmly on the proverbial ball, because what’s been happening over the past couple of months on multiple fronts at Team Red needs to stop, quite simply.



from TechRadar: computing components news https://ift.tt/AZpeqh9
via IFTTT

Monday, 23 January 2023

ASRock DeskMeet B660 Review: An Affordable NUC Extreme?

https://ift.tt/OefNwih

ASRock was one of the earliest vendors to cater to the small-form factor (SFF) PC market with a host of custom-sized motherboards based on notebook platforms. Despite missing the NUC bus for the most part, they have been quite committed to the 5x5 mini-STX form-factor introduced in 2015. ASRock's DeskMini lineup is based on mSTX boards and has both Intel and AMD options for the end-user. While allowing for installation of socketed processors, the form-factor could not support a discrete GPU slot. Around 2018, Intel started making a push towards equipping some of their NUC models with user-replaceable discrete GPUs. In order to gain some market share in that segment, ASRock introduced their DeskMeet product line early last year with support for socketed processors and a PCIe x16 slot for installing add-in cards. Read on for a detailed analysis of the features, performance, and value proposition of the DeskMeet B660 - an 8L SFF PC based on the Intel B660 chipset, capable of accommodating Alder Lake or Raptor Lake CPUs.



from AnandTech https://ift.tt/xqmTveC
via IFTTT

Tuesday, 17 January 2023

TSMC's 3nm Journey: Slow Ramp, Huge Investments, Big Future

https://ift.tt/R4X0jkK

Last week, TSMC issued their Q4 and full-year 2022 earnings reports for the company. Besides confirming that TSMC was closing out a very busy, very profitable year for the world's top chip fab – booking almost $34 billion in net income for the year – the end-of-year report from the company has also given us a fresh update on the state of TSMC's various fab projects.

The big news coming out of TSMC for Q4'22 is that TSMC has initiated high volume manufacturing of chips on its N3 (3nm-class) fabrication technology. The ramp of this node will be rather slow initially due to high design costs and the complexities of the first N3B implementation of the node, so the world's largest foundry does not expect it to be a significant contributor to its revenue in 2023. Yet, the firm will invest tens of billions of dollars in expanding its N3-capable manufacturing capacity as eventually N3 is expected to become a popular long-lasting family of production nodes for TSMC.

Slow Ramp Initially

"Our N3 has successfully entered volume production in late fourth quarter last year as planned, with good yield," said C. C. Wei, chief executive of TSMC. "We expect a smooth ramp in 2023 driven by both HPC and smartphone applications. As our customers' demand for N3 exceeds our ability to supply, we expect the N3 to be fully utilized in 2023."

Keeping in mind that TSMC's capital expenditures in 2021 and 2022 were focused mostly on expanding its N5 (5nm-class) manufacturing capacities, it is not surprising that the company's N3-capable capacity is modest. Meanwhile, TSMC does not expect N3 to account for any sizable share of its revenue before Q3.

In fact, the No. 1 foundry expects N3 nodes (which include both baseline N3 and relaxed N3E that is set to enter HVM in the second half of 2023) to account for maybe 4% - 6% of the company's wafer revenue in 2023. And yet this would exceed the contribution of N5 in its first two quarters of HVM in 2020 (which was about $3.5 billion).

"We expect [sizable N3 revenue contribution] to start in third quarter 2023 and N3 will contribute mid-single-digit percentage of our total wafer revenue in 2023," said Wei. "We expect the N3 revenue in 2023 to be higher than N5 revenue in its first year in 2020."

Many analysts believe that baseline N3 (also known as N3B) will be used exclusively, or almost exclusively, by Apple – TSMC's largest customer, and the one willing to adopt leading-edge nodes ahead of all other companies despite high initial costs. If this assumption is correct and Apple is indeed the primary customer for baseline N3, then it is noteworthy that TSMC mentions both smartphone and HPC (a vague term that TSMC uses to describe virtually all ASICs, CPUs, GPUs, SoCs, and FPGAs not aimed at automotive, communications, and smartphones) applications in conjunction with N3 in 2023. 

N3E Coming in the Second Half

One of the reasons why many companies are waiting for TSMC's relaxed N3E technology (which is entering HVM in the second half of 2023, according to TSMC) is its improved performance and power characteristics, along with aggressive logic scaling over N5. Another is that the process will offer lower costs, albeit with no SRAM scaling compared to N5, according to analysts from China Renaissance.

"N3E, with six fewer EUV layers than the baseline N3, promises simpler process complexity, intrinsic cost and manufacturing cycle time, albeit with less density gain," Szeho Ng, an analyst with China Renaissance, wrote in a note to clients this week. 

Advertised PPA Improvements of New Process Technologies
(Data announced during conference calls, events, press briefings and press releases)

TSMC                     N3 vs N5                   N3E vs N5
Power                    -25-30%                    -34%
Performance              +10-15%                    +18%
Logic Area Reduction     0.58x (-42%)               0.625x (-37.5%)
Logic Density            1.7x                       1.6x
SRAM Cell Size           0.0199 µm² (-5% vs N5)     0.021 µm² (same as N5)
Volume Manufacturing     Late 2022                  H2 2023
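The three logic figures in the table above are different views of the same number: an area scaling factor f versus N5 corresponds to an area reduction of (1 − f) and a density gain of 1/f. A quick sketch of the arithmetic:

```python
# Logic scaling: area factor f  ->  area reduction (1 - f), density gain 1 / f
for node, area_factor in (("N3", 0.58), ("N3E", 0.625)):
    print(f"{node}: {area_factor}x area -> "
          f"{1 - area_factor:.1%} area reduction, "
          f"{1 / area_factor:.2f}x density")
# N3: 0.58x area -> 42.0% area reduction, 1.72x density
# N3E: 0.625x area -> 37.5% area reduction, 1.60x density
```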

Ng says that TSMC's original N3 features up to 25 EUV layers and can apply multi-patterning for some of them for additional density. By contrast, N3E supports up to 19 EUV layers and only uses single-patterning EUV, which reduces complexity, but also means lower density.

"Clients' interest in the optimized N3E (post the baseline N3B ramp-up, which is largely limited to Apple) is high, embracing compute-intensive applications in HPC (AMD, Intel), mobile (Qualcomm, Mediatek) and ASICs (Broadcom, Marvell)," wrote Ho.

It looks like N3E will indeed be TSMC's main 3nm-class workhorse before N3P, N3S, and N3X arrive later on.

Tens of Billions on N3

While TSMC's 3nm-class nodes are going to earn the company a little more than $4 billion in 2023, the company will spend tens of billions of dollars expanding its fab capacity to produce chips on various N3 nodes. This year the company's capital expenditures are guided to be between $32 billion and $36 billion. 70% of that sum will be spent on advanced process technologies (N7 and below), which includes N3-capable capacity in Taiwan as well as equipment for Fab 21 in Arizona (N4, N5 nodes). Meanwhile, 20% will be used for fabs producing chips on specialty technologies (which essentially means a variety of 28nm-class processes), and 10% will be spent on things like advanced packaging and mask production.
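The spending figure in the next paragraph follows directly from that split; a minimal sketch of the arithmetic using the guided range:

```python
# Rough breakdown of TSMC's guided 2023 CapEx by the stated percentages.
capex_low, capex_high = 32e9, 36e9
buckets = {
    "advanced nodes (N7 and below)": 0.70,
    "specialty technologies": 0.20,
    "advanced packaging, masks, etc.": 0.10,
}
for name, share in buckets.items():
    print(f"{name}: ${share * capex_low / 1e9:.1f}B - ${share * capex_high / 1e9:.1f}B")
# advanced nodes (N7 and below): $22.4B - $25.2B
# specialty technologies: $6.4B - $7.2B
# advanced packaging, masks, etc.: $3.2B - $3.6B
```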

Spending at least $22 billion on N3 and N5 capacity indicates that TSMC is confident in demand for these nodes. And there is a good reason for that: the N3 family of process technologies is set to be TSMC's last FinFET-based family of production nodes for complex high-performance chips. The company's N2 (2nm-class) manufacturing process will rely on nanosheet-based gate-all-around field-effect transistors (GAAFETs). In fact, analyst Szeho Ng from China Renaissance believes that a significant share of this year's CapEx earmarked for advanced technologies will be spent on N3 capacity, laying the groundwork for the roll-out of N3E, N3P, N3X, and N3S. And since N3-capable fabs can also produce chips on N5 processes, TSMC will be able to redeploy this capacity wherever there is significant demand for N5-based chips as well.

"TSMC guided 2023 CapEx at $32-36bn (2022: US$36.3bn), with its expansion focused on N3 in Fab 18 (Tainan)," the analyst wrote in a note to clients. 

Since TSMC's N2 process technology will only ramp starting in 2026, N3 will indeed be a long-lasting node for the company. Furthermore, since it will be the last FinFET-based node for advanced chips, it will be used for many years to come, as not all applications will need GAAFETs.



from AnandTech https://ift.tt/MtJ3hlQ
via IFTTT

Friday, 13 January 2023

Intel Lunar Lake CPUs could revolutionize ultrathin laptops

https://ift.tt/QiGM4tq

Intel’s Lunar Lake processors, which are still some way down the line, will be targeted at super-slim laptops, hopefully ensuring that these lean machines have some serious pep despite their svelte nature.

This comes from Ian Cutress on Twitter (via VideoCardz), who got word from Intel’s VP & GM of Client Computing, Michelle Johnston Holthaus, that Lunar Lake will be a totally new design from the ground up, with an all-new architecture built with performance per watt in mind.


In other words, efficiency will be king, and keeping wattage low will obviously make Lunar Lake chips an ideal option for laptops, while still delivering plenty of grunt (relatively speaking). These processors are expected to arrive in late 2024 or 2025, we’ve heard on the grapevine.

Cutress notes that Intel will apparently have more to share on the subject later this month, when the firm’s financial results come out on January 26. So stay tuned, because it won’t be long before we hear more about exactly what Lunar Lake processors will bring to the table.


Analysis: Intel’s focus on efficiency suits mobile – but what about beefy desktop CPUs?

A focus on efficiency from Intel is not a surprise, as it’s something we’ve already learned is a key factor for next-gen Meteor Lake.

Indeed, with current Raptor Lake processors, Intel has seriously upped the number of efficiency cores throughout the 13th-gen range, and it’s expected to push harder on this front with the following generation. Meteor Lake rumors point to even more efficiency cores on board – these are low-power cores which sip juice, but can really boost multi-core performance, particularly in large numbers. Furthermore, Meteor Lake will introduce a whole new architecture for efficiency cores on top of that, which should lead to some big power-efficiency benefits.

Lunar Lake could take bigger strides forward still, going by what’s been shared here, as the wording is pretty strong in terms of these processors having been built with efficiency for mobile devices in mind. Could that even mean Lunar Lake will be laptop CPUs only, perhaps? Cutress doesn’t comment on that, but we can theorize it’s a possibility.

After all, next-gen Meteor Lake looks to be firmly slanting towards mobile performance, with rumors indicating that due to a big push for more efficiency cores, the maximum performance core count might be restricted to 6 for the 14th generation. (Remember, the higher-end Raptor Lake Core i9 and i7 CPUs have 8 performance cores, and the same was true of Alder Lake – so lopping a pair of these cores off would be a disappointment on the desktop front).

With Meteor Lake and Lunar Lake apparently concentrating on the efficiency side of the equation, it’s starting to sound like Arrow Lake – which is supposed to come between these two as the 15th-gen family – might be the only hope in the more near-term future for those wanting a new heavyweight desktop (Core i9) processor.

As always, treat this as the speculation it is, and hopefully we’ll find out more about exactly how Lunar Lake will be pitched, laptop- or desktop-wise, when Intel wheels out its latest (full year) fiscal results, and associated commentary, later in January.



from TechRadar: computing components news https://ift.tt/2uwKjVL
via IFTTT

Ryzen CPU firmware bug is fixed, but AMD has bigger problems

https://ift.tt/4YVIOnX

AMD has issued a new update to resolve a firmware bug that caused performance issues for the new Ryzen 5 7600X CPU. The dodgy firmware - which we recently reported on - was causing some 7600X chips to underperform or simply not boot at all, with the new update being pushed out in record time by AMD to fix the issue.

To keep things succinct, the previous AGESA ComboAM5PI 1.0.0.4 firmware was only affecting 7600X CPUs with two CCDs (also called chiplets). The 7600X doesn’t actually rely on AMD’s new dual chiplet design employed by more powerful CPUs in the Ryzen 7000 range such as the Ryzen 9 7950X, but we can reasonably assume that some 7600X units are in fact rejected dual-chiplet silicon that could be shipped with one CCD disabled.

The iffy firmware was reportedly trying to boot the CPU off this disabled chiplet, resulting in some 7600X processors refusing to POST at all. We commend AMD’s speedy work in sorting out this issue, as it will have proved frustrating for the many users who bought what is arguably AMD’s best new processor for budget users. It’s worth noting that this debacle wasn’t entirely AMD’s fault, either, since some motherboard manufacturers failed to correctly label the previous firmware update as a beta build.

However, this firmware trouble is just the latest in a series of problems for AMD. The Ryzen 7000 launch was swiftly overshadowed by the release of Intel’s incredible 13th-gen Core CPUs, and while Nvidia has also made continual missteps of its own over the past year, AMD has seen no end of struggles with its new CPU and GPU lines.


Team Red’s big problems

Sticking with the topic of CPUs for now, the past few months have seen multiple issues with Windows 11 on PCs running Ryzen CPUs. The first was due to the OS’s thread scheduler slowing down high-end Ryzen processors (a problem we actually already saw back in 2021).

More recently, the release of three new non-X CPU variants has us worried that AMD is going to repeat past mistakes and overcrowd the market with an unnecessary abundance of different CPUs and GPUs. The close pricing of the new non-X chips to the existing X-series CPUs only reinforces this notion.

Over in the graphics card arena, a German repair service recently reported a worryingly high number of RX 6000-series cards showing up with cracked GPU chips, possibly tied to a new driver. Even worse, AMD had to admit to a serious cooling issue with its new flagship RX 7900 XTX where a small percentage of the cards were shipped with insufficient fluid inside the vapor chamber.

To Team Red’s credit, they have committed to replacing the affected GPUs as quickly as possible, since the defect can cause heat spikes that can damage the card. AMD has certainly reacted faster than Nvidia did to the RTX 4090’s cable-melting controversy. But these issues with AMD products are stacking up and affecting more and more users, so we hope Team Red can get things under control as fast as possible - since Nvidia’s floundering right now means it’s the perfect time for AMD to capitalize on the competition’s failures.



from TechRadar: computing components news https://ift.tt/tWnJqzU
via IFTTT

Thursday, 12 January 2023

Intel Unveils Core i9-13900KS: Raptor Lake Spreads Its Wings to 6.0 GHz

https://ift.tt/2n4PZ7u

Initially teased by CEO Pat Gelsinger during Intel's Innovation 2022 opening keynote, Intel's highly anticipated 6 GHz out-of-the-box processor, the Core i9-13900KS, has now been unveiled. The Core i9-13900KS has 24 cores (8P+16E) in Intel's hybrid design of performance and efficiency cores, sharing the fundamental specifications of the Core i9-13900K but with an impressive P-core turbo of up to 6 GHz.

Based on Intel's Raptor Lake-S desktop silicon, the Core i9-13900KS is, Intel claims, the first desktop processor to reach 6 GHz out of the box without overclocking. Available from today, the chip has a slightly higher base TDP of 150 W (versus 125 W on the 13900K), 36 MB of Intel's L3 smart cache, and is pre-binned through a special selection process to guarantee that 6 GHz frequency out of the box, without the need to overclock manually.



from AnandTech https://ift.tt/voZ1DCe
via IFTTT

Gamers take note: AMD Ryzen 7000X3D CPUs could go on sale February 14

https://ift.tt/UFP3hVu

AMD’s Ryzen 7000X3D processors are set to launch in February, the company told us at CES 2023, but now we have a purported release date for the supercharged chips that’ll be of great interest to gamers – and it’s February 14.

OC3D spotted that this release date was shared via the official web page for the Ryzen 7 7800X3D on the AMD site, but looking at the page now, that information has been removed (not before it was screen grabbed).

So, we can assume this was an accidental leak by a less-than-careful AMD employee, though we do need to be a bit careful about assuming the screenshot is genuine, as ever. All we have is an image here, and as we know, pics can be manipulated.

However, OC3D seemingly took the screen grab itself, as the image source is credited to AMD and not a third-party, so it seems unlikely to be a fabrication.

Come February 14, then, we’ll see not just the Ryzen 7 7800X3D, but also the 7900X3D and 7950X3D. All these chips will use 3D V-Cache to pep up gaming performance considerably, as we saw with the 5800X3D of the Zen 3 family.


Analysis: Say it with flowers, or chocolates… or 3D V-Cache?

So, the launch date will be Valentine’s Day, but will we love AMD’s pricing? That remains to be seen. On the one hand, late last year we witnessed price cuts on the still fresh vanilla Ryzen 7000 CPUs, after sales weren’t great to begin with. However, that was partly a reflection of the total cost of upgrade, taking into account the necessary AM5 motherboard (with no wallet-friendly options) and need for DDR5 RAM (unlike Raptor Lake, you can’t use DDR4, and DDR5 is still fairly pricey).

Now that more affordable AM5 motherboards are around (and even cheaper ones shouldn’t be far off), and gamers will likely be keen to get a piece of the Zen 4 3D V-Cache action, we can foresee AMD pushing a bit harder with the pricing here. Time will tell, but at any rate, the Ryzen 9 X3D spins are obviously going to be expensive, anyway.

Aside from the potential hole they might blow in your wallet, the other niggling worry about the higher-tier Ryzen 9 X3D models is how Windows might deal with their new design. In a nutshell, these CPUs have two CCDs (chiplets), but only one has the 3D V-Cache on top – the other runs at a faster boost speed instead. Now, some games will benefit from more cache, and some will find the higher clocks more of a boon, so any given game needs to be marshaled onto the CCD that’s best for it.

Could that introduce issues and performance teething problems? Microsoft and AMD are working closely to ensure this doesn’t happen in Windows gaming, but of course, we’ll only know when we get these processors in for review. Not long now, and we can’t wait to see how they’ll perform, and how much of a threat the new X3D CPUs will be to Intel’s powerful Raptor Lake line-up.



from TechRadar: computing components news https://ift.tt/3OC4HAl
via IFTTT

Tuesday, 10 January 2023

Nvidia RTX 4090 laptop GPU price revealed – and it’s seriously expensive

https://ift.tt/WPe4wSG

Nvidia’s RTX 4090 laptop GPU was revealed at CES 2023, but if you want this power-packed graphics card, how much will it cost you? The unsurprising answer is a small fortune, going by the price of an upgrade to the RTX 4090 on an XMG gaming laptop.

VideoCardz spotted that in a YouTube video from Jarrod’sTech, pricing is listed for the various RTX 40 Series graphics card options that buyers can choose from when purchasing XMG’s Neo 16, an incoming gaming portable packing Intel’s Core i9-13900HX Raptor Lake CPU (and a whole load of other high-end goodies).

The base model of the Neo 16 comes with the RTX 4060, retailing at €2,199 (around $2,360, £1,940, AU$3,420). If you want to upgrade that to the RTX 4090, the cost is a whopping €1,687 (around $1,810, £1,490, AU$2,630).

XMG’s prices for other Nvidia Lovelace GPUs come to a relatively palatable €375 (around $400, £330, AU$580) for the RTX 4070, but then the RTX 4080 is pitched at €1,050 (around $1,130, £930, AU$1,640).

The Neo 16, and other RTX 40 Series laptops, will be out in February.


Analysis: Pushing the boundaries of pricing (again)

Nvidia’s newly revealed GPUs are pricey, you say? Well, who’d have believed it…

Of course, the fact that the RTX 4090 is expensive in its laptop incarnation is hardly a shock, and looking again at the desktop pricing, the RTX 4080 being a big financial ask is similarly predictable.

If we consider the total cost of the RTX 4090 going by XMG’s pricing, the upgrade alone is almost €1,700 as mentioned, but for the total theoretical price we must also remember that you’re already paying for the RTX 4060 in the entry-level model’s price tag – so we can bung that on top.

That means the RTX 4090 laptop GPU must come close to two grand in Euros, and that’s about the asking price for a desktop RTX 4090 (at major German retailer MindFactory, at the time of writing). So, one way to look at this is that the laptop version of this GPU is just as expensive as the desktop incarnation’s eye-watering price tag.
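To spell out that sum: the €1,687 upgrade fee comes on top of a base configuration that already includes an RTX 4060, so the implied standalone cost of the 4090 module is the upgrade fee plus whatever the 4060 is worth within the €2,199 base price. XMG doesn’t break that value out, so the figure below is purely an assumption for illustration:

```python
# Implied cost of the RTX 4090 laptop GPU in XMG's Neo 16 configurator.
neo16_base_price_eur = 2199          # base model, includes an RTX 4060
upgrade_to_4090_eur = 1687           # configurator surcharge for the RTX 4090
assumed_4060_value_eur = 300         # assumption; not published by XMG

implied_4090_cost_eur = upgrade_to_4090_eur + assumed_4060_value_eur
print(f"Implied RTX 4090 laptop GPU cost: ~€{implied_4090_cost_eur}")
# ~€1987 - in the same ballpark as a desktop RTX 4090 at retail
```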

And of course, the laptop take on the Lovelace flagship isn’t nearly as performant as the desktop card – it draws a lot less wattage, of course, at 150W TGP (plus a 25W dynamic boost) – but what you’re paying for here is the ability to fit as much power as possible into a relatively small case (a laptop chassis).

Still, the question remains – do you really want to fork out that much for a notebook graphics card? One that has the same CUDA core count as the desktop RTX 4080, and likely similar performance to that card in a best-case scenario (in the main, it’s probably a bit slower). You can bet there are folks out there already reaching for their wallets, though…



from TechRadar: computing components news https://ift.tt/FMWGqfg
via IFTTT

Monday, 9 January 2023

The AMD Ryzen 9 7900, Ryzen 7 7700, and Ryzen 5 7600 Review: Zen 4 Efficiency at 65 Watts

https://ift.tt/WPe4wSG

In Q3 of last year, AMD released the first CPUs based on its highly anticipated Zen 4 architecture. Not only did the Ryzen 7000 parts raise the bar in terms of performance compared with the previous Ryzen 5000 series, but they also ushered in AMD's latest platform, AM5. Some of the most significant benefits of Zen 4 and the AM5 platform include support for PCIe 5.0, DDR5 memory, and access to the latest and greatest controller sets. 

While the competition at the higher end of the x86 processor market is a metaphorical firefight with heavy weaponry, AMD has struggled to offer users on tighter budgets anything to sink their teeth into. It's clear Zen 4 is a powerful and highly efficient architecture, but with the added cost of DDR5, finding all of the components to fit under tighter budget constraints with AM5 isn't as easy as it once was on AM4.

AMD has launched three new processors designed to give users on a budget their money's worth, with performance that makes them attractive to users looking for Zen 4 hardware without the hefty financial outlay. The AMD Ryzen 9 7900, Ryzen 7 7700, and Ryzen 5 7600 processors all feature the Zen 4 microarchitecture and come with a TDP of just 65 W, which makes them viable for all kinds of users, such as enthusiasts looking for a more affordable entry point onto the AM5 platform.

Of particular interest is AMD's new budget offering for the Ryzen 7000 series: the Ryzen 5 7600, which offers six cores and twelve threads for entry-level builders who want a system with all of the features of AM5 and the Ryzen 7000 family, but at a much more affordable price point. We are looking at all three of AMD's new 65 W TDP Ryzen 7000 processors to see how they stack up against the competition, and whether AMD's lower-powered, lower-priced non-X variants offer real value for consumers. We also aim to see whether AMD's 65 W TDP implementation on TSMC's 5 nm process node can deliver the performance-per-watt efficiency that AMD claims is the best on the market.



from AnandTech https://ift.tt/PDv2q8n
via IFTTT

Friday, 6 January 2023

Microsoft and AMD want to make Ryzen 7000 X3D CPUs run games faster in Windows 11

https://ift.tt/6xT5MNs

AMD and Microsoft are collaborating to ensure that the new design of the high-end Ryzen 7000 X3D processors which Team Red just revealed at CES 2023 works well enough in Windows 11, and is speedy enough for gaming.

If you recall, AMD unveiled a trio of new X3D models at CES: the Ryzen 7 7800X3D, plus the Ryzen 9 7900X3D and 7950X3D (with 8, 12, and 16 cores respectively).

In the case of the latter two Ryzen 9 models, they have two CCDs, meaning two separate chiplets that carry the processor cores, the twist being that only one of those CCDs actually has 3D V-Cache memory atop it. The idea is that this CCD can be used for gaming, or other apps where that cache will be advantageous, while the other CCD – the ‘bare’ chiplet with nothing on top – can be clocked higher, offering advantages to tasks where the cache isn’t so impactful (but speedier boost clocks will be).

This fresh design complication is something that needs to be catered for on the software front, then, and that’s exactly what AMD is doing.

As Tom’s Hardware reports, AMD is working with Microsoft to implement optimizations for Windows in conjunction with a new AMD chipset driver, with the aim of being able to determine whether any given game would benefit from the cache, and then ensure that it is running on the V-Cache-topped CCD. And vice versa for those games which will be better served by the higher boost speed of the other CCD.
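As a concrete illustration of what that steering amounts to at the OS level, here is a minimal sketch using CPU affinity. To be clear, this is not AMD’s actual implementation (which lives in the chipset driver and Windows itself); the core numbering for the two CCDs and the per-title list are assumptions for the example:

```python
# Sketch: steer a game onto the CCD assumed to suit it best by restricting its
# CPU affinity. Assumes a 7950X3D-style layout where logical CPUs 0-15 belong
# to the V-Cache CCD and 16-31 to the higher-clocking CCD (an assumption).
import psutil

VCACHE_CCD = list(range(0, 16))       # assumed V-Cache CCD (with SMT)
FREQUENCY_CCD = list(range(16, 32))   # assumed high-clock CCD

# Hypothetical per-title profile: games assumed to benefit from the extra cache.
CACHE_SENSITIVE_TITLES = {"simulation_game.exe", "strategy_game.exe"}

def steer_game(process_name: str) -> None:
    """Pin all processes with this name to the CCD their profile favors."""
    target = VCACHE_CCD if process_name.lower() in CACHE_SENSITIVE_TITLES else FREQUENCY_CCD
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == process_name.lower():
            try:
                proc.cpu_affinity(target)   # restrict scheduling to one CCD
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass                        # protected process, or it exited

steer_game("simulation_game.exe")   # would land on the V-Cache CCD
steer_game("shooter_game.exe")      # would land on the high-clock CCD
```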

If you were wondering if the latter ‘bare’ chiplet can still access the V-Cache on the other chiplet, well, AMD tells us that yes, it can – but the catch is that this isn’t a fast process or at all optimal (but may still be useful in rare cases).

So, there’s quite a balancing act going on under the hood for these top-end Ryzen 7000 X3D chips.

One other interesting point raised by Tom’s is that while the new Ryzen X3D models will allow for Precision Boost Overdrive and the Curve Optimizer for juicing up performance – which wasn’t possible with the original 3D V-Cache CPU, the 5800X3D – manual overclocking (directly ramping up voltage) still won’t be allowed with the Zen 4 models.


Analysis: Getting things right could take a bit of time

There are already some worries floating around about these Windows optimizations for the Ryzen 9 7900X3D and 7950X3D, as you might imagine. What if AMD and Microsoft struggle to get them running smoothly? Should we expect teething problems at the launch of these X3D chips?

Those are questions which can’t be answered without the aid of a crystal ball, of course, but if there are any stumbling blocks in the initial implementation with some games, hopefully they’ll be ironed out soon enough.

And we should also bear in mind that Microsoft has had to perform tuning for new processor designs from Intel when the chip giant brought in its hybrid tech with Alder Lake. Working to ensure the different types of performance and efficiency cores were used correctly and ran to the best of their respective potential in Windows is a task Microsoft has previously tackled, and one that took a bit of time to get everything running really smoothly.

For AMD, the proof will be in the pudding, and how we see these top-end Ryzen 7000 X3D processors actually perform across a bunch of games when they’re released in February. We can’t wait – and we’re also very keen to learn the pricing for these V-Cache models, as that’s something AMD hasn’t shared yet (which is a little worrying, but we’re not expecting the 12 and 16-core models to be remotely cheap, of course).

Via Wccftech



from TechRadar: computing components news https://ift.tt/3jSyxZr
via IFTTT

A Lighter Touch: Exploring CPU Power Scaling On Core i9-13900K and Ryzen 9 7950X

https://ift.tt/4f3VOwS

One of the biggest running gags on social media and Reddit is how hot and power hungry CPUs have become over the years. Whereas at one time flagship x86 CPUs didn't even require a heatsink, they can now saturate whole radiators. Thankfully, it's not quite to the levels of a nuclear reactor, as the memes go – but as the kids say these days, it's also not a nothingburger. Designing for higher TDPs and greater power consumption has allowed chipmakers to keep pushing the envelope in terms of performance – something that's no easy feat in a post-Dennard world – but it's certainly created some new headaches regarding power consumption and heat in the process. Something that, for better or worse, the latest flagship chips from both AMD and Intel exemplify.

But despite these general trends, this doesn't mean that a high performance desktop CPU also needs to be a power hog. In our review of AMD's Ryzen 9 7950X, our testing showed that even capped at a (by today's standards) pedestrian 65 Watts, the 7950X could deliver a significant amount of performance at less than half its normal power consumption.

If you'll pardon the pun, power efficiency has become a hot talking point these days, as enthusiasts look to save on their energy bills (especially in Europe) while still enjoying fast CPU performance, and look for ways to take advantage of the full silicon capabilities of AMD's Raphael and Intel's Raptor Lake-S platforms beyond stuffing the chips with as many joules as possible. All the while, the small form factor market remains a steadfast outpost for high efficiency chips, where cooler chips are critical for building smaller and more compact systems that can forego the need for large cooling systems.

All of this is to say that while it's great to see the envelope pushed in terms of peak performance, the typical focus on how an unlocked chip scales when overclocking (pushing CPU frequency and VCore voltage) is just one way to look at overall CPU performance. So today we are going to go the other way and take a look at overall energy efficiency – to see what happens when we aim for the sweet spot on the voltage/frequency curve. To that end, we're investigating how the Intel Core i9-13900K and AMD Ryzen 9 7950X perform at different power levels, and what kind of benefits power scaling can provide compared to stock settings.
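As a sketch of how those results can be distilled, the snippet below computes a crude performance-per-watt figure at each enforced power limit. The scores are placeholder values to show the shape of the analysis, not measurements from this article:

```python
# Placeholder power-scaling summary: benchmark score per enforced package
# power limit, plus a crude efficiency figure (points per watt).
results_by_power_limit = {   # watts -> multi-threaded score (placeholders)
    65: 24000,
    95: 29000,
    125: 33000,
    170: 36000,
    253: 38000,              # roughly stock limits for these flagship chips
}

stock_limit = max(results_by_power_limit)
stock_score = results_by_power_limit[stock_limit]

for limit, score in sorted(results_by_power_limit.items()):
    print(f"{limit:>4} W: {score / stock_score:6.1%} of stock performance, "
          f"{score / limit:6.1f} points per watt")
```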



from AnandTech https://ift.tt/QhEUizt
via IFTTT

Thursday, 5 January 2023

CES 2023: AMD Instinct MI300 Data Center APU Silicon In Hand - 146B Transistors, Shipping H2’23

https://ift.tt/dgaBqTw

Alongside AMD’s widely expected client product announcements this evening for desktop CPUs, mobile CPUs, and mobile GPUs, AMD’s CEO Dr. Lisa Su also had a surprise up her sleeve for the large crowd gathered for her prime CES keynote: a sneak peek at MI300, AMD’s next-generation data center APU that is currently under development. With silicon literally in hand, the quick teaser laid out the basic specifications of the part, along with reiterating AMD’s intention of taking leadership in the HPC market.

First unveiled by AMD during their 2022 Financial Analyst Day back in June of 2022, MI300 is AMD’s first shot at building a true data center/HPC-class APU, combining the best of AMD’s CPU and GPU technologies. As was laid out at the time, MI300 would be a disaggregated design, using multiple chiplets built on TSMC’s 5nm process, and using 3D die stacking to place them over a base die, all of which in turn will be paired with on-package HBM memory to maximize AMD’s available memory bandwidth.

AMD for its part is no stranger to combining the abilities of its CPUs and GPUs – one only needs to look at its laptop CPUs/APUs – but to date it has never done so on a large scale. AMD’s current best-in-class HPC approach is to combine the discrete AMD Instinct MI250X (a GPU-only product) with AMD’s EPYC CPUs, which is exactly what’s been done for the Frontier supercomputer and other HPC projects. MI300, in turn, is the next step in the process, bringing the two processor types together on a single package – and not just wiring them up in an MCM fashion, but going the full chiplet route with TSV-stacked dies to enable extremely high bandwidth connections between the various parts.

The key point of tonight’s reveal was to show off the MI300 silicon, which has reached initial production and is now in AMD’s labs for bring-up. AMD had previously promised a 2023 launch for the MI300, and having the silicon back from the fabs and assembled is a strong sign that AMD is on track to make that delivery date.

Along with a chance to see the titanic chip in person (or at least, over a video stream), the brief teaser from Dr. Su also offered a few new tantalizing details about the hardware. At 146 billion transistors, MI300 is the biggest and most complex chip AMD has ever built – and easily so. Though we can only compare it to current chip designs, this is significantly more transistors than either Intel’s 100B transistor Ponte Vecchio (Data Center GPU Max) or NVIDIA’s 80B transistor GH100 GPU. Though in fairness to both, AMD is stuffing both a GPU and a CPU into this part.

The CPU side of the MI300 has been confirmed to use 24 of AMD’s Zen 4 CPU cores, finally giving us a basic idea of what to expect with regards to CPU throughput. Meanwhile the GPU side is (still) using an undisclosed number of CDNA 3 architecture CUs. All of this, in turn, is paired with 128GB of HBM3 memory.

According to AMD, MI300 is comprised of 9 5nm chiplets sitting on top of 4 6nm chiplets. The 5nm chiplets are undoubtedly the compute logic chiplets – i.e. the CPU and GPU chiplets – though a precise breakdown of what’s what is not available. A reasonable guess at this point would be 3 CPU chiplets (8 Zen 4 cores each) paired with possibly 6 GPU chiplets, though there are still some cache chiplets unaccounted for. Meanwhile, taking AMD’s “on top of” statement literally, the 6nm chiplets would then be the base dies all of this sits on top of. Based on AMD’s renders, it looks like there are 8 HBM3 memory stacks in play, which implies around 5TB/second of memory bandwidth, if not more.
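For reference on that bandwidth estimate, here is the back-of-the-envelope math. Each HBM3 stack has a 1024-bit interface; the pin speed AMD will actually ship at isn’t disclosed, so two plausible rates are shown as assumptions:

```python
# Aggregate HBM3 bandwidth for the 8 stacks visible in AMD's renders.
stacks = 8
bus_width_bits = 1024                 # per HBM3 stack

for pin_speed_gbps in (5.0, 6.4):     # assumed per-pin signalling rates
    per_stack_gb_s = bus_width_bits * pin_speed_gbps / 8
    total_tb_s = stacks * per_stack_gb_s / 1000
    print(f"{pin_speed_gbps} Gb/s per pin -> {per_stack_gb_s:.0f} GB/s per stack, "
          f"~{total_tb_s:.1f} TB/s total")
# 5.0 Gb/s -> ~5.1 TB/s total; 6.4 Gb/s -> ~6.6 TB/s total
```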

With regards to performance expectations, AMD isn’t saying anything new at this time. Previous claims were for a >5x improvement in AI performance-per-watt versus the MI250X, and an overall >8x improvement in AI training performance, and this is still what AMD is claiming as of CES.

The key advantage of AMD’s design, besides the operational simplicity of putting CPU cores and GPU cores on the same design, is that it will allow both processor types to share a high-speed, low-latency unified memory space. This would make it fast and easy to pass data between the CPU and GPU cores, letting each handle the aspects of computing that they do best. As well, it would significantly simplify HPC programming at a socket level by giving both processor types direct access to the same memory pool – not just a unified virtual memory space with copies to hide the physical differences, but a truly shared and physically unified memory space.


(Slide: AMD Financial Analyst Day 2022)

When it launches in the latter half of 2023, AMD’s MI300 is expected to go up against a few competing products, the most notable of which is likely NVIDIA’s Grace Hopper superchip, which combines an NVIDIA Armv9 Grace CPU with a Hopper GPU. NVIDIA has not gone for quite the same level of integration as AMD has, which arguably makes MI300 a more ambitious project, though NVIDIA’s decision to maintain a split memory pool is not without merit (e.g. capacity). Meanwhile, AMD’s schedule would have it coming in well ahead of arch rival Intel’s Falcon Shores XPU, which isn’t due until 2024.

Expect to hear a great deal more from AMD about Instinct MI300 in the coming months, as the company will be eager to show off their most ambitious processor to date.



from AnandTech https://ift.tt/oWtCzhu
via IFTTT

Wednesday, 4 January 2023

Intel CPU mystery unfolds as 6GHz flagship skips CES and suddenly goes on sale

https://ift.tt/Ciq8gc9

Intel’s Core i9-13900KS didn’t appear at CES 2023, but rather mysteriously the supercharged Raptor Lake flagship has popped up at a European retailer (briefly) and has also been spotted in China (before being swiftly pulled down in this case, too).

VideoCardz reports that the Core i9-13900KS was seen at a Chinese retailer, JD (as highlighted by @wxnod on Twitter), and before it was taken down, an enterprising soul managed to capture the spec listing and official pictures of the box (which look genuine enough – but still, we need to be careful about the authenticity of any leak).


The packaging appears to be the same as that of the existing flagship processor, the 13900K, using a silver wafer case – which is as expected – with the only change being the addition of ‘Special Edition’ to the box. (The 13900KS is basically a limited edition of the 13900K, built from higher-quality silicon that can be clocked faster – with a boost speed of 6GHz right out of the box in this case.)

The specs also show the base TDP (power usage) of the 13900KS to be 150W, as previously rumored (it’ll use a lot more power than this when boosting with the pedal to the metal, though, naturally).

The Core i9-13900KS was also sighted at LDLC, a French retailer, priced at €950, but that listing has again been yanked down. For comparative pricing, the 13900K costs €770 at LDLC, meaning the KS model is €180 or 23% more expensive than the base version of the flagship.

While this could be placeholder pricing from LDLC, in theory, it would mean that in the US, the 13900KS could retail at just over $700, which is about in line with what happened with Alder Lake and the 12900KS – so that perhaps lends a little more credibility to this leak.
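For what it’s worth, the arithmetic behind that US estimate is straightforward: apply LDLC’s KS-over-K premium to the 13900K’s US launch pricing. Treat the 13900K figure below as approximate launch-recommended pricing, and the result as a ballpark rather than a confirmed price:

```python
# Scaling the 13900K's US launch pricing by the premium seen at LDLC.
ldlc_13900ks_eur = 950
ldlc_13900k_eur = 770
us_13900k_usd = 589                    # approximate 13900K launch pricing

premium = ldlc_13900ks_eur / ldlc_13900k_eur - 1
print(f"KS premium at LDLC: {premium:.0%}")                       # ~23%
print(f"Implied US price: ~${us_13900k_usd * (1 + premium):.0f}") # ~$727
```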

Whatever the case, a Special Edition processor is obviously never going to be cheap, and don’t forget, Intel likely has price hikes inbound for Raptor Lake (as they’re already coming for Alder Lake).


Analysis: The case of the missing 13900KS

Going by previous chatter on the grapevine (add your own skepticism, again), the Core i9-13900KS is supposed to debut on January 12, which is why we expected to see the turbocharged Raptor Lake flagship appear at CES 2023, a week in advance of that – because it was known that Intel was set to reveal a bunch of new 13th-gen processors at the show. Now, Team Blue did indeed unveil new Raptor Lake chips, but the 13900KS wasn’t one of them.

So this omission caused some scratching of heads, and the affair gets even more mysterious with these separate appearances of the Core i9-13900KS at retailers. Granted, the product listings were swiftly removed in both cases as noted, but this seems to suggest the usual case of retailers jumping the gun a bit, and that the refreshed Raptor Lake flagship will still go on sale if not next week, then sometime very soon.

Although that makes it seem rather odd that Intel didn’t want to show off the 13900KS at CES, doesn’t it? Unless Team Blue is planning a very low-key launch and just a simple announcement for the processor. That could be the case, though we’d imagine Intel would want to make something of a fuss over the first chip to hit 6GHz out of the box – so maybe the release date of the 13900KS has slipped, and these retailers are way off the mark with their listings.

We’ll learn the truth soon enough, but remember that Intel has already promised that this chip will arrive early in 2023, and there was certainly a wide expectation that this meant January.



from TechRadar: computing components news https://ift.tt/HgEATju
via IFTTT

Tuesday, 3 January 2023

Intel Announces Non-K 13th Gen Core For Desktop: New 65 W and 35 W Processors

https://ift.tt/Ciq8gc9

When Intel launches a new family of desktop processors, it typically unleashes its high-end unlocked SKUs first, including the K and KF models. Not only does this give users a glimpse of its performance enhancements throughout its product development cycle and roadmap in the best possible light, but it also allows enthusiasts and high-end performance junkies to get their hands on the latest and most potent processors from the very beginning of the product cycle.

For the rest of the consumer market, Intel has finally pulled the proverbial trigger on its non-K series SKUs, with sixteen new Raptor Lake-S processors for desktops. The lineup spans a mixture of multiplier-locked 65 W SKUs, such as the Core i9-13900 and Core i7-13700, alongside T-series models with a TDP of just 35 W for lower-powered computing, including the Core i9-13900T. Furthermore, Intel has launched its Core i3 family, offering decent performance levels – albeit with only performance (P) cores and no efficiency (E) cores – at more affordable prices starting from $109.



from AnandTech https://ift.tt/G9Au1eq
via IFTTT