Wednesday 30 March 2022

Intel launches Arc GPUs powering up gaming laptops to take on AMD and Nvidia

https://ift.tt/ZRe13Ko

Intel has revealed the first of its Arc Alchemist graphics cards, and as we already knew, these initial offerings are laptop class GPUs.

Intel announced three classes of GPUs: the lower-end Arc 3 for mobile, the midrange Arc 5, and the Arc 7 for high-performance gaming laptops. This means Intel will finally be able to power legitimate gaming products, putting it head to head with gaming laptops powered by Nvidia and AMD. 

Intel broadly claims that Arc 3 offers up to double the performance of Intel Iris Xe integrated graphics, and its benchmarks largely bear this out, with generally sizeable gains across the board (in rarer worst-case scenarios, such as Rocket League, the A370M looks to be only around 20% faster, which is still a noticeable boost, of course).

However, Intel didn't talk about how performance would look against competing systems powered by Nvidia GeForce or AMD Radeon graphics, so it'll be interesting to see which laptop chips land on top. 

Luckily, we won't have to wait long – Intel Arc 3 GPUs will be available in gaming laptops starting March 30, and the Arc 5 and Arc 7 graphics will start appearing later in 2022. 

What about specs?

The Arc 3 GPUs are the A350M and A370M, entry-level products with 6 and 8 Xe-cores respectively. Both have 4GB of GDDR6 VRAM on board with a 64-bit memory bus, with the A350M sporting a clock speed of 1150MHz and the A370M upping that considerably to 1550MHz. Power consumption is 25W to 35W for the A350M, and 35W to 50W for the A370M.

As for the mid-range Arc 5, you get the A550M which runs with 16 Xe-cores clocked at 900MHz, doubling up the VRAM to 8GB (and widening the memory bus to 128-bit). Power will sit at between 60W and 80W for this GPU.

Finally, the high-end chips are the A730M and A770M which bristle with 24 and 32 Xe-cores respectively. The lesser A730M is clocked at 1100MHz and has 12GB of GDDR6 VRAM with a 192-bit bus, and power usage of 80W to 120W.

Intel has clocked the mobile flagship A770M at 1650MHz and this GPU has 16GB of video RAM with a 256-bit bus. Power consumption is 120W to 150W maximum.
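For a sense of what those bus widths mean for raw bandwidth, here's a quick back-of-the-envelope sketch. Note that Intel didn't state the GDDR6 data rate for these parts, so the common 14Gbps speed used below is our assumption, not an official figure:

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8.
# The 14Gbps GDDR6 rate is an assumption; Intel hasn't published it here.
def gddr6_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float = 14.0) -> float:
    return bus_width_bits * data_rate_gbps / 8  # GB/s

print(gddr6_bandwidth_gb_s(64))   # 112.0 GB/s -- Arc 3 (A350M / A370M)
print(gddr6_bandwidth_gb_s(128))  # 224.0 GB/s -- Arc 5 (A550M)
print(gddr6_bandwidth_gb_s(256))  # 448.0 GB/s -- Arc 7 (A770M)
```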

Never mind the raw specs, you may well be saying at this point: what about actual performance? Well, Intel does provide some internal benchmarking, but only for the Arc 3 GPUs.

The A370M is pitched as providing ‘competitive frame rates’ for gaming at 1080p resolution, exceeding 90 frames per second (fps) in Fortnite (where the GPU hits 94 fps at medium details), GTA V (105 fps, medium details), Rocket League (105 fps, high details) and Valorant (115 fps, high details).

Intel provides further game benchmarks showing over 60 fps performance in  Hitman 3 (62 fps, medium details), Doom Eternal (63 fps, high details), Destiny 2 (66 fps, medium details) and Wolfenstein: Youngblood (78 fps, medium details).

All of those benchmarks are taken with the A370M running in conjunction with an Intel Core i7-12700H processor, and comparisons are provided to Intel’s Iris Xe integrated GPU in a Core i7-1280P CPU.

(Image: Intel Arc Alchemist GPUs – credit: Intel)

Analysis: We can’t wait to see the rest of Intel’s alchemy

The first laptops with Arc 3 GPUs are supposedly available now – we’d previously heard from Intel that they’d be out on launch day, or the day after – and the one Intel highlights is the Samsung Galaxy Book2 Pro.

Hopefully, a good number of models should be out there soon enough – from all major laptop makers, as you’d expect – featuring Arc 3 graphics, which will happily slot into ultra-thins like the Galaxy Book2 Pro, providing what looks like pretty solid 1080p gaming performance (running the likes of Doom Eternal on high details in excess of 60 fps). The pricing of these laptops is set to start from $899 (around £680, AU$1,200), Intel notes.

It’s a shame we didn’t get any indication of how the mid-range Arc 5 – which is something of an oddity with its base clock dipping right down to 900MHz – and high-end Arc 7 products will perform, but then they don’t launch for a few months yet. What Intel can pull off here will tell us much more about how Arc will pan out in this first generation, and how the much-awaited desktop graphics cards – also expected to land in Q2 – will challenge AMD and Nvidia in gaming PCs.

Also of note is that during this launch, Intel let us know that XeSS, its AI supersampling tech (a rival to Nvidia DLSS and AMD FSR), won’t debut with these first mobile GPUs, but rather will arrive in the summer with the big gun Arc GPUs. Over 20 games will be supported initially.



from TechRadar: computing components news https://ift.tt/EtQdYc6
via IFTTT

AMD Ryzen 7 5700X leak suggests great value for mid-range CPU buyers

https://ift.tt/yIjGWN6

AMD’s inbound Ryzen 7 5700X might just be the most tempting purchase of all the new CPUs that Team Red unveiled recently, positioned as a sterling mid-range option, and apparently offering the same performance as the current 5800X, more or less – just at a considerably cheaper price tag.

We’ll come back to pricing, as there’s a bit more to that than meets the eye at first glance, but the crux of the matter here is some leaked benchmarks that show the new 5700X performing almost on a par with the 5800X.

As Wccftech reports, Geekbench results have been unearthed by Benchleaks showing the 5700X hitting a single-core score of 1,645 and multi-core of 10,196 (and also another entry showing 1,634 and 10,179 respectively). That’s only around 5% slower than the 5800X (though Wccftech’s comparative stats peg it even closer than that).

Now, it shouldn’t be a massive surprise to see that the 5700X is pretty close on the heels of the 5800X, because the spec of both chips is very similar.

The slight differences between these two 8-core CPUs are that the base clock of the 5700X is 3.4GHz rather than 3.8GHz, although boost speeds are much closer, with the 5700X only being 100MHz slower (at 4.6GHz). The TDP of the new processor is markedly lower at 65W versus 105W, though (giving the 5700X the edge on the efficiency front).

What’s really eye-opening is the pricing here, though, with the recommended price of the existing 5800X being set at $449 (around £340, AU$600), whereas the 5700X weighs in at a third less than that, namely $299 (around £230, AU$400).


Analysis: Pricing gap won’t be that large in reality – but it’ll still be compelling

While the Ryzen 7 5800X3D processor has been grabbing all the attention and headlines – with its new 3D V-cache tech – since AMD unveiled its new bunch of CPUs to debut throughout April, this just shows that the 5700X could be the one to watch in terms of ramping up mid-range PCs and shifting more units.

However, there are caveats, and of course we can’t put too much stock in a couple of leaked benchmarks. Also, the 5800X is still a touch faster here, and the difference in cost isn’t quite as stark as the recommended price tags make out.

That’s because the 5800X isn’t selling at MSRP anymore, as you might imagine – US retailers have knocked around 20% off the chip at this point, so it’s attainable for around the $360 mark (about £275, AU$480) right now. And the 5700X will, of course, launch at its MSRP to begin with, and as that’s $299 (around £230, AU$400), that isn’t quite such a drop downwards.

Even so, it’ll still be the best part of 20% cheaper than the 5800X, and as such it’s going to represent a tempting option for those looking for a powerful yet affordable Ryzen chip for their new PC (or to upgrade a current rig). Don’t forget that AMD also has some very tasty-looking silicon emerging at the budget end of the spectrum which will be a crucial part of these fresh Ryzen offerings for next month, too.
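For those who like to check the math, the percentages quoted above work out as follows (US MSRPs, with the discounted street price as reported):

```python
# Verifying the pricing deltas discussed above (all figures in USD).
msrp_5800x = 449    # AMD's recommended price for the 5800X
street_5800x = 360  # approximate retail price after ~20% discounts
msrp_5700x = 299    # launch MSRP of the 5700X

print(f"{1 - msrp_5700x / msrp_5800x:.0%}")    # 33% -- 'a third less' at MSRP
print(f"{1 - msrp_5700x / street_5800x:.0%}")  # 17% -- just shy of 20% vs street price
```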



from TechRadar: computing components news https://ift.tt/anDfg60
via IFTTT

Tuesday 29 March 2022

The Intel Core i7-12700K and Core i5-12600K Review: High Performance For the Mid-Range

https://ift.tt/uEJhCoO

Since Intel announced and launched its 12th Gen Core series of CPUs into the market, we've reviewed both the flagship Core i9-12900K and the entry-level (but still very capable) Core i3-12300 processors. Today, we're looking at the middle of the stack, with the Core i7-12700K and Core i5-12600K both taking center stage.

Ever since AMD launched its Zen 3 architecture and its Ryzen 5000 series for desktop, Intel has been playing catch-up in both performance and pricing. Intel's hybrid Alder Lake design is its second attempt (after Rocket Lake) to dethrone Ryzen 5000 as the go-to processor for consumers building a high-end desktop system for gaming, content creation, and everything in between. It's time to see if the Core i7-12700K and Core i5-12600K can finally level the playing field, if not outright give Intel an advantage in the always popular mid-range and enthusiast markets.



from AnandTech https://ift.tt/Qfh2IDS
via IFTTT

Monday 28 March 2022

Xbox tech set to reduce CPU overhead by up to 40% when gaming on Windows 11

https://ift.tt/w0PyuAa

Windows 11 gamers could get some really beefy benefits from DirectStorage tech, which was recently announced to have arrived on Microsoft’s newest OS – but it’ll be some time yet before developers incorporate it into games.

We already knew that Windows 11 would give users ‘optimal’ results with DirectStorage (compared to Windows 10) in terms of what this feature does – namely seriously speeding up NVMe SSDs.

However, there’s been an eye-opening revelation concerning exactly how much difference this will make when it comes to relieving the pressure on the PC’s processor.

As TweakTown reports, Cooper Partin, a senior software engineer at Microsoft, explained that the DirectStorage implementation for PC is specifically designed for Windows.

Partin noted: “DirectStorage is designed for modern gaming systems. It handles smaller reads more efficiently, and you can batch multiple requests together. When fully integrated with your title, DirectStorage, with an NVMe SSD on Windows 11, reduces the CPU overhead in a game by 20-40%.

“This is attributed to the advancements made in the file IO stack on Windows 11 and the improvements on that platform in general.”


Analysis: CPU resources freed which will make a major difference elsewhere

A 40% reduction is a huge difference in terms of lightening the load on the CPU, although that is a best-case scenario – but even 20% is a big step forward for freeing up processor resources.

Those resources can then be used elsewhere to help big open world games run more smoothly – as we’ve seen before, DirectStorage isn’t simply about making games load more quickly. There’s much more to it than that, and now we’re getting some exciting glimpses of exactly how much difference this Microsoft tech could make to PC games.

Of course, while the public SDK (software development kit) has been released, it’s still up to game developers to bake in this tech when they’re coding, and it’ll be quite some time before we see DirectStorage appearing in many games.

The first game which uses DirectStorage is Forspoken, and we got a glimpse of that at GDC, where it was shown to load up in a single second. Forspoken is scheduled to arrive in October 2022.



from TechRadar: computing components news https://ift.tt/rB35ovw
via IFTTT

Intel will launch ‘world’s fastest’ desktop CPU on April 5

https://ift.tt/w0PyuAa

Intel’s Core i9-12900KS processor is set to launch on April 5, just over a week away.

The 12900KS will debut at the Intel Talking Tech event which will be streamed on Twitch at 12pm PT in the US (that’s 8pm UK time), during which expert PC builders will show off their skills putting together a number of machines, presumably built around the new Alder Lake chip.


Intel bills the ‘KS’ version of its flagship – which essentially uses the best-performing 12900K silicon, capable of being pushed to higher clock speeds – as the ‘world’s fastest desktop processor’ based on the fact that it’s capable of boosting up to 5.5GHz.

That’s impressive, of course, but that speed can only be hit on a single core for a limited period of time, with the rumor mill asserting that the all-core boost will be 5.2GHz (which, again, is still impressive across the full 8 performance cores of this CPU).


Analysis: Top dog CPUs to battle it out in April

It’s no great surprise to hear that the launch of Intel’s Core i9-12900KS is so close, given that we’ve already seen the incoming flagship listed at some retailers complete with pre-release pricing, and indeed one report contends that someone has already bought one of the CPUs.

While Intel’s assertion that this is the ‘world’s fastest’ desktop CPU based on clocks is one thing, what PC owners will really be interested in is how well the 12900KS performs in real-world apps and games.

What makes this a really interesting launch is the incoming rival Ryzen 7 5800X3D due to arrive on April 20, as AMD has claimed this refreshed processor beats out the vanilla 12900K, and so should come close to the KS version, at least in terms of gaming. Outside of games, the 5800X3D won’t be so competitive with the Alder Lake top dogs, but for hardcore PC gamers who aren’t bothered about other apps in the main, that may not matter.

The crucial point will be the pricing here, and we already know that the new Ryzen processor will cost $449 (around £340, AU$625), or at least that’s the recommended price. We don’t know how the Core i9-12900KS will be priced yet, but pre-release leaks suggest it could go as high as $800 in the US (around £610, AU$1,070). If that’s the case, the relative value proposition is clearly going to be pretty heavily tilted in favor of Team Red. (Remember, the 12900K currently sells for around $600 or so in the US, so we can certainly expect KS pricing to be higher than that).

Ultimately, though, we can’t judge the relative merits of these high-end CPUs until we’ve tested them both, and taken into account whatever the final price tags end up as (at retailers, not MSRPs – and the latter could be quite different depending on demand and availability).

Via Wccftech



from TechRadar: computing components news https://ift.tt/HFw4ZD1
via IFTTT

Friday 25 March 2022

Newegg Briefly Lists the Intel Core i9-12900KS: 5.5 GHz Turbo, 5.2 GHz All-Core

https://ift.tt/sbSAmk6

Long expected from Intel, the Core i9-12900KS has now been outed thanks to an apparently accidental listing from Newegg. The major PC parts retailer listed the unannounced Intel chip for sale and began taking orders earlier this morning, pulling it a couple of hours later. But with the scale and popularity of Newegg – as well as having the complete specifications posted – the cat is now irreversibly out of the bag.



from AnandTech https://ift.tt/3Q2CIy1
via IFTTT

AMD’s next-gen frame rate booster leaves the most popular GPU in the world out in the cold

https://ift.tt/TFPDefi

The most popular graphics card out there (at least going by Steam’s hardware survey stats) falls below the recommended requirements for AMD’s incoming next-gen frame rate boosting feature.

Team Red recently revealed FSR 2.0 (the successor to the original FidelityFX Super Resolution tech), and made it clear that version 2.0 will not be usable (in theory – more on that later) by those with an Nvidia GTX 1060 graphics card (whereas v1.0 was compatible).

The new baseline where support for Nvidia cards begins is the GTX 16 Series (Turing) and GTX 1070 from the previous Pascal generation. On the AMD side, the base GPUs are the RX 6500 XT and RX 590.

This is a recommendation, mind, so not a hard-and-fast rule, but nonetheless the GTX 1060 – and RX 580 for that matter – can’t clear the bar with FSR 2.0 running at 1080p.

Higher resolutions make even greater demands on your graphics hardware, of course. We’re talking an RX 6600 or RX 5600 (or RX Vega) or better, or an RTX 2060 or GTX 1080 (or RTX 3060 for current-gen) when it comes to 1440p.

4K calls for an RX 6700 XT or RX 5700 and above, and an RTX 2070 or RTX 3070 for those with Team Green.
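Pulling AMD's reported recommendations together in one place, here they are as a simple lookup structure (our own summary for convenience, not anything from AMD's SDK):

```python
# AMD's recommended minimum GPUs for FSR 2.0 per target resolution,
# as reported above. This dict is our own summary, not an AMD API.
FSR2_RECOMMENDED_MINIMUMS = {
    "1080p": {"AMD": ["RX 6500 XT", "RX 590"],
              "Nvidia": ["GTX 16 series", "GTX 1070"]},
    "1440p": {"AMD": ["RX 6600", "RX 5600", "RX Vega"],
              "Nvidia": ["RTX 3060", "RTX 2060", "GTX 1080"]},
    "4K":    {"AMD": ["RX 6700 XT", "RX 5700"],
              "Nvidia": ["RTX 3070", "RTX 2070"]},
}

for resolution, vendors in FSR2_RECOMMENDED_MINIMUMS.items():
    print(f"{resolution}: {vendors}")
```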

FSR 2.0 is expected to be released at some point in Q2.


Analysis: The temporal price… but all is not lost for GTX 1060 owners

First off, why the greater demands here? Well, FSR 2.0 moves to use temporal upscaling, rather than spatial upscaling as seen in FSR 1.0, the difference being that the former uses data drawn from past frames (not just the current frame) to improve overall quality. Unsurprisingly, upping the quality to better compete with Nvidia’s rival DLSS tech is more taxing for the GPU, and there’s no way around that.
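As a very rough illustration of that difference – and emphatically not AMD's actual FSR 2.0 algorithm – a temporal upscaler blends each new frame into an accumulated history, while a spatial upscaler only ever sees the current frame. A minimal NumPy sketch, assuming a simple exponential blend with no motion-vector reprojection:

```python
import numpy as np

def spatial_upscale(frame: np.ndarray, factor: int) -> np.ndarray:
    """Naive spatial upscale: repeat each pixel factor x factor times."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def temporal_accumulate(current: np.ndarray, history: np.ndarray,
                        alpha: float = 0.1) -> np.ndarray:
    """Exponentially blend the new frame into the accumulated history.
    Real temporal upscalers also reproject history with motion vectors
    and clamp/reject stale samples -- omitted here for brevity."""
    return alpha * current + (1.0 - alpha) * history

history = None
for _ in range(8):
    low_res = np.random.rand(540, 960)   # stand-in for a freshly rendered frame
    up = spatial_upscale(low_res, 2)     # naive 1080p output from a 540p render
    history = up if history is None else temporal_accumulate(up, history)
```

The extra history buffers, reprojection and blending are exactly where the added GPU and memory bandwidth cost comes from.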

Still, these are only recommendations from AMD, and it could be the case that in certain PCs, maybe with faster CPUs or other components working alongside the popular GTX 1060, the results could be palatable (at least in some games).

In short, this isn’t a black-and-white supported-or-unsupported situation; there’ll be shades of grey, and some owners with slightly underpowered GPUs compared to AMD’s suggestions might get away with it at 1080p.

Team Red itself noted: “Depending on your specific system specifications, the system requirements of individual games that support FSR 2.0, and your target resolution, you may be still able to have a good upscaling experience on lower-performing or older GPUs [than the recommended ones].”

Further tweaks to FSR 2.0 could improve the situation going forward, as well, as our sister site PC Gamer points out, while also observing that it seems like the new tech depends considerably on memory bandwidth to work well. (The RTX 3050 is notably not recommended, whereas the GTX 1070 gets a shout – and the latter offers more bandwidth, even though technically the 3050 is a slightly faster card).

Finally, it’s worth remembering that AMD also now has Radeon Super Resolution (RSR) on the table as a more basic frame rate booster, and one that applies across a great many games, not just those coded to support it (FSR support must be baked in by game developers). The catch with RSR is that it’s built directly into the Radeon driver, so while it’s good for AMD GPUs, Nvidia card owners can’t benefit from it.



from TechRadar: computing components news https://ift.tt/4CH7Pwz
via IFTTT

Intel Core i9-12900KS could be about to go on sale... with a wallet-destroying price

https://ift.tt/TFPDefi

Intel’s Core i9-12900KS, the incoming pepped-up version of Intel’s Alder Lake flagship, was spotted on sale at a big US retailer, before the product listing got yanked down – but the price Newegg briefly listed for the CPU is (perhaps predictably) expensive.

The price tag was a hefty $799.99 direct from Newegg (around £610, AU$1,070), although obviously we can’t take it as read that this will be the correct price, given that the listing is no longer there (we’ll discuss that in more depth later).

As well as pricing, we also got further spec details on this CPU, as VideoCardz, which spotted this, reports. The maximum boost of the Core i9-12900KS is 5.5GHz (which we already knew), with a base clock of 3.4GHz. Those are the performance cores; the efficiency cores run at 2.5GHz and boost to 4GHz.

So, compared to the vanilla 12900K, the KS version – which is a higher-binned, or better performing variant – runs 100MHz faster with the efficiency cores across the board, and has 200MHz faster base clocks with the performance cores, plus 300MHz extra boost.


Analysis: Comparing possible pricing to the 12900K and AMD Ryzen rivals

This is an expensive processor, then, if that pricing is correct. Of course, there’s a chance that the price tag could be a placeholder, but there are several reasons why that doesn’t seem likely.

Firstly, we know that the Core i9-12900KS is about to launch imminently, as Intel has told us the chip should be out before the end of March, meaning it’ll theoretically be on sale this week or next. Plus we’ve seen one report of somebody actually buying one of these new flagship CPUs already, with a retailer jumping the gun and selling it early.

The very fact that Newegg published this listing (briefly) is an indication that a launch is about to happen, as it seems the US retailer must have jumped the gun also, but as a high-profile operation it’s unlikely to be far off the mark (you would hope – indeed, there’s a distinct possibility we could see the Core i9-12900KS go on sale later today).

Furthermore, the $800 asking price does line up with what we’ve already heard on the grapevine about the 12900KS weighing in as pricey, tipping the scales at close to $800.

We should stress at this point that the Newegg listing is far from any confirmation of intended pricing (which, remember, is likely to be above Intel’s recommended pricing anyway). But it doesn’t look good, and as a broad indication of where things might be pitched, it is around 35% more expensive than the base 12900K (and the performance boost does not add up to anything like that, of course).
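The premium implied by that listing is easy to check against the 12900K's recommended price:

```python
# Leaked Newegg listing vs the 12900K's $589 recommended price.
listing, msrp_12900k = 799.99, 589
print(f"{listing / msrp_12900k - 1:.1%}")  # 35.8% -- the ~35% premium noted above
```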

Never mind comparing to the existing 12900K, though, where things look more worrying is if we consider AMD’s rival chips, and in particular, the new model which Team Red is about to launch – the Ryzen 7 5800X3D. That refreshed processor is expected to be competitive with the 12900K in terms of performance, and might even slightly outdo the Intel CPU – pinches of salt to hand, as that’s AMD’s pre-release assertion – and yet it retails at… wait for it… $449 (around £340, AU$625) when it launches next month (April 20).

That’s a good chunk less than the 12900K, and indeed almost half this purported price for the 12900KS, although there are some major caveats here. Namely that AMD claims the real strength of its 5800X3D is in gaming (where the 3D V-cache tech really excels), and so performance in other areas won’t be anywhere near as good as Intel’s Alder Lake heavyweights. It all depends on what you want to do with your PC, of course, but regardless, that asking price for the 12900KS seems exorbitant, even if only comparing to the existing 12900K.

Particularly when you consider that past KS models (the 9900KS was the last such effort from Intel) haven’t commanded nearly as much of a premium, but then, we’ll have to wait to see the final pricing when the 12900KS is actually on sale before we can really judge the chip’s relative merits, naturally.



from TechRadar: computing components news https://ift.tt/Bdm0Fio
via IFTTT

Tuesday 22 March 2022

The NVIDIA GTC Spring 2022 Keynote Live Blog (Starts at 8:00am PT/15:00 UTC)

https://ift.tt/EBy83ue

Please join us at 8:00am PT (15:00 UTC) for our live blog coverage of NVIDIA’s Spring GTC keynote address. The traditional kick-off to the show – be it physical or virtual – NVIDIA’s annual spring keynote is a showcase for NVIDIA’s vision for the next 12 to 24 months across all of their segments, from graphics to AI to automotive. Along with a slew of product announcements, the presentation, delivered by CEO (and James Halliday LARPer) Jensen Huang, always contains a few surprises.

Looking at NVIDIA's sizable product stack, with the company's Ampere-based A100 server accelerators about to turn two years old, NVIDIA is arguably due for a major server GPU refresh. Meanwhile, there's also the matter of NVIDIA's in-development Armv9 "Grace" CPUs, which were first announced last year. And of course, the latest developments in NVIDIA's efforts to make self-driving cars a market reality.



from AnandTech https://ift.tt/cyPoksA
via IFTTT

AMD Releases Instinct MI210 Accelerator: CDNA 2 On a PCIe Card

https://ift.tt/FmMXhKk

With both GDC and GTC going on this week, this is a big time for GPUs of all sorts. And today, AMD wants to get in on the game as well, with the release of the PCIe version of their MI200 accelerator family, the MI210.

First unveiled alongside the MI250 and MI250X back in November, when AMD initially launched the Instinct MI200 family, the MI210 is the third and final member of AMD’s latest generation of GPU-based accelerators. Bringing the CDNA 2 architecture into a PCIe card, the MI210 is being aimed at customers who are after the MI200 family’s HPC and machine learning performance, but need it in a standardized form factor for mainstream servers. Overall, the MI210 is being launched widely today as part of AMD moving the entire MI200 product stack to general availability for OEM customers.

AMD Instinct Accelerators
                      | MI250                      | MI210                  | MI100         | MI50
Compute Units         | 2 x 104                    | 104                    | 120           | 60
Matrix Cores          | 2 x 416                    | 416                    | 480           | N/A
Boost Clock           | 1700MHz                    | 1700MHz                | 1502MHz       | 1725MHz
FP64 Vector           | 45.3 TFLOPS                | 22.6 TFLOPS            | 11.5 TFLOPS   | 6.6 TFLOPS
FP32 Vector           | 45.3 TFLOPS                | 22.6 TFLOPS            | 23.1 TFLOPS   | 13.3 TFLOPS
FP64 Matrix           | 90.5 TFLOPS                | 45.3 TFLOPS            | 11.5 TFLOPS   | 6.6 TFLOPS
FP32 Matrix           | 90.5 TFLOPS                | 45.3 TFLOPS            | 46.1 TFLOPS   | 13.3 TFLOPS
FP16 Matrix           | 362 TFLOPS                 | 181 TFLOPS             | 184.6 TFLOPS  | 26.5 TFLOPS
INT8 Matrix           | 362.1 TOPS                 | 181 TOPS               | 184.6 TOPS    | N/A
Memory Clock          | 3.2 Gbps HBM2E             | 3.2 Gbps HBM2E         | 2.4 Gbps HBM2 | 2.0 Gbps HBM2
Memory Bus Width      | 8192-bit                   | 4096-bit               | 4096-bit      | 4096-bit
Memory Bandwidth      | 3.2 TBps                   | 1.6 TBps               | 1.23 TBps     | 1.02 TBps
VRAM                  | 128GB                      | 64GB                   | 32GB          | 16GB
ECC                   | Yes (Full)                 | Yes (Full)             | Yes (Full)    | Yes (Full)
Infinity Fabric Links | 6                          | 3                      | 3             | N/A
CPU Coherency         | No                         | N/A                    | N/A           | N/A
TDP                   | 560W                       | 300W                   | 300W          | 300W
Manufacturing Process | TSMC N6                    | TSMC N6                | TSMC 7nm      | TSMC 7nm
Transistor Count      | 2 x 29.1B                  | 29.1B                  | 25.6B         | 13.2B
Architecture          | CDNA 2                     | CDNA 2                 | CDNA (1)      | Vega
GPU                   | 2 x CDNA 2 GCD "Aldebaran" | CDNA 2 GCD "Aldebaran" | CDNA 1        | Vega 20
Form Factor           | OAM                        | PCIe (4.0)             | PCIe (4.0)    | PCIe (4.0)
Launch Date           | 11/2021                    | 03/2022                | 11/2020       | 11/2018
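The memory bandwidth figures in that table fall straight out of the bus width and data rate, which makes for an easy sanity check:

```python
# Bandwidth (TB/s) = bus width (bits) x per-pin data rate (Gbps) / 8 / 1000.
def hbm_bandwidth_tb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8 / 1000

print(hbm_bandwidth_tb_s(4096, 3.2))  # ~1.64 -- MI210's 1.6 TB/s
print(hbm_bandwidth_tb_s(8192, 3.2))  # ~3.28 -- MI250's 3.2 TB/s
print(hbm_bandwidth_tb_s(4096, 2.4))  # ~1.23 -- MI100
```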

Starting with a look at the top-line specifications, the MI210 is an interesting variant of the existing MI250 accelerators. Whereas those two parts were based on a pair of Aldebaran (CDNA 2) dies in an MCM configuration on a single package, for the MI210 AMD is paring everything back to a single die and related hardware. With the MI250(X) requiring 560W in the OAM form factor, AMD essentially needed to halve the hardware anyhow to get things down to 300W for a PCIe card. So they’ve done so by ditching the second on-package die.

The net result is that the MI210 is essentially half of an MI250, both in regards to physical hardware and expected performance. The CDNA 2 Graphics Compute Die features the same 104 enabled CUs as on the MI250, with the chip running at the same peak clockspeed of 1.7GHz. So workload scalability aside, the performance of the MI210 is for all practical purposes half that of an MI250.

That halving goes for memory, as well. Whereas the MI250 paired 64GB of HBM2E memory with each GCD – for a total of 128GB of memory – the MI210 brings that down to 64GB for its single GCD. AMD is using the same 3.2Gbps HBM2E memory here, so the overall memory bandwidth for the chip is 1.6 TB/second.

In regards to performance, the use of a single Aldebaran die does make for some odd comparisons to AMD’s previous-generation PCIe card, the Radeon Instinct MI100. While clocked higher, the slightly reduced number of CUs relative to the MI100 means that for some workloads, the old accelerator is, at least on paper, a bit faster. In practice, the MI210 has more memory and more memory bandwidth, so it should still have the performance edge in the real world, but it’s going to be close. In workloads that can’t take advantage of CDNA 2’s architectural improvements, the MI210 is not going to be a step up from the MI100.

All of this underscores the overall similarity between the CDNA (1) and CDNA 2 architectures, and how developers need to make use of CDNA 2’s new features to get the most out of the hardware. Where CDNA 2 shines in comparison to CDNA (1) is with FP64 vector workloads, FP64 matrix workloads, and packed FP32 vector workloads. All three use cases benefit from AMD doubling the width of their ALUs to a full 64-bits wide, allowing FP64 operations to be processed at full speed. Meanwhile, when FP32 operations are packed together to completely fill the wider ALU, then they too can benefit from the new ALUs.

But, as we noted in our initial MI250 discussion, like all packed instruction formats, packed FP32 isn’t free. Developers and libraries need to be coded to take advantage of it; packed operands need to be adjacent and aligned to even registers. For software being written specifically for the architecture (e.g. Frontier), this is easily enough done, but more portable software will need to be updated to take this into account. And it’s for that reason that AMD wisely still advertises its FP32 vector performance at full rate (22.6 TFLOPS), rather than assuming the use of packed instructions.

The launch of the MI210 also marks the introduction of AMD’s improved matrix cores into a PCIe card. For CDNA 2, they’ve been expanded to allow full-speed FP64 matrix operation, bringing them up to the same 256 FLOPS/clock/CU rate as FP32 matrix operations, a 4x improvement over the old 64 FLOPS/clock/CU rate.

AMD GPU Throughput Rates (FLOPS/clock/CU)

                   | CDNA 2 | CDNA (1) | Vega 20
FP64 Vector        | 128    | 64       | 64
FP32 Vector        | 128    | 128      | 128
Packed FP32 Vector | 256    | N/A      | N/A
FP64 Matrix        | 256    | 64       | 64
FP32 Matrix        | 256    | 256      | 128
FP16 Matrix        | 1024   | 1024     | 256
BF16 Matrix        | 1024   | 512      | N/A
INT8 Matrix        | 1024   | 1024     | N/A
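Those per-CU rates multiply out to the headline TFLOPS figures in the first table. Taking the MI210's 104 CUs at a 1.7GHz boost clock:

```python
# TFLOPS = CUs x clock (GHz) x FLOPS/clock/CU / 1000.
def tflops(cus: int, clock_ghz: float, flops_per_clock_per_cu: int) -> float:
    return cus * clock_ghz * flops_per_clock_per_cu / 1000

print(tflops(104, 1.7, 128))   # ~22.6 -- MI210 FP64/FP32 vector
print(tflops(104, 1.7, 256))   # ~45.3 -- MI210 FP64/FP32 matrix, packed FP32 vector
print(tflops(104, 1.7, 1024))  # ~181  -- MI210 FP16/BF16 matrix
```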

Moving on, the PCIe format MI210 also gets a trio of Infinity Fabric 3.0 links along the top of the card, just like the MI100. This allows an MI210 card to be linked up with one or three other cards, forming a 2 or 4-way cluster of cards. Meanwhile, backhaul to the CPU or any other PCIe devices is provided via a PCIe 4.0 x16 connection, which is being powered by one of the flexible IF links from the GCD.

As previously mentioned, the TDP for the MI210 is set at 300W, the same level as the MI100 and MI50 before it – and essentially the limit for a PCIe server card. Like most server accelerators, this is a fully passive, dual-slot card design, relying on significant airflow from the server chassis to keep things cool. The GPU itself is powered by a combination of the PCIe slot and an 8-pin EPS12V connector at the rear of the card.

Otherwise, despite the change in form factors, AMD is going after much the same market with the MI210 as they have with the MI250(X) – which is to say, HPC users who specifically need a fast FP64 accelerator. Thanks to its heritage as a chip designed first and foremost for supercomputers (i.e. Frontier), the MI200 family currently stands alone in its FP64 vector and FP64 matrix performance, as rival GPUs have focused instead on improving performance at the lower precisions used in most industry/non-scientific workloads. Though even at lower precisions, the MI200 family is nothing to sneeze at with its 1024 FLOPS/clock/CU rate on FP16 and BF16 matrix operations.

Wrapping things up, MI210 is slated to become available today from AMD’s usual server partners, including ASUS, Dell, Supermicro, HPE, and Lenovo. Those vendors are now also offering servers based on AMD’s MI250(X) accelerators, so AMD’s more mainstream customers will have access to systems based on AMD’s full lineup of MI200 accelerators.



from AnandTech https://ift.tt/y6mKhdb
via IFTTT

Monday 21 March 2022

AMD Releases Milan-X CPUs With 3D V-Cache: EPYC 7003 Up to 64 Cores and 768 MB L3 Cache

https://ift.tt/b3kFeO2

There's been a lot of focus on how both Intel and AMD are planning for the future in packaging their dies to increase overall performance and mitigate higher manufacturing costs. For AMD, that next step has been V-Cache, an additional L3 cache (SRAM) chiplet that's designed to be 3D die stacked on top of an existing Zen 3 chiplet, tripling the total amount of L3 cache available. Now AMD's V-Cache technology is finally becoming available to the mass market, as AMD's EPYC 7003X "Milan-X" server CPUs have now reached general availability.

As first announced late last year, AMD is bringing its 3D V-Cache technology to the enterprise market through Milan-X, an advanced variant of its current-generation 3rd Gen Milan-based EPYC 7003 processors. AMD is launching four new processors ranging from 16-cores to 64-cores, all of them with Zen 3 cores and 768 MB of stacked L3 3D V-Cache.



from AnandTech https://ift.tt/KUtYWJH
via IFTTT

Sunday 20 March 2022

Intel Core i9-12900K vs AMD Ryzen 9 5900X

https://ift.tt/F76lQhr

Ever since the first AMD Ryzen processors hit the market all the way back in 2017, Intel has kind of been on the defensive. It was stuck on a 14nm process for so long, and while it was able to keep up in single-core performance, it quickly lost the lead in multi-core performance - something that is getting more important every single year. 

However, with its 12th-generation Alder Lake processors, led by the Intel Core i9-12900K, Intel finally gained a fighting chance against the AMD Ryzen 9 5900X - a processor that was untouchable when it came out in late 2020.

And because both processors are trying to do pretty much the same thing, albeit in very different ways, we thought it was about time to take a closer look at these two processors. After all, picking out the best processor isn't just about the numbers on the box or the color theme of the brand. 

(Image: AMD Ryzen 9 5900X – credit: Future)

Intel Core i9-12900K vs AMD Ryzen 9 5900X: price

There's no way of getting around the fact that both the Intel Core i9-12900K and the AMD Ryzen 9 5900X are expensive processors. Looking to build or buy a system with either of these chips likely means you're going for a high-end device. So, it shouldn't be too surprising that they have high price tags. 

The Intel Core i9-12900K is the most recently released one, and it is still at full price pretty much everywhere. You'll find this 16-core processor at around $612 / £559 / AU$949. This is a bit higher than Intel's recommended price of $589-$599, but not by much. We imagine that prices will start to go down as the processor ages, and you should be able to find good deals on it soon. 

The AMD Ryzen 9 5900X, however, has been out for about a year and a half, and as such is starting to see lower prices. You can find it starting at $499 / £409 / AU$739, which is much more affordable than the Intel Core i9 at the moment. It does have fewer cores, though. But if you want an AMD 16-core chip, you could fork over around $599 / £548 / AU$919 for the Ryzen 9 5950X, instead. 

(Image: Intel Alder Lake processors on motherboard and on table – credit: Future)

Intel Core i9-12900K vs AMD Ryzen 9 5900X: specs

Both the Intel Core i9-12900K and the AMD Ryzen 9 5900X are high-end CPUs, but they're quite different from one another. 

The AMD Ryzen 9 5900X will look pretty familiar to anyone with passing knowledge of desktop PC CPUs. It's a 12-core, 24-thread processor with a max boost of 4.8GHz. It also comes with a whopping 70MB of cache, split between L2 and L3, and a TDP (thermal design power) of 105W. 

The Intel Core i9-12900K is a 16-core processor, but the way those cores are laid out is quite a bit different than the AMD chip. Unlike the Ryzen processor - and Intel's previous CPUs, no less - the 12900K is using a hybrid chip design. Specifically, it follows the big.LITTLE design philosophy popularized by Arm. 

Basically, it has 8 Performance cores and 8 Efficient cores. The Performance cores are dual-threaded, just like the Ryzen cores, but the Efficient cores are not. So, while this chip has more cores altogether, it has the same number of threads, 24, as the Ryzen 9 5900X. 
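The thread arithmetic works out like this (assuming Hyper-Threading/SMT is enabled on both chips):

```python
# Thread counts implied by the core layouts described above.
p_cores, e_cores = 8, 8
threads_12900k = p_cores * 2 + e_cores * 1  # P-cores are dual-threaded, E-cores aren't
threads_5900x = 12 * 2                      # every Zen 3 core runs two threads
print(threads_12900k, threads_5900x)        # 24 24
```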

Both Intel and Apple have moved to a hybrid chip design like this, and AMD is the only one of the three still using a homogeneous layout, where every core is identical. We're not sure how much longer AMD will stick with this design philosophy, but it seems to be working for the company for now.

(Image: AMD Ryzen 9 5900X – credit: Future)

Intel Core i9-12900K vs AMD Ryzen 9 5900X: performance

While AMD and Intel have been trading blows for the last few years, the release of the Intel Core i9-12900K sees Team Blue pulling ahead again - but this time the difference is pretty significant. 

In our review, we found that the Intel Core i9-12900K is about 21% faster than the Ryzen 9 5900X in single core workloads, particularly Cinebench. It's odd, because Cinebench has been the workload that has seen the most success on AMD processors, but Intel really pulled away this time. 

And it's not just single-core. In the Cinebench multi-core test, Intel is 23% faster, ending AMD's reign as the multi-threaded champ. This is repeated in pretty much every benchmark we ran in our review, with the 5900X not gaining a lead in any of our creative workloads. 

The closest it got was in Blender where the Ryzen 9 5900X was just 10% slower than the Core i9-12900K. But even then, that's still a pretty major loss. 

In gaming, Intel is still winning, especially in the CPU-intensive Total War: Three Kingdoms. We ran the game on low settings, to make sure it was leaning as much on the CPU as possible, and the Core i9-12900K got 480 fps to the Ryzen 9 5900X's 380 fps. 

However, the tables turned in Metro: Exodus, where the AMD Ryzen 9 5900X beat Intel with 251 fps to Team Blue's 246 fps. That's a small victory, but it's still a victory. 

Either way you slice it, the Intel Core i9-12900K is faster than the Ryzen 9 5900X. Whether that performance justifies the price difference is up to you, though Intel's support for DDR5, PCIe 5.0, and Thunderbolt may help tip the scales. 



from TechRadar: computing components news https://ift.tt/nA8GEwN
via IFTTT

Wednesday 16 March 2022

Chinese CPUs could soon give Intel a run for its money

https://ift.tt/oM0V8jO

If Intel's venture into the graphics card market has you wishing for more variety in processors too, then you might be in luck. According to a report from Taiwan's DigiTimes, an Intel exec claims that Chinese CPU makers will become "strong competitors" to Intel within the next three to five years. 

These comments were made by Rui Wang, SVP of Intel Corporation and chair of Intel China, at the 2022 National Party Congress on March 11, though no specific maker of Chinese processors was named. Tom's Hardware also notes in its own report that, given the lack of data provided alongside the predictions, this could simply be Wang being polite at an event hosted by the Chinese Communist Party.

In fact, the only real support we can see provided for this prediction comes from China's Minister of Information and Technology, Xiao Yaqing, who asserted that the domestic chip industry had grown by a third since this time last year.

"So far there has not been any local companies that are able to deal a substantial threat to Intel, but in 3-5 years, it will become clear that local companies will emerge as strong rivals," claimed Wang, though she also added some precaution that beating the mighty Team Blue in such a short time is far from an easy task given they make some of the worlds best processors, stating "Intel won't be polite, and will exert its power to compete fairly."

It wasn't made clear if Wang's claims referred to Chinese brands dominating the global market or simply taking the lead in China. Even if specific manufacturers of processors such as Zhaoxin or Loongson had been mentioned, they'd likely be unfamiliar in the West unless you're deeply involved in the world of computing.

Still, China's IT infrastructure is growing at an incredible rate, so while these claims may seem like nothing but appeasement, there is a chance they could prove true if we see hardware that was previously exclusive to the region released globally. Right now, China provides around a quarter of Intel's annual revenue, though, so whichever rival processors are hoping to bring the heat will need to start on home turf first.


Opinion: Is this good or bad for consumers?

It's pretty obvious that Intel getting dethroned by unrecognized brands from China is bad news for Team Blue, but adding more competition to the wider CPU market could actually be a good thing for everyday consumers. 

More competition on the market will encourage brands to keep their products reasonably priced in a bid to appeal to a wider audience than rival manufacturers, which could result in computing hardware becoming more affordable and more varied – provided these newcomers don't meet the same fate as many other Chinese brands in the West.

To provide just one example, Huawei has been hit by US sanctions on Chinese technology over concerns that its smartphones could be used to spy on behalf of the Chinese government, though computing components may not face the same issues.

China also has its own graphics card (Fenghua) and RAM manufacturers, so we could see other technologies outside of CPUs start to make their way overseas. It's likely that even if these Chinese processors don't usurp Intel, they'll make more of an impact and become more recognizable in the west within the next few years.



from TechRadar: computing components news https://ift.tt/adVAb6D
via IFTTT

Intel Core i9-12900KS could be about to spoil AMD’s Ryzen 7 5800X3D launch

https://ift.tt/oM0V8jO

Intel’s Core i9-12900KS processor is already in a customer’s hands, according to a new report – so the assumption is that it’s set to launch very soon.

Tom’s Hardware spotted that French tech site Overclocking.com has received images of a purchased 12900KS, sent to them by a reader, Daginatsuko. The pics look authentic – though we must be cautious around any such leak, of course – and show the box, the chip itself, and the golden wafer (which also accompanies the 12900K).

The box is labeled ‘Special Edition’ indicating that this is the ‘KS’ variant, which is a higher-binned (slightly better performing) 12900K, and it’s a darker blue color in comparison to the box of the existing Alder Lake flagship processor.

The owner of the purported 12900KS also provides a screenshot of overclocking the CPU (using the Asus tuning utility, in a PC with a ROG Strix Z690-F motherboard), and this shows an all-core boost of 5.2GHz and a maximum single-core boost of 5.5GHz (matching what we saw with Intel’s reveal of this incoming CPU at CES 2022, when the chip was demoed with Hitman 3).


Analysis: A pre-emptive strike in the battle against AMD?

This looks like a genuine leak, and really, it’s believable that the CPU is already out there, given that we’ve seen retailers jump the gun and ‘accidentally’ (or mistakenly) sell hardware just before the official on-sale date – and that we know Intel’s Core i9-12900KS is imminent anyway (it’s expected before the end of March, so within the next two weeks). Indeed, we’ve already seen leaked online retailer listings for this processor, too.

Furthermore, it makes sense that Intel would want this revamped Alder Lake flagship out there pronto, given that we now know AMD is unleashing its new Ryzen 7 5800X3D on April 20. During Team Red’s initial reveal, which happened yesterday, the new 3D V-cache CPU was shown to outperform the 12900K across a range of games in 1080p – although AMD just made a broad assertion that its chip was faster, rather than providing any nitty-gritty details in terms of actual benchmarks and frame rates.

Intel will want ammunition to fire back at the Ryzen 7 5800X3D, naturally, and if it does emerge this month, the Core i9-12900KS will effectively be a pre-emptive strike. However, exactly how effective the new Alder Lake top dog will prove rather depends on pricing.

The thing is that the 12900K is already quite a lot more expensive than the 5800X3D – at least comparing recommended pricing, Intel’s CPU is around 30% dearer (going by US price tags).

That’s an appreciable chunk, and of course, the ‘KS’ will be a fair bit more costly than that, no doubt. It’s a special edition after all, and while the 9900KS – the last such effort from Intel – didn’t actually raise the bar for pricing all that much, it still notched things up a little, and in these days of silicon shortages, we can’t imagine that Team Blue won’t be attaching a bit more of a premium.

The upshot being that in theory, the 5800X3D could hold its ground very nicely against the 12900KS in terms of its value proposition. But there are other considerations here, most obviously the availability of the Ryzen 7 5800X3D, and how AMD might cope with demand if it does turn out to be a favorable buy as predicted – and the specters of scalping and price inflation, naturally.

As ever, it’s a case of waiting and seeing how this upcoming high-end battle unfolds in the real world when the respective CPUs hit the shelves (or indeed, disappear off the shelves in the blink of an eye). However, it does look like Intel may beat AMD to the punch by some distance in terms of launch timing if this early sighting of the 12900KS in the wild does indeed indicate that the Alder Lake special edition chip is about to be released.



from TechRadar: computing components news https://ift.tt/BWZQrEG
via IFTTT

Tuesday 15 March 2022

AMD strikes back against Alder Lake with Ryzen 7 5800X3D

https://ift.tt/tV9mwsp

AMD has revealed a whole bunch of new processors to launch throughout April headed up by the long-awaited Ryzen 7 5800X3D, which comes alongside fresh models in the 5000 series and some from the 4000 range too.

The 5800X3D is the first processor to employ AMD’s 3D V-cache tech – hence the ‘3D’ in the model name – and it’s the big hitter going up against Intel’s Alder Lake flagship, launching on April 20 priced at $449 (around £340, AU$625).

AMD boasts that it’s a cutting-edge 8-core (16-thread) CPU which offers 15% better gaming performance than the current Ryzen champ, the 5900X.

Team Red further benchmarked the 5800X3D across a selection of six games running at Full HD resolution (with high graphics settings), and said that the CPU is faster than Intel’s Core i9-12900K (without providing further details). That’s when both are paired with an RTX 3080, and roughly equivalent components elsewhere (though there was actually more system RAM in the Intel rig).

The other processors launched will be out from April 4, and include the Ryzen 7 5700X, arriving with 8 cores and 16 threads, plus boost up to 4.6GHz and a base clock of 3.4GHz. It’ll retail for $299 (around £230, AU$415).

That’s backed up by the Ryzen 5 5600, a 6-core (12-thread) part with base and boost clocks of 3.5GHz and 4.4GHz respectively, priced at $199 (around £150, AU$276), coming along with the Ryzen 5 5500 which has the same core and thread count, but clocked at 3.6GHz and 4.2GHz. The latter processor also drops the cache from 35MB to 19MB, and the price tag down to $159 (around £121, AU$220).

Those are the launches from the Ryzen 5000 family, but AMD has also floated a trio of new Ryzen 4000 models – the 4600G, 4500 and 4100, which are Zen 2 rather than contemporary Zen 3 processors.

The Ryzen 5 4600G has 6-cores (12-threads) and is clocked at base and boost speeds of 3.7GHz and 4.2GHz, and the Ryzen 5 4500 offers the same core configuration but clocked at 3.6GHz and 4.1GHz. Pricing is $154 (around £118, AU$214) and $129 (around £99, AU$179) respectively.

Bringing up the rear with a $99 price tag (around £76, AU$137) is the Ryzen 3 4100, a quad-core (8-thread) CPU clocked at 3.8GHz with boost to 4GHz. All of these processors have a TDP of 65W, and all of them are bundled with a Wraith Stealth cooler (except for the 5700X which doesn’t have the cooler).


Analysis: Attacking Intel on two fronts - flagship and budget

The emergence of these multiple Ryzen chips is something the rumor mill was spot-on about, and as we commented when this was just speculation, it represents a major salvo of AMD processors being fired at Intel.

The Ryzen 7 5800X3D is an obvious point of excitement, targeting Intel’s top dog Core i9-12900K, and apparently outdoing Team Blue’s CPU, at least in being the ‘fastest 1080p gaming’ processor – while undercutting it by a decent chunk price-wise (Intel’s flagship officially retails from $589, which is £450 / AU$815).

As noted above, AMD didn’t provide any further details on how much faster the 5800X3D is, and as we know with official benchmarking for big reveals like this, the tests picked are bound to be ones that show off the silicon in its best light.

So, with those obvious caveats, we’re looking forward to testing the power of 3D V-cache for real ourselves in the near future, and we won’t know the real score of how the 5800X3D fits into the current CPU landscape until that happens.

Also, bear in mind that Intel isn’t standing still either, and has the 12900KS – a supercharged version of the Alder Lake flagship – in the pipeline and supposedly imminent, or that was the last we heard from the grapevine. But then again, the price of that processor will be even higher than the 12900K, so there’ll be a greater disparity on the value front between the 12900KS and 5800X3D (with gaming performance differences not likely to stack up to that gap).

Away from the high-end, the peppering of more wallet-friendly CPUs from AMD is equally, if not more welcome. The budget end of the market has been neglected by AMD in recent times, so to see fresh options around $150 – like the Ryzen 5 5500 – and moreover models dipping down to the $100 mark – with the Ryzen 4500 and 4100 – is going to be a cause for celebration for those looking at budget PC builds.

Intel has some very compelling cheaper Alder Lake chips in these price brackets, so it’s great to see some competition there – assuming, and that’s the case for all these launches, that AMD can come good on the stock front (and demand doesn’t end up inflating the wallet-friendly pricing).

With so many new chips coming out, and the component shortage still very much being felt, the other major point of interest here will be how much supply Team Red can crank out in these early days. We shall see…



from TechRadar: computing components news https://ift.tt/flkKw0W
via IFTTT

Friday 11 March 2022

Best AMD Motherboards: March 2022

https://ift.tt/y8nx1kv

As we continue the journey through 2022, we find that AMD's current AM4 platform is in the middle of its stride. On top of AMD's current stack of Zen 3 processors, AMD CEO Lisa Su confirmed during CES 2022 that AMD's next Zen 3 desktop CPU, the V-Cache-equipped Ryzen 7 5800X3D, will launch this spring. So with a new chip to come and AMD's existing Zen 3-based offerings remaining highly competitive today, there remains a sizable market for AM4 motherboards.

With motherboards available at varying levels, from the more affordable B450 chipset to the flagship X570 chipset and the latest X570S models, there's something available for users on all kinds of budgets. Here are our AMD-based selections for March 2022 in our latest motherboard buyers guide.



from AnandTech https://ift.tt/hA2L1jT
via IFTTT

Thursday 10 March 2022

Apple M1 Ultra solves a multi-GPU problem that’s been plaguing AMD and Nvidia

https://ift.tt/K1wOIGl

It looks like Apple may have solved a multi-GPU problem that has been plaguing AMD and Nvidia for years, thanks to the UltraFusion interface it unveiled earlier this week, which allows two M1 Max chips to connect together to make a single M1 Ultra chip.

When Apple announced the M1 Max last year, it rather cunningly didn’t reveal that there was a hidden secret built into the chip. This secret was revealed earlier this week at Apple’s March Event, where the company revealed that it can combine two M1 Max chips, turning them into an M1 Ultra with double the power – including twice as many GPU cores.

Of course, Apple isn’t the first company to harness the power of two GPUs. For many years, Nvidia graphics cards supported SLI (Scalable Link Interface) – a high-speed link that allowed you to connect multiple GPUs at once for increased performance. AMD had a similar tech with CrossFire that did the same.

(Image: close-up of an SLI bridge connector – credit: KenSoftTH / Shutterstock)

Cracking the problem

The problem with SLI and CrossFire was that they weren’t good enough to multiply the power of the PC by the number of GPUs installed. So, if you had two Nvidia GPUs in SLI, you didn’t get twice the performance.

The best-case scenario would be a 90% improvement, but in many cases it was more like a 50% improvement – and that scaling diminished the more GPUs you added. The return on investment just wasn’t worth it for many people, then, especially if you paid for three GPUs but got around the power of 2.5 GPUs instead.
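As a toy model of that diminishing-returns scaling – purely illustrative, not how SLI or CrossFire actually distributed work – imagine each additional GPU contributing less than the one before:

```python
# Illustrative only: each extra GPU adds a shrinking fraction of a full GPU.
def effective_gpus(n: int, first_extra: float = 0.9, decay: float = 0.7) -> float:
    total, gain = 1.0, first_extra
    for _ in range(n - 1):
        total += gain   # the next card contributes 'gain' of a full GPU
        gain *= decay   # and the one after that contributes even less
    return total

print(effective_gpus(2))  # 1.9  -- the 90% best case for two cards
print(effective_gpus(3))  # 2.53 -- close to the '~2.5 GPUs from 3' example above
```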

Both SLI and CrossFire also added overheads to the PC, especially with the CPU, which impacted performance. There was then also an increase in power consumption, and therefore running costs and cooling considerations, and the amount of space needed inside a PC chassis and motherboard to accommodate multiple GPUs.

Finally, there was also an issue with applications, and especially games, offering poor – or non-existent – support for SLI and CrossFire. In some cases, a game would only use a single GPU, no matter how many you had installed.

So, it’s perhaps little surprise that CrossFire and SLI ended up being extremely niche features, and you won’t hear either AMD or Nvidia talk about them. In fact, Nvidia essentially killed off SLI a few years ago in favor of NVLink for its RTX cards. However, the price, and the fact that it’s hard enough trying to buy one Nvidia GPU let alone multiple GPUs, have meant this has remained unloved.

(Image: Apple March Event 2022 – credit: Apple)

With the M1 Ultra, though, Apple appears to have addressed many of these issues. For a start, the UltraFusion connection has been designed to offer extremely low latency (essentially a delay in the transfer of data, and the lower the better) between the two M1 Max chips – with a bandwidth of 2.5TB/s. This is incredibly fast, and much higher than SLI or CrossFire ever offered.
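To put that 2.5TB/s in context – the comparison is ours, not Apple's – a PCIe 4.0 x16 slot, the link a typical discrete GPU hangs off, manages roughly 32GB/s in each direction:

```python
# Apple's quoted UltraFusion bandwidth vs a PCIe 4.0 x16 link (approximate).
ultrafusion_gb_s = 2500  # 2.5 TB/s
pcie4_x16_gb_s = 32      # ~32 GB/s per direction
print(ultrafusion_gb_s / pcie4_x16_gb_s)  # ~78x
```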

The speed of UltraFusion brings numerous benefits, the most obvious being when it comes to performance, which is why Apple feels confident in claiming the M1 Ultra will offer twice the performance of the M1 Max.

By turning two M1 Max chips into a single M1 Ultra chip, Apple has also avoided the performance overheads that multi-GPU setups usually encounter. While the M1 Ultra is now the largest chip Apple has ever made, it’s still far smaller than multiple physical discrete graphics cards, allowing it to be used inside the compact Mac Studio, which was also announced at Tuesday’s event.

The M1 family of chips has also been acclaimed for its performance per watt, being much more power efficient than rival chips. This has allowed MacBooks with M1 chips to have longer battery lives, while keeping cool even when working hard, and the M1 Ultra continues this. According to Apple, the M1 Ultra’s GPU offers better performance than Nvidia’s mighty RTX 3090, while consuming 200W less power.

(Image: Apple March Event 2022 – credit: Apple)

That’s certainly a bold statement, but in these days of rising energy costs, the power efficiency of our components is going to be an important consideration. Rumors suggest that Nvidia’s next generation of GPUs, such as the RTX 4080, will be even more power-hungry, which could make Apple’s approach look far more appealing.

Finally, Apple’s approach looks like it will pay dividends when it comes to software support as well. Applications will see the M1 Ultra as a single chip, meaning that no extra coding is required – support will be out of the box, allowing applications to take advantage of the extra power. This is a key difference that could prove to be a game-changer.

Speaking of which, while Macs aren’t traditionally thought of as games machines, games should also see the M1 Ultra as a single chip, allowing them to use the extra power. With the M1 Ultra apparently out-performing the RTX 3090, could Apple have just made a fantastic gaming GPU?

We’ll have to wait to test Apple’s claims ourselves, but it certainly sounds promising. Apple’s wins here could also spur Nvidia and AMD to re-evaluate their multi-GPU technology, which could lead to increasingly impressive performance for both gaming and professional use. We can’t wait.



from TechRadar: computing components news https://ift.tt/BL9Guzn
via IFTTT

Wednesday 9 March 2022

Interview with Intel’s Raja Koduri: Zettascale or ZettaFLOP? Metaverse what?

https://ift.tt/u0s9Mqi

We currently live in a sea of buzzwords. Whether it’s something to catch the eye when scrolling through our news feed, or a company wanting to latch their product onto the word of the day, the quintessential buzzword gets lodged in your brain and it’s hard to get out. Two that have broken through the barn doors in the technology community lately are ‘Zettascale’ and ‘Metaverse’. Cue a collective groan while we wait for them to stop being buzzwords and turn into something tangible. That’s my goal today while speaking to Raja Koduri, Intel’s SVP and GM of Accelerated Computing.

What makes buzzwords like Zettascale and Metaverse so egregious right now is that they refer to one of our potential futures. To break it down: Zettascale is about delivering 1000x today’s level of compute by the latter half of the decade, to meet the heavy demand for computational resources from both consumers and businesses, and especially machine learning; Metaverse is something about more immersive experiences and leveling up the future of interaction, but is about as well defined as a PHP variable.
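
For scale, the prefixes step by factors of 1,000 – which is where that ‘1000x’ comes from. A quick illustration:

```swift
// Supercomputing scales, in floating-point operations per second.
let exaFLOPs: Double   = 1e18  // exascale: the milestone today's fastest systems are reaching
let zettaFLOPs: Double = 1e21  // zettascale: Intel's target for the latter half of the decade

print(zettaFLOPs / exaFLOPs)   // 1000.0 – hence "1000x the current level of compute"
```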

The main element uniting the two is computer hardware, coupled with computer software. That’s why I reached out to Intel to ask for an interview with Raja Koduri, SVP and GM, whose role is to manage both angles as the company steers towards a Zettascale future and a Metaverse experience. One of the goals of this interview was to cut through the miasma of marketing fluff and understand exactly what Intel means by these two phrases, and whether they’re relevant enough to the company to be built into its future roadmaps (to no-one’s surprise, they are – but we’re finding out how).



from AnandTech https://ift.tt/Akunrw3
via IFTTT

Apple M1 Ultra reveals a secret even the leakers didn't find

https://ift.tt/BLU78xn

How did Apple do it? How did the most closely-watched tech company in the world ship a product almost six months ago and hide a crucial technical detail in plain sight?

When Apple unveiled on Tuesday what may be the last in its M1-class Apple Silicon line, the M1 Ultra, Apple SVP of Hardware Technologies Johny Srouji said it was based on the M1 Max. More specifically, he said Apple's most powerful SoC, the 57-billion-transistor M1 Max, "has a secret, a hidden feature we haven’t talked about until now, a ground-breaking interconnect technology.” It's something called a silicon interposer that makes the M1 Ultra possible.

Wait a minute, I thought – an in-market product hid a feature? This is different from Apple saying, "We took a look at the M1 Max and realized we could repurpose the interconnect to build a new M1 chip."

No, the silicon interposer was part of the original M1 Max. It was built in anticipation of this next step and nobody noticed (not even the teardown experts). Obviously, if Apple never lit up this portion of the SoC, it was probably easy to miss, but the feature is more than just a plug that lets Apple daisy-chain a couple of M1 Max processors.

As Apple described the interconnect on Tuesday, it lets two dies, really two M1 Max chips, operate as one. It calls the process "UltraFusion."

To be clear, we're not talking about two redesigned M1 Max chips. These are original M1 Max chips – the hardware is the same, and so is the code macOS uses to work with the new chip.

An Avatar connection

The M1 Max was built, for lack of a better word, with a port that would let it connect directly to another M1 Max. This is less a wall-outlet-and-plug situation and more like the connection characters in the movie Avatar make with direhorses (neural ponytail to horse neural tail). In the film, they called that symbiotic, neural connection "Tsaheylu"; it made a giant blue person and a horse-like thing into one. That's the interposer connection: it makes two M1 Max chips one, and allows the system that runs on it (the Mac Studio) and the applications it's driving to see those two dies as a single entity.

It sounds brilliant, but it may also be partly business savvy. We're currently in the middle of a chip shortage, and what better way to spin out a new chip than by using two you already have in hand? But I'd like to give Apple a little more credit than that.

If Apple built the interconnect into a chip it likely designed 12 to 18 months before launching it in October 2021, it knew nothing of future supply shortages – it simply wanted to build its most powerful chip ever in the most efficient way possible.

The big picture

That kind of planning is evidence of a larger silicon master plan, one I think I now better understand.

First, there's the rebranding of every piece of silicon Apple touches, from the A-series to the M1 class, as Apple Silicon. This is a repositioning of both the mobile chips and the laptop/desktop system-class CPUs. When Apple launched the M1 in late 2020, it was part of something new: Apple Silicon. When Apple launched the A4 in 2010, it was the start of Apple's bespoke silicon – chips built by Samsung, but designed by Apple.

Now all of Apple's SoCs, from the A13 Bionic to the M1 Ultra, are called Apple Silicon. It's more than lip service: these chips are now clearly designed to work together (the A13 Bionic in the Studio Display, the M1 Ultra in the Mac Studio), which suggests this is the fruition of an even longer-term plan.

The go-forward part is what remains: the powerful Mac Pro, which still runs an Intel Xeon processor. As it stands, the M1 Ultra-powered Mac Studio offers more raw power, but lacks the upgradeability of the Mac Pro. The reason the Mac Pro didn't get one or two M1 Ultras is clear to me: Apple is building the next generation of Apple Silicon, which will probably not be two or more M1 Max chips slammed together. It'll be something even more powerful, perhaps sitting at the top of an all-new M2 series.

Maybe.

Whatever Apple does, you have to give it credit. It kept a secret and triggered the next stage of its already impressive Apple Silicon strategy. All I can think is... what else is Apple hiding?



from TechRadar: computing components news https://ift.tt/124qOYM
via IFTTT

Tuesday 8 March 2022

The Apple "Peek Performance" Event Live Blog (Starts at 10am PT/18:00 UTC)

https://ift.tt/L4dDPvF

Join us a bit later today for Apple's spring product launch event, which for this year is being called "Peek Performance".

The presentation kicks off at 10am Pacific (18:00 UTC) and should be packed with a barrage of Apple product announcements. In previous years these events have covered new Macs, iPads, and even iPhones, and this year should be much the same. It will be interesting to see what Apple has in store, especially as the company continues its multi-year transition of the Mac from x86 CPUs to its own Arm-based Apple Silicon chips.

Join us at 10am PT for more details!



from AnandTech https://ift.tt/k0iNM2p
via IFTTT

AMD Announces Ryzen Threadripper Pro 5000 WX-Series: Zen 3 For OEM Workstations

https://ift.tt/i4aTMyI

In 2020, AMD released a new series of workstation-focused processors under its Threadripper umbrella, aptly named the Threadripper Pro series. These chips were essentially true workstation versions of AMD's EPYC server processors, offering the same massive core counts and high memory bandwidth as AMD's high-performance server platform. By introducing Threadripper Pro, AMD carved out an explicit processor family for high-performance workstations, a task that was previously awkwardly juggled by the older Threadripper and EPYC processors.

Now, just under two years since the release of the original Threadripper Pro 3000 series, AMD is upgrading that lineup with the announcement of the new Threadripper Pro 5000 series. Based on AMD's Zen 3 architecture, the newest Threadripper Pro chips are designed to up the ante once more in terms of performance, taking advantage of Zen 3's higher IPC as well as higher clockspeeds. Altogether, AMD is releasing five new SKUs, ranging from 12c/24t to 64c/128t, which, combined with support for 8 channels of DDR4 across the entire lineup, will offer a mix of chips for both CPU-hungry and bandwidth-hungry compute tasks.
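
As a quick back-of-the-envelope check, 8 channels of DDR4-3200 work out to just over 200GB/s of theoretical peak bandwidth:

```swift
// Peak theoretical memory bandwidth for the 8-channel DDR4-3200 setup
// supported across the Threadripper Pro 5000 lineup.
let channels         = 8.0
let transfersPerSec  = 3_200.0  // DDR4-3200: 3,200 mega-transfers per second
let bytesPerTransfer = 8.0      // each 64-bit channel moves 8 bytes per transfer

let peakGBs = channels * transfersPerSec * bytesPerTransfer / 1_000.0
print("\(peakGBs) GB/s")        // 204.8 GB/s
```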



from AnandTech https://ift.tt/shtkZbI
via IFTTT

Saturday 5 March 2022

AMD could unleash a trio of new Ryzen CPUs to take on Alder Lake

https://ift.tt/zfmFOpt

AMD could have a clutch of new processors debuting later this month alongside the Ryzen 7 5800X3D, which is rumored to go on sale late in March.

This comes from the Chiphell forum via well-known leaker HXL on Twitter, who claims that AMD is set to produce a Ryzen 7 5700X model, as well as Ryzen 5 5600 and 5500 CPUs.


Wccftech, which spotted the tweet, claims its sources are saying the same thing: that we can expect this trio of fresh processors at some point in March.

The theory is that the Ryzen 7 5700X is set to run with 8 cores (16 threads) and a 65W TDP, and could square off against Intel’s Core i5-12600K, with similar pricing pitched at around the $299 mark in the US (about £225, AU$405), or maybe a bit less. Supposedly the box will include the Wraith Stealth cooler.

Both the Ryzen 5 5600 and 5500 are rumored to be 6-core CPUs with a 65W TDP; the former will have SMT (Simultaneous Multithreading, for 6 cores and 12 threads), whereas the latter may not (meaning it’d just be a straight 6-core, 6-thread chip).

Wccftech says it isn’t sure whether the 5500 will dispense with SMT, but it would make sense, and elsewhere we’ve seen chatter (via VideoCardz) suggesting this will indeed be the case.
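
For what SMT means in thread-count terms, a trivial sketch (the core counts here are the rumored ones, not confirmed specs):

```swift
// SMT lets each core run two hardware threads; without it, one per core.
struct RumoredCPU {
    let cores: Int
    let smt: Bool
    var threads: Int { cores * (smt ? 2 : 1) }
}

let ryzen5600 = RumoredCPU(cores: 6, smt: true)   // rumored: 6C/12T
let ryzen5500 = RumoredCPU(cores: 6, smt: false)  // rumored: 6C/6T
print(ryzen5600.threads, ryzen5500.threads)       // 12 6
```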

As ever with the rumor mill, take all of the above with a large helping of skepticism.


Analysis: Holding the fort against Alder Lake

Unleashing a few new Ryzen 5000 models to take on Intel’s Core i5 and i3 processors makes sense in terms of AMD holding the fort against Alder Lake until Zen 4 arrives, shoring things up below the top end, where the Ryzen 7 5800X3D is set to enter the fray.

The obvious presumption is that these chips must be competitively priced against Alder Lake, as Wccftech theorizes. If the Ryzen 7 5700X does come in at $299 (about £225, AU$405) as mentioned above, though, that would no doubt require adjusting the price of the 5600X, which currently carries the same MSRP – though interestingly, we note that at Newegg in the US it has now dropped to $269 (about £205, AU$365). Some rejigging of pricing throughout the Ryzen range may be required, of course, with all these new entrants inbound – if they happen.

The other point to bear in mind here is that even if these three rumored Ryzen processors do come to fruition alongside the 5800X3D, some models might be OEM-only, meaning that you won’t be able to buy them directly – they’ll only be included with prebuilt PCs. As ever, we’ll have to wait and see, but we shouldn’t have to wait long as these chips should, in theory, be out in a few weeks.



from TechRadar: computing components news https://ift.tt/dpKMXA4
via IFTTT

Thursday 3 March 2022

ASRock Industrial's NUC1200 BOX Series Brings Alder Lake to UCFF Systems

https://ift.tt/nQ1aXdp

Intel recently updated its low-power processor lineup with the Alder Lake U-series and P-series 12th Gen Core mobile SKUs. With support for a range of TDPs up to 28W, these allow ultra-compact form-factor (UCFF) PC manufacturers to update their traditional NUC clones. As with the Tiger Lake generation, ASRock Industrial is again at the forefront, launching the NUC1200 BOX Series within a few days of Intel's announcement.

The new NUC1200 BOX Series retains the chassis design and form factor of the NUC1100 BOX Series. The NUC BOX-1165G7 left a favorable impression in our hands-on review, and the NUC1200 BOX Series appears to carry over all those strengths. The company is launching three models in this series: the NUC BOX-1260P, NUC BOX-1240P, and NUC BOX-1220P. The specifications are summarized below.

ASRock Industrial NUC 1200 BOX (Alder Lake-P) Lineup

NUC BOX-1260P
CPU: Intel Core i7-1260P – 4P + 8E cores / 16 threads; P-cores 2.1 – 4.7 GHz, E-cores 1.5 – 3.4 GHz; 20 – 64W (28W base)
GPU: Intel Iris Xe Graphics (96 EU) @ 1.4 GHz

NUC BOX-1240P
CPU: Intel Core i5-1240P – 4P + 8E cores / 16 threads; P-cores 1.7 – 4.4 GHz, E-cores 1.2 – 3.3 GHz; 20 – 64W (28W base)
GPU: Intel Iris Xe Graphics (80 EU) @ 1.3 GHz

NUC BOX-1220P
CPU: Intel Core i3-1220P – 2P + 8E cores / 12 threads; P-cores 1.5 – 4.4 GHz, E-cores 1.1 – 3.3 GHz; 20 – 64W (28W base)
GPU: Intel UHD Graphics for 12th Gen Intel Processors (64 EU) @ 1.1 GHz

Common to all three models:
DRAM: Two DDR4 SO-DIMM slots, up to 64 GB of DDR4-3200 in dual-channel mode
Motherboard: 4.02" x 4.09" UCFF
Storage: 1 × M.2-22(42/60/80) slot (PCIe 4.0 x4, CPU-direct), plus 1 × SATA III port (for a 2.5" drive)
Wireless: Intel Wi-Fi 6E AX211 – 2x2 802.11ax Wi-Fi (2.4Gbps) + Bluetooth 5.2 module
Ethernet: 2 × 2.5GbE ports (Intel I225-LM)
USB (front): 1 × USB 3.2 Gen 2 Type-A, 2 × USB 3.2 Gen 2 Type-C (USB4 certification pending)
USB (rear): 2 × USB 3.2 Gen 2 Type-A
Display outputs: 1 × HDMI 2.0b, 1 × DisplayPort 1.4a, 2 × DisplayPort 1.4a (via the front-panel Type-C ports)
Audio: 1 × 3.5mm audio jack (Realtek ALC233)
PSU: External (19V/90W)
Dimensions: 117.5 mm (L) × 110 mm (W) × 47.85 mm (H)
MSRP: To be announced

According to the product datasheets, ASRock Industrial plans to get the two front-panel Type-C ports certified for USB4. Since that certification is still pending, they are being advertised as USB 3.2 Gen 2 for now. Going by our experience with the NUC BOX-1165G7, at least one of the front Type-C ports should be able to support Thunderbolt 4 peripherals.

The key updates over the NUC1100 BOX Series are the integration of dual 2.5GbE ports (compared to one 1GbE and one 2.5GbE port previously), the addition of 6GHz Wi-Fi support, and the move to Alder Lake processors with their hybrid architecture comprising both performance and efficiency cores.

Since the units also target the embedded market, they come with the usual bells and whistles, including an integrated watchdog timer and an on-board TPM. Pricing is slated to be announced in the coming months.



from AnandTech https://ift.tt/sUOxL8o
via IFTTT