Wednesday, 24 January 2024

MLCommons To Develop PC Client Version of MLPerf AI Benchmark Suite

https://ift.tt/IhJmFBa

MLCommons, the consortium behind the MLPerf family of machine learning benchmarks, is announcing this morning that the organization will be developing a new desktop AI benchmarking suite under the MLPerf banner. Helmed by the body’s newly-formed MLPerf Client working group, the effort will produce a client AI benchmark suite aimed at traditional desktop PCs, workstations, and laptops. According to the consortium, the first iteration of the MLPerf Client benchmark suite will be based on Meta’s Llama 2 LLM, with an initial focus on assembling a benchmark suite for Windows.

MLPerf is the de facto industry standard benchmark for AI inference and training on servers and HPC systems, and MLCommons has slowly been extending the family of benchmarks to additional devices over the past several years. This has included assembling benchmarks for mobile devices, and even low-power edge devices. Now, the consortium is setting about covering the “missing middle” of their family of benchmarks with an MLPerf suite designed for PCs and workstations. And while this is far from the group’s first benchmark, it is in some respects their most ambitious effort to date.

The aim of the new MLPerf Client working group will be to develop a benchmark suitable for client PCs – which is to say, a benchmark that is not only sized appropriately for the devices, but also reflects real-world client AI workloads in order to provide useful and meaningful results. Given the cooperative, consensus-based nature of the consortium’s development structure, today’s announcement comes fairly early in the process, as the group is just now getting started on developing the MLPerf Client benchmark. As a result, there are still a number of technical details about the final benchmark suite that need to be hammered out over the coming months, but to kick things off the group has already narrowed down some of the technical aspects of their upcoming benchmark suite.

Perhaps most critically, the working group has already settled on basing the initial version of the MLPerf Client benchmark around the Llama 2 large language model, which is already used in other versions of the MLPerf suite. Specifically, the group is eyeing the 7 billion parameter version of that model (Llama-2-7B), as that’s believed to be the most appropriate size and complexity for client PCs (at INT8 precision, the 7B model would require roughly 7GB of RAM). Past that, however, the group still needs to determine the specifics of the benchmark – most importantly, the tasks the LLM will be benchmarked on.
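For a rough sense of where that figure comes from, an LLM’s weight footprint scales almost linearly with parameter count and bytes per weight. The back-of-the-envelope sketch below is our own illustration of that arithmetic, not MLCommons’ sizing methodology:

```python
# Rough LLM weight-memory estimate: parameter count x bytes per weight.
# Our own illustration, not MLCommons' sizing methodology.

PARAMS = 7_000_000_000  # Llama-2-7B

BYTES_PER_WEIGHT = {
    "FP16": 2.0,
    "INT8": 1.0,
    "INT4": 0.5,
}

for precision, nbytes in BYTES_PER_WEIGHT.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision}: ~{gib:.1f} GiB for the weights alone")

# INT8 works out to roughly 6.5 GiB of weights; add activations, the KV
# cache, and framework overhead, and you land near the ~7 GB figure above.
```

The same math also illustrates why anything much larger than a 7B-class model quickly becomes impractical on the memory budgets of typical client PCs.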

With the aim of getting it on PCs of all shapes and sizes, from laptops to workstations, the MLPerf Client working group is going straight for mass market adoption by targeting Windows first – a far cry from the *nix-focused benchmarks they’re best known for. To be sure, the group does plan to bring MLPerf Client to additional platforms over time, but their first target is to hit the bulk of the PC market where Windows reigns supreme.

In fact, the focus on client computing is arguably the most ambitious part of the project for a group that already has ample experience with machine learning workloads. Thus far, the other versions of MLPerf have been aimed at device manufacturers, data scientists, and the like – which is to say they’ve been barebones benchmarks. Even the mobile version of the MLPerf benchmark isn’t very accessible to end-users, as it’s distributed as a source-code release intended to be compiled on the target system. The MLPerf Client benchmark for PCs, on the other hand, will be a true client benchmark, distributed as a compiled application with a user-friendly front-end. This means the MLPerf Client working group is tasked with not only figuring out what the most representative ML workloads will be for a client, but also how to tie that together into a useful graphical benchmark.

Meanwhile, although many of the finer technical points of the MLPerf Client benchmark suite remain to be sorted out, talking to MLCommons representatives, it sounds like the group has a clear direction in mind on the APIs and runtimes that they want the benchmark to run on: all of them. With Windows offering its own machine learning APIs (WinML and DirectML), and most hardware vendors offering their own optimized platforms on top of that (CUDA, OpenVINO, etc.), there are numerous possible execution backends for MLPerf Client to target. And, keeping in line with the laissez faire nature of the other MLPerf benchmarks, the expectation is that MLPerf Client will support a full gamut of common and vendor-proprietary backends.

In practice, then, this would be very similar to how other desktop client AI benchmarks work today, such as UL’s Procyon AI benchmark suite, which allows for plugging into multiple execution backends. The use of different backends does take away a bit from true apples-to-apples testing (though it would always be possible to force fallback to a common API like DirectML), but it gives the hardware vendors room to optimize the execution of the model to their hardware. MLPerf takes the same approach with their other benchmarks right now, essentially giving hardware vendors free rein to come up with new optimizations – including reduced precision and quantization – so long as they don’t fail to meet the benchmark’s overall accuracy requirements.
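MLPerf Client’s actual harness hasn’t been published yet, so as a purely illustrative sketch of the “one workload, many backends” pattern, here is what backend selection can look like with ONNX Runtime’s execution providers – an assumption on our part, not a stack the working group has confirmed:

```python
# Illustrative sketch only: MLPerf Client's real harness and APIs are not
# yet public. This shows the general "one model, many backends" pattern
# using ONNX Runtime execution providers; the model path is hypothetical.
import onnxruntime as ort

MODEL_PATH = "llama2-7b-int8.onnx"  # hypothetical placeholder file

# Preference order: vendor-optimized backends first, DirectML as the
# common Windows fallback, CPU as the lowest common denominator.
preferred = [
    "CUDAExecutionProvider",      # NVIDIA GPUs
    "OpenVINOExecutionProvider",  # Intel CPUs/GPUs/NPUs
    "DmlExecutionProvider",       # DirectML (any DX12-capable GPU on Windows)
    "CPUExecutionProvider",
]

# Keep only the providers actually present in this ONNX Runtime build.
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession(MODEL_PATH, providers=providers)
print("Inference would execute on:", session.get_providers()[0])
```

Pinning the list to just "DmlExecutionProvider" would be the equivalent of the forced common-API fallback mentioned above, trading vendor-specific optimizations for a more apples-to-apples comparison.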

Even the type of hardware used to execute the benchmark is open to change: while the benchmark is clearly aimed at leveraging the new field of NPUs, vendors are also free to run it on GPUs and CPUs as they see fit. So MLPerf Client will not exclusively be an NPU or GPU benchmark.

Otherwise, keeping everyone on equal footing, the working group itself is a who’s who of hardware and software vendors. The list includes not only Intel, AMD, and NVIDIA, but Arm, Qualcomm, Microsoft, Dell, and others. So there is buy-in from all of the major industry players (at least in the Windows space), which has been critical for driving the acceptance of MLPerf for servers, and will similarly be needed to drive acceptance of MLPerf client.

The MLPerf Client benchmark itself is still quite some time from release, but once it’s out, it will be joining the current front-runners of UL’s Procyon AI benchmark and Primate Labs’ Geekbench ML, both of which already offer Windows client AI benchmarks. And while benchmark development is not necessarily a competitive field, MLCommons is hoping that their open, collaborative approach will be something that sets them apart from existing benchmarks. The nature of the consortium means that every member gets a say (and a vote) on matters, which isn’t the case for proprietary benchmarks. But it also means the group needs a complete consensus in order to move forward.

Ultimately, the initial version of the MLPerf Client benchmark is being devised as more of a beginning than an end product in and of itself. Besides expanding the benchmark to additional platforms beyond Windows, the working group will also eventually be looking at additional workloads to add to the suite – and, presumably, adding more models beyond Llama 2. So while the group has a good deal of work ahead of them just to get the initial benchmark out, the plan is for MLPerf Client to be a long-lived, long-supported benchmark, just as the other MLPerf benchmarks are today.



from AnandTech https://ift.tt/tfjpi73
via IFTTT

Tuesday, 23 January 2024

Leaked AMD Ryzen 7 8700G and Ryzen 5 8600G APU benchmarks are substantially faster than Ryzen 7 5700G

https://ift.tt/u2wJLts

Geekbench results for both the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs have been leaked, showcasing their single-threaded and multi-threaded performance.

The AMD Ryzen 7 8700G tested is the highest-end Hawk Point APU, with eight cores, 16 threads, 16 MB of L3 cache, and 8 MB of L2 cache. It also has a base clock speed of 4.2 GHz, a boost clock speed of 5.1 GHz, and a TDP of 65W. According to the report from Wccftech, it scored 2,720 in the single-core test and 14,326 in the multi-core test.

Meanwhile, the AMD Ryzen 5 8600G is a six-core, 12-thread APU with 16 MB of L3 and 6 MB of L2 cache. It has a base clock speed of 4.3 GHz and a boost clock speed of 5.0 GHz, also with a 65W TDP. It scored 2,474 in the single-core test and 11,453 in the multi-core test.

Compared to the AMD Ryzen 7 5700G, the Ryzen 7 8700G shows a 64% boost in multi-core and a 37% boost in single-core performance. Compared to the same CPU, the Ryzen 5 8600G shows a 50% increase in multi-threaded and a 29% increase in single-threaded performance.
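Taking the leaked figures at face value, the two new APUs are also easy to compare against each other; the quick calculation below uses only the scores quoted above, and since these are single leaked runs the deltas should be treated as rough indicators:

```python
# Quick comparison of the two leaked Geekbench results quoted above.
# Single leaked runs, so treat the deltas as rough indicators only.

scores = {
    "Ryzen 7 8700G": {"single": 2720, "multi": 14326},
    "Ryzen 5 8600G": {"single": 2474, "multi": 11453},
}

for metric in ("single", "multi"):
    top = scores["Ryzen 7 8700G"][metric]
    low = scores["Ryzen 5 8600G"][metric]
    print(f"{metric}-core: 8700G leads the 8600G by {(top / low - 1) * 100:.0f}%")

# Roughly 10% in single-core and 25% in multi-core, about what two extra
# cores and a slightly higher boost clock would suggest.
```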

These are some truly impressive results from these APUs, especially considering their price points of $329 and $229, respectively – well-priced mid-range APUs that the market desperately needs.

AMD may be gunning for Apple 

It seems that AMD is looking to compete with Apple’s M3 silicon with its APU line – especially the upcoming AMD Strix Point Halo APU – and field some of the best processors on the market. For instance, the most powerful Strix Point Halo APU is rumored to have 16 cores and an RDNA 3.5 GPU with 40 Compute Units (CUs).

The only thing that might stop the Strix Point Halo APU in its tracks as competition for the M3 are rumors that production has been pushed back to 2025. The standard Strix Point chips should still be launching in 2024, however, and should still pose a serious challenge to Apple.

I say, bring on the extra competition, since as it continues we get to reap the benefits of what the tech giants sow.


from TechRadar: computing components news https://ift.tt/l8MuXAb
via IFTTT

AMD Strix Point Halo APU spotted – this could be the most powerful chip for thin-and-light laptops that beats out Apple

https://ift.tt/RJbDnXm

AMD’s incoming Strix Point Halo APUs – all-in-one chips with processors and integrated graphics that are expected to be seriously powerful – have just been spotted.

The rumor mill holds that Strix Point APUs will likely arrive later this year, and will be followed by top-end Strix Point Halo chips.

Excitingly, we’ve just had a sighting of Strix Point Halo for the first time, with the APUs mentioned in patches for ROCm, AMD’s open source software platform for GPU computation, as VideoCardz noticed after leaker Kepler pointed them out on X (formerly Twitter).

Strix Point Halo will use AMD’s next-gen Zen 5 processor cores, which are expected to be a solid step on from current Zen 4 chips, and the integrated graphics will be RDNA 3+ (also referred to as RDNA 3.5). AMD uses the RDNA 3 architecture for its current-gen GPUs, so RDNA 3.5 is a refresh of this. It’ll also have an XDNA 2-powered NPU for accelerating AI workloads, to boot.

The most powerful Strix Point Halo APU is rumored to have 16 cores and an RDNA 3.5 GPU with 40 Compute Units (CUs).


Analysis: A worry for Apple

To give you some idea of how powerful a CPU-plus-GPU solution we’re talking about here: this pairs 16 full Zen 5 cores on the processor side with integrated graphics that are, on paper, more powerful than AMD’s next-in-line GPU, the RX 7600 XT (which goes on sale tomorrow, in fact). That model has 32 CUs compared to 40 CUs for the flagship Strix Point Halo, and don’t forget the latter is on an improved refresh of the current graphics architecture, so it could be more than a little ahead of the 7600 XT.

If the rumors around the spec are right, Strix Point Halo really will be a mammoth step forward for AMD’s APUs, bringing in a chip to rival what Apple is doing with its M series of processors (and Apple is going great guns with them, of course).

One slight catch with Strix Point Halo is that while earlier rumors pegged it as having a 2024 launch, the latest chatter is that this has slipped to 2025. So in theory, we’ll get Strix Point this year, but not the Halo silicon until next year (hopefully early in 2025, mind).

This is certainly a chip that could worry Apple, then. Perhaps the main concern when throwing around all these juicy specs and forecasts of a storming high-performance AMD APU is the price tag that might be attached. A thin and premium gaming laptop powered by the flagship Strix Point Halo chip is likely to cost a pretty penny (or three hundred thousand of them).

Mind you, it’s not like MacBooks are cheap either (unless you spot a great deal). Strix Point Halo is certainly one to watch, though, in terms of the most powerful performers for APUs, even though Strix Point will be the more prevalent proposition with more mainstream pricing.

Strix Point Halo won’t just be about thin-and-light laptops, either – there are also small form-factor PCs to consider here, where dispensing with a separate GPU is obviously a major boon. The Halo silicon is another sign that the days of discrete GPUs might be numbered, at least at the lower-end of the market.


from TechRadar: computing components news https://ift.tt/hB5kI2w
via IFTTT

Monday, 22 January 2024

The Corsair A115 CPU Cooler Review: Massive Air Cooler Is Effective, But Expensive

https://ift.tt/SUlBCsq

With recent high-performance CPUs exhibiting increasingly demanding cooling requirements, we've seen a surge in releases of new dual-tower air cooler designs. Though not new by any means, dual-tower designs have taken on increased importance as air cooler designers work to keep up with the significant thermal loads generated by the latest processors. And even in systems that aren't running the very highest-end or hottest CPUs, designers have been looking for ways to improve on air cooling efficiency, if only to hold the line on noise levels while the average TDP of enthusiast-class processors continues to creep up. All of which has been giving dual-tower coolers a bigger presence within the market.

At this point many major air cooler vendors are offering at least one dual-tower cooler, and, underscoring this broader shift in air cooler design, they're being joined by the liquid-cooling focused Corsair. Best known within the PC cooling space for their expansive lineup of all-in-one (AIO) liquid PC CPU coolers, Corsair has enjoyed a massive amount of success with their AIO coolers. But perhaps as a result of this, the company has exhibited a notable reticence towards venturing into the air cooler segment, and it's been years since the company last introduced a new CPU air cooler. This absence is finally coming to an end, however, with the launch of a new dual-tower air cooler.

Our review today centers on Corsair's latest offering in the high-end CPU air cooler market, the A115. Designed to challenge established models like the Noctua NH-D15, the A115 is Corsair's effort to jump into the high-end air cooling market with both feet and a lot of bravado. The A115 boasts substantial dimensions to maximize its cooling efficiency, aiming not just to meet but to surpass the cooling requirements of the most demanding mainstream CPUs. This review will thoroughly examine the A115's performance characteristics and its competitive standing in the aftermarket cooling market.



from AnandTech https://ift.tt/AEdjXsw
via IFTTT

Wednesday, 17 January 2024

AMD Rolls Out Radeon RX 7900 XT Promo Pricing Opposite GeForce RTX 40 Super Launch

https://ift.tt/6H4RF5a

In response to the launch of NVIDIA's new GeForce RTX 40 Super video cards, AMD has announced that they are instituting new promotional pricing on a handful of their high-end video cards in order to keep pace with NVIDIA's new pricing.

Kicking off what AMD is terming a special "promotional pricing" program for the quarter, AMD has been working with its retail partners to bring down the price of the Radeon RX 7900 XT to $749 (or lower), roughly $50 below its street price at the start of the month. Better still, AMD's board partners have already reduced prices further than AMD's official program/projections, and we're seeing RX 7900 XTs drop to as low as $710 in the U.S., making for a $90 drop from where prices stood a few weeks ago.

Meanwhile, AMD is also technically bringing down prices on the China and OEM-only Radeon RX 7900 GRE as well. Though as this isn't available for stand-alone purchase on North American shelves, it's mostly only of relevance for OEM pre-builds (and the mark-ups they charge).

Ultimately, the fact that this is "promotional pricing" should be underscored. The new pricing on the RX 7900 XT is not, at least for the moment, a permanent price cut. Meaning that AMD is leaving themselves some formal room to raise prices later on, if they choose to. Though in practice, it would be surprising to see card prices rebound – at least so long as we don't get a new crypto boom or the like.

Finally, to sweeten the pot, AMD is also extending their latest game bundle offer for another few weeks. The company is offering a copy of Avatar: Frontiers of Pandora with all Radeon RX 7000 video cards (and select Ryzen 7000 CPUs) through January 30, 2024.



from AnandTech https://ift.tt/6teQ75y
via IFTTT

Bad news for flagship laptops: both AMD and Intel’s next high-end mobile CPUs are supposedly delayed until 2025

https://ift.tt/AHRfbdg

If you were hoping to see some seriously powerful gaming laptops driven by the next-in-line top-end mobile silicon from AMD and Intel later this year, those expectations might fail to be met, or at least that’s the latest gossip from the rumor mill.

Wccftech spotted that hardware leaker Golden Pig Upgrade – a source with a reasonable track record – made a comment on Weibo to the effect that there are no new high-end mobile processors coming this year.

That statement pertained to the future Xiaomi Redmi G Pro laptop refresh, which will have a high-end (HX series) Intel Arrow Lake CPU, but Wccftech points out that the comment refers to launches across the board. So, in theory that also includes AMD’s incoming Strix Point family of mobile processors (to some extent – we’ll come back to that).

In short, the most powerful laptop CPUs will remain Intel’s top-end Raptor Lake Refresh (HX) and AMD’s Dragon Range HX (alongside Hawk Point below that) for some time yet. At least in theory, as we should heap on the seasoning here.


Analysis: Intel delay seems quite possible

We can believe that Intel’s next HX series chips might be pushed to early 2025, and the same is true for AMD’s Strix Point Halo, the very beefiest versions of Strix Point processors. Indeed, there has already been speculation that Strix Point Halo will be delayed to 2025, so that makes sense (at least based on past rumors). Fire Range HX, the successor to Dragon Range, also isn’t rumored to debut until 2025.

However, we should still get vanilla Strix Point APUs for laptops (mixing Zen 5 with RDNA 3.5 graphics) in 2024, even though Wccftech appears to draw a broader conclusion here that Zen 5 mobile won't be arriving in 2024. Plain Strix Point will still be powerful notebook chips and should arrive late in the summer of 2024.

Zen 5 desktop will certainly be out at some stage later in 2024 – very likely before Arrow Lake desktop, according to rumors. Arrow Lake desktop is supposed to arrive late in 2024 from what we’ve heard, so having the laptop chips run slightly later, into early 2025, seems believable enough.

Still, we should be skeptical about all of this info as already noted.


from TechRadar: computing components news https://ift.tt/JNilnS4
via IFTTT

Tuesday, 16 January 2024

New Intel rumor suggests powerful Battlemage GPU could be ditched – which should please AMD and Nvidia

https://ift.tt/DTEpmFi

Battlemage GPUs may not have anything to offer beyond the lower-end of the market, with the rumored mid-range graphics card for the 2nd-generation of Intel’s Arc series supposedly in danger of being canceled, according to fresh word from the grapevine.

This comes courtesy of the latest video from RedGamingTech (RGT), a YouTube leaker, although it’s not all bad news about Battlemage thankfully.

The main point made, though, is disappointing; there’s a “very good chance” that the previously rumored enthusiast-class GPU – the one featuring 56 Xe cores, known as G10, and in theory roughly equivalent to the current RTX 4070 Ti – isn’t going to be released. However, we’re told that decision hasn’t actually been made at Intel yet.

Why might G10 be canceled (with the emphasis on the might)? It’s not due to hardware issues, or problems realizing the graphics card in the technical sense, but rather it is financial considerations that are in play here. In short, the fear is that the mid-range GPU may not be profitable enough to make sense.

We should again underline that this is just a possibility right now – Intel could still go ahead with this graphics card. Alternatively, Team Blue could hedge its bets and either delay the launch of G10, or just have it come out with a very low volume of production (which would effectively mitigate any financial concerns to some extent, as a kind of compromise).

If Battlemage doesn’t have the G10 with 56 Xe cores in the end, what will it have, then? RGT’s sources believe that a more modestly pitched model will have 40 Xe cores, so the good news is that isn’t a million miles away. But is it an enthusiast-class GPU? No, it’s a lower-tier affair (or it will be by the time Battlemage is released, probably towards the very end of 2024 from what we’re hearing, when AMD’s RDNA 4 will be here, and Nvidia’s Blackwell too, most likely, or it’ll be imminent).

The other more positive news is that if G10 does happen, it currently has faster clock speeds than previously rumored, and a special type of cache called ‘Adamantine’ which will hopefully help to pep up performance too. (The 40 Xe cores graphics card won’t have this Adamantine cache, though, if you were wondering – this cache was rumored for Meteor Lake CPUs, actually, but didn’t happen for those chips in the end).


Analysis: A merry dance of rumors

Okay, so there’s a lot to digest here, and the principal point we should bear in mind is that the G10 graphics card is not dead yet – there are just big question marks hanging over it. That’s not great for those wanting more from Battlemage than lower-end offerings, of course, but still – hope remains. And besides, this is only a rumor anyway.

We’ve been treading a very winding trail following these Intel Arc rumors, for sure. Originally, we were given the expectation that Battlemage was going to be powerful enough to best Nvidia’s Lovelace GPUs, and indeed RGT does mention in the video that there was an 80 Xe core variant in the works at one time, but it got ditched.

Then we were told Battlemage was going to be just low-end GPUs, only to later be informed that the G10 upper-mid-ranger was in the pipeline. So, now we’re back to lower-end only – or maybe that’s the case, anyway.

Still, even if all we get from Intel’s Battlemage is a 40 Xe cores graphics card as the highest tier product, that could still make a big impact on the wider GPU market if it’s priced competitively. Let’s face it, more affordable graphics cards are what we really need in the way of rivals for AMD and Nvidia, because Team Green in particular appears to have rather forgotten about budget GPUs. Arguably, this could allow Intel to put all its resources into budget challengers, which might ultimately work out for the better.

What could also play into Intel’s decision to drop the G10 is that AMD is apparently firmly targeting the mid-range with RDNA 4, and those GPUs will supposedly top out there – but with potent products. And those cards could be tricky for Team Blue to take on, so it might just swerve the whole idea. Maybe Intel is partly waiting to see what’s in the works with RDNA 4, as more leaks spill forth, before making the final decision on G10.

Who knows, and we can continue to speculate, but the tentative picture right now for the future of the GPU market is Nvidia owning the high-end with Blackwell – with not even AMD challenging there – and Team Red focusing its efforts squarely on the mid-range, with Intel retreating to the lower tiers. Possibly, anyway – though if Team Blue can produce compelling low-end offerings, and supercharged integrated graphics with Battlemage, that could be enough to tide things over until bigger moves are made with Celestial (its third-gen GPUs).


from TechRadar: computing components news https://ift.tt/IDt6ura
via IFTTT

Friday, 12 January 2024

EK Reveals All-In-One Liquid Cooler for Delidded CPUs

https://ift.tt/i6GlnXJ

Historically, delidded CPUs have been the prerogative of die-hard enthusiasts who customized their rigs to the last bit. But with the emergence of specially-designed delidding tools, removing the integrated heat spreader from a CPU has become a whole lot easier, opening the door to delidding for a wider user base. To that end, EK is now offering all-in-one liquid cooling systems tailored specifically for delidded Intel LGA1700 processors.

The key difference with EKWB's new EK-Nucleus AIO CR360 Direct Die D-RGB – 1700 cooler is in the cooling plate on the combined pump block. While the rest of the cooler is essentially lifted from the company's premium 360-mm closed-loop all-in-one liquid cooling systems, the pump block has been equipped with a unique cooling plate specifically developed for mating with (and cooling) delidded Intel LGA1700 CPUs.

Meanwhile, since delidded CPUs lose the additional structural integrity provided by the IHS, EK is also bundling a contact frame with the cooler that is intended to protect CPUs against warping or bending by maintaining even pressure on the CPU. A protective foam piece is also provided to prevent liquid metal from spilling over onto electrical components surrounding the CPU die.

According to the company, critical components of the new AIO, such as its backplate and die-guard frame, were collaboratively developed by EK and Roman 'Der8auer' Hartung, a renowned German overclocker who has developed multiple tools both for extreme overclockers and enthusiasts. In addition, EK bundles Thermal Grizzly's Conductonaut liquid metal thermal paste (also co-designed with Der8auer) with the cooling system.

And since this is a high-end, high-priced cooler, EKWB has also paid some attention to aesthetics. The cooler comes with two distinct pump block covers: a standard cover featuring a brushed aluminum skull surrounded by a circle of LED lighting, for a classic yet bold aesthetic, and an alternate, more minimalist cover without the skull.

Traditionally, cooling for delidded CPUs has been primarily handled by custom loop liquid cooling systems. So the EK-Nucleus AIO CR360 Direct Die D-RGB – 1700 stands out in that regard, offering a self-contained and easier-to-install option for delidded CPUs. With delidding shown to reduce the temperature of Intel's Core i9-14900K CPU by up to 12ºC, it's no coincidence that EKWB is working to make delidding a more interesting and accessible option, particularly right as high-end desktop CPU TDPs are spiking.

Wrapping things up, EKWB has priced the direct die cooler at $170, about $20 more than the EK-Nucleus AIO CR360 Lux D-RGB cooler designed for stock Intel processors. The company is taking pre-orders now, and the finished coolers are expected to start shipping in mid-March 2024.



from AnandTech https://ift.tt/3iDh7wJ
via IFTTT

Thursday, 11 January 2024

Intel’s next-gen Battlemage GPUs are on track, and 3rd-gen Celestial hardware is being worked on – with more good news to follow

https://ift.tt/piXkw5t

Next-gen Battlemage graphics cards are progressing well and should arrive before CES 2025, we’ve heard from Intel.

Specifically, this came from Tom Petersen, Intel Fellow, who was interviewed by PC World at CES 2024, as highlighted by PC Gamer.

Petersen said he’s excited about Battlemage, which has largely moved on to software development – with 30% of Intel’s engineers working on it, we’re told – while the hardware team has already moved on to Celestial (3rd-gen graphics cards).

Petersen enthused: “So, think about it [as Battlemage] already has its first silicon in the labs which is very exciting and there’s more good news coming which I can’t talk about right now. We hope we are going to see it before CES 25.”


Analysis: Hoping for the best with Battlemage

The fact that there’s more good news coming is intriguing, of course, and it seems that Battlemage is on track for its previously rumored launch at some point in 2024. We were hoping for maybe a mid-2024 release timeframe, but realistically we were thinking that later in the year is probably more likely – and Petersen’s comment here makes it seem like it’ll be towards the very end of 2024.

Indeed, Petersen is talking in terms of ‘hoping’ rather than knowing, so we could be looking at an early 2025 launch (at CES 2025, perhaps). Although to be fair, it’s standard practice to keep things vague for the timing of any future hardware launch, of course.

Hopefully what we won’t see is a repeat of the Arc Alchemist launch which was plagued with delays, but given that there was a whole lot more to do to get things together for the original GPUs – particularly on the software side with drivers – it isn’t likely that’ll happen again.

Again, going by past rumors, Battlemage is set to be much like Alchemist in that it’ll top out at the mid-range – or maybe upper-mid-range (enthusiast class) – and won’t challenge the likes of high-end GPUs from Nvidia and AMD. (Although RDNA 4 may give the high-end a swerve, too, based on chatter from the grapevine, leaving Team Green with sole dominion in that expensive price bracket).

We’re hopeful that with lessons learned from Alchemist, and continual Arc driver improvements throughout 2024 – as witnessed consistently over the course of 2023 – Battlemage GPUs will be good candidates to grab spots on our roundup of the best graphics cards (and moreover, perhaps, the best cheap GPUs).



from TechRadar: computing components news https://ift.tt/Gc39USH
via IFTTT

Wednesday, 10 January 2024

ASUS Unveils NUC 14 Pro and Pro Plus Meteor Lake Mini-PCs

https://ift.tt/Ag7PcUI

As part of its 2024 CES announcements, ASUS officially unveiled a number of NUCs based on Intel's Meteor Lake platform. The company had earlier released the NUC 13 Rugged - the first NUC product that had not been inherited from Intel's existing lineup - based on Alder Lake-N processors. The new NUCs fall under the mainstream category and are marketed with the Pro tag. The Pro line represents the original 4" x 4" ultra-compact form-factor (UCFF).

The new ASUS NUC 14 Pro products fall under the UCFF category. In addition, ASUS is also introducing the NUC 14 Pro+. These are slightly wider (144mm vs. 117mm) than the regular Pro models. Their height of 41mm is slightly more than the slim Pro kit's 37mm, but shorter than the 54mm of the tall Pro kit with 2.5" SATA drive support.


The NUC 14 Pro kits have their Meteor Lake processors configured with a TDP of 40W, while the Pro+ kits (with space for a better thermal solution?) push that up to 65W. As expected, the choice of processors with the Pro+ is restricted to the top Ultra 5 / Ultra 7 / Ultra 9 SKUs, while the Pro version sports processors ranging from the Core 3 100U to the Ultra 7 165H.

The specifications of the different NUC 14 Pro and Pro+ models are summarized in the table below.

ASUS NUC 14 Pro / Pro+ (Meteor Lake) Lineup

Models: NUC 14 Pro Mini-PC, NUC 14 Pro Kit, NUC 14 Pro+ Mini-PC, NUC 14 Pro+ Kit

CPU (NUC 14 Pro, TDP up to 40W): Intel Core Ultra 7 165H, Core Ultra 7 155H, Core Ultra 5 135H, Core Ultra 5 125H, or Core 3 100U
CPU (NUC 14 Pro+, TDP up to 65W): Intel Core Ultra 9 185H, Core Ultra 7 155H, or Core Ultra 5 125H
GPU: Intel Arc GPU (Ultra 7 / Ultra 5) or Intel Graphics (Core 3 100U); Pro+ models are Intel Arc GPU only
DRAM: Two DDR5 SO-DIMM slots, up to 96 GB of DDR5-5600 in dual-channel mode
Motherboard: 4.13" x 4.16" UCFF
NVMe Storage: 1x M.2 2280 PCIe Gen4 x4, 1x M.2 2242 PCIe Gen4 x4
SATA Storage: 1x 2.5" SATA 6 Gbps (NUC 14 Pro only; not available on the Pro+)
Wireless: Intel Wi-Fi 6E AX211, 2x2 802.11ax Wi-Fi + Bluetooth 5.3 module
Ethernet: 1x Intel i226V/LM 2.5G LAN (vPro SKUs include the i226LM, non-vPro SKUs the i226V)
Front I/O: 1x USB 3.2 Gen 2x2 Type-C (20 Gbps), 2x USB 3.2 Gen 2 Type-A
Rear I/O: 2x Thunderbolt 4 / USB4 Type-C ports (up to 8K@30Hz when combined), 1x USB 3.2 Gen 2 Type-A, 1x USB 2.0 Type-A, 2x HDMI 2.1 ports (TMDS, up to 4K@60Hz, with CEC support), 1x 2.5 GbE RJ45 LAN port
Power Supply: NUC 14 Pro: 120W adapter (Ultra 5/7) or 90W adapter (Core 3); NUC 14 Pro+: 150W adapter (Ultra 9) or 120W adapter (Ultra 5/7)
Case Material: Matte textured polycarbonate with replaceable lid (Pro); anodized aluminum (Pro+)
Operating System: Windows 11 Pro / Windows 11 Home (Mini-PC models); barebones (Kit models)
Dimensions and Weight: 117mm x 112mm x 54mm / 750g (NUC 14 Pro Mini-PC), 117mm x 112mm x 37mm / 600g (NUC 14 Pro Kit), 144mm x 112mm x 41mm / 800g (NUC 14 Pro+)

The absence of an analog audio output port / headphone jack seems to be the major departure from previous mainstream NUCs. The sliding tabs on the underside for tool-less access to the SSD and DRAM modules are another interesting update. The Pro and Pro+ appear to be using the same motherboard, based on the I/O port locations in the two SKU sets. The extra space in the 5" x 4" Pro+ chassis is likely needed to accommodate the 65W thermal solution. ASUS indicated the use of a triple 6mm heat pipe dual-side exchanger for this purpose. It also allows for the Kensington lock to have a prominent position in the rear panel, unlike the side location in the Pro unit.

In other mini-PC news from ASUS, the company has also introduced a ROG NUC, which seems to be the successor to the NUC Enthusiast line. It features Meteor Lake Ultra 9 and Ultra 7 processors along with NVIDIA RTX 4070 / 4060 GPUs (mobile versions, in all likelihood) in a 2.5L chassis. Given the success of the ROG brand, it does make sense for ASUS to absorb the NUC Enthusiast line of products into it.

ASUS is also continuing to market mini-PCs under the ExpertCenter line. The PN65 sports Meteor Lake processors along with configurable port options (such as COM) for use in edge computing and other professional scenarios.

We have reached out to ASUS for pricing and availability details, and will update the piece when we get additional information.



from AnandTech https://ift.tt/WnJKhbq
via IFTTT

Want faster frame rates for free? Intel’s tech to boost game performance is coming to more of its CPUs

https://ift.tt/yI7vGpS

Some good news from Intel, as revealed at CES 2024, is that its APO tech will be coming to older CPUs, not just the newest Raptor Lake Refresh chips on our best processors list.

APO or Application Performance Optimization is a tech that optimizes supported games and boosts frame rates, and it was something Intel introduced with its 14th-gen CPUs.

Back at the time, we were told that this tech was for the new processors only, and wouldn’t come to previous generations – but now Intel has revealed it will be brought to some 13th-gen (Raptor Lake) and 12th-gen (Alder Lake) CPUs.

As we remarked at the time, because those previous two generations use the same architecture for their efficiency cores (Gracemont), there’s no technical reason in theory why APO wouldn’t work across all these chips. And indeed that has proved to be the case.

In case you’ve forgotten, here’s a quick refresher on APO: in basic terms it gets more out of the efficiency cores on the CPU (Intel’s hybrid tech splits the cores into performance and efficiency cores, ever since the release of Alder Lake).

Typically, those efficiency cores can run relatively slowly when gaming, and tests we’ve seen with APO and 14th-gen CPUs point to a doubling of speed in some circumstances, giving a nice performance boost.

Sadly, Intel didn’t give us a firm date of when these older processors will get APO support, but presumably it’ll be in the near future (fingers crossed).


Analysis: Better performance and efficiency to boot

The catch is that the performance cores are still the main engine of an Intel processor, doing most of the grunt work when it comes to gaming (or any demanding application, for that matter).

However, having efficiency cores, and maybe lots of them, running at say 2GHz instead of 1GHz will make a noticeable difference in certain scenarios. It can mean maybe 10% faster frame rates, or even more (early testing has observed 20% boosts in some cases). On top of this, power efficiency is improved somewhat, too.

So, this is really worthwhile to have for 12th-gen and 13th-gen CPUs, and gamers with these chips will doubtless be pleased. Although do note the other major caveat here which is that the game needs to support APO, and that only certain Raptor Lake and Alder Lake processors will get the benefit (namely unlocked ‘K’ models, the chips that can be overclocked).

How many PC games are supported for APO? Intel says it’s soon to be 14, and as PC Gamer, which spotted this development, points out, that includes the following: Rainbow Six: Siege, Metro Exodus, Guardians of the Galaxy, F1 22, Strange Brigade, World War Z, Dirt 5, and World of Warcraft.

Hopefully we can expect much wider games support down the line, because as noted, the performance boosts are definitely worth having. We won’t get more processor support, though, because hybrid tech (and efficiency cores) didn’t exist before Alder Lake (well, Lakefield aside, but that was a niche thing in the mobile silicon arena).



from TechRadar: computing components news https://ift.tt/cCMNzkZ
via IFTTT

Tuesday, 9 January 2024

The Intel CES 2024 Pat Gelsinger Keynote Live Blog (Starts at 5pm PT/01:00 UTC)

https://ift.tt/8ev4mWC

This evening is the biggest PC-related keynote of CES 2024, Intel's "prime" keynote with CEO Pat Gelsinger. Part of Intel's "AI everywhere starts with Intel" campaign for the show, Gelsinger is expected to talk about the role AI will play in the future of consumer technology, along with the economic implications.

So come join us at 5pm Pacific/8pm Eastern for a look at the latest from Intel!



from AnandTech https://ift.tt/qOaPz1H
via IFTTT

Monday, 8 January 2024

AMD Announces New Desktop Zen 3 Chips With 2 New APUs and the Ryzen 7 5700X3D

https://ift.tt/CIOED3A

One of AMD's most innovative desktop processor designs in recent years has to be the X3D series, with their 3D V-Cache packaging technology. Since AMD launched their Zen 3-based Ryzen 7 5800X3D back in April of 2022, the sky has been the limit for AMD's L3 cache-stacked chips, which have had significant benefits in gaming, especially in titles that can leverage larger amounts of L3 cache. Following the Ryzen 7 5800X3D, AMD launched their latest Ryzen 7000X3D series last year, based on Zen 4, which markedly improved gaming and compute performance. But even more than a year into the release of Zen 4, AMD still isn't done with Zen 3 and the AM4 platform.

For CES 2024, AMD has launched a third Zen 3 X3D SKU, following last year's limited-availability 6-core Ryzen 5 5600X3D. The latest AMD Ryzen 7 5700X3D offers the same eight Zen 3 cores and sixteen threads (8C/16T) as the original Ryzen 7 5800X3D, along with the same large 96 MB pool of 3D V-Cache, but it has lower base and turbo frequencies and an even more affordable price.

And AMD's AM4 additions aren't done there. Also announced by AMD at CES 2024 is a new pair of Zen 3-based APUs, which add to the 5000G(T) line-up. Like the other members of this family, the new SKUs pair Zen 3 cores with AMD Radeon Vega integrated graphics. The new Ryzen 5 5600GT and Ryzen 5 5500GT both come with a 65 W TDP, with turbo frequencies nudged up relative to the existing Ryzen 5000G series APUs.

Despite being based around the previous Zen 3 microarchitecture, the AM4 desktop platform is still thriving. With new processors to take advantage of the cheaper AM4 boards and DDR4 memory, AMD is looking to leverage the low production costs of their existing (and well amortized) silicon to offer better chips across all budgets.

AMD Ryzen 7 5700X3D: Even Cheaper 8C/16T Zen 3 X3D

With AMD over a year into the Zen 4 architecture, many would expect their focus to be on further Zen 4 chips. And while AMD has a diverse portfolio of processors, architectures, platforms, and products, the continued sales of AM4 hardware and their low costs make it easy to see why AMD has added another Zen 3-based chip with 3D V-Cache to the market.

Enter the AMD Ryzen 7 5700X3D, an alternative to the very popular Ryzen 7 5800X3D, which broke the mold on typical desktop processors through AMD's 3D V-Cache packaging technology. The new chip comes with 96 MB of L3 cache in total (32 MB on-die plus 64 MB of stacked V-Cache), all of it accessible from a single CCD with eight Zen 3 cores and sixteen threads (8C/16T), the same configuration as the 5800X3D.

AMD Ryzen 7000/5000 X3D Chips with 3D V-Cache

Model              Cores/Threads   Base Freq   Turbo Freq   Memory Support   L3 Cache   TDP     PPT     Price
Ryzen 9 7950X3D    16C / 32T       4.2 GHz     5.7 GHz      DDR5-5200        128 MB     120 W   162 W   $650
Ryzen 9 7900X3D    12C / 24T       4.4 GHz     5.6 GHz      DDR5-5200        128 MB     120 W   162 W   $499
Ryzen 7 7800X3D    8C / 16T        4.2 GHz     5.0 GHz      DDR5-5200        96 MB      120 W   162 W   $399
Ryzen 7 5800X3D    8C / 16T        3.4 GHz     4.5 GHz      DDR4-3200        96 MB      105 W   142 W   $359
Ryzen 7 5700X3D    8C / 16T        3.0 GHz     4.1 GHz      DDR4-3200        96 MB      105 W   142 W   $249
Ryzen 5 5600X3D    6C / 12T        3.3 GHz     4.4 GHz      DDR4-3200        96 MB      105 W   142 W   $230

Although the general specifications of the latest Ryzen 7 5700X3D are nearly identical to those of the Ryzen 7 5800X3D, there are a few notable differences to highlight. The Ryzen 7 5700X3D has a 400 MHz lower base clock speed, which sits at 3.0 GHz. Keeping with core frequencies, the turbo frequency on the Ryzen 7 5700X3D is also 400 MHz lower than the 5800X3D's (4.1 GHz vs. 4.5 GHz), although both chips share the same 105 W base TDP, with a 142 W Package Power Tracking (PPT) rating; this is how much power can be fed through the AM4 CPU socket. All of AMD's Ryzen 5000X3D processors support DDR4-3200 memory and are CPU ratio/frequency locked, meaning users can't overclock them.

What sets the Ryzen 7 5700X3D and Ryzen 7 5800X3D processors apart, aside from core frequencies, is the price, with AMD setting an MSRP of $249 for the 5700X3D. In contrast, the Ryzen 7 5800X3D is currently available to buy at Amazon for $359, so for a reduction of 400 MHz across the board, users can have a similar, albeit slower, chip for around $110 less. Given that the target market for these processors is gamers, any savings on the CPU can be put towards a discrete graphics card – a far more effective upgrade for higher average frame rates than a faster CPU or memory.
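Using the boost clocks and pricing quoted above, the trade-off is easy to quantify; the short calculation below (our own arithmetic on the figures from the table and text, not AMD's) shows the 5700X3D giving up roughly 9% of boost clock in exchange for roughly 31% off the price:

```python
# Price-vs-clock trade-off for the two 8-core Zen 3 X3D parts, using the
# boost clocks and prices quoted in the table and text above.

chips = {
    "Ryzen 7 5800X3D": {"boost_ghz": 4.5, "price_usd": 359},
    "Ryzen 7 5700X3D": {"boost_ghz": 4.1, "price_usd": 249},
}

old, new = chips["Ryzen 7 5800X3D"], chips["Ryzen 7 5700X3D"]

clock_drop = (1 - new["boost_ghz"] / old["boost_ghz"]) * 100
price_drop = (1 - new["price_usd"] / old["price_usd"]) * 100

print(f"Boost clock: {clock_drop:.0f}% lower")  # ~9%
print(f"Price:       {price_drop:.0f}% lower")  # ~31%
```

In other words, the clock deficit is proportionally far smaller than the price cut.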

AMD Ryzen 5 5600GT & 5500GT: Zen 3 APUs with Faster Core Frequencies

Also announced for AMD's previous AM4 platform is a pair of new APUs, which adds to the already established Ryzen 5000G series of processors. Under the new "GT" moniker, the AMD Ryzen 5 5600GT and 5500GT slot in alongside the existing Ryzen 5 5600G and 5600GE APUs, which are both based on AMD's Cezanne Zen 3 silicon and paved the way for the new Zen 4 APUs which AMD also announced during CES 2024.

AMD Ryzen 5000G/GT Series APUs (Zen 3)

Model             Cores/Threads   Base Freq   Turbo Freq   GPU CUs   GPU Freq   PCIe*    TDP
Ryzen 7 5700G     8 / 16          3800 MHz    4600 MHz     8         2000 MHz   16+4+4   65 W
Ryzen 7 5700GE    8 / 16          3200 MHz    4600 MHz     8         2000 MHz   16+4+4   35 W
Ryzen 5 5600G     6 / 12          3900 MHz    4400 MHz     7         1900 MHz   16+4+4   65 W
Ryzen 5 5600GE    6 / 12          3400 MHz    4400 MHz     7         1900 MHz   16+4+4   35 W
Ryzen 5 5600GT    6 / 12          3600 MHz    4600 MHz     7         ?          16+4+4   65 W
Ryzen 5 5500GT    6 / 12          3600 MHz    4400 MHz     7         ?          16+4+4   65 W
Ryzen 3 5300G     4 / 8           4000 MHz    4200 MHz     6         1700 MHz   16+4+4   65 W
Ryzen 3 5300GE    4 / 8           3600 MHz    4200 MHz     6         1700 MHz   16+4+4   35 W

*PCIe lanes on the SoC are listed as 16x GFX + 4x chipset + 4x NVMe

Looking at what separates the new GT series from the original G line-up, both the Ryzen 5 5600GT and Ryzen 5 5500GT have similar specs to the 5600G/GE chips, with the same Radeon Vega 7 integrated graphics. This is a testament to the Vega GPU architecture's longevity, and also makes for a bit of an awkward moment as AMD is in the process of winding down driver support for it.

With regards to specs, the Ryzen 5 5600GT has a 300 MHz slower base frequency than the Ryzen 5 5600G but has a 200 MHz faster turbo core clock speed, which tops out at 4.6 GHz. The Ryzen 5 5500GT also has the same 3.6 GHz base frequency as the 5600GT but shares the same 4.4 GHz turbo frequency as the 5600G.

At the time of writing, AMD hasn't provided us with the graphics core frequencies of the Radeon Vega 7 integrated graphics, but we would expect them to be similar to the 1.9 GHz of the Ryzen 5 5600G/GE processors. It's also worth noting that the Ryzen 5 5600GT and 5500GT both have a 65 W TDP, with the same 6C/12T of Zen 3 cores.

AMD has provided some basic performance metrics from their in-house testing for the Ryzen 5 5600GT, which compares it directly to the previous Ryzen 5 5600G. With faster turbo clock speeds, AMD shows the Ryzen 5 5600GT performing around 5% better in DOTA 2 and 10% better in PUBG, while compute workloads show gains of 9% in Blender and 11% in WinRAR. AMD also outlines that the Ryzen 5 5500GT performs around 2% better than the 5600G in applications such as Cinebench nT, with marginally better (1%) gaming performance in World of Tanks and Mount & Blade 2.

As they are the same silicon as the other Ryzen 5000 series APUs, both the Ryzen 5 5600GT and 5500GT include 16 PCIe 4.0 lanes for a discrete graphics card should users wish to upgrade, as well as 4 PCIe 4.0 lanes linking the chip to the chipset and 4 PCIe 4.0 lanes designated for an M.2 storage drive. It's also worth noting that AMD is bundling their Wraith Stealth cooler with both processors, which saves users money, as an additional CPU cooler purchase isn't required.

The AMD Ryzen 7 5700X3D is expected to cost $249, while the Ryzen 5 5600GT and Ryzen 5 5500GT will retail for $140 and $125, respectively. All three of AMD's new Zen 3-based processors will be available to buy from January 31st from all major retailers.



from AnandTech https://ift.tt/HQ7gzW9
via IFTTT

AMD Unveils Ryzen 8000G Series Processors: Zen 4 APUs For Desktop with Ryzen AI

https://ift.tt/yMfNOHl

While it's been touted for many months that AMD will release APUs for desktops based on Zen 4, rumors and wishes have finally come to fruition during AMD's presentation at CES 2024 with the announcement of the Ryzen 8000G family. The latest line-up of APUs with Zen 4 cores and upgraded Radeon integrated graphics consists of four new SKUs, with the Ryzen 7 8700G leading the charge with 8 CPU cores and AMD's RDNA3-based Radeon 780M graphics. It offers users a more cost-effective pathway to gaming and content creation without needing a discrete graphics card.

The other models announced include the AMD Ryzen 5 8600G and Ryzen 5 8500G, both of which offer 6 CPU cores and integrated graphics, all with a 65 W TDP. Bringing up the rear will be the AMD Ryzen 3 8300G, a modest, entry-level 4 core offering. AMD will be tapping both their Phoenix (1) and Phoenix 2 silicon for these parts, depending on the SKU, meaning that the higher-tier parts will exclusively use Zen 4 CPU cores, while the lower-tier parts will use a mix of Zen 4 and Zen 4c CPU cores, a similar split to what we see today with AMD's Ryzen Mobile 8000 series.



from AnandTech https://ift.tt/DLB8ZPO
via IFTTT

Wednesday, 3 January 2024

New Intel Core i9-14900KS photo fuels speculation of upcoming launch

https://ift.tt/AU1EHCf

The Intel Core i9-14900KS processor may have just been spotted online thanks to a leaked photo, indicating that Intel might not be quite done with its 14th-gen Core lineup just yet. 

With Intel understandably focused on the launch of its new Meteor Lake processors, not a lot of attention has been paid to its Raptor Lake Refresh chips since they launched late last year. But a new photo posted to Twitter (or X, if you insist) by user 9550pro appears to show a pristine-looking production version of the i9-14900KS processor, rather than a more cryptic engineering sample. Since Intel hasn't officially disclosed any details about its potential flagship chip, there's no way to tell if the chip is genuine, something 9550pro acknowledges in their post. 

[Image: A purported Intel Core i9-14900KS chip against a pink background. Image credit: 9550pro / X]

However, as PCGamesN points out, recent developments suggest that more information may emerge soon. If the Core i9-14900KS is on the horizon, it is poised to become Intel's best processor for gaming, though whether it will be capable of dethroning the AMD Ryzen 7 7800X3D remains to be seen.

The 14900KS is expected to follow the pattern of previous KS processors, offering similar specifications to the Core i9-14900K but with higher clock speeds. Current speculation suggests it could achieve clock speeds of up to 6.20GHz out of the box, but with the Intel Core i9-14900K falling behind the best AMD processor for gaming, this chip is probably better off with the enthusiast crowd and creative pros who can better leverage that extra 200MHz.

And, if past is prologue, the higher price of the i9-14900KS will likely mean enthusiasts are going to be the only ones willing to shell out the cash necessary to pick one up if it does launch.

Enthusiasts' last hurrah before Arrow Lake?

With Meteor Lake seemingly restricted to laptops and all-in-one PCs for the moment (it's still uncertain whether a Meteor Lake desktop processor will sell on its own), the Intel Core i9-14900KS might be the last high-end desktop processor to go on sale for several months. 

Intel says it is still committed to releasing both Arrow Lake and Lunar Lake in 2024, which means the Core i9-14900KS won't be the last Intel processor we see go on sale this year. 

Still, it might be just enough to hold the enthusiast crowd over until the new chips are released later this year, and given the somewhat lackluster gen-on-gen performance gain of the Intel Core i9-14900K, having that extra clock speed that the KS label brings with it might offer performance more in line with gamers and enthusiasts' expectations.


from TechRadar: computing components news https://ift.tt/jrGMZ9a
via IFTTT