Wednesday, 24 January 2024

MLCommons To Develop PC Client Version of MLPerf AI Benchmark Suite

https://ift.tt/IhJmFBa

MLCommons, the consortium behind the MLPerf family of machine learning benchmarks, is announcing this morning that the organization will be developing a new desktop AI benchmarking suite under the MLPerf banner. Helmed by the body's newly-formed MLPerf Client working group, the task force will be developing a client AI benchmark suite aimed at traditional desktop PCs, workstations, and laptops. According to the consortium, the first iteration of the MLPerf Client benchmark suite will be based on Meta's Llama 2 LLM, with an initial focus on assembling a benchmark suite for Windows.

MLPerf is the de facto industry standard benchmark for AI inference and training on servers and HPC systems, and MLCommons has slowly been extending the family of benchmarks to additional devices over the past several years. This has included assembling benchmarks for mobile devices, and even low-power edge devices. Now, the consortium is setting about covering the "missing middle" of their family of benchmarks with an MLPerf suite designed for PCs and workstations. And while this is far from the group's first benchmark, it is in some respects their most ambitious effort to date.

The aim of the new MLPerf Client working group will be to develop a benchmark suitable for client PCs – which is to say, a benchmark that is not only sized appropriately for the devices, but is a real-world client AI workload in order to provide useful and meaningful results. Given the cooperative, consensus-based nature of the consortium’s development structure, today’s announcement comes fairly early in the process, as the group is just now getting started on developing the MLPerf Client benchmark. As a result, there are still a number of technical details about the final benchmark suite that need to be hammered out over the coming months, but to kick things off the group has already narrowed down some of the technical aspects of their upcoming benchmark suite.

Perhaps most critically, the working group has already settled on basing the initial version of the MLPerf Client benchmark around the Llama 2 large language model, which is already used in other versions of the MLPerf suite. Specifically, the group is eyeing the 7 billion parameter version of that model (Llama-2-7B), as that's believed to be the most appropriate size and complexity for client PCs (at INT8 precision, the 7B model would require roughly 7GB of RAM). Past that, however, the group still needs to determine the specifics of the benchmark, most importantly the tasks that the LLM will be benchmarked on.
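For context, that 7GB figure follows directly from the parameter count: weight storage scales linearly with the number of parameters and the bytes used per parameter. A minimal back-of-the-envelope sketch (weights only; the KV cache and activations add further overhead on top):

```python
# Weight-memory arithmetic for a 7B-parameter model at common precisions.
# Weights only -- the KV cache and activations add further RAM on top.
PARAMS = 7_000_000_000
BYTES_PER_PARAM = {"FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    gigabytes = PARAMS * nbytes / 1e9
    print(f"Llama-2-7B @ {precision}: ~{gigabytes:.1f} GB of weights")

# INT8 works out to ~7.0 GB, matching the figure quoted above.
```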

With the aim of getting it on PCs of all shapes and sizes, from laptops to workstations, the MLPerf Client working group is going straight for mass market adoption by targeting Windows first – a far cry from the *nix-focused benchmarks they’re best known for. To be sure, the group does plan to bring MLPerf Client to additional platforms over time, but their first target is to hit the bulk of the PC market where Windows reigns supreme.

In fact, the focus on client computing is arguably the most ambitious part of the project for a group that already has ample experience with machine learning workloads. Thus far, the other versions of MLPerf have been aimed at device manufacturers, data scientists, and the like – which is to say they've been barebones benchmarks. Even the mobile version of the MLPerf benchmark isn't very accessible to end-users, as it's distributed as a source-code release intended to be compiled on the target system. The MLPerf Client benchmark for PCs, on the other hand, will be a true client benchmark, distributed as a compiled application with a user-friendly front-end. Which means the MLPerf Client working group is tasked with not only figuring out what the most representative ML workloads will be for a client, but also how to tie that all together into a useful graphical benchmark.

Meanwhile, although many of the finer technical points of the MLPerf Client benchmark suite remain to be sorted out, talking to MLCommons representatives, it sounds like the group has a clear direction in mind on the APIs and runtimes that they want the benchmark to run on: all of them. With Windows offering its own machine learning APIs (WinML and DirectML), and most hardware vendors offering their own optimized platforms on top of that (CUDA, OpenVINO, etc.), there are numerous possible execution backends for MLPerf Client to target. And, keeping in line with the laissez-faire nature of the other MLPerf benchmarks, the expectation is that MLPerf Client will support a full gamut of common and vendor-proprietary backends.
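MLCommons hasn't detailed how the harness will select among backends, but for a rough idea of what a multi-backend design looks like in practice, here's a minimal sketch using ONNX Runtime, which exposes vendor backends as pluggable execution providers. The model file name and the preference order are assumptions for illustration only, not MLPerf Client's actual implementation:

```python
# Illustrative only: picking an execution backend via ONNX Runtime's
# execution providers. Not MLPerf Client's actual harness; the model file
# and preference order below are hypothetical.
import onnxruntime as ort

# Vendor-optimized backends first, DirectML as the common Windows fallback,
# CPU as the lowest common denominator.
preferred = [
    "CUDAExecutionProvider",      # NVIDIA
    "OpenVINOExecutionProvider",  # Intel
    "DmlExecutionProvider",       # DirectML, vendor-agnostic on Windows
    "CPUExecutionProvider",       # universal fallback
]

available = ort.get_available_providers()
providers = [p for p in preferred if p in available]

session = ort.InferenceSession("llama-2-7b.onnx", providers=providers)  # hypothetical model file
print("Executing on:", session.get_providers()[0])
```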

In practice, then, this would be very similar to how other desktop client AI benchmarks work today, such as UL's Procyon AI benchmark suite, which allows for plugging in to multiple execution backends. The use of different backends does take away a bit from true apples-to-apples testing (though it would always be possible to force fallback to a common API like DirectML), but it gives the hardware vendors room to optimize the execution of the model to their hardware. MLPerf takes the same approach with their other benchmarks right now, essentially giving hardware vendors free rein to come up with new optimizations – including reduced precision and quantization – so long as they don't lose so much inference accuracy that they fail to meet the benchmark's overall accuracy requirements.
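As a toy illustration of the precision trade-off vendors would be navigating, the sketch below applies symmetric per-tensor INT8 quantization to a random weight tensor and measures the resulting error – the kind of quantity that gets checked against an accuracy budget. This is purely illustrative, not MLPerf's actual accuracy-validation procedure:

```python
# Toy example: symmetric per-tensor INT8 quantization of a weight matrix,
# with the reconstruction error measured afterward. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024)).astype(np.float32)  # stand-in FP32 weights

# Map [-max|w|, +max|w|] onto the signed 8-bit range [-127, 127]
scale = np.abs(weights).max() / 127.0
quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize and measure how much fidelity the lower precision costs
dequantized = quantized.astype(np.float32) * scale
mean_abs_error = np.abs(weights - dequantized).mean()
print(f"scale = {scale:.6f}, mean abs error = {mean_abs_error:.6f}")
```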

Even the type of hardware used to execute the benchmark is open to change: while the benchmark is clearly aimed at leveraging the new field of NPUs, vendors are also free to run it on GPUs and CPUs as they see fit. So MLPerf Client will not exclusively be an NPU or GPU benchmark.

As for keeping everyone on equal footing, the working group itself is a who's who of hardware and software vendors. The list includes not only Intel, AMD, and NVIDIA, but also Arm, Qualcomm, Microsoft, Dell, and others. So there is buy-in from all of the major industry players (at least in the Windows space), which has been critical for driving the acceptance of MLPerf for servers, and will similarly be needed to drive acceptance of MLPerf Client.

The MLPerf Client benchmark itself is still quite some time from release, but once it’s out, it will be joining the current front-runners of UL’s Procyon AI benchmark and Primate Labs’ Geekbench ML, both of which already offer Windows client AI benchmarks. And while benchmark development is not necessarily a competitive field, MLCommons is hoping that their open, collaborative approach will be something that sets them apart from existing benchmarks. The nature of the consortium means that every member gets a say (and a vote) on matters, which isn’t the case for proprietary benchmarks. But it also means the group needs a complete consensus in order to move forward.

Ultimately, the initial version of the MLPerf Client benchmark is being devised as more of a beginning than an end product in and of itself. Besides expanding the benchmark to additional platforms beyond Windows, the working group will also eventually be looking at additional workloads to add to the suite – and, presumably, adding more models beyond Llama 2. So while the group has a good deal of work ahead of them just to get the initial benchmark out, the plan is for MLPerf Client to be a long-lived, long-supported benchmark, as the other MLPerf benchmarks are today.



from AnandTech https://ift.tt/tfjpi73
via IFTTT

Tuesday, 23 January 2024

Leaked AMD Ryzen 7 8700G and Ryzen 5 8600G APU benchmarks show they're substantially faster than the Ryzen 7 5700G

https://ift.tt/u2wJLts

Geekbench results for both the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs have been leaked, showcasing their single-threaded and multi-threaded performance.

The AMD Ryzen 7 8700G tested is the highest-end Hawk Point APU, with eight cores, 16 threads, 16 MB of L3 cache, and 8 MB of L2 cache. It also has a base clock speed of 4.2 GHz, a boost clock speed of 5.1 GHz, and a TDP of 65W. According to the report from Wccftech, it scored 2,720 in the single-core test and 14,326 in the multi-core test.

Meanwhile, the AMD Ryzen 5 8600G is a six-core, 12-thread APU with 16 MB of L3 and 6 MB of L2 cache. It has a base clock speed of 4.3 GHz and a boost clock speed of 5.0 GHz, with a 65W TDP as well. It scored 2,474 in the single-core test and 11,453 in the multi-core test.

Compared to the AMD Ryzen 7 5700G's performance, the AMD Ryzen 7 8700G shows a 64% boost in multi-core and a 37% boost in single-core performance. Compared to the same chip, the Ryzen 5 8600G shows a 50% increase in multi-threaded and a 29% increase in single-threaded performance.
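Those uplift figures are easy to sanity-check: dividing each leaked score by one plus the quoted percentage backs out the implied Ryzen 7 5700G baseline, which can then be compared against known 5700G results. A quick sketch using only the numbers quoted above:

```python
# Back out the implied Ryzen 7 5700G baseline from the leaked scores and
# the quoted uplift percentages (all figures taken from the report above).
leaks = {
    "Ryzen 7 8700G": {"single": 2720, "multi": 14326, "up_single": 0.37, "up_multi": 0.64},
    "Ryzen 5 8600G": {"single": 2474, "multi": 11453, "up_single": 0.29, "up_multi": 0.50},
}

for chip, s in leaks.items():
    base_single = s["single"] / (1 + s["up_single"])
    base_multi = s["multi"] / (1 + s["up_multi"])
    print(f"{chip}: implied 5700G baseline ~{base_single:.0f} single / ~{base_multi:.0f} multi")
```

Notably, the two implied multi-core baselines land fairly far apart (roughly 8,700 vs 7,600), suggesting the quoted percentages were computed against different 5700G result entries – exactly the kind of discrepancy this quick check is useful for surfacing in leaks like these.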

These are some truly impressive results from these APUs, especially considering their respective price points of $329 and $229 – exactly the kind of well-priced mid-range APUs the market desperately needs.

AMD may be gunning for Apple 

It seems that AMD is looking to compete with Apple’s M3 silicon with its APU line and become some of the best processors on the market, especially the AMD Strix Point Halo APU. For instance, the most powerful Strix Point Halo APU is rumored to have 16 cores and an RDNA 3.5 GPU with 40 Compute Units (CUs).

The only thing that might stop the Strix Point Halo APU in its tracks as competition for the M3 is the rumor that production has been pushed back to 2025. Strix Point should still be launching in 2024, however, which should still pose a major challenge for Apple.

I say, bring on the extra competition, since as it continues we get to reap the benefits of what the tech giants sow.




from TechRadar: computing components news https://ift.tt/l8MuXAb
via IFTTT

AMD Strix Point Halo APU spotted – this could be the most powerful chip for thin-and-light laptops that beats out Apple

https://ift.tt/RJbDnXm

AMD’s incoming Strix Point Halo APUs – all-in-one chips with processors and integrated graphics that are expected to be seriously powerful – have just been spotted.

The rumor mill holds that Strix Point APUs will likely arrive later this year, and will be followed by top-end Strix Point Halo chips.

Excitingly, we've just had a first sighting of Strix Point Halo, with the APUs mentioned in patches for ROCm, AMD's open source software platform for GPU computation – as VideoCardz noticed, Kepler pointed this out on X (formerly Twitter).


Strix Point Halo will use AMD’s next-gen Zen 5 processor cores, which are expected to be a solid step on from current Zen 4 chips, and the integrated graphics will be RDNA 3+ (also referred to as RDNA 3.5). AMD uses the RDNA 3 architecture for its current-gen GPUs, so RDNA 3.5 is a refresh of this. It’ll also have an XDNA 2-powered NPU for accelerating AI workloads, to boot.

The most powerful Strix Point Halo APU is rumored to have 16 cores and an RDNA 3.5 GPU with 40 Compute Units (CUs).


Analysis: A worry for Apple

To give you some idea of how powerful a CPU-plus-GPU solution we're talking about here: this pairs 16 full Zen 5 cores on the processor side with integrated graphics that, on paper, pack more compute units than AMD's next-in-line GPU, the RX 7600 XT (which goes on sale tomorrow, in fact). That model has 32 CUs compared to 40 CUs for the flagship Strix Point Halo, and don't forget the latter is on an improved refresh of the current graphics architecture, so it could be more than a little ahead of the 7600 XT.

If the rumors around the spec are right, Strix Point Halo really will be a mammoth step forward for AMD’s APUs, bringing in a chip to rival what Apple is doing with its M series of processors (and Apple is going great guns with them, of course).

One slight catch with Strix Point Halo is that while earlier rumors pegged it as having a 2024 launch, the latest chatter is that this has slipped to 2025. So in theory, we’ll get Strix Point this year, but not the Halo silicon until next year (hopefully early in 2025, mind).

This is certainly a chip that could worry Apple, then. Perhaps the main concern when throwing around all these juicy specs and forecasts of a storming high-performance AMD APU is the price tag that might be attached. A thin and premium gaming laptop powered by the flagship Strix Point Halo chip is likely to cost a pretty penny (or three hundred thousand of them).

Mind you, it’s not like MacBooks are cheap either (unless you spot a great deal). Strix Point Halo is certainly one to watch, though, in terms of the most powerful performers for APUs, even though Strix Point will be the more prevalent proposition with more mainstream pricing.

Strix Point Halo won’t just be about thin-and-light laptops, either – there are also small form-factor PCs to consider here, where dispensing with a separate GPU is obviously a major boon. The Halo silicon is another sign that the days of discrete GPUs might be numbered, at least at the lower-end of the market.




from TechRadar: computing components news https://ift.tt/hB5kI2w
via IFTTT

Monday, 22 January 2024

The Corsair A115 CPU Cooler Review: Massive Air Cooler Is Effective, But Expensive

https://ift.tt/SUlBCsq

With recent high-performance CPUs exhibiting increasingly demanding cooling requirements, we've seen a surge in releases of new dual-tower air cooler designs. Though not new by any means, dual-tower designs have taken on increased importance as air cooler designers work to keep up with the significant thermal loads generated by the latest processors. And even in systems that aren't running the very highest-end or hottest CPUs, designers have been looking for ways to improve on air cooling efficiency, if only to hold the line on noise levels while the average TDP of enthusiast-class processors continues to creep upward. All of which has been giving dual-tower coolers a bigger presence within the market.

At this point many major air cooler vendors are offering at least one dual-tower cooler, and, underscoring this broader shift in air cooler design, they're being joined by the liquid-cooling focused Corsair. Best known within the PC cooling space for their expansive lineup of all-in-one (AIO) liquid CPU coolers, Corsair has enjoyed a massive amount of success with their AIO designs. But perhaps as a result of this, the company has exhibited a notable reluctance towards venturing into the air cooler segment, and it's been years since the company last introduced a new CPU air cooler. This absence is finally coming to an end, however, with the launch of a new dual-tower air cooler.

Our review today centers on Corsair's latest offering in the high-end CPU air cooler market, the A115. Designed to challenge established models like the Noctua NH-D15, the A115 is Corsair's effort to jump into the high-end air cooling market with both feet and a lot of bravado. The A115 boasts substantial dimensions to maximize its cooling efficiency, aiming not just to meet but to surpass the cooling requirements of the most demanding mainstream CPUs. This review will thoroughly examine the A115's performance characteristics and its competitive standing in the aftermarket cooling market.



from AnandTech https://ift.tt/AEdjXsw
via IFTTT

Wednesday, 17 January 2024

AMD Rolls Out Radeon RX 7900 XT Promo Pricing Opposite GeForce RTX 40 Super Launch

https://ift.tt/6H4RF5a

In response to the launch of NVIDIA's new GeForce RTX 40 Super video cards, AMD has announced that they are instituting new promotional pricing on a handful of their high-end video cards in order to keep pace with NVIDIA's new pricing.

Kicking off what AMD is terming a special "promotional pricing" program for the quarter, AMD has been working with its retail partners to bring down the price of the Radeon RX 7900 XT to $749 (or lower), roughly $50 below its street price at the start of the month. Better still, AMD's board partners have already reduced prices further than AMD's official program/projections, and we're seeing RX 7900 XTs drop to as low as $710 in the U.S., making for a $90 drop from where prices stood a few weeks ago.

Meanwhile, AMD is also technically bringing down prices on the China and OEM-only Radeon RX 7900 GRE as well. Though as this isn't available for stand-alone purchase on North American shelves, it's mostly only of relevance for OEM pre-builds (and the mark-ups they charge).

Ultimately, the fact that this is "promotional pricing" should be underscored. The new pricing on the RX 7900 XT is not, at least for the moment, a permanent price cut. Meaning that AMD is leaving themselves some formal room to raise prices later on, if they choose to. Though in practice, it would be surprising to see card prices rebound – at least so long as we don't get a new crypto boom or the like.

Finally, to sweeten the pot, AMD is also extending their latest game bundle offer for another few weeks. The company is offering a copy of Avatar: Frontiers of Pandora with all Radeon RX 7000 video cards (and select Ryzen 7000 CPUs) through January 30, 2024.



from AnandTech https://ift.tt/6teQ75y
via IFTTT

Bad news for flagship laptops: both AMD and Intel’s next high-end mobile CPUs are supposedly delayed until 2025

https://ift.tt/AHRfbdg

If you were hoping to see some seriously powerful gaming laptops driven by the next-in-line top-end mobile silicon from AMD and Intel later this year, those expectations might fail to be met, or at least that’s the latest gossip from the rumor mill.

Wccftech spotted that hardware leaker Golden Pig Upgrade – a source with a reasonable track record – made a comment on Weibo to the effect that there are no new high-end mobile processors coming this year.

That statement pertained to the future Xiaomi Redmi G Pro laptop refresh, which will have a high-end (HX series) Intel Arrow Lake CPU, but Wccftech points out that the comment refers to launches across the board. So, in theory that also includes AMD’s incoming Strix Point family of mobile processors (to some extent – we’ll come back to that).

In short, the most powerful laptop CPUs will remain Intel’s top-end Raptor Lake Refresh (HX) and AMD’s Dragon Range HX (alongside Hawk Point below that) for some time yet. At least in theory, as we should heap on the seasoning here.


Analysis: Intel delay seems quite possible

We can believe that Intel's next HX series chips might be pushed to early 2025, and the same is true for AMD's Strix Point Halo, the very beefiest version of the Strix Point processors. Indeed, there has already been speculation that Strix Point Halo will be delayed to 2025, so that makes sense (at least based on past rumors). Fire Range HX, the successor to Dragon Range, also isn't rumored to debut until 2025.

However, we should still get vanilla Strix Point APUs for laptops (mixing Zen 5 with RDNA 3.5 graphics) in 2024, even though Wccftech appears to draw a broader conclusion here that Zen 5 mobile won't be arriving in 2024. Plain Strix Point will still be powerful notebook chips and should arrive late in the summer of 2024.

Zen 5 desktop will certainly be out at some stage later in 2024 – very likely before Arrow Lake desktop, according to rumors. Arrow Lake desktop is supposed to arrive late in 2024 from what we’ve heard, so having the laptop chips run slightly later, into early 2025, seems believable enough.

Still, we should be skeptical about all of this info as already noted.




from TechRadar: computing components news https://ift.tt/JNilnS4
via IFTTT

Tuesday, 16 January 2024

New Intel rumor suggests powerful Battlemage GPU could be ditched – which should please AMD and Nvidia

https://ift.tt/DTEpmFi

Battlemage GPUs may not have anything to offer beyond the lower end of the market, with the rumored mid-range graphics card for the 2nd generation of Intel's Arc series supposedly in danger of being canceled, according to fresh word from the grapevine.

This comes courtesy of the latest video from RedGamingTech (RGT), a YouTube leaker, although it’s not all bad news about Battlemage thankfully.

The main point made, though, is disappointing: there's a "very good chance" that the previously rumored enthusiast-class GPU – the one featuring 56 Xe cores, known as G10, and in theory roughly equivalent to the current RTX 4070 Ti – isn't going to be released. However, we're told that decision hasn't actually been made at Intel yet.

Why might G10 be canceled (with the emphasis on the might)? It’s not due to hardware issues, or problems realizing the graphics card in the technical sense, but rather it is financial considerations that are in play here. In short, the fear is that the mid-range GPU may not be profitable enough to make sense.

We should again underline that this is just a possibility right now – Intel could still go ahead with this graphics card. Alternatively, Team Blue could hedge its bets and either delay the launch of G10, or just have it come out with a very low volume of production (which would effectively mitigate any financial concerns to some extent, as a kind of compromise).

If Battlemage doesn't have the 56 Xe core G10 in the end, what will it have? RGT's sources believe that a more modestly pitched model will have 40 Xe cores, so the good news is that isn't a million miles away. But is it an enthusiast-class GPU? No, it's a lower-tier affair – or it will be by the time Battlemage is released, probably towards the very end of 2024 from what we're hearing, by which point AMD's RDNA 4 will be here, and Nvidia's Blackwell will most likely have arrived or be imminent.

The other more positive news is that if G10 does happen, it currently has faster clock speeds than previously rumored, plus a special type of cache called 'Adamantine' which will hopefully help pep up performance too. (The 40 Xe core graphics card won't have this Adamantine cache, though, if you were wondering – this cache was rumored for Meteor Lake CPUs, actually, but didn't happen for those chips in the end.)


Analysis: A merry dance of rumors

Okay, so there’s a lot to digest here, and the principal point we should bear in mind is that the G10 graphics card is not dead yet – there are just big question marks hanging over it. That’s not great for those wanting more from Battlemage than lower-end offerings, of course, but still – hope remains. And besides, this is only a rumor anyway.

We've been treading a very winding trail following these Intel Arc rumors, for sure. Originally, we were given the expectation that Battlemage was going to be powerful enough to best Nvidia's Lovelace GPUs, and indeed in the video, RGT does mention that there was an 80 Xe core variant in the works at one time, but it got ditched.

Then we were told Battlemage was going to be just low-end GPUs, only to later be informed that the G10 upper-mid-ranger was in the pipeline. So now we're back to lower-end only – or possibly so, anyway.

Still, even if all we get from Intel's Battlemage is a 40 Xe core graphics card as the highest-tier product, that could still make a big impact on the wider GPU market if it's priced competitively. Let's face it, more affordable graphics cards are what we really need in the way of rivals for AMD and Nvidia, because Team Green in particular appears to have rather forgotten about budget GPUs. Arguably, this could allow Intel to put all its resources into budget challengers, which might ultimately work out for the better.

What could also play into Intel’s decision to drop the G10 is that AMD is apparently firmly targeting the mid-range with RDNA 4, and those GPUs will supposedly top out there – but with potent products. And those cards could be tricky for Team Blue to take on, so it might just swerve the whole idea. Maybe Intel is partly waiting to see what’s in the works with RDNA 4, as more leaks spill forth, before making the final decision on G10.

Who knows, and we can continue to speculate, but the tentative picture right now for the future of the GPU market is Nvidia owning the high-end with Blackwell – with not even AMD challenging there – and Team Red focusing its efforts squarely on the mid-range, with Intel retreating to the lower tiers. Possibly, anyway – though if Team Blue can produce compelling low-end offerings, and supercharged integrated graphics with Battlemage, that could be enough to tide things over until bigger moves are made with Celestial (its third-gen GPUs).




from TechRadar: computing components news https://ift.tt/IDt6ura
via IFTTT