Sunday 31 December 2023

AMD in 2023: year in review

https://ift.tt/X0RDYry

We’re almost done with 2023, and as ever at TechRadar, it’s time to look back at how the various tech giants performed over the past year. In AMD’s case, we saw some inspiring new products introduced for its consumer processor and GPU ranges, and renewed gusto in its pursuit of AI.

There were also shakier times for Team Red, though, notably a string of blunders – the vapor chamber cooling debacle is one that springs immediately to mind, but there were other incidents, and a few too many of them. Join us as we explore the ups and downs of AMD’s 2023, weighing everything up at the end.

An AMD Ryzen 7 7800X3D slotted into a motherboard

(Image credit: Future/John Loeffler)

Zen 4 gets 3D V-Cache

One of AMD’s big moves this year was the introduction of 3D V-Cache for its Ryzen 7000 desktop processors.

There were a trio of X3D models introduced, with the higher-end Ryzen 9 7950X3D and 7900X3D hitting the shelves first in February. These were good CPUs and we liked them, particularly the Ryzen 9 7950X3D, which is a sterling processor, albeit very pricey (similarly, we felt the price of the 7900X3D held it back somewhat).

What everyone was really waiting for, though, was the more affordable mid-range 3D V-Cache chip, and the Ryzen 7 7800X3D turned up in April. We praised the 7800X3D’s outstanding gaming performance, and it remains the best choice for a gaming PC, as we conclude in our roundup of the best processors. This was a definite highlight among AMD’s releases this year.

We were also treated to an interesting diversion in the form of a new last-gen X3D processor, for which AMD chose a very different tactic. The Ryzen 5 5600X3D arrived in July as a cheap CPU that’s great for an affordable gaming PC, the catch being that it only went on sale through Micro Center stores in the US. For those who couldn’t get hold of one, though, there was always the older Ryzen 7 5800X3D, which dipped to some really low price tags at various points throughout the year. For gamers, AMD had some tempting pricing, that’s for sure.

Away from the world of 3D V-Cache, AMD also pushed out a few vanilla Ryzen 7000 CPUs right at the start of the year, namely the Ryzen 9 7900, Ryzen 7 7700, and Ryzen 5 7600, siblings of the already released ‘X’ versions of those processors. They were useful additions to the mix, bringing a bit more affordability to the Zen 4 range.

An AMD Radeon RX 7600 on a desk

(Image credit: Future / John Loeffler)

RDNA 3 arrives for real

AMD unleashed its RDNA 3 graphics cards right at the close of 2022, but only the top-tier models, the Radeon RX 7900 series. And the RX 7900 XTX and 7900 XT were all we had until 2023 was surprisingly far along – it wasn’t until May that the RX 7600 pitched up at the other end of the GPU spectrum.

The RX 7600 very much did its job as a wallet-friendly graphics card, mind you, and this GPU seriously impressed us with its outstanding performance at 1080p and excellent value proposition overall. Indeed, the RX 7600 claimed the title of our best cheap graphics card for this year, quite an achievement, beating out Nvidia’s RTX 4060.

Then we had another sizeable pause – which witnessed gamers getting rather impatient – for the gap, or rather gulf, to be filled in between the RX 7600 and RX 7900 models. Enter stage left the RX 7800 XT and the 7700 XT as mid-range contenders in September, one of which really punched above its weight.

An AMD Radeon RX 7800 XT on a table

(Image credit: Future / John Loeffler)

That was the RX 7800 XT and even though it only marginally outdid its predecessor for pure performance, this new RDNA 3 mid-ranger did so well in terms of its price/performance ratio versus its RTX 4070 rival that the AMD GPU scooped the coveted top spot in our best graphics card roundup. (Deposing the RTX 4070, in fact, which had held the number one position since its release six months prior).

As for the RX 7700 XT, that was rather overshadowed by its bigger mid-range sibling here, not making as much sense value-wise as the 7800 XT.

Still, the long and short of it is that AMD bagged both the title of the best GPU for this year, as well as the best budget offering – not too shabby indeed.

From what we saw of sales reports – anecdotally and via the rumor mill – these new desktop graphics cards pepped up AMD’s sales a good deal. While the RX 7900 series GPUs were struggling against Nvidia early in 2023, towards the end of the year, the 7800 XT in particular was really shifting a lot of units (more than the RTX 4070).

While Nvidia is still the dominant desktop GPU power by far, it’s a sure bet AMD regained some turf with these popular RDNA 3 launches in 2023.

Radeon RX 7700 XT and Radeon RX 7800 XT

(Image credit: AMD)

FSR 3 finally turns up

We did a fair bit of waiting for stuff from AMD this year as already observed, and another item to add to the list where patience was definitely required was FSR 3.

FSR is, of course, AMD’s rival to DLSS for boosting frame rates in games, and more specifically, FSR 3 was Team Red’s response to DLSS 3 that uses frame generation technology (inserting extra frames into the game to artificially boost the frame rate).
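The core idea here – synthesizing an extra frame between two rendered frames – can be sketched in a few lines of Python. This is only a conceptual illustration: real FSR 3 and DLSS 3 frame generation use motion vectors and optical flow rather than a plain blend, and the function names below are our own.

```python
# Conceptual sketch of frame generation: a new frame is synthesized
# between two rendered frames, boosting the presented frame rate.
# (Actual FSR 3 / DLSS 3 use motion vectors, not a simple average.)

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Blend two frames (lists of pixel intensities) at position t."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def generate_frames(rendered):
    """Insert one interpolated frame between each pair of rendered
    frames, roughly doubling the number of frames presented."""
    out = []
    for a, b in zip(rendered, rendered[1:]):
        out.append(a)
        out.append(interpolate_frame(a, b))
    out.append(rendered[-1])
    return out

rendered = [[0, 0], [10, 20], [20, 40]]  # three rendered frames
presented = generate_frames(rendered)
print(len(presented))  # 5 frames presented from 3 rendered
```

The trade-off, as with the real tech, is that interpolated frames add latency and can show artifacts, which is why frame generation is paired with latency-reduction features.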

FSR 3 was actually announced in November 2022 – as we covered in our roundup of AMD’s highlights for last year – and we predicted back then that it wouldn’t turn up for ages.

Indeed, it didn’t, and we heard nothing about FSR 3, save for a small info drop for game developers in March, for most of 2023. Then finally, at the end of September, AMD officially released FSR 3.

However, it wasn’t a simple case of that’s that and AMD was suddenly level-pegging with Nvidia. For starters, Nvidia went ahead and pushed out DLSS 3.5 (featuring ray reconstruction), and frankly, AMD’s frame generation feature was quite some way behind Team Green’s in its initial incarnation. It was also not nearly as widely supported – and still isn’t – with adoption moving at a sluggish pace, and only four games making use of FSR 3 so far.

But at least it’s here, and AMD made another important move in December, as the year drew to a close, releasing an improved version of FSR 3. We saw with Avatar: Frontiers of Pandora – the third game to introduce support – that the new version of FSR (3.0.3) runs a good deal more slickly, at least according to some reports.

Avatar: Frontiers of Pandora AMD bundle

(Image credit: AMD / Ubisoft)

On top of this, AMD also made FSR 3 open source. That means more games should be supported soon enough (and modders can bring FSR 3 to some titles themselves – indeed, some already have – but unofficial support will never be quite the same as the developer implementing the tech).

Furthermore, in terms of better support for games, Team Red made another move at the same time as FSR 3. We’re talking about AMD Fluid Motion Frames (AFMF), tech which, as well as being part of FSR 3, is integrated separately at the driver level.

This allows for frame generation boosts to be applied to all games – via the driver, with no need for the game to be coded to support it – with the caveat being that it only works with RX 7000 and 6000 GPUs. Now that’s great, but note that what you’re getting here is a ‘lite’ version of the frame generation process applied in FSR.

As 2023 now comes to a close, AFMF is still in preview (testing) and somewhat wonky, though Team Red has improved the tech a fair bit since launch, much like FSR 3.

In short, it looks like AMD is getting there, and also ushering in innovations such as Anti-Lag+ (for reducing input latency, with RX 7000 GPUs and supported games only, although this has had its own issues). Not to mention the company is wrapping up all this tech in HYPR-RX, an easy-to-use one-click tuning mode that’ll apply the relevant (supported) features to make a given game perform optimally (hopefully).

But there’s still that inevitable feeling of following in Nvidia’s wake when it comes to FSR and related features, with AMD rather struggling to keep up with the good ship Jensen.

Still, AMD appears to have an overarching vision it’s making solid, if somewhat slow, progress towards, but we certainly need to see more games that (officially) support FSR 3 – with an implementation impressive enough to equal DLSS 3 (or get close to it).

Lenovo Legion Go on wooden table

(Image credit: Future)

Portable goodness

This year saw some interesting launches from AMD on the portable device front, not the least of which was the Ryzen Z1 APU. Built on Zen 4, this mobile processor emerged in April to be the engine that several gaming handhelds were built around, notably the Asus ROG Ally and Lenovo Legion Go.

There were two versions of the Z1, the 6-core vanilla chip, and a Z1 Extreme variant which was an 8-core CPU but crucially had a lot more graphics grunt (12 RDNA 3 CUs, rather than just 4 CUs for the baseline processor). The Z1 Extreme proved to be an immense boon to these Windows-powered gaming handhelds, driving the Legion Go to become what we called the true Steam Deck rival in our review.

The weakness of those Windows-toting Steam Deck rivals is, of course, the battery life trade-off (particularly when driving demanding games at more taxing settings). AMD was on hand to help here, too, introducing HYPR-RX Eco profiles to its graphics driver late in the year, which should offer a convenient way to tap into considerable power-savings (without too much performance trade-off – we hope).

Away from handhelds, in December we were also treated to the launch of a range of Ryzen 8000 CPUs for laptops. These ‘Hawk Point’ chips aren’t out yet, but will debut in notebooks in early 2024, although note that they’re Zen 4-based (the same as Ryzen 7000 silicon).

The line-up is led by the flagship Ryzen 9 8945HS, an 8-core processor with integrated graphics (Radeon 780M) that’ll be great for 1080p gaming (with some details toned down, mind). These chips will also benefit from AMD’s XDNA NPU (Neural Processing Unit) for accelerating AI tasks, and Team Red asserted that Hawk Point chips will be 1.4x faster than the Ryzen 7040 series in generative AI workloads – a pretty tasty upgrade.

AMD Instinct MI300A APU

(Image credit: AMD)

AI bandwagon

Those Hawk Point mobile CPUs showed AMD’s growing focus on AI, and this was a broader push for Team Red throughout the year, which comes as no surprise – everyone who was anyone in tech, after all, was investing in artificial intelligence. Moreover, Nvidia made an absolute fortune in the AI space this year, and obviously that didn’t go unnoticed at AMD towers.

As well as incorporating heftier NPUs in its processors, in May AMD tapped Microsoft for resources and cash to help develop AI chips (for the gain of both companies). But the real power move for Team Red came late in the year, when in December AMD revealed a Zen 4 APU for AI applications (the largest chip it has ever made, in fact, bristling with 153 billion transistors).

The Instinct MI300A is loaded with 24 CPU cores plus a GPU with 228 CDNA 3 CUs and eight stacks of HBM3 memory, posing a genuine threat to Nvidia’s AI dominance. AMD’s testing indicates that the MI300A is about on par with Nvidia’s mighty H100 for AI performance, and as the year ended, we heard that firms like Microsoft and Meta are interested in adopting the tech.

AMD said that the Instinct MI300A will be priced competitively to poach customers from Nvidia, as you might expect, while acknowledging that Team Green will of course remain dominant in this space in the near future. However, Lisa Su intends for her firm to take a “nice piece” of a huge AI market going forward, and if the MI300A is anything to go by, we don’t doubt it.

Person in a suit gets angry and smashes the keyboard on the monitor

(Image credit: Reshetnikov_art / Shutterstock)

Year of the gremlins

While AMD had plenty of success stories in 2023, as we’ve seen, there were also lots of things that went wrong. Little things, medium-sized things, and great hulking gremlins crawling around in the works and making life difficult – or even miserable – for the owners of some AMD products who got unlucky.

Indeed, AMD was dogged by lots of issues early in the year, most notably a serious misstep with the cooling (vapor chamber) for RX 7900 XTX graphics cards. Although the flaw only affected a small percentage of reference boards, it’s absolutely one of the biggest GPU blunders we can recall in recent years. (Nvidia’s melting cables with the RTX 4090 being another obvious one).

We also witnessed a worrying flaw with AMD’s Ryzen 7000 CPUs randomly burning out in certain overclocking scenarios. Ouch, in a word.

Other examples of AMD’s woes this year include a graphics driver update in March bricking Windows installations (admittedly in rare cases, but still, this is a nasty thing to happen off the back of a simple Adrenalin driver update), and other driver bugs besides (causing freezing or crashing). And we also saw AMD chips that had security flaws of one kind or another, some more worrying than others.

Not to mention RX 7000 graphics cards consuming far too much power when idling in some PC setups (multiple monitors, or high refresh rate screens – a problem not resolved until near the end of the year, in fact).

There were other hitches besides, but you get the idea – 2023 was a less than ideal time for AMD in terms of gaffes and failures of various natures.

A Gigabyte Radeon RX 7700 XT Gaming OC on a desk

(Image credit: Future / John Loeffler)

Concluding thoughts

Clearly, AMD tried the patience of gamers in some respects this year. First, with those assorted glaring blunders, which doubtless proved a source of frustration for some owners of its products. And second, by making gamers wait an excessively long time for features like FSR 3 – which seemed to take an age to arrive – and ditto for filling out the rest of the RDNA 3 range, as those graphics cards took quite some time to appear.

However, the latter were very much worth the wait. The double whammy for GPUs was a real coup for AMD, releasing the top budget graphics card in the RX 7600, and our favorite GPU of them all, the reigning RX 7800 XT that sits atop our ranking of the top boards available right now.

There were plenty of other highlights, such as releasing the best gaming CPU ever made – in the form of the Ryzen 7 7800X3D – which was a pretty sharp move this year. We also received a top-notch mobile APU for handhelds in the Ryzen Z1 Extreme.

AMD’s GPU sales were appropriately stoked as 2023 rolled on, and FSR – plus other related game boosting tech – seems to be coming together finally, albeit in an overly slow but steady manner as mentioned. In the field of AI, Team Red is suitably ramping up its CPUs, and with the Instinct MI300A accelerator it’s providing a meaningful challenge to Nvidia’s dominance.

In short, despite some worrying wobbles, 2023 was a good year for AMD. The future looks pretty rosy, too, certainly with next-gen Zen 5 processors that look set to get the drop on Intel’s Arrow Lake silicon next year. And some even more tantalizing Zen 5 laptop chips (‘Strix Point’ – sitting above Hawk Point, and sporting XDNA 2 and RDNA 3.5) are inbound for 2024.

Next-gen Radeon GPUs are a little sketchier – RDNA 4 is coming next year, but the range may top out at mid-tier products, as AMD refocuses more on AI graphics cards (as expected in terms of going where the profits are). Those RDNA 4 cards could still pack a value punch, though, and looking at the current mid-range champ, the RX 7800 XT, we’d be shocked if they didn’t.



from TechRadar: computing components news https://ift.tt/wazVhKe
via IFTTT

Thursday 21 December 2023

Will Intel’s cheapest 14th-gen CPU be worth buying? Price leak gives us a hint

https://ift.tt/pYchi15

Intel’s Core i3-14100 processor has had its price leaked by a US retailer, ahead of the CPU’s rumored reveal next month at CES 2024 (along with the rest of the Raptor Lake Refresh line-up not yet released by Team Blue).

On X (formerly Twitter), @momomo_us was hawk-eyed enough to spot the retailer, BLT, posting the price of this 14th-gen processor as $150 (around £120, AU$220). Unlike most leaks, though, this isn’t the retail price, but the price for bulk buying trays of the CPU.


The Core i3-14100 is a quad-core chip (with 8 threads – and no efficiency cores), and represents the least powerful Raptor Lake Refresh processor on the desktop – and thus the most affordable.

Compared to the Core i3-13100, the spec will stay pretty much the same, but the boost speed gets an uplift, so the rumors reckon: 4.7GHz, which is 200MHz faster. That’s quite believable, seeing as Intel appears to be targeting a 200MHz uptick with most (but not all) of its 14th-gen silicon.

In theory, we will see this CPU, and other non-K processors for the 14th-gen, on January 8. (Thus far, only ‘K’ chips, which are unlocked for overclocking, have been released – non-K models can’t be overclocked, and run a little slower than any ‘K’ equivalent).


Analysis: Price considerations

With the price of the 14100 being for volume purchases here (retailers or PC makers buying trays of chips), you might shrug your shoulders and wonder what the relevance is to the average consumer.

Well, the screenshot the leaker provides shows the tray price for the 13100, and it’s almost the same – just a touch under, actually, at $148.

Indeed, in the US, the current retail price for the 13100 stands at this level – $148 (at Newegg, at the time of writing) – so add seasoning, but the intention for Intel would seem to be to debut the Core i3-14100 at just about the same price, or maybe a touch more, for the consumer.

That makes sense as the currently released Raptor Lake Refresh models are pitched at about the same price as their Raptor Lake counterparts. (The 14900K is the same price, with the 14700K and 14600K actually dropping very slightly, but they’re all around the same ballpark as the last-gen really).

It’s possible Intel may go the other direction and notch things up a tiny bit with lower-end CPUs, certainly – but it’s unlikely we’ll see any meaningful increase, or anything to worry about, as this leak suggests.

Via VideoCardz

You might also like



from TechRadar: computing components news https://ift.tt/GVohJdg
via IFTTT

Thursday 14 December 2023

Intel launches Core Ultra processors to power up your next laptop with AI

https://ift.tt/qXIrsTC

Intel Core Ultra is officially here with Intel's AI Everywhere event kicking off today, December 14, 2023.

The new processors, first announced at Intel Innovations a couple of months ago, are Intel's first chips to feature a neural processing unit (NPU).

“AI innovation is poised to raise the digital economy’s impact up to as much as one-third of global gross domestic product,” Intel CEO Pat Gelsinger said in an Intel press release. “Intel is developing the technologies and solutions that empower customers to seamlessly integrate and effectively run AI in all their applications — in the cloud and, increasingly, locally at the PC and edge, where data is generated and used.”

This means that AI applications like those powering ChatGPT and Midjourney are set to become much more ubiquitous. Today those apps require data to be sent over the internet to servers run by OpenAI and others in order to produce an output; Intel’s new chips hold the promise that such applications might start being processed offline on your device rather than on someone else’s – a huge improvement in data security, especially for business and enterprise users.

Intel showing off the power of its new integrated Intel Arc GPU at its AI Everywhere event

(Image credit: Future / Philip Berne)

Bringing the biggest CPU architecture upgrade to the everyday laptop

Intel's new Core Ultra processors introduce three new major architecture features to its laptop portfolio. 

To begin with, there is the move to a multi-chip module (MCM) design, which leaves behind the single-silicon-die design paradigm that Intel pioneered decades ago in favor of several smaller silicon chiplets bonded together to form a single chip. This allows for more versatility in chip design than previous generations of processors in terms of what functionality the resulting chip will have.

Next, there is the neural processor. The NPU is specialized hardware in the chip that handles the machine learning tasks at the heart of the various artificial intelligence applications we’ve seen explode into the mainstream this year, like ChatGPT. Those apps currently require you to send data elsewhere, however, which makes them a poor fit for anything security-sensitive. The NPU will ultimately allow more personal AI applications, like photo editing and document drafting, to run on-device.

You also have the integration of the new Intel Arc GPU. With up to 8 Xe GPU cores, this is roughly equivalent to having an Intel Arc A370M discrete GPU integrated right into the processor, minus the dedicated video memory. An integrated GPU will never be able to match a current-gen discrete GPU in a laptop, but it does mean that more casual users will have access to more powerful graphics technology like real-time ray tracing and hardware-accelerated resolution upscaling, both of which are increasingly used in modern PC games.

Finally, there is the inclusion of new low-power efficiency (LPE) cores, which, alongside the high-power performance cores and more modest efficiency cores, further improve the efficiency of Intel’s laptop lineup – something that took a bit of a hit with the past two generations of Intel chips.

We expect that Intel Evo, Intel's laptop certification that requires at least a nine-hour battery life, will actually become much more common again thanks to these new processors.

Apple M3 Series (2023)

(Image credit: Apple)

Can it compete with the Apple M3 and an increased interest in AMD mobile chips?

Of course, the biggest question is whether the new Intel Core and Core Ultra processors will be able to compete with Apple's newest M3 processors. Apple's chips, which power the Apple MacBook Pro 14-inch and MacBook Pro 16-inch, are some of the best processors ever made, so it remains to be seen how well Intel's new chips stack up.

That said, Intel has about a 70% market share for laptop CPUs, though that number is a bit fuzzy given the huge number and variety of "laptops" on the market. Still, with Apple's move to in-house silicon, Intel is facing unexpected competition from Apple devices while fending off a surge in competition from AMD's mobile processors, which are increasingly sought after in the best gaming laptops.

Many of the best laptops on the market still sport Intel chips, though, and that's not likely to change anytime soon – but Intel's new Core and Core Ultra processors are launching in a much more competitive environment than previous releases, to say the least.



from TechRadar: computing components news https://ift.tt/hcyDadj
via IFTTT

Wednesday 13 December 2023

Zhaoxin Unveils KX-7000 CPUs: Eight x86 Cores at Up to 3.70 GHz

https://ift.tt/38hUTXN

Zhaoxin, a joint venture between Via Technologies and Shanghai Municipal Government, has introduced its Kaixian KX-7000 series of x86 CPUs. Based on the company's Century Avenue microarchitecture, the processor features up to eight general-purpose x86 cores running at 3.70 GHz, while utilizing a chiplet design under the hood. Zhaoxin expects the new CPUs to be used for client and embedded PCs in 2024.

According to details published by Zhaoxin, the company's latest Century Avenue microarchitecture looks to be significantly more advanced than its previous x86 microarchitecture. The new design includes improvements in the CPU core front-end as well as the out-of-order execution engines and back-end execution units. The CPU cores themselves are backed by 4 MB of L2 cache, 32 MB of L3 cache, and a 128-bit memory subsystem supporting up to two channels of DDR5-4800/DDR4-3200. Furthermore, the new CPUs pack up to eight cores, capable of reaching a maximum clockspeed of 3.70 GHz.

As a result, the new CPUs are said to double computational performance compared to their predecessors, the Kaixian KX-6000 series launched in 2018.

On the graphics side of matters, Zhaoxin's Kaixian KX-7000 CPUs also pack the company's new integrated GPU design, which is reported to be DirectX 12/OpenGL 4.6/OpenCL 1.2-capable and offers four times the performance of its predecessor. Though given the rather low iGPU performance of the DirectX 11.1-generation KX-6000, even a 4x improvement would make for a rather modest iGPU in 2024. Principally, the iGPU is there to drive a screen and provide media encode/decode functionality: the KX-7000 iGPU is capable of decoding and encoding H.265/H.264 video at up to 4K, and can drive DisplayPort, HDMI, and D-Sub/VGA outputs.

Another interesting detail about Zhaoxin's KX-7000 processors is that the company says they're using a chiplet architecture, which resembles that of AMD's Ryzen processors. Specifically, Zhaoxin is placing the CPU cores and I/O functions into different pieces of silicon – though it's unclear how many chiplets there are altogether.

On the I/O side of matters, the new CPUs provide 24 PCIe 4.0 lanes, two USB4 roots, four USB 3.2 Gen2 roots, two USB 2.0 roots, and three SATA III ports. And, given the target market, they offer acceleration for the Chinese-standard SM2 and SM3 cryptography specifications.

At the moment, Zhaoxin is not disclosing where it plans to produce its KX-7000 processors, nor what node(s) they'll be built on. Though given Zhaoxin's previous parts and the limited, regional market for the chips, it is unlikely that the company intends to use a leading-edge fabrication process.

Perhaps the final notable detail about Zhaoxin's Kaixian KX-7000 CPUs is that they are set to come in both BGA and LGA packages, something that does not often happen with Chinese CPUs. An LGA form factor will enable an ecosystem of interchangeable and upgradeable chips, which is something we have not seen from Chinese processors for client PCs in recent years.

Zhaoxin says that major Chinese machine manufacturers, including Lenovo, Tongfang, Unigroup, Ascend, Lianhe Donghai, and others, have developed new desktop systems based on the KX-7000 processors. These systems – which will be available next year – will run operating systems like Tongxin, Kylin, and Zhongke Fonde.



from AnandTech https://ift.tt/CNKRoPh
via IFTTT

Tuesday 12 December 2023

Minisforum Launches AR900i: A $559 Core i9-13900HX Mini-ITX Platform

https://ift.tt/Rx9kDTp

Minisforum has launched a new high-performance Mini-ITX motherboard that's based on Intel's 13th Generation Core HX mobile parts. The upsized, highly integrated AR900i platform promises to bring together the power efficiency benefits of a mobile platform with desktop-class performance and features – and all at a rather moderate price.

Minisforum itself calls its AR900i platform an ultimate mobile-on-desktop (MoDT) platform and indeed it is quite capable thanks to its 14-core Core i7-13650HX (up to 4.90 GHz, 24 MB LLC) or 24-core Core i9-13900HX (up to 5.40 GHz, 36 MB LLC) processors, which are designed for high-end laptops. These CPUs are configured to dissipate up to 100W of thermal energy (lower than 157W set by Intel), so they are equipped with a rather advanced cooling system with four heat pipes and a 12-cm fan. The CPUs can be mated with two DDR5 memory modules in an SO-DIMM form-factor. 

Being aimed primarily at gamers seeking both performance and portability, the Minisforum AR900i comes with a PCIe 5.0 x16 slot for graphics cards as well as four M.2-2280 slots for PCIe 4.0 SSDs (two on top of the motherboard and two on the underside). To ensure consistent performance of high-end drives under heavy loads, there is an active cooling system for them, though it's unclear how loud this SSD fan is. Speaking of fans, the motherboard has a connector for a system fan, which will certainly come in handy for high-end builds.

When it comes to expandability, the Minisforum AR900i platform does not disappoint and resembles other high-end Mini-ITX motherboards with an M.2-2230 slot for a Wi-Fi and Bluetooth adapter, a built-in 2.5 GbE, three display outputs (DisplayPort 1.4, HDMI 2.0, USB4 Type-C), plenty of USB ports (USB4, two USB 3.2 Type-A, two USB 2.0 Type-A), and 5.1-channel audio connectors.

Getting a high-end CPU and a high-end motherboard is quite an investment nowadays, so one would expect Minisforum's AR900i to be quite expensive. Indeed, it is priced at $689, but since the manufacturer sells virtually all of its products at a discount, it can be obtained for $559.



from AnandTech https://ift.tt/2xk9YQy
via IFTTT

Yet another leak suggests Intel Meteor Lake’s integrated graphics could replace discrete laptop GPUs

https://ift.tt/ld24SBN

Intel’s Meteor Lake processors, which are about to be unleashed in laptops, look pretty peppy going by a new leak – and they might power some excellent affordable gaming laptops and handhelds in the future.

Indeed, AMD might be worried about the challenge posed to its Ryzen Z1 Extreme chip which is the engine of choice for various popular gaming portables (like the Asus ROG Ally and Lenovo Legion Go).

The leak is a 3DMark gaming test where the Core Ultra 7 155H processor with integrated Arc Alchemist graphics is put through its paces (on Bilibili, plus a Cinebench score is provided).

As PC Gamer reports, this was flagged up by regular leaker HXL on X (formerly Twitter), and as ever, season rumors liberally.


We can see that the Core Ultra 7 155H (which has 6 performance cores and 8 efficiency cores) scores 3,339 in Time Spy (with a graphics score of 3,077 and a CPU score of 6,465).

How does that stack up to AMD’s Z1 Extreme? Our sister site PC Gamer ran a comparison and found the Asus ROG Ally (with Z1 Extreme) managed a Time Spy score of 3,150 (with its integrated RDNA 3 graphics scoring 2,834, alongside a CPU score of 8,574).

So, we can see that while the Z1 Extreme is a great deal ahead in processor performance, the Intel Core Ultra 7 is a little ahead overall and wins the battle of the graphics – the most important consideration for gaming, of course.
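For a rough sense of the gaps implied by those leaked scores, here’s a quick Python check using the Time Spy sub-scores quoted above (illustrative arithmetic only; the helper function is our own):

```python
# Relative leads implied by the leaked 3DMark Time Spy sub-scores.
def pct_lead(a, b):
    """Percentage by which score a leads score b."""
    return (a / b - 1) * 100

# Graphics: Core Ultra 7 155H (3,077) vs Ryzen Z1 Extreme (2,834)
print(round(pct_lead(3077, 2834), 1))  # Intel's iGPU ~8.6% ahead

# CPU: Ryzen Z1 Extreme (8,574) vs Core Ultra 7 155H (6,465)
print(round(pct_lead(8574, 6465), 1))  # AMD's CPU ~32.6% ahead
```

So the graphics gap is modest but real, while the CPU gap is much wider – which matches the overall read on this leak.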

To some extent, the disparity in CPU performance may be down to the Intel chip only running on performance cores, and not bringing its efficiency cores into play – we don’t know what’s going on under the hood with this leak exactly.

Interestingly, the Cinebench R23 result shows that AMD's APU is around 15% quicker (for both single-core and multi-core), which isn't as much of a gap as the difference in Time Spy CPU scores above – and at least suggests that things will be closer in this respect.


Analysis: Promising early signs

Whatever the case, overall processor performance looks to be a solid win for AMD, but in the graphics department, the Meteor Lake chip appears to have the edge. That’s exciting because the power of the on-board integrated graphics is crucial for budget gaming laptops and gaming handhelds alike.

And it’s promising given that the Core Ultra 7 is a 28W chip compared to the Z1 running at 30W (docked) in PC Gamer’s testing, so Intel’s silicon has a slight power advantage here.

That said, this is just a single synthetic test, and the bigger picture is inevitably focused on real-world gaming tests, and averaged multiple runs across different titles.

We’ll get the broader swathe of testing soon enough, but it looks like the Core Ultra 7 packs a good bit of graphics grunt in a reasonable power envelope – and of course, regarding the latter, battery life will be an important part of the equation for portable devices of all kinds.

from TechRadar: computing components news https://ift.tt/Pjw84BK
via IFTTT

Monday 11 December 2023

Nvidia could be about to release a cheaper RTX 3050 – and more budget goodness with an RTX 4050 GPU

https://ift.tt/4Ta9CZo

Nvidia might be preparing to replace the RTX 3050 graphics card with a new version – a cheaper one, with less VRAM – and there’s a hint that this move may be about making room for another model at the budget end of the spectrum, perhaps an RTX 4050.

The latter bit is an airier piece of speculation, mind – and indeed we should take all this with a healthy spoonful of salt (or maybe not so healthy) – but we have already heard from the rumor mill that a new RTX 3050 variant could be inbound.

The previously rumored theory, now backed up by Chinese tech site Benchlife (hat tip to VideoCardz), is that Nvidia is scrapping the RTX 3050 as it stands, which is equipped with 8GB of VRAM, and bringing in a refreshed version with a cut-down 6GB (and a trimmed down memory bus, too).

This would mean the RTX 3050 6GB could be pitched at a cheaper $179 to $189, or thereabouts, in the US (and in line with that pricing elsewhere – so about 10% to 15% less than the RTX 3050 8GB currently sells for, roughly).

Previous buzz from the grapevine has suggested that the RTX 3050 6GB could be cut down on CUDA cores as well – shedding up to 20% of its core count, perhaps – and in theory it’d also have much lower power usage.

Nvidia will supposedly deploy this new RTX 3050 6GB in January 2024.

Benchlife adds that there is speculation that Nvidia may be planning a “new GeForce RTX 40 series graphics card to meet consumers in different price ranges,” referring to something to fill the gap between the cheaper RTX 3050 6GB and the RTX 4060. And that must surely be an RTX 4050 – if it exists.


Analysis: Stiff budget competition

The rumor about the RTX 4050 is couched in very vague terms, and as we already mentioned, we should be especially skeptical here as a result. It is something that gamers on a budget have been hoping for, and an RTX 4050 does exist in laptop form already – so who knows, we may just see it.

Alternatively, bringing out a new RTX 3050 6GB that can be sold at a cheaper price point could simply be Nvidia trying to make itself a bit more compelling at the budget end of the market – though the rumored cutbacks make the card seem less than tempting.

If the CUDA core count is indeed dropped substantially (from the existing 2,560 cores) as some rumors have suggested, that’s going to make this an unappealing option – although surely it’d be priced more cheaply, if that were the case?

What’s more likely is that the VRAM and memory bus will be dropped, but not the core count – if the purported price is correct, that’d make much more sense. It’d also align with the refreshed RTX 3050 laptop GPU (which emerged at the start of the year and has 2,560 cores).

Whatever Nvidia does, it will surely have to pitch this rumored new RTX 3050 to compete with AMD’s RX 6600, a rival last-gen option that currently represents stiff competition in the best cheap GPU department, with its price having dropped considerably these days.

On a more general note, it’s good to see a trend of repurposing older generations of silicon into wallet-friendly contemporary products – not just from Nvidia, but also AMD with its rumored moves on the CPU front, bringing in new 3D V-Cache processors based on Zen 3 (supposedly the Ryzen 7 5700X3D and Ryzen 5 5500X3D, which could be seriously popular options at the low end).

from TechRadar: computing components news https://ift.tt/WVKcAvP
via IFTTT

Wednesday 6 December 2023

AMD reveals a full stack of shiny new Ryzen laptop processors, supercharged with AI

https://ift.tt/NQ4EWGD

Hot on the heels of a catty attack from Intel, AMD has just announced the next lineup of Ryzen processors that we’ll be seeing in next year’s best laptops – a whopping nine new chips, all bearing the Ryzen 8040 name and equipped with AMD’s ‘XDNA’ AI technology.

The new processors, codenamed ‘Hawk Point’, are already on their way to laptop manufacturers and will be available in new devices in early 2024. All of them will use AMD’s Zen 4 CPU core architecture and RDNA 3 graphics architecture, with the flagship being the Ryzen 9 8945HS (catchy, I know). I was pleased to see that AMD isn’t ditching the low-spec chips either, with a quad-core Ryzen 3 8440U set to bring the XDNA neural engine to more affordable laptops.

While a lot of the press release from AMD was quite focused on enterprise applications of on-chip AI (rather than consumer use cases), the key takeaway here – other than the fact that we’re getting a bunch of new chips – is that local AI is about to become a lot more widespread.

Bot in your laptop

AMD already introduced its neural processing unit (NPU) in the Ryzen 7040 series, and it represented an important step forward in the expansion of local AI. For the uninitiated, ‘local’ AI refers to on-chip machine learning capabilities, letting you run AI-powered workloads directly on your device – as opposed to popular AI tools like ChatGPT, which currently rely on cloud computing, with users accessing them over the internet.

There are plenty of advantages to on-chip AI: for starters, it won’t require a mandatory internet connection to use, since there’s no cloud server involved. Keeping things on your device (in this case, a spate of new Ryzen laptops from key manufacturers like Asus, Acer, and Lenovo) also helps mitigate some security concerns surrounding AI, since you won’t need to upload any data to an external platform.

The Ryzen 8040 news came alongside an ‘AI roadmap’ from AMD, detailing that 2024 will also play host to the next-gen ‘Strix Point’ processors - which, importantly, will feature the second-generation XDNA 2 NPU. AMD promises ‘more than 3x generative AI NPU performance’ compared to the first-gen XDNA NPU, a big step up if Team Red can deliver.

Setting Strix Point aside, the upcoming Hawk Point chips look very impressive. AMD has claimed that the chips offer a 1.4x performance uptick in generative AI workloads compared to the Ryzen 7040 series, and also provided some comparative figures against Intel’s current Core i9-13900H, noting that the flagship 8040 chip should outperform its Intel counterpart in virtually every area. Needless to say, these could be some of the best processors around, and I’m excited to get my hands on one of these new laptops!

from TechRadar: computing components news https://ift.tt/k0x2SJj
via IFTTT

AMD Widens Availability of Ryzen AI Software For Developers, XDNA 2 Coming With Strix Point in 2024

https://ift.tt/L1Vmd4N

Further to the announcement that AMD is refreshing its Phoenix-based 7040HS mobile series with the newer 'Hawk Point' 8040HS family for 2024, AMD is set to drive more AI development within the PC market. To provide a more holistic end-user experience for adopters of hardware with the Ryzen AI NPU, AMD has made the latest version of its Ryzen AI Software available to the broader ecosystem. This allows software developers to deploy machine learning models into their software, delivering more comprehensive features in tandem with the Ryzen AI NPU and Microsoft Windows 11.

AMD has also officially announced the successor to their first generation of the Ryzen AI (XDNA), which is currently in AMD's Ryzen 7040HS mobile series and is driving the refreshed Hawk Point Ryzen 8040HS series. Promising more than 3x the generative AI performance of the first generation XDNA NPU, XDNA 2 is set to launch alongside AMD's next-generation APUs, codenamed Strix Point, sometime in 2024.

AMD Ryzen AI Software: Version 1.0 Now Widely Available to Developers

Along with the most recent release of its Ryzen AI software (Version 1.0), AMD is making it more widely available to developers. This gives software engineers and developers the tools and capabilities to create new features and optimizations that use the power of generative AI and large language models (LLMs). New to Version 1.0 of the Ryzen AI software is support for the open-source ONNX Runtime machine learning accelerator, including mixed-precision quantization support covering the UINT16/32 and INT16/32 integer formats as well as the FLOAT16 floating point format.
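To illustrate what those reduced-precision formats buy you, here is a minimal pure-Python sketch of symmetric INT16 quantization – the general technique, not AMD’s actual quantizer, which handles this inside the Ryzen AI toolchain:

```python
# Minimal sketch of symmetric per-tensor INT16 quantization, the kind of
# reduced-precision format named in the Ryzen AI Software 1.0 release notes.
# Illustrative only -- not AMD's actual quantizer implementation.

INT16_MAX = 32767

def quantize_int16(values):
    """Map floats onto int16 codes using a single per-tensor scale factor."""
    scale = max(abs(v) for v in values) / INT16_MAX
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize_int16(quantized, scale):
    """Recover approximate floats from the int16 representation."""
    return [q * scale for q in quantized]

weights = [0.25, -1.5, 0.003, 0.9]
q, scale = quantize_int16(weights)
restored = dequantize_int16(q, scale)

# With 16 bits, the round-trip error is tiny relative to the value range.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max round-trip error: {max_err:.2e}")
```

The point of running models this way on an NPU is that 16-bit integer math is far cheaper in silicon than 32-bit floating point, while the quantization error stays negligible for inference.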

AMD Ryzen AI Version 1.0 also supports PyTorch and TensorFlow 2.11 and 2.12, which broadens the range of models and LLMs that software developers can run when creating new and innovative features. AMD's collaboration with Hugging Face also offers a pre-optimized model zoo, a strategy designed to reduce the time and effort required by developers to get AI models up and running. This also makes the technology more accessible to a broader range of developers right from the outset.

AMD's focus isn't just on providing the hardware capabilities through the XDNA-based NPU but on allowing developers to exploit these features to their fullest. The Ryzen AI software is designed to facilitate the development of advanced AI applications, such as gesture recognition, biometric authentication, and other accessibility features, including camera backgrounds.

Offering early access support for models like Whisper and LLMs, including OPT and Llama-2, indicates AMD's growing commitment to giving developers as many tools as possible. These tools are pivotal for building natural language speech interfaces and unlocking other Natural Language Processing (NLP) capabilities, which are increasingly becoming integral to modern applications.

One of the key benefits of the Ryzen AI Software is that it allows software running these AI models to offload AI workloads onto the Neural Processing Unit (NPU) in Ryzen AI-powered laptops. The idea is that offloading these workloads to the NPU, rather than running them on the Zen 4 cores, delivers better power efficiency, which should help improve overall battery life.
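From the developer's side, that offloading happens through ONNX Runtime's execution provider mechanism. The sketch below shows the general pattern; the "VitisAIExecutionProvider" name reflects how the Ryzen AI stack plugs into ONNX Runtime, but treat the exact provider name and setup as assumptions and check AMD's Ryzen AI documentation for the authoritative details:

```python
# Rough sketch of routing inference to an NPU via ONNX Runtime execution
# providers. The provider name below is an assumption based on the Vitis AI
# lineage of the Ryzen AI stack -- verify against AMD's docs.

def pick_providers(available):
    """Prefer the NPU provider when present; always keep the CPU fallback."""
    preferred = ["VitisAIExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

try:
    import onnxruntime as ort  # assumes the Ryzen AI build of onnxruntime
    providers = pick_providers(ort.get_available_providers())
except ImportError:
    providers = pick_providers(["CPUExecutionProvider"])

print("would create InferenceSession with providers:", providers)
# On a Ryzen AI laptop, this is where the NPU offload would happen
# ("model.onnx" is a hypothetical model file):
# session = ort.InferenceSession("model.onnx", providers=providers)
```

Keeping the CPU provider in the list means the same binary still runs on machines without an NPU, just without the power-efficiency benefit described above.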

A complete list of the Ryzen AI Software Version 1.0 changes can be found here.

AMD XDNA 2: More Generative AI Performance, Coming With Strix Point in 2024

Alongside all the refinements to the Ryzen AI NPU block used in the current Ryzen 7040 mobile chips and the upcoming Ryzen 8040 series comes the announcement of its successor. AMD has announced its XDNA 2 NPU, designed to succeed the current Ryzen AI (XDNA) NPU and boost on-chip AI inferencing performance in 2024 and beyond. It's worth highlighting that XDNA is a dedicated AI accelerator block integrated into the silicon; it came out of AMD's 2022 acquisition of Xilinx, which developed the technology behind Ryzen AI and is driving AMD's commitment to AI in the mobile space.

While AMD hasn't provided any technical details about XDNA 2 yet, it claims more than 3x the generative AI performance of XDNA, which is currently used in the Ryzen 7040 series. It must be noted that these gains are estimates from AMD's engineering staff, not a guarantee of final performance.

Looking at AMD's current Ryzen AI roadmap from 2023 (Ryzen 7040 series) to 2025, we can see that the next generation XDNA 2 NPU is coming in the form of Strix Point-based APUs. Although details on AMD's upcoming Strix Point processors are slim, we now know that AMD's XDNA 2-based NPU and Strix Point will start shipping sometime in 2024, which could point to a general release towards the second half of 2024 or the beginning of 2025. We expect AMD to start detailing their XDNA 2 AI NPU sometime next year.



from AnandTech https://ift.tt/IgGZ7UP
via IFTTT

Tuesday 5 December 2023

Intel Wins Appeal on VLSI Case, $2.18B Judgement Reversed

https://ift.tt/2f0YaQ5

A U.S. appeals court on Monday overturned a 2021 patent infringement ruling against Intel that awarded patent holding company VLSI $2.18 billion over multiple patent violations. In a two-part decision, the court reversed a previous verdict that found that Intel violated a frequency management patent, while affirming the violation of a second patent on memory voltage reduction – but sending it back to a lower court on the grounds that the damages were improperly calculated in the original trial, Reuters reports.

Back in 2021, a district court in Waco, Texas, awarded VLSI $2.18 billion in patent infringement compensation from Intel. This amount included $1.5 billion for infringing a patent related to frequency management originally developed by SigmaTel ('759'), and $675 million for a patent on reducing memory voltage, originally from Freescale ('373'). Intel challenged this ruling, but the attempt was unsuccessful in August 2021. Consequently, Intel sought the Patent Trial and Appeal Board's (PTAB) intervention to invalidate both patents, which the PTAB did earlier this year.

The PTAB's rulings released Intel from the obligation to compensate VLSI for the alleged infringement of its '759' and '373' patents. Meanwhile, VLSI exercised its right to contest the PTAB's decisions, bringing the case to the U.S. Court of Appeals for the Federal Circuit. This court concluded that Intel did indeed infringe the '373' patent, but ordered a new trial on the grounds that damages were improperly calculated the first time around.

Intel said that the remaining patent has little value, though it remains to be seen whether the new trial will award VLSI a different sum and whether Intel will appeal once again.

Intel and VLSI are engaged in extensive legal disputes across various states and internationally, involving several allegations of Intel infringing on VLSI's patents. These patents were initially developed by Freescale, SigmaTel, and NXP, but were eventually sold to VLSI to be part of its larger portfolio. While some of these allegations have been dismissed by courts and others withdrawn by VLSI, numerous cases remain active.

Fortress Investment Group, the private equity firm that controls VLSI, is owned by SoftBank. SoftBank also holds a controlling stake in Arm, a competitor of Intel in the CPU market. Intel and Apple have accused VLSI, Fortress, and related entities of engaging in illegal patent-aggregation practices. Meanwhile, back in May, Mubadala Investment agreed to purchase the majority of Fortress from SoftBank.



from AnandTech https://ift.tt/1dbyJqw
via IFTTT

Monday 4 December 2023

Intel claims AMD is selling snake oil with its Ryzen 7000-series chips

https://ift.tt/rnIoT1d

The rivalry between Intel and AMD runs deep, with the two companies constantly vying to take the best processor crown. But few people would have expected the aggression contained in a new playbook presented by Intel, which insinuates that AMD is selling “snake oil” to unsuspecting customers.

The claims were made in a slide deck dubbed “Core Truths,” the main argument of which is that AMD’s CPU naming conventions may confuse consumers and disguise old architecture inside newer-seeming products.

For instance, Intel directly calls out the Ryzen 5 7520U chip, which it notes is built on Zen 2 architecture that debuted in 2019, despite the chip going on sale in 2022. Intel says its own comparable chip (the Core i5 1335U) is 83% faster than the Ryzen 5 7520U, even though AMD markets its chip as containing brand-new tech.

Intel goes beyond merely arguing that this is misleading, instead putting it alongside images of a snake oil seller and a dodgy-looking used car salesman. The implication from Intel is clear: AMD's product naming is intentionally confusing and could end up bamboozling buyers.

Does Intel have a point?

A render of an AMD Ryzen 7000 laptop APU.

(Image credit: AMD)

It’s hard to argue that Intel’s slide deck is entirely over the top. AMD’s CPU naming convention is somewhat confusing. It works like this: instead of differentiating CPU architecture by generation, AMD now says that all of its mobile chips fall under the latest Ryzen 7000 name.

The third number in the chip’s name denotes the architecture – the ‘2’ in 7520U means Zen 2, for instance – but that departs from AMD’s long-held naming practices and could confuse buyers, who may see the ‘7’ and assume they’re getting a brand-new architecture.
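To make the scheme concrete, here’s a small sketch that decodes a Ryzen 7000 mobile model number digit by digit. The digit-to-meaning tables below reflect how AMD’s decoder ring has been widely reported, but treat the exact mappings as assumptions and double-check against AMD’s own documentation:

```python
# Sketch of decoding AMD's Ryzen 7000 mobile model numbers (e.g. "7520U").
# The tables below follow how the scheme has been reported; treat them as
# assumptions and verify against AMD's official naming guide.

ARCHITECTURE = {  # third digit: the actual CPU core generation
    "1": "Zen / Zen+",
    "2": "Zen 2",
    "3": "Zen 3 / Zen 3+",
    "4": "Zen 4",
}

SEGMENT = {  # second digit: the marketing tier
    "1": "Athlon Silver", "2": "Athlon Gold",
    "3": "Ryzen 3", "4": "Ryzen 3",
    "5": "Ryzen 5", "6": "Ryzen 5",
    "7": "Ryzen 7", "8": "Ryzen 7/9", "9": "Ryzen 9",
}

def decode(model):
    """Split a model string like '7520U' into its advertised meanings."""
    digits, suffix = model[:4], model[4:]
    return {
        "portfolio year digit": digits[0],  # '7' = the 2023 portfolio
        "segment": SEGMENT.get(digits[1], "?"),
        "architecture": ARCHITECTURE.get(digits[2], "?"),
        "suffix": suffix,  # e.g. 'U' = low-power mobile
    }

print(decode("7520U"))  # the headline point: a '7000-series' chip on Zen 2
```

This is exactly Intel’s complaint in miniature: the leading ‘7’ advertises the 2023 portfolio, while the architecture digit buried in third place reveals a 2019-era Zen 2 design.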

But Intel itself doesn’t have entirely clean hands. The company’s (just released) 14th-generation desktop chips are essentially refreshed 13th-generation Raptor Lake processors, while the upcoming Core 100 series is rumored to also recycle Raptor Lake parts.

So yes, Intel does have a point in calling out AMD’s confusing chip naming system, and we shouldn’t ignore that. But it might be seen as a somewhat cynical play when Intel itself has been guilty of incorporating old architectures into new products. It just underlines the fact that when shopping for PC parts, you need to do your research to ensure you get exactly what you’re expecting.

from TechRadar: computing components news https://ift.tt/yrL487N
via IFTTT

The Be Quiet! Dark Rock Elite CPU Cooler Review: Where Quiet Meets Quality

https://ift.tt/3HKo4qu

While stock coolers are adequate for handling the basic thermal load of a CPU, they often fall short in noise efficiency and cooling performance. For this reason, advanced users and system builders typically bypass stock coolers in favor of aftermarket solutions that better align with their specific requirements. The high-end segment of this market is exceptionally competitive, as manufacturers strive to offer the most effective cooling solutions.

Be Quiet!, established over two decades ago, has a reputation for quiet computing solutions. After making gradual progress initially, the company took significant strides after 2010, positioning itself as a leading manufacturer of advanced PC components and peripherals. Today, Be Quiet! boasts an extensive range of PC power and cooling products, with its air coolers being particularly noteworthy.

In this review, we focus on the Dark Rock Elite, Be Quiet!'s formidable entry into the high-end CPU air cooler segment. This cooler is designed to rival top-tier models like the Noctua NH-D15, with massive proportions for optimum cooling efficiency. The Dark Rock Elite is crafted to meet and exceed the demands of the most powerful mainstream CPUs, setting itself apart amid fierce competition from various manufacturers. Our review will delve into the cooler's capabilities and its place in the aftermarket cooling market.



from AnandTech https://ift.tt/NlOEbTF
via IFTTT