Friday 30 June 2023

We found your AMD mid-range CPU but you better act fast

https://ift.tt/U5efp7S

Micro Center announced that it would be launching the limited edition AMD Ryzen 5 5600X3D CPU on July 7th, 2023 for $229. But here’s the kicker: it will release exclusively in its stores.

AMD and Micro Center struck a deal that gives the retailer sole distribution of the mid-range processor until supplies run out. The Ryzen 5 5600X3D looks like a solid gaming chip, with six cores and 12 threads, 96MB of total L3 cache courtesy of AMD's 3D V-Cache technology for boosting gaming performance, and compatibility with AM4 platforms that use DDR4 memory.

Due to its more affordable price, it's ideal for budget gaming machines, offering quite a bit of bang for the buck. Once it releases, the Ryzen 5 5600X3D could even be one of the best gaming CPUs on the market. And, even better for AMD, it will most likely serve as a direct competitor to Intel's Core i5-13400 processors, which have been holding down the mid-range market: Team Red's latest offering carries a similar MSRP but promises stronger gaming performance than Team Blue's chip.

According to Tom's Hardware, Micro Center will also sell a $329 bundle that includes the Ryzen 5 5600X3D, an ASUS B550-Plus TUF motherboard, and 16GB of G.Skill Ripjaws V DDR4 memory. At launch, the retailer will also offer a pre-built PowerSpec G516 system with an AMD Radeon RX 6650 XT graphics card, 16GB of RAM, and 500GB of NVMe SSD storage for $849.

Sounds like a good deal for consumers

AMD has dominated the mid-range CPU scene for years now, with the exception of Intel's Core i5-13400 processors, and this limited edition Ryzen 5 5600X3D is set to continue that trend, albeit only for the duration of a limited release.

In fact, the Ryzen 5 5600X3D could easily rival chips like AMD's Ryzen 9 7950X or Ryzen 7 5800X3D in terms of performance for the price, possibly making it the best bang-for-your-buck chip in a while. What makes the pricing interesting is that the base version, the AMD Ryzen 5 5600X, launched at a higher price ($299), though it now goes for a much more reasonable $199.

The bundles that Micro Center is offering are pretty sweet as well, with excellent deals for a processor, motherboard, and plenty of RAM. And if you’ve been looking for a budget gaming machine, the build that it’s offering is quite good, especially compared to the prices of other PowerSpec G-series gaming PCs that go for way more.

It seems that AMD is trying to head off its competition by offering more budget-minded chips and PC builds, and working with a well-known tech retailer like Micro Center is a solid way to go about it. If this takes off, it could inspire AMD (and other manufacturers) to try similar initiatives in the future.



from TechRadar: computing components news https://ift.tt/JqAj1cT
via IFTTT

Tuesday 27 June 2023

Noctua Releases Direct Die Kit for Delidded Ryzen 7000 CPUs

https://ift.tt/R6gVW1P

Noctua has announced a unique kit designed to enable the company's coolers to be installed on delidded AMD Ryzen 7000-series processors. The NM-DD1 kit, which can be either ordered from the company or 3D printed at home, was designed in collaboration with Roman 'der8auer' Hartung, a prominent overclocker and cooling specialist.

An effective method of enhancing the cooling of overclocked AMD Ryzen 7000-series processors involves removing their built-in heat spreaders (a process known as delidding) and attaching cooling systems directly to their CCD dies. This typically reduces CPU temperatures by 10°C – 15°C, though in some cases the improvement can approach 20°C, according to Noctua. Lowering CPU temperature by such a margin lets owners take advantage of higher overclocking potential and higher boost clocks, or simply reduce fan speeds and enjoy quieter operation.

The problem is that standard coolers are not built for use with delidded CPUs, which is why Noctua is releasing its kit. The NM-DD1 kit includes spacers placed under the heatsink's securing brackets to compensate for the height of the removed IHS, along with extended custom screws for reattaching the brackets with the spacers in place.

While the kit greatly simplifies cooling a delidded AM5 CPU, concerns remain: delidding is risky and voids the CPU's warranty. Furthermore, all the additional hardware needed for the delidding process must be acquired separately.

To further improve cooling of AMD's AM5 processors, Noctua says that its NM-DD1 can be paired with the company's recently introduced offset AM5 mounting bars, potentially leading to a further 2°C temperature reduction.

The NM-DD1 kit can be purchased from Noctua's website for €4.90. Alternatively, customers can create the kit's spacers at home using 3D printing, with STL files available from Printables.com. The assembly process requires either four M3x12 screws (for the NM-DDS1) or a single M4x10 screw (for the NM-DDS2).

"Delidding and direct die cooling will void your CPU's warranty and bear a certain risk of damaging it, so this certainly isn't for everyone," said Roland Mossig (Noctua CEO). "However, the performance gains to be had are simply spectacular, typically ranging from 10 to 15°C but in some cases, we have even seen improvements of almost 20°C in combination with our offset mounting bars, so we are confident that this is an attractive option for enthusiast users. Thanks to Roman for teaming up with us in order to enable customers to implement this exciting tuning measure with our CPU coolers!"



from AnandTech https://ift.tt/ZH7vOx4
via IFTTT

Thursday 22 June 2023

Gigabyte's Low-Cost Mini-ITX A620 Motherboard Supports Ryzen 9 7950X and 7950X3D CPUs

https://ift.tt/ygDKXwV

Gigabyte has quietly introduced one of the industry's first inexpensive motherboards for AMD's AM5 processors in Mini-ITX form-factor. The most unexpected peculiarity of Gigabyte's A620I AX motherboard — based on AMD's low-cost A620 chipset that only supports essential features — is that it supports AMD's top-of-the-range Ryzen 9 7950X3D and Ryzen 9 7950X processors.

Despite its positioning as an entry-level motherboard for AMD's Ryzen 7000-series CPUs based on the Zen 4 microarchitecture, Gigabyte's Ultra Durable A620I-AX can handle all of AMD's AM5 CPUs released to date, including the relatively inexpensive Ryzen 5 7600 with a 65W TDP as well as the range-topping Ryzen 9 7950X3D with 3D V-Cache rated for 120W and the Ryzen 9 7950X rated for 170W. Given that AMD's A620 platform is not meant for overclocking, the Ryzen 9 7950X cannot be overclocked on this motherboard, but even support for this CPU is unexpected.

AMD's Ryzen 7000-series CPUs are hungry for memory bandwidth, and the UD A620I-AX does not disappoint here: it comes with two slots for DDR5 memory that officially support modules rated for up to DDR5-6400 with EXPO profiles. High-performance DDR5 DIMMs will be beneficial not only for the Ryzen 9 and Ryzen 7 CPUs aimed at demanding gamers, but also for cheap PCs running AMD's upcoming AM5 APUs with built-in graphics, as memory bandwidth is crucial for integrated GPUs. The motherboard even has two display outputs to support iGPUs.
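As a rough illustration of what DDR5-6400 means for a bandwidth-hungry chip or iGPU, here is a minimal back-of-the-envelope sketch, assuming a standard dual-channel configuration with a 64-bit data path per channel; real-world throughput will be lower than this theoretical peak.

```python
# Theoretical peak bandwidth for the board's two DDR5 slots (dual channel).
# Assumes DDR5-6400 modules and a 64-bit (8-byte) data path per channel.

def ddr_peak_bandwidth_gbs(rate_mt_s: float, bus_width_bits: int = 64,
                           channels: int = 2) -> float:
    """Peak bandwidth in GB/s: transfers/s x bytes per transfer x channels."""
    bytes_per_transfer = bus_width_bits // 8
    return rate_mt_s * 1e6 * bytes_per_transfer * channels / 1e9

print(ddr_peak_bandwidth_gbs(6400))  # ~102.4 GB/s across both channels
```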

Speaking of gaming, the UD A620I-AX motherboard naturally lacks any kind of PCIe Gen5 support, but it does have a PCIe 4.0 x16 slot for graphics cards and an M.2-2280 slot with a PCIe 4.0 x4 interface for SSDs. For those who need additional storage space, the platform has two SATA ports.

As for overall connectivity, the UD A620I-AX motherboard features a Wi-Fi 6 + Bluetooth adapter, a 2.5GbE port, USB 3.2 Gen1/2 ports (including a Type-C port), and audio connectors. While this may not seem like much, entry-level gaming systems do not use a lot of high-performance peripherals anyway. Furthermore, AMD's A620 platform does not support USB4.

Pricing details for the UD A620I-AX are not yet available, but some early reports suggest that it will be priced below or around $100, like other A620-based offerings. Meanwhile, given its support for high-end Ryzen 9 processors and its Mini-ITX form factor, it is possible that Gigabyte may charge a premium for the UD A620I-AX. Therefore, it remains to be seen how reasonably priced this motherboard will be when it hits the market.



from AnandTech https://ift.tt/NIJHsUX
via IFTTT

Thursday 15 June 2023

Intel's new processor branding drops the 'i' and the ball

https://ift.tt/nmbGthJ

Intel has made its rumored processor rebranding official with the announcement of the new Intel Core and Intel Core Ultra brands, phasing out the 'i' in its Core series processors that the company has used for more than a decade.

"Our client roadmap demonstrates how Intel is prioritizing innovation and technology leadership with products like Meteor Lake, focused on power efficiency and AI at scale," Caitlin Anderson, Intel vice president and general manager of Client Computing Group Sales, said in a statement. "To better align with our product strategies, we are introducing a branding structure that will help PC buyers better differentiate the best of our latest technology and our mainstream offerings."

According to Intel, the new branding will apply to its new Core processors starting with Meteor Lake, and will initially feature Intel Core 3, Intel Core 5, and Intel Core 7 processors, though it's not clear if there will be an option for an Intel Core 9.

The new Intel Core Ultra brand logo

(Image credit: Intel)

We do know that this highest tier will at least fall under the new Intel Core Ultra brand, which is set to debut in the second half of 2023, according to the company. This new branding differentiates between its mainstream and "advanced" lineup of processors, with the Ultra series seemingly geared toward the high-end enthusiast market and enterprise users while the vanilla Intel Core processors are more mainstream-facing chips.

This rebranding is a significant leap for Team Blue, which has spent 15 years building up the reputation of its Core i-series processors, with its latest Raptor Lake processors easily earning spots on our best processor list.

Another major change is that Intel will no longer reference the specific generation in its marketing or product markings, so no '14th-generation' or similar. The generation number will still be identifiable in the specific processor's model number (presumably something like Intel Core Ultra 9-14XX), though Intel hasn't settled on a new numbering convention for the rebranded chips yet.

15 years of hard work building a stellar brand, gone like that

The badges for Intel Core and Intel Core Ultra processors

(Image credit: Intel)

With the rebranding of its Core series processors, Intel leaves behind more than a decade of hard work building up the reputation of its flagship processors, which is not something that Intel would do lightly.

I still don't understand why it is necessary, however, and the differentiation between Core and Core Ultra is also somewhat head-scratching, especially since there will at least be some overlap between the two brands when it comes to its middle tiers. There will be both Intel Core 5 and Intel Core 7 processors, but also Intel Core Ultra 5 and Intel Core Ultra 7, and this doesn't do that much to alleviate customer confusion when it comes to which processor to buy for their specific needs.

This may become clearer with time, but the difference between a Core and a Core Ultra processor is going to be just as opaque to most mainstream buyers (and even to many professional customers), so the rebranding doesn't look, at first glance, like it makes anything clearer.

The new Intel Core 5 badge on a laptop

(Image credit: Intel)

The mainstream customers that Intel seems most concerned with here are just as likely to ask the floor associate at Best Buy or Currys which one is better (or Google it) as they were with the old numbering convention. Plus, the enthusiast community that builds their own PCs is already very familiar with Intel's existing branding. Put simply, this isn't clarifying things for anyone who is out to buy an individual Intel processor from Newegg.

And, for enterprise users who need extra security features and such, Intel already has its vPro branding (though it will further differentiate between vPro Enterprise and vPro Essential), so designating a chip a Core Ultra 9 vPro Enterprise or Core Ultra 9 vPro Essential only seems to add complexity to an already complex system, and now everyone has to relearn everything from scratch.

Obviously, given time, we will learn this new system just as we did when Intel introduced its Core processors back in 2006, but there's no getting around the fact that the Intel Core i3, Intel Core i5, Intel Core i7, and Intel Core i9 processors have been an incredibly simple product tiering system that is easily recognizable and easy to explain. It's why AMD more or less just copied the convention wholesale when it introduced its Ryzen processors back in 2016.

Intel says that the rebranding reflects a major shift in the chips' architecture, and so on that level, it makes sense that a rebranding might be appropriate, but there is no way to shake the feeling that with this move Intel is leaving behind something important. Let's just hope Intel doesn't come to regret it.



from TechRadar: computing components news https://ift.tt/6tGXKnE
via IFTTT

Intel To Launch New Core Processor Branding for Meteor Lake: Drop the 'i', Add Ultra Tier

https://ift.tt/n76fPgy

As first reported back in late April, Intel is embarking on a journey to redefine its client processor branding, the biggest such shift in the past 15 years of the company. Having already made waves by altering its retail packaging on premium desktop chips such as the Core i9-11900K and Core i9-12900K, the tech giant aims to introduce a new naming scheme across its client processors, signaling a transformative phase in its client roadmap.

This shift is due to begin in the second half of the year, when Intel will launch their highly anticipated Meteor Lake CPUs. Meteor Lake represents a significant leap forward for the company in regards to manufacturing, architecture, and design – and, according to Intel, is prompting the need for a fresh product naming convention.

The most important changes include dropping the 'i' from the naming scheme and opting for a more straightforward Core 3, 5, and 7 branding structure for Intel's mainstream processors. The other notable inclusion, which is now officially confirmed, is that Intel will bifurcate the Core brand a bit and place its premium client products in their own category, using the new Ultra moniker. Ultra chips will signify a higher performance tier and target market for the parts, and will be the only place Intel uses their top-end Core 9 (previously i9) branding.



from AnandTech https://ift.tt/qAKPosy
via IFTTT

Intel Prevails in Ongoing Legal Fight Against VLSI With Appeal Board Victory

https://ift.tt/g1lPs4E

In the ongoing legal battle between Intel and VLSI that started years ago and with billions at stake, Intel seems to be prevailing. The U.S. Patent Trial and Appeal Board (PTAB) has recently invalidated two VLSI-owned patents 'worth' a total of $2.1 billion, while another major dispute potentially 'worth' $4.1 billion for Intel was dissolved late last year.

The litigation between Intel and VLSI is multifaceted to say the least, involving numerous cases across various U.S. and international courts. The sum that Intel would have to pay should it lose in court would be in the billions of dollars, as VLSI contends that Intel infringed on 19 of its patents, which originated from IP filed by Freescale, SigmaTel, and NXP. While a number of these allegations have been ruled on in some fashion, either against Intel, dismissed by court juries, or with the involved patents revoked, some allegations are still awaiting a ruling.

But let's start with the latest developments.

Back in May, the PTAB invalidated the frequency management patent ('759'), finding it 'unpatentable as obvious.' This month, the same board found that another patent in the $2.1 billion case, the memory voltage reduction method patent ('373'), was also 'unpatentable.' The two patents were originally issued to SigmaTel and Freescale. These verdicts by the PTAB could potentially exempt Intel from making payments to VLSI for allegedly violating its '759' and '373' patents. On the flip side, VLSI retains the option to challenge these PTAB rulings at the U.S. Court of Appeals for the Federal Circuit.

"We find [Intel] has demonstrated by a preponderance of evidence that the challenged claims are unpatentable," a ruling by the U.S. Patent Trial and Appeal Board reads.

Two years ago, a local judge in Waco, Texas, ruled in VLSI's favor and determined Intel owed $2.18 billion in damages for infringing two patents. One was a frequency management patent developed by SigmaTel, which was assigned $1.5 billion, and the other a memory voltage reduction technique created by Freescale, accounting for $675 million. Intel made an unsuccessful attempt to overturn this ruling in August 2021, which led them to seek the PTAB's ruling to void both patents.

Intel and VLSI previously agreed to resolve the Delaware-based portion of their $4 billion patent dispute late last year. Meanwhile in a separate Texas case, a jury ruled that Intel owed VLSI nearly $949 million for infringing its 7,242,552 patent. This particular patent details a technique meant to alleviate problems arising from pressure applied on bond pads.

Although the legal war does not look to be over yet, Intel seems to be gaining the upper hand in its legal battles with VLSI.

Sources: Reuters, JD Supra



from AnandTech https://ift.tt/6lr1oBE
via IFTTT

Wednesday 14 June 2023

Asus ROG Ally Is Now Available: A $700 Handheld Powerhouse

https://ift.tt/KPLudTc

Asus this week started global sales of its ROG Ally portable game console. The Asus take on Valve's Steam Deck and other portables offers numerous advantages, including higher performance enabled by AMD's Ryzen Z1 Extreme processor, broad compatibility with games and the latest features courtesy of Windows 11, and a Full-HD 120 Hz display. Furthermore, the handheld can also be turned into a fully-fledged desktop PC.

The top-of-the-range Asus ROG Ally promises to be a real portable powerhouse, as it is built around AMD's Ryzen Z1 Extreme processor that uses the company's Phoenix silicon fabbed on TSMC's N4 (4 nm-class) technology. This configuration, which is similar to AMD's Ryzen 7 7840U CPU, features eight Zen 4 cores and a 12 CU RDNA 3-based GPU that promises solid performance in most games on the built-in Full HD display.

To maintain steady performance for the APU that can dissipate heat up to 30W, Asus implemented an intricate cooling system featuring anti-gravity heat pipes, a radiator with 0.1 mm fins, and two fans. 

Speaking of performance, those who want to enjoy ROG Ally games at a higher resolution and with higher performance on an external display or TV can do so by attaching one of Asustek's ROG XG Mobile external graphics solutions, such as the flagship ROG XG Mobile with Nvidia's GeForce RTX 4090 Laptop GPU for $1,999.99, or the more moderately priced XG Mobile with AMD's Radeon RX 6850M XT for $799.99. Compatibility with eGFX solutions is a rather unique feature that sets it apart from other portable consoles and makes it a rather decent gaming PC.

As for memory and storage, the ROG Ally features 16GB of LPDDR5-6400 memory and a 512GB M.2-2230 SSD with a PCIe 4.0 interface. Additionally, for users wishing to extend storage without disassembly, the console incorporates a microSD card slot that's compatible with UHS-II.

Another feature that makes ROG Ally stand out is its 7-inch display with a resolution of 1920x1080 and a maximum refresh rate of 120 Hz. To enhance gaming aesthetics, the console's display — covered in Gorilla Glass Victus for extra protection — uses an IPS-class panel with a peak luminance of 500 nits and features Dolby Vision HDR support. Adding to the overall gaming experience, the ROG Ally also comes with a Dolby Atmos-certified audio subsystem with Smart Amp speakers and noise cancellation technology.

While the Asus ROG Ally certainly comes in a portable game console form-factor, it is essentially a mobile PC and like any computer, it is designed to deliver standard portable computer connectivity features. Accordingly, the console comes with a Wi-Fi 6E and Bluetooth adapter, a MicroSD card slot for added storage, a USB Type-C port for charging and display output, an ROG XG Mobile connector for attaching external GPUs, and a TRRS audio connector for wired headsets.

To make the ROG Ally comfortable to use, Asustek's engineers did a lot to balance its weight and keep it around 600 grams, which was a challenge as the game console uses a very advanced mobile SoC that needs a potent cooling system. Balancing device weight against SoC performance required a trade-off, so Asus equipped the system with a relatively small and lightweight 40Wh battery. With this battery, the ROG Ally can run for up to 2 hours under heavy gaming workloads, as corroborated by early reviews.

This week Asus begins selling the range-topping version of the ROG Ally game console based on AMD's Ryzen Z1 Extreme processor, which it first teased back in April and then formally introduced in mid-May. This unit costs $699 in the U.S. and is available from Best Buy and from Asus directly. In Europe, the portable console can reportedly be pre-ordered for €799, whereas in the U.K. it can be pre-ordered for £899. Later on, Asus will introduce a version of the ROG Ally based on the vanilla Ryzen Z1 processor that offers lower performance but is expected to cost $599.



from AnandTech https://ift.tt/SRPvYw9
via IFTTT

Tuesday 13 June 2023

AMD Expands AI/HPC Product Lineup With Flagship GPU-only Instinct MI300X with 192GB Memory

https://ift.tt/qrcOl91

Alongside their EPYC server CPU updates, as part of today’s AMD Data Center event, the company is also offering an update on the status of their nearly-finished AMD Instinct MI300 accelerator family. The company’s next-generation HPC-class processors, which use both Zen 4 CPU cores and CDNA 3 GPU cores on a single package, have now become a multi-SKU family of XPUs.

Joining the previously announced 128GB MI300 APU, which is now being called the MI300A, AMD is also producing a pure GPU part using the same design. This chip, dubbed the MI300X, uses just CDNA 3 GPU tiles rather than a mix of CPU and GPU tiles in the MI300A, making it a pure, high-performance GPU that gets paired with 192GB of HBM3 memory. Aimed squarely at the large language model market, the MI300X is designed for customers who need all the memory capacity they can get to run the largest of models.

First announced back in June of last year, and detailed in greater depth back at CES 2023, the AMD Instinct MI300 is AMD’s big play into the AI and HPC market. The unique, server-grade APU packs both Zen 4 CPU cores and CDNA 3 GPU cores on to a single, chiplet-based chip. None of AMD’s competitors have (or will have) a combined CPU+GPU product like the MI300 series this year, so it gives AMD an interesting solution with a truly united memory architecture, and plenty of bandwidth between the CPU and GPU tiles.

MI300 also includes on-chip memory via HBM3, using 8 stacks of the stuff. At the time of the CES reveal, the highest capacity HBM3 stacks were 16GB, yielding a chip design with a maximum local memory pool of 128GB. However, thanks to the recent introduction of 24GB HBM3 stacks, AMD is now going to be able to offer a version of the MI300 with 50% more memory – or 192GB. Which, along with the additional GPU chiplets found on the MI300X, are intended to make it a powerhouse for processing the largest and most complex of LLMs.
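As a quick sanity check of the capacity figures quoted above, the math is simply eight HBM3 stacks multiplied by the per-stack capacity; a minimal sketch:

```python
# Quick check of the MI300 memory capacities quoted above:
# eight HBM3 stacks per package, at 16GB or 24GB per stack.
HBM3_STACKS = 8

for gb_per_stack in (16, 24):
    print(f"{HBM3_STACKS} x {gb_per_stack}GB = {HBM3_STACKS * gb_per_stack}GB")
# 8 x 16GB = 128GB  (original MI300A figure)
# 8 x 24GB = 192GB  (MI300X with 24GB stacks)
```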

Under the hood, MI300X is actually a slightly simpler chip than MI300A. AMD has replaced MI300A's trio of CPU chiplets with just two CDNA 3 GPU chiplets, resulting in a 12 chiplet design overall: 8 GPU chiplets and what appears to be another 4 IO memory chiplets. Otherwise, despite excising the CPU cores (and de-APUing the APU), the GPU-only MI300X looks a lot like the MI300A. And clearly, AMD is aiming to take advantage of the synergy in offering both an APU and a flagship GPU built from the same package design.

Raw GPU performance aside (we don't have any hard numbers to speak of right now), a big part of AMD's story with the MI300X is going to be memory capacity. Just offering a 192GB chip on its own is a big deal, given that memory capacity is the constraining factor for the current generation of large language models (LLMs) for AI. As we’ve seen with recent developments from NVIDIA and others, AI customers are snapping up GPUs and other accelerators as quickly as they can get them, all the while demanding more memory to run even larger models. So being able to offer a massive, 192GB GPU that uses 8 channels of HBM3 memory is going to be a sizable advantage for AMD in the current market – at least, once MI300X starts shipping.
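For a sense of why that capacity matters, here is a rough, weight-only footprint estimate for a few illustrative model sizes. The parameter counts and the 16-bit precision are assumptions for illustration, not figures from AMD; real deployments also need memory for activations and KV caches.

```python
# Rough weight-only memory footprint for a few illustrative model sizes.
# Parameter counts and 16-bit precision are assumptions for illustration.

def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9

for params in (70, 130, 175):
    fits = "fits" if weights_gb(params) <= 192 else "exceeds"
    print(f"{params}B params @ FP16 ~ {weights_gb(params):.0f}GB -> {fits} 192GB")
```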

The MI300 family remains on track to ship at some point later this year. According to AMD, the 128GB MI300A APU is already sampling to customers now. Meanwhile the 192GB MI300X GPU will be sampling to customers in Q3 of this year.

It also goes without saying that, with this announcement, AMD has made it clear that they're doing a flexible XPU design at least 3 years before rival Intel. Whereas Intel scrapped their combined CPU+GPU Falcon Shores product for a pure GPU Falcon Shores, AMD is now slated to offer a flexible CPU+GPU/GPU-only product as soon as the end of this year. In this timeframe, it will be going up against products such as NVIDIA's Grace Hopper superchip, which, although not an APU/XPU either, comes very close by linking up NVIDIA's Grace CPU with a Hopper GPU via a high-bandwidth NVLink. So while we're waiting on further details on MI300X, it should make for a very interesting battle between the two GPU titans.

Overall, the pressure on AMD with regards to the MI300 family is significant. Demand for AI accelerators has been through the roof for much of the past year, and MI300 will be AMD’s first opportunity to make a significant play for the market. MI300 will not quite be a make-or-break product for the company, but besides getting the technical advantage of being the first to ship a single-chip server APU (and the bragging rights that come with it), it will also give them a fresh product to sell into a market that is buying up all the hardware it can get. In short, MI300 is expected to be AMD’s license to print money (a la NVIDIA’s H100), or so AMD’s eager investors hope.

AMD Infinity Architecture Platform

Alongside today’s 192GB MI300X news, AMD is also briefly announcing what they are calling the AMD Infinity Architecture Platform. This is an 8-way MI300X design, allowing for up to 8 of AMD’s top-end GPUs to be interlinked together to work on larger workloads.

As we’ve seen with NVIDIA’s 8-way HGX boards and Intel’s own x8 UBB for Ponte Vecchio, an 8-way processor configuration is currently the sweet spot for high-end servers. This is both for physical design reasons – room to place the chips and room to route cooling through them – as well as the best topologies that are available to link up a large number of chips without putting too many hops between them. If AMD is to go toe-to-toe with NVIDIA and capture part of the HPC GPU market, then this is one more area where they’re going to need to match NVIDIA’s hardware offerings.

AMD is calling the Infinity Architecture Platform an “industry-standard” design. According to AMD, they're using an OCP server platform as their base here; and while this implies that MI300X is using an OAM form factor, we're still waiting to get explicit confirmation of this.



from AnandTech https://ift.tt/I56TYFG
via IFTTT

AMD: EPYC "Genoa-X" CPUs With 1.1GB of L3 Cache Now Available

https://ift.tt/zCPvEXZ

Alongside today’s EPYC 97x4 “Bergamo” announcement, AMD’s other big CPU announcement of the morning is that their large cache capacity “Genoa-X” EPYC processors are shipping now.  First revealed by AMD back in June of last year, Genoa-X is AMD’s now obligatory V-cache equipped EPYC server CPU, augmenting the L3 cache capacity of AMD’s core complex dies by stacking a 64MB L3 V-cache die on top of each CCD. With this additional cache, a fully-equipped Genoa-X CPU can offer up to 1152MB of total L3 cache.

Genoa-X is the successor to AMD’s first-generation V-cache part, Milan-X. Like its predecessor, AMD is using cache die stacking to add further L3 cache to otherwise regular Genoa Zen 4 CCDs, giving AMD a novel way to produce a high-cache chip design without having to actually lay out and fab a completely separate die. In this case, with 12 CCDs on a Genoa/Genoa-X chip, this allows AMD to add 768MB of additional L3 cache to the chip.
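For those keeping score, the cache figures work out as follows; a minimal sketch assuming the standard 32MB of native L3 per Zen 4 CCD plus the 64MB V-cache die on each of the 12 CCDs:

```python
# How the Genoa-X cache figures above add up: 12 CCDs, each with 32MB of
# native L3 plus a stacked 64MB V-cache die.
CCDS = 12
NATIVE_L3_MB = 32
VCACHE_MB = 64

added_l3 = CCDS * VCACHE_MB                    # 768MB of extra L3 from V-cache
total_l3 = CCDS * (NATIVE_L3_MB + VCACHE_MB)   # 1152MB of total L3
print(added_l3, total_l3)
```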

Like its predecessor, these high-cache SKUs are aimed at a niche market segment of workloads that benefit specifically from the additional cache, which AMD terms their “technical computing” market. To make full use of the additional cache, a workload needs to be cache capacity limited – that is to say, it needs to significantly benefit from having more data available on-chip via the larger L3 cache. This is typically only a subset of server/workstation workloads, such as fluid dynamics, databases, and electronic design automation, which is why these high-cache chips serve a narrower portion of the market. But, as we saw with Milan-X, in the right situation the performance benefits can be significant.

As these are otherwise stock Genoa chips, Genoa-X chips use the same SP5 socket as Genoa and Bergamo. AMD hasn’t disclosed the TDPs, but based on Milan-X, we’re expecting a similar range of TDPs. The additional cache and its placement on top of the CCD means that V-cache equipped CCDs are a bit more power hungry, and the cache die does pose some additional challenges with regards to cooling. So there are some trade-offs involved in performance gains from the extra cache versus performance losses from staying within the SP5 platform’s TDP ranges.

As with Bergamo, we expect to have a bit more on Genoa-X soon. So stay tuned!



from AnandTech https://ift.tt/rBkmFHX
via IFTTT

PCI Express 7.0 Spec Hits Draft 0.3: 512GBps Connectivity on Track For 2025 Release

https://ift.tt/O4AxPQw

In what’s quickly becoming a very busy week for data center and high-performance computing news, the PCI Special Interest Group (PCI-SIG) is hosting its annual developers conference over in Santa Clara. The annual gathering for the developers and ecosystem members of the industry’s preeminent expansion bus offers plenty of technical sessions for hardware devs, but for outsiders the most important piece of news to come from the show tends to be the SIG’s annual update on the state of the ecosystem. And this year is no exception, with a fresh update on the development status of PCIe 7.0, as well as PCIe 6.0 adoption and cabling efforts.

With PCI Express 6.0 finalized early last year, the PCI-SIG quickly moved on to starting development work on the next generation of PCIe, 7.0, which was announced at last year’s developer’s conference. Aiming at a 2025 release, PCIe 7.0 aims to once again double the amount of bandwidth available to PCIe devices, bringing a single lane up to 16GB/second of full-duplex, bidirectional bandwidth – and the popular x16 slot up to 256GB/second in each direction.
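Those figures follow directly from PCIe's doubling cadence; here is a minimal sketch, taking roughly 1GB/s per lane per direction for PCIe 3.0 as the baseline:

```python
# The quoted PCIe 7.0 numbers follow the bus's doubling cadence:
# roughly 1GB/s per lane per direction at PCIe 3.0, doubling each generation.
per_lane_gbs = 1.0  # approximate PCIe 3.0 per-lane, per-direction bandwidth

for gen in range(3, 8):
    print(f"PCIe {gen}.0: ~{per_lane_gbs:.0f}GB/s per lane, "
          f"~{per_lane_gbs * 16:.0f}GB/s per direction for x16")
    per_lane_gbs *= 2
# PCIe 7.0 works out to ~16GB/s per lane and ~256GB/s each way for x16,
# i.e. ~512GB/s of total bidirectional bandwidth.
```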



from AnandTech https://ift.tt/wa0MAP5
via IFTTT

AMD Intros EPYC 97x4 Bergamo CPUs: 128 Zen 4c CPU Cores For Servers Shipping Now

https://ift.tt/a9EzebY

Kicking off a busy day of product announcements and updates for AMD’s data center business group, this morning AMD is finally announcing their long-awaited high density “Bergamo” server CPUs. Based on AMD’s density-optimized Zen 4c architecture, the new EPYC 97x4 chips offer up to 128 CPU cores, 32 more cores than AMD’s current-generation flagship EPYC 9004 “Genoa” chips. According to AMD, the new EPYC processors are shipping now, though we’re still awaiting further details about practical availability.

AMD first teased Bergamo and the Zen 4c architecture over 18 months ago, outlining their plans to deliver a higher density EPYC CPU designed particularly for the cloud computing market. The Zen 4c cores would use the same ISA as AMD’s regular Zen 4 architecture – making both sets of architectures fully ISA compatible – but it would offer that functionality in a denser design. Ultimately, whereas AMD’s mainline Zen 4 EPYC chips are designed to hit a balance between performance and density, these Zen 4c EPYC chips are purely about density, boosting the total number of CPU cores available for a market that is looking to maximize the number of vCPUs they can run on top of a single, physical CPU.

While we’re awaiting additional details on the Zen 4c architecture itself, at this point we do know that AMD has taken several steps to boost their CPU core density. This includes redesigning the architectural layout to favor density over clockspeeds – high clockspeed circuits are a trade-off with density, and vice versa – as well as cutting down the amount of cache per CPU core. AMD has also outright stuffed more CPU cores within an individual Core Complex Die (CCD); whereas Zen 4 is 8 cores per CCD, Zen 4c goes to 16 cores per CCD. Which, amusingly, means that the Zen 4c EPYC chips have fewer CCDs overall than their original Zen 4 counterparts.

Despite these density-focused improvements, Bergamo is still a hefty chip overall in regards to the total number of transistors in use. A fully kitted out chip is comprised of 82B transistors, down from roughly 90B transistors in a full Genoa chip. Which, accounting for the larger number of CPU cores available with Bergamo, works out to a single Zen 4c core having about 68% of the transistor count of a Zen 4 core, when amortized over the entire transistor count of the chip. In reality, the savings at the CPU core level alone are likely not as great, but it goes to show how many transistors AMD has been able to save by cutting down on everything that isn’t a CPU core.
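That ~68% figure can be reproduced by amortizing each chip's total transistor count over its core count; a crude estimate, as noted, since IO dies and other uncore logic get lumped in with the cores:

```python
# Reproducing the ~68% figure above by amortizing total transistor count
# over core count (a crude estimate: IO dies and uncore are lumped in).
bergamo_transistors, bergamo_cores = 82e9, 128   # Zen 4c
genoa_transistors, genoa_cores = 90e9, 96        # Zen 4

per_zen4c = bergamo_transistors / bergamo_cores  # ~0.64B per core
per_zen4 = genoa_transistors / genoa_cores       # ~0.94B per core
print(f"Zen 4c is ~{per_zen4c / per_zen4:.0%} of Zen 4 per amortized core")
```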

Meanwhile, as these are a subset of the EPYC 9004 series, the 97x4 EPYC chips are socket compatible with the rest of the 9004 family, using the same SP5 socket. A BIOS update will be required to use the chips, of course, but server vendors will be able to pop them into existing designs.

As noted earlier, the primary market for the EPYC 97x4 family is the cloud computing market – the ‘c’ in Zen 4c even stands for “cloud”, according to AMD. The higher core counts and less aggressive clockspeeds make the resulting chips, on a core-for-core basis, more energy efficient than Genoa designs. Which, for AMD’s target market, is a huge consideration, given that power is one of their greatest ongoing costs. As part of today’s presentation, AMD is touting a 2.7x improvement in energy efficiency, though we’re unclear what that figure is in comparison to.

With their higher core density and enhanced energy efficiency, AMD is especially looking to compete with Arm-based rivals in this space, with Ampere, Amazon, and others using Arm architecture cores to fit 128 (or more) cores into a single chip. AMD will also eventually be fending off Intel in this space, though not until Sierra Forest in 2024.  

Pure compute users will also want to keep an eye on these new Bergamo chips, as the high core count changes the current performance calculus a bit. In regards to pure compute throughput, on paper the new chips offer even more performance than 96 core Genoa chips, as the extra 32 CPU cores more than offsets the clockspeed losses. With that said, the cache and other supporting hardware of a server CPU boost performance in other ways, so the performance calculus is rarely so simple for real-world workloads. Still, if you just need to let rip a lot of semi-independent threads, then Bergamo may offer some surprises.

We’ll have more on the EPYC 97x4 series chips in the coming days and weeks, including more on the Zen 4c core architecture, as AMD releases more information. So until then, stay tuned.



from AnandTech https://ift.tt/ydmQtxp
via IFTTT

Monday 12 June 2023

Microsoft to Bring Game Pass Games to NVIDIA's GeForce Now

https://ift.tt/DgsQT7t

Microsoft on Sunday announced plans to bring select PC Game Pass games to NVIDIA's GeForce Now cloud streaming service later in 2023. The move will allow gamers to enjoy Microsoft's curated collection of PC games on high-end hardware in the cloud without purchasing either the games or a high-end gaming device (PC, Xbox), all for a monthly fee. 

"Game Pass members will soon be able to stream select PC games from the library through NVIDIA GeForce Now," wrote Joe Skrebels, Xbox Wire Editor-in-Chief, in a blog post. "This will enable the PC Game Pass catalog to be played on any device that GeForce Now streams to, like low-spec PCs, Macs, Chromebooks, mobile devices, TVs, and more, and we will be rolling this out in the months ahead."

NVIDIA's GeForce Now is a cloud gaming service known for offering cutting-edge gaming hardware, including the highly acclaimed GeForce RTX 4080 graphics card that is offered in the top-tier subscription ($19.99 per month) aimed at demanding gamers. Meanwhile, Microsoft's PC Game Pass subscription ($9.99 per month) gives access to over 100 titles of different genres plus a library of Electronic Arts games. The value proposition of Game Pass on GeForce Now is evident, as it allows gamers to play high-quality PC games on an advanced rig for $30 a month, or $360 a year, which is considerably cheaper than buying a gaming PC.

There are a couple of things to keep in mind though. Microsoft has only confirmed that a "selected range" of Game Pass PC games will be compatible with GeForce Now. Also, the question of whether EA Play games will be supported is yet to be clarified. Thus, it remains uncertain how many games from Microsoft's subscription will eventually be compatible with the GeForce Now platform.

Furthermore, cloud game streaming comes with its own quirks, such as longer loading times and increased latency, so the overall experience is not exactly the same as that provided by a local gaming PC with a high-end CPU and a GeForce RTX 4080 graphics board. Still, NVIDIA's GeForce Now at the GeForce RTX 4080 tier provided a better experience than Microsoft's Xbox Cloud Gaming service, offering higher performance and lower latency, according to a comparison by The Verge.

Bringing PC Game Pass games to NVIDIA's GeForce Now platform — which currently supports Epic's Game Store and Valve's Steam — could potentially enhance the appeal of both services for gamers. Meanwhile, some might perceive this move as a strategic effort to pacify regulators in light of Microsoft's ongoing acquisition of Activision Blizzard. In general, the move shows that the software giant is willing to distribute its services and games on platforms beyond Windows and Xbox.

In separate news, Microsoft introduced a new, larger capacity version of its Xbox Series S console. The all-black Xbox comes with a 1 TB SSD - up from 512GB on the base model - and carries a $50 price premium, putting the final price tag at $349. The new system will be available starting September 1, 2023.



from AnandTech https://ift.tt/cxACrJP
via IFTTT

Thursday 8 June 2023

Cooler Master Introduces MasterAir MA824 Stealth 250W Dual-Tower CPU Cooler

https://ift.tt/RkdTmQy

Cooler Master has strengthened the brand's already prolific CPU air cooler portfolio with the latest addition in the form of the MasterAir MA824 Stealth. There's also a 30th-anniversary edition of the new cooler with an ARGB fan. The MasterAir series is already home to some of the more prominent high-performance CPU coolers; therefore, the MasterAir MA824 Stealth will fit right in.

The MasterAir MA824 Stealth features a dual-tower design like many popular CPU air coolers, such as Cooler Master's previously released MasterAir MA624 Stealth or the Noctua NH-D15. Aesthetically-conscious consumers will value the MasterAir MA824 Stealth's all-black exterior since it helps the CPU cooler blend into most PC builds, and the top cover is a nice touch that adds some flair to the design. Checking in at 6.4 x 5.9 x 6.5 inches (162.2 x 150.6 x 165.6 mm), the MasterAir MA824 Stealth is significantly larger than the MasterAir MA624 Stealth (5.7 x 6 x 6.3 inches) but falls in the same ballpark as the Noctua NH-D15 (6.3 x 5.9 x 6.5 inches).

Cooler Master MasterAir MA824 Stealth
Cooling Capacity: 250W
Type: Dual-Tower Cooler
Dimensions: 162.2 x 150.6 x 165.6 mm
Fans: 1 x 135mm "Mobius" fan (63.1 CFM); 1 x 120mm "Mobius" fan (63.6 CFM)
RGB: Yes (Anniversary Edition)
Supported Sockets: Intel LGA1700, LGA1200, LGA1151, LGA1150, LGA1156, LGA1155; AMD AM5, AM4
Warranty: 5 Years
Price: £99.99

Featuring a nickel-plated copper base that makes direct contact with the processor, the MasterAir MA824 Stealth transfers heat to the dual-tower heatsinks through eight composite copper heat pipes, two more than the prior MasterAir MA624 Stealth and the Noctua NH-D15. Cooler Master also revised the fin stack design and thickness on the MasterAir MA824 Stealth compared to the MasterAir MA624 Stealth to improve thermal performance. Cooler Master had displayed the MasterAir MA824 Stealth at Computex 2023 with a coating that reacts to temperature: the heat pipes and heatsink would turn red while dissipating heat. It's a shame that the cool feature didn't make it to the retail product.

Despite the large heatsinks, the MasterAir MA824 is compatible with tall memory modules. With its default configuration, there are 1.7 inches of clearance space and up to 2.6 inches in a single-fan setup. The MasterAir MA824 is more generous than the Noctua NH-D15, which provides 1.3 and 2.5 inches of clearance in default and single-fan mode, respectively. The MasterAir MA824 Stealth is compatible with many recent and old sockets. It supports LGA1700, LGA1200, and LGA115x from Intel and the AM4 and AM5 sockets.

Cooler Master equips the MasterAir MA824 Stealth with two Mobius cooling fans with loop dynamic bearings. The vendor placed the pair of PWM cooling fans in a staggered arrangement to maximize push-pull performance, and consumers can add a third fan of their choice. The middle fan is a Mobius 135 (135mm) that delivers up to 1.92 mmH₂O of static pressure with a maximum noise level of 24.6 dB(A), whereas the other is a Mobius 120 (120mm), which is rated for 2.69 mmH₂O at 22.6 dB(A). On the MasterAir MA824 Stealth 30th Anniversary Edition, the 120mm fan has vibrant ARGB lighting. On paper, Cooler Master's Mobius fans seemingly perform better than Noctua's NF-A15 PWM 140mm fans that cool the NH-D15, which have a static pressure of 1.51 mmH₂O at 19.2 dB(A). While the NF-A15 PWM lacks the raw performance of the Cooler Master fans, it is substantially quieter during operation. Cooler Master rates the MasterAir MA824 Stealth's cooling capacity at up to 250 watts with the dual-fan layout.

The MasterAir MA824 Stealth comes with a five-year warranty. Cooler Master didn't reveal the U.S. pricing or availability for the MasterAir MA824 Stealth or MasterAir MA824 Stealth 30th Anniversary Edition. The cooler is already available in the U.K. with a £99.99 (~$125) MSRP.



from AnandTech https://ift.tt/6FgLvSd
via IFTTT

Monday 5 June 2023

Apple M2 Ultra chip announced at WWDC 2023 as upgrade to Mac Studio and Mac Pro

https://ift.tt/5TOEA2x

Apple just announced the new Apple M2 Ultra chip at WWDC 2023, filling out its M2 lineup even further with a powerful new workstation chip capable of handling some of the most demanding content creation workloads.

“M2 Ultra delivers astonishing performance and capabilities for our pro users’ most demanding workflows, while maintaining Apple silicon’s industry-leading power efficiency,” Johny Srouji, Apple’s senior vice president of Hardware Technologies, said in a statement emailed to TechRadar. “With huge performance gains in the CPU, GPU, and Neural Engine, combined with massive memory bandwidth in a single SoC, M2 Ultra is the world’s most powerful chip ever created for a personal computer.”

The new SoC's CPU should be 20% faster than the M1 Ultra's, with a 30% faster GPU and a 40% faster Neural Engine. It also increases the maximum amount of unified memory to 192GB, and in the new Mac Studio you get the latest HDMI upgrade, enabling 8K video output. The chip can also drive six Pro Display XDR monitors and supports 22 streams of 4K video input.

You can get the new M2 Ultra in the Mac Studio or the Mac Pro, with the latter offering PCIe expansion slots, which is a major boon for industrial users who need high-end cards for storage, encoding, and much more.

You will be able to order the new Mac Studio and Mac Pro with the Apple M2 Ultra chip starting today, with deliveries coming next week.

Apple cuts its final tie to Intel with the M2 Ultra

With the introduction of the Apple M2 Ultra, Apple has finally cut its remaining ties to Intel, which still supplied Intel Xeon processors for the Mac Pro. With the move to Apple silicon now complete, Apple has full control over its ecosystem in a way it hasn't in more than a decade.

Intel has been rocked a bit on both sides, with Apple taking its popular laptops and desktops in-house, and AMD offering very competitive Ryzen processors for desktops (and increasingly laptops).

The final move from Intel Xeon to M2 Ultra though does come with some degree of risk. A lot of industries aren't as tied specifically to Apple hardware as they might have been in years past. With the increasing power of Nvidia's GPUs and AMD Threadripper, Apple has a lot more competition than it did in the professional/industrial space.

How many people make the jump to M2 Ultra remains to be seen, especially with the Mac Pro, but there are undoubtedly a lot of excited Apple fans out there.



from TechRadar: computing components news https://ift.tt/kprFaXv
via IFTTT

Friday 2 June 2023

Next-Generation Memory Modules Show Up at Computex

https://ift.tt/t9rnzyc

Dynamic random access memory is an indispensable part of all computers, and requirements for DRAM — such as performance, power, density, and physical implementation — tend to change now and then. In the coming years, we will see new types of memory modules for laptops and servers as traditional SO-DIMMs and RDIMMs/LRDIMMs seem to run out of steam in terms of performance, efficiency, and density.

ADATA demonstrated potential candidates to replace SO-DIMMs and RDIMMs/LRDIMMs from client and server machines, respectively, in the coming years, at Computex 2023 in Taipei, Taiwan, reports Tom's Hardware. These include Compression Attached Memory Modules (CAMMs) for at least ultra-thin notebooks, compact desktops, and other small form-factor applications; Multi-Ranked Buffered DIMMs (MR-DIMMs) for servers; and CXL memory expansion modules for machines that need extra system memory at a cost that is below that of commodity DRAM.

CAMM

The CAMM specification is slated to be finalized by JEDEC later in 2023. Still, ADATA demonstrated a sample of such a module at the trade show to highlight its readiness to adopt the upcoming technology.

The key benefits of CAMMs include shortened connections between memory chips and memory controllers (which simplifies topology and therefore enables higher transfer rates and lower costs), support for modules based on either DDR5 or LPDDR5 chips (LPDDR has traditionally used point-to-point connectivity), dual-channel connectivity on a single module, higher DRAM density, and reduced thickness compared to dual-sided SO-DIMMs.

While the transition to an all-new type of memory module will require tremendous effort from the industry, the benefits promised by CAMMs will likely justify the change. 

Last year, Dell was the first PC maker to adopt CAMM in its Precision 7670 notebook. Meanwhile, ADATA's CAMM module differs significantly from Dell's version, although this is not unexpected as Dell has been using pre-JEDEC-standardized modules.

MR DIMM

Datacenter-grade CPUs are increasing their core counts rapidly and therefore need to support more memory with each generation. But it is hard to increase DRAM device density at a high pace due to cost, performance, and power consumption concerns, which is why, along with the number of cores, processors add memory channels, resulting in an abundance of memory slots per CPU socket and increased motherboard complexity.

This is why the industry is developing two types of memory modules to replace RDIMMs/LRDIMMs used today. 

On the one hand, there is the Multiplexer Combined Ranks DIMM (MCR DIMM) technology backed by Intel and SK Hynix: dual-rank buffered memory modules with a multiplexer buffer that fetches 128 bytes of data from both ranks simultaneously and communicates with the memory controller at high speed (around 8,000 MT/s for now). Such modules promise to increase performance significantly and somewhat simplify building dual-rank modules.

On the other hand, there is the Multi-Ranked Buffered DIMM (MR DIMM) technology, which appears to be supported by AMD, Google, Microsoft, JEDEC, and Intel (at least based on information from ADATA). MR DIMM uses the same concept as MCR DIMM: a buffer that lets both ranks be accessed simultaneously and interacts with the memory controller at an increased data transfer rate. The specification promises to start at 8,800 MT/s with Gen1, evolve to 12,800 MT/s with Gen2, and then skyrocket to 17,600 MT/s with Gen3.
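For context, those transfer rates translate into the following peak per-module bandwidth, assuming a standard 64-bit (8-byte) data path and ignoring ECC bits:

```python
# Peak per-module bandwidth implied by the MR DIMM data rates above,
# assuming a standard 64-bit (8-byte) data path and ignoring ECC bits.
def dimm_peak_gbs(mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    return mt_per_s * bytes_per_transfer / 1000

for gen, rate in (("Gen1", 8800), ("Gen2", 12800), ("Gen3", 17600)):
    print(f"MR DIMM {gen}: {rate} MT/s -> ~{dimm_peak_gbs(rate):.1f} GB/s per module")
```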

ADATA already has MR DIMM samples supporting an 8,400 MT/s data transfer rate that can carry 16GB, 32GB, 64GB, 128GB, and 192GB of DDR5 memory. These modules will be supported by Intel's Granite Rapids CPUs, according to ADATA.

CXL Memory

But while both MR DIMMs and MCR DIMMs promise to increase module capacity, some servers need a lot of system memory at a relatively low cost. Today such machines have to rely on Intel's Optane DC Persistent Memory modules, based on the now obsolete 3D XPoint memory, which reside in standard DIMM slots. In the future, however, they will use memory on modules conforming to the Compute Express Link (CXL) specification and connected to host CPUs using a PCIe interface.

ADATA displayed a CXL 1.1-compliant memory expansion device at Computex with an E3.S form factor and a PCIe 5.0 x4 interface. The unit is designed to expand system memory for servers cost-effectively using 3D NAND yet with significantly reduced latencies compared to even cutting-edge SSDs.

Image Credits: Tom's Hardware



from AnandTech https://ift.tt/PpdQyIY
via IFTTT