Monday 29 November 2021

Best PC Power Supplies: Holiday 2021

https://ift.tt/2Acuirx

Now that you've picked out your CPU, it's time to start picking out the rest of your system components. And perhaps the most humble and most often overlooked of these components is the power supply unit (PSU). Available in a wide range of sizes and power capacities, there are a number of great PSUs out there, but choosing between them can be a challenge. So today we're bringing you our annual PC power supply guide, to help you sort out what the best options are, be it a low-wattage unit for a small form factor PC or a hulking kilowatt unit for the most powerful PC.



from AnandTech https://ift.tt/32xpO0i
via IFTTT

Monday 22 November 2021

Best CPUs for Gaming: Holiday 2021

https://ift.tt/3nEM5BB

The launch of Intel’s 12th Gen Alder Lake processors has shaken up the market in the eternal battle against AMD, with seemingly a good number of processors to go around. The main limitation for gaming is still graphics cards, and those looking for DDR5 are having to scout around, as the dreaded ‘supply chain’ has limited how many modules have come to market. Nonetheless, platform costs aside, stock of both AMD’s Ryzen 5000 and Intel’s 12th Gen Alder Lake processors seems to be healthy, and both are aggressively priced.



from AnandTech https://ift.tt/2xKjnGe
via IFTTT

Friday 19 November 2021

Intel Meteor Lake 14th-gen CPUs already spotted in early photo

https://ift.tt/3nzXW3X

Intel’s Meteor Lake processors, which will be its 14th-gen chips – the ones following next-gen models (Raptor Lake) and expected in 2023 – have already been spotted and photographed; or at least test versions of the chips have.

This initial sighting of Meteor Lake – which will be the first of Intel’s CPUs to blaze onto 7nm (finally) – was recorded by CNET’s Stephen Shankland, who snapped a close-up of a wafer of Meteor Lake test chips (see below), quite possibly of the M-Series (low-power models).

Intel Meteor Lake test chips in Fab 42

(Image credit: CNET / Intel)

Shankland reminds us that manufacturing will begin in 2022, and shipping is scheduled for 2023 as Intel previously told us (remember that Meteor Lake was ‘taped in’ back in May 2021, meaning all separate design elements of the CPUs were complete back then).

During his tour of Intel’s Fab 42 in Arizona, Shankland also got to see Sapphire Rapids next-gen Xeon Scalable processors for servers. Earlier this week, we also heard about high-end desktop (HEDT) Sapphire Rapids-X chips and how they are expected to launch in Q3 of 2022.


Analysis: Intel 4 looking good for 2023

Spotting test chips is another exciting hint that Intel is on target with its proposed 2023 launch for Meteor Lake.

These 14th-gen processors will be built on the Intel 4 process (7nm EUV), which is expected to deliver a considerable 20% performance-per-watt uplift over Intel 7 (Enhanced SuperFin), the node used in the current-gen Alder Lake chips. A refresh of the latter – 13th-gen Raptor Lake, as mentioned at the outset – is expected to bridge the move between Intel 7 and Intel 4.

Raptor Lake is expected to arrive in Q3 of 2022 (alongside Sapphire Rapids-X), and could be a serious step up in power-efficiency itself.

Via VideoCardz



from TechRadar: computing components news https://ift.tt/3nuag5E
via IFTTT

Thursday 18 November 2021

Qualcomm x Nuvia: Silicon Sampling in Late 2022, Products in 2023

https://ift.tt/3qRAbpY

One of the more curious acquisitions in the last couple of years has been that of Nuvia by Qualcomm. Nuvia was a Silicon Valley start-up founded by the key silicon and design engineers and architects behind both Apple’s and Google’s silicon for the past few years. Qualcomm CEO Cristiano Amon made it crystal clear when Nuvia was acquired that they were going after the high-performance ultraportable laptop market, with both Intel and Apple in the crosshairs.

Nuvia came out of stealth in November 2019, with the three main founders having spent almost a year building the company. Gerard Williams III, John Bruno, and Manu Gulati have collectively driven the silicon design of 20+ chips, hold over 100 patents between them, and have been in leadership roles across Google, Apple, Arm, Broadcom, and AMD. Nuvia raised a lot of capital – $300M+ over two rounds of funding plus angel investors – and the company hired a lot of impressive staff.

The goal of Nuvia was to build an Arm-based general purpose server chip that would rock the industry. Imagine something similar to what Graviton 2 and Ampere Altra are today, but with a custom microarchitecture on par with (or better than) Apple’s current designs. When Nuvia was still on its own in start-up mode, some were heralding the team and the prospect, calling for the downfall of x86 with Nuvia’s approach. However, Qualcomm swooped in and acquired the company in March 2021, and repurposed Nuvia’s efforts towards a laptop processor.

It’s been no secret that Qualcomm has been after the laptop and notebook market for some time. Multiple generations of ‘Windows on Snapdragon’ have come to market through Qualcomm’s partners, initially featuring smartphone-class silicon before becoming something more bespoke with the 8cx, 8cx Gen 2, and 7c/7 options in the past couple of years. It has taken several years for Qualcomm to get the silicon and the Windows ecosystem to a point that makes sense for commercial and consumer use, and with the recent news that Windows 11 on these devices now enables full x86-64 emulation support, the functional difference between a Qualcomm laptop and an x86 laptop is supposed to be near zero. Qualcomm would argue its proposition is better, allowing for two days of use on a single charge, holding charge for weeks, and mobile wireless connectivity with 4G/5G. I’ve tested one of the previous-generation S855 Lenovo Yoga devices, and the battery life is insane – but what I needed were better functional support (turns out I have an abnormal edge-case workflow…) and more performance. Qualcomm has been working on the former since my last test, and Nuvia is set to bring the latter.


Image from @anshelsag on Twitter, used with permission

At Qualcomm’s Investor Day this week, the Qualcomm/Nuvia relationship was mentioned in an update. I had hoped that by the end of this year (and Qualcomm’s Tech Summit in only a couple of weeks) we might be seeing something regarding details or performance; however, Qualcomm is stating that its original schedule is still on track. As announced at the acquisition, the goal is to deliver test silicon into the hands of partners in the second half of 2022.

The goal here is to have laptop silicon that is competitive with Apple's M-series, but running Windows. This means blowing past Intel and AMD offerings, coupled with the benefits of better battery life, sustained performance, and mobile connectivity. From the disclosures so far, it’s perhaps no surprise that the Nuvia CPUs will be paired with an Adreno GPU and a Hexagon DSP, although it will be interesting to see whether the Nuvia CPU will be a single big core paired with regular Arm efficiency cores, or whether everything on the CPU side will be new from the Nuvia team.

I have no doubt that at Qualcomm’s Tech Summit in December 2022 we’ll get a deeper insight into the microarchitecture of the new core. Either that, or Qualcomm might surprise us with a Hot Chips presentation in August. As for going beyond laptop chips, while Qualcomm is happy to state that Nuvia's designs will be 'extended to [other areas] opportunistically', it's clear that the company is locking its crosshairs on the laptop market before even considering what else might be in the field of view.



from AnandTech https://ift.tt/3FpYnnt
via IFTTT

AMD’s Instinct MI250X: Ready For Deployment at Supercomputing

https://ift.tt/30DdhYb

One of the big announcements at AMD’s Data Center event a couple of weeks ago was its CDNA2-based compute accelerator, the Instinct MI250X. The MI250X uses two MI200 Graphics Compute Dies built on TSMC’s N6 manufacturing node, along with four HBM2E modules per die, in a new ‘2.5D’ packaging design that places a bridge between the die and the substrate for high-performance, low-power connectivity. This is the GPU going into Frontier, one of the US exascale systems due to power on very shortly. At the Supercomputing conference this week, HPE, under the HPE Cray brand, had one of those blades on display, along with a full frontal die shot of the MI250X. Many thanks to Patrick Kennedy from ServeTheHome for sharing these images and giving us permission to republish them.

The MI250X chip is a shimmed package in an OAM form factor. OAM stands for OCP Accelerator Module, which was developed by the Open Compute Project (OCP) – an industry standards body for servers and performance computing. And this is the accelerator form factor standard the partners use, especially when you pack a lot of these into a system. Eight of them, to be exact.

This is a 1U half-blade, featuring two nodes. Each node is an AMD EPYC ‘Trento’ CPU (that’s a custom IO version of Milan using the Infinity Fabric) paired with four MI250X accelerators. Everything is liquid cooled. AMD said that the MI250X can go up to 560 W per accelerator, so eight of those plus two CPUs could mean this unit requires 5 kilowatts of power and cooling. If this is only a half-blade, then we’re talking some serious compute and power density here.
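
As a quick back-of-the-envelope check on that 5 kW figure (a sketch on our part – the per-CPU number is an assumption, as AMD hasn't published a TDP for the custom 'Trento' part):

# Rough power budget for one 1U half-blade (two nodes).
# The 280 W EPYC TDP is an assumption based on Milan-class parts; AMD
# has not published a figure for the custom 'Trento' SKU.
accelerators_w = 8 * 560        # eight MI250X at their 560 W peak
cpus_w = 2 * 280                # two EPYC CPUs (assumed TDP)
print(accelerators_w + cpus_w)  # 5040 W, roughly the 5 kW quoted above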

Each node seems relatively self-contained – the CPU on the right here isn’t upside down, given the socket’s rear pin-outs aren’t visible, and it’s liquid cooled as well. What looks like four copper heatpipes, two on each side of the CPU, is actually a full 8-channel memory configuration. These servers don’t have their own power supplies; they draw power from a unified backplane in the rack.

The back connectors look something like this. Each rack of Frontier nodes will be using HPE’s Slingshot interconnect fabric to scale out across the whole supercomputer.

Systems like this are undoubtedly over-engineered for the sake of sustained reliability – that’s why there’s as much cooling as you can get, enough power phases for a 560 W accelerator, and, as you can see in this image, base motherboards for the OAMs that are easily 16 layers, if not 20 or 24. For reference, a budget consumer motherboard today might only have four layers, while enthusiast motherboards have 8 or 10, sometimes 12 for HEDT.

In the global press briefing, Keynote Chair and world-renowned HPC professor Jack Dongarra suggested that Frontier is very close to being powered up as one of the first exascale systems in the US. He didn’t outright say it would beat the Aurora supercomputer (Sapphire Rapids + Ponte Vecchio) to the title of first, as he doesn’t have the same insight into that system, but he sounded hopeful that Frontier would submit a 1+ ExaFLOP score to the TOP500 list in June 2022.

Many thanks to Patrick Kennedy and ServeTheHome for permission to share his images.



from AnandTech https://ift.tt/3kO2UrV
via IFTTT

Wednesday 17 November 2021

Intel Raptor Lake CPU launch could come sooner than you think

https://ift.tt/eA8V8J

Intel’s Raptor Lake CPUs could arrive in Q3 of 2022, earlier than was broadly anticipated – meaning that the processors might just be here in August, or even July technically.

This comes from regular rumor peddler @momomo_us on Twitter, who tweeted (rather obliquely) that a Q3 launch date is now the expected timeframe for both Raptor Lake and high-end desktop Sapphire Rapids-X (that’s what the fish represents: this will be Intel’s Fishhawk Falls HEDT platform).


Of course, this is just speculation, but it’s interesting to see the assertion made that Raptor Lake might turn up earlier than we thought – and by September at the latest, in theory.

The default expectation has been the likely usual gap of a year (or so) between generations, and with Alder Lake having been released earlier this month, that would leave Q4 as the most likely pitch for a Raptor Lake launch.


Analysis: Rapid Raptor release is certainly a solid possibility

It’s possible that Intel could be quick with its next-gen release given that, as we’ve already heard, Raptor Lake is purportedly more or less a simple refresh of Alder Lake. And while there’s usually a year between processor generations, as we mentioned above, note that Alder Lake arrived well inside a year of Rocket Lake (only seven months later, in fact – but that was unusual).

On balance, September 2022, then, could be the date to tentatively pencil in should you be waiting for Intel’s 13th-gen chips, and not jumping on board the hybrid tech (performance and efficiency cores) bandwagon in its first (12th-gen) iteration. There are probably more than a few folks in that boat.

Raptor Lake could be particularly interesting for laptops, as we saw yesterday, because Intel might be planning some new power-efficiency tricks which could augment battery longevity gains from the CPU’s efficiency cores.

As for Sapphire Rapids CPUs, these have been a long time coming: the current Cascade Lake-X HEDT models arrived in 2019 as a modest update of 2017’s Skylake-X platform, so it’ll have been five years since a truly new HEDT platform if Sapphire Rapids does indeed emerge in Q3 of next year. Meanwhile, AMD is expected to have new Threadripper 5000 chips (Zen 3-based) before too long…

Remember that Intel has Sapphire Rapids CPUs for data centers ready to roll for Q2 of 2022, as well.

Via PC Gamer



from TechRadar: computing components news https://ift.tt/3Cw0MLC
via IFTTT

Tuesday 16 November 2021

Next-gen Raptor Lake CPUs could fix Intel’s biggest weakness

https://ift.tt/eA8V8J

Intel could shore up one of the biggest weak points of its CPUs, as next-gen Raptor Lake processors might advance considerably in terms of power-efficiency.

As you may be aware, power usage has been something Intel has struggled with in recent times, and Alder Lake – while admittedly better than its predecessor Rocket Lake – still looks power-hungry compared to Ryzen 5000 chips, particularly in the case of the 12900K, with the flagship chip once again being a watt-guzzling monster.

Raptor Lake could change that, though, thanks to a fresh innovation spotted in an Intel patent, which the rumor mill reckons (apply salt here) might just pop up in time for the 13th-gen chips.

As Tom’s Hardware reports, Twitter user Underfox pointed out the Intel patent in a tweet (a while back – in August, actually) and described how it aims to reduce power usage.


The DLVR (digital linear voltage regulator) power delivery system is a “voltage clamp placing in parallel to the primary VR [voltage regulator]” which reduces the CPU VID (the voltage the CPU requires to be delivered) and therefore the power consumption of the processor cores.

That power-efficiency improvement could amount to a 20% to 25% decrease in the power needed by the CPU, and that may translate to a roughly 7% performance gain.
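
To see why trimming the VID pays off so handsomely, recall that dynamic CPU power scales roughly with frequency times voltage squared. A minimal sketch of that relationship, using illustrative voltages rather than anything Intel has published:

# Dynamic power model: P ~ C * f * V^2. The constant and the voltages
# below are illustrative assumptions, not Intel figures.
def dynamic_power(c, freq_ghz, volts):
    return c * freq_ghz * volts ** 2

baseline = dynamic_power(100, 5.0, 1.25)  # VID with the primary VR's guardband
clamped = dynamic_power(100, 5.0, 1.10)   # lower VID with the DLVR clamp
print(f"{1 - clamped / baseline:.0%}")    # 23% - in line with the quoted 20-25%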

The enhancement comes at only a “small extra cost for the silicon, low complexity of tuning, and a relatively small additional motherboard VR”, Intel asserts.


Analysis: Efficiency, efficiency, efficiency – is Intel poised for major laptop success?

Raptor Lake is thought to be arriving later in 2022 to take the baton from the freshly released Alder Lake chips. It’s expected to be a simple refresh of Alder Lake, but will obviously sport performance gains in terms of IPC (instructions per clock), with a flagship that’s rumored to run with 24 cores (8 performance cores plus 16 efficiency cores – which, as the latter don’t have hyper-threading, means 32 threads in total).
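
The thread arithmetic is straightforward, since only the performance cores are hyper-threaded – a quick sketch:

# Hybrid thread count: each P-core runs 2 threads (hyper-threading),
# each E-core runs 1.
def threads(p_cores, e_cores):
    return p_cores * 2 + e_cores

print(threads(8, 16))  # rumored Raptor Lake flagship: 32 threads
print(threads(8, 8))   # current Core i9-12900K: 24 threads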

Better power-efficiency for Raptor Lake would clearly be good news, because, as we already mentioned, power consumption is an area Intel has struggled to keep a lid on while eking more performance out of its chips in recent generations.

Where this could be particularly exciting is with mobile CPUs: if the rumor mill is correct, Raptor Lake is going to push hard with the efficiency cores (16 in the flagship, doubling up from 8 in Alder Lake), and on top of that, we could have an overall improvement in power-efficiency in the underlying architecture, too.

That’s a prospective double whammy which might mean that Raptor Lake laptop CPUs could promise huge battery life boosts as a result – assuming that this patent is indeed for tech which will be ready and in Intel’s next-gen processors.



from TechRadar: computing components news https://ift.tt/3wPS9u3
via IFTTT

Monday 15 November 2021

Nvidia CEO says there’s no magic cure for GPU shortage that’ll last until 2023

https://ift.tt/eA8V8J

Nvidia has some bad news for those who were hoping that GPU stock shortages might be a thing of the past before too long, because these supply issues are apparently going to be felt throughout next year.

The latest from Nvidia’s CEO Jensen Huang came in an interview with Yahoo Finance, in which he observed: “I think that through the next year, demand is going to far exceed supply. We don’t have any magic bullets in navigating the supply chain.”

So, we can broadly expect 2022 to be much like 2021, in that there won’t be enough graphics cards to go around, and inventory shortages will lead to scalping and price gouging which worsens the overall situation. Meh.

Currently, you’ll struggle to find an RTX 3080, for example – unless you want to buy a whole PC with one of the GPUs inside (which will, of course, be a very costly endeavor). The same remains true of other RTX 3000 models by and large, and that scenario of the demand seesaw being tipped heavily against supply looks set to continue for the foreseeable future (and, of course, through Black Friday).


Analysis: General consensus among CEOs and analysts is seriously gloomy

This is the latest prediction in a bunch of other gloomy forecasts which indicate that component supply problems will continue to be a blight on the broader tech industry throughout next year.

Intel’s chief executive Pat Gelsinger recently asserted, in a CNBC interview in October, that the ‘supply-demand balance’ won’t be back in a more normal state of equilibrium until 2023. Toshiba, IBM, and TSMC have issued similarly dire predictions.

Lisa Su, CEO of AMD – arch-rival to both Nvidia and Intel – struck a slightly more optimistic note in September when she indicated that we were looking at later in 2022 for a potential resolution, although even in that case, any improvement could still be the best part of a year away.

If Nvidia and others are right, and problems persist all the way through 2022, then that will encompass the launch of its next-gen ‘Lovelace’ graphics cards, which are rumored to debut at some point in Q3 next year. These GPUs are expected to do amazing things on the performance front, and if that speculation pans out, pricing will likely be in line with the power on tap – with availability issues and the aforementioned scalpers reselling on eBay for big profits only adding to the misery and expense, no doubt.

Our best hope now seems to be that these CEOs are looking on the pessimistic side in order to not spark any false hope of a recovery in the nearer future, but given the overall consistency of the message from these execs and also analyst firms, it’s really not looking good for 2022.

Via PC Gamer



from TechRadar: computing components news https://ift.tt/3nfAgla
via IFTTT

Intel: Sapphire Rapids With 64 GB of HBM2e, Ponte Vecchio with 408 MB L2 Cache

https://ift.tt/eA8V8J

This week brings the annual Supercomputing event, where all the major high-performance computing players put their cards on the table when it comes to hardware, installations, and design wins. As part of the event, Intel is giving a presentation on its hardware offerings, disclosing additional details about the next-generation hardware going into the Aurora exascale supercomputer.



from AnandTech https://ift.tt/3kFB80B
via IFTTT

Friday 12 November 2021

Fresh leak hints at how AMD vs Nvidia next-gen GPU battle is shaping up

https://ift.tt/eA8V8J

We’ve had some new material float down from the GPU grapevine, theorizing how both the next-gen Nvidia ‘Lovelace’ and AMD RDNA 3 graphics cards will pan out.

This comes from Greymon55, now a regular hardware leaker on Twitter, who has shared what they believe to be the likely core spec of the flagship GPUs from both Teams Green and Red – although note the use of question marks with some of the contended details, meaning that those parts are educated guesswork at this stage.


Of course, this is an early stage rumor so all of it should be treated with a very healthy dollop of skepticism, but Greymon asserts that what will presumably be the RTX 4090 will run with 18,432 CUDA cores, plus 24GB of GDDR6X video memory and a 384-bit memory interface.

Clock speeds will be pitched in the area of 2.3GHz to 2.5GHz – that’s one of the nuggets of info with a big question mark hanging over it – offering something in the order of around 90TFlops of raw (FP32) performance (ditto on the question mark there).

Moving to the AMD RDNA 3 champion, this GPU is purportedly set to come bristling with 15,360 stream processors (cores), as previously rumored, and 32GB of GDDR6 with a 256-bit memory interface – plus the key addition of 3D Infinity Cache to stay competitive with Nvidia, as we heard recently from Greymon (there could be 256MB or 512MB of Infinity Cache).

As you may know, 3D V-Cache or vertical cache – meaning simply that it’s stacked up vertically – is what’s coming to Ryzen CPUs with new models (possibly Ryzen 6000 chips) at the start of 2022.

Clock speeds for the next-gen AMD flagship, presumably the RX 7900, are roughly pegged at around the same as Nvidia – 2.4GHz to 2.5GHz – offering about 75Tflops of FP32 performance, Greymon reckons.
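
Those TFlops figures are easy enough to sanity-check, since FP32 throughput for these architectures is simply cores × 2 ops per clock (one fused multiply-add) × clock speed. A quick sketch using the rumored numbers above:

# FP32 TFlops = cores * 2 ops/clock (FMA) * clock (GHz) / 1000
def fp32_tflops(cores, clock_ghz):
    return cores * 2 * clock_ghz / 1000

print(fp32_tflops(18432, 2.4))   # rumored RTX 4090: ~88.5, i.e. the '~90' above
print(fp32_tflops(15360, 2.45))  # rumored RX 7900 flagship: ~75.3
print(fp32_tflops(10496, 1.7))   # RTX 3090 at boost: ~35.7, the '36' cited below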


Analysis: Worrying about the wrong thing…

We’d be foolish to try to engage in any real comparative debate based on the above-rumored specs at this stage in the next-gen GPU game, so bear that firmly in mind. Still, it’s worth putting some tentative initial thoughts together, huge amounts of salt clutched in both hands.

So, the theory is that the RTX 4000 flagship will pull around 85-90TFlops of raw (FP32) performance. Now, the RTX 3090 comes in at 36TFlops, so that’d make its successor around 2.5x more performant, in theory: a huge leap.

On the AMD side, the RDNA 3 flagship is pulling a purported 75Tflops, as noted, which isn’t far off Nvidia, but it’s a bit behind. However, even if these figures are in the correct ballpark – which is obviously a big if – that performance metric does not equate to real-world gaming performance.

We’ve also heard this week that Nvidia’s next-gen graphics cards could be twice as powerful as current models, and these purported specs look to back up this assertion. And with AMD set to crank the cache (and use that 3D V-Cache, of course), plus adopt a fresh multi-chip module strategy with the RX 7900, according to previous rumors, Team Red’s flagship could also turn out to be ridiculously powerful as we’ve reported before.

The reality is that it sounds like both Nvidia and AMD’s next GPUs are shaping up very nicely, and will represent big jumps in performance above the current graphics cards on both teams.

The real worry, perhaps, is not exactly how powerful these cards will turn out to be in comparison to each other, but how much they will cost. Indeed, the true next-gen GPU battle may be fought on value proposition and availability (if component shortages continue as predicted) – and indeed with power consumption being a further concern (and it’s here that what we’ve been hearing about Nvidia in recent times is rather alarming, condiments aplenty of course).

Via Wccftech



from TechRadar: computing components news https://ift.tt/30pWF6H
via IFTTT

Thursday 11 November 2021

Intel Alder Lake rumor spills specs of incoming 12th-gen CPUs – with an eye-opening change

https://ift.tt/eA8V8J

Intel Alder Lake processors are now out, or at least the first clutch of desktop CPUs, but more will follow – and we now have purported specs for what could be the next models to arrive for the 12th-gen range.

To begin with, Intel unleashed its K series processors (12900K, 12700K and 12600K, plus KF variants with no integrated GPU), which are unlocked chips that let overclockers ramp up clock speeds. The next models to arrive will most likely be the vanilla non-K versions of these chips, and their (locked) clock speeds have been shared on Twitter by prolific leaker @momomo_us.


So, if this rumor is right the Core i9-12900 (and again, its F counterpart with no integrated graphics) will have a base clock of 2.4GHz, considerably slower than the 12900K’s speed of 3.2GHz. Boost for the 12900 will run up to 5.1GHz compared to 5.2GHz for the 12900K (of course, the K processor can be pushed further still with overclocking, if you have the cooling to handle it, that is).

Those are the purported clock speeds for the standard performance cores, whereas the efficient cores (low-power ones) are theoretically pegged at 1.8GHz with the 12900, versus 2.4GHz for the 12900K’s base clock. The TDP difference is 125W plays 65W, and the core configurations (of performance and efficient cores) remain the same as the K version.

Moving on to the Core i7-12700, with its performance cores you’ll supposedly get a base clock and turbo of 2.1GHz and 4.9GHz respectively, compared to the 12700K at 3.6GHz and 5GHz. The base clock of the efficient cores is 1.6GHz for the 12700 and 2.7GHz for the 12700K. Again, the core configuration remains the same between vanilla and K variants.

Finally, we have a snippet of a leak on the Core i5-12600, plus the entirely new Core i5-12400F, both of which could be six-core (12-thread) chips with no efficient cores.

The 12600 will supposedly boast a base clock of 3.3GHz compared to the 3.7GHz of the 12600K. Note that the 12600K has a different core configuration, not just having six performance cores, but also four efficient cores.

As for the 12400F, that’ll run with a base clock of 2.5GHz for its performance cores (which as mentioned will be the only type of core it has). There’s no info leaked regarding boost speed here.


Analysis: Potential confusion for the consumer?

As ever, we should be very skeptical around leaked details, and there are certainly a couple of odd things with this particular bit of Alder Lake spillage which have some alarm bells lightly ringing.

For starters, the efficient cores of the 12700 are purportedly slower than the 12900’s (by 200MHz), whereas the opposite is true of the existing K versions (the 12700K’s efficient cores are 300MHz faster than the 12900K’s, not slower).

What’s stranger is that the Core i5-12600 is marked down as having no efficient cores, with just six performance ones (meaning 12 threads in total), whereas the 12600K has four efficient cores on top of those six standard cores (for 16 threads in total). Changing the core configuration between K and vanilla models would be odd for Intel, and as other rumors have held that both should maintain the same core loadout and 16-thread count, we’re betting that will likely be the case.

If it isn’t, and there is a difference in the actual core config between the 12600 and 12600K, that could be very confusing for consumers, who may well assume that, as is usually the case – and as is the case with the 12900 and 12700 here – the vanilla model is the same chip as the K spin, aside from clock speed differences and the lack of overclocking. With a different core count on top of those changes, the typical pricing gap between K and non-K chips isn’t really going to work out well for the lesser model’s value proposition.

Via Wccftech



from TechRadar: computing components news https://ift.tt/3qrh71y
via IFTTT

Wednesday 10 November 2021

No, Gigabyte didn’t shatter Intel Alder Lake world record with 8GHz overclock

https://ift.tt/eA8V8J

Intel’s Alder Lake chips are freshly out and you may have seen that Gigabyte recently claimed a world record 12th-gen overclock, tipping 8GHz with the Core i9-12900K flagship – but this achievement has now been officially refuted by CPU-Z.

The story here is that Gigabyte’s overclocking expert HiCookie submitted the 8GHz overclock (using liquid nitrogen) to CPU-Z last week, and bear in mind this was a staggering figure for Intel’s 12th-gen processors, given that 11th-gen chips came nowhere near 8GHz (the 11900K managed around 7.3GHz).

While Alder Lake was expected to be a good overclocker, this was a serious eye-opener, but doubt was immediately cast on the 8GHz clock speed by another overclocking expert, der8auer, who called it a ‘fake result’.

That has now been confirmed by Doc TB, the developer of CPU-Z Validator, who has rejected the 8GHz record-breaking submission to CPU-Z, following Gigabyte’s apparent failure to provide any concrete proof of the feat, as Tom’s Hardware reports.

Tom’s further notes that HiCookie couldn’t reproduce this 8GHz result with the Gigabyte rig built around its top-end Z690 Aorus Tachyon motherboard.


Analysis: Extremely complex trickery at work seems the most likely explanation

So, what exactly is going on here? Doc TB from CPU-Z observed that it’s “very unlikely” that the 12900K hit 8GHz, while detailing an apparent way to trick the CPU into reporting a falsely high speed if the overclocker has access to BIOS source code (which some of the in-house experts with motherboard manufacturers do).

Indeed, with Alder Lake pre-release samples, there were people exploiting a CPU PLL lock bug to fake reported overclocks of 8GHz or even higher (as much as 12GHz), and it seems that while Intel patched against this with a microcode update before the 12th-gen launch, there is another route still open to potentially leverage this same flaw.

Doc TB concludes: “The issue is now on Intel’s hands, needing a new hardware stepping or maybe another microcode patch. Solving such a niche issue to prevent ppls messing with BIOS source code is a real challenge and probably not on the top list.”

The other thing to note here, as the CPU-Z dev also points out in the extensive explanation on Twitter, is that all other overclocking experts were managing to set records around the 7.5GHz to 7.6GHz mark, so 8GHz is a huge outlier compared to the general consensus from all the other major motherboard makers and their in-house overclockers. Remember, those efforts consisted of “tens of overclockers [testing] hundreds of CPUs on a bunch of motherboards while all struggling around 7.5GHz”, so an 8GHz result appearing out of the blue was “quite suspicious at least”, Doc TB asserts.

The general consensus for all-core benchmark stability for the 12900K is pegged at around 7GHz, by the way.

In case you’re wondering how fast 8GHz is in the overall scheme of things, the fastest validated processor clock speed ever achieved on CPU-Z is 8794MHz or very nearly 8.8GHz, a feat managed by an AMD FX-8350 almost a decade ago now.



from TechRadar: computing components news https://ift.tt/2YBoLLo
via IFTTT

Tuesday 9 November 2021

NVIDIA Launches A2 Accelerator: Entry-Level Ampere For Edge Inference

https://ift.tt/3bVZmPq

Alongside a slew of software-related announcements this morning from NVIDIA as part of their fall GTC, the company has also quietly announced a new server GPU product for the accelerator market: the NVIDIA A2. The new low-end member of the Ampere-based A-series accelerator family is designed for entry-level inference tasks, and thanks to its relatively small size and low power consumption, is also being aimed at edge computing scenarios as well.

Along with serving as the low-end entry point into NVIDIA’s GPU accelerator product stack, the A2 seems intended to largely replace the last remaining member of NVIDIA’s previous generation of cards, the T4. Though a bit of a higher-end card, the T4 was designed for many of the same inference workloads, and came in the same HHHL single-slot form factor. So the release of the A2 finishes the Ampere-ification of NVIDIA’s accelerator lineup, giving NVIDIA’s server customers a fresh entry-level card.

NVIDIA ML Accelerator Specification Comparison

                         A100                  A30                                 A2
FP32 CUDA Cores          6912                  3584                                1280
Tensor Cores             432                   224                                 40
Boost Clock              1.41GHz               1.44GHz                             1.77GHz
Memory Clock             3.2Gbps HBM2e         2.4Gbps HBM2                        12.5Gbps GDDR6
Memory Bus Width         5120-bit              3072-bit                            128-bit
Memory Bandwidth         2.0TB/sec             933GB/sec                           200GB/sec
VRAM                     80GB                  24GB                                16GB
Single Precision         19.5 TFLOPS           10.3 TFLOPS                         4.5 TFLOPS
Double Precision         9.7 TFLOPS            5.2 TFLOPS                          0.14 TFLOPS
INT8 Tensor              624 TOPS              330 TOPS                            36 TOPS
FP16 Tensor              312 TFLOPS            165 TFLOPS                          18 TFLOPS
TF32 Tensor              156 TFLOPS            82 TFLOPS                           9 TFLOPS
Interconnect             NVLink 3 (12 Links)   PCIe 4.0 x16 + NVLink 3 (4 Links)   PCIe 4.0 x8
GPU                      GA100                 GA100                               GA107
Transistor Count         54.2B                 54.2B                               ?
TDP                      400W                  165W                                40W-60W
Manufacturing Process    TSMC 7N               TSMC 7N                             Samsung 8nm
Form Factor              SXM4                  SXM4                                HHHL-SS PCIe
Architecture             Ampere                Ampere                              Ampere

Going by NVIDIA’s official specifications, the A2 appears to be using a heavily cut-down version of NVIDIA’s low-end GA107 GPU. With only 1280 CUDA cores (and 40 tensor cores), the A2 is only using about half of GA107’s capacity. But this is consistent with the size- and power-optimized goal of the card. The A2 only draws 60W out of the box, and can be configured to drop down even further, to 40W.

Compared to its compute cores, NVIDIA is keeping GA107’s full memory bus for the A2 card. The 128-bit memory bus is paired with 16GB of GDDR6, which is clocked at a slightly unusual 12.5Gbps. This works out to a flat 200GB/second of memory bandwidth, so it would seem someone really wanted to have a nice, round number there.
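
That round number falls straight out of the usual bandwidth arithmetic – bus width in bytes multiplied by the per-pin data rate:

# Memory bandwidth = (bus width in bits / 8) * per-pin data rate (Gbps)
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(128, 12.5))  # A2: exactly 200.0 GB/sec
print(bandwidth_gb_s(5120, 3.2))  # A100: 2048 GB/sec, i.e. 2.0 TB/sec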

Otherwise, as previously mentioned, this is a PCIe card in a half height, half-length, single-slot (HHHL-SS) form factor. And like all of NVIDIA’s server cards, A2 is passively cooled, relying on airflow from the host chassis. Speaking of the host, GA107 only offers 8 PCIe lanes, so the card gets a PCIe 4.0 x8 connection back to its host CPU.

Wrapping things up, according to NVIDIA the A2 is available immediately. NVIDIA does not provide public pricing for its server cards, but the new accelerator should be available through NVIDIA’s regular OEM partners.



from AnandTech https://ift.tt/3qpGFvZ
via IFTTT

NVIDIA Announces Jetson AGX Orin: Modules and Dev Kits Coming In Q1’22

https://ift.tt/3CUY4A6

Today as part of NVIDIA’s fall GTC event, the company has announced that the Jetson embedded system kits will be getting a refresh with NVIDIA’s forthcoming Orin SoC. Due early next year, Orin is slated to become NVIDIA’s flagship SoC for automotive and edge computing applications. And, as has become customary for NVIDIA, the company is also going to be making Orin available to non-automotive customers through its Jetson embedded computing program, which makes the SoC available in a self-contained modular package.

Always a bit of a side project for NVIDIA, the Jetson single-board computers have nonetheless become an important tool for the company, serving both as an entry point for helping bootstrap developers into the NVIDIA ecosystem, and as an embedded computing product in and of itself. Jetson boards are sold as complete single-board systems with an SoC, memory, storage, and the necessary I/O in pin form, allowing them to serve as commercial off-the-shelf (COTS) systems for use in finished products. Jetson modules are also used as the basis of NVIDIA’s Jetson developer kits, which throw in a breakout board, power supply, and other bits needed to fully interact with Jetson modules.

NVIDIA Jetson Module Specifications

                  AGX Orin                AGX Xavier                   Jetson Nano
CPU               12x Cortex-A78AE        8x Carmel @ 2.26GHz          4x Cortex-A57 @ 1.43GHz
GPU               Ampere                  Volta, 512 Cores @ 1377MHz   Maxwell, 128 Cores @ 920MHz
Accelerators      Next-Gen NVDLA          2x NVDLA                     N/A
Memory            32GB LPDDR5, 256-bit    16GB LPDDR4X, 256-bit        4GB LPDDR4, 64-bit
                  (204 GB/sec)            (137 GB/sec)                 (25.6 GB/sec)
Storage           ?                       32GB eMMC                    16GB eMMC
AI Perf. (INT8)   200 TOPS                32 TOPS                      N/A
Dimensions        100mm x 87mm            100mm x 87mm                 45mm x 70mm
TDP               15W-50W                 30W                          10W
Price             ?                       $999                         $129

With NVIDIA’s Orin SoC set to arrive early in 2022, NVIDIA is using this opportunity to announce the next generation of Jetson AGX products. Joining the Jetson AGX Xavier will be the aptly named Jetson AGX Orin, which integrates the Orin SoC.

Featuring 12 Arm Cortex-A78AE “Hercules” CPU cores and an integrated Ampere architecture GPU, and comprising 17 billion transistors, Orin is slated to be a very powerful SoC once it begins shipping. The SoC also contains the latest generation of NVIDIA’s dedicated Deep Learning Accelerator (DLA), as well as a vision accelerator to further speed up and efficiently process those tasks.

Surprisingly, at this point NVIDIA is still holding back some of the specifications such as clockspeeds and the GPU configuration, so it’s not clear what the final performance figures will be like. But NVIDIA is promising 200 TOPS of performance in INT8 machine learning workloads, which would be a 6x improvement over AGX Xavier. Presumably those performance figures are for the module’s full 50W TDP, while performance is proportionally lower as you move towards the module’s minimum TDP of 15W.
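
To put those claims in rough numbers (the TDP scaling below is purely our own naive assumption – NVIDIA hasn't published per-TDP performance figures):

# NVIDIA's '6x over AGX Xavier' claim:
print(200 / 32)       # 6.25x (200 TOPS vs. Xavier's 32 TOPS)
# Naive linear scaling toward the 15 W floor - an assumption, not a spec:
print(200 * 15 / 50)  # 60 TOPS if performance tracked TDP exactly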

For the Jetson AGX Orin, the Orin SoC is being paired with 32GB of LPDDR5 RAM attached to a 256-bit memory bus, allowing for 204GB/second of memory bandwidth. Unfortunately, NVIDIA is not listing how much storage the modules come with, though for reference, AGX Xavier came with 32GB of eMMC storage.

Meanwhile, for this generation NVIDIA will be maintaining pin and form-factor compatibility with Jetson AGX Xavier. So Jetson AGX Orin modules will be the same 100mm x 87mm in size, and use the same edge connector, making Orin modules drop-in compatible with Xavier.

Jetson AGX Orin modules and dev kits are slated to become available in Q1 of 2022. NVIDIA has not announced any pricing information at this time.



from AnandTech https://ift.tt/3qifkvB
via IFTTT

Monday 8 November 2021

AMD Confirms Milan-X with 768 MB L3 Cache: Coming in Q1 2022

https://ift.tt/3bSwIyU

As an industry, we are slowly moving into an era where how we package the small pieces of silicon together is just as important as the silicon itself. New ways to connect all the silicon include placing it side by side, stacking it on top of each other, and all sorts of fancy connections that keep the benefits of chiplet designs while taking better advantage of them. Today, AMD is showcasing its next packaging uplift: stacked L3 cache on its Zen 3 chiplets, bumping each chiplet from 32 MiB to 96 MiB of L3 – though this announcement is targeting its large EPYC enterprise processors.



from AnandTech https://ift.tt/3mY0OHe
via IFTTT

Nvidia RTX 3080 Ti GPU spotted in a leaked gaming laptop benchmark

https://ift.tt/eA8V8J

We reported a couple of weeks ago that Nvidia looks set to release an RTX 3080 Ti laptop graphics card, and thanks to a fresh Geekbench leak there's further weight to add to that speculation.

VideoCardz reports that an unspecified HP Omen laptop appeared in the benchmark database equipped with an Intel 12th Gen Alder Lake CPU alongside an Nvidia GeForce RTX 3080 Ti Laptop GPU. Some of the entries also listed 'Nvidia Graphics Device' as the driver name, which doesn't give us much to work with, but given this is all unconfirmed information anyway, try not to take anything as gospel. Much like the mobile RTX 3080, the RTX 3080 Ti looks like it will ship with 16GB of VRAM.

Speculation about higher-end notebook GPUs from Nvidia has been flying around for quite some time now, although previous leaks suggested that an RTX 3080 Super was planned rather than the current Ti designation. Naming conventions aside, the SKU for this model of the mobile GPU seemingly confirms that it will feature 58 compute units, each carrying 128 CUDA cores.
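
Working that through (with the caveat that the 60-CU full die is our assumption for the top configuration, not something Nvidia has confirmed):

# 58 compute units at 128 CUDA cores apiece
cus, cores_per_cu = 58, 128
print(cus * cores_per_cu)         # 7424 CUDA cores
# vs. an assumed 60-CU full chip: two CUs' worth disabled
print((60 - cus) * cores_per_cu)  # 256 CUDA cores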

After some basic maths, we can work out that the RTX 3080 Ti mobile should have 7,424 CUDA cores, up from the 6,144 featured in the mobile RTX 3080, though 256 CUDA cores appear to have been disabled for this variant. The Vulkan score is a fairly impressive 90,114 points, which actually sets it below the standard RTX 3080 laptop GPU's average score of 91,130, but we expect this number will change once drivers have been finalized and everything has been fine-tuned.

If we had to hazard a guess, an announcement around CES 2022 looks likely, so take things with a pinch of salt for now, and hopefully we'll have confirmation of potential performance by January.


Analysis: What does this mean for RTX 4000?

Rumors for a new addition to the Ampere RTX 3000 series of mobile graphics cards have been circulating for months, with estimations that they could be unveiled early in 2022. Given the ongoing silicon shortage, it feels like an unusual time for Nvidia to be focusing its efforts on a new laptop GPU, especially when the current lineup feels fairly well balanced, but no doubt people will be itching to get their hands on a new, powerful gaming laptop once they appear in the wild.

We also know that Nvidia is currently working on its RTX 4000 series, codenamed 'Lovelace', but this new generation is currently expected to appear around the end of 2022, which would fit the timeline expectations set by previous releases. 

To us, it would surely make more sense to focus efforts on creating a well-balanced lineup of RTX 4000 mobile GPUs rather than churn out another addition to the current family, but as we have no way of knowing for sure that Lovelace will land by this time next year, this could be in preparation for a delay on RTX 4000 gaming laptops. 



from TechRadar: computing components news https://ift.tt/3bRKSQB
via IFTTT

How to build a cheap gaming PC that doesn't suck

https://ift.tt/2DkMbtS

If you’re trying to learn how to build a cheap gaming PC that doesn’t suck, then you’ve come to the right place. After all, while most of the best gaming PCs seem to cost an arm and a leg, you don’t have to empty your bank account for a solid gaming computer. Building PCs is about more than just spending as much as possible for your parts. 

We’re going to help you build a cheap gaming PC that’s actually pretty good. You will have some limitations with your build, as you probably won’t be booting up Control in 4K resolution with ray tracing on. But you’ll be able to play it in 1080p with some pretty high settings. Though parts have been tough to come by over the past year, there are a lot of reasonably priced processors and graphics cards out there that come with a surprising amount of power.

If you’re building a gaming PC out of completely new parts, getting an Xbox One X will probably be either cheaper or more powerful. And don’t expect to do any 4K gaming with a budget build. Though you might be able to soup up your rig with used parts, we wouldn’t recommend that, since there’s the potential that your PC components spontaneously combust.

But, the other benefits that a gaming PC can offer more than make up for the higher price tag. And, even if PC gaming has a higher entry price, you’ll still save a ton of money over time on PC games. If you’re putting together a cheap gaming PC right now, you can save even more if you take advantage of all the Black Friday and Cyber Monday deals that are out there.

Let us guide you through how to build a cheap gaming PC that doesn’t suck. Whether you’re simply looking to save a lot of cash on a great performing gaming machine or you really don’t need a setup with a lot of power in the first place, you’ve come to the right place.

What you'll need

Despite what you may have thought, you don’t need much in the way of tools to build a cheap gaming PC. A Phillips head screwdriver is the only absolutely necessary tool. However, there are a couple of things that can help you out. Because you’ll be dealing with a lot of screws, having a parts tray helps a lot. If you don’t have one of those lying around (who can blame you), you can just use a couple of bowls to keep things sorted.

Also, you have to be on the lookout for static electricity. An anti-static wristband is a godsend if you have one, but if you don’t, just make sure you’re not standing on carpet when building, and discharge any latent static electricity by touching some metal, like your power supply or PC case. 

Most importantly, however, you need a clean space to build. If you can clear off the dining room table for a couple hours, that’s perfect. You just need enough space to hold all of your PC components. 

The parts

There are so many PC components out there these days that you could theoretically build dozens of PCs without repeating a parts list. Luckily, we follow PC components literally every day, so we’ve used our expertise to pick out the best bang-for-your-buck components for this cheap gaming PC, and to explain why those parts are the best choices for a budget build. And, once you’ve gathered up all the best PC components that don’t suck, we’ll show you how to build a PC.

AMD Ryzen 3 3200G against a white background

(Image credit: AMD)

Processor: AMD Ryzen 3 3200G

Getting a quad-core for cheap

Affordable
Includes on-board graphics
No multi-threading

This AMD Ryzen processor (CPU) is the holy grail of budget PC components. It’s a quad-core chip with a boost clock of 4.0GHz, which would be enough to get some PC gaming done on its own. Where this chip really earns its budget bragging rights, however, is in the on-board Radeon Vega 8 graphics. This integrated graphics processor (GPU) isn’t powerful enough to play top-end games, but it should be enough to try some of the best indie games while you save up for a beefy graphics card.

Intel alternative: we’d suggest the Pentium G4560. It’s only a dual-core chip, but with high clock speeds and hyper-threading it can keep up with the latest PC games.

ASRock Fatal1ty B450 Gaming against a white background

(Image credit: ASRock)

Motherboard: ASRock Fatal1ty B450 Gaming

You shouldn't have to break open the piggy bank

Looks good for budget
Affordable
No overclocking

When you’re picking out a motherboard, you don’t want to skimp too much. It’s one of those components where, if something goes wrong, you have to rebuild the entire PC. The ASRock Fatal1ty B450 Gaming will get the job done while saving you plenty of cash. It’s not the most feature-rich motherboard out there, but you’re just looking for a dependable board. Just keep in mind that you'll have to update the BIOS to at least version P3.20 to use the Ryzen 3 3200G. But if you're not comfortable with that, you can always pick up the Ryzen 3 2200G instead – you won't lose much performance.

Intel alternative: if you’re going with Team Blue, you can save quite a bit on the motherboard by going with the ASRock B250M-HDV motherboard. It’s an older chipset, so you can find a bargain. 

G.Skill Ripjaws V against a white background

(Image credit: G.Skill)

Memory: G.Skill Ripjaws V

Some reasonable RAM

Affordable 
Neat red shroud
Only 2,400MHz

For budget gamers, sticking with 8GB of memory (RAM) is reasonable. There are some heavy duty games that will really start to push past that limit, but those are few and far between – especially at 1080p. So, we recommend picking up an 8GB kit of G.Skill Ripjaws V DDR4. It’s not the fastest or the flashiest, but it gets the job done.  

Adata Ultimate SU800 128GB against a white background

(Image credit: Adata)

SSD: Adata Ultimate SU800 128GB

An affordable boot drive

Super affordable
Easy to install
Not the fastest SSD on the block

A 128GB SSD may sound small to you, and it is, but when you’re just trying to get an affordable PC build done, it’s perfect. The Adata Ultimate SU800 128GB is big enough to fit your operating system on, which means your computer will be nice and fast, and more importantly, it’s dirt-cheap. This drive is just $20, and you should be able to find it for even cheaper during seasonal sales like Black Friday.

WD Caviar Blue 1TB against a white background

(Image credit: Western Digital)

Hard drive: WD Caviar Blue 1TB

This is where your games go

Power efficient
Lots of space
Not as fast as an SSD

Unfortunately, SSDs are so much more expensive than the best hard drives when it comes to mass storage – that’s just a fact of life. That’s why picking up a 1TB hard drive, like the WD Caviar Blue, just makes sense for a cheap gaming PC. You’ll install your OS and maybe like one game on your SSD, and everything else can just go on your hard drive.

AMD Radeon RX 570 8GB against a white background

(Image credit: AMD)

Graphics card: AMD Radeon RX 570 8GB

A great 1080p card

Great 1080p performance
Doesn't hog too much power
Won't move to 1440p well

When you’re shopping for the best graphics card for your build, the most important advice we can give you is to consider what you’re going for. A lot of people will tell you that the Nvidia GeForce RTX 2080 Ti is the best graphics card out there, but not everyone has $1,200/£1,200 to throw at a GPU. That’s why the AMD Radeon RX 570 8GB is such a gem. It’s extremely affordable, and should be good enough to handle most games at 1080p at high settings.

Nvidia alternative: If you’re looking for an affordable Nvidia card that trades blows with the AMD Radeon RX 570, you’ll want to take a look at the GeForce GTX 1650. It’s not super powerful, but it will get you through your 1080p gaming.

Corsair Carbide 100R against a white background

(Image credit: Corsair)

PC case: Corsair Carbide 100R

A big black box

Expandability
Decent airflow
Kind of ugly

With a PC case, you really don’t need the most badass tower to get the job done. And the Corsair 100R is a perfect example of a cheap PC case that doesn’t suck. It doesn’t have all the RGB lights and tempered glass panels that a more expensive case might, but what matters is that there’s plenty of room for case fans, and more than enough space for full-length graphics cards if you want to upgrade later.

Corsair VS550 against a white background

(Image credit: Corsair)

PC power supply: Corsair VS550

Desperate times ...

Won't set your house on fire
Affordable
Not modular

When you’re setting out to build a cheap gaming PC that doesn’t suck, it's easy to find the cheapest power supply and toss it into your PC. But because that could literally present a fire hazard, you should at least find something like the Corsair VS550. This budget power supply has just an 80+ efficiency rating, rather than the Gold, Silver or Bronze efficiencies of more expensive PSUs, but it should still be good enough to get the job done. Just keep in mind that this power supply isn’t modular, so you might have to find some creative ways to hide the extra cables.
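
As a rough guide to what that 80+ rating means in practice (a minimal sketch – the 450 W load is just an example figure, not a measurement of this unit):

# Wall draw = DC load / efficiency. A base 80+ unit guarantees at least 80%
# efficiency at 20/50/100% load; an 80+ Gold unit manages around 90% at 50% load.
def wall_draw(load_watts, efficiency):
    return load_watts / efficiency

print(wall_draw(450, 0.80))  # ~563 W from the wall on a base 80+ unit
print(wall_draw(450, 0.90))  # 500 W on an 80+ Gold unit at the same load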



from TechRadar: computing components news https://ift.tt/3EZgDDJ
via IFTTT

AMD Gives Details on EPYC Zen4: Genoa and Bergamo, up to 96 and 128 Cores

https://ift.tt/3mTEGOk

Since AMD’s relaunch into high-performance x86 processor design, one of the company’s fundamental targets has been to be a competitive force in the data center. By having a competitive product that customers can trust, the goal has always been to deliver what the customer wants, and subsequently grow market share and revenue. Since the launch of 3rd Generation EPYC, AMD has been growing its enterprise revenue at a good pace; however, questions always turn to what the roadmap might hold. In the past, AMD has disclosed that its 4th Generation EPYC, known as Genoa, would be coming in 2022 with Zen 4 cores built on TSMC 5nm. Today, AMD is expanding the Zen 4 family with another segment of cloud-optimized processors, called Bergamo.



from AnandTech https://ift.tt/3ocJT3a
via IFTTT