Thursday 28 September 2023

Core i7-14700KF leak gets us even more excited for Intel's 14th-gen desktop CPU

https://ift.tt/CeX1xDd

Intel's Core i7-14700K is causing excitement once again thanks to a leak for this incoming Raptor Lake Refresh processor, which increasingly looks like it will be the star of the 14th-gen desktop show.

The watchful eye of our sister site Tom’s Hardware turned up the new leak for the Core i7-14700KF on X (formerly Twitter), which was shared by Benchleaks (the source of a lot of prerelease hardware benchmarks).


There are a few juicy bits to highlight here for the 14700KF (which is the same as the 14700K, in case you were wondering – it just lacks integrated graphics, which makes no difference to these benchmark results).

The first point to note is that the chip's clock speed very nearly hit a staggering 6GHz (just 15MHz shy, at 5,985MHz, or 5.985GHz). As already rumored, the CPU is shown with an extra four efficiency cores compared to the current-gen 13700K.

So, given the difference in cores and clocks, you'd expect the 14700KF to turn out a good bit faster – and indeed it is, going by the Geekbench results presented here.

The Core i7-14700KF managed to record scores of 3,097 points and 21,196 points for single-core and multi-core respectively. That is 9% and 18% faster in those tests than the 13700K, which is an impressive step up in performance.

Indeed, the 14700KF comes close to the Core i9-13900KS for multi-core, with the latter only having a 2% lead – and remember, that’s a super-pricey special edition CPU that is top of the Raptor Lake tree.
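For those who want to sanity-check the quoted percentages, here's a quick sketch – our own arithmetic in Python, rather than anything from the leak – that back-calculates the 13700K and 13900KS scores those figures imply:

```python
# Back-calculating the Geekbench 6 baselines implied by the leak's
# percentages (scores are points; baselines are derived, not measured).
kf_single, kf_multi = 3097, 21196

implied_13700k_single = kf_single / 1.09   # ~2841 points (9% slower)
implied_13700k_multi = kf_multi / 1.18     # ~17963 points (18% slower)
implied_13900ks_multi = kf_multi * 1.02    # ~21620 points (2% lead)

print(f"Implied 13700K single-core: {implied_13700k_single:.0f}")
print(f"Implied 13700K multi-core:  {implied_13700k_multi:.0f}")
print(f"Implied 13900KS multi-core: {implied_13900ks_multi:.0f}")
```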


Analysis: A somewhat worrying lack of hype overall

This all sounds very impressive, but there's a caveat: the clock speed mentioned above is not going to be the default. This Core i7-14700KF must surely have been overclocked in the test, as the rumored out-of-the-box boost speed is 5.6GHz (and it certainly won't hit almost 6GHz at stock). Mind you, that still shows the overclocking potential of the 14700K, of course.

These performance gains do, however, align with previous rumors, with chatter indicating multi-core gains of 15% or just over, as we see here. In short, there's plenty to be optimistic about for the Core i7-14700K, and the kind of overclock achieved here is cause for positive noises about what we might see on that front.

The danger with Raptor Lake Refresh is that Intel has been so quiet about these desktop processors that the 14700K, and maybe one or two other chips, will be the only highlights in a refreshed line-up which is otherwise very pedestrian. That's certainly the hint dropped by a leaked briefing from MSI, as we've covered in the past.

Remember, Intel held its big Innovation event recently and didn't mention anything about Raptor Lake Refresh during any of the big keynotes or sessions – the CPUs were actually shown at one point, but blink and you'd have missed it.

This is a rather ominous sign, then, that Intel is keeping things very lowkey with its next-gen desktop chips. But even so, the speculation around the 14700K continues to push the idea that there’ll definitely be something to look forward to in what’s an important slot for the desktop range – and a potential candidate to storm up our ranking of the best processors.




from TechRadar: computing components news https://ift.tt/2H37rnJ
via IFTTT

Tuesday 26 September 2023

Corsair's Dominator Titanium Memory Now Available, Unveils Plans for Beyond 8000 MT/s

https://ift.tt/XLmZMPj

Corsair has started sales of its Dominator Titanium memory modules, which were formally introduced this May. The new modules bring together a luxurious look, a customizable design, and extreme data transfer rates of up to 8000 MT/s. Speaking of performance, the company has implied that it intends to introduce Dominator Titanium modules with speed bins beyond DDR5-8000 when the right platform arrives.

Corsair's Dominator Titanium family is based around 16 GB, 24 GB, 32 GB, and 48 GB memory modules that come in kits ranging from 32 GB (2 x 16 GB) up to 192 GB (4 x 48 GB). As for performance, the lineup listed on the company's website includes DDR5-6000 CL30, DDR5-6400 CL32, DDR5-6600 CL32, DDR5-7000 CL34, DDR5-7000 CL36, DDR5-7200 CL34, and DDR5-7200 CL36, with voltages of 1.40 V – 1.45 V.
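A handy way to compare those speed bins is to convert each speed/CL pair into absolute first-word latency. This is standard DDR arithmetic, sketched below with the kits from the lineup above:

```python
# First-word CAS latency in nanoseconds: the DDR5 memory clock runs at
# half the transfer rate, so latency_ns = 2000 * CL / (rate in MT/s).
kits = [
    ("DDR5-6000 CL30", 6000, 30),
    ("DDR5-6400 CL32", 6400, 32),
    ("DDR5-6600 CL32", 6600, 32),
    ("DDR5-7000 CL34", 7000, 34),
    ("DDR5-7000 CL36", 7000, 36),
    ("DDR5-7200 CL34", 7200, 34),
    ("DDR5-7200 CL36", 7200, 36),
]

for name, mts, cl in kits:
    print(f"{name}: {2000 * cl / mts:.2f} ns")
# DDR5-6000 CL30 and DDR5-7200 CL36 both land at 10.00 ns; the
# DDR5-7200 CL34 bin is the snappiest at ~9.44 ns.
```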

Although Corsair says that Dominator Titanium modules with data transfer speeds beyond 8000 MT/s are coming, it is worth noting that those speeds will require next-generation platforms from AMD and Intel. For now, the company only offers 500 First Edition Dominator Titanium kits, rated for DDR5-8266 mode, for its loyal fans.

To address demand from different types of users, Corsair offers Dominator Titanium with XMP 3.0 SPD profiles for Intel's 12th and 13th Generation Core CPUs, in black or white heat spreaders, as well as with AMD EXPO SPD profiles for AMD's Ryzen processors, with a grey finish on the heat spreaders.

In terms of heat spreader design, Corsair has remained true to the line's aesthetics. The modules are equipped with 11 customizable Capellix RGB LEDs, offering users a personalized touch that can be easily adjusted using Corsair's proprietary software. For enthusiasts who lean towards a more traditional aesthetic, Corsair provides an alternative design with fins, reminiscent of its classic memory modules.

Speaking of heat spreaders, it is worth noting that despite the name of the modules, they do not come with titanium radiators and keep using aluminum. That is a good thing: titanium has a rather low thermal conductivity of 11.4 W/mK and would therefore heat up memory chips rather than carry heat away from them. Traditionally, Corsair's Dominator memory modules use cherry-picked DRAM chips and the company's proprietary printed circuit boards, enhanced with internal cooling planes and external thermal pads to improve cooling.
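To put those conductivity numbers in perspective, here's a rough comparison; the titanium figure is the one quoted above, while the aluminum and copper values are commonly cited room-temperature numbers (real heat-spreader alloys will differ somewhat):

```python
# Room-temperature thermal conductivity, in W/mK. The titanium figure is
# from the article; aluminum and copper are commonly cited reference values.
conductivity = {"titanium": 11.4, "aluminum": 237.0, "copper": 400.0}

ratio = conductivity["aluminum"] / conductivity["titanium"]
print(f"Aluminum moves heat roughly {ratio:.0f}x better than titanium")  # ~21x
```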

Corsair's Dominator Titanium memory products are now available both directly from the company and from its resellers. The cheapest Dominator Titanium DDR5-6000 CL30 32 GB kit (2 x 16 GB) costs $175, whereas faster and higher-capacity kits are priced higher.



from AnandTech https://ift.tt/japl0u9
via IFTTT

Wednesday 20 September 2023

Intel developing its own stacked cache tech to compete with AMD 3D V-Cache

https://ift.tt/NTLd8Bz

Intel is working on its own version of the stacked cache technology that AMD pioneered with 3D V-Cache, though it is still at least a couple of generations away.

Following Intel CEO Pat Gelsinger's Intel Innovation 2023 keynote, Gelsinger held a Q&A session with members of the press, where he was asked if Intel would adopt the same stacked cache technology that AMD has been using to make some of the best processors on the market.

"When you reference V-Cache," Gelsinger said, as reported by Tom's Hardware, "you're talking about a very specific technology that TSMC does with some of its customers as well. Obviously, we're doing that differently in our composition, right? And that particular type of technology isn't something that's part of [the new Intel Core Ultra processors], but in our roadmap, you're seeing the idea of 3D silicon where we'll have cache on one die, and we'll have CPU compute on the stacked die on top of it, and obviously using [embedded multi-die interconnect bridges] that Foveros [chiplet packaging technology] we'll be able to compose different capabilities."

A slide showing off the details of the Intel Core Ultra

(Image credit: Future / John Loeffler)

Anyone who saw Gelsinger's keynote would have seen how Intel's upcoming processor roadmap moves heavily into the multi-chiplet module (MCM) design paradigm, where different processor components like the iGPU, cache, and Intel's new neural processing unit are discrete segments bonded together into a single unit rather than cast together all at once.

The MCM process allows for a lot more flexibility than the restrictive monolithic silicon fabrication that has traditionally been used, and it opens up all sorts of new possibilities for chip design that weren't practical with a monolithic structure.

The most obvious of these is stacked cache, which greatly increases the processor's available cache memory pool, translating into much faster performance in specific CPU workloads.

AMD has already proven the benefits of this expanded cache pool when it launched the AMD Ryzen 7 5800X3D chip in 2022, followed by AMD Ryzen 7 7800X3D, AMD Ryzen 9 7900X3D, and AMD Ryzen 9 7950X3D earlier this year.

"We feel very good that we have advanced capabilities for next-generation memory architectures, advantages for 3D stacking, for both little die, as well as for very big packages for AI and high-performance servers as well," Gelsinger said. "So we have a full breadth of those technologies. We'll be using those for our products, as well as presenting it to the [Intel] Foundry customers as well."

Stackable cache is only the beginning for Intel's packaging tech

Intel's move into MCM processor design using its embedded multi-die interconnect bridge (EMIB) and Foveros chip packaging technology is a major step forward for the chipmaker.

The best Intel processors of the past couple of years have relied heavily on simply throwing raw electrical power at the problem to increase performance, making the high-end Intel Core i9-12900K and Intel Core i9-13900K especially power-hungry processors.

This has allowed Intel to regain a lot of ground lost to the best AMD processors of the past few years, but it isn't a workable long-term solution, and even Nvidia is reportedly seeing the wisdom of moving to an MCM design for its next-gen Nvidia Blackwell architecture.

And while the idea of a future Intel processor, possibly as soon as Lunar Lake, featuring stacked cache is exciting, it should only be the beginning of new processor developments, not the end of it.

You might also like



from TechRadar: computing components news https://ift.tt/BGJEiKr
via IFTTT

Intel High-NA Lithography Update: Dev Work On Intel 18A, Production On Future Node

https://ift.tt/NWXFMqQ

As part of Intel's suite of hardware announcements at this year's Intel Innovation 2023 conference, the company offered a brief update on its plans for High-NA EUV machines, which will become a cornerstone of future Intel process nodes. Following some changes in Intel's process roadmap – in particular, Intel 18A being pulled in because it was ahead of schedule – Intel's plans for the next-generation EUV machines have shifted. Intel will now only be using the machines with its 18A node as part of development and validation work on the new machines; production use of High-NA machines will come with Intel's post-18A node.

High Numerical Aperture (High-NA) machines are the next generation of EUV photolithography machines. The massive scanners incorporate 0.55 numerical aperture optics, significantly larger than the 0.33 NA optics used in first-generation production EUV machines, which will allow finer lines to be patterned. Ultimately, High-NA machines are going to be a critical component in enabling nodes below 2nm/20 angstroms.
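The resolution gain follows from the Rayleigh criterion, CD = k1 × λ / NA. A quick illustration with EUV's 13.5 nm wavelength and an assumed, typical process factor of k1 ≈ 0.33 (real values vary by process):

```python
# Rayleigh criterion: minimum resolvable half-pitch CD = k1 * wavelength / NA.
# EUV wavelength is 13.5 nm; k1 = 0.33 is an assumed single-exposure
# process factor, used here purely for illustration.
wavelength_nm = 13.5
k1 = 0.33

for na in (0.33, 0.55):
    print(f"NA {na}: ~{k1 * wavelength_nm / na:.1f} nm half-pitch")
# NA 0.33: ~13.5 nm half-pitch
# NA 0.55: ~8.1 nm half-pitch
```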

At the time that Intel laid out their “5 nodes in 4 years” roadmap in 2021, the company announced that they were going to be the lead customer for ASML’s High-NA machines, and would be receiving the first production machine. High-NA, in turn, was slated to be a major part of Intel’s 18A node.


Size Comparison: ASML Normal & High NA EUV Machines

But since 2021, plans have changed for Intel – seemingly in a good way. Progress on 18A has been ahead of schedule, such that, in 2022, Intel announced it was pulling in 18A manufacturing from 2025 to H2'2024. Given that the release date of ASML's High-NA machines has not changed, however, that announcement left open some questions about how High-NA would fit into the 18A node. And now we finally have some clarification on the matter from Intel.

High-NA machines are no longer a part of Intel's production plans for 18A. With the node now arriving before production-grade High-NA machines, Intel will be producing 18A with the tools it has, such as ASML's NXE 3000 series EUV scanners. Instead, the intersection between 18A and High-NA will be Intel using the 18A line to develop and validate the use of High-NA scanners for future production. After that, Intel will finally use High-NA machines as part of the production process for its next-generation, post-18A node, which is simply being called "Intel Next" right now.

As for the first High-NA development machine, Intel also confirmed this week that their schedule for development remains on track. Intel is slated to receive their first High-NA machine late this year – which as Pat Gelsinger put it in his keynote, is his Christmas present to Dr. Ann Kelleher, Intel’s EVP and GM of technology development.

Finally, back on the subject of the Intel 18A process, Intel says that it is progressing well on its second-generation angstrom node. The 0.9 PDK, which should be the final pre-production PDK, is nearly done, and should enable Intel's teams to ramp up designing chips for the process. Intel, for its part, intends to start 18A silicon fab work in Q1'2024. Based on Intel's roadmaps thus far, that is most likely going to be the first revision of one of the dies on Panther Lake, Intel's first 18A client platform.



from AnandTech https://ift.tt/8CsFkhN
via IFTTT

Intel Announces Panther Lake Client Platform, Built on Intel 18A For 2025

https://ift.tt/vO9Q14A

While the primary focus has been on Intel's impending Meteor Lake SoC due by the end of the year, Intel CEO Pat Gelsinger unveiled more about their current client processor roadmap. Aside from a demo showing off a 'Lunar Lake' test box, Pat Gelsinger also announced Panther Lake, a new Intel client platform that is on track for a release sometime in 2025.

Intel's updated roadmap has given the industry a glimpse into what lies ahead. Following the much-anticipated Lunar Lake processors set for a 2024-2025 timeframe, Panther Lake is set to bring all the technological advancements of Intel's 18A node to the party.

As mentioned, Intel demoed Lunar Lake's AI capabilities live at Intel Innovation 2023. This included a pair of demos: one running Riffusion, an AI plugin for the Audacity software that can generate music, and a second running Stable Diffusion, a text-to-image generation model (the prompt was a giraffe in a cowboy hat, for reference). This was all done on a working Lunar Lake test box, which looked to run both demos with ease.

Intel Client Processor Roadmap

Name           P-Core uArch    E-Core uArch    Process Node (Compute Tile)    Release Year
Meteor Lake    Redwood Cove    Crestmont       Intel 4                        2023 (December)
Arrow Lake     Lion Cove?      Crestmont?      Intel 20A                      2024
Lunar Lake     Lion Cove?      Skymont?        Intel 20A?                     2024?
Panther Lake   ?               ?               Intel 18A                      2025

Pivoting to Panther Lake: Intel, via CEO Pat Gelsinger at Intel Innovation 2023, said that it's on track for release in 2025; we also know that Intel is sending it to the fabs in Q1 of 2024. This means we're getting Meteor, Arrow, Lunar, and then Panther Lake (in that order) by the end of 2025. Panther Lake aims to build on Lunar Lake, with all of its tiles fabricated on the advanced 18A node. Understandably, details are thin, and we don't yet know what P-core or E-core architectures Panther Lake will use.

Intel's Innovation 2023 event was a starting point for Intel CEO Pat Gelsinger to elaborate on a comprehensive processor roadmap beyond the much-anticipated Meteor Lake SoC, with the first Core Ultra SKU set to launch on December 14; that just about counts as a launch this year, barring any unexpected foibles. With Panther Lake on track for a 2025 release and set to go to the fabs in Q1 of 2024, Intel's ambitious "5 nodes in 4 years" strategy is in full swing. While Lunar Lake paves the way with advanced on-chip AI capabilities on the 20A node, Panther Lake aims to build upon this foundation using the more advanced 18A node.

Although specific architectural details remain scant, the sequential release of Meteor, Arrow, Lunar, and Panther Lake by the end of 2025 underscores Intel's aggressive push to redefine the client processor landscape.



from AnandTech https://ift.tt/hM7UHbi
via IFTTT

Tuesday 19 September 2023

Nvidia 5000 series GPUs might use multi-chiplet design—and it could help get Nvidia back on the performance track

https://ift.tt/fl1agd8

Nvidia may be joining AMD and Intel in developing a multi-chiplet architecture for its next generation of GPUs, and it could reap major performance gains as a result.

Nvidia is the last of the big three chipmakers that still uses a single slice of silicon for the processors inside its best graphics cards, so it's something of a welcome surprise that rumors have begun circulating that the company will finally move to the more adaptable multi-chiplet module (MCM) design with its next-generation Nvidia Blackwell architecture. 

The report comes from well-known hardware leaker @kopite7kimi on X, who said that Nvidia's commercial-grade GB100 GPU will feature MCM for the first time.


The Nvidia Blackwell architecture is expected to power both Nvidia's next-gen commercial GPU products, which are used by data centers and industrial-scale users, as well as its consumer graphics cards, the Nvidia RTX 5000 series.

Even though both will use the Blackwell architecture, it's unclear at the moment whether the MCM shift will also extend to the Nvidia 5000 series graphics cards. If it does, though, it could provide the transformational performance for Nvidia's next graphics card generation that was often lacking in some of its more recent RTX 4000-series cards.

The chiplets in an MCM design, when interconnected into a single processor, promise significantly faster performance than a monolithic slab of silicon. As Tom's Hardware explains, a single silicon chip is constrained by the physical dimensions of the equipment used to fabricate it. Currently, the process Nvidia uses can only produce pieces of silicon measuring at most 26mm by 33mm (858mm²), and Nvidia's commercial-grade GPUs are already bumping right up against that maximum size.

And since it has become exponentially more difficult to further shrink the size of a transistor – the electronic switch inside a chip that produces a computer's logic functionality – the only way to keep increasing the number of transistors in a GPU, and with it performance, is to make the chip larger than the physical manufacturing process will allow.

That's where the chiplets come in. If you can produce two or more smaller chiplets and use special ties called interconnects to link them together so they act as a single unit, you can effectively build a larger chip than the fabrication process can support and dramatically improve performance. With an MCM design for its GPUs, Nvidia might be able to deliver the kinds of gains across its entire portfolio of Nvidia 5000 series cards that many were hoping to see with the 4000 series, but which Nvidia wasn't able to deliver consistently.
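As a rough illustration of the arithmetic involved (the chiplet sizes below are hypothetical, not leaked Blackwell specs):

```python
# The reticle-limit argument in numbers. The 26mm x 33mm field cap comes
# from the article; the chiplet sizes are illustrative placeholders.
reticle_limit_mm2 = 26 * 33          # 858 mm^2, the monolithic ceiling

chiplets_mm2 = [600, 600]            # two hypothetical ~600 mm^2 chiplets
total_mm2 = sum(chiplets_mm2)

print(f"Monolithic ceiling:  {reticle_limit_mm2} mm^2")
print(f"Two-chiplet package: {total_mm2} mm^2 "
      f"({total_mm2 / reticle_limit_mm2:.2f}x the reticle limit)")
```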

Obviously, this is still very speculative and based on rumors, but there's a reason why both AMD and Intel have made the switch to MCM in their GPUs and CPUs, and Nvidia would be very smart to follow suit, or risk getting left behind.

Make the move to MCM, Nvidia, it's the only way forward

A render of the Nvidia GPU die

(Image credit: Nvidia)

The problem chip makers have been having for years now has been the end of Moore's Law, the famous prediction by Intel co-founder Gordon Moore that transistor density on a chip would double roughly every two years. 

For 50 years, that had been the case, but as we now measure the size of transistors relative to the diameter of individual atoms in the silicon, cutting a transistor's size in half just isn't possible anymore.

But consumers and industry have become used to faster computers every couple of years, and so no one really wants to hear that the party is over. If you're a chip maker looking to keep selling more processors, you have to find another way to deliver those performance gains the market expects, Moore's Law be damned.

The answer to this is using multiple chips in conjunction with one another to achieve those performance targets. We've already been doing this for more than a decade, as Nvidia well knows. 

There was a time when there was no such thing as a graphics processor; there was just the main CPU, which was expected to handle graphics as well as every other operation.

As graphics became more advanced though, this so heavily taxed the CPU that something had to be done before computing 3D scenes either ate up 99.9% of the CPU's clock cycles, or the limits of the CPU itself ground computer graphics progress to a halt. 

The workaround was to pass all but the most basic graphics processing work off to a second processor, the GPU, which was specially designed for this task and went on to power the modern era of computer graphics. Nvidia knows this because it was the one that created the world's first GPU, the Nvidia GeForce 256, back in 1999.

We've come full circle, then: graphics processors are now so overwhelmed with the workloads being assigned to them that they cannot keep up, and we can't squeeze more performance out of the same-sized silicon. It's time to break up geometry, rasterization, ray tracing, machine learning, and other GPU workloads into different mini-processors that can be specifically engineered to perform those tasks faster and more efficiently than we're currently doing.

Nvidia's chief competitor, AMD, is already doing this and it's seen very positive results so far. And while the first few attempts to get MCM engineering right might not be the revolution that the first GPU was when it landed over 20 years ago, future attempts will get us to where we want—and Nvidia needs—to be, so Nvidia should probably get to work on that. 



from TechRadar: computing components news https://ift.tt/vMepZ90
via IFTTT

Intel Demos Lunar Lake Client Processor In Action, Silicon Pulled In To Intel 20A?

https://ift.tt/CIB9Dln

As part of Intel's Innovation 2023 conference, the company is not only showing off current and soon-to-be-current products like Meteor Lake; CEO Pat Gelsinger's forward-looking keynote was also used to showcase future generations of Intel products. Perhaps the biggest surprise this year was Intel's Lunar Lake platform, which is already up and running to the point where Intel can run demos on it.

Lunar Lake is Intel's 2025 client platform, which is scheduled to arrive after Meteor Lake (very late 2023) and Arrow Lake (2024). As of Intel's last disclosure, it is going to be a brand-new platform, replacing the shared Meteor/Arrow platform. At this point, confirmed details are few and far between, other than that it will be bigger and better than Meteor Lake.

Intel Client Processor Roadmap

Name           P-Core uArch    E-Core uArch    Process Node (Compute Tile)    Release Year
Meteor Lake    Redwood Cove    Crestmont       Intel 4                        2023 (December)
Arrow Lake     Lion Cove?      Crestmont?      Intel 20A                      2024
Lunar Lake     Lion Cove?      Skymont?        Intel 20A                      2024?
Panther Lake   ?               ?               Intel 18A                      2025

In any case, as part of this year’s Innovation keynote, Gelsinger & co ran a pair of AI demos on their Lunar Lake test box. The first was Riffusion, an AI music generation plugin for Audacity that can generate music based on the style of another artist. The second demo was the now classic Stable Diffusion text-to-image generation model. Both demos were able to leverage the chip’s NPU, which is a new processing block for Intel client chips starting with the impending Meteor Lake.

And while the demo was brief, it served its purpose: to show that Lunar Lake was back from the fab and already in good enough shape not just to boot an OS, but to show off controlled demos. Intel has made it clear over the past few years that it intends to move fast to make up for lost time and recapture leadership of the client market (both in terms of architecture and fabs), so it is eager to show off its progress here.

Perhaps the most interesting thing about the demo was what wasn't said, however: the process node used for Lunar Lake's compute (CPU) tile. In Intel's earliest (and still most recent) public roadmap, Lunar Lake was listed as being built on the Intel 18A process. However, other disclosures from Intel today indicate that the company is only going to be starting risk production of 18A silicon in Q1'2024 – which means that for Lunar Lake to be working today, it can't be on 18A.


Old Intel Client Roadmap, Circa 2022

That leaves 20A as by far the most likely alternative; it is due sooner and is already turning out working wafers. That would mean Intel is planning on using 20A over two generations of client processors: Arrow Lake and Lunar Lake. We're still waiting on confirmation, of course, but all signs currently point to Lunar Lake having shifted to 20A since Intel's previous update.



from AnandTech https://ift.tt/2YIsPgW
via IFTTT

Intel announces new Core Ultra CPU with AI processing engine coming in December

https://ift.tt/kPEZmnj

Intel announced its newest processors, the Intel Core Ultra series, during the Intel Innovation keynote on Tuesday, saying they will usher in the age of the "AI PC" when the chips hit the market later this year.

The new Intel processors, codenamed Meteor Lake, will be the company's first chips for the consumer market to feature a dedicated neural processing unit (NPU), which will power AI-driven workloads for consumers, Intel CEO Pat Gelsinger said during the opening keynote of the Intel Innovation conference in San Jose, California. Gelsinger also confirmed that the new processors will launch on December 14 of this year.

“AI will fundamentally transform, reshape, and restructure the PC experience – unleashing personal productivity and creativity through the power of the cloud and PC working together,” he said. “We are ushering in a new age of the AI PC.”

A slide showing off the details of the Intel Core Ultra

(Image credit: Future / John Loeffler)

Along with the NPU and what Intel claims will be impressively "power-efficient performance" thanks to advanced 7nm Intel 4 process technology, Intel's new chip series will bring an enhanced integrated GPU powered by Intel Arc graphics architecture. While we haven't gotten to see the chips for ourselves yet, the improved GPU alone could help make these the best processors of 2023, especially for more budget-friendly systems that don't need a dedicated graphics card.

The Core Ultra is the company's first consumer CPU to feature a multi-chiplet module (MCM) design. This design uses two or more slices of silicon – the pieces containing the transistors that power a computer, called dies – bonded together at a microscopic level, allowing for more flexible chip development than is possible with the single slab of monolithic silicon companies have used in the past.

The MCM design will be backed by Intel's Foveros packaging technology, the same chip-bonding technology that went into the ill-fated Lakefield chip, which powered some poorly received low-power mobile devices and was quickly moved to End of Life.

While there were many issues with the Lakefield chip beyond the Foveros packaging that kept it from being successful, the Core Ultra chips represent a major design shift for Intel's processors, bigger even than the move to a hybrid-core architecture with Intel Alder Lake back in late 2021, so Intel is putting a lot of faith in this tech to power its future chip development.

Bringing AI applications into the personal computer

A render of an Intel Core Ultra chip's processing die package

(Image credit: Intel)

A major part of this year's Intel Innovation conference is an update to Intel's distribution of the OpenVINO AI toolkit, which provides developers a common framework to use when building out AI applications that leverage the new Intel hardware.

Intel's latest 2023.1 version of the toolkit is optimized to utilize the NPU in the Intel Core Ultra processor, which Intel hopes will in turn make practical development of AI applications for PCs with Core Ultra chips easier for developers and more appealing for consumers.
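To give a sense of how that works in practice, here is a minimal sketch of targeting the NPU with OpenVINO's Python API, assuming a 2023.1+ runtime with the NPU plugin installed and a static-shape IR model; the model path is a placeholder:

```python
# A minimal OpenVINO sketch: compiling the same model for the NPU uses
# the same call as for "CPU" or "GPU", which is the point of the toolkit.
import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on a Core Ultra PC

model = core.read_model("model.xml")          # placeholder IR model file
compiled = core.compile_model(model, "NPU")   # target the NPU directly

# Run one inference with a zeroed tensor shaped to the model's input
# (assumes a static input shape).
dummy = np.zeros(tuple(compiled.input(0).shape), dtype=np.float32)
result = compiled([dummy])
```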

“We’ve been co-developing with Intel teams a suite of Acer AI applications to take advantage of the Intel Core Ultra platform,” Jerry Kao, chief operating officer of Acer said during the keynote, “developing with the OpenVINO toolkit and co-developed AI libraries to bring the hardware to life.”

To show what Intel's Core Ultra could do, Kao demonstrated an Acer Swift laptop with an Intel Core Ultra chip running a Stable Diffusion-powered generative AI app, which took a basic photo of a ballerina, generated a whole new image from it locally, and created a parallaxing desktop wallpaper with it in under a minute.

Generative AI and multimedia uses immediately come to mind, as do device personalization, settings controls, and others that have already started cropping up on Windows PCs in recent years, but which haven't had specialized hardware to power them.

Taking a cue from Apple's playbook

The Intel Core Ultra may be the first chip from Team Blue that introduces an NPU, but it's not the first processor on the market to do so. That honor would go to Apple, which introduced a neural engine into its A11 Bionic chip back in 2017, and later its Apple M1 Chip in 2020.

These chips, especially on mobile devices, have helped power some remarkable advances in photography and video on the best iPhones, but there hasn't really been a groundbreaking "killer app" for these consumer chips yet in the way that the best graphics cards from Nvidia, as well as its more industrial-strength AI hardware, have powered the generative AI revolution behind ChatGPT, Stable Diffusion, and others.

Still, Intel getting into the NPU game is a smart move, and with the OpenVINO toolkit, there's a lot of incentive for the broad base of developers coding for Intel hardware to find new and practical uses for this NPU. If nothing else, it's something genuinely new from Intel, so it'll be exciting to see how it all plays out once we get the chips in our hands. 



from TechRadar: computing components news https://ift.tt/b2upkCD
via IFTTT

Monday 18 September 2023

Apple A17 Pro is a match for Intel and AMD CPUs, and this is great news for Apple M3

https://ift.tt/exRLNq4

Newly leaked benchmark scores for Apple’s A17 Pro system-on-chip (SoC) for smartphones reveal that it can challenge AMD and Intel's processors in Geekbench 6.

Going by the A17 Pro's scores in Geekbench 6's single-core test, it comes within 10% of AMD's Ryzen 9 7950X and Intel's Core i9-13900K processors. The A17 Pro scored 2,914 points in single-core testing, while the Core i9-13900K scored 3,223 and the Ryzen 9 7950X scored 3,172. Though these are impressive results, it's important to note that Apple's A17 Pro operates at 3.75GHz, while the Core i9-13900K and Ryzen 9 7950X operate at a much higher 6.0GHz and 5.8GHz, respectively.

As pointed out by Tom's Hardware, the A17 Pro being able to challenge desktop-class processors with its 2,914 score is quite the feat, and suggests it could even match Raptor Cove and Zen 4 cores clocked at around 3.77GHz. Of course, this is a single benchmark that doesn't tell the whole story, but seeing a smartphone processor trade blows with Intel and AMD desktop processors is more than worth noting.

These results also mean the A17 Pro scored 10% higher in the single-core test than its predecessor, the A16 Bionic, which is also rather impressive. When it comes to multi-core, however, the A17 Pro scored 7,200 points, only 3% higher than the A16. Regardless, these results suggest that Apple's marketing of the A17 Pro's CPU performance was accurate, though it seems Apple hasn't made any real architectural changes and has merely boosted clocks.
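Normalizing those single-core scores by the clock speeds quoted above gives a rough points-per-GHz comparison – a blunt instrument, and our own arithmetic rather than anything from the leak, but it illustrates the per-clock gap:

```python
# Geekbench 6 single-core points per GHz, using the scores and clocks
# quoted in this article; for illustration only.
chips = {
    "Apple A17 Pro":  (2914, 3.75),
    "Core i9-13900K": (3223, 6.0),
    "Ryzen 9 7950X":  (3172, 5.8),
}

for name, (score, ghz) in chips.items():
    print(f"{name}: {score / ghz:.0f} points/GHz")
# A17 Pro ~777, 13900K ~537, 7950X ~547: far more work per clock from Apple
```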

The M3 is looking even better

Seeing how well the Apple A17 Pro SoC scored in Geekbench 6’s single-core test is extremely promising. Though it’s a specific test, a smartphone processor being able to match desktop-class ones in any capacity is incredible and highlights the efficiency of Apple Silicon.

In the past, this was demonstrated best by Apple's M1 and M2 series chips, which boasted highly efficient performance that far surpassed the Intel processors found in earlier MacBooks. And now, with the M3 chip most likely coming as soon as 2023, we have even more to be excited about.

If a mere smartphone chip can offer such performance, imagine how much more powerful the M3 will be with truly enhanced tech and some serious architectural changes. The M3-powered Apple MacBooks will surely sport some impressive benchmark scores compared to Intel and AMD-based desktops and laptops, easily becoming one of the best MacBooks and even one of the best laptops on the market.

You might also like



from TechRadar: computing components news https://ift.tt/9Y6x4hZ
via IFTTT

Tuesday 12 September 2023

AMD Ryzen Z1 CPU benchmarks leaked - and they look like good news for handheld gaming

https://ift.tt/uX4rEbU

A leaker on the Chinese discussion forum Zhihu has published a detailed performance analysis of AMD’s Ryzen Z1 processor - the little brother of the Z1 Extreme chip revealed earlier this year and seen in action in PC gaming handhelds such as the upcoming Lenovo Legion Go.

David Huang posted his findings on Zhihu, giving us a great deal of insight into how this chip will perform in relation to currently available gaming handhelds like the Valve Steam Deck, which runs on an older custom Zen 2 APU from AMD. He provided detailed specs and performance stats, including several game benchmarks, although we should caution that we can't verify that these are genuine results from the CPU.

Benchmark charts detailing the performance of the AMD Ryzen Z1 against the AMD Ryzen 7840U.

(Image credit: David Huang)

Although Huang didn’t have a Z1 Extreme to hand for direct comparisons, he did compare performance to AMD's Ryzen 7 7840U laptop CPU that was revealed earlier this year. The 7840U was presumably chosen since it is broadly analogous to the Z1 Extreme chip in terms of overall specs (although actual performance varies, since the Z1 APUs are specifically designed to be used in handheld gaming devices).

As for the results? Well, they're a bit of a mixed bag. The standard Z1 obviously lags behind its Extreme counterpart in gaming performance, but it’s also a lot more power-efficient - offering a playable experience at 720p with just 15W of power consumption. Dialing up the available power draw to the chip’s maximum of 30W actually only provided a modest performance boost of less than 20% on average.
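It's worth spelling out what that does to efficiency: a sub-20% gain from double the power roughly cuts performance-per-watt by 40%. A quick sketch, using a made-up baseline frame rate (only the ratios come from the report):

```python
# Only the ratios here come from the report; the 15W frame rate is a
# placeholder baseline to make the perf-per-watt math concrete.
fps_15w = 100.0            # normalized performance at 15W
fps_30w = fps_15w * 1.20   # "less than 20%" faster at 30W (upper bound)

eff_ratio = (fps_30w / 30) / (fps_15w / 15)
print(f"Perf-per-watt at 30W vs 15W: {eff_ratio:.2f}x")  # 0.60x
```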

Naturally, the Z1 is expected to be a fair bit cheaper than the Z1 Extreme - although it won't be available to buy directly, so pricing will ultimately be determined by the third-party manufacturers using the chip.

An iPhone 15 Pro feature

Last year, my colleague John Loeffler published a piece about the potential of Apple's M-series silicon and its integrated graphics for gaming. While I respectfully disagreed with his stance on macOS becoming a gaming haven (OK, fine, I said I'd rather die than become a Mac gamer), John was absolutely right that processor-integrated graphics will be the way most of us play games in the future.

Consider, for a moment, the RTX 4090. Nvidia's flagship desktop graphics card is an absolute monster, costing more than three PS5s and guzzling down up to 450W of power (without overclocking). That's 30 times the power consumption of the AMD Z1 running at 15W – and you'll need some similarly power-hungry components in your PC to prevent your 4090 from being bottlenecked, too.

In an era where electricity bills can be brutal (especially if, like me, you live in the UK right now), low-power gaming is immediately preferable. The push for gaming-capable iGPUs is only just beginning, and every new generation from major players including AMD, Intel, and Apple brings impressive performance leaps. The other advantages are size and thermal performance: these chips can be comfortably squeezed into compact devices like gaming handhelds and the best ultrabooks.

Although there are no official details right now about devices that will use the Z1 chip, it’s very likely that currently available handhelds using the Z1 Extreme will see a cheaper variant pop up in the not-too-distant future. When I met with Lenovo in Berlin at IFA 2023, the staff were coy about the potential of a more affordable Legion Go, but all but confirmed that a Z1 model would arrive eventually.

Personally, I can’t wait - and I hope that AMD is planning a Z2 series to deliver even better gaming performance with the same low, low power draw. If only Microsoft could get their act together and give Windows 11 a proper handheld mode…




from TechRadar: computing components news https://ift.tt/IvH4f3U
via IFTTT

Thursday 7 September 2023

Windows 11's latest major error has been fixed

https://ift.tt/XMu0o6P

MSI has begun issuing updates to fix a Windows 11 Blue Screen of Death (BSOD) error that began popping up on its Intel 700 and 600 series motherboards last month.

According to The Verge, MSI has confirmed that it has found and fixed the issue, stating that "the root cause of the issue is the firmware setting of Intel Hybrid Architecture." The BSOD is said to only affect Intel's 13th Gen Core i9 processors running the latest Windows 11 and Windows 10 updates, and it should be curbed by the new update issued by MSI.

MSI explains that “The new BIOS coming will include an update on the Intel CPU uCode which will prevent any more messages regarding the ‘UNSUPPORTED_PROCESSOR’ issues.”

Blame game

When the error was first reported, Microsoft was quick to point out that this is strictly a hardware issue, and has nothing to do with the tech giant. While this raised some eyebrows (ours included), Intel has since conceded that the problem was likely due to a faulty microcode update.

This should allow MSI users to breathe a sigh of relief, as there is nothing scarier than seeing that awful, ominous blue screen. If you're interested in checking whether your model is on the list of available updates, you can head over to the update page and scroll down to the table of motherboard models and the corresponding links for the fix.

For now, the update is aimed only at Intel 700 and 600 series motherboards, but that should be expanded to other models in the coming weeks. 




from TechRadar: computing components news https://ift.tt/obH6jPD
via IFTTT

Wednesday 6 September 2023

Arm's Clients and Partners Signal Interest to Invest $735 Million Ahead of IPO

https://ift.tt/xwkU2eb

According to fresh SEC filings from Arm, the chip IP designer has secured a slew of industry investors ahead of its impending IPO. Aiming for a strong start to what Reuters reports is projected to be a $52 billion IPO valuation, Arm has sought out major industry customers as cornerstone investors, successfully lining up nearly a dozen companies. Altogether, AMD, Apple, Cadence, Google, Intel, MediaTek, NVIDIA, Samsung, Synopsys, and TSMC have signaled interest in purchasing up to an aggregate of $735 million of Arm's American Depositary Shares (ADS), SoftBank, the owner of Arm, disclosed in a filing with the Securities and Exchange Commission.

While the exact number of shares to be purchased has not been disclosed – and may very well change ahead of the IPO as the current inquiries are non-binding – at the upper-end price of $51/share, a $735 million purchase would represent just over 15% of the 95.5 million Arm shares that SoftBank intends to offer as part of the IPO. Or, measured against the projected $52 billion valuation of the company, this would leave the cornerstone investors owning a collective 1.4% of Arm.
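For clarity, here's how those percentages fall out of the filing numbers (our arithmetic, using the upper-end $51 share price):

```python
# Checking the share math against the filing figures quoted above.
purchase_usd = 735e6
price_per_share = 51.0      # upper end of the IPO range
offered_shares = 95.5e6     # shares SoftBank intends to offer
valuation_usd = 52e9        # projected company valuation

shares_bought = purchase_usd / price_per_share
print(f"~{shares_bought / 1e6:.1f}M shares "
      f"({shares_bought / offered_shares:.1%} of the offering, "
      f"{purchase_usd / valuation_usd:.1%} of the company)")
# ~14.4M shares (15.1% of the offering, 1.4% of the company)
```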

The list of companies that plan to purchase Arm shares is pretty impressive as it contains not only Arm's partners and clients like Apple, Cadence, Google, Samsung, and TSMC, but also customer-rivals, such as AMD and Intel, who both use Arm IP in some of their chips while competing with Arm designs in other chips. Meanwhile, some of Arm's other big customers are notably absent from the cornerstone investor group, including Qualcomm and Amazon.

Overall, the cornerstone investors represent a mix of fabless chip designers and tool vendors, as well as all three of the world's leading fabs themselves. For Intel's part, the company is establishing its Intel Foundry Services group to produce chips for fabless chip designers, and virtually all of them use Arm's cores. Therefore, close collaboration with Arm is something that IFS needs to have, and a good way of making friends with Arm is to own a piece of it.

"80% of TSMC wafers have an Arm processor in them," said Stuart Pann, Senior Vice President and General Manager of Intel Foundry Services, at the Goldman Sachs Communacopia & Technology Conference, reports Tom's Hardware. "The fact that our organization, the IFS organization, is embracing Arm at this level, investing in Arm, doing partnerships with Arm should give you a signpost that we are absolutely serious about playing this business. Because if you are not working with Arm, you cannot be a foundries provider."

Interestingly, the head of Intel's foundry unit even said that IFS will have to focus more on Arm and RISC-V going forward, as both instruction set architectures are going to drive chip volumes – and volume is what Intel wants at its fabs.

Meanwhile, Apple, one of the founders of Arm back in 1990, has extended its license agreement with Arm beyond 2040, a testament to the company's confidence in the ISA and its development, at least for now. Keeping in mind that all of Apple's current products use at least one Arm CPU core, it is reasonable to assume that the companies will remain partners for the foreseeable future.



from AnandTech https://ift.tt/muEgVIL
via IFTTT

Tuesday 5 September 2023

Watch out AMD - Intel's 14th gen CPU could continue winning streak with new benchmark

https://ift.tt/65NMo1J

Rumors and leaked reports have been coming out for months about Intel's preparation of its new line of Core processors, the Raptor Lake Refresh. The flagship model's newly leaked Geekbench score in particular shows how promising its performance might be.

The flagship Core i9-14900K processor managed a Geekbench score of over 3,100 in the single-core test. This puts it at a 6% increase over the previous model, the 13900K. Also, keep in mind that there are several variables involved in testing, including a more limited data set, the motherboard not being fully optimized, and the processor being paired with DDR5-4800 memory, all of which could impact performance.

Its key specifications are eight P-cores and 16 E-cores and, according to VideoCardZ, it is only the second Intel processor to exceed a clock speed of 6 GHz thanks to its Thermal Velocity Boost technology. Testing was done using the Biostar Z790 SILVER motherboard, which has the added benefit of proving compatibility with other similar, soon-to-be-released motherboards.

Intel could be on to something, as long as the price is right 

Plenty of rumors concerning the Core i9-14900K have been making their way through the tech grapevine, including other benchmarks and pricing.

There was a previously leaked CrossMark result covering Intel's Core i9-14900K and Core i7-14700K, in which the Core i9-14900K recorded an overall score of 2,265, with the Core i7-14700K hitting 1,980. That makes the 14900K around 14% faster than the 14700K, at least going by this single result. Judging by these scores, we can see how well both the flagship and the i7 processor could perform.

However, according to a leak from well-known leaker @momomo_us, the pricing of the Intel Core i9-14900K, Core i7-14700K, and Core i5-14600K are all set to increase by about 15%. This would raise the price of the 14900K flagship to $695 in the US which, considering the smaller boost in performance compared to the previous models, would hardly be worth the price of entry. Then again, considering all the caveats that come with the currently leaked benchmarks, we could be seeing even better performance results once the processor officially launches.
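As a back-of-the-envelope check on those numbers (figures from the leaks above; the arithmetic is ours):

```python
# A ~15% hike landing at $695 implies the rough pre-hike price below;
# the CrossMark gap uses the scores quoted earlier in this article.
rumored_14900k_price = 695.0
implied_current_price = rumored_14900k_price / 1.15
print(f"Implied pre-hike price: ${implied_current_price:.0f}")  # ~$604

crossmark_gap = 2265 / 1980 - 1
print(f"14900K over 14700K in CrossMark: {crossmark_gap:.1%}")  # ~14.4%
```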

As for that launch date, previous leaks point to an October 2023 release while VideoCardZ states that the flagship will be coming out this September.




from TechRadar: computing components news https://ift.tt/cSeqRJm
via IFTTT