Tuesday 29 August 2023

Intel’s Meteor Lake CPUs will use AI to make your laptop battery last longer

https://ift.tt/VU4yiRG

Intel’s incoming Meteor Lake processors will mean better battery life for laptops, thanks to Team Blue bringing the power of AI to bear on power management with these mobile chips.

PC World reports that Intel is heavily focused on AI with Meteor Lake not just in terms of having a VPU on the chips (a vision processing unit to help supercharge AI workloads), but also in that artificial intelligence factors ‘heavily’ into power states for notebooks here.

Specifically, Intel revealed at the recent Hot Chips conference that Meteor Lake, as well as its future best CPUs, are going to employ AI to make some crucial decisions about exactly when to shift between high power (performance) and low power (idling) states.

As Intel explained in the talk at Hot Chips, the trick is switching between those states as efficiently as possible, a process known as Dynamic Voltage and Frequency Scaling (or DVFS for short).

That DVFS-related decision making was revamped way back with Skylake processors (in 2015), when Intel introduced Speed Shift, a tech that intelligently shifted between high and low power CPU states based on standardized estimates for tasks (like opening a web page, for example, or any common workload).

Now, Intel has brought AI to bear so that this algorithm can, in the example given, predict how the user will open the web page, use it, then close it and move on.

As PC World makes clear, the difference is that instead of relying on rough estimates of how these common tasks behave to tune power usage, the algorithm has taught itself, in a more in-depth way, exactly when to shift states so as to lower power consumption.
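Intel hasn’t published the model itself, but the underlying DVFS trade-off is easy to sketch: jump to the high-power state only when the predicted burst of work is long enough for the time saved to outweigh the cost of switching. The numbers and names below are purely illustrative, not Intel’s:

```python
# Toy sketch of a DVFS (Dynamic Voltage and Frequency Scaling) decision.
# All thresholds and names here are illustrative, not Intel's actual model.

SWITCH_COST_MS = 2.0   # overhead of moving to the high-power state and back
SPEEDUP = 4.0          # how much faster work completes in the high-power state

def choose_state(predicted_burst_ms: float) -> str:
    """Pick a power state given a prediction of how long the next
    burst of work (opening a web page, say) will last."""
    time_saved_ms = predicted_burst_ms - predicted_burst_ms / SPEEDUP
    # Only pay the switching cost when the burst is long enough to recoup it.
    return "high" if time_saved_ms > 2 * SWITCH_COST_MS else "low"
```

With these toy numbers, a 1ms blip stays in the low-power state while a 50ms page load jumps to high power; the AI predictor’s job, per Intel, is simply to make that duration estimate more accurate than the old standardized one.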


Analysis: This isn’t just about power-saving – but responsiveness, too

This may sound like nitty-gritty details, but the overall effect could be sizeable indeed. Add up all those little power savings over every single task you carry out in a laptop session – lots and lots of them – and your battery is (hopefully) going to last appreciably longer.

Efraim Rotem, an Intel Fellow responsible for client SoC architecture at the company’s Design Engineering Group, reckons that the move with Meteor Lake could save 15% more energy than before (energy being power consumption accumulated over time, rather than just the instantaneous power draw).

Not to mention that this helps to fine-tune the responsiveness of the system in everyday tasks, too. Rotem estimates that in this regard, Meteor Lake ushers in up to a 35% improvement in responsiveness, no less, making these portables more likely to rank higher on our best laptops list.

Note that this is ‘up to’ and also that the AI smarts won’t apply to everything you do on your laptop. Thus far, the AI has only been trained on certain scenarios, and won’t train itself based on the way you use your computer as an individual – although that personalization is entirely possible as an avenue for the future. Exciting times indeed.

We’ll get the full lowdown at the launch of Meteor Lake processors at Intel’s Innovation event next month, and we’ll be hearing more about AI then, for sure.

You might also like



from TechRadar: computing components news https://ift.tt/28ywJPX
via IFTTT

Raptor Lake Refresh price rumor is worrying – could AMD’s Zen 5 CPUs leave Intel in the dust?

https://ift.tt/VU4yiRG

Intel’s Raptor Lake Refresh processors could witness a price hike when the initial overclockable ‘K’ series models of the 14th-gen range launch.

Tom’s Hardware spotted that well-known leaker @momomo_us shared pre-release pricing from an unknown online retailer.


Given that we aren’t told the source – though it’s obviously an obscure retail outlet somewhere – and the fact that these could well be placeholder listings, as is always the case with pre-release pricing, we must be extra-cautious here (even more so than usual with a rumor).

If the pricing is correct, the Core i9-14900K, Core i7-14700K, and Core i5-14600K are all set to be priced about 15% higher than the equivalent Raptor Lake models. The actual hikes vary slightly between 14% and 16%, but average out to 15%.
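For what it’s worth, that 15% figure is just the ordinary percentage change between old and new prices, averaged over the three chips – the prices below are illustrative round numbers, not the leaked ones:

```python
# Average percentage price hike across several SKUs.
# These prices are illustrative placeholders, not the leaked figures.

def pct_change(old: float, new: float) -> float:
    """Percentage increase from the old price to the new price."""
    return (new - old) / old * 100

hikes = [
    pct_change(600.0, 684.0),   # a 14% hike
    pct_change(400.0, 460.0),   # a 15% hike
    pct_change(320.0, 371.2),   # a 16% hike
]
average_hike = sum(hikes) / len(hikes)
print(round(average_hike, 1))  # 15.0
```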

That would be a significant jump, of course, and would leave the 14900K flagship costing $695 in the US.

This comes alongside a recent leak showing that Raptor Lake Refresh may not be that much of a leap in performance, to potentially rub salt into the wounds.

If that spillage is correct, Intel’s inbound 14th-gen processors – which are expected to turn up in October – might only offer a 3% performance gain compared to Raptor Lake CPUs on average.

Analysis: Handing the CPU crown to AMD?

So, all this paints a rather worrying picture of a diminished value proposition with Raptor Lake Refresh, right? Well, it’s not quite that simple.

For starters, that performance leak we just mentioned comes with caveats. It shows single-core performance, and multi-core will see better gains, especially for the 14700K, which is expected to add four efficiency cores compared to the 13700K, so should be a much more meaningful upgrade.

Taking that nugget of info about the 14700K, it might make sense that this chip will get a price hike. But does it make sense that the 14600K would get the same level of price increase, given that it’ll be relatively unexciting in comparison? We’d argue not, and we’d expect a more modest increase for this mid-range CPU in all honesty, if it’s to make the cut for the likes of our best processor list.

This fact – and the source – casts doubt on this rumor for us. But we can’t rule out the possibility that even though the retailer may be bandying placeholder pricing around, it might still have heard a signal from Intel (or elsewhere in the industry) to expect price increases.

Intel’s 14th-gen has always been labeled as a simple refresh, and as such, was never going to be about big performance leaps. However, if Team Blue mixes very modest uplifts with price hikes, that’s not going to make for a particularly appetizing new range of processors.

Furthermore, bearing in mind that this is what Intel will have on the CPU table most of next year, with its following generation – Arrow Lake – likely not turning up until late in 2024, Raptor Lake Refresh is going to be facing off against Ryzen 8000. AMD’s next-gen CPUs, based on Zen 5, are rumored to potentially be looking at a release date of Q3 2024, which is very likely going to be before Arrow Lake.

And that could be a serious problem for Intel, particularly if the value proposition of Raptor Lake Refresh seems wonky.

A final observation: this is the second set of 14th-gen product listings we’ve spotted (the first didn’t provide any pricing), and that indicates something in itself – that perhaps Raptor Lake Refresh will arrive a bit sooner than expected.

You might also like



from TechRadar: computing components news https://ift.tt/kVrmzJ3
via IFTTT

Monday 28 August 2023

Hot Chips 2023: Intel Details More on Granite Rapids and Sierra Forest Xeons

https://ift.tt/VU4yiRG

With the annual Hot Chips conference taking place this week, many of the industry’s biggest chip design firms are at the show, talking about their latest and/or upcoming wares. For Intel, it’s a case of the latter, as the company is at Hot Chips to talk about its next generation of Xeon processors, Granite Rapids and Sierra Forest, which are set to launch in 2024. Intel has previously revealed these processors on its data center roadmap – most recently updating it in March of this year – and for Hot Chips the company is offering a bit more in the way of technical details for the chips and their shared platform.

While there’s no such thing as an “unimportant” generation for Intel’s Xeon processors, Granite Rapids and Sierra Forest promise to be one of Intel’s most important updates to the Xeon Scalable hardware ecosystem yet, thanks to the introduction of area-efficient E-cores. Already a mainstay on Intel’s consumer processors since 12th generation Core (Alder Lake), E-cores will finally come over to Intel’s server platform with the upcoming 6th generation Xeon Scalable parts. Though unlike the consumer chips, where both core types are mixed in a single die, Intel is going for a purely homogeneous strategy, giving us the all P-core Granite Rapids and the all E-core Sierra Forest.

As Intel’s first E-core Xeon Scalable chip for data center use, Sierra Forest is arguably the more important of the two chips. Fittingly, it’s Intel’s lead vehicle for their EUV-based Intel 3 process node, and it will be the first of the two Xeons to come out. According to the company, it remains on track for an H1’2024 release. Meanwhile, Granite Rapids will follow “shortly” behind, on the same Intel 3 process node.



from AnandTech https://ift.tt/cCzLaZK
via IFTTT

Sunday 27 August 2023

Retailer listings suggest Intel’s next-gen CPUs could arrive sooner than expected

https://ift.tt/VWNxCqa

Intel’s next-gen processors have been spotted listed by a retailer earlier than we anticipated, given the widely rumored launch date of (later in) October.

As Tom’s Hardware reports, Telemart, a Ukrainian retailer, listed a bunch of ‘K’ series Raptor Lake Refresh processors on its store (models which are unlocked and can be overclocked, and are expected to be the first products Intel wheels out).

The Core i9-14900K, Core i7-14700K and Core i5-14600K were the models listed – that and their KF variants, which don’t have integrated graphics. There’s no pricing info at this point, but we did see some listed specs which align with what the rumor mill has suggested in the past.

Namely that these 14th-gen CPUs will be 200MHz faster than their predecessors for boost speed (in most cases, anyway – with perhaps an exception or two dropping down to a mere 100MHz increase).

Furthermore, the 14700K is shown as the only chip which increases the core count, with Intel throwing in an extra four efficiency cores, again as rumored multiple times in the past. (Those efficiency cores are the low-power ones, so won’t make a major difference – but they will help as extra muscle for tougher multi-core tasks, and certainly, this CPU will be a strong candidate for our best processor list).

Analysis: Proceed very cautiously

Of course, these Raptor Lake Refresh CPUs are not available to pre-order, or even priced with placeholders – but their very appearance suggests that maybe Intel’s on-sale date could be a touch closer than we thought.

Currently, the expectation on the grapevine is for an October launch for 14th-gen processors, and likely a mid-to-late timeframe during that month. Seeing listings now, some two months ahead of that purported release schedule, might be a hint that next-gen models could turn up earlier in October.

Or it could just be a retailer accidentally-on-purpose jumping the gun to grab some limelight – as is always a possibility, especially when more minor retailers spill these kinds of leaks (this isn’t Best Buy or Newegg, after all). So, we’d add a whole heap of caution – a double helping – here.

Time will tell, but it’s fair enough to say that this is at least a hint that Intel’s Raptor Lake Refresh processors might just turn up a bit sooner than expected – or certainly that they’re on track. At least the first batch of ‘K’ models anyway, as the other non-K processors (the ones that can’t be overclocked) aren’t expected until the start of next year.

These are on-sale dates, remember – the initial announcement of Raptor Lake Refresh will come earlier, in theory at Intel’s Innovation event (on September 28), just a month from now.

You might also like



from TechRadar: computing components news https://ift.tt/p7z8nQG
via IFTTT

Friday 25 August 2023

GEEKOM Mini IT13 Packs Core i9 into 4x4 NUC Chassis: a 14-Core NUC

https://ift.tt/kiUSwzl

While Intel's classic 4x4 NUCs have been pretty powerful systems capable of handling demanding workloads, the company never cared to install its top-of-the-range CPUs into its compact PCs. GEEKOM apparently decided to fix this, and this week introduced its Mini IT13: the industry's first 4x4 desktop with an Intel Core i9 processor, offering 14 CPU cores inside.

The Mini IT13 from GEEKOM measures 117 mm × 112 mm × 49.2 mm, making it as small as Intel's classic NUC systems. Despite its compact size, it can pack Intel's mobile-focused 14-core Core i9-13900H (6P+8E cores, 20 threads, up to 5.40 GHz, 24 MB cache, 45W), which comes with an integrated Xe graphics processing unit (Xe-LP, 96 EUs or 768 stream processors at up to 1.50 GHz).

To maintain consistent CPU performance and avoid overheating and performance drops even under significant loads, the system employs a blower-style cooler, which produces up to 43.6 dBA of noise – so the machine is not exactly whisper quiet, to say the least.

The compact PC supports up to 64 GB of DDR4 memory through two SODIMMs, an M.2-2280 SSD with a PCIe 4.0 x4 interface, an M.2-2242 SSD with a SATA interface, and an additional 2.5-inch HDD or SSD for more extensive storage.

As far as connectivity is concerned, the GEEKOM Mini IT13 comes with a Wi-Fi 6E + Bluetooth 5.2 module, a 2.5 GbE port, two USB4 connectors, three USB 3.2 Gen2 ports, one USB 2.0 Type-A connector, two HDMI 2.0 outputs (in addition to two DPs supported through USB4), an SD card reader, and a TRRS audio jack for headphones.

Although GEEKOM does not directly mention it, the USB4 ports potentially allow connecting an external graphics card in an eGFX enclosure, making the Mini IT13 quite a decent gaming machine. Meanwhile, even without an external graphics card, the unit can support up to four displays simultaneously.

Interestingly, the GEEKOM IT13 machine does not cost an arm and a leg. The cheapest version with Core i5-13500H, 16 GB of RAM, and a 512 GB SSD can be purchased for $499, whereas the most expensive model with Core i9-13900H, 32 GB of memory, and 2 TB of solid-state storage costs $789.



from AnandTech https://ift.tt/pvVEjFK
via IFTTT

Thursday 24 August 2023

Intel Core i7-14700K may be the only next-gen CPU worth buying if this leak’s right

https://ift.tt/wt3X9ns

Intel’s Raptor Lake Refresh line-up could have a standout CPU in the squad, and a bunch of very run-of-the-mill teammates, if a new leak pans out.

VideoCardz noticed that MSI accidentally shared what’s apparently a product training video (on YouTube) for its Intel motherboards, which imparts an overview of Team Blue’s 14th-gen chips, covering performance levels too.

Firstly, that’s a major faux pas to say the least. And secondly, it doesn’t make for comfortable viewing, at least not for anyone who has hopes that Raptor Lake Refresh will pull out some solid gains over current-gen Intel processors.

We’re told that Raptor Lake Refresh processors will be on average about 3% faster than Raptor Lake. In other words, most next-gen CPUs will only be slightly quicker than what we have now.

The exception is the Core i7-14700K, which will be 17% faster in multi-threading, according to MSI’s leaked briefing for staff. That’s in line with rumored 15% gains for the 14700K, and MSI also observes that the reason is Intel has upgraded the 14700K with four more efficiency cores than its predecessor – again, something that has been heavily rumored at this point.

Analysis: Take a beat

This is rather disappointing in all honesty, although we shouldn’t jump the gun as we can’t put too much weight on a single leak. 

Is it genuine? It looks that way, though we can’t see the video now to check (it’s been made private, as the clip should have been in the first place). Has MSI got some of this info wrong, perhaps? Maybe. We don’t know, and we won’t know until we get some Raptor Lake Refresh processors in for testing ourselves to compare to current Raptor Lake chips.

Of course, Raptor Lake Refresh has had somewhat reined-in expectations from the off, seeing as it’s always been billed as just a simple refresh. But some rumors have indicated that, as well as a strong 14700K that’s a candidate for the best processors list, some other 14th-gen chips might offer impressive gains too. For example, the 14600K has been suggested to be a pacey offering as well in the past. Maybe that leak is wrong, though.

We’d certainly expect more than a 3% average improvement for Intel’s next-gen CPUs. MSI doesn’t actually frame this as single-core or multi-core, though, so perhaps it’s the former (the 17% uptick for the 14700K clearly mentions multi-core). And from that perspective, 3% may not be terrible (some CPUs will still offer a bit more than that, of course – and maybe a fair bit more for multi-core, we can hope).

All this is even more reason to not be leaping to conclusions yet, but as it stands, this leak does not get us excited for Intel’s Raptor Lake Refresh processors.

You might also like



from TechRadar: computing components news https://ift.tt/OAMucNX
via IFTTT

Tuesday 22 August 2023

Synopsys Surpasses $500M/Year in AI Chip Revenue, Expects Further Rapid Growth

https://ift.tt/1Ujbp7c

Demand for generative artificial intelligence (AI) applications is so high that NVIDIA's high-performance compute GPUs like A100 and H100 are reportedly sold out for quarters to come. Dozens of companies are developing AI-oriented processors these days and, like the gold rushes of old, the tool suppliers are some of the biggest winners. As part of their Q3 earnings report, Synopsys, one of the leading suppliers of electronic design automation (EDA) tools and chip IP, disclosed that it has already booked over half a billion dollars in AI-related revenue in the last year.

"AI chips are a core value stream for Synopsys, already accounting on a trailing 12-month basis for well over $0.5 billion," said Aart J. de Geus, the outgoing chief executive of Synopsys, at the conference call with analysts and investors (via SeekingAlpha). "We see this growth continuing throughout the decade."

Rising demand for diverse generative AI applications is propelling the AI server market's growth, going from $30 billion in 2023 to an impressive $150 billion by 2027, according to the head of Foxconn. The market for AI processors is poised to expand at a similar pace, and Synopsys is projecting it to exceed $100 billion by 2030.

"Use cases for AI are proliferating rapidly, as are the number of companies designing AI chips," said de Geus. "Novel architectures are multiplying, stimulated by vertical markets, all wanting solutions optimized for their specific application. Third parties estimate that today's $20 billion to $30 billion market for AI chips will exceed $100 billion by 2030."

AI processors are set to become a sizable part of the semiconductor market in general. In fact, sales of AI chips may account for 10% of the whole semiconductor market several years down the road. Furthermore, they will be a major driver for the semiconductor market growth as they will enable new types of applications, such as self-driving vehicles.

"In this new era of 'smart everything,' these chips in turn drive growth in surrounding semiconductors for storage, connectivity, sensing, AtoD and DtoA converter, power management," said the head of Synopsys. "Growth predictions for the entire semi market to pass $1 trillion by 2030 are thus quite credible."

Perhaps the most amusing part about Synopsys earning over $500 million on AI chips in about a year is that a significant part of the company's revenue comes from AI-enabled EDA tools. Essentially, the company is selling EDA software that uses artificial intelligence to develop artificial intelligence chips.

Sources: Synopsys, SeekingAlpha.



from AnandTech https://ift.tt/zlLGhtW
via IFTTT

Friday 18 August 2023

Intel Cuts Some R&D Positions in California to Reduce Costs

https://ift.tt/RYhDaAV

As Intel continues to refocus on its core competencies, the company has been no stranger to shedding business units and jobs in the process. And while the company, with a headcount of roughly 132,000, hasn't enacted any massive layoffs, there have been numerous cuts at all levels over the past couple of years, with these layoffs now extending to R&D.

The Sacramento Inno reported this week that Intel is set to lay off 140 employees, including 89 from the Folsom, California campus, and 51 from San Jose. The Folsom cuts span across 37 job classifications, but most prominently impact roles titled 'engineer' and 'architect.' To provide further specifics, the layoffs include 10 GPU software development engineers, eight system software development engineers, six cloud software engineers, six product marketing engineers, and six system-on-chip design engineers.

The reductions are intended to decrease Intel's operational costs and pave a path to renewed profitability. Even so, it remains surprising that Intel decided to cut its workforce at one of its key sites; in the end, Intel's long-term success depends on its R&D prowess, and software is as important as hardware in Intel's business.

Intel's Folsom site has historically been pivotal for various research and development endeavors, including SSDs, graphics processors, software, and chipsets. Since Intel sold its 3D NAND and SSD business to SK Hynix in late 2021, engineers working on those products either joined Solidigm, were reassigned to other projects, left of their own accord, or were laid off. The recent layoffs of GPU specialists are somewhat unexpected, given that Intel's long-term plans still have the company developing GPUs for every segment of the market, from datacenter accelerators to integrated GPUs.

California is where Intel is headquartered. As of now, Intel employs over 13,000 people in California – more than the roughly 12,000 it employs in Arizona, but fewer than the 20,000 in Oregon, its two major manufacturing sites. As of early 2022, the Folsom site employed 5,300 individuals; counting these reductions, a total of almost 500 positions have been eliminated from the Folsom R&D campus this year, following previous layoffs in January, March, and May.

Meanwhile, according to the Inno, in notifying state authorities, Intel has hinted at the possibility of internal relocations for some affected employees.



from AnandTech https://ift.tt/5lN6SKY
via IFTTT

Monday 14 August 2023

Apple’s M3 Ultra chips promise a huge performance jump - if rumors are true

https://ift.tt/KxqZ3Qh

As we gear up for Apple’s September iPhone 15 event, we’re getting more details of what to expect from the rumored M3 chip lineup, especially in regard to the high-end M3 Ultra chip, which is expected to launch in 2024.

Bloomberg’s Mark Gurman has been a source of plenty of juicy Apple scoops and predictions, and in his Power On newsletter, he states that the M3 Ultra will offer a huge increase in CPU cores and a humble bump in GPU cores. 9to5Mac breaks down the reported specifications for each of the Ultra chips we should be seeing.

The base M3 Ultra is rumored to feature a 32-core CPU with 24 performance cores and eight efficiency cores, plus a 64-core GPU – a potentially huge leap over the M2 Ultra, which has a 24-core CPU with 16 performance cores and eight efficiency cores, plus a 60-core GPU.

As for higher-end models, the top-end M3 Ultra will apparently come with a 32-core CPU (24 performance cores and eight efficiency cores) and 80 GPU cores, which again surpasses the current top-end M2 Ultra chip, with its 24-core CPU (16 performance cores and eight efficiency cores) and 76-core GPU. Though note, again, there is only a slight bump in GPU cores.
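The ‘huge leap’ versus ‘slight bump’ framing falls straight out of those rumored numbers – a quick sanity check:

```python
# Rumored top-end M3 Ultra vs the current top-end M2 Ultra, per the
# figures above (CPU: 24 -> 32 cores, GPU: 76 -> 80 cores).

def pct_gain(old: int, new: int) -> float:
    """Percentage increase in core count."""
    return (new - old) / old * 100

cpu_gain = pct_gain(24, 32)   # roughly a third more CPU cores
gpu_gain = pct_gain(76, 80)   # only around 5% more GPU cores
```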

The jump in CPU cores for the M3 Ultra will come in the form of performance cores rather than efficiency cores. Having a higher performance core count means the new chip will be more capable of complex computational tasks, which should allow you to run multiple demanding programs with ease. 

Most daily tasks can be handled by the lower-powered efficiency cores (to help save battery life in MacBooks), but if you’re stressing your machine, the performance cores will kick in and power you through your work.

The M3 Ultra specs – if they turn out to be accurate – will be more than a bit overkill for most people, so you might be better off setting your sights on the more modest and hopefully more affordable M3 chip. It’ll be a good performance jump from the M2, and will ‘future-proof’ your new purchase for years to come.

Rumors suggest we’ll see new M3 Macs in October 2023, with the 14-inch and 16-inch MacBook Pro with the M3 Pro chips expected to make an appearance next year.



from TechRadar: computing components news https://ift.tt/su4O1Ag
via IFTTT

Samsung, MemVerge, and H3 Build 2TB CXL Memory Pool

https://ift.tt/7YWwRe3

Samsung, MemVerge, H3 Platform, and XConn have jointly unveiled their 2 TB Pooled CXL Memory System at the Flash Memory Summit. The device can be connected to up to eight hosts, allowing them to use its memory when needed. The 2 TB Pooled CXL Memory system has software enabling it to visualize, pool, tier, and dynamically allocate memory to connected hosts.

The 2 TB Pooled CXL Memory system is a 2U rack-mountable machine built by H3 with eight 256 GB Samsung CXL memory modules connected using XConn's XC50256 CXL 2.0 switch supporting 256 PCIe Gen5 lanes and 32 ports. The firmware of the 2 TB Pooled CXL Memory system allows you to connect it to up to eight hosts that can dynamically use CXL memory when they need it, thanks to software by MemVerge.

The Pooled CXL Memory system was developed to overcome limitations in memory capacity and composability in today's system architecture, which involves tight coupling between CPU and DRAM. Such architecture leads to performance challenges in highly distributed AI/ML applications, such as spilling memory to slow storage, excessive memory copying, I/O to storage, serialization/deserialization, and Out-of-Memory errors that can crash applications.

Attaching 2 TB of fast, low-latency memory to eight host systems – over a PCIe 5.0 interface with the CXL 2.0 protocol on top – and sharing it dynamically between them saves a lot of money while providing plenty of performance benefits. According to the companies, the initiative represents a significant step towards creating a more robust and flexible memory-centric data infrastructure for modern AI applications.
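MemVerge hasn’t published its allocation interface, but the pooling concept itself – one shared pot of capacity that hosts borrow from and return to on demand – can be sketched as a toy model (this is conceptual, not the actual Memory Machine X API):

```python
# Conceptual sketch of a pooled-memory broker: one shared pool of
# capacity, dynamically lent out to up to eight hosts. This is a toy
# model, not MemVerge's actual software.

class MemoryPool:
    def __init__(self, capacity_gb: int = 2048, max_hosts: int = 8):
        self.capacity_gb = capacity_gb
        self.max_hosts = max_hosts
        self.allocations: dict[str, int] = {}  # host -> GB currently borrowed

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, host: str, gb: int) -> bool:
        """Grant a host extra memory if the pool can cover the request."""
        if host not in self.allocations and len(self.allocations) >= self.max_hosts:
            return False  # all host ports are in use
        if gb > self.free_gb:
            return False  # not enough capacity left in the pool
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

    def release(self, host: str) -> None:
        """Return a host's borrowed memory to the pool."""
        self.allocations.pop(host, None)
```

A host that spikes can borrow, say, 512 GB, then release it for another host to use – the composability the companies are pitching, as opposed to DRAM permanently soldered to one CPU.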

"Modern AI applications require a new memory-centric data infrastructure that can meet the performance and cost requirements of its data pipeline," said Charles Fan, CEO and co-founder of MemVerge. "Hardware and software vendors in the CXL Community are co-engineering such memory-centric solutions that will deeply impact our future."

The jointly developed demonstration system can be pooled, tiered with main memory, and dynamically provisioned to applications with Memory Machine X software from MemVerge and its elastic memory service. A viewer service showcases the system's physical layout and provides a heat map indicating memory capacity and bandwidth consumption per application.

"The concept system unveiled at Flash Memory Summit is an example of how we are aggressively expanding its usage in next-generation memory architectures," said JS Choi, Vice President of New Business Planning Team at Samsung Electronics. "Samsung will continue to collaborate across the industry to develop and standardize CXL memory solutions, while fostering an increasingly solid ecosystem."



from AnandTech https://ift.tt/rzwNDdB
via IFTTT

Wednesday 9 August 2023

Memory Makers on Track to Double HBM Output in 2023

https://ift.tt/9vED1Bo

TrendForce projects a remarkable 105% increase in annual bit shipments of high-bandwidth memory (HBM) this year. This boost comes in response to soaring demands from AI and high-performance computing processor developers, notably Nvidia, and cloud service providers (CSPs). To fulfill demand, Micron, Samsung, and SK Hynix are reportedly increasing their HBM capacities, but new production lines will likely start operations only in Q2 2024.

More HBM Is Needed

Memory makers managed to more or less match HBM supply and demand in 2022 – a rare occurrence in the DRAM market. However, an unprecedented demand spike for AI servers in 2023 forced developers of the relevant processors (most notably Nvidia) and CSPs to place additional orders for HBM2E and HBM3 memory. This made DRAM makers use all of their available capacity and start placing orders for additional tools to expand their HBM production lines to meet the demand for HBM2E, HBM3, and HBM3E memory in the future.

However, meeting this HBM demand is not straightforward. In addition to making more DRAM devices in their cleanrooms, DRAM manufacturers need to assemble these memory devices into intricate 8-Hi or 12-Hi stacks, and here they seem to have a bottleneck, since they do not have enough TSV production tools, according to TrendForce. To produce enough HBM2, HBM2E, and HBM3 memory, leading DRAM producers have to procure new equipment, which takes 9 to 12 months to be made and installed in their fabs. As a result, a substantial hike in HBM production is anticipated around Q2 2024, the analysts claim.

A noteworthy trend pinpointed by TrendForce analysts is the shifting preference from HBM2E (used by AMD's Instinct MI210/MI250/MI250X, Intel's Sapphire Rapids HBM and Ponte Vecchio, and Nvidia's H100/H800 cards) to HBM3 (incorporated in Nvidia's H100 SXM and GH200 supercomputer platform and AMD's forthcoming Instinct MI300-series APUs and GPUs). TrendForce believes that HBM3 will account for 50% of all HBM memory shipped in 2023, whereas HBM2E will account for 39%. In 2024, HBM3 is poised to account for 60% of all HBM shipments. This growing demand, when combined with its higher price point, promises to boost HBM revenue in the near future.

Just yesterday, Nvidia launched a new version of its GH200 Grace Hopper platform for AI and HPC that uses HBM3E memory instead of HBM3. The new platform, consisting of a 72-core Grace CPU and a GH100 compute GPU, boasts higher memory bandwidth for the GPU, and it carries 144 GB of HBM3E memory, up from 96 GB of HBM3 in the original GH200. Considering the immense demand for Nvidia's offerings for AI, Micron – which will be the only supplier of HBM3E in 1H 2024 – stands a high chance of benefiting significantly from the freshly released hardware that HBM3E powers.

HBM Is Getting Cheaper, Kind Of

TrendForce also noted a consistent decline in HBM product ASPs each year. To invigorate interest and offset decreasing demand for older HBM models, prices for HBM2e and HBM2 are set to drop in 2023, according to the market tracking firm. With 2024 pricing still undecided, further reductions for HBM2 and HBM2e are expected due to increased HBM production and manufacturers' growth aspirations.

In contrast, HBM3 prices are predicted to remain stable, perhaps because, at present, it is exclusively available from SK Hynix, and it will take some time for Samsung to catch up. Given its higher price compared to HBM2e and HBM2, HBM3 could push HBM revenue to an impressive $8.9 billion by 2024, marking a 127% YoY increase, according to TrendForce.

SK Hynix Leading the Pack

SK Hynix commanded 50% of the HBM memory market in 2022, followed by Samsung with 40% and Micron with a 10% share. Between 2023 and 2024, Samsung and SK Hynix will continue to dominate the market, holding nearly identical stakes that sum up to about 95%, TrendForce projects. On the other hand, Micron's market share is expected to hover between 3% and 6%.

Meanwhile, for now, SK Hynix seems to have an edge over its rivals. SK Hynix is the primary producer of HBM3 and the only company to supply memory for Nvidia's H100 and GH200 products. In comparison, Samsung predominantly manufactures HBM2E, catering to other chip makers and CSPs, and is gearing up to start making HBM3. Micron, which does not have HBM3 in its roadmap, produces HBM2E (which Intel reportedly uses for its Sapphire Rapids HBM CPU) and is getting ready to ramp up production of HBM3E in 1H 2024, which will give it a significant competitive advantage over rivals that are expected to start making HBM3E only in 2H 2024.



from AnandTech https://ift.tt/4WCoUqs
via IFTTT

Tuesday 8 August 2023

NVIDIA Unveils GH200 'Grace Hopper' GPU with HBM3e Memory

https://ift.tt/18HyDXS

At SIGGRAPH in Los Angeles, NVIDIA unveiled a new variant of their GH200 'superchip,' which is set to be the world's first GPU chip equipped with HBM3e memory. Built to crunch the world's most complex generative AI workloads, the NVIDIA GH200 platform is designed to push the envelope of accelerated computing. Pooling their strengths in the GPU space with their growing efforts in the CPU space, NVIDIA is looking to deliver a semi-integrated design to conquer the highly competitive and complicated high-performance computing (HPC) market.

Although we've covered some of the finer details of NVIDIA's Grace Hopper-related announcements, including their disclosure that GH200 has entered full production, NVIDIA's latest announcement is that a new GH200 variant with HBM3e memory is coming later – in Q2 2024, to be exact. This is in addition to the GH200 with HBM3 already announced and due to land later this year, meaning NVIDIA will have two versions of the same product: the GH200 with HBM3 incoming now, and the GH200 with HBM3e set to follow.

During their keynote at SIGGRAPH 2023, NVIDIA President and CEO Jensen Huang said, "To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs." Jensen went on to say, "The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center."

NVIDIA's GH200 GPU is set to be the world's first chip to ship with HBM3e memory. In a dual-configuration setup, it will be available with up to 282 GB of HBM3e memory, which NVIDIA states "delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering."

Perhaps one of the most notable details NVIDIA shares is that the incoming GH200 GPU with HBM3e is 'fully' compatible with the already announced NVIDIA MGX server specification, unveiled at Computex. This allows system manufacturers to have over 100 different variations of servers that can be deployed and is designed to offer a quick and cost-effective upgrade method.

NVIDIA claims that the GH200 GPU with HBM3e provides up to 50% faster memory performance than the current HBM3 memory and delivers up to 10 TB/s of bandwidth, with up to 5 TB/s per chip.
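NVIDIA's dual-configuration figures are easy to reconcile. The following quick sketch uses only the numbers quoted above; note that the per-chip capacity is our division, not an NVIDIA-stated figure:

```python
# Consistency check of NVIDIA's dual-configuration GH200 HBM3e figures.
chips = 2
per_chip_bw_tbps = 5.0     # "up to 5 TB/s per chip"
total_capacity_gb = 282    # "up to 282 GB of HBM3e memory" (dual config)

total_bw_tbps = chips * per_chip_bw_tbps          # aggregate bandwidth
per_chip_capacity_gb = total_capacity_gb / chips  # derived per-chip capacity

print(f"Aggregate bandwidth: {total_bw_tbps:.0f} TB/s")
print(f"Per-chip capacity:   {per_chip_capacity_gb:.0f} GB")
```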

We've already covered the announced DGX GH200 AI Supercomputer built around NVIDIA's Grace Hopper platform. The DGX GH200 is a 24-rack cluster built entirely on NVIDIA's architecture, with a single DGX GH200 combining 256 chips and offering 120 TB of CPU-attached memory. These are connected using NVIDIA's NVLink, with up to 96 local L1 switches providing immediate communications between GH200 blades. NVLink allows the deployments to work together over a high-speed, coherent interconnect, giving the GH200 full access to CPU memory and allowing access to up to 1.2 TB of memory in a dual configuration.

NVIDIA states that leading system manufacturers are expected to deliver GH200-based systems with HBM3e memory sometime in Q2 2024. It should also be noted that the GH200 with HBM3 memory is currently in full production and is set to launch by the end of this year. We expect to hear more about the GH200 with HBM3e memory from NVIDIA in the coming months.

Source: NVIDIA



from AnandTech https://ift.tt/CSkbKoP
via IFTTT

Leaked benchmarks for Intel’s next-gen Raptor Lake Refresh CPUs look shaky – but don’t panic

https://ift.tt/CWwNjUP

Intel’s Raptor Lake Refresh desktop processors have been spotted in another leak, and this time round we have some more benchmarks to feast our eyes on.

In this case, the benchmarks are drawn from CrossMark (a suite produced by BAPCo to provide a gauge of app performance and responsiveness), and we have results of both Intel’s incoming Core i9-14900K and the Core i7-14700K.

As spotted by Tom’s Hardware, the Core i9-14900K recorded an overall score of 2,265 (add seasoning, as ever), with the Core i7-14700K hitting 1,980. Thus we can conclude the 14900K is around 14% faster than the 14700K, at least going by this one result.

The 14900K is slightly faster still in some of the tests that make up the overall benchmark – notably, it’s 20% quicker in the responsiveness test.
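That headline percentage is simply the ratio of the two leaked overall scores (a hedged illustration of the arithmetic; the per-subtest scores behind the 20% responsiveness gap weren't published in the leak):

```python
# How the "around 14% faster" figure follows from the leaked CrossMark scores.
def speedup_pct(faster: int, slower: int) -> float:
    """Percentage by which `faster` outscores `slower`."""
    return (faster / slower - 1) * 100

i9_14900k = 2265  # leaked overall CrossMark score
i7_14700k = 1980

print(f"{speedup_pct(i9_14900k, i7_14700k):.1f}%")  # ~14.4%, i.e. "around 14%"
```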

What this leak also shows is the purported core counts of these Raptor Lake Refresh CPUs, and it backs up what has previously been aired on the rumor mill – namely that the 14700K is going to have more cores, 20 of them. That’ll break down to 8 performance cores and 12 efficiency cores, with the latter being 4 more than the 13700K.

The 14900K will keep the same core configuration as its Raptor Lake predecessor, which occupies the number three spot in our list of the best processors (the 13700K is number one, incidentally, to give you some idea of how important these sequels are).

Unfortunately, the spec details provided by this leak don’t cover clock speeds, but Raptor Lake Refresh is only expected to juice things up to a relatively minor extent compared to Intel’s current-gen CPUs. One theory we’ve heard is that 14th-gen processors will be 200MHz faster across most of the range.


Analysis: Why we shouldn’t focus on the scores here

Why aren’t we comparing these 14th-gen processors to the results of their 13th-gen counterparts here? Well, because the latter are actually faster in the CrossMark results database.

While that might seem worrying, or indeed even ridiculous, remember that these Raptor Lake Refresh processors are still engineering samples – or that’s what we presume, anyway – and clearly not running at their full performance levels. After all, there’s no way Intel would release a new generation of silicon that is slower than the previous range of CPUs, for obvious reasons.

For the same reason, there’s no point in drawing comparisons between these leaked results and AMD Ryzen CPUs.

Don’t worry about the raw performance side of the equation here. The value of this leak is in showing that the Core i7-14700K does indeed up the efficiency core count – or at least, this is yet another indication that this is the case – and that it’s not too far off the performance of the 14900K. Actually, we would expect the gap to be a bit narrower – based on current-gen Raptor Lake results in CrossMark – but there could be something else going on behind the scenes here.

As ever with spilled benchmarks, we must be careful about drawing too much in the way of conclusions.

The other interesting thing about this leak is its very existence, and the fact that we’re now seeing multiple Raptor Lake Refresh benchmarks popping up on the grapevine (there was a Cinebench leak for the 14700K last month). As the release of any hardware nears, more leaks appear, and so this is a hint that Intel’s launch date for 14th-gen CPUs is nearing. October – the rumored month for these chips to hit the shelves – looks a good bet at this point.

After all, we just heard that Intel’s Innovation event in September will see Team Blue reveal Meteor Lake processors, which are the laptop chips to run alongside Raptor Lake Refresh on the desktop – so surely the latter will get a mention, too? We shall see.



from TechRadar: computing components news https://ift.tt/kFyENTL
via IFTTT

Monday 7 August 2023

Intel to reveal Meteor Lake CPUs in September – and to talk about a ‘bold vision’ for AI

https://ift.tt/CWwNjUP

Intel will be revealing its next-gen Meteor Lake processors at an event in September, along with other details about the road ahead for its CPUs.

As VideoCardz flagged up, Intel Innovation is a two-day event taking place on September 19-20, and we’ve just got the schedule for the various sessions that’ll happen there.

One of those sessions is ‘Intel Client Hardware Roadmap and the Rise of AI,’ which, translated, means spilling the beans about future CPUs.

In the description of the session, Intel tells us that we will learn about the “highly anticipated Intel Core Ultra processors (codename Meteor Lake)” and the firm’s “bold vision for AI.”

Meteor Lake is Intel’s next generation of processors, as mentioned, though the caveat is that these will be laptop chips.

According to the rumor mill, next-gen desktop processors will be provided via a different route: Raptor Lake Refresh. As the name indicates, this is a simple refresh of existing Raptor Lake CPUs with faster clock speeds and a bit of honing.


Analysis: Blazing a trail for power-efficiency

Meteor Lake will usher in an interesting change in Intel’s naming scheme, as you may have noticed in the above spiel: Intel Core Ultra is the new name supposedly replacing the traditional Core i9, i7, and so on branding. So we’ll have Intel Core Ultra 9 and Core Ultra 7 (although exactly what that’ll mean for these chips isn’t clear, except that they’ll be ‘premium’).

Meteor Lake is expected to achieve great things with power efficiency, with chatter on the grapevine pointing to gains in the order of 50%. We also just heard that clock speeds are faster than anticipated (going by previous leaks), but all of this must be taken with an appropriate dose of skepticism.

Clearly, though, Intel will have taken strides forward with Meteor Lake, and efficiency is key when it comes to laptops, so it makes sense that the next-gen chips will do big things on this front.

You may have noticed Intel’s mentions of AI, too, and this references the fact that Meteor Lake CPUs will have a dedicated AI acceleration unit (known as a VPU). This will beef up performance for AI tasks, though we don’t yet know the full details of how, and we’ll doubtless learn more at the Innovation event.

Intel is expected to use AI in the best processors of the future to do some really clever stuff, such as examining incoming tasks and directing CPU resources appropriately to tackle any workload in the most efficient way. We may hear a bit more about that, too, as part of the ‘bold vision’ for AI going forward.



from TechRadar: computing components news https://ift.tt/tYQ9zJ8
via IFTTT

Wednesday 2 August 2023

PCI-SIG Forms Optical Workgroup - Lighting The Way To PCIe's Future

https://ift.tt/Z4w8N2u

The PCI-Express interconnect standard may be going through some major changes in the coming years, based on a new announcement from the group responsible for the standard. The PCI-SIG is announcing this morning the formation of a PCIe Optical Workgroup, whose remit will be to work on enabling PCIe over optical interfaces. And while the group is still in its earliest of stages, the ramifications for the traditionally copper-bound standard could prove significant, as optical technology would bypass some increasingly stubborn limitations of copper signaling that traditional PCIe is soon approaching.

First released in the year 2000, PCI-Express was initially developed around the use of high-density edge connectors, which are still in use to this day. The PCIe Card Electromechanical (CEM) specification defines the PCIe add-in card form factors in use for the last two decades, ranging from x1 to x16 connections.

But while the PCIe CEM has seen very little change over the years – in large part to ensure backward and forward compatibility – the signaling standard itself has undergone numerous speed upgrades. Up to and including the latest PCIe 6.0 standard, the speed of a single PCIe lane has increased 32-fold since 2000 – and the PCI-SIG will double that once more with PCIe 7.0 in 2025. As a result of increasing the amount of data transferred per pin so significantly, the signaling frequency used by the standard has increased by a similar degree, with PCIe 7.0 set to operate at nearly 32 GHz.
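That growth can be traced through the per-generation signaling rates. The sketch below is illustrative: the raw rates and encodings are from public PCI-SIG material, but the effective-throughput figures are approximate (PCIe 6.0/7.0 FLIT overhead is simplified here), so the resulting ~30x multiplier is merely in the ballpark of the 32-fold figure, whose exact value depends on how encoding overhead is counted.

```python
# Approximate per-lane, per-direction throughput across PCIe generations,
# after accounting for line encoding. Figures are simplified estimates.
ENCODING = {
    "8b/10b": 8 / 10,          # PCIe 1.x-2.x
    "128b/130b": 128 / 130,    # PCIe 3.0-5.0
    "PAM4+FLIT": 242 / 256,    # PCIe 6.0+, FLIT overhead approximated
}

GENERATIONS = [
    ("1.0", 2.5, "8b/10b"),
    ("2.0", 5.0, "8b/10b"),
    ("3.0", 8.0, "128b/130b"),
    ("4.0", 16.0, "128b/130b"),
    ("5.0", 32.0, "128b/130b"),
    ("6.0", 64.0, "PAM4+FLIT"),
    ("7.0", 128.0, "PAM4+FLIT"),  # planned for 2025
]

def effective_gbps(rate_gt: float, encoding: str) -> float:
    """Usable bits per second per lane, per direction, after line coding."""
    return rate_gt * ENCODING[encoding]

base = effective_gbps(*GENERATIONS[0][1:])
for gen, rate, enc in GENERATIONS:
    bw = effective_gbps(rate, enc)
    print(f"PCIe {gen}: {rate:5.1f} GT/s -> ~{bw:6.1f} Gb/s/lane ({bw / base:4.1f}x)")
```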

In developing newer PCIe standards, the PCI-SIG has worked to minimize these issues, such as by employing alternative means of signaling that don’t require higher frequencies (e.g. PCIe 6.0 with PAM4), while the use of mid-route retimers, along with materials improvements, has helped to keep up with the higher frequencies the standard does use. But the frequency limitations of copper traces within a PCB have never been eliminated entirely, which is why in more recent years the PCI-SIG has developed an official standard for PCIe over copper cabling.

Still in the works for late this year, the PCIe 5.0/6.0 cabling standard offers the option of using copper cables to carry PCIe both within a system (internal) and between systems (external). In particular, the relatively thick copper cables have less signal loss than PCB traces, overcoming the immediate drawback of high frequency comms, which is the low channel reach (i.e. short signal propagation distance). And while the cabling standard is designed to be an alternative to the PCIe CEM connector rather than a wholesale replacement, its existence underscores the problem at hand with high frequency signaling over copper, a problem that will only get even more challenging once PCIe 7.0 is made available.


PCIe Insertion Loss Budgets Over The Years (Samtec)

And that brings us to the formation of the PCI-SIG Optical Workgroup. Like the Ethernet community, which tends to be at the forefront of high frequency signaling innovation, PCI-SIG is looking towards optical, light-based communication as part of the future for PCIe. As we’ve already seen with optical networking technology, optical comms offer the potential for longer ranges and higher data rates, owing to the vastly higher frequency of light, as well as a reduction in power consumed compared to increasingly power-hungry copper transmission. For these reasons, the PCI-SIG is forming an Optical Workgroup to help develop the standards needed to supply PCIe over optical connections.

Strictly speaking, the creation of a new optical standard isn’t necessary to drive PCIe over optical connections. Several vendors already offer proprietary solutions, with a focus on external connectivity. But the creation of an optical standard aims to do just that – standardize how PCIe over fiber optics would work and behave. As part of the working group announcement, the traditionally consensus-based PCI-SIG is making it clear that they aren’t developing a standard for any single optical technology, but rather they are aiming to make it technology-agnostic, allowing the spec to support a wide range of optical technologies.

But the relatively broad announcement from the PCI-SIG doesn’t just stop with optical cabling as a replacement for current copper cabling; the group is also looking at “potentially developing technology-specific form factors.” While the classic CEM connector is unlikely to go away entirely any time soon – the backwards and forwards compatibility is that important – the CEM connector is the weakest/most difficult way to deliver PCIe today. So if the PCI-SIG is thinking about new form factors, then it’s likely the Optical Workgroup will at least be looking at some kind of optical-based successor to the CEM. And if that were to come to pass, this would easily be the biggest change in the PCIe specification in its 23+ year history.

But, to be sure, if any such change were to happen, it would be years down the line. The new Optical Workgroup has yet to form, let alone set its goals and requirements. With a broad remit to make PCIe more optical-friendly, any impact from the group is several years away – presumably no sooner than a cabling standard for PCIe 7.0, if not a more direct impact on a PCIe 8.0 specification. But it shows where PCI-SIG leadership sees the future of the PCIe standard going, assuming they can get a consensus from their members. And, while not explicitly stated in the PCI-SIG’s press release, any serious use of optical PCIe in this fashion would seem to be predicated on cheap optical transceivers, i.e. silicon photonics.

In any case, it will be interesting to see what eventually comes out of the PCI-SIG’s new Optical Workgroup. As PCIe begins to approach the practical limits of copper, the future of the industry’s standard peripheral interconnect may very well be to go towards the light.



from AnandTech https://ift.tt/MtyL794
via IFTTT

Tuesday 1 August 2023

Intel Lunar Lake CPU rumors are causing disappointment – but let’s not get carried away

https://ift.tt/WBOqRlj

The rumor mill is spinning up with info about Intel’s Lunar Lake processors – chips that may not debut until 2025 (or maybe very late in 2024) – and those revelations have been met with some disappointment.

There are reasons we shouldn’t get carried away here, though, and the first one is that anything heard on the grapevine, early in the development of any given product, must be regarded with a great deal of caution.

At any rate, let’s outline these rumors around Lunar Lake before we dissect and criticize them – although a crucial point to remember is that these CPUs are power-efficient efforts targeted at laptops (and will be kind of like Intel’s Ice Lake, we’re told). They’ll arrive after Arrow Lake processors, Intel’s new Core CPUs due later in 2024.

As divulged in a recent video from YouTube leaker Moore’s Law is Dead (MLID), Lunar Lake chips could run with a top configuration of four performance cores (Lion Cove) and four efficiency cores (Skymont) only.

Further to that, PC Gamer spotted that Twitter-based hardware leaker Bionic Squash reckons that Lunar Lake silicon will top out at 64 Xe2 (Battlemage) Execution Units (EUs) for integrated graphics, which is fewer than some folks expected.


Analysis: It’s all about the efficiency

Those rumored specs have caused a stirring of disappointment online, as we noted at the outset. 4+4 performance and efficiency cores sounds rather underwhelming, and more EUs were expected for the integrated graphics (as you can see in the replies to the above tweet).

But hold your proverbial horses here. Remember, Lunar Lake is supposedly all about power-efficiency for laptops, not raw performance. (In 2025, Arrow Lake Refresh may be what Intel has planned for the latter department in the notebook arena – again according to the most recent whispers from MLID). Lunar Lake will be ultra-low-power (15W) chips to go in thin-and-light laptops, and should pack a good deal of performance for their size and spec.

Don’t forget that Lunar Lake will offer considerable architectural advancements from where we are now. And, of course, even if the top-end Battlemage integrated GPU only has 64 EUs, that’ll still compare very favorably with current-gen Alchemist graphics with more EUs, as again it’ll have made architectural strides forward.

Overall, then, Lunar Lake is likely to mean some sterling performance chops for lightweight laptops, and hopefully affordable gaming notebooks too. And besides, any specs floated here may end up being more powerful by the time 2025 rolls around, the most likely blast-off date for Lunar Lake (though as mentioned, there is a chance of a very late 2024 debut).



from TechRadar: computing components news https://ift.tt/v9Xj0zF
via IFTTT