
Unlock Advanced Threat Correlation

Try the Enzoic + ThreatQ Integration Free on the ThreatQ Marketplace

Exciting news for cybersecurity teams: Enzoic and ThreatQuotient have partnered to offer a powerful integration that combines Dark Web monitoring with advanced threat intelligence. Now you can try this integration for free on the ThreatQ Marketplace, giving your organization a unique opportunity […]

The post Unlock Advanced Threat Correlation appeared first on Security Boulevard.

Intel Is Trucking a 916,000-Pound 'Super Load' Across Ohio To Its New Fab

Intel has begun ferrying around 20 "super loads" across Ohio for the construction of its new $28 billion Ohio One Campus. The extensive planning and coordination required for these shipments are expected to cause road closures and delays during the nine days of transport. Tom's Hardware reports: Intel's new campus coming to New Albany, OH, is in heavy construction, and around 20 super loads are being ferried across Ohio's roads by the Ohio Department of Transportation after arriving via barge at a port on the Ohio River. Four of these loads, including the one hitting the road now, weigh around 900,000 pounds -- that's roughly 400 metric tons, or 76 elephants. The super loads were first planned for February but were delayed due to the immense planning workload. Large crowds are expected to gather along the route, potentially slowing transport even further. Intel's 916,000-pound shipment is a "cold box," a self-standing air-processor structure that facilitates the cryogenic technology needed to fabricate semiconductors. The box is 23 feet tall, 20 feet wide, and 280 feet long -- nearly the length of a football field. Its immense scale necessitates a transit process that moves at a "parade pace" of 5-10 miles per hour. Intel is taking over southern Ohio's roads for the coming weeks and months as it builds its new Ohio One Campus, a $28 billion project to create a 1,000-acre campus with two chip factories and room for more. Intel calls the site the new "Silicon Heartland"; it will house the first leading-edge semiconductor fab in the American Midwest and, once operational, will get to work on the "Angstrom era" of Intel processes, 20A and beyond. The Ohio Department of Transportation has shared a timetable for how long the process will take.
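The article's weight comparisons are easy to sanity-check with a quick unit conversion; a small sketch (the roughly 5.4 t per elephant figure is implied by the article's numbers, not stated in it):

```python
# Sanity-check the "super load" weight comparisons from the article.
LB_TO_KG = 0.45359237  # exact definition of the avoirdupois pound

cold_box_lb = 916_000
cold_box_t = cold_box_lb * LB_TO_KG / 1000  # kg -> metric tons
print(f"cold box: {cold_box_t:.0f} metric tons")  # ~415 t, i.e. "around 400"

# The "76 elephants" comparison for a ~900,000 lb load implies
# roughly 5.4 t per elephant, a plausible adult-elephant mass:
per_elephant_t = (900_000 * LB_TO_KG / 1000) / 76
print(f"per elephant: {per_elephant_t:.1f} t")
```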

Read more of this story at Slashdot.

One-Line Patch For Intel Meteor Lake Yields Up To 72% Better Performance

Michael Larabel reports via Phoronix: Covered last week on Phoronix was a new patch from Intel that, by tuning the P-State CPU frequency scaling driver, was showing big wins for Intel Core Ultra "Meteor Lake" performance and power efficiency. I was curious about the claims Intel posted for a couple of benchmarks and thus over the weekend set out to run many Meteor Lake benchmarks on this one-line kernel patch... The results are great: as much as 72% better performance on Intel Core Ultra laptops running Linux. [...] Looking at overall CPU power consumption across the wide variety of workloads tested, there was just a slight uptick in power use, leading to slightly better power efficiency overall too. See all the data here. So this is quite a nice one-line Linux kernel patch for Meteor Lake, and it will hopefully be mainlined for Linux 6.11, if not squeezed in as a "fix" for the current Linux 6.10 cycle. It's just too bad that it took six months after launch for this tuned EPP value to be determined. Fresh benchmarks between Intel Core Ultra and AMD Ryzen on the latest Linux software will be coming up soon on Phoronix.
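For context, the energy-performance preference (EPP) the patch retunes is also exposed per-CPU on Linux through the intel_pstate driver's sysfs files. A minimal sketch of flipping it from userspace (the sysfs base path is a parameter so the helper can be exercised against a mock directory; note the actual kernel patch changes the driver's hardware default EPP, not these files):

```python
import glob
import os

def set_epp(value: str, base: str = "/sys/devices/system/cpu") -> int:
    """Write an energy_performance_preference value (e.g. 'balance_performance')
    into every CPU's cpufreq policy under `base`. Returns the number of CPUs
    updated. Requires root on a real system."""
    updated = 0
    pattern = os.path.join(base, "cpu[0-9]*", "cpufreq",
                           "energy_performance_preference")
    for path in glob.glob(pattern):
        with open(path, "w") as f:
            f.write(value)
        updated += 1
    return updated
```

The accepted strings (such as default, performance, balance_performance, balance_power, power) are defined by the driver, and writes require root on real hardware.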

Intel Ditches Hyperthreading For Lunar Lake CPUs

An anonymous reader shares a report: Intel's fastest processors have included hyperthreading, a technique that lets more than one thread run on a single CPU core, for over 20 years -- and it's used by AMD (which calls it "simultaneous multi-threading") as well. But you won't see a little "HT" on the Intel sticker for any Lunar Lake laptops, because none of them use it. Hyperthreading will be disabled on all Lunar Lake CPU cores, including both performance and efficiency cores. Why? The reason is complicated, but basically it's no longer needed. The performance cores, or P-cores, on the new Lunar Lake series are 14 percent faster than the same cores on the previous-gen Meteor Lake CPUs, even with hyperthreading disabled. Turning the feature on would come at too high a power cost, and Lunar Lake is all about boosting performance while keeping this generation of laptops thin, light, and long-lasting. That means maximizing single-thread performance (the most relevant to users who typically focus on one task at a time, as is often the case on laptops) per unit of die area, to improve overall performance per watt. Getting rid of the physical components needed for hyperthreading just makes sense in that context.
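On Linux, whether SMT is active can be read off each core's sibling list (exposed at /sys/devices/system/cpu/cpuN/topology/thread_siblings_list); a small sketch that parses the kernel's list format, runnable here against sample strings rather than live sysfs:

```python
def parse_siblings(s: str) -> set[int]:
    """Parse a sysfs sibling list like '0,4' or '0-1' into a set of CPU ids."""
    cpus = set()
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

def smt_active(sibling_lists: list[str]) -> bool:
    """SMT/hyperthreading is active if any core exposes >1 logical CPU."""
    return any(len(parse_siblings(s)) > 1 for s in sibling_lists)

# A hyperthreaded 2C/4T part vs. an SMT-less 8-core design like Lunar Lake:
print(smt_active(["0,2", "1,3"]))              # True
print(smt_active([str(i) for i in range(8)]))  # False: 8 cores, 8 threads
```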

Intel unveils Lunar Lake architecture, moves RAM on-package

Hot on the heels of AMD, here’s Intel’s next-generation processor, this time for the laptop market.

Overall, Lunar Lake represents their second generation of disaggregated SoC architecture for the mobile market, replacing the Meteor Lake architecture in the lower-end space. At this time, Intel has disclosed that it uses a 4P+4E (8 core) design, with hyper-threading/SMT disabled, so the total thread count supported by the processor is simply the number of CPU cores, e.g., 4P+4E/8T.

↫ Gavin Bonshor at AnandTech

The most significant change in Lunar Lake, however, has nothing to do with IPC improvements, core counts, or power usage. No, the massive sea change here is that Lunar Lake will do away with separate memory sticks, instead opting for memory integrated on the processor package at a maximum of 32GB of LPDDR5X. This is very similar to how Apple packages the memory on its M-series chips, and yes, it also means that as far as thin Intel laptops go, you’ll no longer be able to upgrade your memory after purchase. You choose your desired amount of memory at purchase, and that’s what you’ll be stuck with.

Buyer beware, I suppose. We can only hope Intel isn’t going to default to 8GB.

Intel details new Lunar Lake CPUs that will go up against AMD, Qualcomm, and Apple

A high-level breakdown of Intel's next-gen Lunar Lake chips, which preserve some of Meteor Lake's changes while reverting others. (credit: Intel)

Given its recent manufacturing troubles, a resurgent AMD, an incursion from Qualcomm, and Apple’s shift from customer to competitor, it’s been a rough few years for Intel’s processors. Computer buyers have more viable options than they have had in many years, and in many ways the company’s Meteor Lake architecture was more interesting as a technical achievement than as an upgrade over the previous-generation Raptor Lake processors.

But even given all of that, Intel still provides the vast majority of PC CPUs—nearly four-fifths of all computer CPUs sold are Intel’s, according to recent analyst estimates from Canalys. The company still casts a long shadow, and what it does still helps set the pace for the rest of the industry.

Enter its next-generation CPU architecture, codenamed Lunar Lake. We’ve known about Lunar Lake for a while—Intel reminded everyone it was coming when Qualcomm upstaged it during Microsoft’s Copilot+ PC reveal—but this month at Computex the company is going into more detail ahead of availability sometime in Q3 of 2024.

For the second time in two years, AMD blows up its laptop CPU numbering system

AMD's Ryzen 9 AI 300 series is a new chip and a new naming scheme. (credit: AMD)

Less than two years ago, AMD announced that it was overhauling its numbering scheme for laptop processors. Each digit in its four-digit CPU model numbers picked up a new meaning, which, with the help of a detailed reference sheet, promised to inform buyers of exactly what it was they were buying.

One potential issue with this, as we pointed out at the time, was that it allowed AMD to bump the first and most important of those four digits every single year it decided to re-release a processor, regardless of whether that chip actually included substantive improvements. Thus a “Ryzen 7730U” from 2023 would look two generations newer than a Ryzen 5800U from 2021, despite being essentially identical.
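AMD's 2023-era "decoder ring" gave each digit a meaning, which is exactly what made this trick possible: the leading "portfolio year" digit could advance every year while the architecture digit stood still. A simplified sketch (the segment table is abridged and the mappings are illustrative of the published chart, not a complete reproduction):

```python
def decode_ryzen_mobile(model: str) -> dict:
    """Decode a 2023-scheme AMD mobile model number like '7730U'
    (digit meanings per AMD's decoder ring; tables abridged)."""
    digits, suffix = model[:4], model[4:]
    segment = {"3": "Ryzen 3", "5": "Ryzen 5", "7": "Ryzen 7",
               "8": "Ryzen 7/9", "9": "Ryzen 9"}
    return {
        "portfolio_year": 2016 + int(digits[0]),  # '7' -> 2023 portfolio
        "segment": segment.get(digits[1], "other"),
        "architecture": f"Zen {digits[2]}",       # the digit that reveals age
        "tdp_class": suffix,                      # e.g. 'U' for thin-and-light
    }

# The leading 7 makes the 7730U look current, but the third digit
# gives it away as Zen 3, the same architecture as 2021's 5800U:
print(decode_ryzen_mobile("7730U"))
```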

AMD is partially correcting this today by abandoning the self-described “decoder ring” naming system and resetting it to something more conventional.

Tech giants form AI group to counter Nvidia with new interconnect standard

Abstract image of a data center with flowchart. (credit: Getty Images)

On Thursday, several major tech companies, including Google, Intel, Microsoft, Meta, AMD, Hewlett-Packard Enterprise, Cisco, and Broadcom, announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new interconnect standard for AI accelerator chips in data centers. The group aims to create an alternative to Nvidia's proprietary NVLink interconnect technology, which links together multiple servers that power today's AI applications like ChatGPT.

The beating heart of AI these days lies in GPUs, which can perform massive numbers of matrix multiplications—necessary for running neural network architectures—in parallel. But one GPU often isn't enough for complex AI systems. NVLink can connect multiple AI accelerator chips within a server or across multiple servers. These interconnects enable faster data transfer and communication between the accelerators, allowing them to work together more efficiently on complex tasks like training large AI models.

This linkage is a key part of any modern AI data center system, and whoever controls the link standard can effectively dictate which hardware the tech companies will use. Along those lines, the UALink group seeks to establish an open standard that allows multiple companies to contribute and develop AI hardware advancements instead of being locked into Nvidia's proprietary ecosystem. This approach is similar to other open standards, such as Compute Express Link (CXL)—created by Intel in 2019—which provides high-speed, high-capacity connections between CPUs and devices or memory in data centers.
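A concrete way to see why the link between accelerators matters: gradient averaging during training is typically an all-reduce collective, whose cost is dominated by moving buffers between devices rather than by arithmetic. A pure-Python sketch of a naive ring-style all-reduce (plain lists stand in for device buffers; real implementations such as NCCL over NVLink chunk and pipeline the transfers):

```python
def ring_allreduce(buffers: list[list[float]]) -> list[list[float]]:
    """Naive ring all-reduce: each device's buffer circulates one hop per
    step; after n-1 steps every device holds the element-wise sum."""
    n = len(buffers)
    acc = [list(b) for b in buffers]      # each device's running sum
    moving = [list(b) for b in buffers]   # buffers in transit around the ring
    for _ in range(n - 1):
        # every device passes its in-transit buffer to the next device
        moving = [moving[(d - 1) % n] for d in range(n)]
        for d in range(n):
            acc[d] = [a + m for a, m in zip(acc[d], moving[d])]
    return acc

# Three "devices", each with a 2-element gradient; all end with the sum:
print(ring_allreduce([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))
```

Each of the n-1 steps moves every buffer one hop, so total traffic grows with buffer size and device count, which is why the speed of the interconnect, not the GPUs themselves, often bounds scaling.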

Rapid7 Releases the 2024 Attack Intelligence Report

Today, during our Take Command Summit, we released our 2024 Attack Intelligence Report, which pulls in expertise from our researchers, our detection and response teams, and threat intelligence teams. The result is the clearest picture yet of the expanding attack surface and the threats security professionals face every day.

Since the end of 2020, we’ve seen a significant increase in zero-day exploitation, ransomware attacks, and mass compromise incidents impacting many organizations worldwide. We have seen changes in adversary behaviors with ransomware groups and state-sponsored threat actors using novel persistence mechanisms and zero-day exploits to great effect.

Our 2024 Attack Intelligence Report is a 14-month look at data for marquee vulnerabilities and attack patterns. From it, we identified trends that are helpful for every security professional to understand.

Some key findings include:

A consistently high level of zero-day exploitation over the last three years. Since 2020, our vulnerability research team has tracked both the scale and the speed of exploitation. In two of the last three years, more mass compromise events arose from zero-day exploits than from n-day exploits, and 53% of widely exploited CVEs in 2023 and early 2024 began as zero-day attacks.

Network edge device exploitation has increased. Large-scale compromises stemming from network edge device exploitation nearly doubled in 2023. We found that 36% of the widely exploited vulnerabilities we tracked occurred in network edge technology; of those, 60% were zero-day exploits. These technologies represent a weak spot in our collective defenses.

Ransomware is still big business. We tracked more than 5,600 ransomware attacks between January 2023 and February 2024. And those are the attacks we know about, as many attacks may go unreported for a number of reasons. The ones we were able to track indicated trends in attacker motive and behavior. For instance, we saw an increase in what we term “smash-and-grab” attacks, particularly those involving file transfer solutions. A smash-and-grab attack sees adversaries gaining access to sensitive data and performing exfiltration as quickly as possible. While most ransomware incidents Rapid7 observed were still “traditional” attacks where data was encrypted, smash-and-grab extortion is becoming more common.

Attackers prefer to exploit simple vulnerability classes. While attackers still target tougher-to-exploit classes like memory corruption, most of the widely exploited CVEs we have tracked over the last few years arise from simpler root causes. For instance, 75% of the widespread threat CVEs Rapid7 has analyzed since 2020 stem from improper access control issues (such as remotely accessible APIs and authentication bypasses) and injection flaws (such as OS command injection).

These are just a few of the key findings in our 2024 Attack Intelligence Report. The report was released today in conjunction with our Take Command Summit, a day-long virtual cybersecurity summit at which the report is featured as a keynote. The summit brings together some of the most impactful members of the security community for critical conversations at a critical time. You can read the report here.

Xeon Phi support removed in GCC 15 compiler

Last week I wrote about Intel aiming to remove Xeon Phi support in GCC 15, with the products being end-of-life and deprecated in GCC 14. Some openly wondered whether the open-source community would allow it, given that Xeon Phi accelerators were available to buy just a few years ago, often at very low prices, and are still readily available used for around ~$50 USD, so some may yet find a use for them, especially during this AI boom. Today, the Intel Xeon Phi support was indeed removed.

↫ Michael Larabel

Xeon Phi PCIe cards are incredibly cheap on eBay, and every now and then my mouse hovers over the buy button – but I always realise just in time that the cards have become quite difficult to use, since support for them, already sparse to begin with, is only getting worse by the day. Support for them was already removed in Linux 5.10, and now GCC is pulling the plug too, so the only options are to keep using old kernels, or to pass the card through to a VM running an older Linux kernel version, which is a lot of headache for what is essentially a weird toy for nerds at this point.

GCC 15 will also, sadly, remove support for Itanium, which, as I’ve said before, is a huge disgrace and a grave mistake. Itanium is the future, and will stomp all over crappy architectures like x86 and ARM. With this deprecation, GCC relegates itself to the dustbin of history.
