Archive for the ‘AMD’ Category

AMD Radeon VII: A 7nm-long step in the right direction, but is that enough?

February 7th, 2019
Specs at a glance: AMD Radeon VII
STREAM PROCESSORS 3,840
TEXTURE UNITS 240
ROPS 64
CORE CLOCK 1,400MHz
BOOST CLOCK 1,800MHz
MEMORY BUS WIDTH 4,096-bit
MEMORY BANDWIDTH 1,024GB/s
MEMORY SIZE 16GB HBM2
OUTPUTS 3x DisplayPort 1.4, 1x HDMI 2.0b
RELEASE DATE February 7, 2019
PRICE $699 directly from AMD

In the world of computer graphics cards, AMD has been behind its only rival, Nvidia, for as long as we can remember. But a confluence of recent events finally left AMD with a sizable opportunity in the market.

Having established a serious lead with its 2016 and 2017 GTX graphics cards, Nvidia tried something completely different last year. Its RTX line of cards essentially arrived with near-equivalent power to the prior generation at the same prices (along with a new, staggering $1,200 card in its "consumer" line). The catch was that these cards' new, proprietary cores were supposed to enable a few killer perks in higher-end graphics rendering. But that big bet faltered, largely because only one truly RTX-compatible retail game currently exists, and Nvidia took the unusual step of warning investors about that fact.

Meanwhile, AMD finally pulled off a holy-grail number for its graphics cards: 7nm. As in, a smaller fabrication process that shrinks the GPU die, leaving more room on the package for other hardware and features (the Radeon VII's HBM2 RAM sits alongside the GPU on the same package). In the case of this week's AMD Radeon VII—which goes on sale today, February 7, for $699—that extra space is dedicated to a whopping 16GB of VRAM, well above the 11GB maximum of any consumer-grade Nvidia product. AMD also insists that the card's memory bandwidth has been beefed up enough to make that VII-specific perk valuable for any 3D application.

Posted in AMD, feature, Features, Gaming & Culture, Radeon, radeon vii | Comments (0)

AMD announces the $699 Radeon VII: 7nm Vega, coming February

January 9th, 2019
AMD Radeon VII. (credit: AMD)

AMD's next flagship video card will be the Radeon VII. The VII slots in above the RX Vega 64 and, by AMD's numbers, averages about 29 percent faster, putting it within spitting distance of Nvidia's RTX 2080.

The GPU inside the VII is called Vega 20, a die-shrunk version of the Vega 10 found in the Vega 64. The Vega 10 is built on GlobalFoundries' 14nm process; the Vega 20 is built on TSMC's 7nm process. The new process has enabled AMD to substantially boost the clock rate, from a peak of 1,564MHz in the Vega 64 to 1,800MHz in the VII. The new card's memory subsystem has also been uprated: it's still HBM2, but now there's 16GB clocked at 2Gb/s on a 4,096-bit bus, compared to 8GB clocked at 1.89Gb/s on a 2,048-bit bus. This gives a total of 1TB/s of memory bandwidth.
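
As a rough sanity check of those numbers, peak HBM2 bandwidth is just the bus width (in bytes) multiplied by the per-pin data rate. Here's a minimal sketch using the figures quoted above (nothing here is AMD's own math, just arithmetic):

```c
#include <stdio.h>

int main(void) {
    /* Peak bandwidth (GB/s) = (bus width in bits / 8) * per-pin rate (Gb/s). */
    double vega64     = (2048 / 8.0) * 1.89; /* ~484 GB/s */
    double radeon_vii = (4096 / 8.0) * 2.0;  /* 1,024 GB/s, i.e. 1TB/s */
    printf("Vega 64:    %.0f GB/s\n", vega64);
    printf("Radeon VII: %.0f GB/s\n", radeon_vii);
    return 0;
}
```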

The new chip has 128 ROPs to the old chip's 64, doubling the number of rasterized pixels it can push out each clock. However, it does fall behind its predecessor in one spec: it has only 60 compute units (3,840 stream processors) compared to 64 (4,096 stream processors).
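
For reference, GCN packs 64 stream processors into each compute unit, and peak pixel throughput scales with ROP count at a given clock. A quick back-of-the-envelope sketch of both, using the specs as reported in this announcement (the 1.8GHz clock is the VII's quoted peak, applied to both ROP counts purely for comparison):

```c
#include <stdio.h>

int main(void) {
    /* GCN: 64 stream processors per compute unit. */
    printf("Vega 20: 60 CUs x 64 = %d stream processors\n", 60 * 64);
    printf("Vega 10: 64 CUs x 64 = %d stream processors\n", 64 * 64);

    /* Peak pixel fill rate = ROPs x clock. */
    double clock_ghz = 1.8;
    printf("128 ROPs x %.1fGHz = %.1f Gpixels/s\n", clock_ghz, 128 * clock_ghz);
    printf(" 64 ROPs x %.1fGHz = %.1f Gpixels/s\n", clock_ghz, 64 * clock_ghz);
    return 0;
}
```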

Posted in AMD, Gaming & Culture, GPU, Tech | Comments (0)

These 12 FreeSync monitors will soon be compatible with Nvidia’s G-Sync

January 7th, 2019
An Nvidia graphic that depicts what G-Sync technology is like, I guess...

In addition to the entirely expected news about the upcoming RTX 2060, Nvidia's CES presentation this weekend included a surprise about its G-Sync display standard. That screen-tear- and input-lag-reducing technology will soon work with select monitors designed for VESA's competing DisplayPort Adaptive-Sync protocol (which is used in AMD's FreeSync monitors).

In announcing the move, Nvidia says the gaming experience on these variable refresh rate (VRR) monitors "can vary widely." So the company has gone to the trouble of testing 400 different Adaptive-Sync monitors to see which ones are worthy of being certified as "G-Sync Compatible."

Of those 400 tests, Nvidia says only 12 monitors have met its standards so far.

Posted in AMD, freesync, Gaming & Culture, gsync, NVIDIA | Comments (0)

Intel unveils a new architecture for 2019: Sunny Cove

December 12th, 2018
OK, it's not all that sunny, but it's a nice picture of a cove. (credit: Neil Williamson)

In 2019, Intel will release Core and Xeon chips built around a new architecture named Sunny Cove. The chips will add a bunch of new instructions to accelerate popular workloads such as cryptography and compression, with the company demonstrating a 75-percent improvement in compression performance relative to prior-generation parts.

Since 2015, Intel's mainstream processors under the Core and Xeon brands have been based around the Skylake architecture. Intel's original intent was to release Skylake on its 14nm manufacturing process and then follow that up with Cannon Lake on its 10nm process. Cannon Lake would add a handful of new features (it includes more AVX instructions, for example) but otherwise be broadly the same as Skylake.

However, delays in getting its 10nm manufacturing process running effectively forced Intel to stick with 14nm for longer than anticipated. Accordingly, the company followed Skylake (with its maximum of four cores in consumer systems) with Kaby Lake (with higher clock speeds and much greater hardware acceleration of modern video codecs), Coffee Lake (as many as eight cores), and Whiskey Lake (improved integrated chipset). The core Skylake architecture was unchanged across these variations, meaning that while their clock speeds differ, the number of instructions per cycle (IPC) is essentially identical.
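
Put another way, single-threaded performance is roughly IPC multiplied by clock speed, so with the core unchanged, whatever these chips gain over one another comes from the clock alone. A simplified sketch of that model (the clocks below are approximate boost figures for the i7-6700K and i7-8700K, used only for illustration):

```c
#include <stdio.h>

int main(void) {
    /* Single-thread performance ~ IPC x frequency; same Skylake core => same IPC. */
    double ipc = 1.0;              /* normalized, identical across these parts */
    double skylake_ghz = 4.2;      /* approx. i7-6700K boost clock */
    double coffee_lake_ghz = 4.7;  /* approx. i7-8700K boost clock */
    printf("Per-thread speedup from clock alone: %.2fx\n",
           (ipc * coffee_lake_ghz) / (ipc * skylake_ghz));
    return 0;
}
```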

Posted in AMD, Core, Intel, processors, sunny cove, Tech, Xeon | Comments (0)

Spectre, Meltdown researchers unveil 7 more speculative execution attacks

November 14th, 2018
(credit: Aurich Lawson / Getty Images)

Back at the start of the year, a set of attacks that leveraged the speculative execution capabilities of modern high-performance processors was revealed under the names Meltdown and Spectre. Since then, numerous variants of these attacks have been devised. In tandem with this, a range of mitigation techniques has been created to enable at-risk software, operating systems, and hypervisor platforms to protect against these attacks.

A research team including many of the original researchers behind Meltdown, Spectre, and the related Foreshadow and BranchScope attacks has published a new paper disclosing yet more attacks in the Spectre and Meltdown families. The result? Seven new possible attacks. Some are blocked by known mitigation techniques, but others are not, meaning that further work is required to safeguard vulnerable systems.

The previous investigations into these attacks have been a little ad hoc in nature, examining particular features of interest to provide, for example, a Spectre attack that can be performed remotely over a network, or a Meltdown-esque attack to break into SGX enclaves. The new research is more systematic, looking at the underlying mechanisms behind both Meltdown and Spectre and running through all the different ways speculative execution can be misdirected.
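
To give a flavor of what "misdirecting" speculative execution means in practice, here is the widely published Spectre variant 1 (bounds-check bypass) pattern. This is only an illustrative sketch, not code from the new paper and not a working exploit; the array names and sizes are invented, and the cache-timing step that actually recovers the secret is omitted:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];   /* probe array: one cache line per possible byte value */
volatile uint8_t sink;

/* If the branch predictor has been trained to expect x < array1_size, the CPU
 * may speculatively execute the body even for an out-of-bounds x. That pulls a
 * line of array2, indexed by out-of-bounds (potentially secret) data, into the
 * cache; a later timing pass over array2 can reveal which line is hot. */
void victim_function(size_t x) {
    if (x < array1_size) {
        sink = array2[array1[x] * 512];
    }
}

int main(void) {
    /* An attacker would mix many in-bounds "training" calls with a
     * malicious out-of-bounds index; only a benign call is shown here. */
    victim_function(3);
    return 0;
}
```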

Posted in AMD, ARM, Intel, meltdown, security, Spectre, speculative execution, Tech | Comments (0)

AMD outlines its future: 7nm GPUs with PCIe 4, Zen 2, Zen 3, Zen 4

November 6th, 2018
AMD Radeon Instinct MI60 (credit: AMD)

AMD today charted out its plans for the next few years of product development, with an array of new CPUs and GPUs in the development pipeline.

On the GPU front are two new datacenter-oriented GPUs: the Radeon Instinct MI60 and MI50. Based on the Vega architecture and built on TSMC's 7nm process, the cards are aimed not primarily at graphics (despite what one might think given that they're called GPUs) but rather at machine learning, high-performance computing, and rendering applications.

The MI60 will come with 32GB of ECC HBM2 (second-generation High-Bandwidth Memory) while the MI50 gets 16GB, and both offer memory bandwidth of up to 1TB/s. ECC is also used to protect all internal memory within the GPUs themselves. The cards will also support PCIe 4.0 (which doubles the transfer rate of PCIe 3.0) and direct GPU-to-GPU links using AMD's Infinity Fabric. The latter offers up to 200GB/s of bandwidth (about three times that of PCIe 4.0) between up to four GPUs.
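
For a sense of scale, here is a rough comparison of those link speeds. This is a sketch using nominal per-lane rates (real-world throughput is lower), and the 200GB/s Infinity Fabric figure is AMD's own:

```c
#include <stdio.h>

int main(void) {
    /* Per-lane rates with 128b/130b encoding, in GB/s per direction. */
    double pcie3_lane = (8.0 / 8.0) * 128.0 / 130.0;   /* ~0.985 GB/s */
    double pcie4_lane = (16.0 / 8.0) * 128.0 / 130.0;  /* ~1.97 GB/s  */
    double pcie3_x16  = 16 * pcie3_lane * 2;           /* ~31.5 GB/s both directions */
    double pcie4_x16  = 16 * pcie4_lane * 2;           /* ~63 GB/s both directions   */
    printf("PCIe 3.0 x16: ~%.0f GB/s bidirectional\n", pcie3_x16);
    printf("PCIe 4.0 x16: ~%.0f GB/s bidirectional\n", pcie4_x16);
    printf("Infinity Fabric link (AMD figure): 200 GB/s (~%.1fx PCIe 4.0 x16)\n",
           200.0 / pcie4_x16);
    return 0;
}
```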

Posted in 10nm, 7nm, AMD, CPU, GPU, Intel, machine learning, processors, Tech, Zen | Comments (0)

Intel CPUs fall to new hyperthreading exploit that pilfers crypto keys

November 2nd, 2018
(credit: Intel)

Over the past 11 months, the processors running our computers, and in some cases phones, have succumbed to a host of attacks. Bearing names such as Meltdown and Spectre, BranchScope, TLBleed, and Foreshadow, the exploits threaten to siphon some of our most sensitive secrets—say passwords or cryptographic keys—out of the silicon microarchitecture in ways that can’t be detected or stopped by traditional security defenses. On Friday, researchers disclosed yet another leak, one that has already been shown to exist on a wide range of Intel chips and may affect chips from other makers as well.

PortSmash, as the new attack is being called, exploits a largely overlooked side channel in Intel’s hyperthreading technology. A proprietary implementation of simultaneous multithreading, hyperthreading reduces the time needed to carry out parallel computing tasks, in which large numbers of calculations are performed simultaneously. The performance boost is the result of two logical processor cores sharing the hardware of a single physical core. The added logical cores make it easier to divide large tasks into smaller ones that can be completed more quickly.

Port contention as a side channel

In a paper scheduled for release soon, the researchers document how they were able to exploit the newly discovered leak to recover an elliptic curve private key from an OpenSSL-powered TLS server. The attack, which was carried out on servers running Intel Skylake and Kaby Lake chips and Ubuntu, worked by sending one logical core a steady stream of instructions and carefully measuring the time they took to execute.
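
The measurement side of that idea can be sketched in a few lines: time short bursts of your own instructions and watch for the latency spikes that appear when the sibling logical core competes for the same execution ports. The snippet below is only an illustration of the principle, not the researchers' code; it assumes an x86-64 compiler that provides __rdtsc() via x86intrin.h, and a real attack would pin itself alongside the victim and pick instructions that hit the specific ports the victim uses:

```c
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang, x86-64 */

int main(void) {
    volatile uint64_t x = 0xdeadbeef;
    for (int sample = 0; sample < 10; sample++) {
        uint64_t start = __rdtsc();
        /* A burst of simple ALU work; a real probe chooses instructions
         * that contend for the same port the victim's code relies on. */
        for (int i = 0; i < 1000; i++) {
            x = x * 2862933555777941757ULL + 3037000493ULL;
        }
        uint64_t cycles = __rdtsc() - start;
        /* When the sibling logical core is busy on the same port, these
         * timings rise; the pattern of delays is what leaks information. */
        printf("burst %d: %llu cycles\n", sample, (unsigned long long)cycles);
    }
    return 0;
}
```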

Posted in AMD, Biz & IT, chips, CPUs, Intel, side-channel | Comments (0)

Final Fantasy 15 on PC: Has Square Enix lost its way, or do graphics really matter?

August 25th, 2017

In a tech demo that debuted at Nvidia’s GPU Technology Conference in May, famed Japanese developer Square Enix recreated a cinema-quality, computer-generated character inside a video game. Nyx Ulric, voiced by Aaron Paul in the CGI film Kingsglaive: Final Fantasy XV, had previously been confined to the silver screen, where the complexity of producing detailed computer graphics is offloaded to vast farms of computers one frame at a time (each taking hours to render), before 24 of them are pieced together to create a single second of film.

With top-of-the-line PC hardware from Nvidia (the server-grade Tesla V100, no less), Square Enix pulled character models and textures from the film and displayed them in real time using Luminous Studio Pro, the same engine that powers Final Fantasy XV on the Xbox One, PlayStation 4, and—with the upcoming release of Final Fantasy XV: Windows Edition in 2018—PC. Like any good tech demo, Kingsglaive is as impressive as it is impractical, featuring authentic modelling of hair, skin, leather, fur, and lighting that no PC or console on the market today can display (at least in 4K).

The Xbox One X, Microsoft’s “most powerful console in the world,” sports around six teraflops of processing power (FP32, for those technically inclined) to push graphics at 4K resolution—four times as many pixels as a typical 1080p HD television. The Kingsglaive tech demo requires over 12 teraflops of processing power, more than is found in Nvidia’s $1,000/£1,000 Titan Xp graphics card.
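
Those teraflops figures come from a simple formula: peak FP32 throughput is the number of shader units, times the clock speed, times two (a fused multiply-add counts as two floating-point operations per cycle). A quick sketch using the commonly quoted shader counts and clocks for these parts:

```c
#include <stdio.h>

/* Peak FP32 TFLOPS = shader units * clock (GHz) * 2 (FMA) / 1000. */
static double tflops(int shaders, double ghz) {
    return shaders * ghz * 2.0 / 1000.0;
}

int main(void) {
    printf("Xbox One X (2,560 shaders @ 1.172GHz): %.1f TFLOPS\n", tflops(2560, 1.172));
    printf("Titan Xp   (3,840 shaders @ 1.582GHz): %.1f TFLOPS\n", tflops(3840, 1.582));
    return 0;
}
```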

Posted in AMD, Final Fantasy VII Remake, final fantasy XV, Gaming & Culture, NVIDIA, PC gaming, Square Enix | Comments (0)

AMD Threadripper 1950X review: Better than Intel in almost every way

August 10th, 2017

With an orange and blue color scheme to boot…

If Ryzen was a polite, if firm way of telling the world that AMD is back in the processor game, then Threadripper is a foul-mouthed, middle-finger-waving, kick-in-the-crotch “screw you” aimed squarely at the usurious heart of Intel. It’s an olive branch to a part of the PC market stung by years of inflated prices, sluggish performance gains, and the feeling that, if you’re not interested in low-power laptops, Intel isn’t interested in you.

Where Intel charges $1,000/£1,000 for 10 cores and 20 threads in the form of the Core i9-7900X, AMD offers 16C/32T with Threadripper 1950X. Where Intel limits chipset features and PCIe lanes the further down the product stack you go—the latter being ever more important as storage moves away from the SATA interface—AMD offers quad-channel memory, eight DIMM slots, and 64 PCIe lanes even on the cheapest CPU for the platform.

Threadripper embraces the enthusiasts, the system builders, and the content creators who shout loud and complain often, but who evangelise products like no other. It’s the new home for extravagant multi-GPU setups and RAID arrays built on thousands of dollars’ worth of M.2 SSDs. It’s where performance records can be broken, and where content creators can shave precious minutes from laborious production tasks while still having more than enough horsepower left over to get their game on.

Posted in AMD, Ars Approved, CPUs, Gadgetology, Gaming & Culture, Intel, PC gaming, pc hardware, Ryzen, Tech, Threadripper, X299, X399 | Comments (0)

The external graphics dream is real: Sonnet eGFX Breakaway Box reviewed

August 3rd, 2017

The Sonnet eGFX Breakaway Box and Sapphire RX 580. (credit: Mark Walton)

Specs at a glance: Sonnet eGFX Breakaway Box
Power 350W Asaka AK-PS035AF01 SFX
Ports 1x PCIe 3.0 x16, 1x Thunderbolt 3
Size 18.5cm x 34.0cm x 20.2cm
Other perks 120mm Asaka fan
Price $300 (~£300, but TBC)

The external graphics card (or eGFX), long the pipe dream of laptop-toting gamers the world over, has finally come of age. Thanks to Thunderbolt 3—which offers up to 40Gbps of bandwidth, the equivalent of four PCIe 3.0 lanes—consumers finally have access to enough bandwidth in a universal standard to make eGFX a viable option.
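
That "four PCIe 3.0 lanes" comparison checks out roughly as follows. This is a sketch using nominal link rates only; Thunderbolt 3's 40Gbps is the total link rate and is shared with DisplayPort and other traffic:

```c
#include <stdio.h>

int main(void) {
    /* PCIe 3.0: 8 GT/s per lane with 128b/130b encoding. */
    double pcie3_lane_gbps = 8.0 * 128.0 / 130.0;  /* ~7.9 Gb/s usable per lane */
    double pcie3_x4_gbps   = 4 * pcie3_lane_gbps;  /* ~31.5 Gb/s for four lanes */
    printf("PCIe 3.0 x4:        ~%.1f Gb/s usable\n", pcie3_x4_gbps);
    printf("Thunderbolt 3 link:  %.0f Gb/s (shared with display and other traffic)\n", 40.0);
    return 0;
}
```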

So the theory goes, you can now take most laptops with a Thunderbolt 3 port, plug in a box containing a power supply and your GPU of choice, and enjoy better visuals and higher frame rates in games, and faster rendering in production tasks. You can even whack a PCIe video capture card or a production-ready audio interface in that external box, if you so wish.

Thus far the limiting factor, aside from some potential performance bottlenecks and driver support, has been price. The Razer Core, as beautifully designed as it is, costs a whopping £500/$500 without a graphics card—and that’s if it’s even in stock. Meanwhile, the Asus ROG XG Station 2—which is most certainly not beautifully designed—costs £400/$400. When paired with a decent graphics card like an Nvidia GTX 1070 or an AMD RX 580, a full eGFX setup runs just shy of £900/$900, not including the price of a laptop to pair it with.

Posted in AMD, eGFX, Features, Gadgetology, Gaming & Culture, GPU, NVIDIA, PC gaming, Tech | Comments (0)