Archive for the ‘AMD’ Category

Lenovo adds AMD Ryzen Pro-powered laptops to its ThinkPad family

May 8th, 2019

Lenovo is adding more choices to its beloved and iconic ThinkPad lineup this year: the new T495, T495s, and X395 laptops are all powered by AMD's Ryzen 7 Pro processors with integrated Vega graphics. With the same design and MIL-spec durability as their Intel-based counterparts, these new ThinkPads give customers the option to go with AMD without sacrificing what they love about the premium ThinkPad lineup.

The ThinkPad T495 and T495s models are 14-inch laptops, while the X395 measures in at 13 inches. They will look similar to the T490 and T490s Intel-based laptops announced last month because Lenovo essentially took the same frames and stuck AMD APUs inside. That means they all have MIL-spec tested designs and features like far-field mics for VoIP conferences, Lenovo's camera privacy shutter, and an optional PrivacyGuard screen filter.

The 14-inch displays on the T495 and T495s and the 13-inch display on the X395 will be FHD 1920×1080 panels with touch and non-touch options. They will also support AMD's FreeSync technology, which matches the display's refresh rate to the GPU's output to reduce tearing and stuttering.

Read 3 remaining paragraphs | Comments

Posted in AMD, Lenovo, ryzen pro, T495, T495s, Tech, ThinkPad, Windows, X395 | Comments (0)

Cray, AMD to build 1.5 exaflops supercomputer for US government

May 7th, 2019
AMD CEO Lisa Su, holding a Rome processor. The large chip in the middle is the 14nm I/O chip; around it are pairs of 7nm chiplets containing the CPU cores. (credit: AMD)

AMD and Cray have announced that they're building "Frontier," a new supercomputer for the Department of Energy at Oak Ridge National Laboratory. The goal is to deliver a system that can perform 1.5 exaflops: 1.5×10^18 floating point operations per second.

By way of comparison, a single Nvidia RTX 2080 GPU manages about 14 teraflops of 32-bit compute performance; Frontier will deliver roughly 100,000 times that. The fastest supercomputer on the Top 500 list weighs in at 200 petaflops, or 0.2 exaflops. As things stand, it would take the top 160 machines on the list combined to match Frontier's performance.
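
To sanity-check those comparisons, a couple of lines of arithmetic (using only the figures quoted above; this is back-of-the-envelope math, not a like-for-like benchmark) confirm the ratios:

```c
#include <stdio.h>

int main(void) {
    /* Figures from the article: FP32 throughput, as quoted. */
    double frontier_flops = 1.5e18;  /* 1.5 exaflops */
    double rtx2080_flops  = 14e12;   /* ~14 teraflops */
    double summit_flops   = 200e15;  /* 200 petaflops, current Top 500 leader */

    printf("Frontier vs. one RTX 2080:  %.0fx\n", frontier_flops / rtx2080_flops);
    printf("Frontier vs. Top 500 No. 1: %.1fx\n", frontier_flops / summit_flops);
    return 0;
}
```

That works out to about 107,000 times a single RTX 2080 and 7.5 times the current Top 500 leader.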

Frontier will use custom versions of AMD's Epyc processors (likely Zen 3 or Zen 4), each matched with four GPUs and connected to them with AMD's Infinity Fabric. Between nodes, Cray's Slingshot interconnect will provide transfer rates of up to 200Gb/s per port. The GPUs will have their own stacked HBM (High Bandwidth Memory). The whole system will be housed in 100 cabinets, taking up about 7,300 square feet of floor space, and power consumption will be 30-40MW.
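
Taking the announced figures at face value, the same sort of arithmetic yields rough density and efficiency numbers. The derived values below are my own estimates, not AMD or Cray specifications:

```c
#include <stdio.h>

int main(void) {
    double total_flops = 1.5e18;  /* 1.5 exaflops */
    int    cabinets    = 100;
    double floor_sqft  = 7300.0;
    double power_watts = 35e6;    /* midpoint of the quoted 30-40MW range */

    printf("Per cabinet: %.0f petaflops\n", total_flops / cabinets / 1e15);
    printf("Per sq. ft.: %.0f teraflops\n", total_flops / floor_sqft / 1e12);
    printf("Efficiency:  %.1f gigaflops per watt\n", total_flops / power_watts / 1e9);
    return 0;
}
```

That comes to roughly 15 petaflops per cabinet and around 43 gigaflops per watt, assuming the 35MW midpoint.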

Read 2 remaining paragraphs | Comments

Posted in AMD, Cray, Department of Energy, Nuclear, supercomputer, Tech | Comments (0)

HP Chromebook 14 review: One of the first AMD Chromebooks, tested

May 4th, 2019
(credit: Valentina Palladino)

AMD wants in on the Chromebook craze. A few OEMs, including HP, Acer, and Lenovo, announced AMD-powered Chromebooks at CES this year, and those devices are just starting to become available. Intel processors power most Chromebooks available today, but now individual customers and businesses will be able to choose from a small, but growing, pool of AMD-powered devices.

Unsurprisingly, HP's Chromebook 14 with AMD processors and integrated Radeon graphics appeals to the largest group in the Chromebook market—those who want a low-powered Chrome OS device for home or school use. Starting at $269, this Chromebook is not meant to compete with Google's Pixelbook or the fancier Chromebooks toward which professionals gravitate. Since the new Chromebook 14 borrows a lot from previous models, we tested it out to see the gains (if any) an AMD-powered Chromebook provides over Intel-powered devices.

Look and feel

Manufacturers have been elevating the look and feel of their Chrome OS devices for the past couple of years as the stripped-down operating system gained popularity outside of the education system. However, HP's Chromebook 14 is one of the most traditionally "Chromebook-y" Chromebooks I've ever used. It's a not-too-big, not-too-small plastic hunk that will fit into most family living rooms well enough. At about 3.5 pounds, it's not the lightest Chromebook ever, but it feels similar to other low-cost Chromebooks in thickness and weight. I do appreciate that HP made this machine fanless, allowing it to remain quiet even when running our most challenging benchmark tests.

Read 21 remaining paragraphs | Comments

Posted in Acer, AMD, Android, Chrome OS, Chromebook, chromebook 14, dell, HP, Lenovo, Tech | Comments (0)

AMD to launch new 7nm Navi GPU, Rome CPU in 3rd quarter

May 1st, 2019
AMD CEO Lisa Su, holding a Rome processor. The large chip in the middle is the 14nm I/O chip; around it are pairs of 7nm chiplets containing the CPU cores. (credit: AMD)

In its earnings call, AMD offered a little more detail about the launch of its next-generation processors, built using the Zen 2 architecture and TSMC's 7nm manufacturing process, and its new GPU architecture, Navi, also built on 7nm. Server-oriented Epyc-branded chips (codenamed Rome) should be shipping to customers in the third quarter of this year, as should Navi-based video cards.

In November last year, AMD outlined the details of the Zen 2 design. It makes a number of architectural improvements to shore up some of Zen's weaker areas; for example, it now has native 256-bit floating point units to handle AVX2 instructions, where the original Zen only had 128-bit units and had to split AVX2 workloads into pieces. But perhaps more significant is the new approach to building the processors. Zen used modules of four cores (handling eight threads), with two such modules per chip. Mainstream Ryzen processors used one chip; the enthusiast Threadripper range used two chips (first generation) or four chips (second generation); and the server-oriented Epyc range used four chips. Each die is a full processor, containing the cores, cache, memory controllers, PCIe and Infinity Fabric connections for I/O, integrated SATA and USB controllers, and so on.
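
To make the floating-point change concrete, here's a minimal C sketch (my own illustration, not AMD sample code) of the kind of 256-bit vector operation at issue. A single 256-bit add processes four doubles at once; Zen executes it by cracking it into two 128-bit halves, while Zen 2's widened FP units can handle the full width natively:

```c
#include <immintrin.h>  /* AVX intrinsics; compile with -mavx */
#include <stdio.h>

int main(void) {
    /* Four doubles per operand: one full 256-bit vector register each. */
    __m256d a = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);
    __m256d b = _mm256_set_pd(40.0, 30.0, 20.0, 10.0);

    /* One 256-bit add. Zen splits this into two 128-bit micro-ops;
       Zen 2's native 256-bit FP units issue it in a single pass. */
    __m256d sum = _mm256_add_pd(a, b);

    double out[4];
    _mm256_storeu_pd(out, sum);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

The instruction stream is identical on both architectures; the difference is purely in how the hardware executes it, which is why the wider units show up as better AVX2 throughput rather than as a new programming model.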

Zen 2 will continue to use multiple chips, but this time the chips will be more specialized. There will be 7nm chiplets, each containing CPU cores, cache, and Infinity Fabric links, and a 14nm I/O die containing memory controllers, Infinity Fabric connections, and SATA and USB controllers. The 7nm parts should be able to achieve higher clock speeds and lower power consumption than their 14nm predecessors. The parts on the I/O die, however, generally don't benefit from higher clock speeds. In fact, they can't: PCIe, USB, SATA, and even memory all need to run at predetermined speeds, because their performance is governed by the bus specification, so the extra performance headroom that 7nm would offer is wasted. By keeping these parts on 14nm, AMD is likely able to cut costs, because well-established 14nm manufacturing should be cheaper than the newer, more advanced 7nm process.

Read 2 remaining paragraphs | Comments

Posted in 7nm, AMD, CPU, epyc, GPU, graphics, hardware, processors, Rome, Tech, zen 2 | Comments (0)

Blackmagic eGPU Pro mini-review: Quiet, fast, and extremely expensive—like a Mac

April 26th, 2019

There are many criticisms of Apple's Mac products, but one of the most commonly cited is that they often don't have graphics power comparable to what you'd find in similarly priced Windows machines. And the company currently offers no desktop tower in which you could, say, slot two super-powerful gaming graphics cards.

Some of that could change if Apple moves to its own silicon on Macs or when it introduces a new Mac Pro. But for now, the company's official answer to this line of criticism is doubling down on external GPU support in macOS. Support for eGPUs began during the High Sierra cycle and was expanded in some helpful ways in last year's Mojave release.

In addition to providing software support for eGPUs, Apple has developed what is more or less its official eGPU solution, in much the same way that a couple of LG's monitors have been Apple's recommended external displays for a while now. The company did so by partnering with hardware-maker Blackmagic Design, an Australia-based company that specializes in products for video professionals. The first eGPU from Blackmagic included an AMD Radeon Pro 580 and was priced at $699. We reviewed it late last summer and found that, while it was quiet and easy to use and the GPU was a big upgrade over the integrated graphics in many Macs, we wished it offered a higher-end GPU option for creative professionals and hardcore gamers who needed more.

Read 25 remaining paragraphs | Comments

Posted in AMD, Blackmagic Design, Blackmagic eGPU, Blackmagic eGPU Pro, EGPU, Features, Mac, MacBook Pro, Tech | Comments (0)

Apple finally updates the iMac with significantly more powerful CPU and GPU options

March 19th, 2019

Today, Apple will finally begin taking orders for newly refreshed 21- and 27-inch iMacs. The new versions don't change the basic design or add major new features, but they offer substantially faster configuration options for the CPU and GPU.

The 21.5-inch iMac now has a 6-core, 8th-generation Intel CPU option, up from a maximum of four cores before. The 27-inch model now has six cores as the standard configuration, with an optional upgrade to a 3.6GHz, 9th-generation, 8-core Intel Core i9 CPU that Apple claims will double performance over the previous 27-inch iMac. The base 27-inch model has a 3GHz 6-core Intel Core i5 CPU, with intermediate configurations at 3.1GHz and 3.7GHz (both Core i5).

The big news is arguably that both sizes now offer high-end, workstation-class Vega graphics options for the first time. Apple added a similar upgrade option to the 15-inch MacBook Pro late last year.

Read 7 remaining paragraphs | Comments

Posted in all-in-one, AMD, apple, desktop, iMac, Intel, Tech, vega | Comments (0)

Google: Software is never going to be able to fix Spectre-type bugs

February 23rd, 2019
(credit: Aurich Lawson / Getty Images)

Researchers from Google investigating the scope and impact of the Spectre attack have published a paper asserting that Spectre-like vulnerabilities are likely to be a continued feature of processors and that software-based techniques for protecting against them will impose a high performance cost. Worse, the researchers continue, software alone will be inadequate: some Spectre flaws don't appear to have any effective software-based defense at all. As such, Spectre is going to be a continued feature of the computing landscape, with no straightforward resolution.

The discovery and development of the Meltdown and Spectre attacks was undoubtedly the big security story of 2018. First revealed last January, the attacks were followed by new variants and related discoveries throughout the rest of the year. Both attacks rely on discrepancies between the theoretical architectural behavior of a processor (the documented behavior that programmers depend on and write their programs against) and the real behavior of implementations.

Specifically, modern processors all perform speculative execution: they make assumptions about, for example, a value being read from memory or whether an if condition is true or false, and they allow their execution to run ahead based on those assumptions. If the assumptions are correct, the speculated results are kept; if they aren't, the speculated results are discarded and the processor redoes the calculation. Speculative execution is not an architectural feature of the processor; it's a feature of implementations, and so it's supposed to be entirely invisible to running programs. When the processor discards the bad speculation, it should be as if the speculation never even happened.
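
As a concrete illustration, here is the classic Spectre variant 1 ("bounds check bypass") pattern in C, a minimal sketch of the kind of gadget at issue (the array names and sizes are my own, following the convention used in the original Spectre paper):

```c
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];  /* probe array: one cache line per possible byte value */

uint8_t victim(size_t x) {
    /* Architecturally, an out-of-bounds x never reads array1. But the CPU may
       speculate that the bounds check passes, read array1[x] anyway, and fetch
       a line of array2 whose address depends on that (possibly secret) byte.
       The speculated result is discarded, yet the cache footprint remains,
       and an attacker can recover the byte by timing accesses to array2. */
    if (x < array1_size) {
        return array2[array1[x] * 4096];
    }
    return 0;
}

int main(void) {
    return victim(0);  /* an in-bounds call; the danger lies in how the CPU speculates */
}
```

Note that the code itself is perfectly correct as written; the leak comes entirely from the microarchitectural side effects of speculation, which is why purely software-level reasoning struggles to rule it out.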

Read 10 remaining paragraphs | Comments

Posted in AMD, google, Intel, meltdown, processors, security, Spectre, speculative execution, Tech, x86 | Comments (0)

AMD Radeon VII: A 7nm-long step in the right direction, but is that enough?

February 7th, 2019
Specs at a glance: AMD Radeon VII
Stream processors: 3,840
Texture units: 240
ROPs: 64
Core clock: 1,400MHz
Boost clock: 1,800MHz
Memory bus width: 4,096-bit
Memory bandwidth: 1,024GB/s
Memory size: 16GB HBM2
Outputs: 3x DisplayPort 1.4, 1x HDMI 2.0b
Release date: February 7, 2019
Price: $699 directly from AMD

In the world of computer graphics cards, AMD has been behind its only rival, Nvidia, for as long as we can remember. But a confluence of recent events finally left AMD with a sizable opportunity in the market.

Having established a serious lead with its 2016 and 2017 GTX graphics cards, Nvidia tried something completely different last year. Its RTX line of cards essentially arrived with near-equivalent power to its prior generation at the same price (along with a new, staggering $1,200 card in its "consumer" line). The catch was that these cards' new, proprietary cores were supposed to enable a few killer perks in higher-end graphics rendering. But that big bet faltered, largely because only one truly RTX-compatible retail game currently exists, and Nvidia took the unusual step of warning investors about this fact.

Meanwhile, AMD finally pulled off a holy-grail number for its graphics cards: 7nm. As in, a smaller fabrication process that packs more components onto a GPU's silicon, leaving room for other hardware and features (the Radeon VII's HBM2 RAM shares package space with the GPU). In the case of this week's AMD Radeon VII, which goes on sale today, February 7, for $699, that extra space is dedicated to a whopping 16GB of VRAM, well above the 11GB maximum of any consumer-grade Nvidia product. AMD also insists that the card's memory bandwidth has been beefed up enough to make that VII-specific perk valuable for any 3D application.

Read 25 remaining paragraphs | Comments

Posted in AMD, feature, Features, Gaming & Culture, Radeon, radeon vii | Comments (0)

AMD announces the $699 Radeon VII: 7nm Vega, coming February

January 9th, 2019
AMD Radeon VII. (credit: AMD)

AMD's next flagship video card will be the Radeon VII. The VII slots in above the RX Vega 64 and averages about 29 percent faster, putting it within spitting distance of Nvidia's RTX 2080.

The GPU inside the VII is called Vega 20, a die-shrunk version of the Vega 10 used in the Vega 64. The Vega 10 is built on GlobalFoundries' 14nm process; the Vega 20 is built on TSMC's 7nm process. The new process has enabled AMD to substantially boost the clock rate, from a peak of 1,564MHz in the Vega 64 to 1,800MHz in the VII. The new card's memory subsystem has also been upgraded: it's still HBM2, but now 16GB clocked at 2Gb/s per pin on a 4,096-bit bus, compared to 8GB clocked at 1.89Gb/s on a 2,048-bit bus. This gives a total of 1TB/s of memory bandwidth.
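
That bandwidth figure follows directly from bus width times per-pin data rate; a quick check with the numbers above (my own arithmetic, not vendor specifications):

```c
#include <stdio.h>

int main(void) {
    /* bandwidth (GB/s) = bus width (bits) * per-pin rate (Gb/s) / 8 */
    double vii_gbs    = 4096 * 2.0  / 8;  /* Radeon VII: 1,024 GB/s = 1 TB/s */
    double vega64_gbs = 2048 * 1.89 / 8;  /* Vega 64: ~484 GB/s */

    printf("Radeon VII: %.0f GB/s\n", vii_gbs);
    printf("Vega 64:    %.0f GB/s\n", vega64_gbs);
    return 0;
}
```

Doubling the bus width and slightly raising the per-pin rate is what more than doubles the card's memory bandwidth over its predecessor.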

The new chip has 128 ROPs to the old chip's 64, doubling the rate at which it can output rasterized pixels. However, it does fall behind its predecessor in one spec: it has only 60 compute units (3,840 stream processors) compared to 64 (4,096 stream processors).

Read 4 remaining paragraphs | Comments

Posted in AMD, Gaming & Culture, GPU, Tech | Comments (0)

These 12 FreeSync monitors will soon be compatible with Nvidia’s G-Sync

January 7th, 2019
An Nvidia graphic that depicts what G-Sync technology is like, I guess...

In addition to the entirely expected news about the upcoming RTX 2060, Nvidia's CES presentation this weekend included a surprise about its G-Sync display standard. That screen-tear- and input-lag-reducing technology will soon work with select monitors designed for VESA's competing DisplayPort Adaptive-Sync protocol (which is used in AMD's FreeSync monitors).

In announcing the move, Nvidia said the gaming experience on these variable refresh rate (VRR) monitors "can vary widely," so it has gone to the trouble of testing 400 different Adaptive-Sync monitors to see which ones are worthy of being certified as "G-Sync Compatible."

Of those 400 monitors tested, Nvidia says only 12 have met its standards so far. They are:

Read 5 remaining paragraphs | Comments

Posted in AMD, freesync, Gaming & Culture, gsync, NVIDIA | Comments (0)