Archive for the ‘AI’ Category

Move over AlphaGo: AlphaZero taught itself to play three different games

December 6th, 2018
Starting from random play and knowing just the game rules, AlphaZero defeated a world-champion program in each of the games of Go, chess, and shogi (Japanese chess). (credit: DeepMind Technologies, Ltd.)

Google's DeepMind—the group that brought you the champion game-playing AIs AlphaGo and AlphaGo Zero—is back with a new, improved, and more generalized version. Dubbed AlphaZero, this program taught itself to play three different board games (chess, Go, and shogi, a Japanese form of chess) in just three days, with no human intervention.

A paper describing the achievement was just published in Science. "Starting from totally random play, AlphaZero gradually learns what good play looks like and forms its own evaluations about the game," said Demis Hassabis, CEO and co-founder of DeepMind. "In that sense, it is free from the constraints of the way humans think about the game."
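The full system pairs Monte Carlo tree search with a deep neural network, which is well beyond a short snippet. But the core idea Hassabis describes—starting from random play and learning what good play looks like purely from game outcomes—can be sketched in miniature. The following toy example (single-pile Nim, with invented parameters, and tabular values standing in for the neural network) learns position values by self-play alone:

```python
import random

random.seed(0)

PILE = 10            # starting pile size for single-pile Nim
ALPHA, EPS = 0.1, 0.2

# V[n]: estimated win probability for the player to move with n stones left.
V = [0.5] * (PILE + 1)
V[0] = 0.0           # no stones remain: the player to move has already lost

def moves(n):
    return [k for k in (1, 2) if k <= n]

def choose(n):
    """Epsilon-greedy: usually pick the move leaving the opponent worst off."""
    if random.random() < EPS:
        return random.choice(moves(n))
    return min(moves(n), key=lambda k: V[n - k])

def play_episode():
    n, player, visited = PILE, 0, []
    while n > 0:
        visited.append((n, player))       # record who faced each position
        n -= choose(n)
        player ^= 1
    winner = player ^ 1                   # whoever took the last stone wins
    for state, mover in visited:          # Monte Carlo backup toward the result
        target = 1.0 if mover == winner else 0.0
        V[state] += ALPHA * (target - V[state])

for _ in range(20000):
    play_episode()

# In Nim, piles that are a multiple of 3 are theoretically lost for the
# player to move; the learned values come to reflect that.
print([round(v, 2) for v in V[1:7]])
```

Starting from uniform 0.5 values (pure ignorance), the self-play loop discovers on its own that positions like a pile of 3 are losing—no human knowledge beyond the rules is supplied, which is the same principle AlphaZero applies at vastly larger scale.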

Chess has long been an ideal testing ground for game-playing computers and the development of AI. One of the very first chess programs was written in the 1950s at Los Alamos, and in the late 1960s, Richard D. Greenblatt's Mac Hack VI program was the first to play in a human chess tournament—and to win against a human in tournament play. Many other computer chess programs followed, each a little better than the last, until IBM's Deep Blue computer defeated chess grandmaster Garry Kasparov in May 1997.

Posted in AI, alphago, AlphaZero, Artificial intelligence, Computer science, deep learning, deepmind, game theory, gaming, Gaming & Culture, neural networks, reinforcement learning, science | Comments (0)

More than an auto-pilot, AI charts its course in aviation

December 5th, 2018
Boeing 787 Dreamliner. (credit: Nicolas Economou/NurPhoto via Getty Images)

Welcome to Ars UNITE, our week-long virtual conference on the ways that innovation brings unusual pairings together. Each day this week from Wednesday through Friday, we're bringing you a pair of stories about facing the future. Today's focus is on AI in transportation—buckle up!

Ask anyone what they think of when the words "artificial intelligence" and aviation are combined, and the first thing they'll likely mention is drones. But autonomous aircraft are only a fraction of the impact that advances in machine learning and other artificial intelligence (AI) technologies will have in aviation—the technologies' reach could encompass nearly every aspect of the industry. Aircraft manufacturers and airlines are investing significant resources in AI applications that span from the flight deck to the customer's experience.

Automated systems have been part of commercial aviation for years. Thanks to the adoption of "fly-by-wire" controls and automated flight systems, machine learning and AI technology are moving into a crew-member role in the cockpit. Rather than simply reducing pilots' workload, these systems are on the verge of becoming what amounts to another co-pilot. For example, systems originally developed for unmanned aerial vehicle (UAV) safety—such as Automatic Dependent Surveillance–Broadcast (ADS-B) for traffic situational awareness—have migrated into manned aircraft cockpits. And emerging systems like the Maneuvering Characteristics Augmentation System (MCAS) are being developed to increase safety when there's a need to compensate for an aircraft's handling characteristics: they use sensor data to adjust the aircraft's control surfaces automatically, based on flight conditions.
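To be clear, this is not how any real flight-control law is implemented—actual systems involve redundant sensors, filtering, and extensive certification. But the basic "sensor reading in, bounded control command out" pattern can be illustrated with a deliberately simplified proportional-control sketch, in which every threshold, gain, and limit is invented:

```python
def trim_command(aoa_deg, aoa_limit_deg=12.0, gain_deg_per_deg=0.5, max_cmd_deg=2.5):
    """Toy proportional controller: command nose-down stabilizer trim (in
    degrees) once the measured angle of attack exceeds a limit.

    All numbers here are invented for illustration and bear no relation
    to any real aircraft system.
    """
    error = aoa_deg - aoa_limit_deg
    if error <= 0:
        return 0.0                          # within limits: leave the surfaces alone
    # Command grows with the exceedance, capped at an authority limit.
    return min(gain_deg_per_deg * error, max_cmd_deg)

# Below the limit, at a moderate exceedance, and at saturation:
print(trim_command(10.0), trim_command(14.0), trim_command(20.0))  # 0.0 1.0 2.5
```

The cap on command authority is the key design choice in sketches like this: an automatic system that adjusts control surfaces is normally given only a bounded slice of control so the crew retains final authority.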

Posted in AI, analytics, ars-unite-2018, Artificial intelligence, aviation, Biz & IT, civil aviation, Features, fly-by-wire, machine learning | Comments (0)

Apple published a surprising amount of detail about how the HomePod works

December 3rd, 2018
Siri on Apple's HomePod speaker. (credit: Jeff Dunn)

Today, Apple published a long and informative blog post by its audio software engineering and speech teams about how they use machine learning to make Siri responsive on the HomePod, and it reveals a lot about why Apple has made machine learning such a focus of late.

The post discusses working in a far-field setting, where users may call on Siri from anywhere in the room relative to the HomePod. The premise is that this makes Siri harder to get right on the HomePod than on the iPhone; on top of that, the device must contend with loud music playing back from its own speakers.

Apple addresses these issues with multiple microphones combined with machine learning methods.
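Apple's post details its actual multichannel pipeline; purely as an illustration of why multiple microphones help in a far-field setting, here is a minimal delay-and-sum sketch. Everything in it is synthetic and assumed: a sine wave stands in for speech, the inter-microphone delay is a known integer number of samples, and the noise levels are invented.

```python
import math
import random

random.seed(1)
FS, N = 16000, 1600        # sample rate and signal length (0.1 s)
DELAY = 7                  # extra samples of travel time to the second mic (assumed known)

# A pure tone stands in for the talker's speech.
speech = [math.sin(2 * math.pi * 440 * t / FS) for t in range(N)]

def mic(shift):
    """Simulate one microphone: delayed speech plus independent noise."""
    return [(speech[t - shift] if t >= shift else 0.0) + random.gauss(0, 0.8)
            for t in range(N)]

mic1, mic2 = mic(0), mic(DELAY)

# Delay-and-sum beamforming: advance the second channel by the known
# delay so the speech lines up, then average the channels.
aligned2 = mic2[DELAY:] + [0.0] * DELAY
beam = [(a + b) / 2 for a, b in zip(mic1, aligned2)]

def noise_power(sig):
    """Mean squared deviation from the clean speech signal."""
    return sum((s - c) ** 2 for s, c in zip(sig, speech)) / N

print(noise_power(beam) < noise_power(mic1))  # True: averaging halves the noise power
```

Because the speech is coherent across microphones after alignment while the noise is not, averaging roughly halves the noise power per extra channel—the basic reason multi-mic arrays improve far-field "Hey Siri" recognition. Real systems additionally estimate the delays (the talker's direction) rather than assuming them, and cancel the device's own music playback with echo cancellation.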

Posted in AI, apple, audio, HomePod, machine learning, Tech | Comments (0)

AIs trained to help with sepsis treatment, fracture diagnosis

October 27th, 2018
Image of a wrist X-ray. (credit: Bo Mertz)

Treating patients effectively involves a combination of training and experience. That's one of the reasons that people have been excited about the prospects of using AI in medicine: it's possible to train algorithms using the experience of thousands of doctors, giving them more information than any single human could accumulate.

This week has provided some indications that software may be on the verge of living up to that promise, as two papers describe excellent preliminary results with using AI for both diagnosis and treatment decisions. The papers involve very different problems and approaches, which suggests that the range of situations where AI could prove useful is very broad.

Choosing treatments

One of the two studies focuses on sepsis, which occurs when the immune system mounts an excessive response to an infection. Sepsis is apparently the third leading cause of death worldwide, and it remains a problem even when the patient is already hospitalized. There are guidelines available for treating sepsis patients, but the numbers suggest there's still considerable room for improvement. So a small UK-US team decided to see if software could help provide some of that improvement.

Posted in AI, Computer science, deep learning, diagnosis, medicine, science, sepsis, treatment | Comments (0)

Nvidia and Remedy use neural networks for eerily good facial animation

August 1st, 2017

Remedy, the developer behind the likes of Alan Wake and Quantum Break, has teamed up with GPU-maker Nvidia to streamline one of the more costly parts of modern games development: motion capture and animation. As showcased at SIGGRAPH, by using a deep-learning neural network—run on Nvidia's costly eight-GPU DGX-1 server, naturally—Remedy was able to feed in videos of actors performing lines, from which the network generated surprisingly sophisticated 3D facial animation. This, according to Remedy and Nvidia, removes the hours of "labour-intensive data conversion and touch-ups" typically associated with traditional motion-capture animation.

Aside from cost, facial animation, even when motion captured, rarely reaches the same level of fidelity as other animation. The odd, lifeless look seen in even the biggest blockbuster games is often down to the limits of facial animation. Nvidia and Remedy believe their neural network solution can produce results as good as, if not better than, those produced by traditional techniques. It's even possible to skip the video altogether and feed the neural network a mere audio clip, from which it can produce an animation based on what it has learned.

The neural network is first fed a “high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations,” which essentially means feeding it information on prior animations Remedy has created. The network is said to require only five to 10 minutes of footage before it’s able to produce animations based on simple monocular video capture of actors. Compared to results from state-of-the-art monocular and real-time facial capture techniques, the fully automated neural network produces eerily good results, with far less input required from animators.
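The specifics above are Remedy and Nvidia's. Purely as an illustration of the general recipe—learn a mapping from per-frame input features to animation parameters using example pairs from an existing pipeline—here is a toy sketch. The feature names, dimensions, and the linear model are all invented stand-ins for the real deep network:

```python
import random

random.seed(0)

# Invented "hidden" mapping from three per-frame audio/video features
# (say loudness, pitch, spectral tilt) to two facial animation weights
# (say jaw-open, lip-round). It plays the role of the existing
# artist-quality pipeline that supplies training targets.
TRUE_W = [[0.9, 0.1, 0.0],
          [0.0, 0.2, 0.7]]

def apply(W, x):
    """Linear map: one output per row of W."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Synthetic "captured footage": feature vectors paired with the
# animation weights the reference pipeline would produce for them.
data = [[random.random() for _ in range(3)] for _ in range(200)]
data = [(x, apply(TRUE_W, x)) for x in data]

# Train a fresh model from scratch with per-sample gradient descent.
W = [[0.0] * 3 for _ in range(2)]
LR = 0.1
for _ in range(300):
    for x, y in data:
        pred = apply(W, x)
        for i in range(2):
            err = pred[i] - y[i]
            for j in range(3):
                W[i][j] -= LR * err * x[j]   # gradient step on squared error
```

After training, `W` closely matches the hidden mapping, so the model can generate plausible animation weights for new footage it has never seen—the same supervised-learning structure, minus the deep network, multi-view tracking, and 3D output that make the real system work.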

Posted in AI, AMD, deep learning, game development, Gaming & Culture, neural networks, NVIDIA, Tech | Comments (0)

Listen up: is this really who you think it is talking?

May 5th, 2017

Lyrebird, an AI startup, can produce uncannily good versions of real people’s voices. What does it mean for identity fraud?

Posted in AI, fake news, identity theft, Lyrebird, neural networks, University of Montreal, voice synthesis | Comments (0)

MWC: Completely superfluous ‘AI’ added to consumer items

February 28th, 2017

Manufacturers have moved on from just putting devices online, adding AI and machine learning to consumer items that would do perfectly well without them

Posted in AI, Artificial intelligence, Barcelona, IoT, machine learning, Mobile World Congress, MWC, Olay | Comments (0)

Artificial intelligence can be used to predict your death – but is it secure?

January 18th, 2017

Researchers say this is the first study to use AI to predict heart disease outcomes

Posted in AI, Artificial intelligence, heart failure, Imperial College London, MRI, Security threats | Comments (0)

How far can we go – and should we go – with robots?

January 13th, 2017

European lawmakers are preparing to vote on how we should govern artificial intelligence and robots

Posted in AI, robots | Comments (0)

Taking a ride in Nvidia’s self-driving car

January 7th, 2017

Sitting in the passenger seat of a car affectionately known at Nvidia as “BB8” is an oddly terrifying experience. Between me and the driver’s seat is a centre panel covered in touchscreens detailing readings from the numerous cameras and sensors placed around the car, and a large red button helpfully labelled “stop.”

As BB8 pulls away to take me on a short ride around a dedicated test track on the north side of the Las Vegas convention centre—with no-one in the driver’s seat—it’s hard to resist keeping a hand hovering over that big red button. After all, it’s not every day that you consciously put your life in the hands of a computer.

The steering wheel jerks and turns as BB8 sweeps around a corner at a cool 10 miles per hour, neatly avoiding a set of traffic cones while remaining within the freshly painted white lines of the makeshift circuit. After three smooth laps, two Nvidia employees wheel an obstacle—a large orange panel—into the middle of the track, which BB8 deftly avoids.

Posted in AI, Cars Technica, CES, CES 2017, NVIDIA, self driving cars | Comments (0)