Archive for the ‘AI’ Category

Facebook is working on an AI voice assistant similar to Alexa, Google Assistant

April 18th, 2019
Facebook's Portal+ smart display. Along with video chatting through Facebook Messenger, both Portal devices have built-in Amazon Alexa. (credit: Facebook)

Facebook is working on developing an AI voice assistant similar in functionality to Amazon Alexa, Google Assistant, or Siri, according to a report from CNBC and a later statement from a Facebook representative.

The CNBC report, which cites "several people familiar with the matter," says the project has been ongoing since early 2018 in the company's offices in Redmond, Washington. The endeavor is led by Ira Snyder, whose listed title on LinkedIn is "Director, AR/VR and Facebook Assistant at Facebook." Facebook Assistant may be the name of the project. CNBC writes that Facebook has been reaching out to vendors in the smart-speaker supply chain, suggesting that Portal may only be the first of many smart devices the company makes.

When contacted for comment, Facebook sent a statement to Reuters, The Verge, and others, saying: "We are working to develop voice and AI assistant technologies that may work across our family of AR/VR products including Portal, Oculus, and future products."

Posted in AI, alexa, Facebook, Facebook M, Facebook Portal, Google Assistant, Siri, Smart Speaker, Tech, voice assistant | Comments (0)

Researchers, scared by their own work, hold back “deepfakes for text” AI

February 15th, 2019
This is fine.

OpenAI, a non-profit research company investigating "the path to safe artificial intelligence," has developed a machine learning system called Generative Pre-trained Transformer-2 (GPT-2), capable of generating text based on brief writing prompts. The result comes so close to mimicking human writing that it could potentially be used for "deepfake" content. Trained on 40 gigabytes of text retrieved from sources on the Internet (including "all outbound links from Reddit, a social media platform, which received at least 3 karma"), GPT-2 generates plausible "news" stories and other text that match the style and content of a brief text prompt.

The system's performance was so disconcerting that the researchers are releasing only a reduced version of GPT-2, based on a much smaller text corpus. In a blog post on the project and this decision, researchers Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever wrote:

Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. Nearly a year ago we wrote in the OpenAI Charter: “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.
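
For readers who want to experiment with that smaller released model, sampling from it takes only a few lines of Python. The sketch below assumes the third-party Hugging Face transformers library and its "gpt2" checkpoint, which wraps the publicly released weights; it is not OpenAI's own code, and the prompt and sampling settings are arbitrary.

```python
# A minimal sampling sketch, assuming the third-party Hugging Face
# "transformers" library and its "gpt2" checkpoint (the publicly released
# smaller model). This is not OpenAI's own training or sampling code.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling keeps continuations varied but plausible.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```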

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal "mafia": Elon Musk; Peter Thiel; Jessica Livingston and Sam Altman of Y Combinator; former PayPal COO and LinkedIn co-founder Reid Hoffman; and former Stripe Chief Technology Officer Greg Brockman. Brockman now serves as OpenAI's CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology, ideally moving it away from potentially harmful applications.

Posted in AI, artificial intelligence, Biz & IT, computer-generated text, deep fake, deepfake, fake news, machine learning, Markov chain | Comments (0)

An AI crushed two human pros at StarCraft—but it wasn’t a fair fight

January 30th, 2019
Two groups of Stalkers controlled by the AI AlphaStar approach the army of Grzegorz "MaNa" Komincz in the decisive battle of the pair's fourth game. (credit: DeepMind)

DeepMind, the AI startup Google acquired in 2014, is probably best known for creating the first AI to beat a world champion at Go. So what do you do after mastering one of the world's most challenging board games? You tackle a complex video game. Specifically, DeepMind decided to write an AI to play the real-time strategy game StarCraft II.

StarCraft requires players to gather resources, build dozens of military units, and use them to try to destroy their opponents. StarCraft is particularly challenging for an AI because players must carry out long-term plans over several minutes of gameplay, tweaking them on the fly in the face of enemy counterattacks. DeepMind says that prior to its own effort, no one had come close to designing a StarCraft AI as good as the best human players.

Last Thursday, DeepMind announced a significant breakthrough. The company pitted its AI, dubbed AlphaStar, against two top StarCraft players—Dario "TLO" Wünsch and Grzegorz "MaNa" Komincz. AlphaStar won a five-game series against Wünsch 5-0, then beat Komincz 5-0, too.

Posted in AI, AlphaStar, deep learning, deepmind, Gaming & Culture, starcraft, StarCraft II | Comments (0)

Yes, “algorithms” can be biased. Here’s why

January 24th, 2019
Seriously, it's enough to make researchers cry. (credit: Getty | Peter M Fisher)

Dr. Steve Bellovin is a professor of computer science at Columbia University, where he researches "networks, security, and why the two don't get along." He is the author of Thinking Security and the co-author of Firewalls and Internet Security: Repelling the Wily Hacker. The opinions expressed in this piece do not necessarily represent those of Ars Technica.

Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition "algorithms" (and by extension all "algorithms") "always have these racial inequities that get translated" and that "those algorithms are still pegged to basic human assumptions. They're just automated assumptions. And if you don't fix the bias, then you are just automating the bias."

She was mocked for this claim on the grounds that "algorithms" are "driven by math" and thus can't be biased—but she's basically right. Let's take a look at why.
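
A toy example makes the mechanism concrete. In the hypothetical sketch below (synthetic data, scikit-learn assumed), historical approval decisions penalized one group; a model trained on those decisions faithfully learns, and thus automates, the same penalty:

```python
# Hypothetical illustration: a classifier trained on biased historical
# decisions learns to automate that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)    # a protected attribute (0 or 1)
skill = rng.normal(0, 1, n)      # the quality we actually care about

# Historical decisions were biased: group 1 needed higher skill for approval.
approved = (skill > np.where(group == 1, 0.5, -0.5)).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), approved)

# The trained model reproduces the historical penalty for group 1.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"P(approve | average skill, group={g}) = {p:.2f}")
```

Nothing in the math is "biased"; the bias enters through the training labels, which is exactly the point.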

Posted in AI, algorithms, machine learning, ML, Policy | Comments (0)

AI can diagnose some genetic disorders using photos of faces

January 11th, 2019
(credit: Monty Rakusen)

Genomes are so five minutes ago. Personalized medicine is all about phenomes now.

OK, that's an exaggeration. But plenty of genetic disorders do result in distinctive facial phenotypes (Down syndrome is probably the best-known example). Many of these disorders are quite rare and thus not easily recognized by clinicians. That lack of familiarity can force patients with these disorders (and their parents) to endure a long and traumatic diagnostic odyssey before they figure out what ails them. And while they may be uncommon individually, in aggregate these rare disorders are not that rare: they affect eight percent of the population.

FDNA is a genomics/AI company that aims to “capture, structure and analyze complex human physiological data to produce actionable genomic insights.” They’ve made a facial-image-analysis framework, called DeepGestalt, that can diagnose genetic conditions based on facial images with a higher accuracy than doctors can. Results are published in Nature Medicine.
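
FDNA has not published DeepGestalt's code, but the general shape of such a system, a convolutional network fine-tuned to map a face image to scores over candidate syndromes, can be sketched. Everything below (the backbone, the class count, the function names) is an assumption for illustration, not FDNA's implementation:

```python
# Illustrative only: DeepGestalt is proprietary. This sketches the general
# shape of such a system: a pretrained CNN fine-tuned to score a face image
# against candidate syndromes. Backbone and class count are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_SYNDROMES = 200  # hypothetical number of candidate disorders

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_SYNDROMES)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def rank_syndromes(face_image, top_k=10):
    """Return the top-k syndrome indices for a PIL face image."""
    x = preprocess(face_image).unsqueeze(0)
    with torch.no_grad():
        scores = model(x).softmax(dim=1)
    return scores.topk(top_k).indices.squeeze(0).tolist()
```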

Posted in AI, Artificial intelligence, diagnostics, genetic disorders, science | Comments (0)

Move over AlphaGo: AlphaZero taught itself to play three different games

December 6th, 2018
Starting from random play and knowing just the game rules, AlphaZero defeated a world-champion program in the games of Go, chess, and shogi (Japanese chess). (credit: DeepMind Technologies, Ltd.)

Google's DeepMind—the group that brought you the champion game-playing AIs AlphaGo and AlphaGo Zero—is back with a new, improved, and more generalized version. Dubbed AlphaZero, this program taught itself to play three different board games (chess, Go, and shogi, a Japanese form of chess) in just three days, with no human intervention.

A paper describing the achievement was just published in Science. "Starting from totally random play, AlphaZero gradually learns what good play looks like and forms its own evaluations about the game," said Demis Hassabis, CEO and co-founder of DeepMind. "In that sense, it is free from the constraints of the way humans think about the game."
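
The training recipe the paper describes boils down to a self-play loop: play games guided by the current network, then train the network on the resulting positions and outcomes. The sketch below is a heavily simplified outline of that loop; initial_state, run_mcts, sample_move, and net.update are hypothetical stubs standing in for the game rules, the Monte Carlo tree search, and the network update:

```python
# A heavily simplified outline of AlphaZero-style self-play training.
# initial_state, run_mcts, sample_move, and net.update are hypothetical
# stubs; a real implementation supplies the game rules, MCTS, and network.
def self_play_game(net, mcts_simulations=800):
    """Play one game from scratch, returning (features, policy, outcome) triples."""
    examples, state = [], initial_state()
    while not state.is_terminal():
        pi = run_mcts(state, net, mcts_simulations)  # visit-count policy
        examples.append((state.features(), pi))
        state = state.play(sample_move(pi))
    z = state.outcome()  # +1, 0, or -1 from the first player's view
    return [(s, pi, z) for (s, pi) in examples]

def train(net, iterations=1000, games_per_iteration=100):
    for _ in range(iterations):
        data = []
        for _ in range(games_per_iteration):
            data.extend(self_play_game(net))
        # Loss combines cross-entropy against the MCTS policy with
        # mean-squared error against the final game outcome.
        net.update(data)
```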

Chess has long been an ideal testing ground for game-playing computers and the development of AI. The very first chess computer program was written in the 1950s at Los Alamos National Laboratory, and in the late 1960s, Richard D. Greenblatt's Mac Hack VI program was the first to play in a human chess tournament—and to win against a human in tournament play. Many other computer chess programs followed, each a little better than the last, until IBM's Deep Blue computer defeated chess grandmaster Garry Kasparov in May 1997.

Posted in AI, alphago, AlphaZero, Artificial intelligence, Computer science, deep learning, deepmind, game theory, gaming, Gaming & Culture, neural networks, reinforcement learning, science | Comments (0)

More than an auto-pilot, AI charts its course in aviation

December 5th, 2018
Boeing 787 Dreamliner. (credit: Nicolas Economou/NurPhoto via Getty Images)

Welcome to Ars UNITE, our week-long virtual conference on the ways that innovation brings unusual pairings together. Each day this week from Wednesday through Friday, we're bringing you a pair of stories about facing the future. Today's focus is on AI in transportation—buckle up!

Ask anyone what they think of when the words "artificial intelligence" and "aviation" are combined, and it's likely the first thing they'll mention is drones. But autonomous aircraft are only a fraction of the impact that advances in machine learning and other artificial intelligence (AI) technologies will have in aviation—the technologies' reach could encompass nearly every aspect of the industry. Aircraft manufacturers and airlines are investing significant resources in AI technologies in applications that span from the flight deck to the customer's experience.

Automated systems have been part of commercial aviation for years. Thanks to the adoption of "fly-by-wire" controls and automated flight systems, machine learning and AI technology are moving into a crew-member role in the cockpit. Rather than simply reducing the workload on pilots, these systems are on the verge of becoming what amounts to another co-pilot. For example, systems originally developed for unmanned aerial vehicle (UAV) safety—such as Automatic Dependent Surveillance Broadcast (ADS-B) for traffic situational awareness—have migrated into manned aircraft cockpits. And emerging systems like the Maneuvering Characteristics Augmentation System (MCAS) are being developed to increase safety when there's a need to compensate for aircraft handling characteristics. They use sensor data to adjust the control surfaces of an aircraft automatically, based on flight conditions.
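
As a concrete, if deliberately toy, illustration of that last idea, the sketch below applies a proportional response to a hypothetical angle-of-attack reading. It is not MCAS or any certified avionics logic; the names, limits, and gains are invented:

```python
# Toy illustration of sensor-driven automatic trim: a proportional response
# to a hypothetical angle-of-attack (AoA) reading. Not MCAS or any certified
# avionics logic; all names, limits, and gains are invented.
def trim_command(aoa_deg, aoa_limit_deg=12.0, gain=0.25, max_cmd_deg=2.5):
    """Return a nose-down trim command (degrees) when AoA exceeds the limit."""
    error = aoa_deg - aoa_limit_deg
    if error <= 0:
        return 0.0                          # within limits: no intervention
    return min(gain * error, max_cmd_deg)   # proportional response, clamped

print(trim_command(16.0))  # at 16 degrees AoA: 1.0 degree of nose-down trim
```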

Posted in AI, analytics, ars-unite-2018, Artificial intelligence, aviation, Biz & IT, civil aviation, Features, fly-by-wire, machine learning | Comments (0)

Apple published a surprising amount of detail about how the HomePod works

December 3rd, 2018
Siri on Apple's HomePod speaker. (credit: Jeff Dunn)

Today, Apple published a long and informative blog post by its audio software engineering and speech teams about how they use machine learning to make Siri responsive on the HomePod, and it reveals a lot about why Apple has made machine learning such a focus of late.

The post discusses working in a far-field setting, where users may call on Siri from any number of locations around the room relative to the HomePod's location. The premise is essentially that making Siri work on the HomePod is harder than on the iPhone for that reason, and because the device must compete with loud music playback from itself.

Apple addresses these issues with multiple microphones combined with machine learning methods.
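
Apple's post enumerates its specific techniques; as a generic illustration of the multi-microphone half of the problem, the sketch below implements a delay-and-sum beamformer, which aligns and averages the microphone signals to favor a talker in a known direction. The geometry, sample rate, and function names are invented, and this is not Apple's implementation:

```python
# Generic far-field building block: a delay-and-sum beamformer. Aligns and
# averages microphone signals to favor a talker in a known direction.
# Invented geometry and sample rate; not Apple's implementation.
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs=16000, c=343.0):
    """signals: (n_mics, n_samples) array; mic_positions: (n_mics, 3) meters;
    direction: unit 3-vector toward the talker; fs: sample rate in Hz."""
    delays = mic_positions @ direction / c            # per-mic delay, seconds
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([s[k:k + n] for s, k in zip(signals, shifts)])
    return aligned.mean(axis=0)  # on-axis speech adds up; off-axis noise averages down
```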

Posted in AI, apple, audio, HomePod, machine learning, Tech | Comments (0)

AIs trained to help with sepsis treatment, fracture diagnosis

October 27th, 2018
A wrist X-ray. (credit: Bo Mertz)

Treating patients effectively involves a combination of training and experience. That's one of the reasons that people have been excited about the prospects of using AI in medicine: it's possible to train algorithms using the experience of thousands of doctors, giving them more information than any single human could accumulate.

This week has provided some indications that software may be on the verge of living up to that promise, as two papers describe excellent preliminary results with using AI for both diagnosis and treatment decisions. The papers involve very different problems and approaches, which suggests that the range of situations where AI could prove useful is very broad.

Choosing treatments

One of the two studies focuses on sepsis, which occurs when the immune system mounts an excessive response to an infection. Sepsis is apparently the third leading cause of death worldwide, and it remains a problem even when the patient is already hospitalized. There are guidelines available for treating sepsis patients, but the numbers suggest there's still considerable room for improvement. So a small UK-US team decided to see if software could help provide some of that improvement.
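
The excerpt doesn't spell out the method, but learning a treatment policy from recorded patient trajectories is commonly framed as a reinforcement learning problem. The tabular Q-learning sketch below is purely illustrative; the states, actions, and rewards are hypothetical discretizations, not the study's actual setup:

```python
# Purely illustrative: a tabular Q-learning sketch for learning a treatment
# policy from recorded trajectories. States, actions, and rewards here are
# hypothetical discretizations, not the study's actual setup.
import numpy as np

N_STATES, N_ACTIONS = 50, 5   # e.g. binned vital signs x dosing levels
Q = np.zeros((N_STATES, N_ACTIONS))
ALPHA, GAMMA = 0.1, 0.99      # learning rate and discount factor

def update(trajectory):
    """trajectory: (state, action, reward, next_state) tuples from records;
    next_state is None at the end (reward might encode survival)."""
    for s, a, r, s_next in trajectory:
        target = r if s_next is None else r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (target - Q[s, a])

def recommend(state):
    """Return the highest-value dosing action for a discretized state."""
    return int(Q[state].argmax())
```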

Posted in AI, Computer science, deep learning, diagnosis, medicine, science, sepsis, treatment | Comments (0)

Nvidia and Remedy use neural networks for eerily good facial animation

August 1st, 2017

Remedy, the developer behind the likes of Alan Wake and Quantum Break, has teamed up with GPU-maker Nvidia to streamline one of the more costly parts of modern games development: motion capture and animation. As showcased at Siggraph, by using a deep learning neural network—run on Nvidia's costly eight-GPU DGX-1 server, naturally—Remedy was able to feed in videos of actors performing lines, from which the network generated surprisingly sophisticated 3D facial animation. This, according to Remedy and Nvidia, removes the hours of "labour-intensive data conversion and touch-ups" typically associated with traditional motion capture animation.

Aside from cost, facial animation, even when motion captured, rarely reaches the same level of fidelity as other animation. That odd, lifeless look seen in even the biggest blockbuster games often comes down to the limits of facial animation. Nvidia and Remedy believe their neural network solution is capable of producing results as good as, if not better than, those produced by traditional techniques. It's even possible to skip the video altogether and feed the neural network a mere audio clip, from which it's able to produce an animation based on prior results.

The neural network is first fed a “high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations,” which essentially means feeding it information on prior animations Remedy has created. The network is said to require only five to 10 minutes of footage before it’s able to produce animations based on simple monocular video capture of actors. Compared to results from state-of-the-art monocular and real-time facial capture techniques, the fully automated neural network produces eerily good results, with far less input required from animators.
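
Remedy's pipeline itself is proprietary, but the audio-driven idea, regressing facial animation parameters from a short window of audio features, can be sketched. The architecture and dimensions below are invented for illustration and are not the Remedy/Nvidia network:

```python
# Illustrative only: the Remedy/Nvidia network is not public. This sketches
# the audio-driven idea: regress facial animation parameters (for example,
# blendshape weights) from a window of audio features. Dimensions invented.
import torch
import torch.nn as nn

class AudioToFace(nn.Module):
    def __init__(self, n_mels=80, window=16, n_blendshapes=51):
        super().__init__()
        self.conv = nn.Sequential(                  # summarize the audio window
            nn.Conv1d(n_mels, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128, n_blendshapes)   # one pose per window

    def forward(self, mel):                         # mel: (batch, n_mels, window)
        return self.head(self.conv(mel).squeeze(-1))

# Training would pair audio windows with captured animation frames and
# minimize the error between predicted and captured parameters.
```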

Posted in AI, AMD, deep learning, game development, Gaming & Culture, neural networks, NVIDIA, Tech | Comments (0)