Archive for the ‘AI’ Category

Facebook AI Pluribus defeats top poker professionals in 6-player Texas Hold ‘em

July 11th, 2019

This video shows sample hands from Pluribus' experiment against professional poker players. Cards are turned face up to make it easier to see Pluribus' strategy. Courtesy of Carnegie Mellon University.

Poker-playing AIs typically perform well against human opponents when the play is limited to just two players. Now Carnegie Mellon University and Facebook AI research scientists have raised the bar even further with an AI dubbed Pluribus, which took on 15 professional human players in six-player no-limit Texas Hold 'em and won. The researchers describe how they achieved this feat in a new paper in Science.

Playing more than 5,000 hands each time, five copies of the AI took on two top professional players: Chris "Jesus" Ferguson, six-time winner of World Series of Poker events, and Darren Elias, who currently holds the record for most World Poker Tour titles. Pluribus defeated them both. It did the same in a second experiment, in which Pluribus played five pros at a time, from a pool of 13 human players, for 10,000 hands.

Co-author Tuomas Sandholm of Carnegie Mellon University has been grappling with the unique challenges poker poses for AI for the last 16 years. No-Limit Texas Hold 'em is a so-called "imperfect information" game, since there are hidden cards (held by one's opponents in the hand) and no restrictions on the size of the bet one can make. By contrast, in chess and Go, the state of the board and all the pieces is known to all players. Poker players can (and do) bluff on occasion, so it's also a game of misleading information.
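
To make the "imperfect information" point concrete, here is a minimal sketch (illustrative only, not from the Pluribus paper) of the extra work hidden cards create: rather than evaluating one known position the way a chess engine can, a poker player has to reason over every holding the opponent might have. The deck encoding and the pocket-pair question below are assumptions chosen for brevity.

```python
from itertools import combinations

# Build a 52-card deck as (rank, suit) tuples; ranks 2-14, suits 0-3.
deck = [(rank, suit) for rank in range(2, 15) for suit in range(4)]

# Our hole cards are the only private information we actually know.
our_hole = [(14, 0), (14, 1)]  # pocket aces (illustrative)
remaining = [c for c in deck if c not in our_hole]

# In a perfect-information game there would be ONE opponent state to
# evaluate. With hidden cards we must enumerate all of them: every
# 2-card combination the opponent could be holding.
opponent_hands = list(combinations(remaining, 2))

# Example of reasoning over that distribution: how often does the
# opponent hold a pocket pair?
pairs = sum(1 for a, b in opponent_hands if a[0] == b[0])
print(f"{len(opponent_hands)} possible opponent holdings")  # 1225
print(f"P(opponent has a pocket pair) = {pairs / len(opponent_hands):.3f}")
```

Pluribus itself goes far beyond this, layering counterfactual regret minimization and real-time search on top, but the combinatorial spread of hidden states is the root of the difficulty.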

Read 16 remaining paragraphs | Comments

Posted in AI, Facebook AI, Facebook Research, gaming, Gaming & Culture, Pluribus, poker, science, Texas Hold 'Em | Comments (0)

Steam uses machine learning for its new game recommendation engine

July 11th, 2019
The new recommendation engine is part of a new experimental Steam Labs branding.

For years now, Valve has been testing new approaches to filter the glut of Steam games down to the ones in which individual users are most likely to show an interest. To that end, the company is today rolling out a machine-learning-powered "Interactive Recommender" trained on "billions of play sessions" from the Steam user base.

In the past, Steam has relied largely on crowd-sourced metadata like user-provided tags, user-curated lists, aggregate review scores, and sales data to drive its recommendation algorithms. But the new Interactive Recommender is different, Valve says, because it works without any initial internal or external information about the games themselves (save for the release date). "Instead, the model learns about the games for itself during the training process," Valve says. "The model infers properties of games by learning what users do, not by looking at other extrinsic data."

Your own playtime history is a core part of this neural-network-driven model. The number of hours you put into each game in your library is compared with that of millions of other Steam users so the neural network can make "informed suggestions" about the kinds of games you might like. "The idea is that if players with broadly similar play habits to you also tend to play another game you haven't tried yet, then that game is likely to be a good recommendation for you," Valve writes.
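
Valve hasn't published the model's internals, but the description quoted above maps onto classic collaborative filtering. Here is a minimal sketch of that idea, with a made-up playtime matrix and a plain nearest-neighbor scorer standing in for Valve's neural network:

```python
import numpy as np

# Rows = users, columns = games, values = hours played (made-up data).
# A zero means the user hasn't played that game.
playtime = np.array([
    [40.0, 12.0,  0.0,  3.0],   # user 0
    [35.0, 15.0, 22.0,  0.0],   # user 1: similar tastes to user 0
    [ 0.0,  1.0,  0.0, 50.0],   # user 2: very different tastes
], dtype=float)

def recommend(user, k=2):
    # Normalize each user's playtime vector so cosine similarity
    # compares play *habits* rather than total hours.
    norms = np.linalg.norm(playtime, axis=1, keepdims=True)
    unit = playtime / np.where(norms == 0, 1, norms)
    sims = unit @ unit[user]          # cosine similarity to every user
    sims[user] = 0.0                  # don't count the user themselves

    # Score each game by similarity-weighted playtime of neighbors,
    # then keep only games this user hasn't tried yet.
    scores = sims @ playtime
    scores[playtime[user] > 0] = -np.inf
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # game 2 ranks first: the similar user 1 plays it heavily
```

The real system presumably learns dense embeddings for users and games rather than comparing raw playtime vectors, but the principle is the same: broadly similar play habits drive the recommendation.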

Read 4 remaining paragraphs | Comments

Posted in AI, Gaming & Culture, machine learning, recommendations, Steam, Valve | Comments (0)

Author pulls software that used deep learning to virtually undress women

June 28th, 2019
(credit: AntonioGuillem)

On Wednesday, a Vice article alerted the world to the creation of DeepNude, a computer program that uses neural networks to transform an image of a clothed woman into a realistic rendering of what she might look like naked.

The software attracted widespread condemnation. This is an “invasion of sexual privacy,” legal scholar Danielle Citron told Vice.

The software's anonymous creator explained to Vice's Samantha Cole how it worked.

Read 5 remaining paragraphs | Comments

Posted in AI, danielle citron, deepfakes, DeepNude, Policy, revenge porn | Comments (0)

Google’s AI group moves on from Go, tackles Quake III Arena

May 30th, 2019
Representation of some of the behaviors developed by the FTW algorithm. (credit: DeepMind)

Google's AI subsidiary DeepMind has made its reputation by building systems that learn to play games through self-play, starting with little more than the rules and what constitutes a win. That Darwinian approach of improvement through competition has allowed DeepMind to tackle complex games like chess and Go, where there are vast numbers of potential moves to consider.

But at least for board games like those, the potential moves are discrete and don't require real-time decision-making. It wasn't unreasonable to question whether the same approach would work for completely different classes of games. Those questions now seem to be answered by a report in today's issue of Science, in which DeepMind reveals an AI system that has taught itself to play Quake III Arena and can consistently beat human opponents in capture-the-flag games.

Not a lot of rules

Chess' complexity is built from an apparently simple set of rules: an 8 x 8 grid of squares and pieces that can only move in very specific ways. Quake III Arena, to an extent, gets rid of the grid. In capture-the-flag mode, both sides start in a spawn area and have a flag to defend. You score points by capturing the opponent's flag. You can also gain tactical advantage by "tagging" (read "shooting") your opponents, which, after a delay, sends them back to their spawn.

Read 15 remaining paragraphs | Comments

Posted in AI, Computer science, DeepMind, gaming, quake III, science | Comments (0)

Why Google believes machine learning is its future

May 10th, 2019
Google CEO Sundar Pichai speaks during the Google I/O Developers Conference on May 7, 2019. (credit: David Paul Morris/Bloomberg via Getty Images)

One of the most interesting demos at this week's Google I/O keynote featured a new version of Google's voice assistant that's due out later this year. A Google employee asked the Google Assistant to bring up her photos and then show her photos with animals. She tapped one and said, "Send it to Justin." The photo was dropped into the messaging app.

From there, things got more impressive.

"Hey Google, send an email to Jessica," she said. "Hi Jessica, I just got back from Yellowstone and completely fell in love with it." The phone transcribed her words, putting "Hi Jessica" on its own line.

Read 38 remaining paragraphs | Comments

Posted in AI, google, machine learning, pixel, Tech, TPU | Comments (0)

Google debuts “next-generation” Assistant, coming to next Pixel phones

May 7th, 2019
A man gives a speech on a stage in front of the image of three smartphones. (credit: Google/Screenshot)

Google on Tuesday debuted an updated version of its Google Assistant platform during the keynote of its Google I/O developers conference.

The company said it is internally calling this the "next-generation" Assistant and that it will first become available on Google's "new Pixel phones" later this year. (Not to be confused with the budget-friendly Pixel 3a phones Google also announced on Tuesday.)

Google is touting significant performance improvements with the updated Assistant, claiming that it can process and understand voice requests "in real time" and deliver results "up to 10 times faster" than its current iteration. The company says this is primarily because it has condensed the AI models used to interpret speech down to half a gigabyte, which is small enough for them to run directly on a smartphone instead of requiring remote servers.
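
Google didn't say how the models were condensed, but weight quantization is a standard way to get that kind of size reduction, so here is a minimal sketch of the idea using synthetic weights (this is not Google's pipeline): storing each 32-bit float as an 8-bit integer plus a shared scale factor shrinks a layer to about a quarter of its size.

```python
import numpy as np

# A synthetic weight matrix standing in for one layer of a speech model.
weights = np.random.randn(1024, 1024).astype(np.float32)

# Post-training quantization: map floats onto 256 integer levels.
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# At inference time the integers are mapped back to (approximate) floats.
dequantized = quantized.astype(np.float32) * scale

print(f"float32 size: {weights.nbytes / 1e6:.1f} MB")      # 4.2 MB
print(f"int8 size:    {quantized.nbytes / 1e6:.1f} MB")    # 1.0 MB
print(f"max error:    {np.abs(weights - dequantized).max():.4f}")
```

Schemes like this trade a small amount of accuracy (the maximum error printed above) for the memory and latency budget needed to run on-device.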

Read 7 remaining paragraphs | Comments

Posted in AI, google, Google Assistant, Google I/O, Google Pixel, smartphones, Tech | Comments (0)

Facebook is working on an AI voice assistant similar to Alexa, Google Assistant

April 18th, 2019
Along with video chatting through Facebook Messenger, both Portal devices have built-in Amazon Alexa. (credit: Facebook)

Facebook is working on developing an AI voice assistant similar in functionality to Amazon Alexa, Google Assistant, or Siri, according to a report from CNBC and a later statement from a Facebook representative.

The CNBC report, which cites "several people familiar with the matter," says the project has been ongoing since early 2018 in the company's offices in Redmond, Washington. The endeavor is led by Ira Snyder, whose listed title on LinkedIn is "Director, AR/VR and Facebook Assistant at Facebook." Facebook Assistant may be the name of the project. CNBC writes that Facebook has been reaching out to vendors in the smart-speaker supply chain, suggesting that Portal may only be the first of many smart devices the company makes.

When contacted for comment, Facebook sent a statement to Reuters, The Verge, and others, saying: "We are working to develop voice and AI assistant technologies that may work across our family of AR/VR products including Portal, Oculus, and future products."

Read 4 remaining paragraphs | Comments

Posted in AI, alexa, Facebook, Facebook M, Facebook Portal, Google Assistant, Siri, Smart Speaker, Tech, voice assistant | Comments (0)

Researchers, scared by their own work, hold back “deepfakes for text” AI

February 15th, 2019
This is fine.

OpenAI, a non-profit research company investigating "the path to safe artificial intelligence," has developed a machine learning system called Generative Pre-trained Transformer-2 (GPT-2), capable of generating text based on brief writing prompts. The result comes so close to mimicking human writing that it could potentially be used for "deepfake" content. Trained on 40 gigabytes of text retrieved from sources on the Internet (including "all outbound links from Reddit, a social media platform, which received at least 3 karma"), GPT-2 generates plausible "news" stories and other text that match the style and content of a brief text prompt.
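
OpenAI's released sampling code is not reproduced here, but autoregressive generation from a prompt generally works like the loop below: the model scores every vocabulary token, and the sampler repeatedly draws one of the top-k highest-scoring tokens and feeds it back in. The `model` argument is a stand-in for the actual network.

```python
import numpy as np

def sample_text(model, prompt_tokens, length=50, k=40, temperature=1.0):
    """Generic top-k sampling loop for an autoregressive language model.

    `model` is a stand-in: any function mapping a token sequence to a
    logits vector (one score per vocabulary entry) for the next token.
    """
    tokens = list(prompt_tokens)
    rng = np.random.default_rng()
    for _ in range(length):
        logits = model(tokens) / temperature
        # Keep only the k highest-scoring candidate tokens.
        top = np.argsort(logits)[-k:]
        # Softmax over the surviving candidates, then sample one.
        probs = np.exp(logits[top] - logits[top].max())
        probs /= probs.sum()
        tokens.append(int(rng.choice(top, p=probs)))
    return tokens
```

GPT-2 pairs a loop like this with a 1.5-billion-parameter Transformer; the withheld weights, not the sampling logic, are what make the output convincing.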

The system's performance was so disconcerting that the researchers are releasing only a much smaller version of GPT-2. In a blog post on the project and this decision, researchers Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever wrote:

Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. Nearly a year ago we wrote in the OpenAI Charter: “we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research,” and we see this current work as potentially representing the early beginnings of such concerns, which we expect may grow over time. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal "mafia": Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of Y Combinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. Brockman now serves as OpenAI's CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology, ideally moving it away from potentially harmful applications.

Read 6 remaining paragraphs | Comments

Posted in AI, artificial intelligence, Biz & IT, computer-generated text, deep fake, deepfake, fake news, machine learning, Markov chain | Comments (0)

An AI crushed two human pros at StarCraft—but it wasn’t a fair fight

January 30th, 2019
Two groups of Stalkers controlled by AI AlphaStar approach the army of Grzegorz "MaNa" Komincz in the decisive battle of the pair's fourth game. (credit: DeepMind)

DeepMind, the AI startup Google acquired in 2014, is probably best known for creating the first AI to beat a world champion at Go. So what do you do after mastering one of the world's most challenging board games? You tackle a complex video game. Specifically, DeepMind decided to write an AI to play the real-time strategy game StarCraft II.

StarCraft requires players to gather resources, build dozens of military units, and use them to try to destroy their opponents. The game is particularly challenging for an AI because players must carry out long-term plans over several minutes of gameplay, tweaking them on the fly in the face of enemy counterattacks. DeepMind says that prior to its own effort, no one had come close to designing a StarCraft AI as good as the best human players.

Last Thursday, DeepMind announced a significant breakthrough. The company pitted its AI, dubbed AlphaStar, against two top StarCraft players—Dario "TLO" Wünsch and Grzegorz "MaNa" Komincz. AlphaStar won a five-game series against Wünsch 5-0, then beat Komincz 5-0, too.

Read 39 remaining paragraphs | Comments

Posted in AI, AlphaStar, deep learning, deepmind, Gaming & Culture, starcraft, StarCraft II | Comments (0)

Yes, “algorithms” can be biased. Here’s why

January 24th, 2019
Seriously, it's enough to make researchers cry. (credit: Getty | Peter M Fisher)

Dr. Steve Bellovin is professor of computer science at Columbia University, where he researches "networks, security, and why the two don't get along." He is the author of Thinking Security and the co-author of Firewalls and Internet Security: Repelling the Wily Hacker. The opinions expressed in this piece do not necessarily represent those of Ars Technica.

Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition "algorithms" (and by extension all "algorithms") "always have these racial inequities that get translated" and that "those algorithms are still pegged to basic human assumptions. They're just automated assumptions. And if you don't fix the bias, then you are just automating the bias."

She was mocked for this claim on the grounds that "algorithms" are "driven by math" and thus can't be biased—but she's basically right. Let's take a look at why.
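
Her argument is easy to demonstrate in miniature. The sketch below uses synthetic data and invented numbers: past decisions are biased against one group, a perfectly neutral least-squares fit is trained on them, and the fitted model reproduces the bias, because the skew lives in the data, not in the math.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" loan decisions. Qualification is identically
# distributed across both groups, but past reviewers approved group B
# at a lower rate for the same qualification (the injected human bias).
group = rng.integers(0, 2, n)                  # 0 = A, 1 = B
qualification = rng.normal(0, 1, n)
bias_penalty = np.where(group == 1, 1.0, 0.0)
approved = (qualification - bias_penalty + rng.normal(0, 0.5, n)) > 0

# Fit a simple linear model on the biased labels: plain least squares,
# with no prejudice anywhere in the arithmetic itself.
X = np.column_stack([np.ones(n), qualification, group])
coef, *_ = np.linalg.lstsq(X, approved.astype(float), rcond=None)

# The learned coefficient on `group` comes out strongly negative: the
# model has automated the reviewers' bias, exactly as described above.
print(f"coefficient on group membership: {coef[2]:+.2f}")
```

Nothing in the least-squares arithmetic is prejudiced; the model simply learned the pattern it was shown.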

Read 23 remaining paragraphs | Comments

Posted in AI, algorithms, machine learning, ML, Policy | Comments (0)