Marijuana or broccoli? Facebook illustrates AI's challenges with this example - CNET

Facebook CTO Mike Schroepfer says Facebook's AI can distinguish between images of marijuana (left) and broccoli tempura (right). 

Screenshot by Stephen Shankland/CNET

Facebook uses both human beings and artificial intelligence to combat some of its toughest problems, including hate speech, misinformation and election meddling. Now, the social network is doubling down on AI.

The tech giant has come under fire for a series of lapses, including its failure to pull down a live video of a terrorist attack in New Zealand that killed 50 people at two mosques. Content moderators who review posts shared by the social network's 2.3 billion users say they've suffered trauma from repeatedly viewing gruesome and violent content. But AI has also helped Facebook flag spam, fake accounts, nudity and other offensive content before a user reports it. Overall, AI has had mixed results.

Facebook CTO Mike Schroepfer on Wednesday acknowledged that AI hasn't been a cure-all for the social network's "complex problems," but he said the company was making progress. He made the remarks in a keynote at the company's F8 developer conference.

Schroepfer showed the audience photographs of marijuana and broccoli tempura, which look surprisingly similar. Facebook employees, he said, built a new algorithm that can detect differences in similar images, allowing a computer to distinguish which was which.

Schroepfer said similar techniques can be used to help machines recognize other images that might otherwise escape the social network's detection.

"If someone reports something like this," he said, "we can then fan out and look at billions of images in a very short period of time and find things that look similar."

Facebook, which doesn't allow the sale of recreational drugs on its platform, discovered that people tried to work around its system by using images of packaging or baked goods, such as Rice Krispies treats. The social network can now flag those images by combining signals such as the text in a post, comments and the identity of the user.

"This is an intensely adversarial game," Schroepfer said. "We build a new technique, we deploy it, people work hard to try to figure out ways around this."

Identifying the right images isn't the only AI challenge the company faces. When it was building a smart camera for its Portal video chat device, Facebook had to make sure the technology wasn't biased and worked equally well across ages, genders and skin tones.

Facebook is also trying to train its computers to learn with less supervision so it can tackle problems such as hate speech around elections.

But as the social network uses AI to moderate more content, it also has to balance concerns that it's being fair to all groups. Facebook, for example, has been accused of suppressing conservative speech, but the company has denied those allegations. And people might disagree about what's considered hate speech or misinformation. 

Facebook data scientist Isabel Kloumann said in an interview that when the company determines what counts as hate speech, the identity of the speaker could be an important factor, along with whom they're targeting. At the same time, Facebook has to balance safety concerns with whether it's treating groups of people equally.

"We don't have a silver bullet for this," she said. "But the fact that we're having this conversation is the most important thing."

Originally published May 1, 1:46 p.m. PT
Update, 5:19 p.m.: Adds comments from Facebook data scientist and more background.
