Facebook, Twitter: We spot trolls based on how they act, not their posts
Angela Lang/CNET

That horrendously offensive meme in your Twitter or Facebook feed was probably posted by a Russian-backed troll, right? Actually, probably not.

Social media companies can't tell that much from the kinds of content an account posts, Del Harvey, Twitter's vice president of trust and safety, said at the RSA Conference on Wednesday. During a panel session, Harvey said it's how an account behaves that reveals whether it's part of a larger campaign to spread division online.

"Content is actually one of the weaker signals," Harvey said.

The revelation has social media companies focusing on the behavior that accounts exhibit: who they're communicating with, what they're sharing and what gets them to respond. Following that behavior has prompted Twitter and Facebook, along with Google and Reddit, to take down thousands of accounts they say were associated with influence campaigns targeting the US, the Middle East and Latin America. Peter W. Singer, a fellow specializing in defense at the New America think tank, says it's a problem that government leaders and social media companies were too slow to recognize.

"We were looking in the wrong place," Singer said during the session. Instead of hackers stealing Facebook account information, the real threat was people "buying ads on scale that over half the American public saw unwittingly, which was Russian propaganda."

During the run-up to the 2016 US presidential election, 126 million American Facebook accounts and 1.4 million Twitter accounts saw content from Russian-backed sources.

Of course, Twitter users don't always understand that distinction, Harvey said, leading them to accuse one another of being "bots" when they don't like content they see on the platform. The confusion speaks to one of the effects of influence campaigns: they lead users to dismiss people they disagree with as foreign operatives. It's one more way the campaigns leading up to the election injected an element of chaos into public discourse.

Facebook cybersecurity chief Nathaniel Gleicher agreed with Harvey, saying that focusing on offensive content doesn't stop what the company calls "coordinated inauthentic behavior." That's when a group of fake accounts pose as Americans, for example, and work with each other to amplify certain posts and ideas.

"The majority of the content we see in information operations doesn't violate our policies," Gleicher said, "and it's not provably false." Instead, it's designed to fit into the "gray space" in Facebook's policies.

US intelligence agencies were fully aware of what Russian-backed influence campaigns were capable of, said Rob Joyce, a senior advisor on cybersecurity at the NSA. But knowing what to do with that information is another question, because people in the US value freedom of speech so highly.

"We're in the middle of speech," Joyce said. "Getting into the middle and breaking the disruptive speech on the platforms, that's a difficult place for America to go."

Both Twitter and Facebook said they'd be willing to accept regulation requiring them to tell users more about where content comes from. Gleicher said Facebook has implemented the requirements of the Honest Ads Act even though the bill hasn't passed Congress. Facebook CEO Mark Zuckerberg expressed his support for the bill last year.
