Facebook has announced that the data of 87 million users, mostly in the U.S., “may have been improperly shared with Cambridge Analytica by apps that they or their friends used.” Facebook CTO Mike Schroepfer tells TechCrunch that Facebook will warn these users with a notice atop the News Feed with information about what data of theirs might have been attained, and what they should do now. It will also show its new bulk app permissions removal tool atop the feed.
Schroepfer says that 87 million is the maximum number of users impacted, up from the New York Times’ initial report of 50 million people affected, as Facebook isn’t certain how many people had their data misused. It likely doesn’t want to low-ball and have to revise the number upward later, as it did when it initially reported the Russian election interference ads were seen by 10 million users and later had to admit to Congress it was actually 126 million when organic posts were included. Mark Zuckerberg plans to take questions from reporters about the changes during a 1:00pm Pacific conference call on the subject.
The changes come as part of a slew of announcements in the wake of the Cambridge Analytica scandal including new restrictions on Facebook API use and the immediate shut down of the old Instagram API that was slated for July, but which started to break developers’ apps this week. Facebook is now undergoing a deep audit of app developers that pulled a lot of data or that look suspicious, and Schroepfer promises Facebook will make further disclosures if it finds any situations similar to the Cambridge Analytica fiasco.
Facebook is trying to fix its broken data privacy after a developer named Dr. Aleksandr Kogan used the platform to administer a personality test app that collected data about participants and their friends. That data was then passed to Cambridge Analytica, where it may have been leveraged to optimize political campaigns including that of 2016 presidential candidate Donald Trump and the Brexit vote, allegations which the company itself vehemently denies. Regardless of how the data was employed to political ends, that lax data sharing was enough to ignite a firestorm around Facebook’s privacy practices.
Following the Cambridge Analytica revelations, the company’s stock dropped precipitously, wiping more than $60 billion off its market capitalization from its prior period of stable growth. At the time of writing, Facebook was trading at $153.56.
Facebook’s core leadership was slow to respond to the explosion of negative attention, though Zuckerberg and Sandberg broke that silence with a flurry of media appearances, interviews and print ads. The company also came under the scrutiny of Congress once more and that pressure, which came from subcommittees in both the House and Senate and from both political parties, appears to have paid off. Zuckerberg is expected to testify before the House Energy and Commerce Committee, just one of the several powerful committees calling for him, on April 11.
While it’s certainly unfortunate that it took mishandling user data on a large scale to do so, the incident has become the straw that broke the Facebook camel’s back when it comes to privacy — and that appears to be catalyzing change. Schroepfer said Facebook is now lifting every rock to find any other vulnerabilities that could be used to illicitly access or steal people’s information. Now we’re getting changes that should have been in place years ago that could make Facebook a safer place to network for users concerned about how the company handles their private data.
There’s no doubt it will be a battle to get there — requiring legal challenges and fresh case law to be set down — as an old guard of dominant tech platforms marshal their extensive resources to try to hold onto the power and wealth gained through years of riding roughshod over data protection law.
The exciting opportunity for startups is to think beyond exploitative legacy business models that amount to embarrassing black boxes whose CEOs dare not publicly admit what the systems really do, and to come up with new ways of operating and monetizing services that don’t rely on selling the lie that people don’t care about privacy.
More than just small print
Right now the EU’s General Data Protection Regulation can take credit for a whole lot of spilt ink as tech industry small print is reworded en masse. Did you just receive a T&C update notification about a company’s digital service? Chances are it’s related to the incoming standard.
The regulation is generally intended to strengthen Internet users’ control over their personal information, as explained before. But its focus on transparency — making sure people know how and why data will flow if they choose to click ‘I agree’ — combined with supersized fines for major data violations represents something of an existential threat to ad tech processes that rely on pervasively harvesting users’ personal data in the background, siphoned as fuel for their vast, proprietary microtargeting engines.
This is why Facebook is not going gentle into a data processing goodnight.
Indeed, it’s seizing on GDPR as a PR opportunity — shamelessly stamping its brand on the regulatory changes it lobbied so hard against, including by taking out full page print ads in newspapers…
This is of course another high gloss plank in the company’s PR strategy to try to convince users to trust it — and thus to keep giving it their data. Because — and only because — GDPR gives consumers more opportunity to lock down access to their information and close the shutters against countless prying eyes.
But the pressing question for Facebook — and one that will also test the mettle of the new data protection standard — is whether or not the company is doing enough to comply with the new rules.
One important point regarding Facebook and GDPR is that the standard applies globally, i.e. for all Facebook users whose data is processed by its international entity, Facebook Ireland (and thus within the EU); but not necessarily universally — with Facebook users in North America not legally falling under the scope of the regulation.
(Update: Reuters has obtained confirmation from Facebook that it will be switching the data controller entity for all its international users to Facebook USA, rather than Facebook Ireland, with the exception of users in Europe — thereby shrinking the legal reach of GDPR across its international user-base.)
Facebook users in North America will only benefit from GDPR’s protections if Facebook chooses to apply the same standard everywhere as it must for EU users. (And on that point the company has stayed exceedingly fuzzy.)
It has claimed it won’t give US and Canadian users second tier status where their privacy is concerned — saying they’re getting the same “settings and controls” — but unless or until US lawmakers spill some ink of their own there’s nothing but an embarrassing PR message to regulate what Facebook chooses to do with Americans’ data. It’s the data protection principles, stupid.
Zuckerberg was asked by US lawmakers last week what kind of regulation he would and wouldn’t like to see laid upon Internet companies — and he made a point of arguing for privacy carve outs to avoid falling behind, of all things, competitors in China.
Which is an incredibly chilling response when you consider how few rights — including human rights — Chinese citizens have. And how data-mining digital technologies are being systematically used to expand Chinese state surveillance and control.
The ugly underlying truth of Facebook’s business is that it also relies on surveillance to function. People’s lives are its product.
That’s why Zuckerberg couldn’t tell US lawmakers to hurry up and draft their own GDPR. He’s the CEO saddled with trying to sell an anti-privacy, anti-transparency position — just as policymakers are waking up to what that really means.
Facebook has announced a series of updates to its policies and platform in recent months, which it’s said are coming to all users (albeit in ‘phases’). The problem is that most of what it’s proposing to achieve GDPR compliance is simply not adequate.
He could not tell Congress there wouldn’t be other such data misuse skeletons in its closet. Indeed the company has said it expects it will uncover additional leaks as it conducts a historical audit of apps on its platform that had access to “a large amount of data”. (How large is large, one wonders… )
Any new law will certainly take time to formulate and pass. In the meanwhile GDPR is it.
The most substantive GDPR-related change announced by Facebook to date is the shuttering of a feature called Partner Categories — in which it allowed the linking of its own information holdings on people with data held by external brokers, including (for example) information about people’s offline activities.
Evidently finding a way to close down the legal liabilities and/or engineer consent from users to that degree of murky privacy intrusion — involving pools of aggregated personal data gathered by goodness knows who, how, where or when — was a bridge too far for the company’s army of legal and policy staffers.
As my TC colleague Josh Constine wrote earlier in a critical post dissecting the flaws of Facebook’s approach to consent review, the company is — at the very least — not complying with the spirit of the law.
Indeed, Facebook appears pathologically incapable of abandoning its long-standing modus operandi of socially engineering consent from users (doubtless fed via its own self-reinforced A/B testing ad expertise). “It feels obviously designed to get users to breeze through it by offering no resistance to continue, but friction if you want to make changes,” was his summary of the process.
To get into a few specifics, pre-ticked boxes — which is essentially what Facebook is deploying here, with a big blue “accept and continue” button designed to grab your attention as it’s juxtaposed against an anemic “manage data settings” option (which if you even manage to see it and read it sounds like a lot of tedious hard work) — aren’t going to constitute valid consent under GDPR.
Nor is this what ‘privacy by default’ looks like — another staple principle of the regulation. On the contrary, Facebook is pushing people to do the opposite: Give it more of their personal information — and fuzzing why it’s asking by bundling a range of usage intentions.
The company is risking a lot here.
In simple terms, seeking consent from users in a way that’s not fair because it’s manipulative means consent is not being freely given. Under GDPR, it won’t be consent at all. So Facebook appears to be seeing how close to the wind it can fly to test how regulators will respond.
“Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment,” runs one key portion of GDPR.
Now compare that with: “People can choose to not be on Facebook if they want” — which was Facebook’s deputy chief privacy officer, Rob Sherman’s, paper-thin defense to reporters for the lack of an overall opt out for users to its targeted advertising.
Data protection experts suggest Facebook is failing to comply with, not just the spirit, but the letter of the law here. Some were exceedingly blunt on this point.
“I am less impressed,” said law professor Mireille Hildebrandt discussing how Facebook is railroading users into consenting to its targeted advertising. “It seems they have announced that they will still require consent for targeted advertising and refuse the service if one does not agree. This violates [GDPR] art. 7.4 jo recital 43. So, yes, they will be taken to court.”
Facebook says users must accept targeted ads even under new EU law: NO THEY MUST NOT, there are other types of advertising, subscription etc. https://t.co/zrUgsgxtwo
“Zuckerberg appears to view the combination of signing up to T&Cs and setting privacy options as ‘consent,’” adds cyber security professor Eerke Boiten. “I doubt this is explicit or granular enough for the personal data processing that FB do. The default settings for the privacy settings certainly do not currently provide for ‘privacy by default’ (GDPR Art 25; see below).
“I also doubt whether Facebook Custom Audiences work correctly with consent. FB finds out and retains a small bit of personal info through this process (that an email address they know is known to an advertiser), and they aim to shift the data protection legal justification on that to the advertisers. Do they really then not use this info for future profiling?”
That looming tweak to the legal justification of Facebook’s Custom Audiences feature — a product which lets advertisers upload contact lists in a hashed form to find any matches among its own user-base (so those people can be targeted with ads on Facebook’s platform) — also looks problematical.
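For illustration, hashed contact-list matching of this kind can be sketched in a few lines. This is a hypothetical sketch, not Facebook’s actual implementation: the normalization rules, identifiers and field names below are assumptions. But it shows why the platform necessarily learns which of its users are known to a given advertiser, the “small bit of personal info” Boiten flags above.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # Normalize (strip whitespace, lowercase) then hash, so raw
    # addresses are never exchanged directly between the parties.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# The advertiser uploads hashes of its contact list...
advertiser_list = {normalize_and_hash(e) for e in
                   ["alice@example.com", "Bob@Example.com "]}

# ...and the platform hashes its own user records the same way.
platform_users = {normalize_and_hash(e): user_id for e, user_id in
                  [("alice@example.com", 101), ("carol@example.com", 102)]}

# A match reveals exactly one fact: this user's address is already
# known to this advertiser -- which the platform can then retain.
matched = [uid for h, uid in platform_users.items() if h in advertiser_list]
print(matched)  # → [101]
```

Note that hashing here protects the addresses in transit, not the fact of the match itself; the question of who is the data controller for that matching decision is exactly what the terms change below tries to shift.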
Here the company seems to be intending to try to claim a change in the legal basis, pushed out via new terms in which it instructs advertisers to agree they are the data controller (and it is merely a data processor). And thereby seek to foist a greater share of the responsibility for obtaining consent to processing user data onto its customers.
However such legal determinations are simply not a matter of contract terms. They are based on the fact of who is making decisions about how data is processed. And in this case — as other experts have pointed out — Facebook would be classed as a joint controller with any advertisers that upload personal data. The company can’t use a T&Cs change to opt out of that.
Wishful thinking is not a reliable approach to legal compliance.
Let’s not forget, facial recognition was a platform feature that got turned off in the EU, thanks to regulatory intervention. Yet here Facebook is now trying to use GDPR as a route to process this sensitive biometric data for international users after all — by pushing individual users to consent to it by dangling a few ‘feature perks’ at the moment of consent.
Veteran data protection and privacy consultant, Pat Walshe, is unimpressed.
“The sensitive data tool appears to be another data grab,” he tells us, reviewing Facebook’s latest clutch of ‘GDPR changes.’ “Note the subtlety. It merges ‘control of sharing’ such data with FB’s use of the data “to personalise features and products.” From the info available that isn’t sufficient to amount to consent for such sensitive data and nor is it clear folks can understand the broader implications of agreeing.
“Does it mean ads will appear in Instagram? WhatsApp etc? The default is also set to ‘accept’ rather than ‘review and consider.’ This is really sensitive data we’re talking about.”
“The facial recognition suggestions are woeful,” he continues. “The second image is using an example… to manipulate and stoke fear — ‘we can’t protect you.’”
Of course it goes without saying that Facebook users will keep uploading group photos, not just selfies. What’s less clear is whether Facebook will be processing the faces of other people in those shots who have not given (and/or never even had the opportunity to give) consent to its facial recognition feature.
People who might not even be users of its product.
It can’t give non-users “settings and controls” not to have their data processed. So it’s already compromised their privacy — because it never gained consent in the first place.
New Mexico Representative Ben Lujan made this point to Zuckerberg’s face last week and ended the exchange with a call to action: “So you’re directing people that don’t even have a Facebook page to sign up for a Facebook page to access their data… We’ve got to change that.”
But nothing in the measures Facebook has revealed so far, as its ‘compliance response’ to GDPR, suggest it intends to pro-actively change that.
Walshe also critically flags how — again, at the point of consent — Facebook’s review process deploys examples of the social aspects of its platform (such as how it can use people’s information to “suggest groups or other features or products”) as a tactic for manipulating people to agree to share religious affiliation data, for example.
“The social aspect is not separate to but bound up in advertising,” he notes, adding that the language also suggests Facebook uses the data.
Again, this whiffs a whole lot more than it smells like GDPR compliance.
“I don’t believe FB has done enough,” adds Walshe, giving a view on Facebook’s GDPR preparedness ahead of the May 25 deadline for the framework’s application — as Zuckerberg’s Congress briefing notes suggested the company itself believes it has. (Or maybe it just didn’t want to admit to Congress that U.S. Facebook users will get lower privacy standards vs users elsewhere.)
“In fact I know they have not done enough. Their business model is skewed against privacy — privacy gets in the way of advertising and so profit. That’s why Facebook has variously suggested people may have to pay if they want an ad free model & so ‘pay for privacy.’”
“On transparency, there is a long way to go,” adds Boiten. “Friend suggestions, profiling for advertising, use of data gathered from like buttons and web pixels (also completely missing from “all your Facebook data”), and the newsfeed algorithm itself are completely opaque.”
“What matters most is whether FB’s processing decisions will be GDPR compliant, not what exact controls are given to FB members,” he concludes.
US lawmakers also pumped Zuckerberg on how much of the information his company harvests on people who have a Facebook account is revealed to them when they ask for it — via its ‘Download your data’ tool.
‘Download your Data’ is clearly partial and self-serving — and thus it also looks very far from being GDPR compliant.
Not even half the story
Facebook is not even complying with the spirit of current EU data protection law on data downloads. Subject Access Requests give individuals the right to request not just the information they have voluntarily uploaded to a service, but also the personal data the company holds about them, including a description of that personal data; the reasons it is being processed; and whether it will be given to any other organizations or people.
Facebook not only does not include people’s browsing history in the info it provides when you ask to download your data — which, incidentally, its own cookies policy confirms it tracks (via things like social plug-ins and tracking pixels on millions of popular websites etc etc) — it also does not include a complete list of advertisers on its platform that have your information.
Instead, after a wait, it serves up an eight-week snapshot. But even this two month view can still stretch to hundreds of advertisers per individual.
If Facebook gave users a comprehensive list of the advertisers with access to their information, the number of third party companies would clearly stretch into the thousands. (In some cases thousands might even be a conservative estimate.)
In the EU it currently invokes an exception in Irish law to circumvent fuller compliance — which, even setting GDPR aside, raises some interesting competition law questions, as Paul-Olivier Dehaye told the UK parliament last month.
“‘All your Facebook data’ isn’t a complete solution,” agrees Boiten. “It misses the info Facebook uses for auto-completing searches; it misses much of the information they use for suggesting friends; and I find it hard to believe that it contains the full profiling information.”
“‘Ads Topics’ looks rather random and undigested, and doesn’t include the clear categories available to advertisers,” he further notes.
Facebook wouldn’t comment publicly about this when asked. But it maintains its approach towards data downloads is GDPR compliant — and says it’s reviewed what it offers with regulators to get feedback.
Earlier this week it also put out a wordy blog post attempting to defuse this line of attack by pointing the finger of blame at the rest of the tech industry — saying, essentially, that a whole bunch of other tech giants are at it too.
What its blog post didn’t say — yet again — was anything about how all the non-users it nonetheless tracks around the web are able to have any kind of control over its surveillance of them.
And remember, some Facebook non-users will be children.
So yes, Facebook is inevitably tracking kids’ data without parental consent. Under GDPR that’s a majorly big no-no. But hey, that’s business!
TC’s Constine had a scathing assessment of even the on-platform system that Facebook has devised in response to GDPR’s requirements on parental consent for processing the data of users who are between the ages of 13 and 15.
“Users merely select one of their Facebook friends or enter an email address, and that person is asked to give consent for their ‘child’ to share sensitive info,” he observed. “But Facebook blindly trusts that they’ve actually selected their parent or guardian… [Facebook’s] Sherman says Facebook is “not seeking to collect additional information” to verify parental consent, so it seems Facebook is happy to let teens easily bypass the checkup.”
So again, the company is being shown doing the minimum possible — in what might be construed as a cynical attempt to check another compliance box and carry on its data-sucking business as usual.
Given that intransigence it really will be up to the courts to bring the enforcement stick. Change, as ever, is a process — and hard won.
Hildebrandt is at least hopeful that a genuine reworking of Internet business models is on the way, though — albeit not overnight. And not without a fight.
“In the coming years the landscape of all this silly microtargeting will change, business models will be reinvented and this may benefit both the advertisers, consumers and citizens,” she tells us. “It will hopefully stave off the current market failure and the uprooting of “democratic processes…” Though nobody can predict the future, it will require hard work.”
Just a few weeks before Facebook CEO Mark Zuckerberg apologized for the “breach of trust” that allowed Cambridge Analytica to access the private social media activity of 50 million people, Facebook plunked down $200,000 to fight a data privacy initiative in California.
The social media giant’s donation matched others from Google, AT&T, Comcast and Verizon—a million-dollar sign that the issue of how companies collect and share personal information is likely to grow into an expensive fight as election season unfolds in California.
The businesses are fighting an initiative proposed by San Francisco real estate developer Alastair Mactaggart, who’s already spent $1.7 million on a measure that would allow Californians to prohibit companies from selling or sharing their personal data. His campaign is gathering signatures with the goal of landing the California Consumer Privacy Act on the November ballot.
“What we are proposing is some very basic rights: Let people find out what information companies are collecting, and let them have the ability to say, ‘Don’t sell my information,’” said Mactaggart, who was inspired to draft the initiative after chatting at a party with a Google engineer who told him that people would freak out if they knew how much companies track, compile and sell their personal information.
Ever used Evite to invite friends to a religious celebration or a child’s birthday party? Your religion or the age of your child may be for sale. Use a fitness or fertility app?
They collect loads of personal information that can be shared—and a study by the Future of Privacy Forum found that some don’t have privacy policies telling users what happens to their data. Use a discount card at the grocery or drug store?
Everything you buy is a piece of data about you. Marketing companies compile these billions of bits of data to build profiles of the kind of consumer or voter they think you are.
The measure would give Californians the ability to opt out of having their personal data sold or shared by requiring businesses to display a button on their websites that says, “Do Not Sell My Personal Information.” Clicking the link would take users to an opt-out form.
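Mechanically, the opt-out the measure describes amounts to a per-user flag that must be consulted before any sale or share of that person’s data. The sketch below is purely illustrative of that logic; the class and method names are invented for this example, not anything the initiative’s text prescribes.

```python
class OptOutRegistry:
    """Tracks consumers who clicked 'Do Not Sell My Personal Information'."""

    def __init__(self):
        self._opted_out = set()

    def opt_out(self, consumer_id: str) -> None:
        # Called when the user submits the opt-out form.
        self._opted_out.add(consumer_id)

    def may_sell(self, consumer_id: str) -> bool:
        # Every sale or share of personal data must pass this check first.
        return consumer_id not in self._opted_out


registry = OptOutRegistry()
registry.opt_out("user-42")
print(registry.may_sell("user-42"))  # False: opted out, data may not be sold
print(registry.may_sell("user-7"))   # True: no opt-out on record
```

The simplicity is the point of the dispute: opponents argue that because the initiative treats “sharing” like “selling,” this same gate would block data flows that services such as ride-hailing depend on.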
Mactaggart and his supporters seized on the recent controversy over Cambridge Analytica accessing the data of millions of Facebook users to benefit clients such as the Trump campaign, publicly calling on the company to stop opposing his ballot measure. There’s little reason to think it will.
Though Zuckerberg said on national television that Facebook has a “basic responsibility to protect people’s data,” his company has worked with other internet giants to beat back numerous efforts to increase consumer privacy. They lobbied against federal legislation last year that would have required tech companies to obtain customers’ permission before selling their data to advertisers. And they lobbied against a bill in the California legislature that would have required internet service providers to get permission from customers before selling or sharing information about their browsing history.
Now they’re fighting Mactaggart’s measure by warning that it would fundamentally disrupt the 21st century economy, not only impacting the business of digital advertising but also hampering many services people have come to rely on. Mapping apps, ride-hailing apps and email subscription services all rely on sharing users’ data. The initiative treats “sharing” data and “selling” data the same, and opponents say such services wouldn’t work if consumers were allowed to opt out of sharing the data. The measure says the opt-out wouldn’t apply when the consumer intentionally discloses personal information (for example, revealing their location when hailing a ride), but opponents maintain the distinction is unworkable.
“Just about every sector of business in the state will oppose this because it’s a direct threat to their vitality,” said Steve Maviglio, spokesman for the tech-funded political committee opposing the privacy initiative, called the Coalition to Protect California Jobs. The committee has backing from the California Chamber of Commerce, TechNet and the Internet Association—a trio of powerful and deep-pocketed interests.
Facebook didn’t respond to a request for comment, but it’s a member of TechNet and the Internet Association, which argue that changing the internet’s rules in one state is impractical because it’s a global network.
“This measure will stifle innovation and send companies to competing states and countries that do not have such job-crushing regulations,” said a statement from TechNet vice president Andrea Deveau.
The initiative also would allow Californians to sue companies that violate their request not to share personal information—another point of contention for business groups, which almost always oppose policies making it easier for them to be sued.
Consumer groups have been assessing whether the initiative as drafted does what it aims to do. Several now support it, although others including the Electronic Frontier Foundation, which litigates civil liberties issues in technology, have not yet taken a position.
Leading the campaign with Mactaggart is Mary Ross, a former CIA analyst who moved to California two years ago. As a counterintelligence analyst, Ross said she helped monitor foreign governments’ efforts to spy on America. So when Mactaggart asked her to join his campaign, Ross said she had “an insider’s perspective” on the power of big data.
“Information is powerful whether it’s a government using it or a business,” Ross said.
“Information is being used to manipulate people and you don’t even know when you’re being manipulated… Maybe it’s being done to make you buy something or maybe it’s being done to get you to go vote a certain way. But if there is no transparency or accountability, it’s going to continue.”
For a complete list of the firms funding the effort visit: fppc.ca.gov
On Thursday evening, BuzzFeed published a memo from Andrew “Boz” Bosworth, a vice president at Facebook who currently leads its hardware efforts. In the memo, Bosworth says that the company’s core function is to connect people, despite consequences that he repeatedly called “ugly.” “That’s why all the work we do in growth is justified. All the questionable contact importing practices,” he wrote. “All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.”
Bosworth distanced himself from the memo, saying in a Twitter post that he hadn’t agreed with those words even when he wrote them. He was trying to galvanize a discussion around the company’s growth strategy, he said. CEO Mark Zuckerberg told BuzzFeed that he had not agreed with the sentiments in the post at the time, and that growth should not be a means to an end in itself. “We recognize that connecting people isn’t enough by itself. We also need to work to bring people closer together,” Zuckerberg said.
After BuzzFeed published the memo, Bosworth deleted his original post. “While I won’t go quite as far as to call it a straw man, that post was definitely designed to provoke a response,” Bosworth wrote in a follow-up memo obtained by The Verge. “It served effectively as a call for people across the company to get involved in the debate about how we conduct ourselves amid the ever changing mores of the online community. The post was of no particular consequence in and of itself; it was the comments that were impressive. A conversation over the course of years that was alive and well even going into this week.
“That conversation is now gone,” Bosworth continued. “And I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.”
Facebook and Bosworth declined to comment.
Nearly 3,000 employees had reacted to Bosworth’s memo when The Verge viewed it, responding with a mixture of likes, “sad,” and “angry” reactions. Many employees rallied to Bosworth’s side, praising him for sharing his feelings about sensitive company matters using blunt language.
Others criticized Bosworth for deleting the post, saying it fueled a narrative that the company had something to hide. “Deleting things usually looks bad in retrospect,” one wrote. “Please don’t feed the fire by giving these individuals more fuel (e.g., ‘Facebook execs deleting internal communications’). If we are no longer open and transparent, and instead lock down and delete, then our culture is also destroyed — but by our own hand.”
Dozens of employees criticized the unknown leakers at the company. “Leakers, please resign instead of sabotaging the company,” one wrote in a comment under Bosworth’s post. Wrote another: “How fucking terrible that some irresponsible jerk decided he or she had some god complex that jeopardizes our inner culture and something that makes Facebook great?”
Several employees suggested Facebook attempt to screen employees for a high degree of “integrity” during the hiring process. “Although we all subconsciously look for signal on integrity in interviews, should we consider whether this needs to be formalized in the interview process?” one wrote.
Wrote another: “This is so disappointing, wonder if there is a way to hire for integrity. We are probably focusing on the intelligence part and getting smart people here who lack a moral compass and loyalty.”
Other employees said it would be difficult to detect leakers before they acted.
“I don’t think we’ve seen a huge internally leaked data breach, but I’ve always thought our ‘open but punitive’ stance was particularly vulnerable to suicide bombers,” one employee wrote. “We would be foolish to think that we could adequately screen against them in a hiring process at our scale. … We have our representative share of sick people, drug addicts, wife beaters, and suicide bombers. Some of this cannot be mitigated by training. To me, this makes it just a matter of time.”
That employee followed up to say: “OMG, I just ran back to my ‘puter from a half-eaten lunch with food in my mouth. APOLOGIES to our brothers in sisters in the Austin Office for my insensitive choice of metaphors/words. I’m sorry.”
Another theory floated by multiple employees is that Facebook has been targeted by spies or state-level actors hoping to embarrass the company. “Keep in mind that leakers could be intentionally placed bad actors, not just employees making a one-off bad decision,” one wrote. “Thinking adversarially, if I wanted info from Facebook, the easiest path would be to get people hired into low-level employee or contract roles.” Another wrote: “Imagine that some percentage of leakers are spies for governments. A call to morals or problems of performance would be irrelevant in this case, because dissolution is the intent of those actors. If that’s our threat — and maybe it is, given the current political situation? — then is it even possible to build a system that defaults to open, but that is able to resist these bad actors (or do we need to redesign the system?)”
Several employees shared concerns that the leaks had removed some of Facebook’s luster. The company is routinely cited as among the best places to work in America.
“If this leak #$%^ continues, we will become like every other company where people are hesitant to discuss broad-reaching, forward-looking ideas and thoughts, that only the very average ideas and thoughts get discussed and executed,” one employee wrote. “Making them average companies.”
Another employee responded: “Will become? Seems like we are there.”
The leaks also became cause for discussion about the company’s internal sharing tools. Facebook runs on its enterprise product, Facebook for Work. One employee wondered whether the critics of leakers had ignored incentives for sharing created by the product itself. It’s a nuanced thought worth sharing in full:
“It’s interesting to note that this discussion is about leaks pushing us to be more cognizant of our sharing decisions. The result is that we are incentivized toward stricter audience management and awareness of how our past internal posts may look when re-surfaced today. We blame a few ill-intentioned employees for this change.
“The non-employee Facebook user base is also experiencing a similar shift: the move toward ephemeral and direct sharing results from realizing that social media posts that were shared broadly and are searchable forever can become a huge liability today.
A key difference between the outside discussion and the internal discussion is that the outside blames the Facebook product for nudging people to make those broad sharing decisions years ago, whereas internally the focus is entirely on employees.”
Another employee made a similar plea for empathy. “Can we channel our outrage over the mishandling of our information into an empathy for our users’ situation? Can the deletion of a post help us better understand #DeleteFacebook? How we encourage ourselves to remain open while acknowledging a world that doesn’t always respect the audience and intention for that information might just be the key to it all. Maybe we should be dogfooding that?”
For his part, Bosworth promised employees he would continue sharing candid thoughts about Facebook, but said he would likely post less. “When posting comes with the risk that I’ll have to blow up my schedule and defend myself to the national press,” he wrote, “you can imagine it is an inhibitor.”
Here is Bosworth’s full memo to the company today.
I’m feeling a little heartbroken tonight.
I had multiple reporters reach out today with different stories containing leaks of internal information.
In response to one of the leaks I have chosen to delete a post I made a couple of years ago about our mission to connect people and the ways we grow. While I won’t go quite as far as to call it a straw man, that post was definitely designed to provoke a response. It served effectively as a call for people across the company to get involved in the debate about how we conduct ourselves amid the ever changing mores of the online community. The post was of no particular consequence in and of itself, it was the comments that were impressive. A conversation over the course of years that was alive and well even going into this week.
That conversation is now gone. And I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.
This is the very real cost of leaks. We had a sensitive topic that we could engage on openly and explore even bad ideas, even if just to eliminate them. If we have to live in fear that even our bad ideas will be exposed then we won’t explore them or understand them as such, we won’t clearly label them as such, we run a much greater risk of stumbling on them later. Conversations go underground or don’t happen at all. And not only are we worse off for it, so are the people who use our products.
Now the first study detailing the process from start to finish is finally shedding some light. “This is the first time that I’ve seen all the dots connected,” says Joanna Bryson, an artificial intelligence researcher at the University of Bath, UK.
At the heart of the debate is psychographic targeting – the directing of political campaigns at people via social media based on their personality and political interests, with the aid of vast amounts of data filtered by artificial intelligence (AI).
Though Facebook doesn’t “explicitly” provide all the tools to target people based on political opinions, the new study shows how the platform can be exploited. Using combinations of people’s interests, demographics, and survey data it’s possible to direct campaigns at individuals based on their agreement with ideas and policies. This could have a big impact on the success of campaigns.
“The weaponized, artificially intelligent propaganda machine is effective. You don’t need to move people’s political dials by much to influence an election, just a couple of percentage points to the left or right,” says Chris Sumner at the Online Privacy Foundation, who presented the work at DEF CON in Las Vegas.
Checks and balances
No one yet knows how much this can permanently change people’s views. But Sumner’s study clearly reveals a form of political campaigning with no checks and balances.
To get to grips with the complex issue of psychographic targeting online, Sumner and his colleagues created four experiments.
In the first, they looked at what divides people. High up on the list was the statement: “with regards to internet privacy: if you’ve done nothing wrong, you have nothing to fear.” During the Brexit referendum they surveyed more than 5,000 people and found that Leave voters were significantly more likely to agree with the statement, and Remain voters more likely to disagree.
Next, by administering various personality tests to a different group they found traits that correlate with how likely you are to agree with that statement on internet privacy. This was converted into an “authoritarianism” score: if you scored high you were more likely to agree with the statement. Then, using a tool called PreferenceTool, built by researchers at the University of Cambridge, they were able to reverse engineer what sort of Facebook interests and demographics people with those personalities were most likely to have.
Just 38 per cent of a random selection of people on Facebook agreed with the privacy statement, but this shot up to 61 per cent when the tool was used to target people deemed more likely to agree, and down to 25 per cent for those they deemed more likely to disagree. In other words, they were able to demonstrate that it is possible to target people on Facebook based on a political opinion.
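The validation step described above amounts to selecting audiences by a modeled trait score and comparing agreement rates between the targeted and random groups. A minimal sketch of that logic, with all scores, thresholds, and probabilities invented for illustration (none come from the study):

```python
# Hypothetical sketch of the study's validation step: select ad audiences by
# a modeled "authoritarianism" score and compare expected agreement rates.
# All numbers below are invented for illustration.
import random

random.seed(0)

def make_user():
    score = random.random()        # modeled authoritarianism score, 0..1
    p_agree = 0.15 + 0.6 * score   # assumed link between score and agreement
    return score, p_agree

users = [make_user() for _ in range(10_000)]

def agreement_rate(group):
    """Expected fraction of a group agreeing with the privacy statement."""
    return sum(p for _, p in group) / len(group)

likely_agree = [u for u in users if u[0] >= 0.7]     # targeted "agree" audience
likely_disagree = [u for u in users if u[0] <= 0.3]  # targeted "disagree" audience

print(f"random audience:      {agreement_rate(users):.0%}")
print(f"targeted to agree:    {agreement_rate(likely_agree):.0%}")
print(f"targeted to disagree: {agreement_rate(likely_disagree):.0%}")
```

The point of the sketch is only the ordering: a targeted audience agrees more often than a random one, and an inversely targeted audience agrees less often, mirroring the 61 / 38 / 25 per cent pattern the researchers observed.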
Finally, the team created four different Facebook ad campaigns tailored to the personalities they had identified, using both pro and anti-surveillance messages. For example, the anti-surveillance ad aimed at people with high levels of authoritarianism read: “They fought for your freedom. Don’t give it away! Say no to mass surveillance,” with a backdrop of the D-day landings. In contrast, the version for people with low levels of authoritarianism said: “Do you really have nothing to fear if you have nothing to hide? Say no to state surveillance,” alongside an image of Anne Frank.
Overall they found that the tailored ads resonated best with the target groups. For example, the pro-surveillance, high-authoritarianism advert had 20 times as many likes and shares from the high-authoritarianism group versus the low one.
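The reported 20-times difference is just a ratio of engagement counts between audiences for the same ad. A toy illustration, with all counts invented rather than taken from the study:

```python
# Invented engagement counts (likes + shares) for two ad variants by audience,
# illustrating how a lift like the reported 20x is computed.
counts = {
    ("pro_surveillance_ad", "high_auth_group"): 400,
    ("pro_surveillance_ad", "low_auth_group"): 20,
    ("anti_surveillance_ad", "low_auth_group"): 350,
    ("anti_surveillance_ad", "high_auth_group"): 35,
}

def lift(ad, target_group, other_group):
    """Ratio of engagement in the targeted group versus the other group."""
    return counts[(ad, target_group)] / counts[(ad, other_group)]

print(lift("pro_surveillance_ad", "high_auth_group", "low_auth_group"))  # → 20.0
```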
Though the picture is becoming clearer, we should be careful not to equate a short-term decision to share or like a post, with long-term political views, says Andreas Jungherr at the University of Konstanz, Germany. “Social media is impacting political opinions. But the hype makes it hard to tell exactly how much,” he says.
However, maybe changing political opinions doesn’t have to be the end game. Perhaps the goal is simply to encourage people to vote, or to dissuade them from doing so. “We know it’s really easy to convince people not to go to the polls,” says Bryson. “Prime at the right time and you can have a big effect. It’s not necessarily about changing opinions.”
Facebook allows targeted advertising so long as a company’s use of “external data” adheres to the law.
Following months of European scrutiny over the impact of major tech firms, Germany has passed a controversial law that could hold Facebook and Twitter highly accountable for the content they host.
Lawmakers in Germany passed a hotly debated law enabling the country to issue heavy fines to Facebook, Twitter, and other social media platforms which leave up content that violates its laws governing hate speech. Known as the “Facebook law” among Germans, the approved Network Enforcement Act provides for fines of up to $57 million (€50 million) to companies which fail to take down “obviously illegal” content within 24 hours, and will go into effect in October.
As The Verge reported, Germany’s definition of such content includes hate speech, incitements to violence, and defamation, all of which have found their way onto Facebook in Germany, and virtually everywhere else. Under the new law, social media companies could face an initial fine of €5 million for continuing to host content considered illegal (not necessarily on the first offense), and see those fines rise as high as €50 million depending on subsequent steps and previous infractions.
Social media companies will also be required to publish semiannual reports on how many related complaints they’ve received about their content, and what was done about them. The Guardian noted that the new law also allows German authorities to issue fines of up to €5m to each company’s designated point-person for the issue if the company’s complaints procedure isn’t up to regulation.
– Photo: Syrian refugee Anas Modamani (C) is suing Facebook over selfie photos of himself with German Chancellor Angela Merkel that he says were misused by Facebook users accusing him of being a terrorist or guilty of other crimes and which Facebook refused to remove. (Credit: Thomas Lohnes/Getty Images)
Digital rights and free speech activists have criticized the law for its restrictiveness, and argued that it places too large a burden on social media companies to tackle the issue. German Justice Minister Heiko Maas argued today that the power to impose serious consequences on companies was necessary to combat hate speech and radicalized content online. He commented in an address, “Experience has shown that, without political pressure, the large platform operators will not fulfill their obligations, and this law is therefore imperative … freedom of expression ends where criminal law begins.”
In an emailed statement, a Facebook representative told the Verge, “We believe the best solutions will be found when government, civil society and industry work together and that this law as it stands now will not improve efforts to tackle this important societal problem … We feel that the lack of scrutiny and consultation do not do justice to the importance of the subject. We will continue to do everything we can to ensure safety for the people on our platform.”
As The Guardian reported, the law has seen a few softening changes since Maas and other lawmakers began promoting the legislation. Companies will now have a week to consider flagged posts which aren’t as clearly illegal or protected, and can enlist outside vetters of content or even create shared vetting facilities. Users will also be able to appeal the decision if their content is removed.
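The law’s tiered deadlines reduce to a simple triage rule: 24 hours for “obviously illegal” content, up to a week for less clear-cut cases. A sketch of that rule as described above (the boolean classification is illustrative, not the legal test itself):

```python
# Sketch of the NetzDG review deadlines as described in the reporting:
# "obviously illegal" content must be acted on within 24 hours, while less
# clear-cut flagged posts get up to a week for review.
from datetime import datetime, timedelta

def removal_deadline(flagged_at: datetime, obviously_illegal: bool) -> datetime:
    """Latest time by which the platform must act on a complaint."""
    window = timedelta(hours=24) if obviously_illegal else timedelta(days=7)
    return flagged_at + window

flagged = datetime(2017, 10, 1, 12, 0)
print(removal_deadline(flagged, True))   # → 2017-10-02 12:00:00
print(removal_deadline(flagged, False))  # → 2017-10-08 12:00:00
```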
Germany’s leading Jewish organization, the Central Council of Jews, told the Guardian that the law provides a “strong instrument against hate speech in social networks,” where Jews are being “exposed to antisemitic hatred [on] a daily basis.” Meanwhile, human rights experts have warned against potentially privatizing the censorship process and limiting free speech, and Germany’s leading nationalist party has announced it may challenge the law all the way to the top.
The establishment media is dying. This is not a biased view coming from “alternative media,” it is a fact borne out by metrics and opinion polls from within the establishment itself. It was true before the recent election, and is guaranteed to accelerate after their shameless defense of non-reality which refused to accept any discontent among the American population with standard politics.
Now, with egg on their face after the botched election coverage, and a wobbling uncertainty about how they can maintain multiple threads of a narrative so fundamentally disproven, they appear to be resorting to their nuclear option: a full shut down of dissent.
Voices within independent media have been chronicling the signposts toward full-on censorship as sites have encountered everything from excessive copyright infringement accusations, to de-monetization, to the open admission by advertising giants that certain images would not be tolerated.
However, until now these efforts have appeared random, haphazard, and rife with retractions and restorations of targeted sites and content. A massive backlash of reader outrage toward these restrictive measures has confirmed that most consumers don’t like the idea of being given boundaries to their intellectual freedom.
That said, there has been a notable increase of hoax websites beginning to populate the information stream. We can attest that this has been an incredible annoyance as we are bombarded daily with new outrageous claims and rabbit holes that readers expect us to sift through.
Most times, a cursory glance at the “About” page or any disclaimers quickly shows where this information is coming from. Other times, it takes a bit of common sense and discernment to ask why a site that has just appeared on the scene (check Alexa for this info) would have “EXCLUSIVE” or “BREAKING” content under the banner of an apparent local news channel or a name that is a twisted version of a legitimate news outlet.
But even with those caveats, we’ve all been taken in at one time or another and have had to retract or update articles as necessary, or apologize to our e-mail list for sending out a given link. This does jam up the works, but it is the tax we all must pay if we believe in the free-market of ideas and information. We’re not perfect, but at least we have never been deliberately misleading like CNN and others often have been.
The government recently legalized using propaganda against US citizens. They wielded all of their establishment media force to sell their lies. And now they’re frustrated that people still prefer the truth as they see it naturally.
The voices of the corporate media are making a show of calling Facebook to task for evidently not having stringent enough algorithms to discern “legitimate news” from deliberate hoax. We are being told that this very likely led to the election of Trump, and that this has become a major problem in need of a major solution.
The first shots are being fired as we speak. Yesterday we learned that Facebook and Google would take swift action against “fake news” by de-monetizing or banning them outright.
“Moving forward, we will restrict ad serving on pages that misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property,” a Google spokesperson said in a statement given to Reuters. This policy includes fake news sites, the spokesperson confirmed. Google already prevents its AdSense program from being used by sites that promote violent videos and imagery, pornography, and hate speech.
This is problematic on a number of levels, not least of which is the vague notion of what constitutes violent imagery and hate speech. War, of course, is what should first come to mind when thinking of violence.
Police shootings and other clashes might qualify as well, but routinely populate the most mainstream of sources. And one person’s hate speech is another person’s dissent.
The second component is that of transparency, where we see claims about any effort to “conceal information about the publisher.” Again, very vague, but as any journalist worth their salt knows, it is anonymity which leads to the truth more often than not, especially when threats against journalists and whistleblowers are demonstrably on the rise.
Today, the mainstream media named us as one of the top “fake news” sites to avoid. It’s quite an honor.
US News (linked above) has published a list of websites that it deems unworthy of support, and is essentially urging that they be de-monetized or banned based on the previous calls to action.
Here are several fake news sites that have become popular on Facebook, and which should be avoided if you’re looking for the facts:
Firstly, the grouping of satire, hoax, and propaganda is troubling, as the definitions of each aren’t even remotely related to one another.
Satire is literature and has a tradition dating back thousands of years; it has been recognized as an essential component of intellectual and political freedom. A deliberate hoax, we can all agree, is lacking integrity, purposely deceptive, and can be legitimately harmful or dangerous. Propaganda, though, is aligned with the State; and most commonly is directed and funded by the State. That is a serious accusation and one that is entirely without merit for this website. It is also an especially ironic and dubious accusation coming from an outlet called US News.
Yet we’re proud to be biased for peace, love, and liberty. Anyone against those principles is serving fake news as far as we’re concerned.
All of this is to say that we are entering dangerous new territory, as the Internet itself is under a new regime with the transfer to ICANN, an international body. If 2/3 of the globe is under digital dictatorship, what else is the likely outcome from such international control over information?
However, it is also an exhilarating time to be a part of such mammoth upheaval, in which the entrenched apparatus of the State itself has declared information to be its enemy and acknowledged that it must do everything in its power to maintain its tenuous monopoly on the truth.
The unfortunate reality for them is that the truth will always be more efficient and, therefore, simpler to disseminate than the complexities of lies and true propaganda.
How a strange new class of media outlet has arisen to take over our news feeds.
Open your Facebook feed. What do you see? A photo of a close friend’s child. An automatically generated slide show commemorating six years of friendship between two acquaintances. An eerily on-target ad for something you’ve been meaning to buy. A funny video. A sad video. A recently live video. Lots of video; more video than you remember from before. A somewhat less-on-target ad. Someone you saw yesterday feeling blessed. Someone you haven’t seen in 10 years feeling worried.
And then: A family member who loves politics asking, “Is this really who we want to be president?” A co-worker, whom you’ve never heard talk about politics, asking the same about a different candidate. A story about Donald Trump that “just can’t be true” in a figurative sense. A story about Donald Trump that “just can’t be true” in a literal sense. A video of Bernie Sanders speaking, overlaid with text, shared from a source you’ve never seen before, viewed 15 million times. An article questioning Hillary Clinton’s honesty; a headline questioning Donald Trump’s sanity. A few shares that go a bit too far: headlines you would never pass along yourself but that you might tap, read and probably not forget.
Maybe you’ve noticed your feed becoming bluer; maybe you’ve felt it becoming redder. Either way, in the last year, it has almost certainly become more intense. You’ve seen a lot of media sources you don’t recognize and a lot of posts bearing no memorable brand at all. You’ve seen politicians and celebrities and corporations weigh in directly; you’ve probably seen posts from the candidates themselves. You’ve seen people you’re close to and people you’re not, with increasing levels of urgency, declare it is now time to speak up, to take a stand, to set aside allegiances or hangups or political correctness or hate.
Facebook, in the years leading up to this election, hasn’t just become nearly ubiquitous among American internet users; it has centralized online news consumption in an unprecedented way. According to the company, its site is used by more than 200 million people in the United States each month, out of a total population of 320 million. A 2016 Pew study found that 44 percent of Americans read or watch news on Facebook. These are approximate exterior dimensions and can tell us only so much. But we can know, based on these facts alone, that Facebook is hosting a huge portion of the political conversation in America.
During the 2012 presidential election, Facebook secretly tampered with 1.9 million users’ news feeds. The company also tampered with news feeds in 2010 during a 61-million-person experiment to see how Facebook could impact the real-world voting behavior of millions of people. An academic paper was published about the secret experiment, claiming that Facebook increased voter turnout by more than 340,000 people. In 2012, Facebook also deliberately experimented on its users’ emotions. The company, again, secretly tampered with the news feeds of 700,000 people and concluded that Facebook can basically make you feel whatever it wants you to.
The Facebook product, to users in 2016, is familiar yet subtly expansive. Its algorithms have their pick of text, photos and video produced and posted by established media organizations large and small, local and national, openly partisan or nominally unbiased. But there’s also a new and distinctive sort of operation that has become hard to miss: political news and advocacy pages made specifically for Facebook, uniquely positioned and cleverly engineered to reach audiences exclusively in the context of the news feed.
These are news sources that essentially do not exist outside of Facebook, and you’ve probably never heard of them. They have names like Occupy Democrats; The Angry Patriot; US Chronicle; Addicting Info; RightAlerts; Being Liberal; Opposing Views; Fed-Up Americans; American News; and hundreds more. Some of these pages have millions of followers; many have hundreds of thousands.
Using a tool called CrowdTangle, which tracks engagement for Facebook pages across the network, you can see which pages are most shared, liked and commented on, and which pages dominate the conversation around election topics. Using this data, I was able to speak to a wide array of the activists and entrepreneurs, advocates and opportunists, reporters and hobbyists who together make up 2016’s most disruptive, and least understood, force in media.
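The kind of aggregation a tool like CrowdTangle surfaces reduces to summing interactions per page and sorting. A minimal sketch; the page names, counts, and field names are invented, not CrowdTangle’s actual data or API schema:

```python
# Minimal sketch of ranking pages by total engagement. All data is invented
# for illustration.
pages = [
    {"name": "Page A", "shares": 120_000, "likes": 450_000, "comments": 60_000},
    {"name": "Page B", "shares": 300_000, "likes": 900_000, "comments": 150_000},
    {"name": "Page C", "shares": 80_000, "likes": 200_000, "comments": 30_000},
]

def total_engagement(page):
    """Sum the interaction counts that make up a page's engagement."""
    return page["shares"] + page["likes"] + page["comments"]

ranked = sorted(pages, key=total_engagement, reverse=True)
print([p["name"] for p in ranked])  # → ['Page B', 'Page A', 'Page C']
```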
Individually, these pages have meaningful audiences, but cumulatively, their audience is gigantic: tens of millions of people. On Facebook, they rival the reach of their better-funded counterparts in the political media, whether corporate giants like CNN or The New York Times, or openly ideological web operations like Breitbart or Mic. And unlike traditional media organizations, which have spent years trying to figure out how to lure readers out of the Facebook ecosystem and onto their sites, these new publishers are happy to live inside the world that Facebook has created.
Their pages are accommodated but not actively courted by the company and are not a major part of its public messaging about media. But they are, perhaps, the purest expression of Facebook’s design and of the incentives coded into its algorithm — a system that has already reshaped the web and has now inherited, for better or for worse, a great deal of America’s political discourse.
In 2006, when Mark Zuckerberg dropped out of college to run his rapidly expanding start-up, Mark Provost was a student at Rogers State University in Claremore, Okla., and going through a rough patch. He had transferred restlessly between schools, and he was taking his time to graduate; a stock-picking hobby that grew into a promising source of income had fallen apart. His outlook was further darkened by the financial crisis and by the years of personal unemployment that followed. When the Occupy movement began, he quickly got on board. It was only then, when Facebook was closing in on its billionth user, that he joined the network.
Now 36, Provost helps run U.S. Uncut, a left-leaning Facebook page and website with more than 1.5 million followers, about as many as MSNBC has, from his apartment in Philadelphia. (Sample headlines: “Bernie Delegates Want You to See This DNC Scheme to Silence Them” and “This Sanders Delegate Unleashing on Hillary Clinton Is Going Absolutely Viral.”) He frequently contributes to another popular page, The Other 98%, which has more than 2.7 million followers.
Occupy got him on Facebook, but it was the 2012 election that showed him its potential. As he saw it, that election was defined by social media. He mentioned a set of political memes that now feel generationally distant: Clint Eastwood’s empty chair at the 2012 Republican National Convention and Mitt Romney’s debate gaffe about “binders full of women.” He thought it was a bit silly, but he saw in these viral moments a language in which activists like him could spread their message.
Provost’s page now communicates frequently in memes, images with overlaid text. “May I suggest,” began one, posted in May 2015, when opposition to the Trans-Pacific Partnership was gaining traction, “the first 535 jobs we ship overseas?” Behind the text was a photo of Congress. Many are more earnest. In an image posted shortly thereafter, a photo of Bernie Sanders was overlaid with a quote: “If Germany, Denmark, Sweden and many more provide tuition-free college,” read the setup, before declaring in larger text, “we should be doing the same.” It has been shared more than 84,000 times and liked 75,000 more. Not infrequently, this level of zeal can cross into wishful thinking. A post headlined “Did Hillary Clinton Just Admit on LIVE TV That Her Iraq War Vote Was a Bribe?” was shared widely enough to “merit” (as if) a response from Snopes, which called it “quite a stretch.”
This year, political content has become more popular all across the platform: on homegrown Facebook pages, through media companies with a growing Facebook presence and through the sharing habits of users in general. But truly Facebook-native political pages have begun to create and refine a new approach to political news: cherry-picking and reconstituting the most effective tactics and tropes from activism, advocacy and journalism into a potent new mixture.
This strange new class of media organization slots seamlessly into the news feed and is especially notable in what it asks, or doesn’t ask, of its readers. The point is not to get them to click on more stories or to engage further with a brand. The point is to get them to share the post that’s right in front of them. Everything else is secondary.
While web publishers have struggled to figure out how to take advantage of Facebook’s audience, these pages have thrived. Unburdened of any allegiance to old forms of news media and the practice, or performance, of any sort of ideological balance, native Facebook page publishers have a freedom that more traditional publishers don’t: to engage with Facebook purely on its terms. These are professional Facebook users straining to build media companies, in other words, not the other way around.
From a user’s point of view, every share, like or comment is both an act of speech and an accretive piece of a public identity. Maybe some people want to be identified among their networks as news junkies, news curators or as some sort of objective and well-informed reader. Many more people simply want to share specific beliefs, to tell people what they think or, just as important, what they don’t. A newspaper-style story or a dry, matter-of-fact headline is adequate for this purpose. But even better is a headline, or meme, that skips straight to an ideological conclusion or rebuts an argument.
Rafael Rivero is an acquaintance of Provost’s who, with his twin brother, Omar, runs the Occupy Democrats Facebook page, which passed three million followers in June. Rivero, like nearly every left-leaning page operator I spoke with, attributes this accelerating growth not just to interest in the election but especially to one campaign in particular: “Bernie Sanders is the Facebook candidate,” Rivero says. The rise of Occupy Democrats essentially mirrored the rise of Sanders’s primary run.
On his page, Rivero started quoting text from Sanders’s frequent email blasts, turning them into Facebook-ready media and memes with a consistent aesthetic: colors that pop, yellow on black. Rivero says that it’s clear what his audience wants. “I’ve probably made 10,000 graphics, and it’s like running 10,000 focus groups,” he said. (Clinton was and is, of course, widely discussed by Facebook users: According to the company, in the last month 40.8 million people “generated interactions” around the candidate. But Rivero says that in the especially engaged, largely oppositional left-wing-page ecosystem, Clinton’s message and cautious brand didn’t carry.)
Because the Sanders campaign has come to an end, these sites have been left in a peculiar position, having lost their unifying figure as well as their largest source of engagement. Audiences grow quickly on Facebook but can disappear even more quickly; in the case of left-leaning pages, many had accumulated followings not just by speaking to Sanders supporters but also by being intensely critical, and often utterly dismissive, of Clinton.
In retrospect, Facebook’s takeover of online media looks rather like a slow-motion coup. Before social media, web publishers could draw an audience one of two ways: through a dedicated readership visiting their home pages or through search engines. By 2009, this had started to change. Facebook had more than 300 million users, primarily accessing the service through desktop browsers, and publishers soon learned that a widely shared link could produce substantial traffic. In 2010, Facebook released widgets that publishers could embed on their sites, reminding readers to share, and these tools were widely deployed. By late 2012, when Facebook passed a billion users, referrals from the social network were sending visitors to publishers’ websites at rates sometimes comparable to Google, the web’s previous de facto distribution hub. Publishers took note of what worked on Facebook and adjusted accordingly.
This was, for most news organizations, a boon. The flood of visitors aligned with two core goals of most media companies: to reach people and to make money. But as Facebook’s growth continued, its influence was intensified by broader trends in internet use, primarily the use of smartphones, on which Facebook became more deeply enmeshed with users’ daily routines. Soon, it became clear that Facebook wasn’t just a source of readership; it was, increasingly, where readers lived.
Facebook, however, is also a communications medium that facilitates conversation, organization and the distribution of information among users. It does so under the illusion that users are in control of the process, but of course it is Facebook pulling the strings. Facebook could certainly manipulate its service to undermine Trump.
“With Facebook, we don’t know what we’re not seeing. We don’t know what the bias is or how that might be affecting how we see the world. Facebook has toyed with skewing news in the past…. If Facebook decided to, it could gradually remove any pro-Trump stories or media off its site—devastating for a campaign that runs on memes and publicity. Facebook wouldn’t have to disclose it was doing this, and would be protected by the First Amendment.”
Facebook, from a publisher’s perspective, had seized the web’s means of distribution by popular demand. A new reality set in, as a social-media network became an intermediary between publishers and their audiences. For media companies, the ability to reach an audience is fundamentally altered, made greater in some ways and in others more challenging. For a dedicated Facebook user, a vast array of sources, spanning multiple media and industries, is now processed through the same interface and sorting mechanism, alongside updates from friends, family, brands and celebrities.
Facebook can promote or block any material that it wants.
From the start, some publishers cautiously regarded Facebook as a resource to be used only to the extent that it supported their existing businesses, wary of giving away more than they might get back. Others embraced it more fully, entering into formal partnerships for revenue sharing and video production, as The New York Times has done. Some new-media start-ups, most notably BuzzFeed, have pursued a comprehensively Facebook-centric production-and-distribution strategy. All have eventually run up against the same reality: A company that can claim nearly every internet-using adult as a user is less a partner than a context — a self-contained marketplace to which you have been granted access but which functions according to rules and incentives that you cannot control.
The news feed is designed, in Facebook’s public messaging, to “show people the stories most relevant to them” and ranks stories “so that what’s most important to each person shows up highest in their news feeds.” It is a framework built around personal connections and sharing, where value is both expressed and conferred through the concept of engagement. Of course, engagement, in one form or another, is what media businesses have always sought, and provocation has always sold news. But now the incentives are literalized in buttons and written into software.
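As a toy illustration of engagement-driven ranking (not Facebook’s actual algorithm; the posts, weights, and numbers here are all invented), a feed sorter might look like this:

```python
# Toy engagement-weighted feed ranking. Comments and shares are weighted
# above likes on the assumption that they take more effort to produce.
# Every value below is an invented stand-in for illustration only.
posts = [
    {"title": "dry policy explainer", "likes": 40, "comments": 3, "shares": 2},
    {"title": "ideological meme", "likes": 300, "comments": 85, "shares": 120},
    {"title": "family photo", "likes": 90, "comments": 12, "shares": 1},
]

def engagement_score(post, w_like=1.0, w_comment=4.0, w_share=8.0):
    """Combine reaction counts into a single ranking score."""
    return (w_like * post["likes"]
            + w_comment * post["comments"]
            + w_share * post["shares"])

# Highest-engagement stories "show up highest" in the feed.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])
```

Under any weighting like this, the provocative, share-heavy post wins the top slot, which is the incentive the passage above describes.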
Any sufficiently complex system will generate a wide variety of results, some expected, some not; some desired, others less so. On July 31, a Facebook page called Make America Great posted its final story of the day. “No Media Is Telling You About The Muslim Who Attacked Donald Trump, So We Will..,” read the headline, next to a small avatar of a pointing and yelling Trump. The story was accompanied by a photo of Khizr Khan, the father of a slain American soldier. Khan had spoken a few days earlier at the Democratic National Convention, delivering a searing speech admonishing Trump for his comments about Muslims. Khan, pocket Constitution in hand, was juxtaposed with the logo of the Muslim Brotherhood in Egypt. “It is a sad day in America,” the caption read, “where we the people must expose the TRUTH because the media is in the tank for 1 Presidential Candidate!”
Readers who clicked through to the story were led to an external website, called Make America Great Today, where they were presented with a brief write-up blended almost seamlessly into a solid wall of fleshy ads. Khan, the story said — between ads for “(1) Odd Trick to ‘Kill’ Herpes Virus for Good” and “22 Tank Tops That Aren’t Covering Anything” — is an agent of the Muslim Brotherhood and a “promoter of Islamic Shariah law.” His late son, the story suggests, could have been a “Muslim martyr” working as a double agent. A credit link beneath the story led to a similar-looking site called Conservative Post, from which the story’s text was pulled verbatim. Conservative Post had apparently sourced its story from a longer post on a right-wing site called Shoebat.com.
Within 24 hours, the post was shared more than 3,500 times, collecting a further 3,000 reactions — thumbs-up likes, frowning emoji, angry emoji — as well as 850 comments, many lengthy and virtually all impassioned. A modest success. Each day, according to Facebook’s analytics, posts from the Make America Great page are seen by 600,000 to 1.7 million people. In July, articles posted to the page, which has about 450,000 followers, were shared, commented on or liked more than four million times, edging out, for example, the Facebook page of USA Today.
Make America Great, which inhabits the fuzzy margins of the political Facebook page ecosystem, is owned and operated out of St. Louis by 35-year-old online marketer Adam Nicoloff. He started the page in August 2015 and runs it from his home. Previously, Nicoloff provided web services and marketing help for local businesses; before that, he worked in restaurants. Today he has shifted his focus to Facebook pages and websites that he administers himself. Make America Great was his first foray into political pages, and it quickly became the most successful in a portfolio that includes men’s lifestyle and parenting.
Nicoloff’s business model is not dissimilar from the way most publishers use Facebook: build a big following, post links to articles on an outside website covered in ads and then hope the math works out in your favor. For many, it doesn’t: Content is expensive, traffic is unpredictable and website ads are both cheap and alienating to readers. But as with most of these Facebook-native pages, Nicoloff’s content costs comparatively little, and the sheer level of interest in Trump and in the type of inflammatory populist rhetoric he embraces has helped tip Nicoloff’s system of advertising arbitrage into serious profitability. In July, visitors arriving at Nicoloff’s website produced a little more than $30,000 in revenue. His costs, he said, total around $8,000, split between website hosting fees and advertising buys on Facebook itself.
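The arbitrage only works while revenue per visitor exceeds the cost of acquiring that visitor. A back-of-the-envelope sketch using the July figures above (the visitor count is a purely hypothetical assumption; only the revenue and cost totals come from the story):

```python
# Advertising-arbitrage arithmetic for one month.
revenue = 30_000   # website ad revenue for July (from the story)
costs = 8_000      # hosting fees plus Facebook ad buys (from the story)
profit = revenue - costs
print(profit)

# Break-even logic under an assumed traffic level: the model survives only
# while ad revenue per visitor stays above the cost of acquiring a visitor.
visitors = 2_000_000                    # hypothetical monthly visitors
rev_per_visitor = revenue / visitors    # ~$0.015 per visit under assumption
cost_per_visitor = costs / visitors     # ~$0.004 per visit under assumption
assert rev_per_visitor > cost_per_visitor
```

The $22,000 margin this yields is consistent with the "more than $20,000" a month Nicoloff reports taking home.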
Then, of course, there’s the content, which, at a few dozen posts a day, Nicoloff is far too busy to produce himself. “I have two people in the Philippines who post for me,” Nicoloff said, “a husband-and-wife combo.” From 9 a.m. Eastern time to midnight, the contractors scour the internet for viral political stories, many explicitly pro-Trump. If something seems to be going viral elsewhere, it is copied to their site and promoted with an urgent headline. (The Khan story was posted at the end of the shift, near midnight Eastern time, or just before noon in Manila.) The resulting product is raw and frequently jarring, even by the standards of this campaign. “There’s No Way I’ll Send My Kids to Public School to Be Brainwashed by the LGBT Lobby,” read one headline, linking to an essay ripped from Glenn Beck’s The Blaze; “ALERT: UN Backs Secret Obama Takeover Of Police; Here’s What We Know,” read another, copied from a site called The Federalist Papers Project. In the end, Nicoloff takes home what he jokingly described as a “doctor’s salary” — in a good month, more than $20,000.
Terry Littlepage, an internet marketer based in Las Cruces, N.M., has taken this model even further. He runs a collection of about 50 politically themed Facebook pages with names like The American Patriot and My Favorite Gun, which push visitors to a half-dozen external websites, stocked with content aggregated by a team of freelancers. He estimates that he spends about a thousand dollars a day advertising his pages on Facebook; as a result, they have more than 10 million followers. In a good month, Littlepage’s properties bring in $60,000.
Nicoloff and Littlepage say that Trump has been good for business, but each admits to some discomfort. Nicoloff, a conservative, says that there were other candidates he preferred during the Republican primaries but that he had come around to the nominee. Littlepage is also a recent convert. During the primaries, he was a Cruz supporter, and he even tried making some left-wing pages on Facebook but discovered that they just didn’t make him as much money.
In their angry, cascading comment threads, Make America Great’s followers express no such ambivalence. Nearly every page operator I spoke to was astonished by the tone their commenters took, comparing them to things like torch-wielding mobs and sharks in a feeding frenzy. No doubt because of the page’s name, some Trump supporters even mistake Nicoloff’s page for an official organ of the campaign. Nicoloff says that he receives dozens of messages a day from Trump supporters, expecting or hoping to reach the man himself. Many, he says, are simply asking for money.
Many of these political news pages will likely find their cachet begin to evaporate after Nov. 8. But one company, the Liberty Alliance, may have found a way to create something sustainable and even potentially transformational, almost entirely within the ecosystem of Facebook. The Georgia-based firm was founded by Brandon Vallorani, formerly of Answers in Genesis (AiG), the organization that opened a museum in Kentucky promoting a literal biblical creation narrative. Today the Liberty Alliance has around 100 sites in its network, and about 150 Facebook pages, according to Onan Coca, the company’s 36-year-old editor in chief. He estimates their cumulative follower count to be at least 50 million.
A dozen or so of the sites are published in-house, but posts from the company’s small team of writers are free to be shared among the entire network. The deal for a would-be Liberty Alliance member is this: You bring the name and the audience, and the company will build you a prefab site, furnish it with ads, help you fill it with content and keep a cut of the revenue. Coca told me the company brought in $12 million in revenue last year.
(The company declined to share documentation further corroborating his claims about followers and revenue.)
Because the pages are run independently, the editorial product is varied. But it is almost universally tuned to the cadences and styles that seem to work best on partisan Facebook. It also tracks closely to conservative Facebook media’s big narratives, which, in turn, track with the Trump campaign’s messaging: Hillary Clinton is a crook and possibly mentally unfit; ISIS is winning; Black Lives Matter is the real racist movement; Donald Trump alone can save us; the system — all of it — is rigged. Whether the Liberty Alliance succeeds or fails will depend, at least in part, on Facebook’s algorithm. Systemic changes to the ecosystem arrive through algorithmic adjustments, and the company recently adjusted the news feed to “further reduce click-bait headlines.”
For now, the network hums along, mostly beneath the surface. A post from a Liberty Alliance page might find its way in front of a left-leaning user who might disagree with it or find it offensive, and who might choose to engage with the friend who posted it directly. But otherwise, such news exists primarily within the feeds of the already converted, its authorship obscured, its provenance unclear, its veracity questionable. It’s an environment that’s at best indifferent and at worst hostile to traditional media brands; but for this new breed of page operator, it’s mostly upside. In front of largely hidden and utterly sympathetic audiences, incredible narratives can take shape, before emerging, mostly formed, into the national discourse.
– Trump’s following on the major social media networks absolutely blows Clinton’s out of the water.
The article cited a litany of social-media statistics highlighting Trump’s superior engagement numbers, among them Trump’s Facebook following, which is nearly twice as large as Clinton’s. “Don’t listen to the lying media — the only legitimate attack they have left is Trump’s poll numbers,” it said. “Social media proves the GOP nominee has strong foundation and a firm backing.” The story spread across this right-wing Facebook ecosystem, eventually finding its way to Breitbart and finally to Sean Hannity’s “Morning Minute,” where he read through the statistics to his audience.
Before Hannity signed off, he posed a question: “So, does that mean anything?” It’s a version of the question that everyone wants to answer about Facebook and politics, which is whether the site’s churning political warfare is actually changing minds — or, for that matter, beginning to change the political discourse as a whole. How much of what happens on the platform is a reflection of a political mood and widely held beliefs, simply captured in a new medium, and how much of it might be created, or intensified, by the environment it provides? What is Facebook doing to our politics?
Appropriately, the answer to this question can be chosen and shared on Facebook in whichever way you prefer. You might share this story from The New York Times Magazine, wondering aloud to your friends whether our democracy has been fundamentally altered by this publishing-and-advertising platform of unprecedented scale. Or you might just relax and find some memes to share from one of countless pages that will let you air your political id. But for the page operators, the question is irrelevant to the task at hand. Facebook’s primacy is a foregone conclusion, and the question of Facebook’s relationship to political discourse is absurd — they’re one and the same. As Rafael Rivero put it to me, “Facebook is where it’s all happening.”
The mainstream media (MSM) doesn’t just decide what stories to cover; it decides what stories to cover up!
And as much as the ‘Sandernistas’ attempt to disarticulate Sanders’s “progressive” domestic policies from his documented support for empire (even the Obamaite aphorism “Perfect is the enemy of good” is unashamedly deployed), it should be obvious that his campaign is an ideological prop – albeit from a center/left position – of the logic and interests of the capitalist-imperialist settler state.
I agree with pretty much everything he is saying. He is articulating a Marxist critique of American empire and its justificatory narratives (the ‘civilizing mission’ and/or Orientalism). The origins of this country and its western expansion are basically the definition of settler colonialism, and the brutality that the Native American population suffered shouldn’t be dismissed as a necessary consequence of liberalism’s teleological mission. And yeah, Sanders never distanced himself from the status quo of American foreign policy and its interests, which are dictated by capitalist accumulation and demand that American military power ensure the centrality of market capitalism. Actually, it’s my intellectual agreements that make me so outraged, because his presentation of what it means to be a ‘leftist’ doesn’t involve a nuanced critique of power relations and global inequality.
Instead he just wants to emphasize how not enough people are appropriately outraged by the status quo, which informs his definition of white supremacy as well. He calls the vigils for the Charlie Hebdo victims a ‘white supremacist rally’ because those same people aren’t mobilizing and expressing their moral outrage when Iraqis are slaughtered by the French government, for example. And that’s just such an ugly way of arguing, and it isn’t actually trying to articulate or realize a political alternative, or any potentially hegemonic political project to imagine a different future, which is the biggest problem with the contemporary Left, besides maybe its tendency towards jingoism.
I could write an essay expressing my anger and if anyone wants to keep talking about this just send me a PM.
The Federal Bureau of Investigation (FBI) wants to prevent information about its creepy biometric database, which contains fingerprint, face, iris, and voice scans of millions of Americans, from getting out to the public.
Facebook has been accused of violating the privacy of its users by collecting their facial data, according to a class-action lawsuit filed last week.
This data-collection program led to its well-known automatic face-tagging service. But it also helped Facebook create “the largest privately held stash of biometric face-recognition data in the world,” the Courthouse News Service reports.
The lawsuit alleges that this facial-recognition program violates the privacy of its users, citing an Illinois law called the Biometric Information Privacy Act, which requires companies to get written consent from a user before collecting biometric data.
Further, according to the statute, the company must state the purpose and length of its data-collection program.
The lead plaintiff, Carlo Licata, claims that Facebook’s biometric program shows “brazen disregard for its users’ privacy rights.” He added that changing user settings will not change what biometric data the company collects, the Chicago Tribune reports.
Facebook has been under fire over this feature for years. It first began offering the tagging service through a technology from Israeli company Face.com, which Facebook acquired in 2012.
A Senate hearing was held in 2012 to discuss this specific program. At the hearing, Facebook’s Robert Sherman argued that the “tag suggestions” program is “merely a ‘convenience feature’ and that users’ data is secure,” according to the report.
The company’s ‘faceprint’ database works only with its own software, and “alone, the templates are useless bits of data,” Sherman said. He said that users can opt out of the feature and their data will be deleted.
Business Insider reached out to Facebook for comment. A spokesperson wrote, “This lawsuit is without merit and we will defend ourselves vigorously.”
Licata’s intention is to get a court injunction that requires Facebook to put a halt to the program.
In early May the U.S. Department of Justice released a proposal which would exempt the FBI’s Biometrics database from public disclosure. Specifically, the proposal would exempt the Next Generation Identification (NGI) System from provisions of the federal Privacy Act, which “requires federal agencies to share information about the records they collect with the individual subject of those records, allowing them to verify and correct them if needed.”
The proposal is open for public comment until June 6, 2016.
Although the database does contain biometric data on convicted criminals, it also contains information on individuals who were only suspected or temporarily detained under the suspicion of a crime. The system also features data from people fingerprinted for jobs, licenses, military or volunteer service, background checks, security clearances, and other government processes.
Essentially, the FBI is arguing that it should be able to prevent individuals from knowing if their information is in the massive database whenever the release of that information would “compromise” a law enforcement investigation. NextGov first reported on the proposal:
“Letting individuals view their own records, or even the accounting of those records, could compromise criminal investigations or ‘national security efforts,’ potentially revealing a ‘sensitive investigative technique’ or information that could help a subject ‘avoid detection or apprehension,’ the draft posting said.
“Another clause requires agencies to keep the records they collect to assure individuals any determination made about them was made fairly. Arguing for an exemption, the FBI posting claimed it is ‘impossible to know in advance what information is accurate, relevant, timely and complete’ for ‘authorized law enforcement purposes.’”
Although the database may contain information about individuals conducting perfectly legal actions and behaviors, the proposal says the FBI should hold the data because “with time, seemingly irrelevant or untimely information may acquire new significance when new details are brought to light.” The FBI claims the information within the database could possibly help with “establishing patterns of activity and providing criminal lead[s].”
Jeramie Scott, a national security counsel at the Electronic Privacy Information Center, told NextGov that the proposal “would set a worrying precedent in which law enforcement has significant leeway to decide what information to collect without informing the subject.”
Although very little is actually known about the database, the Electronic Frontier Foundation (EFF) and EPIC have been able to uncover that the FBI would like to track every individual as they move from one location to another. In 2013, EPIC obtained a document which showed, “NGI shall return an incorrect candidate a maximum of 20% of the time.”
Once the collection of biometrics becomes standardized, it becomes much easier to locate and track someone across all aspects of their life. EFF believes that perfect tracking is inimical to a free society. A society in which everyone’s actions are tracked is not, in principle, free. It may be a livable society, but would not be our society.
In 2014 the EFF received documents from the FBI related to the NGI system. Based on the records, the EFF estimated that the face recognition component of NGI would include as many as 52 million face images by 2015.
If you are an outspoken activist, or have ever been detained or arrested, your biometrics are more than likely contained in this database. If you have ever applied for a driver’s license or state identification card, you are likely in this database. Millions of innocent people are having their biometrics scooped up and logged into yet another database to which federal agents will have unrestricted access. When combined with cellphone surveillance, aerial surveillance, and every other surveillance tool available to local, state, and federal agencies, a clear picture of a draconian Surveillance State emerges.
Thankfully, there are individuals working to create technology that can combat the pervasive eyes and ears of Big Brother. However, until those counter-surveillance tools are widespread in the mainstream, we must take principled action against the Surveillance State. Invest in technologies that promote privacy and protection. Use encrypted chats, emails, and video calls. Learn about the myriad alternative options available to you. Only by taking action and seeking solutions will we find the path towards a freer world that values the privacy of the individual.
Happiness and other emotions have recently been an important focus of attention in a wide range of disciplines, including psychology, economics, and neuroscience. Some of this work suggests that emotional states can be transferred directly from one individual to another via mimicry and the copying of emotionally-relevant bodily actions like facial expressions. Experiments have demonstrated that people can “catch” emotional states they observe in others over time frames ranging from seconds to months, and the possibility of emotional contagion between strangers, even those in ephemeral contact, has been documented by the effects of “service with a smile” on customer satisfaction and tipping.
Based on information gathered from millions of social media users, emotions can be transferred to, or affect, other users in the same social media sphere. This is according to a study conducted by the University of California, San Diego and Yale University titled “Detecting Emotional Contagion in Massive Social Networks.”
There are significant connections online through which people feel happy, lonely or depressed at the same time. There may be two reasons why emotions are passed from one person to the next on sites like Facebook. The first is contagion, where people who post a status or tweet can directly affect the emotions of others who read their emotional statement. The second is homophily, in which social media users tend to choose and add friends or contacts who share the same emotions.
The study created a mathematical model to show how emotional expressions spread through social media networks. In particular, rainfall was used as an instrument to distinguish contagion from other influences. Since rainfall cannot be influenced by human emotions, it pins down the direction of causation: rain-induced changes in some users’ emotions, as reflected in their status messages, were used to predict and reveal changes in the emotions of their friends.
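The instrumental-variable logic can be sketched on synthetic data (all numbers here are invented; this is a simplified stand-in for the study’s actual model): rain shifts one user’s sentiment, and only the rain-driven part of that shift is used to measure the effect on a friend.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Synthetic setup: rain in user A's city lowers A's posting sentiment;
# friend B "catches" a fraction of A's sentiment (the contagion effect).
rain_a = rng.binomial(1, 0.3, n)               # instrument: did it rain?
sent_a = -0.5 * rain_a + rng.normal(0, 1, n)   # rain lowers A's positivity
true_contagion = 0.2
sent_b = true_contagion * sent_a + rng.normal(0, 1, n)

# Two-stage least squares by hand.
# Stage 1: keep only the rain-driven component of A's sentiment.
X1 = np.column_stack([np.ones(n), rain_a])
b1, *_ = np.linalg.lstsq(X1, sent_a, rcond=None)
sent_a_hat = X1 @ b1

# Stage 2: regress B's sentiment on that rain-driven component.
X2 = np.column_stack([np.ones(n), sent_a_hat])
b2, *_ = np.linalg.lstsq(X2, sent_b, rcond=None)
contagion_estimate = b2[1]   # should recover roughly the true 0.2
```

Because rain cannot itself respond to anyone’s mood, any correlation between A’s rain-driven sentiment and B’s posts is evidence of transmission from A to B rather than shared circumstances.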
The information was gathered from Facebook users over a period of 1,180 days, from January 2009 to March 2012. The study was approved by and carried out under the guidelines of the Institutional Review Board at the University of California, San Diego, which “waived the need for participant consent.” To protect participant confidentiality, researchers did not personally view any names of users or words posted by users, and all analysis of identified data took place in the same “secure location” on servers where “Facebook currently keeps users’ data.”
Status updates were used to determine positive or negative emotions: particular words classified a post as either positive or negative. It is, however, possible for a post to be both positive and negative at the same time, showing mixed emotions, so users are given scores for both. The study was limited to Facebook users who live in the 100 most populous cities in the United States.
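A minimal sketch of this scoring scheme (the word lists below are tiny invented stand-ins for whatever lexicon the study actually used): each post receives both a positive and a negative score, so mixed-emotion posts register on both scales.

```python
# Word-list sentiment scoring: a post is scored on both scales at once,
# so a mixed-emotion post contributes to positive AND negative counts.
# These word sets are illustrative assumptions, not the study's lexicon.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def score_post(text):
    """Count positive and negative lexicon words in a status update."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return {"positive": pos, "negative": neg}

print(score_post("Happy for the win but sad it had to end"))
```

A post like the one above scores 1 on each scale, which is exactly the mixed-emotion case the passage describes.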
The researchers concluded that the emotions of users on social media can directly affect or influence the emotions of others. The average rainy day lowered the total number of positive posts by 1.19 percent while negative posts rose by 1.16 percent. More Facebook users were also found to be happy on weekends and holidays.