Google has taken its Project Soli cause to the Federal Communications Commission (FCC), requesting an authorization to operate its fixed and mobile field disturbance sensors in the 60 GHz band at a different power level than what’s currently allowed.
– Soli is a new sensing technology that uses miniature radar to detect touchless gesture interactions.
When it comes to the internet of things, Google has thrown its hat in the ring like any good tech giant, and it’s looking to advance a sensing technology that uses miniature radar to detect touchless gesture interactions. Billed as “the only interface you’ll need,” Project Soli uses radar for motion tracking of the human hand. As Google explains, “We’re creating a ubiquitous gesture interaction language that will allow people to control devices with a simple, universal set of gestures. We envision a future in which the human hand becomes a universal input device for interacting with technology.”
The FCC allows operation of “mobile radars in short-range devices for interactive motion sensing” within the 60 GHz band, the unlicensed millimeter wave band generally used only by WiGig systems and a small number of industrial and scientific stakeholders—but only at power levels that Google said are too restrictive for optimum use of the sensors.
Field testing of device prototypes within the currently allowed power levels showed that blind spots can occur as close as 5 cm to the sensor location. “Low power levels lead to user dissatisfaction from missed motions, the perception of intermittent operation and ultimately fewer effective interactions,” Google argued.
– The world’s first radar-based key technology making the augmented reality breakthrough a reality
Instead, the internet giant wants to rely on a European Telecommunications Standards Institute (ETSI) standard known as EN 305 550-1 (PDF), which defines conducted power, mean power spectral density (PSD), effective isotropic radiated power (EIRP) and mean EIRP parameters that Google said would allow it to optimize the sensors while avoiding interference with other devices in the band.
It sweetened the pot in its request (PDF) by tying its waiver request to the FCC’s loftier goals, saying it would “encourage the provision of new technologies and services to the public” consistent with Section 7 of the Communications Act of 1934, align with the Commission’s intent to allow radars to “detect hand gestures very close to a device to control the device without touching it,” and advance the Commission’s efforts to harmonize its regulations and keep pace with global standards.
The Soli chip incorporates the entire sensor and antenna array in a compact package that’s smaller than a quarter, and it can be embedded in wearables, phones, computers, cars and IoT devices. The applications will rely on “Virtual Tools,” which are gestures that mimic familiar interactions with physical tools. Imagine an invisible button between your thumb and index finger—you can press it by tapping your fingers together. Other interactions could include a virtual dial that you turn by rubbing thumb against index finger, or a virtual slider that users can grab and pull in the air. Feedback, meanwhile, is generated by the haptic sensation of fingers touching each other.
Both Deutsche Telekom and Telenor are gearing up for trials of fixed wireless in the band, in Hungary and Kuala Lumpur, respectively, and Facebook is working with Intel and RADWIN to deliver a reference design for Terragraph-certified 60 GHz solutions based on the Intel architecture.
Soli: ubiquitous gesture sensing with millimeter wave radar (SIGGRAPH)
Video above: This paper presents Soli, a new, robust, high-resolution, low-power, miniature gesture sensing technology for human-computer interaction based on millimeter-wave radar. We describe a new approach to developing a radar-based sensor optimized for human-computer interaction, building the sensor architecture from the ground up with the inclusion of radar design principles, high temporal resolution gesture tracking, a hardware abstraction layer (HAL), a solid-state radar chip and system architecture, interaction models and gesture vocabularies, and gesture recognition. We demonstrate that Soli can be used for robust gesture recognition and can track gestures with sub-millimeter accuracy, running at over 10,000 frames per second on embedded hardware.
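The pipeline the paper describes — reducing each radar frame to a compact feature representation, then classifying short windows of frames into gestures — can be illustrated with a deliberately simplified sketch. Everything below (the feature choices, the nearest-centroid classifier, the gesture names and prototype values) is hypothetical and far cruder than Soli’s actual machine-learning pipeline; it only shows the frames-to-features-to-label shape of the approach:

```python
def frame_features(frame):
    """Collapse one radar frame (a list of range-bin energies) to two crude features:
    mean energy and the energy-weighted centroid bin (a rough distance proxy)."""
    total = sum(frame)
    mean = total / len(frame)
    centroid = sum(i * v for i, v in enumerate(frame)) / total if total else 0.0
    return (mean, centroid)

def classify_window(frames, prototypes):
    """Average the per-frame features over a window, then pick the gesture
    whose prototype is nearest in feature space (nearest-centroid classifier)."""
    feats = [frame_features(f) for f in frames]
    avg = tuple(sum(vals) / len(feats) for vals in zip(*feats))
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda g: dist(avg, prototypes[g]))

# Two made-up gesture prototypes in (mean energy, centroid bin) space
prototypes = {"tap": (5.0, 2.0), "slide": (1.0, 8.0)}

# A two-frame window with strong energy concentrated near range bin 2
window = [[0.5, 0.5, 8.0, 1.0], [0.4, 0.6, 9.0, 1.0]]
print(classify_window(window, prototypes))  # → tap
```

A real system would use learned features and models rather than hand-picked prototypes, but the high frame rate the paper cites matters for the same reason here: more frames per window mean finer temporal resolution for fast finger motions.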
Watch the video to see what developers have already built
Project Soli has developed a new interaction sensor using radar technology. The sensor can track sub-millimeter motions at high speed and accuracy. It fits onto a chip, can be produced at scale and built into small devices and everyday objects.
If we’re not careful, we will soon be at risk of being locked into mindless behavioral loops, craving distraction even from other distractions.
Take it from the insiders in Silicon Valley:
Former Google and Facebook executives are sounding the alarm about the pervasive power of tech. Will we listen?
One source of angst came close to being 2017’s signature subject: how the internet and the tiny handful of companies that dominate it are affecting both individual minds and the present and future of the planet. The old idea of the online world as a burgeoning utopia looks to have peaked around the time of the Arab spring, and is in retreat.
If you want a sense of how much has changed, picture the president of the US tweeting his latest provocation in the small hours, and consider an array of words and phrases now freighted with meaning: Russia, bots, troll farms, online abuse, fake news, dark money.
Another sign of how much things have shifted is a volte-face by Silicon Valley’s most powerful man. Barely more than a year ago the Facebook founder, Mark Zuckerberg, seemed still to be rejoicing in his company’s imperial phase, blithely dismissing the idea that fabricated news carried by his platform had affected the outcome of the 2016 US election as a “pretty crazy idea.” Now scarcely a week goes by without some Facebook pronouncement or other, either updating the wider world about its latest quest to put its operations beyond criticism or assuring us that its belief in an eternally upbeat, fuzzily liberal ethos is as fervent as ever.
Facebook has reached a fascinating point in its evolution; it is as replete with importance and interest as any political party.
Facebook is at once massively powerful and also suddenly defensive. Its deeply questionable tax affairs are being altered; 1,000 new employees have been hired to monitor its advertising. At the same time, it still seems unable to provide any answers to worries about its effects on the world beyond more and more Facebook. A pre-Christmas statement claimed that although “passive” use of social media could harm users, “actively interacting with people” online was linked not just to “improvements in wellbeing,” but to “joy.” In short, if Facebook does your head in, the solution is apparently not to switch off, but more Facebook.
While Zuckerberg and his colleagues do ethical somersaults, there is rising noise from a group of people who made headlines towards the year’s end: the former insiders at tech giants who now loudly worry about what their innovations are doing to us. The former Facebook president Sean Parker warned in November that its platform “literally changes your relationship with society, with each other … God only knows what it’s doing to our children’s brains.”
At around the same time, the former Facebook executive Chamath Palihapitiya held a public interview at Stanford University in which he did not exactly mince his words. “The short-term, dopamine-driven feedback loops that we have created are destroying how society works,” he said. “No civil discourse, no cooperation, misinformation, mistruth … So we are in a really bad state of affairs right now, in my opinion.” (Strangely, around a week later he seemed to recant, claiming he had only meant to “start an important conversation,” and that Facebook was still a company he “loved.”)
Then there is Tristan Harris, a former high-up at Google who is now hailed as “the closest thing Silicon Valley has to a conscience.” Under the banner of a self-styled “movement” called Time Well Spent, he and his allies are urging software developers to tone down the compulsive elements of their inventions, and the millions who find themselves hooked to change their behavior.
What they are up against, meanwhile, is apparently personified by Nir Eyal, a Stanford lecturer and tech consultant who could be a character from the brilliant HBO sitcom Silicon Valley. In 2013 he published ‘Hooked: How to Build Habit-Forming Products.’ His inspiration for the book was the behaviorist psychology pioneered by B.F. Skinner. Among his pearls of wisdom is one both simple and chilling: “For new behaviors to really take hold, they must occur often.” But on close inspection, even he sounds somewhat ambivalent: last April, at something called the Habit Summit, he told his audience that at home he had installed a device that cut off the internet at a set time every day.
Good for him. The reality for millions of other people is a constant experience that all but buries the online world’s liberating possibilities in a mess of alerts, likes, messages, retweets and internet use so pathologically needy and frantic that it inevitably makes far too many people vulnerable to pernicious nonsense and real dangers.
Thanks to manipulative ephemera, WhatsApp users anxiously await the ticks that confirm whether a message has been read by its recipient; and, in a turbocharged version of the addictive dots that flash on an iPhone when a friend is replying to you, Snapchat now alerts its users when a friend starts typing a message to them. And we all know what lies around the corner: a world of Sensurround virtual reality, and an internet wired into just about every object we interact with. As the repentant Facebookers say: if we’re not careful, we will soon be at risk of being locked into mindless behavioral loops, craving distraction even from other distractions.
There is a possible way out of this, of course. It resides not in some luddite fantasy of an army of people carrying old Nokia phones and writing each other letters, but the possibility of a culture that actually embraces the idea of navigating the internet with a discriminating sensibility and an emphasis on basic moderation. We now know – don’t we? – that the person who begins most social encounters by putting their phone on the table is either an addict or an idiot.
There is also a mounting understanding that one of the single most important aspects of modern parenting is to be all too aware of how much social media can mess with people’s minds, and to limit our children’s screen time. This, after all, is what Bill Gates and Steve Jobs did, as evidenced by one of the latter’s most pithy statements. In 2010 he was asked about his children’s opinion of the iPad. “They haven’t used it,” he said. “We limit how much technology our kids use at home.”
Two billion people actively use Facebook; at least 3.5 billion are now reckoned to be online. Their shared habits, compulsions and susceptibilities will clearly have a huge influence on the world’s progress, or lack of it. So we ought to listen to Tristan Harris and his campaign. “Religions and governments don’t have that much influence over people’s daily thoughts,” he recently told Wired magazine. “But we have three technology companies” – Facebook, Google and Apple – “who have this system that frankly they don’t even have control over … Right now, 2 billion people’s minds are already jacked in to this automated system, and it’s steering people’s thoughts toward either personalized paid advertising or misinformation or conspiracy theories. And it’s all automated; the owners of the system can’t possibly monitor everything that’s going on, and they can’t control it.”
And then came the kicker. “This isn’t some kind of philosophical conversation. This is an urgent concern happening right now.” Amid an ocean of corporate sophistry and double-think, those words have the distinct ring of truth.
Find technologies that help enhance your life rather than distract you from it. Meet people in person rather than scrolling through social news feeds. Turn it all off once in a while and live outside of the technological hole we’ve dug ourselves into.
Video: How often does technology interrupt us from what we really mean to be doing? At work and at play, we spend a startling amount of time distracted by pings and pop-ups — instead of helping us spend our time well, it often feels like our tech is stealing it away from us. Design thinker Tristan Harris offers thoughtful ideas for technology that creates more meaningful interaction. He asks: “What does the future of technology look like when you’re designing for the deepest human values?”
Connecting technology and Buddhist ideas – that’s quite unique!
Software entrepreneur Jason Fried thinks deeply about collaboration, productivity and the nature of work. He’s the co-founder of 37signals, makers of Basecamp and other web-based collaboration tools, and co-author of “Rework.”
In the wake of the ongoing Cambridge Analytica debacle, Facebook has now been sued in federal court in San Francisco and San Jose. These new cases claim violations of federal securities laws, unfair competition, and negligence, among other allegations.
The pair of cases stem from recent revelations that Cambridge Analytica, a British data firm that contracted with the Donald Trump presidential campaign, retained private data from 50 million Facebook users despite claiming to have deleted it. New reporting on Cambridge Analytica has spurred massive public outcry from users and politicians, with CEO Mark Zuckerberg calling it a “breach of trust.”
These two cases, filed on March 20, could be just the first in a coming wave of similar lawsuits.
One suit, filed by Lauren Price, of Maryland, says that she was served political ads during the 2016 presidential campaign and believes that she is part of the 50 million affected users. However, nowhere in her lawsuit does she specify why she thinks this—if she’s not actually on the list, then she would lack standing, and the case would likely be dismissed.
“Facebook lies within the penumbra of blame,” her complaint argues.
She seeks to represent “All persons who registered for Facebook accounts in the United States and whose Personal Information was obtained from Facebook by Cambridge Analytica without authorization or in excess of authorization.”
Her lawyers did not respond to a request for comment.
A second lawsuit is being brought by Fan Yuan, a man who describes himself as a Facebook stockholder who bought stock at an “inflated price” after February 3, 2017. The suit claims that the company made false statements when it did not reveal the breach. As such, when Facebook’s stock price dropped after the news broke late last week, he and many other investors lost money.
Facebook has refused to answer Ars’ questions or to provide many further details beyond public statements by its top executives and lawyers. The company will not say precisely what data was shared or when or how it will formally notify affected users.
“We are committed to vigorously enforcing our policies to protect people’s information,” Paul Grewal, Facebook’s deputy general counsel, said in a statement. “We will take whatever steps are required to see that this happens.”
In a post, Zuckerberg said that the company would impose strict changes going forward.
“We will restrict developers’ data access even further to prevent other kinds of abuse,” he wrote on Wednesday. “For example, we will remove developers’ access to your data if you haven’t used their app in three months. We will reduce the data you give an app when you sign in—to only your name, profile photo, and email address. We’ll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we’ll have more changes to share in the next few days.”
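The rules Zuckerberg outlines are concrete enough to express as code. The sketch below is purely illustrative — the 90-day window used to approximate “three months,” the field names, and the function signatures are assumptions for this example, not Facebook’s actual implementation:

```python
from datetime import date, timedelta

# Hypothetical approximation of "three months" of app inactivity
INACTIVITY_LIMIT = timedelta(days=90)

def developer_keeps_access(last_app_use: date, today: date) -> bool:
    """A developer keeps access to a user's data only while the user
    has used that developer's app within the inactivity window."""
    return (today - last_app_use) <= INACTIVITY_LIMIT

def basic_signin_fields(profile: dict) -> dict:
    """On sign-in, share only name, profile photo, and email address;
    every other profile field is withheld from the app."""
    allowed = ("name", "profile_photo", "email")
    return {k: v for k, v in profile.items() if k in allowed}

today = date(2018, 4, 4)
print(developer_keeps_access(date(2018, 2, 1), today))   # → True (used recently)
print(developer_keeps_access(date(2017, 11, 1), today))  # → False (stale; revoke)
```

The contract requirement in the same announcement is a policy step rather than a technical one, so it has no counterpart in this sketch.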
Facebook has warned that the data of 87 million users, mostly in the U.S., “may have been improperly shared with Cambridge Analytica by apps that they or their friends used,” the company announced. Facebook CTO Mike Schroepfer tells TechCrunch that Facebook will warn these users with a notice atop the News Feed with information about what data of theirs might have been attained, and what they should do now. It will also show its new bulk app permissions removal tool atop the feed.
Schroepfer says that 87 million is the maximum number of users impacted, up from the New York Times’ initial report of 50 million people affected, as Facebook isn’t certain how many people had their data misused. It likely doesn’t want to low-ball and have to revise the number upward later, as it did when it initially reported that the Russian election interference ads were seen by 10 million users and later had to admit to Congress that the real figure was 126 million once organic posts were included. Mark Zuckerberg plans to take questions from reporters about the changes during a 1:00pm Pacific conference call on the subject.
The changes come as part of a slew of announcements in the wake of the Cambridge Analytica scandal, including new restrictions on Facebook API use and the immediate shutdown of the old Instagram API, which was slated for July but started to break developers’ apps this week. Facebook is now undergoing a deep audit of app developers that pulled a lot of data or that look suspicious, and Schroepfer promises Facebook will make further disclosures if it finds any situations similar to the Cambridge Analytica fiasco.
Facebook is trying to fix its broken data privacy after a developer named Dr. Aleksandr Kogan used the platform to administer a personality test app that collected data about participants and their friends. That data was then passed to Cambridge Analytica where it may have been leveraged to optimize political campaigns including that of 2016 presidential candidate Donald Trump and the Brexit vote, allegations which the company itself vehemently denies. Regardless of how the data was employed to political ends, that lax data sharing was enough to ignite a firestorm around Facebook’s privacy practices.
Following the Cambridge Analytica revelations, the company’s stock dropped precipitously, wiping more than $60 billion off its market capitalization from its prior period of stable growth. At the time of writing, Facebook was trading at $153.56.
Facebook’s core leadership was slow to respond to the explosion of negative attention, though Zuckerberg and Sandberg broke that silence with a flurry of media appearances, interviews and print ads. The company also came under the scrutiny of Congress once more and that pressure, which came from subcommittees in both the House and Senate and from both political parties, appears to have paid off. Zuckerberg is expected to testify before the House Energy and Commerce Committee, just one of the several powerful committees calling for him, on April 11.
While it’s certainly unfortunate that it took mishandling user data on a large scale to do so, the incident has become the straw that broke the Facebook camel’s back when it comes to privacy — and that appears to be catalyzing change. Schroepfer said Facebook is now lifting every rock to find any other vulnerabilities that could be used to illicitly access or steal people’s information. Now we’re getting changes that should have been in place years ago that could make Facebook a safer place to network for users concerned about how the company handles their private data.
For more on Facebook’s recent scandals and changes:
Just a few weeks before Facebook CEO Mark Zuckerberg apologized for the “breach of trust” that allowed Cambridge Analytica to access the private social media activity of 50 million people, Facebook plunked down $200,000 to fight a data privacy initiative in California.
The social media giant’s donation matched others from Google, AT&T, Comcast and Verizon—a million-dollar sign that the issue of how companies collect and share personal information is likely to grow into an expensive fight as election season unfolds in California.
The businesses are fighting an initiative proposed by San Francisco real estate developer Alastair Mactaggart, who’s already spent $1.7 million on a measure that would allow Californians to prohibit companies from selling or sharing their personal data. His campaign is gathering signatures with the goal of landing the California Consumer Privacy Act on the November ballot.
“What we are proposing is some very basic rights: Let people find out what information companies are collecting, and let them have the ability to say, ‘Don’t sell my information,’” said Mactaggart, who was inspired to draft the initiative after chatting at a party with a Google engineer who told him that people would freak out if they knew how much companies track, compile and sell their personal information.
Ever used Evite to invite friends to a religious celebration or a child’s birthday party? Your religion or the age of your child may be for sale. Use a fitness or fertility app?
They collect loads of personal information that can be shared—and a study by the Future of Privacy Forum found that some don’t have privacy policies telling users what happens to their data. Use a discount card at the grocery or drug store?
Everything you buy is a piece of data about you. Marketing companies compile these billions of bits of data to build profiles of the kind of consumer or voter they think you are.
The measure would give Californians the ability to opt out of having their personal data sold or shared by requiring businesses to display a button on their websites that says, “Do Not Sell My Personal Information.” Clicking the link would take users to an opt-out form.
Mactaggart and his supporters seized on the recent controversy over Cambridge Analytica accessing the data of millions of Facebook users to benefit clients such as the Trump campaign, publicly calling on the company to stop opposing his ballot measure. There’s little reason to think it will.
Though Zuckerberg said on national television that Facebook has a “basic responsibility to protect people’s data,”his company has worked with other internet giants to beat back numerous efforts to increase consumer privacy. They lobbied against federal legislation last year that would have required tech companies to obtain customers’ permission before selling their data to advertisers. And they lobbied against a bill in the California legislature that would have required internet service providers to get permission from customers before selling or sharing information about their browsing history.
California Consumer Privacy Act
Now they’re fighting Mactaggart’s measure by warning that it would fundamentally disrupt the 21st century economy, not only impacting the business of digital advertising but also hampering many services people have come to rely on. Mapping apps, ride-hailing apps and email subscription services all rely on sharing users’ data. The initiative treats “sharing” data and “selling” data the same, and opponents say such services wouldn’t work if consumers were allowed to opt out of sharing the data. The measure says the opt-out wouldn’t apply when the consumer intentionally discloses personal information (for example, revealing their location when hailing a ride), but opponents maintain the distinction is unworkable.
“Just about every sector of business in the state will oppose this because it’s a direct threat to their vitality,” said Steve Maviglio, spokesman for the tech-funded political committee opposing the privacy initiative, called the Coalition to Protect California Jobs. The committee has backing from the California Chamber of Commerce, TechNet and the Internet Association—a trio of powerful and deep-pocketed interests.
Facebook didn’t respond to a request for comment, but it’s a member of TechNet and the Internet Association, which argue that changing the internet’s rules in one state is impractical because the internet is a global network.
“This measure will stifle innovation and send companies to competing states and countries that do not have such job-crushing regulations,” said a statement from TechNet vice president Andrea Deveau.
The initiative also would allow Californians to sue companies that violate their request not to share personal information—another point of contention for business groups, which almost always oppose policies making it easier for them to be sued.
Consumer groups have been assessing whether the initiative as drafted does what it aims to do. Several now support it, although others including the Electronic Frontier Foundation, which litigates civil liberties issues in technology, have not yet taken a position.
Leading the campaign with Mactaggart is Mary Ross, a former CIA analyst who moved to California two years ago. As a counterintelligence analyst, Ross said she helped monitor foreign governments’ efforts to spy on America. So when Mactaggart asked her to join his campaign, Ross said she had “an insider’s perspective” on the power of big data.
“Information is powerful whether it’s a government using it or a business,” Ross said.
“Information is being used to manipulate people and you don’t even know when you’re being manipulated… Maybe it’s being done to make you buy something or maybe it’s being done to get you to go vote a certain way. But if there is no transparency or accountability, it’s going to continue.”
For a complete list of the firms funding the effort visit: fppc.ca.gov
On Thursday evening, BuzzFeed published a memo from Andrew “Boz” Bosworth, a vice president at Facebook who currently leads its hardware efforts. In the memo, Bosworth says that the company’s core function is to connect people, despite consequences that he repeatedly called “ugly.” “That’s why all the work we do in growth is justified. All the questionable contact importing practices,” he wrote. “All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.”
Bosworth distanced himself from the memo, saying in a Twitter post that he hadn’t agreed with those words even when he wrote them. He was trying to galvanize a discussion around the company’s growth strategy, he said. CEO Mark Zuckerberg told BuzzFeed that he had not agreed with the sentiments in the post at the time, and that growth should not be an end in itself. “We recognize that connecting people isn’t enough by itself. We also need to work to bring people closer together,” Zuckerberg said.
After BuzzFeed published the memo, Bosworth deleted his original post. “While I won’t go quite as far as to call it a straw man, that post was definitely designed to provoke a response,” Bosworth wrote in a memo obtained by The Verge. “It served effectively as a call for people across the company to get involved in the debate about how we conduct ourselves amid the ever changing mores of the online community. The post was of no particular consequence in and of itself, it was the comments that were impressive. A conversation over the course of years that was alive and well even going into this week.
“I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.”
“That conversation is now gone,” Bosworth continued. “And I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.”
Facebook and Bosworth declined to comment.
Nearly 3,000 employees had reacted to Bosworth’s memo when The Verge viewed it, responding with a mixture of likes, “sad,” and “angry” reactions. Many employees rallied to Bosworth’s side, praising him for sharing his feelings about sensitive company matters using blunt language.
Others criticized Bosworth for deleting the post, saying it fueled a narrative that the company had something to hide. “Deleting things usually looks bad in retrospect,” one wrote. “Please don’t feed the fire by giving these individuals more fuel (e.g., ‘Facebook execs deleting internal communications’). If we are no longer open and transparent, and instead lock down and delete, then our culture is also destroyed — but by our own hand.”
Dozens of employees criticized the unknown leakers at the company. “Leakers, please resign instead of sabotaging the company,” one wrote in a comment under Bosworth’s post. Wrote another: “How fucking terrible that some irresponsible jerk decided he or she had some god complex that jeopardizes our inner culture and something that makes Facebook great?”
Several employees suggested Facebook attempt to screen employees for a high degree of “integrity” during the hiring process. “Although we all subconsciously look for signal on integrity in interviews, should we consider whether this needs to be formalized in the interview process?” one wrote.
“This is so disappointing, wonder if there is a way to hire for integrity.”
Wrote another: “This is so disappointing, wonder if there is a way to hire for integrity. We are probably focusing on the intelligence part and getting smart people here who lack a moral compass and loyalty.”
Other employees said it would be difficult to detect leakers before they acted.
“I don’t think we’ve seen a huge internally leaked data breach, but I’ve always thought our ‘open but punitive’ stance was particularly vulnerable to suicide bombers,” one employee wrote. “We would be foolish to think that we could adequately screen against them in a hiring process at our scale. … We have our representative share of sick people, drug addicts, wife beaters, and suicide bombers. Some of this cannot be mitigated by training. To me, this makes it just a matter of time.”
That employee followed up to say: “OMG, I just ran back to my ‘puter from a half-eaten lunch with food in my mouth. APOLOGIES to our brothers in sisters in the Austin Office for my insensitive choice of metaphors/words. I’m sorry.”
“We have our representative share of sick people, drug addicts, wife beaters, and suicide bombers.”
Another theory floated by multiple employees is that Facebook has been targeted by spies or state-level actors hoping to embarrass the company. “Keep in mind that leakers could be intentionally placed bad actors, not just employees making a one-off bad decision,” one wrote. “Thinking adversarially, if I wanted info from Facebook, the easiest path would be to get people hired into low-level employee or contract roles.” Another wrote: “Imagine that some percentage of leakers are spies for governments. A call to morals or problems of performance would be irrelevant in this case, because dissolution is the intent of those actors. If that’s our threat — and maybe it is, given the current political situation? — then is it even possible to build a system that defaults to open, but that is able to resist these bad actors (or do we need to redesign the system?)”
Several employees shared concerns that the leaks had removed some of Facebook’s luster. The company is routinely cited as among the best places to work in America.
“If this leak #$%^ continues, we will become like every other company where people are hesitant to discuss broad-reaching, forward-looking ideas and thoughts, that only the very average ideas and thoughts get discussed and executed,” one employee wrote, “making them average companies.”
Another employee responded: “Will become? Seems like we are there.”
The leaks also became cause for discussion about the company’s internal sharing tools. Facebook runs on its enterprise product, Facebook for Work. One employee wondered whether the critics of leakers had ignored incentives for sharing created by the product itself. It’s a nuanced thought worth sharing in full:
“It’s interesting to note that this discussion is about leaks pushing us to be more cognizant of our sharing decisions. The result is that we are incentivized toward stricter audience management and awareness of how our past internal posts may look when re-surfaced today. We blame a few ill-intentioned employees for this change.
“The non-employee Facebook user base is also experiencing a similar shift: the move toward ephemeral and direct sharing results from realizing that social media posts that were shared broadly and are searchable forever can become a huge liability today.
“A key difference between the outside discussion and the internal discussion is that the outside blames the Facebook product for nudging people to make those broad sharing decisions years ago, whereas internally the focus is entirely on employees.”
Another employee made a similar plea for empathy. “Can we channel our outrage over the mishandling of our information into an empathy for our users’ situation? Can the deletion of a post help us better understand #DeleteFacebook? How we encourage ourselves to remain open while acknowledging a world that doesn’t always respect the audience and intention for that information might just be the key to it all. Maybe we should be dogfooding that?”
For his part, Bosworth promised employees he would continue sharing candid thoughts about Facebook, but said he would likely post less. “When posting comes with the risk that I’ll have to blow up my schedule and defend myself to the national press,” he wrote, “you can imagine it is an inhibitor.”
Here is Bosworth’s full memo to the company today.
I’m feeling a little heartbroken tonight.
I had multiple reporters reach out today with different stories containing leaks of internal information.
In response to one of the leaks I have chosen to delete a post I made a couple of years ago about our mission to connect people and the ways we grow. While I won’t go quite as far as to call it a straw man, that post was definitely designed to provoke a response. It served effectively as a call for people across the company to get involved in the debate about how we conduct ourselves amid the ever changing mores of the online community. The post was of no particular consequence in and of itself, it was the comments that were impressive. A conversation over the course of years that was alive and well even going into this week.
That conversation is now gone. And I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.
This is the very real cost of leaks. We had a sensitive topic that we could engage on openly and explore even bad ideas, even if just to eliminate them. If we have to live in fear that even our bad ideas will be exposed then we won’t explore them or understand them as such, we won’t clearly label them as such, we run a much greater risk of stumbling on them later. Conversations go underground or don’t happen at all. And not only are we worse off for it, so are the people who use our products.
Now the first study detailing the process from start to finish is finally shedding some light. “This is the first time that I’ve seen all the dots connected,” says Joanna Bryson, an artificial intelligence researcher at the University of Bath, UK.
At the heart of the debate is psychographic targeting – the directing of political campaigns at people via social media based on their personality and political interests, with the aid of vast amounts of data filtered by artificial intelligence (AI).
Though Facebook doesn’t “explicitly” provide all the tools to target people based on political opinions, the new study shows how the platform can be exploited. Using combinations of people’s interests, demographics, and survey data it’s possible to direct campaigns at individuals based on their agreement with ideas and policies. This could have a big impact on the success of campaigns.
“The weaponized, artificially intelligent propaganda machine is effective. You don’t need to move people’s political dials by much to influence an election, just a couple of percentage points to the left or right,” says Chris Sumner at the Online Privacy Foundation, who presented the work at DEF CON in Las Vegas.
Checks and balances
No one yet knows how much this can permanently change people’s views. But Sumner’s study clearly reveals a form of political campaigning with no checks and balances.
To get to grips with the complex issue of psychographic targeting online, Sumner and his colleagues created four experiments.
In the first, they looked at what divides people. High up on the list was the statement: “with regards to internet privacy: if you’ve done nothing wrong, you have nothing to fear.” During the Brexit referendum they surveyed more than 5000 people and found that Leave voters were significantly more likely to agree with the statement, and Remain voters more likely to disagree.
Next, by administering various personality tests to a different group they found traits that correlate with how likely you are to agree with that statement on internet privacy. This was converted into an “authoritarianism” score: if you scored high you were more likely to agree with the statement. Then, using a tool called PreferenceTool, built by researchers at the University of Cambridge, they were able to reverse engineer what sort of Facebook interests and demographics people with those personalities were most likely to have.
Just 38 per cent of a random selection of people on Facebook agreed with the privacy statement but this shot up to 61 per cent when the tool was used to target people deemed more likely to agree, and down to 25 per cent for those who they deemed more likely to disagree. In other words, they were able to demonstrate that it is possible to target people on Facebook based on a political opinion.
Finally, the team created four different Facebook ad campaigns tailored to the personalities they had identified, using both pro and anti-surveillance messages. For example, the anti-surveillance ad aimed at people with high levels of authoritarianism read: “They fought for your freedom. Don’t give it away! Say no to mass surveillance,” with a backdrop of the D-day landings. In contrast, the version for people with low levels of authoritarianism said: “Do you really have nothing to fear if you have nothing to hide? Say no to state surveillance,” alongside an image of Anne Frank.
Overall they found that the tailored ads resonated best with the target groups. For example, the pro-surveillance, high-authoritarianism advert had 20 times as many likes and shares from the high-authoritarianism group versus the low one.
Though the picture is becoming clearer, we should be careful not to equate a short-term decision to share or like a post, with long-term political views, says Andreas Jungherr at the University of Konstanz, Germany. “Social media is impacting political opinions. But the hype makes it hard to tell exactly how much,” he says.
However, maybe changing political opinions doesn’t have to be the end game. Perhaps the goal is simply to dissuade people from voting, or to encourage them. “We know it’s really easy to convince people not to go to the polls,” says Bryson. “Prime at the right time and you can have a big effect. It’s not necessarily about changing opinions.”
Facebook allows targeted advertising so long as a company’s use of “external data” adheres to the law.
Following months of European scrutiny over the impact of major tech firms, Germany has passed a controversial law that could hold Facebook and Twitter highly accountable for the content they host.
Lawmakers in Germany passed a hotly debated law enabling the country to issue heavy fines to Facebook, Twitter, and other social media platforms which leave up content that violates its laws governing hate speech. Known as the “Facebook law” among Germans, the approved Network Enforcement Act provides for fines of up to $57 million (€50 million) to companies which fail to take down “obviously illegal” content within 24 hours, and will go into effect in October.
As The Verge reported, Germany’s definition of such content includes hate speech, incitements to violence, and defamation, all of which have found their way onto Facebook in Germany, and virtually everywhere else. Under the new law, social media companies could face an initial fine of €5 million for continuing to host content considered illegal (not necessarily on the first offense), and see those fines rise as high as €50 million depending on subsequent steps and previous infractions.
Social media companies will also be required to publish semiannual reports on how many related complaints they’ve received about their content, and what was done about them. The Guardian noted that the new law also allows German authorities to issue fines of up to €5 million to each company’s designated point-person for the issue if the company’s complaints procedure isn’t up to regulation.
– Photo: Syrian refugee Anas Modamani (C) is suing Facebook over selfie photos of himself with German Chancellor Angela Merkel that he says were misused by Facebook users accusing him of being a terrorist or guilty of other crimes and which Facebook refused to remove. (Credit: Thomas Lohnes/Getty Images)
Digital rights and free speech activists have criticized the law for its restrictiveness, and argued that it places too large a burden on social media companies to tackle the issue. German Justice Minister Heiko Maas argued today the ability to bring big consequences for companies was necessary in combating hate speech and radicalized content online. He commented in an address, “Experience has shown that, without political pressure, the large platform operators will not fulfill their obligations, and this law is therefore imperative … freedom of expression ends where criminal law begins.”
In an emailed statement, a Facebook representative told the Verge, “We believe the best solutions will be found when government, civil society and industry work together and that this law as it stands now will not improve efforts to tackle this important societal problem … We feel that the lack of scrutiny and consultation do not do justice to the importance of the subject. We will continue to do everything we can to ensure safety for the people on our platform.”
As The Guardian reported, the law has seen a few softening changes since Maas and other lawmakers began promoting the legislation. Companies will now have a week to consider flagged posts which aren’t as clearly illegal or protected, and can enlist outside vetters of content or even create shared vetting facilities. Users will also be able to appeal the decision if their content is removed.
Germany’s leading Jewish organization, the Central Council of Jews, told the Guardian that the law provides a “strong instrument against hate speech in social networks,” where Jews are being “exposed to antisemitic hatred [on] a daily basis.” Meanwhile, human rights experts have warned against potentially privatizing the censorship process and limiting free speech, and Germany’s leading nationalist party has announced it may challenge the law all the way to the top.
The establishment media is dying. This is not a biased view coming from “alternative media,” it is a fact borne out by metrics and opinion polls from within the establishment itself. It was true before the recent election, and is guaranteed to accelerate after their shameless defense of non-reality which refused to accept any discontent among the American population with standard politics.
Now, with egg on their face after the botched election coverage, and a wobbling uncertainty about how they can maintain multiple threads of a narrative so fundamentally disproven, they appear to be resorting to their nuclear option: a full shut down of dissent.
Voices within independent media have been chronicling the signposts toward full-on censorship as sites have encountered everything from excessive copyright infringement accusations, to de-monetization, to the open admission by advertising giants that certain images would not be tolerated.
However, until now these efforts have appeared random, haphazard, and rife with retractions and restorations of targeted sites and content. A massive backlash of reader outrage toward these restrictive measures has confirmed that most consumers don’t like the idea of being given boundaries to their intellectual freedom.
That said, there has been a notable increase of hoax websites beginning to populate the information stream. We can attest that this has been an incredible annoyance as we are bombarded daily with new outrageous claims and rabbit holes that readers expect us to sift through.
Most times, a cursory glance at the “About” page or any disclaimers quickly shows where this information is coming from. Other times, it takes a bit of common sense and discernment to ask why a site that has just appeared on the scene (check Alexa – Actionable Analytics for the Web for this info) would have “EXCLUSIVE,” “BREAKING” content under the banner of an apparent local news channel or a name that is a twisted version of a legitimate news outlet.
But even with those caveats, we’ve all been taken in at one time or another and have had to retract or update articles as necessary, or apologize to our e-mail list for sending out a given link. This does jam up the works, but it is the tax we all must pay if we believe in the free market of ideas and information. We’re not perfect, but at least we have never been deliberately misleading like CNN and others often have been.
The government recently legalized using propaganda against US citizens. They wielded all of their establishment media force to sell their lies. And now they’re frustrated that people still prefer the truth as they see it naturally.
The voices of the corporate media are making a show of calling Facebook to task for evidently not having stringent enough algorithms to discern “legitimate news” from deliberate hoax. We are being told that this very likely led to the election of Trump, and that this has become a major problem in need of a major solution.
The first shots are being fired as we speak. Yesterday we learned that Facebook and Google would take swift action against “fake news” sites by de-monetizing them or banning them outright.
“Moving forward, we will restrict ad serving on pages that misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property,” a Google spokesperson said in a statement given to Reuters. This policy includes fake news sites, the spokesperson confirmed. Google already prevents its AdSense program from being used by sites that promote violent videos and imagery, pornography, and hate speech.
This is problematic on a number of levels, not least of which is the vague notion of what constitutes violent imagery and hate speech. War, of course, is what should first come to mind when thinking of violence.
Police shootings and other clashes might qualify as well, but routinely populate the most mainstream of sources. And one person’s hate speech is another person’s dissent.
The second component is that of transparency, where we see claims about any effort to “conceal information about the publisher.” Again, very vague, but as any journalist worth their salt knows, it is anonymity which leads to the truth more often than not, especially when threats against journalists and whistleblowers are demonstrably on the rise.
Today, the mainstream media named us as one of the top “fake news” sites to avoid. It’s quite an honor.
US News (linked above) has published a list of websites that it deems unworthy of support, essentially urging that they be de-monetized or banned based on the previous calls to action.
Here are several fake news sites that have become popular on Facebook, and which should be avoided if you’re looking for the facts:
Firstly, the grouping of satire, hoax, and propaganda is troubling, as the definitions of each aren’t even remotely related to one another.
Satire is literature and has a tradition dating back thousands of years; it has been recognized as an essential component of intellectual and political freedom. A deliberate hoax, we can all agree, is lacking integrity, purposely deceptive, and can be legitimately harmful or dangerous. Propaganda, though, is aligned with the State; and most commonly is directed and funded by the State. That is a serious accusation and one that is entirely without merit for this website. It is also an especially ironic and dubious accusation coming from an outlet called US News.
Yet we’re proud to be biased for peace, love, and liberty. Anyone against those principles is serving fake news as far as we’re concerned.
All of this is to say that we are entering dangerous new territory, as the Internet itself is under a new regime with the transfer to ICANN, an international body. If 2/3 of the globe is under digital dictatorship, what is the likely outcome of such international control over information?
However, it is also an exhilarating time to be a part of such mammoth upheaval, where the entrenched apparatus of the State itself has declared information to be its enemy and acknowledged that it must do everything in its power to maintain its tenuous monopoly on the truth.
The unfortunate reality for them is that the truth will always be more efficient and, therefore, simpler to disseminate than the complexities of lies and true propaganda.
How a strange new class of media outlet has arisen to take over our news feeds.
Open your Facebook feed. What do you see? A photo of a close friend’s child. An automatically generated slide show commemorating six years of friendship between two acquaintances. An eerily on-target ad for something you’ve been meaning to buy. A funny video. A sad video. A recently live video. Lots of video; more video than you remember from before. A somewhat less-on-target ad. Someone you saw yesterday feeling blessed. Someone you haven’t seen in 10 years feeling worried.
And then: A family member who loves politics asking, “Is this really who we want to be president?” A co-worker, whom you’ve never heard talk about politics, asking the same about a different candidate. A story about Donald Trump that “just can’t be true” in a figurative sense. A story about Donald Trump that “just can’t be true” in a literal sense. A video of Bernie Sanders speaking, overlaid with text, shared from a source you’ve never seen before, viewed 15 million times. An article questioning Hillary Clinton’s honesty; a headline questioning Donald Trump’s sanity. A few shares that go a bit too far: headlines you would never pass along yourself but that you might tap, read and probably not forget.
Maybe you’ve noticed your feed becoming bluer; maybe you’ve felt it becoming redder. Either way, in the last year, it has almost certainly become more intense. You’ve seen a lot of media sources you don’t recognize and a lot of posts bearing no memorable brand at all. You’ve seen politicians and celebrities and corporations weigh in directly; you’ve probably seen posts from the candidates themselves. You’ve seen people you’re close to and people you’re not, with increasing levels of urgency, declare it is now time to speak up, to take a stand, to set aside allegiances or hangups or political correctness or hate.
Facebook, in the years leading up to this election, hasn’t just become nearly ubiquitous among American internet users; it has centralized online news consumption in an unprecedented way. According to the company, its site is used by more than 200 million people in the United States each month, out of a total population of 320 million. A 2016 Pew study found that 44 percent of Americans read or watch news on Facebook. These are approximate exterior dimensions and can tell us only so much. But we can know, based on these facts alone, that Facebook is hosting a huge portion of the political conversation in America.
During the 2012 presidential election, Facebook secretly tampered with 1.9 million users’ news feeds. The company also tampered with news feeds in 2010 during a 61-million-person experiment to see how Facebook could impact the real-world voting behavior of millions of people. An academic paper was published about the secret experiment, claiming that Facebook increased voter turnout by more than 340,000 people. In 2012, Facebook also deliberately experimented on its users’ emotions. The company, again, secretly tampered with the news feeds of 700,000 people and concluded that Facebook can basically make you feel whatever it wants you to.
The Facebook product, to users in 2016, is familiar yet subtly expansive. Its algorithms have their pick of text, photos and video produced and posted by established media organizations large and small, local and national, openly partisan or nominally unbiased. But there’s also a new and distinctive sort of operation that has become hard to miss: political news and advocacy pages made specifically for Facebook, uniquely positioned and cleverly engineered to reach audiences exclusively in the context of the news feed.
These are news sources that essentially do not exist outside of Facebook, and you’ve probably never heard of them. They have names like Occupy Democrats; The Angry Patriot; US Chronicle; Addicting Info; RightAlerts; Being Liberal; Opposing Views; Fed-Up Americans; American News; and hundreds more. Some of these pages have millions of followers; many have hundreds of thousands.
Using a tool called CrowdTangle, which tracks engagement for Facebook pages across the network, you can see which pages are most shared, liked and commented on, and which pages dominate the conversation around election topics. Using this data, I was able to speak to a wide array of the activists and entrepreneurs, advocates and opportunists, reporters and hobbyists who together make up 2016’s most disruptive, and least understood, force in media.
Individually, these pages have meaningful audiences, but cumulatively, their audience is gigantic: tens of millions of people. On Facebook, they rival the reach of their better-funded counterparts in the political media, whether corporate giants like CNN or The New York Times, or openly ideological web operations like Breitbart or Mic. And unlike traditional media organizations, which have spent years trying to figure out how to lure readers out of the Facebook ecosystem and onto their sites, these new publishers are happy to live inside the world that Facebook has created.
Their pages are accommodated but not actively courted by the company and are not a major part of its public messaging about media. But they are, perhaps, the purest expression of Facebook’s design and of the incentives coded into its algorithm — a system that has already reshaped the web and has now inherited, for better or for worse, a great deal of America’s political discourse.
In 2006, when Mark Zuckerberg dropped out of college to run his rapidly expanding start-up, Mark Provost was a student at Rogers State University in Claremore, Okla., and going through a rough patch. He had transferred restlessly between schools, and he was taking his time to graduate; a stock-picking hobby that grew into a promising source of income had fallen apart. His outlook was further darkened by the financial crisis and by the years of personal unemployment that followed. When the Occupy movement began, he quickly got on board. It was only then, when Facebook was closing in on its billionth user, that he joined the network.
Now 36, Provost helps run U.S. Uncut, a left-leaning Facebook page and website with more than 1.5 million followers, about as many as MSNBC has, from his apartment in Philadelphia. (Sample headlines: “Bernie Delegates Want You to See This DNC Scheme to Silence Them” and “This Sanders Delegate Unleashing on Hillary Clinton Is Going Absolutely Viral.”) He frequently contributes to another popular page, The Other 98%, which has more than 2.7 million followers.
Occupy got him on Facebook, but it was the 2012 election that showed him its potential. As he saw it, that election was defined by social media. He mentioned a set of political memes that now feel generationally distant: Clint Eastwood’s empty chair at the 2012 Republican National Convention and Mitt Romney’s debate gaffe about “binders full of women.” He thought it was a bit silly, but he saw in these viral moments a language in which activists like him could spread their message.
Provost’s page now communicates frequently in memes, images with overlaid text. “May I suggest,” began one, posted in May 2015, when opposition to the Trans-Pacific Partnership was gaining traction, “the first 535 jobs we ship overseas?” Behind the text was a photo of Congress. Many are more earnest. In an image posted shortly thereafter, a photo of Bernie Sanders was overlaid with a quote: “If Germany, Denmark, Sweden and many more provide tuition-free college,” read the setup, before declaring in larger text, “we should be doing the same.” It has been shared more than 84,000 times and liked 75,000 more. Not infrequently, this level of zeal can cross into wishful thinking. A post headlined “Did Hillary Clinton Just Admit on LIVE TV That Her Iraq War Vote Was a Bribe?” was shared widely enough to “merit” (as if) a response from Snopes, which called it “quite a stretch.”
This year, political content has become more popular all across the platform: on homegrown Facebook pages, through media companies with a growing Facebook presence and through the sharing habits of users in general. But truly Facebook-native political pages have begun to create and refine a new approach to political news: cherry-picking and reconstituting the most effective tactics and tropes from activism, advocacy and journalism into a potent new mixture.
This strange new class of media organization slots seamlessly into the news feed and is especially notable in what it asks, or doesn’t ask, of its readers. The point is not to get them to click on more stories or to engage further with a brand. The point is to get them to share the post that’s right in front of them. Everything else is secondary.
While web publishers have struggled to figure out how to take advantage of Facebook’s audience, these pages have thrived. Unburdened of any allegiance to old forms of news media and the practice, or performance, of any sort of ideological balance, native Facebook page publishers have a freedom that more traditional publishers don’t: to engage with Facebook purely on its terms. These are professional Facebook users straining to build media companies, in other words, not the other way around.
From a user’s point of view, every share, like or comment is both an act of speech and an accretive piece of a public identity. Maybe some people want to be identified among their networks as news junkies, news curators or as some sort of objective and well-informed reader. Many more people simply want to share specific beliefs, to tell people what they think or, just as important, what they don’t. A newspaper-style story or a dry, matter-of-fact headline is adequate for this purpose. But even better is a headline, or meme, that skips straight to an ideological conclusion or rebuts an argument.
Rafael Rivero is an acquaintance of Provost’s who, with his twin brother, Omar, runs the Occupy Democrats Facebook page, which passed three million followers in June. This accelerating growth is attributed by Rivero, and by nearly every left-leaning page operator I spoke with, not just to interest in the election but especially to one campaign in particular: “Bernie Sanders is the Facebook candidate,” Rivero says. The rise of Occupy Democrats essentially mirrored the rise of Sanders’s primary run.
On his page, Rivero started quoting text from Sanders’s frequent email blasts, turning them into Facebook-ready media and memes with a consistent aesthetic: colors that pop, yellow on black. Rivero says that it’s clear what his audience wants. “I’ve probably made 10,000 graphics, and it’s like running 10,000 focus groups,” he said. (Clinton was and is, of course, widely discussed by Facebook users: According to the company, in the last month 40.8 million people “generated interactions” around the candidate. But Rivero says that in the especially engaged, largely oppositional left-wing-page ecosystem, Clinton’s message and cautious brand didn’t carry.)
Because the Sanders campaign has come to an end, these sites have been left in a peculiar position, having lost their unifying figure as well as their largest source of engagement. Audiences grow quickly on Facebook but can disappear even more quickly; in the case of left-leaning pages, many had accumulated followings not just by speaking to Sanders supporters but also by being intensely critical, and often utterly dismissive, of Clinton.
In retrospect, Facebook’s takeover of online media looks rather like a slow-motion coup. Before social media, web publishers could draw an audience one of two ways: through a dedicated readership visiting their home pages or through search engines. By 2009, this had started to change. Facebook had more than 300 million users, primarily accessing the service through desktop browsers, and publishers soon learned that a widely shared link could produce substantial traffic. In 2010, Facebook released widgets that publishers could embed on their sites, reminding readers to share, and these tools were widely deployed. By late 2012, when Facebook passed a billion users, referrals from the social network were sending visitors to publishers’ websites at rates sometimes comparable to Google, the web’s previous de facto distribution hub. Publishers took note of what worked on Facebook and adjusted accordingly.
This was, for most news organizations, a boon. The flood of visitors aligned with two core goals of most media companies: to reach people and to make money. But as Facebook’s growth continued, its influence was intensified by broader trends in internet use, primarily the use of smartphones, on which Facebook became more deeply enmeshed with users’ daily routines. Soon, it became clear that Facebook wasn’t just a source of readership; it was, increasingly, where readers lived.
Facebook, however, is also a communications medium that facilitates conversation, organization and the distribution of information among users. It does so under the illusion that users are in control of the process, but of course it is Facebook pulling the strings. Facebook could definitely manipulate its service to undermine Trump.
“With Facebook, we don’t know what we’re not seeing. We don’t know what the bias is or how that might be affecting how we see the world. Facebook has toyed with skewing news in the past…. If Facebook decided to, it could gradually remove any pro-Trump stories or media off its site—devastating for a campaign that runs on memes and publicity. Facebook wouldn’t have to disclose it was doing this, and would be protected by the First Amendment.”
Facebook, from a publisher’s perspective, had seized the web’s means of distribution by popular demand. A new reality set in, as a social-media network became an intermediary between publishers and their audiences. For media companies, the ability to reach an audience is fundamentally altered, made greater in some ways and in others more challenging. For a dedicated Facebook user, a vast array of sources, spanning multiple media and industries, is now processed through the same interface and sorting mechanism, alongside updates from friends, family, brands and celebrities.
Facebook can promote or block any material that it wants.
From the start, some publishers cautiously regarded Facebook as a resource to be used only to the extent that it supported their existing businesses, wary of giving away more than they might get back. Others embraced it more fully, entering into formal partnerships for revenue sharing and video production, as The New York Times has done. Some new-media start-ups, most notably BuzzFeed, have pursued a comprehensively Facebook-centric production-and-distribution strategy. All have eventually run up against the same reality: A company that can claim nearly every internet-using adult as a user is less a partner than a context — a self-contained marketplace to which you have been granted access but which functions according to rules and incentives that you cannot control.
The news feed is designed, in Facebook’s public messaging, to “show people the stories most relevant to them” and ranks stories “so that what’s most important to each person shows up highest in their news feeds.” It is a framework built around personal connections and sharing, where value is both expressed and conferred through the concept of engagement. Of course, engagement, in one form or another, is what media businesses have always sought, and provocation has always sold news. But now the incentives are literalized in buttons and written into software.
Any sufficiently complex system will generate a wide variety of results, some expected, some not; some desired, others less so. On July 31, a Facebook page called Make America Great posted its final story of the day. “No Media Is Telling You About The Muslim Who Attacked Donald Trump, So We Will…,” read the headline, next to a small avatar of a pointing and yelling Trump. The story was accompanied by a photo of Khizr Khan, the father of a slain American soldier. Khan had spoken a few days earlier at the Democratic National Convention, delivering a searing speech admonishing Trump for his comments about Muslims. Khan, pocket Constitution in hand, was juxtaposed with the logo of the Muslim Brotherhood in Egypt. “It is a sad day in America,” the caption read, “where we the people must expose the TRUTH because the media is in the tank for 1 Presidential Candidate!”
Readers who clicked through to the story were led to an external website, called Make America Great Today, where they were presented with a brief write-up blended almost seamlessly into a solid wall of fleshy ads. Khan, the story said — between ads for “(1) Odd Trick to ‘Kill’ Herpes Virus for Good” and “22 Tank Tops That Aren’t Covering Anything” — is an agent of the Muslim Brotherhood and a “promoter of Islamic Shariah law.” His late son, the story suggests, could have been a “Muslim martyr” working as a double agent. A credit link beneath the story led to a similar-looking site called Conservative Post, from which the story’s text was pulled verbatim. Conservative Post had apparently sourced its story from a longer post on a right-wing site called Shoebat.com.
Within 24 hours, the post was shared more than 3,500 times, collecting a further 3,000 reactions — thumbs-up likes, frowning emoji, angry emoji — as well as 850 comments, many lengthy and virtually all impassioned. A modest success. Each day, according to Facebook’s analytics, posts from the Make America Great page are seen by 600,000 to 1.7 million people. In July, articles posted to the page, which has about 450,000 followers, were shared, commented on or liked more than four million times, edging out, for example, the Facebook page of USA Today.
Make America Great, which inhabits the fuzzy margins of the political Facebook page ecosystem, is owned and operated out of St. Louis by Adam Nicoloff, a 35-year-old online marketer. He started the page in August 2015 and runs it from his home. Previously, Nicoloff provided web services and marketing help for local businesses; before that, he worked in restaurants. Today he has shifted his focus to Facebook pages and websites that he administers himself. Make America Great was his first foray into political pages, and it quickly became the most successful in a portfolio that includes men’s lifestyle and parenting.
Nicoloff’s business model is not dissimilar from the way most publishers use Facebook: build a big following, post links to articles on an outside website covered in ads and then hope the math works out in your favor. For many, it doesn’t: Content is expensive, traffic is unpredictable and website ads are both cheap and alienating to readers. But as with most of these Facebook-native pages, Nicoloff’s content costs comparatively little, and the sheer level of interest in Trump and in the type of inflammatory populist rhetoric he embraces has helped tip Nicoloff’s system of advertising arbitrage into serious profitability. In July, visitors arriving to Nicoloff’s website produced a little more than $30,000 in revenue. His costs, he said, total around $8,000, partly split between website hosting fees and advertising buys on Facebook itself.
Then, of course, there’s the content, which, at a few dozen posts a day, Nicoloff is far too busy to produce himself. “I have two people in the Philippines who post for me,” Nicoloff said, “a husband-and-wife combo.” From 9 a.m. Eastern time to midnight, the contractors scour the internet for viral political stories, many explicitly pro-Trump. If something seems to be going viral elsewhere, it is copied to their site and promoted with an urgent headline. (The Khan story was posted at the end of the shift, near midnight Eastern time, or just before noon in Manila.) The resulting product is raw and frequently jarring, even by the standards of this campaign. “There’s No Way I’ll Send My Kids to Public School to Be Brainwashed by the LGBT Lobby,” read one headline, linking to an essay ripped from Glenn Beck’s The Blaze; “ALERT: UN Backs Secret Obama Takeover Of Police; Here’s What We Know,” read another, copied from a site called The Federalist Papers Project. In the end, Nicoloff takes home what he jokingly described as a “doctor’s salary” — in a good month, more than $20,000.
Terry Littlepage, an internet marketer based in Las Cruces, N.M., has taken this model even further. He runs a collection of about 50 politically themed Facebook pages with names like The American Patriot and My Favorite Gun, which push visitors to a half-dozen external websites, stocked with content aggregated by a team of freelancers. He estimates that he spends about a thousand dollars a day advertising his pages on Facebook; as a result, they have more than 10 million followers. In a good month, Littlepage’s properties bring in $60,000.
Nicoloff and Littlepage say that Trump has been good for business, but each admits to some discomfort. Nicoloff, a conservative, says that there were other candidates he preferred during the Republican primaries but that he had come around to the nominee. Littlepage is also a recent convert. During the primaries, he was a Cruz supporter, and he even tried making some left-wing pages on Facebook but discovered that they just didn’t make him as much money.
In their angry, cascading comment threads, Make America Great‘s followers express no such ambivalence. Nearly every page operator I spoke to was astonished by the tone their commenters took, comparing them to things like torch-wielding mobs and sharks in a feeding frenzy. No doubt because of the page’s name, some Trump supporters even mistake Nicoloff’s page for an official organ of the campaign. Nicoloff says that he receives dozens of messages a day from Trump supporters, expecting or hoping to reach the man himself. Many, he says, are simply asking for money.
Many of these political news pages will likely find their cachet begin to evaporate after Nov. 8. But one company, the Liberty Alliance, may have found a way to create something sustainable and even potentially transformational, almost entirely within the ecosystem of Facebook. The Georgia-based firm was founded by Brandon Vallorani, formerly of Answers in Genesis (AiG), the organization that opened a museum in Kentucky promoting a literal biblical creation narrative. Today the Liberty Alliance has around 100 sites in its network, and about 150 Facebook pages, according to Onan Coca, the company’s 36-year-old editor in chief. He estimates their cumulative follower count to be at least 50 million.
A dozen or so of the sites are published in-house, but posts from the company’s small team of writers are free to be shared among the entire network. The deal for a would-be Liberty Alliance member is this: You bring the name and the audience, and the company will build you a prefab site, furnish it with ads, help you fill it with content and keep a cut of the revenue. Coca told me the company brought in $12 million in revenue last year.
(The company declined to share documentation further corroborating his claims about followers and revenue.)
Because the pages are run independently, the editorial product is varied. But it is almost universally tuned to the cadences and styles that seem to work best on partisan Facebook. It also tracks closely to conservative Facebook media’s big narratives, which, in turn, track with the Trump campaign’s messaging: Hillary Clinton is a crook and possibly mentally unfit; ISIS is winning; Black Lives Matter is the real racist movement; Donald Trump alone can save us; the system — all of it — is rigged. Whether the Liberty Alliance succeeds or fails will depend, at least in part, on Facebook’s algorithm. Systemic changes to the ecosystem arrive through algorithmic adjustments, and the company recently adjusted the news feed to “further reduce click-bait headlines.”
For now, the network hums along, mostly beneath the surface. A post from a Liberty Alliance page might find its way in front of a left-leaning user who might disagree with it or find it offensive, and who might choose to engage with the friend who posted it directly. But otherwise, such news exists primarily within the feeds of the already converted, its authorship obscured, its provenance unclear, its veracity questionable. It’s an environment that’s at best indifferent and at worst hostile to traditional media brands; but for this new breed of page operator, it’s mostly upside. In front of largely hidden and utterly sympathetic audiences, incredible narratives can take shape, before emerging, mostly formed, into the national discourse.
– Trump’s following on the major social media networks absolutely blows Clinton’s out of the water.
The article cited a litany of social-media statistics highlighting Trump’s superior engagement numbers, among them Trump’s Facebook following, which is nearly twice as large as Clinton’s. “Don’t listen to the lying media — the only legitimate attack they have left is Trump’s poll numbers,” it said. “Social media proves the GOP nominee has strong foundation and a firm backing.” The story spread across this right-wing Facebook ecosystem, eventually finding its way to Breitbart and finally to Sean Hannity’s “Morning Minute,” where he read through the statistics to his audience.
Before Hannity signed off, he posed a question: “So, does that mean anything?” It’s a version of the question that everyone wants to answer about Facebook and politics, which is whether the site’s churning political warfare is actually changing minds — or, for that matter, beginning to change the political discourse as a whole. How much of what happens on the platform is a reflection of a political mood and widely held beliefs, simply captured in a new medium, and how much of it might be created, or intensified, by the environment it provides? What is Facebook doing to our politics?
Appropriately, the answer to this question can be chosen and shared on Facebook in whichever way you prefer. You might share this story from The New York Times Magazine, wondering aloud to your friends whether our democracy has been fundamentally altered by this publishing-and-advertising platform of unprecedented scale. Or you might just relax and find some memes to share from one of countless pages that will let you air your political id. But for the page operators, the question is irrelevant to the task at hand. Facebook’s primacy is a foregone conclusion, and the question of Facebook’s relationship to political discourse is absurd — they’re one and the same. As Rafael Rivero put it to me, “Facebook is where it’s all happening.”
The mainstream media (MSM) doesn’t just decide what stories to cover — it decides what stories to cover up!
And as much as the ‘Sandernistas’ attempt to disarticulate Sanders’s “progressive” domestic policies from his documented support for empire (even the Obamaite aphorism “Perfect is the enemy of good” is unashamedly deployed), it should be obvious that his campaign is an ideological prop — albeit from a center-left position — of the logic and interests of the capitalist-imperialist settler state.
I agree with pretty much everything he is saying. He is articulating a Marxist critique of American empire and its justificatory narratives (the ‘civilizing mission’ and/or Orientalism). The origins of this country and its western expansion are basically the definition of settler colonialism, and the brutality that the Native American population suffered shouldn’t be ignored as a necessary consequence of liberalism’s teleological mission. And yeah, Sanders never distanced himself from the status quo of American foreign policy and its interests, which are dictated by capitalist accumulation and demand that American military power insure the centrality of market capitalism. Actually, it’s my intellectual agreements that make me so outraged, because his presentation of what it means to be a ‘leftist’ doesn’t involve a nuanced critique of power relations and global inequality.
Instead he just wants to emphasize how not enough people are appropriately outraged by the status quo, which is informing his definition of white supremacy as well. He calls the vigils for the Charlie Hebdo victims a ‘white supremacist rally’ because those same people aren’t mobilizing and expressing their moral outrage when Iraqis are slaughtered by the French government, for example. And that’s just such an ugly way of arguing, and it isn’t actually trying to articulate or realize a political alternative, or any potentially hegemonic political project to imagine a different future, which is the biggest problem with the contemporary Left, besides maybe its tendency towards jingoism.
I could write an essay expressing my anger and if anyone wants to keep talking about this just send me a PM.