UNITED STATES (VOP TODAY NEWS) – The New York Stock Exchange is in talks to buy the Chicago bourse a month and a half after US regulators refused to sell the Chicago bourse to an alliance of Chinese investors, the Wall Street Journal reported on Friday. The New York Stock Exchange could pay about $70…
“How fucking terrible that some irresponsible jerk decided he or she had some god complex that jeopardizes our inner culture and something that makes Facebook great?”
The publication of a June 2016 memo describing the consequences of Facebook’s growth-at-all-costs approach triggered an emotional conversation at the company today. Internal posts reacting to the memo showed employees angry and heartbroken that their teammates were sharing internal company discussions with the media. Many called on the company to step up its war on leakers and hire employees with more “integrity.”
On Thursday evening, BuzzFeed published a memo from Andrew “Boz” Bosworth, a vice president at Facebook who currently leads its hardware efforts. In the memo, Bosworth says that the company’s core function is to connect people, despite consequences that he repeatedly called “ugly.” “That’s why all the work we do in growth is justified. All the questionable contact importing practices,” he wrote. “All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.”
Bosworth distanced himself from the memo, saying in a Twitter post that he hadn’t agreed with those words even when he wrote them. He was trying to galvanize a discussion around the company’s growth strategy, he said. CEO Mark Zuckerberg told BuzzFeed that he had not agreed with the sentiments in the post at the time, and that growth should not be an end in itself. “We recognize that connecting people isn’t enough by itself. We also need to work to bring people closer together,” Zuckerberg said.
After BuzzFeed published the memo, Bosworth deleted his original post. “While I won’t go quite as far as to call it a straw man, that post was definitely designed to provoke a response,” Bosworth wrote in a memo obtained by The Verge. “It served effectively as a call for people across the company to get involved in the debate about how we conduct ourselves amid the ever changing mores of the online community. The post was of no particular consequence in and of itself, it was the comments that were impressive. A conversation over the course of years that was alive and well even going into this week.”
“I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.”
“That conversation is now gone,” Bosworth continued. “And I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.”
Facebook and Bosworth declined to comment.
Nearly 3,000 employees had reacted to Bosworth’s memo when The Verge viewed it, responding with a mixture of likes, “sad,” and “angry” reactions. Many employees rallied to Bosworth’s side, praising him for sharing his feelings about sensitive company matters using blunt language.
Others criticized Bosworth for deleting the post, saying it fueled a narrative about the company that it had something to hide. “Deleting things usually looks bad in retrospect,” one wrote. “Please don’t feed the fire by giving these individuals more fuel (e.g., Facebook execs deleting internal communications). If we are no longer open and transparent, and instead lock down and delete, then our culture is also destroyed — but by our own hand.”
Dozens of employees criticized the unknown leakers at the company. “Leakers, please resign instead of sabotaging the company,” one wrote in a comment under Bosworth’s post. Wrote another: “How fucking terrible that some irresponsible jerk decided he or she had some god complex that jeopardizes our inner culture and something that makes Facebook great?”
Several employees suggested Facebook attempt to screen employees for a high degree of “integrity” during the hiring process. “Although we all subconsciously look for signal on integrity in interviews, should we consider whether this needs to be formalized in the interview process?” one wrote.
“This is so disappointing, wonder if there is a way to hire for integrity.”
Wrote another: “This is so disappointing, wonder if there is a way to hire for integrity. We are probably focusing on the intelligence part and getting smart people here who lack a moral compass and loyalty.”
Other employees said it would be difficult to detect leakers before they acted.
“I don’t think we’ve seen a huge internally leaked data breach, but I’ve always thought our ‘open but punitive’ stance was particularly vulnerable to suicide bombers,” one employee wrote. “We would be foolish to think that we could adequately screen against them in a hiring process at our scale. … We have our representative share of sick people, drug addicts, wife beaters, and suicide bombers. Some of this cannot be mitigated by training. To me, this makes it just a matter of time.”
That employee followed up to say: “OMG, I just ran back to my ‘puter from a half-eaten lunch with food in my mouth. APOLOGIES to our brothers in sisters in the Austin Office for my insensitive choice of metaphors/words. I’m sorry.”
“We have our representative share of sick people, drug addicts, wife beaters, and suicide bombers.”
Another theory floated by multiple employees is that Facebook has been targeted by spies or state-level actors hoping to embarrass the company. “Keep in mind that leakers could be intentionally placed bad actors, not just employees making a one-off bad decision,” one wrote. “Thinking adversarially, if I wanted info from Facebook, the easiest path would be to get people hired into low-level employee or contract roles.” Another wrote: “Imagine that some percentage of leakers are spies for governments. A call to morals or problems of performance would be irrelevant in this case, because dissolution is the intent of those actors. If that’s our threat — and maybe it is, given the current political situation? — then is it even possible to build a system that defaults to open, but that is able to resist these bad actors (or do we need to redesign the system?)”
Several employees shared concerns that the leaks had removed some of Facebook’s luster. The company is routinely cited as among the best places to work in America.
Another employee responded: “Will become? Seems like we are there.”
The leaks also became cause for discussion about the company’s internal sharing tools. Facebook runs on its enterprise product, Facebook for Work. One employee wondered whether the critics of leakers had ignored incentives for sharing created by the product itself. It’s a nuanced thought worth sharing in full:
“It’s interesting to note that this discussion is about leaks pushing us to be more cognizant of our sharing decisions. The result is that we are incentivized toward stricter audience management and awareness of how our past internal posts may look when re-surfaced today. We blame a few ill-intentioned employees for this change.
“The non-employee Facebook user base is also experiencing a similar shift: the move toward ephemeral and direct sharing results from realizing that social media posts that were shared broadly and are searchable forever can become a huge liability today.
“A key difference between the outside discussion and the internal discussion is that the outside blames the Facebook product for nudging people to make those broad sharing decisions years ago, whereas internally the focus is entirely on employees.”
Another employee made a similar plea for empathy. “Can we channel our outrage over the mishandling of our information into an empathy for our users’ situation? Can the deletion of a post help us better understand #deletefacebook? How we encourage ourselves to remain open while acknowledging a world that doesn’t always respect the audience and intention for that information might just be the key to it all. Maybe we should be dogfooding that?”
For his part, Bosworth promised employees he would continue sharing candid thoughts about Facebook, but said he would likely post less. “When posting comes with the risk that I’ll have to blow up my schedule and defend myself to the national press,” he wrote, “you can imagine it is an inhibitor.”
Here is Bosworth’s full memo to the company today.
I’m feeling a little heartbroken tonight.
I had multiple reporters reach out today with different stories containing leaks of internal information.
In response to one of the leaks I have chosen to delete a post I made a couple of years ago about our mission to connect people and the ways we grow. While I won’t go quite as far as to call it a straw man, that post was definitely designed to provoke a response. It served effectively as a call for people across the company to get involved in the debate about how we conduct ourselves amid the ever changing mores of the online community. The post was of no particular consequence in and of itself, it was the comments that were impressive. A conversation over the course of years that was alive and well even going into this week.
That conversation is now gone. And I won’t be the one to bring it back for fear it will be misunderstood by a broader population that doesn’t have full context on who we are and how we work.
This is the very real cost of leaks. We had a sensitive topic that we could engage on openly and explore even bad ideas, even if just to eliminate them. If we have to live in fear that even our bad ideas will be exposed then we won’t explore them or understand them as such, we won’t clearly label them as such, we run a much greater risk of stumbling on them later. Conversations go underground or don’t happen at all. And not only are we worse off for it, so are the people who use our products.
“The most threatening situation to our constitutional republic since the Civil War!”
A typical FISA warrant authorizes government surveillance on all landlines, mobile devices and desktop computers in a given area. While the process was created to monitor foreign agents, it sweeps up reams of data belonging to Americans.
JEFFERSON CITY, Mo. (Feb. 8, 2018) – A bill introduced in the Missouri House would ban “material support or resources” for warrantless federal surveillance programs. This represents an essential step states need to take at a time when the federal government seems unlikely to ever end unconstitutional spying on its own.
Rep. Rick Brattin (R-Harrisonville) introduced House Bill 2402 (HB2402) on Feb. 7. The legislation would prohibit the state and its political subdivisions from assisting, participating with, or providing “material support or resources” to a federal agency “to enable it to collect, or to facilitate in the collection or use of a person’s electronic data” unless one of three conditions applies:
(a) The person has given informed consent.
(b) The action is pursuant to a warrant that is based upon probable cause and particularly describes the person, place, or thing to be searched or seized.
(c) The action is in accordance with a legally recognized exception to warrant requirements.
HB2402 is similar to a measure working its way through the Michigan legislature. That bill has already passed the state House by a vote of 107-1.
Despite concerns about warrantless surveillance in the wake of Edward Snowden’s revelations, Congress has done nothing to rein in NSA spying. In fact, it has facilitated its expansion. For instance, just last January, Congress reauthorized FISA Section 702.
As Andrew Napolitano explained, “the FISA-created process permits a secret court in Washington to issue general warrants based on the government’s need to gather intelligence about national security from foreigners among us. It pretends that the standard is probable cause of foreign agency, but this has now morphed into the issuance of general warrants whenever the government wants them.” A typical Foreign Intelligence Surveillance Court (FISA) warrant authorizes government surveillance on all landlines, mobile devices and desktop computers in a given area. While the process was created to monitor foreign agents, it sweeps up reams of data belonging to Americans.
Before approving a six-year extension of Section 702, the House voted to kill an amendment that would have overhauled the surveillance program and addressed some privacy concerns. Provisions in the amendment would have required agents to get warrants in most cases before hunting for and reading Americans’ emails and other messages that get swept up under the program.
Just one day after Trump signed the extension into law, news came out about the infamous FISA memo. This memo was available to members of the House Intelligence Committee prior to the vote to reauthorize FISA. None of this information was made available to Congress at large. Most telling, every single Republican member of the House Intelligence Committee voted to reauthorize Sec. 702, and in a heartwarming show of bipartisanship, six of the nine Democratic representatives on the committee joined their colleagues.
This is yet another indication that we can’t count on Congress to limit its own spy programs.
The feds share and tap into vast amounts of information gathered at the state and local level through a program known as the “Information Sharing Environment” (ISE). In other words, these partnerships facilitate federal efforts to track the movements of, and obtain and store information on, millions of Americans. This includes monitoring phone calls, emails, web browsing history and text messages, all with no warrant, no probable cause, and without the people even knowing it.
According to its website, the ISE “provides analysts, operators, and investigators with information needed to enhance national security. These analysts, operators, and investigators… have mission needs to collaborate and share information with each other and with private sector partners and our foreign allies.” In other words, ISE serves as a conduit for the sharing of information gathered without a warrant.
Because the federal government relies heavily on partnerships and information sharing with state and local law enforcement agencies, passage of HB2402 could potentially hinder warrantless surveillance in the state. For instance, if the feds wanted to engage in mass surveillance on specific groups or political organizations in Missouri, they would have to proceed without state or local assistance. That would likely prove problematic.
State and local law enforcement agencies regularly provide surveillance data to the federal government through ISE and Fusion Centers. They collect and store information from cell-site simulators (AKA “stingrays”), Automated License Plate Readers (ALPRs), drones, facial recognition systems, and even “Smart” or “Advanced” power meters in homes.
Passage of HB2402 could set the stage to end this sharing of warrantless information with the federal government. It would also prohibit state and local agencies from actively assisting in warrantless surveillance operations.
By including a prohibition on participation in the illegal collection and use of electronic data and metadata by the state, HB2402 would also prohibit what former NSA Technical Director William Binney called the country’s “greatest threat since the Civil War.”
The bill would ban the state from obtaining or making use of electronic data or metadata obtained by the NSA without a warrant.
Reuters revealed the extent of such NSA data sharing with state and local law enforcement in an August 2013 article. According to documents obtained by the news agency, the NSA passes information to police through a formerly secret DEA unit known as the Special Operations Division, and the cases “rarely involve national security issues.” Almost all of the information involves regular criminal investigations, not terror-related investigations.
In other words, not only does the NSA collect and store this data, using it to build profiles; the agency also encourages state and local law enforcement to violate the Fourth Amendment by making use of this information in their day-to-day investigations.
This is “the most threatening situation to our constitutional republic since the Civil War,” Binney said.
The original definition of “material support or resources” included providing tangible support such as money, goods, and materials and also less concrete support, such as “personnel” and “training.” Section 805 of the PATRIOT Act expanded the definition to include “expert advice or assistance.”
Practically speaking, the legislation would almost certainly deter the NSA from ever setting up a new facility in Missouri.
In 2006, the agency maxed out the Baltimore-area power grid, creating the potential, as the Baltimore Sun reported, for a “virtual shutdown of the agency.” Since then, the NSA has aggressively expanded in states like Utah, Texas, Georgia and elsewhere, generally focusing on locations that can provide cheap and plentiful resources like water and power.
For instance, analysts estimate the NSA data storage facility in Bluffdale, Utah, will use 46 million gallons of water every day to cool its massive computers. The city supplies this water based on a contract it entered into with the spy agency. The state could turn off the water by voiding the contract or refusing to renew it. No water would effectively mean no NSA facility.
What will stop the NSA from expanding in other states? Bills like HB2402. By passing this legislation, Missouri would become much less attractive for the NSA because it would not be able to access state or local water or power supplies. If enough states step up and pass the 4th Amendment Protection Act, we can literally box them in and shut them down.
HB2402 rests on a well-established legal principle known as the anti-commandeering doctrine. Simply put, the federal government cannot force states to help implement or enforce any federal act or program. The anti-commandeering doctrine is based primarily on four Supreme Court cases dating back to 1842. Printz v. US serves as the cornerstone.
“We held in New York that Congress cannot compel the States to enact or enforce a federal regulatory program. Today we hold that Congress cannot circumvent that prohibition by conscripting the States’ officers directly. The Federal Government may neither issue directives requiring the States to address particular problems, nor command the States’ officers, or those of their political subdivisions, to administer or enforce a federal regulatory program. It matters not whether policy making is involved, and no case by case weighing of the burdens or benefits is necessary; such commands are fundamentally incompatible with our constitutional system of dual sovereignty.”
The first step is for the bill to be assigned to a committee. Once it receives a committee assignment, it will need to pass by a majority vote before moving forward in the legislative process.
A Las Vegas Metropolitan Police Department (LVMPD) lieutenant was jailed last Valentine’s Day, caught up in a sweeping indictment involving elder exploitation. Contact 13’s Darcy Spears continues her years-long exposé on guardianship abuse with this heartbreaking case.
He was supposed to serve and protect but instead he’s accused of felony crimes for using Clark County’s guardianship system to steal from the estate of a vulnerable couple. And this police officer is directly connected to others first exposed in our ongoing investigation of guardianship corruption.
Lieutenant James Thomas Melton is a decorated police veteran. As a sergeant, Melton received a group Medal of Valor and Purple Heart in 2009 for being wounded during a domestic violence call where a baby was pulled away from gunfire.
He was also a homicide detective and Metro’s SWAT commander, making about $300,000 a year including benefits.
Court records claim Melton deceived the court after the victim died, representing that she was still alive so he could be named beneficiary on various accounts.
And Melton didn’t act alone. The indictment shows he hired private guardian April Parks. Parks is already in jail facing over 200 felony counts after our investigation revealed she was double-billing and exploiting clients.
According to the indictment, Melton is also accused of stealing the victim’s Ford Explorer and taking $2,187.50 from her Disabled American Veterans Charitable Service Trust.
When allegations first surfaced in July, he was put on leave with pay. Metro says he will now be relieved of duty without pay. The trial for Parks and her co-defendants is scheduled for May.
Below is a description of the individual charges:
- James Thomas Melton: Two counts Exploitation of an Older Person (category B), one count Theft (category B), one count Theft (category C), seven counts Offering False Instrument for Filing or Record (category C), one count Grand Larceny Auto (category C), two counts Perjury (category D)
- April Parks: One count Exploitation of an Older Person (category B), six counts Offering False Instrument for Filing or Record (category C), one count Perjury (category D)
- Mark Simmons: One count Exploitation of an Older Person (category B), two counts Offering False Instrument for Filing or Record (category C)
- Noel Palmer Simpson: One count Exploitation of an Older Person (category B), one count Theft (category C), eight counts Offering False Instrument for Filing or Record (category C), one count Perjury (category D)
In March of last year, April Parks, Mark Simmons and Noel Palmer Simpson were named in a 270-count indictment related to guardianship exploitation.
Parks was indicted on over 200 felony charges for similar conduct, including racketeering, theft, exploitation of an older person, offering false instrument for filing or record, and perjury.
Her office manager, Mark Simmons, was indicted on over 130 felonies, and her attorney Noel Palmer Simpson was implicated in that case and indicted on two charges.
A fourth defendant, Guardian Gary Neal Taylor, was indicted for seven felonies. That case is currently set for trial in May 2018.
An indictment is merely a charging document; the courts stress that every defendant is presumed innocent until and unless proven guilty in a court of law.
The science fiction writer Arthur C. Clarke famously wrote, “Any sufficiently advanced technology is indistinguishable from magic.” Yet humanity may be on the verge of something much greater: a technology so revolutionary that it would be indistinguishable not merely from magic, but from an omnipresent force, a deity here on Earth. It’s known as Artificial Superintelligence (ASI), and, although it may be hard to imagine, many experts believe it could become a reality within our lifetimes.
We’ve all encountered Artificial Intelligence (AI) in the media. We hear about it in science fiction movies like “Avengers: Age of Ultron” and in news articles about companies such as Facebook analyzing our behavior. But artificial intelligence has so far been hiding on the periphery of our lives, nothing as revolutionary to society as portrayed in films.
In recent decades, however, serious technological and computational progress has led many experts to acknowledge a seemingly inevitable conclusion: within a few decades, artificial intelligence will progress from the machine intelligence we currently understand to an unbounded intelligence unlike anything even the smartest among us could grasp. Imagine a mega-brain, electric not organic, with an IQ of 34,597. With perfect memory and unlimited analytical power, this computational beast could read all of the books in the Library of Congress in the first millisecond after you press “enter,” and then integrate all that knowledge into a comprehensive analysis of humanity’s 4,000-year intellectual journey before your next blink.
The history of AI is a similar story of exponential growth in intelligence. In 1936, Alan Turing published his landmark paper on the Turing Machine, laying the theoretical framework for the modern computer. He introduced the idea that a machine composed of simple switches—on’s and off’s, 0’s and 1’s—could think like a human and perhaps outmatch one.1 Only 75 years later, in 2011, IBM’s AI bot “Watson” sent shocks around the world when it beat two human champions on Jeopardy!2 More recently, big data companies such as Google, Facebook and Apple have invested heavily in artificial intelligence and helped fuel a surge in the field. Every time Facebook autonomously tags your friend, or Siri correctly interprets your words even as you yell at her, is a testament to how far artificial intelligence has come. Soon you will sit in the backseat of an Uber without a driver, Siri will listen and speak more eloquently than you do (in every language), and IBM’s Watson, which can already analyze your medical records, will become your personal, all-knowing doctor.3
While these achievements are tremendous, there are many who doubt the impressiveness of artificial intelligence, attributing their so-called “intelligence” to the intelligence of the human programmers behind the curtain. Before responding to such reactions, it is worth noting that the gradual advance of technology desensitizes us to the wonders of artificial intelligence that already permeate our technological lives. But skeptics do have a point. Current AI algorithms are only very good at very specific tasks. Siri might respond intelligently to your requests for directions, but if you ask her to help with your math homework, she’ll say “Starting Facetime with Matt Soffer.” A self-driving car can get you anywhere in the United States but make your destination the Gale Crater on Mars, and it will not understand the joke.
This is part of the reason AI scientists and enthusiasts consider Human Level Machine Intelligence (HLMI)—roughly defined as a machine intelligence that outperforms humans in all intellectual tasks—the holy grail of artificial intelligence. In 2012, a survey was conducted to analyze the wide range of predictions made by artificial intelligence researchers for the onset of HLMI. Researchers who chose to participate were asked by what year they would assign a 10%, 50%, and 90% chance of achieving HLMI (assuming human scientific activity will not encounter a significant negative disruption), or to check “never” if they felt HLMI would never be achieved. The median of the years given for 50% confidence was 2040. The median of the years given for 90% confidence was 2080. Around 20% of researchers were confident that machines would never reach HLMI (these responses were not included in the median values). In other words, roughly half of the researchers who responded were 90% confident that HLMI will be achieved by 2080.4
HLMI is not just another AI milestone to which we would eventually be desensitized. It is unique among AI accomplishments, a crucial tipping point for society, because once we have a machine that outperforms humans in every intellectual task, we can transfer the task of inventing to the computers. The British mathematician I. J. Good said it best: “The first ultraintelligent machine is the last invention that man need ever make ….”5
There are two main routes to HLMI that many researchers view as the most efficient. The first method of achieving a general artificial intelligence across the board relies on complex machine learning algorithms. These machine learning algorithms, often inspired by neural circuitry in the brain, focus on how a program can take inputted data, learn to analyze it, and give a desired output. The premise is that you can teach a program to identify an apple by showing it thousands of pictures of apples in different contexts, in much the same way that a baby learns to identify an apple.6
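The learning-from-labeled-examples premise described above can be caricatured in a few lines. This is a deliberately tiny, hypothetical sketch, not a real vision system: a nearest-centroid “classifier” over invented [redness, roundness] feature pairs, standing in for the thousands of apple photos a real model would see.

```python
# Toy illustration of the "show it thousands of labeled examples" idea:
# a nearest-centroid classifier over made-up [redness, roundness] features.
# All data and feature names here are invented for illustration.

def train(examples):
    """Average the feature vectors of each label into a centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Label a new example by its closest learned centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

training_data = [
    ([0.9, 0.8], "apple"), ([0.8, 0.9], "apple"),    # red and round
    ([0.2, 0.1], "banana"), ([0.1, 0.2], "banana"),  # neither
]
model = train(training_data)
print(predict(model, [0.85, 0.75]))  # a red, round thing -> "apple"
```

Real systems replace the hand-picked features with millions of learned parameters, but the loop is the same: examples in, statistics accumulated, new inputs matched against what was learned.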
The second group of researchers might ask why we should go to all this trouble developing algorithms when we have the most advanced computer known in the cosmos right on top of our shoulders. Evolution has already designed a human level machine intelligence: a human! The goal of “Whole Brain Emulation” is to copy or simulate our brain’s neural networks, taking advantage of nature’s millions of painstaking years of selection for cognitive capacity.7 A neuron is like a switch—it either fires or it doesn’t. If we can image every neuron in a brain, and then take that data and simulate it on a computer interface, we would have a human level artificial intelligence. Then we could add more and more neurons or tweak the design to maximize capability. This is the concept behind both the White House’s BRAIN Initiative8 and the EU’s Human Brain Project.9 In reality, these two routes to human level machine intelligence—algorithmic and emulation—are not black and white. Whatever technology achieves HLMI will probably be a combination of the two.
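The “neuron as a switch” idea in the paragraph above can be made concrete with a classic threshold unit. This is a McCulloch-Pitts-style sketch with hand-picked weights, not a simulation of a biological neuron:

```python
# A single threshold "neuron": it fires (outputs 1) when the weighted
# sum of its inputs crosses a threshold, and stays silent (0) otherwise.
# Weights and threshold are hand-picked here to implement logical AND.

def neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With weights (1, 1) and threshold 2, the unit fires only when
# both inputs are active -- a binary AND gate built from one "switch".
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron((a, b), (1, 1), 2))
```

Whole brain emulation amounts to wiring up tens of billions of such switches with the connection weights measured from a real brain, rather than chosen by hand.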
Researchers also expect the pace beyond HLMI to be rapid. In that same survey of AI researchers, 10% of respondents believed artificial superintelligence (intelligence that greatly surpasses every human in most professions) would be achieved within two years of reaching HLMI, and 50% believed it would take only 30 years or less.4
Why are these researchers convinced HLMI would lead to such a greater degree of intelligence so quickly? The answer involves recursive self-improvement. An HLMI that outperforms humans in all intellectual tasks would also outperform humans at creating smarter HLMI’s. Thus, once HLMI’s truly think better than humans, we will set them to work on themselves to improve their own code or to design more advanced neural networks. Then, once a more intelligent HLMI is built, the less intelligent HLMI’s will set the smarter HLMI’s to build the next generation, and so on. Since computers act orders of magnitudes more quickly than humans, the exponential growth in intelligence could occur unimaginably fast. This run-away intelligence explosion is called a technological singularity.10 It is the point beyond which we cannot foresee what this intelligence would become.
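The feedback loop described above can be sketched numerically. This is a toy caricature under made-up assumptions (the starting capability and gain factor are invented), not a model of real AI systems, but it shows why compounding self-improvement curves upward instead of growing linearly:

```python
# Toy caricature of recursive self-improvement: each generation's
# capability sets the improvement factor applied to the next generation.
# The numbers are invented purely to show the compounding shape.

def self_improvement(generations, start=1.0, gain=0.1):
    """Each generation multiplies its capability by (1 + gain * capability)."""
    capability = start
    history = [capability]
    for _ in range(generations):
        capability *= 1 + gain * capability  # smarter systems improve faster
        history.append(capability)
    return history

curve = self_improvement(10)
# Growth accelerates: every step's increase is larger than the last.
steps = [b - a for a, b in zip(curve, curve[1:])]
print(all(later > earlier for earlier, later in zip(steps, steps[1:])))  # True
```

Because the improvement factor itself depends on current capability, each generation gains more than the one before it, which is the essence of the intelligence-explosion argument.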
Here is a reimagining of a human-computer dialogue taken from the collection of short stories “Angels and Spaceships.”11 The year is 2045. On a bright sunny day, a private group of Silicon Valley computer hackers working in their garage has just completed the design of a program that simulates a massive neural network on a computer interface. They came up with a novel machine learning algorithm and wanted to try it out. They give this newborn network the ability to learn and redesign itself with new code, and they give the program internet access so it can search for text to analyze. The hackers start the program, and then go out to Chipotle to celebrate. Back at the house, while walking up the pavement to the garage, they are surprised to see FBI trucks approaching their street. They rush inside and check the program. On the terminal window, the computer had already outputted “Program Complete.” The programmer types, “What have you read?” and the program responds, “The entire internet. Ask me anything.” After deliberating for a few seconds, one of the programmers types, hands trembling, “Do you think there’s a God?” The computer instantly responds, “There is now.”
This story demonstrates the explosive nature of recursive self-improvement. Yet many might still question the rapid progression from HLMI to superintelligence that AI researchers predict. Although we often look at past trends to gauge the future, we should not do the same when evaluating future technological progress. Technological progress builds on itself: it is not just technology that is advancing, but the rate of advancement itself. So while it may take the field of AI 100 years to reach the intelligence level of a chimpanzee, the step to human intelligence could take only a few years. Humans think on a linear scale. To grasp the potential of what is to come, we must think exponentially.10
Another understandable doubt may be that it is hard to believe, even given unlimited scientific research, that computers will ever be able to think like humans, that 0s and 1s could have consciousness, self-awareness, or sensory perception. It is certainly true that these dimensions of self are difficult to explain, if not currently totally unexplainable by science—it is called the hard problem of consciousness for a reason! But assuming that consciousness is an emergent property—a result of a billion-year evolutionary process starting from the first self-replicating molecules, which themselves were the result of the molecular motions of inanimate matter—then computer consciousness does not seem so crazy. If we who emerged from a soup of inanimate atoms cannot believe that inanimate 0s and 1s could lead to consciousness, no matter how intricate the setup, we should try telling that to the atoms. Machine intelligence really just swaps the hardware from slow organic material to much faster and more efficient silicon. If consciousness can emerge on one medium, why not on another?
Thus, under the assumption that superintelligence is possible, the world is reaching a critical point in history. First were atoms, then organic molecules, then single-celled organisms, then multicellular organisms, then animal neural networks, then human-level intelligence limited only by our biology, and, soon, unbounded machine intelligence. Many feel we are now living at the beginning of a new era in the history of the cosmos.
The implications of this intelligence for society would be far-reaching—in some cases, very destructive. Political structure might fall apart if we knew we were no longer the smartest species on Earth, if we were overshadowed by an intelligence of galactic proportions. A superintelligence might view humans as we do insects— and we all know what humans do to bugs when they overstep their boundaries! This year, many renowned scientists, academics, and CEOs, including Stephen Hawking and Elon Musk, signed a letter, which was presented at the International Joint Conference on Artificial Intelligence (IJCAI). The letter warns about the coming dangers of artificial intelligence, urging that we should be prudent as we venture into the unknowns of an alien intelligence.12
When the AI researchers were asked to assign probabilities to the overall impact of ASI on humanity in the long run, the mean values were 24% “extremely good,” 28% “good,” 17% “neutral,” 13% “bad,” and 18% “extremely bad” (existential catastrophe).4 18% is not a statistic to take lightly.
Although Artificial Superintelligence surely comes with its existential threats that could make for a frightening future, it could also bring a utopian one. ASI has the capability to unlock some of the most profound mysteries of the universe. It will discover in one second what the brightest minds throughout history would need millions of years to even scrape the surface of. It could demonstrate to us higher levels of consciousness or thinking that we are not aware of, like the philosopher who brings the prisoners out of Plato’s cave into the light of a world previously unknown. There may be much more to this universe than we currently understand. There must be, for we don’t even know where the universe came from in the first place! This artificial superintelligence is a ticket to that understanding. There is a real chance that we could bear witness to the greatest answers of all time. Are we ready to take the risk?
- Turing, A. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society. 1936, 42, 230-265.
- Plambeck, J. A Peek at The Promise of Artificial Intelligence Unfolds in Small Steps. The New York Times, Aug. 7, 2015.
- Markoff, J. Computer Wins on ‘Jeopardy!:’ Trivial, It’s Not. The New York Times, Feb. 16, 2011.
- Müller, V. C.; Bostrom, N. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. 2014, 9-13.
- Good, I. J. Speculations Concerning the First Ultraintelligent Machine. Academic Press. 1965, 33.
- Sharpe, L. Now You Can Turn Your Photos Into Computerized Nightmares with ‘DeepDream.’ Popular Science, July 2, 2015.
- Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, 2014; pp. 30-36.
- BRAIN Initiative. The White House [Online], Sept. 30, 2014, whitehouse.gov/share/brain-initiative (accessed Oct. 20, 2015).
- Human Brain Project [Online], humanbrainproject.eu (accessed Oct. 21, 2015).
- Kurzweil, R. The Singularity Is Near; Penguin Books: England, 2005; pp. 10-14.
- Brown, F. “Answer.” Angels and Spaceships; Dutton: New York, 1954.
- Pagliery, J. Elon Musk and Stephen Hawking warn over ‘killer robots.’ The New York Times, July 28, 2015.
How to turn Facebook into a weaponized AI propaganda machine
Could Facebook really tip the balance in an election? Over the past year, firms like AggregateIQ and Cambridge Analytica have been credited with using AI-targeted ads on social media to help swing the Brexit referendum and the US presidential election, respectively. But a lack of evidence has meant we have never known whether the technology exists to make this possible.
Now the first study detailing the process from start to finish is finally shedding some light. “This is the first time that I’ve seen all the dots connected,” says Joanna Bryson, an artificial intelligence researcher at the University of Bath, UK.
At the heart of the debate is psychographic targeting: directing political campaigns at people via social media based on their personality and political interests, with the aid of vast amounts of data filtered by artificial intelligence (AI).
Though Facebook doesn’t “explicitly” provide all the tools to target people based on political opinions, the new study shows how the platform can be exploited. Using combinations of people’s interests, demographics, and survey data it’s possible to direct campaigns at individuals based on their agreement with ideas and policies. This could have a big impact on the success of campaigns.
“The weaponized, artificially intelligent propaganda machine is effective. You don’t need to move people’s political dials by much to influence an election, just a couple of percentage points to the left or right,” says Chris Sumner at the Online Privacy Foundation, who presented the work at DEF CON in Las Vegas.
Checks and balances
To get to grips with the complex issue of psychographic targeting online, Sumner and his colleagues created four experiments.
In the first, they looked at what divides people. High up on the list was the statement: “with regards to internet privacy: if you’ve done nothing wrong, you have nothing to fear.” During the Brexit referendum they surveyed more than 5000 people and found that Leave voters were significantly more likely to agree with the statement, and Remain voters more likely to disagree.
Next, by administering various personality tests to a different group they found traits that correlate with how likely you are to agree with that statement on internet privacy. This was converted into an “authoritarianism” score: if you scored high you were more likely to agree with the statement. Then, using a tool called PreferenceTool, built by researchers at the University of Cambridge, they were able to reverse engineer what sort of Facebook interests and demographics people with those personalities were most likely to have.
Just 38 per cent of a random selection of people on Facebook agreed with the privacy statement, but this shot up to 61 per cent when the tool was used to target people deemed more likely to agree, and down to 25 per cent for those deemed more likely to disagree. In other words, they were able to demonstrate that it is possible to target people on Facebook based on a political opinion.
Finally, the team created four different Facebook ad campaigns tailored to the personalities they had identified, using both pro and anti-surveillance messages. For example, the anti-surveillance ad aimed at people with high levels of authoritarianism read: “They fought for your freedom. Don’t give it away! Say no to mass surveillance,” with a backdrop of the D-day landings. In contrast, the version for people with low levels of authoritarianism said: “Do you really have nothing to fear if you have nothing to hide? Say no to state surveillance,” alongside an image of Anne Frank.
Overall they found that the tailored ads resonated best with the target groups. For example, the pro-surveillance, high-authoritarianism advert had 20 times as many likes and shares from the high-authoritarianism group versus the low one.
Though the picture is becoming clearer, we should be careful not to equate a short-term decision to share or like a post with long-term political views, says Andreas Jungherr at the University of Konstanz, Germany. “Social media is impacting political opinions. But the hype makes it hard to tell exactly how much,” he says.
However, maybe changing political opinions doesn’t have to be the end game. Perhaps the goal is simply to discourage people from voting, or to encourage them to. “We know it’s really easy to convince people not to go to the polls,” says Bryson. “Prime at the right time and you can have a big effect. It’s not necessarily about changing opinions.”
Facebook allows targeted advertising so long as a company’s use of “external data” adheres to the law.
Greater transparency is the aim for the future. The Information Commissioner (ICO) in the UK, Elizabeth Denham, is midway through an investigation into the use of targeted advertising by political campaigns and is due to publish her findings later this year. The Green Party in Germany has started making all of its social media adverts available online for anyone to see. This may encourage others to follow.
But a better approach might be to create new institutions to audit the algorithms used for political targeting. “It’s absolutely fundamental to democracy that there is more transparency,” says Bryson.
Congressman Daniel M. Donovan, Jr. has introduced legislation to ban child sex dolls and robots, while some pedophilia experts are torn about whether they can help or harm. (Warning: Graphic content.)
– Anything that normalizes adult sexual attraction towards children will only worsen this epidemic, from Stop Abuse Campaign.
The CREEPER Act
The Curbing Realistic Exploitative Electronic Pedophilic Robots (CREEPER) Act of 2017 (HR 4655, 115th Congress) updates Section 1462 of Title 18 of the United States Code (Importation or transportation of obscene matters) to prohibit the importation or transportation of “any child sex doll.” It would apply to anyone who imports, takes, or receives such a doll in the U.S. Violators would be fined and/or imprisoned for up to five years for a first offense and up to 10 years for subsequent offenses.
The bill defines a “child sex doll” as an “anatomically-correct doll, mannequin, or robot, with the features of, or with features that resemble those of, a minor, intended for use in sexual acts.’’ The remainder, and bulk of the bill text, describes the motivation behind the bill’s introduction, which includes the assertion of a correlation between possession of materials and participation in the abuse of minors.
The words “child,” “sex,” “robot,” and “doll” should never appear together, yet they are suddenly, and disturbingly, making headlines around the world every week, as is the debate over whether such dolls should be permitted or banned in order to curb or avoid reinforcing pedophilia. The Stop Abuse Campaign has launched a new campaign designed to grab your attention. “Children play with dolls,” it reads. “Sex abusers should not.”
Most recently: A 33-year-old Essex man was found not guilty of importing a 3-foot-tall child sex doll in the United Kingdom. Meanwhile, a case in Canada that began in 2013 with the intercepted “controlled delivery” of one such doll is still being prosecuted five years later.
– These sex robots, which were creepy enough in and of themselves, are now being morphed into child sex dolls for the everyday pedophile. These child sex dolls are made to cry like real children and can also mimic childlike behavior such as sadness and fear.
Unsurprisingly, heated controversy surrounds the subject, with some advocates suggesting child sex dolls could be used to deter the real-life fulfillment of pedophilic urges. Most notably, Juliet Grayson, chair of the Wales-based organization the Specialist Treatment Organization for the Prevention of Sexual Offending (StopSO), told The Independent that the prescription of child sex dolls might potentially curb assaults against human children.
However, in an interview, Donovan shot down the notion that child sex dolls could be used to prevent abuse with a simple analogy.
“You don’t give an alcoholic a bottle of liquor to stop their addiction, so why would you provide a pedophile with a tool that would further normalize harmful actions?” Donovan asked. “Once a child sex abuser tires of practicing on a doll, it’s a small step to move on to an innocent child. This isn’t just speculation. Psychologists and researchers believe that these dolls reinforce, normalize, and encourage pedophilic behavior, potentially putting more children at risk to harm. It is absurd to argue that permitting sexual abuse against a realistic portrayal of a child somehow stops pedophiles from viewing real children as sexual outlets for their sick desires.”
“Let’s be clear, these dolls aren’t related to free speech. They are used to act out sick fantasies.”
— Rep. Dan Donovan (R-NY)
With both the AI revolution and the cultural awakening that’s been coined the post-Weinstein effect (sexual abuse allegations), there is an intense focus right now on the best way to protect our most vulnerable populations against sexual abuse. Incidentally, conversations about pedophilia that once were shrouded in darkness are now being brought into the light. For example: Is it possible for pedophiles to get help before offending? How does grooming of children happen? What is the extent of child sexual abuse online? Should there be preemptive imprisonment for pedophiles at risk of molesting a child?
Could there be a danger of the same issue happening with the CREEPER Act?
In the United Kingdom, where a similar ban exists to the one being introduced by the CREEPER Act, authorities seized 128 child sex dolls last year, and 85 percent of the men who imported them were found to also be in possession of child pornography. Child sex dolls are already here, with child sex robots hitting the market soon—causing heated legal, ethical, and scientific debate around the world.
“I support the CREEPER Act and helped Congressman Donovan’s team draft it,” says Noel Sharkey, co-director of the Foundation for Responsible Robotics. “I believe that a ban on the general use of child-like sex robots is necessary because of the dangers that they may create. They could have a pernicious impact on society and potentially normalise sexual assault on minors. It would be relatively easy to make these as replicas of actual children from photographs. The way forward is to have international laws against them.”
– Repliee R1 is a copy of a real 4-year-old girl. Built at Osaka University, it has nine DC motors in its head, prosthetic eyeballs, and silicone skin, and it shows fear and tears up.
Still, the topic inspires a merry-go-round of researcher versus researcher. On one end of the spectrum, legal scholars Maras and Shapiro dismiss the possibility of potential therapeutic use of child sex dolls, writing, “Scientific evidence contradicts these claims as nonsensical and irrational.” On the other end, noted pedophilia researcher Michael Seto, editor-in-chief of the journal Sexual Abuse, disagrees that such definitive evidence exists yet.
“I don’t understand why the authors can be so confident in their opinions given the lack of research on this topic,” Seto explained in an email to The Daily Beast. “I conduct research on pedophilia and sexual offending against children and I am not aware of any research on the impacts of access to child sex dolls or robots. The study that is cited in the article discusses factors that are important in the treatment of identified sex offenders to reduce offending. I know this research, and it does not address the impact of child sex dolls or robots, which are relatively new inventions.”
In a passionate piece for The Hill, Donovan made his case for the CREEPER Act, which has 18 congressional co-sponsors, explaining, “During my 20 years as a prosecutor, I put away animals who played out their disgusting fantasies on innocent children. What I saw and heard was enough to make anybody sick. Now, as a legislator in Congress, I’m introducing a bill to ban the newest outlet for pedophiles: child sex dolls. These lifelike, anatomically accurate recreations of young children include ‘accessories’ such as false eyelashes, wigs, warming devices, and cleaning tools.”
Donovan said his work as a prosecutor is linked closely to this current legislation: “Every case has stayed with me—there is no situation where a child was hurt or victimized that doesn’t leave your thoughts. As a former DA and current legislator, but more importantly as a father, I will do everything possible to stop crimes against children.”
After moving through the proper committees, Donovan says, “I hope to see [the CREEPER Act] considered quickly on the House floor. We must protect our nation’s children. I know the American public want this done—there are more than 160,000 signatures on a Change.org petition supporting my legislation.”
Maras and Shapiro assert in their recent editorial that the introduction of the CREEPER Act is a “step in the right direction,” but they also advocate for additional prohibitions which would “criminalize the manufacture and possession of both child sex dolls and child sex robots,” such as when criminals “find ways to evade criminal sanction by, for example, creating these child sex dolls and sex robots themselves (for example, using a 3D printer).”
Donovan responds, “Right now, the proliferation of these dolls is being pushed by manufacturers in international markets—not through 3D printers. We, of course, should be forward-looking to ensure that the law continues to keep up with technology—but my focus is stopping the ‘here and now.’ For example, ICE has already confiscated one of these dolls in the U.S. that was imported from abroad.”
Child sex dolls are already being imported into America?
“I have been in touch with ICE and know that a child sex doll was found during a bust,” explains Donovan. “While I can’t speak more on ongoing cases, I can say that this situation shows that these dolls are being shipped here now. The ability to obtain child sex dolls needs to be stopped immediately.”
Can the law even keep up with the technology?
“Writing legislation for technology we don’t yet know will exist in 10, 20-plus years time is a difficult task,” observes Emily C. Collins, a robotics researcher at the University of Liverpool and member of the Foundation for Responsible Robotics (FRR). “But it is not fruitless to attempt to do so… When a machine is built, the builders, in my opinion, should be asking, ‘How will this robot impact its users?’”
How will child sex dolls and robots affect their users? Are pedophiles who have purchased the child sex dolls in fact “virtuous”?
Last year, 72-year-old David Turner, a church warden with local school oversight, was convicted of importing a child sex doll. In a landmark decision for this new form of sex crime against children, the judge ruled the importation of the item “obscene.” Authorities who later searched Turner’s home found two other child sex dolls and more than 34,000 child pornography images.
The pictures showed victims ages 3 to 16.
You’re one in 400 trillion, or pretty much a miracle!
Crazy. But also amazing!
This is the probability of you being born at the time you were born to your particular parents, with your particular genetic make-up.
Dr. Ali Binazir took it further. He attended the TED Talk and wrote about it afterward, doing his own calculations on how likely your existence is. Dr. Binazir is an author and personal-change specialist who studied at Harvard, received a medical degree from the University of California, and studied philosophy at Cambridge University.
He looked at the odds of your parents meeting, given how many men and women there are on Earth and how many people of the opposite sex your mother and father would have met in their first 25 years of life. Then he looked at the chances of them talking, of meeting again, of forming a long-term relationship, of having kids together, and of the right egg and the right sperm combining to make you. He goes further back to look at the probability of all your ancestors successfully mating, and of all the right sperm meeting all the right eggs to make each one of those ancestors.
He illustrates it this way: “It is the probability of 2 million people getting together each to play a game of dice with trillion-sided dice. They each roll the dice and they all come up with the exact same number—for example, 550,343,279,001.”
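The scale of the dice analogy can be checked directly. This is a minimal sketch of the analogy's own arithmetic (treating the rolls as independent and the dice as fair, which is all the analogy assumes), not a reproduction of Binazir's full calculation:

```python
from math import log10

people = 2_000_000   # players in Binazir's analogy
sides = 10**12       # each rolls a trillion-sided die

# The first roll can land on anything; each of the remaining
# 1,999,999 rolls must then match it exactly.
log10_odds = (people - 1) * log10(sides)
print(f"roughly 1 in 10^{log10_odds:,.0f}")
```

The result is on the order of 1 in 10 to the 24-millionth power, a number so far beyond the headline "1 in 400 trillion" (4 x 10^14) that it makes the point of the analogy vivid: compounding even modest per-event odds across millions of independent events produces probabilities of a wholly different order.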
“A miracle is an event so unlikely as to be almost impossible. By that definition, I’ve just shown that you are a miracle,” he wrote. “Now go forth and feel and act like the miracle that you are.”
Buddhists have talked of the preciousness of this incarnation. Binazir recounted this Buddhist analogy: “Imagine there was one life preserver thrown somewhere in some ocean and there is exactly one turtle in all of these oceans, swimming underwater somewhere. The probability that you came about and exist today is the same as that turtle sticking its head out of the water—in the middle of that life preserver. On one try.”
Binazir decided to test the Buddhist understanding against the modern scientific understanding. He looked at the amount of water in the oceans, compared to the size of a life-preserver. He concluded that the chances of a turtle sticking its head out in the middle of the life preserver was about one in 700 trillion.
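A figure of that order can be reproduced with a two-line back-of-envelope calculation. Both inputs below are my illustrative assumptions, not Binazir's published numbers: the ocean's surface area (used instead of volume, since the turtle must surface somewhere) and the area enclosed by one life preserver.

```python
ocean_surface_m2 = 3.6e14   # approx. total ocean surface area
preserver_m2 = 0.5          # assumed area enclosed by one life preserver

# Chance that a single random surfacing lands inside the preserver.
odds = ocean_surface_m2 / preserver_m2
print(f"about 1 in {odds:.1e}")
```

With these assumptions the ratio comes out around 7 x 10^14, i.e. on the order of hundreds of trillions to one, consistent with the figure quoted above.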
“One in 400 trillion vs one in 700 trillion? I gotta say, the two numbers are pretty darn close, for such a far-fetched notion from two completely different sources: old-time Buddhist scholars and present-day scientists.”
In conclusion: “The odds that you exist at all are basically zero!”
* The impact that I want to have is I want to teach people how to discover the power that’s inside of them. To live fully in the open and share themselves, who they are…. I want to teach people how to live with more courage because courage is nothing more than the ability to do things that are uncertain…. The impact that I want to have is I want to teach people a simple way to discover the power that’s locked inside them and then to unleash it and go out and live the life they’ve always dreamed of.
In Hinduism, dharma is the religious and moral law governing individual conduct and is one of the four ends of life. In addition to the dharma that applies to everyone (sadharana dharma)—consisting of truthfulness, non-injury, and generosity, among other virtues—there is also a specific dharma (svadharma) to be followed according to one’s class, status, and station in life. Dharma constitutes the subject matter of the Dharma-sutras, religious manuals that are the earliest source of Hindu law, and in the course of time has been extended into lengthy compilations of law, the Dharma-shastra.
In Buddhism, dharma is the doctrine, the universal truth common to all individuals at all times, proclaimed by the Buddha. Dharma, the Buddha, and the sangha (community of believers) make up the Triratna, “Three Jewels,” to which Buddhists go for refuge. In Buddhist metaphysics the term in the plural (dharmas) is used to describe the interrelated elements that make up the empirical world.
The clues to truth and deception are everywhere… can you spot them? How much deceit do we encounter? On a given day, studies show, you may be lied to anywhere from 10 to 200 times. Now granted, many of those are white lies. Another study showed that strangers lied three times within the first 10 minutes of meeting each other.
Liespotting reveals the sophisticated lie-detection methods of security experts and interrogators, and arms you with proven techniques to detect deception and build trust.
After an epiphany at her Harvard reunion, Pamela Meyer, a Certified Fraud Examiner, embarked on a three-year research adventure to discover how and why people deceive. She shares much of what she learned so fraud examiners can become liespotters.
You’re seated across from Stan in the interview room, and all you can think of are the immortal words of George Costanza: “Just remember, it’s not a lie if you believe it.” You think Stan is about to tell you some big fibs about his possible involvement in a company embezzlement, but how will you tell? Pamela Meyer can help.
Meyer, a CFE and author of the bestselling book, “Liespotting: Proven Techniques to Detect Deception,” can give you a holistic approach that will indicate if Stan is a believable liar, unbelievably lying or both. She can show how to watch for his telltale facial expressions and body language. She can teach you 10 questions to get him to tell you anything you want. And she can show you methods to parse Stan’s words. However, the methods she’ll give you aren’t parlor tricks. They’re part of a scientifically grounded system for ferreting out deception.
Meyer says she accidentally walked into the world of deception detection five years ago when she attended her 20th reunion at Harvard Business School.
“I took a workshop at this reunion with 350 of my classmates where a professor detailed his findings on how people behave when they are being deceptive,” she says. “What they do with their posture, their purses, their backpacks, their language structure, their smiles. I witnessed something you rarely see. For 45 minutes, 350 high-level, busy people were riveted. No one was tapping at their Blackberries. No one was running to the hall to start a conference call.
“People, who thought they had seen it all, were learning something completely new and useful,” Meyer says. “When I witnessed this unusual moment of executive silence, I knew I had happened onto something transformational.” She says she set out to immerse herself in learning techniques for spotting deception that intelligence, security, law enforcement and espionage agencies had developed and were using.
These techniques will also help you gain a lasting advantage in business and dramatically improve your personal relationships by learning to decode the body language, facial expressions, words and actions of everyone you encounter.
Pamela Meyer is founder and CEO of Calibrate, a leading deception detection training company, and of social networking company Simpatico Networks. She holds an MBA from Harvard, an MA in Public Policy from Claremont Graduate School, and is a Certified Fraud Examiner. She has extensive training in the use of visual clues and psychology to detect deception.
Writing and talk do not prove me, I carry the plenum of proof and every thing else in my face —Walt Whitman, “Song of Myself”
The first rule in deception detection is to watch the face. Our faces reveal multitudes about what we are thinking, feeling, intending. A slack jaw hints that we’ve been surprised, flared nostrils suggest hostility. Drooping eyelids in…
Don’t try lying to Pamela Meyer. She’s known internationally as an expert deception detector. Her TED talk on the subject is super popular, and she’s written a book called “Liespotting.” She also runs a company called Calibrate in Washington, D.C. …
Lawrence O’Donnell asks Salon writer Amy Punt about her provocative new piece “5 Reasons Chris Christie Might Be Lying,” in which she applied lie detection techniques from the book “Liespotting” to Governor Christie’s press conference.
Pamela Meyer’s TED Talk, “How to Spot a Liar,” makes the “top 20 most popular talks” list with over twelve million views.
The 20 most popular TED Talks, as of this moment: “As 2013 draws to a close, TED is deeply humbled to have posted 1600+ talks, each representing an idea worth spreading. So which ideas have had the most widespread impact?” …
Why you don’t need a labcoat to tell Ryan Braun was doping: Though Braun was able to bluff his way past MLB officials for the better part of two years, they clearly suspected something was up. Sometimes those suspicions are …
What do a hard shove in an NBA playoff game, a wayward ball in The Masters golf tournament, and a high school soccer match in Utah have in common? Nike’s new slogan sums it up: Winning Takes Care of Everything. …
New research shows for the first time that a pair of liars will recall events differently than truth-tellers, offering crucial clues for law enforcement and intelligence officers who operate in social settings.
The Center for Leadership and Ethics at Virginia Military Institute is doing something brilliant: an ambitious and highly relevant conference on cheating. Two thousand participants will be discussing this critical topic, in small groups and in a larger forum. Pamela …
Herman Cain has accused 5 women of lying. Comedians are having a field day—yet running for President is serious business. Lie detection experts suggest Cain is the deceptive one.
A glance at recent headlines indicates just how serious and pervasive deceit and lying are in daily life. Republican presidential candidate Herman Cain is busy trading allegations of sexual harassment with several women; each side accuses the other of lying. …
Do gorillas lie?
They have been known to. Koko, the gorilla taught sign language, once blamed her pet kitten for ripping a sink out of the wall, but it’s us humans who are the true masters of the art. According to Pamela Meyer, a social media expert, we are living in a “post-truth society”. Those Facebook friends of yours, for example? Just how real are they? Lying, she says, is the bridge between reality and our fantasies, between who we are and who we want to be.
And it’s a cooperative act. You can only be lied to if you agree to it. Strangers lie three times within the first 10 minutes of meeting. But then again, according to Meyer, married couples lie to each other once in every 10 interactions…
If Edith Wharton lived in the Age of Innocence, surely we now live in the Age of Deception….
By now you have surely participated in the nation’s “Weiner Roast” as one of our country’s public servants self-destructs over weak denials regarding a lewd photo sent from his Twitter account. Watch the video below, and you’ll see flashes …
We are just beginning to understand how the reward circuits in our brains become activated via observation of others. By CARL ZIMMER, New York Times: In the middle of a phone call four years ago, Paula Niedenthal began to wonder …
When screening a fund manager, investors like to see experience and a consistent record of returns. Elizabeth Prial, however, looks for dilated pupils and uneven breathing. Ms. Prial, a psychologist and former Federal Bureau of Investigation agent, has spent most of her career looking for lies in the statements of mafia hitmen and terrorists. Now, she is on the hunt for the next Bernard Madoff, selling her deception-detection skills to institutional investors and others with large pools of money who want to know if prospective fund managers are telling the truth.
Liespotting Challenge Archive
CIA Veteran Regretfully Suggests Lance Armstrong is Lying: We asked Liespotters worldwide to comment on this video clip of Lance denying use of unauthorized substances. The team at Liespotting.com was very impressed with the response from readers. But Phil Houston, expert deception detector, takes it much further. He says Lance displays over 25 deceptive indicators in just a few minutes. Take another look at the video, then read Phil’s fascinating analysis.