Cops Use Facebook For Precrime, Thoughtcrime


Just seconds after a gun's trigger is squeezed, police officers in cities and towns across America are alerted thanks to state-of-the-art technology. Up-to-the-moment accuracy isn’t always enough, though.

Programs like the ShotSpotter system were already in place in 44 US cities by 2009, and in recent years the company has only added more names to its list of customers that can learn about gun activity the second shots are fired. ShotSpotter’s developers describe it as “a gunfire alert and analysis solution” that uses specialized sensors and software to triangulate the precise location of each spent round within seconds, and dozens of law enforcement agencies across the United States have signed on.
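
ShotSpotter’s actual algorithms are proprietary, but the underlying idea, acoustic multilateration, can be illustrated with a toy sketch: each sensor records an arrival time, and the shot must have originated at the point where every sensor’s implied emission time (arrival minus travel time) agrees. All the specifics below (sensor positions, a 300-meter search area, a 1-meter grid) are invented for the example.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def locate_shot(sensors, arrival_times, area=(0.0, 300.0, 0.0, 300.0), step=1.0):
    """Grid-search multilateration: the best candidate position is the one
    where the emission times implied by each sensor (arrival time minus
    travel time) agree most closely."""
    x_min, x_max, y_min, y_max = area
    best, best_spread = None, float("inf")
    for yi in range(int((y_max - y_min) / step) + 1):
        y = y_min + yi * step
        for xi in range(int((x_max - x_min) / step) + 1):
            x = x_min + xi * step
            implied = [t - math.hypot(x - sx, y - sy) / SPEED_OF_SOUND
                       for (sx, sy), t in zip(sensors, arrival_times)]
            spread = max(implied) - min(implied)
            if spread < best_spread:
                best_spread, best = spread, (x, y)
    return best

# Example: a shot fired at (120, 80) and heard by four corner sensors.
sensors = [(0.0, 0.0), (300.0, 0.0), (0.0, 300.0), (300.0, 300.0)]
shot = (120.0, 80.0)
times = [10.0 + math.hypot(shot[0] - sx, shot[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
estimate = locate_shot(sensors, times)  # lands within one grid step of the shot
```

A production system would refine this with least-squares solvers and filter out non-gunfire sounds, but the time-difference-of-arrival principle is the same.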

When it’s a matter of life or death, though, seconds can make all the difference. That’s the reasoning, at least, for why a number of police departments across America are relying not just on systems like ShotSpotter but other, more Orwellian surveillance techniques to spy on citizens and predict problems before they even occur. The result, depending on who you ask, means a drop in crime. It also, however, could mean no one is safe from the ever-watching eye of Big Brother.

Predictive policing programs that rely on algorithms and historic data to hypothesize the location and nature of future crimes are already being deployed in New York City and other towns. Last month, in fact, Seattle Mayor Mike McGinn announced that two precincts there were starting to use predictive policing programs, promising, “This technology will allow us to be proactive rather than reactive in responding to crime.”

“The Predictive Policing software is estimated to be twice as effective as a human data analyst working from the same information,” Seattle Police Chief John Diaz told reporters. “It’s all part of our effort to build an agile, flexible and innovative police department that provides the best service possible to the public.”

But specialized software and sensors aren’t the only tools law enforcement officers are using to look into suspicious activity. In Los Angeles, one police department has at least one officer on the clock 24 hours a day patrolling social media sites for unusual activity.

Tweets, Facebook posts and even Instagram photos are all subject to surveillance, Los Angeles County Sheriff’s Department Capt. Mike Parker admits to the San Gabriel Valley Tribune. Parker works with the eight-member Electronic Communications Triage, or eComm, Unit, which monitors public social media posts at all hours of the day to see whether advertised parties and other get-togethers could benefit from a surprise visit by the police.

“They’re watching social media and Internet comments that pertain to this geographic area, watching what would pertain to our agencies so we can prevent crime, help the public,” Parker says. “And now they’re going to be ramping up more and more with more sharing and interacting, especially during crises, whether it’s local or regional.”

Tribune writer Brenda Gazzar cites unspecified incidents in LA where teenagers attend parties, drink heavily and engage in illegal activity. “The partygoers usually get high, get a girl drugged up and then sexually assault her,” Gazzar quotes Capt. Parker. “Often gang members will show up, start fighting over a girl and end up shooting or stabbing someone.”

“We are absolutely and completely convinced that we are preventing wild assaults from our efforts with these illegal social media advertised parties,” Capt. Parker says, adding that the eComm unit has already thwarted around 250 “illegal parties” in Los Angeles County.

So-called “illegal parties” aren’t the only thing being searched for, though. The Tribune goes on to say that “unsanctioned protests” are also put under the magnifying glass by officers with the eComm unit who actively scour the Web to see what demonstrations are being planned and by whom.

Capt. Parker says the eComm unit doesn’t search for specific people, just certain activity, and stands by the system so far. With a number of other law enforcement agencies using state-of-the-art technologies to try and stop crime, though, it’s forcing more and more Americans to submit to a society where the police become privy to their personal activity, whether they like it or not.

Karen North, director of the University of Southern California’s Annenberg Program on Online Communities, tells the Tribune that scouring social media sites for suspicious activity is “a smart move” on behalf of law enforcement, and that “All people should know that anything you put up on social media is public.”

“Even if you put it up on your private Facebook feed, you should still assume it’s public,” she tells the Tribune. When social media analysts have access to other tools, however, it raises all sorts of questions about what activity is fair game for the fuzz.

Evgeny Morozov, a Belarusian writer and researcher, reports for the UK’s Observer this week that police agencies are starting to combine more and more of the data that enters eComm divisions and other units in agencies across the United States. In New York City, for example, Morozov notes that the NYPD’s recently rolled-out Domain Awareness System doesn’t start and end with real-time gunshot alerts. That system, he says, “syncs the city’s 3,000 closed-circuit camera feeds with arrest records, 911 calls, licence plate recognition technology and radiation detectors.”

“It can monitor a situation in real time and draw on a lot of data to understand what’s happening. The leap from here to predicting what might happen is not so great,” he says.

The thousands of surveillance cameras on the island of Manhattan alone have existed for years, and the American Civil Liberties Union and other groups have led relentless campaigns against the NYPD’s all-watching spy system and other constitutionally questionable behavior that makes every step in the City That Never Sleeps subject to police scrutiny. On the other side of the country, though, Seattle is fast becoming the surveillance capital of America. Earlier this year it was revealed that the major Pacific Northwest hub is in the midst of installing 30 surveillance cameras that will create a “wireless mesh network security system” on the city’s harbor that can be monitored by law enforcement agencies across the region. Coupled with other activity, though, Seattle’s eye-in-the-sky programs might be more serious than once suspected.

When Seattle recently signed onto the ShotSpotter system at a cost of $950,000 over two years for installation and operation, the city agreed to install 52 mobile gunshot locators that can collect intelligence up to 600 feet away using high-tech microphones and cameras.

“Having a private corporation control more than fifty audio/video surveillance stations in Seattle is likely to attract external interest,” security researcher Jacob Appelbaum tweeted over the weekend. A resident of Seattle, Appelbaum wrote on Twitter that he was looking for more information about the on-the-rise spy program being constructed in his city. “I find it rather depressing that surveillance/dataveillance programs are created and are used without so much as a public discussion,” he tweeted. “It would be interesting to learn how much money it costs to spin up the system and to FOIA the real data as input into the system.”

With public discourse on the subject sparse in many cities, though, obtaining, processing and sharing information with other concerned residents isn’t as commonplace as Appelbaum and others might want it to be. When many cities sign contracts with ShotSpotter, press write-ups are few and far between. In other locales, cameras that monitor car traffic are accepted as a necessity to curb red-light runners and other haphazard drivers. Rarely, however, is it discussed what other intelligence these cameras collect, or with whom it’s being shared.

Predictive policing “may very well end up reducing crime to a certain degree,” Loyola Law School professor Stan Goldman told National Public Radio in a 2011 interview. “The question is at what cost, at what price?”

According to a CBS report, a predictive policing program in an area of Los Angeles drove burglaries down by one-third in a matter of only five months. And when ShotSpotter was first installed in Saginaw, Michigan, crime soon dropped by 30 percent. As for the price, however, consider this: if each of the 52 ShotSpotter sensors in Seattle can collect data within a radius of 600 feet, then roughly 58.8 million square feet of the city is under surveillance, or over 2 square miles where privacy ceases to exist. That, of course, isn’t even taking into account the other surveillance systems in place, including the one on the city’s harbor.
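
The back-of-the-envelope arithmetic works out if each sensor is treated as covering a full, non-overlapping circle of 600-foot radius, which is the article’s implicit assumption (the slightly lower 58,780,800 figure sometimes quoted appears to round π to 3.14):

```python
import math

SENSOR_COUNT = 52
RADIUS_FT = 600.0
SQ_FT_PER_SQ_MILE = 5280 ** 2  # 27,878,400 square feet in a square mile

per_sensor_sq_ft = math.pi * RADIUS_FT ** 2       # ~1.13 million sq ft per sensor
total_sq_ft = SENSOR_COUNT * per_sensor_sq_ft     # ~58.8 million sq ft in all
total_sq_miles = total_sq_ft / SQ_FT_PER_SQ_MILE  # ~2.11 square miles
```

In practice sensor circles would overlap, so the true covered area is somewhat smaller; the figure is an upper bound.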

And don’t even think about sharing this story on Facebook.

Facebook Scans Chats And Posts For “Criminal Activity”


Facebook has added sleuthing to its array of data-mining capabilities, scanning your posts and chats for criminal activity. If the social-networking giant detects suspicious behavior, it flags the content and determines if further steps, such as informing the police, are required.

The new tidbit about the company’s monitoring system comes from a Reuters interview with Facebook Chief Security Officer Joe Sullivan. Here’s the lead-in to the Reuters story:

A man in his early 30s was chatting about sex with a 13-year-old South Florida girl and planned to meet her after middle-school classes the next day. Facebook’s extensive but little-discussed technology for scanning postings and chats for criminal activity automatically flagged the conversation for employees, who read it and quickly called police. Officers took control of the teenager’s computer and arrested the man the next day.

Facebook’s software focuses on conversations between members who have a loose relationship on the social network. For example, if two users aren’t friends, only recently became friends, have no mutual friends, interact with each other very little, have a significant age difference, and/or are located far from each other, the tool pays particular attention.

The scanning program looks for certain phrases found in previously obtained chat records from criminals, including sexual predators (because of the Reuters story, we know of at least one alleged child predator who is being brought before the courts as a direct result of Facebook’s chat scanning). Both the relationship analysis and the phrase matches have to add up before a Facebook employee actually looks at the communications and makes the final call on whether to ping the authorities.
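
Facebook has not published how the scanner works; based only on the signals Reuters lists (a two-stage gate of relationship looseness plus phrase matches before any human review), a hedged sketch might look like the following. The phrase list, thresholds and weights here are all invented for illustration.

```python
from dataclasses import dataclass

# Illustrative phrases only -- the real system reportedly draws its list
# from chat logs obtained in prior criminal cases.
SUSPICIOUS_PHRASES = {"meet after school", "don't tell your parents"}

@dataclass
class PairSignals:
    are_friends: bool
    days_since_friending: int   # small = only recently became friends
    mutual_friends: int
    messages_exchanged: int
    age_gap_years: int
    distance_miles: float

def relationship_risk(p: PairSignals) -> int:
    """Score how 'loose' a relationship looks. Each signal is one Reuters
    names, but the point values are invented for this sketch."""
    score = 0
    if not p.are_friends or p.days_since_friending < 30:
        score += 2
    if p.mutual_friends == 0:
        score += 2
    if p.messages_exchanged < 5:
        score += 1
    if p.age_gap_years >= 10:
        score += 2
    if p.distance_miles > 100:
        score += 1
    return score

def flag_for_review(signals: PairSignals, chat_text: str) -> bool:
    """Both stages must fire before an employee ever sees the chat --
    the design goal Sullivan describes as a low false-positive rate."""
    phrase_hit = any(ph in chat_text.lower() for ph in SUSPICIOUS_PHRASES)
    return phrase_hit and relationship_risk(signals) >= 4
```

The two-stage design means a suspicious phrase between longtime friends, or a loose connection chatting innocuously, never reaches a human reviewer.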

“We’ve never wanted to set up an environment where we have employees looking at private communications, so it’s really important that we use technology that has a very low false-positive rate,” Sullivan told Reuters. While details of the tool are still scarce, it’s a well-known fact that Facebook cooperates with the police, since, like any company, it has to abide by the law. In fact, just a few months ago, Facebook complied with a police subpoena by sending over 62 pages of photos, Wall posts, messages, contacts, and past activity on the site for a murder suspect.

For more information about Facebook’s stance on working with the police, I checked out these two pages: Law Enforcement and Third-Party Matters, as well as Information for Law Enforcement Authorities. It’s worth noting that neither of these documents discusses the aforementioned tool (a quick search for the words “monitor” and “scan” brings up nothing).

Facebook likely wants to avoid discussing the existence of the monitoring technology in order to avoid further privacy concerns. Many users don’t like the idea of having their conversations reviewed, even if it’s done by software and rarely by Facebook employees.

What Facebook Knows


If Facebook were a country, a conceit that founder Mark Zuckerberg has entertained in public, its 900 million members would make it the third largest in the world.

It would far outstrip any regime past or present in how intimately it records the lives of its citizens. Private conversations, family photos, and records of road trips, births, marriages, and deaths all stream into the company’s servers and lodge there. Facebook has collected the most extensive data set ever assembled on human social behavior. Some of your personal information is probably part of it.

And yet, even as Facebook has embedded itself into modern life, it hasn’t actually done that much with what it knows about us. Now that the company has gone public, the pressure to develop new sources of profit (see “The Facebook Fallacy”) is likely to force it to do more with its hoard of information. That stash of data looms like an oversize shadow over what today is a modest online advertising business, worrying privacy-conscious Web users (see “Few Privacy Regulations Inhibit Facebook”) and rivals such as Google. Everyone has a feeling that this unprecedented resource will yield something big, but nobody knows quite what.

Heading Facebook’s effort to figure out what can be learned from all our data is Cameron Marlow, a tall 35-year-old who until recently sat a few feet away from Zuckerberg. The group Marlow runs has escaped the public attention that dogs Facebook’s founders and the more headline-grabbing features of its business. Known internally as the Data Science Team, it is a kind of Bell Labs for the social-networking age. The group has 12 researchers—but is expected to double in size this year. They apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large. Whereas other analysts at the company focus on information related to specific online activities, Marlow’s team can swim in practically the entire ocean of personal data that Facebook maintains. Of all the people at Facebook, perhaps even including the company’s leaders, these researchers have the best chance of discovering what can really be learned when so much personal information is compiled in one place.

Facebook has all this information because it has found ingenious ways to collect data as people socialize. Users fill out profiles with their age, gender, and e-mail address; some people also give additional details, such as their relationship status and mobile-phone number. A redesign last fall introduced profile pages in the form of time lines that invite people to add historical information such as places they have lived and worked. Messages and photos shared on the site are often tagged with a precise location, and in the last two years Facebook has begun to track activity elsewhere on the Internet, using an addictive invention called the “Like” button. It appears on apps and websites outside Facebook and allows people to indicate with a click that they are interested in a brand, product, or piece of digital content.

Since last fall, Facebook has also been able to collect data on users’ online lives beyond its borders automatically: in certain apps or websites, when users listen to a song or read a news article, the information is passed along to Facebook, even if no one clicks “Like.” Within the feature’s first five months, Facebook catalogued more than five billion instances of people listening to songs online. Combine that kind of information with a map of the social connections Facebook’s users make on the site, and you have an incredibly rich record of their lives and interactions.

“This is the first time the world has seen this scale and quality of data about human communication,” Marlow says with a characteristically serious gaze before breaking into a smile at the thought of what he can do with the data. For one thing, Marlow is confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do. His team can also help Facebook influence our social behavior for its own benefit and that of its advertisers. This work may even help Facebook invent entirely new ways to make money.

Contagious Information

Marlow eschews the collegiate programmer style of Zuckerberg and many others at Facebook, wearing a dress shirt with his jeans rather than a hoodie or T-shirt. Meeting me shortly before the company’s initial public offering in May, in a conference room adorned with a six-foot caricature of his boss’s dog spray-painted on its glass wall, he comes across more like a young professor than a student. He might have become one had he not realized early in his career that Web companies would yield the juiciest data about human interactions.

In 2001, undertaking a PhD at MIT’s Media Lab, Marlow created a site called Blogdex that automatically listed the most “contagious” information spreading on weblogs. Although it was just a research project, it soon became so popular that Marlow’s servers crashed. Launched just as blogs were exploding into the popular consciousness and becoming so numerous that Web users felt overwhelmed with information, it prefigured later aggregator sites such as Digg and Reddit. But Marlow didn’t build it just to help Web users track what was popular online. Blogdex was intended as a scientific instrument to uncover the social networks forming on the Web and study how they spread ideas. Marlow went on to Yahoo’s research labs to study online socializing for two years. In 2007 he joined Facebook, which he considers the world’s most powerful instrument for studying human society. “For the first time,” Marlow says, “we have a microscope that not only lets us examine social behavior at a very fine level that we’ve never been able to see before but allows us to run experiments that millions of users are exposed to.”

Marlow’s team works with managers across Facebook to find patterns that they might make use of. For instance, they study how a new feature spreads among the social network’s users. They have helped Facebook identify users you may know but haven’t “friended,” and recognize those you may want to designate mere “acquaintances” in order to make their updates less prominent. Yet the group is an odd fit inside a company where software engineers are rock stars who live by the mantra “Move fast and break things.” Lunch with the data team has the feel of a grad-student gathering at a top school; the typical member of the group joined fresh from a PhD or junior academic position and would rather talk about advancing social science than about Facebook as a product or company. Several members of the team have training in sociology or social psychology, while others began in computer science and started using it to study human behavior. They are free to use some of their time, and Facebook’s data, to probe the basic patterns and motivations of human behavior and to publish the results in academic journals—much as Bell Labs researchers advanced both AT&T’s technologies and the study of fundamental physics.

It may seem strange that an eight-year-old company without a proven business model bothers to support a team with such an academic bent, but Marlow says it makes sense. “The biggest challenges Facebook has to solve are the same challenges that social science has,” he says. Those challenges include understanding why some ideas or fashions spread from a few individuals to become universal and others don’t, or to what extent a person’s future actions are a product of past communication with friends. Publishing results and collaborating with university researchers will lead to findings that help Facebook improve its products, he adds.

For one example of how Facebook can serve as a proxy for examining society at large, consider a recent study of the notion that any person on the globe is just six degrees of separation from any other. The best-known real-world study, in 1967, involved a few hundred people trying to send postcards to a particular Boston stockholder. Facebook’s version, conducted in collaboration with researchers from the University of Milan, involved the entire social network as of May 2011, which amounted to more than 10 percent of the world’s population. Analyzing the 69 billion friend connections among those 721 million people showed that the world is smaller than we thought: four intermediary friends are usually enough to introduce anyone to a random stranger. “When considering another person in the world, a friend of your friend knows a friend of their friend, on average,” the technical paper pithily concluded. That result may not extend to everyone on the planet, but there’s good reason to believe that it and other findings from the Data Science Team are true to life outside Facebook. Last year the Pew Research Center’s Internet & American Life Project found that 93 percent of Facebook friends had met in person.

One of Marlow’s researchers has developed a way to calculate a country’s “gross national happiness” from its Facebook activity by logging the occurrence of words and phrases that signal positive or negative emotion. Gross national happiness fluctuates in a way that suggests the measure is accurate: it jumps during holidays and dips when popular public figures die. After a major earthquake in Chile in February 2010, the country’s score plummeted and took many months to return to normal. That event seemed to make the country as a whole more sympathetic when Japan suffered its own big earthquake and subsequent tsunami in March 2011; while Chile’s gross national happiness dipped, the figure didn’t waver in any other countries tracked (Japan wasn’t among them). Adam Kramer, who created the index, says he intended it to show that Facebook’s data could provide cheap and accurate ways to track social trends—methods that could be useful to economists and other researchers.
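
Kramer’s actual word lists and formula aren’t public; a minimal sketch of the word-counting idea, with toy lexicons invented for the example, might look like this:

```python
# Toy lexicons -- the real index uses much larger, validated word lists.
POSITIVE = {"happy", "great", "love", "holiday", "congrats"}
NEGATIVE = {"sad", "terrible", "loss", "mourning", "earthquake"}

def gross_national_happiness(posts):
    """Net positivity of a day's status updates: the share of
    emotion-bearing words that are positive minus the share that
    are negative, ranging from -1.0 to 1.0."""
    pos = neg = 0
    for post in posts:
        for word in post.lower().split():
            word = word.strip(".,!?")
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

holiday_posts = ["So happy, love this holiday!", "Great day, congrats all"]
disaster_posts = ["Terrible loss today", "A sad day of mourning after the earthquake"]
print(gross_national_happiness(holiday_posts) >
      gross_national_happiness(disaster_posts))  # → True
```

Aggregated over millions of daily posts per country, even a crude lexicon like this would show the holiday spikes and disaster dips the article describes.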

Other work published by the group has more obvious utility for Facebook’s basic strategy, which involves encouraging us to make the site central to our lives and then using what it learns to sell ads. An early study looked at what types of updates from friends encourage newcomers to the network to add their own contributions. Right before Valentine’s Day this year a blog post from the Data Science Team listed the songs most popular with people who had recently signaled on Facebook that they had entered or left a relationship. It was a hint of the type of correlation that could help Facebook make useful predictions about users’ behavior—knowledge that could help it make better guesses about which ads you might be more or less open to at any given time. Perhaps people who have just left a relationship might be interested in an album of ballads, or perhaps no company should associate its brand with the flood of emotion attending the death of a friend. The most valuable online ads today are those displayed alongside certain Web searches, because the searchers are expressing precisely what they want. This is one reason why Google’s revenue is 10 times Facebook’s. But Facebook might eventually be able to guess what people want or don’t want even before they realize it.

Recently the Data Science Team has begun to use its unique position to experiment with the way Facebook works, tweaking the site—the way scientists might prod an ant’s nest—to see how users react. Eytan Bakshy, who joined Facebook last year after collaborating with Marlow as a PhD student at the University of Michigan, wanted to learn whether our actions on Facebook are mainly influenced by those of our close friends, who are likely to have similar tastes. That would shed light on the theory that our Facebook friends create an “echo chamber” that amplifies news and opinions we have already heard about. So he messed with how Facebook operated for a quarter of a billion users. Over a seven-week period, the 76 million links that those users shared with each other were logged. Then, on 219 million randomly chosen occasions, Facebook prevented someone from seeing a link shared by a friend. Hiding links this way created a control group so that Bakshy could assess how often people end up promoting the same links because they have similar information sources and interests.
He found that our close friends strongly sway which information we share, but overall their impact is dwarfed by the collective influence of numerous more distant contacts—what sociologists call “weak ties.” It is our diverse collection of weak ties that most powerfully determines what information we’re exposed to.
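
The power of Bakshy’s design comes from the random holdout: hiding a share creates a control group that is otherwise identical to the exposed group, so any difference in sharing rates must be caused by seeing the link, not by shared tastes. A simulated sketch (every rate below is invented) shows how the comparison works:

```python
import random

def run_holdout_experiment(n_occasions=100_000, base_rate=0.01,
                           exposure_lift=0.05, hide_prob=0.5, seed=7):
    """Each occasion is one instance of a friend sharing a link. With
    probability hide_prob the share is hidden from the user's feed
    (control); otherwise the user sees it (exposed). The user may share
    the link anyway through other channels (base_rate); seeing it in
    the feed adds exposure_lift."""
    rng = random.Random(seed)
    counts = {"exposed": [0, 0], "control": [0, 0]}  # [shared, occasions]
    for _ in range(n_occasions):
        hidden = rng.random() < hide_prob
        group = "control" if hidden else "exposed"
        p_share = base_rate + (0.0 if hidden else exposure_lift)
        counts[group][0] += rng.random() < p_share
        counts[group][1] += 1
    return {g: shared / total for g, (shared, total) in counts.items()}

rates = run_holdout_experiment()
# The gap between the exposed and control sharing rates estimates the
# causal effect of seeing a friend's share, free of the shared-tastes
# confound that would plague a purely observational comparison.
```

Without the holdout, friends sharing the same links could simply reflect similar interests; the randomization is what lets the study separate influence from homophily.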

That study provides strong evidence against the idea that social networking creates harmful “filter bubbles,” to use activist Eli Pariser’s term for the effects of tuning the information we receive to match our expectations. But the study also reveals the power Facebook has. “If [Facebook’s] News Feed is the thing that everyone sees and it controls how information is disseminated, it’s controlling how information is revealed to society, and it’s something we need to pay very close attention to,” Marlow says. He points out that his team helps Facebook understand what it is doing to society and publishes its findings to fulfill a public duty to transparency. Another recent study, which investigated which types of Facebook activity cause people to feel a greater sense of support from their friends, falls into the same category.

But Marlow speaks as an employee of a company that will prosper largely by catering to advertisers who want to control the flow of information between its users. And indeed, Bakshy is working with managers outside the Data Science Team to extract advertising-related findings from the results of experiments on social influence. “Advertisers and brands are a part of this network as well, so giving them some insight into how people are sharing the content they are producing is a very core part of the business model,” says Marlow.

Facebook told prospective investors before its IPO that people are 50 percent more likely to remember ads on the site if they’re visibly endorsed by a friend. Figuring out how influence works could make ads even more memorable or help Facebook find ways to induce more people to share or click on its ads.

Social Engineering

Marlow says his team wants to divine the rules of online social life to understand what’s going on inside Facebook, not to develop ways to manipulate it. “Our goal is not to change the pattern of communication in society,” he says. “Our goal is to understand it so we can adapt our platform to give people the experience that they want.” But some of his team’s work and the attitudes of Facebook’s leaders show that the company is not above using its platform to tweak users’ behavior. Unlike academic social scientists, Facebook’s employees have a short path from an idea to an experiment on hundreds of millions of people.

In April, influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.

Marlow’s team is in the process of publishing results from the last U.S. midterm election that show another striking example of Facebook’s potential to direct its users’ influence on one another. Since 2008, the company has offered a way for users to signal that they have voted; Facebook promotes that to their friends with a note to say that they should be sure to vote, too. Marlow says that in the 2010 election his group matched voter registration logs with the data to see which of the Facebook users who got nudges actually went to the polls. (He stresses that the researchers worked with cryptographically “anonymized” data and could not match specific users with their voting records.)

This is just the beginning. By learning more about how small changes on Facebook can alter users’ behavior outside the site, the company eventually “could allow others to make use of Facebook in the same way,” says Marlow. If the American Heart Association wanted to encourage healthy eating, for example, it might be able to refer to a playbook of Facebook social engineering. “We want to be a platform that others can use to initiate change,” he says.

Advertisers, too, would be eager to know in greater detail what could make a campaign on Facebook affect people’s actions in the outside world, even though they realize there are limits to how firmly human beings can be steered. “It’s not clear to me that social science will ever be an engineering science in a way that building bridges is,” says Duncan Watts, who works on computational social science at Microsoft’s recently opened New York research lab and previously worked alongside Marlow at Yahoo’s labs. “Nevertheless, if you have enough data, you can make predictions that are better than simply random guessing, and that’s really lucrative.”

Doubling Data

Like other social-Web companies, such as Twitter, Facebook has never attained the reputation for technical innovation enjoyed by such Internet pioneers as Google. If Silicon Valley were a high school, the search company would be the quiet math genius who didn’t excel socially but invented something indispensable. Facebook would be the annoying kid who started a club with such social momentum that people had to join whether they wanted to or not. In reality, Facebook employs hordes of talented software engineers (many poached from Google and other math-genius companies) to build and maintain its irresistible club. The technology built to support the Data Science Team’s efforts is particularly innovative. The scale at which Facebook operates has led it to invent hardware and software that are the envy of other companies trying to adapt to the world of “big data.”

In a kind of passing of the technological baton, Facebook built its data storage system by expanding the power of open-source software called Hadoop, which was inspired by work at Google and built at Yahoo. Hadoop can tame seemingly impossible computational tasks—like working on all the data Facebook’s users have entrusted to it—by spreading them across many machines inside a data center. But Hadoop wasn’t built with data science in mind, and using it for that purpose requires specialized, unwieldy programming. Facebook’s engineers solved that problem with the invention of Hive, open-source software that’s now independent of Facebook and used by many other companies. Hive acts as a translation service, making it possible to query vast Hadoop data stores using relatively simple code. To cut down on computational demands, it can request random samples of an entire data set, a feature that’s invaluable for companies swamped by data.
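
The sampling feature can be illustrated in miniature: estimate an aggregate from a random fraction of the rows and scale the count back up, trading a little accuracy for a lot of speed. A sketch with synthetic data (the table contents and 1 percent sampling fraction are invented for the example):

```python
import random

random.seed(42)

# Stand-in for a huge fact table: one country code per "Like" click.
rows = [random.choice(["US", "BR", "IN", "DE"]) for _ in range(1_000_000)]

def sampled_count(rows, predicate, fraction=0.01):
    """Estimate how many rows satisfy `predicate` by scanning only a
    random `fraction` of them and scaling the count back up -- the same
    accuracy-for-speed trade that Hive's sampling support offers."""
    matches = sum(1 for r in rows if random.random() < fraction and predicate(r))
    return matches / fraction

exact = sum(1 for r in rows if r == "US")
estimate = sampled_count(rows, lambda r: r == "US")  # within a few percent of exact
```

At Facebook scale the win is that the sample can be read from a handful of machines instead of sweeping the entire cluster.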

Much of Facebook’s data resides in one Hadoop store more than 100 petabytes (a petabyte is a million gigabytes) in size, says Sameet Agarwal, a director of engineering at Facebook who works on data infrastructure, and the quantity is growing exponentially. “Over the last few years we have more than doubled in size every year,” he says. That means his team must constantly build more efficient systems.

All this has given Facebook a unique level of expertise, says Jeff Hammerbacher, Marlow’s predecessor at Facebook, who initiated the company’s effort to develop its own data storage and analysis technology. (He left Facebook in 2008 to found Cloudera, which develops Hadoop-based systems to manage large collections of data.) Most large businesses have paid established software companies such as Oracle a lot of money for data analysis and storage. But now, big companies are trying to understand how Facebook handles its enormous information trove on open-source systems, says Hammerbacher. “I recently spent the day at Fidelity helping them understand how the ‘data scientist’ role at Facebook was conceived … and I’ve had the same discussion at countless other firms,” he says.

As executives in every industry try to exploit the opportunities in “big data,” the intense interest in Facebook’s data technology suggests that its ad business may be just an offshoot of something much more valuable. The tools and techniques the company has developed to handle large volumes of information could become a product in their own right.

Mining for Gold

Facebook needs new sources of income to meet investors’ expectations. Even after its disappointing IPO, it has a staggeringly high price-to-earnings ratio that can’t be justified by the barrage of cheap ads the site now displays. Facebook’s new campus in Menlo Park, California, previously inhabited by Sun Microsystems, makes that pressure tangible. The company’s 3,500 employees rattle around in enough space for 6,600. I walked past expanses of empty desks in one building; another, next door, was completely uninhabited. A vacant lot waited nearby, presumably until someone invents a use of our data that will justify the expense of developing the space.

One potential use would be simply to sell insights mined from the information. DJ Patil, data scientist in residence with the venture capital firm Greylock Partners and previously leader of LinkedIn’s data science team, believes Facebook could take inspiration from Gil Elbaz, the inventor of Google’s AdSense ad business, which provides over a quarter of Google’s revenue. He has moved on from advertising and now runs a fast-growing startup, Factual, that charges businesses to access large, carefully curated collections of data ranging from restaurant locations to celebrity body-mass indexes, which the company collects from free public sources and by buying private data sets. Factual cleans up data and makes the result available over the Internet as an on-demand knowledge store to be tapped by software, not humans. Customers use it to fill in the gaps in their own data and make smarter apps or services; for example, Facebook itself uses Factual for information about business locations.

Patil points out that Facebook could become a data source in its own right, selling access to information compiled from the actions of its users. Such information, he says, could be the basis for almost any kind of business, such as online dating or charts of popular music. Assuming Facebook can take this step without upsetting users and regulators, it could be lucrative. An online store wishing to target its promotions, for example, could pay to use Facebook as a source of knowledge about which brands are most popular in which places, or how the popularity of certain products changes through the year.

Hammerbacher agrees that Facebook could sell its data science and points to its currently free Insights service for advertisers and website owners, which shows how their content is being shared on Facebook. That could become much more useful to businesses if Facebook added data obtained when its “Like” button tracks activity all over the Web, or demographic data or information about what people read on the site. There’s precedent for offering such analytics for a fee: at the end of 2011 Google started charging $150,000 annually for a premium version of a service that analyzes a business’s Web traffic.

Back at Facebook, Marlow isn’t the one who makes decisions about what the company charges for, even if his work will shape them. Whatever happens, he says, the primary goal of his team is to support the well-being of the people who provide Facebook with their data, using it to make the service smarter. Along the way, he says, he and his colleagues will advance humanity’s understanding of itself. That echoes Zuckerberg’s often doubted but seemingly genuine belief that Facebook’s job is to improve how the world communicates. Just don’t ask yet exactly what that will entail. “It’s hard to predict where we’ll go, because we’re at the very early stages of this science,” says ­Marlow. “The number of potential things that we could ask of Facebook’s data is enormous.”

The Real Price of ‘Free’ Online Services


Why Social Media Is Killing Your Online Privacy

On March 1st, Google announced a major change to its privacy policy: Google can now use any information it has collected about you in the past (from any of your Google accounts, such as Gmail or Google Maps) to provide “better” search results and advertising offers going forward.

So, for example, if you receive lots of travel-related e-mails in your Gmail account, don’t be surprised if ads from travel agencies suddenly pop up while you’re browsing YouTube.

For this, Google has been branded “evil” by many in the media and the privacy community. There are even complaints that the change may be illegal in the European Union.

This new privacy policy has certainly caught people off guard. A recent WSJ article detailing how Google overrode the privacy settings in the popular Safari browser to put tracking cookies on users’ computers has not helped. This, combined with earlier claims that Google was funded by the CIA, certainly makes it appear that Google is a major privacy abuser when it comes to users of its free services.

But they are far from alone, and far from being the worst.

For example, Twitter recently sold an archive of every “tweet” (what users call the messages sent using Twitter) ever posted via the service. DataSift, the company that purchased the Twitter archive, claims it is striking a similar deal with Facebook in the near future. DataSift plans to sell access to the archive to anyone who wants to tap into this information for marketing (or any other purpose). I am sure the FBI, NSA, and CIA all have their credit cards in hand…

What you see as an invasion of privacy is what these companies call a business model.

The Real Price You Pay for “Free” Online Services

This excellent article from CNET discusses the new paradigm in detail. While companies like Google, Facebook, and Twitter provide you with free services, what they get in return is far more valuable than anything they could charge users: free information about you, your life, and your preferences, plus a record of every search you run, tweet you send, and status update you post.

With Google at over 2 billion users, Facebook at over 845 million users, and Twitter at over 300 million users, these companies have data stores that are literally worth billions of dollars. They use this information to sell ads on their own networks (e.g., Facebook generated $3.2 billion last year in advertising revenue – not bad, huh?), or sell the information to companies like DataSift.

If you want to see the end result of all this lost privacy, go to Spokeo and enter your name and state. Chances are you are in there, along with where you live, your sex, your race, how much your house is probably worth, and who else in your family lives with you. And that is just the free information! For the low, low price of US$3.95 per month, someone can sign up for the service and REALLY dig into your personal information: your phone numbers, e-mail addresses, details about your religion, your hobbies, your political affiliation… all served up in a neat, tidy report.

If You Aren’t Part Of The Solution, You Are Part Of The Problem

If you go to Spokeo’s Privacy page (where you can find out how to remove your records from their service) you can see all the ways they collect information about you for their service – namely, by aggregating data from the following sources:

Social Networks
Real Estate Records
Marketing Surveys
Online Maps

There are ways to solve the privacy problems of a published home phone number (most phone companies can provide you with an unlisted number) and of real estate records in your name (holding the property through a New Mexico LLC, for instance)… but the sources that trouble me most are the social networks and the marketing surveys. Why?

Because they are the most detailed sources of information about you.
And because they represent damage you are inflicting on your own privacy.

You see, when you fill out an online survey in hopes of winning something for free, and the company tells you it needs your name, address, phone number, and e-mail address so it can contact you in case you win… you have just given away your privacy. Your contact information and all of your survey answers end up on a service like Spokeo, and now anyone in the world with a credit card can access personal details about you for any purpose they choose.

When you spend hours a day on Facebook, posting where you are at any given time, commenting on articles and clicking the “Like” button, companies can access that information for their own use. Facebook starts sending ads to your “friends” based on this information, and even uses your name and photos in the ads. And soon, all of that information will end up in the hands of DataSift.

At the end of the day, Google is not responsible for your privacy. Neither is Facebook, Twitter, Spokeo, DataSift, or any other company.

YOU are responsible for your own online privacy.

Google did not force you to sign up for a free Gmail account. Facebook did not make you post the intimate details of your life on its site. Twitter did not coerce you to tweet about every place you go to in a day. And no one held a gun to your head while you filled out that online marketing survey in hopes of winning a new iPad. You did all of that to yourself.

Facebook Free, and Still Sociable

Here is the good news… Since you choose to do all of that yourself, you can also choose to stop. You can detach yourself from these social networks. I have been Facebook and Twitter free for months now. Amazingly, my family and friends still find a way to communicate with me.

The bottom line is: don’t offer up your personal information for free to just any random marketing offer out there. Stop putting the details of your life online for someone else to monetize. Start using services like StartPage for your web searches to get more privacy. Use a VPN service that can help encrypt your Internet traffic and hide where you are browsing from. The sooner you take action, the sooner you can begin to take back your online privacy.

Facebook Has 25 Employees Who Only Provide Information To The Government


“Most of his security team is based at headquarters in Menlo Park, Calif. and sits at clusters of desks close enough to take dead aim at one another with Nerf darts. Broken roughly into five parts, the team has 10 people review new features being launched, 8 monitor the site for bugs and privacy flaws, 25 handle requests for user information from law enforcement, and a few build criminal and civil cases against those who misbehave on the network”

So only 10 people review new features for Facebook, while 25 people – 150 percent more – hand user information over to the government.

Facebook IS Datamining For The CIA

I have said this from day one, which is why I have never had a personal Facebook, MySpace, or Twitter account. I urge everyone to share this story, along with the other Facebook stories I have covered, with everyone they know.


My loyal readers may recall that DARPA (Defense Advanced Research Projects Agency) has some grotesque tentacles: the Information Awareness Office (IAO); TIA (Total Information Awareness, renamed Terrorism Information Awareness); and TIPS (Terrorism Information and Prevention System).

It is commonly believed that in 2003 an irate American people forced the government to stop these Orwellian command-and-control police state operations—or did they?

Congress stopped the IAO from gathering as much information as possible about everyone in a centralized nexus for easy spying by the United States government, including internet activity, credit card purchase histories, airline ticket purchases, car rentals, medical records, educational transcripts, driver’s licenses, utility bills, tax returns, and all other available data. The government’s plan was to emulate Communist East Germany’s STASI police state by getting mailmen, boy scouts, teachers, students and others to spy on everyone else. Children would be urged to spy on parents.

These layers of the mind control infrastructure were seemingly dead and buried. But was the stake actually driven through its evil heart? History leads us to believe that it was not.

Then, shazam, here comes the privacy-killing juggernaut called Facebook.

Facebook, however, does what Chairman Mao, Joseph Stalin, or Adolf Hitler could not have dreamt of – it has half a billion people willingly doing a form of spy work on all their friends, family, neighbors, etc.—while enthusiastically revealing information about themselves. The huge database on this half a billion members (and non-members who are written about) is too much power for any private entity—but what if it is part of, or is accessed by, the military-industrial-national security-police state complex?

We all know that “he who pays the check calls the shots”; whoever controls the purse strings controls the whole project. When it had less than a million or so participants, Facebook demonstrated the potential to do even more than IAO, TIA and TIPS combined. Facebook really exploded after its second round of funding—$12.7 million from the venture capital firm Accel Partners. Its manager, James Breyer, was formerly chairman of the National Venture Capital Association and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital front established by the CIA in 1999. In-Q-Tel is the same outfit that funds Google and other technological powerhouses. One of its specialties is “data mining technologies.”

Dr. Anita Jones, who joined the firm, also came from Gilman Louie and served on In-Q-Tel’s board. She had been director of Defense Research and Engineering for the U.S. Department of Defense. This link goes full circle because she was also an adviser to the secretary of defense, overseeing DARPA, which is responsible for high-tech, high-end development.

But as bad as Facebook’s beginnings are, the parallels between the CIA’s backing of Google’s dream of becoming “the mind of God” and the CIA’s funding of Facebook’s goal of knowing everything about everybody are anything but benign.

Furthermore, the CIA uses a Facebook group to recruit staff for its National Clandestine Service. Check it out if you dare.

Do not become a victim of this full frontal assault on your personal information. Think twice about putting your entire life on Facebook, or for that matter on any social media site. None of it is ever private. Everything you put online stays online forever, in a server farm somewhere, available to anyone who wants to analyze you and the people you love. These companies do not care about your privacy at all and place great value on uncovering all they can about you. They have an agenda that will become more and more apparent as time goes by. Believe it or not, a great change is coming in our culture that many choose to be blind to. The mass loss of liberty and freedom we are experiencing is just a signal of the direction this is all going.