On November 2, 2010, Facebook’s American users were subject to an ambitious experiment in civic engineering: Could a social network get otherwise-indolent people to cast a ballot in that day’s congressional midterm elections?
The answer was yes.
The prod to nudge bystanders to the voting booths was simple. It consisted of a graphic containing a link for looking up polling places, a button to click to announce that you had voted, and the profile photos of up to six Facebook friends who had indicated they’d already done the same. With Facebook’s cooperation, the political scientists who dreamed up the study planted that graphic in the newsfeeds of tens of millions of users. (Other groups of Facebook users were shown a generic get-out-the-vote message or received no voting reminder at all.) Then, in an awesome feat of data-crunching, the researchers cross-referenced their subjects’ names with the day’s actual voting records from precincts across the country to measure how much their voting prompt increased turnout.
Overall, users notified of their friends’ voting were 0.39 percent more likely to vote than those in the control group, and any resulting decisions to cast a ballot also appeared to ripple to the behavior of close Facebook friends, even if those people hadn’t received the original message. That small increase in turnout rates amounted to a lot of new votes. The researchers concluded that their Facebook graphic directly mobilized 60,000 voters and, thanks to the ripple effect, ultimately caused a total of 340,000 additional votes to be cast that day.
From Scientific American
While proponents view such big data analytics as promising tools for discovering useful insights in medicine, education, marketing and many other fields, consumer advocates warn that without explicit federal rules or policies overseeing their use, computer-generated algorithms could potentially be used to identify people who would prefer to remain anonymous, or to discriminate unfairly. They could be used, for example, to offer some consumers perks while charging others higher prices or interest rates.
Now, in clear, conversational language, Davenport explains what big data means, and why everyone in business needs to know about it. “Big Data at Work” covers all the bases: what big data means from a technical, consumer, and management perspective; what its opportunities and costs are; where it can have real business impact; and which aspects of this hot topic have been oversold.
The so-called Big Data movement, which has been largely co-opted by the for-profit education industry, will serve as “a portal to fundamental change in how education research happens, how learning is measured, and the way various credentials are measured and integrated into hiring markets,” says Mitchell Stevens, an associate professor of education at Stanford University. “Who is at the table making decisions about these things,” he says, “is also up for grabs.”

There are a few different ways to try to wrap your head around the implications of the Big Data movement in higher education. You can think about how it changes the experience of teachers and students in the classroom. You can think about how it informs the strategic thinking in university administrative offices. You can think about how it binds nonprofit universities and for-profit product vendors in new legal relationships.
The White House released a long-awaited report Thursday on how the technology industry’s collection of big data affects the online privacy of millions of Americans.
The report, authored by a group led by White House counselor John Podesta, makes several recommendations on how the government can address the privacy risks posed by widespread commercial data collection.
The report recommends that Congress pass national data breach legislation, extend privacy protections to non-U.S. citizens, and update the Electronic Communications Privacy Act, which controls how the government can access e-mail.
London’s Metropolitan Police Service, in collaboration with Accenture, just tested a new predictive policing system to assess the likelihood of known gang members re-offending, and already people are comparing it to the pre-crime system in “Minority Report.” The pilot program combined four years of historic data of criminal activity across London with social media network analysis to predict which gang members were most likely to commit a violent crime in the following year. While the addition of social network analysis to existing big data capabilities represents the next big step in being able to predict wrongdoing and spot likely wrongdoers, we’re still a long way from having a pre-crime system that truly predicts, rather than just forecasts, potential crime.