Facebook Reportedly Wants to Use AI to Predict Your 'Future Behavior'—So Advertisers Can Change It

Among the questions left unanswered at Mark Zuckerberg’s congressional hearings this week was whether he would be willing to change Facebook’s business model in order to protect users’ privacy; the CEO seemed stumped by it. Facebook’s data collection has received a lot of attention from a security perspective, but a new report illustrates why we should be just as concerned about how the company uses that data to influence our behavior.

The Intercept has obtained what it claims is a recent document that describes a new service being offered to Facebook’s advertising clients. Going beyond micro-targeting ads based on what it knows about your past and present, the social media company is now reportedly offering to use its artificial intelligence to predict what you will do in the future—and giving clients the opportunity to intervene through a barrage of influence. From the report:

One slide in the document touts Facebook’s ability to “predict future behavior,” allowing companies to target people on the basis of decisions they haven’t even made yet. This would, potentially, give third parties the opportunity to alter a consumer’s anticipated course. Here, Facebook explains how it can comb through its entire user base of over 2 billion individuals and produce millions of people who are “at risk” of jumping ship from one brand to a competitor. These individuals could then be targeted aggressively with advertising that could pre-empt and change their decision entirely — something Facebook calls “improved marketing efficiency.” This isn’t Facebook showing you Chevy ads because you’ve been reading about Ford all week — old hat in the online marketing world — rather Facebook using facts of your life to predict that in the near future, you’re going to get sick of your car. Facebook’s name for this service: “loyalty prediction.”
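
To put the mechanics in concrete terms, here is a minimal sketch of what a “loyalty prediction” service amounts to under the hood: train a classifier on past behavior, score every user’s probability of defecting from a brand, and hand the highest-risk segment to the ad-targeting machinery. To be clear, everything below is an illustration built on invented feature names and synthetic data; it is not Facebook’s code, and it assumes nothing about FBLearner Flow’s actual API.

```python
# A hedged, minimal sketch of brand-"loyalty prediction." All features,
# labels, and thresholds are hypothetical stand-ins, not Facebook's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users = 100_000

# Synthetic behavioral signals (imagine: engagement with the brand,
# engagement with competitors, recency of purchase-related activity).
X = rng.normal(size=(n_users, 3))

# Synthetic labels: 1 = the user switched brands in a past observation window.
y = (X @ np.array([-1.0, 1.5, 0.5]) + rng.normal(size=n_users) > 1.0).astype(int)

# Fit a churn model on historical behavior.
model = LogisticRegression().fit(X, y)

# Score current users and pull the "at risk" segment for aggressive targeting.
churn_risk = model.predict_proba(X)[:, 1]
at_risk = np.argsort(churn_risk)[::-1][:10_000]  # the 10,000 likeliest defectors
print(f"flagged {at_risk.size} users; top risk score {churn_risk[at_risk[0]]:.2f}")
```

Swap the synthetic arrays for real behavioral logs at Facebook’s scale, and those same few lines describe a system that can mint “at risk” audiences continuously.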

Facebook is reportedly using its FBLearner Flow technology to drive this new initiative. The tool was first introduced in 2016 as Facebook’s next step in machine learning, and since then it’s been discussed as a way to improve people’s experience on the platform rather than as a way to improve marketing. Any time a member of Congress asked Zuckerberg for a solution to a tough problem this week, his response was some variation on “better AI will solve it.” Well, the company calls FBLearner Flow the “backbone” of its AI initiative, and it introduces plenty of problems of its own.

For years, advertising has relied on a few core principles and a handful of tools. There are essentially two kinds of businesses: those that recognize a problem and offer a solution, and those that have a solution and want to introduce a problem that people didn’t really have before. Advertising is useful for both, but it’s essential for the latter. It’s well-understood that ad execs prey on people’s insecurities and concoct unnecessary desires to shape their behavior. And for a long time, that business was conducted through gut feelings, limited market research, and a little dash of Freud. The era of Big Data changes that.

Someone might lie or withhold the truth from a marketing survey, but they spill their guts inside their private “place for friends.” Online users betray their true instincts as they travel around the web tracked by cookies and walk around the real world bugged by the GPS in their phones. Now we have facial recognition, ubiquitous cameras, microphones, and fingerprint scanners to worry about. Even so, billions of data points about billions of people can’t be effectively parsed by humans, so we have machines go through it all, categorize it, and analyze it.

If you keep up with the marketing or tech business, you probably think you understand that. Check out tech Twitter to see people smugly explaining that we’ve known everything bad about Facebook since forever. But just because knowledge is around doesn’t mean that everyone understands it, or has received it, or has been convinced to accept it as true. The Intercept is detailing further developments about a tool that would, of course, be used for marketing purposes but has barely been discussed in that light.

It’s worth hammering home just how consequential it is that an algorithm is slowly being trained to be extremely good at making behavioral predictions, extremely good at monitoring how those predictions play out, and extremely good at adjusting based on its failures and successes. When that same system is trained to modify your behavior through advertising, it’s going to learn how to do it well. It’s equally troubling that Facebook will have a monetary incentive to make its predictions come true. Frank Pasquale, a scholar at Yale’s Information Society Project, pointed out to The Intercept that it’s entirely possible that AI predictions will become “self-fulfilling prophecies.”
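
That loop of predicting, intervening, observing the outcome, and updating is the basic shape of online machine learning, and it takes strikingly little code to express. The sketch below is entirely simulated and assumes nothing about any real ad system; it only shows the structure of the feedback loop.

```python
# A toy version of the predict/intervene/observe/update loop described above.
# Every signal here is simulated; this illustrates structure, not a real system.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")
# Warm-start the online model so partial_fit knows both classes exist.
model.partial_fit(rng.normal(size=(2, 4)), [0, 1], classes=[0, 1])

for day in range(30):
    features = rng.normal(size=(1000, 4))        # today's user signals
    risk = model.predict_proba(features)[:, 1]   # predicted switch risk
    targeted = risk > 0.5                        # intervene: serve the ad

    # Simulated ground truth: the ad itself nudges behavior, so the
    # intervention feeds back into the outcomes the model then learns from.
    base = 1 / (1 + np.exp(-features[:, 0]))     # untouched switching odds
    outcome = rng.random(1000) < np.where(targeted, base * 0.7, base)

    # Adjust on the successes and failures, exactly the loop described above.
    model.partial_fit(features, outcome.astype(int))
```

The troubling part is the line where the intervention changes the outcome: the model isn’t just forecasting behavior, it’s training on behavior it helped cause.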

Say Facebook tells a client that its system predicts 10,000 people will stop buying name-brand detergent this year. It goes to all of the name-brand detergent advertisers with that prediction, and they all decide not to run Facebook ads. Over the course of the year, Facebook then has an incentive to make its prediction come true, tilting what those users see in ways that nudge them away from name-brand detergent, proving its forecasts right and punishing the brands that declined to pay. Pasquale notes that this is akin to a machine learning “protection racket.”
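
The incentive problem is easy to put numbers on. Here is a deliberately crude simulation, with made-up figures throughout, of how tilting a feed can turn a forecast into a result:

```python
# Made-up numbers illustrating the "self-fulfilling prophecy" worry: predict
# ~10,000 defections, then tilt what the predicted defectors see so their
# actual defection rate climbs toward the forecast.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
predicted = rng.random(n) < 0.10   # the forecast flags ~10,000 likely defectors
base_rate = 0.06                   # how many would actually defect, untouched
tilt = 0.05                        # extra push from a deliberately tilted feed

p = np.where(predicted, base_rate + tilt, base_rate)
defected = rng.random(n) < p

print(f"forecast defectors: {predicted.sum()}")
print(f"defection rate inside the flagged group:  {defected[predicted].mean():.0%}")
print(f"defection rate outside the flagged group: {defected[~predicted].mean():.0%}")
```

With the tilt in place, the flagged group defects at nearly twice the background rate, and the platform’s forecast looks prescient.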

One of the common lines of questioning Zuckerberg received from Congress this week was whether Facebook provides all of the data it possesses on a user through its “downloading your info” tool. Zuckerberg was evasive and took advantage of his inquisitors’ lack of technical knowledge, always saying that’s his “understanding” or hedging with “your” data. He was asked directly: “If I download my Facebook information, is there other information accessible to you within Facebook that I wouldn’t see on that document, such as browsing history or other inferences that Facebook has drawn from users for advertising purposes?” His response was “Congressman, I believe that all of your information is in that—that file.” He can get away with that answer because Facebook doesn’t consider the inferences it draws from your data to belong to you. It would be crazy interesting to find out what Facebook thinks we’re going to do in the future, but that would ruin the whole gambit, because you’d have it in the back of your mind when making those decisions.

By its own admission, Facebook is unable to adequately address many of the negative consequences of its scale. “The reality of a lot of this is that when you are building something like Facebook that is unprecedented in the world, there are going to be things that you mess up,” Zuckerberg told reporters in a conference call last week. Things it has “messed up” include aiding foreign actors in a propaganda campaign to interfere in the 2016 US election and serving as a conduit for ethnic cleansing in Myanmar. If you follow the news, you know that there are many, many more examples. It’s not hard to imagine things going terribly wrong when Facebook fulfills its dream of deploying its own mini-version of the internet in regions of the world that don’t have it, and then proceeds to nudge people in whatever direction the highest bidder demands.

Facebook did not respond to The Intercept’s questions about whether these behavior-prediction tools are currently offered to clients working on political campaigns or healthcare. We’ve also requested an answer to that question and will update this post if and when we receive a reply.

But Facebook isn’t the only company to worry about; everyone is working on machine learning in one way or another. For now, AI systems will be clumsy, but likely superior to old-fashioned market research. When machine learning really comes into its own, it could hold tremendous power over our dumb monkey brains. It will be like the difference between a musket and a rocket launcher. We should really consider whether we want to continue in this direction, largely oblivious thanks to corporate secrecy.

Update: Facebook provided this almost completely unrelated response to our inquiry: “Facebook, just like many other ad platforms, uses machine learning to show the right ad to the right person. We don’t claim to know what people think or feel, nor do we share an individual’s personal information with advertisers.”

[The Intercept]
