“Free” isn’t free: A Ronin Research Scholar examines the web and its problems

By Ronin Research Scholar Ralph Haygood 

Remember when the World Wide Web was new and shiny (albeit somewhat rickety)? It wasn’t very long ago. Like me, many Ronin Research Scholars no doubt can recall the widespread excitement about the new medium. I was in graduate school when the web took off and became part of everyday life.

Two decades later, it isn’t just names like AltaVista, GeoCities, and Netscape that have faded into history. Instead of excitement, there’s widespread concern that the web has become problematic, possibly doing more harm than good. These days, discussions of the web tend to emphasize fake news, hate speech, compulsive “doomscrolling”, and the unaccountable power of a few big companies like Google and Facebook. How did we get here, and what should we do about it?

That’s the subject of my new book “Free” isn’t free: The Original Sin of the web and what to do about it. The book explains that a major cause of many problems with the web is what it dubs the Original Sin of the web: collecting personal information about users and selling it to marketers. Web companies offer us “free” services, on the condition that we let them “data-mine” us and sell the data to people who, in turn, use it to try to sell us everything under the sun. However, “free” isn’t free; this business model has significant costs that we all pay.

So what’s the solution? Obviously, better laws could help, particularly by limiting what information web companies are allowed to collect about us and what they’re allowed to do with it. However, I argue that the key to a better web is for us users, rather than marketers, to become the customers. This isn’t a panacea, but it addresses multiple problems with the web at once, by reducing conflicts of interest between websites and users.

Although other books cover some of the same ground, I felt it was worth writing “Free” isn’t free in order to present the main issues concisely, highlight the central significance of the Original Sin, and address objections to making users the customers. As obvious as making users the customers may seem, most discussions of the web and its problems ignore or downplay this possibility. “Free” isn’t free examines several common objections to it, arguing that although some of them are warranted, none of them is decisive. For example, although there are reasonable concerns about deepening the “digital divide” between people who can afford to pay for the web and people who can’t, there are also practical strategies for avoiding this outcome even in a web supported by users.

Who am I to write such a book? The answer may interest even Ronin scholars who aren’t especially interested in the web and its problems. Like the founder of the Ronin Institute, Jon Wilkins, I’m an evolutionary geneticist, with a Ph.D., postdoctoral fellowships, and published research. However, before all that, I was a computer programmer and researcher. In fact, I found my way into evolutionary genetics through genetic algorithms, computation schemes inspired by evolutionary genetics. During my years as a grad student and postdoc, I remained attentive to developments in computation, and since leaving academia, I’ve made a living mostly by creating web applications. So I’ve been building, using, and pondering the web for quite a while.

One reason why I decided not to become a professor was that I didn’t relish the prospect of devoting myself almost exclusively to a single topic for many years in order to establish myself as the world’s leading authority on that topic. As competition for jobs and funding has become ever more intense, many academics have found that professional survival demands focus to the point of monomania. So an academic career seemed too cramped for my interests, which have always been broad (e.g., before I worked with computers, I studied physics and mathematics). Of course, a project such as writing “Free” isn’t free may require sharply focused attention and effort for weeks or months at a time. However, when it’s finished, I’m free to contemplate quite different things if I wish. Fortunately, as a software developer, I’m able to make a comfortable living from part-time work, leaving many hours for other pursuits. If more people were able to do likewise, I expect that many of their “other pursuits”—art, science, environmental conservation, social justice, and much more—would enrich us all.

I’m grateful for and enthusiastic about the Ronin Institute, which encourages and facilitates scholarly work by people like me who choose to spread our attention and effort more broadly than most academics are free to do.

I thank Keith Tse for inviting me to post here.

“Free” isn’t free is available as an e-book or paperback. For links to sellers, visit the website for the book.

Ralph Haygood is a population biologist, emphasizing evolutionary genetics and mathematical, computational, and statistical methods. He is also a software developer, emphasizing web applications. He has been a Ronin Research Scholar since 2012—before it was trendy! He currently lives in Vancouver, British Columbia. You can read more about him and his work on his website.

This post is a perspective of the author, and does not necessarily reflect the views of the Ronin Institute.


  1. I’d like to suggest a contrary position, namely that “collecting personal information about users and selling it to marketers” is not necessarily “sinful.”

    Why should I as a consumer be concerned that companies may know a lot about my needs and preferences? Assuming no fraud, extortion, or blackmail, isn’t it a good thing for companies to know what I may want to buy? Why should they waste their time and money and I waste my time and money as they tell me things in which I have no interest? Assuming the volume of advertising remains the same, I’d much rather see ads for products that interest me than for those that don’t.

    • Consider four points:

      (1) Covert data collection and algorithmic inferences apply not only to ads but to “organic content” such as search results and social-media posts. As I say in the book regarding social media:

      “It’s important to recognize that social-media sites aren’t unbiased communication channels. They don’t simply show us everything our ‘friends’, ‘followees’, etc. post and nothing else, as we might reasonably expect. Instead, the industry has embraced so-called algorithmic feeds, which means they show us only some of what our ‘friends’, ‘followees’, etc. post. Moreover, they show us other things, such as posts by people we’ve never heard of that garnered a ‘like’, comment, etc. from one of our ‘friends’, ‘followees’, etc. or even posts that just happen to be ‘trending’ at the moment.”

      So this isn’t just about ads. Particularly with respect to social media, it isn’t even mostly about ads.

      (2) When you use the web (as a medium of mass communication, which is the focus of the book, rather than e-commerce and other task-oriented aspects of the web), you usually indicate fairly explicitly what you’re interested in. If it’s a search engine, you enter a query. If it’s a social-media site, you choose users to “like”, “follow”, etc. If it’s a news article or blog post, why would you even skim it unless you’re at least maybe interested in it? And so on. So there’s little need for covert data collection or algorithmic inference in order to show you what you want to see. Even ads often can be targeted effectively based on context alone. That’s why DuckDuckGo is a profitable business.

      (3) However, web companies that commit what I’ve dubbed (with a dash of hyperbole) the Original Sin don’t necessarily aim to show you what you want to see. What they aim to maximize isn’t your satisfaction but your engagement. As I say in the book:

      “If you’re running a web company, and your business is selling personal information about users, then you have to collect personal information about users. And the more you collect, the more you can sell. Naturally, you want to know users’ genders, ages, ethnic backgrounds, sexual orientations, etc., because that’s what marketers want to know about them. But that isn’t all. You want to know anything and everything about them that might help you keep them engaged … that is, using your website and hence seeing advertisements and doing things that tell you still more about them.”

      As much as for targeting ads, that’s the purpose of the covert data collection and algorithmic inferences: to keep you glued to your screen. And as the book explains at length, the side effects are apt to include making you less well-informed and less happy. (There’s an obvious analogy with narcotics that make users feel good for a while but have nasty side effects such as addiction.)

      (4) The misalignment of interests between websites and users when users aren’t the customers tends to ramify. For example, privacy settings of sites like Google and Facebook are notoriously confusing, partly because they’re laced with “dark patterns” that encourage users to surrender privacy. (There’s actually an app, Jumbo, that tries to present some of Google’s and Facebook’s privacy settings more comprehensibly than Google and Facebook themselves do.) This kind of problem potentially exists even for sites like DuckDuckGo that are supported by contextual advertising, but it’s acute for sites like Google and Facebook that commit the Original Sin.

    • However, Mark Zuckerberg certainly would affirm this comment. Here’s an excerpt from his testimony to the Senate Judiciary Committee in April 2018:

      “What we found is that even though some people don’t like ads, people really don’t like ads that aren’t relevant. And while there is some discomfort for sure with using information in making ads more relevant, the overwhelming feedback that we get from our community is that people would rather have us show relevant content there than not.”


      Of course, Zuck was subtly misrepresenting the situation, in that most Facebook users haven’t given informed consent to the methods used to show them “relevant” ads; see, for example, “Most Facebook users still in the dark about its creepy ad practices, Pew finds” (https://techcrunch.com/2019/01/16/most-facebook-users-still-in-the-dark-about-its-creepy-ad-practices-pew-finds/).

      You may not be bothered by those methods. However, there’s a good deal of evidence that most people are bothered by them, once they learn about them; see, for example, “Only 17% of consumers believe personalized ads are ethical, survey says” (https://www.forbes.com/sites/johnkoetsier/2019/02/09/83-of-consumers-believe-personalized-ads-are-morally-wrong-survey-says/).
