In my post about Talcott Parsons, I mention Harry Frankfurt’s philosophical pamphlet On Bullshit, an excellent example of a serious academic tackling a seemingly silly concept. In essence, Frankfurt argues that while liars know, or at least care about, the truth, the bullshitter is indifferent to it, and to whether a perspective is sound or confused. I will argue here that bullshit is like vibes, in that it is a way of knowing and feeling about social issues that is partially disconnected from empirical research.
I have defined vibes as “Intuitive feelings independent of reasoning.” In a 21st century of data micro-targeting, demand pricing, and now the degradation of internet search engines by AI sludge and attempts to monetize search by steering users toward certain brands, figuring out what’s real and what’s a bunch of bullshit has followed something like a bell curve.
So here’s a Wolfram Alpha graph I built about this idea: that information availability and accuracy have peaked, perhaps in the late 2000s, before many of the paywalls had come into effect.
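As a rough stand-in for that graph, the bell-curve idea can be sketched as a Gaussian over time. The peak year and spread below are illustrative assumptions, not measured data:

```python
import numpy as np

# Hypothetical bell curve of freely available, accurate information,
# peaking in the late 2000s. Peak year and spread are assumptions
# chosen only to illustrate the shape of the claim.
years = np.arange(1990, 2026)
peak_year, spread = 2008, 8
quality = np.exp(-((years - peak_year) ** 2) / (2 * spread ** 2))

best_year = int(years[np.argmax(quality)])
print(best_year)  # 2008: the assumed high point of the curve
```

This is purely a visualization of the intuition: by construction, quality rises toward the assumed peak and declines afterward as paywalls and SEO sludge accumulate.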
We’re just eyeballing this, but in essence: sound information and data have become harder to access freely, while bullshit is always available for free. Because sites are chasing clicks, they use SEO to generate hundreds of thousands of articles on anything you might search for, each a page devoid of any substance or answer. It’s an entirely hollow thing. I would argue this isn’t even bullshit; it’s purely text that exists to interact with a search algorithm, and is thus uninterested in actual people.
If you are a sociology student (or even more so, if you’ve graduated and lost database access through your school), it’s becoming harder to produce good social science analysis. So much gibberish populates most search engines that old adages like “use .gov or .edu” and “avoid Wikipedia” no longer offer much protection. Much of what exists now is Wikipedia restated, or even less, as people who edit Wikipedia pages either think they know something or are lying about something (congressional staffers are obvious examples), knowing it will be one of the first results.
This has also meant that peer-reviewed (or pretending-to-be-peer-reviewed) journals are rapidly having to deal with Large Language Model (LLM) AI composing journal articles, legal briefs, and more. Hammering out policies requiring disclosure has been important. LLMs are bullshit generators. An LLM is basically an improv troupe member: it will ‘yes-and’ your prompt. That means it creates what you want to see, leading to “AI hallucination,” where it invents academic and legal citations, news stories, and self-help information. It has even been used to generate a sexual misconduct scandal about an academic where there was no evidence! It created something entirely fictional and yet seemingly plausible.
LLMs don’t care about the truth. They aim to please users so they will be used more, so that eventually people will provide saleable data and pay for premium services. They will also be involved in automating office work, where so much of email and report writing is essentially semantic ritual that nobody reads closely enough to realize it’s bullshit.
So in a sociology of vibes, where is bullshit? Bullshit and vibes both deal with the intuitive and the plausible. Bullshit is an external force that pretends to expertise or tries to avoid getting in trouble. Vibes come from a gut feeling that may be influenced by bullshit or may cut right through it. Being in tune with our vibes makes it easier to recognize that while we may not know the answers, we can often tell when other humans or programs don’t know either.