Jan 19, 2024

Propaganda, All Is Phony?

How do you know?  That's epistemology's central question.  

René Descartes inaugurated the modern era of epistemology with two important works, Discourse on Method (1637) and Meditations on First Philosophy (1641).  In Meditations, he systematically challenged anything we might think we know, leaving his readers to doubt everything.  Then, with his now-famous argument, "I think, therefore I am," he claimed to have salvaged one bit of knowledge:  his own existence.  From this, he claimed to prove the existence of God, the immortality of the soul, and the reliability of our perceptions of the external world.  Later that century, others developed principles for uncovering truths about the world, thereby establishing the basis of the scientific method.

Scientists and philosophers of science have been developing and refining the method ever since.  Consequently, our understanding of the physical world has advanced by leaps and bounds, but the rigor employed to investigate the physical world cannot always be applied to the social world.  We commonly find ourselves with unique, anecdotal perspectives, subject to information thrust upon us by agenda-driven mass media and largely unregulated social media.  In this context, methods for justifying beliefs about the social world are hard to formulate and even harder to put into practice.  For many of our beliefs, the question remains:  how do you know?

I. Epistemic Constraints

There was a time when most of us were told about the world by three television news departments, a few radio stations, and a local paper.  The topics they covered and how they covered them were often fairly indistinguishable.  Some people lament the days when this small number of media sources provided us with a common set of beliefs that gave us a shared understanding of the world.  The Fairness Doctrine, abolished in 1987, required licensed broadcasters to present controversial issues in a manner that fairly reflected different views.  This alleviated most people's concerns about the systematic indoctrination of the public.

In their book Manufacturing Consent (1988), Edward Herman and Noam Chomsky explain why the Fairness Doctrine was incapable of preventing a small number of media corporations from narrowing the boundaries of public discourse.  Their central thesis was that media corporations constituted an oligopoly whose main purpose was to sell audiences to advertisers.  To be as profitable as possible, they would have to attract a wealthy audience, and their programming would have to ensure that the audience would have a favorable view of the companies that bought time for ads.  Limited time for advertisements meant that most nationwide advertising was done by large national corporations.  Consequently, a media ecosystem arose in which large media corporations sold wealthy audiences to other large corporations.  News departments were, of course, free to broadcast what they wanted, consistent with the Fairness Doctrine; but the system in which they operated precluded any serious, consistent criticism of corporate America.

The proliferation of cable stations began to destabilize the ecosystem.  It became even more unstable with the advent of Web 2.0, in which anyone could become a content creator.  Suddenly, millions of people operated in the system without a monetary stake.  Optimists believed that, over time, Web 2.0 would democratize the system.  People would be able to publish information and ideas unconstrained by the media oligopoly; but this information utopia never arrived.

Previously marginalized voices remain marginalized, since only large corporations have the resources to reliably reach a mass audience.  Ordinary individuals can reach anyone on the internet, but in practice they are lost in the ocean of other ordinary individuals.  Just as marginalized voices could once mimeograph their message and hand it out on street corners, they can now establish their own domain where they post reports and opinions.  That is, they can distribute their message on their own little cyber street corner.  In contrast, well-financed content creators can push their content to a mass audience.  Web 2.0 has not given us an information democracy.

II. Epistemic Chaos

The failure of the democratic promise is not, however, the worst outcome.  Social information siloes have formed as social media platforms intentionally partition the consumer market in order to sell specific segments to advertisers.  By customizing search results to feed different information to different people, they create more or less mutually exclusive consumer communities with natural affinities for specific political messages.  For example, people in the market for guns are a target for NRA propaganda.  People in the market for electric cars are a target for green energy propaganda.

Parasitic on these consumer/political communities, Web 2.0's millions of new content providers have produced a cacophony of often dubious information sources on the Web, some having no integrity at all.  The lack of any serious online interaction among these information sources and the persistent influence of confirmation bias among people searching the Web have created more than a few contradictory siloes of self-indoctrinating communities. 

It is here, in the creation of "information bubbles," that epistemic chaos is born.  As described above, bubbles come about through both our own search habits and the algorithms used by search engines.  The result is that different segments of the population are presented with different caricatures of the world, containing contradictory basic facts.  People in each bubble come to think their own beliefs are obvious and well-justified.  Anyone outside one's bubble appears to be utterly deluded.  It is true that some bubbles are more misguided than others, but none have a god's-eye view of the world.
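To make the mechanism concrete, here is a toy sketch, in Python, of the feedback loop just described.  It is purely illustrative -- the "platform," the viewpoints, and the numbers are invented for the example and do not correspond to any real recommendation system -- but it shows how a ranking that favors whatever a user has already clicked, combined with a user who prefers stories matching a prior belief, quickly produces a feed dominated by a single viewpoint.

    # Toy model of an information bubble: a hypothetical "platform" ranks stories
    # by how well they match the user's click history, and a user with
    # confirmation bias clicks stories that match a prior belief.
    import random
    from collections import Counter

    VIEWPOINTS = ["A", "B", "C"]   # stand-ins for contradictory framings of the news

    def days_stories(n=30):
        """One day's worth of stories, each tagged with a viewpoint."""
        return [random.choice(VIEWPOINTS) for _ in range(n)]

    def rank(stories, history):
        """Platform: put viewpoints the user clicked before at the top of the feed."""
        counts = Counter(history)
        return sorted(stories, key=lambda view: counts[view], reverse=True)

    def click(feed, prior, bias=0.9):
        """User: usually click a story matching the prior belief, if one is visible."""
        if random.random() < bias:
            for story in feed:
                if story == prior:
                    return story
        return feed[0]   # otherwise, click whatever the platform ranked first

    def simulate(days=50, feed_size=10, prior="A"):
        history = []
        for _ in range(days):
            feed = rank(days_stories(), history)[:feed_size]
            history.append(click(feed, prior))
        return Counter(history)

    if __name__ == "__main__":
        random.seed(0)
        print(simulate())   # the click history ends up almost entirely viewpoint "A"

Real systems are, of course, vastly more complicated, but the basic loop -- personalization amplifying prior behavior, and prior behavior shaping what is shown next -- is the point of the illustration.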

Beyond these systemic distortions, conscious efforts to shape public opinion are not new in the U.S.  In 1917, Woodrow Wilson created the "Committee on Public Information," aimed at generating support for U.S. involvement in World War I.  Since then, the U.S. government has engaged in numerous efforts to influence public opinion.  A briefly declassified 1979 Army field manual on psychological operations (PSYOPS) describes three types of propaganda: white, grey, and black.  White propaganda is truthful information disseminated to conscientiously inform people; grey propaganda is truthful information that, by not telling the whole story, leads people to false beliefs; and black propaganda is, well, simply flat-out lies.

The PSYOPS manual explicitly forbade targeting the U.S. population with black propaganda; however, in his book In Search of Enemies, former CIA officer John Stockwell described the common use of black propaganda in foreign countries.  Stockwell worked for the Agency for 12 years in Congo, Burundi, Vietnam, and Angola.  He noted that black propaganda disseminated in foreign countries would inevitably be picked up by the domestic media when they reported stories sourced abroad.

With the advent of Web 2.0, the problem has been compounded by individuals and private organizations -- including political parties and campaign committees -- that intentionally promulgate grey and black propaganda.  Using strategies specifically designed to make their messages "go viral," they get their disinformation passed on unwittingly by others.  In the new Web 2.0 ecosystem, truth has taken a back seat to persuasion, and openly lying about the most well-established facts has become common in politics.

The license to lie would be bad enough even without the advent of artificial intelligence.  The internet has been a cesspool of political disinformation for a number of years now, but AI now provides people with the ability to create "deep fakes," i.e., fabricated images, videos, and sound recordings that will escape all but the most sophisticated digital forensic scrutiny.  The likely result of the widespread use of AI is that people will have even greater reason to reject any evidence they choose to disagree with.  We will all become more and more deeply entrenched in our information bubbles, falling prey to confirmation bias and immune to counterarguments.  The answer to the question, "how do you know?" will become, "Well, you never do," so you might as well believe what is commonly accepted in your social milieu.

III. Epistemic Virtues

Is there a way to retain some semblance of a connection to reality?  I think at this point the answer is more or less "yes," but one must make a diligent effort to cultivate epistemic virtues.  That's a fancy way of saying that one must develop habits that make it possible to recognize unfounded claims and to form reliable beliefs.  This involves many things, including the vigorous examination of one's sources of information -- their prior perspectives, their institutional limitations, and their track record for accuracy.  It involves high standards for accepting testable claims, appropriate respect for expertise, and the recognition of gradations of justification.  Most importantly, it requires a healthy dose of critical self-examination.

Maintaining a system of beliefs that roughly conforms to reality is like gardening.  You must prepare the soil to create good conditions for desirable beliefs.  You must keep them healthy and remove any weeds as soon as they appear.  But the real problem is global epistemic chaos.  None of us alone can solve that, no matter how diligently we tend our personal gardens.  If there is a broader, social response that can mitigate the growth of the chaos, it certainly isn't clear to me what it would be.
