What Causes Someone to Join a Cult?
And will those we hope to disabuse ever believe our debunking explanations?
At the New Yorker, Zoë Heller writes about cults and other “belief-based organizations.” We often turn to the concept of “brainwashing” when we try to explain why otherwise regular people join cults and avow belief in obviously crazy ideas. But brainwashing is a rather unscientific, woolly concept. Heller writes:
The term “brainwashing” was originally used to describe the thought-reform techniques developed by the Maoist government in China. Its usage in connection with cults began in the early seventies. Stories of young people being transformed into “Manchurian Candidate”-style zombies stoked the paranoia of the era and, for a time, encouraged the practice of kidnapping and “deprogramming” cult members. Yet, despite the lasting hold of brainwashing on the public imagination, the scientific community has always regarded the term with some skepticism. Civil-rights organizations and scholars of religion have strenuously objected to using an unproven—and unprovable—hypothesis to discredit the self-determination of competent adults. Attempts by former cult members to use the “brainwashing defense” to avoid conviction for crimes have repeatedly failed. Methods of coercive persuasion undoubtedly exist, but the notion of a foolproof method for destroying free will and reducing people to robots is now rejected by almost all cult experts.
I will put off for another time any comment of my own on the nonsensical idea of destroying something that doesn’t exist (free will) and instead follow Heller’s train of thought from the untenable notion of brainwashed cult members to the idea that people join with their agency and volition more or less intact.
Heller writes:
Acknowledging that joining a cult requires an element of voluntary self-surrender also obliges us to consider whether the very relinquishment of control isn’t a significant part of the appeal.
The historian and psychiatrist Robert Lifton’s book Thought Reform and the Psychology of Totalism (1961) provided one of the earliest and most influential accounts of coercive persuasion. Heller writes:
People with certain kinds of personal history are more likely to experience such a longing: those with “an early sense of confusion and dislocation,” or, at the opposite extreme, “an early experience of unusually intense family milieu control.” But [Lifton] stresses that the capacity for totalist submission lurks in all of us and is probably rooted in childhood, the prolonged period of dependence during which we have no choice but to attribute to our parents “an exaggerated omnipotence.”
[…]
Some scholars theorize that levels of religiosity and cultic affiliation tend to rise in proportion to the perceived uncertainty of an environment.
Or, I would hasten to add, economic and other kinds of insecurity—perhaps even metaphysical insecurity.
The less control we feel we have over our circumstances, the more likely we are to entrust our fates to a higher power. […] This propensity has been offered as an explanation for why cults proliferated during the social and political tumult of the nineteen-sixties, and why levels of religiosity have remained higher in America than in other industrialized countries. Americans, it is argued, experience significantly more economic precarity than people in nations with stronger social safety nets and consequently are more inclined to seek alternative sources of comfort.
All the more reason for the Biden administration and the Democratic-controlled Congress to pass legislation creating a stronger social safety net along the lines of Bernie Sanders’ and Elizabeth Warren’s great model: the Northern European social democracies. There is considerable sociological and political wisdom to suggest that right-wing fascist elements will persist in the United States until insecurity of various kinds is eliminated.
The problem with any psychiatric or sociological explanation of belief is that it tends to have a slightly patronizing ring. People understandably grow irritated when told that their most deeply held convictions are their “opium.” (Witness the outrage that Barack Obama faced when he spoke of jobless Americans in the Rust Belt clinging “to guns or religion.”)
Here, finally, is where I can get philosophical in my own right. We are in the domain of the philosophy of social science, specifically the idea of levels of explanation and the relations among them. An individual will insist that they believe in God, for example, because they have heard His voice, or for no reason other than faith, which is allowed to be unreasonable and not reasons-responsive. But a sociologist, or another type of social scientist, has real data showing that, at the level of statistics, if you decrease economic insecurity and reduce threats to cultural prestige, you will see a decrease in religiosity as measured by traditional sociological means.
I think both explanations can be true; and the fact that the subject of a sociological explanation vehemently objects to it is not in itself a drop-dead argument against that explanation’s truth. This is especially so when the explanation operates at the level of statistical probabilities rather than individual cases.
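To make the two-levels point concrete, here is a minimal toy simulation (a sketch in Python; the agent model, the numbers, and the 0.3 + 0.4 × insecurity response curve are all invented for illustration, not drawn from any real sociological dataset). Each simulated agent believes or doesn’t for its own private reasons, yet the population-level belief rate still tracks the macro insecurity variable.

    # Toy illustration of two compatible levels of explanation (invented
    # numbers, not real sociology): each agent believes for its own private
    # reasons, yet the aggregate belief rate tracks a macro variable.
    import random

    random.seed(0)
    N = 100_000  # number of simulated agents

    def belief_rate(insecurity: float) -> float:
        """Fraction of agents who believe, given a macro insecurity level in [0, 1]."""
        believers = 0
        for _ in range(N):
            propensity = random.random()  # the agent's own, subjective reasons
            # The (made-up) response curve: insecurity shifts each agent's
            # probability of belief but never settles any single case by itself.
            if propensity < 0.3 + 0.4 * insecurity:
                believers += 1
        return believers / N

    for level in (0.9, 0.5, 0.1):
        print(f"insecurity={level:.1f} -> belief rate ~ {belief_rate(level):.3f}")

No single agent’s belief is settled by the macro variable; it shows up only in the aggregate, which is why the sociological regularity and the believer’s first-person account need not compete.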
How can I defend the idea that it can be simultaneously true that an economically disenfranchised subject nevertheless believes in the Biblical God for their own subjective reasons and also that, statistically speaking, you’ll find less religious belief if you change certain socioeconomic macro-conditions? It’s a long answer. But briefly, it has to do with the fact that you need not confront the subject with this debunking explanation of their beliefs. In philosophy, certain theories suffer from what’s called a “problem of publicity.” The theory, fine on its own hypothetical terms, runs into a kind of performative contradiction when more than a small elite comes to believe it.
In his Methods of Ethics (1874), Henry Sidgwick writes:
On Utilitarian principles, it may be right to do and privately recommend, under certain circumstances, what it would not be right to advocate openly; it may be right to teach openly to one set of persons what it would be wrong to teach to others; … it seems expedient that the doctrine that esoteric morality is expedient should itself be kept esoteric. […] And thus a Utilitarian may reasonably desire, on Utilitarian principles, that some of his conclusions should be rejected by mankind generally; or even that the vulgar should keep aloof from his system as a whole, in so far as the inevitable indefiniteness and complexity of its calculations render it likely to lead to bad results in their hands (Sidgwick 1874).
Kant would have vehemently disagreed. He worried about “a maxim which I cannot divulge without defeating my own purpose.” Kant’s hypothetical publicity test appears in the second appendix to his 1795 essay Perpetual Peace. He writes:
“All actions relating to the right of other human beings are wrong if their maxim is incompatible with publicity.”
I’m pretty sure I do not agree with Kant here. I think you have to either finesse the problem of publicity or bite the bullet. What do you think?