
Agency and Free Will Are about Influence, not Control

Craig Axford | Canada

By now, just about everybody who has taken an interest in the question of free will is familiar with the arguments for and against it. On the one hand, it can be shown that it is at least possible under the laws of nature to react in multiple ways in certain situations. It can also be demonstrated that many people act differently in the same or similar situations. On the other hand, the strict determinists respond, everything has a cause and therefore, all things being equal (right down to the last atom), anyone finding themselves in exactly the same situation could not have done otherwise.

Both arguments come with their own particular set of weaknesses that makes absolutism in defense of either total agency or unyielding determinism difficult to take seriously. When it comes to agency, though it may be possible to do otherwise under the laws of physics — stand up or keep your seat, for example — it’s impossible to prove that the person doing the standing or the sitting possessed enough agency to make that decision entirely free of the influences of either her genes or the environment. If you don’t believe it, just try creating a test in which a human subject makes a choice, mundane or otherwise, completely independent of either external environmental or internal biological inputs.

And of course, the determinists are right: everything does have a cause (or causes). But who is to say we can’t be one of those causes? To argue that a gene or collection of genes caused me to stand up while the rest of me had nothing to do with it is taking reductionism to a ridiculous extreme. After all, genes lack any knowledge of either sitting or standing. Likewise, my back pain may cause me to either want to take a load off or stand and stretch, but can it truly be said that it made me do either?

A single action may be the product of multiple causes, while a single cause can open the door to several choices that each invite us to act upon them. As Roy F. Baumeister pointed out in his 2008 paper, Free Will in Scientific Psychology, “Most psychological experiments demonstrate probabilistic rather than deterministic causation: A given cause changes the odds of a particular response but almost never operates with the complete inevitability that determinist causality would entail.”

That a particular stimulus produces outcomes more like a roll of the dice than the flip of a light switch won’t come as a surprise to most people. That’s exactly how it feels to have agency. Nor will the fact that the dice are often loaded come to determinism’s rescue, because even loaded dice are an example of “probabilistic rather than [absolute] deterministic causation”.

That said, free will deniers will be quick to point out that we have no choice when it comes to rolling the dice and no control over which numbers turn up, but that’s true for all living things. If you’ll forgive me mixing my metaphors, the fact that we all must play the cards we were dealt doesn’t mean we lack the ability to play them creatively.

. . .

In the opening pages of his book, Freedom Evolves (2003), the philosopher Daniel Dennett lays the groundwork for what is to follow by emphasizing the important role language played in the development of Homo sapiens:

In just one species, our species, a new trick evolved: language. It has provided us a broad highway of knowledge-sharing, on every topic. Conversation unites us, in spite of our different languages. We can all know quite a lot about what it is like to be a Vietnamese fisherman or a Bulgarian taxi driver, an eighty-year-old nun or a five-year-old boy blind from birth, a chess master or a prostitute.

Other animals, Dennett points out, “can’t compare notes.” Since Dennett wrote those words nearly two decades ago, we’ve learned that there are, in fact, other creatures engaged in their own kind of mental and linguistic note-taking too. But none of them do it in as many ways or cover nearly the variety of topics that humans do.

Using language to “compare notes” via so many mediums means we have integrated numerous opportunities to learn from each other into our individual lives as well as our cultures. As new information comes in, sooner or later we accommodate it by altering both our thinking and our behavior in response, creating more space for even greater possibility in the process.

The capacity to intentionally attempt influencing the course of events using the information we’ve gathered from others and from our surroundings is the most likely place to find a little bit of freedom, albeit nowhere near as much as we imagine ourselves to possess. Our influence is always shared with other actors and forces pushing and pulling upon the universe, making the exact quality and strength of our contribution difficult if not impossible to determine with anything like precision.

Sam Harris and other critics of free will like to point out that the problem with the concept of free will isn’t that we lack any influence, but that we lack any capacity for intention in the first place. Rain is the product of a number of atmospheric conditions combining at just the right moment, but none of these intended to cause a downpour. As biological creatures, ultimately we are no different, even if the processes involved in keeping us alive and conscious are far more complex than the weather.

Those denying we have the capacity to truly act intentionally have abundant research available to help make their case. In the 1980s, the neuroscientist Benjamin Libet demonstrated that a brain signal known as the “readiness potential” could be detected on an EEG before a person was even conscious of the “decision” their brain was preparing to carry out. How can a person be said to make a decision at all when their brain knows what they’re going to do before they do? That’s a good question.

Some have responded to Libet’s research by arguing that free agency doesn’t come in the form of “free will” but “free won’t”. Having become conscious of a temptation that we had no power to prevent — for example, the sudden urge to break our diet and eat the box of chocolates we just got for Christmas — we are faced with the choice of either giving in to our desire or resisting it. Maybe we compromise by rationing the chocolates, or we decide to remove the temptation altogether by giving them to a neighbor. Regardless, these choices arise from our awareness of the temptation to consume the chocolate, not from an absolutely uncontrollable impulse to eat chocolate whenever it’s available. Our consciousness of the desire is what gives us the power to say no to it.

When it comes to these “free won’t” situations, psychologists like Baumeister don’t think the tests by Libet and others really tell us much about how free will actually operates. He writes, “Modern research methods and technology have emphasized slicing behavior into milliseconds, but these advances may paradoxically conceal the important role of conscious choice, which is mainly seen at the macro level.”

It’s tempting to conclude that Libet, and those using his research to bury any possibility of free will once and for all, are merely stating the obvious. All information in the universe is necessarily generated before we can become aware of it. The light from my laptop screen travels to my retinas and is converted to neural signals before I am conscious of the letters and images appearing on its screen. That some part of me should already be formulating a reaction to this information before it has reached every corner of my skull and become an integrated part of my present reality seems not only natural but inevitable, under every conceivable scenario.

. . .

We exist in a universe that is a chain of causes and effects, with effects inevitably turning into causes for more effects, and so on. Humans are, like everything that has come before and that will follow, both a cause and an effect. As information enters consciousness, it shouldn’t really come as a surprise that this awareness becomes a cause of effects like intention.

That we are beings trapped in a web of causes and effects is an argument against free will only at the ultimate level. But we are proximate creatures. There are good reasons we cannot defend ourselves from criminal charges by arguing that we had no say in our conception or the Big Bang and are therefore ultimately not responsible for our actions. Free will is about what we do or don’t possess while we are alive, not whether we had any choice about entering the world in the first place.

There’s no intuitively obvious place to put the brackets around concepts like determinism and free will. But if we start assessing agency at the moment I “decided” to write this article or to accept a new job offer a few weeks ago, instead of the moment the universe started roughly 13.8 billion years ago or the night in 1968 my parents were feeling particularly amorous, the arguments for strict determinism blur quickly. I don’t have article-writing or job-offer genes, and saying that I suddenly felt like writing about free will because my blood sugar had spiked doesn’t really bring any clarity to the question either. Other people feel like going for a jog or taking a trip to the mall to do a little shopping when they’re feeling energized. Why an article on free will in my case?

That our capacity for self-control diminishes when we are hungry or tired doesn’t weaken the case for agency either, even if it usually does come in the form of free won’t. The state of our blood chemistry at the critical moment certainly matters. How we react to low glucose levels or high levels of adrenaline provides evidence that self-control and highly conscious states require our body to be functioning in a particular optimal condition, but that’s a far cry from proof that blood chemistry determines our actions.

Our brain is a glucose-consuming machine. It’s not in its interest to waste energy resisting a box of chocolates when it’s running low on the very fuel it needs to effectively exercise self-control. Restraint demands more brain work (willpower) than just going with the flow. If the box of chocolates will help you resist the pack of cigarettes, reaching for the chocolates can actually be a good choice even if it’s not the ideal one.

I, like Sam Harris, Robert Sapolsky, and others, am a firm materialist. I have yet to hear anyone articulate how a force liberated from the laws of physics could even function. To say something like a soul is responsible for free will without offering any sort of description of what a soul is, where it came from, or how it acts upon our brain is just a cowardly evasion of the issues arising from consciousness. Arguing a soul rather than a biological entity is conscious merely moves the problem of consciousness and questions regarding free will to another realm. It doesn’t dispose of them.

If free will exists to any degree, it will have emerged as a property of a materialist universe. We need not default to some sort of ill-conceived dualism or deny we live within a universe governed by physical laws to make room for it. It could very well be that because each cause is itself the effect of another cause, it’s impossible to distinguish where intention begins and internal biological and outside environmental influences end.

Even free will skeptics like Sam Harris and Robert Sapolsky spend a great deal of time making ethical arguments about what we should do. Should implies can, which is very different from must. In the opening chapter of his book, The Moral Landscape, Harris states, “I am arguing that science can, in principle, help us understand what we should do and should want — and, therefore, what other people should do and should want in order to live the best lives possible.” (Italics in original)

If, in fact, Harris believes we have absolutely no control over what we do, let alone what we want, his argument regarding science’s ability to contribute substantively to moral issues — an argument with which I largely agree — isn’t merely dubious, it’s self-contradictory. Plants or animals that lack any capacity to develop informed intentions regarding how they are going to behave in the future are by definition amoral creatures incapable of giving any meaningful consideration to what they ought to do. In such a world, Harris’ “moral landscape” isn’t made up of peaks and valleys; it’s perfectly flat and featureless.

Robert Sapolsky, like Harris, has gone on record stating he doesn’t believe humans have anything like free will. Yet in the final chapter of his book, Behave, Sapolsky also can’t resist reaching ethical conclusions when it comes to how knowledge of our implicit biases should shape our actions. He writes, “revealing implicit biases indicates where to focus your monitoring to lessen their impact. This notion can be applied to all the realms of our behaviors being shaped by something implicit, subliminal, interoceptive, unconscious, subterranean — and where we then post-hoc rationalize our stance.” Sapolsky concludes, “For example, every judge should learn that judicial decisions are sensitive to how long it’s been since they ate.”

Sapolsky and Harris can’t have it both ways. While it’s true that judges tend to issue their harshest sentences just before lunch, you can’t tell a judge they should mitigate the effects of low blood sugar by having a glass of lemonade or a candy bar handy by no later than 11:00 AM in one breath and use the fact that low blood sugar leads to harsher sentences as proof people have no free will in the next. If a judge’s knowledge of his or her implicit bias can truly lead to choices that will minimize or eliminate the bias, isn’t acting on this knowledge an example of the moral exercise of free will? Is a judge with normal blood sugar in a better position, probabilistically speaking, to make wise rulings or isn’t she? Is a judge capable of intentionally regulating their blood sugar toward this end or not?

If by free will we mean absolute control over ourselves and our environment, then I agree, we don’t have it. But if by free will we mean something more subtle — the capacity to intentionally influence our world, even if only a little bit — then the answer is at worst a qualified maybe. People are complex creatures, prohibited from ever gaining an outside, objective view of themselves. We are both cause and effect, both subject and object. As animals with consciousness, we are both determined and intentional at once. Just how much our intention matters may be impossible to know, but it does matter.

Follow Craig on Twitter or read him on Medium.com

Other articles by Craig you may enjoy:

Social Media Shouldn’t Be Free
If we want to be treated like customers instead of the product, we need to pay up.

71 Republic is the Third Voice in media. We pride ourselves on distinctively independent journalism and editorials. Every dollar you give helps us grow our mission of providing reliable coverage. Please consider donating to our Patreon, which you can find here. Thank you very much for your support!


Sam Harris and Scientific Morality Show the Height of Hubris

By Kaihua Zhou | United States

Merriam-Webster defines hubris as “exaggerated pride or self-confidence”. It is the most characteristic crime of intellectuals. In so many cases, they identify an existing issue and propose a baseless solution. Such is the case with Sam Harris, a philosopher and neuroscientist. Harris draws attention to a serious issue: religious extremism. However, his proposed solution of atheism and scientific morality clearly shows his hubris, as his reasoning is deeply flawed.

Harris: Hubris and Worldview

Perilous pessimism flavors Harris’s worldview. According to him, the root cause of religious extremism is religion itself:

If you really believe that calling God by the right name can spell the difference between eternal happiness and eternal suffering, then it becomes quite reasonable to treat heretics and unbelievers rather badly. The stakes of our religious differences are immeasurably higher than those born of mere tribalism, racism, or politics. -Sam Harris

Note that Harris identifies religion as the sole cause of religious extremism. Economics, government structure, and education do not figure into the equation. Such is Harris’ hubris. If religion were inherently dangerous, we would expect religiously diverse communities to be unstable.

Stability and Religion

However, Singapore, the world’s most religiously diverse nation, is quite the opposite. 34% of its inhabitants are Buddhist, 18% are Muslim, and 14% are Christian. Of course, each religion argues that its truths are universal; their faithful followers believe in eternal consequences.

Despite these distinct religious communities, Singapore enjoys a considerable amount of what Harris calls “human flourishing.” Singapore is economically prosperous: its unemployment rate is about 2.2% and its GDP is 527 billion dollars. Surely, religious life is not the only cause of prosperity, or even necessarily one of them. Nevertheless, it presents a powerful counterexample to the claim that religion alone results in intolerance and instability.

Science and Morality

This flawed explanation of religious extremism is evidence of hubris. Though Harris claims to support scientific approaches to essential questions, he ignores clear evidence that runs against his claim.

In fact, his scientific look at morality appears to be further evidence of his own hubris. Harris views moral questions primarily in terms of consciousness.

Without a doubt, it is important to know the facts when looking at moral questions. We understand human flourishing in terms of economics (standards of living, the poverty line) and psychology (mental health). These facts can help alleviate suffering. For example, a proper medical diagnosis of PTSD or depression helps someone cope with their illness.

For Morality, Fact is Not Everything

Still, facts alone do not provide a compelling reason to be concerned with human suffering. Consider two individuals. One is a lifelong religious leader who has taken an active political role. The other is a former mathematics professor. Which of these individuals is more likely to have a comprehensive understanding of the facts? If Harris is correct, the professor will be in a better position to answer moral questions, due to his understanding of fact. He will be more attached to reality and more tolerant, by Harris’ own logic.

However, the first man in the scenario is the Dalai Lama, Tenzin Gyatso. The second, on the other hand, is the Unabomber, Ted Kaczynski. Where did Harris’ hypothesis fall short? A former mathematics professor is more likely to be unbound by arbitrary dogma. Despite this, Kaczynski was unconcerned with whether his victims were flourishing. He understood perfectly well that his actions would result in human suffering.

This is not to suggest that Gyatso’s religious beliefs alone have given him greater moral expertise than Kaczynski; that would ignore the sophistication of human motivation. It does, however, undercut Harris’ claim that facts can primarily answer moral questions. Knowledge does not necessarily allow someone to answer moral questions properly, so there must be more to moral judgment than a command of fact. Criteria as rigid as Harris’ allow for vast errors: not everyone wise in fact can answer questions of value.

How To Address Religious Extremism

What can we do to address religious extremism? Rule of law, separation of church and state, and freedom of speech provide a beginning. The United States and much of the West benefit from these institutions and, thankfully, are largely free of religious violence. This accomplishment did not require societies to wholly abandon their religious traditions and adopt an empirical moral philosophy.

Yet this is precisely the solution Harris uncompromisingly prescribes. Such is the height of his hubris: seeing science alone as the savior of humanity. Science cannot hope to resolve issues of morality without cooperating with, or at least begrudgingly tolerating, religion. To say otherwise is to be blinded by pride.


Humanist Spirituality

Craig Axford | United States

In an Op-ed published in the New York Times last month, the philosopher Stephen T. Asma offered a defense of religion. I responded with an article of my own published here on Medium a day or two later.

Asma’s article had a somewhat provocative title: What Religion Gives Us (That Science Can’t). My primary beef with Asma’s argument wasn’t that he was wrong about religion’s emotional benefits so much as that he seemed to be arguing there are no other means — or at least no better means — of providing them.

Asma is correct that however we get them, as a social species we require the things religion provides: community, meaning, and ritual among them. As I’ve reflected now and then upon the argument he laid out in his June 3rd Op-ed, it’s clear to me that whether we are religious or not, we will seek out means of meeting these needs to the best of our ability.

Unfortunately, when it comes to theological debates there’s something inherently problematic about how we frame the argument: there’s an either/or quality to it. Having been raised in a religious family, I’ve had to wrestle with this quality of religious belief throughout my own life.

Atheism isn’t a belief system. It’s the absence of a belief in a very specific idea: the existence of a supernatural being or beings. By itself, this absence does not determine any particular personal moral code. The failure to believe that Zeus still lives somewhere on Mt. Olympus or Yahweh really did speak to Moses from a burning bush does not require one to take a relativistic or nihilist view either of human relations or the universe as a whole, let alone mandate a lack of concern for the well-being of others.

Atheism represents one end of a spectrum. It’s a spectrum that is, when we get right down to it, rather uninteresting in its dualism. On the other end is a belief in a god or gods, and in between there is a short space occupied by doubters who lean one way or the other, along with a fair number who don’t really have an opinion on the subject and don’t care to develop one. No matter how much any of the partisans in this fight may think otherwise, nowhere along this short line are the really important questions about human well-being, ethics, political philosophy, or science successfully resolved by any of the answers people give to the question of whether or not there’s a god.

If we’re being honest, the question of God’s existence is a rather annoying distraction. It is, after all, an unanswerable question. If it were answerable, it wouldn’t be a matter of faith but one of science. If certain practices or customs work to enhance human well-being, then we should strive to understand the reasons they work and to duplicate and perfect them to the greatest degree possible. If certain actions reduce suffering and improve our individual and collective quality of life, then we should laud them and seek to incorporate them into both our lives and our societies. This is a rule of thumb that shouldn’t be controversial to either believers or non-believers.

In a recent episode of the NPR program The Hidden Brain entitled Creating God, host Shankar Vedantam explored some of the current research surrounding religion’s origins and benefits. The broadcast featured University of British Columbia psychologist Azim Shariff. Though Stephen Asma’s name never comes up, Shariff generally agrees with his assessment of many of the benefits religion provides. But Shariff views religion from the longer perspective of biological and cultural evolution. As a result, the program ends with him pointing out that many of the functions religion once served have recently been taken over by other institutions.

We’ll sacralize ideas like freedom. We’ll sacralize our nation. We’ll sacralize the flag. And in terms of the governmental institutions that can spread trust, one of the interesting things you see is that if you look across countries, those countries that report having the least importance of religion to their daily lives are the countries that have the highest faith in the rule of law. So those are the places where you trust institutions like the bank, or contract enforcement, or the police, or the justice system.

Once you can set up those types of trusted secular institutions, well that obviates the need for a lot of what religion has done. Now, it’s only been in recent years that we’ve been able to have those types of centralized effective institutions, and still in most parts of the world we’re not able to. But, in those places where we are, we see ourselves moving towards a post-religious world where a lot of the functions of religion are accomplished by other means and potentially better means. ~ Psychologist Azim Shariff on NPR’s Hidden Brain, July 16, 2018 (Emphasis added)

Given that the human tendency to sacralize objects, symbols, rituals, and beliefs is hardly restricted to religion, we shouldn’t be surprised that other institutions can take its place. Political ideologies, nationalism, pieces of art, stirring music, and even scientific theories are all examples of things that humans have, for better or for worse, sanctified and ritualized. That most biologists have the same visceral response to attacks on evolution as orthodox believers have when faced with challenges to their literal interpretation of scripture is not an indication that both are equally valid descriptions of reality. But it does demonstrate that whatever humans attach meaning to will become emotionally salient to at least some extent.

“I have never come across a coherent notion of bad or good, right or wrong, desirable or undesirable that did not depend upon some change in the experience of conscious creatures,” Sam Harris wrote in Waking Up: A Guide to Spirituality Without Religion. The idea that we could create a moral philosophy that justified itself on anything other than its actual or foreseeable impacts upon us, or upon other creatures similar enough to us that we can imagine how they would feel, is, if we stop to think about it, patently absurd. As Harris points out later in the same paragraph, “If you think [particular] actions are wrong primarily because they would anger God or would lead to your punishment after death, you are still worried about perturbations of consciousness…”

That morality is grounded primarily in our experience is a fundamental tenet of what is commonly referred to as Humanism. The humanist label has been attached to a number of periods and philosophies, but the emergence of what we commonly understand as humanism today is best seen in the Renaissance and the Enlightenment periods.

As the political philosopher Larry Siedentop argues in his 2014 book Inventing the Individual, “Any set of basic assumptions opens up some avenues for thought, while closing down others.” Siedentop goes on to state that the Christian emphasis on the importance of each soul rather ironically paved the way for the individualism that became central to what eventually developed into modern secular humanism.

It was precisely this initially Christian and later secular regard for the dignity and worth of the individual upon which modern democratic societies are built. It provides a blueprint for all contemporary societies to follow as they hopefully move toward greater freedom in the future. However, while theism can exist within a humanist framework, humanism cannot exist within a theocratic one. In a pluralistic society humanism already rules the day because pluralism is a humanist ideal.

Every mainstream tradition existing within a modern pluralistic context has necessarily sacralized the individual. Each person, no matter where they are along the belief spectrum, relies upon their personal right to determine for themselves where they will stand and to express their reasons for standing there if they choose to do so. This sanctifying of the individual can readily be found within our churches as well as in conversations among secular humanists.

From a purely humanist perspective, the challenge isn’t how best to articulate the dignity and rights of the individual but how to incorporate religion’s commitment to the community into its ethos without sacrificing its core principle. Enshrining freedom of religion into law didn’t just enable heretics to break away and speak their mind without risking punishment. It also enabled worshipers to willingly commit themselves to a religious community, with all the personal sacrifices that often entails, while still maintaining that doing so was an expression of their own individual freedom.

Humanism as a philosophy places a number of intellectual demands upon those that embrace it: an appreciation for the scientific method, healthy skepticism, and a degree of openness to uncertainty. However, in practice, it has struggled to replicate the structured setting and ethic of mutual support religion has historically been good at.

As is pointed out in the Hidden Brain episode Creating God mentioned above, the threat of punishments such as eternal damnation does play an important role in sustaining membership in religious organizations and motivating followers to adhere to the moral codes their religion promotes. Humanists, on the other hand, believe that people should do the right thing not because they desire a future reward or fear a future punishment, but because there are reasons that we can identify for doing the right thing. Those reasons follow from the consequences of the actions in question and can be evaluated both objectively as actual physical or emotional impacts on ourselves and others, and subjectively in terms of how we would feel if we were on the receiving end of the action.

Humanists and other secularly minded people are organizing themselves into communities in greater numbers, though membership lags far behind anything seen in most religious denominations. American humanists first began seriously organizing themselves in the 1920s. The humanists responsible for starting what became the American Humanist Association emerged from the Unitarian tradition at that time. For its part, Unitarianism represented the first religion to formally embrace the Enlightenment values many of us take for granted today, but remains relatively small as religions go.

Unitarian ministers and humanists organizing regular meetings of like-minded individuals could be forgiven for sometimes wishing eternal damnation was available to them to hold over a membership that too often chooses to sleep in on Sunday mornings. But humanism’s success shouldn’t be measured in membership numbers or attendance statistics. Humanism’s greatest accomplishment is the variety of museums, concerts, non-profit organizations, political beliefs, and religious choices now available for billions of people around the world to choose from.

Humanism does not require people to give up a belief in a supreme being or other “supernatural” powers. However, it does set aside such beliefs as meaningless to our attempts to address life’s most fundamental challenges and enhance our understanding of reality. As Azim Shariff pointed out, as societies provide greater economic security and a longer menu of activities and ideas for people to choose from, the emotional needs that religion once met can increasingly be satisfied via other means. As more and more communities develop around causes and pursuits in the secular realm that fulfill our desire to find meaning and form communities, the types of demands religions place upon individuals as a condition for membership will make it harder and harder for them to compete.

The sense of wonder we often describe as spirituality can also be readily evoked by listening to a symphony, viewing an awe-inspiring work of art, visiting the local natural history museum, watching a beautiful sunset in solitude or with others, or even getting lost in conversation with friends at the local coffee shop. Imposing a religious doctrine or highly ritualized behavior upon these pastimes simply isn’t necessary to receive many of the benefits Asma and others argue religion provides.

What religion has been able to give us that humanism can’t effectively deliver is the illusion of membership in a chosen tribe. In addition, with membership in a particular faith has come the assurance of comfort during periods of suffering and loss. However, whether we’re believers or not, the price we are each increasingly required to pay in return for the benefits of living in a modern secular society is greater personal responsibility. The role of providing for each other is now not only the proper moral stance of the individual as a person in their own right but also the proper civic role of the citizen within a much larger national and global cosmopolitan community. This is true not because we will receive some heavenly reward in exchange but because, regardless of our personal religious beliefs or nationality, we all benefit right here on earth from such mutual concern and cooperation.

For the first time in human history, we must find within ourselves the motivation to care for each other rather than relying upon promises of heaven or threats of hell to do the heavy ethical lifting that comes with being born human. Likewise, mere assertions that a divine being has dictated a moral code are no longer sufficient in a pluralistic setting where others often don’t share the same religious beliefs. Within societies aspiring to provide greater freedom to their people, morality must rest upon reason. That’s a heavier burden than we’re used to carrying, but one lightened by the shared humanist conviction that our individual right to choose what we will believe and how we will pursue fulfillment is only guaranteed by our willingness to recognize everyone else’s right to do the same.
