Author: Craig Axford

I'm a US citizen, but for most of the past decade I've been living in British Columbia, Canada. I have degrees in both anthropology and environmental studies from the University of Victoria. I am currently working on a master's degree in environmental management at Royal Roads University. I've been a Green Party candidate for the US Congress as well as an organizer for the DNC. I've also served as the program director for a local non-profit environmental group.

Some Thoughts on the True Importance of Grades

Craig Axford | Canada

School is supposed to be about learning. Tests to determine whether we’ve learned how to do something may be unavoidable, especially when it comes to the so-called STEM (science, technology, engineering, and math) fields. But even in those cases, grades should be part of a broader context that includes a degree of playfulness and creativity.

As a student myself, it’s impossible not to sound at least a little self-serving when it comes to the topic of grades, exams, and all the other various methods employed to evaluate us. Believe me, I understand society has an interest in making sure the desired information has been retained and that we are capable of using it appropriately. The certificate we frame to hang on our office wall needs to signal much more than merely the fact that we were able to scrape together enough money to pay our tuition.

But as important as it is to know that the professionals we encounter have at least some idea what they’re talking about, it’s at least as important that the fields they work in be populated with a few rebels who are willing to challenge prevailing opinions now and then. This doesn’t mean adopting skepticism for skepticism’s sake or rejecting the value of a good education. Rather, it calls for a strategic skepticism that rejects the orthodoxy that existing knowledge is sufficient.

We want our heretics to possess an understanding of conceptual weak spots and the ability to look at old problems from new perspectives. Fostering the kind of playfulness, irreverence, and creativity this demands can be difficult in a traditional school setting. However, while it may be unrealistic to expect schools to abandon grades as an evaluation tool, it’s not unrealistic to encourage students (and the rest of us) to stop seeing their grade point average as an adequate measure of their intelligence and potential future productivity.

Developing a firm understanding of a subject doesn’t always translate into an easy A. Arguably, for those truly willing to explore the philosophical and scientific thickets that lie at the heart of virtually every field it never does. Though Einstein developed the theory of relativity only after initially gaining an understanding of Newtonian physics, he did not endear himself to all his professors in the process. As his knowledge of physics grew, so did his understanding of the limits of the Newtonian worldview. His willingness to challenge settled thinking and ask difficult questions others could not answer made him a very difficult student indeed.

Likewise, Darwin was only able to formulate the theory of evolution by first becoming well informed about contemporary thinking in biology and geology. Before that, however, he was a terrible medical student and a rather mediocre would-be divinity student with a tendency to take unrelated classes and develop hobbies like beetle collecting simply because he found them interesting. Had Darwin attended university today instead of in the 1820s, he could very easily have found himself enrolled for years, piling up student debt as he struggled to select a major.


The mission of any school should be to teach; the goal of any student should be to learn. But learning is a complex activity that involves the integration of new information that must be filtered through each student’s unique life experiences and personality. It is rarely if ever as linear a process as we tend to think it is. Given this reality, the correlation between “good students” and good grades is frequently as complicated at the ‘A’ end of the spectrum as it can be nearer the ‘F’ end.

A few years ago I took an osteology course to meet a requirement for my anthropology degree. It was the final year of an undergraduate program that included two degrees and that, for a variety of reasons, I was very much looking forward to finishing.

Cultural, not physical, anthropology had been my focus. Bones were interesting, but I didn’t see myself participating in any archaeological digs in the future. More problematic, however, was the expectation that students be able to very quickly identify bone fragments by the end of the course. Lab tests involved spending about 30 minutes with roughly 30 often very small pieces of teeth or bone, moving systematically from one to the other while the instructor timed us. I usually found myself spending the first 15 seconds at the next sample writing down my answer for the last one.

Given that such identification in the field probably involves hours of examination and discussion, I seriously questioned the purpose of providing so little time with samples in the lab. It didn’t help that high-pressure tests requiring me to make very quick decisions also produced what I can only describe as mild panic attacks.

I barely passed the course, and then only thanks to an extra credit paper I wrote on the topic of rickets. But that paper enabled me to apply my cultural anthropology training to osteology. To this day I still remember the results of much of my research for that extra credit assignment. In addition, due to the countless hours spent watching YouTube videos about the human skeleton and engaging with interactive anatomy programs, to say nothing of the time spent staring at images of Homo habilis and modern human bone fragments trying to identify the subtle differences between them, I can honestly say that my final grade did not reflect the amount of learning that took place that semester.


In his December 8th opinion piece in the New York Times, the organizational psychologist Adam Grant argues that it is among past students with grade point averages between 2.0 and 3.0, rather than those with a 4.0 or those at 1.0 and under, that we often find the best minds. Grant writes:

Academic grades rarely assess qualities like creativity, leadership and teamwork skills, or social, emotional and political intelligence. Yes, straight-A students master cramming information and regurgitating it on exams. But career success is rarely about finding the right solution to a problem — it’s more about finding the right problem to solve.

In a classic 1962 study, a team of psychologists tracked down America’s most creative architects and compared them with their technically skilled but less original peers. One of the factors that distinguished the creative architects was a record of spiky grades. “In college our creative architects earned about a B average,” Donald MacKinnon wrote. “In work and courses which caught their interest they could turn in an A performance, but in courses that failed to strike their imagination, they were quite willing to do no work at all.” They paid attention to their curiosity and prioritized activities that they found intrinsically motivating — which ultimately served them well in their careers.

Anyone who has ever had to bring home a mediocre report card, let alone a downright disappointing one, has likely reminded their parents that Einstein had a reputation as a poor student or that Bill Gates and Steve Jobs were both university dropouts. Indeed, Grant provides his own list in his NY Times op-ed, pointing out that JK Rowling was a C student and Martin Luther King only ever received a single A.

Our parents were probably right to keep pressuring us to bring our GPA up. Though many great minds have belonged to mediocre students, it doesn’t follow that mediocrity is an indicator of a mind functioning at the height of its potential. Usually, it’s just a sign of mediocrity.

Nonetheless, we need to be aware of what our grades are actually measuring, and in too many cases it is the ability to recite information we’ve heard or read rather than to apply knowledge in ways no one has thought of before. It’s one thing to know the dates marking great events in history or to ace every spelling test, but it’s those able to apply the lessons of history to contemporary social problems or write great novels who really keep our culture marching forward.

If your goal is to graduate without a blemish on your transcript, you end up taking easier classes and staying within your comfort zone. If you’re willing to tolerate the occasional B, you can learn to program in Python while struggling to decipher “Finnegans Wake.” You gain experience coping with failures and setbacks, which builds resilience. ~ Adam Grant


Rule number five among the Dalai Lama’s 18 Rules for Living is “Learn the rules so you know how to break them properly.” Finding ways to produce the same or better outcomes more efficiently without causing yourself or others harm means first learning the path conventionally followed by others to get there.

But a willingness to apply the information that comes our way in the classroom — or anywhere else — in creative ways is necessarily going to be risky, even when we carry out our experiments “properly”. First of all, we may find out we were wrong. But as social creatures, perhaps what’s most threatening to us is our innate fear of rejection.


Others often don’t appreciate the fact that we’re thinking outside the box, even when we’re proven right in the end. In a classroom setting, our teachers may not approve. At work, it may mean getting called into the boss’ office for a lecture on following company procedures to the letter.

Unfortunately, both mistakes and rejection are inherent risks of the learning process. While we need to be mindful of the lines we choose to color outside of, we still need to be willing to wander beyond them now and then. The lessons that really stick with us, whether from the classroom or elsewhere, are the ones that involved the most growth. If we’re honest, those experiences almost never include a perfect grade.






Government Shutdowns and Debt Ceilings

Craig Axford | Canada

Government shutdowns and flirtations with default by putting off raising the federal debt ceiling have become regular occurrences in Washington, D.C. I suppose we shouldn’t be surprised given the number of representatives and senators regularly expressing disdain for the very institution they were elected to run, but still.

Americans like to believe their nation is exceptional, and it is: it’s the only developed nation on the planet that doesn’t guarantee all its citizens healthcare, higher education is more expensive there than just about anywhere else, it’s the only country whose government can be shut down without anyone resorting to violence, and it’s the only nation that flirts with suicide by requiring separate votes on its debt ceiling.

That’s right. No other government has even one, let alone two, kill switches built into its system. And why would it? What’s the point? Unless the intent is to erode public confidence in government, it makes no sense for elected officials to even contemplate closing down popular national parks or giving all the people in charge of enforcing our public health and safety regulations an extended unpaid holiday.

The habit of shutting down the government now and then (as well as the continuing resolutions passed to avoid them) is an unintended bug in the American system rather than a feature of it. So too is the necessity to authorize more borrowing periodically once the national debt has reached a predetermined threshold. Both of these bugs are extremely dangerous but, unfortunately, they are likely to remain unfixed for the foreseeable future.

America’s founding fathers were revolutionaries. As such, they were no fans of the British government, which by the late 18th century was already well established and quite recognizable to any citizen of the 21st century. Though King George III was the titular head of state, like his current successor, Queen Elizabeth II, he had very little actual power to match the privileges that came with his hereditary title. Parliament was already very much in charge.

Nothing like what took place in Philadelphia following the American Revolution had ever been seriously considered, let alone attempted, in London. To intentionally sit down and craft rules for a new government quite literally being built from scratch was a radical idea if ever there was one. To call America an experiment is not an exaggeration. As with any experiment, the outcome is unknown until it has come to a close. The American experiment hasn’t ended, but so far it certainly has produced some unanticipated results.

In creating the modern world’s first republic, America’s victorious rebels were faced with the task of establishing rules for a country that no longer had centuries of tradition to fall back on. The norms of the mother country they had just abandoned had evolved over hundreds of years of power struggles between the aristocracy and the crown, with a nascent merchant middle class increasingly making its own demands over the course of the 17th and 18th centuries. The newly independent colonies wanted to distinguish themselves from the nation they had just liberated themselves from, but how?

The US Constitution settled for a president instead of a monarch, while the House of Representatives took the place of the House of Commons and the Senate stood in for the House of Lords. Each elected member of these chambers is subject to regular fixed terms of office, with power balanced more or less equally among the branches of government rather than resting largely in the legislature (i.e., parliament) alone. With the exception of the extremely rare and difficult case of impeachment, the US Constitution provides no opportunity to hold any single officeholder accountable for failure during the period between elections, let alone the government as a whole. Federal judges receive lifetime appointments, something else not seen in any other developed representative democracy to this day.

In a parliamentary system, the failure to pass something as routine as an annual budget triggers a crisis. Under the Westminster parliamentary model followed in the UK, Canada, and several other members of the Commonwealth, this crisis brings down the government and forces the monarch or her designated representative to dissolve parliament and call an election. In unstable periods when minority governments are common, elections tend to be relatively frequent, while in less turbulent political times a majority government can persist for five years or so before facing a vote.

Likewise, when a parliament authorizes spending beyond the government’s anticipated revenues, it is understood that it has necessarily approved an increase in the national debt. There is therefore no need to consider raising the debt limit separately. From the perspective of citizens living in parliamentary countries, it makes no sense that the same Congress that approved deficit spending one month can spend time the next flirting with a refusal to allow any borrowing. It’s like having a government that doesn’t know its own mind.

Unfortunately, the kind of crises that bring down governments in parliamentary systems have become commonplace in the United States. Budgets go years without being approved, with Congress lurching from one continuing resolution to the next while various factions hold federal employees, and the citizens dependent upon their services, hostage until some pet project or favorite policy is approved in exchange for keeping things running a while longer. A Prime Minister Donald Trump would by this point in the budget process be facing either a vote of the people or a leadership challenge from members of his own caucus. One year in office would be unlikely; four would almost certainly be impossible.

I’ve been living in Canada for the better part of a decade now. On most days I find myself feeling pretty ambivalent about the monarchy, if I even think about it at all. That’s not because I can see equal merit in both sides of the argument over having someone born into the role of head of state. It’s because I recognize all societies require a sense of continuity, and for some countries that can take the shape of a monarchy that has existed in one form or another for centuries. A woman who appears on our money while playing an entirely ceremonial role is harmless, if not for the actual person forced into the job by an accident of birth then at least for the rest of us.

I’m not feeling so ambivalent about having a parliament, however. I have strong opinions about the two Canadian prime ministers I’ve lived under so far. But the extent of my approval or disapproval aside, at least I know that the nearby Pacific Rim National Park will, weather permitting, always be open, and that, national holidays excepted, the door at the local Service Canada office will never be locked. Even the UK Brexit debacle hasn’t convinced me parliaments are less effective or ultimately less democratic than the divided governments that have become the norm in the US.

If, for some reason, it turns out parliament can’t do its job, there will be an election lasting a little over a month while the people try to vote in one with a sufficient mandate to do it. In the meantime, things will go on pretty much as before, without any nightly news reports about government employees unable to pay the rent because someone got it into their head they wanted to build a wall. I know it’s incredibly un-American to say so, but if you were to put me in a time machine and send me back to 1776, I would tell the founding fathers to get rid of the monarchy if they must, but at least keep the parliament.





British Columbia’s Carbon Tax Is Working

Craig Axford | Canada

If we’re ever going to get to a carbon neutral or carbon negative economy, placing a price on carbon is going to be a necessary part of that effort. With new U.S. Government and Intergovernmental Panel on Climate Change (IPCC) reports coming out this year warning of extremely expensive and environmentally significant consequences if we fail to act quickly, minor public policy adjustments here and there are no longer an option.

But just because strong action is needed that can be implemented both rapidly and at large scales, it doesn’t follow that the consequences of these actions on people either can or should be ignored. That’s particularly true when it comes to the working poor and middle class. As we’ve seen in France over the past few weeks, taxes targeting fossil fuels won’t receive the kind of public support they’re going to need if States implement them without sufficient regard for the people paying them.

Fortunately, there is a proven solution that facilitates the CO₂ emission reductions carbon taxes are intended to achieve while also taking into account the burden these taxes impose upon society: simply make the carbon tax revenue-neutral, taking special care to use the money it generates to prioritize tax reductions for the poor, middle class, and rural residents the tax affects most.

This is precisely what the Canadian province of British Columbia did when it implemented North America’s first carbon tax in 2008. This tax survived the financial crisis that reached its zenith just two months after its implementation. That alone is a testament to the fact that even during the worst of times, it is possible to persuade a skeptical and insecure public to support a policy if it truly reconciles environmental protection with equity and fairness.

As in the French countryside, residents of rural British Columbia often have no choice but to drive long distances on a regular basis. Unlike their fellow citizens in cities like Vancouver and Victoria, they frequently have no public transportation options, or none sufficient to completely replace the automobile.

When the carbon tax was first imposed in July of 2008, it started small. It began at $10 per tonne with incremental annual increases of $5 per tonne scheduled through 2012 until it reached $30. That meant that by July of 2012, the cost of gasoline would rise by 6.67 cents per liter. For American readers, that amounts to approximately 27 cents per gallon. To put that in perspective, the gas tax the French government had been planning to impose next month amounted to roughly 25 cents per gallon.
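For readers who want to see the arithmetic behind those figures, here is a minimal Python sketch of the phase-in schedule and the per-litre pass-through. The emission factor and the litres-per-gallon constant are my own assumptions rather than figures from the article; the emission factor is chosen to reproduce the 6.67 cents per liter quoted above.

```python
# Sketch of the BC carbon tax phase-in described above (assumed constants, not from the article).
EMISSION_FACTOR_KG_PER_L = 2.22   # assumed kg of CO2 per litre of gasoline (implied by 6.67 cents/L at $30/tonne)
LITRES_PER_US_GALLON = 3.785      # assumed US gallon conversion

def gasoline_surcharge_cents_per_litre(tax_per_tonne):
    """Carbon tax passed through to gasoline, in cents per litre."""
    # ($/tonne) * (kg CO2 / L) / (1000 kg/tonne) = $/L; * 100 = cents/L
    return tax_per_tonne * EMISSION_FACTOR_KG_PER_L / 1000 * 100

# Phase-in: $10/tonne in July 2008, rising $5/tonne each year to $30 by 2012.
for year, rate in zip(range(2008, 2013), range(10, 35, 5)):
    cpl = gasoline_surcharge_cents_per_litre(rate)
    cpg = cpl * LITRES_PER_US_GALLON
    print(f"{year}: ${rate}/tonne -> {cpl:.2f} cents/L (~{cpg:.0f} cents per US gallon at par)")
```

At the full $30 per tonne this works out to about 6.7 cents per litre, or roughly 25 cents per US gallon at par; the article’s figure of approximately 27 cents per gallon presumably reflects rounding and exchange-rate assumptions.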

But unlike British Columbia, France was moving to implement its tax in one fell swoop. It also had no plans to offset the gasoline tax increase with middle and low-income tax cuts or use the revenue to provide other significant tangible benefits. As the British Columbia experiment with carbon taxes shows, phasing in the tax and making it revenue-neutral is crucial to winning public support for any carbon tax that’s going to be significant enough to make a difference.

One study into the effectiveness of the BC carbon tax described the steps the then Liberal government took to achieve revenue neutrality this way:

First, the BC government lowered the tax rate on the bottom two personal income tax brackets. For a household earning a nominal income of $100,000…the average provincial tax rate was reduced from 8.74% in 2007 to 8.02% in 2008. Two lump-sum transfers were also included to protect low-income and rural households. Low-income households receive quarterly rebates, which, for a family of four, equal approximately $300 per year, and beginning in 2011, northern and rural homeowners received a further benefit of up to $200. Finally, taxes on corporations and small businesses were reduced…Since residents’ tax burden did not increase, the government was able to promote the policy as a “tax shift” rather than a tax increase.

President Macron eliminated France’s wealth tax in 2017 in advance of his proposed gas tax increase, not in concert with it, so it proved impossible to claim that ending that tax was part of an effort to implement some sort of revenue-neutral carbon tax “shift”. More importantly, however, by putting the wealth tax repeal first and failing to offer low-income and rural households additional tax breaks to offset the impact of the gas tax, Macron signaled his willingness to let the poor and middle class carry most of the burden when it came to taxes on carbon. Had the BC Liberal government followed a similar approach in 2008, I don’t know if cars would have been burning in downtown Vancouver. But they almost certainly would have been trounced in the following year’s election.

Now, just when we need carbon taxes the most, the ‘yellow vests’ movement threatens to render them a political third rail few politicians will want to touch. Sadly, most environmentalists cheered Macron’s gas tax proposal when they should have jeered, costing them valuable credibility with working-class voters that they’re going to need for any successful campaign against climate change.

The phrase “carbon tax” too often triggers a kind of Pavlovian response in the environmental community, regardless of the impact such taxes will have on those paying them. If the environmental policies these times demand are ever going to exist on a global scale, then we must abandon the view that sustainability and social justice exist in separate policy silos. People don’t like being treated as a means to an environmental end any more than they appreciate being treated as a means to any other end, nor should they.

Carbon taxes, whether they are revenue-neutral or not, will, unfortunately, usually face stiff opposition in the beginning. In British Columbia, the major opposition party had been in favor of taxing carbon, but it flip-flopped when the opportunity to tag the Liberal Party with the initially unpopular policy presented itself just prior to an election year.

That said, the Liberal Party (rather confusingly, BC’s most conservative major party) was able to retain control of the BC government in 2009 in spite of everything. In a March 2016 article on BC’s experience with taxing carbon, the New York Times reported that public opposition to the tax had dropped from 47% in 2009 right after implementation to 32% by 2015.

The left-of-center New Democratic Party (NDP) has since flip-flopped back to its original support for the carbon tax. With the help of Green Party members elected to the province’s legislative assembly, the NDP took control of the provincial government in May of 2017. BC’s carbon tax not only again enjoys support across the political spectrum, but is in the process of increasing by $5 a year through 2021. It is scheduled to hit $50 a tonne one year ahead of the federal government’s proposed carbon tax.

. . .

That just leaves the question of whether a revenue-neutral approach to carbon taxation can actually reduce carbon emissions. After all, if all the money raised through the tax is ultimately returned to taxpayers in one form or another, where’s the incentive to reduce spending on gasoline, the largest source of CO₂ in BC?


Well, it has worked. A 2015 review of the existing research on the tax’s efficacy found that up to that point, all the studies indicated a reduction in greenhouse gas emissions of around 9%, and that gasoline sales had dropped anywhere between 7% and 17%. One study found that commercial demand for natural gas had plunged a whopping 67% since the initiation of the tax (coal is not used to any significant degree in BC). These decreases occurred in spite of the fact that the province saw slightly higher annual economic growth than Canada as a whole in the years immediately following the 2008 financial crisis, as well as steady population growth.

It’s important to remember that even a revenue-neutral carbon tax can still function as a tax increase for a significant emitter of CO₂. The government hasn’t committed to making sure no one pays more in taxes, only that all the money the tax generates goes back to the public in one way or another.

Under a revenue-neutral carbon tax program, those inclined to use cleaner technologies can reduce their taxes considerably below what others in roughly the same financial boat are paying. A low-income person who decides to purchase a car instead of riding his bike or using mass transit still gets to pocket the quarterly refund payments. But unlike his friends who choose cleaner alternative modes of transportation, his refund payments will go entirely toward offsetting the additional cost of gasoline instead of groceries, rent, or other necessities.
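To make that incentive concrete, here is a toy sketch under assumed numbers: the fuel consumption figures are hypothetical, the per-litre rate is the $30-per-tonne figure discussed earlier, and the $300 rebate comes from the study quoted above. Two households receive the same rebate, but only the one burning gasoline sees part of it eaten by the tax.

```python
# Toy illustration (assumed numbers) of why a flat rebate preserves the incentive to cut fuel use.
CARBON_TAX_CENTS_PER_LITRE = 6.67   # the $30/tonne rate on gasoline discussed above
ANNUAL_REBATE_DOLLARS = 300.00      # approximate quarterly rebates for a low-income family of four, per year

def net_benefit(litres_of_gasoline_per_year):
    """Rebate received minus carbon tax paid on gasoline, in dollars per year."""
    tax_paid = litres_of_gasoline_per_year * CARBON_TAX_CENTS_PER_LITRE / 100
    return ANNUAL_REBATE_DOLLARS - tax_paid

print(f"Transit/bike household:                ${net_benefit(0):.2f} per year")
print(f"Driving household (2,000 L, assumed):  ${net_benefit(2000):.2f} per year")
# Both pocket the same rebate, but a chunk of the driver's rebate now offsets the tax
# rather than going toward groceries, rent, or other necessities.
```

The mechanism, not the exact figures, is the point: revenue neutrality guarantees the money goes back to the public as a whole, while each household’s own choices still determine how much of its rebate survives the pump.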

It’s difficult to imagine citizens from the French countryside invading Paris to protest quarterly tax credits or reductions in their income taxes intended to offset a 25¢ per gallon gasoline tax. Even if polls indicated opposition to the new tax, as they did initially in BC, it’s hard to work yourself up into a lather about it when the government can demonstrate that your overall tax burden hasn’t really changed. And, in all likelihood, what was at first mild opposition or ambivalence would eventually become support once people began to realize the benefits.

The inescapable reality we now face is that whether we tax carbon or not, the cost of emitting so much of it will only be going up from this point forward. Whether it’s coastal buildings washing into the sea or houses built near the edge of a forest burning to the ground, there’s no avoiding climate change’s toll. A carbon tax that disincentivizes the use of fossil fuels ultimately benefits both the environment and people. Hopefully, from now on national, regional, and local governments will learn from British Columbia’s example as well as France’s mistakes.


What’s In A Name?

Craig Axford | Canada

Sometimes an idea comes along that just takes off. The suggestion that our conquest of the planet is now so thorough that we’ve left the Holocene behind and entered an entirely new geologic epoch, the Anthropocene, is a case in point. That we might be such effective niche builders that we’ve earned our own geologic age leaves me wondering whether I should feel awe-struck or pretentious.

The Anthropocene was popularized by Nobel Laureate Paul Crutzen in 2000. In 2016, an expert panel recommended to the International Geological Congress (IGC) that it make the Anthropocene official. One proposed start date for this new age, though there are many others, is around 1950, with the advent of atmospheric nuclear weapons testing. So far, the IGC hasn’t reached a decision.

That we’ve had a profound impact upon the planet is hardly in dispute. From the most ardent developer to the most radical environmentalist, virtually everyone agrees the Earth is a far different place than it was just a couple of hundred years ago. It’s the quality rather than the scale of humanity’s impact that is in dispute.

Regardless, it’s a virtual certainty that much of our impact will leave a significant mark in the geologic record. Even if we were to vanish tomorrow, an alien arriving a million years from now would likely have little difficulty uncovering evidence of our existence. Future visitors from a nearby star system using nothing more sophisticated than the same methods and instruments we employ today would quickly find the spike in atmospheric CO₂ that we’re responsible for, followed shortly thereafter by the introduction of other more exotic chemicals previously unseen. Digging just a few meters into the earth, they would uncover frequent signs of our love affair with plastic. Unusually high concentrations of tar and concrete would indicate the presence of our vast transportation systems, while the skyscrapers we’ve filled with office furniture and electronics could keep alien archaeologists busy for centuries.

But in spite of all of that, I’m still ambivalent about naming a geologic period after ourselves. These periods are only truly evident in retrospect and, so far at least, they’ve always spanned thousands of years at a minimum. If a comet were to slam into us right after the IGC announced it had made the Anthropocene official, then in hindsight its very short 68 years or so would appear to any distant investigator more like the K/T boundary marking the line between the Cretaceous and Tertiary periods approximately 65 million years ago than like an entire age unto itself.

I fear we are once again overestimating our importance. Even the catastrophic events that brought the reign of the dinosaurs to an end, as calamitous as they were, fill only a few brief pages of the volumes written into the rocks that make up our planet’s history. Who are we to think our time on Earth will amount to much more than that, even if we are leaving behind some hitherto unseen debris in our wake?

But all those arguments aside, ultimately the reasons for my lack of enthusiasm for declaring this the Anthropocene are more philosophical and psychological than they are geological. We keep finding new ways to separate ourselves from nature. If we aren’t declaring ourselves in charge of it, then we’re concluding we’re bringing an end to it. Either way, we see ourselves as somehow transcending nature rather than being embedded in it. The further we imagine ourselves to be from some idyllic Garden of Eden-like setting, the more removed from the natural world we feel we are.

This view of nature as a pristine place that humans have not gotten around to either conquering or ruining is both wrong and self-destructive. It’s wrong because humans, as much as any other species on the planet, are a product of nature. We’re animals. That we’re animals with a remarkable capacity to shape the world isn’t a reason to think we can either just let it be or enslave it.

We were having a huge impact on nature long before the Industrial Revolution. After our distant Stone Age ancestors first arrived in Australia and North America, megafauna on both continents soon went extinct. Having no impact on the environment, whether it comes in the form of urbanization or our socially constructed definitions of wilderness, simply isn’t an option for us, and it never was.

If we’re going to declare this the geologic age of humans, it should be because we finally recognize that leaving a mark is what our species does rather than an act of self-flagellation because we’ve failed to leave no mark at all. Ultimately, it was always going to end up being a question of what kind of record we were going to leave behind.

Unlike our ancestors, we have biologists, climatologists and ecologists closely monitoring the environmental changes we are creating and issuing warnings as we approach tipping points. Animals or plants at risk, if they have any sort of charisma at all, now make it to our Twitter feed or the evening news in a matter of days when new dangers come to light. If our power to destroy the world has grown immensely, so has our power to acquire and share knowledge that will enable us to live more sustainably.

Ironically, the same technologies now bringing us regular bulletins on the state of our planet are products of the very advances that have enabled our population to explode, doubled our average life expectancy, and created many of the materials that will leave traces in the geologic record long after we’ve likely vanished from the face of the Earth. We and the environment we’ve become so good at using to our advantage are both victims of our success. It seems humanity, like evolution itself, is a bag of mixed blessings.

But naming an era after ourselves probably isn’t the best way to build the kind of awareness and political will our times call for. To those inclined to see the planet as humanity’s oyster, it will sound more like a declaration of victory than a call to change their ways. To those worried about our most harmful behaviors, it’s more likely to produce a sense of resignation than generate support for something like a Green New Deal. Environmentalists have become very good at describing why we’ve been cast out of the Garden yet again, declaring “the end of nature” and writing premature obituaries for the Great Barrier Reef. Unfortunately, that kind of rhetoric is far more effective at generating anxiety and paralysis than it is positive results.

Nature is neither permanent nor “pristine” and it never was. It’s a process, not a condition or a thing. It runs according to laws that apply in cities as much as they do in areas uninhabited by humans. Raise the CO² level and the atmosphere naturally responds in predictable ways. Remove a keystone species from an ecosystem and the system will naturally react as it seeks a new dynamic equilibrium. A solar panel is as natural as a lump of coal, but the consequences of using these two sources of energy are very different.

Nature is always in flux. Neither absolute control nor removing ourselves from the picture is an option open to us. Both approaches are based on faulty premises. Change is inevitable, but as a conscious species, we can intentionally and intelligently influence the changes taking place if we choose.

If we’re going to declare this the Anthropocene, then the geologic age of humanity needs to be about the potential for redemption rather than the “end of nature”. In her book, The Human Age, Diane Ackerman praises “Reconciliation Ecology”, a term “coined by Michael Rosenzweig in his book Win-Win Ecology”.

Ackerman writes, Reconciliation Ecology “suggests fence-mending and coexisting in harmony, not a wallop of blame.” It provides individuals, NGOs, and governments at all levels a way forward that will protect biodiversity and clean the air. It offers us a way to change the story our mark in the geologic record tells, from one of accelerating pollution and disruption to one of a reversal toward sustainability and recovery. An Anthropocene that wrote into stone a tale of mistakes learned from and acts of redemption would be, to my way of thinking, the only one worth declaring.






Agency and Free Will Are about Influence, not Control

Craig Axford | Canada

By now, just about everybody who has taken an interest in the question of free will is familiar with the arguments for and against it. On the one hand, it can be shown that it is at least possible under the laws of nature to react in multiple ways in certain situations. It can also be demonstrated that many people act differently in the same or similar situations. On the other hand, the strict determinists respond, everything has a cause and therefore, all things being equal (right down to the last atom), anyone finding themselves in exactly the same situation could not have done otherwise.

Both arguments come with their own particular sets of weaknesses that make absolutism in defense of either total agency or unyielding determinism difficult to take seriously. When it comes to agency, though it may be possible to do otherwise under the laws of physics — stand up or keep your seat, for example — it’s impossible to prove that the person doing the standing or the sitting possessed enough agency to make that decision entirely free of the influences of either her genes or the environment. If you don’t believe it, just try creating a test in which a human subject makes a choice, mundane or otherwise, completely independent of either external environmental or internal biological inputs.

And of course, the determinists are right: everything does have a cause (or causes). But who is to say we can’t be one of the causes? To argue that a gene or collection of genes caused me to stand up while the rest of me had nothing to do with it is taking reductionism to a ridiculous extreme. After all, genes lack any knowledge of either sitting or standing. Likewise, my back pain may cause me to either want to take a load off or stand and stretch, but can it truly be said that it made me do either?

A single action may be the product of multiple causes, while a single cause can open the door to several choices that each invite us to act upon them. As Roy F. Baumeister pointed out in his 2008 paper, Free Will in Scientific Psychology, “Most psychological experiments demonstrate probabilistic rather than deterministic causation: A given cause changes the odds of a particular response but almost never operates with the complete inevitability that determinist causality would entail.”

The fact that, when provided with a particular stimulus, the outcomes are more like a roll of the dice than the flip of a light switch won’t come as a surprise to most people. That’s exactly how it feels to have agency. The fact that the dice are often loaded won’t be coming to determinism’s rescue, because even loaded dice are an example of “probabilistic rather than [absolute] deterministic causation”.

That said, free will deniers will be quick to point out that we have no choice when it comes to rolling the dice and no control over which numbers turn up, but that’s true for all living things. If you’ll forgive me mixing my metaphors, the fact that we all must play the cards we were dealt doesn’t mean we lack the ability to play them creatively.

. . .

In the opening pages of his book, Freedom Evolves (2003), the philosopher Daniel Dennett lays the groundwork for what is to follow by emphasizing the important role language played in the development of Homo sapiens:

In just one species, our species, a new trick evolved: language. It has provided us a broad highway of knowledge-sharing, on every topic. Conversation unites us, in spite of our different languages. We can all know quite a lot about what it is like to be a Vietnamese fisherman or a Bulgarian taxi driver, an eighty-year-old nun or a five-year-old boy blind from birth, a chess master or a prostitute.

Other animals, Dennett points out, “can’t compare notes.” Since Dennett wrote those words nearly two decades ago, we’ve learned that there are, in fact, other creatures engaged in their own kind of mental and linguistic note taking too. But none of them do it in as many ways or cover nearly the variety of topics that humans do.

Using language to “compare notes” via so many mediums means we have integrated numerous opportunities to learn from each other into our individual lives as well as our cultures. As new information comes in, sooner or later we accommodate it by altering both our thinking and our behavior in response, creating more space for even greater possibility in the process.

The capacity to intentionally attempt influencing the course of events using the information we’ve gathered from others and from our surroundings is the most likely place to find a little bit of freedom, albeit nowhere near as much as we imagine ourselves to possess. Our influence is always shared with other actors and forces pushing and pulling upon the universe, making the exact quality and strength of our contribution difficult if not impossible to determine with anything like precision.

Sam Harris and other critics of free will like to point out that the problem with the concept of free will isn’t that we lack any influence, but that we lack any capacity for intention in the first place. Rain is the product of a number of atmospheric conditions combining at just the right moment, but none of these intended to cause a downpour. As biological creatures, ultimately we are no different, even if the processes involved in keeping us alive and conscious are far more complex than the weather.

Those denying we have the capacity to truly act intentionally have abundant research available to them to help make the case. In the 1980s, the neuroscientist Benjamin Libet demonstrated a brain state which he dubbed the “readiness potential”. This potential could be detected on an EEG before a person was even conscious of the “decision” their brain was preparing to carry out. How can a person be said to make a decision at all when their brain knows what they’re going to do before they do? That’s a good question.

Some have responded to Libet’s research by arguing that free agency doesn’t come in the form of “free will” but “free won’t”. Having become conscious of a temptation that we had no power to prevent — for example, the sudden urge to break our diet and eat the box of chocolates we just got for Christmas — we are faced with a choice of either giving in or not to our desire. Maybe we compromise by rationing the chocolates or we decide to remove the temptation altogether by giving the chocolates to a neighbor. Regardless, these choices arise from our awareness of the temptation to consume the chocolate, not an absolutely uncontrollable impulse to eat chocolate whenever it’s available. Our consciousness of the desire is what gives us the power to say no to it.

When it comes to these “free won’t” situations, psychologists like Baumeister don’t think the tests by Libet and others really tell us much about how free will actually operates. He writes, “Modern research methods and technology have emphasized slicing behavior into milliseconds, but these advances may paradoxically conceal the important role of conscious choice, which is mainly seen at the macro level.”

It’s tempting to conclude that Libet and others using his research to once and for all bury any possibility of free will are merely stating the obvious. All information in the universe is necessarily generated before we can become aware of it. The light from my laptop screen travels to my retinas and is converted to neural signals before I am conscious of the letters and images appearing on its screen. That some part of me should already be formulating a reaction to this information before it has reached every corner of my skull and become an integrated part of my present reality not only seems natural but inevitable, under every conceivable scenario.

. . .

We exist in a universe that is a chain of causes and effects, with effects inevitably turning into causes of further effects, and so on. Humans are, like everything that has come before and everything that will follow, both a cause and an effect. As information enters consciousness, it shouldn’t really come as a surprise that this awareness becomes a cause of effects like intention.

That we are beings trapped in a web of causes and effects is an ultimate argument against free will. But we are proximate creatures. There are good reasons we cannot defend ourselves from criminal charges by arguing that we had no say in our conception or the Big Bang and are therefore ultimately not responsible for our actions. Free will is about what we do or don’t possess while we are alive, not whether we had any choice about entering the world in the first place.

There’s no intuitively obvious place to put the brackets around concepts like determinism and free will. But if we start assessing agency at the moment I “decided” to write this article or to accept a new job offer a few weeks ago, instead of the moment the universe started nearly 14 billion years ago or the night in 1968 my parents were feeling particularly amorous, the arguments for strict determinism blur quickly. I don’t have article-writing or job-offer genes, and saying that I suddenly felt like writing about free will because my blood sugar had spiked doesn’t really bring any clarity to the question either. Other people feel like going for a jog or taking a trip to the mall to do a little shopping when they’re feeling energized. Why an article on free will in my case?

That our capacity for self-control diminishes when we are hungry or tired doesn’t weaken the case for agency either, even if that agency usually does come in the form of free won’t. The state of our blood chemistry at the critical moment certainly matters. How we react to low glucose levels or high levels of adrenaline provides evidence that self-control and highly conscious states require our body to be functioning in a particular optimal condition, but that’s a far cry from proof that blood chemistry determines our actions.

Our brain is a glucose-consuming machine. It’s not in its interest to waste energy resisting a box of chocolates when it’s running low on the very fuel it needs to effectively exercise self-control. Restraint demands more brain work (willpower) than just going with the flow. If the box of chocolates will help you resist the pack of cigarettes, reaching for the chocolates can actually be a good choice even if it’s not the ideal one.

I, like Sam Harris, Robert Sapolsky, and others, am a firm materialist. I have yet to hear anyone articulate how a force liberated from the laws of physics could even function. To say something like a soul is responsible for free will without offering any sort of description of what a soul is, where it came from, or how it acts upon our brain is just a cowardly evasion of the issues arising from consciousness. Arguing a soul rather than a biological entity is conscious merely moves the problem of consciousness and questions regarding free will to another realm. It doesn’t dispose of them.

If free will exists to any degree, it will have emerged as a property of a materialist universe. We need not default to some sort of ill-conceived dualism or deny we live within a universe governed by physical laws to make room for it. It could very well be that because each cause is itself the effect of another cause, it’s impossible to distinguish where intention begins and internal biological and outside environmental influences end.

Even free will skeptics like Sam Harris and Robert Sapolsky spend a great deal of time making ethical arguments about what we should do. Should implies can, which is very different than must. In the opening chapter of his book, The Moral Landscape, Harris states “I am arguing that science can, in principle, help us understand what we should do and should want — and, therefore, what other people should do and should want in order to live the best lives possible.” (Italics included in original)

If, in fact, Harris believes we have absolutely no control over what we do, let alone what we want, his argument regarding science’s ability to contribute substantively to moral issues — an argument with which I largely agree — isn’t merely dubious, it’s self-contradictory. Plants or animals that lack any capacity to develop informed intentions regarding how they are going to behave in the future are by definition amoral creatures incapable of giving any meaningful consideration to what they ought to do. In such a world, Harris’ “moral landscape” isn’t made up of peaks and valleys; it’s perfectly flat and featureless.

Robert Sapolsky, like Harris, has gone on record stating he doesn’t believe humans have anything like free will. Yet in the final chapter of his book, Behave, Sapolsky also can’t resist reaching ethical conclusions when it comes to how knowledge of our implicit biases should shape our actions. He writes, “revealing implicit biases indicates where to focus your monitoring to lessen their impact. This notion can be applied to all the realms of our behaviors being shaped by something implicit, subliminal, interoceptive, unconscious, subterranean — and where we then post-hoc rationalize our stance.” Sapolsky concludes, “For example, every judge should learn that judicial decisions are sensitive to how long it’s been since they ate.”

Sapolsky and Harris can’t have it both ways. While it’s true that judges tend to issue their harshest sentences just before lunch, you can’t tell a judge in one breath that they should mitigate the effects of low blood sugar by having a glass of lemonade or a candy bar handy no later than 11:00 AM, and in the next use the fact that low blood sugar leads to harsher sentences as proof people have no free will. If a judge’s knowledge of his or her implicit bias can truly lead to choices that will minimize or eliminate the bias, isn’t acting on this knowledge an example of the moral exercise of free will? Is a judge with normal blood sugar in a better position to make wise rulings, probabilistically speaking, or isn’t she? Is a judge capable of intentionally regulating their blood sugar toward this end or not?

If by free will we mean absolute control over ourselves and our environment, then I agree, we don’t have it. But if by free will we mean something more subtle — the capacity to intentionally influence our world, even if only a little bit — then the answer is at worst a qualified maybe. People are complex creatures, prohibited from ever gaining an outside, objective view of themselves. We are both cause and effect, both subject and object. As animals with consciousness, we are both determined and intentional at once. Just how much our intention matters may be impossible to know, but it does matter.



