Epistemocracy: If Leaders Knew the Limitations of Their Knowledge

By Mason Mohon | @mohonofficial

The human mind is a powerful machine. Its capacity for knowledge has given us the wonders of modern medicine, ever-advancing technology, and beautiful arts that we admire almost constantly. Its cognitive abilities dwarf those of nearly every animal on Earth. Its capacity for creativity and individualized abstract thinking sets humanity apart even from our most advanced machines.

Yet despite the unbelievable complexity of our brains, the mind tends to fall victim to many traps – set by itself. Cognitive biases and fallacious modes of thought plague the minds of even the smartest among us. One can be blinded by one’s own knowledge; a lack of humility and an excess of hubris can lead to very serious mistakes, and this has serious implications when it comes to putting individuals in positions of power. Those who occupy positions of authority in academia, business, and especially government have a responsibility to be aware of these mental traps and to work against them.

In his book The Black Swan, Nassim Taleb argues that history is mainly driven by Black Swans: completely unpredictable events that have a profound impact. These include events such as 9/11, the 2008 financial crash, and the discovery of the New World. Hindsight is 20/20, as the saying goes, so we tend to embed these events into a narrative that makes them seem completely predictable. In fact, though, such events occur in a completely unpredictable manner. It is impossible to see a Black Swan coming.

This inability to predict the results of an action calls into question the viability of having a technocrat make economic and political decisions for an entire country. Top-down approaches to governance can do more harm than good. Making decisions for others may not be the best way to organize a socio-political order.

Taleb introduces the reader to a mathematician named Henri Poincaré, a scientific thinker who Taleb believes does not get the credit he deserves. Some believe that he came up with the theory of relativity before Einstein, yet dismissed it as unimportant. This scientific one-upmanship is beside the point, though. What Taleb (and I) would like to focus on is Poincaré’s “three-body problem.”

The underrated scientist argued that mathematics can easily show us its own limitations. Taleb explains that Poincaré “introduced nonlinearities, small effects that can lead to severe consequences.” He discovered that when making predictions, precision is key, because these tiny nonlinear errors can compound the overall error of the prediction quite rapidly. Poincaré then introduced the three-body problem.

If we were to organize a solar-style system with only two planets and no other factors, we would be able to “indefinitely predict the behavior of these planets, no sweat.” Yet our ability to predict is completely destroyed when we add a seemingly inconsequential comet into the mix. Initially, its effect will be essentially nothing, but over time it will have a large impact, making the ability to predict the movement of three rocks governed only by gravity practically impossible. And I do not use “practically” just because it is a nice word. I use it because, in practice, humans cannot make such a forecast.
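
Poincaré’s point does not require planets to see. The sketch below is a minimal, hypothetical illustration of the same compounding of error; it uses the logistic map, a standard toy model of nonlinear dynamics, as a stand-in for the three-body equations, and shows how a measurement error of one part in ten billion swamps the forecast within a few dozen steps.

```python
# Minimal sketch: how a tiny error in the initial condition compounds in a
# nonlinear system. The logistic map stands in for Poincare's three-body
# problem purely for illustration; it is not his actual system of equations.

def logistic_map(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

x_true = 0.2           # the "real" initial state
x_est = 0.2 + 1e-10    # our measurement, off by one part in ten billion

for step in range(1, 51):
    x_true = logistic_map(x_true)
    x_est = logistic_map(x_est)
    if step % 10 == 0:
        print(f"step {step:2d}: prediction error = {abs(x_true - x_est):.6f}")

# After a few dozen steps the error is as large as the state itself, so the
# forecast carries essentially no information about the true trajectory.
```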

Now, look at the real world. Your own life is infinitely more complicated than three rocks floating in space. We can barely predict the events in our own lives because we all face our own Black Swans. There are many events that come straight out of left field, leaving us completely dumbfounded and uncertain about the future. No human can make perfect predictions in their own life. While some may be more competent than others, we are not as good as we think when it comes to making plans.

I wrote on this once before, but I focused mostly on political opinions. I failed to see that the problem of not recognizing the unknowns that create most of reality (in relation to ourselves) goes much further than having a political opinion. We are far more incompetent when it comes to prediction than we believe. This is because we are surrounded by humans as complex and dynamic as ourselves. We can barely keep an accurate representation of ourselves in mind, so how are we supposed to do the same with others?

If we move forward to political leaders, the impact of this problem becomes much more pressing. Friedrich Hayek, the libertarian economist and philosopher, explained the knowledge problem inherent in central planning. Taleb goes on to say that the planner “cannot aggregate knowledge… But society as a whole will be able to integrate into its functioning these multiple pieces of information.” It is better when smaller institutions are in “control” (whatever form that may take) because they are more capable of being aware of the information pertinent to the organization.

Hayek indicated that the root of this problem was something called “scientism.” Because of the continued explosion of knowledge available to the human race, we have become blind. As we learn more, the Dunning-Kruger effect takes hold: we begin to think that because we know more, we can do more, plan more, and control more. This is ultimately false, because the human mind has a very limited capacity when compared to the totality of all human knowledge. Taleb continues that “owing to the growth of scientific knowledge, we overestimate our ability to understand the subtle changes that constitute the world, and what weight needs to be imparted to each such change.”

So how do we solve such a problem? If we cannot expect the world’s foremost experts in physics and mathematics to make long-term predictions of the behavior of three floating rocks, how can we expect a democratically elected leader to make plans that accurately account for the future problems of hundreds of millions of individuals? Clearly, the current system of government we have is a joke. Just because 51% of voters chose the leader does not mean that the leader has perfect knowledge. No one short of an angel (or possibly even God) could make the most fair and predictably accurate decisions when governing.

Taleb’s solution lies in something called Epistemocracy. This is not so much a revamping of the structure of political organization as a change in the attitude of those in power. Anyone holding a position of rank must be an epistemocrat – someone who recognizes the extreme limitations of their knowledge. They need to realize that they cannot plan because they cannot fully understand. They must put their trust in the spontaneous order created by the interacting aggregate knowledge of society.

This, paired with a further decentralized governmental structure, could prove to be phenomenally beneficial for the human race. Fewer pompous leaders who think they know everything will only help us. And fewer people under the jurisdiction of each leader will create a decentralized society that allows accurate knowledge to win out in the formation of society and politics.



No, You Can’t Think Outside of the Box

By Ryan Lau | @agorisms  

Try to think of a new color, one that you have never experienced before. It’s purple, but without all of the purple, or green, with a greater amount of intensity. Having trouble yet? Now, attempt the exercise again, but this time, describe it without using any other colors as a reference point to go from. Rather than speaking relatively, imagine the creation of a color outside of our spectrum, and describe it absolutely. You can’t, because you can’t think outside of the box.

Surely, the human brain is not capable of such a task. For that matter, it is not even able to do so for a color within the existing spectrum. Try, for example, to describe the color green without merely echoing how your individual brain processes the light. Initially, some ideas may include “a cool color”, “the color of the trees in the forest”, or even “light with a wavelength of about 550 nm”.

Alas, none of these descriptions are at all meaningful in saying what exactly the color green is. In order to determine what is and is not meaningful, it is best to introduce the situation of a blind man.  If he who cannot see on his own can understand a concept of sight, then that concept must be sound. As the blind man has no bias, he cannot have any pre-existing ideas as to what the color green is.

Suppose the man has a wife and daughter, and both have normal vision. The child goes to a movie with some friends and eagerly comes home after, telling her parents what she saw. The wife can see and the husband cannot, but neither of them has watched the movie. So, neither of them will receive a completely accurate recounting of it. However, by using sensory details, the daughter can convey information that both the wife and husband will understand.

For example, if the movie had a grotesque alien, the girl may say that it had slimy skin, eight legs, and large teeth. Though blind, the man’s other senses compensate, allowing both him and the wife to get a rough idea of the creature. But what if the daughter only described the alien as green? Even if she called it the forest’s color or remarked about its frequency, the blind man would have absolutely no idea what this strange, foreign concept was.

Since a person without sight could not understand this aspect of it, it is safe to say that the description of green is not objective. That is to say, the definition alone holds no absolute truth; the senses are also required in order to understand it.

Could, on the other hand, the senses alone be a useful tool in determining what the building blocks of the world really are? Though an interesting thought, this appears to be even more difficult to maintain. By looking at forest leaves alone, a person would have no true concept of what makes them green.

What is to say that the way I perceive green is not the way someone else perceives red? Think outside of the box. When looking at a Christmas tree, it is entirely possible that someone may see what I believe to be red. I cannot disprove it, nor can anyone, as that would involve being in everyone else’s minds. The thought, in addition to being impossible, is not even slightly appealing and would not be useful in this context.

Clearly, a sense of sight is necessary in order to understand things based on sight alone. The blind man’s assumptions on the movie, referring back, most likely came from information his other senses provided. But just as convincingly, the sense of sight alone is not adequate to bring about objective truth.

Now, a bit of a paradox begins to form. It is obvious that the sense of sight is absolutely necessary to understand the objective quality of sight. However, the sense of sight is also insufficient on its own to determine any objective quality of sight. So, if both statements are true, how can we be sure of anything at all concerning the properties of sight?

Ultimately, the answer boils down to a level of societal consistency. I, of course, have no idea how anybody else perceives the so-called “green” of a Christmas tree. In fact, there is mounting evidence to suggest different people are able to see and identify different colors. 

One study, done by Debi Roberson of the University of Essex, looks at this phenomenon. The Himba tribe in Namibia has multiple words for green, but not one distinguishing blue from green. So, when trying to pick out different shades of green, they excelled. But, they were often unable to determine the difference between the common green and cyan, even though the distinction is so obvious to native English speakers.

The northwest square is a different color, which the Himba saw immediately. Source: https://static01.nyt.com/images/2012/09/04/magazine/046thfl-green-color-ring/046thfl-green-color-ring-blog480.jpg

I do know, however, that the idea of green is commonly accepted among all. It matters not whether we all perceive colors the same way; in fact, it is entirely possible that we each perceive them wildly differently.

What does matter, in this situation, is that we can communicate the existence of color. The above study all but proves the notion that color categorization differs from language to language. But even within a single language, there is no guarantee that, as I previously said, someone does not think a Christmas tree matches my perception of red.

Color, then, if not objectively verifiable, cannot have inherent objective truths to it. Its only use is for communication, and thus, it is nothing more than a human construct. This is where the idea of thinking outside of the box comes into play.

Before the cavemen were able to write, were they able to see color? If so, how many were they able to perceive? The Himba tribe study would suggest that the answer to this is no. If a word for a color is necessary in order to see it, how could cavemen existing before language possibly understand it? But if they did not understand it, how could they have possibly come up with words for it? What came first, the color or the idea of color? Either situation proves humanity’s inability to think outside of the box. 

The less likely of the two situations is that cavemen were unable to see color before conceptualizing color. Of course, the study would support this hypothesis, if carried to a full extension. But, it would be impossible for a human to come up with a word for something that he or she could not perceive.

In the modern day, it is safe to assume that the five senses shape our perception of reality. It is impossible for any human being to think of a world in which another sense exists that we do not have the organ to detect, or that our organ is not yet advanced enough to detect. The proof of this lies in the human inability to imagine a new color that does not entail a combination of existing colors or a relative assessment in comparison to them.

So, it is safe to assume that the study has some limitations. Though it is fascinating how language shapes existing color perceptions, it cannot create new ones entirely. Given the scope of human knowledge now, compared to the times of the Neanderthals, it is highly unlikely that they were any more able to do so.

This, by process of elimination, implies that color itself predated the idea of color. So, an ancient human being, deductively, must have looked at his or her own perception of color and given it words. With those words, the ancient could begin showing others this new property, which they had already observed, but not named.

Time passes by. The words red, blue, green, and others fly across the globe, but when we use them today, what are we actually saying? Is there anything true about them, or are we just unable to think outside of the box?

We cannot possibly imagine another term for color, and if we did, we would not be able to explain it. Language has already set aside certain words that denote color, but they are all subjective. So, when seeing green, one is not seeing anything that has any intrinsic green properties, as green itself does not exist. Science tries to approximate it, but even then, it only uses someone else’s definition, someone else’s box. You can’t think outside of the box, because language is in the box. Without language, what is a coherent thought?

The box encompasses all things commonly accepted as human knowledge, given the pattern of perception and storytelling that is knowledge itself. Rather than physical, it is metaphorical, but nonetheless very real. It is a collection of natural occurrences that humans snared with language, conceptualizing them in ways that allow for their understanding. Color, of course, is in the box, as is language. Neither of those is anything more than a social construct used to further communication.

What else can we place into the box of possible thought? Measurements of time, for one. Though time objectively moves, there is nothing intrinsic to say that a second is really a second at all. We merely invent and use the measurements to better understand each other. Try to describe one second without mentioning any words that denote an interval or direction. That means, no “minute”, “year”, or even “forward”. All of them are constructs used to quantify things we cannot otherwise understand. You cannot think in any other terms because you cannot think outside of the box.

Numbers, too, fit into the box. Ask any schoolchild and they will tell you with absolute certainty that 1+1=2. But what is two, and what is one? Again, you cannot describe two without using another human construct for quantity. 3-1, two things, doubled one, the second number after zero. They all are in the box, and you cannot think outside of the box. We can add many more things to it, such as shape, size, and texture. Though they definitely all exist, we cannot describe the concepts without merely giving circular examples of them.

The logic behind the box is not unlike that of a Nigerian prince’s email, offering you his billion dollar fortune. All he asks is for your social security number so that he may complete the transaction. In it, the fake prince says that you can trust the email because he is a trustworthy prince. How do you know he is a trustworthy prince? Well, the email says so, so it has to be true. 

The same reasoning applies to anything in the box. The forest is green because green is found in the forest. Two is more than one because one is less than two. A second is a duration of time because one second goes by for one second to pass. All of these things are very real, but all are also logically invalid. Each of them rests on a different concept. But in order for any of them to be objectively true, the concept cannot be its own justification.

So, what lies beyond the box? We cannot know, because we cannot think outside of the box. But in so many ways, we likewise cannot think inside of the box. So, in a sense, we cannot wrap our heads around the things that lie inside of the box, and we cannot comprehend things that fall outside of it. 

Is this a contradiction? It would appear that this very logic falls within the same category of circular reasoning as many things in the box. Does the box exist at all? If we cannot understand the things inside or outside of it, where can the line fall?

The exact location appears nearly impossible to ascertain. Though individuals can have limitations, they are always in flux. Some evidence suggests that, until relatively recently, languages had no word for blue and people may not have distinguished it as its own color. Does this mean that there are other things which we have the capacity to perceive, but simply do not? Surely I cannot rule this idea out, and thus, these things too would fall in the box. If more is in the box than we can see, then we cannot possibly see the edge of the box.

Where would such a box come from? What kind of limitations exist on the human capacity to know and perceive? The origins of a box may lead back to a Creator, or a scientific guideline, or perhaps some fusion. Maybe all of these possibilities are moot, and are only products of the box.

Being unable to think fully inside or outside of the box, I cannot begin to fathom where it may have come from. With certainty, though, I can state its existence. But in order to investigate its origin, I must first know what it is. And to know what it is, I must know of the boundary between in and out of the box.

The line must exist, for without it, there is no distinction. Without distinction, there is nothing. But with distinction, there is only circular reasoning, which leads itself down a road free of knowledge. The box must exist, but for it to exist, it must have a place to fall. But, without knowledge of the inside or outside, how can such a place exist? Perhaps, if I was able to think outside of the box, it would be more imaginable. Perhaps not. In a world of circular reasoning, the prospect of an answer appears as dark and unclear as is the box itself, turned over on top of humanity.


Knowledge is Asymptotic, Not Absolute or Relative

By Craig Axford | United States

It is understandable that we would prefer sharp clean distinctions. From an evolutionary perspective, being decisive is far more advantageous on the savanna where lions, snakes, and various other dangers lurk than being a philosopher or scientist who prefers to consider every predator’s behavior in context. Yes, the blood dripping from its mouth is probably an indication it just ate, but let’s just climb the nearest tree and figure out how hungry it really is later.

In nature, erring on the side of caution often comes with little cost. Indeed, caution could be defined as the willingness to be wrong almost every time in order to improve your chances of survival. Overreactions to potential threats only need to be right once to pay huge dividends.

We still act this way today. The odds of a car accident during any given trip are hundreds to one against, but we still buckle up and the government still requires manufacturers to place airbags in automobiles. Likewise, we expect pharmaceutical companies and medical professionals to inform us of the risk associated with various treatments even though psychological research has shown that we consistently tend to overestimate the true danger even when we’re in possession of the data.

Given our tolerance for error in so many of our routine daily activities, you could be forgiven for thinking we are quite tolerant of it in a scientific context. After all, error is an expected part of the scientific process, even if the hope is that it will progressively diminish as our knowledge increases.

Instead, scientists are consistently accused of crying wolf or of just plain getting it wrong. Every meteorologist knows that if they claim there’s an 80% chance of rain and it doesn’t rain, they’ll be accused of ruining people’s weekend plans. Likewise, if a doctor tells a patient there’s a 1% risk of experiencing a particular side effect and the patient turns out to be that unlucky one in a hundred, the patient will inevitably suspect the doctor underestimated the risk.

In 1989, Isaac Asimov published an essay in The Skeptical Inquirer entitled “The Relativity of Wrong.” It was inspired by a letter Asimov had received from a student majoring in English literature who felt the need to set Asimov straight on the subject of science. The English lit major was convinced that Asimov had been too eager a promoter of science, given that science had, over the past few centuries, initially been wrong about a great many things. The student reminded the great writer and scientist that Socrates had said, “If I am the wisest man, it is because I alone know that I know nothing.”

The student’s letter serves simultaneously to remind us that both relativism and absolutism are poor perspectives from which to view the universe if our goal is to expand our comprehension of it. Socrates was offering a lesson in humility and a statement regarding absolute or perfect knowledge, not making a claim that it is impossible to gain greater understanding. Too many of us, like the student who wrote Asimov, think that anything short of perfect knowledge is ignorance, and therefore unworthy of our effort.

An asymptote is a line that a curve continually approaches but never actually touches. Knowledge is asymptotic, not absolute.
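
For a concrete picture (my own illustration, not the author’s or Asimov’s), take a simple function that climbs toward a ceiling it never reaches:

$$f(x) = 1 - \frac{1}{x} \to 1 \text{ as } x \to \infty, \qquad \text{yet } f(x) < 1 \text{ for every finite } x > 0.$$

On this analogy, the line y = 1 is perfect knowledge and f(x) is our understanding: always climbing, never arriving.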

In his response to the student Asimov wrote, “John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.” He goes on to point out that the curvature of the earth is only about “0.000126 per mile,” so the flat earthers weren’t completely out to lunch given that’s awfully close to zero. As for the earth being a sphere, due to a bulge at the equator it’s more like an “oblate spheroid.”
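
As a rough check on that figure (my arithmetic, not Asimov’s, assuming the commonly quoted drop of about eight inches over one mile of the earth’s surface):

$$\frac{8 \text{ inches}}{63{,}360 \text{ inches per mile}} \approx 0.000126 \text{ miles of drop per mile},$$

which is Asimov’s point exactly: very close to zero, but not zero.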

But Asimov’s larger point is that as we gather more facts and consider how they fit together, our understanding approaches perfect knowledge. That it never becomes absolute is irrelevant. The flat earthers weren’t 100% wrong, but they were more wrong than those who thought the world was spherical, who were more wrong than those who realized planets bulge at their equator, and so on.

Isaac Asimov would be disappointed to read all the headlines circulating on the Internet these days that provocatively declare — more in the interest of clicks than truth, I suspect — that “Science is wrong” about this or that. Skeptics throw out words like “scientism,” which I suppose is meant to distinguish those of us convinced that science is something worthy of our respect and attention from people like John, the English literature major who is wise because he, like Socrates, knows he knows nothing.

People will argue that a vaccine with a success rate of 50% is a failure because up to half the people receiving it might still get sick, as though reducing the instances of a particular disease by half was just as bad as not reducing the number of people sickened by the virus at all. They will point to the one in 10,000 infants that has a negative reaction to a vaccination while ignoring the millions that no longer get measles or mumps, as if that single infant is proof the whole vaccination enterprise is a disaster.
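
To make the arithmetic behind that complaint explicit, here is a toy calculation. The 50% efficacy and one-in-10,000 reaction rate are the figures quoted above; the population size and attack rate are invented purely for illustration, not real vaccine data.

```python
# Illustrative arithmetic only: benefits vs. harms using the figures above.
population = 1_000_000      # hypothetical vaccinated population (assumption)
attack_rate = 0.10          # hypothetical share who would fall ill unvaccinated (assumption)
efficacy = 0.50             # the "50% success rate" quoted in the text
adverse_rate = 1 / 10_000   # the "one in 10,000" reaction rate quoted in the text

cases_without = population * attack_rate
cases_with = cases_without * (1 - efficacy)
adverse_reactions = population * adverse_rate

print(f"Cases prevented:   {cases_without - cases_with:,.0f}")  # 50,000
print(f"Adverse reactions: {adverse_reactions:,.0f}")           # 100
```

Even with made-up inputs, halving the caseload outweighs the rare reactions by orders of magnitude, which is the sense of proportion the “science is wrong” framing ignores.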

Finally, someone will inevitably cite some scientist or another who falsified their data in order to convince the rest of us their hypothesis was correct, and use this case to discredit science. Integrity is central to the scientific method. Blaming science for a scientist who was either dishonest or inept is like blaming the law every time a bank is robbed, or blaming internal combustion for accidents caused by distracted driving. Scientists, like everyone else, make mistakes. Sometimes the errors are deliberate, though usually they are just honest human error.

Arriving at a better understanding of our world and how all the different bits fit together and influence one another is what makes science and philosophy interesting. One of the reasons I enjoy writing is it allows me to explore different perspectives and to use the process to improve my understanding. Writing is thinking in action.

Nothing we do leads to a perfect understanding, but hopefully our efforts move us steadily further from the chaos of absolute relativism. Truth is asymptotic. Knowledge is asymptotic. The arc of our understanding will hopefully keep bending steadily toward perfection. We shouldn’t be discouraged by the fact that it will never reach it.


Beware The Confident ‘Thinker’

By Mason Mohon | @mohonofficial

The more you know, the more you realize you don’t know. – attributed to Aristotle

There are people in this world who walk around, touting their political and philosophical beliefs like nobody’s business. They act smart, and at first glance seem smart. It is as if they have absorbed the knowledge of a wise teacher and wish to disseminate that newfound knowledge into the populace.

But then you begin to peel back the surface, and you realize this person knows absolutely nothing. They cannot defend their ideas when they clash with those of others. All they seem to be able to do is repeat a mantra or soundbite that sounds good.

“Facts don’t care about your feelings.”

“Equal pay for equal work.”

“The freer the market, the freer the people.”

The mental complexity of people like this is lacking. They seem to not understand how to open a book. Their views probably come from PragerU or NowThis videos, or badly-made political YouTube “documentaries.”

We all know someone like this. You’re probably thinking of someone specific in your life right now.

There’s a reason for the confidence of such people, and science can explain it.

A Political Psychology study (linked above) outlines this phenomenon in those who are politically inept. The reason this happens is something called the Dunning-Kruger effect: a cognitive bias in which people of low ability suffer from illusory superiority.

We all face the unknown; it is a fact of life. Whether or not we know how many unknowns there are varies from person to person, though. A more intelligent person realizes that there are more known unknowns than knowns within their mind. They realize that there is a vast number of skills, areas of expertise, and pearls of wisdom that they have yet to master, and probably never will.

The less knowledgeable person, though, is not aware of these unknowns, turning them into unknown unknowns. They are ignorant of that which they do not realize. They do not realize that there is an insurmountable sea of political, philosophical, and economic thought that nobody can fully absorb and comprehend.

Because they do not see all of the things they don’t know, they assume they have reached a high level of intellectual prowess. In reality, they are on the lowest level around.

As the study says:

This double burden of incompetence means that low-performing individuals often overestimate their own objective performance. A second and related aspect of the Dunning-Kruger effect is that these low achievers will be less capable of rating and comparing peers’ performances.

If you have read this article and been faced with an uncomfortable conviction, do not worry. All hope is not lost. There are ways to realize humility in this ocean of ideas. My personal favorite method is the one which I have taken up myself.

I have created an “anti-library.” I am 17 years old, yet I have over 100 books on politics, economics, psychology, and philosophy sitting on my overflowing bookshelves. I add more books faster than I can read them.

Having more books than I know I can read, and always adding more, gives me a healthy dose of intellectual humility in life. Each book that I have in my room but have yet to read shows me that there is knowledge I do not have.

If you know a pseudointellectual, let them borrow a book. If you are a pseudointellectual, start collecting books and read a lot of them.



Have Scientists Begun Ending Human Mortality?

By Ryan Lau | United States

Mortality, as humanity currently stands, is a defining attribute of the human condition. In fact, save for the exception of a few crustaceans and ageless microorganisms, it is the condition of life itself to die. The tiniest life forms all the way up to the most prosperous humans all recognize death as a future eventuality. Essentially, organisms treat it as a scarcity of life.

There is not enough life to go around to satisfy all those in possession of it, and thus, life makes choices, some conscious, some unconscious. The tomato plant drops its seeds, preparing a new generation for life, not long before death. The fox avoids poisonous berries, knowing they would further impose scarcity. The human being, of course, carefully selects an occupation, spouse, location, hobby, and much more, all with the knowledge that he is choosing these over others, making his limited life the best that he can. In every instance, life forms use scarcity as a driving force in conscious or unconscious decision-making.

Now, imagine a world in which scarcity is no longer a factor. Somehow, someway, the world has overcome its own beautiful yet crippling condition. What exactly would this entail? Philosophers have created many models of such a world. Yet, post-scarcity of life is largely uncharted territory. Without a doubt, this would fundamentally change what it means to be a human being. Such a change is no easy task, but scientists are beginning to lay the groundwork. Yes, they are taking the first steps towards immortality, through the tiny organism Caenorhabditis elegans.

The C. elegans worm, which has a lifespan of just two weeks, may hold the key to immortality in its short-lived body. Specifically, it carries a gene known as daf-2, which has shown vast potential in the field of life extension. In fact, when molecular biologist Cynthia Kenyon first mutated the daf-2 gene in these worms in 1993, the results were amazing. A single mutation doubled the life expectancy of the worms to four weeks, effectively halving the pace of aging. This strong and direct result showed that there are living organisms with genes that control aging and mortality. Though a long way off, could such a gene show movement in the direction of immortality?

To answer this question, it is first important to recognize what immortality truly means. Despite many often giving it the wrong meaning, it does not have to mean the full end of death. Simply put, immortality is the end of death by natural causes. Those worms would still meet their end if Kenyon were a cruel scientist and lit their habitat on fire. They would still perish if sliced in half or poisoned. Yet, they take twice as long to die of old age, because they age at roughly half the rate. Can this doubling in lifespan be the first step to immortality? The simple answer is yes.

In order to end relative mortality, humanity must extend its life expectancy by more than one year for every calendar year that passes. Essentially, this would mean that the average human will never die of old age, though many of course still will die from other causes, including violence and disease. Imagine a man who is 40 years old today, with a life expectancy of 90. In the next year, the ever-increasing pace of technology and science raises the life expectancy to 91. On his forty-first birthday, he has the same estimated 50 years of life left. In a sense, he has just lived through an entire year without coming a second closer to death.

The following year, the man turns 42. Yet this time, technology and medicine have continued to accelerate, as they always have throughout human history. Now, the life expectancy has soared to 92 years, one month. Though the man just lived through an entire year, assuming he has not drastically changed his lifestyle habits, he is now one month further from mortality than he was two years ago. Could this really be in our future?
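
As a sanity check on that arithmetic, here is a toy sketch. The starting age and life expectancy come from the example above; the assumption of a constant gain of thirteen months per calendar year is mine, made purely for illustration rather than drawn from demographic data.

```python
# Toy model of "relative immortality": remaining expected years for one person
# when overall life expectancy grows by more than a year per calendar year.
age = 40
life_expectancy = 90.0      # expectancy at age 40, as in the example above
yearly_gain = 1 + 1 / 12    # assumed gain: thirteen months per calendar year

for _ in range(5):
    remaining = life_expectancy - age
    print(f"age {age}: about {remaining:.2f} expected years remaining")
    age += 1
    life_expectancy += yearly_gain

# Because expectancy rises faster than the calendar, the "years remaining"
# figure never shrinks as the man ages -- the crux of the argument above.
```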

Surely, we are nowhere near this state today. In the United States, life expectancy rises by a couple of years each decade. Yet this increase has accelerated drastically since the 1800s. Similarly, the amount of human knowledge has grown at increasingly rapid rates. By some widely cited estimates, human knowledge now doubles roughly every 13 months, and medical knowledge roughly every 18 months; by the same estimates, human knowledge doubled about every 100 years around 1900 and about every 25 years by 1945. What will happen, then, when our doubling rate reaches one month? The rates will only continue to increase, as the technology we discover and use enables us to further our knowledge even more.
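
Taken at face value, those doubling times imply startling compound growth. The sketch below simply runs the exponent; the 13- and 18-month figures are the estimates cited above, assumed here rather than verified.

```python
# Compound growth implied by a fixed doubling time.
def growth_factor(months_elapsed, doubling_time_months):
    """Multiplier after months_elapsed if the quantity doubles every doubling_time_months."""
    return 2 ** (months_elapsed / doubling_time_months)

decade = 120  # months
print(f"13-month doubling: ~{growth_factor(decade, 13):,.0f}x per decade")  # ~600x
print(f"18-month doubling: ~{growth_factor(decade, 18):,.0f}x per decade")  # ~100x
```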

As this rate increases for human and clinical knowledge, scientists and doctors may be able to take the findings from C. elegans and apply them to human biology. A similar gene, if present among the many thousands of human genes, could perhaps double human life expectancy. Moreover, doctors will be working with more knowledge and success than ever to cure physical disease. As humanity works faster and faster, will we eventually see a day when one of those ailments is mortality itself? Without a doubt, we are still far from this state becoming a reality, but at our current rate of increasing knowledge, and given life-extension experiments on simpler organisms, it just might be a possible future.