Archive

July 2013


Since today is Bastille Day, I’ve been thinking about the nature of revolutions. Here’s my question for today: what makes a revolution succeed or fail? Why did the American Revolution result in relative political stability, while the French Revolution ran amok and eventually resulted in the rise of another dictator?

This question is also becoming increasingly relevant as we watch the transitions and revolutions sweeping the Middle East and North Africa. Why did Egypt’s democratic revolution bring Mohamed Morsi to power?

Perhaps the answer is prudence. Russell Kirk notes that the American Revolution was “not a revolution started, but a revolution prevented.” I mostly agree with that. The American system retained many parts of the old British system, including the bicameral legislature and English common law. The executive they created was somewhat new, but for the most part the new American government mirrored the English one. While the French found themselves caught up in attempting a radical social transformation, the Americans were more cautious: French revolutionaries were the intellectual heirs of Rousseau’s Social Contract and its demands for instant social change, whereas American revolutionaries were influenced by the more measured ideals of Burke and Montesquieu.

There is something dangerous about being so furious with the old system that the revolutionaries seek to do away with every part of it. Many populist revolutions, like the French Revolution and later the Bolshevik Revolution, pander so much to the public outcry for change that they never stop to consider what they are changing. The French Revolution may well be the definition of the “tyranny of the majority,” and its tides of emotion and righteous indignation are difficult to quell.

Once a revolution begins, it is difficult to tell where it is going. There must be a mechanism in place to stem the passions of social transformation, or else the whole process can go horribly awry. For example, the Chinese Communist Revolution was so brutal in part because of Chairman Mao’s demand for “constant revolution” so that the people would never become “complacent.”

Social change is a worthy goal, but the lessons of history suggest that prudence is necessary to make sure it succeeds. Revolutions cannot be solely emotional if they are to work – logic and caution must play a prominent role as well.

On a lighter note, here is a great comic by one of my favorite comic artists, Kate Beaton.

I love science. There are few methods of understanding dearer to my heart than the scientific method – it has been responsible for some of the most amazing advances and discoveries in human history, and I look forward to what it has to contribute to us in the future.

That said, we should resist labeling anyone who questions a scientific idea “anti-science.” “Anti-science” is not a scientific term but a political one. So what, then, does it mean to be “anti-science?”

To investigate why labeling people who express skepticism “anti-science” is problematic, it is necessary to start with a few definitions: the scientific method and the difference between scientific hypotheses, theories and laws.

The Scientific Method

I define the scientific method as the following process:

1. Formulation of a question

2. Hypothesis – a guess based on prior knowledge

3. Prediction – determining the logical outcome of the hypothesis

4. Testing – usually an experiment, but always an investigation about whether the real world is in accordance with the hypothesis

5. Analysis – determining the results of the testing and deciding what steps to take next

6. Retesting – repeat steps 1-5 until the results consistently point to the same conclusion (a rough sketch of this loop appears below)
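To make the loop concrete, here is a minimal Python sketch of steps 1-6 applied to a toy question – whether a coin is fair. The numbers, function names, and tolerance are my own illustration, not part of any formal methodology.

```python
import random

# Question: is this coin fair?
# Hypothesis: the coin lands heads about half the time.
# Prediction: in 1,000 flips, roughly 500 should come up heads.

def run_trial(num_flips=1000, heads_prob=0.5):
    """Testing: 'flip' the coin num_flips times and count the heads."""
    return sum(random.random() < heads_prob for _ in range(num_flips))

def supports_hypothesis(heads, num_flips=1000, tolerance=50):
    """Analysis: does the count fall close enough to the predicted 500?"""
    return abs(heads - num_flips // 2) <= tolerance

# Retesting: repeat the whole cycle many times and check whether
# the same conclusion keeps coming up.
trials = [supports_hypothesis(run_trial()) for _ in range(100)]
print(f"Hypothesis supported in {sum(trials)} of {len(trials)} trials")
```

If the coin were actually biased, the same loop would keep failing the analysis step – which is exactly what would send a scientist back to step 2 with a revised hypothesis.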

Scientific Hypotheses, Theories and Laws

Hypotheses, theories and laws are not interchangeable, and they do not carry the same weight of evidence.

Hypothesis: An educated guess based on observation. (e.g. If you notice that your iPhone goes missing every time your little cousin comes to visit, you might hypothesize that your cousin steals your iPhone.)

Theory: The summary of a hypothesis or set of hypotheses that have been repeatedly tested and supported (e.g. Darwin’s theory of evolution.)

Law: The generalization of a large body of observations that appears to always be true – as of right now, there are no laws with exceptions (e.g. Newton’s Law of Gravity or the Laws of Thermodynamics.)

The Concept of “Anti-Science”

Occasionally, scientists will formulate hypotheses and theories, and other scientists, economists, and laypeople will question them. The questioners are then labeled “anti-science” because “the evidence is all there – how can they not agree? If they disagree, they must simply hate science.”

The labelers believe that the questioners are anti-science because, to them, the hypotheses and theories they have formed are so self-evidently true that the questioners might as well be trying to refute laws. For example, scientists will come up with a theory and say that questioners of their theory are akin to questioners of gravity.

Hypotheses and theories are not laws. It is one thing to question whether climate simulation models are sufficiently accurate at predicting the future climate and another thing to question the existence of gravity. The models constitute a theory, and gravity constitutes a law. Remember that there are no exceptions to laws and that laws cannot be disproven. Theories, on the other hand, can always be countered by competing theories.

Why the Label Itself is Anti-Scientific

Nothing is more truly anti-science than labeling anyone who doesn’t agree with your hypothesis or theory a pariah. Science is at its best when scientists are constantly questioning and testing each other’s claims. The more rigorous this process of questioning and testing is, the closer our science comes to being true. Science is much better for the existence of competing theories because, without them, we would never approach the middle ground where laws usually lie.

If we really care about science, we ought to be encouraging questioning of methodology, data, and experimental processes, not demonizing it.

I was glancing through my Facebook News Feed the other day when I saw a meme a friend had posted: “When we destroy something created by man, we call it vandalism. When we destroy something created by nature, we call it progress” (Ed Begley, Jr.).

Why are humans considered separate from nature? Human beings are, at the end of the day, still “animals” in the strictest sense – we are mammals. It is as though many people consider humans “apart from nature,” some kind of rogue element, a parasite that destroys nature by way of its mere existence. Yet, we don’t treat other animals this way. When humans use wood to build homes and make paper products, they are accused of “cutting down the trees and destroying nature.” Yet, when birds use wood to build nests, that is considered “natural.” Why? The things that bears and beavers build are considered natural, but the things humans build are considered unnatural. That is illogical.

The line between what is “natural” and what is “unnatural” has become somewhat arbitrary. Many people routinely consider chemicals unnatural, but nearly every natural occurrence is the result of a chemical reaction. So, then, why are some chemicals privileged over others? What makes certain chemicals more “natural” than others? In the late 20th century, Greenpeace attempted to ban the element chlorine entirely – but chlorine is part of table salt, it disinfects our drinking water, it cleans our swimming pools, and it performs a number of other functions that are crucial to society. Banning a naturally-occurring element on the periodic table seems unnatural.

The idea of “harmony with nature” has both upsides and downsides. Because we exist in a modern industrialized society where we have the privilege of taking for granted our hot water, our instant light-switches, and our internet, we have begun to idealize a pastoral Eden that has never existed. Many people have become seduced by the vision of “humans in harmony with nature,” a picture in which we dance in the woods, live in small homes, and drink water straight from the streams. Yet, that is a utopia – it has never existed.

The reality of nature is more harsh than romantic. Before the late 19th century, when humans actually lived in a more “natural” state, that state included such horrors as burning wood and dung for fuel, malaria, tuberculosis, drinking water from streams fouled with animal waste, and floods and storms from which we had little shelter or protection. Nineteenth-century Americans boasted proudly about taming nature, because it meant that they had overcome such hardships and could live comfortably. When we say that we want to “live in harmony with nature,” what we actually mean is that we want to go camping for about a week (perhaps stay in a log cabin while we are out in the woods), bring modern conveniences with us, and then go back to our heated, weather-resistant homes once that week is over. What we do not mean is that we want to abandon modern civilization entirely and put ourselves at the mercy of natural forces.

We should be willing to consider our industrial achievements part of nature. Since we are part of nature and they are our creations (made from elements found in nature), there is no reason they should not be considered part of the environment. Being in “harmony with nature” does not mean recklessly attempting to turn back time – it means charting a more prudent future. So let us build and be proud of what we build – our creations are as much a part of nature as we are.

 

I’ve been reading Henry Hazlitt’s “Economics in One Lesson,” and it occurred to me that there is a moral case both for and against minimum wage laws. Yet, it seems to almost be a truism among many people today that these laws are inherently moral. Maybe we should rethink that.

The minimum wage debate basically comes down to which is the greater of the two evils: low wages or unemployment. The fact is that minimum wage laws do put people out of work, and that is a real ethical issue that we should care about.

I’m certainly no economist, but here is a hypothetical thought experiment for considering the moral implications of the minimum wage:

Cindy lives in Seattle and is looking for work. Based on her qualifications and experience, most employers would be willing to pay her $280 a week ($7 an hour, assuming a 40-hour work week) to work in a free market. The state of Washington, however, requires employers to pay her a minimum of $367.60 a week ($9.19 an hour, again assuming a 40-hour work week). Since employers don’t value her labor at $367.60, they are unlikely to hire her and will instead hire someone whose qualifications and experience are worth $367.60 to them. Thus, Cindy can’t find a job.

Now, let’s consider what happens when we add unemployment benefits to the equation. Since Cindy can’t find a job, she applies for unemployment relief. This poses two distinct problems for her and for society:

  1. On one end of the scenario, the minimum level of unemployment relief that Cindy can obtain in Washington is $148 a week ($3.70 an hour, if we treat it as the equivalent of a 40-hour work week). Her skills make her worth $280 a week in the market, but because she is prohibited by law from earning that amount, she loses $132 a week that she could otherwise have made. Cindy is effectively banned from earning money she could have used to sustain herself.
  2. On the other end of the scenario, the maximum level of unemployment relief that Cindy can receive is $624 a week ($15.60 an hour, assuming the same conditions as above). Since a minimum wage job would pay her $367.60 a week while unemployment benefits are worth $624 a week, it makes no practical sense for her to work at all. It is more economical for her to be unemployed than to hold a minimum wage job, and that deprives Americans who could otherwise have benefited from her services (the arithmetic for both scenarios is sketched below).
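
For readers who want to check the arithmetic, here is a small Python sketch of the weekly figures in Cindy’s hypothetical. The dollar amounts come from the example above; the variable names and the comparison logic are mine.

```python
HOURS_PER_WEEK = 40

# Weekly figures from the hypothetical above (all assume a 40-hour week)
market_wage = 7.00 * HOURS_PER_WEEK     # $280.00 – what employers would pay Cindy
minimum_wage = 9.19 * HOURS_PER_WEEK    # $367.60 – Washington's required minimum
benefits_min = 148.00                   # minimum weekly unemployment relief
benefits_max = 624.00                   # maximum weekly unemployment relief

# Scenario 1: she cannot legally work for $280, so she falls back on minimum relief
print(f"Weekly income lost versus the market wage: ${market_wage - benefits_min:.2f}")   # $132.00

# Scenario 2: maximum relief exceeds what a minimum wage job would pay
print(f"Benefits exceed a minimum wage job by:     ${benefits_max - minimum_wage:.2f}")  # $256.40
```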

Clearly, the moral case is not as black-and-white as minimum wage proponents would have us believe.  Their argument rests on the idea that extremely low wages hurt Cindy’s fundamental human dignity. Yet, unemployment is also an indignity. She loses her sense of self-sufficiency and her feeling that she has earned any success she might have. The indignity of unemployment merits consideration every bit as much as that of low wages.

It’s almost considered heresy in many circles to say that you care about international development and that you believe in classical liberal economics at the same time. You get a lot of funny looks and statements like, “Neoliberalism was one of the worst things to happen to the developing world – look at what the Washington Consensus did to Latin America!”

Property rights can help free markets work in developing countries. I recently finished reading Hernando de Soto’s The Mystery of Capital, and I think his ideas make sense. De Soto proposes that part of the reason post-colonial developing countries have been unable to replicate the success of Western capitalism is insufficient legal infrastructure. For example, he argues that property rights are not respected in many countries and that tasks as simple as starting a small business can require as much as two years of paperwork and bureaucratic delay. Ultimately, the poor find the legal system so stacked against them that they simply opt out of it entirely and operate in an extralegal realm where they enforce their own rules – a situation not unlike the American Wild West. De Soto also suggests that the representations of capital that exist in the United States (deeds to houses, titles to land, etc.) don’t exist in legally enforceable or consolidated forms in many developing countries. What good is buying a house if you don’t have a deed to show that you own it? What do you do if you cannot sue someone for squatting on your land? These situations create obvious problems.

To suggest that markets are failing in developing countries because the poor are simply not entrepreneurial enough, or because the whole situation is the fault of Western former colonial powers, is both simplistic and somewhat insulting. The poor in most developing countries are exceptionally entrepreneurial – you cannot walk through the streets of India or China without seeing vendors in stalls selling things all around you. Haggle with the vendors and witness their business savvy for yourself. Yet, it is hard to succeed when a small business permit takes two years to obtain or when you cannot trade representations of your capital through the elaborate transactional mechanisms we have developed in the West.

I don’t disagree that the Washington Consensus was largely a bad idea. It doesn’t seem wise to force austerity measures and artificial liberalization upon economies that aren’t our own. The Washington Consensus, in many ways, flew in the face of certain tenets of classical liberal economics – it was an attempt at central planning. Like many other attempts at central planning, it failed because the central planners and bureaucrats at the IMF and the World Bank didn’t know very much about the local experiences of Peruvians, Venezuelans, Argentinians, and many others. The decisions should ultimately have been made in Lima, Caracas, and Buenos Aires, among other capitals. Perhaps then markets would have grown organically and without the intense backlash against Western free market ideas that the Washington Consensus engendered.