Even though I believe the church is important, as is any kind of spiritual community, I think the pain and suffering the church causes must be challenged. I think the church, like any other institution or corporation, can become possessed by the principalities and powers and enslave people. I feel a responsibility to resist this phenomenon personally and publicly, because I have been a victim of it myself, conspired with it myself, and witnessed the destruction it causes.

David has recently written a number of posts on fundamentalism (some of which are here, here, and here), and makes the point that we are all fundamentalists at one time or another on certain issues (not necessarily religious ones). I thought about this for a while, and I'm still somewhat conflicted about whether I agree with the premise. To me, there are two ways to interpret his point:
First, we all mistakenly hold onto beliefs for which we don't have good evidence, even when presented with evidence for an opposing view from time to time. This I agree with; I know I have held beliefs for which I didn't really have good justification, and would even argue for them with bad logic and reasoning. Eventually I've fixed some of these, but I'm sure there are still others of which I'm unaware.
However, I don't think this is a very good definition of fundamentalism. I try my best not to hold beliefs without good justification for them, and if I realize that something I believe is not properly justified, I reconsider and change my beliefs to better fit the evidence. While I know I'm not perfect at doing this, I don't think that makes me a fundamentalist, because I aspire to do the best I can. A fundamentalist willfully adheres to some set of principles without justification for those principles, and believes that they should (i.e., are obligated to) hold certain beliefs without proper evidence. To me, fundamentalism is more about a person's overarching attitude toward justification than a practical assessment of the justification for their beliefs. That doesn't mean David's advice for being compassionate while talking with fundamentalists isn't helpful, only that there is a difference between being mistakenly stubborn about a particular belief and believing one should be unjustifiably stubborn.
Another way to interpret this is that we all have certain fundamental beliefs that we can't justify, and accept that this must be the case. This reminds me of the argument that some lob out against the scientific/skeptical community: that everyone must have some basis for their beliefs that can't be justified. The argument is basically that to justify any belief, we need to assume that something else is true. All of those assumptions must either be justified themselves, or we are essentially holding them as true without justification.
As an example, I believe evolution happened because of the variety of evidence I have seen supporting the theory, from a variety of sources (fossils, genetics, etc.). But I also need justification for believing my evidence is credible (e.g., how do I know the fossil is really X years old?), which leads to other assumptions (e.g., justifying my dating methods), in what seems to be an infinite regress. At some point, it seems like there must be some foundation on which all of these beliefs stand. Perhaps I take the scientific method itself as true a priori, or something more basic (laws about cause and effect, for instance), but eventually I have to stop justifying things somewhere.
But that assumes a linear progression of beliefs justifying other beliefs (i.e., belief A justifies belief B, which justifies belief C, and so on). My belief system seems, at least to me, more like a complex network of interconnected beliefs. If I want to know whether I should believe A, it's not enough to know that B is good justification for A. I judge A on the basis of a number of other beliefs I hold to be true, and make sure that everything meshes together.
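The contrast between the two pictures can be sketched in code. This is purely my own toy illustration (the belief names and support structure are invented for the example): in a linear chain, the regress of justification has to bottom out in some unsupported foundation, while in a network every belief can be supported by others, with no single foundation.

```python
# Toy illustration: represent "what justifies what" as a mapping from
# each belief to the beliefs that support it.

# Linear model: C rests on B, which rests on A, which rests on... nothing.
chain = {"C": ["B"], "B": ["A"], "A": []}

# Network model: beliefs support one another; support is mutual.
network = {
    "evolution": ["fossils", "genetics"],
    "fossils": ["dating methods", "genetics"],
    "dating methods": ["physics", "fossils"],
    "genetics": ["evolution", "physics"],
    "physics": ["dating methods"],
}

def foundations(graph):
    """Beliefs with no supporting beliefs (where the regress must stop)."""
    return [belief for belief, supports in graph.items() if not supports]

print(foundations(chain))    # ['A'] -- the chain needs an unjustified foundation
print(foundations(network))  # []    -- no belief stands alone; all are interconnected
```

Of course, mutual support raises its own worry (circularity), which is roughly the problem the next paragraphs wrestle with: consistency within the network is doing the justificatory work, rather than a single foundational belief.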
One consequence is that when I find that certain beliefs of mine are mistaken, even ones that I've held for a long time, it's not usually necessary to throw out a whole line of reasoning along with them. For instance, many beliefs I hold are based on simple common sense. However, being a scientist has shown me that common sense isn't always reliable. Just learning the basics of quantum mechanics made me question all kinds of things. But I didn't radically change all of my beliefs because of it, because most of my beliefs, while consistent with common sense, are also bolstered by many other beliefs I hold. I have to think about things from time to time to make sure my beliefs are still consistent, and occasionally I realize that an old belief no longer fits and I have to adjust. However, most of my beliefs are held because of their consistency within the larger picture.
A possible problem with this view is that if you start out with a bad set of beliefs (e.g., when you're young and inclined to believe what you're told), then beliefs that are merely consistent with that starting set aren't likely to be true. As we computer scientists say: garbage in, garbage out. So even if an adult believer doesn't need a fundamental belief held without justification, the beliefs absorbed in childhood can contaminate the whole network. This is also a problem for a linear progression of belief and justification, but it still has to be dealt with here.
Thinking about this more, there is something that I know I take for granted in my belief system: the belief that my sense of the world is an accurate depiction of objective reality.* Now if you've taken some philosophy courses, you've probably been presented with thought experiments that show problems with this assumption, often called the "skeptical hypotheses." Epistemological skepticism is the argument that we cannot know anything about objective reality, which is different from the way we use the word "skepticism" in everyday vernacular. As an example, there is the brain in a vat argument:
The basic situation is that although you have experiences of what you think is objective reality, they are simply a simulation created by some deceptive power, which in this case is a supercomputer. Imagine that you are actually just a "brain in a vat" attached to the supercomputer, which generates all of your experiences, excellently depicted above (thanks Wikipedia!). Other thought experiments make the same point, including Descartes' evil demon and the dream argument.
The question is, how do you know that this hypothesis is false, and that your experiences really are "real" (i.e., generated by physical objects in the real world)? There are very few arguments against the skeptical hypotheses, and I don't find them very convincing, so I currently operate under the assumption that we cannot be certain our senses are accurately representing reality. Given that, how can I assume that my senses are basically trustworthy?
Well, let's assume that my assumption is bad, and that my senses are completely untrustworthy. If that's true, then what does holding that belief get me, other than being right about my senses? Nothing. I can't build any other beliefs on the premise that my senses are essentially worthless. Without my senses, I have trouble coming up with other ways to generate beliefs (other than perhaps in a few specialized subjects that don't rely on a posteriori knowledge, like mathematics and logic). However, if I assume that my senses are trustworthy in most situations, then I can use my sense data to justify other beliefs. While this is quite convenient, it doesn't mean that my sense data really is useful.
Let's assume one more time that my senses are not reliable. How would that change what beliefs I should hold? I would suggest it doesn't change anything. Let's assume further that it's equally possible that right now I am either a brain in a vat, or actually sitting at a desk in a room, typing on a laptop. In either situation, I have to act as if what I'm experiencing is real. If I jump out of my second-story window, I'm going to experience pain, whether I'm actually jumping out of a physical window or it's all being simulated in a supercomputer. In either case, I should act as if what I'm experiencing is real, and therefore it helps to base my beliefs on that premise. It is possible that my beliefs are wrong, but they are useful. And if those beliefs are wrong, it would be impossible to hold any beliefs that are correct anyway (other than the belief that my senses are useless).
Even if your beliefs are technically false because you're actually a brain in a vat, there is a sense in which they are still true, if modified slightly. For example, I may hold the belief "I'm sitting at a desk," which may or may not be true. However, it is definitely true that "I perceive that I am sitting at a desk," regardless of objective reality, and most beliefs about the external world can be internalized in this way, making the question of whether my senses reliably track objective reality moot. So to me it makes sense to develop a belief system based on the assumption that my senses are basically reliable, because even if they aren't, the belief system I have is still the best I'm going to be able to come up with.
So there's my armchair philosophical argument for why I'm not a fundamentalist, while still having a useful belief system about the external world. Anyone have any thoughts on any of this? (Probably not, only I would spend this much time thinking about this...)
* Accurate enough, at least. I know that the signal coming from my sense organs (eyes, ears, etc.) is changed all along the way to the brain, and is then manipulated in plenty of ways by my brain to make the sensory input easier to interpret.**
** That's right, double minor in philosophy and psychology. What of it?!