Logic as Theory, Not Dogma

There are certain remarks that tend to surface when people discuss areas of disagreement, perhaps especially philosophical disagreement. These include:

  • “That’s illogical!”
  • “It’s logically impossible for that to happen!”
  • “That violates the Laws of Logic™!”
  • Etc. etc.

There are a number of interesting questions that can be asked here: “What is logic? What is the relationship (if there is one) between logic and reality? What are these ‘Laws of Logic’ and why are they special?” However, let’s stick with the question of what logic itself is. This can be phrased a number of ways:

  • The study of valid argument forms
  • The study of the correct principles of reasoning or inference
  • The study of the logical consequence relationship
  • The study of what follows from some set of truths, and why it follows

And so on. One fundamental reason we want the correct account of reasoning is that we want to be able to determine what else is true, given some set of known truths. And if a party makes an invalid inference, we want to be able to point out that there is a gap between what’s asserted & what is concluded.

But there’s an interesting quality to the discourse about logic. When speaking about whether some piece of reasoning is valid, the remarks I mentioned at the beginning seem to conceptualize logic as some concrete, unchanging thing. They treat logic and logical rules as something handed down, rather than as a topic that has changed over time. And this is flatly untrue.

Without getting into the discussion of Non-Classical Logics, logic has changed over time. For roughly 2,300 years, Aristotelian Logic was the dominant logic in Western Philosophy. However, around the end of the 19th century, logicians began to realize that Aristotelian logic was unable to account for the inferences being made in mathematics at the time. In order to make logical sense of the reasoning mathematicians were engaging in, the systems of logic we now call “Classical Logic” were born. Logicians like Frege intended for this new logic to form the foundation of mathematics (a program known as “Logicism”). This project, however, ended in failure: Bertrand Russell’s paradox undermined Frege’s system, and Kurt Gödel’s incompleteness theorems later dealt the broader program a decisive blow.

But there is something important to note here: logic changed. And it didn’t change by the pure light of natural reason or some intuition about a priori truths. Rather, logicians had data which they needed to account for – the reasoning mathematicians were engaging in at the time. The new logic had a different logical consequence relation than the old one. Some argument forms which were valid in Aristotelian Logic were no longer valid, & some previously invalid argument forms became valid. Take the following syllogism:

  • All Bs are Cs
  • All Bs are As
  • Therefore some As are Cs

Aristotelian logic considers this a valid argument, but it is invalid when translated into Classical Logic: the universal premises can be true while nothing is a B, in which case the existential conclusion fails. What I aim to get at is fairly simple. The picture of logic as an inscrutable, unquestionable entity is blatantly ahistorical. Logical systems are theories about what follows from what, and why it follows. Just as other fields construct theories to account for the relevant data, so too are logics constructed to ascertain what the norms of correct reasoning are. Unsurprisingly, there are many debates about the respective virtues of logical systems and the problems they purport to solve.
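The classical verdict on this syllogism can actually be checked mechanically. The sketch below is my own illustration (not from Priest’s talk): it brute-forces every interpretation of the predicates A, B, C over a one-element domain, using the classical reading on which “All Bs are Cs” is vacuously true when nothing is a B.

```python
from itertools import product

# Classical reading: "All Xs are Ys" is vacuously true when X is empty.
def all_are(X, Y, domain):
    return all((d not in X) or (d in Y) for d in domain)

# Classical reading of "Some Xs are Ys": something is both an X and a Y.
def some_are(X, Y, domain):
    return any((d in X) and (d in Y) for d in domain)

domain = {0}               # a one-element domain already suffices
extensions = [set(), {0}]  # every possible extension of a predicate

countermodels = []
for A, B, C in product(extensions, repeat=3):
    premises = all_are(B, C, domain) and all_are(B, A, domain)
    conclusion = some_are(A, C, domain)
    if premises and not conclusion:
        countermodels.append((A, B, C))

# Every countermodel makes B empty: the premises then hold vacuously,
# yet nothing is both an A and a C.
print(countermodels)
```

In each countermodel found, B is the empty set, which is exactly the existential-import issue: Aristotelian logic assumes the terms of a syllogism are non-empty, while Classical Logic does not.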

But what exactly should qualify as correct reasoning is a complex topic. Is there only one correct system of reasoning, one true logic (logical monism)? Or perhaps different logics are apt to different domains, so that there is no one true logic (logical pluralism)? How do we decide between logical systems in the first place? And in doing so, must we privilege a particular logic?

Irrespective of your view on these & other related philosophical problems, you must be able to account for the historical facts about how logic has developed. Otherwise you seem to be giving a baseless “just so” story, and this makes it difficult to take your view as anything other than dogma.

If you’re interested in why exactly Classical Logic superseded Aristotelian Logic, I’ve uploaded an edited part of a talk Graham Priest once gave. It outlines the history & reasons behind the shift from Aristotelian Logic to Classical Logic. I hope this is helpful!

Link: https://www.youtube.com/watch?v=f3gMR0qVjRc


Priest, Graham. Doubt Truth to Be a Liar. Oxford: Clarendon, 2006. Print.

Two Types of Moral Skepticism

Philosophical skepticism comes in many varieties. The skeptic can be a real challenger or a fictitious construct, created as a methodological tool in epistemology. Usually, the main similarity between all forms of skepticism is that they concern the epistemic realm; in meta-ethics, however, two kinds of skeptical challenge can be raised. Unlike in other areas of philosophical inquiry, one can admit that we have moral knowledge, or that moral knowledge is possible, while still remaining a moral skeptic in a completely different sense. Moral skepticism comes in a practical variety as well, which is best characterized by the question, “Why be moral?”

Call the two kinds epistemic moral skepticism and practical moral skepticism. The former is analogous to traditional skepticism, which targets the epistemic realm of justification and knowledge, whereas the latter targets reasons for action. The practical moral skeptic questions why moral reasons ought to move us at all. The epistemic moral skeptic raises well-known structural challenges, such as the regress problem, along with challenges concerning how we distinguish true from false representations or impressions, and challenges stemming from (allegedly) possible skeptical scenarios like the evil demon and brains in vats. Typically, the antiskeptic can appropriate strategies used against more general kinds of skeptics in epistemology. However, some people in meta-ethics think ethics faces distinct (epistemic) skeptical challenges not found elsewhere, such as concerns arising from (in principle) intractable moral disagreement.

A good way of representing the practical skeptic is as the amoralist who is unmoved by ethical concerns. What could we give the amoralist in terms of reasons that would convince him to be moral? The amoralist will reject moral reasons, so one typical way of meeting the challenge is by providing selfish reasons to be moral, such as rational self-interest over long-term interpersonal interactions. However, there are instances where we would want the amoralist to act morally even though there aren’t sufficient selfish reasons to do so, which is made salient by the Ring of Gyges from Plato’s Republic. Imagine somebody had a ring that could make him undetectable when performing actions. Modify the situation so that that somebody is an amoralist, and then ask what reasons we could provide to convince him to act morally while wearing the ring.

Another strategy for countering the amoralist is a position called internalism. I have explained the varieties of internalism in a previous post, so I’ll briefly outline the relevant elements here rather than rehashing entire positions. If one takes the position that recognizing moral facts necessarily provides the recognizer with reasons for action, then the amoralist (assuming he has moral knowledge) will be impossible. Another position is internalism about moral judgment, which says that anybody who makes a sincere moral judgment necessarily is (at least partially) motivated to act morally, which means that an amoralist who makes sincere moral judgments is impossible. If an amoralist cannot make sincere moral judgments, then he is in some way deficient, and not a suitable example for raising challenges concerning practical moral skepticism. The amoralist will be so unlike normal people that he won’t be capable of using moral concepts, which means that he does not raise genuine concerns about moral reasons or motivation. To be a challenge, he would have to employ the same moral concepts we do, and competently so. It would be like raising a challenge to the claim that pain provides reasons for action by producing a thought experiment concerning a being that cannot feel pain.

An externalist, on the other hand, can admit that genuine amoralists are possible, and not deficient in any relevant way. The externalist will simply say that normal humans operate under psychological laws that reliably link up recognition of moral facts with motivation to act morally. The amoralist will be a rational actor who happens not to operate under such psychological laws. If the externalist is also a moral realist, then the amoralist will be said to be both rational and morally reprehensible (if he acts immorally). Rationality and morality are not as tightly connected on most externalist theories as they are on many internalist theories. Internalism tends to be an element of moral rationalism, which takes moral rationality to be a species of practical rationality. Moral rationalism entails that amoralists are practically irrational in some way (which means rational amoralists are impossible). Externalists tend to take a Humean theory of rationality, which means that one is practically rational just if one’s actions align with one’s desires. So, according to the Humean, the amoralist merely has a different desire-set than normal people, which means that the amoralist acts rationally by not being moved by moral concerns, whereas normal people operating under normal psychological laws would be irrational if they weren’t moved by moral concerns.

Practical moral skepticism is a unique form of skepticism, as it concerns action rather than belief. The challenge could be extended to standards of rationality in general, insofar as they concern rational action rather than (just) belief. However, that is a topic for a different occasion.

Error Theory, Queerness, and Non-Negotiability

Attached is a PDF of a paper I wrote last fall. The paper outlines the various forms that J.L. Mackie’s argument from queerness can take, and why I believe that the proponent of those arguments must do more work to discharge his burden of proof than has been done so far.

Link: https://docs.google.com/spreadsheets/d/1M-FYtfT0K7wEjJkwsRLkNfAgjIZu7RX5qM9-0Hv6X6I/pubhtml

An Argument for the Principle of Sufficient Reason

Michael Della Rocca, in his paper “PSR,” defines the Principle of Sufficient Reason (the PSR) as the principle that each fact has an explanation (1): for every object or state of affairs, a reason can be given for its existence. Della Rocca argues that the PSR is widely rejected by philosophers because it has not been adequately argued for, and notes that there have been relentless attacks on the PSR over the last 270 years (1-2). Not only that, but philosophers such as David Hume and Immanuel Kant have even constructed entire philosophical systems around the assumption that the PSR is false. To make matters worse, according to Della Rocca, there appear to be adequate reasons to give up on the PSR given evidence from contemporary physics (2). Despite acknowledging these reasons, Della Rocca presses on.

Before giving his argument for the PSR, Della Rocca presents a few cases of so-called explicability arguments. An explicability argument is, for Della Rocca, “such an argument, [where] a certain state of affairs is said not to obtain simply because its obtaining would be inexplicable, a so-called brute fact” (2). The first example Della Rocca uses comes from Gottfried Leibniz: “[Archimedes] takes it for granted that if there is a balance in which everything is alike on both sides, and if equal weights are hung on the two ends of that balance, the whole will be at rest. That is because no reason can be given why one side should weigh down rather than the other” (321). Notice that Leibniz does not even entertain the possibility that this fact is simply inexplicable, though that would otherwise be an available position. The point of this example is to illuminate the legitimacy of explicability arguments in at least some cases. If Della Rocca can get the reader to accept explicability arguments generally, then he has forced the reader to accept the PSR itself. This is because, as Della Rocca defined earlier, the PSR is the claim that each fact has an explanation – the rejection of inexplicability generally. To accept the PSR, under the definition given here, is to reject brute facts.

It seems plausible that the above example points to an instance in which an explicability argument works, but one can accept that argument without being committed to explicability arguments generally. Della Rocca therefore offers a second seemingly plausible explicability argument, this one concerning what he calls brute dispositions (2). He presents the example as follows:

“Imagine two objects that are in the same world and that are categorically exactly alike. They each have (qualitatively) the same molecular structure and have all the same categorical physical features. If one of these objects has the disposition to dissolve in water, could the other one fail to have that disposition? It would seem not: given their exact categorical similarity, nothing could ground this dispositional difference between the two objects, and so we reject the scenario in which there is such a difference” (3).

This is another instance in which an explicability argument seems justified. The argument seems to work because nothing could explain why one object would dissolve while its qualitatively identical counterpart fails to.

Once again, the reader is still not forced to accept the PSR. Della Rocca offers a number of other examples of explicability arguments, but they are not necessary for understanding the argument as a whole. The goal of Della Rocca’s examples is to show instances in which explicability arguments are successful. The point is that philosophers often want to appeal to explicability arguments, whether regarding consciousness, the rejection of Aristotelian explanations, the defense of induction, causation, modality, and so on. All of these cases seem to involve explicability arguments. The instances in which such arguments succeed not only lend intuitive support to the PSR; they also make it more difficult to draw a non-arbitrary line between when explicability arguments are acceptable and when they are not.

The final case Della Rocca considers is that of existence. While the previous cases do not commit one to the full-blown PSR, the case of existence does entail the PSR. Della Rocca believes there is no non-question-begging, non-arbitrary way of rejecting the PSR in the case of existence. In this way, an explicability argument in the case of existence amounts to an argument for the PSR. Just as the previous examples invited explicability arguments for various phenomena, the same demand can be made of existence itself. Della Rocca puts the point as follows:

“…the explicability argument in the case of existence differs from the previous ones in one crucial respect: while the other explicability arguments do not by themselves commit one to the full-blown PSR, the explicability argument concerning existence does, for to insist that there be an explanation for the existence of each existing thing is simply to insist on the PSR itself, as I stated it at the outset of this paper. So the explicability argument concerning existence, unlike the other explicability arguments, is an argument for the PSR itself, and it is our willingness to accept explicability arguments in other, similar cases that puts pressure on us to accept the explicability argument in the case of existence, i.e., puts pressure on us to accept the PSR itself” (6-7).

The above passage is Della Rocca’s major argument for the PSR. Appealing to an explicability argument in the case of existence is to assert the PSR itself because the PSR is the claim that everything has an explanation. That is, for each thing that exists there can be given a reason for its existence. Given the above argument, Della Rocca considers three options that the denier of the PSR could take:

1. One can say that some of the explicability arguments are legitimate and some—in particular, the explicability argument concerning existence—are not.
2. One can say that none of the explicability arguments is legitimate.
3. One can say that all of the explicability arguments, including the explicability argument concerning existence, are legitimate (7).

None of the above options ends up being appealing to the denier of the PSR. The denier cannot take option three, because to accept explicability arguments, including in the case of existence, is to accept the PSR itself. Della Rocca offers a sophisticated response to the second option, but it essentially comes down to this: the entire practice of philosophical and scientific inquiry seems to depend on explicability arguments – on denying brute facts and demanding explanations. The examples of explicability arguments in his paper are all instances in which philosophers want to make such appeals, and many philosophical arguments just are explanations. There is nothing logically wrong with taking the second option, but it prevents one from appealing to explicability arguments in the cases of Archimedes’ balance, consciousness, personal identity, mechanistic explanation, induction, causation, modality, and so on. The second option does not come without considerable cost.

The first option is likely the most appealing to the denier of the PSR. If one wants to draw a line between legitimate and illegitimate explicability arguments, then, for Della Rocca, one must draw a principled and non-arbitrary line (7). If the denier of the PSR draws an arbitrary line instead, then the denier begs the question against the PSR, because to appeal to an arbitrary line is to appeal to inexplicability, and that is to assume the PSR is false (8).

Works Cited
Leibniz, Gottfried Wilhelm, Roger Ariew, and Daniel Garber. Philosophical Essays. Indianapolis: Hackett Pub., 1989. Print.
Della Rocca, Michael. “PSR.” Philosophers’ Imprint, July 2010. Web. 7 Mar. 2016.

Why Verificationism Is Not Self-Refuting

In the early-to-mid twentieth century, a philosophical movement stemming from Austria aimed to do away with metaphysics. The movement has come to be called Logical Positivism or Logical Empiricism, and it is widely seen as a discredited research program in philosophy (among other fields). One often-repeated reason Logical Empiricism is untenable is that the criterion the positivists employed to demarcate the meaningful from the meaningless is, when applied to itself, meaningless, and therefore self-refuting. In this post, I aim to show that the positivists’ criterion does not result in self-refutation.

Doing away with metaphysics is a rather ambiguous aim. One can take it to mean that we ought to rid universities of metaphysicians, encourage people to cease writing and publishing books and papers on the topic, and adjust our natural language such that it does not commit us to metaphysical claims. Another method of doing away with metaphysics is by discrediting it as an area of study. Logical Positivists saw the former interpretation of their aim as an eventual outgrowth of the latter interpretation. The positivists generally took their immediate goal to be discrediting metaphysics as a field of study, and probably hoped that the latter goal of removing metaphysics from the academy would follow.

Discrediting metaphysics can be a difficult task. The positivists’ strategy was to target the language used in expressing metaphysical theses. If the language that metaphysicians employed was only apparently meaningful, but underneath the surface it was cognitively meaningless, then the language of metaphysics would consist of meaningless utterances. Cognitive meaning consists of a statement being truth-apt, or having truth conditions. If a statement isn’t truth-apt, then it is cognitively meaningless, but it can serve other linguistic functions besides assertion (e.g. ordering somebody to do something isn’t truth-apt, but it has a linguistic function).

If metaphysics is a discourse that purports to be in the business of assertion, yet it consists entirely of cognitively meaningless statements, then it is a failure as a field of study. But how did the positivists aim to demonstrate that metaphysics is a cognitively meaningless enterprise? The answer is by providing a criterion to demarcate cognitively meaningful statements from cognitively meaningless statements.

The positivists were enamored with Hume’s fork, which is the distinction between relations of ideas and matters of fact, or, in Kant’s terminology, the analytic and the synthetic. The distinction was applied to all cognitively meaningful statements. So, for any cognitively meaningful statement, it is necessarily the case that it is either analytic or synthetic (but not both). Analytic statements, for the positivists, were not about extra-linguistic reality, but instead were about concepts and definitions (and maybe rules). Any claim about extra-linguistic reality was synthetic, and any synthetic claim was about extra-linguistic reality.

Synthetic statements were taken to be cognitively meaningful just if they could be empirically confirmed. The only other cognitively meaningful statements for the positivists were analytic statements and contradictions. This is an informal statement of the verificationist criterion for meaningfulness. Verificationism was the way that the positivists discredited metaphysics as a cognitively meaningless discipline. If metaphysics consisted of synthetic statements that could not be empirically confirmed (e.g. the nature of possible worlds), then metaphysics consisted of cognitively meaningless statements. In short, the positivists took a non-cognitivist interpretation of the language used in metaphysics.

Conventional wisdom says that verificationism, when applied to itself, results in self-refutation, which means the positivists’ project is an utter failure. But why does it result in self-refutation? The thought is this: the criterion is either analytic or synthetic, and it doesn’t appear to be analytic, so it must be synthetic. But if the verificationist criterion is synthetic, then it must be empirically confirmable. Unfortunately, verificationism is not empirically confirmable, so it is cognitively meaningless. Verificationism, then, is in the same boat as metaphysics.

Fortunately for the positivists, the argument above fails. First, there are ways to interpret verificationism on which it is subject to empirical confirmation. Verificationism could express a thesis that aims to capture or explicate the ordinary concept of meaning (Surovell 2013). If it aims to capture the ordinary concept of meaning, then it could be confirmed by studying how users of the concept MEANING actually employ it in discourse. If such concept users employ the concept in the way the verificationist criterion says, then the criterion is confirmed. So, on that understanding, verificationism is cognitively meaningful. If verificationism instead aims to explicate the ordinary concept of meaning, then it is allowed more leeway to deviate from standard usage of the ordinary concept in light of its advantages within a comprehensive theory (Surovell 2013). Verificationism construed as an explication of the ordinary concept of meaning would then be subject to empirical confirmation insofar as the overall theory it contributes to is confirmed.

Secondly, if one takes the position traditionally attributed to Carnap, then one can say that the verificationist criterion is not internal to a language, but external. It is a recommendation to use language in a particular way that admits of only empirically confirmable, analytic, and contradictory statements. Recommendations are not truth-apt, yet they serve important linguistic functions. So, verificationism may be construed non-cognitively, as a recommendation motivated by pragmatic reasons. There’s nothing self-refuting about that.

Lastly, one could take verificationism to be internal to a language, in Carnap’s sense, and analytic. However, the criterion would not aim to capture the ordinary notion of meaning; instead, it would be a replacement for that notion. Carnap appears to endorse this way of construing verificationism in the following passage:

“It would be advisable to avoid the terms ‘meaningful’ and ‘meaningless’ in this and in similar discussions . . . and to replace them with an expression of the form “a . . . sentence of L”; expressions of this form will then refer to a specified language and will contain at the place ‘. . .’ an adjective which indicates the methodological character of the sentence, e.g. whether or not that sentence (and its negation) is verifiable or completely or incompletely confirmable or completely or incompletely testable and the like, according to what is intended by ‘meaningful’” (Carnap 1936).

Rather than documenting the way ordinary users of language deploy the concept MEANING, Carnap appears to be proposing a replacement for the ordinary concept of meaning. The statement of verificationism is internal to the language in which expressions of meaning are replaced with “a . . . sentence of L” where ‘. . .’ is an adjective that indicates whether or not the sentence is verifiable, and thus is analytic in that language. The motivation for adopting verificationism thus construed would then be dependent on the theoretical and pragmatic advantages of using that language.

So, verificationism can be construed as synthetic, analytic, or cognitively meaningless. It could be a recommendation to use language in a certain way, motivated by pragmatic (or other) reasons, which makes it cognitively meaningless but linguistically useful, and hence not self-refuting. Or it could be a conventional definition that aims to capture or explicate the ordinary concept of meaning; it would then be verifiable, either by an empirical investigation into how people use the ordinary notion of meaning or through its overall theoretical merits. Lastly, it could be internal to a language, and thus analytic, but not an attempt at capturing the ordinary notion of meaning; instead, it would be a replacement serving a particular function within a particular language that is itself chosen for pragmatic (non-cognitive) reasons. On any of these construals, verificationism is not self-refuting.

Works Cited:

Carnap, Rudolf. “Testability and Meaning – Continued.” Philosophy of Science, Jan. 1936.

Surovell, Jonathan. “Carnap’s Response to the Charge that Verificationism is Self-Undermining.” March 2013.


Is Determinism Self-Refuting?

Determinism is usually considered to be the claim that every event is necessitated by the laws of nature (or whatever) in conjunction with the causal history leading up to that event. Many people consider whether or not determinism is true to be an open question. Some folks think it’s a contingent thesis, obtaining in some but not all possible worlds, while others take it to be necessarily true or false, like many positions in metaphysics. In this post, the modal status of determinism will be briefly touched upon at the end. First, what I want to know about determinism is whether or not it is false. Here’s an argument concluding that it is:

  1. We should refrain from believing falsehoods about determinism.
  2. Whatever should be done, can be done (Ought Implies Can).
  3. If determinism is true, then whatever can be done, is done.
  4. At least one person believes that determinism is false.
  5. We can refrain from believing falsehoods about determinism. (1, 2)
  6. If determinism is true, then we refrain from believing falsehoods about determinism. (3,5)
  7. If determinism is true then it is true that determinism is false. (6,4)
  8. It’s true that determinism is false. (7)

The argument isn’t very straightforward, so let’s walk through it. Premise two is a formulation of “ought implies can,” the principle that if S ought to bring about some state of affairs P, then S can bring about P. Premise three is a consequence of determinism: if determinism is true, then everything you can do, you do at some point in your causal history. In effect, it denies that we have alternative possibilities available to us when deciding how to act. Premise four is the claim that at least one person believes that determinism is false. It is an empirical claim in need of empirical justification, which is readily available in the form of books, papers, and talks from philosophers who accept indeterminism.

From the first two premises it follows that we can refrain from believing falsehoods about determinism: “ought implies can” plus the epistemic norm expressed by premise one jointly entail that we can do what we ought to do, and here that is refraining from false beliefs about determinism. From three and five it follows that we (or at least somebody) do refrain from believing falsehoods about determinism, since we can so refrain, and determinism entails that what we can do, we actually do. From six and four it follows that, given the truth of determinism, at least one person’s belief that determinism is false turns out to be true: that person ought to refrain from believing falsehoods about determinism, so that person can refrain, so that person actually does refrain. The transition is from ought to refrain, to can refrain, to actually does refrain, where the first step is due to “ought implies can” and the second is due to determinism. Finally, the conclusion that determinism is false follows from seven, since determinism implies its own falsehood given the other premises.
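The propositional skeleton of the argument can be checked by brute force over truth assignments. The encoding below is my own simplification: it compresses the argument into four sentence letters and makes explicit a “bridge” premise (if everyone refrains from false beliefs about determinism and somebody believes determinism is false, then determinism is false) that the prose derivation of (7) from (6) and (4) uses implicitly.

```python
from itertools import product

def implies(p, q):
    """Material conditional."""
    return (not p) or q

# D: determinism is true
# C: we can refrain from believing falsehoods about determinism
# R: we do refrain from believing falsehoods about determinism
# B: at least one person believes determinism is false

entailed = True
for D, C, R, B in product([True, False], repeat=4):
    premises = (
        C                                # from (1) + (2): ought implies can
        and implies(D, implies(C, R))    # (3): under determinism, can implies does
        and B                            # (4): somebody believes determinism is false
        and implies(R and B, not D)      # bridge: no false beliefs about D + a belief
                                         # that not-D means not-D is true
    )
    if premises and D:                   # a model where the premises hold but D is true?
        entailed = False

print(entailed)  # True: every assignment satisfying the premises makes D false
```

Every truth assignment satisfying the premises makes D false, so the skeleton is classically valid; any resistance to the argument must therefore target a premise (or the bridge), not the inference.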

This is quite a fishy argument in my opinion. I’m utterly unconvinced that it is sound, yet I cannot find anything blatantly wrong with it. The argument most likely fails somewhere, but I cannot think of anywhere in particular that sticks out as being clearly implausible/false. If I had to take a stab at critiquing the argument, I would question the application of “ought implies can” to belief formation. The application of that principle to doxastic practices assumes that belief formation is a subset of human ability. Perhaps belief formation is not within the realm of our control, so it does not qualify as an ability.

Another issue concerns formulating determinism as the thesis that whatever a person can do, that person actually does. Perhaps the “can” in this formulation is implicitly indexed to our actual world, making it a redundant formulation of determinism, and a broader interpretation of “can” involving nearby possible worlds is in order. I’m thinking of something to this effect: a person can do X even if she does not do X in the actual world, so long as she does X in some nearby possible world in which she is determined to do X. However, I’m unsure how this formulation of determinism avoids the argument above, since it loads different background conditions into the analysis of what a person can do at a particular world. Since there are different background conditions, “ought implies can” wouldn’t apply to the actual world: given S’s background history in the actual world, S ought to P, but S can only P in a nearby possible world with a different causal history and set of epistemic responsibilities, thus making “ought implies can” as formulated by the argument irrelevant to this discussion.

One last way to possibly resist the conclusion is to deny that ought necessarily implies can. Perhaps there are impossible demands that our norms place on us. If moral obligations/norms are not always such that we can fulfill them, then perhaps the same holds for epistemic obligations/norms.

A last question that can be asked is about determinism’s modal status, given this argument’s soundness. Some may think that if the principles conjoined with determinism are themselves necessarily true, and they entail that determinism is false, then determinism is necessarily false. But this argument proves no such thing. There is a contingent premise in the argument: there are (probably) some possible worlds where nobody believes that determinism is false, and in those worlds the argument cannot even be run. So the argument’s soundness in the actual world would establish, at most, that determinism is false in worlds where the contingent premise holds, not that determinism is necessarily false.
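Schematically (the symbols are mine): let N be the conjunction of the allegedly necessary principles, B the contingent premise that someone believes determinism is false, and D determinism. Even granting the necessity of N, the most we get is:

```latex
\Box\big((N \wedge B) \rightarrow \neg D\big) \;\wedge\; \Box N
\;\;\Rightarrow\;\; \Box\,(B \rightarrow \neg D)
```

Since B is contingent, the necessitated conclusion □¬D cannot be detached; ¬D follows only in those worlds where B holds.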

What I find interesting about the argument is that there is no obvious, knockdown refutation at hand (as far as I can tell). All of the ways around the argument involve substantive philosophical commitments that require independent argument that will itself probably carry further substantive philosophical commitments. So, the argument is a bit of a challenge to those who entertain the possibility of determinism being true (as I do).

The argument can be found here.


Fallibilism and The Gettier Problem

The Gettier Problem is one of the most well known cases of an alleged refutation by counterexample in philosophy. Most philosophers probably believe that the counterexample successfully refutes the JTB analysis of the concept KNOWLEDGE. (My convention for symbolizing concepts in text is to put the concept being mentioned in all caps. Tokens of words are in quotes, and propositions are flanked by less-than and greater-than symbols (< and > respectively).) JTB analyses of knowledge cash the concept KNOWLEDGE out in terms of having a belief that is justified and true. One formulation of the problem goes like this:

“Let us suppose that Smith has strong evidence for the following proposition: (f) Jones owns a Ford. Smith’s evidence might be that Jones has at all times in the past within Smith’s memory owned a car, and always a Ford, and that Jones has just offered Smith a ride while driving a Ford. Let us imagine, now, that Smith has another friend, Brown, of whose whereabouts he is totally ignorant. Smith selects three place names quite at random and constructs the following three propositions: (g) Either Jones owns a Ford, or Brown is in Boston. (h) Either Jones owns a Ford, or Brown is in Barcelona. (i) Either Jones owns a Ford, or Brown is in Brest-Litovsk. Each of these propositions is entailed by (f). Imagine that Smith realizes the entailment of each of these propositions he has constructed by (f), and proceeds to accept (g), (h), and (i) on the basis of (f). Smith has correctly inferred (g), (h), and (i) from a proposition for which he has strong evidence. Smith is therefore completely justified in believing each of these three propositions. Smith, of course, has no idea where Brown is. But imagine now that two further conditions hold. First, Jones does not own a Ford, but is at present driving a rented car. And secondly, by the sheerest coincidence, and entirely unknown to Smith, the place mentioned in proposition (h) happens really to be the place where Brown is. If these two conditions hold, then Smith does not KNOW that (h) is true, even though (i) (h) is true, (ii) Smith does believe that (h) is true, and (iii) Smith is justified in believing that (h) is true. These two examples show that definition (a) does not state a sufficient condition for someone’s knowing a given proposition. The same cases, with appropriate changes, will suffice to show that neither definition (b) nor definition (c) do so either” (Gettier 1963).

That is possibly the most famous version of Gettier’s counterexample, directly from the horse’s mouth. What I take to be the lesson of the counterexample is that a belief must be non-accidentally true as well as justified to be a case of knowledge. One possible way to deal with the problem that internalists should adopt (and maybe do) is to give an account of justification that makes one’s true belief such that it is non-accidental enough to constitute a case of knowledge. Setting that aside, I want to explore the intersection between fallibilism about knowledge and the Gettier Problem.
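The JTB analysis and Gettier’s recipe can be put schematically (the notation is mine):

```latex
% The JTB analysis of knowledge:
K(S,p) \;\leftrightarrow\; p \;\wedge\; B(S,p) \;\wedge\; J(S,p)
% Gettier's recipe: Smith is justified in believing f and infers the
% disjunction h = (f \vee q) by disjunction introduction. In fact f is
% false and q happens to be true, so h is true, believed, and justified,
% yet intuitively not known. Hence the right-to-left direction fails:
p \;\wedge\; B(S,p) \;\wedge\; J(S,p) \;\not\Rightarrow\; K(S,p)
```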

Fallibilism about knowledge is the thesis that for any particular case of a person knowing that P, that person could have had the same grounds for justifiably believing that P (or being justified in believing that P) and yet not know that P because P is false. Setting aside formulation problems related to knowledge of necessary truths, fallibilists target what it is in virtue of which our beliefs are justified, and claim that those same things that justify our beliefs could have been the case, while our beliefs were false. What this means is that the standards for justification do not ensure true belief. Following our justification norms does not guarantee that the belief being justified is true. Obviously an externalist fallibilist will object to talk of following justification norms, as following norms involves some attention to a rule, and that carries unacceptably internalist commitments. The externalist will probably want to appeal to some disconnect between the way in which we form warranted/justified/virtuously-formed beliefs and the truth of those beliefs such that sometimes the way we form them allows for warranted/justified/virtuously-formed false beliefs. What I have to say is not affected by externalist/internalist complications.

The intersection between fallibilism and the Gettier Problem is apparent when one conceives of the Gettier Problem as a demand for an analysis of KNOWLEDGE to include a non-accidentality clause, or for non-accidentality to be built into the epistemic component of the analysis. What’s required is for a belief to be non-accidentally true; a belief must be such that it wasn’t by sheer dumb luck or by accident that it is true.

Fallibilism appears to introduce a level of accidentality into KNOWLEDGE: it allows for a belief to be maximally justified but still false, which seems to disconnect the truth of that belief from its justification in such a way that it is an accident that the two coincide in particular possible worlds. If the maximal degree of justification for a belief does not guarantee that the belief is true, then KNOWLEDGE requires a level of luck to obtain.

One can now see the connection between Gettier Problems and fallibilism about knowledge. Both positions involve accidentality in some sense: the former is anti-accidentality, and the latter allows for accidentality. The way to decisively solve the Gettier Problem becomes clear in light of this connection: drop fallibilism and adopt infallibilism. If one’s standard of justification were such that meeting it ensured the truth of the belief, and the way it ensured that truth was clearly non-accidental, then the Gettier Problem does not get off the ground.

A good example of justification ensuring truth is a case where the distinction between appearance and reality breaks down, as with pain. Being in pain seems to be a state that does not allow one not to be aware of being in pain. To be in pain is to be aware of a sensation on a spectrum from unpleasant to torturous. Being in pain is a fantastic candidate for a state that one cannot be mistaken about. Obviously one could produce counterexamples to the pain case, such as being sneakily touched with an ice cube somewhere sensitive and not being sure whether one is feeling pain or extreme cold. If one is persuaded by such counterexamples, remember that the pain example is not essential to my point. Another example that may be more comfortable to some is knowing that at least something exists, be it an occurring thought or a thinker thinking that occurring thought. The point is that these cases seem to show that there are instances where a belief’s being true is linked to its justification such that it isn’t an accident that it’s true (in these cases it’s a direct acquaintance link).

What I’ve set out to show is that fallibilism invites Gettier Problems in a way that infallibilism does not. Infallibilism does not seem to allow for enough room between justification and truth for the two to break apart such that there is justified accidentally true belief. Fallibilism, then, is more susceptible to the Gettier Problem than infallibilism is, which is a point in favor of infallibilism given the plausibility of the JTB analysis of KNOWLEDGE.


Works Cited:

Gettier, E. L. “Is Justified True Belief Knowledge?” Analysis 23.6 (1963): 121–23.


A More Interesting Open Question Argument

The orthodox interpretation of G.E. Moore’s Open Question Argument (OQA) has it that Moore set out to show that no satisfactory definition of goodness (or other evaluative/deontic terms) could be given. To define goodness in terms of some other property was to commit the naturalistic fallacy. The reason such definitions are impossible is that any competent user of the concepts being employed in the definition can sensibly ask whether such and such is good. One example employed by Moore, from Bertrand Russell, is that goodness is that which we desire to desire. But any competent user of the concepts “desire” and “goodness” can sensibly ask whether what we desire to desire is good, whereas we cannot sensibly ask whether what we desire to desire is what we desire to desire. So, “goodness” does not mean “that which we desire to desire”. The same argument can be run in terms of pleasure, or what God wills (or anything).

What one should notice about this version of the argument is that it’s about meanings of words. The argument does not entail anything about property identities, which should provide reductionists about moral ontology some relief. A hedonist could claim that she isn’t in the business of giving analytic identity claims, but rather goodness being the same thing as pleasure is a synthetic identity claim. Synthetic identity claims carry no commitment to synonymy between the terms denoting the things that are identical. So, water is identical to H2O, but “water” doesn’t mean the same thing as “H2O”. It should be noted, however, that the proponent of the OQA explained above would take this as a concession, because the synthetic reductionist is granting her conclusion.

A more interesting kind of OQA can be formulated with Leibniz’s law. Roughly, Leibniz’s law says that it is necessarily true that for any A and any B, A is identical to B iff every property that A has, B also has, and every property that B has, A also has. It seems like a relatively uncontroversial principle, until we get into the quantum domain, which doesn’t concern us here. So, bracketing any quantum concerns, we can formulate the new OQA using this principle. Let’s take goodness and happiness. It seems like we could sensibly doubt whether happiness is good, but we cannot sensibly doubt whether goodness is good. So, happiness has a property that goodness lacks, which is that happiness is such that we can sensibly doubt whether it is good. By Leibniz’s law, goodness and happiness are not identical, because happiness has a property that goodness lacks. What’s interesting about this argument is it gets you a metaphysical conclusion, unlike the OQA discussed before, which gets you a semantic conclusion. The OQA employing Leibniz’s law is more worrisome for synthetic reductionists, as it has to do with properties rather than meanings.
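Laid out semi-formally (the regimentation and the predicate D are mine), with D(x) read as “it can sensibly be doubted whether x is good”:

```latex
% Leibniz's law:
\Box\,\forall A\,\forall B\,\big(A = B \;\leftrightarrow\; \forall F\,(F(A) \leftrightarrow F(B))\big)
% Premises about doubtability:
D(\mathrm{happiness}) \qquad \neg D(\mathrm{goodness})
% Happiness has a property that goodness lacks, so by Leibniz's law:
\mathrm{happiness} \neq \mathrm{goodness}
```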

The OQA employing Leibniz’s law is not obviously unsound, and figuring out where it goes wrong is challenging. Personally, I think it goes wrong somewhere, but it isn’t obvious to me where exactly that is.


Further Reading:

Metaethics: A Contemporary Introduction by Mark van Roojen (Chapter 2)

An Argument Against Moral Intuitionism

Moral intuitionism is usually characterized as the thesis that we have non-inferential moral knowledge. Any epistemological theory that posits non-inferential knowledge is a form of foundationalism, so moral intuitionism is a form of foundationalism. The thesis is usually accompanied by a description of the faculty of moral intuition. Sometimes moral intuition is considered a faculty of judgment that produces non-inferentially justified/warranted moral beliefs, some of which are true. Another characterization of moral intuition is as a special faculty of moral perception, analogous to vision (how far that analogy can be pushed depends on who you ask). All of these ways of describing moral intuition have their respective strengths and weaknesses, which is a topic for another time. In this post, I’m going to present an argument against moral intuitionism that does not assume any robust account of the faculty of moral intuition. All that is assumed for the sake of this argument regarding intuitionism is that it is a form of foundationalism; whether it’s internalist or externalist is immaterial to the thrust of the argument.

Now for the argument: There appear to be good reasons to think that our moral beliefs are formed under less than epistemically appropriate conditions. Moral belief formation is supposedly subject to various cognitive biases and emotional influences. These various psychological phenomena, compounded by facts such as massive moral disagreement among seemingly rational people, give us good reason to think that many of our moral beliefs are probably false, or at the very least unjustified/unwarranted.

Assuming that there is good evidence of these cognitive biases and emotional influences coming out of psychology and cognitive science, the moral intuitionist is presented with a dilemma. She is presented with a defeater for her moral beliefs in the form of evidence of the unreliable conditions under which they are formed. Either she can defeat this defeater or she cannot. If she cannot defeat the defeater, then she does not have non-inferential moral knowledge, which means some form of moral skepticism is true. If she attempts to defeat the defeater, then she must provide good reasons to think that her moral beliefs are formed under conditions conducive to their reliability. Let’s say she succeeds at defeating the defeater, and has given good reasons to think her moral beliefs are reliably formed. She has now provided a justificatory basis for her moral beliefs that renders her moral justification inferential. So, she does not have non-inferentially justified/warranted moral beliefs, but rather her moral beliefs are inferentially justified. So, either moral skepticism is the case or moral justification is inferential.

The intuitionist appears to be backed into a corner. However, things aren’t as they seem; she has two ways to resist the dilemma. The first is to challenge the principle that the defeater defeater must come in the form of evidence of the reliability of moral belief formation. Perhaps the defeater defeater could instead be evidence that moral belief formation is not influenced by the cognitive biases and emotional influences mentioned above. Note that such evidence is not evidence for the reliability of moral belief formation, but merely evidence against the case for its unreliability; so the intuitionist using this strategy can defeat the defeater without supplying an inferential justificatory basis for her moral beliefs, and thereby avoids both the skeptical horn and the inferential-justification horn of the dilemma.

The second way to avoid the dilemma is by allowing for epistemically overdetermined beliefs; such beliefs gain justification/warrant from non-inferential and inferential sources. The intuitionist can allow for a defeater defeater that generates inferentially justified/warranted moral beliefs, while also claiming that such a defeater defeater restored non-inferential justification/warrant as well.

One may wonder what part the internalism/externalism distinction plays in this discussion. The argument appears to be neutral on whether justification/warrant/knowledge is internal or external. Even if some sort of reliabilism is true, the alleged evidence from psychology and cognitive science presents a potential defeater for one’s moral beliefs. So, it really doesn’t matter whether one adopts internalism or externalism about moral knowledge.


Further Reading:

For some of the alleged evidence against moral intuitionism and various formulations of the argument presented above, see Walter Sinnott-Armstrong’s book Moral Skepticisms, and his papers, An Empirical Challenge to Moral Intuitionism, Framing Moral Intuitions, and Moral Intuitionism Meets Moral Psychology.

For a more developed response to Sinnott-Armstrong’s argument along the lines of the critiques I explored above, see Moral Intuitionism Defeated? by Nathan Ballantyne and Joshua Thurow.


Explaining Some Terms Used in Normative Ethics

In online and offline discussions about normative ethics, terms like “consequentialism,” “deontology,” “hedonism,” and “utilitarianism” get tossed around with little to no indication of what those words actually mean. In this post, I’m going to briefly discuss some of these terms and do a bit of taxonomy. Hopefully I can clear up more confusion than I create.

A theory in normative ethics has two parts, an axiology and a deontology. An axiology is a theory of value. Axiologies can be monistic or pluralistic. If an axiology is monistic, it identifies value with one property, such as hedonism identifying pleasure (or happiness) as the only intrinsic value. If an axiology is pluralistic, it identifies value with multiple properties, such as a theory that holds both pleasure and friendship as intrinsically valuable. An axiology is relevant to normative ethics when the value being identified is morally relevant. So an axiology that identifies certain aesthetic properties as the only intrinsic values would be of no use to an ethical theory.

A deontology is a theory of rightness or the right. An example of a deontology is consequentialism. Consequentialism takes right actions to be those that produce a particular kind of effect. Combine consequentialism with hedonism and you get a theory that says an action is right iff its effect is pleasurable. In this case, the good informs the right, which means your substantive account of goodness informs what the right course of action is. Another kind of deontology is deontology itself, an unfortunate name that invites terminological confusion. Deontology, in this narrower sense, is a family of theories of rightness that identify permissible and impermissible actions. A typical example: an action is morally permissible iff it does not violate any person’s autonomy. This is another example of the good informing the right, where the good is the autonomy of persons.

A third example of a deontology is social contract theory. Some versions do not employ any substantive conceptions of the good, but rather take permissible action to be that which a group of informed negotiators would agree to allow. Each negotiator could bring their own conception of the good to the table when informing their decisions, but ultimately what’s permissible is a function of their agreement, and not their theories of value.

These notions play out in such a way that combining them produces theories like hedonic act utilitarianism (HAU). HAU combines a hedonic axiology with a consequentialist deontology, and adds a principle of maximization and produces the view that an action is right iff it produces maximal pleasure or contributes to maximizing overall pleasure. Sometimes a theory can avoid an axiology entirely, such as a Kantian view that takes a permissible action to be such that it could be universalized without contradiction. Suffice it to say, these theories tend to get a bit complicated when you look under the hood.
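Hedonic act utilitarianism can be put schematically (the labels Alt, for the agent’s available alternatives, and V, for net hedonic value, are mine):

```latex
\mathrm{Right}(a) \;\leftrightarrow\; \forall a' \in \mathrm{Alt}(a)\;\; V(a) \geq V(a')
```

That is, an action is right iff no available alternative would produce more net pleasure, which is the maximization principle layered on top of the hedonic axiology and the consequentialist theory of the right.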