https://en.wikipedia.org/wiki/Jennifer_Nagel
"The branch of philosophy dedicated to answering them—epistemology—has been active for thousands of years. Some core questions have remained constant throughout this time: How does knowledge relate to truth ? Do senses like vision and hearing supply us with knowledge in the same way as abstract reasoning does ? Do you need to be able to justify a claim in order to count as knowing it ? Other concerns have emerged only recently, in light of new discoveries about humanity, language, and the mind. Is the contrast between knowledge and mere opinion universal to all cultures ? In ordinary language, does the word ‘know’ always stand for the same thing, or does it refer to something heavier in a court of law, and something lighter in a casual bus-stop conversation ? What natural instinctive impressions do we have about what others know, and how much can these impressions tell us about knowledge itself ?"
"It’s tempting to identify knowledge with facts, but not every fact is an item of knowledge. Imagine shaking a sealed cardboard box containing a single coin. As you put the box down, the coin inside the box has landed either heads or tails: let’s say that’s a fact. But as long as no one looks into the box, this fact remains unknown; it is not yet within the realm of knowledge. Nor do facts become knowledge simply by being written down. If you write the sentence ‘The coin has landed heads’ on one slip of paper and ‘The coin has landed tails’ on another, then you will have written down a fact on one of the slips, but you still won’t have gained knowledge of the outcome of the coin toss. Knowledge demands some kind of access to a fact on the part of some living subject. Without a mind to access it, whatever is stored in libraries and databases won’t be knowledge, but just ink marks and electronic traces. In any given case of knowledge, this access may or may not be unique to an individual: the same fact may be known by one person and not by others. Common knowledge might be shared by many people, but there is no knowledge that dangles unattached to any subject. Unlike water or gold, knowledge always belongs to someone.
More precisely, we should say that knowledge always belongs to some individual or group: the knowledge of a group may go beyond the knowledge of its individual members. There are times when a group counts as knowing a fact just because this fact is known to every member of the group (‘The orchestra knows that the concert starts at 8 pm’). But we can also say that the orchestra knows how to play Beethoven’s entire Ninth Symphony, even if individual members know just their own parts. Or we can say that a rogue nation knows how to launch a nuclear missile even if there is no single individual of that nation who knows even half of what is needed to manage the launch. Groups can combine the knowledge of their members in remarkably productive (or destructive) ways."
"Knowledge, in the sense that matters here, is a link between a person and a fact."
"English does have one confusing feature, shared with a number of other languages: our verb ‘know’ has two distinct senses in common use. In the first, it can take either a propositional complement or ‘that’-clause (as in ‘He knows that the car was stolen’) or an embedded question (as in ‘She knows who stole the car’ or ‘She knows when the theft occurred’). In the second, it takes a direct object (‘He knows Barack Obama’; ‘She knows London’). Many other languages use different words for these two meanings (like the French ‘savoir’ and ‘connaître’). In what follows, we will be focusing on the first sense of ‘knows’, the kind of knowledge that links a person with a fact.
This sense of ‘knows’ has an interesting feature: there’s a word for it in all of the world’s 6,000+ human languages (‘think’ shares this feature). This status is surprisingly rare: an educated person has a vocabulary of about 20,000 words, but fewer than 100 of these are thought to have precise translations in every other language. Common words you might expect to find in every language (like ‘eat’ and ‘drink’) don’t always have equivalents. (Some aboriginal languages in Australia and Papua New Guinea get by with a single word meaning ‘ingest’.) Elsewhere, other languages make finer rather than rougher distinctions: many languages lack a single word translating ‘go’, because they use distinct verbs for self-propelled motions like walking and for vehicular motion. Sometimes the lines are just drawn in different places: where the common English pronouns ‘he’ and ‘she’ force a choice of gender, other languages have third-person pronouns that distinguish between present and absent persons, but not between male and female. Human languages have remarkable diversity. But despite this diversity, a few terms appear in all known languages, perhaps because their meanings are crucial to the way language works, or because they express some vital aspect of human experience. These universals include ‘because’, ‘if’, ‘good’, ‘bad’, ‘live’, ‘die’ … and ‘know’."
"What do we ordinarily do with this vital verb, and how is ‘know’ different from the contrasting verb ‘think’ ? Everyday usage provides some clues.
Consider the following two sentences:
Jill knows that her door is locked.
Bill thinks that his door is locked.
We immediately register a difference between Jill and Bill—but what is it?
One factor that comes to mind has to do with the truth of the embedded claim about the door. If Bill just thinks that his door is locked, perhaps this is because Bill’s door is not really locked. Maybe he didn’t turn the key far enough this morning as he was leaving home. Jill’s door, however, must be locked for the sentence about her to be true: you can’t ordinarily say, ‘Jill knows that her door is locked, but her door isn’t locked.’ Knowledge links a subject to a truth. This feature of ‘knowing that’ is called factivity: we can know only facts, or true propositions. ‘To know that’ is not the only factive construction: others include ‘to realize that’, ‘to see that’, ‘to remember that’, ‘to prove that’. You can realize that your lottery ticket has won only if it really has won. One of the special features of ‘know’ is that it is the most general such verb, standing for the deeper state that remembering, realizing, and the rest all have in common."
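Nagel does not use formal notation here, but factivity has a standard expression in epistemic logic; the lines below are a minimal sketch in that notation, with the operators K (‘the subject knows that’) and B (‘the subject believes, or thinks, that’) being the usual textbook symbols rather than anything from the source:

K\varphi \rightarrow \varphi        % the truth axiom (T): whatever is known is true
B\varphi \not\vdash \varphi         % belief carries no such guarantee: thinking that \varphi does not entail \varphi

This is simply the formal counterpart of the contrast drawn in the surrounding passages between the factive ‘know’ and the non-factive ‘think’.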
"Of course, it’s possible to seem to know something that later turns out to be false—but as soon as we recognize the falsity, we have to retract the claim that it was ever known. (‘We thought he knew that, but it turned out he was wrong and didn’t know.’) To complicate matters, it can be hard to tell whether someone knows something or just seems to know it. This doesn’t erase the distinction between knowing and seeming to know. In a market flooded with imitations it can be hard to tell a real diamond from a fake, but the practical difficulty of identifying the genuine article shouldn’t make us think there is no difference out there: real diamonds have a special essence —a special structure of carbon atoms—not shared by lookalikes.
The dedicated link to truth is part of the essence of knowledge. We speak of ‘knowing’ falsehoods when we are speaking in a non-literal way (just as we can use a word like ‘delicious’ sarcastically, describing things that taste awful). Emphasis—in italics or pitch—is one sign of non-literal use. ‘That cabbage soup smells delicious, right?’ ‘I knew I had been picked for the team. But it turned out I wasn’t.’ This use of ‘knows’ has been called the ‘projected’ use: the speaker is projecting herself into a past frame of mind, recalling a moment when it seemed to her that she knew. The emphasis is a clue that the speaker is distancing herself from that frame of mind: she didn’t literally or really know (as our emphatic speaker didn’t really like the soup). The literal use of ‘know’ can’t mix with falsehood in this way."
"By contrast, belief can easily link a subject to a false proposition: it’s perfectly acceptable to say, ‘Bill thinks that his door is locked, but it isn’t.’ The verb ‘think’ is non-factive. (Other non-factive verbs include ‘hope’, ‘suspect’, ‘doubt’, and ‘say’—you can certainly say that your door is locked when it isn’t.) Opinions being non-factive does not mean that opinion is always wrong: when Bill just thinks that his door is locked, he could be right. Perhaps Bill’s somewhat unreliable room-mate Bob occasionally forgets to lock the door. If Bill isn’t entirely sure that his door is locked, then he could think that it is locked, and be right, but fail to know that it is locked. Confidence matters to knowledge.
Knowledge has still further requirements, beyond truth and confidence. Someone who is very confident but for the wrong reasons would also fail to have knowledge. A father whose daughter is charged with a crime might feel utterly certain that she is innocent. But if his confidence has a basis in emotion rather than evidence (suppose he’s deliberately avoiding looking at any facts about the case), then even if he is right that his daughter is innocent, he may not really know that she is. But if a confidently held true belief is not enough for knowledge, what more needs to be added?"
"We’ll assume in what follows that truth is objective, or based in reality and the same for all of us. Most philosophers agree about the objectivity of truth, but there are some rebels who have thought otherwise. The Ancient Greek philosopher Protagoras (5th century BCE) held that knowledge is always of the true, but also that different things could be true for different people. Standing outdoors on a breezy summer day and feeling a bit sick, I could know that the wind is cold, while you know that it is warm. Protagoras didn’t just mean that I know that the wind feels cold to me, while you know that it feels warm to you—the notion that different people have different feelings is something that can be embraced by advocates of the mainstream view according to which truth is the same for everyone. (It could be a plain objective fact that the warm wind feels cold to a sick person.) Protagoras says something more radical: it is true for me that the wind really is cold and true for you that the wind is warm. In fact, Protagoras always understands truth as relative to a subject: some things are true-for-you ; other things are true-for-your-best-friend or true-for-your-worst-enemy, but nothing is simply true.
Protagoras’s relativist theory of knowledge is intriguing, but hard to swallow, and perhaps even self-refuting. If things really are for each person as he sees them, then no one ever makes a mistake. It’s true for the hallucinating desert traveller that there really is an oasis ahead; it’s true for the person who makes an arithmetical error that seven and five add up to eleven. What if it later seems to you that you have made a mistake? If things always are as they appear, then it is true for you that you have made a mistake, even though appearances can never be misleading, so it should have been impossible for you to get things wrong in the first place. This is awkward. One Ancient Greek tactic for handling this problem involved a division of you-at-this-moment from you-a-moment-ago. Things are actually only true for you-right-now, and different things might be true-for-you-later (for example, it might be true-for-your-future-self that your past self made a mistake).
Splintering the self into momentary fragments is arguably a high price to pay for a theory of knowledge. If you find the price too high, and want to hold on to the idea that there is a lasting self, then Protagoras’s theory may start to seem false to you. But if Protagoras’s theory seems false, remember that by the lights of this theory you can’t be mistaken: the theory itself tells you that things always are for you as they seem to you. Now Protagoras’s theory is really in trouble. The self-destructive potential of relativism was remarked upon by Plato (c.428–348 BCE), who also noticed a tension between what Protagoras was trying to do in the general formulation of his theory, and what the theory says about truth being local to the individual. If Protagoras wants his theory of what is true-for-each-person-at-an-instant to capture what is going on with all of us, over time, it’s not clear how he can manage that."
"Could you be dreaming that you are reading this book right now ? If this is a dream, you could be lying barefoot in bed. Or you could be asleep on the commuter train, fully dressed. You may consider it unlikely that you are now dreaming, but you might wonder whether you have any way of establishing conclusively that you are awake, and that things are as they seem. Perhaps you remember reading somewhere that pinching yourself can end a dream—but did you read that in a trustworthy source ? Or is it even something that you really read, as opposed to something that you are just now dreaming you once read ? If you can’t find any sure way of proving that you are now awake, can you really take your sensory experience at face value ?"
"These philosophers are ‘sceptics’, from the Ancient Greek for ‘inquiring’ or ‘reflective’."
"Academic sceptics argued for the conclusion that knowledge was impossible ; Pyrrhonian sceptics aimed to reach no conclusions at all, suspending judgement on all questions, even the question of the possibility of knowledge.
Academic Scepticism is named for the institution it sprang from: the Academy in Athens, originally founded by Plato. The movement’s two great leaders each served a turn as head of the Academy: Arcesilaus in the 3rd century BCE, and then Carneades a hundred years later. Although both of them originally framed their scepticism in opposition to the once-influential Stoic theory of knowledge, their arguments continue to be taken seriously in philosophy to the present day. These sceptical arguments have enduring power because the core Stoic ideas they criticize are still embraced within many other theories of knowledge, and may even be part of our commonsense way of thinking about the difference between knowledge and mere belief.
Stoic epistemology draws a distinction between impressions and judgements. The Stoics noticed that you can have an impression—say, of shimmering water on a desert road—without judging that things really are as they seem. Judgement is the acceptance (or rejection) of an impression; knowledge is wise judgement, or the acceptance of just the right impressions. In the Stoic view, people make mistakes and fall short of knowledge when they accept poor impressions—say, when you judge that some friend is approaching, on the basis of a hazy glimpse of someone in the distance. When an impression is unclear, you might be wrong—or even if you are right, you are just lucky to be hitting the mark. A lucky guess does not amount to knowledge. The wise person would wait until that friend was closer and could be seen clearly. Indeed, according to the Stoics, you attain knowledge only when you accept an impression that is so clear and distinct that you couldn’t be mistaken."
"Academic Sceptics were happy to agree that knowledge would consist in accepting only impressions that couldn’t be wrong, but they proceeded to argue that there simply are no such impressions. Would it be enough to wait until your friend comes nearer ? Remember that people can have identical twins, with features so similar that you can’t tell them apart, even close up. If you feel sure your friend has no twin, remind yourself that you might be misremembering, dreaming, drunk, or hallucinating. If the wise person waits to accept just impressions that couldn’t possibly be wrong, he will be waiting forever: even the sharpest and most vivid impression might be mistaken. Because impressions are always fallible, the Academic Sceptics argued, knowledge is impossible.
One might wonder about the internal consistency of this position: how could the Academics be so certain of the impossibility of knowledge while at the same time doubting our ability to establish anything with certainty? Such concerns helped to motivate an even deeper form of scepticism. Imagine a way of thinking which consists of pure doubt, making no positive claims at all, not even the claim that knowledge could be proven to lie out of reach. Pyrrhonian Scepticism aimed to take this more radical approach.
The movement was named in honour of Pyrrho of Elis (c.360–270 BCE), who is known to us not through his own written texts—he left no surviving works—but through the reports of other philosophers and historians. As a young man, Pyrrho joined Alexander the Great’s expedition to India, where he is said to have enjoyed some exposure to Indian philosophy. On his return, Pyrrho started to attract followers, eventually becoming so popular that his home town honoured him with a statue and a proclamation that all philosophers could live there tax free. Pyrrho’s influence now reaches us mainly through the writings of his admirer Sextus Empiricus (c.160–210 CE), who drew sceptical ideas from a range of ancient sources to form the branch of scepticism now known as Pyrrhonism."
" [Pyrrhonians] developed a general strategy for generating doubt on any topic at all: whenever you are tempted to make up your mind one way on an issue, consider the other way. Instead of settling the matter one way or another (which would be ‘dogmatism’), just continue the search for further evidence, keeping the two sides of a question balanced against each other in your mind. Many techniques were available for keeping contrary ideas in balance. You could think about how different things would appear to other animals, or when seen from other perspectives, or in different cultures. Drawing on the work of earlier sceptics, Sextus Empiricus developed an extensive catalogue of ways to keep yourself from settling for any particular answer to any given question. He also developed lists of phrases that the sceptic could say to himself (‘I determine nothing’ ; ‘Perhaps it is, and perhaps it is not’). Sextus did not want these phrases to be seen as his own expressions of dogma: his scepticism was laid out as a practice or way of life, and not as a positive theory of reality. Keeping all questions open may sound like a recipe for anxiety, but curiously enough, Sextus reported the impression that his sceptical practice seemed to bring peace of mind. (Only an impression, of course—he couldn’t be sure that he had achieved true peace of mind, or that it had come as a result of the scepticism, rather than by chance.)
One early criticism of scepticism was that it would be problematic for human survival: if sceptics suspend judgement even on the question of whether eating will satisfy their hunger, aren’t they at risk of starvation? The Pyrrhonians suggested that behaviour can be guided by instinct, habit, and custom rather than judgement or knowledge: in resisting dogma, sceptics do not have to fight against their raw impulses or involuntary impressions. Sceptics can satisfy their hunger and thirst on autopilot while refraining from judgement about reality.
The sceptical path of resisting all judgement is not an easy one to follow, and throughout the Middle Ages the dominant figures of Western philosophy were firmly non-sceptical. Scepticism did flourish in the Indian tradition, however, most remarkably in the work of Śrīharṣa, whose 12th-century text The Sweets of Refutation promised to teach its readers some spectacular techniques of argument that could be used against any positive theory whatsoever."
"Radical doubts about the possibility of knowledge emerged periodically over the following centuries, most strikingly during periods of intellectual upheaval. As new scientific work challenged the medieval world view in the 16th century, there was a resurgence of scepticism in Europe. The works of Sextus Empiricus were rediscovered, and his arguments were eagerly taken up by philosophers such as Michel de Montaigne (1533–92), whose personal motto (‘What do I know?’) expressed his enthusiasm for Pyrrhonian doubt. This sceptical spirit was contagious: early in the 17th century, René Descartes (1596–1650) reported that, far from being an extinct ancient view, scepticism was ‘vigorously alive today’. Descartes’s best-known work, Meditations on First Philosophy, presents truly novel sceptical arguments about the limits of reason, alongside familiar ancient arguments about dreaming and illusions. In his deepest sceptical moment, Descartes invites you to contemplate a scenario in which a powerful evil demon is dedicated to deceiving you at every turn, not only sending you illusory sensory impressions, but also leading you astray each time you attempt to make an abstract judgement such as a simple arithmetical calculation. This vivid scenario has lingered in the philosophical imagination, even though Descartes himself thought that there was a sure way to dispel it. Descartes was not himself a sceptic (despite his considerable talent for presenting sceptical arguments): he took himself to have an airtight proof that scepticism is mistaken."
"The old question of scepticism received some surprising new answers in the 20th century. A strangely simple approach was advanced by the English philosopher G. E. Moore in a public lecture in 1939. In answer to the question of how we could prove the reality of the external world, Moore simply held up his hands (saying, ‘Here is one hand, and here is another’), explained that they were external objects, and drew the logical conclusion that external objects actually exist. Moore considered this to be a fully satisfactory proof: from the premise that he had hands, and the further premise that his hands were external objects (or, as he elaborated, ‘things to be met with in space’), it clearly does follow that external things exist. The sceptic might, of course, complain that Moore did not really know that he had hands—but here Moore proposed shifting the burden of proof over to the sceptic. ‘How absurd it would be to suggest that I did not really know it, but only believed it, and that perhaps it was not the case!’ Moore insists that he knows that he has hands, but doesn’t even try to prove that he is right about this. After shrugging off the sceptic’s worries as absurd, Moore aims to explain why he won’t produce a proof that he has hands, and why we should still accept him as having knowledge on this point. Moore starts by remarking that when he claims to know (without proof) that he has hands, he is not claiming that a person can never prove he has hands.
Moore was willing to grant that there are special situations in which someone might reasonably prove the existence of his hands: for example, if anyone suspects that you are an amputee with artificial limbs (and you are not), you could let him examine your hands more closely to dispel that particular doubt. If you were really keen to prove your point, you could even let him feel your pulse or scratch you with a sharp object. But however well that strategy might work to dispel particular doubts about artificial limbs, Moore does not think that there is an all-purpose strategy for proving that your hands exist, a general proof that would dispel all possible doubts. The range of possible doubts is truly enormous. To take just one example, a fully general proof against all doubts would have to show that you were not a sleeping amputee, dreaming in your hospital bed after an accident in which you lost your arms. Moore is pessimistic about anyone’s chances of proving that this (admittedly far-fetched!) scenario isn’t happening.
However, just as Moore thinks you could know that you have hands without being able to prove as much, he also thinks that your inability to prove that you are not dreaming does not stop you from knowing that you are not dreaming. Moore once again retains confidence in his knowledge despite the limitations on what he is able to prove: ‘I have, no doubt, conclusive reasons for asserting that I am not now dreaming; I have conclusive evidence that I am awake: but that is a very different thing from being able to prove it. I could not tell you what all my evidence is; and I should require to do this at least, in order to give you a proof.’
In claiming to have conclusive evidence that he is awake, Moore is resisting the hypothetical push of the sceptic’s reasoning. Moore actually agrees with the sceptic that if you are dreaming, then you will not know just by looking that you have hands. But Moore reminds us that the sceptic’s argument rests on that big ‘if’: as Moore sees it, the person who knows that he is not dreaming (whether or not he can prove this) should not be troubled by the sceptic’s worries.
The strategy of declaring that one has knowledge without proof may set off some alarm bells (is Moore declaring victory after refusing to fight?). It may also seem odd that Moore is willing to construct what he thinks is a very good proof of the claim ‘External objects exist,’ while simply turning his back on the project of proving the somewhat similar claim ‘These hands exist.’ There’s an important difference between those two assertions, however: the first is general and philosophical in character, and the second particular and ordinary. Explicit reasoning or proof has a clear role to play when we are supporting general philosophical claims: we can engage in extended reasoning about exactly what it means for something to be an ‘external object’, and indeed much of Moore’s lecture is taken up with detailed discussion of this issue. By contrast, an ordinary claim like ‘Here is a hand’ is so basic that it is hard to find simpler and better-known claims we could use to support it. (There is a parallel with mathematics here, where some basic claims are taken as axioms, not themselves in need of proof.) If the sceptic attempts to undermine our certainty about such basic matters, Moore would urge us to distrust the sceptic’s fancy philosophical reasoning well before we distrust our original common sense. We could reasonably be alarmed by someone claiming to know a controversial philosophical claim despite an inability to prove it; we should not feel such resistance to someone who claims to know a simple observable fact about his immediate environment."
"Russell grants one point to the sceptic right away: it is logically possible that all of our impressions (or ‘sense data’, to use Russell’s terminology) have their origin in something quite different from the real world we ordinarily take ourselves to inhabit. But in Russell’s approach to scepticism, now known as the ‘Inference to the Best Explanation’ approach, we can grant that point about logical possibility and still hang on to fight the sceptic. Russell argues that there is a large gap between admitting that something is logically possible and concluding that we can’t rationally rule it out: we have rational principles other than the rules of logic, narrowly conceived. In particular, Russell invokes the principle of simplicity: other things being equal, a simpler explanation is rationally preferred to a more complex one. It’s logically possible that all the sense data you ordinarily credit to your pet cat (meowing sounds, the sight and feel of fur, and so on) do not come from the source you expect. Perhaps these impressions issue from a succession of different creatures, or from a series of inexplicably consistent dreams or some other strange source. But the simplest hypothesis, according to Russell, is the one that you would most naturally believe: there is a single real animal whose periodic interactions with you cause the relevant cat-like impressions in the stream of your private experience. Just as it is rational for scientists to explain patterns in their data by appeal to simple laws, it is rational to explain patterns in our everyday experience by appeal to a simple world of lasting objects (the ‘real-world’ hypothesis).
Russell’s approach has its attractions, but a few worries may linger. Even if we grant that Inference to the Best Explanation is generally a rational strategy, we might feel that it seems insufficiently conclusive to ground knowledge as opposed to just rational belief. This potential weakness of Inference to the Best Explanation can be illuminated by thinking about other contexts in which this style of reasoning is used. For example, a detective might use Inference to the Best Explanation when investigating a crime: once he has found mud matching the crime scene on the butler’s shoes, heard the maid’s testimony about the butler’s hatred of the victim, and discovered a bloody knife under the butler’s bed, the detective could reasonably conclude that the best explanation of the available evidence is that the butler committed the murder. However, the sceptic could point out, things might not be as they seem: perhaps the maid has committed the murder and very skilfully framed the innocent butler. This is not the simplest explanation, but it just might be the true one. Assuming that the detective uncovered no evidence of the maid’s involvement, it could be rational for him to conclude that the butler is guilty, but this wouldn’t establish that the butler actually was guilty. Likewise, some sceptics might be willing to grant that it is very likely that our experiences arise from ordinary external objects, or even that it is reasonable to believe as much, without being willing to grant that these experiences give us knowledge: knowledge, they might argue, calls for a higher standard than rational belief.
A further worry about Russell’s strategy is that it is not obvious that the real-world hypothesis really is a better explanation of our experience than rival explanations the sceptic might offer. A sceptic might argue that the evil demon hypothesis can neatly explain the very features of our experience that impressed Russell: of course an evil demon would send us vivid and apparently coherent experiences over time, given that the evil demon is trying to deceive us into believing that there is an outer world of objects. This objection applies equally well to other versions of the sceptical hypothesis. For those who resist the supernatural element of the evil demon story, there is a modernized scientific version available: just suppose that your brain has been removed from your body and connected to a supercomputer which simulates experiences of a coherent reality, sending signals along your sensory pathways. If the program is good enough, maintaining consistency over time and adjusting its displays to match your outgoing motor signals (you decide to look to the left, and your visual experience changes accordingly …), your experience as a brain in a vat might be internally indistinguishable from the experience of someone interacting with an ordinary physical environment. Everything you think you see and feel—the blue sky outside, the warmth of the sun—could be an element in the large-scale virtual reality simulated by the supercomputer. Assuming that the point of the whole simulation is to give you sensory experiences that perfectly mirror the sensory experiences you’d have in an ordinary physical world, the challenge to the advocate of the Inference to the Best Explanation approach would be to explain why exactly the real-world hypothesis is a better explanation of our experience than the brain-in-a-vat hypothesis. To answer this challenge, various suggestions have been advanced: for example, the American philosopher Jonathan Vogel has argued that the basic spatial structure of the real world is much simpler than the spatial structure of the brain in a vat’s virtual-reality-within-a-real-world, making the real-world hypothesis a better way to explain our experience."
- Jennifer Nagel, Knowledge: A Very Short Introduction, Oxford University Press, 2014.
"The branch of philosophy dedicated to answering them—epistemology—has been active for thousands of years. Some core questions have remained constant throughout this time: How does knowledge relate to truth ? Do senses like vision and hearing supply us with knowledge in the same way as abstract reasoning does ? Do you need to be able to justify a claim in order to count as knowing it ? Other concerns have emerged only recently, in light of new discoveries about humanity, language, and the mind. Is the contrast between knowledge and mere opinion universal to all cultures ? In ordinary language, does the word ‘know’ always stand for the same thing, or does it refer to something heavier in a court of law, and something lighter in a casual bus-stop conversation ? What natural instinctive impressions do we have about what others know, and how much can these impressions tell us about knowledge itself ?"
"It’s tempting to identify knowledge with facts, but not every fact is an item of knowledge. Imagine shaking a sealed cardboard box containing a single coin. As you put the box down, the coin inside the box has landed either heads or tails: let’s say that’s a fact. But as long as no one looks into the box, this fact remains unknown; it is not yet within the realm of knowledge. Nor do facts become knowledge simply by being written down. If you write the sentence ‘The coin has landed heads’ on one slip of paper and ‘The coin has landed tails’ on another, then you will have written down a fact on one of the slips, but you still won’t have gained knowledge of the outcome of the coin toss. Knowledge demands some kind of access to a fact on the part of some living subject. Without a mind to access it, whatever is stored in libraries and databases won’t be knowledge, but just ink marks and electronic traces. In any given case of knowledge, this access may or may not be unique to an individual: the same fact may be known by one person and not by others. Common knowledge might be shared by many people, but there is no knowledge that dangles unattached to any subject. Unlike water or gold, knowledge always belongs to someone.
More precisely, we should say that knowledge always belongs to some individual or group: the knowledge of a group may go beyond the knowledge of its individual members. There are times when a group counts as knowing a fact just because this fact is known to every member of the group (‘The orchestra knows that the concert starts at 8 pm’). But we can also say that the orchestra knows how to play Beethoven’s entire Ninth Symphony, even if individual members know just their own parts. Or we can say that a rogue nation knows how to launch a nuclear missile even if there is no single individual of that nation who knows even half of what is needed to manage the launch. Groups can combine the knowledge of their members in remarkably productive (or destructive) ways."
"Knowledge, in the sense that matters here, is a link between a person and a fact."
"English does have one confusing feature, shared with a number of other languages: our verb ‘know’ has two distinct senses in common use. In the first, it can take either a propositional complement or ‘that’-clause (as in ‘He knows that the car was stolen’) or an embedded question (as in ‘She knows who stole the car’ or ‘She knows when the theft occurred’). In the second, it takes a direct object (‘He knows Barack Obama’; ‘She knows London’). Many other languages use different words for these two meanings (like the French ‘savoir’ and ‘connaître’). In what follows, we will be focusing on the first sense of ‘knows’, the kind of knowledge that links a person with a fact.
This sense of ‘knows’ has an interesting feature: there’s a word for it in all of the world’s 6,000+ human languages (‘think’ shares this feature). This status is surprisingly rare: an educated person has a vocabulary of about 20,000 words, but fewer than 100 of these are thought to have precise translations in every other language. Common words you might expect to find in every language (like ‘eat’ and ‘drink’) don’t always have equivalents. (Some aboriginal languages in Australia and Papua New Guinea get by with a single word meaning ‘ingest’.) Elsewhere, other languages make finer rather than rougher distinctions: many languages lack a single word translating ‘go’, because they use distinct verbs for self-propelled motions like walking and for vehicular motion. Sometimes the lines are just drawn in different places: where the common English pronouns ‘he’ and ‘she’ force a choice of gender, other languages have third-person pronouns that distinguish between present and absent persons, but not between male and female. Human languages have remarkable diversity. But despite this diversity, a few terms appear in all known languages, perhaps because their meanings are crucial to the way language works, or because they express some vital aspect of human experience. These universals include ‘because’, ‘if’, ‘good’, ‘bad’, ‘live’, ‘die’ … and ‘know’ ."
"What do we ordinarily do with this vital verb, and how is ‘know’ different from the contrasting verb ‘think’ ? Everyday usage provides some clues.
Consider the following two sentences:
Jill knows that her door is locked.
Bill thinks that his door is locked.
We immediately register a difference between Jill and Bill—but what is it ?
One factor that comes to mind has to do with the truth of the embedded claim about the door. If Bill just thinks that his door is locked, perhaps this is because Bill’s door is not really locked. Maybe he didn’t turn the key far enough this morning as he was leaving home. Jill’s door, however, must be locked for the sentence about her to be true: you can’t ordinarily say, ‘Jill knows that her door is locked, but her door isn’t locked.’ Knowledge links a subject to a truth. This feature of ‘knowing that’ is called factivity: we can know only facts, or true propositions. ‘To know that’ is not the only factive construction: others include ‘to realize that’, ‘to see that’, ‘to remember that’, ‘to prove that’. You can realize that your lottery ticket has won only if it really has won. One of the special features of ‘know’ is that it is the most general such verb, standing for the deeper state that remembering, realizing, and the rest all have in common."
"Of course, it’s possible to seem to know something that later turns out to be false—but as soon as we recognize the falsity, we have to retract the claim that it was ever known. (‘We thought he knew that, but it turned out he was wrong and didn’t know.’) To complicate matters, it can be hard to tell whether someone knows something or just seems to know it. This doesn’t erase the distinction between knowing and seeming to know. In a market flooded with imitations it can be hard to tell a real diamond from a fake, but the practical difficulty of identifying the genuine article shouldn’t make us think there is no difference out there: real diamonds have a special essence —a special structure of carbon atoms—not shared by lookalikes.
The dedicated link to truth is part of the essence of knowledge. We speak of ‘knowing’ falsehoods when we are speaking in a non-literal way (just as we can use a word like ‘delicious’ sarcastically, describing things that taste awful). Emphasis—in italics or pitch—is one sign of non-literal use. ‘That cabbage soup smells delicious, right?’ ‘I knew I had been picked for the team. But it turned out I wasn’t.’ This use of ‘knows’ has been called the ‘projected’ use: the speaker is projecting herself into a past frame of mind, recalling a moment when it seemed to her that she knew. The emphasis is a clue that the speaker is distancing herself from that frame of mind: she didn’t literally or really know (as our emphatic speaker didn’t really like the soup). The literal use of ‘know’ can’t mix with falsehood in this way."
"By contrast, belief can easily link a subject to a false proposition: it’s perfectly acceptable to say, ‘Bill thinks that his door is locked, but it isn’t.’ The verb ‘think’ is non-factive. (Other non-factive verbs include ‘hope’, ‘suspect’, ‘doubt’, and ‘say’—you can certainly say that your door is locked when it isn’t.) Opinions being non-factive does not mean that opinion is always wrong: when Bill just thinks that his door is locked, he could be right. Perhaps Bill’s somewhat unreliable room-mate Bob occasionally forgets to lock the door. If Bill isn’t entirely sure that his door is locked, then he could think that it is locked, and be right, but fail to know that it is locked. Confidence matters to knowledge.
Knowledge has still further requirements, beyond truth and confidence. Someone who is very confident but for the wrong reasons would also fail to have knowledge. A father whose daughter is charged with a crime might feel utterly certain that she is innocent. But if his confidence has a basis in emotion rather than evidence (suppose he’s deliberately avoiding looking at any facts about the case), then even if he is right that his daughter is innocent, he may not really know that she is. But if a confidently held true belief is not enough for knowledge, what more needs to be added ?"
"We’ll assume in what follows that truth is objective, or based in reality and the same for all of us. Most philosophers agree about the objectivity of truth, but there are some rebels who have thought otherwise. The Ancient Greek philosopher Protagoras (5th century BCE) held that knowledge is always of the true, but also that different things could be true for different people. Standing outdoors on a breezy summer day and feeling a bit sick, I could know that the wind is cold, while you know that it is warm. Protagoras didn’t just mean that I know that the wind feels cold to me, while you know that it feels warm to you—the notion that different people have different feelings is something that can be embraced by advocates of the mainstream view according to which truth is the same for everyone. (It could be a plain objective fact that the warm wind feels cold to a sick person.) Protagoras says something more radical: it is true for me that the wind really is cold and true for you that the wind is warm. In fact, Protagoras always understands truth as relative to a subject: some things are true-for-you ; other things are true-for-your-best-friend or true-for-your-worst-enemy, but nothing is simply true.
Protagoras’s relativist theory of knowledge is intriguing, but hard to swallow, and perhaps even self-refuting. If things really are for each person as he sees them, then no one ever makes a mistake. It’s true for the hallucinating desert traveller that there really is an oasis ahead ; it’s true for the person who makes an arithmetical error that seven and five add up to eleven. What if it later seems to you that you have made a mistake ? If things always are as they appear, then it is true for you that you have made a mistake, even though appearances can never be misleading, so it should have been impossible for you to get things wrong in the first place. This is awkward. One Ancient Greek tactic for handling this problem involved a division of you-at-this-moment from you-a-moment-ago. Things are actually only true for you-right-now, and different things might be true-for-you-later (for example, it might be true-for-your-future-self that your past self made a mistake).
Splintering the self into momentary fragments is arguably a high price to pay for a theory of knowledge. If you find the price too high, and want to hold on to the idea that there is a lasting self, then Protagoras’s theory may start to seem false to you. But if Protagoras’s theory seems false, remember that by the lights of this theory you can’t be mistaken: the theory itself tells you that things always are for you as they seem to you. Now Protagoras’s theory is really in trouble. The self-destructive potential of relativism was remarked upon by Plato (c.428–348 BCE), who also noticed a tension between what Protagoras was trying to do in the general formulation of his theory, and what the theory says about truth being local to the individual. If Protagoras wants his theory of what is true-for-each-person-at-an-instant to capture what is going on with all of us, over time, it’s not clear how he can manage that."
"Could you be dreaming that you are reading this book right now ? If this is a dream, you could be lying barefoot in bed. Or you could be asleep on the commuter train, fully dressed. You may consider it unlikely that you are now dreaming, but you might wonder whether you have any way of establishing conclusively that you are awake, and that things are as they seem. Perhaps you remember reading somewhere that pinching yourself can end a dream—but did you read that in a trustworthy source ? Or is it even something that you really read, as opposed to something that you are just now dreaming you once read ? If you can’t find any sure way of proving that you are now awake, can you really take your sensory experience at face value ?"
"These philosophers are ‘sceptics’, from the Ancient Greek for ‘inquiring’ or ‘reflective’."
"Academic sceptics argued for the conclusion that knowledge was impossible ; Pyrrhonian sceptics aimed to reach no conclusions at all, suspending judgement on all questions, even the question of the possibility of knowledge.
Academic Scepticism is named for the institution it sprang from: the Academy in Athens, originally founded by Plato. The movement’s two great leaders each served a turn as head of the Academy: Arcesilaus in the 3rd century BCE, and then Carneades a hundred years later. Although both of them originally framed their scepticism in opposition to the once-influential Stoic theory of knowledge, their arguments continue to be taken seriously in philosophy to the present day. These sceptical arguments have enduring power because the core Stoic ideas they criticize are still embraced within many other theories of knowledge, and may even be part of our commonsense way of thinking about the difference between knowledge and mere belief.
Stoic epistemology draws a distinction between impressions and judgements. The Stoics noticed that you can have an impression—say, of shimmering water on a desert road—without judging that things really are as they seem. Judgement is the acceptance (or rejection) of an impression ; knowledge is wise judgement, or the acceptance of just the right impressions. In the Stoic view, people make mistakes and fall short of knowledge when they accept poor impressions—say, when you judge that some friend is approaching, on the basis of a hazy glimpse of someone in the distance. When an impression is unclear, you might be wrong—or even if you are right, you are just lucky to be hitting the mark. A lucky guess does not amount to knowledge. The wise person would wait until that friend was closer and could be seen clearly. Indeed, according to the Stoics, you attain knowledge only when you accept an impression that is so clear and distinct that you couldn’t be mistaken."
"Academic Sceptics were happy to agree that knowledge would consist in accepting only impressions that couldn’t be wrong, but they proceeded to argue that there simply are no such impressions. Would it be enough to wait until your friend comes nearer ? Remember that people can have identical twins, with features so similar that you can’t tell them apart, even close up. If you feel sure your friend has no twin, remind yourself that you might be misremembering, dreaming, drunk, or hallucinating. If the wise person waits to accept just impressions that couldn’t possibly be wrong, he will be waiting forever: even the sharpest and most vivid impression might be mistaken. Because impressions are always fallible, the Academic Sceptics argued, knowledge is impossible.
One might wonder about the internal consistency of this position: how could the Academics be so certain of the impossibility of knowledge while at the same time doubting our ability to establish anything with certainty ? Such concerns helped to motivate an even deeper form of scepticism. Imagine a way of thinking which consists of pure doubt, making no positive claims at all, not even the claim that knowledge could be proven to lie out of reach. Pyrrhonian Scepticism aimed to take this more radical approach.
The movement was named in honour of Pyrrho of Elis (c.360–270 BCE), who is known to us not through his own written texts—he left no surviving works—but through the reports of other philosophers and historians. As a young man, Pyrrho joined Alexander the Great’s expedition to India, where he is said to have enjoyed some exposure to Indian philosophy. On his return, Pyrrho started to attract followers, eventually becoming so popular that his home town honoured him with a statue and a proclamation that all philosophers could live there tax free. Pyrrho’s influence now reaches us mainly through the writings of his admirer Sextus Empiricus (c.160–210 CE), who drew sceptical ideas from a range of ancient sources to form the branch of scepticism now known as Pyrrhonism."
" [Pyrrhonians] developed a general strategy for generating doubt on any topic at all: whenever you are tempted to make up your mind one way on an issue, consider the other way. Instead of settling the matter one way or another (which would be ‘dogmatism’), just continue the search for further evidence, keeping the two sides of a question balanced against each other in your mind. Many techniques were available for keeping contrary ideas in balance. You could think about how different things would appear to other animals, or when seen from other perspectives, or in different cultures. Drawing on the work of earlier sceptics, Sextus Empiricus developed an extensive catalogue of ways to keep yourself from settling for any particular answer to any given question. He also developed lists of phrases that the sceptic could say to himself (‘I determine nothing’ ; ‘Perhaps it is, and perhaps it is not’). Sextus did not want these phrases to be seen as his own expressions of dogma: his scepticism was laid out as a practice or way of life, and not as a positive theory of reality. Keeping all questions open may sound like a recipe for anxiety, but curiously enough, Sextus reported the impression that his sceptical practice seemed to bring peace of mind. (Only an impression, of course—he couldn’t be sure that he had achieved true peace of mind, or that it had come as a result of the scepticism, rather than by chance.)
One early criticism of scepticism was that it would be problematic for human survival: if sceptics suspend judgement even on the question of whether eating will satisfy their hunger, aren’t they at risk of starvation ? The Pyrrhonians suggested that behaviour can be guided by instinct, habit, and custom rather than judgement or knowledge: in resisting dogma,
sceptics do not have to fight against their raw impulses or involuntary impressions. Sceptics can satisfy their hunger and thirst on autopilot while refraining from judgement about reality.
The sceptical path of resisting all judgement is not an easy one to follow, and throughout the Middle Ages the dominant figures of Western philosophy were firmly non-sceptical. Scepticism did flourish in the Indian tradition, however, most remarkably in the work of Śrīharśa, whose 11th century text The Sweets of Refutation promised to teach its readers some spectacular techniques of argument that could be used against any positive theory whatsoever."
"Radical doubts about the possibility of knowledge emerged periodically over the following centuries, most strikingly during periods of intellectual upheaval. As new scientific work challenged the medieval world view in the 16th century, there was a resurgence of scepticism in Europe. The works of Sextus Empiricus were rediscovered, and his arguments were eagerly taken up by philosophers such as Michel de Montaigne (1533–92), whose personal motto (‘What do I know?’) expressed his enthusiasm for Pyrrhonian doubt. This sceptical spirit was contagious: early in the 17th century, René Descartes (1596–1650) reported that, far from being an extinct ancient view, scepticism was ‘vigorously alive today’. Descartes’s best-known work, Meditations on First Philosophy, presents truly novel sceptical arguments about the limits of reason, alongside familiar ancient arguments about dreaming and illusions. In his deepest sceptical moment, Descartes invites you to contemplate a scenario in which a powerful evil demon is dedicated to deceiving you at every turn, not only sending you illusory sensory impressions, but also leading you astray each time you attempt to make an abstract judgement such as a simple arithmetical calculation. This vivid scenario has lingered in the philosophical imagination, even though Descartes himself thought that there was a sure way to dispel it. Descartes was not himself a sceptic (despite his considerable talent for presenting sceptical arguments): he took himself to have an airtight proof that scepticism is mistaken."
"The old question of scepticism received some surprising new answers in the 20th century. A strangely simple approach was advanced by the English philosopher G. E. Moore in a public lecture in 1939. In answer to the question of how we could prove the reality of the external world, Moore simply held up his hands (saying, ‘Here is one hand, and here is another’), explained that they were external objects, and drew the logical conclusion that external objects actually exist. Moore considered this to be a fully satisfactory proof: from the premise that he had hands, and the further premise that his hands were external objects (or, as he elaborated, ‘things to be met with in space’), it clearly does follow that external things exist. The sceptic might, of course, complain that Moore did not really know that he had hands—but here Moore proposed shifting the burden of proof over to the sceptic. ‘How absurd it would be to suggest that I did not really know it, but only believed it, and that perhaps it was not the case!’ Moore insists that he knows that he has hands, but doesn’t even try to prove that he is right about this. After shrugging off the sceptic’s worries as absurd, Moore aims to explain why he won’t produce a proof that he has hands, and why we should still accept him as having knowledge on this point. Moore starts by remarking that when he claims to know (without proof) that he has hands, he is not claiming that a person can never prove he has hands.
Moore was willing to grant that there are special situations in which someone might reasonably prove the existence of his hands: for example, if anyone suspects that you are an amputee with artificial limbs (and you are not), you could let him examine your hands more closely to dispel that particular doubt. If you were really keen to prove your point, you could even let him feel your pulse or scratch you with a sharp object. But however well that strategy might work to dispel particular doubts about artificial limbs, Moore does not think that there is an all-purpose strategy for proving that your hands exist, a general proof that would dispel all possible doubts. The range of possible doubts is truly enormous. To take just one example, a fully general proof against all doubts would have to show that you were not a sleeping amputee, dreaming in your hospital bed after an accident in which you lost your arms. Moore is pessimistic about anyone’s chances of proving that this (admittedly far-fetched!) scenario isn’t happening.
However, just as Moore thinks you could know that you have hands without being able to prove as much, he also thinks that your inability to prove that you are not dreaming does not stop you from knowing that you are not dreaming. Moore once again retains confidence in his knowledge despite the limitations on what he is able to prove: ‘I have, no doubt, conclusive reasons for asserting that I am not now dreaming; I have conclusive evidence that I am awake: but that is a very different thing from being able to prove it. I could not tell you what all my evidence is; and I should require to do this at least, in order to give you a proof.’
In claiming to have conclusive evidence that he is awake, Moore is resisting the hypothetical push of the sceptic’s reasoning. Moore actually agrees with the sceptic that if you are dreaming, then you will not know just by looking that you have hands. But Moore reminds us that the sceptic’s argument rests on that big ‘if’: as Moore sees it, the person who knows that he is not dreaming (whether or not he can prove this) should not be troubled by the sceptic’s worries.
The strategy of declaring that one has knowledge without proof may set off some alarm bells (is Moore declaring victory after refusing to fight?). It may also seem odd that Moore is willing to construct what he thinks is a very good proof of the claim ‘External objects exist,’ while simply turning his back on the project of proving the somewhat similar claim ‘These hands exist.’ There’s an important difference between those two assertions, however: the first is general and philosophical in character, and the second particular and ordinary. Explicit reasoning or proof has a clear role to play when we are supporting general philosophical claims: we can engage in extended reasoning about exactly what it means for something to be an ‘external object’, and indeed much of Moore’s lecture is taken up with detailed discussion of this issue. By contrast, an ordinary claim like ‘Here is a hand’ is so basic that it is hard to find simpler and better-known claims we could use to support it. (There is a parallel with mathematics here, where some basic claims are taken as axioms, not themselves in need of proof.) If the sceptic attempts to undermine our certainty about such basic matters, Moore would urge us to distrust the sceptic’s fancy philosophical reasoning well before we distrust our original common sense. We could reasonably be alarmed by someone claiming to know a controversial philosophical claim despite an inability to prove it; we should not feel such resistance to someone who claims to know a simple observable fact about his immediate environment."
"Russell grants one point to the sceptic right away: it is logically possible that all of our impressions (or ‘sense data’, to use Russell’s terminology) have their origin in something quite different from the real world we ordinarily take ourselves to inhabit. But in Russell’s approach to scepticism, now known as the ‘Inference to the Best Explanation’ approach, we can grant that point about logical possibility and still hang on to fight the sceptic. Russell argues that there is a large gap between admitting that something is logically possible and concluding that we can’t rationally rule it out: we have rational principles other than the rules of logic, narrowly conceived. In particular, Russell invokes the principle of simplicity: other things being equal, a simpler explanation is rationally preferred to a more complex one. It’s logically possible that all the sense data you ordinarily credit to your pet cat (meowing sounds, the sight and feel of fur, and so on) do not come from the source you expect. Perhaps these impressions issue from a succession of different creatures, or from a series of inexplicably consistent dreams or some other strange source. But the simplest hypothesis, according to Russell, is the one that you would most naturally believe: there is a single real animal whose periodic interactions with you cause the relevant cat-like impressions in the stream of your private experience. Just as it is rational for scientists to explain patterns in their data by appeal to simple laws, it is rational to explain patterns in our everyday experience by appeal to a simple world of lasting objects (the ‘real-world’ hypothesis).
Russell’s approach has its attractions, but a few worries may linger. Even if we grant that Inference to the Best Explanation is generally a rational strategy, we might feel that it seems insufficiently conclusive to ground knowledge as opposed to just rational belief. This potential weakness of Inference to the Best Explanation can be illuminated by thinking about other contexts in which this style of reasoning is used. For example, a detective might use Inference to the Best Explanation when investigating a crime: once he has found mud matching the crime scene on the butler’s shoes, heard the maid’s testimony about the butler’s hatred of the victim, and discovered a bloody knife under the butler’s bed, the detective could reasonably conclude that the best explanation of the available evidence is that the butler committed the murder. However, the sceptic could point out, things might not be as they seem: perhaps the maid has committed the murder and very skilfully framed the innocent butler. This is not the simplest explanation, but it just might be the true one. Assuming that the detective uncovered no evidence of the maid’s involvement, it could be rational for him to conclude that the butler is guilty, but this wouldn’t establish that the butler actually was guilty. Likewise, some sceptics might be willing to grant that it is very likely that our experiences arise from ordinary external objects, or even that it is reasonable to believe as much, without being willing to grant that these experiences give us knowledge: knowledge, they might argue, calls for a higher standard than rational belief.
A further worry about Russell’s strategy is that it is not obvious that the real-world hypothesis really is a better explanation of our experience than rival explanations the sceptic might offer. A sceptic might argue that the evil demon hypothesis can neatly explain the very features of our experience that impressed Russell: of course an evil demon would send us vivid and apparently coherent experiences over time, given that the evil demon is trying to deceive us into believing that there is an outer world of objects. This objection applies equally well to other versions of the sceptical hypothesis. For those who resist the supernatural element of the evil demon story, there is a modernized scientific version available: just suppose that your brain has been removed from your body and connected to a supercomputer which simulates experiences of a coherent reality, sending signals along your sensory pathways. If the program is good enough, maintaining consistency over time and adjusting its displays to match your outgoing motor signals (you decide to look to the left, and your visual experience changes accordingly …), your experience as a brain in a vat might be internally indistinguishable from the experience of someone interacting with an ordinary physical environment. Everything you think you see and feel—the blue sky outside, the warmth of the sun—could be an element in the large-scale virtual reality simulated by the supercomputer. Assuming that the point of the whole simulation is to give you sensory experiences that perfectly mirror the sensory experiences you’d have in an ordinary physical world, the challenge to the advocate of the Inference to the Best Explanation approach would be to explain why exactly the real-world hypothesis is a better explanation of our experience than the brain-in-a-vat hypothesis. To answer this challenge, various suggestions have been advanced: for example, the American philosopher Jonathan Vogel has argued that the basic spatial structure of the real world is much simpler than the spatial structure of the brain in a vat’s virtual-reality-within-a-real-world, making the real-world hypothesis a better way to explain our experience."
-Jennifer Nagel, Knowledge: A Very Short Introduction, Oxford University Press, 2014.