Friday, June 8, 2012

Volume 16.1


THESISxii

A Philosophical Review

Volume 16 • Number 1

May, 2008

                                     

Inside this Issue:


Matthew R. Silliman
WHAT MAKES HONORS STUDENTS HONORABLE?

Gerol Petruzella
YOU WANT ME TO WHAT? RESEARCH, GRADUATE SCHOOL, AND A REAL LIFE

Jessica Dennis
HUME ON MIRACLES

Carolyn Cook
A BABOON’S ROLE IN THE EVOLUTION OF LANGUAGE


What Makes Honors Students Honorable?


Matthew R. Silliman

Decent grades qualify any student to join the honors program, but what does it take to become an honors student in a deeper sense, worthy of some particular academic honor beyond receiving high grades? I recently asked this question of an honors seminar; I offer this distillation of our collective thoughts for further discussion, and as a challenge to those who consider themselves honors students at MCLA, or who hope someday to become one.

Accomplished honors students are:

· Fearlessly communicative. They express themselves willingly, listen carefully to what others have to say, and listen especially for reasons to think that their views may be partial or wrong. This quality combines intellectual honesty and courage with an active generosity of attention toward others and curiosity about the world.

· Active readers of books, other media, other people, and the world. Honors students are interested in many things: big questions, different walks of life, new experiences, new knowledge. They appreciate the process of learning itself, not just its products or rewards, and take it beyond the classroom, accepting every possible invitation to hear speakers, attend performances and films, go on hikes, and participate in a wide array of educational experiences.

· Thoughtful persons. Not only are they dependable students, friends, and colleagues who keep their commitments; they also strive for a deeper integrity between what they think and say, how they feel, and how they act. Their commitment to the principles in this list, for example, is not merely rhetorical, but steadfast and genuine.

· Critical thinkers. Honors students avoid the temptation to be overly critical (in the pejorative sense); instead, they are curious and respectful toward all sources of knowledge. At the same time, they accept no important knowledge claim uncritically or on mere authority, but seek both corroborative evidence for and respectful challenges to any proposition, including their own most cherished opinions.

· Creative risk-takers. More likely to take a course because it sounds fascinating or challenging than because it fulfills a requirement, honors students reach beyond their comfort zones and take risks for the sake of learning. They seek a balance between academic and nonacademic pursuits, often supplementing their studies with the creative arts, physical activities, and other productive, fulfilling social pursuits.

· Committed writers. Inclined to process and articulate what they experience and learn, accomplished honors students are likely to write regularly for themselves (and to friends and family) as well as for their classes, as an active means of consolidating and advancing the learning process. In formal writing contexts, they spend more time researching, organizing, and editing than simply drafting.

· Members of a learned community. Honors students seek to belong to a socially engaged, politically aware (and tolerant), and expansively inclusive group of intellectuals.

Matthew R. Silliman teaches philosophy and co-directs the Honors Program at MCLA.

~~~

You Want Me to What? Research, Graduate School, and a Real Life

Gerol Petruzella

(The following is the text of the author’s keynote address to the sixth annual undergraduate research conference at MCLA.)

The title of these remarks, “You Want Me to What?”, was my rhetorical response the first time someone suggested that I consider doing graduate research. It was also my response at various points during my graduate life, as I came upon new and surprising aspects of what I’d gotten myself into. I make this question the focus of my remarks today because I am trying to do two distinct things, and it covers them both: first, to give some anecdotal evidence about the experience of graduate research and its impact; and second, to talk about my own research, and how it has developed from my training in graduate school.

I’m a product of public education. After graduating from the Pittsfield Public School system, I attended Berkshire Community College for two years, then transferred to MCLA in 1999 as a junior. My interest was in ancient languages, and so I became a Philosophy major, focusing on ancient Greek philosophy. Thanks to the reciprocal compact among Berkshire County’s institutions of higher education, I was able to take two semesters of ancient Greek at Williams my senior year. As a senior, I participated in the Philosophy Department’s Mini-Conference with no small degree of eager trepidation. Throughout my undergraduate years, I had never really given much thought to what would come after graduation. But by the end of my first year here, I’d begun to realize that I wasn’t quite ready to leave the academic world behind. A large part of this realization was simply that I saw how much more there was to learn in my fields of interest. I felt as though I had seen a banquet spread out in front of me, but had only just gotten through the salad course! As I saw graduation approaching, I realized that my undergraduate work had prepared me in many ways for the rest of my life; the one thing for which it had utterly failed to prepare me was leaving it.

With advice from professors, friends and family, I decided to enter a Ph.D. program in philosophy. This decision felt good. I had a picture in my mind of what grad school would be like, and it was exactly where I saw myself: in classes with other people who cared as much as I did about the arcana of ancient Greek philosophy and language, writing papers, debating, attending lectures by world-famous scholars. But, after all, I was a philosopher (or at least a student of philosophy), and simply feeling good wasn’t sufficient reason to justify such a monumental decision. Did it make sense? Was it worth it – the expense, the effort, the time? What did I hope to achieve? Among all the bits and pieces of information I’d collected about grad school, a common thread was the intensity of study. Grad school is a sort of apprenticeship: you are not simply ‘the student’, learning from ‘the professor’. You are a professional-in-training, whether your field is scientific, academic, or business-related; and you are evaluated more on the original applications of your knowledge than on the accumulation of that knowledge. In the summer after I graduated from MCLA, I wondered at the wisdom of organizing the next several years of my life around an expensive apprenticeship, at the end of which I would be an expert in … what, exactly? Aristotle, Plutarch, Alexander of Aphrodisias, perhaps Xenophon’s Memorabilia…? When so many of my peers were entering careers, college education in hand, at the age of 21, was I closing myself off from the ‘real world’, simply for the self-indulgent pleasure of study? As I faced the impending prospect of graduate study, that small cynical part of my mind kept asking, “You want me to what?” And I found I didn’t have an easy answer.

I entered the Ph.D. program at the University at Buffalo in September 2001. I didn’t know it, but I was about to learn my first lesson in balancing graduate work with my life in the world. On Tuesday, September 11, my seminar on Aristotle had met only once so far, but we were about 50 pages into the Nicomachean Ethics, and already planning our thesis topics. We met in the grad student lounge before class to hang out and ‘talk shop’. Then Judy, our department secretary, told us to turn on the television. Crowding around the 13-inch black-and-white screen, we saw the fall of the Twin Towers, and felt the boundaries of our world suddenly expand far, far beyond the walls of our seminar room. In the face of such a world-changing tragedy, what in the world was the relevance of what I was doing here? Yet in the weeks and months that followed, I saw first-hand how our discipline, so often criticized for ‘ivory tower’ disconnectedness, was suddenly at the forefront of the most relevant issues and events in our world. As world leaders debated the ethics of retaliation, of violence and non-violence, of pre-emptive war, policy makers and the public turned to philosophers for clarification, explanation, even direction. My epistemology professor had come to Buffalo from West Point; he offered presentations to the university community on pre-emptive war and torture. Another department member, who taught courses in ontology and Husserl, won a multi-million-dollar European Union grant for research on data mining, and was invited to speak on the origins of terrorism to government panels in Paris, Kyoto, and Leipzig. These were my mentors, my colleagues. I worked with them on a daily basis. We ate General Tso’s chicken together at the Chinese restaurant down the road from campus. And here they were, directly involved with perhaps the defining world events of a generation. And they were not setting aside their ‘academic’ work to address contemporary events: they were doing philosophy, using precisely the knowledge and skills of their research and teaching to be relevant and effective participants in the ‘real world’.

And so I eventually came to refine my understanding of the relevance of graduate study to my life. I realized that I had to make something of my studies. If I waited for a seminar or thesis topic to appear that somehow made ancient philosophy relevant to my present-day life and society, I would be waiting while others were doing. This was a key realization for me, and it came after I had taken all my required courses and was ready to write my topic proposal: the 50-page document outlining the dissertation I hoped eventually to write. I was enthusiastic! I had my new-found insight! I spent a semester writing it, and gave it to my advisor for approval.

He rejected it. He told me not simply to re-write it, but to re-focus my whole project.

You want me to what?

I did it, of course, after the requisite couple of weeks of despair. And, by the way, if there’s anything useful I can tell you about research, graduate-level or otherwise, it is this: do not, under any circumstances, measure the value of your work by the success or failure of any single thesis or project. In grad school, as in life, everyone gets intimately acquainted with failure. Being successful in research must include the ability to deal with these situations effectively. Another very common frustration in grad school: coming up with a fantastic, revolutionary, ground-breaking idea for a paper… only to find out, as you begin researching the literature, that this very same fantastic, revolutionary, ground-breaking idea was proposed fifteen years ago by one of the leading experts in your field. What defines success in situations like these? I see three distinct components. First, being flexible: adapting the direction of your research to new information (even when that information is negative). Next, being self-confident: realizing that the rejection of your proposal is not a comment on your abilities, or your innate suitability for graduate work. Last, being sneaky: that is, finding ways to continue to pursue your own research program, even while adapting your work to your advisor’s advice and direction.

Let me illustrate these three points from my own experience. First, some background about my research. One of my primary areas of interest is a type of ethical thought common to most ancient philosophers, called eudaimonistic ethics. In modern discussions of morality, we tend to assume that what is ‘right’ is an essentially different sort of thing from what is ‘useful’ or ‘beneficial’. This distinction is where so many of our moral debates arise: embryonic stem cell research is considered (at least potentially) an extremely useful type of research, but someone can admit that and still call it morally wrong. In contrast to such a stark division, philosophers like Aristotle start from the premise that the ‘morally right’ and the ‘beneficial’ are inter-related, rather than separate or conflicting.

In my original topic proposal, I argued that a particularly important part of Aristotle’s ethical writing seemed incoherent to modern scholars because of a lack of clarity in translating a certain class of words from ancient Greek into the modern languages. Aristotle generally argues that leading an ethical life is possible no matter what your life circumstances; it is within the power of absolutely anyone at any time. (So there are no excuses.) However, there are also central passages indicating that he considers material prosperity essential for true success in life. Now, the Greek term behind both ‘ethical life’ and ‘successful life’ is eudaimonia, a word with many shades of meaning that is notoriously opaque to translators. And of course, an accurate and precise meaning for a crucial term in an argument is a prerequisite for any meaningful discourse, philosophical or not. In my paper, I proposed a course of research that would include linguistic analysis of this Greek word-group, not just in Aristotle’s writings but in other ancient Greek texts as well, from the earliest documents through the late Hellenistic period. I had already read enough to suppose that, given the way the Greeks actually used these words, the supposed incoherence in Aristotle arose from the translation and did not represent a problem in the philosophy itself. I had compiled a bibliography that would be the basis of my research, had a clear and well-articulated thesis, and had laid out a definite program of research. My preliminary work was confirming my thesis.

At this point you may be wondering: so why was my proposal rejected? Well, remember what I said earlier about finding that one’s idea has already been proposed by someone else? In my case, my idea had been proposed in the 1990s by a pre-eminent scholar of ancient philosophy. She had suggested that the incoherence in Aristotle might be due to the translation of this term. Interestingly enough, she then proceeded to reject this explanation on other grounds. My advisor suggested that, since the idea had not only been proposed already but had then been rejected as false as well, it wasn’t exactly the best choice for my dissertation.

And so I learned the first necessary part of graduate research: being flexible. In consultation with my advisor, I broadened my research program to include not only Aristotle but also Socrates, Plato, and the Stoic philosophers. Instead of a narrow focus on a particular word-group, my revised project dealt with the concept of ‘external goods’ (physical and mental health, material prosperity, social and family stability) and how all these factors influence the Greeks’, and our own, ideas of ethical living. I adapted my research goals to fit the current state of scholarship in the field.

When it comes to the second aspect of success I mentioned earlier – self-confidence – I have to admit that I failed miserably. Through the entire latter half of my time in grad school, I was wracked with self-doubt. I felt like a fraud: surely my work was derivative and unoriginal, and the department was continuing to tolerate my presence only because of the tuition I was paying. I had invested too much of myself in my original thesis, and I crumpled when it was rejected. It was a long, difficult process for me to realize that the rejection of my proposal was not a comment on my abilities, or my suitability for graduate work.

As for being sneaky: I accepted that my dissertation work could not have the focus I had originally hoped it would. But I remained unconvinced that my original thesis wasn’t viable, and I wanted to pursue that line of research in some capacity, to satisfy my own curiosity about it. And so, as I worked on my new dissertation project over the next three years, I also pursued my eudaimonia research on the side. In 2005, I turned that rejected 50-page proposal into a Master’s thesis for my M.A. in the Classics department. The next year I had a paper based on this research accepted for presentation at an international conference.

So now, here I am. I’ve made it through graduate school, and my junk mail now comes addressed to “Doctor Gary Petrozulla.” What have I gained? In a very interesting way, my ‘official’ studies have dovetailed with the knowledge I’ve gained from my life experiences. I no longer doubt the value of my graduate school training, or its relevance, even for a field of study like ancient Greek philosophy. Today I’m continuing my research into both eudaimonia and external goods; and in doing so, I’m finding fascinating areas of overlap with research in other fields, such as the work of Mihaly Csikszentmihalyi and other psychologists on the experience of optimal mental states. I’ve also expanded my linguistic research, studying connections and correlations in philosophical terminology between ancient Greek and Sanskrit texts. And I’m finding more and more opportunities to see the ideas I read about in my research put into practice. What is eudaimonia – a successful life, a flourishing life? I try to answer that question in what I write, but also in how I live. Higher education is a great place to gain not only knowledge, but the wisdom to use your knowledge.

My studies within ancient philosophy have focused on what thinkers in the founding period of Western thought considered to be the requirements for leading a flourishing life. To a great extent, the ideals we value in our contemporary understanding of society rely on premises shared with Plato, Aristotle, and their successors. Inasmuch as a person’s happiness is influenced by the society he inhabits, understanding this connection is one of my primary research interests. And since ethics is the philosophical discipline concerned not only with the life of the mind, but also with active engagement in the life of society, there is no topic more characteristic of, nor more essential to, my chosen field than this.

When I was first confronted with the suggestion to pursue graduate-level research, my response was “You want me to what?” Why should I commit yet another large chunk of my life to study? My own answer came through both my studies and my experiences. As I close, let me ask you this: Do you see your work here today, and your research in general, as a real part of your life, of the things you care about? Or is it something separate, something disconnected, that you’ll quickly leave behind once you’ve got the grade, fulfilled the requirement, or graduated? I hope it’s the former. Because as far as I can tell, perhaps the most important aspect of the liberal arts tradition is that its goal is the complete development of a human life through education. To quote one early 20th-century educator, “The liberal arts…teach one how to live; they train the faculties and bring them to perfection; they enable a person to rise above his material environment to live an intellectual, a rational, and therefore a free life in gaining truth.” Sounds like philosophy to me!

Gerol Petruzella is an MCLA Philosophy Department alumnus, philosophy scholar, and teacher.

~~~

Hume on Miracles

Jessica Dennis

One chapter of David Hume’s An Enquiry Concerning Human Understanding is devoted to refuting the notion of miracles. The chapter “Of Miracles” is interesting because much of Hume’s argument against miracles appears not to agree with his other arguments about the nature of human knowledge. This apparent contradiction seems to discredit Hume and his ideas. However, a closer examination of Hume as a skeptic and an author reveals that “Of Miracles” is not as far out of line with the rest of his thinking as one might assume at first glance.

In the Enquiry, Hume argues that in their everyday lives, people are guided by custom, not by reason. He asserts that people experience connections between objects and events; with enough exposure to the same event, people come to expect that connection by custom. For example, the first time a person encounters a flame, he also notices that the fire gives off heat. The same person will experience that connection many times over the course of his life, and he will expect it to continue based on custom.

Custom relies on the assumption that there is continuity between the past and the present. People make inferences based on their past experiences, such as: if I light a match, the flame will give off heat. These inferences are not always epistemically strong, but they are very often useful in daily life. By making inferences based on past experience and observations, people create a guide by which they can make decisions and act in the real world. Inferences allow us to live our lives guided by custom.

In the chapter “Of Miracles,” Hume paints a somewhat contrary picture of human knowledge and reason. He defines a miracle as a violation of the laws of nature; for us to consider something a miracle, there must be a uniform experience contrary to that miraculous event. According to Hume, testimony of a miracle becomes credible only if the falsehood of that testimony would be more astounding than the miracle itself.
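
Later commentators often restate this maxim in explicitly probabilistic terms. The short formalization below is offered only as a modern gloss, not as Hume’s own notation: writing T for the testimony and M for the miracle it reports, we should believe M on the strength of T only when a false testimony would itself be the greater improbability.

\[
  % A Bayesian gloss on Hume's maxim (a modern reconstruction, not Hume's notation):
  % believe the miracle M on testimony T only when a lying or mistaken witness
  % is itself less probable than the miracle.
  P(T \mid \neg M)\,P(\neg M) \;<\; P(T \mid M)\,P(M)
\]

By Bayes’ theorem, this inequality holds exactly when \(P(M \mid T) > 1/2\), that is, when the miracle is more probable than not, given the testimony.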

Hume presents four reasons why the evidence for miracles is lacking. The first is that no miracle has ever been attested by a sufficient number of trustworthy witnesses. The second is that when people encounter the unknown, they tend to relate it to the known; in human thought, what is known or usual also seems the most plausible. The third is that claims about miracles often originate among the uneducated or the sheltered. The fourth is that every miracle claimed in the past has met with some form of opposing testimony.

The reasoning in “Of Miracles” does not seem to fit with the rest of Hume’s work. In the Enquiry, Hume argues that the inner workings of the universe are hidden from human view. We have limited knowledge of all the factors that influence an event, which means that we can never know the true probability of an event occurring. Hume insists that we must nonetheless strive to assess the probability of even the most unusual event. Of course, this also applies to the probability of a miraculous event occurring.

The question then remains: why would Hume include “Of Miracles” if it is so inconsistent with his other written work? I believe the answer resides in his insistence that no good can come from excessive skepticism. He believes that a measure of skeptical thought rids us of prejudices and preformed notions about the world, but that in order to function in day-to-day life we must be moderate in our skepticism.

In addition, at the end of the chapter, Hume concedes that there may be miracles. He objects to the lack of proof offered for historical miracles, and he chafes at the idea of those miracles being used as the foundation of major world religions. If a person believes in something without applying any reason, she is left with only her faith; and that faith leads her to believe in something contrary to all of her life experience, which Hume finds troubling.

Instead, Hume encourages his readers to use skepticism as a tool for enlightenment. He says that as people we must acknowledge our limitations. However, those limitations should not stop us from using our power of observation to come as close to the truth as possible. Hume concedes that miracles are (barely) possible, but that does not mean that we should believe everything we hear. 

Jessica Dennis is a student at MCLA.

~~~

A Baboon’s Role in the Evolution of Language

Carolyn Cook

Many people find it difficult to understand how humans acquired such remarkably sophisticated language, given that our closest evolutionary cousins are comparatively inarticulate and non-linguistic. However, studies of baboon social behavior have given rise to the hypothesis that the demands of baboon-like social life were a precursor of human cognition.

Humans routinely classify others according to both their individual attributes, such as social status or wealth, and their membership in higher-order groups, such as families or castes. They also recognize that people’s individual attributes may be influenced and regulated by their group affiliations. The social order of baboons is similar. Baboons recognize that a dominance hierarchy can be subdivided into family groups. They respond more strongly to call sequences mimicking dominance-rank reversals between families than within families, indicating that they classify others simultaneously according to both individual rank and kinship. The selective pressures imposed by such complex societies may have favored cognitive skills that constitute an evolutionary precursor to some components of human cognition. (Bergman et al.)

Baboons are Old World monkeys that shared a common ancestor with today’s humans about 36 million years ago, yet their social knowledge shares several properties with human language. That knowledge is representational: when a baboon hears a vocalization, it acquires specific information about an interaction between specific individuals. Baboons recognize that vocalizations follow certain rules of directionality (for example, screams are given only by subordinates to dominants). When two baboons produce a sequence of calls, listeners interpret the sequence in a manner that resembles the way we interpret sentences, both in the information acquired and in the manner of its construction. Baboons acquire propositional information by combining their knowledge of call types, callers, and the callers’ places in a social network, and by assuming a causal relation between one animal’s vocalizations and another’s. (Worden; Seyfarth and Cheney)

Like baboons, our ancestors evidently lived in groups with intricate networks of relationships that were simultaneously competitive and cooperative. Before the emergence of language, hominids assigned meaning to other individuals’ calls and extracted rule-governed, propositional information from them. Human language, then, may have evolved from such primitive communication under strong selective pressure on our ancestors to communicate their thoughts. (Hespos and Spelke)

References

T.J. Bergman, J.C. Beehner, D.L. Cheney and R.M. Seyfarth, Hierarchical classification by rank and kinship in baboons, Science 302 (2003), pp. 1234–1236.

R. Worden, The evolution of language from social intelligence. In: J.R. Hurford et al., Editors, Approaches to the Evolution of Language, Cambridge University Press (1998), pp. 148–168.

R.M. Seyfarth and D.L. Cheney, The structure of social knowledge in monkeys. In: F. de Waal and P. Tyack, Editors, Animal Social Complexity: Intelligence, Culture, and Individualized Societies, Harvard University Press (2003), pp. 207–229.

S.J. Hespos and E. Spelke, Conceptual precursors to language, Nature 430 (2004), pp. 453–456.

Carolyn Cook is a student at MCLA.
