The Identity of Psychiatry and the Challenge of Mad Activism: Rethinking the Clinical Encounter

[Introduction to an essay currently in press with the Journal of Medicine & Philosophy]

Psychiatry has an identity in the sense that it is constituted by certain understandings of what it is and what it is for. The key element in this identity is that psychiatry is a medical speciality. During the early years of their training, medical doctors make a choice about the speciality they want to pursue. Psychiatry is one of them, as are ophthalmology, cardiology, gynaecology, and paediatrics. Modern medical specialities share some fundamental features: they treat conditions, disorders, or diseases; they aspire to be evidence-based in the care and treatments they offer; they are grounded in basic sciences such as physiology, anatomy, histology, and biochemistry; and they employ technology in investigations, research, and development of treatments. These features characterise modern medical specialities even as physicians are increasingly framing their work in ways that take account of the whole person, recognising conflicting values and their implications for diagnosis and treatment, and acknowledging the role of the arts and humanities in medical education and practice (see, for example, Cox, Campbell, and Fulford 2007; Fulford, van Staden, and Crisp 2013; Cook 2010; and McManus 1995).

Psychiatry differentiates itself from other medical specialities by the conditions that it treats: mental health conditions or disorders, to be contrasted with physical health conditions or disorders. Its subject matter, disturbances of the mind and their implications, raises certain complexities that, in the extreme, are sometimes taken to suggest that psychiatry’s positioning as a medical speciality is suspect; these include the normative nature of psychiatric judgements, the explanatory limitations of psychiatric theories, and the classificatory inaccuracies that beset the discipline.

There are significant, ongoing debates in these three areas that do not, at present, appear to be nearing resolution. But these debates are themselves superseded by a foundational challenge to psychiatry’s identity as a medical speciality, a challenge that emanates from particular approaches in mental health activism. These approaches, which I will be referring to as Mad activism, reject the language of ‘mental illness’ and ‘mental disorder’, and with it the assumption that people have a condition that requires treatment. The idea that medicine treats conditions, disorders, or diseases is at the heart of medical practice and theory, and this includes psychiatry in so far as it wishes to understand itself as a branch of medicine. In rejecting the premise that people ‘have’ a ‘condition’, Mad activism is issuing a challenge to psychiatry’s identity as a medical speciality.
In this paper I examine how psychiatry might accommodate the challenge of Mad activism in the context of the clinical encounter.


Madness & Society: Pathways to Reconciliation


On the 10th of July 2019 I delivered the Annual Lecture of the Lived Experiences of Distress Research Group at the London South Bank University. The title of the talk was Madness & Society: Pathways to Reconciliation.

Thank you to Professor Paula Reavey for the invitation, and thank you to Seth Hunter for the introduction.

The talk explored three main questions:

  1. What is reconciliation?

  2. What are the challenges to societal reconciliation with Mad activism?

  3. What can be done about these challenges?

Click on the following links for:

Transcript of the talk (pdf)

Audio recording of the event

Slides (PowerPoint)

 

Aimless in Australia


I was woken up at 6.34 a.m. by the sound of Chinese chatter outside my door. Room 407 was right opposite the lift and in my immediate post-waking stupor, the repeated ding-dongs and the upward and downward inflections of Mandarin amounted to a form of torture. The Great Southern Hotel where I had been staying for the past two nights was in the heart of Sydney’s Central Business District, right on the edge of China Town, and two hundred metres from Central Station, Sydney’s main transport hub. A good location no doubt, yet it was a hotel of which only one of the adjectives in its self-appointed title was true; there was nothing great about the Great Southern Hotel, or perhaps nothing great anymore. Built in 1858 and extended to seven floors in 1903, it sported an impressive Art Deco façade and a marble-laden lobby. It stood incongruously amid the eateries of China Town, surrounded by modern, ugly glass towers. Even though the rooms of the hotel had clearly been renovated recently, the renovations must have been conducted on a limited budget, for why else would the rooms fail to be either functional or beautiful? The carpet was ugly; the water pressure was non-existent; the A.C. had two settings, sweltering hot or freezing cold; the T.V. was untunable; the mattress broke your back; the blanket was covered in hair; and the fridge – whose only contents were two small packets of soured milk – stank. It reminded me of the dodgy bed and breakfasts around Sussex Gardens in Paddington. Back in 2003, during my exile in Hull, I would spend a couple of nights at one of those places on my weekend escapes to London. These were establishments that were not loved by anyone and, accordingly, did not love anyone back. You do not need to believe in Feng Shui to know that a building can repulse you, or be repulsed by you.

Good thing, then, that I was leaving. Yes, that was my last morning at the Great Southern Hotel and in Sydney. And there was no better day to leave than this. Last night, the weather had taken a turn; the sunny and pleasantly warm winter days of the previous week gave way to a daring wind and an increasingly confident rain. As the temperature dropped, my winter coat, once again, came to the forefront of my wardrobe. Yes, it was the perfect time to leave New South Wales and head to Queensland, the state famous for its sunshine, its national parks, its tropical beaches, its great reef, and its not-so-open-minded inhabitants (as the New South Welsh and the Victorians I had met in Sydney were quick to warn me). But it would be a lie if I were to claim that I had any reason to go to Queensland, or any grand plan. In fact, I had no personal reason to come to Australia, and had it not been for the invitation to speak at the seriously titled conference Culture, Cognition, and Mental Illness, it is unlikely I would have set foot on this continent.

I’ve never had a burning urge to go to Australia. It never struck me as a place I ought to visit before I’ve travelled in South America and East Asia to my satisfaction, and I haven’t yet. I have similar sentiments about Canada, a country that is so low on my list of travel priorities it is unlikely I will ever get to it. I’ve often wondered why I harbour these sentiments. To be sure, there is something unattractive about the New World nations owing to their often tarnished histories; perhaps distance has something to do with it, a point that definitely applies to Australia as I was to learn during the brutal experience of 22 hours of confinement in an economy seat; maybe there’s a personal prejudice lurking somewhere, a prejudice regularly stoked by the encounters I have had with a certain type of Australian in London. You could say that my travel consciousness of the world never really included Australia, a consciousness that, during high-school in Egypt in the 90s, was directed towards Europe.

In the late 90s and the first decade of the millennium I had my fill travelling in Europe. The first country I travelled to completely on my own was Germany in 1995, followed by Morocco in 1997, Spain in 1998, Norway in 1999, and California and Nevada in 2000. After moving to England in 2003, I made the best use of my new-found proximity to Europe to explore the continent, making no fewer than twenty-five visits to many of its countries. From 2006 onwards, my travel consciousness expanded markedly: China, South Africa, Mozambique, Cuba, Chile, Bolivia, Peru, Namibia, Lebanon, Swaziland, Lesotho. Yet, aside from a three-week visit to New Zealand in 2012 – also motivated by conference attendance – it never occurred to me to set foot in that part of the world. It’s not strange that my travel consciousness had developed in this way. When Egyptians travel, they invariably go to Europe, and in particular to the Northern and Western parts of Europe. I know very few Egyptians who have ventured beyond this region. It’s where my father cut his travelling teeth, and where I sharpened mine. And perhaps if I had not had the chance to really satisfy my European curiosity, I would not have ventured further either. But something more is going on: Egyptians have a specific idea of what travelling should be about. For many Egyptians, the idea of leaving Cairo to holiday, say, in New Delhi is absurd – why would you exchange one maniacal metropolis for another? And so is the idea of going ‘camping’ – a good holiday is defined by comfort, shopping, and a smattering of culture, and not by tents, cold oats, mosquito nets, and bush toilets.

Mohammed Abouelleil Rashed, Sydney 2018

(tbc someday)

Best of 2018 Philosophy List by Oxford University Press


Check out Oxford University Press’ list of articles chosen from across its journals to represent the ‘Best of 2018’.

My article In Defense of Madness: The Problem of Disability is included under the entries for the Journal of Medicine and Philosophy.

For other articles, I enjoyed reading Roger Scruton’s Why Beauty Matters in The Monist.

Madness & the Demand for Recognition


After four years of (almost) continuous work, I have finally completed my book:

Madness and the Demand for Recognition: A Philosophical Inquiry into Identity and Mental Health Activism.

You can find the book at the Oxford University Press website and at Amazon.com. A preview with the table of contents, foreword, preface, and introduction is here.

Madness is a complex and contested term. Through time and across cultures it has acquired many formulations: for some, madness is synonymous with unreason and violence, for others with creativity and subversion, elsewhere it is associated with spirits and spirituality. Among the different formulations, there is one in particular that has taken hold so deeply and systematically that it has become the default view in many communities around the world: the idea that madness is a disorder of the mind.

Contemporary developments in mental health activism pose a radical challenge to psychiatric and societal…


Public Mental Health Across Cultures: The Ethics of Primary Prevention of Depression, Focusing on the Dakhla Oasis of Egypt

(Introduction to a chapter I wrote with Rachel Bingham. It will be part of the volume ‘Mental Health as Public Health: Interdisciplinary Perspectives on the Ethics of Prevention’, edited by Kelso Cratsley and Jennifer Radden.)

 

P1000731

 

For over a decade there has been an active and ambitious movement concerned with reducing the “global burden” of mental disorders in low- and middle-income countries.[1] Global Mental Health, as its proponents call it, aims to close the “treatment gap”, which is defined as the percentage of individuals with serious mental disorders who do not receive any mental health care. According to one estimate, this amounts to 75%, rising in sub-Saharan Africa to 90% (Patel and Prince 2010, p. 1976). In response to this, the movement recommends the “scaling up” of services in these communities in order to develop effective care and treatment for those who are most in need. This recommendation, the movement states, is founded on two things: (1) a wealth of evidence that medications and psychosocial interventions can reduce the disability accrued in virtue of mental disorder, and (2) the claim that closing the treatment gap restores the human rights of individuals, as described and recommended in the Convention on the Rights of Persons with Disabilities (Patel et al. 2011; Patel and Saxena 2014).

In addition to its concern with treatment, the movement has identified prevention among the “grand challenges” for mental and neurological disorders. It states, among its key goals, the need to identify the “root causes, risk and protective factors” for mental disorders such as “modifiable social and biological risk factors across the life course”. Using this knowledge, the goal is to “advance prevention and implementation of early interventions” by supporting “community environments that promote physical and mental well-being throughout life” and developing “an evidence-based set of primary prevention interventions” (Collins et al. 2011, p. 29). Similar objectives had been raised several years earlier by the World Health Organisation, which identified evidence-based prevention of mental disorders as a “public health priority” (WHO 2004, p. 15).

Soon after its inception, the movement of Global Mental Health met sustained and substantial critique.[2] Essentially, critics argue that psychiatry has significant problems in the very contexts where it originated and is not a success story that can be enthusiastically transported to the rest of the world.[3] The conceptual, scientific, and anthropological limitations of psychiatry are well known and critics appeal to them in making their case. Conceptually, psychiatry is unable to define ‘mental disorder’, with ongoing debates on the role of values versus facts in distinguishing disorder from its absence.[4] Scientifically, the lack of discrete biological causes, or biomarkers, for major psychiatric conditions has resulted in the reliance on phenomenological and symptomatic classifications. This has led to difficulties in defining with precision the boundaries between disorders, and accusations that psychiatric categories lack validity.[5] Anthropologically, while the categories themselves are associated with tangible and often severe distress and disability, they remain culturally constructed in that they reflect a ‘Western’ cultural psychology (including conceptions of the person and overall worldview).[6] Given this, critics see Global Mental Health as a top-down imposition of ‘Western’ norms of health and ideas of illness on the ‘Global South’, suppressing long-standing cultural ideas and healing practices that reflect entirely different worldviews. It obscures conditions of extreme poverty that exist throughout many non-Western countries, and which underpin the expressions of distress that Global Mental Health now wants to medicalise. On the whole, Global Mental Health, in the words of the critics, becomes a form of “medical imperialism” (Summerfield 2008, p. 992) that “reproduces (neo)colonial power relationships” (Mills and Davar 2016, p. 443).

We acknowledge the conceptual, scientific, and anthropological critiques of psychiatry and have written about them elsewhere.[7] At the same time we do not wish to speculate about and judge the intention of Global Mental Health, or whether it’s a ‘neo-colonial’ enterprise that serves the interests of pharmaceutical companies. Our concern is to proceed at face-value by examining a particular kind of interaction: on one hand, we have scientifically grounded public mental health prevention campaigns that seek to reduce the incidence of mental disorders in low- and middle-income countries; on the other hand, we have the cultural contexts in these countries where there already are entirely different frameworks for categorising, understanding, treating, and preventing various forms of distress and disability. What sort of ethical principles ought to regulate this interaction, where prevention of ‘mental disorders’ is at stake?

The meaning of prevention with which we are concerned in this chapter is primary, universal prevention, to be distinguished from mental health promotion, from secondary prevention, and from primary prevention that is of a selective or indicated nature. Primary prevention “aims to avert or avoid the incidence of new cases” and is therefore concerned with reducing risk factors for mental disorders (Radden 2018, p. 127, see also WHO 2004, p. 16). Secondary prevention, on the other hand, “occurs once diagnosable disease is present [and] might thus be seen as a form of treatment” (Radden 2018, p. 127). In contrast to prevention, mental health promotion “employs strategies for strengthening protective factors to enhance the social and emotional well-being and quality of life of the general population” (Peterson et al. 2014, p. 3). It is not directly concerned with risk factors for disorders but with positive mental health. With universal prevention the entire population is within view of the interventions, whereas with selective and indicated prevention, the target groups are, respectively, those “whose risk for developing the mental health disorder is significantly higher than average” and those who have “minimal but detectable signs or symptoms” (Evans et al. 2012, p. 5). While there is overlap among these various efforts, we focus on primary, universal prevention. Our decision to do so stems from the fact that such interventions, in being wholly anticipatory and population-wide, put marked, and perhaps even unique, ethical pressure on the encounter between the cultural context (and existing ideas on risk and prevention of distress and disability) and the biomedical public mental health approach.

It is helpful for ethical analysis to begin with a sufficiently detailed understanding of the contexts and interactions that are the subject of analysis. With these details at hand, what matters in a particular interaction is brought to light and the ethical issues become easier to grasp. Accordingly, we begin in section 2 with an ethnographic account of the primary prevention of ‘depression’ in the Dakhla Oasis of Egypt from the perspective of the community. The Dakhla Oasis is a rural community where there is no psychiatric presence or modern biomedical concepts yet – like most communities around the world – there is no shortage of mental-health related distress and disability. It is a paradigmatic example of the kind of community where Global Mental Health would want to action its campaigns. In section 3 we move on to the perspective of a Public Health Team concerned with preventing depression in light of scientific and evidence-based risk factors and preventive strategies. Section 4 outlines the conflict between the perspective of the Team and that of the community. Given this conflict, sections 5 and 6 discuss the ethical issues that arise in the case of two levels of intervention: family and social relationships, and individual interventions.

PDF

Notes:

[1] See Horton (2007), Prince et al. (2007), and Saxena et al. (2007).

[2] Most recently there was vocal opposition to a ‘Global Ministerial Mental Health Summit’ that was held on the 9th and 10th of October 2018 in London. The National Survivor and User Network (U.K.) sent an open letter to the organisers of the summit, objecting to the premise, approach, and intention of Global Mental Health.

[3] See Summerfield (2008, 2012, 2013), Mills and Davar (2016), Fernando (2011), and Whitley (2015).

[4] For debates on the definition of the concept of mental disorder consult Boorse (2011), Bolton (2008, 2013), Varga (2015), and Kingma (2013).

[5] For discussions of the (in)validity of psychiatric categories see Kinderman et al. (2013), Horwitz and Wakefield (2007), and Timimi (2014). Often, the problem is framed by asking whether mental disorders are natural kinds (see Jablensky 2016, Kendell and Jablensky 2003, Zachar 2015, and Simon 2011).

[6] See, for example, Fabrega (1989), Littlewood (1990), and Rashed (2013a).

[7] For example: Rashed and Bingham (2014), Rashed (2013b), and Bingham and Banner (2014).

Jennifer Radden: “Rethinking disease in psychiatry: Disease models and the medical imaginary”

Abstract

The first decades of the 21st century have seen increasing dissatisfaction with the diagnostic psychiatry of the American Psychiatric Association’s Diagnostic and Statistical Manuals (DSMs). The aim of the present discussion is to identify one source of these problems within the history of medicine, using melancholy and syphilis as examples. Coinciding with the 19th‐century beginnings of scientific psychiatry, advances that proved transformative and valuable for much of the rest of medicine arguably engendered, and served to entrench, mistaken and misleading conceptions of psychiatric disorder. Powerful analogical reasoning based on what is assumed, projected, and expected (and thus occupying the realm of the medical imaginary) fostered inappropriate models for psychiatry. Dissatisfaction with DSM systems has given rise to alternative models, exemplified here in (i) network models of disorder calling for revision of ideas about causal explanation, and (ii) the critiques of categorical analyses associated with recently revised domain criteria for research. Such alternatives reflect welcome, if belated, revisions.

Click here for paper

 

 

On the idea of Mad Culture (and a comparison with Deaf Culture)

  1. WHAT IS CULTURE?

 Part of the difficulty in making sense of the notion of Mad culture is the meaning of culture as such. The term ‘culture’ refers to a range of related concepts which are not always sufficiently distinguished from each other in various theoretical discussions. There are, at least, three concepts of culture (see Rashed 2013a and 2013b):

  • Culture as an activity: the “tending of natural growth” (Williams 1958, p. xvi); “to inhabit a town or district, to cultivate, tend, or till the land, to keep and breed animals” (Jackson 1996, p. 16); to grow bacteria in a Petri-dish; to cultivate and refine one’s artistic and intellectual capacities – to become cultured. This final meaning – culture as intellectual refinement – lives today in the Culture section of newspapers.
  • Culture as an analytic concept in the social sciences: this is the concept of culture that we find, for example, in the academic discipline of anthropology. The academic concept of culture has evolved rapidly since its introduction by Edward Tylor in the late 19th century.[1] Today, ‘culture’ is used to refer to socially acquired and shared symbols, meanings, and significances that structure experience, behaviour, interpretation, and social interaction; culture “orients people in their ways of feeling, thinking, and being in the world” (Jenkins and Barrett 2004, p. 5; see Rashed 2013a, p. 4). As an analytic concept it enables researchers and theoreticians to account for the specific nature of, and the differences among, social phenomena and peoples’ subjective reports of their experiences. For example, a prolonged feeling of sadness can be explained by one person as the effect of a neurochemical imbalance, by another as the effect of malevolent spirits, and by another as a test of one’s faith: these differences can be accounted for through the concept of culture. (See Risjord (2012) for an account of various models of culture in the social sciences.)

When we refer to ‘culture’ in constructions such as Mad culture and Maori culture we are not appealing to either of the two concepts of culture just outlined. For what we intend is not an activity or an analytic concept but a thing. This brings us to the third concept of culture I want to outline and the one that features in political discussions on cultural rights.

  • Culture as a noun: this is the societal concept of culture; Will Kymlicka (1995, p. 76) defines it as follows:

a culture which provides its members with meaningful ways of life across the full range of human activities, including social, educational, religious, recreational, and economic life, encompassing both public and private spheres. These cultures tend to be territorially concentrated, and based on a shared language.

Similarly, Margalit and Halbertal (1994, pp. 497-498) understand the societal concept of culture “as a comprehensive way of life”, comprehensive in the sense that it covers crucial aspects of individuals’ lives such as occupations, the nature of relationships, a common language, traditions, history, and so on. Typical examples of societal cultures include Maori, French-Canadian, Ultra-Orthodox Jewish, Nubian, and Aboriginal Canadian cultures. All these groups have previously campaigned for cultural rights within the majorities in which they exist, such as the right to engage in certain practices or to ensure the propagation of their language or to protect their way of life.

To stave off the obvious objections to this final concept of culture I point out that there is no necessary implication here that a given societal culture is fixed in time – Nubian culture can change while remaining ‘Nubian’. Neither is there an implication that all members of the community agree on what is necessary and what is contingent in the definition of their culture, or on the extent of the importance of this belief or that practice. And neither is a societal culture hermetically sealed from the outside world: “there is no watertight boundary around a culture” is the way Mary Midgley (1991, p. 83) puts it. Indeed it is because there is no hermetic seal around a societal culture that it can change, thrive, or disintegrate in light of its contact with other communities. In proceeding, then, I consider the key aspects of a societal culture to be that it is enduring (it existed long before me), shared (there are many others who belong to it), and comprehensive (it provides for fundamental aspects of social life). In light of a societal culture’s appearance of independence, it can be looked upon as a ‘thing’ that one can relate to in various ways such as being part of it, alienated from it, rejected by it, or rejecting it. Can Madness constitute a culture in accordance with this concept?

2. CAN MADNESS CONSTITUTE A CULTURE? 

In the activist literature we find descriptions of elements of Mad culture, as the following excerpts indicate:

Is there such a thing as a Mad Culture? … Historically there has been a dependence on identifying Mad people only with psychiatric diagnosis, which assumes that all Mad experiences are about biology as if there wasn’t a whole wide world out there of Mad people with a wide range of experiences, stories, history, meanings, codes and ways of being with each other. Consider some of these basics when thinking about Madness and Mad experiences: We have all kinds of organized groups (political or peer) both provincially and nationally. We have produced tons and tons of stories and first person accounts of our experiences. We have courses about our Mad History. We have all kinds of art which expresses meaning – sometimes about our madness. We have our own special brand of jokes and humour. We have films produced about our experiences and interests. We have rights under law both Nationally and internationally. We have had many many parades and Mad Pride celebrations for decades now. (Costa 2015, p.4 – abridged, italics added)

As the italicised words indicate, this description of Mad culture recalls key aspects of culture: shared experiences, shared histories, codes of interaction and mutual understanding, social organisation, creative productions, cultural events. Many of these notions can be subsumed under the idea that Mad people have unique ways of looking at and experiencing the world:

Mad Culture is a celebration of the creativity of mad people, and pride in our unique way of looking at life, our internal world externalised and shared with others without shame, as a valid way of life. (Sen 2011, p.5)

When we talk about cultures, we are talking about Mad people as a people and equity-seeking group, not as an illness… As Mad people, we have unique ways of experiencing the world, making meaning, knowing and learning, developing communities, and creating cultures. These cultures are showcased and celebrated during Mad Pride (Mad Pride Hamilton).

A key component of culture is a shared language, and cultural communities are frequently identified as linguistic communities (e.g. the French-Canadians or the Inuit). A similar emphasis on language and shared understanding can also be found in accounts of Mad culture:

As Mad people we develop unique cultural practices: We use language in particular ways to identify ourselves (including the reclamation of words like crazy, mad, and nuts). We form new understandings of our experiences that differ from those of biomedical psychiatry. (deBei 2013, p. 8)

The experience of Madness produces unique behaviour and language that many Normals don’t understand but which make complete sense to many of us. (Costa 2015, p.4)

We can find a community in our shared experiences. We can find a culture in our shared creativity, our comedy and compassion. Sit in a room full of Nutters and one Normal, see how quickly the Normal is either controlling the conversation or outside of it. They do not share our understanding of the world, and here you can see evidence of our Culture, our Community. (Clare 2011, p. 16)

So, can madness constitute a culture? In the foregoing excerpts, activists certainly want to affirm this possibility. But the idea of Mad culture does not fit neatly with communities typically considered to be cultural communities. A typical cultural community, as outlined in section 1, tends to have shared language and practices, a geographic location or locations, a commitment to shared historical narrative(s), and offers for its members a comprehensive way of life. Compared to this, Mad culture appears quite atypical; for example, there is no shared language as such – references to ‘language’ in the previous quotes indicate the kind of private codes that tend to develop between friends who have known each other for many years, and not to a systematic medium of communication. People who identify as Mad, or who are diagnosed with ‘schizophrenia’ or ‘bipolar disorder’, come from all over the world and have no geographic location, no single language or a single shared history (the history of mental health activism in the English speaking world is bound to be different to that in South America). Further, Mad culture does not offer a comprehensive way of life in the same way that Aboriginal Canadian culture may. Mad people can and do form communities of course – Mad Pride and similar associations are a case in point – the question here, however, is whether these can be considered cultural communities.

Perhaps Quebeckers and Maoris are not suitable comparisons to Mad culture. Another community to examine, and which may be more analogous in so far as it also continues to fight medicalisation and disqualification, is Deaf culture. On visiting Gallaudet University in 1986 – a university for the education of deaf students – Oliver Sacks (1989, p. 127) remarked upon “an astonishing and moving experience”:

 I had never before seen an entire community of the deaf, nor had I quite realized (even though I knew this theoretically) that Sign might indeed be a complete language – a language equally suitable for making love or speeches, for flirtation or mathematics. I had to see philosophy and chemistry classes in Sign; I had to see the absolutely silent mathematics department at work; to see deaf bards, Sign poetry, on the campus, and the range and depth of the Gallaudet theatre; I had to see the wonderful social scene in the student bar, with hands flying in all directions as a hundred separate conversations proceeded – I had to see all this for myself before I could be moved from my previous “medical” view of deafness (as a “condition,” a deficit, that had to be treated) to a “cultural” view of the deaf as forming a community with a complete language and culture of its own.

In Sacks’ account, Sign language appears as a central component of Deaf culture – the core from which other cultural practices and attitudes arise. The centrality of Sign to the Deaf community is confirmed by writings on Deaf culture: the World Federation of the Deaf describes Deaf people as “a linguistic minority” who have “a common experience of life” manifesting in “Deaf culture”.[2] Acceptance of a deaf person into the Deaf community, they continue, “is strongly linked to competence in a signed language”. In Inside Deaf Culture, Padden and Humphries (2005, p. 1) note that even though the Deaf community does not possess typical markers of culture – religion, geographical space, clothing, diet – they do possess sign language(s), which play a “central role … in the everyday lives of the community”. The British Deaf Association describes Deaf people as a linguistic minority who have a “unique culture” evident in their history, tradition of visual story-telling, and the “flourishing of BSL in a range of art forms including drama, poetry, comedy and satire”.[3] Similarly, the Canadian Cultural Society of the Deaf and the American non-profit organisation Hands & Voices both describe Sign language as the core of Deaf cultural communities.[4] Sign language, then, is the crux around which a sense of community can arise. This community fosters awareness of being Deaf as a positive and not a deficit state; the deaf person is frequently described as the Seeing person (distinct from the Hearing person), emphasising the visual nature of Sign language and Deaf communication.[5] Deaf culture is also supported by the existence of institutions dedicated to Deaf people, such as schools, clubs, and churches. Finally, as a consequence of living in a world not always designed for them, and in the process of campaigning for their rights and the protection of their culture, Deaf people develop a sense of community and solidarity.

Even though Deaf culture differs from typical cultural communities, in its most developed form it does approach the ideal of offering its members “meaningful ways of life” across key human activities (Kymlicka 1995, p. 76). It may not be a comprehensive culture in the way that Ultra-Orthodox Jewish culture is, but its central importance to the life of some deaf people – arising in particular from learning and expressing oneself in Sign – suggests that it can be viewed as a cultural community.

If we compare Mad culture to Deaf culture we find many points of similarity. For example, like Deaf people, people who identify as Mad – at least in the English-speaking world – are united by a set of connected historical narratives, by opposition to ‘sanism’ and psychiatric coercion, and by phenomenologically related experiences (such as voices, unusual beliefs, and extremes of mood).[6] In addition, they share a tradition of producing distinctive art and literature and a concern with transforming negative perceptions in society surrounding mental health. But Mad people, unlike Deaf people, are not a linguistic community, and this does weaken the coherence of the idea that madness can constitute a culture. An alternative is to regard Mad people as forming associations within the broader cultural context in which they live, the very context they are trying to transform in such a way that allows them a better chance to thrive.

The comparisons drawn in this section cannot be the final word, as it is conceivable for different conceptions of societal culture and Mad culture to yield different conclusions. However, in what follows I shall argue that even if madness can constitute a culture, a consideration of the general justification for cultural rights leads us to social identity and not directly to culture as the key issue at stake.

 

Mohammed Abouelleil Rashed (2018)

Note: the above is an excerpt from Madness and the Demand for Recognition: A Philosophical Inquiry into Identity and Mental Health Activism (Oxford University Press, 2019).

***

[1] In Primitive Culture, Edward Tylor (1891, p. 1) provided the following definition: “culture or civilisation … is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of a society”.

[2] Online: https://wfdeaf.org/our-work/focus-areas/deaf-culture

[3] British Sign Language. Online: https://www.bda.org.uk/what-is-deaf-culture

[4] Online: http://www.deafculturecentre.ca/Public/Default.aspx?I=294. http://www.handsandvoices.org/comcon/articles/deafculture.htm

[5] Online: http://www.handsandvoices.org/comcon/articles/deafculture.htm

[6] Sanism: discrimination and prejudice against people perceived to have, or labelled as having, a mental disorder. The equivalent term in disability activism is ableism.

A History of Mental Health Advocacy & Activism (Beginnings to 1990s)


  1. Early advocacy and activism

The modern consumer/service-user/survivor movement is generally considered to have begun in the 1970s in the wake of the many civil rights movements that emerged at the time.[1] The Survivors’ History Group – a group founded in April 2005 and concerned with documenting the history of the movement – traces an earlier starting point.[2] The group sees affinity between contemporary activism and earlier attempts to fight stigma, discrimination and the poor treatment of individuals variously considered to be mad, insane and, since the dominance of the medical idiom, to suffer with mental illness.[3] On their website documenting Survivor history, the timeline begins with 1373, the year the Christian mystic Margery Kempe was born. Throughout her life, Margery experienced intense voices and visions of prophets, devils, and demons. Her unorthodox behaviour and beliefs upset the Church, the public, and her husband, and on a number of occasions resulted in her restraint and imprisonment. Margery wrote about her life in a book in which she recounted her spiritual experiences and the difficulties she had faced.[4]

The Survivors’ history website continues with several recorded instances of individuals mistreated on the grounds of insanity. But the first explicit evidence of collective action and advocacy in the UK appears in 1845 in the form of the Alleged Lunatics’ Friend Society: an organisation composed of individuals most of whom had been incarcerated in madhouses and subjected to degrading treatment (Hervey 1986). For around twenty years, the Society campaigned for the rights of patients, including the right to be involved in decisions pertaining to their care and confinement. In the US, around the same time, patients committed to a New York Lunatic Asylum produced a literary magazine – The Opal – published in ten volumes between 1851 and 1860. Although this publication is now seen to have painted a rather benign picture of asylum life, and to have allowed voice only to those patients who were deemed appropriate and self-censorial (Reiss 2004), glimpses of dissatisfaction and even of liberatory rhetoric emerge from some of the writing (Tenney 2006).

An important name in what can be considered early activism and advocacy is Elizabeth Packard. In 1860, Packard was committed to an insane asylum in Illinois by her husband, a strict Calvinist who could not tolerate Packard’s newly expressed liberal beliefs and her rejection of his religious views. At the time, state law gave husbands this power without the need for a public hearing. Upon her release, Packard campaigned successfully for a change in the law, henceforth requiring a jury trial for decisions to commit an individual to an asylum (Dain 1989, p. 9). Another important campaigner is Clifford Beers, an American ex-patient who published his autobiography A Mind That Found Itself in 1908. Beers’ autobiography documented the mistreatment he experienced at a number of institutions. The following year he founded the National Committee for Mental Hygiene (NCMH), an organisation that sought to improve conditions in asylums and the treatment of patients by working with reform-minded psychiatrists. The NCMH achieved limited success in this respect, and its subsequent efforts focused on mental health education, training, and public awareness campaigns in accordance with the then dominant concept of mental hygiene (Dain 1989, p. 6).

  2. 1900s–1950s: ‘Mental Hygiene’

On both sides of the Atlantic, mental health advocacy in the first few decades of the 20th century promoted a mental hygiene agenda.[5] Mental hygiene, an American concept, was understood as “the art of preserving the mind against all incidents and influences calculated to deteriorate its qualities, impair its energies, or derange its movements” (Rossi 1962). These “incidents and influences” were conceived broadly and included “exercise, rest, food, clothing and climate, the laws of breeding, the government of the passions, the sympathy with current emotions and opinions, the discipline of the intellect”, all of which had to be governed adequately to promote a healthy mind (ibid.). With such a broad list of human affairs under their purview, the mental hygienists had to fall back on a set of values by which the ‘healthy’ life-style was to be determined. These values, as argued by Davis (1938) and more recently by Crossley (2006), were those of the educated middle classes who promoted mental hygiene in accordance with a deeply ingrained ethic. For example, extra-marital sex was seen as a deviation and therefore a potential source of mental illness. Despite this conservative element, the discourse of mental hygiene was progressive, for its time, in a number of ways: first, it considered mental illness to arise from interactions among many factors, including the biological and the social, and hence to be responsive to improvements in the person’s environment; second, it fought stigma by arguing that mental illness is similar to physical illness and can be treated; third, it promoted the prevention of mental illness, in particular through paying attention to childhood development; and fourth, it argued for the importance of early detection and treatment (Crossley 2006, pp. 71-75).

In the US, Clifford Beers’ own group, the NCMH, continued to advance a mental hygiene agenda and, in 1950, merged with two other groups to form the National Association for Mental Health, a non-profit organisation that has existed since 2006 as Mental Health America.[6] In the UK, mental hygiene was promoted by three inter-war groups that campaigned for patient wellbeing and education of the public. These groups merged, in 1946, to form the National Association for Mental Health (NAMH), which later, in 1972, changed its name to Mind, the name under which it remains to this day as a well-known and influential charity.[7] In the late 50s, these two organisations continued to educate the public through various campaigns and publications, and were involved in training mental health professionals in accordance with hygienist principles. In addition, they were advocates for mental patients, campaigning for the government to improve commitment laws, and, in the UK, working with the government to instate the move from asylums to ‘care in the community’.

Even though the discourse of mental hygiene was dominant during these decades, the developments that were to come in the early 70s were already taking shape in the emerging discourse of civil rights. A good example of these developments in the UK is the National Council for Civil Liberties (NCCL), better known today as Liberty. Founded in 1934 in response to an aggressive police reaction to protestors during the “hunger marches”, the NCCL took on its first “mental health case” in 1947: a woman wrongly detained in a mental health institution for what appeared to be ‘moral’ rather than ‘medical’ reasons.[8] During the 50s, the NCCL campaigned vigorously for reform of mental health law to address this issue, and saw some positive developments in 1959 with the abolition of the problematic 1913 Mental Deficiency Act and the introduction of tribunals in which patients’ interests were represented.

  3. 1960s: The ‘Anti-psychiatrists’

During the 1960s criticism of mental health practices and theories was carried forward by a number of psychiatrists who came to be referred to as the ‘anti-psychiatrists’. Most famous among them were Thomas Szasz, R. D. Laing, and David Cooper. Szasz (1960) famously argued that mental illness is a myth that legitimizes state oppression (via the psychiatric enterprise) of those judged socially deviant and perceived to be a danger to themselves or others. Mental illnesses for Szasz are problems in living: morally and existentially significant problems relating to social interaction and to finding meaning and purpose in life. Laing (1965, 1967) considered the medical concept of schizophrenia to be a label applied to those whose behaviour seems incomprehensible, thereby permitting exercises of power. For Laing (1967, p. 106) the people so labelled are not so much experiencing a breakdown but a breakthrough: a state of ego-loss that permits a wider range of experiences and may culminate in a “new-ego” and an “existential rebirth”. These individuals require guidance and encouragement, and not the application of a psychiatric label that distorts and arrests this process. David Cooper (1967, 1978) considered ‘schizophrenia’ a revolt against alienating familial and social structures with the hope of finding a less-alienating, autonomous yet recognised existence. In Cooper’s (1978, p. 156) view, it is precisely this revolt that the ‘medical apparatus’, as an agent of the ‘State’, aims to suppress.

From the perspective of those individuals who have experienced psychiatric treatment and mental distress, the anti-psychiatrists of the 1960s were not activists but dissident mental health professionals. As will be noted in the following section, the mental patients’ liberation movement did not support the inclusion of sympathetic professionals within its ambit. Nevertheless, the ideas of Thomas Szasz, R. D. Laing, and David Cooper were frequently used by activists themselves to ground their critique of mental health institutions and the medical model. At the time, these ideas were radical if not revolutionary, and it is not surprising that they inspired activists engaged in civil rights struggles in the 1970s.

  4. The 1970s civil rights movements

Civil rights activism in mental health began through the work of a number of groups that came together in the late 60s and early 70s in the wake of the emerging successes and struggles of Black, Gay and women civil rights activists. In the UK, a notable group was the Mental Patients’ Union (1972), and in the US three groups were among the earliest organisers: Insane Liberation Front (1970), Mental Patients’ Liberation Front (1971), and Network Against Psychiatric Assault (1972).[9] An important difference between these groups and earlier ones that had also pursued a civil rights agenda, such as the NCCL, is that they, from the start or early on, excluded sympathetic mental-health professionals and were composed solely of patients and ex-patients. Judi Chamberlin (1990, p. 324), a key figure in the American movement, justified this exclusion as follows:

Among the major organising principles of [black, gay, women’s liberation movements] were self-definition and self-determination. Black people felt that white people could not truly understand their experiences … To mental patients who began to organise, these principles seemed equally valid. Their own perceptions about “mental illness” were diametrically opposed to those of the general public, and even more so to those of mental health professionals. It seemed sensible, therefore, not to let non-patients into ex-patient organisations or to permit them to dictate an organisation’s goals.

The extent of the resolve to exclude professionals – even those who would appear to be sympathetic such as the anti-psychiatrists – is evident in the writings of Chamberlin as well as in the founding document of the Mental Patients’ Union. Both distance themselves from anti-psychiatry on the grounds that the latter is “an intellectual exercise of academics and dissident mental health professionals” which, while critical of psychiatry, did not include ex-patients or engage their struggles (Chamberlin 1990, p. 323).[10] Further, according to Chamberlin, a group that permits non-patients and professionals inevitably abandons its liberatory intentions and ends up in the weaker position of attempting to reform psychiatry. And reform was not on the agenda of these early groups.

On the advocacy front, the mental patients’ liberation movement – the term generally used to refer to this period of civil rights activism – sought to end psychiatry as they knew it.[11] They sought to abolish involuntary hospitalisation and forced treatment, to prioritise freedom of choice and consent above other considerations, to reject the reductive medical model, to restore full civil rights to mental patients including the right to refuse treatment, and to counter negative perceptions in the media such as the inherent dangerousness of the ‘mentally ill’. In addition to advocacy, a great deal of work went into setting up non-hierarchical, non-coercive alternatives to mental health institutions such as self-help groups, drop-in centres, and retreats.[12] The purpose of these initiatives was not only to provide support to individuals in distress, but to establish that mental patients are self-reliant and able to manage their own lives outside of mental health institutions. Central to the success of these initiatives was a radical transformation in how ex-patients understood their situation. This transformation was referred to as consciousness-raising.

Borrowed from the women’s liberation movement, consciousness-raising is the process of placing elements of one’s situation in the wider context of systematic social oppression (Chamberlin 1990). This begins to occur in meetings in which people get together, share their experiences, identify commonalities, and re-interpret them in a way that gives them broader meaning and significance. An implication of this process is that participants may be able to reverse an internalised sense of weakness or incapability – which hitherto they may have regarded as natural – and regain confidence in their abilities. In the mental patients’ liberation movement, consciousness-raising involved ridding oneself of the central assumptions of the ‘mental health system’: that one has an illness, and that the medical profession is there to provide a cure. In the discourse of the time, inspired by the writings of Thomas Szasz and others, psychiatry was a form of social control, medicalising unwanted behaviour as a pretext for ‘treating’ it and forcing individuals into a sane way of behaving. By sharing experiences, participants begin to see that the mental health system has not helped them. In a book first published in 1977 and considered a founding and inspirational document for mental health activists, Chamberlin (1988, pp. 70-71) writes of the important insights ex-patients gained through consciousness-raising:

Consciousness-raising … helps people to see that their so called symptoms are indications of real problems. The anger, which has been destructively turned inward, is freed by this recognition. Instead of believing that they have a defect in their psychic makeup (or their neurochemical system), participants learn to recognise the oppressive conditions in their daily lives.

Mental suffering and distress, within this view, are a normal response to the difficulties individuals face in life such as relationship problems, social inequality, poverty, loss and trauma. In such situations, individuals need a sympathetic, caring and understanding response, and not the one society offers in the form of psychotropic drugs and the difficult environment of a mental health hospital (Chamberlin 1988). Consciousness-raising does not stop at the ‘mental health system’, but casts a wider net that includes all discriminatory stereotypes against ex-patients. In a deliberate analogy with racism and sexism, Chamberlin uses the term mentalism to refer to the widespread social tendency to call disapproved-of behaviour ‘sick’ or ‘crazy’. Mental patients’ liberation required patients and ex-patients to resist the ‘mental health system’ as well as social stereotyping, and to find the strength and confidence to do so. In this context, voluntary alternatives by and for patients and ex-patients were essential to providing a forum for support and consciousness-raising.

  5. Consumers/Service-Users & Survivors

In the 1980s, the voices of advocates and activists began to be recognised by national government agencies and bodies. This was in the context of a shift towards market approaches to health-care provision, and the idea of the patient as a consumer of services (Campbell 2009). Patients and ex-patients – now referred to as consumers (US) or users (UK) of services – were able to sit in policy meetings and advisory committees of mental health services and make their views known. Self-help groups, which normally struggled for funding, began to be supported by public money. In the US, a number of consumer groups formed that were no longer opposed to the medical model or to working with mental health professionals in order to reform services.[13] While some considered these developments to be positive, others regarded them as indicating what Linda Morrison, an American activist and academic, referred to as a “crisis of co-optation”: the voice of mental health activists had to become acceptable to funding agencies, which required relinquishing radical demands in favour of reform (Morrison 2005, p. 80). Some activists rejected the term consumer as it implied that patients and professionals were in an equal relation, with patients free to determine the services they receive (Chamberlin 1988, p. vii).[14]

Countering the consumer/user discourse was an emerging survivor discourse reflected in a number of national groups, for example the National Association of Psychiatric Survivors (1985) in the US and Survivors Speak Out (1986) in the UK. Survivor discourse shared many points of alignment with earlier activism, but whereas the latter was opposed to including professionals and non-patients, survivors were no longer against this as long as it occurred within a framework of genuine and honest partnership and inclusion in all aspects of service structure, delivery and evaluation (Chamberlin 1995; Campbell 1992).[15]

In the US, developments throughout the 1990s and into the millennium confirm the continuation of these two trends: the first oriented towards consumer discourse and involvement, and the second towards survivors, with a relatively more radical tone and a concern with human rights (Morrison 2005). Today, representative national groups for these two trends include, respectively, the National Coalition for Mental Health Recovery (NCMHR) and MindFreedom International (MFI).[16] The former is focused on promoting comprehensive recovery, approvingly quoting the ‘New Freedom Mental Health Commission Report’ target of a “future when everyone with mental illness will recover”.[17] To this end they campaign for better services, for consumers to have a voice in their recovery, and for tackling stigma and discrimination, and they promote community inclusion via consumer-run initiatives that offer assistance with education, housing and other aspects of life. On the other hand, MFI state their vision to be a “nonviolent revolution in mental health care”. Unlike NCMHR, MFI do not use the language of ‘mental illness’, and support campaigns such as Creative Maladjustment, Mad Pride, and Boycott Normal. Further, MFI state emphatically that they are completely independent and do not receive funds from or have any links with government, drug companies or mental health agencies.[18] Despite their differences, both organisations claim to represent both survivors and consumers, and both trace their beginnings to the 1970s civil rights movements. But whereas NCMHR refer to ‘consumers’ always first and generally more often, MFI do the opposite and state that the majority of their members identify as psychiatric survivors.

In the UK, the service-user/survivor movement – as it came to be referred to – is today represented nationally by a number of groups.[19] Of note is the National Survivor User Network (NSUN) which brings together survivor and user groups and individuals across the UK in order to strengthen their voice and assist with policy change.[20] Another long-standing group (1990), though less active today, is the UK Advocacy Network, a group which campaigns for user led advocacy and involvement in mental health services planning and delivery.[21] A UK survey conducted in 2003 brings some complexity to this appearance of a homogeneous movement (Wallcraft et al. 2003). While most respondents agreed that there is a national user/survivor movement – albeit a rather loose one – different opinions arose on all the important issues; for example, disagreements over whether compulsory treatment can ever be justified, and whether receiving funds from drug companies compromises the movement. In addition, there were debates over the legitimacy of the medical model, with some respondents rejecting it in favour of social and political understandings of mental distress. In this context, they drew a distinction between the service-user movement and the survivor movement, the former concerned with improving services, and the latter with challenging the medical model and the “supposed scientific basis of mental health services” (Wallcraft et al. 2003, p. 50). More radical voices suggested that activists who continued to adopt the medical model have not been able to rid themselves of the disempowering frameworks of understanding imposed by the mental health system. In a similar vein, some respondents noted the de-politicisation of the movement, as activists ceased to be primarily concerned with civil rights and began to work for the mental health system (Wallcraft et al. 2003, p. 14).

In summary, there exists within the consumer/service-user/survivor movements in the US and the UK a variety of stances in relation to involuntary detention and treatment, acceptable sources of funding, the medical model, and the extent and desirability of user involvement in services. Positions range from working for mental health institutions and reforming them from the ‘inside’, to rejecting any co-operation and engaging in activism to end what is considered psychiatric abuse and social discrimination in the guise of supposed medical theory and treatment. It appears that within national networks and movements pragmatic and co-operative approaches are more common, with radical positions pushed somewhat aside though by no means silenced. In this context Mad Pride, representing the latest wave of activism in mental health, re-invigorates the radicalism of the movement and makes the most serious demand yet of social norms and understandings. But Mad Pride, underpinned by the notions of Mad culture and Mad identity, builds on the accomplishments of Survivor identity to which I now briefly turn.

  6. Survivor identity

The connotations of survivor discourse are unmistakable and powerful. With survivor discourse the term ‘patient’ and its implications of dependence and weakness are finally discarded (Crossley 2004, p. 169). From the perspective of those individuals who embraced the discourse, there is much that they have survived: forced detention in the mental health system; aggressive and unhelpful treatments; discrimination and stigma in society; and, for some, the distress and suffering they experienced and which was labelled by others ‘mental illness’. By discarding what they came to see as an imposed identity – viz. ‘patient’ – survivors took one further step towards increased self-definition (Crossley 2006, p. 182). Further, the very term ‘survivor’ implies a positive angle to this definition, in so far as surviving something suggests resilience, strength, and other personal traits considered valuable. Morrison (2005, p. 102) describes it as the “heroic survivor narrative” and accords it a central function in the creation of a collective identity for the movement and a shared sense of injustice.

Central to survivor identity is the voice of survivors and their ability to tell their own stories – a voice which neither society nor the psychiatric system respected. The well-known British activist and poet Peter Campbell (1992, p. 122) writes that a great part of the “damage” sustained in the psychiatric system

has been a result of psychiatry’s refusal to give value to my personal perceptions and experience … I cannot believe it is possible to dismiss as meaningless people’s most vivid and challenging interior experiences and expect no harm to ensue.

The emphasis on survivor voice highlights one further difference from 1970s activism: whereas earlier activists sustained their critique of psychiatry by drawing upon the writings of Szasz, Goffman, Marx and others, survivor discourse eschewed such sources of ‘authority’ in favour of the voice of survivors themselves; Crossley (2004, p. 167) writes:

Survivors have been able to convert their experiences of mental distress and (mis)treatment into a form of cultural and symbolic capital. The disvalued status of the patient is reversed within the movement context. Therein it constitutes authority to speak and vouches for authenticity. The experience of both distress and treatment, stigmatized elsewhere, has become recognized as a valuable, perhaps superior knowledge base. Survivors have laid a claim, recognized at least within the movement itself, to know ‘madness’ and its ‘treatment’ with authority, on the basis that they have been there and have survived it.

Survivors are therefore experts on their own experiences, and experts on what it is like to be subject to treatment in mental health institutions and to face stigma and discrimination in society. So construed, to survive is to be able to emerge from a range of difficulties, some of which are external and others internal, belonging to the condition (the distress, the experiences) that led to the encounter with psychiatry in the first place. In this sense, survivor discourse had not yet been able to impose a full reversal of the negative value attached to phenomena of madness, a value reflected in the language of mental illness, disorder and pathology. This is clearly evident in the idea that one had survived the condition, for if that is the attitude one holds towards it, it is unlikely that the ‘condition’ is looked upon positively or neutrally (except perhaps teleologically in the sense that it had had a formative influence on one’s personality). Similarly, if one considers oneself to have survived mental health institutions rather than the condition, there still is no direct implication that the condition itself is regarded in a non-negative light, only that the personal traits conducive to survival are laudable. It is only with the discourse of Mad Pride, yet to come, that the language of mental illness and the social norms and values underpinning it are challenged in an unambiguous manner.

Mohammed Abouelleil Rashed (2018)

Note: the above is an excerpt from Madness and the Demand for Recognition: A Philosophical Inquiry into Identity and Mental Health Activism (Oxford University Press, 2019).

***

[1] The following account outlines key moments, figures, groups and strategies in mental health advocacy and activism; it is not intended to be exhaustive but rather to illustrate the background to the Mad Pride movement and discourse.

[2] The timeline can be found at: http://studymore.org.uk/mpu.htm. (The website states that Survivor history is being compiled into a book.) See also Campbell and Roberts (2009).

[3] In contrast to Survivor history, there is a tradition of historical and critical writing on the history of ‘psychiatry’ and ‘madness’, and on the development of lunacy reform and mental health law. Notable names in this tradition are Roy Porter, Andrew Scull, and Michel Foucault.

[4] See Peterson (1982, pp. 3-18).

[5] This section benefits, in part, from Crossley’s (2006, Chapter 4) account of mental hygiene.

[6] Mental Health America. Online: http://www.mentalhealthamerica.net/

[7] Mind. Online: http://www.mind.org.uk/

[8] The history of Liberty can be found on their website: https://www.liberty-human-rights.org.uk/who-we-are/history/liberty-timeline

[9] In the US, groups were able to communicate with each other through a regular newsletter, Madness Network News (1972-1986), and an annual Conference on Human Rights and Against Psychiatric Oppression (1973-1985).

[10] For a similar point see the founding document of the Mental Patients’ Union, reprinted in Curtis et al. (2000, pp. 23-28).

[11] Some activists referred to themselves as ‘psychiatric inmates’ or ‘ex-inmates’, highlighting the fact of their incarceration in mental institutions and their rejection of the connotations of the term ‘patient’. This early difference in terminology – inmate versus patient – prefigures the multiplicity of terms and associated strategies that have come to define activism and advocacy in mental health to this day.

[12] The earliest example of a self-help group is WANA (We Are Not Alone). Formed in New York in the 1940s as a patient-run group, it developed into a major psychosocial rehabilitation centre, eventually to be managed by mental health professionals (see Chamberlin 1988, pp. 94-95).

[13] See Bluebird’s History of the Consumer/Survivor Movement. Online: https://www.power2u.org/downloads/HistoryOfTheConsumerMovement.pdf

[14] McLean (1995, p. 1054) draws the distinction between consumers and survivors as follows: “Persons who identify themselves as ‘consumers’, ‘clients’ or ‘patients’, tend to accept the medical model of mental illness and traditional mental health treatment practices, but work for general system improvement and for the addition of consumer controlled alternatives. Those who refer to themselves as ‘ex-patients’, ‘survivors’ or ‘ex-inmates’ reject the medical model of mental illness, professional control and forced treatment and seek alternatives exclusively in user controlled centres.”

[15] Consumers and survivors aside, more radical voices persisted, continuing the discourse and activities of the 1970s’ groups. These voices were vehemently opposed to psychiatry and rejected any cooperation with services or with advocates/activists who tended towards reform. Examples include the Network to Abolish Psychiatry (1986) in the US and the Campaign Against Psychiatric Oppression (CAPO, 1985) in the UK, both of which were active for a few years in the 1980s. (CAPO was an offshoot of the earlier Mental Patients’ Union.) For these groups, the ‘mental health system’ was intrinsically oppressive and had to be abolished: attempts to reform it merely strengthened it (see Madness Network News, Summer 1986, vol. 8, no. 3, p. 8). Reflecting on the beginnings of Survivors Speak Out (SSO, 1986), Peter Campbell, a founder, wrote that CAPO and other “separatist” groups were more concerned with “philosophical and ideological issues” and that SSO was “born partly in reaction to this: they were the first part of the ‘pragmatic’ wing which now dominates the user movement” with an emphasis on dialogue with others (Peter Campbell on The History and Philosophy of The Survivor Movement. Southwark Mind Newsletter, issue 24 – year not specified).

[16] Note that the reference here is to national networks and groups and not the local groups engaged in self-help, support, education, training, and advocacy of which there are hundreds in the US, UK and elsewhere.

[17] National Coalition for Mental Health Recovery. Online: http://www.ncmhr.org/purpose.htm

[18] Mind Freedom International. Online: http://www.mindfreedom.org/mfi-faq

[19] National organisations are of two types: those concerned with mental health generally (discussed in the text), and those with a focus on a particular condition or behaviour, such as the Hearing Voices Network and the National Self-Harm Network.

[20] National Survivor User Network. Online: https://www.nsun.org.uk/our-vision

[21] UK Advocacy Network. Online: http://www.u-kan.co.uk/mission.html


Religious Fundamentalism, Scientific Rationality, and the Evaluation of Social Identities


[Excerpt from Chapter 10 of Madness and the Demand for Recognition (2019, OUP)]

Referring to religious fundamentalism, Gellner (1992, p. 2) writes:

The underlying idea is that a given faith is to be upheld firmly in its full and literal form, free of compromise, softening, re-interpretation or diminution. It presupposes that the core of religion is doctrine, rather than ritual, and also that this doctrine can be fixed with precision and finality.

Religious doctrine includes fundamental ideas about our nature, the nature of the world and the cosmos, and the manner in which we should live and treat each other. In following to the letter the doctrines of one’s faith, believers are trying to get it right, where getting it right means knowing with exactness what God intended for us. In the case of Islam, the tradition I know most about, the Divine intent can be discerned from the Qur’an (considered to be the word of God) and the Traditions (the sayings) attributed to the Prophet (see Rashed 2015b).[1] The process of getting it right, therefore, becomes an interpretive one, raising questions such as: how do we understand this verse; what does God mean by the words ‘dust’ and ‘clot’ in describing human creation; whom did the Prophet intend by this Tradition; does this Tradition follow a trusted lineage of re-tellers?

We can see that ‘getting it right’ for the religious fundamentalist and for the scientific rationalist mean different things – interpreting the Divine intent, and producing true explanations of the nature of the world, respectively. But then we have a problem, for religious doctrine often involves claims whose truth – in the sense of their relation to reality – can, in principle, be established. Yet in being an interpretive enterprise, religious fundamentalism cannot claim access to the truth in this sense. The religious fundamentalist can immediately respond by pointing out that the Divine word corresponds to the truth; it is the truth. If we press the religious fundamentalist to tell us why this is so we might be told that the truth of God’s pronouncements in the Qur’an is guaranteed by God’s pronouncement (also in the Qur’an) that His word is the truth and will be protected for all time from distortion.[2] Such a circular argument, of course, is unsatisfactory, and simply points to the fact that matters of evidence and logic have been reduced to matters of faith. If we press the religious fundamentalist further we might encounter what has become a common response: the attempt to justify the truth of the word of God by demonstrating that the Qur’an had anticipated modern scientific findings, and had done so over 1400 years ago. This is known as the ‘scientific miracle of the Qur’an’; scholars interpret certain ambiguous, almost poetic verses to suggest discoveries such as the relativity of time, the process of conception, brain functions, the composition of the Sun, and many others. The irony in such an attempt is that it elevates scientific truths to the status of arbiter of the truth of the word of God. But the more serious problem is that science is a self-correcting, progressive enterprise – what we know today to be true may turn out tomorrow to be false. The Qur’an, on the other hand, is fixed; every scientific claim in the Qur’an (assuming there are any that point to current scientific discoveries) is going to be refuted the moment our science develops. You cannot use a continually changing body of knowledge to validate the eternally fixed word of God.

Neither the faith-based response nor the ‘scientific miracle of the Qur’an’ response can tie the Divine word to the truth. From the stance of scientific rationality, all the religious fundamentalist can do is provide interpretations of the ‘Divine’ intent as the latter can be discerned in the writings of his or her tradition. Given this, when we are presented with identities constituted by doctrinal claims whose truth can, in principle, be established (and which therefore stand or fall subject to an investigation of their veracity), we cannot extend a positive response to these identities; scientific rationality is within its means to pass judgement.

But not all religion is purely doctrinal in this sense or, more precisely, its doctrines are not intended as strictly factual claims about the world; Appiah (2005, p. 188) makes this point:

Gore Vidal likes to talk about ancient mystery sects whose rites have passed down so many generations that their priests utter incantations in language they no longer understand. The observation is satirical, but there’s a good point buried here. Where religious observance involves the affirmation of creeds, what may ultimately matter isn’t the epistemic content of the sentences (“I believe in One God, the Father Almighty …”) but the practice of uttering them. By Protestant habit, we’re inclined to describe the devout as believers, rather than practitioners; yet the emphasis is likely misplaced.

This is a reasonable point; for many people, religion is a practical affair: they attend the mosque for Friday prayers with their family members, they recite verses from the Qur’an and repeat invocations behind the Imam, and they socialise with their friends after the prayer, and during all of this, ‘doctrine’ is the last thing on their minds. They might even get overwhelmed with spiritual feelings of connectedness to the Divine. In the course of their ritual performance, they are likely to recite verses the content of which involves far-fetched claims about the world. It would be misguided to press them on the truth of those claims (in an empirical or logical sense), as it would be to approach, to use Taylor’s (1994a, p. 67) example, “a raga with the presumptions of value implicit in the well-tempered clavier”; in both cases we would be applying the wrong measure of judgement, it would be “to forever miss the point” (ibid.).[3]

And then there is the possibility that the ‘truths’ in question are metaphorical truths, symbolic expressions of human experience, its range and its moral heights and depths. Charles Taylor (2007, 1982) often talks about the expressive dimension of our experience, a dimension that has been largely expunged from scientific research and its technological application. Human civilizations have always developed rich languages of expression, religious languages being a prominent example. The rarefied language of scientific rationality and its attendant procedural asceticism are our best bet to get things right about the world, but they are often inadequate as a means to express our psychological, emotional, and moral complexity.

To judge the practical (ritualistic) and expressive dimensions of identities in light of the standards of scientific rationality is to trespass upon these identities. Our judgements are misplaced and have limited value. My contention is that every time we suspect that we do not possess the right kind of language to understand other identities, or that there is an experience or mode of engagement that over-determines the language in which people express their identities, we have a genuine problem of shared understanding; we are not within our means to pass judgements of irrationality on the narratives that constitute these identities. Now I am not suggesting that the distinctions between doctrine and practice, or between understanding the world and expressing ourselves, are easy to make. And neither am I suggesting that a particular case falls neatly on one side or the other of these distinctions. But if we are going to adopt the stance of scientific rationality – given that we have to adopt some stance as I have argued earlier – then these are the issues we need to think about: (1) Is the narrative best apprehended in its factual or expressive dimension? (2) Are there experiences that over-determine the kind of narrative that can adequately express them?

In Defense of Madness: The Problem of Disability

By developing a perspective on the social model of disability and by appealing to the concept of intelligibility, I respond to arguments against Mad Pride activism. You can access the article HERE.

Mohammed Abouelleil Rashed, In Defense of Madness: The Problem of Disability, The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine, Volume 44, Issue 2, April 2019, Pages 150–174, https://doi.org/10.1093/jmp/jhy016

 

Abstract: At a time when different groups in society are achieving notable gains in respect and rights, activists in mental health and proponents of mad positive approaches, such as Mad Pride, are coming up against considerable challenges. A particular issue is the commonly held view that madness is inherently disabling and cannot form the grounds for identity or culture. This paper responds to the challenge by developing two bulwarks against the tendency to assume too readily the view that madness is inherently disabling: the first arises from the normative nature of disability judgments, and the second arises from the implications of political activism in terms of being a social subject. In the process of arguing for these two bulwarks, the paper explores the basic structure of the social model of disability in the context of debates on naturalism and normativism, the applicability of the social model to madness, and the difference between physical and mental disabilities in terms of the unintelligibility often attributed to the latter.

 


Mad Activism and Mental Health Practice

On the 6th of August 2018 I delivered a live webinar as part of a Mad Studies series organised by Mad in America. The aim of the webinar was to explore ways of incorporating ideas from Mad activism into clinical practice. The full recording of the webinar and the accompanying slides can be found below.

 

More Things in Heaven and Earth


For a few months in 2009 and 2010 I was a resident of Mut, a small town in the Dakhla Oasis in the Western Desert of Egypt. My aim was to become acquainted with the social institution of spirit possession, and with sorcery and Qur’anic healing (while keeping an eye on how all of this intersects with ‘mental disorder’ and ‘madness’). I learnt many things, among which was the normalness with which spirit possession was apprehended in the community: people invoked spirits to explain a slight misfortune as much as a life-changing event, and to make sense of what we would refer to as ‘schizophrenia’ as much as a passing dysphoria. It was part of everyday life. The way in which spirit possession cut across these diverse areas of life got me thinking about the broader role it plays in preserving meaning when things go wrong. To help me think these issues through I brought in the concepts of ‘intentionality’ and ‘personhood’. The result is my essay More Things in Heaven and Earth: Spirit Possession, Mental Disorder, and Intentionality (2018, open access at the Journal of Medical Humanities).

The essay is a philosophical exploration of a range of concepts and how they relate to each other. It appeals sparingly, though decisively, to the ethnography that I had conducted at Dakhla. If you want to know more about the place and the community you can check these blog-posts:

The Dakhla Diaries (1) : Fast to Charing-X, Slow to Hell

The Dakhla Oasis: Stories from the ‘field’ (0)

The Dakhla Diaries (3): Wedding Invitation

Old Mut, Dakhla

The Dakhla Oasis: Stories from the ‘field’ (I)

And this is a piece I published in the newspaper Al-Ahram Weekly (2009) voicing my view on some of the practices that I had observed: To Untie or Knot

 

On Irrational Identities

(Excerpt from Chapter 10 of Madness and the Demand for Recognition. OUP, 2019)

In Chapter 7 I raised and examined the distinction between failed and controversial identities. I began by pointing out that every demand for recognition – all gaps in social validation – involves the perception by each side that the other is committing a mistake. Given this, I formulated the question we had to address as follows: how do we sort out those mistakes that can be addressed within the scope of recognition (controversial identities) from those that cannot (failed identities)? The implication was that a failed identity involves a mistake that cannot be corrected by revising the category with which a person identifies, while a controversial identity involves a mistake that can, in principle, be corrected in that way. The issue I am concerned with here is no longer the identity-claim as such but the validity of the collective category itself; the question is no longer ‘what kind of mistake is the person identifying as x implicated in?’ but ‘is x a valid category?’. This question features as an element of adjudication for the reason that some social identities can be irrational in such a way that they cannot be regarded as meriting a positive social or a political response. As Appiah (2005, p. 181) writes:

Insofar as identities can be characterised as having both normative and factual aspects, both can offend against reason: an identity’s basic norms might be in conflict with one another; its constitutive factual claims might be in conflict with the truth.

For example, consider members of the Flat Earth Society if they were to identify as Flat-Earthers and demand recognition of the validity of their identity. They may successfully demonstrate that society’s refusal to recognise them as successful agents inflicts on them a range of social harms such as disqualification. Yet it is clear that their identity does not merit further consideration, for the reason that it is false: Earth is not flat. A similar predicament befalls some Creationists; Young-Earth Creationists, for example, believe that Earth is about ten thousand years old and was created over a period of six days, a belief that stands against all scientific evidence. It is not unreasonable to suggest that neither the Flat-Earthers nor the Young-Earth Creationists ought to have their identity-claims taken seriously, as the facts that constitute their identities do not measure up to what we know to be true, given the best evidence we now possess. To put it bluntly, whatever else might be at stake between us and the Flat-Earthers or Young-Earth Creationists, the shape of the Earth, its age, and the emergence and development of life on it are not.

Who does ‘us’ refer to in this context? To those who regard scientific rationality as an important value to uphold in society. By scientific rationality I mean an epistemological and methodological framework that prioritises procedural principles of knowledge acquisition (such as empirical observation, atomisation of evidence, and non-metaphysical, non-dogmatic reasoning), and eschews substantive convictions about the world derived from a sacred, divine, or otherwise infallible, authority (see Gellner 1992, pp. 80-84). In rejecting the demands of Flat-Earthers and Young-Earth Creationists, we are prioritising the value of scientific rationality over the value of an individual’s attachment to a particular identity. We are saying: we know that it matters to you that your view of the world is accepted by us, but to accept it is to undermine what we consider, in this instance, to be a more important value. Note that such a response preserves the value of free speech – Flat-Earthers and Young-Earth Creationists are free to espouse their views. Note also that refusing to accord these identities a positive response is a separate issue from taking an active stand against them (an example of the latter would be government intervention to ban the teaching of creationism in schools).[1] What we are trying to determine here is not who should receive a negative response but who is a legitimate candidate for a positive one. Owing to the irrationality of their constituting claims, Flat-Earthers and Young-Earth Creationists are not.

At this point in the argument someone could object to the premise of assessing the rationality of identities. They could object on two grounds: they could say there is no stance from where we can make such assessments; or they could say that even if such a stance exists and it is possible to determine the rationality of an identity, such a determination is always trumped by the demand for recognition and by individuals’ attachment to their identities. Both positions could further argue that as long as an identity is neither trivial nor morally objectionable, it ought to be considered for a positive response. We can recognise in the first position a commitment to cognitive relativism; in the second position we can recognise an extreme form of liberal tolerance. Both positions are problematic…

[1] For an example of what an active stance would look like in such cases and the problems it raises, see Appiah (2005, pp. 182-189) for an ingenious thought experiment based in the mythical Republic of Cartesia. The regime in Cartesia encourages the creed of hard rationalism and actively seeks to transform any deviations from rationality among its citizens.

The Identity of Psychiatry in the Aftermath of Mad Activism

[Introduction to an essay I am working on for a special issue of the Journal of Medicine & Philosophy with the title ‘The Crisis in Psychiatric Science’]


THE IDENTITY OF PSYCHIATRY IN THE AFTERMATH OF MAD ACTIVISM

  1. INTRODUCTION

Psychiatry has an identity in the sense that it is constituted by certain understandings of what it is and what it is for. The key element in this identity, and the element from where other features arise, is that psychiatry is a medical speciality. Upon completion of their medical education and during the early years of their training, medical students – now budding doctors – make a choice about the speciality they want to pursue. Psychiatry is one of them, as are ophthalmology, cardiology, gynaecology, and paediatrics. Modern medical specialities share some fundamental features: they treat conditions, disorders, or diseases; they aspire to be evidence-based in the care and treatments they offer; they are grounded in basic sciences such as physiology, anatomy, histology, and biochemistry; and they employ technology in investigations, research, and development of treatments. All of this ought to occur (and in the best of cases does occur) in a holistic manner, taking account of the whole person and not just of an isolated organ or system; i.e. person-centred medicine (e.g. Cox, Campbell, and Fulford 2007). In addition, it is increasingly recognised that the arts and humanities have a role to play in medical education, training, and practice. Literature, theatre, film, history, and the various arts, it is argued, can help develop the capacity for good judgement, and can broaden the ability of clinicians to understand and empathise with patients (e.g. Cook 2010; McManus 1995). None of the above, I will assume in this essay, is particularly controversial.

Even though psychiatry is a medical speciality, it is a special medical speciality. This arises from its subject matter, ordinarily conceived of as mental health conditions or disorders, to be contrasted with physical health conditions or disorders. Psychiatry deals with the mind not working as it should while ophthalmology, for example, deals with the ophthalmic system not working as it should. The nature of its subject matter raises certain complexities for psychiatry that, in extreme, are sometimes taken to suggest that psychiatry’s positioning as a medical speciality is suspect; these include the normative nature of psychiatric judgements, the explanatory limitations of psychiatric theories, and the classificatory inaccuracies that beset the discipline.[1] Another challenge to psychiatry’s identity as a medical speciality comes from particular approaches in mental health activism. Mad Pride and mad-positive activism (henceforth Mad activism) rejects the language of ‘mental illness’ and ‘mental disorder’, and rejects the assumption that people have a ‘condition’ that is the subject of treatment. The idea that medicine treats ‘things’ that people ‘have’ is fundamental to medical practice and theory and hence is fundamental to psychiatry in so far as it wishes to continue understanding itself as a branch of medicine. Mad activism, therefore, challenges psychiatry’s identity as a medical speciality.

In this essay, I argue that among these four challenges, only the fourth requires psychiatry to rethink its identity. By contrast, as I demonstrate in section 2, neither the normative, the explanatory, nor the classificatory complexities undermine psychiatry’s identity as a medical speciality. This is primarily for the reason that the aforementioned complexities obtain in medicine as a whole, and are not unique to psychiatry even if they are more common and intractable there. On the other hand, the challenge of Mad activism is a serious problem. In order to understand what the challenge amounts to, I develop in section 3 the notion of the hypostatic abstraction, a logical and semantic operation which I consider to lie at the heart of medical practice and theory. It distinguishes medicine from other social institutions concerned with human suffering, such as religious and some therapeutic institutions. In section 4 I demonstrate how Mad activism challenges the hypostatic abstraction. And in section 5 I discuss a range of ways in which psychiatry can respond to this challenge, and the modifications to its identity that may be necessary.

[1] These are not the only complexities; there are, for example, well-known difficulties and controversies surrounding the efficacy and risks of anti-depressant and anti-psychotic medication. In addition, psychiatry faces distinctive ethical complexities arising from the fact that mental health patients can be particularly vulnerable, which raises questions of capacity not ordinarily raised in other medical specialities (see Radden and Sadler 2010).

 

Madness & the Demand for Recognition


After four years of (almost) continuous work, I have finally completed my book:

Madness and the Demand for Recognition: A Philosophical Inquiry into Identity and Mental Health Activism.

You can find the book at the Oxford University Press website and at Amazon.com. A preview with the table of contents, foreword, preface, and introduction is here.

Madness is a complex and contested term. Through time and across cultures it has acquired many formulations: for some, madness is synonymous with unreason and violence; for others, with creativity and subversion; elsewhere it is associated with spirits and spirituality. Among the different formulations, there is one in particular that has taken hold so deeply and systematically that it has become the default view in many communities around the world: the idea that madness is a disorder of the mind.

Contemporary developments in mental health activism pose a radical challenge to psychiatric and societal understandings of madness. Mad Pride and mad-positive activism reject the language of mental ‘illness’ and ‘disorder’, reclaim the term ‘mad’, and reverse its negative connotations. Activists seek cultural change in the way madness is viewed, and demand recognition of madness as grounds for identity. But can madness constitute such grounds? Is it possible to reconcile delusions, passivity phenomena, and the discontinuity of self often seen in mental health conditions with the requirements for identity formation presupposed by the theory of recognition? How should society respond?

Guided by these questions, this book is the first comprehensive philosophical examination of the claims and demands of Mad activism. Locating itself in the philosophy of psychiatry, Mad studies, and activist literatures, the book develops a rich theoretical framework for understanding, justifying, and responding to Mad activism’s demand for recognition.

 

The Motivation for Recognition & the Problem of Ideology


[Excerpt from Chapter 4 of my book Madness & the Demand for Recognition, forthcoming Oxford University Press, 2018]

 

In the foregoing account of identity (section 4.2) there is frequent mention of the demand for recognition (indeed, the title of the book features the same). We have made some progress towards understanding the nature of the gaps in social validation under which such a demand can become possible: individuals who are unable to find their self-understanding reflected in the social categories with which they identify, and who are demanding social change to address this. But what motivates people to seek this kind of social change – what motivates them to struggle for recognition?

4.3 THE STRUGGLE FOR RECOGNITION

4.3.1 The motivation for recognition

There are, at least, four possible sources of motivation for recognition. One of these sources has already been identified in the discussion of Hegel’s teleology (section 3.5.1). In accordance with this, the struggle for more equal and mutual forms of recognitive relations is driven forward by the telos of human nature, which is the actualisation of freedom: if that is the ultimate goal, then the dialectical development of consciousness’ understanding of itself will lead to an awareness of mutual dependency as a condition of freedom. But this account has been considered and rejected on the grounds that positing an ultimate, rational telos for human beings that tends towards realisation is a problematic assumption, with connotations of the kind of metaphysical theorising that Kant’s critical philosophy had put to rest. The metaphysical source of the motivation for recognition must be rejected.

Another possible source is empirical and has to do with the psychological nature of human beings. In the Struggle for Recognition, Axel Honneth (1996) provides such an account through the empirical social psychology of G. H. Mead. According to Mead (1967) the self develops out of the interaction of two perspectives: the ‘me’ which is the internalised perspective of the social norms of the generalised other, and the ‘I’ which is a response to the ‘me’ and the source of individual creativity and rebellion against social norms. It is the movement of the ‘I’ – the impulse to individuation – that shows up the limitations of social norms and motivates the expansion of relations of recognition (see Honneth 1996, pp. 75-85).

In a later work Honneth (2002, p. 502) rejects his earlier account; he begins by noting: “there has always seemed to me to be something particularly attractive about the idea of an ongoing struggle for recognition, though I did not quite see how it could still be justified today without the idealistic presupposition of a forward-driven process of Spirit’s complete realization”. Honneth thus rejects the teleological account that we, also, found wanting. He then goes on to render problematic his earlier proposal that seeks to ground the motivation for recognition in Mead’s social psychology:

I have come to doubt whether [Mead’s] views can actually be understood as contributions to a theory of recognition: in essence, what Mead calls ‘recognition’ reduces to the act of reciprocal perspective taking, without the character of the other’s action being of any crucial significance; the psychological mechanism by which shared meanings and norms emerge seems to Mead generally to develop independently of the reactive behaviour of the two participants, so that it also becomes impossible to distinguish actions according to their respective normative character. (Honneth 2002, p. 502)

In other words, what Mead describes is a general process that is always occurring behind people’s backs in so far as it is a basic feature of the human life form. His theory explains how shared norms emerge and why they expand, but deprives agents’ behaviours towards each other of normative significance. Agents become unwitting subjects of this process rather than agents struggling for recognition. To struggle for recognition is to perceive oneself to be denied a status one is worthy of, not to mechanically act out one’s innate nature. And this remains the case even if our treatment by others engenders feelings of humiliation and disrespect. To experience humiliation is already to consider oneself deserving of a certain kind of treatment, of a normative status that is denied. Such feelings, therefore, cannot themselves constitute the motivation for recognition; rather, they are symptoms of a prior conviction that one must be treated in a better way.

If the motivation for recognition cannot be accounted for metaphysically (by the teleology of social existence), or empirically (by the facts of one’s psychological nature), or emotionally (by the powerful feelings that signal the need for social change), then it must somehow be explained with reference to the ideas that together make up the theory of recognition. These ideas include specific understandings of individuality, self-realisation, freedom, authenticity, social dependence, and the need for social confirmation, in addition to notions of dignity, esteem, and distinction, among others. To be motivated to struggle for recognition is to already be shaped by a historical tradition in which such notions have become part of how we relate to ourselves and others, and of the normative expectations that structure such relations; as McBride (2013, p. 137) writes, “we are the inheritors of a long and complex history of ethical, religious, philosophical, and, more recently, social scientific thought about the stuff of recognition: pride, honour, dignity, respect, status, distinction, prestige”. It is partly because we stand within the space of these notions that we can see, as pointed out in section 3.5.2, that living a life of delusion and disregard for what others think, or a life of total absorption in social norms, is not to live a worthwhile life, for we would be giving up altogether either on social confirmation or on our individuality. We are motivated by these notions in so far as we are already constituted socially so as to be moved by them.

Putting the issue this way may raise concerns. By grounding the motivation for recognition in the subject’s prior socialisation, it becomes harder to establish whether that motivation is, ultimately, a means for the individual to broaden his or her social freedom, or a means for reproducing existing relations of domination. As McNay (2008, p. 10) writes, “the desire for recognition might be far from a spontaneous and innate phenomenon but the effect of a certain ideological manipulation of individuals” (see also McBride 2013, pp. 37-40; Markell 2003). Honneth (2012, p. 77) provides a number of examples where recognition may be seen as contributing to the domination of individuals:

The pride that ‘Uncle Tom’ feels as a reaction to the constant praises of his submissive virtues makes him into a compliant servant in a slave-owning society. The emotional appeals to the ‘good’ mother and housewife made by churches, parliaments or the mass media over the centuries caused women to remain trapped within a self-image that most effectively accommodated gender-specific division of labour.

Instead of constituting moral progress (in the sense of an expansion of individual freedom), recognition becomes a mechanism by which people endorse the very identities that limit their freedom. They seek recognition for these identities and in this way “voluntarily take on tasks or duties that serve society” (Honneth 2012, p. 75). There is a need, therefore, to see if we can distinguish ideological forms of recognition from those relations of recognition in which genuine moral progress can be said to have occurred, since what we are after are relations of the latter sort.

4.3.2 The problem of ideology

I first consider, and exclude, some ways in which the problem of ideology cannot be solved. It may seem attractive to find a solution by appeal to a Kantian notion of rational autonomy, where the subject withdraws from social life in order to know what it ought to do. If such withdrawal were possible, we would have an instance of genuine recognition in the sense that an autonomous choice has been made. But as argued in section 3.2, withdrawing to pure reason can only produce the form that moral principles must take, without those principles thereby possessing sufficient content to guide action. Moral principles acquire content, and hence can be action-guiding, through the very social practices that Kant urged us to withdraw from in order to exercise our rational autonomy. Somehow, then, the distinction between ideological and genuine recognition, if it can be made at all, will have to be drawn from within those social practices, as an appeal to a noumenal realm of freedom where we can rationally will what we ought to do cannot work. This is further complicated by the fact that both genuine and ideological recognition – being forms of recognition – must meet the approval of the subject, in the sense that both must make the subject feel valued and be considered positive developments conducive to individual growth. Hence, the experience of the subject cannot help us here either. Ideological recognition, then, consists in practices that are “intrinsically positive and affirmative” yet “bear the negative features of an act of willing subjection, even though these practices appear prima facie to lack all such discriminatory features” (Honneth 2012, p. 78). How can these acts of recognition be identified?

The key seems to lie in the notion of ‘willing subjection’ and the possibility of identifying this despite subjects’ pronouncements of their wellbeing. The judgement that particular practices of recognition are ideological, in the sense that they constitute acts of willing subjection, must therefore be made by an external observer. The observer needs to perceive subjection, while at the same time explaining away the person’s acceptance of the situation as an indication that he has internalised his oppression in such a way that he willingly subjects himself. The case of the ‘good mother’ is a case in point: by voluntarily endorsing that role, she remains uncompensated for her work, and many other opportunities in life are foreclosed to her. Now the observer, in this kind of theoretical narrative, is no longer concerned with the quality of interpersonal relations or the subject’s experience of freedom and wellbeing. What is at issue seems to be that the observer disagrees with the values and beliefs that structure those relations, rather than with whether those relations are genuinely relations of mutual recognition. A contemporary example can further clarify.

Consider the claim, often heard in certain public discourse, that Muslim women who cover their hair – who wear a hijab – are ‘oppressed’. Frequently, the claims made do not require that the women in question report any oppression, and hence concepts such as ‘internalised oppression’ are invoked to explain the lack of a negative experience. Of course, some women are coerced into wearing the hijab, and given the right context they would remove it and see it as an unnecessary imposition on them. For others the hijab is about modesty and has religious connotations. In this sense, it is not a symbol of their oppression and may even be regarded as a feature that can generate positive recognition as a pious and religiously observant person. An observer who claims that the desire for recognition in such cases is ideological – that women who cover their hair are willingly (and subconsciously) subjecting themselves to existing norms – is making a statement about his or her views on the cultural context: the problem the observer has is with the religious weight placed on clothing, or with the fact that it is mainly women who have to observe such practices. Some women who wear a hijab reject this account since it bypasses their own understanding of what they are doing and the value they attach to it (in fact such an account can itself end up being a form of misrecognition). Not surprisingly, the exact same claim is made in reverse by some Muslim women who argue that ‘Westernised’ women who dress ‘immodestly’ are oppressed by a dominant, male culture that subtly forces them to show their bodies. Those who believe that dressing in this way is an expression of freedom and secularism have simply internalised the values by which they willingly subject themselves to existing norms.

The point of presenting this case from both sides is to show that once we bypass people’s accounts of what they are doing, and put aside their reported experience of freedom and wellbeing, we can see that what is going on is an ideological conflict between two worldviews. This conflict can itself be described within the framework of misrecognition as a continued devaluing of agents’ identities under the cover of an interest in their wellbeing. Of course, people are not always right about what they are doing, and our psychological depth is such that we can deceive ourselves and accept an abusive situation, or even fail to see that it is abusive. We may convince ourselves that a particular role is exactly right for us, whereas others can see that it is obviously limiting our lives. But psychological depth and the possibility of self-deception go both ways; if that person over there is not transparent to himself then neither am I, even if transparency admits of degrees. Hence, if we are going to argue that a person is willingly subjecting herself, we also need to account for our motivations in making such an argument and for what we are, in a sense, getting out of it in terms of validating our worldview, our take on what matters.

This perspective on the idea of ‘willing subjection’ should not be interpreted as a call for inaction; rather, it is a call for personalising and contextualising our moral and political responses to, and analyses of, the lives of others. This means that if we are inclined to persuade individuals to change their understanding of their situation, then we cannot simply bypass their experience of wellbeing and their specific circumstances. In other words, sweeping judgements that take the form ‘group x is oppressed’ are not helpful; clearly there are all sorts of possibilities, and the only way to sort these out is to be aware of this complexity, without losing sight of ‘structural’ discrimination in a particular community. With this in mind we will find that the spectrum of oppression includes the following: some in group x are oppressed and are already fighting to change that; some do not consider themselves oppressed but change their take on the situation once they are presented with a different analysis of it; some do not consider themselves oppressed – despite clear evidence to the contrary – yet no amount of persuasion can get them to see this; some consider your interest in their freedom an attempt to oppress them; others consider themselves perfectly free and empowered.

Returning to our original question – the distinction between ideological and genuine forms of recognition – it appeared, to begin with, that the idea of ‘willing subjection’ held the key to that distinction. However, on having a closer look at this idea it emerged that what it communicates is a conflict of worldviews rather than a view on the quality of interpersonal relations as relations of recognition. As argued earlier, whether ‘ideological’ or ‘genuine’, if the relations in question are to be relations of recognition then the individuals concerned must feel valued for who they are, and be able to see existing relations as contributing to their personal growth and fulfilment. In this sense the distinction between ideological and genuine recognition cannot be drawn using the notion of ‘willing subjection’. What this notion brings to light are the very real, and very deep, disagreements in beliefs, values, social roles, and life goals that exist across contexts and ideologies. And while it certainly is of importance to debate and negotiate these differences, in order for such disagreements not to end up themselves generating conditions for misrecognition, it is necessary not to lose sight of the individuals involved, including their take on what they are doing and their experience of freedom and wellbeing.

Response to Order/Disorder, Kai Syng Tan’s UCL Institute of Advanced Studies Talking Points Seminar

5th December 2017

Title of seminar:

Order/Disorder – The artist-researcher as connector-disrupter-running messenger? 

by Dr Kai Syng Tan

My response:

Thank you very much for inviting me today.

I was pleased when I received this invitation, not only because it meant I could return to the IAS, where I spent a year a couple of years ago, but because Kai’s work is hugely important, as well as being relevant to my work in philosophy and psychiatry.

For too long there has been a gap between, on the one hand, social and professional understandings of mental health conditions and, on the other, individuals’ own understanding of their experiences and situation. There wasn’t much of a conversation going on, or if there was, it was framed in terms that emphasised disorder and deficit.

For some time, activism in mental health has been trying to change this, by demanding that people are heard on their own terms.

But then how do we bridge this gap; how do we create the possibility of generating shared understandings of the various mental health conditions? Just what do we do? Well, we do what Kai is doing: inventive projects that bring people together and engage them in creative activities that unsettle some of their assumptions and broaden their understanding, perhaps even their sense of empathy. For this kind of progress, it is not sufficient to give people information; they need to have an experience, and as I see it, Kai’s work provides both.

*

There is a point I would like to make and to have your opinion on: it has to do with the distinction between order and disorder.

I came to this distinction first as a doctor and then as a researcher in philosophy and psychiatry. In philosophy, the concept of mental disorder has been the subject of many search and destroy as well as rescue missions over the past twenty-five years.

The key point of contention was whether or not we can define disorder (or more precisely, dysfunction) in purely factual terms, for instance as the breakdown of the natural functions of psychological mechanisms. The goal in such attempts was to define dysfunction in terms that do not involve value-judgements.

These attempts were not successful: at some point in the process of describing the relevant mechanisms and their functions, value-judgements sneak in.

Now demonstrating the value-ladenness of the concept of disorder does not mean that it suddenly disappears; and it does not mean that the boundary between order and disorder vanishes into thin air. It just means that it has become a much more controversial boundary than previously thought, and the distinctions it involves are difficult ones to make.

My point is that making qualitative distinctions among behaviours and experiences – whether our own or other people’s – is not optional: it is part of how we understand ourselves and understand others as psychological and social beings. 

That being said: even if the distinction between order and disorder – or whatever other terms you wish to use – is inevitable, it is one that we continually ought to attempt to transcend.

Why should we attempt to overcome it? Because there might be order in what appears to be disorder, and disorder in what appears to be order; because in attempting to transcend this distinction we can grasp what it is that we share with others and not just what sets us apart; and because there’s no telling on which side of that distinction any of us is going to fall one day.

It is precisely this paradox that we need to be conscious of and work with: the paradox of accepting the inevitability of a distinction while at the very same time seeking to transcend it. And I wonder what you think of this?

*

The other point I want to make has to do with the relation between our research and the activism that is connected to it. I must admit that in my own work I’ve frequently thought about this but have not yet arrived at a satisfactory view. The question, of course, is broader than our area of research and applies to the humanities in general: to what extent should a researcher commit to the social cause they are researching, and what does this mean for the objectivity of what they are producing? What kind of balance do we need to strike here? And have you thought about this in your work?

The Meaning of Madness


Excerpt from Chapter 1 of my book “Madness and the Demand for Recognition”. Forthcoming with Oxford University Press, 2018

Mad with a capital M refers to one way in which an individual can identify, and in this respect it stands alongside other social identities such as Maori, African-Caribbean, or Deaf. If someone asks why a person identifies as Mad or as Maori, the simplest answer that can be offered is to state that he identifies so because he is mad or Maori. And if this answer is to be anything more than a tautology – he identifies as Mad because he identifies as Mad – the ‘is’ must refer to something over and above that person’s identification; i.e. to that person’s ‘madness’ or ‘Maoriness’. Such an answer has the implication that if one is considered to be Maori yet identifies as Anglo-Saxon – or white and identifies as Black – one would be wrong in a fundamental way about one’s own nature. And this final word – nature – is precisely the difficulty with this way of talking, and underpins the criticism that such a take on identity is ‘essentialist’.

Essentialism, in philosophy, is the idea that some objects may have essential properties, which are properties without which the object would not be what it is; for example, it is an essential property of a planet that it orbits around a star. In social and political discussions, essentialism means something somewhat wider: it is invoked as a criticism of the claim that one’s identity falls back on immutable, given, ‘natural’ features that incline one – and the group with which one shares those features – to behave in certain ways, and to have certain predispositions. The critique of certain discourses as essentialist has been made in several domains including race and queer studies, and in feminist theory; as Heyes (2000, p. 21) points out, contemporary North American feminist theory now takes it as a given that to refer to “women’s experience” is merely to engage in an essentialist generalisation from what is actually the experience of “middle-class white feminists”. The problem seems to be the construction of a category – ‘women’ or ‘black’ or ‘mad’ – all members of which supposedly share something deep that is part of their nature: being female, being a certain race, being mad. In terms of the categories, there appears to be no basis for supposing either gender essentialism (the claim that women, in virtue of being women, have a shared and distinctive experience of the world: see Stone (2004) for an overview), or the existence of discrete races (e.g. Appiah 1994a, pp. 98-101), or a discrete category of experience and behaviour that we can refer to as ‘madness’ (or ‘schizophrenia’ or any other psychiatric condition for this purpose). Evidence for the latter claim is growing rapidly as the following overview indicates.

There is a body of literature in philosophy and psychiatry that critiques essentialist thinking about ‘mental disorder’, usually by rebutting the claim that psychiatric categories can be natural kinds (see Zachar 2015, 2000; Haslam 2002; Cooper 2013 is more optimistic). A ‘natural kind’ is a philosophical concept which refers to entities that exist in nature and are categorically distinct from each other. The observable features of a natural kind arise from its internal structure, which also is the condition for membership of the kind. For example, any compound whose molecules consist of two atoms of hydrogen and one atom of oxygen is water, irrespective of its observable features (which in the case of H2O can be ice, liquid, or gas). Natural kind thinking informs typical scientific and medical approaches to mental disorder, evident in the following assumptions (see Haslam 2000, pp. 1033-1034): (1) different disorders are categorically distinct from each other (schizophrenia is one thing, bipolar disorder another); (2) you either have a disorder or not – a disorder is a discrete category; (3) the observable features of a disorder (symptoms and signs) are causally produced by its internal structure (underlying abnormalities); (4) diagnosis is a determination of the kind (the disorder) which the individual instantiates.

If this picture of strong essentialism appears as a straw-man it is because thinking about mental disorder has moved on or is in the process of doing so. All of the assumptions listed here have been challenged (see Zachar 2015): in many cases it’s not possible to draw categorical distinctions between one disorder and another, and between disorder and its absence; fuzzy boundaries predominate. Symptoms of schizophrenia and of bipolar disorder overlap, necessitating awkward constructions such as schizoaffective disorder or mania with psychotic symptoms. Similarly, the boundary between clinical depression and intense grief has been critiqued as indeterminate. In addition, the reductive causal picture implied by the natural kind view seems naive in the case of mental disorder: it is now a truism that what we call psychiatric symptoms are the product of multiple interacting factors (biological, social, cultural, psychological). And diagnosis is not a process of matching the patient’s report with an existing category, but a complicated interaction between two parties in which one side – the clinician – constantly reinterprets what the patient is saying in the language of psychiatry, a process which the activist literature has repeatedly pointed out permits the exercise of power over the patient.

The difficulties in demarcating health from disorder and disorders from each other have been debated recently under the concept of ‘vagueness’: the idea that psychiatric concepts and classifications are imprecise, with no sharp distinctions possible between those phenomena to which they apply and those to which they do not (Keil, Keuck, and Hauswald 2017). Vagueness in psychiatry does not automatically eliminate the quest for more precision – it may be the case, for example, that we need to improve our science – but it does strongly suggest a formulation of states of health and forms of experience in terms of degrees rather than categories, i.e. a gradualist approach to mental health. Gradualism is one possible implication of vagueness, and there is good evidence to support it as a thesis. For example, Sullivan-Bissett and colleagues (2017) have convincingly argued that delusional and non-delusional beliefs differ in degree, not kind: non-delusional beliefs exhibit the same epistemic shortcomings attributed to delusions: resistance to counterevidence, resistance to abandoning the belief, and the influence of biases and motivational factors on belief formation. Similarly, as pointed out earlier, the distinction between normal sadness and clinical depression is difficult to make on principled grounds, and relies on an arbitrary specification of the number of weeks during which a person can feel low in mood before a diagnosis can be given (see Horwitz and Wakefield 2007). Another related problem is the non-specificity of symptoms: auditory hallucinations, thought insertion, and other passivity phenomena which are considered pathognomonic of schizophrenia can be found in the non-patient population as well as in other conditions (e.g. Jackson 2007).

Vagueness in mental health concepts and gradualism with regards to psychological phenomena undermine the idea that there are discrete categories, underpinned by an underlying essence, that go with labels such as schizophrenia, bipolar disorder, or madness. But people continue to identify as Women, African-American, Maori, Gay, and Mad. Are they wrong to do so? To say they are wrong is to mistake the nature of social identities. To prefigure a discussion that will occupy a major part of Chapters 4 and 5, identity is a person’s understanding of who he or she is, and that understanding always appeals to existing collective categories: to identify is to place oneself in some sort of relation to those categories. To identify as Mad is to place oneself in some sort of relation to madness; to identify as Maori is to place oneself in some sort of relation to Maori culture. Now those categories may not be essential in the sense of falling back on some immutable principle, but they are nevertheless out there in the social world, and their meaning and continued existence do not depend on one person rejecting them (nor can one person alone maintain a social category, even if he or she can play a major role in conceiving it). Being social in nature they are open to redefinition, hence collective activism to reclaim certain categories and redefine them in positive ways. In fact, the argument that a particular category has fuzzy boundaries and is not underpinned by an essence may enter into its redefinition. But demonstrating this cannot be expected to eliminate people’s identification with that category: the inessentiality of race, to give an example, is not going to be sufficient by itself to end people’s identification as White or Black.

In the context of activism, to identify as Mad is to have a stake in how madness is defined, and the key issue becomes the meaning of madness. To illustrate the range of ways in which madness has been defined, I appeal to some key views that have been voiced in a recent, important anthology: Mad Matters: A Critical Reader in Canadian Mad Studies (2013). A key point to begin with is that Mad identity tends to be anchored in experiences of mistreatment and labelling by others. By Mad, Poole and Ward (2013, p. 96) write, “we are referring to a term reclaimed by those who have been pathologised/psychiatrised as ‘mentally ill’”. Similarly, Fabris (2013, p. 139) proposes Mad “to mean the group of us considered crazy or deemed ill by sanists … and are politically conscious of this”. These definitions remind us that a group frequently comes into being when certain individuals experience discrimination or oppression that they then attribute to some features that they share, no matter how loosely. Those features have come to define the social category of madness. Menzies, LeFrancois, and Reaume (2013, p. 10) write:

Once a reviled term that signalled the worst kinds of bigotry and abuse, madness has come to represent a critical alternative to ‘mental illness’ or ‘disorder’ as a way of naming and responding to emotional, spiritual, and neuro-diversity. … Following other social movements including queer, black, and fat activism, madness talk and text invert the language of oppression, reclaiming disparaged identities and restoring dignity and pride to difference.

In a similar fashion, Liegghio (2013, p. 122) writes:

madness refers to a range of experiences – thoughts, moods, behaviours – that are different from and challenge, resist, or do not conform to dominant, psychiatric constructions of ‘normal’ versus ‘disordered’ or ‘ill’ mental health. Rather than adopting dominant psy constructions of mental health as a negative condition to alter, control, or repair, I view madness as a social category among other categories like race, class, gender, sexuality, age, or ability that define our identities and experiences.

Mad activism may start with shared experiences of oppression, stigma, and mistreatment; it continues with the rejection of biomedical language and the reclamation of the term ‘mad’; and it then proceeds by developing positive content for madness and hence for Mad identity. As Burstow (2013, p. 84) comments:

What the community is doing is essentially turning these words around, using them to connote, alternately, cultural difference, alternate ways of thinking and processing, wisdom that speaks a truth not recognised …, the creative subterranean that figures in all of our minds. In reclaiming them, the community is affirming psychic diversity and repositioning ‘madness’ as a quality to embrace; hence the frequency with which the word ‘Mad’ and ‘pride’ are associated.

The Dakhla Oasis: Stories from the ‘field’ (0)


Arrivals

The road from Kharga is an isolated strip of asphalt winding through arid desert that alternates between flat, uneventful plains and more spectacular sand-dunes. Seventy kilometres before you arrive at the village of Mūt, the landscape bursts with numerous shades of pastel colours: the desert alternates with lush vegetation, plain fields, and palm tree groves, bounded on the northern side by a mountain chain and on the southern side by more flat desert. Several villages dot the remainder of the road, some tucked in the bosom of the mountain and barely visible, while others, like Asmant, start right at the road and sprawl into the distance. A few villages later the vegetation is overtaken by the low-lying buildings of Mūt: a curious mix of half-completed one- and two-storey concrete buildings and traditional mud brick dwellings. This is the largest village in Dakhla, the Inner Oasis, and the third most populated in the Western Desert.

Oases of the Western Desert: An Historical Snapshot

Extending west of the Nile-valley and occupying two thirds of the land surface of Egypt, the Western Desert – otherwise an arid expanse of over 680,000 square kilometres – is dotted by six depressions: the oases of Siwa, Fayyoum, Bahariyya, Farafra, Dakhla, and Kharga. Beyond the western political border of Egypt it becomes the Libyan Desert, and shortly after merges into the Sahara. With Cairo as a reference point, Fayyoum is the closest, largest, and most populated of the six oases. Siwa lies only 50 km east of the Libyan border and, traditionally, has been the most isolated. The other four oases lie on an arc that starts at Cairo, curves west into the desert, and 1330 km later returns to the Nile-valley at Luxor. Farafra lies at the westernmost point of that arc, followed by Dakhla.

According to the 2006 national census, the population of the locality of Dakhla is 79,812 (CAPMAS 2007). This figure covers the whole locality, including the main town, Mūt, and the many villages surrounding it. The oases have seen a significant population boost over the past few decades. Throughout their history, however, the population of the oases tended to fluctuate dramatically for reasons to do with diminishing water supplies and the ever-present danger of Bedouin raids.

What we know from archaeological remains is that Dakhla has seen human activity since Palaeolithic times (Kubiak and Zabowski 1995, 10). In the Pharaonic period, Egyptians from the Nile-valley appear to have arrived at Dakhla around 2300 BC (Mills 1999). Ohat R’seit – the Southern Oasis – was part of the same administrative division as the oasis of Kharga, and evidence indicates that the responsible authority sought to maintain and develop the oases primarily as a first line of defence against incoming raids from the west and south (Abu-Zayd 1997). Papyri from the Graeco-Roman period indicate that Dakhla was governed through a tight organisational structure, with a population mainly of Libyan Berber origin and engaged in agriculture (Wagner 1987). Prior to the introduction of Christianity, religion was a form of paganism: late Egypto-Hellenic syncretism. The Roman period is considered to have been unusually prosperous for the oases, but at a price. Intensive farming and the use of new irrigation techniques, as Thurston wrote, “sucked the oasis dry of easily accessible water” and meant that during the Byzantine period Dakhla was “unable to provide more than a subsistence living for a greatly reduced and declining population” (2003, 320). That did not preclude the spread of Christianity to the oases, with evidence of a dominant Christian presence persisting up till the 10th century: four centuries after the Arab (Muslim) conquest of Egypt.

By 644 AD the Arabs had already conquered all of Arabia, the Syrian and Egyptian parts of the Byzantine Empire, and parts of Persia. The decline of the oases under the later Roman and Byzantine empires continued into the first few centuries of Arab rule: wells were not maintained, and the population was offered no protection against Bedouin raids, which resulted in emigration to safer areas and a relative depopulation of the oases between the 11th and 15th centuries (Fakhry 2003; Beadnell 1909). During the 14th and 15th centuries (towards the end of Mamluk rule) the oases experienced a second period of prosperity. The outcome of this period has been likened to that of its earlier Roman equivalent in that the extensive use of water was not accompanied by a concern for the “long-term consequences for the residents or the productivity of the land itself” (Thurston 2003, 324; see also Keall 1981). It was during this period that the medieval walled city of El-Qasr was built in Dakhla. It survives to this day, unlike the similarly named city in Farafra, which collapsed after heavy rains in 1945. Here is Thurston describing El-Qasr in Dakhla:

Heavy acacia gates divided the city into neighbourhoods, each of which would have been the precinct of a tribe-based clan. The gates were locked at night, as were the main city gates, against the threat of raids, which continued into the 19th c. (2003, 323).

The Ottoman conquest of 1517 signalled the end of Mamluk rule in Egypt. The three centuries that followed witnessed a decline in farming areas, and the oases were not spared. They were also three centuries for which we have no information on the oases, a fact that has been linked to the general cultural decline in Egypt under Ottoman rule (Kubiak and Zabowski 1995). It was only with the rise of European exploration in the early 19th century that the oases began to be mentioned again in various topographical and geological works. Absent from the work of these explorers, which is perhaps expected considering their interests, are consistent observations or accounts of social and cultural life in the oases. Where such accounts exist, as in W. J. Harding King’s (1925), they take the form of travelogues with superficial observations that would now seem to us ethnocentric and biased, if not racist. The paucity of historical material has only been partly rectified by Ahmed Fakhry, the first Egyptian Egyptologist. Fakhry (2003) visited Farafra and Bahariyya several times between 1938 and 1968 and wrote in an informal style about archaeological findings, alongside brief comments on life in the oases.

In 1938, according to Fakhry, the only link between Farafra and the neighbouring oases (Bahariyya and Dakhla) was a four-day camel trip. There were no modern means of communication or any form of “mechanized transport”. There was no electricity, and in the whole oasis only three watches existed. Houses, as Fakhry described them, were similar in form to those found on the edges of cultivated areas in the Nile-valley: a central courtyard with a dwelling in one corner, a small garden, and a well. The inhabitants, he observed, had a different dialect from those of Bahariyya and Dakhla, a fact he attributed to their Bedouin blood. They were more religious and stricter than their neighbours; women did not mix with strangers, unlike in Bahariyya, which had been a place of forced exile for Siwan women accused of adultery. He described a total absence of what he called “European clothing” in 1938, but by 1968 teachers and the few government officials in the oasis could be seen in trousers and coats. This seemingly minor observation was an indication of significant changes that had begun to happen in the oases of the Western Desert. The “New-Valley project” had already been conceived by the Egyptian government, and only a few years into the project, as he was leaving Farafra in 1968, he wrote:

I thought of the rapidly changing life in the oasis and wondered how long the inhabitants could keep their old traditions alive. The concept of bringing several thousand immigrants here from the Nile Valley when the new irrigation projects opening (sic) thousands of feddans take place, saddened and distressed me. Will the honest, peaceful citizens of Farafra be pushed into a corner by the new, aggressive immigrants, as has happened in Kharga? (2003, 180)

The New-Valley Project

Land reclamation in the Western Desert began in the 1960s under the guidance of President Gamal ‘Abdel-Nasser. The idea was to counter overpopulation in the Nile-valley by reclaiming desert land, creating new villages, and boosting agricultural production (Gudowski and Raubo 1995). The Western Desert was seen as a land of opportunity that could potentially solve the problems of a “new Egypt” freed of British and monarchic rule and set to modernise and develop:

Escaping the confines of the narrow and over-populated Nile-Valley into the wide expanse of Egypt’s land is the only way to build a future for Egypt, by absorbing the increasing population, and opening wide horizons for development and progress (Abu-Zayd 1997, 17).

Throughout the 60s and 70s hundreds of wells were dug, bringing to the surface millions of cubic metres of the non-renewable fossil water under the desert (Thurston 2003). Roads were built connecting the four oases with the Nile-valley (Kuzak 1995; Mills 1999). The project slowed down during the two wars with Israel and the six years in between (1967 to 1973), and large-scale reclamation only properly began in the 1980s. Despite intense development, the New-Valley project did not fulfil the grand ambitions of the Egyptian government, which included an estimated post-reclamation population of 2 million; the current population of the area originally included in the project is 210,352 (CAPMAS 2007). Part of the reason was the expense of land reclamation and the reluctance of Egyptians to leave the Nile-valley for the desert. Concerns about the falling water table further slowed down the digging of new wells. The water situation is such that unless alternative projects for transporting water from the Nile succeed, the water under Dakhla oasis could be depleted in 50 years, in which case the oasis will gradually cease to exist (Thurston 2003).

A recent document issued by the State Information Service entitled ‘New-Valley Panorama’ (Abu-Zayd 1997) acknowledged that development in the New-Valley has not fulfilled the “dreams” of Egyptians and has failed to mine the huge potential of the region. Mubarak’s government has been attempting since the mid-nineties to rectify this situation by constructing a canal, the ‘New-Valley Canal’, which would carry Nile water from Lake Nasser in the south of the country for a distance of 850 kilometres, passing Uweinat, Kharga, and Dakhla and terminating in Farafra. The grand plan involves not just land reclamation but the construction of eighteen new towns, a hundred industrial sites, and a number of tourism projects. The ultimate goal is to build “a new civilisation, parallel to that of the old [Nile] Valley”. The project has met with immense technical difficulties, not least the intense heat in this part of Egypt, which salinates the water before it can cover a fraction of the intended distance; the reliance on underground water continues.

Dakhla Today

Today, Dakhla is a major oasis with 33 villages and an urban centre, Mūt, with a population of ten thousand. Agriculture remains the main activity, although the inhabitants supplement their income through various other means. All families own land, and the men usually take turns tending it. Prior to the New-Valley project the local leader, the ‘Umda, controlled most of the land and was the one who rented and sold it to others. In the sixties this changed: the government started digging wells and offering local families and immigrants 5 feddans and a cow for symbolic prices. A particularly well-to-do family could dig its own well; otherwise families must rent water from the government for a symbolic fee of forty Egyptian pounds a year.

Water is rarely owned by a single family, and a well is shared among several plots of land. Owners must observe a specific water rota, and intentional or inadvertent violations of the rota may lead to problems, which are usually resolved locally and very rarely involve the police. When a well dries up and the family cannot afford to dig deeper or buy a bigger pump to pull the water, the owners must accept the slow death of their land. Land owned by families tends to be small, and the families grow primarily for subsistence rather than profit. Many families own livestock kept in a shed on their plot of land. Once a year they may sell a cow, which brings in about 4,000 to 6,000 Egyptian pounds.

Contrasting with the small plots of land that locals own, the government has been offering areas of up to thousands of feddans for investment. Huge plots of land, in Uweinat and to a lesser extent Dakhla, have been bought by Egyptian and foreign investors who have launched major projects with the sole purpose of profit. Occasionally I saw huge trucks passing through Mūt carrying produce straight to Suez and Alexandria for export. These projects are subject to modern irrigation standards in order to reduce water loss. Recently the government began to interfere in what the families could grow. Rice was banned a few years ago in view of the huge water waste involved in growing it. This has been met with disdain by the locals, who are big rice eaters and must now buy it at a greater expense from Asyut, the nearest city. Concerns regarding the water supply do not strike a chord here. Most people reject the scientific evidence that the underground water is non-renewable, and many argue that it is replenished directly from the Nile. The government’s grand project of the ‘New-Valley Canal’ is dismissed as mere propaganda. The water issue for the people of Dakhla is a financial issue: enough money means more wells, and pumps that can bring water from deeper levels.

The fact remains, if you exclude a few families, that agriculture on its own cannot provide a family’s sole financial sustenance. Many men and women are employed as civil servants in the various government institutions that accompanied the development of the region. These positions are sought after for the consistent income they offer and the guarantee of a pension. Otherwise people get by working multiple jobs. A man will work as a civil servant in the morning and tend the land in the evening, or early in the morning before heading off to work. Trading in livestock, running a shop or a coffee-house, or driving a micro-bus are all common sources of income. Tourism is not as big as in Bahariyya or Farafra, and consists mainly of arranging safaris for European tourists to the Great Sand Sea and El-Gilf Al-Kabeer. Free time is rare amid multiple jobs, and the few free hours of the day are spent at the coffee-house or at home. The reality remains, however, that the land must be preserved and tended regardless of any other jobs a man takes on. A father expects his sons to take over the land from him as he grows older.

Despite the constant struggle to make ends meet, many of the older inhabitants – those who recall life in the fifties – acknowledge the significant improvements brought by the New-Valley project. Back then food was scarce, and the absence of electricity constrained the range of edible foods to what could be dried and stored safely. The scarcity of water required that vegetables and fruit be brought from Asyut along desert tracks at huge expense. The construction of roads linking the oasis with the Nile-valley meant that more people could travel to Asyut and Cairo to seek work and send money back to their families. More recently, some men have taken to economic migration to Kuwait and Saudi Arabia, to the detriment of the land they leave behind for their neighbours to tend. However, the New-Valley project also resulted in a huge surge of immigrants, mostly men, descending on the oasis. To this day the inhabitants of Dakhla clearly distinguish between themselves – the natives – and the immigrants. Families who have lived in the oasis for over fifty years are still considered outsiders, and even though they have married local women, the preference remains to “give your daughter in marriage” to a local.

Up to, perhaps, two decades ago the old-town represented the physical boundary of the villages. Within its high gates the inhabitants lived, stored their grain, married, died and, when required, sought protection from marauding Bedouins and others. Today the old-town no longer fulfils its historic role, since the majority of the population have descended to the plains below, leaving it to crumble in disrepair. In any case it could not have accommodated the population increase, which perhaps explains why people in the villages continue to live in mud-brick houses. A few families continue to live in El-Kharaba (the ruins, the name given to the crumbling old-town), and those who have moved out continue to use their old houses to keep chickens and pigeons. Today, El-Kharaba is a cornucopia of interlocking dwellings, shaded corridors, and mazes that form an asymmetrical, organic mass with a fluid horizontal and vertical perspective. The houses, which are made entirely of mud-bricks, palm reeds, and natural wood, are now mostly in ruins, although you can still see the occasional standing house in perfect condition, with electric lights and a satellite dish on top. The ruins also lend themselves to more ominous uses: drug users and seekers of illicit sex frequent them at night, unmoved by the dangers of being caught or the risk of unsettling the evil spirits that tend to inhabit deserted spots.

The descent from Old Mūt to the plains below occurred gradually over a twenty-year period, although some families do not want, or cannot afford, to leave their mud-brick house for a concrete one. My friend Tariq told me that the first concrete building in Mūt was raised about thirty years ago and was a government site. When I asked him when he moved out of the old-town he paused and said: “you can only extend your feet as far as your covers go”. They moved out only three years ago, and this, in Mūt, is very recent. Moving out of the old-town has acquired status significance over the years: everyone is expected to want to build and live in a concrete house. Even though people complain about the concrete houses – the manner in which they lock in the heat, unlike the mud-brick dwellings, which are cool in the middle of summer – the majority agree that modern houses are more comfortable and require less maintenance.

The concrete and steel construction surge that accompanied the descent from Old Mūt still shows no sign of abating. All over town you see one- and two-storey buildings with concrete pillars jutting out of the roofs and exposed bare steel rods rising upwards, the whole construction eerily resembling a helpless upturned insect. Such constructions are scattered all over town, giving the place the feel of a perpetually developing building site. Later, when I understood the economics and pragmatics of house construction, I appreciated why all the houses must have these unsightly pillars and steel rods jutting out of them: the pillars stand testament to deficient funding and the intention of “raising another floor”. Families in Dakhla continue to live together. Whereas the older mud-brick dwellings were organised around a central courtyard surrounded by several rooms – one for each family – the newer concrete houses allow more privacy, since each family lives in a separate flat. Brothers tend to share a house, and as they approach marriage age, and once a few thousand pounds are amassed, concrete and steel can be bought from Asyut and a further floor built. Fathers build for their sons, while daughters usually move in with their husband’s family.

Perhaps the best spot from which to observe Mūt is the top of El-Kharaba. Up there you can see concrete buildings stretching all the way to the green fields, beyond which the desert and mountains start. Most of the buildings have the obligatory steel rods jutting upwards; few are painted on the outside, most showing the bare manufactured red brick. The amorphous brown mass that is Old Mūt gives way to a chaotically arranged red mass of houses, occasionally interspersed with palm tree groves, until finally it gives way to the fields proper and then the desert. From this vantage point Mūt is a simple collage of four colours: brown, red, green, and yellow. The colours of the naturally sourced materials that make up the mud-brick dwellings blend in seamlessly with the green fields and the yellow desert, but the new Mūt, with its manufactured red bricks and steel rods, breaks this harmony.

In addition to agriculture, work, and housing, modernisation has touched other areas of life in the oasis. The literacy rate here is one of the highest in the country (81.2%). There are several primary and secondary schools in Mūt alone, although seekers of higher education must head to Kharga or further afield to Asyut and Cairo. Secondary education may take one of two routes: the conventional academic route that allows the graduate to go on to university, and the more common route of specialised education, where students are trained in a practical discipline of their choice: agricultural, industrial, religious, or commercial.

Technology has caught on in the oasis rather quickly. Mobile phones have been ubiquitous since their introduction nine years ago. Television and electricity were introduced in 1982, and satellite dishes made an appearance in the past seven years. Whether at home or at the coffee-houses, people spend significant time watching Egyptian, Indian, and foreign films, as well as American wrestling, which is extremely popular with the men here. A number of religious channels offer an alternative for the more conservative. There are a few internet cafes, but it remains quite rare for a family to have their own connection at home, primarily because personal computers are still quite expensive. People in Dakhla remain ambivalent about the effects of technology on society. A primary school teacher complained that children are no longer as focused at school; they are distracted by what they see on the Internet. Mobile phones are blamed for limiting family visits: prior to their introduction, visits among the extended family would happen at least once a week, and now people make do with a phone call. Elders blame television for changing people’s expectations regarding marriage: “The youth now want to fall in love and live in their own separate apartments; they are copying what they see in the Egyptian [read: Cairene] serials and films”.

Health care has also seen significant infrastructural development. Mūt has a central hospital with several medical and surgical specialties in addition to an accident and emergency department, although it has no psychiatry or neurology departments. There are a number of private clinics and two private hospitals. The consensus here is that doctors in Mūt are largely incompetent. Seriously ill individuals are frequently taken to Asyut, 600 kilometres away, where there is a teaching hospital with well-known specialists. Otherwise healers are consulted for a wide range of problems, including those which medical doctors would recognise as falling within their own domain.

This sketch of the major changes that accompanied the inception of the New-Valley project shows that Dakhla in the first decade of the 21st century is a different place from what it was prior to the sixties. Elements of modernisation have touched all aspects of people’s lives, extending to the intimate areas of speech and dress. Under the influence of immigrants, television, and local teachers (many of whom are from the Nile-valley), the Dakhlan dialect has all but disappeared, increasingly resembling the Cairene dialect, a fact that Woidich (2000) had already observed towards the end of the 90s. Local dress is largely indistinguishable from that in the rest of Egypt, even among older women, whom Rugh (1987) noted were the final preservers of traditional dress. These changes are most evident in Mūt, which I find it accurate to describe as an urban centre with a village feel. The changes, which started rather abruptly about forty-five years ago, have affected people’s lives, creating new problems and opportunities and constituting specific identities.

Cairo 2010

The Dakhla Oasis: Stories from the ‘field’ (I)

Marriage and Reputation

Tariq lives with his mother in a concrete house in the old part of town. They moved out of El-Kharaba only three years ago, a fact that he declares with slight embarrassment. A month ago his wife and three-year-old daughter would have been living with them, but they had fallen out and she moved in with her elder brothers; he remains reluctant to bring her back home. I first met Tariq at the coffee-house at El-Midan (the square dominating the old part of Mūt); we instantly became friends. Tariq’s day starts at seven in the morning: he cycles to the government site where he rents out tractors and loaders by the hour. Like most civil servants, he is expected to sign in at eight-thirty in the morning and not to leave before two in the afternoon, and like many government employees he tries to negotiate this in order to have more time “earning his keep” through various other jobs.

Like most men in Dakhla, Tariq shares the responsibilities of the land with his brothers. A few days a week he has water duties, which involve moving around the panels that regulate the water supply to guarantee that others have their share. On Sunday and Thursday evenings he coaches table tennis at a modest ‘youth centre’ on the edge of town, a feat that involves a few informal strokes with high school students interspersed with several cigarette breaks. On Friday and Saturday mornings, his days off, he works a shift at a coffee-house that belongs to his friend, ‘Ali. I frequently accompanied Tariq to the fields to help him with the water, and to the youth-centre for a few informal table tennis strokes, and on those early weekend mornings I joined him at the coffee-house, where I would prepare my Turkish coffee and take advantage of the scarcity of patrons to chat with him. On one such morning I first learnt that he had been married once before.

It would be hard to overestimate the importance of marriage here. In my first week of fieldwork I attended Friday prayers at the “big mosque”. The topic of the sermon was marriage, and the imam was listing its benefits: “to marry is to complete your religion; marriage is half your faith. It provides you with companionship and offspring. It is the natural progression of life”. For the men and women of Mūt it is frequently their only sexual outlet. Hussein, a resident of the village of Asmant, told me when I visited him to talk to his mentally unwell and unmarried sister, Fayza, that spinsterhood is a mosiba (disaster).

Tariq’s first marriage was not consummated; they signed the contract but she continued living at her father’s house. Tariq was not pleased with her conduct: she did not ask his permission before leaving the house, she met her brother-in-law with her hair uncovered,[1] and on occasion he saw her conversing with male cousins and relatives in the street. Tariq did not receive the expected support when he approached her father complaining of his daughter’s conduct. “Until you’ve entered,”[2] he told him, “the final word remains with me”. Tariq refused to accept this, and his mind was set on divorce, but not before “teaching her a lesson” and leaving her “hanging for a few weeks”. He stopped visiting them at home, cut all contact, and took no steps towards a divorce. The bride’s family contacted his mother, urging her to convince her son either to return or to divorce their daughter, and not to leave her in this disgraceful situation. “Don’t give them a reason to hurt us”, Tariq recalls his mother warning of the possibility of retaliatory magic: “divorce her now, and find another woman who can make you happy”. He finally complied.

A couple of years later he married his cousin and had a child with her. She moved in, as custom dictates, with him and his mother. Over the few years of their marriage it became apparent that they did not get along well. Their life was dominated by frequent quarrels, disagreements, and fights, usually over trivial matters. Sayyed, Tariq’s friend, believes their personalities do not match: “they are both hot-blooded”. On several occasions she would leave the family home and return to her brothers, but this last time he vowed not to make any effort to get her back. Despite reconciliatory interventions by her brothers, he remains reluctant. “The very act of leaving the house like this,” he told me once, “is unforgivable; here it’s a big thing and could lead to divorce”. Both Tariq and Sayyed think the situation would have been different if her father were alive: “no one in the family has the authority to tell her to return home; this is something the father should be involved in”. A common story, one that I heard several times and which I finally learned refers to Sayyed’s second cousin, drives the point home:

My relative had been having problems with her husband and returned to her father’s house. She asked for a divorce, and a meeting was arranged. I was there, along with the father, brother, and husband. The father asked his daughter: ‘I will divorce you from him right now, if you can do one thing: strip naked here in front of us’. The girl was shocked and couldn’t understand why her father was saying this. Then he suddenly stood up and tore the dress off her. The girl, screaming, ran to her husband and took cover in his arms. The father then looked at her and said: ‘Your husband, and not me or your brother, is your satr [shelter/protection]; you must return with him’.

His wife’s brothers, who also happen to be his cousins, played a role. One of them urged Tariq to treat her gently and to take her out for walks more frequently so she does not feel “suffocated” at home. Another, who happens to be more conservative, was opposed to the idea of divorce, since it is religiously permissible but discouraged (abghad el-halal). The third raised the possibility that someone, probably the estranged first wife or her family, had sought a magician to prepare an ‘amal (hex) to separate them: the infamous ‘separation magic’. In fact, he had taken steps and visited a magician, who confirmed the above and requested to see Tariq. I asked Tariq in the presence of Sayyed whether he would go: “I don’t want to go down this route; it’s haram (prohibited), we must always refer back to Islamic teaching, and engaging in magic is haram”. I said he would not be engaging in magic, only undoing it, but he was not convinced: “the magician is probably a charlatan”. Sayyed interrupted, saying that magic is mentioned in the Qur’an and definitely exists. He suggested consulting a Qur’anic healer: “go to Sheikh Rayyes and he will read on you, there is nothing haram in that!” Tariq refused all our suggestions. At this point Sayyed looked at me and said: “he has decided, he doesn’t want her, he wants a change, it’s been three years now and he is bored; we shouldn’t try and solve the problem, we should find him a new wife”. Tariq was silent, but smiled at us meaningfully, a smile of complicity.

That Tariq should be bored is not surprising. Excepting minor and infrequent youthful sexual skirmishes, most men here first encounter sex on their wedding night. Frequently their spouse will be the only sexual partner they ever have. He once told me that the novelty of sex with your wife dissipates even after the first time. I half-jokingly suggested mot’a marriage,[3] but was told that marriage in Dakhla is no longer that simple; the bride’s parents always ask for an apartment and mahr (dowry). “In the old days”, Tariq explained, “it was halal [permitted] to sleep with maids that worked at your residence, but we can’t do this now; slavery is finished”. This narrows his options down to a second marriage. According to Islamic law a man can marry up to four women, but only under strict criteria, which include that his first wife must know and agree; if she does not, she has a right to an immediate divorce. Tariq’s predicament is complicated by the fact that he does not wish to divorce his wife, yet remains obliged to declare his intention to marry a second time, upon which she will certainly ask for a divorce. A divorce would complicate several things for him: his rights to see his daughter and his financial obligations, and it would constitute a blow to his reputation: a man who has been divorced twice is hardly eligible marriage material.

Divorce and second marriages remain rare in Dakhla and, in the absence of obvious and pressing legitimating reasons, are frowned upon. This was apparent in the response to Hajj Sa’ad’s marriage to a twenty-four-year-old girl from Kafr El-Shiekh, one of the Nile-Delta governorates. Hajj Sa’ad is in his late forties and has been married for the past twenty-five years to a lady known for her good manners and religiosity; no one could understand why he would bring another woman into the marriage. I joined him and a friend once at the coffee-house. His friend was reprimanding him for marrying a second time: “since your marriage a month ago, people in town have been infuriated”. Hajj Sa’ad reasoned with him: “people don’t know the circumstances; she wasn’t taking good care of my mother, who is an old woman now, what am I supposed to do? And my children are older and busy with their lives. I told her I will marry another woman and I gave her the choice; she could ask for a divorce, or stay in the house with us, or I could get her a flat and move the furniture into it. She asked to move out but didn’t mention the divorce. And now two weeks later I get a letter from court saying she wants to terminate our marriage”.

In any case the reputation of Hajj Sa’ad, a man who has sustained a marriage for over twenty years, will not be affected much by his decision to marry again, but this is not so for a young man like Tariq, or indeed for young men in general. Reputation in Dakhla, for men and women alike, is a fragile attribute, subject to various factors that may elevate or diminish it. A positive reputation in men involves desirable traits such as generosity, kindness to parents, and pre-marital celibacy, in addition to financial ability. In women, virginity is paramount and contact with other men problematic. For both sexes, physical integrity, mental stability, and the absence of chronic illness are crucial. This is Mahdi, a twenty-six-year-old chicken and vegetable trader, telling me about a potential bride:

A few months ago I was attracted to this girl who works at a shop by the hospital. Day by day I noticed that every time I passed by her house on my way home, if she was standing outside she would immediately run inside. I met her at the shop and asked her why she runs away every time she sees me; have I done something wrong? She said she does that with any man passing by the house. I liked that. I went to speak with her mother and told her that I am interested in her daughter, and we agreed that we would get to know each other for a short time by talking on the mobile phone before informing the father or anyone else about my intention to marry her. This is more common now, although not everyone does it. It’s not like the old days when you wouldn’t have spoken to the bride at all. It’s better this way; if we find that we are not compatible then things end without the whole town knowing about it, and so they don’t count another engagement against us. As the engagements pile up on you, any future family will demand to know why the engagements failed, and you could develop a reputation around town as a difficult, unstable person. It’s worse for the woman. It’s not like Egypt [Cairo], where you could get engaged without the whole town knowing; here news travels immediately, and your reputation is always at stake.

A few weeks later he formally proposed, and the father followed the customary procedure of “asking around” about him. Fortunately for Mahdi, he is neither involved in drug use nor indolent; a “hard-working, honest man” is how he describes himself. The engagement went ahead. For others things may not fare so well. Mohammed Kamal is a young man who has been mentally unwell for several years; we will meet him in later chapters. A few months into my fieldwork I discussed with Hajj Khedr, the owner of a local stationery shop, whether Mohammed had a chance of getting married one day:

He doesn’t really. If he stays reasonable, calm and settled like he is now for the next five years he may have a chance of marrying, and then not from Mūt, but from the villages, where people might not know his past, or if they do, wouldn’t have seen him in the state people in Mūt have. Even his brother had to marry from outside Mūt. People here worry that problems like this run in the family – the branch extends, and they are wary of marrying their daughters or sons into the family. But there will always be men and women who for some reason or other will settle for a husband or wife with such a history; those who have missed the marriage train or those who don’t have a particularly good reputation.

Behavioural and psychological disturbances constitute such a major blow to a person’s reputation that a disgruntled son allegedly feigned ‘madness’ in order to prevent his father from bringing home a step-mother:

I learnt today about a sixty-year-old man who had lost his wife three years ago and who recently sought marriage in order to have someone take care of him and the house. His youngest son, the only one still living with him, was opposed to this: he didn’t want another woman to take his mother’s place and have a share in the inheritance. To ruin his father’s chances of marrying, he began to behave in aggressive and bizarre ways in town and to cause trouble. He works in a shop selling chicken and livestock fodder, and he began to be rude to customers, refusing to sell to some people without any reason and causing fights when someone objected. He was frequently shouting around town, cursing the people sitting by the coffee-houses, and once took off his shirt and walked half-naked. Some thought he was possessed, this being the only thing that could explain the sudden change in his behaviour. Tariq thought he was doing this intentionally to ruin his father’s chances of getting married. If a family were to think of marrying their daughter to his father, they would now think twice. First, they wouldn’t want her to move into a house where a disturbed man lives; this would no doubt cause her problems and grief. Second, they would worry that his father too might be unstable. Even if they thought he was possessed and not mentally ill, they would take care and avoid this family. Ever since his father gave up the idea, his son seems to have stopped his disruptive behaviours.

In the quiet squares and streets of Mūt, extreme behavioural and psychological disturbances cannot be missed; they are right there for all to see and hear. A disturbed person can hardly escape being judged by others, but reputation is subject to judgement even in the absence of such dramatic displays. This is because Mūt is a small place; people know each other, gossip is rife at the coffee-houses and among women in the privacy of their homes, and news travels around with surprising speed. This was demonstrated to me one night when I was chatting to some Christian youths who frequent a coffee-house on the outskirts of town. When I returned to El-Midan an hour later I was told by Tariq: “you were sitting with two Christian boys at Sayyed’s coffee-house; see? Your news reaches me”. Perhaps there was a question as to why, as a Muslim, I was socialising with Christians, a point I will attend to later, but the webs of gossip are most dangerous when someone commits a transgression; it is then that the threat to a person’s reputation is at its greatest.

One night while drinking tea with Sayyed and Tariq, the former suggested that we go for a picnic on the dunes just outside town. It was a moonlit night, and the prospect of leaving town for the cool sand was welcome. Beer and mezza would sustain the night, and Sayyed offered to take the drive-of-shame to the only beer shop in town himself. I started walking with Tariq, and mid-way Sayyed picked us up on his motorbike. Drinking alcohol is certainly frowned upon in Dakhla, and we had to keep our wits about us lest someone see us. Up there on the dunes it was definitely safe. Sayyed was the least concerned about this though; he has a turbulent history of drug use, and had previously told me stories of spectacular drug-fuelled shows in town. But the concern was there nevertheless; Tariq put it this way: “the town is small and talk goes around; if someone sees us here drinking, by the time we return to town the word will be that we had women with us”.

Well into our second beer, we saw a seventies jeep coming fast down the sandy track. We expected it to continue around the dune, but it veered left sharply, gained some speed, and climbed up the dune adjacent to ours; we heard a woman let out a playful scream. Initially Tariq and Sayyed thought they were foreigners, but then we heard a woman addressing a child: “come here!”. My companions’ interest was aroused: they wanted to know who was there and what they were up to. They left me for a brief reconnaissance trip, and when they were close enough they saw two children playing in the sand a short distance from the car. The man and woman were not outside and so were obviously in the car. Tariq and Sayyed were convinced the couple were up to some sort of brief sexual liaison, perhaps an engaged couple ‘making out’ and bringing the children with them as an excuse: chaperones let loose to play in the sand while the couple have some private time. This seemed to unsettle my friends; Sayyed wondered why anyone would “commit dirty acts on this pure sand”. I pointed out that we were drinking alcohol, but he dismissed the comparison and said “adultery? I would never do that; everything but penetration, yes, which must be halal (religiously permissible)”.

Perhaps what struck me most in this incident was my friends’ inconsistency. While they complained of the gossipy nature of their brethren, they simultaneously partook in the surveillance of others, and with the intention of exposing them had they known who they were. Several men here have told me with visible pride that they have previously caught couples ‘making out’ in El-Kharaba or other hidden ‘hot-spots’ in Mūt; the same men have disclosed in confidence that they have previously stolen a kiss or a hug from a girl they once knew. One evening, when visiting Hisham at the Dakhla youth-centre, I was expected to intervene and ‘catch’ a couple who were kissing in the dark. While waiting for him to finish some errands, I saw a young man walking towards the gate followed shortly by a girl. He was turning periodically and shouting at her: “walk, quick!” Half a minute later came the night-guard, panting: “catch these two!”

The speed with which news travels around town is guaranteed by the propensity for gossip and the fact that most people know each other, and its efficacy is maintained by the ever-present worry over the implications of a tarnished reputation. To me this presented a serious constraint on personal freedom, a view that others did not share. Tariq asked rhetorically: “what would you want to do behind people’s backs, sex? If you do this here it will be known; you will know people know without them having to tell you; you will see it in the way they look at you. It’s a good thing that people know each other and news spreads; it keeps the dirtiness at bay”. A quintessential example of such ‘dirtiness’, and of the power of the mechanisms that counteract it, was demonstrated to me over the course of my stay through the developing story of a Christian boy and a Muslim girl who were caught in the act.

I first heard about them on a visit to Youssef at his office in El-Sha’arawy primary school. We were discussing the details of a lecture on mental health I was about to deliver to school teachers two days later. The school guard came to the office and asked us if we had heard: “an hour ago a young man and a girl were found at the back of a store; they were undressed and the man was taken to the police station”. The following evening I overheard an independent update on the event: the girl wears a niqab (full face veil) as a decoy, her husband had been working in Saudi Arabia for the past three months, and the consensus was that she was looking for sex. A month later I learnt that her husband had divorced her and she had been forced to leave Mūt and live in one of the villages where people might not know about the scandal. The boy was placed under surveillance, the purpose of which was to protect him: the incident had enraged Muslim men. While this incident demonstrates the serious consequences of sexual transgressions in general, the seriousness of the transgression in this instance was amplified by the fact that the man was Christian and the woman Muslim. What happened was an affront to Islam, and Islam for the people of Dakhla is the global basis of their identity.

[1] This is problematic since, not being a brother or uncle, he remains permissible in marriage.

[2] This refers to the wedding night when bride and groom move to the marital abode and experience their first moment of intimacy.

[3] The literal translation of mot’a is enjoyment or pleasure. A mot’a marriage is a temporary marriage conducted in order to legalise and render permissible the union between a man and a woman primarily to facilitate temporary sexual relations. Nowadays it is rare and is generally frowned upon.

In Defence of Madness: The Problem of Disability

My essay, about to be published in the Journal of Medicine & Philosophy.

I write in defence of mad positive approaches and against the tendency to adopt a medical view of the limitations associated with madness. Unlike most debates that deal with similar issues – for example, the debate between critical psychiatrists and biological psychiatrists, or that between proponents of the social model of disability and those who endorse the medical model – my essay is not a polemical adoption of one side or the other, but a philosophical examination of how we can talk about disability in general, and madness in particular.

You can read the essay here: IN DEFENCE OF MADNESS

And here is the abstract: At a time when different groups in society are achieving notable gains in respect and rights, activists in mental health and proponents of mad positive approaches, such as Mad Pride, are coming up against considerable challenges. A particular issue is the commonly held view that madness is inherently disabling and cannot form the grounds for identity or culture. This paper responds to the challenge by developing two bulwarks against the tendency to assume too readily the view that madness is inherently disabling: the first arises from the normative nature of disability judgements, and the second from the implications of political activism in terms of being a social subject. In the process of arguing for these two bulwarks, the paper explores the basic structure of the social model of disability in the context of debates on naturalism and normativism; the applicability of the social model to madness; and the difference between physical and mental disabilities in terms of the unintelligibility often attributed to the latter.

Beyond Dysfunction: Distress & the Distinction Between Social Deviance & Mental Disorder

Over the course of last year I have been working on a small project with Rachel Bingham examining the possibility of distinguishing ‘social deviance’ from ‘mental disorder’ in light of recent work on concepts of health. The result was an essay published recently in the journal Philosophy, Psychiatry & Psychology (21:3, September 2014).

Joanna Moncrieff and Dan Stein wrote commentaries on our essay, to which we responded in a short piece published in the same issue as the original essay.

In our response to Moncrieff and Stein we found it necessary to point out that in the writings of some critical psychiatrists and psychologists there is a problematic conflation of empirical with conceptual issues in relation to ‘mental disorder’. That section is reproduced below. Note that Criterion E is the final clause in the DSM definition of mental disorder. It states that a mental disorder must not solely be a result of social deviance or conflicts with society.

Mental Disorder: Separating Empirical From Conceptual Considerations

Let us begin by revisiting the conceptual basis of attributions of mental disorder. Criterion E is not, as we argued with Stein et al. (2010, 1765), conceptually necessary, but is of ethical and political importance given the historical context. Thus, notwithstanding the other criteria, a condition can only be considered for candidacy for mental disorder if “dysfunction” is present. What is a dysfunction? As Moncrieff puts it, there is a tautology in the definition of mental disorder where it is stated that a mental disorder reflects an “underlying psychobiological dysfunction” (Moncrieff 2014). Moncrieff argues that this is flawed because underlying processes have not been established, which renders the definition tantamount to saying that a dysfunction is a reflection of a dysfunction: a definition that adds nothing to our knowledge.

Here Moncrieff follows Thomas Szasz in finding a lack of resemblance to physical disorder to be the primary problem with the concept of mental disorder (see Fulford et al. 2013).1 In pursuing this, the critical psychiatrist not only fails to see the complexity of the concept of physical disorder, but also commits the same error as the biological psychiatrist. The latter implies that a long-awaited complete neurochemistry of mental health conditions would solve the conceptual problems. The former—the critical psychiatrist—implies the converse: that the absence of proof for the “existence of separate and distinct foundational processes,” as Moncrieff (2014) puts it, proves that mental health conditions are not disorders. As we have argued elsewhere, identifying the biological basis for a set of behaviors or symptoms does not in itself pick out what is pathological or disordered: for example, a complete description of the neurochemical states governing sexuality would not permit the inference that homosexuality is a disorder, any more than discovery of the neural correlates of falling in love or criminality would make these mental illnesses (Bingham and Banner 2012). Neurobiological changes—their presence or their absence—tell us about conditions when we find them by other means, but they do not tell us what is or is not a disorder. The same arguments could be run for underlying psychological processes. Consequently, emphasis on scientific progress, or on the failure to progress, in understanding the neurobiological correlates of mental health conditions does little to advance the conceptual debates, a point that may help to explain the impasse in the ongoing exchange between critical and biological psychiatrists.

Thus, although Moncrieff is right in pointing out that the term ‘dysfunction’ is redundant in the definition of mental disorder, she is wrong about the reason why this is so. It is not, as she claims, due to the point that no “separate and distinct foundational processes” (2014) that can ground dysfunction have been discovered empirically. After all, this leaves her open to the simple response that they actually have been, a response many biological psychiatrists do offer. The redundancy of the term ‘dysfunction’ in the definition of mental disorder is a result of conceptual analysis (and not empirical evidence), whereby it has not proven possible to define dysfunction in a way that excludes values. Here, we follow Derek Bolton in the view that once we “give up trying to conceptually locate a natural fact of the matter [dysfunction] that underlies illness attribution… then we are left trying to make the whole story run on the basis of something like ‘distress and impairment of functioning’” (2010, 332). We are left then with those things that matter in real life, the reasons that lead to healthcare being sought: usually the presence of significant distress and disability.

This is what the terms ‘dysfunction’ and ‘mental disorder’ pick out once we achieve some clarity on their referents. Stein is clearly aware of the problems inherent in defining dysfunction. However, somewhat surprisingly, the assumption that we can talk of ‘dysfunction’ over and above experienced factors (distress and disability in particular) runs through Stein’s commentary. In other words, although Stein has acknowledged the conceptual problem, in places he still writes as if there were a clear definition of dysfunction, without telling us what this would be. For example, he describes “situations when there is evidence of dysfunction, but an absence of distress and/or impairment” and gives the example of tic disorders, which have no “clinical criterion (emphasizing distress and/or impairment)” (Stein 2014). We would argue that, despite the lack of explicit acknowledgement in DSM, tic disorders enter the manual because of their association with clinically significant distress and disability. It is important to avoid confusing the empirical questions (e.g., Why do people have tics? Can people have tics and not be distressed?) with the conceptual questions (e.g., When is a tic a disorder? Can tics be disorders if they do not cause distress or impairment?).

A further potential pitfall is to conflate the technical use of ‘dysfunction’ with the ordinary use of that term. This might occur where, on the one hand, we perceive a ‘dysfunction’ but, on the other hand, we are unable to say what the dysfunction consists of. When Moncrieff writes that dysfunction and distress are not co-extensive, because “people may neglect themselves and act in other ways that compromise their safety and survival without necessarily being distressed,” she is offering a description of behavior many would consider ‘dysfunctional’ in the lay sense (2014). Considered as a basis for conceptual analysis, however, this does not illuminate any “underlying psychobiological dysfunction”, which previous definitions aspired to do. Indeed, it is somewhat surprising that Moncrieff provides this counterexample rather than sticking to her argument that dysfunction in fact does not exist. In citing safety and survival, Moncrieff’s phrase does resemble the evolutionary theoretic approach (notably described in Wakefield’s Harmful Dysfunction Analysis), which, as has been discussed widely elsewhere and noted in our paper, has fallen out of favor owing to problems with evolutionary theory specifically and naturalistic definitions in general. What of importance is left in Moncrieff’s putative definition if not underlying psychobiological and evolutionary dysfunction? We would argue: only the harm or threat of harm experienced by the individual, whether that harm is cashed out as distress and disability or as some other similar negatively evaluated experienced factor.

Response to the commentary on ‘A Critical Perspective on Second-order Empathy’: Phenomenological psychopathology must come to terms with the nature of its enterprise as a formalisation of folk-psychology (and the permeation of this enterprise with ethics)

[A response to the commentary by Jann Schlimme, Osborne Wiggins, and Michael Schwartz on my essay published in Theoretical Medicine and Bioethics, April 2015 (36/2).]

In a recent polemic against certain increasingly dominant strands of phenomenological psychopathology, I launched a critique of the concept of ‘second-order’ empathy. This concept has been proposed by prominent psychopathologists and philosophers of psychiatry, including Giovanni Stanghellini, Matthew Ratcliffe, Louis Sass, and others, as a sophisticated advancement over ‘ordinary’ or ‘first-order’ empathy. The authors argue that this concept allows us to refute Jaspers’ claim that certain psychopathological phenomena are un-understandable, by demonstrating that theoretical sophistication allows a ‘take’ on these phenomena that reveals them as meaningful in the context of the person’s ‘life-world’. In my essay I argued that, given its philosophical commitments, the second-order empathic stance is incoherent, and given the constraints it places on the possibility of recognitive justice, it is unethical. The commentators take issue with both these points, to which I now respond.

First critique: ‘Psychopathology is not first philosophy’

In a succinct yet accurate summary of the first part of my argument the commentators write:

Rashed first addresses the issue of the feasibility of psychopathologists engaging in second-order empathy with persons with psychotic experiences/schizophrenia … [He] marshals textual evidence that psychopathologists can only make their case for second-order empathy by showing that it requires the performance of the Husserlian ‘phenomenological [transcendental] reduction’. Then, by citing phenomenologists such as Merleau-Ponty, as well as developing his own arguments, Rashed maintains that phenomenologists themselves do not agree that the phenomenological reduction is even possible. Assuming now that this conflicting reasoning demonstrates the impossibility of performing Husserl’s reduction, Rashed concludes that second-order empathy is impossible (because such empathy presupposes the successful performance of an impossible reduction).

Now their critique: the commentators begin by pointing out that the “‘transcendental reduction’ is designed to reach the level of a ‘transcendental consciousness’, which is the subject matter for a ‘first philosophy’ (namely, transcendental phenomenology) [that] can supply the foundation for all of knowledge”, a characterisation with which I am in agreement. I would go further and state that I consider, together with a long line of modern philosophers from Hegel to Wittgenstein, that such a project cannot work: we cannot get behind knowledge in order to establish the grounds for certainty of knowledge. As Hegel put it in his Logic, to aim to investigate knowledge prior to attempts to know the world is “to seek to know before we know [which] is as absurd as the wise resolution of Scholasticus, not to venture into the water until he had learned to swim”. The commentators then go on to state, in criticism of my essay, that psychopathology is not ‘first philosophy’. To examine, as I do, the “quarrels among phenomenological philosophers about the founding level of phenomenological inquiry” and the possibility of the transcendental reduction, is to burden psychopathology with irrelevant problems. Hence, they write, psychopathologists “can breathe a deep sigh of relief”. I suggest they hold their breath. Psychopathology is not ‘first philosophy’ – I wholeheartedly agree with this statement – but in order to establish its basis and validity, phenomenological psychopathology helps itself to the entire Husserlian philosophy, and therein the problem lies.

What is psychopathology? It is a formalisation of abnormal folk psychology: it is the meticulous documentation of mental states and their connections – or lack thereof – and in this sense has no special claim to expertise on mental states except in so far as meticulous documentation can be illuminating. Put differently, psychopathology cannot overstep the soil or ground from which it arises – namely, folk psychology – and claim knowledge of the supposed ‘true’ nature of ‘abnormal’ mental states. But that is precisely what contemporary phenomenological psychopathology wants to do. It is not content with psychopathology being a formalisation of folk psychology and hence dependent on it; it wants psychopathology to be a ‘science’ that exceeds folk psychology and from which the latter can learn. In order for psychopathology to be a ‘science’ it claims a theoretical basis that is not available to folk psychology. It establishes its credentials as a ‘science’ by helping itself to the entire Husserlian philosophy: it helps itself, in particular, to the concept of the ‘transcendental reduction’, without which the proposal for ‘second-order’ empathy as a mode of philosophically articulated understanding of others would not work. (I argued this final point in detail in my essay: achieving second-order empathy requires as a first step that one suspends the natural attitude and grasps that the sense of reality with which experience is ordinarily endowed is a phenomenological achievement, a move which presupposes the possibility of the transcendental reduction.)

Shorn of its theoretical ‘transcendental’ basis, psychopathology falls back to earth as the discipline which meticulously documents mental states and their connections in accordance with the implicit rules and principles of a particular folk psychology (particular since the rules and principles in question are normative and subject to, among other things, the influence of ‘culture’). Psychopathologists may be better at this than others, but that is because they have made it their vocation, and not because they have somehow ventured beyond folk psychology. Indeed, somewhat ironically, the commentators’ own account of how understanding works proves my argument that all we’ve got is ‘first-order’ empathy, of which the qualification ‘first-order’ can now be removed as there is nothing left to contrast it with:

 Jaspers realized that, in order to apply the phenomenological method (in this less demanding sense), I first need to ‘evoke’ the perspective of the other in my own consciousness. This evocation is not some kind of (‘mysterious’) self-immersion into the other’s psyche, but a meticulous and often strenuous (and necessarily imperfect) hermeneutical reconstruction of the other’s mental life (i.e., drawing on my own experiences and elaborate narrations of the pertinent experiences in order to get a ‘feeling’ for the other’s mental life).

Indeed: empathic understanding involves a “hermeneutical reconstruction of the other’s mental life”, a reconstruction in which I draw upon “my own experiences”. It seems then that the commentators’ disagreement with the first part of my essay is not as intractable as it first appeared to be. However, the important point to reiterate is that phenomenological psychopathology faces a dilemma: either it holds fast to its basis in transcendental philosophy and hence becomes theoretically incoherent, or it abandons its pretensions to be a ‘science’ and hence, as indicated, rests content with what it is: a formalised folk psychology. In my view, given the arguments of the original essay, only the latter option is available. And contrary to what it may seem, that is not a bad position to be in; far from it. The documentation of the various states of the mind, their description, and the search for connections among them is a vocation that cannot exceed folk psychology, but it can certainly make available to the ‘folk’ certain possibilities of human experience and belief of which they were not explicitly aware, and therein its value may lie.

Second critique: ‘Distinguishing methodological from ethical value’

In the second part of my essay I considered the ethical dimension of the second-order empathic stance. I asked if an attitude which emphasises radical difference – as required by this stance – is the right one to hold towards persons diagnosed with schizophrenia. My answer was that it is not, but the reason why this is so is important and deserves restatement. An attitude which emphasises differences is not the right one to hold, not because such emphasis is bad in itself; I would, for example, consider an attitude which emphasises similarity as also potentially problematic. This is because the issue at stake is not the nature of the attitude, but the degree to which the persons who are at its receiving end have had a say in its construction. The reason such a consideration is normatively significant has to do with the necessity of reciprocal relations of recognition for identity formation and self-realisation. To have an academic discipline launching discourses about others, cloaked in the technical jargon of phenomenological philosophy and possessed of the prestige and authority of scholarly argument in general, is to give those others no real chance and no say in how they would like to be represented. This is not a call to ban certain words or discourses – of course not! But it is a call to appreciate that there is no ethically neutral discourse or methodology. Unfortunately, this neutrality is precisely what the commentators seem to be arguing for in their critique of the second part of my paper.

They begin by stating that emphasising differences is important as this may ultimately enable the psychiatrist to understand his or her patients:

On the contrary, we assert that psychopathology emphasizes difference in order to encourage the examining psychiatrist to keep on going in the attempt to understand even when such understanding seems to have ‘reached a brick wall’. Examining psychiatrists should keep on going even when they fear that they have hit a limit inherent in understanding the patient.

Now this argument seems to rest on an assumed value being attached to understanding others. They restate their point again as follows:

It is valuable to be aware of the differences of persons with psychotic experiences/schizophrenia and typically ‘‘normal’’ persons, and consequently, to persist in the task of understanding.

They go on to describe the value in question as a ‘methodological’ value and distinguish this from the “ethical value of the person with psychotic experiences/schizophrenia [which] is the same as the ethical value of the rest of us”. I admit I find such a pronouncement somewhat unusual, as it implies that our methodological approaches towards others can be disentangled from our ethical evaluations of them so long as we insist that they are our equals. If only it were this easy.

Understanding others is not merely of ‘methodological’ value: it is ultimately a core issue in any normative moral theory, and hence much broader. The distinction drawn by the commentators between methodological and ethical value suggests that it doesn’t matter what approaches we adopt towards others as long as we are motivated by understanding them, and never lose sight of the fact that they are our equals. Once seen as a concern with how we should treat others, such a picture appears naïve. For one thing, over and above the need to understand lie the wishes of those we are trying to understand: they may wish to have a say in how they would like to be understood, and in the language and method which they consider more representative of who they are. All this is to say that there is no domain of human interaction that lies, as it were, beyond the ethical. Phenomenological psychopathology cannot hide behind a claim to ethical neutrality, irrespective of whether or not its approach is methodologically valuable.

Mohammed Abouelleil Rashed – May 2015

A Critical Perspective on Second-Order Empathy in Understanding Psychopathology: Phenomenology and Ethics

Article published in Theoretical Medicine & Bioethics 2015

You can find the final version HERE, and the pre-production version HERE

Abstract: The centenary of Karl Jaspers’ General Psychopathology was recognised in 2013 with the publication of a volume of essays dedicated to his work (edited by Stanghellini and Fuchs). Leading phenomenological psychopathologists and philosophers of psychiatry examined Jaspers’ notion of empathic understanding and his declaration that certain schizophrenic phenomena are ‘un-understandable’. The consensus reached by the authors was that Jaspers operated with a narrow conception of phenomenology and empathy and that schizophrenic phenomena can be understood through what they variously called second-order and radical empathy. This article offers a critical examination of the second-order empathic stance along phenomenological and ethical lines. It asks: (1) Is second-order empathy (phenomenologically) possible? (2) Is the second-order empathic stance an ethically acceptable attitude towards persons diagnosed with schizophrenia? I argue that second-order empathy is an incoherent method that cannot be realised. Further, the attitude promoted by this method is ethically problematic insofar as the emphasis placed on radical otherness disinvests persons diagnosed with schizophrenia of a fair chance to participate in the public construction of their identity and, hence, to redress traditional symbolic injustices.

Mohammed Abouelleil Rashed – 2015

Islamic Perspectives on Psychiatric Ethics

My chapter published online at Oxford Handbooks.

It will appear in print in the Oxford Handbook of Psychiatric Ethics, Volume 1, next year.

Abstract

Islamic Perspectives on Psychiatric Ethics explores the implications for psychiatric practice of key metaphysical, psychological, and ethical facets of the Islamic tradition. It examines: (1) the nature of suffering and the ways in which psychological maladies and mental disorder are bound up with the individual’s moral and spiritual trajectory; (2) the emphasis placed on social harmony and the formation of a moral community over personal autonomy; and (3) the sources of normative judgements in Islam and the principles whereby ethical/legal rulings are derived from the Qur’an and the Prophetic Traditions. Finally, the perspective of the chapter as a whole is employed to present an Islamic view on a number of conditions, practices, and interventions of interest to psychiatric ethics.

Click HERE for Pre-Production version