Check out Oxford University Press’ list of articles chosen from across its journals to represent the ‘Best of 2018’.
For other articles, I enjoyed reading Roger Scruton’s Why Beauty Matters in The Monist.
(Introduction to a chapter I wrote with Rachel Bingham. It will be part of the volume ‘Mental Health as Public Health: Interdisciplinary Perspectives on the Ethics of Prevention’, edited by Kelso Cratsley and Jennifer Radden.)
For over a decade there has been an active and ambitious movement concerned with reducing the “global burden” of mental disorders in low- and middle-income countries. Global Mental Health, as its proponents call it, aims to close the “treatment gap”, defined as the percentage of individuals with serious mental disorders who do not receive any mental health care. According to one estimate, this amounts to 75%, rising in sub-Saharan Africa to 90% (Patel and Prince 2010, p. 1976). In response, the movement recommends the “scaling up” of services in these communities in order to develop effective care and treatment for those who are most in need. This recommendation, the movement states, is founded on two things: (1) a wealth of evidence that medications and psychosocial interventions can reduce the disability accrued in virtue of mental disorder, and (2) the claim that closing the treatment gap restores the human rights of individuals, as described and recommended in the Convention on the Rights of Persons with Disabilities (Patel et al. 2011; Patel and Saxena 2014).
In addition to its concern with treatment, the movement has identified prevention among the “grand challenges” for mental and neurological disorders. It states, among its key goals, the need to identify the “root causes, risk and protective factors” for mental disorders such as “modifiable social and biological risk factors across the life course”. Using this knowledge, the goal is to “advance prevention and implementation of early interventions” by supporting “community environments that promote physical and mental well-being throughout life” and developing “an evidence-based set of primary prevention interventions” (Collins et al. 2011, p. 29). Similar objectives had been raised several years earlier by the World Health Organisation, which identified evidence-based prevention of mental disorders as a “public health priority” (WHO 2004, p. 15).
Soon after its inception, the movement of Global Mental Health met sustained and substantial critique. Essentially, critics argue that psychiatry has significant problems in the very contexts where it originated and is not a success story that can be enthusiastically transported to the rest of the world. The conceptual, scientific, and anthropological limitations of psychiatry are well known and critics appeal to them in making their case. Conceptually, psychiatry is unable to define ‘mental disorder’, with ongoing debates on the role of values versus facts in distinguishing disorder from its absence. Scientifically, the lack of discrete biological causes, or biomarkers, for major psychiatric conditions has resulted in the reliance on phenomenological and symptomatic classifications. This has led to difficulties in defining with precision the boundaries between disorders, and accusations that psychiatric categories lack validity. Anthropologically, while the categories themselves are associated with tangible and often severe distress and disability, they remain culturally constructed in that they reflect a ‘Western’ cultural psychology (including conceptions of the person and overall worldview). Given this, critics see Global Mental Health as a top-down imposition of ‘Western’ norms of health and ideas of illness on the ‘Global South’, suppressing long-standing cultural ideas and healing practices that reflect entirely different worldviews. It obscures conditions of extreme poverty that exist throughout many non-Western countries, and which underpin the expressions of distress that Global Mental Health now wants to medicalise. On the whole, Global Mental Health, in the words of the critics, becomes a form of “medical imperialism” (Summerfield 2008, p. 992) that “reproduces (neo)colonial power relationships” (Mills and Davar 2016, p. 443).
We acknowledge the conceptual, scientific, and anthropological critiques of psychiatry and have written about them elsewhere. At the same time, we do not wish to speculate about and judge the intentions of Global Mental Health, or whether it is a ‘neo-colonial’ enterprise that serves the interests of pharmaceutical companies. Our concern is to take the movement at face value by examining a particular kind of interaction: on one hand, we have scientifically grounded public mental health prevention campaigns that seek to reduce the incidence of mental disorders in low- and middle-income countries; on the other hand, we have the cultural contexts in these countries, where entirely different frameworks already exist for categorising, understanding, treating, and preventing various forms of distress and disability. What sort of ethical principles ought to regulate this interaction, where prevention of ‘mental disorders’ is at stake?
The meaning of prevention with which we are concerned in this chapter is primary, universal prevention, to be distinguished from mental health promotion, from secondary prevention, and from primary prevention that is of a selective or indicated nature. Primary prevention “aims to avert or avoid the incidence of new cases” and is therefore concerned with reducing risk factors for mental disorders (Radden 2018, p. 127; see also WHO 2004, p. 16). Secondary prevention, on the other hand, “occurs once diagnosable disease is present [and] might thus be seen as a form of treatment” (Radden 2018, p. 127). In contrast to prevention, mental health promotion “employs strategies for strengthening protective factors to enhance the social and emotional well-being and quality of life of the general population” (Peterson et al. 2014, p. 3). It is not directly concerned with risk factors for disorders but with positive mental health. With universal prevention the entire population is within view of the interventions, whereas with selective and indicated prevention the target groups are, respectively, those “whose risk for developing the mental health disorder is significantly higher than average” and those who have “minimal but detectable signs or symptoms” (Evans et al. 2012, p. 5). While there is overlap among these various efforts, we focus on primary, universal prevention. Our decision to do so stems from the fact that such interventions, in being wholly anticipatory and population-wide, put marked, and perhaps even unique, ethical pressure on the encounter between the cultural context (and existing ideas on risk and prevention of distress and disability) and the biomedical public mental health approach.
It is helpful for ethical analysis to begin with a sufficiently detailed understanding of the contexts and interactions that are its subject. With these details at hand, what matters in a particular interaction is brought to light and the ethical issues become easier to grasp. Accordingly, we begin in section 2 with an ethnographic account of the primary prevention of ‘depression’ in the Dakhla Oasis of Egypt from the perspective of the community. The Dakhla Oasis is a rural community where there is no psychiatric presence and no modern biomedical concepts, yet – like most communities around the world – there is no shortage of mental-health-related distress and disability. It is a paradigmatic example of the kind of community where Global Mental Health would want to direct its campaigns. In section 3 we move on to the perspective of a Public Health Team concerned with preventing depression in light of scientific and evidence-based risk factors and preventive strategies. Section 4 outlines the conflict between the perspective of the Team and that of the community. Given this conflict, sections 5 and 6 discuss the ethical issues that arise at two levels of intervention: family and social relationships, and individual interventions.
 See Horton (2007), Prince et al. (2007), and Saxena et al. (2007).
 Most recently there was vocal opposition to a ‘Global Ministerial Mental Health Summit’ that was held on the 9th and 10th of October 2018 in London. The National Survivor and User Network (U.K.) sent an open letter to the organisers of the summit, objecting to the premise, approach, and intention of Global Mental Health.
 See Summerfield (2008, 2012, 2013), Mills and Davar (2016), Fernando (2011), and Whitley (2015).
 For debates on the definition of the concept of mental disorder consult Boorse (2011), Bolton (2008, 2013), Varga (2015), and Kingma (2013).
 For discussions of the (in)validity of psychiatric categories see Kinderman et al. (2013), Horwitz and Wakefield (2007), and Timimi (2014). Often, the problem is framed by asking whether mental disorders are natural kinds (see Jablensky 2016, Kendell and Jablensky 2003, Zachar 2015, and Simon 2011).
 See, for example, Fabrega (1989), Littlewood (1990), and Rashed (2013a).
 For example: Rashed and Bingham (2014), Rashed (2013b), and Bingham and Banner (2014).
The modern consumer/service-user/survivor movement is generally considered to have begun in the 1970s in the wake of the many civil rights movements that emerged at the time. The Survivors’ History Group – a group founded in April 2005 and concerned with documenting the history of the movement – traces an earlier starting point. The group sees affinity between contemporary activism and earlier attempts to fight stigma, discrimination, and the poor treatment of individuals variously considered to be mad, insane and, since the dominance of the medical idiom, to suffer with mental illness. On their website, which documents Survivor history, the timeline begins with 1373, the year the Christian mystic Margery Kempe was born. Throughout her life, Margery experienced intense voices and visions of prophets, devils, and demons. Her unorthodox behaviour and beliefs upset the Church, the public, and her husband, and resulted in her restraint and imprisonment on a number of occasions. Margery wrote about her life in a book in which she recounted her spiritual experiences and the difficulties she had faced.
The Survivors’ history website continues with several recorded instances of the mistreatment of individuals on the grounds of insanity. But the first explicit evidence of collective action and advocacy in the UK appears in 1845 in the form of the Alleged Lunatics’ Friend Society: an organisation composed of individuals most of whom had been incarcerated in madhouses and subjected to degrading treatment (Hervey 1986). For around twenty years, the Society campaigned for the rights of patients, including the right to be involved in decisions pertaining to their care and confinement. In the US, around the same time, patients committed to a New York Lunatic Asylum produced a literary magazine – The Opal – published in ten volumes between 1851 and 1860. Although this production is now seen to have painted a rather benign picture of asylum life, and to have allowed voice only to those patients who were deemed appropriate and self-censorial (Reiss 2004), glimpses of dissatisfaction and even of liberatory rhetoric emerge from some of the writing (Tenney 2006).
An important name in what can be considered early activism and advocacy is Elizabeth Packard. In 1860, Packard was committed to an insane asylum in Illinois by her husband, a strict Calvinist who could not tolerate Packard’s newly expressed liberal beliefs and her rejection of his religious views. At the time, state law gave husbands this power without the need for a public hearing. Upon her release, Packard campaigned successfully for a change in the law, which henceforth required a jury trial for decisions to commit an individual to an asylum (Dain 1989, p. 9). Another important campaigner is Clifford Beers, an American ex-patient who published in 1908 his autobiography A Mind That Found Itself. Beers’ autobiography documented the mistreatment he experienced at a number of institutions. The following year he founded the National Committee for Mental Hygiene (NCMH), an organisation that sought to improve conditions in asylums and the treatment of patients by working with reform-minded psychiatrists. The NCMH achieved limited success in this respect, and its subsequent efforts focused on mental health education, training, and public awareness campaigns in accordance with the then dominant concept of mental hygiene (Dain 1989, p. 6).
On both sides of the Atlantic, mental health advocacy in the first few decades of the 20th century promoted a mental hygiene agenda. Mental hygiene is an American concept and was understood as “the art of preserving the mind against all incidents and influences calculated to deteriorate its qualities, impair its energies, or derange its movements” (Rossi 1962). These “incidents and influences” were conceived broadly and included “exercise, rest, food, clothing and climate, the laws of breeding, the government of the passions, the sympathy with current emotions and opinions, the discipline of the intellect”, all of which had to be governed adequately to promote a healthy mind (ibid.). With such a broad list of human affairs under their purview, the mental hygienists had to fall back on a set of values by which the ‘healthy’ life-style was to be determined. These values, as argued by Davis (1938) and more recently by Crossley (2006), were those of the educated middle classes who promoted mental hygiene in accordance with a deeply ingrained ethic. For example, extra-marital sex was seen as a deviation and therefore a potential source of mental illness. Despite this conservative element, the discourse of mental hygiene was progressive, for its time, in a number of ways: first, it considered mental illness to arise from interactions among many factors, including the biological and the social, and hence to be responsive to improvements in the person’s environment; second, it fought stigma by arguing that mental illness is similar to physical illness and can be treated; third, it promoted the prevention of mental illness, in particular through paying attention to childhood development; and fourth, it argued for the importance of early detection and treatment (Crossley 2006, pp. 71-75).
In the US, Clifford Beers’ own group, the NCMH, continued to advance a mental hygiene agenda and, in 1950, merged with two other groups to form the National Association for Mental Health, a non-profit organisation that has existed since 2006 as Mental Health America. In the UK, mental hygiene was promoted by three inter-war groups that campaigned for patient wellbeing and education of the public. These groups merged, in 1946, to form the National Association for Mental Health (NAMH), which later, in 1972, changed its name to Mind, the name under which it remains to this day as a well-known and influential charity. In the late 50s, these two groups continued to educate the public through various campaigns and publications, and were involved in training mental health professionals in accordance with hygienist principles. In addition, they were advocates for mental patients, campaigning for the government to improve commitment laws, and, in the UK, working with the government to implement the move from asylums to ‘care in the community’.
Even though the discourse of mental hygiene was dominant during these decades, the developments that were to come in the early 70s were already taking shape in the emerging discourse of civil rights. A good example of these developments in the UK is the National Council for Civil Liberties (NCCL), better known today as Liberty. Founded in 1934 in response to an aggressive police reaction to protestors during the “hunger marches”, it became involved in 1947 in its first “mental health case”: a woman wrongly detained in a mental health institution for what appeared to be ‘moral’ rather than ‘medical’ reasons. During the 50s, the NCCL campaigned vigorously for reform of mental health law to address this issue, and was able to see some positive developments in 1959 with the abolition of the problematic 1913 Mental Deficiency Act and the introduction of tribunals in which patients’ interests were represented.
During the 1960s, criticism of mental health practices and theories was carried through by a number of psychiatrists who came to be referred to as the ‘anti-psychiatrists’. Most famous among them were Thomas Szasz, R. D. Laing, and David Cooper. Szasz (1960) famously argued that mental illness is a myth that legitimizes state oppression (via the psychiatric enterprise) of those judged as socially deviant and perceived to be a danger to themselves or others. Mental illnesses for Szasz are problems in living: morally and existentially significant problems relating to social interaction and to finding meaning and purpose in life. Laing (1965, 1967) considered the medical concept of schizophrenia to be a label applied to those whose behaviour seems incomprehensible, thereby permitting exercises of power. For Laing (1967, p. 106) the people so labelled are not so much experiencing a breakdown as a breakthrough: a state of ego-loss that permits a wider range of experiences and may culminate in a “new-ego” and an “existential rebirth”. These individuals require guidance and encouragement, not the application of a psychiatric label that distorts and arrests this process. David Cooper (1967, 1978) considered ‘schizophrenia’ a revolt against alienating familial and social structures with the hope of finding a less-alienating, autonomous yet recognised existence. In Cooper’s (1978, p. 156) view, it is precisely this revolt that the ‘medical apparatus’, as an agent of the ‘State’, aims to suppress.
From the perspective of those individuals who have experienced psychiatric treatment and mental distress, the anti-psychiatrists of the 1960s were not activists but dissident mental health professionals. As will be noted in the following section, the mental patients’ liberation movement did not support the inclusion of sympathetic professionals within its ambit. Nevertheless, the ideas of Thomas Szasz, R. D. Laing, and David Cooper were frequently used by activists themselves to ground their critique of mental health institutions and the medical model. At the time, these ideas were radical if not revolutionary, and it is not surprising that they inspired activists engaged in civil rights struggles in the 1970s.
Civil rights activism in mental health began through the work of a number of groups that came together in the late 60s and early 70s in the wake of the emerging successes and struggles of Black, Gay, and women civil rights activists. In the UK, a notable group was the Mental Patients’ Union (1972), and in the US three groups were among the earliest organisers: Insane Liberation Front (1970), Mental Patients’ Liberation Front (1971), and Network Against Psychiatric Assault (1972). An important difference between these groups and earlier ones that may also have pursued a civil rights agenda, such as the NCCL, is that they, from the start or early on, excluded sympathetic mental health professionals and were composed solely of patients and ex-patients. Judi Chamberlin (1990, p. 324), a key figure in the American movement, justified this exclusion in this way:
Among the major organising principles of [black, gay, women’s liberation movements] were self-definition and self-determination. Black people felt that white people could not truly understand their experiences … To mental patients who began to organise, these principles seemed equally valid. Their own perceptions about “mental illness” were diametrically opposed to those of the general public, and even more so to those of mental health professionals. It seemed sensible, therefore, not to let non-patients into ex-patient organisations or to permit them to dictate an organisation’s goals.
The extent of the resolve to exclude professionals – even those who would appear to be sympathetic such as the anti-psychiatrists – is evident in the writings of Chamberlin as well as in the founding document of the Mental Patients’ Union. Both distance themselves from anti-psychiatry on the grounds that the latter is “an intellectual exercise of academics and dissident mental health professionals” which, while critical of psychiatry, did not include ex-patients or engage their struggles (Chamberlin 1990, p. 323). Further, according to Chamberlin, a group that permits non-patients and professionals inevitably abandons its liberatory intentions and ends up in the weaker position of attempting to reform psychiatry. And reform was not on the agenda of these early groups.
On the advocacy front, the mental patients’ liberation movement – the term generally used to refer to this period of civil rights activism – sought to end psychiatry as its members knew it. Activists sought to abolish involuntary hospitalisation and forced treatment, to prioritise freedom of choice and consent above other considerations, to reject the reductive medical model, to restore full civil rights to mental patients including the right to refuse treatment, and to counter negative perceptions in the media such as the inherent dangerousness of the ‘mentally ill’. In addition to advocacy, a great deal of work went into setting up non-hierarchical, non-coercive alternatives to mental health institutions such as self-help groups, drop-in centres, and retreats. The purpose of these initiatives was not only to provide support to individuals in distress, but to establish that mental patients are self-reliant and able to manage their own lives outside of mental health institutions. Central to the success of these initiatives was a radical transformation in how ex-patients understood their situation. This transformation was referred to as consciousness-raising.
Borrowed from the women’s liberation movement, consciousness-raising is the process of placing elements of one’s situation in the wider context of systematic social oppression (Chamberlin 1990). This begins to occur in meetings in which people get together, share their experiences, identify commonalities, and re-interpret them in a way that gives them broader meaning and significance. An implication of this process is that participants may be able to reverse an internalised sense of weakness or incapability – which hitherto they may have regarded as natural – and regain confidence in their abilities. In the mental patients’ liberation movement, consciousness-raising involved ridding oneself of the central assumptions of the ‘mental health system’: that one has an illness, and that the medical profession is there to provide a cure. In the discourse of the time, inspired by the writings of Thomas Szasz and others, psychiatry was a form of social control, medicalising unwanted behaviour as a pretext for ‘treating’ it and forcing individuals into a sane way of behaving. By sharing experiences, participants begin to see that the mental health system has not helped them. In a book first published in 1977 and considered a founding and inspirational document for mental health activists, Chamberlin (1988, pp. 70-71) writes of the important insights ex-patients gained through consciousness-raising:
Consciousness-raising … helps people to see that their so called symptoms are indications of real problems. The anger, which has been destructively turned inward, is freed by this recognition. Instead of believing that they have a defect in their psychic makeup (or their neurochemical system), participants learn to recognise the oppressive conditions in their daily lives.
Mental suffering and distress, within this view, are a normal response to the difficulties individuals face in life such as relationship problems, social inequality, poverty, loss, and trauma. In such situations, individuals need a sympathetic, caring, and understanding response, not the one society offers in the form of psychotropic drugs and the difficult environment of a mental health hospital (Chamberlin 1988). Consciousness-raising does not stop at the ‘mental health system’, and casts a wider net that includes all discriminatory stereotypes against ex-patients. In a deliberate analogy with racism and sexism, Chamberlin uses the term mentalism to refer to the widespread social tendency to call disapproved-of behaviour ‘sick’ or ‘crazy’. Mental patients’ liberation required patients and ex-patients to resist the ‘mental health system’ as well as social stereotyping, and to find the strength and confidence to do so. In this context, voluntary alternatives by and for patients and ex-patients were essential to providing a forum for support and consciousness-raising.
In the 1980s, the voices of advocates and activists began to be recognised by national government agencies and bodies. This was in the context of a shift towards market approaches to health-care provision, and the idea of the patient as a consumer of services (Campbell 2009). Patients and ex-patients – now referred to as consumers (US) or users (UK) of services – were able to sit in policy meetings and advisory committees of mental health services and make their views known. Self-help groups, which normally struggled for funding, began to be supported by public money. In the US, a number of consumer groups formed that were no longer opposed to the medical model or to working with mental health professionals in order to reform services. While some considered these developments to be positive, others regarded them as indicating what Linda Morrison, an American activist and academic, referred to as a “crisis of co-optation”: the voice of mental health activists had to become acceptable to funding agencies, which required relinquishing radical demands in favour of reform (Morrison 2005, p. 80). Some activists rejected the term consumer as it implied that patients and professionals were in an equal relation, with patients free to determine the services they receive (Chamberlin 1988, p. vii).
Countering the consumer/user discourse was an emerging survivor discourse reflected in a number of national groups, for example the National Association of Psychiatric Survivors (1985) in the US and Survivors Speak Out (1986) in the UK. Survivor discourse shared many points of alignment with earlier activism, but whereas the latter was opposed to including professionals and non-patients, survivors were no longer against this as long as it occurred within a framework of genuine and honest partnership and inclusion in all aspects of service structure, delivery and evaluation (Chamberlin 1995, Campbell 1992). 
In the US, developments throughout the 1990s and into the millennium confirm the continuation of these two trends: the first oriented towards consumer discourse and involvement, and the second towards survivors, with a relatively more radical tone and a concern with human rights (Morrison 2005). Today, representative national groups for these two trends include, respectively, the National Coalition for Mental Health Recovery (NCMHR) and MindFreedom International (MFI). The former is focused on promoting comprehensive recovery, approvingly quoting the ‘New Freedom Mental Health Commission Report’ target of a “future when everyone with mental illness will recover”. To this end they campaign for better services, for consumers to have a voice in their recovery, and for tackling stigma and discrimination, promoting community inclusion via consumer-run initiatives that offer assistance with education, housing, and other aspects of life. On the other hand, MFI state their vision to be a “nonviolent revolution in mental health care”. Unlike NCMHR, MFI do not use the language of ‘mental illness’, and support campaigns such as Creative Maladjustment, Mad Pride, and Boycott Normal. Further, MFI state emphatically that they are completely independent and do not receive funds from or have any links with government, drug companies, or mental health agencies. Despite their differences, both organisations claim to represent both survivors and consumers, and both trace their beginnings to the 1970s civil rights movements. But whereas NCMHR refer to ‘consumers’ always first and generally more often, MFI do the opposite and state that the majority of their members identify as psychiatric survivors.
In the UK, the service-user/survivor movement – as it came to be referred to – is today represented nationally by a number of groups. Of note is the National Survivor User Network (NSUN), which brings together survivor and user groups and individuals across the UK in order to strengthen their voice and assist with policy change. Another long-standing group (1990), though less active today, is the UK Advocacy Network, a group which campaigns for user-led advocacy and involvement in mental health services planning and delivery. A UK survey conducted in 2003 brings some complexity to this appearance of a homogeneous movement (Wallcraft et al. 2003). While most respondents agreed that there is a national user/survivor movement – albeit a rather loose one – different opinions arose on all the important issues; for example, disagreements over whether compulsory treatment can ever be justified, and whether receiving funds from drug companies compromises the movement. In addition, there were debates over the legitimacy of the medical model, with some respondents rejecting it in favour of social and political understandings of mental distress. In this context, they drew a distinction between the service-user movement and the survivor movement, the former concerned with improving services, and the latter with challenging the medical model and the “supposed scientific basis of mental health services” (Wallcraft et al. 2003, p. 50). More radical voices suggested that activists who continued to adopt the medical model had not been able to rid themselves of the disempowering frameworks of understanding imposed by the mental health system. In a similar vein, some respondents noted the de-politicisation of the movement, as activists ceased to be primarily concerned with civil rights and began to work for the mental health system (Wallcraft et al. 2003, p. 14).
In summary, there exists within the consumer/service-user/survivor movements in the US and the UK a variety of stances in relation to involuntary detention and treatment, acceptable sources of funding, the medical model, and the extent and desirability of user involvement in services. Positions range from working for mental health institutions and reforming them from the ‘inside’, to rejecting any co-operation and engaging in activism to end what is considered psychiatric abuse and social discrimination in the guise of supposed medical theory and treatment. It appears that within national networks and movements pragmatic and co-operative approaches are more common, with radical positions pushed somewhat aside though by no means silenced. In this context Mad Pride, representing the latest wave of activism in mental health, re-invigorates the radicalism of the movement and makes the most serious demand yet of social norms and understandings. But Mad Pride, underpinned by the notions of Mad culture and Mad identity, builds on the accomplishments of Survivor identity to which I now briefly turn.
The connotations of survivor discourse are unmistakable and powerful. With survivor discourse the term ‘patient’ and its implications of dependence and weakness are finally discarded (Crossley 2004, p. 169). From the perspective of those individuals who embraced the discourse, there is much that they have survived: forced detention in the mental health system; aggressive and unhelpful treatments; discrimination and stigma in society; and, for some, the distress and suffering they experienced and which was labelled by others ‘mental illness’. By discarding what they came to see as an imposed identity – viz. ‘patient’ – survivors took one further step towards increased self-definition (Crossley 2006, p. 182). Further, the very term ‘survivor’ implies a positive angle to this definition, in so far as to survive something implies resilience, strength, and other personal traits considered valuable. Morrison (2005, p. 102) describes it as the “heroic survivor narrative” and accords it a central function in the creation of a collective identity for the movement and a shared sense of injustice.
Central to survivor identity is the importance of the voice of survivors, and their ability to tell their own stories, a voice which neither society nor the psychiatric system respected. The well-known British activist and poet Peter Campbell (1992, p. 122) writes that a great part of the “damage” sustained in the psychiatric system
has been a result of psychiatry’s refusal to give value to my personal perceptions and experience … I cannot believe it is possible to dismiss as meaningless people’s most vivid and challenging interior experiences and expect no harm to ensue.
The emphasis on survivor voice highlights one further difference from 1970s activism: whereas earlier activists sustained their critique of psychiatry by drawing upon the writings of Szasz, Goffman, Marx and others, survivor discourse eschewed such sources of ‘authority’ in favour of the voice of survivors themselves; Crossley (2004, p. 167) writes:
Survivors have been able to convert their experiences of mental distress and (mis)treatment into a form of cultural and symbolic capital. The disvalued status of the patient is reversed within the movement context. Therein it constitutes authority to speak and vouches for authenticity. The experience of both distress and treatment, stigmatized elsewhere, has become recognized as a valuable, perhaps superior knowledge base. Survivors have laid a claim, recognized at least within the movement itself, to know ‘madness’ and its ‘treatment’ with authority, on the basis that they have been there and have survived it.
Survivors are therefore experts on their own experiences, and experts on what it is like to be subject to treatment in mental health institutions and to face stigma and discrimination in society. So construed, to survive is to be able to emerge from a range of difficulties, some of which are external and others internal, belonging to the condition (the distress, the experiences) that led to the encounter with psychiatry in the first place. In this sense, survivor discourse had not yet been able to impose a full reversal of the negative value attached to phenomena of madness, a value reflected in the language of mental illness, disorder and pathology. This is clearly evident in the idea that one had survived the condition, for if that is the attitude one holds towards it, it is unlikely that the ‘condition’ is looked upon positively or neutrally (except perhaps teleologically in the sense that it had had a formative influence on one’s personality). Similarly, if one considers oneself to have survived mental health institutions rather than the condition, there still is no direct implication that the condition itself is regarded in a non-negative light, only that the personal traits conducive to survival are laudable. It is only with the discourse of Mad Pride, yet to come, that the language of mental illness and the social norms and values underpinning it are challenged in an unambiguous manner.
Mohammed Abouelleil Rashed (2018)
Note: the above is an excerpt from Madness and the Demand for Recognition: A Philosophical Inquiry into Identity and Mental Health Activism (Oxford University Press, 2019).
 The following account outlines key moments, figures, groups and strategies in mental health advocacy and activism; it is not intended to be exhaustive but rather to illustrate the background to the Mad Pride movement and discourse.
 In contrast to Survivor history, there is a tradition of historical and critical writing on the history of ‘psychiatry’ and ‘madness’, and on the development of lunacy reform and mental health law. Notable names in this tradition are Roy Porter, Andrew Scull, and Michel Foucault.
 See Peterson (1982, pp. 3-18).
 This section benefits, in part, from Crossley’s (2006, Chapter 4) account of mental hygiene.
 The history of Liberty can be found on their website: https://www.liberty-human-rights.org.uk/who-we-are/history/liberty-timeline
 In the US, groups were able to communicate with each other through a regular newsletter, Madness Network News (1972-1986), and an annual Conference on Human Rights and Against Psychiatric Oppression (1973-1985).
 For a similar point see the founding document of the Mental Patients’ Union, reprinted in Curtis et al. (2000, pp. 23-28).
 Some activists referred to themselves as ‘psychiatric inmates’ or ‘ex-inmates’ highlighting the fact of their incarceration in mental institutions and their rejection of the connotations of the term ‘patient’. This early difference in terminology – inmate versus patient – prefigures the multiplicity of terms and associated strategies that will come to define activism and advocacy in mental health to this day.
 The earliest example of a self-help group is WANA (We Are Not Alone). Formed in New York in the 1940s as a patient-run group, it developed into a major psychosocial rehabilitation centre, eventually to be managed by mental health professionals (see Chamberlin 1988, pp. 94-95).
 See Bluebird’s History of the Consumer/Survivor Movement. Online: https://www.power2u.org/downloads/HistoryOfTheConsumerMovement.pdf
 Mclean (1995, p. 1054) draws the distinction between consumers and survivors as follows: “Persons who identify themselves as ‘consumers’, ‘clients’ or ‘patients’, tend to accept the medical model of mental illness and traditional mental health treatment practices, but work for general system improvement and for the addition of consumer controlled alternatives. Those who refer to themselves as ‘ex-patients’, ‘survivors’ or ‘ex-inmates’ reject the medical model of mental illness, professional control and forced treatment and seek alternatives exclusively in user controlled centres.”
 Consumers and survivors aside, more radical voices persisted, continuing the discourse and activities of the 1970s’ groups. These voices were vehemently opposed to psychiatry and rejected any cooperation with services or with advocates/activists who tended towards reform. Examples include the Network to Abolish Psychiatry (1986) in the US and Campaign Against Psychiatric Oppression (CAPO, 1985) in the UK, both of which were active for a few years in the 1980s. (CAPO was an offshoot of the earlier Mental Patients’ Union.) For these groups, the ‘mental health system’ was intrinsically oppressive and had to be abolished: attempts to reform it merely strengthened it (see Madness Network News, Summer 1986, vol.8, no.3, p.8). Reflecting on the beginnings of Survivors Speak Out (SSO, 1986), Peter Campbell, a founder, wrote that CAPO and other “separatist” groups were more concerned with “philosophical and ideological issues” and that SSO was “born partly in reaction to this: they were the first part of the ‘pragmatic’ wing which now dominates the user movement” with an emphasis on dialogue with others (Peter Campbell on The History and Philosophy of The Survivor Movement. Southwark Mind Newsletter, issue 24 – year not specified).
 Note that the reference here is to national networks and groups and not the local groups engaged in self-help, support, education, training, and advocacy of which there are hundreds in the US, UK and elsewhere.
 National organisations are of two types: those concerned with mental health generally (discussed in the text), and those with a focus on a particular condition or behaviour such as the Hearing Voices Network and the National Self-Harm Network.
[Excerpt from Chapter 10 of Madness and the Demand for Recognition (2019, OUP)]
Referring to religious fundamentalism, Gellner (1992, p. 2) writes:
The underlying idea is that a given faith is to be upheld firmly in its full and literal form, free of compromise, softening, re-interpretation or diminution. It presupposes that the core of religion is doctrine, rather than ritual, and also that this doctrine can be fixed with precision and finality.
Religious doctrine includes fundamental ideas about our nature, the nature of the world and the cosmos, and the manner in which we should live and treat each other. In following to the letter the doctrines of one’s faith, believers are trying to get it right, where getting it right means knowing with exactness what God intended for us. In the case of Islam, the tradition I know most about, the Divine intent can be discerned from the Qur’an (considered to be the word of God) and the Traditions (the sayings) attributed to the Prophet (see Rashed 2015b). The process of getting it right, therefore, becomes an interpretive one, raising questions such as: how do we understand this verse; what does God mean by the words ‘dust’ and ‘clot’ in describing human creation; who did the Prophet intend by this Tradition; does this Tradition follow a trusted lineage of re-tellers?
We can see that ‘getting it right’ for the religious fundamentalist and for the scientific rationalist mean different things – interpreting the Divine intent, and producing true explanations of the nature of the world, respectively. But then we have a problem, for religious doctrine often involves claims whose truth – in the sense of their relation to reality – can, in principle, be established. Yet in being an interpretive enterprise, religious fundamentalism cannot claim access to the truth in this sense. The religious fundamentalist can immediately respond by pointing out that the Divine word corresponds to the truth; it is the truth. If we press the religious fundamentalist to tell us why this is so we might be told that the truth of God’s pronouncements in the Qur’an is guaranteed by God’s pronouncement (also in the Qur’an) that His word is the truth and will be protected for all time from distortion. Such a circular argument, of course, is unsatisfactory, and simply points to the fact that matters of evidence and logic have been reduced to matters of faith. If we press the religious fundamentalist further we might encounter what has become a common response: the attempt to justify the truth of the word of God by demonstrating that the Qur’an had anticipated modern scientific findings, and had done so over 1400 years ago. This is known as the ‘scientific miracle of the Qur’an’; scholars interpret certain ambiguous, almost poetic verses to suggest discoveries such as the relativity of time, the process of conception, brain functions, the composition of the Sun, and many others. The irony in such an attempt is that it elevates scientific truths to the status of arbiter of the truth of the word of God. But the more serious problem is that science is a self-correcting progressive enterprise – what we know today to be true may turn out tomorrow to be false. 
The Qur’an, on the other hand, is fixed; every scientific claim in the Qur’an (assuming there are any that point to current scientific discoveries) is going to be refuted the moment our science develops. You cannot use a continually changing body of knowledge to validate the eternally fixed word of God.
Neither the faith-based response nor the ‘scientific miracle of the Qur’an’ response can tie the Divine word to the truth. From the stance of scientific rationality, all the religious fundamentalist can do is provide interpretations of the ‘Divine’ intent as the latter can be discerned in the writings of his or her tradition. Given this, when we are presented with identities constituted by doctrinal claims whose truth can, in principle, be established (and which therefore stand or fall subject to an investigation of their veracity), we cannot extend a positive response to these identities; scientific rationality is within its means to pass judgement.
But not all religion is purely doctrinal in this sense or, more precisely, its doctrines are not intended as strictly factual claims about the world; Appiah (2005, p. 188) makes this point:
Gore Vidal likes to talk about ancient mystery sects whose rites have passed down so many generations that their priests utter incantations in language they no longer understand. The observation is satirical, but there’s a good point buried here. Where religious observance involves the affirmation of creeds, what may ultimately matter isn’t the epistemic content of the sentences (“I believe in One God, the Father Almighty …”) but the practice of uttering them. By Protestant habit, we’re inclined to describe the devout as believers, rather than practitioners; yet the emphasis is likely misplaced.
This is a reasonable point; for many people, religion is a practical affair: they attend the mosque for Friday prayers with their family members, they recite verses from the Qur’an and repeat invocations behind the Imam, and they socialise with their friends after the prayer, and during all of this, ‘doctrine’ is the last thing on their minds. They might even get overwhelmed with spiritual feelings of connectedness to the Divine. In the course of their ritual performance, they are likely to recite verses the content of which involves far-fetched claims about the world. It would be misguided to press them on the truth of those claims (in an empirical or logical sense), as it would be to approach, to use Taylor’s (1994a, p. 67) example, “a raga with the presumptions of value implicit in the well-tempered clavier”; in both cases we would be applying the wrong measure of judgement, it would be “to forever miss the point” (ibid.).
And then there is the possibility that the ‘truths’ in question are metaphorical truths, symbolic expressions of human experience, its range and its moral heights and depths. Charles Taylor (2007, 1982) often talks about the expressive dimension of our experience, a dimension that has been largely expunged from scientific research and its technological application. Human civilizations have always developed rich languages of expression, religious languages being a prominent example. The rarefied language of scientific rationality and its attendant procedural asceticism are our best bet to get things right about the world, but they are often inadequate as a means to express our psychological, emotional, and moral complexity.
To judge the practical (ritualistic) and expressive dimensions of identities in light of the standards of scientific rationality is to trespass upon these identities. Our judgements are misplaced and have limited value. My contention is that every time we suspect that we do not possess the right kind of language to understand other identities, or that there is an experience or mode of engagement that over-determines the language in which people express their identities, we have a genuine problem of shared understanding; we are not within our means to pass judgements of irrationality on the narratives that constitute these identities. Now I am not suggesting that the distinctions between doctrine and practice, or between understanding the world and expressing ourselves, are easy to make. And neither am I suggesting that a particular case falls neatly on one side or the other of these distinctions. But if we are going to adopt the stance of scientific rationality – given that we have to adopt some stance as I have argued earlier – then these are the issues we need to think about: (1) Is the narrative best apprehended in its factual or expressive dimension? (2) Are there experiences that over-determine the kind of narrative that can adequately express them?
For a few months in 2009 and 2010 I was a resident of Mut, a small town in the Dakhla Oasis in the Western desert of Egypt. My aim was to become acquainted with the social institution of spirit possession, and with sorcery and Qur’anic healing (while keeping an eye on how all of this intersects with ‘mental disorder’ and ‘madness’). I learnt many things, among which was the normalness with which spirit possession was apprehended in the community: people invoked spirits to explain a slight misfortune as much as a life-changing event; to make sense of what we would refer to as ‘schizophrenia’, and to make sense of a passing dysphoria. It was part of everyday life. The way in which spirit possession cut across these diverse areas of life got me thinking about the broader role it plays in preserving meaning when things go wrong. To help me think these issues through I brought in the concepts of ‘intentionality’ and ‘personhood’. The result is my essay More Things in Heaven and Earth: Spirit Possession, Mental Disorder, and Intentionality (2018, open access at the Journal of Medical Humanities).
The essay is a philosophical exploration of a range of concepts and how they relate to each other. It appeals sparingly, though decisively, to the ethnography that I had conducted at Dakhla. If you want to know more about the place and the community you can check these blog-posts:
And this is a piece I published in the newspaper Al-Ahram Weekly (2009) voicing my view on some of the practices that I had observed: To Untie or Knot
[Introduction to an essay I am working on for a special issue of the Journal of Medicine & Philosophy with the title ‘The Crisis in Psychiatric Science’]
THE IDENTITY OF PSYCHIATRY IN THE AFTERMATH OF MAD ACTIVISM
Psychiatry has an identity in the sense that it is constituted by certain understandings of what it is and what it is for. The key element in this identity, and the element from where other features arise, is that psychiatry is a medical speciality. Upon completion of their medical education and during the early years of their training, medical students – now budding doctors – make a choice about the speciality they want to pursue. Psychiatry is one of them, as are ophthalmology, cardiology, gynaecology, and paediatrics. Modern medical specialities share some fundamental features: they treat conditions, disorders, or diseases; they aspire to be evidence-based in the care and treatments they offer; they are grounded in basic sciences such as physiology, anatomy, histology, and biochemistry; and they employ technology in investigations, research, and development of treatments. All of this ought to occur (and in the best of cases does occur) in a holistic manner, taking account of the whole person and not just of an isolated organ or a system; i.e. person-centred medicine (e.g. Cox, Campbell, and Fulford 2007). In addition, it is increasingly recognised that the arts and humanities have a role to play in medical education, training, and practice. Literature, theatre, film, history, and the various arts, it is argued, can help develop the capacity for good judgement, and can broaden the ability of clinicians to understand and empathise with patients (e.g. Cook 2010, McManus 1995). None of the above, I will assume in this essay, is particularly controversial.
Even though psychiatry is a medical speciality, it is a special medical speciality. This arises from its subject matter, ordinarily conceived of as mental health conditions or disorders, to be contrasted with physical health conditions or disorders. Psychiatry deals with the mind not working as it should while ophthalmology, for example, deals with the ophthalmic system not working as it should. The nature of its subject matter raises certain complexities for psychiatry that, in the extreme, are sometimes taken to suggest that psychiatry’s positioning as a medical speciality is suspect; these include the normative nature of psychiatric judgements, the explanatory limitations of psychiatric theories, and the classificatory inaccuracies that beset the discipline. Another challenge to psychiatry’s identity as a medical speciality comes from particular approaches in mental health activism. Mad Pride and mad-positive activism (henceforth Mad activism) rejects the language of ‘mental illness’ and ‘mental disorder’, and rejects the assumption that people have a ‘condition’ that is the subject of treatment. The idea that medicine treats ‘things’ that people ‘have’ is fundamental to medical practice and theory and hence is fundamental to psychiatry in so far as it wishes to continue understanding itself as a branch of medicine. Mad activism, therefore, challenges psychiatry’s identity as a medical speciality.
In this essay, I argue that among these four challenges, only the fourth requires psychiatry to rethink its identity. By contrast, as I demonstrate in section 2, neither the normative, nor the explanatory, nor the classificatory complexities undermine psychiatry’s identity as a medical speciality. This is primarily for the reason that the aforementioned complexities obtain in medicine as a whole, and are not unique to psychiatry even if they are more common and intractable there. On the other hand, the challenge of Mad activism is a serious problem. In order to understand what the challenge amounts to, I develop in section 3 the notion of the hypostatic abstraction, a logical and semantic operation which I consider to lie at the heart of medical practice and theory. It distinguishes medicine from other social institutions concerned with human suffering such as religious and some therapeutic institutions. In section 4 I demonstrate how Mad activism challenges the hypostatic abstraction. And in section 5 I discuss a range of ways in which psychiatry can respond to this challenge, and the modifications to its identity that may be necessary.
After four years of (almost) continuous work, I have finally completed my book:
Madness and the Demand for Recognition: A Philosophical Inquiry into Identity and Mental Health Activism.
Madness is a complex and contested term. Through time and across cultures it has acquired many formulations: for some, madness is synonymous with unreason and violence; for others, with creativity and subversion; elsewhere it is associated with spirits and spirituality. Among the different formulations, there is one in particular that has taken hold so deeply and systematically that it has become the default view in many communities around the world: the idea that madness is a disorder of the mind.
Contemporary developments in mental health activism pose a radical challenge to psychiatric and societal understandings of madness. Mad Pride and mad-positive activism reject the language of mental ‘illness’ and ‘disorder’, reclaim the term ‘mad’, and reverse its negative connotations. Activists seek cultural change in the way madness is viewed, and demand recognition of madness as grounds for identity. But can madness constitute such grounds? Is it possible to reconcile delusions, passivity phenomena, and the discontinuity of self often seen in mental health conditions with the requirements for identity formation presupposed by the theory of recognition? How should society respond?
Guided by these questions, this book is the first comprehensive philosophical examination of the claims and demands of Mad activism. Locating itself in the philosophy of psychiatry, Mad studies, and activist literatures, the book develops a rich theoretical framework for understanding, justifying, and responding to Mad activism’s demand for recognition.
[Excerpt from Chapter 4 of my book Madness & the Demand for Recognition, forthcoming Oxford University Press, 2018]
In the foregoing account of identity (section 4.2) there is frequent mention of the demand for recognition (indeed, the title of the book features the same). We have made some progress towards understanding the gaps in social validation under which such a demand can arise: they confront individuals who are unable to find their self-understanding reflected in the social categories with which they identify and who demand social change to address this. But what motivates people to seek this kind of social change – what motivates them to struggle for recognition?
4.3 THE STRUGGLE FOR RECOGNITION
4.3.1 The motivation for recognition
There are, at least, four possible sources of motivation for recognition. One of these sources has already been identified in the discussion of Hegel’s teleology (section 3.5.1). In accordance with this, the struggle for more equal and mutual forms of recognitive relations is driven forward by the telos of human nature, which is the actualisation of freedom: if that is the ultimate goal, then the dialectical development of consciousness’ understanding of itself will lead to an awareness of mutual dependency as a condition of freedom. But this account has been considered and rejected on the grounds that positing an ultimate, rational telos for human beings that tends towards realisation is a problematic assumption, with connotations of the kind of metaphysical theorising which Kant’s critical philosophy had put to rest. The metaphysical source of the motivation for recognition must be rejected.
Another possible source is empirical and has to do with the psychological nature of human beings. In the Struggle for Recognition, Axel Honneth (1996) provides such an account through the empirical social psychology of G. H. Mead. According to Mead (1967) the self develops out of the interaction of two perspectives: the ‘me’ which is the internalised perspective of the social norms of the generalised other, and the ‘I’ which is a response to the ‘me’ and the source of individual creativity and rebellion against social norms. It is the movement of the ‘I’ – the impulse to individuation – that shows up the limitations of social norms and motivates the expansion of relations of recognition (see Honneth 1996, pp. 75-85).
In a later work Honneth (2002, p. 502) rejects his earlier account; he begins by noting: “there has always seemed to me to be something particularly attractive about the idea of an ongoing struggle for recognition, though I did not quite see how it could still be justified today without the idealistic presupposition of a forward-driven process of Spirit’s complete realization”. Honneth thus rejects the teleological account that we, also, found wanting. He then goes on to render problematic his earlier proposal that seeks to ground the motivation for recognition in Mead’s social psychology:
I have come to doubt whether [Mead’s] views can actually be understood as contributions to a theory of recognition: in essence, what Mead calls ‘recognition’ reduces to the act of reciprocal perspective taking, without the character of the other’s action being of any crucial significance; the psychological mechanism by which shared meanings and norms emerge seems to Mead generally to develop independently of the reactive behaviour of the two participants, so that it also becomes impossible to distinguish actions according to their respective normative character. (Honneth 2002, p. 502)
In other words, what Mead describes is a general process that is always occurring behind people’s backs in so far as it is a basic feature of the human life form. His theory explains how shared norms emerge and why they expand but deprives agents’ behaviours towards each other of normative significance. They become unwitting subjects of this process rather than agents struggling for recognition. To struggle for recognition is to perceive oneself to be denied a status one is worthy of, and not to mechanically act out one’s innate nature. And this remains the case even if our treatment by others engenders feelings of humiliation and disrespect. To experience humiliation is to already consider oneself deserving of a certain kind of treatment, of a normative status that is denied. Such feelings, therefore, cannot themselves constitute the motivation for recognition; rather, they are symptoms of the prior existence of a conviction that one must be treated in a better way.
If the motivation for recognition cannot be accounted for metaphysically (by the teleology of social existence), or empirically (by the facts of one’s psychological nature), or emotionally (by the powerful feelings that signal the need for social change), then it must somehow be explained with reference to the ideas that together make up the theory of recognition. These ideas include specific understandings of individuality, self-realisation, freedom, authenticity, social dependence, the need for social confirmation, in addition to notions of dignity, esteem, and distinction, among others. To be motivated to struggle for recognition is to already be shaped by a historical tradition where such notions have become part of how we relate to ourselves and others, and the normative expectations that structure such relations; as McBride (2013, p. 137) writes, “we are the inheritors of a long and complex history of ethical, religious, philosophical, and, more recently, social scientific thought about the stuff of recognition: pride, honour, dignity, respect, status, distinction, prestige”. It is partly because we are within the space of these notions that we can see, as pointed out in section 3.5.2, that living a life of delusion and disregard for what others think, or a life of total absorption in social norms, is not to live a worthwhile life, for we would be giving up altogether either on social confirmation or on our individuality. We are motivated by these notions in so far as we are already constituted socially so as to be moved by them.
Putting the issue this way may raise concerns. By grounding the motivation for recognition in the subject’s prior socialisation, it becomes harder to establish whether that motivation is, ultimately, a means for the individual to broaden his or her social freedom, or a means for reproducing existing relations of domination. As McNay (2008, p. 10) writes, “the desire for recognition might be far from a spontaneous and innate phenomenon but the effect of a certain ideological manipulation of individuals” (see also McBride 2013, pp. 37-40; Markell 2003). Honneth (2012, p. 77) provides a number of examples where recognition may be seen as contributing to the domination of individuals:
The pride that ‘Uncle Tom’ feels as a reaction to the constant praises of his submissive virtues makes him into a compliant servant in a slave-owning society. The emotional appeals to the ‘good’ mother and housewife made by churches, parliaments or the mass media over the centuries caused women to remain trapped within a self-image that most effectively accommodated gender-specific division of labour.
Instead of constituting moral progress (in the sense of an expansion of individual freedom), recognition becomes a mechanism by which people endorse the very identities that limit their freedom. They seek recognition for these identities and in this way “voluntarily take on tasks or duties that serve society” (Honneth 2012, p. 75). There is a need, therefore, to see if we can distinguish ideological forms of recognition from those relations of recognition in which genuine moral progress can be said to have occurred, since what we are after are relations of the latter sort.
4.3.2 The problem of ideology
I first consider, and exclude, some ways in which the problem of ideology cannot be solved. It may seem attractive to find a solution by appeal to a Kantian notion of rational autonomy, where the subject withdraws from social life in order to know what it ought to do. If such withdrawal were possible, we would have an instance of genuine recognition in the sense that an autonomous choice has been made. But as argued in section 3.2, withdrawing to pure reason can only produce the form that moral principles must take, without those principles thereby possessing sufficient content that can guide action. Moral principles acquire content, and hence can be action guiding, through the very social practices that Kant urged us to withdraw from in order to exercise our rational autonomy. Somehow then, the distinction between ideological and genuine recognition, if it can be made at all, will have to be drawn from within those social practices, as an appeal to a noumenal realm of freedom where we can rationally will what we ought to do cannot work. This is further complicated by the fact that both genuine and ideological recognition – being forms of recognition – must meet the approval of the subject in the sense that both must make the subject feel valued and be considered positive developments conducive to individual growth. Hence, the experience of the subject cannot help us here either. Ideological recognition then consists in practices that are “intrinsically positive and affirmative” yet “bear the negative features of an act of willing subjection, even though these practices appear prima facie to lack all such discriminatory features” (Honneth 2012, p. 78). How can these acts of recognition be identified?
The key seems to lie in the notion of ‘willing subjection’ and the possibility of identifying this despite subjects’ pronouncements of their wellbeing. The judgement that particular practices of recognition are ideological in the sense that they constitute acts of willing subjection must therefore be made by an external observer. The observer needs to perceive subjection, while at the same time explaining away the person’s acceptance of the situation as an indication that he has internalised his oppression in such a way that he willingly subjects himself. The case of the ‘good mother’ is a case in point: by voluntarily endorsing that role, she remains uncompensated for her work, and many other opportunities in life are foreclosed to her. Now the observer, in this kind of theoretical narrative, is no longer concerned with the quality of interpersonal relations or the subject’s experience of freedom and wellbeing. What is at issue here seems to be that the observer disagrees with the values and beliefs that structure those relations, rather than with the quality of those relations as relations of mutual recognition. A contemporary example can clarify this further.
Consider the claim, often heard in certain public discourse, that Muslim women who cover their hair – who wear a hijab – are ‘oppressed’. Frequently, the claims made do not require that the women in question report any oppression, and hence concepts such as ‘internalised oppression’ are invoked to explain the lack of a negative experience. Of course, some women are coerced into wearing the hijab, and given the right context they would remove it and see it as an unnecessary imposition on them. For others the hijab is about modesty and has religious connotations. In this sense, it is not a symbol of their oppression and may even be regarded as a feature that can generate positive recognition as a pious and religiously observant person. An observer who claims that the desire for recognition in such cases is ideological – that women who cover their hair are willingly (and subconsciously) subjecting themselves to existing norms – is making a statement about his or her views on the cultural context: the problem the observer has is with the religious weight placed on clothing, or the fact that it is mainly women who have to observe such practices. Some women who wear a hijab reject this account since it bypasses their own understanding of what they are doing and the value they attach to it (in fact such an account can itself end up being a form of misrecognition). Not surprisingly, the very same claim is made in reverse by some Muslim women, who argue that ‘Westernised’ women who dress ‘immodestly’ are oppressed by a dominant, male culture that subtly forces them to show their bodies. Those who believe that dressing in this way is an expression of freedom and secularism have simply internalised the values by which they willingly subject themselves to existing norms.
The point of presenting this case from both sides is to show that once we bypass people’s accounts of what they are doing, and put aside their reported experience of freedom and wellbeing, we can see that what is going on is an ideological conflict between two worldviews. This conflict can itself be described within the framework of misrecognition as a continued devaluing of agents’ identities under the cover of an interest in their wellbeing. Of course, people are not always right about what they are doing, and our psychological depth is such that we can deceive ourselves and accept an abusive situation, or even fail to see that it is abusive at all. We may convince ourselves that a particular role is exactly right for us, whereas others can see that it is obviously limiting our lives. But psychological depth and the possibility of self-deception go both ways: if that person over there is not transparent to himself, then neither am I, even if transparency admits of degrees. Hence, if we are going to argue that a person is willingly subjecting herself, we also need to account for our motivations in making such an argument and what we are, in a sense, getting out of it in terms of validating our worldview, our take on what matters.
This perspective on the idea of ‘willing subjection’ should not be interpreted as a call for inaction; rather, it is a call to personalise and contextualise our moral and political responses to, and analyses of, the lives of others. This means that if we are inclined to persuade individuals to change their understanding of their situation, then we cannot simply bypass their experience of wellbeing and their specific circumstances. In other words, sweeping judgements that take the form ‘group x is oppressed’ are not helpful; clearly there are all sorts of possibilities, and the only way to sort these out is to be aware of this complexity, without losing sight of ‘structural’ discrimination in a particular community. With this in mind we will find that the spectrum of oppression includes the following: some in group x are oppressed and are already fighting to change that; some do not consider themselves oppressed but change their take on the situation once they are presented with a different analysis of it; some do not consider themselves oppressed – despite clear evidence to the contrary – yet no amount of persuasion can get them to see this; some consider your interest in their freedom an attempt to oppress them; others consider themselves perfectly free and empowered.
Returning to our original question – the distinction between ideological and genuine forms of recognition – it appeared, to begin with, that the idea of ‘willing subjection’ held the key to that distinction. However, on closer examination it emerged that what this idea communicates is a conflict of worldviews rather than a view on the quality of interpersonal relations as relations of recognition. As argued earlier, whether ‘ideological’ or ‘genuine’, if the relations in question are to be relations of recognition then the individuals concerned must feel valued for who they are, and be able to see existing relations as contributing to their personal growth and fulfilment. In this sense the distinction between ideological and genuine recognition cannot be drawn using the notion of ‘willing subjection’. What this notion brings to light are the very real, and very deep, disagreements in beliefs, values, social roles, and life goals that exist across contexts and ideologies. And while it certainly is important to debate and negotiate these differences, in order for such disagreements not to end up themselves generating conditions for misrecognition, it is necessary not to lose sight of the individuals involved, including their take on what they are doing and their experience of freedom and wellbeing.
Excerpt from Chapter 1 of my book “Madness and the Demand for Recognition”. Forthcoming with Oxford University Press, 2018
Mad with a capital M refers to one way in which an individual can identify, and in this respect it stands alongside other social identities such as Maori, African-Caribbean, or Deaf. If someone asks why a person identifies as Mad or as Maori, the simplest answer that can be offered is to state that he identifies so because he is mad or Maori. And if this answer is to be anything more than a tautology – he identifies as Mad because he identifies as Mad – the ‘is’ must refer to something over and above that person’s identification; i.e. to that person’s ‘madness’ or ‘Maoriness’. Such an answer has the implication that if one is considered to be Maori yet identifies as Anglo-Saxon – or white and identifies as Black – one would be wrong in a fundamental way about one’s own nature. And this final word – nature – is precisely the difficulty with this way of talking, and underpins the criticism that such a take on identity is ‘essentialist’.
Essentialism, in philosophy, is the idea that some objects may have essential properties, which are properties without which the object would not be what it is; for example, it is an essential property of a planet that it orbits around a star. In social and political discussions, essentialism means something somewhat wider: it is invoked as a criticism of the claim that one’s identity falls back on immutable, given, ‘natural’ features that incline one – and the group with which one shares those features – to behave in certain ways, and to have certain predispositions. The critique of certain discourses as essentialist has been made in several domains including race and queer studies, and in feminist theory; as Heyes (2000, p. 21) points out, contemporary North American feminist theory now takes it as a given that to refer to “women’s experience” is merely to engage in an essentialist generalisation from what is actually the experience of “middle-class white feminists”. The problem seems to be the construction of a category – ‘women’ or ‘black’ or ‘mad’ – all members of which supposedly share something deep that is part of their nature: being female, being a certain race, being mad. In terms of the categories, there appears to be no basis for supposing either gender essentialism (the claim that women, in virtue of being women, have a shared and distinctive experience of the world: see Stone (2004) for an overview), or the existence of discrete races (e.g. Appiah 1994a, pp. 98-101), or a discrete category of experience and behaviour that we can refer to as ‘madness’ (or ‘schizophrenia’ or any other psychiatric condition for this purpose). Evidence for the latter claim is growing rapidly as the following overview indicates.
There is a body of literature in philosophy and psychiatry that critiques essentialist thinking about ‘mental disorder’, usually by rebutting the claim that psychiatric categories can be natural kinds (see Zachar 2015, 2000; Haslam 2002; Cooper 2013 is more optimistic). A ‘natural kind’ is a philosophical concept which refers to entities that exist in nature and are categorically distinct from each other. The observable features of a natural kind arise from its internal structure, which is also the condition for membership of the kind. For example, any compound whose molecules consist of two hydrogen atoms and one oxygen atom is water, irrespective of its observable features (which in the case of H2O can be ice, liquid, or gas). Natural kind thinking informs typical scientific and medical approaches to mental disorder, evident in the following assumptions (see Haslam 2000, pp. 1033-1034): (1) different disorders are categorically distinct from each other (schizophrenia is one thing, bipolar disorder another); (2) you either have a disorder or not – a disorder is a discrete category; (3) the observable features of a disorder (symptoms and signs) are causally produced by its internal structure (underlying abnormalities); (4) diagnosis is a determination of the kind (the disorder) which the individual instantiates.
If this picture of strong essentialism appears as a straw-man it is because thinking about mental disorder has moved on or is in the process of doing so. All of the assumptions listed here have been challenged (see Zachar 2015): in many cases it’s not possible to draw categorical distinctions between one disorder and another, and between disorder and its absence; fuzzy boundaries predominate. Symptoms of schizophrenia and of bipolar disorder overlap, necessitating awkward constructions such as schizoaffective disorder or mania with psychotic symptoms. Similarly, the boundary between clinical depression and intense grief has been critiqued as indeterminate. In addition, the reductive causal picture implied by the natural kind view seems naive in the case of mental disorder: it is now a truism that what we call psychiatric symptoms are the product of multiple interacting factors (biological, social, cultural, psychological). And diagnosis is not a process of matching the patient’s report with an existing category, but a complicated interaction between two parties in which one side – the clinician – constantly reinterprets what the patient is saying in the language of psychiatry, a process which the activist literature has repeatedly pointed out permits the exercise of power over the patient.
The difficulties in demarcating health from disorder and disorders from each other have been debated recently under the concept of ‘vagueness’: the idea that psychiatric concepts and classifications are imprecise, with no sharp distinctions possible between those phenomena to which they apply and those to which they do not (Keil, Keuck, and Hauswald 2017). Vagueness in psychiatry does not automatically eliminate the quest for more precision – it may be the case, for example, that we need to improve our science – but it does strongly suggest a formulation of states of health and forms of experience in terms of degrees rather than categories, i.e. a gradualist approach to mental health. Gradualism is one possible implication of vagueness, and there is good evidence to support it as a thesis. For example, Sullivan-Bissett and colleagues (2017) have convincingly argued that delusional and non-delusional beliefs differ in degree, not kind: non-delusional beliefs exhibit the same epistemic shortcomings attributed to delusions: resistance to counterevidence, resistance to abandoning the belief, and the influence of biases and motivational factors on belief formation. Similarly, as pointed out earlier, the distinction between normal sadness and clinical depression is difficult to make on principled grounds, and relies on an arbitrary specification of the number of weeks during which a person can feel low in mood before a diagnosis can be given (see Horwitz and Wakefield 2007). Another related problem is the non-specificity of symptoms: auditory hallucinations, thought insertion, and other passivity phenomena, which are considered pathognomonic of schizophrenia, can be found in the non-patient population as well as in other conditions (e.g. Jackson 2007).
Vagueness in mental health concepts, and gradualism with regard to psychological phenomena, undermine the idea that there are discrete categories underpinned by an underlying essence that go with labels such as schizophrenia, bipolar disorder, or madness. But people continue to identify as Women, African-American, Maori, Gay, and Mad. Are they wrong to do so? To say they are wrong is to mistake the nature of social identities. To prefigure a discussion that will occupy a major part of Chapters 4 and 5, identity is a person’s understanding of who he or she is, and that understanding always appeals to existing collective categories: to identify is to place oneself in some sort of relation to those categories. To identify as Mad is to place oneself in some sort of relation to madness; to identify as Maori is to place oneself in some sort of relation to Maori culture. Now those categories may not be essential in the sense of falling back on some immutable principle, but they are nevertheless out there in the social world, and their meaning and continued existence do not depend on one person rejecting them (nor can one person alone maintain a social category, even if he or she can play a major role in conceiving it). Being social in nature they are open to redefinition, hence collective activism to reclaim certain categories and redefine them in positive ways. In fact, the argument that a particular category has fuzzy boundaries and is not underpinned by an essence may enter into its redefinition. But demonstrating this cannot be expected to eliminate people’s identification with that category: the inessentiality of race, to give an example, is not going to be sufficient by itself to end people’s identification as White or Black.
In the context of activism, to identify as Mad is to have a stake in how madness is defined, and the key issue becomes the meaning of madness. To illustrate the range of ways in which madness has been defined, I appeal to some key views that have been voiced in a recent, important anthology: Mad Matters: A Critical Reader in Canadian Mad Studies (2013). A key point to begin with is that Mad identity tends to be anchored in experiences of mistreatment and labelling by others. By Mad, Poole and Ward (2013, p. 96) write, “we are referring to a term reclaimed by those who have been pathologised/psychiatrised as ‘mentally ill’”. Similarly, Fabris (2013, p. 139) proposes Mad “to mean the group of us considered crazy or deemed ill by sanists … and are politically conscious of this”. These definitions remind us that a group frequently comes into being when certain individuals experience discrimination or oppression which they then attribute to some features they share, no matter how loosely. Those features have come to define the social category of madness. Menzies, LeFrancois, and Reaume (2013, p. 10) write:
Once a reviled term that signalled the worst kinds of bigotry and abuse, madness has come to represent a critical alternative to ‘mental illness’ or ‘disorder’ as a way of naming and responding to emotional, spiritual, and neuro-diversity. … Following other social movements including queer, black, and fat activism, madness talk and text invert the language of oppression, reclaiming disparaged identities and restoring dignity and pride to difference.
In a similar fashion, Liegghio (2013, p. 122) writes:
madness refers to a range of experiences – thoughts, moods, behaviours – that are different from and challenge, resist, or do not conform to dominant, psychiatric constructions of ‘normal’ versus ‘disordered’ or ‘ill’ mental health. Rather than adopting dominant psy constructions of mental health as a negative condition to alter, control, or repair, I view madness as a social category among other categories like race, class, gender, sexuality, age, or ability that define our identities and experiences.
Mad activism may start with shared experiences of oppression, stigma, and mistreatment; it continues with the rejection of biomedical language and the reclamation of the term ‘mad’; and it proceeds by developing positive content for madness and hence for Mad identity. As Burstow (2013, p. 84) comments:
What the community is doing is essentially turning these words around, using them to connote, alternately, cultural difference, alternate ways of thinking and processing, wisdom that speaks a truth not recognised …, the creative subterranean that figures in all of our minds. In reclaiming them, the community is affirming psychic diversity and repositioning ‘madness’ as a quality to embrace; hence the frequency with which the word ‘Mad’ and ‘pride’ are associated.
My essay, about to be published in the Journal of Medicine & Philosophy.
I write defending mad positive approaches against the tendency to adopt a medical view of the limitations associated with madness. Unlike most debates that deal with similar issues – for example the debate between critical psychiatrists and biological psychiatrists, or between proponents of the social model of disability versus those who endorse the medical model of disability – my essay is not a polemical adoption of one or other side, but a philosophical examination of how we can talk about disability in general, and madness in particular.
You can read the essay here: IN DEFENCE OF MADNESS
And here is the abstract: At a time when different groups in society are achieving notable gains in respect and rights, activists in mental health and proponents of mad positive approaches, such as Mad Pride, are coming up against considerable challenges. A particular issue is the commonly held view that madness is inherently disabling and cannot form the grounds for identity or culture. This paper responds to the challenge by developing two bulwarks against the tendency to assume too readily the view that madness is inherently disabling: the first arises from the normative nature of disability judgements, and the second from the implications of political activism in terms of being a social subject. In the process of arguing for these two bulwarks, the paper explores the basic structure of the social model of disability in the context of debates on naturalism and normativism; the applicability of the social model to madness; and the difference between physical and mental disabilities in terms of the unintelligibility often attributed to the latter.
Over the course of last year I have been working on a small project with Rachel Bingham examining the possibility of distinguishing ‘social deviance’ from ‘mental disorder’ in light of recent work on concepts of health. The result was an essay published recently in the journal Philosophy, Psychiatry & Psychology (21:3-September 2014).
In our response to Moncrieff and Stein we found it necessary to point out that in the writings of some critical psychiatrists and psychologists there is a problematic conflation of empirical with conceptual issues in relation to ‘mental disorder’. That section is reproduced below. Note that Criterion E is the final clause in the DSM definition of mental disorder. It states that a mental disorder must not solely be a result of social deviance or conflicts with society.
Let us begin by revisiting the conceptual basis of attributions of mental disorder. Criterion E is not, as we argued with Stein et al. (2010, 1765), conceptually necessary, but is of ethical and political importance given the historical context. Thus, notwithstanding the other criteria, a condition can only be considered a candidate for mental disorder if “dysfunction” is present. What is a dysfunction? As Moncrieff puts it, there is a tautology in the definition of mental disorder where it is stated that a mental disorder reflects an “underlying psychobiological dysfunction” (Moncrieff 2014). Moncrieff argues that this is flawed because underlying processes have not been established, which renders the definition tantamount to saying that a dysfunction is a reflection of a dysfunction: a definition that adds nothing to our knowledge.
Here Moncrieff follows Thomas Szasz in finding a lack of resemblance to physical disorder to be the primary problem with the concept of mental disorder (see Fulford et al. 2013).1 In pursuing this, the critical psychiatrist not only fails to see the complexity of the concept of physical disorder, but also commits the same error as the biological psychiatrist. The latter implies that an ever longer awaited complete neurochemistry of mental health conditions would solve the conceptual problems. The former—the critical psychiatrist—implies the converse: that the absence of proof for the “existence of separate and distinct foundational processes,” as Moncrieff (2014) puts it, proves that mental health conditions are not disorders. As we have argued elsewhere, identifying the biological basis for a set of behaviors or symptoms does not in itself pick out what is pathological or disordered: for example, a complete description of the neurochemical states governing sexuality would not permit the inference that homosexuality is a disorder, any more than discovery of the neural correlates of falling in love or criminality would make these mental illnesses (Bingham and Banner 2012). Neurobiological changes—their presence or their absence—tell us about conditions when we find them by other means, but they do not tell us what is or is not a disorder. The same arguments could be run for underlying psychological processes. Consequently, emphasis on scientific progress, or on the failure to progress, in understanding the neurobiological correlates of mental health conditions does little to advance the conceptual debates, a point that may help to explain the impasse in the ongoing exchange between critical and biological psychiatrists.
Thus, although Moncrieff is right in pointing out that the term ‘dysfunction’ is redundant in the definition of mental disorder, she is wrong about the reason why this is so. It is not, as she claims, because no “separate and distinct foundational processes” (2014) that can ground dysfunction have been discovered empirically. After all, this leaves her open to the simple response that they actually have been, a response many biological psychiatrists do offer. The redundancy of the term ‘dysfunction’ in the definition of mental disorder is a result of conceptual analysis (and not empirical evidence), whereby it has not proven possible to define dysfunction in a way that excludes values. Here, we follow Derek Bolton in the view that once we “give up trying to conceptually locate a natural fact of the matter [dysfunction] that underlies illness attribution… then we are left trying to make the whole story run on the basis of something like ‘distress and impairment of functioning’” (2010, 332). We are left then with those things that matter in real life, the reasons that lead to healthcare being sought: usually the presence of significant distress and disability.
This is what the terms ‘dysfunction’ and ‘mental disorder’ pick out once we achieve some clarity on their referents. Stein is clearly aware of the problems inherent in defining dysfunction. However, somewhat surprisingly, the assumption that we can talk of ‘dysfunction’ over and above experienced factors (distress and disability in particular) persists throughout Stein’s commentary. In other words, although Stein has acknowledged the conceptual problem, in places he still writes as if there were a clear definition of dysfunction, without telling us what this would be. For example, he describes “situations when there is evidence of dysfunction, but an absence of distress and/or impairment” and gives the example of tic disorders, which have no “clinical criterion (emphasizing distress and/or impairment)” (Stein 2014). We would argue that, despite the lack of explicit acknowledgement in DSM, tic disorders enter the manual because of their association with clinically significant distress and disability. It is important to avoid confusing the empirical questions (e.g., Why do people have tics? Can people have tics and not be distressed?) with the conceptual questions (e.g., When is a tic a disorder? Can tics be disorders if they do not cause distress or impairment?).
A further potential pitfall is to conflate the technical use of ‘dysfunction’ with the ordinary use of that term. This might occur where, on the one hand, we perceive a ‘dysfunction’ but on the other hand we are unable to say what the dysfunction consists of. When Moncrieff writes that dysfunction and distress are not coextensive, because “people may neglect themselves and act in other ways that compromise their safety and survival without necessarily being distressed,” she is offering a description of behavior many would consider ‘dysfunctional’ in the lay sense (2014). Considered as a basis for conceptual analysis, however, this does not illuminate any “underlying psychobiological dysfunction”, which previous definitions aspired to do. Indeed, it is somewhat surprising that Moncrieff provides this counterexample rather than sticking to her argument that dysfunction in fact does not exist. In citing safety and survival, Moncrieff’s phrase does resemble the evolutionary theoretic approach (notably described in Wakefield’s Harmful Dysfunction Analysis), which, as has been discussed widely elsewhere and noted in our paper, has fallen out of favor owing to problems with evolutionary theory specifically and naturalistic definitions in general. What of importance is left in Moncrieff’s putative definition if not underlying psychobiological and evolutionary dysfunction? We would argue: only the harm or threat of harm experienced by the individual, whether that harm is cashed out as distress and disability or as some other similar negatively evaluated experienced factor.
Article published in Theoretical Medicine & Bioethics 2015
Abstract: The centenary of Karl Jaspers’ General Psychopathology was recognised in 2013 with the publication of a volume of essays dedicated to his work (edited by Stanghellini and Fuchs). Leading phenomenological psychopathologists and philosophers of psychiatry examined Jaspers’ notion of empathic understanding and his declaration that certain schizophrenic phenomena are ‘un-understandable’. The consensus reached by the authors was that Jaspers operated with a narrow conception of phenomenology and empathy, and that schizophrenic phenomena can be understood through what they variously called second-order and radical empathy. This article offers a critical examination of the second-order empathic stance along phenomenological and ethical lines. It asks: (1) Is second-order empathy (phenomenologically) possible? (2) Is the second-order empathic stance an ethically acceptable attitude towards persons diagnosed with schizophrenia? I argue that second-order empathy is an incoherent method that cannot be realised. Further, the attitude promoted by this method is ethically problematic insofar as the emphasis placed on radical otherness disinvests persons diagnosed with schizophrenia of a fair chance to participate in the public construction of their identity and, hence, to redress traditional symbolic injustices.
Mohammed Abouelleil Rashed 2015
My chapter published online at Oxford Handbooks.
Will appear in print in the Oxford Handbook for Psychiatric Ethics Volume 1 next year.
Islamic Perspectives on Psychiatric Ethics explores the implications for psychiatric practice of key metaphysical, psychological, and ethical facets of the Islamic tradition. It examines: (1) the nature of suffering and the ways in which psychological maladies and mental disorder are bound up with the individual’s moral and spiritual trajectory; (2) the emphasis placed on social harmony and the formation of a moral community over personal autonomy; (3) the sources of normative judgements in Islam and the principles whereby ethical/legal rulings are derived from the Qur’an and the Prophetic Traditions. Finally, the perspective of the chapter as a whole is employed to present an Islamic view on a number of conditions, practices, and interventions of interest to psychiatric ethics.
Click HERE for Pre-Production version
Back in the summer of 2013 when Turkey’s Taksim square protests were at their height, I recall watching a reporter interviewing a protestor to the background of teargas smoke and fervent chanting against the government. The protestor unflinchingly and passionately declared that they are all here demanding their freedom from the dictatorial state. The effect of this whole scene on me was no less than visceral: I felt sick in that way you do when a cliché of massive proportions is unleashed upon you or, even better, when your interlocutor’s moral high-ground is so high – and so delusional – that your natural response, were you not mildly disposed, would be to punch him in the face. Revolution. I no longer believe in Revolution. In fact, I am positively opposed to it, to that irrational impulse to ‘occupy’ the Square and engage in fake unity over idealistic demands with people who in any other context you would normally reject the very idea of spending a minute with, and not only because you find them morally reprehensible. How did this happen; how have I become so anti-Revolution?
It wasn’t always like this. On the 26th of January 2011, a day after the Egyptian Revolution had started in earnest and Tahrir Square was definitely ‘occupied by the People’, I booked a flight from my London abode and flew to Cairo to take part in what I described at the time as “the most significant moment in my life so far”. Together with my ‘fellow’ Egyptians we occupied the Square, our chants developing from the usual concoction of Bread, Freedom, and Social Justice to the comically simple and reductive howwa yemshi mesh hanemshi: He (Mubarak) must go, we won’t go. And he went. On the 11th of February 2011, in what we would later understand to have been a sort of internal Coup against Mubarak, a thirty-second announcement was delivered by the late General Omar Suleiman – then head of the Secret Service – declaring that Mubarak had waived his powers to the Supreme Council of the Armed Forces (SCAF).
Right after returning to London I wrote, in an intense state, my account of eleven days in Tahrir Square and published it in Anthropology Today. The article was a success: it became one of the most read articles in that journal for 2011. I was contacted by a South American – Nicaraguan – Revolutionary journal for permission to have it translated into Spanish, and in June of the same year it was published in Envio. The South Americans were, of course, for people of my generation, the quintessential Revolutionaries. Yet on reading my account now I have the exact visceral response I had to the Taksim Square protestors: I feel sick – and embarrassed. There is an unmistakable sense of innocence, passion – and delusion – that jumps at you from the page when you read my account of the occupation of Tahrir Square. We were all One. You would see Westernised Egyptian girls, their hair flowing, conversing and agreeing with bearded Salafi men in their white robes. Rich Egyptians sharing a spot and a glass of tea with the destitute inhabitants of Cairo’s slums on the by-now eroded grass of the Square. Egyptians, famous for being organisationally and aesthetically challenged, forming neat queues and cleaning the Square to prove to the State that we could do it. We were all united and on our best behaviour. The corrupt state – Mubarak and his henchmen – was the enemy, and we were, unquestionably to us, worthy occupiers of the moral high ground. If they would just go, we the People would set it right. And this was and remains the crux of the problem with Tahrir Square and with Revolution in general.
What happened next is well known and has been extensively analysed. In a number of perceptive articles, the Egyptian writer Youssef Rakha eloquently documented and devastated the charade that is the Egyptian Revolution. By October 2011, when dozens of Coptic protestors were murdered at Maspero by security forces, and amid the ensuing fabrications constructed by certain ‘fellow’ Egyptians to blame the Copts, I became acutely aware that the unity of Tahrir Square was nothing but a temporary delusion: we were never One. We were always divided by class, education, belief, ideology, gender, geography, by our capacity for reason and our integrity: how did I ever think otherwise? Throughout the months in which SCAF were the explicit rulers of the country, they methodically destroyed the possibility of a reasonable transition to a reasonable government. Presidential elections conducted in June 2012 brought to power Muhammad Morsi of the Muslim Brotherhood who, after a series of political blunders, mismanagement, and opposition by key state institutions, was overthrown, having spent only twelve months in office, in what can only be described as a CoupVolution: it was not merely a Coup, and it certainly wasn’t a pure outcome of People Power. A few months after that, we were back pretty much to where it all started: an army general as our new president, having resigned from his position as head of SCAF. With the media resuming their familiar role of leader-worship and the country bitterly divided; with the space for expressing opinion severely restricted and the political discourse reduced to name-calling and falsehood; with two presidents on trial and thousands of political prisoners; with intolerance, religious dogma, and harassment right there on the surface of society, it’s no wonder that I and many people like me are painfully disillusioned. From those heady days of the Square to the situation we are in today: now that’s quite a fall.
What is wrong with Revolution? One of the more obvious criticisms is that Revolution can only be destructive. The collective uprising that is Revolution occurs because there is no political process capable of responding to people’s grievances and needs. The People rise and forcefully articulate what they do not want, but, naturally, they have nothing else to replace it with – nothing substantial or meaningful, that is. And this is not a coincidence. What is required for there to be a political vision by which alternatives to the existing system can be conceived is a political process capable of generating this vision. But Revolution is an outcome of the absence of such a process; it can therefore offer no serious alternative to replace the machinery of the State it is so intent on bringing down. A quickly cobbled-together system of ‘government’ that is in actuality a disguised sectarian ideology – in other words, the Muslim Brotherhood – does not qualify as a viable political system. In fact, in the case of Egypt, it brought the country to the brink of total collapse. Further, the demand for Bread, Freedom, and Social Justice may appear, contrary to my claim, a positive rather than a destructive demand. But how can this demand ever be realised in the absence of the State? If the People want material equality, freedom, and respect, their hope of realising any of this is within the confines of a functioning State. The State may fail miserably on all these dimensions, but the very demand for equality, freedom, and respect presupposes an existing structure of which such demands can be made. Things, then, seem much more serious than the average Placard-Holding, Tear-Gas-Fighting, Square-Occupying, Freedom-Demanding protestor seems to appreciate. And I have come to realise that I, by virtue of participating in the Revolution, am also guilty of this phenomenal and dangerous naivety.
I might be accused of being too pessimistic and short-sighted. Revolution, the thought may proceed, can only be judged, like other major events of this kind, with the benefit of hindsight, once seen as part of broader historical changes. The long-term consequences of Revolution will only be palpable several decades down the line. Might it not have been the case that certain French individuals at the height of the French Revolution in the late 18th century were also, like me, disillusioned with the idea of Revolution? And weren’t they too myopic and ill-disposed to see that the French Revolution was a first step on a long road to Democracy, the system of government now generally considered infinitely preferable to absolute Monarchy? Now this is an important argument, and I concede that it is not possible to be cognizant of the future desirable consequences of such social upheaval. But that’s precisely the problem. We consider Democracy desirable because our values and perspectives have changed from those of the 18th century. From where we stand now, for many of us at least, it is difficult to desire a form of government that is entirely undemocratic. But the point of interrogating the rightness of an act, in this case of Revolution, is to interrogate it with what I have at my disposal now; with what I know now, and not with what I would know given the resources available through some hypothetical future. Revolution is a powerful social phenomenon with consequences beyond our capacity now to fathom, but the point is to know how we should position ourselves in relation to it as moral agents living in this age and place, right now, right here. And it is my contention that Revolution should be resisted because, paradoxically, it is a mechanism which guarantees that no change will actually happen where it matters.
Revolution is premised on a fundamental lack of integrity. Even more, Revolution is essentially defined by a worldview which is so morally unambiguous and transparent only because it traffics in one of the more extreme acts of self-deception a person can commit, short of outright insanity. Revolution is not morally discerning or subtle: there is ‘us’ and we are good; and there is ‘them’ and they are evil. This is a worldview so simple and reductive that in any other situation we would severely reprimand its holder – if not feel pity for him – whereas with Revolution we actively embrace it, shedding with it our cognitive and moral integrity. In apportioning all blame to a circumscribed entity – variously the State, Mubarak, the National Democratic Party, or the Muslim Brotherhood – the Revolutionary is free to plumb the depths of victimhood, shielding himself from all possibility of self-examination. That would be bad enough even if no serious consequences followed from this collective act of self-deception. But it is precisely this self-deception that makes it appear to the Revolutionary that one thing must happen, and must happen now: the identified guilty political entity must be dismantled. And what happens next? Having no alternative system to replace the outgoing one, what gradually but inevitably occurs is for that outgoing system to return, only rearranged and cosmetically altered. This is not due to some underlying conspiracy, or even to the failure of the Revolution; this is precisely the purpose of Revolution: a sort of rearrangement of the same political and social structure which existed before. Revolution is a trick, the purpose of which is to recycle society rather than genuinely change it. Revolution is conservative; Tradition in spectacular garb.
Joseph de Maistre famously wrote that “every nation gets the government it deserves”. While he was referring to the choices people make within a democracy, his epigram can equally be applied to autocracies, where people apparently have no choice in who governs them. Now that may sound counter-intuitive; after all, how can I deserve that which I have not chosen? How can anyone, to be more specific, deserve a Gaddafi or an Assad? But tyrants don’t just descend upon us from nowhere. We create tyrants as much as we create democrats, and both have ultimately to be accounted for in terms of the people whom they govern. In order to stop getting ‘what we deserve’, we must stop projecting the worst that is in us and receiving it back in the form of a Mubarak or a Sisi, then rising against them in an impotent act – Revolution – only to find, when the dust has settled, that nothing has changed. By reflecting, each one of us, on his or her place in the social fabric, we can begin to perceive the part we play in that ugly and fractured society we are so keen to change yet are unwilling to take responsibility for. It is not so much a case of the unashamedly romantic “be the change you want to see in the world”; rather, it is the more sober: if you want to see change in the world, then you had better start by looking at yourself.
Mohammed Abouelleil Rashed
Essay accepted for publication in the journal Philosophy, Psychiatry, & Psychology
Written with Dr Rachel Bingham
Abstract and excerpt.
Abstract: Can psychiatry distinguish social deviance from mental disorder? Historical and recent abuses of psychiatry indicate that this is an important question to address. Typically, the deviance/disorder distinction has been made, conceptually, on the basis of dysfunction. Challenges to naturalistic accounts of dysfunction suggest that it is time to adopt an alternative strategy to draw the deviance/disorder distinction. This article adopts and follows through such a strategy, which is to draw the distinction in terms of the origins of distress with the relevant conditions. It is argued that psychiatry’s ability to distinguish deviance from disorder rests on the ability to define, identify and exclude socially constituted forms of distress. These should lie outside the purview of candidacy for mental disorder. In pursuing this argument, the article provides an analysis of the social origins of a form of distress with the personality and sexual disorders, and indicates in what ways it is socially constituted.
Keywords: Distress; Dysfunction; DSM-5; Cognitive Dissonance; Sexual Disorders; Personality Disorders
CAN PSYCHIATRY DISTINGUISH SOCIAL DEVIANCE FROM MENTAL DISORDER?
INTRODUCTION
A number of leading figures in psychiatric nosology and the philosophy of mental health have proposed various changes to the definition of mental disorder (Stein et al. 2010). These changes were intended to guide the development of the definition in the now-published fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, the DSM-5. The authors proposed the following criteria, which develop those in the DSM-IV (APA 1994); a mental disorder is:
In this article we consider criterion E, an exclusionary criterion intended to safeguard against pathologising social deviance and imparting diagnoses on the basis of discrimination. The importance of this safeguard cannot be overstated. The distant as well as recent history of psychiatry is replete with instances of the abuse of diagnosis and treatment for political purposes (van Voren 2010). And psychiatry remains susceptible to the claim that it functions as a tool for social control, disposing of ‘problematic’ individuals under the justification of a medical diagnosis (Szasz 1998). It has been argued for some time that abuses of psychiatry do not require mal-intent on the part of clinicians, but happen despite the psychiatrists involved believing their diagnoses to be valid (van Voren 2002). Fulford, Smirnov and Snow (1993, 801) suggest that corruption, political pressures, poor clinical standards and a lack of safeguards “explain the ‘how’ but not the ‘why’ of abuse”. The authors argue that conceptual issues – in particular, the failure to recognise the value-laden nature of psychiatric diagnoses – explain the “why”, and leave psychiatry particularly vulnerable to abuse. Elsewhere, it has been argued that addressing past abuses of psychiatry requires a satisfactory definition of ‘mental disorder’ (Wakefield 1992). Antipsychiatrists did not agree with this diagnosis. Following Thomas Szasz’s seminal argument that mental illness is a ‘myth’, the conceptual foundation of psychiatry has been strenuously disputed. Conceptual issues were not, for Szasz, the root of abuses, but rather legitimised them:
[W]hile de jure, the mental hospital system functions as an arm of the medical profession, de facto, it functions as an arm of the state’s law-enforcement system. The practices thus authorized do not represent the abuses of psychiatry; on the contrary, they represent the proper uses of psychiatry, sanctioned by tradition, science, medicine, law, custom, and common sense. (Szasz 2000, 11-12)
This is an articulation of the concern, or allegation, to which Criterion E responds. In the past, the scholarly defence has been to argue, in various ways, that psychiatry is in fact able to recognise and define its proper domain; thus the question of what a mental disorder is remains central to the debate. Criterion E offers both an official recognition of the dangers of pathologisation and an apparent conceptual safeguard. This paper does not further rehearse the debate about the need for such a safeguard, but explores whether Criterion E is able to fulfil this role. Thus our contribution is to update the debate in the light of recent work on concepts of health and illness, to try to make the distinction between social deviance and mental disorder using DSM-5, and to provide an original analysis of the social origins of some forms of distress in the light of these considerations.[i]
In order to explore what criterion E entails we revert to the full definition provided in the now-published DSM-5: “Socially deviant behavior (e.g., political, religious, or sexual) and conflicts that are primarily between the individual and society are not mental disorders unless the deviance or conflict results from a dysfunction in the individual” (emphasis added). [ii] This is almost identical to the definition provided in the DSM-IV. Thus formulated, as Stein and colleagues (2010, 1765) note in relation to the DSM-IV, criterion E is not “strictly necessary”, as the prior specification (criterion ‘D’) that the condition or syndrome must be due to a dysfunction in the individual suffices. However, given the aforementioned importance of guarding against misuse of psychiatry for political or other discriminatory purposes, and the difficulty in indicating appropriate use of the term ‘dysfunction’, Stein and colleagues chose to retain criterion E in simplified form. Conceptually, then, if a dysfunction can be identified, a mental disorder can be said to be present if the other criteria are also fulfilled. The safeguard against pathologising social deviance is accordingly the identification of dysfunction in the individual. Thus, although presented as a criterion made necessary by the conceptual and empirical difficulties inherent in defining and identifying dysfunction, criterion E in fact depends, if it is to do any work, on the ability to define and identify dysfunction.
This article proceeds as follows: First, we identify some relevant meanings of ‘dysfunction’, with a particular focus on dysfunction understood in terms of the consequences of a syndrome: distress and disability. Second, we examine the implications for criterion E of understanding dysfunction in those terms. We argue that distinguishing social deviance from mental disorder now requires that a distinction be drawn between phenomena in which distress is an outcome of social conflict and discrimination and phenomena in which distress is intrinsic to the condition. Third, we explore different meanings of ‘intrinsic’ distress. We point out the difficulty in providing a positive definition and thus focus on what ‘intrinsic’ is not rather than on what it is. We propose that an alternative to distress being intrinsic to a condition is for such states to be constituted by social factors. What does it mean for distress to be constituted by social factors? To answer this question we explore the difference between factors that may cause a distressing state and factors that constitute that state. We argue that psychological states that are socially constituted – that is, are created and sustained by social factors – are excluded by criterion E from candidacy for mental disorder. Fourth, we provide an account of distress with the conditions of most relevance to the distinction between social deviance and mental disorder, pointing out in what ways distress may be understood as socially constituted. Fifth, and finally, we present some clarifications and outline some implications of this view. This article considers only Criterion E, and not the other criteria for a mental disorder as listed above. Thus, a condition that is argued to meet Criterion E may yet fail the other criteria and therefore not be considered a mental disorder under the DSM definition, despite meeting the final criterion.
As indicated in the introduction, to do any work criterion E depends on defining and identifying dysfunction. A reasonable starting point, then, would be to attempt to specify the meaning of the term ‘dysfunction’. One prominent strategy has been to seek a definition of dysfunction in naturalistic terms. The most widely debated and influential has been Jerome Wakefield’s evolutionary theoretic approach (1999, 1997). According to Wakefield, a dysfunction is a result of some mechanism failing to perform its natural function as designed (selected) by evolution (i.e. the function that can explain why the mechanism or organ exists and why it is designed the way it is). Wakefield’s account has been criticised as highly speculative and lacking in clinical utility. Further, it appears to rely on the questionable assumption “that there is a clear (enough) division between psychological functioning that is natural (evolved and innate), as opposed to social (cultivated)” (Bolton 2008, 124). In the absence of a clear division, Wakefield’s dysfunction cannot tag exclusively onto a fact of nature, precisely because psychological function is the product of “several interweaving” natural, social, and individual factors which are not separable through the science we currently possess (Bolton 2010, 329-331).
Problems with Wakefield’s account and with naturalism more generally have prompted alternative strategies to understand dysfunction.[iii] Thus, Bolton argues, if we abandon naturalism about illness, “if we give up trying to conceptually locate a natural fact of the matter that underlies illness attribution – then we are left trying to make the whole story run on the basis of something like ‘distress and impairment of functioning’” (2010, 332). Stein and colleagues note that an alternative to naturalism is to understand ‘dysfunction’ in terms of the “consequences of the syndrome, specifically that it leads to or is associated with distress and disability” (2010, 1763, emphasis added). The move from ‘naturalism about illness’ to ‘distress and disability as the mark of illness’ is a reversal of the priority of dysfunction from being antecedent to the syndrome to being a manifestation, or consequence, of it. For example, what marks out a syndrome like depression as illness is not some underlying and invariant psychological or biological mechanism(s) but the subjective experience of distress and the extent of impairment of the person’s day to day functioning. This is consistent with the syndrome being caused or constituted by biological factors: this reversal does not entail the denial of biology. What it indicates is that illness attributions, conceptually, cannot be made on the basis of an antecedent natural fact, but on the basis of the consequences of the syndrome as they manifest for the subject. This raises a further complexity in terms of which kinds of distress are to be conceived as illness as opposed to a normal response to the vicissitudes of life. We leave this complexity aside and stay with the original point: to do any work criterion E depends on defining and identifying dysfunction. Now that ‘dysfunction’ is understood in terms of the consequences of the syndrome, viz. distress and disability, could it be claimed that the identification of distress and disability is sufficient ground to diagnose mental disorder irrespective of social deviance or conflict? The answer to this question clearly is no. The reason is that distress and disability may be an outcome of social deviance and conflict, while they also may not. If we wish to ensure that diagnosis is not inappropriately applied to individuals whose suffering can, in some relevant and significant sense, be understood as a consequence or expression of conflict with society, then it becomes necessary to draw this distinction.
[i] A reviewer for this paper made the important point that the distinction between mental disorder and social deviance is itself a cultural construction with a long history. This suggests that there is scope to deconstruct the distinction. While clearly an interesting project in its own right, our concern here is limited to exploring whether – through criterion E – the distinction can be made. We thus assume that there is something called mental disorder or mental health problem (definitions of which are subject to much debate), and something called social deviance (which has nothing directly to do with mental disorder). We further assume that this is an important distinction to make.
[ii] DSM-5. The definition of Criterion E in the DSM-IV: “neither deviant behaviour (e.g. political, religious or sexual) nor conflicts that are primarily between the individual and society are mental disorders unless the deviance or conflict is a symptom of a dysfunction in the individual” (APA 2000, p. xxxi).
[iii] See Bolton (2008, 2013) and Kingma (2013) for review and critical assessment of the various attempts to define dysfunction in naturalistic terms.
Summary of an essay I completed recently.
Spirit possession is a common phenomenon around the world in which a non-corporeal agent is involved with a human host. This manifests in a range of maladies or in displacement of the host’s agency and identity. Prompted by engagement with the phenomenon in Africa, this paper draws some connections between spirit possession and the concepts of personhood and intentionality. It employs these concepts to articulate spirit possession, while also developing the intentional stance as formulated by Daniel Dennett. It argues for an understanding of spirit possession as the spirit stance: an intentional strategy that aims at predicting and explaining behaviour by ascribing to an agent (the spirit) beliefs and desires, but that is only deployed once the mental states and activity of the subject (the person) fail specific normative distinctions. Applied to behaviours which are generally taken to signal ‘madness’ or ‘mental illness’, the spirit stance preserves a peculiar form of intentionality where otherwise behaviour would be explained as the consequence of a broken physical mechanism. Centuries before the modern disciplines of psychoanalysis and phenomenological psychopathology endeavoured to restore meaning to ‘madness’, the social institution of spirit possession had been preserving the intentionality of socially deviant behaviour.
Culture, salience, and psychiatric diagnosis: exploring the concept of cultural congruence & its practical application. Philosophy, Ethics and Humanities in Medicine (Journal)
This article is part of the series: Towards a new psychiatry: Philosophical and ethical issues in classification, diagnosis and care
Cultural congruence is the idea that to the extent a belief or experience is culturally shared it is not to feature in a diagnostic judgement, irrespective of its resemblance to psychiatric pathology. This rests on the argument that since deviation from norms is central to diagnosis, and since what counts as deviation is relative to context, assessing the degree of fit between mental states and cultural norms is crucial. Various problems beset the cultural congruence construct including impoverished definitions of culture as religious, national or ethnic group and of congruence as validation by that group. This article attempts to address these shortcomings to arrive at a cogent construct.
The article distinguishes symbolic from phenomenological conceptions of culture, the latter expanded upon through two sources: Husserl’s phenomenological analysis of background intentionality and the neuropsychological literature on salience. It is argued that culture is not limited to symbolic presuppositions; it also shapes subjects’ experiential dispositions. This conception is deployed to re-examine the meaning of (in)congruence. The main argument is that a significant, because foundational, deviation from culture is a deviation not from a value or belief but from culturally instilled experiential dispositions: from what is salient to an individual in a particular context.
Applying the concept of cultural congruence must not be limited to assessing violations of the symbolic order and must consider alignment with or deviations from culturally-instilled experiential dispositions. By virtue of being foundational to a shared experience of the world, such dispositions are more accurate indicators of potential vulnerability. Notwithstanding problems of access and expertise, clinical practice should aim to accommodate this richer meaning of cultural congruence.
Youth and the revolution in Egypt
Selim H. Shahine
The Egyptian revolution: A participant’s account from Tahrir Square, January and February 2011
Mohammed Abouelleil Rashed and Islam El Azzazi
Inhabiting a risky earth: The Eyjafjallajökull eruption in 2010 and its impacts
Katrin Anna Lund and Karl Benediktsson
“The disappeared”: Power over the dead in the aftermath of 9/11
Chip Colwell-Chanthaphonh and Alice M. Greenwald
With Natalie Banner, Rachel Bingham, Norman Poole, Roman Pawar, and Abdi Sanati
This workshop considers the role of community in understandings of normality. In 1994, the DSM added a caveat to the definition of mental disorder: that cultural congruence protects individuals’ beliefs and values from being labelled as pathological. This reflected a blossoming political and ideological notion of ‘tolerance’, which now underpins widespread efforts to respect – and not alienate – communities with non-mainstream value systems and beliefs. The INPP 2012 conference reflects continued efforts to understand and embrace difference and promote tolerance. Yet mental disorder is fundamentally about ‘difference’, and is by definition not tolerated but treated. We therefore propose the following presentations in an exploration of ‘difference’ as it arises within, and between, communities. The first presentation questions why it is that a single individual with an unshakable and dangerous value system may sometimes be diagnosed with a mental disorder, while an unshakable and dangerous value system held by a group may be criminal but is not ‘pathological’. The second presentation considers the features of communities which protect against diagnosis. We consider the dependence of this immunity on being sufficiently organised and having a discourse and dialogue of acceptability or tolerance. The final presentation discusses the successes of the homosexual civil rights movement in establishing a respected orientation as opposed to a repressed medical condition. We consider the conceptual problems illuminated by this shift, which reveal important features of diagnosis itself.
Ethical practice in psychiatry is underpinned by a secular, anthropocentric concept of autonomy. While this reflects the cultural heritage of the communities where modern psychiatry was developed, it might not be suitable for populations with different understandings of autonomy. This presentation outlines some Islamic cultural/ethical issues of particular relevance to decision-making in psychiatry.
First, the scope of autonomy is considered. Outside one’s personal relation with God, autonomy is secondary to community. A collectivity can only achieve salvation when the conduct of each member is aligned with the norms of the faith. Moral/social violations are not individual choices but a threat to this order, and therefore of concern for others. Shared responsibility for the actions of others renders decision-making a collective enterprise guided by figures of authority. This has implications for informed consent, confidentiality, privacy, and the duty of clinicians towards patients.
Second, the paradox of agency is considered. Action in Islamic theology is both predetermined and the full responsibility of the agent. Suffering, in a determinist theodicy, is foreknown to God and is a trial and expiation for sins. This may promote fatalism towards treatment. With a free-will theodicy, humans bring suffering upon themselves through their actions, and must take an active attitude towards relieving it. Deterministic attitudes complicate the clinician’s duty to relieve suffering within the available means, and render sharing information (e.g. about prognosis) irrelevant. The presentation concludes by asking whether and to what extent a clinician should abandon her secular ethical principles in favour of other religious or cultural ones.
Delusions and the Madness of the Masses is the latest book by Lawrie Reznek, a writer whose work is associated with the field of the philosophy of psychiatry. Ambitious both in scope and intent, this book is the latest instalment in a tradition of works that employ the language of pathology and disorder – normally understood to apply to individuals – to describe whole societies and belief-systems. One is reminded of Freud’s (1969) assertion – which Reznek cites – that religion is mass delusion; of Edgerton’s (1992) characterisation of some pre-modern societies as “sick”; and of Dawkins’s (2006) polemic against God, belief in whom he describes as delusional. While thus not original, Reznek’s thesis – that certain subcultures, groups, and sometimes whole communities can be deluded and should be described as such – is arrived at primarily through philosophical argument rather than psychoanalytic insight or a perusal of detailed anthropological data. On the whole, and for reasons discussed below, I do not believe that Reznek has done enough to convincingly advance his thesis.
I wrote this back in February, at the height of the events. Now, ten months on, I am struck by the innocence with which one embraced what was going on. It was exhilarating and beautiful. Such a striking contrast with the bitterness, self-delusion, and rumour-mongering that characterise the revolution now.
Starting with Isaiah Berlin’s distinction between “negative and positive liberty”, Hirschmann proceeds to demonstrate that positive liberty does not consist only in the removal of external barriers and the facilitation of conditions conducive to the expression of freedom, but must also include attending to “internal barriers” (fears, addictions, compulsions) that may prevent individuals from making the right choices and accessing their freedom. Building on the ideas of Rousseau, Locke, and Hobbes, she extends the notion of the social construction of the virtuous citizen to the social construction of desire and choice, thus reversing the question from what I want or desire to why I harbor certain desires and make certain choices. Freedom then becomes not only about the absence of constraint on making a choice but also about the discursive construction of choice, and true freedom “has to be about having a say in defining the context” where choices are made. Hirschmann’s thesis raises many important questions, one of which I would like to introduce here: given the constructed nature of desire and choice, and the inevitable presence of what Sartre would call ‘bad faith’ (1943/2001, Ch. 2), what grounds do we have for determining the real freedom of an agent?
And this is what now seems to me an uncharacteristic ode to individualism. What had gotten into me at the time? I was probably too fed up with Mut; now I am not: in fact, I am nostalgic. Which goes to show that intellectual positions can be emotionally laden too!
Okay, a few days ago my good friend Youssef Rakha invited me to join his blog, or Blog Blog Blog as he decided to call it. I was shocked, surprised, and disgusted. Yes, disgusted. I will try to explain why: remember the last time you saw or thought of something that brought up a feeling of disdain in you. Think and you will definitely find something. Well, blogs had this effect on me. It’s a pre-cognitive problem, and it really does restrict our lives. In any case, I decided to overcome it — not just by joining Youssef’s blog but by creating one myself!
So how should I start…