8 Curiosities About Psychiatry That May Surprise You

PsychologyFor Editorial Team

Psychiatry occupies a peculiar space in the landscape of medicine—it’s the medical specialty that treats the most mysterious organ in the human body, using interventions that range from talking therapy to powerful medications to, yes, controlled electrical currents through the brain. It’s been the source of some of medicine’s greatest triumphs and most disturbing controversies. It’s evolved from the asylum era of chains and straitjackets to the modern era of neuroscience and evidence-based treatment, yet it remains the medical specialty most people understand least. When you mention psychiatry, people might think of Freudian couches, One Flew Over the Cuckoo’s Nest, or perhaps just “that’s the one where they can prescribe medication, right?” But the reality of psychiatry—its history, its practices, its challenges, and its surprising facts—is far more fascinating and complex than most stereotypes suggest.

Unlike cardiology, where the heart’s mechanics are relatively well-understood, or orthopedics, where broken bones show up clearly on X-rays, psychiatry grapples with conditions that can’t be seen on standard imaging, can’t be diagnosed with simple blood tests, and can’t always be explained by clear biological mechanisms. This has made psychiatry uniquely vulnerable to both stigma and misconception, but it’s also made it a field of remarkable innovation, controversy, and ongoing discovery. The history of psychiatry includes Nobel Prize-winning procedures we now consider barbaric, medications discovered by accident, experiments that changed how we think about diagnosis itself, and ongoing debates about fundamental questions like what mental illness actually is and whether certain conditions should even be considered illnesses at all. The field sits at the intersection of medicine, neuroscience, psychology, philosophy, and social policy in ways that make it endlessly complex and frequently surprising.

This article explores eight curiosities about psychiatry that might surprise even well-informed readers—facts that challenge common assumptions, reveal fascinating historical contexts, or illuminate current controversies in the field. Whether you’ve worked with psychiatrists, studied psychology, or are simply curious about how we understand and treat mental illness, these facts offer windows into a medical specialty that remains as fascinating as it is important, as controversial as it is essential, and as misunderstood as any field in modern medicine.

1. Lobotomy Won a Nobel Prize—And Was Performed on Tens of Thousands

Perhaps the most shocking fact about psychiatric history is that the frontal lobotomy—a procedure we now consider barbaric and unethical—won the Nobel Prize in Physiology or Medicine in 1949 for its inventor, Portuguese neurologist António Egas Moniz. The procedure involved severing connections in the brain’s prefrontal cortex to treat severe mental illness, and it was initially hailed as a breakthrough for patients with intractable conditions. Moniz developed the procedure (which he called leucotomy) in 1935, with the first surgeries performed by his neurosurgeon colleague Almeida Lima, and the technique spread rapidly, particularly in the United States, where Walter Freeman popularized an even cruder transorbital version called the “ice pick lobotomy.” It could be performed in doctors’ offices without conventional anesthesia (Freeman often used electroshock to render patients unconscious), using an instrument similar to an ice pick inserted through the eye socket.

Between the 1940s and 1960s, approximately 40,000 to 50,000 lobotomies were performed in the United States alone, with tens of thousands more in other countries. The procedure was used not just for severe mental illness but also for “difficult” behavior, rebelliousness in women, and even chronic pain. Patients often emerged from the surgery profoundly changed—subdued, emotionally blunted, sometimes reduced to childlike states. While some patients did show improvement in severe agitation or psychosis, many were left permanently disabled, with blunted personalities, cognitive impairments, and loss of initiative. The procedure prioritized making patients manageable over preserving their humanity and cognitive function. The lobotomy era represents one of medicine’s darkest chapters, illustrating how desperation to treat severe mental illness, combined with inadequate understanding of brain function and insufficient regard for patient autonomy, can lead to widespread harm despite good intentions. The practice largely ended in the 1960s and 1970s with the development of effective psychiatric medications and growing ethical awareness, but its legacy serves as a sobering reminder of the importance of rigorous evidence, ethical oversight, and respect for patient autonomy in psychiatry.

2. The “Chemical Imbalance” Theory of Depression Is Vastly Oversimplified

For decades, many psychiatrists, pharmaceutical companies, and public health campaigns described depression as being caused by a “chemical imbalance” in the brain, particularly involving the neurotransmitter serotonin. You’ve likely heard this explanation: “Depression is caused by low serotonin, and antidepressants work by correcting this imbalance.” This became the dominant public understanding of depression and the primary explanation for how antidepressants work. There’s just one problem—the actual neuroscience is far more complicated, and this simple explanation is misleading at best. While antidepressants do affect neurotransmitter levels, we don’t actually have evidence that depression is caused by a simple deficiency of serotonin or any other single neurotransmitter.

Recent research has largely moved away from the chemical imbalance model. A comprehensive 2022 review published in Molecular Psychiatry examined decades of research and found no clear evidence that depression is caused by low serotonin levels or activity. Brain chemistry is certainly involved in depression—neurotransmitters affect mood, and medications that alter neurotransmitter function can help some people—but the relationship is complex, involving interactions among multiple neurotransmitter systems, brain structures, neural circuits, inflammation, stress hormones, genetics, environmental factors, and psychological processes. Depression isn’t like diabetes where we can measure a clear deficiency (insulin) and replace it. The chemical imbalance explanation persisted partly because it was simple, destigmatizing, and commercially useful for pharmaceutical companies marketing antidepressants, but oversimplifying complex conditions can be problematic. It may lead people to expect medications to work like insulin for diabetes—quickly and completely correcting a deficiency—when reality is more complicated. It may also minimize the importance of psychotherapy, lifestyle factors, and social determinants of mental health. Modern psychiatry increasingly recognizes that mental illnesses involve complex interactions among biological, psychological, and social factors—what’s called the biopsychosocial model—rather than simple chemical imbalances.

The "Chemical Imbalance" Theory of Depression Is Vastly Oversimplified

3. Electroconvulsive Therapy (ECT) Is Still Used—And Actually Works

When most people hear “electroconvulsive therapy” or “ECT,” they think of terrifying scenes from One Flew Over the Cuckoo’s Nest—patients restrained and brutalized with unmodified electrical shocks, used as punishment or control. This imagery has made ECT one of the most stigmatized treatments in medicine. But here’s the surprising fact: ECT is still used today, it’s performed very differently than historical depictions, and for certain severe conditions, it’s actually one of the most effective treatments we have. Modern ECT bears little resemblance to the shocking scenes in movies. Patients receive general anesthesia and muscle relaxants, so they’re unconscious and their bodies don’t convulse. Carefully controlled electrical currents are applied to specific areas of the brain, inducing a brief seizure (the patient is unconscious and doesn’t experience it) that somehow resets brain chemistry in ways we don’t fully understand but that can rapidly alleviate severe depression.

ECT is particularly effective for severe, treatment-resistant depression, depression with psychotic features, and acute suicidal ideation where rapid intervention is needed. Response rates for ECT in severe depression can reach 70-80%, substantially higher than medication alone. It’s also used for severe mania, catatonia, and sometimes treatment-resistant schizophrenia. The main side effect is temporary memory problems, particularly around the time of treatment, though some patients experience longer-lasting memory issues. Despite its effectiveness and safety profile in modern practice, ECT remains vastly underutilized due to stigma, misconceptions, and the disturbing legacy of its historical misuse. Many patients who could benefit never receive it because of fear based on outdated information. Psychiatrists struggle with the ethics of recommending a treatment that patients are terrified of due to cultural depictions, even when evidence suggests it might be their best option. The disconnect between ECT’s stigmatized public image and its actual clinical utility represents one of psychiatry’s ongoing challenges in combating misconceptions that prevent people from accessing effective treatment.

4. Psychiatrists Are Medical Doctors; Psychologists Are Not

Many people don’t realize the fundamental difference between psychiatrists and psychologists, often using the terms interchangeably or thinking they’re minor variations of the same profession. In reality, they’re quite different. Psychiatrists are medical doctors—they attend medical school, earn an MD (Doctor of Medicine) or DO (Doctor of Osteopathic Medicine) degree, complete the same basic medical training as all physicians, then specialize in psychiatry through a residency program. This medical training means psychiatrists can diagnose mental illnesses, prescribe medications, order and interpret laboratory tests and brain imaging, provide psychotherapy, and manage the medical aspects of mental health conditions. They understand how mental illnesses interact with physical health conditions and medications, which is crucial since mental and physical health are deeply interconnected.

Psychologists, by contrast, typically hold doctoral degrees in psychology (PhD or PsyD) but don’t attend medical school. They’re extensively trained in psychological assessment, testing, research, and various forms of psychotherapy, but they generally cannot prescribe medications (except in a few U.S. states with special training) or provide medical care. Clinical psychologists focus primarily on psychotherapy and psychological testing, and they’re often experts in specific therapeutic approaches or conditions. Neither profession is “better”—they’re different and complementary. For conditions that might benefit from medication or where medical factors are significant, psychiatrists’ medical training is essential. For in-depth psychotherapy, psychological testing, or certain specialized treatments, psychologists’ extensive psychology training is invaluable. In ideal circumstances, psychiatrists and psychologists work collaboratively, with psychiatrists managing medication and medical aspects while psychologists provide intensive psychotherapy. However, the distinction is important for patients to understand when seeking treatment, since the professional you see will depend on what type of intervention you need. The confusion between these professions contributes to public misunderstanding about mental health treatment and who provides what services.

5. The Rosenhan Experiment Rocked Psychiatry’s Foundations

In 1973, a study was published that sent shockwaves through psychiatry and raised fundamental questions about psychiatric diagnosis that remain controversial today. Psychologist David Rosenhan conducted an experiment where eight healthy people (including Rosenhan himself) with no history of mental illness presented to psychiatric hospitals claiming to hear voices saying “empty,” “hollow,” and “thud.” Based solely on this single reported symptom, all eight were admitted to psychiatric hospitals, seven with diagnoses of schizophrenia and one with manic-depression (now called bipolar disorder). Once admitted, the “pseudopatients” stopped simulating any symptoms and behaved completely normally, yet they remained hospitalized for an average of 19 days (ranging from 7 to 52 days), and upon discharge, all but one were given diagnoses of “schizophrenia in remission.”

The study revealed disturbing findings: hospital staff never detected that these patients were faking, normal behaviors were often interpreted through the lens of psychiatric illness (one pseudopatient’s note-taking was recorded as “writing behavior” suggesting pathology), and once labeled with psychiatric diagnoses, everything these individuals did was viewed as symptomatic. Real patients, however, sometimes suspected the pseudopatients were journalists or investigators, suggesting patients might be more perceptive than staff in some ways. The study sparked fierce debate about the validity of psychiatric diagnosis, the potential for bias and stigma in psychiatric settings, and whether mental illnesses were real medical conditions or social constructs. Critics of the study noted methodological problems—faking symptoms to get diagnosed doesn’t prove diagnosis is invalid any more than faking chest pain and getting admitted for cardiac evaluation proves cardiology is pseudoscience. Despite its limitations, the Rosenhan experiment highlighted legitimate concerns about subjectivity in psychiatric diagnosis and the potential for psychiatric labels to stick once applied. It contributed to major reforms in psychiatric diagnosis and the development of more standardized diagnostic criteria in subsequent versions of the Diagnostic and Statistical Manual (DSM). The experiment remains controversial but undeniably influential as psychiatry continues to grapple with questions about the nature of mental illness and how we diagnose it.

6. The DSM Has Changed Dramatically—Homosexuality Was Once Listed as a Mental Disorder

The Diagnostic and Statistical Manual of Mental Disorders (DSM), published by the American Psychiatric Association, is psychiatry’s primary diagnostic reference—the book that defines what counts as a mental disorder. Many people assume the DSM represents timeless medical knowledge, that the conditions listed are objectively real diseases that have always been recognized. But the DSM’s history reveals how psychiatric diagnosis is shaped by scientific understanding, social values, and cultural contexts. The manual has gone through multiple editions (currently on DSM-5-TR), and each revision has added, removed, and modified diagnostic categories in ways that sometimes seem shocking in retrospect. Perhaps most infamously, homosexuality was listed as a mental disorder in the DSM until 1973, when it was removed after extensive debate and activism by gay rights advocates and sympathetic mental health professionals.

This wasn’t the only dramatic change. Long before the DSM existed, drapetomania—a supposed disease that caused enslaved people to flee captivity—was proposed as a legitimate psychiatric diagnosis by the physician Samuel Cartwright in 1851, a reminder of how diagnosis can encode the prejudices of its era. Within the DSM itself, multiple personality disorder became dissociative identity disorder and remains controversial regarding its validity. Asperger’s syndrome was folded into autism spectrum disorder. Gender identity disorder became gender dysphoria, with careful revisions to focus on distress rather than identity itself as pathological. Each edition of the DSM reflects both advancing scientific knowledge and evolving social understanding of what constitutes mental illness versus normal human variation. These changes raise profound questions about whether mental illnesses are discovered or constructed, whether psychiatric diagnosis reflects biological reality or social values. The truth is probably complex—some conditions (like schizophrenia or bipolar disorder) show remarkable consistency across cultures and times, suggesting biological reality, while others are more culturally shaped. The DSM’s evolution illustrates that psychiatric diagnosis isn’t purely objective medical science but involves value judgments about what behaviors and experiences are pathological versus acceptable human diversity. This doesn’t mean mental illnesses aren’t real or that psychiatry is arbitrary, but it highlights the unique challenges of a medical specialty that must define illness in the realm of thoughts, emotions, and behaviors, where the line between pathology and variation is often genuinely unclear.

7. Psychiatry Is Among the Lowest-Paid Medical Specialties

Given the complexity of mental illness, the extensive training required, and the critical importance of mental health care, you might assume psychiatrists are well-compensated. Surprisingly, psychiatry consistently ranks among the lowest-paid medical specialties. According to recent physician compensation surveys, psychiatrists earn substantially less than specialists like orthopedic surgeons, cardiologists, gastroenterologists, or radiologists—often hundreds of thousands of dollars less annually. The average psychiatrist salary is typically closer to primary care physician compensation than to the high-earning specialties. This pay gap exists despite psychiatrists completing just as many years of training as other specialists and treating conditions that can be just as complex, severe, and life-threatening as those managed by higher-paid specialties.

Several factors contribute to this disparity. Psychiatry involves lengthy patient appointments that don’t generate procedure-based fees—psychiatrists primarily bill for time and evaluation, whereas procedural specialties perform surgeries or interventions that reimburse at much higher rates. Insurance reimbursement rates for psychiatric services have historically lagged behind those for other medical specialties, reflecting systemic undervaluation of mental health care. Many psychiatrists also spend significant time on tasks that aren’t directly billable, like coordinating care, communicating with other providers and family members, and dealing with insurance authorization requirements that can be particularly burdensome in mental health treatment. The compensation gap contributes to critical psychiatrist shortages in many areas, particularly in rural regions and public mental health systems. Talented medical students considering specialties may be deterred by the relatively lower compensation combined with high stress, challenging patient populations, and insurance hassles. This workforce shortage means many people who need psychiatric care can’t access it, with wait times for new psychiatric appointments sometimes extending months. The pay disparity reflects and perpetuates broader societal devaluation of mental health care compared to physical health care, with serious consequences for the availability and quality of mental health services.

8. Many Psychiatric Medications Were Discovered by Accident

The development of psychiatric medications is often portrayed as the triumph of rational drug design—scientists understanding brain chemistry and designing molecules to target specific imbalances. The reality is far messier and more interesting. Many of psychiatry’s most important medications were discovered through serendipity, lucky accidents, or observations of unexpected side effects rather than through targeted development. The first antipsychotic medication, chlorpromazine (Thorazine), was developed as an antihistamine and tested as an anesthetic adjunct in the early 1950s. Surgeons noticed it made patients calm and indifferent without loss of consciousness, and psychiatrists tried it for severe mental illness, discovering it could reduce psychotic symptoms—a revolutionary finding that launched the era of psychopharmacology and eventually made it possible to discharge many long-term psychiatric hospital patients.

Similarly, iproniazid, one of the first antidepressants, was initially developed to treat tuberculosis. Doctors noticed that tuberculosis patients taking the drug became happier and more energetic, leading to trials for depression and the discovery of a new class of antidepressants (MAO inhibitors). Lithium, still the gold standard treatment for bipolar disorder, was discovered when Australian psychiatrist John Cade was testing the toxicity of uric acid in guinea pigs in 1948 and needed lithium to dissolve the uric acid. He noticed the lithium itself made the animals calmer, leading him to try it in manic patients with remarkable success. Even modern drug development involves substantial trial and error—we often don’t fully understand why medications work, and many are discovered by screening large numbers of compounds for desired effects. The accidental nature of many psychiatric drug discoveries highlights both how little we understood about brain chemistry when treatment advances began and how much we still don’t know. These medications work—often very well—but we’re frequently reverse-engineering explanations for mechanisms we discovered accidentally. This reality doesn’t diminish the importance of these medications, which have transformed treatment and saved countless lives, but it’s a reminder that psychiatric treatment remains more art informed by science than pure applied neuroscience, even as our understanding continues advancing.

Why These Curiosities Matter

Understanding these surprising facts about psychiatry matters for several reasons beyond mere historical interest. First, they combat stigma and misconceptions that prevent people from seeking mental health treatment. When people understand that psychiatry is a legitimate medical specialty with evidence-based treatments (even if imperfect), that mental health conditions are complex biological-psychological-social phenomena (not simple character flaws or chemical imbalances), and that modern treatment bears little resemblance to historical abuses, they may be more willing to seek help when needed. Second, these facts illustrate the importance of humility in medicine—recognizing that our understanding is incomplete, our treatments imperfect, and that what seems obviously correct now might be revised with new knowledge, just as lobotomy’s Nobel Prize now seems horrifying.

Third, understanding psychiatry’s evolution and current challenges helps us make better decisions about mental health policy, research funding, and clinical care. Knowing about workforce shortages and reimbursement disparities might influence how we structure mental health care systems. Understanding the limitations of diagnostic categories might make us more thoughtful about when to medicalize human experiences versus accepting them as normal variation. Recognizing that many treatments were discovered accidentally rather than through complete understanding might make us more open to trying different approaches when first-line treatments fail. Perhaps most importantly, these curiosities remind us that mental health treatment requires both scientific rigor and humanistic compassion—respect for patient autonomy, awareness of social contexts, and recognition that people experiencing mental illness are full human beings deserving dignity and comprehensive care, not just biological machines needing chemical adjustment or behaviors needing control.

FAQs About Psychiatry

Is psychiatry a legitimate medical specialty or pseudoscience?

Psychiatry is a legitimate medical specialty recognized by medical boards worldwide, but it faces unique challenges that fuel persistent questions about its scientific status. Psychiatrists complete full medical school training, pass rigorous board examinations, and complete 4+ years of specialized residency training in psychiatric diagnosis and treatment. The specialty is based on scientific research including neuroscience, genetics, pharmacology, and clinical trials. That said, psychiatry grapples with legitimate epistemological challenges: we can’t directly observe or measure most mental phenomena the way we can blood sugar or bone fractures; diagnostic categories are based on symptom clusters rather than identified biological mechanisms for most conditions; treatment often involves trial and error rather than targeted interventions based on understood pathophysiology; and psychiatric conditions exist on continua with normal human experience, making boundaries between pathology and variation genuinely fuzzy. These challenges don’t make psychiatry pseudoscience—they make it science confronting uniquely complex phenomena. All medicine involves uncertainty and imperfect knowledge; psychiatry just confronts this more obviously than specialties with clearer biomarkers. Legitimate criticisms of psychiatry (overdiagnosis, overreliance on medication, diagnostic subjectivity, historical abuses) don’t negate its validity but highlight areas needing improvement. The question isn’t whether psychiatry is legitimate but how we can improve its scientific foundations while acknowledging the inherent challenges of treating conditions in the realm of consciousness, emotion, and behavior. Most mental health professionals and researchers view psychiatry as a developing medical science—not pseudoscience, but also not as mature as fields with longer histories and clearer biological markers.

Why can’t we just do a blood test or brain scan to diagnose mental illness?

This is one of the most common questions about psychiatric diagnosis, and the inability to diagnose mental illnesses with objective tests contributes to persistent skepticism about psychiatry. The reality is complex. For most psychiatric conditions, we simply haven’t identified biomarkers—biological signs that reliably distinguish people with the condition from those without it. This doesn’t mean mental illnesses lack biological bases; it means the biology is complex, involving interactions among many genes, brain regions, neurotransmitter systems, and environmental factors that don’t produce simple, measurable abnormalities visible on standard tests. Brain scans can show differences between groups (people with schizophrenia versus controls on average) but these differences aren’t consistent or distinctive enough to diagnose individuals. Blood tests can rule out medical conditions that mimic psychiatric symptoms (thyroid problems, vitamin deficiencies) but can’t diagnose primary psychiatric conditions. Some argue this means psychiatric diagnoses aren’t “real” diseases, but this assumes all legitimate medical conditions must have biomarkers, which isn’t true—migraine headaches, for instance, are diagnosed purely based on symptom patterns without objective tests. The lack of biomarkers does create real problems: it makes diagnosis more subjective and vulnerable to bias; it makes it harder to study disease mechanisms; it allows skeptics to dismiss psychiatric conditions as “not real.” However, psychiatry diagnoses based on careful assessment of symptoms, history, and functioning—similar to how many medical conditions are diagnosed—using standardized criteria developed through extensive clinical experience and research. The field actively searches for biomarkers that might eventually enable more objective diagnosis, but until then, clinical assessment by trained professionals remains the standard. The absence of blood tests doesn’t make psychiatric conditions less real or less deserving of treatment than other medical conditions diagnosed primarily through clinical evaluation.

Are psychiatric medications just “chemical straitjackets” that numb people?

This criticism reflects legitimate concerns about overmedicalization and the historical misuse of psychiatric drugs, but it’s an oversimplification that doesn’t match most patients’ experiences with modern psychiatric medications. Yes, some psychiatric medications have sedating effects, particularly at high doses or in certain individuals. Yes, psychiatric medications have been misused—given to institutionalized patients primarily for behavioral control, prescribed too readily without adequate psychotherapy, or used to suppress normal human emotions that don’t represent illness. Yes, side effects can be significant for some medications, and finding the right medication often involves trial and error. However, for many people with moderate to severe mental illness, psychiatric medications are neither chemical straitjackets nor panaceas—they’re tools that, when used appropriately, can reduce suffering and improve functioning without fundamentally changing who someone is. A person with severe depression who can barely get out of bed, can’t concentrate, and contemplates suicide isn’t experiencing authentic emotion that medication suppresses—they’re experiencing pathological brain states that medication can help correct, allowing them to feel more like themselves. Someone with schizophrenia whose thoughts are disordered and who experiences terrifying hallucinations isn’t being “numbed” by antipsychotics but rather freed from symptoms that prevent normal functioning. When psychiatric medications work well, people often describe feeling more like themselves rather than less—no longer constantly battling intrusive symptoms, able to engage with life, work, and relationships. That said, medications are overused in some contexts, particularly in nursing homes and institutions where they may be given primarily for staff convenience. They’re sometimes prescribed too quickly without adequate evaluation or trial of psychotherapy. Finding the right medication and dose requires careful calibration, and not all medications work for all people. The reality is nuanced: psychiatric medications are powerful tools that can be tremendously helpful or problematic depending on how they’re used, whether they’re appropriate for the individual, and whether they’re part of comprehensive treatment including psychotherapy and support rather than the sole intervention.

Why does it take so long to get a psychiatry appointment?

The difficulty accessing psychiatric care, with wait times sometimes extending months for new appointments, reflects a genuine crisis in psychiatrist availability rather than lack of demand. Several factors contribute to this shortage. First, there simply aren’t enough psychiatrists relative to need. The United States alone faces a shortage of approximately 10,000 to 15,000 psychiatrists, with rural and underserved areas particularly affected. Many communities have no psychiatrists at all, requiring patients to travel hours or go without care. Second, the distribution of psychiatrists is skewed toward wealthy urban areas where patients can pay out-of-pocket, while areas relying on insurance reimbursement or serving low-income populations struggle to attract psychiatrists. Third, many psychiatrists opt out of insurance networks entirely due to low reimbursement rates and administrative hassles, becoming cash-only practices that only affluent patients can afford, further reducing access for typical patients. Fourth, psychiatrists often maintain smaller patient panels than other specialists because psychiatric appointments require more time than brief medication management allows if done properly, limiting how many patients each psychiatrist can see. Fifth, the relatively lower compensation compared to other medical specialties deters medical students from choosing psychiatry, perpetuating the shortage. Sixth, psychiatrists face high burnout rates due to challenging patient populations, high acuity, insurance hassles, and limited resources, causing some to reduce hours or leave practice entirely. The shortage creates vicious cycles where overwhelmed psychiatrists resort to brief medication-focused appointments because they can’t spare time for longer evaluations, reducing job satisfaction while not meaningfully improving access. Addressing this crisis requires systemic changes: improving reimbursement for psychiatric services, expanding psychiatric residency positions, providing loan forgiveness for psychiatrists working in underserved areas, reducing administrative burdens, and expanding the psychiatric workforce through training more psychiatrists and better utilizing other mental health professionals like psychiatric nurse practitioners and physician assistants. Until these changes occur, access problems will persist, with serious consequences for people needing psychiatric care.

Is psychiatry just a tool of social control?

This critique has historical validity and raises important ongoing concerns, though overstating the case risks dismissing legitimate psychiatric treatment. Historically, psychiatry has absolutely been used for social control—forcibly institutionalizing people who were inconvenient or different, pathologizing homosexuality and other normal variations, performing lobotomies on “difficult” patients, using diagnosis and involuntary hospitalization to silence political dissidents (particularly in Soviet psychiatry), and disproportionately confining poor people and minorities to understaffed institutions. The power to define what counts as mental illness and who gets labeled as mentally ill is indeed a form of social power that can be abused. Contemporary concerns include the medicalization of normal childhood behavior through ADHD diagnosis, the influence of pharmaceutical companies in promoting medication for increasingly broad conditions, and disparities in who receives what diagnoses and treatments (for instance, Black patients being more likely to receive schizophrenia diagnoses and less likely to receive depression diagnoses compared to white patients with similar symptoms, possibly reflecting bias). However, recognizing these legitimate concerns doesn’t support the conclusion that psychiatry is inherently nothing but social control. Most psychiatric patients seek treatment voluntarily because they’re suffering from genuine distress or impairment—the depression, anxiety, psychosis, or other symptoms are real problems, not social constructs or political labels. The fact that psychiatry can be misused doesn’t mean all psychiatric treatment is control any more than the fact that antibiotics are sometimes overprescribed means infections aren’t real or antibiotics aren’t helpful. The challenge is distinguishing legitimate treatment from social control, which requires constant vigilance about: respecting patient autonomy wherever possible; minimizing coercive treatment and using it only when genuinely necessary to prevent harm; maintaining awareness of how social values shape diagnostic categories; ensuring diverse perspectives inform psychiatric practice; and guarding against conflicts of interest from pharmaceutical companies. Modern psychiatry, at its best, helps people suffering from genuine mental illnesses while respecting their autonomy and diversity. At its worst, it can indeed function as social control. The answer isn’t rejecting psychiatry entirely but continually working to maximize the former and minimize the latter through ethical practice, patient rights protections, and critical self-examination of the field’s assumptions and practices.

Do antidepressants actually work, or is it just the placebo effect?

This question generates heated debate, and the answer is more complex than either “they definitely work” or “they’re just placebos.” Clinical trials consistently show that antidepressants, on average, outperform placebo for moderate to severe depression—statistically significant differences appear in randomized controlled trials. However, several findings complicate the picture: the placebo response in depression is substantial (30-40% of placebo group participants improve), the difference between medication and placebo is often relatively small for mild to moderate depression, and for severe depression the difference is larger and more clinically meaningful. Meta-analyses suggest antidepressants are modestly effective overall, with some working better for some people than others, and significant individual variation in response. Some patients experience dramatic improvement with antidepressants, others experience modest benefits, and some experience no benefit or intolerable side effects. The “number needed to treat” (how many people need to take the medication for one person to benefit who wouldn’t have benefited from placebo) ranges from about 4 to 8 depending on severity, meaning antidepressants help many but far from all patients. Factors complicating interpretation include: publication bias favoring positive studies; the fact that many trials are funded by pharmaceutical companies with conflicts of interest; difficulty maintaining blinding when medications have noticeable side effects that reveal they’re not placebo; and the challenge of measuring subjective improvement in mood. Most evidence suggests antidepressants have real effects beyond placebo for many patients, particularly those with severe depression, but the effects are often more modest than early enthusiasm suggested. This doesn’t mean they’re useless—modest benefits matter when suffering is severe—but it suggests they’re not panaceas and shouldn’t be the sole or automatic treatment. Psychotherapy, particularly cognitive-behavioral therapy, shows comparable effectiveness to medication for many patients, and combining medication with therapy often produces better outcomes than either alone. The most reasonable conclusion is that antidepressants are legitimate treatment tools with real effects for many patients, but they’re not universally effective, they work better for severe than mild depression, and they work best as part of comprehensive treatment including psychotherapy and lifestyle modifications rather than as standalone interventions. The placebo effect is real and substantial in depression, but antidepressants add benefit beyond placebo for many patients, particularly those with more severe symptoms.
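To make that arithmetic concrete (using hypothetical round numbers rather than figures from any specific trial): the number needed to treat is calculated as NNT = 1 / (drug response rate − placebo response rate). If, say, 50% of patients respond to an antidepressant and 36% respond to placebo, the absolute difference is 0.14, so NNT = 1/0.14 ≈ 7, meaning roughly one additional responder for every seven patients treated, consistent with the 4 to 8 range cited above.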

Why are mental health diagnoses so controversial and subjective?

Mental health diagnoses face unique challenges that make them inherently more controversial and seemingly subjective than many medical diagnoses, though the degree of subjectivity is often overstated. The challenges include: absence of objective biomarkers for most conditions (no blood test or scan to definitively confirm diagnosis); reliance on symptom reports that depend on patient insight and communication; conditions defined by symptom clusters that may have heterogeneous underlying causes; symptoms existing on continua with normal experience (how much sadness becomes depression? when does worry become anxiety disorder?); significant comorbidity with multiple diagnoses often applicable; and cultural variation in how symptoms are expressed and understood. These factors mean diagnosis requires clinical judgment rather than mechanical application of tests, creating opportunity for subjectivity, bias, and disagreement. The DSM attempts to standardize diagnosis through specific criteria, but applying criteria still requires judgment—is someone’s sleep “significantly” disturbed? Do symptoms cause “clinically significant” impairment? Different clinicians might reasonably disagree. However, several points provide perspective: First, many medical diagnoses also involve substantial subjectivity—when does heartburn become GERD? When is someone’s blood pressure high enough to treat? Medicine generally involves judgment, not just objective measurement. Second, diagnostic reliability studies show that trained clinicians using standardized criteria agree on major diagnoses (like schizophrenia, bipolar disorder, major depression) at rates comparable to reliability in other medical fields. Third, even if diagnosis is somewhat subjective, that doesn’t mean the suffering isn’t real—people experience genuine distress and impairment regardless of diagnostic debates. Fourth, diagnosis in psychiatry serves the pragmatic function of guiding treatment selection and facilitating communication among clinicians, not necessarily reflecting discovered natural categories in nature. The controversy partly reflects psychiatry’s unique challenges and partly reflects different philosophical views about what diagnosis should accomplish. Some envision diagnosis as identifying discrete disease entities with specific causes, which psychiatry rarely achieves. Others see diagnosis as pragmatic classification schemes for guiding treatment of people who are suffering, which psychiatry does reasonably well. The subjectivity in psychiatric diagnosis is real but often exaggerated, and it’s better understood as reflecting the complexity of mental phenomena rather than fundamental invalidity of the diagnostic enterprise.

Will psychiatry eventually be replaced by neuroscience and brain scans?

This question reflects hopes that psychiatry will eventually achieve the same kind of objective, biologically-based diagnostic clarity as fields like radiology or laboratory medicine—that we’ll scan brains, identify specific abnormalities, and diagnose mental illness definitively. While neuroscience is advancing rapidly and increasingly informing psychiatric understanding, the complete replacement of clinical psychiatry with neuroscience-based diagnosis seems unlikely in the foreseeable future for several reasons. First, mental illnesses appear to involve complex interactions among multiple brain systems, genes, environmental factors, and developmental influences rather than simple, discrete brain abnormalities visible on scans. Brain differences associated with psychiatric conditions tend to be subtle, statistical patterns across groups rather than distinctive individual markers. Second, the relationship between brain and mind—how neural activity gives rise to subjective experience—remains one of science’s greatest unsolved problems, meaning even perfect brain maps might not fully explain mental phenomena. Third, many psychiatric symptoms are subjective experiences (mood, thoughts, perceptions) that may not have simple one-to-one neural correlates identifiable through imaging. Fourth, environmental and social factors powerfully influence mental health in ways that brain scans wouldn’t capture—trauma, poverty, discrimination, and social support all affect mental health through both biological and non-biological pathways. That said, neuroscience is increasingly valuable for psychiatry—helping identify treatment targets, developing new medications, understanding disease mechanisms, and potentially identifying biomarkers for treatment selection even if not for initial diagnosis. The future likely involves integration rather than replacement: neuroscience increasingly informing psychiatric understanding and treatment while clinical assessment, psychotherapy, and consideration of psychological and social factors remain essential. Psychiatry might become more biologically informed without becoming purely biological. Just as modern cardiology uses both high-tech imaging and basic physical examination, future psychiatry will likely combine neuroscience tools with clinical judgment and understanding of the whole person in their life context. The goal shouldn’t be replacing the human elements of psychiatric practice with technology but rather equipping psychiatrists with better tools while maintaining the humanistic, person-centered approach that’s essential when treating conditions affecting consciousness, identity, and human experience.
