Joshua E. Lewis & Publication Slop
Before I say anything else, I want everyone to know that Joshua E. Lewis is a terrible human being who (allegedly):
- Got dropped from his med school for cheating on his exams with a burner phone he left in the restroom and whose ringer he forgot to turn off because he's (allegedly) really stupid.
- Misused travel funds meant for medical conferences to travel to vacation hotspots like New York City, Las Vegas, and others.
- Recruited gullible first-year students, who wanted in on his prolific reputation, to mass-publish slop articles with additional help from generative AI.
- Told his girlfriend he was checking himself into a mental hospital when he was actually just working out at the gym, and also cheated on her through a Grindr account.
- Edited screenshots of text conversations with another student whom he reported for harassment.
- Lied about both of his parents being plastic surgeons to garner influence on Instagram and in real life.
But this isn't about Joshua E. Lewis. This is about every author of the paper "Examining gender-specific mental health risks after gender-affirming surgery: a national database study" (J Sex Med, 2025-02-25), and about the larger gunner culture in medical education. In case you didn't know, publications look good on your CV, so some medical students will churn out studies by scraping data from medical databases like TriNetX (sometimes with outlines generated by AI). It's low-effort slop whose usefulness only occasionally coincides with the author's desire to get their numbers up. That's bad enough, but it's worse when the slop (even inadvertently) serves an anti-social agenda.
The article purports: "From 107 583 patients, matched cohorts demonstrated that those undergoing [gender-affirmation] surgery were at significantly higher risk for depression, anxiety, suicidal ideation, and substance use disorders than those without surgery" (p. 1). You might be thinking: are we comparing rates of depression or anxiety before and after surgery? Or, at least, the rates of post-surgery depression or anxiety over time? Nope. Simply this: patients who underwent surgery are more likely to have diagnostic codes (boolean flags) representing depressive or self-destructive symptoms within 2 years. More than being a case of mistaking correlation for causation, this is a case of "No shit!" The most severe procedure in trans healthcare self-selects for the most depressed and anxious patients? How likely is the Pope to attend church, in the time since he became Pope, compared to non-Popes? Not sure tbh...
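If you want the shape of the problem in code, here's a toy pandas sketch of what a boolean-flag outcome like this looks like. To be clear: this is not the paper's actual TriNetX query, and every column name, code, and number below is something I made up for illustration.

```python
import pandas as pd

# Toy reconstruction of the outcome as I understand it. NOT the paper's
# actual query; all columns, codes, and values are invented for illustration.
records = pd.DataFrame({
    "patient_id":      [1, 1, 2, 3, 3],
    "had_surgery":     [True, True, True, False, False],
    "icd10":           ["F32", "Z87", "F41", "F32", "F32"],  # F32/F33 depression, F41 anxiety
    "days_from_index": [-400, 200, 300, -100, 500],          # days relative to surgery / match date
})

MENTAL_HEALTH_CODES = ("F32", "F33", "F41")

# What the paper appears to measure: one boolean flag per patient for
# "any qualifying code in the 2 years after the index date"...
flag = (records
        .assign(hit=records["icd10"].isin(MENTAL_HEALTH_CODES)
                    & records["days_from_index"].between(0, 730))
        .groupby(["patient_id", "had_surgery"])["hit"]
        .any()
        .reset_index())

# ...then compare how often that flag is true in each cohort.
print(flag.groupby("had_surgery")["hit"].mean())

# What this never measures: whether any individual patient got better or
# worse relative to their own pre-surgery baseline. A between-cohort flag
# comparison can't separate "surgery worsens mental health" from "the most
# distressed patients are the ones who end up having surgery".
```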
That's to say nothing of the article's other data-structural issues: despite also purporting to analyze these symptoms along gendered/sexed lines, it admits that the sex records of trans patients are all over the place and can't reliably tell you an individual patient's current sex (or 'gender identity'):
We classified patients using the gender documented in the EMRs within the TriNetX database, recognizing that this documentation may reflect either natal sex or gender identity, depending on how it was recorded. To minimize potential misclassification, we identified transgender individuals using the ICD-10 code F64 (gender dysphoria) and categorized them into six cohorts.
- Cohort A: Patients documented as male (which may indicate natal sex or affirmed gender identity), aged ≥18 years, with a prior diagnosis of gender dysphoria, who had undergone gender-affirming surgery.
- Cohort B: Male patients with the same diagnosis but without surgery.
- Cohort C: Patients documented as female, aged ≥18 years, with a prior diagnosis of gender dysphoria, who had undergone gender-affirming surgery.
- Cohort D: Female patients with the same diagnosis but without surgery.
- Cohort E: Transgender male patients who underwent masculinizing gender-affirming surgery regardless of a previous documented diagnosis of gender dysphoria.
- Cohort F: Transgender female patients who underwent feminizing gender-affirming surgery regardless of a previous documented diagnosis of gender dysphoria.
Cohorts E and F include transgender patients who underwent gender-affirming surgery but lacked a documented diagnosis of gender dysphoria, unlike Cohorts A and C, which specifically require this diagnosis for inclusion. This distinction allows for the evaluation of mental health outcomes in a broader transgender population, encompassing individuals who sought surgery without meeting the formal diagnostic criteria for gender dysphoria.
Lewis et al., p. 2.
One significant limitation is the binary classification of gender within the TriNetX database, which only records patients as “male” or “female” in its demographic data. This excludes non-binary individuals and others who do not align with binary gender categories, limiting the inclusivity and representativeness of the study. Furthermore, the database does not include explicit information on sex assigned at birth, legal gender changes, or affirmed gender identities, which prevents more nuanced subgroup [subgroup!?] analyses. This limitation underscores the importance of developing future data systems that allow for broader gender identity categories to support more inclusive research.
Lewis et al., p. 5.
Maybe that's not important. I wouldn't know. I do have to wonder, though, whether they couldn't see (on page 2) if individuals from cohorts A & C had received vaginoplasty or phalloplasty, since that procedure data was evidently available to define cohorts E & F. That wouldn't necessarily make it easier to distinguish cohorts B & D, but at least it'd be something? Genuine question. An anonymous medical student also had this to say about the article's methodology, with respect to querying patients by medical procedure:
i think my main gripe is that this kind of looking solely at procedure codes does not properly trace whether that procedure was for gender affirming surgery or not. like 19304 is "subcutaneous mastectomy", which we use for everything from gynecomastia to prophylaxis (prevention) against breast cancer to cosmetic purposes to removing a lump. so simply saying "this person was diagnosed with gender dysphoria at age 15, and oh they had a subq mastectomy at age 35, that has to be for GAC [and not for some totally unrelated reason]"
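To put their point in the bluntest possible terms, here's a toy table (again made up by me, not real TriNetX data, with a hypothetical "indication" column that the real database doesn't hand you) showing why filtering on a procedure code alone can't tell you what the procedure was for:

```python
import pandas as pd

# Hypothetical procedures table for illustration only. CPT 19304
# ("mastectomy, subcutaneous") shows up for several unrelated indications.
procedures = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "cpt":        ["19304", "19304", "19304", "19304"],
    "indication": ["gender-affirming care", "gynecomastia",
                   "breast cancer prophylaxis", "benign lump excision"],
})

# Selecting on the procedure code alone sweeps up all four patients...
gac_guess = procedures[procedures["cpt"] == "19304"]
print(len(gac_guess))  # 4

# ...even though only one of them had the procedure as gender-affirming care.
# Without tying the code to an indication, the query can't tell these apart.
print((procedures["indication"] == "gender-affirming care").sum())  # 1
```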
The conclusion puts a little cherry on top: "Despite the benefits of surgery in alleviating gender dysphoria, our findings underscore the necessity for ongoing mental health support for transgender individuals during their post-surgery trajectories" (p. 6). You can't tell me that doesn't come across as virtue-signaling, i.e., gesturing towards gender-affirming surgery as an abstract 'good', while the same paper purportedly demonstrates that the procedure worsens patients' mental health. That's a contradiction in terms.
Anyway, as covered by Erin Reed, the far-right has latched onto this article as apparent evidence that trans healthcare is fraudulent: specifically, that it results in worsened mental health outcomes for patients diagnosed with gender dysphoria. This is in spite of existing publications in recent years indicating ~1% rates of dissatisfaction or regret for both gender-affirmation surgery and hormone replacement therapy. It's also entered public discourse days after a bill was introduced in the Texas legislature to effectively ban trans healthcare for all patients, including adults. Did you know that this paper was also published in Texas? Great stuff! It's looking great here.
This article is supremely irresponsible. It's incomplete data, poorly analyzed under a faulty methodology, published in an extremely hostile political environment eager to disenfranchise trans people, all for the sake of getting one's name on another publication. Fuck you, Joshua E. Lewis, and all of you. Your article has a real chance to make my life, and the lives of other trans people, much more difficult. Reflect on that.
By the way, that paper was published under Baylor College of Medicine, but that's not the school in which Joshua E. Lewis was enrolled before he got dropped for cheating. I'll be nice and not say which.
Shout-out to my trans educational page because advocacy firms aren't worth shit.