Breaking News: Psychoanalysis Is Falsifiable!

Exaggerating a little.

This article came to me twice: on Twitter, where it was posted by @pourfairelevide, and again when my friend John B. of The Retired Adventurer shared it with me directly. "Phallocentricity in GPT-J's bizarre stratified ontology" by Matthew Watkins describes an experiment where the author generated a semiotic map of definitions used by GPT-J and found that the meaning most likely to lead to any other meaning (as it were, the most specific definition relative to which the others can be defined) is "a man's penis".

I recently posted an article about Freud being 'literally correct', i.e., about the Oedipus Complex being the most accurate model we have of heterosexual socialization. There I mention Lacan's abstraction of Freud, that unconscious desires are not necessarily structured around the literal male penis (of the father) but around the phallus (to be specific: the symbolic phallus of the desiring-subject which compensates for the lack imposed by the father's real phallus). Lacan's aim was to generalize Freud and arrive at a structural, linguistic theory of desire of which Freud saw traces but only in its superficial, overtly sexual content.

What's crazy about Watkins's findings is that they re-unify the 'literal' psychosexual theories of Freud with the structural theories of Lacan (which, as such, did not necessarily need re-unification, except that Lacan is now often read in entirely symbolic rather than sexual terms). Not only can we arrive at a "most specific signifier" from which all other signifiers are derived, something only now made possible by computationally complex language models, but that signifier is none other than "a man's penis". Language consists of cascading metaphors, but at the center is strictly and definitionally not-one (except, as Lacan suggests, a pre-linguistic void). That's not just a phallus. That's a penis.

What implications does this have? First, that a central claim of psychoanalysis is now hypothetically falsifiable, and, if someone were to repeat the experiment with a bigger model than GPT-J (which is much smaller than, say, GPT-3), that falsifiable hypothesis could start to be treated as a scientific theory. We can argue about whether the scientific method is the sole determiner of truth, which it isn't, but now psychoanalysis could enter scientific discourse proper and be put to the test.

This also vindicates feminist psychoanalysts like Luce Irigaray, who have made strong claims about the innate relationship between language and phallocentrism (in contrast to other feminist psychoanalysts, like Judith Butler, who have tried to carve out spaces for women and queer people within language, and about whom Irigaray said, "[They're] the best product of masculine society"). Turns out, obviously if you have half a brain, language itself has a male bias which objectifies women in its most basic structures of meaning. Who's the crackpot now?! How exciting is this!!

Importantly, I don't think this "proves psychoanalysis" as much as it makes it maybe a little falsifiable. More experiments please!

Comments

  1. I'm too psycho for psychoanalysis 😈

  2. Hi Marcia, I love your blog but I've never commented before. However, this happens to be close to my area of expertise (pure mathematician in geometric topology, now thinking about machine learning) so wanted to chime in! Not to be a downer on my first comment here, but I wanted to say I think it's important to take this series of articles with a HEAVY dose of salt. I think most of Watkins's semantic void series reads as kind of cherry-picked interpretations of general, purely mathematical phenomena.

    Essentially, in the first semantic void articles, Watkins tries to understand some of the geometry of the embedding space; but most of what he finds (that the central mass of tokens lies in the intersection of two thickened hyperspheres) follows immediately from some fairly routine properties of random distributions in high dimensions (i.e. nothing specific to GPT or LLM embedding spaces). It's (exaggerating a bit) like trying to intuit facts about natural language from the mathematical fact 2+2=4. The perturbation of tokens at different distances (leading to the observed "stratified structure") also seems like just a relic of the construction of any language model embedding space in terms of angle/distance as similarity/commonality (roughly). It doesn't really seem like something special about language; it feels more like the model just doesn't know what to do when you feed it things that are too distant from its dictionary.
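
    If it helps to see this concretely, here's a toy numpy sketch (standard Gaussian points with arbitrary sizes, not real GPT-J embeddings, so purely an illustration of the general math):

      import numpy as np

      # i.i.d. Gaussian points in high dimensions concentrate on a thin
      # shell (a "thickened hypersphere") around the origin, and around
      # their own centroid; nothing language-specific is required.
      rng = np.random.default_rng(0)
      d, n = 4096, 10000  # 4096 happens to match GPT-J's embedding width
      points = rng.normal(size=(n, d))

      norms = np.linalg.norm(points, axis=1)
      print(norms.mean(), norms.std())  # ~64.0 +/- 0.7, i.e. sqrt(4096) = 64

      centroid = points.mean(axis=0)
      dists = np.linalg.norm(points - centroid, axis=1)
      print(dists.mean(), dists.std())  # essentially the same thin shell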


    1. Then in this article, Watkins wants to understand the centroid itself, but there are some interpretation issues:
      - I don't think there's a good reason to believe the centroid translates to some type of "semantic average." If the vectors for, say, "broccoli" and "sofa" cancelled in the sum, how do we interpret that as semantic "averaging"? (See the toy sketch after this list.)
      - You'll get all sorts of wonky definitions for things far away from the mass of the embedding space. If you did this with something far outside the shell, you should get similar behavior. So the centroid feels like a red herring.
      - The definition path "a man's penis" is pretty cherry-picked. The definition tree for the centroid has a TON of paths, but what Watkins does is look at all the paths and say "this one looks most specific!" And if you examine the weights on the path, the weighting looks small, which feels like it defeats most of the purpose: it's fairly unlikely, relatively speaking.
      - Even beyond that, it's hard to interpret these types of abstract observations as something useful: there are a ton of different reasons why "a man's penis" could be close to the centroid. These range from computational explanations (the way the language model is evaluated) to semantic explanations (penis is often replaced with vague words like "thing"; how many words will fit into "a man's ___"?) to coincidence (particularities of the methods). Hoping that it tells us something about "primordial" words seems a bit confirmation-bias-y.
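
      To make the first bullet concrete, here's a toy numpy sketch (synthetic random vectors standing in for token embeddings, so only an illustration): the centroid of a big high-dimensional cloud is nearly orthogonal to every individual vector, so reading "the nearest meaning to the centroid" is mostly reading noise.

        import numpy as np

        rng = np.random.default_rng(1)
        vecs = rng.normal(size=(5000, 4096))  # stand-ins for token embeddings
        c = vecs.mean(axis=0)                 # the "centroid"

        # cosine similarity between the centroid and each individual vector
        cos = vecs @ c / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(c))
        print(np.abs(cos).mean(), np.abs(cos).max())
        # both near zero: opposing directions cancel in the sum, so the
        # centroid isn't "close" to anything in particular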

      Also, regarding your post, even if you take all Watkins's interpretations at face value, I don't think "the meaning most likely to lead to any other meaning" is a good interpretation of "a man's penis" here. It's not leading to other meanings, really; it's just defining the centroid. It's also not the most common definition, but just the most specific among the common-enough ones. It would be more like "the most specific four-token phrase among the common-enough four-token phrases that begin the many definitions of the average word." But even that interpretation leaves out how we defined "common-enough" and "most specific," on top of the issues more prevalent in Watkins's interpretation. Sorry for the long comment, or if this reads as pedantic, but I thought maybe I could be helpful!

    2. I have not followed up on all of this yet, and I am not as good with the underlying stats as you appear to be (and definitely rusty compared to my best anyway), but @braeden, intuitively what you're saying makes sense to me.

      I'm a machine learning engineer, emphasis on the engineer lol, but I do have a background in cognitive neuroscience so I used to know more about the underlying stats, but probably not as much as you.

      Topology is super interesting! But again a bit outside my area of expertise.

      Anyway, that's a long-winded way of saying: if you're interested in transitioning to industry as an MLE, I'd be happy to talk with you about how to get into industry, what it's like, etc., if that would be helpful.

      I think you can trace this comment back to my blog and/or other ways to reach out to me, or if not, I can give you my contact info directly via some other means.

    3. Hey @maxcan7, that would actually be awesome! I'd genuinely really appreciate that. I found your blog, which is awesome btw! But could not actually find a good way to contact you directly. My discord is awex1490 (you can also find me on the NSR server) and my email is: b r a e d e n a r (at) gmail (delete the spaces). I'd love to chat more about MLE stuff whenever.

    4. np I'll shoot you a message on discord :).

    5. hey braeden and max, thank y'all for your clarifying + helpful comments! in that case, i would want the original author to clarify his methodology because the selections feel very arbitrary.

  3. Hi Braeden, You've made some very worthy points, thanks for adding clarification. I didn't expect to get so much interest from Lacanian scholars, Brazilian psychoanalysts and sniggering Redditors (with little/no background in LLMs) when I posted that piece, so I was a bit taken aback by some of the claims and counterclaims. I'm only superficially familiar with Lacan (and Freud) so I can't really add much on that front.

    You're right that the first Semantic Void post overlooked something about random distributions in high-dimensional spaces, but that really only affected my introductory attempts to give some sense of where the token embeddings lie in the 4096-d space. The pattern of definitions which layer around the centroid is there regardless.

    I did some experiments sampling random points at very large distances from the centroid and they produce similar definition trees to the centroid one presented in the Phallocentricity post, but without "a man's penis" showing up. Likewise, you can (without having to customise any token embeddings) just prompt with

    A typical definition of "" would be "

    and you get the same kind of bland/vague/general definition tree, but no genitalia or sexual themes. So it does seem that there's something going on with the centroid.
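
    For anyone who wants to poke at that uncustomised baseline, here's a simplified sketch using Hugging Face transformers (greedy decoding of a single path, rather than the full tree expansion with cumulative probability cutoffs; note GPT-J wants ~24GB of memory in float32, and "gpt2" makes a lightweight stand-in for getting the flavour of it):

      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "EleutherAI/gpt-j-6B"  # or "gpt2" as a small stand-in
      tok = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      prompt = 'A typical definition of "" would be "'
      ids = tok(prompt, return_tensors="pt").input_ids
      out = model.generate(ids, max_new_tokens=12, do_sample=False)  # greedy
      print(tok.decode(out[0][ids.shape[1]:]))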

    And yes, the probability for the "a man's penis" node is 0.000025577157608834145 (that's the cumulative probability, so a product of four probabilities for the tokens "a", " man", "'s" and " penis"; it's roughly 0.071^4, to give some sense of the component probabilities), but the point is that this is orders of magnitude larger than the next "non-general" definition. Every possible combination of words *could* theoretically be output for the prompt if you keep lowering the cutoff for the cumulative probability, but all but the tiniest fraction have astronomically small probabilities of occurrence.
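
    The cumulative probability itself is just a product of conditional token probabilities, and the arithmetic can be reproduced along these lines (a sketch reusing model and tok from above; bear in mind that my actual prompts embed the centroid as a customised token, which a plain string prompt can't capture):

      import math
      import torch
      import torch.nn.functional as F

      def continuation_prob(prompt, continuation):
          full = tok(prompt + continuation, return_tensors="pt").input_ids
          n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
          with torch.no_grad():
              log_probs = F.log_softmax(model(full).logits, dim=-1)
          # sum the log-probability of each continuation token given its prefix
          logp = sum(log_probs[0, i - 1, full[0, i]].item()
                     for i in range(n_prompt, full.shape[1]))
          return math.exp(logp)

      # continuation_prob('A typical definition of "" would be "', "a man's penis")
      # for scale: 0.071 ** 4 is about 2.54e-05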

    You might be interested to take a look at the data I've since compiled from prompts of the form
    A typical definition of "" would be "a man
    A typical definition of "" would be "a woman
    A typical definition of "" would be "a man who wants
    A typical definition of "" would be "a woman who wants
    A typical definition of "" would be "a man who is able to
    A typical definition of "" would be "a woman who is able to
    ...
    (and the equivalents where the centroid isn't used, just the null string "")
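
    Collecting those comparisons is just a loop over the stems (a sketch, with the same simplifications as above):

      for stem in ['a man', 'a woman', 'a man who wants', 'a woman who wants']:
          prompt = f'A typical definition of "" would be "{stem}'
          ids = tok(prompt, return_tensors="pt").input_ids
          out = model.generate(ids, max_new_tokens=10, do_sample=False)
          print(stem, "->", tok.decode(out[0][ids.shape[1]:]))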

    The asymmetry is really fascinating. The basic takeaway is "man = subject; woman = object".

    1. Oops, forgot the link. Here it is: https://docs.google.com/document/d/19H7GHtahvKAF9J862xPbL5iwmGJoIlAhoUM1qj_9l3o

