Beginning Astrophotography: Milky Way on 14 July 2018

Milky Way core, photographed at 22:58 on the night of 14 July 18 with my Sony α6300 using a Zeiss Touit 32mm lens stopped to 𝑓/1.8 and exposed for 8 seconds at 3200 ISO.

On the night of the 14th, I got to take my camera out to a friend’s farm—the same one I visited last year—and try more photos of the Milky Way. None of them came out particularly special, but I thought I’d share a few here in one place.

My favorite shot of the evening might’ve been taken while I was waiting for dusk, watching the last rays of the sun over the countryside.

Sunset seen over the Oregon farmland, photographed at 20:37 on the evening of 14 July 18 with my Sony α6300 using a Zeiss Touit 32mm lens stopped to 𝑓/8 and exposed for 1/160 seconds at 400 ISO.

I ended up using my Zeiss Touit lens more than usual this time. It has considerable aberrations and some vignetting, as I’ve pointed out in the past, but its longer focal length let me frame the core of the Milky Way more tightly. It’s a 32mm lens, meaning that on my camera’s APS-C sensor, it is the equivalent of a 48mm lens on a full frame sensor. It’s ideal for things like portraiture, not really for landscapes or astrophotography, but I wanted to give it a try.

I took several photos dead into the Milky Way core with it. I haven’t yet reached the point where I’m taking longer exposures to combine them for more detail. I’ve been instead experimenting with seeing how much detail I can get from individual photos using different settings.
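
One constraint I’m juggling with those settings is how long an exposure can run before the stars visibly trail. Here’s a rough sketch of the usual arithmetic (the folkloric “500 rule”; the divisor is a matter of taste, not physics, so treat the results as ballpark figures):

```python
# Rough "500 rule": the longest exposure before stars begin to trail.
# The divisor is a rule of thumb, not physics, so these numbers are
# ballpark figures only.

CROP_FACTOR = 1.5  # Sony APS-C sensors like the α6300's


def max_untrailed_seconds(focal_length_mm: float, rule: float = 500.0) -> float:
    """Longest exposure, in seconds, before stars start to streak."""
    return rule / (focal_length_mm * CROP_FACTOR)


for lens, focal_mm in [("Zeiss Touit", 32), ("Rokinon", 12)]:
    print(f"{lens} {focal_mm} mm: ~{max_untrailed_seconds(focal_mm):.0f} s")

# Zeiss Touit 32 mm: ~10 s
# Rokinon 12 mm: ~28 s
```

That’s roughly why my Touit exposures stay in the 8-to-13-second range, while the wider Rokinon, later in the evening, could run 15 or 20.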

The photo I pushed the most used an ISO of 3200.

Milky Way core, photographed at 22:55 on the night of 14 July 18 with my Sony α6300 using a Zeiss Touit 32mm lens stopped to 𝑓/1.8 and exposed for 8 seconds at 3200 ISO.

A lot of the brightness comes from aggressive processing after the fact, though. With another photo from the set, taken with identical settings and nearly identical framing, I used more subdued processing.

Milky Way core, photographed at 22:58 on the night of 14 July 18 with my Sony α6300 using a Zeiss Touit 32mm lens stopped to 𝑓/1.8 and exposed for 8 seconds at 3200 ISO.

I also turned the camera up to the zenith to catch Vega, Lyra, some of Cygnus, and a bit of the North America Nebula.

Zenith, including the constellation Lyra and the North America Nebula, photographed at 23:19 on the night of 14 July 18 with my Sony α6300 using a Zeiss Touit 32mm lens stopped to 𝑓/1.8 and exposed for 13 seconds at 1600 ISO.

By the time I got out the lens I normally use for night-sky wide-field photos, the Rokinon, a few clouds had drifted into view and begun to spoil the shots in the direction of the core. So I got nothing as wonderful as last year, but still some nice, expansive shots. My friend suggested a portrait orientation, and I definitely got the most out of that.

Milky Way core partially obscured by foreground clouds, photographed at 00:20 on the morning of 15 July 18 with my Sony α6300 using a Rokinon 12mm lens stopped to 𝑓/2.2 and exposed for 20 seconds at 2500 ISO.

I took photos facing both toward and away from the center of the galaxy, though the latter required some additional processing to reduce the distorted colors from light pollution. There’s a glimpse of the Andromeda Galaxy as a small blur in the lower right, but not much definition—I’d need a zoom lens and many exposures to get more.

View looking toward trailing end of Milky Way, with Andromeda Galaxy and Cassiopeia, photographed at 00:25 on the morning of 15 July 18 with my Sony α6300 using a Rokinon 12mm lens stopped to 𝑓/2.2 and exposed for 15 seconds at 3200 ISO.

Pandora’s Checkbox

The Information Age brought with it a cliché—the unread agreement you dismiss to get to the software you need to use. There’s no way you’re going to read it. For example, macOS High Sierra comes with a software license agreement totaling 535 pages in PDF form, which contains (by my count) 280,599 words of intensely detailed yet maddeningly vague legal language. On that operating system, Apple Music has another license, and the App Store has yet another, and so on.

Making a fully informed decision would take thousands of dollars in consulting fees with a lawyer; the alternative is to proceed regardless. So you proceed. You always have. Each little app, website, or gizmo peppers you with a new set of terms and conditions. Each upgrade gets a few extra clauses thrown in, and you agree again.

You’re not a fool. You assume you’re signing away rights and control you’d rather keep. It comes with the bargain. You try to skim the terms and conditions, and the deal feels a bit more Faustian all the time—mandatory binding arbitration, data collection, disclaimers of liability, and so on.

None of this is really news to you if you’ve dug into it. You’re not really in possession of your software; you’ve merely licensed the use of it. You can’t really hold them responsible for flaws; you agreed to accept the software as is. You can’t really control what information they collect about you; you hand that over and get a free or discounted product in return.

However, where things get slippery is that a company with whom you’ve entered into a transaction has also signed agreements with yet other companies. Worked into those overwrought terms and conditions you clicked through, with their vague-yet-precise language, are ways of ensuring that you’ve already agreed to these subsequent proxy agreements as well.

What the T&C often allow is for your data to commingle at some broker whose name you’ve never heard of. A common situation in which this happens is any transaction in which some entity is responsible for handling money.

Say that you learn about a subscription service called Company A. You find them in your web browser or your mobile app, and you sign up, agreeing to their T&C. Then you ask to subscribe to a new e-mail about scarves every day, or whatever Company A does. They in turn ask for your credit card info, your billing address, and maybe a few other demographic details about you.

Company A turns to Company B to determine how risky you are. To do this, they ship off some information about you. If you used a mobile app, they’re possibly reading off what Wi-Fi networks are nearby, what Bluetooth devices are nearby, what apps are installed on your phone, what IP addresses you’re using, what fonts you have installed, and a wealth of other information. If you’ve used a browser, the information is similar but more limited. You’re being geographically located in either case. The headers from your browser are sent. The last website you were at before visiting Company A is probably sent.

Company B collects this information and compares it to all the other data it has on millions of other requests it’s collected from other companies. It has no real duty to sequester Company A’s data from Company Z’s (neither of which knows anything about the other), and by putting it all together, it can detect patterns better. For example, it may be able to tell where you are, even if you are behind a proxy. It may be able to track your traffic across the Internet as you move from Company A to Company Z and so on—because the details it gets are usually enough to uniquely identify you. It needs no cookies or other storage on your end for this.
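
To make the mechanism concrete, here’s a minimal sketch of how a handful of signals can be boiled down into one stable identifier. The field names are invented for illustration; real brokers collect far more than this, and none of it comes from any actual product:

```python
import hashlib
import json

# Hypothetical signals of the kind described above. These field names
# are made up for illustration only.
signals = {
    "ip": "203.0.113.42",
    "user_agent": "Mozilla/5.0 (Macintosh; ...)",
    "accept_language": "en-US,en;q=0.9",
    "timezone": "America/Los_Angeles",
    "fonts": ["Helvetica Neue", "Garamond", "Comic Sans MS"],
    "referrer": "https://scarf-facts.example/signup",
}

# Boil the whole bundle down to one stable token. A handful of
# attributes like these, combined, is often enough to tell one visitor
# apart from millions of others, no cookie required.
fingerprint = hashlib.sha256(
    json.dumps(signals, sort_keys=True).encode("utf-8")
).hexdigest()

print(fingerprint[:16])  # a short token a broker could index on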

This means that Company B has the role of an invisible data broker whose job it is to assess fraud risk on behalf of companies. The more clients it has feeding it data, the stronger its signals become, so Company B is incentivized to gather as many sources of data as possible, and it wants those data to be as rich and as frequently updated as possible.

Company A gets back something like a score from Company B indicating how much risk you pose—whether you’re likely to try to scam them out of free services (or whether you’re even human). Assuming you’re fine, Company A sends your info off to Company C, a credit card processor, which is actually responsible for charging you money and handing it over to Company A.

Company C is collecting data as well because they stand the greatest risk during this transaction. They collect data themselves, and they’re almost certainly using a data broker of some kind as well—either Company B or more likely something else, a Company D.

These interactions happen quite quickly and, usually, smoothly. In a few seconds, enough info about you to identify your browsing patterns and correlate you with your purchase of Scarf Facts has now been aggregated by one or two data brokers.

These brokers sell their services to companies hoping to prevent fraud, and they make money because they are able to draw from ever larger sources of traffic and gain a clearer picture of the Internet. You agreed to this, but I doubt it was clear to you that entities other than you and Company A were involved.

If you’re wondering whether this is really happening: this sort of collection has become increasingly common as businesses compete with one another by reducing friction in their sign-up processes. Simple CAPTCHAs have not been enough to hold back the tide of automated and human attempts to defraud the businesses, large and small, that sell services and goods online, and those businesses have turned to data-based solutions to fight back. We can’t wind back the clock to a simpler time.

Unfortunately, most people are uninvolved and have become bycatch in the vast nets we’ve spun. It is likely, as time goes on, that the brokers who collect and analyze the data collected this way will try to sell them, or analyses of them, to profit in other ways. The value of these data increases as they become more representative of the traffic of the Internet as a whole.

I’m not asking you to stop and read the T&C on the next website you sign up for. That’s never going to be practical. But now you know about another piece of your soul you’re possibly chipping off in return for clicking “Accept.”

A Taxonomy of Disagreements

I share my world with people with whom I disagree. The question is how and when to act upon it.

Not every disagreement deserves the same reaction. It’s not strictly necessary that I find common ground in every disagreement, and not every disagreement requires my engagement. Even among the cross product of these categories, I can respond in different ways.

I view disagreements along two axes which I’ll call triviality and consensus. By triviality I mean that the subject matter has little impact on at least one party’s life. Consensus means that agreement must be reached; this is not an agree-to-disagree situation.

I’ll lay out what each combination means.

  • Trivial, non-consensus disagreements—disagreements about an unimportant subject which doesn’t strongly impact all parties, or does so unequally. Food preferences are a perfect example. If one person likes mayo, another likes Miracle Whip, and yet another thinks they’re both kind of unpleasant, this is a trivial disagreement. It’s also mostly inconsequential, because nobody has to change their life much over it. Live and let live.
  • Trivial, consensus disagreements—disagreements about an unimportant subject which impacts all parties and for which a single decision needs to be made. This is common in families and offices, like setting the thermostat or choosing where to go for dinner. Contention over shared resources, or picking common tools or workflows at work, can lead to a lot of nitpicking, but the problem is solvable, sometimes even with a coin-toss.
  • Nontrivial, non-consensus disagreements—disagreements about a subject which impacts all parties strongly but for which consensus is not needed, or is even impossible. The most salient example is any question of faith. Faith doesn’t respond to reason and occupies maybe the most important part of some people’s self-identity and self-determination, but the details of faith or religion are impossible to bring into accord. It’s unrealistic to try. Yet we have to find some way to live with people of different faiths. The very intimate, personal nature of their beliefs makes them immutable—non-consensus, as I’m calling it—since we can’t all share a singular faith and probably wouldn’t want to.
  • Nontrivial, consensus disagreements—disagreements which impact all parties strongly and which require agreement. This is the really hard stuff: fundamental human rights, ethics, land-use rights, traffic laws, and so on. For these disagreements, I give no quarter to non-consensus, because I believe that questions of human rights are of paramount importance and cannot be yielded to, appeased, or ignored. To say “live and let live” or “agree to disagree” to fundamental questions of humanity, dignity, life, and death gives the viewpoints with which I disagree a place to dwell, a platform from which to speak, and an implicit permission to act. The crossover between non-consensus and consensus for nontrivial disagreements begins at the threshold of potential harm.

At either extreme of the triviality axis, the consensus degree of freedom can be a bit blurry. Taking trivial disagreements first, it’s easy to see where certain topics that should have been non-consensus have blended into consensus in people’s lives—like food preferences, which culture has buried under spades of shame and influence in order to make people eat the same things in the same ways. I work in tech, where similar things have happened for decades, such as the Editor Wars: who edits what and how on their own computer should be an agree-to-disagree situation, but it became a holy war.

Unfortunately, at the other triviality extreme, the same kinds of confusion take place. Nontrivial disagreements which should be non-consensus (which should look like agree-to-disagree) have become literal holy wars. Worse yet, disagreements about basic human dignity and rights have begun to look like agree-to-disagree situations.

I believe we all carry a similar taxonomy in our heads: the belief that we’re “entitled to our opinions” regarding certain questions of faith and politics. In some matters, we are. We’re entitled to our opinions about how much funding the Federal Highway Administration should get. Whatever my beliefs about interstate highways, I could break bread with a person who believes in gutting their funding.

However, the idea that we’re “entitled to our opinions” leads to a simplified taxonomy that doesn’t take into account which opinions—which disagreements—are over harmless questions and which are over potentially harmful, dehumanizing, or traumatizing ones.

More complicated still, matters of faith—a place within many of us untouchable by consensus or persuasion—have enabled some people to spread the non-consensus umbrella over many other areas of their worldview, seeing them all as speciously linked by faith and therefore unimpeachable. As such, their political opinions about personhood, their ethical behaviors, their votes—no matter their source, they are all placed into a category beyond rational discussion.

I have found myself exhorted to meet these people in the middle, to attempt to understand them, to “agree to disagree” with them, or to attempt to include them in wider political efforts to advance my own political will. These efforts often come from centrist-liberal sources.

What I’m here to tell you is that if your politics touches a human, if it has the potential to visit harm and suffering, if it detains a person, I have no place for you at my table, in my home, or in my life. If you use the idea of free expression to shirk the responsibility of examining your own ideas, you have abrogated your duty as a citizen under the guise of entitlement.

Truth, Light, and Statistics

Today, an article I wrote called “Truth, Light, and Statistics” got published as an online extra for The Recompiler, a local feminist hacker magazine edited by my friend Audrey.

I’m thrilled about finally getting it edited and published. I put a lot of care into it, the same way I’ve put a lot of care into improving my astrophotography over the years. The thrust of the article is to contrast reality with perception and signal with noise—to show how photo manipulation can sometimes, paradoxically, bring us a little closer to the truth rather than take us farther away.

The best part about image stacking is how the very randomness of the sky’s turbulence provides the key to seeing through its own distortions, a kind of mathematical judo. Read through if you want to find out how.
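
If you want the one-line version of the trick before you click through: noise that’s random from frame to frame averages away, while the signal underneath does not. A toy sketch of the idea (mine, not the article’s):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A made-up "true" scene: a single bright star on a dark background.
truth = np.zeros((64, 64))
truth[32, 32] = 100.0

# Each simulated frame is the truth plus fresh random noise (standing
# in for sensor noise and turbulence). The noise changes every frame;
# the signal never does.
frames = np.stack([truth + rng.normal(0.0, 10.0, truth.shape)
                   for _ in range(64)])

stacked = frames.mean(axis=0)

# Averaging N frames cuts the noise by about sqrt(N) while the signal
# stays put: roughly 10 -> 10 / sqrt(64) = 1.25 here.
print(np.std(frames[0] - truth))  # ~10
print(np.std(stacked - truth))    # ~1.25
```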

Ordinary Synesthesia

For the last fourteen years or so, I have privately described some of my sensory experiences as a phenomenon called synesthesia. I don’t talk about it much because I am not sure whether synesthesia is an accurate description. Over the years, though, I find that term still feels appropriate in many ways. Maybe it fits something you experience, too.

To talk meaningfully about what synesthesia is, I’m drawing from the first paper I read on the subject, a 1995 paper called “Synesthesia: Phenomenology and Neuropsychology” by Richard E. Cytowic. More research has appeared since then—much of it also by Cytowic—so I’ve looked some of that up as well.

What is synesthesia? It’s an inextricable linking of distinct senses, such as sight and sound. Cytowic puts it in more academic language: “[T]he stimulation of one sensory modality reliably causes a perception in one or more different senses.” It’s not symbolic or metaphorical. It’s a literal, sensory experience that happens reliably.

Importantly, though, it’s not a mental disorder. It cannot be diagnosed using the ICD or DSM. There’s no hallucinatory aspect to synesthesia, and it does not impair those affected by it.

What do I mean by “linking of distinct senses”? There are numerous forms of synesthesia, but as an example, consider the form that associates sounds with visual experiences (like color). When a person who experiences synesthesia (a synesthete) hears a sound which triggers an association, that sound is perceived both as the sound and as the visual event (such as the color green). This isn’t to say that the synesthete has a hallucination in which the color green appears literally and visually before their eyes—that really would be a hallucination. What I mean instead is that the sound is itself the color green in their brain. By hearing it, they have experienced the color green, with all its appertaining associations.

There is a certain ineffable quality to that mixture of sensory experiences. Consider it for a moment. How would I know, as an unaware synesthete, that the color green is the correct association? I haven’t seen the color green in any literally visual sense.

I might make sense of this by working backwards from the associations green has in my mind—each tied both to the sound and to the color. Or else, I might find the color linked rather directly to the sound, working backwards from what associations the sound has in my mind. Stranger still, I might find associations between sounds and colors I haven’t even seen in reality.

Synesthesia seems to glom things together until the experiences occur not only simultaneously but literally as a unified sensory experience. To experience the trigger is to experience its association.

I believe this causes synesthesia to go under-observed and misunderstood. Many of us may experience synesthesia without recognizing what it is, how common it is, or how subtly it integrates into our sensory experience. I don’t believe it’s universal, but I believe it may be a widespread feature that exists on a spectrum.

I believe synesthesia-like phenomena underlie certain kinds of universal sound symbolism, such as the bouba/kiki effect, which has been found across ages, cultures, and eras. Ramachandran and Hubbard did some influential experiments in this area.

So as for me? I experience compelling visual sensations brought on by specific auditory experiences—in particular, music at certain frequencies. I didn’t have much breadth of exposure to music growing up (only hearing country music on radios around me until I was a teenager), so I didn’t really understand much about myself and music until I was nearly an adult.

I began to put it together in a college music class (one with a powerful sound system). I found myself instinctively blinking and averting my eyes while listening to some baroque music, and for the first time I realized how forcefully visual the music became for me. I started reading more about synesthesia and thought maybe this was a reality for me. Since then, I’ve learned some of the details of how music affects me.

My experiences have some color components, but I struggle to describe what those colors are beyond cool or warm. They often have textural or spatial components, hanging disjointed in the space nearby.

Percussive sounds cause white or otherwise desaturated interruptions in the visual experience. They are like visual noise—snow, static, walls. I tend to seek out music which avoids or minimizes percussion.

Vocal accompaniment causes almost no visual sensation whatsoever. I tend to ignore vocals in music or seek out purely instrumental music. Highly distorted, distinctly stylistic, or highly polyphonic vocals are an exception.

Higher pitched sounds tend to have stronger associations, but I get fuller, more textured experiences from richer musical arrangements. These can be classical, electronic, guitar bands, or whatever.

Sounds of different pitches or timbres make themselves more or less visually salient. Usually higher pitches layer over or through lower ones and have more compact visual representations and warmer colors. Melodic lines and overall chord progressions tend to lead to eddies and swirls.

Chromaticism from modernist compositions causes some of the most interesting visuals. “Clair de lune” starts with such rich, variegated lavenders, which then yield to legato scintillations of all colors, covered with lots of warm notes, like stars embedded in a cool sky. The Tristan chord from Tristan und Isolde felt like a greenish-yellowish blight melting into a veil billowing in the wind as the prelude carried into further dissonances—while the final “Liebestod” glowed a hot, clean pink for me. “Aquarium” from Le carnaval des animaux by Camille Saint-Saëns (you probably know it as “that music that plays in cartoons when someone is underwater”) has all these piano glissandos riding over top which cause indescribable motes of light to flit away.

I don’t believe I’d call synesthesia (if that’s what this is) a blessing or a curse. These experiences simply shape the way I enjoy music. I find them vivid, memorable, and affecting—they add substance. I’m glad they’re there, but I don’t really have any explanation for them, and I enjoy plenty of things without them. I’ve found they give me a better sensory recollection for things that happen while I’m listening to music, but that might be the only benefit.

I don’t really talk about synesthesia. (I searched my Twitter account for mentions, and I see I’ve only ever mentioned the word once before today.) It’s an extremely personal, subjective experience, and part of it is ineffable. It’s like describing a dream—no one really cares but you.

Since there’s no way to convey the affect portion of the experience, it’s hard to communicate your intentions. It sounds like an attempt to make yourself seem special or gifted in some way. Synesthesia has been associated with artists and art since the 1800s, especially musical composers. It became faddish enough for a time that it was even popular to fake aspects of it.

I want to emphasize again that I believe there is a universal quality to sensory crossover. My personal belief is that synesthesia-like experiences exist on a spectrum in many people—some more than others. The more we talk about it for what it is and how it actually is experienced, the more readily others will recognize the experience in themselves and normalize it.

For this reason, I don’t want to state definitively that I have synesthesia. I’m not saying that. I will say that I have experiences that feel like they could be appropriately described by the term, so I wouldn’t rule it out. I imagine that many people feel like I do or have some similar quality to their sensorium. I just want to open us up to the possibility of synesthesia being ordinary.

Thematic Rewriting

I have been revisiting On Thematic Storytelling in my thoughts lately. Part of it is because I’ve been helping a friend story-doctor their writing a little. It’s also because I’ve been dwelling on my own story notes and refining them.

This has led me to questions I had not considered before. First of all, why do we write in symbolic, allegorical ways in the first place? Secondly, how do these themes end up in our stories at all, ready to develop, even if we don’t set out to use those themes at first? I think the answers to these questions are linked.

People have a long history of telling fables and parables to relate messages to one another, using figurative, allusive language. I believe this works because humans are designed to learn by example. We have wiring which internalizes vicarious experiences and reifies them as personal ones. Allegorical stories, like fables, adapt our ability to learn through example by employing our imagination.

We respond to fables well because of their indirection. On the one hand, it may be easier just to state the moral of a story outright and not even bother with telling “The Grasshopper and the Ant” or “The Tortoise and the Hare.” However, a moral without its fable is only a command. By telling the stories, the storyteller guides the listener through the journey. Figurative language and characters who represent ideas help to involve the listener and keep them engaged. The moral comes through naturally by the end as a result of the listener following the internal logic of the narrative, so the storyteller’s intended meaning does not need to be inculcated extrinsically.

In this way, indirect stories use symbolic language to draw in listeners. We, listening to the story, relate to the figures and characters because they allow us to involve ourselves. In turn, because we get invested, we take parts of the story and make them about ourselves. We care about what happens and empathize with the characters because we care about ourselves. This is what I actually mean when I say fables work well because of their indirection. We’re not actually interested in grasshoppers and ants, tortoises and hares, but we are interested in representations of our own values, our setbacks, and our triumphs. We put parts of ourselves into the story and get something else back.

And this is why I believe fables and parables have such staying power. Mythologies endure for similar reasons: their pantheons, even if filled with otherworldly gods or spirits, explain the ordinary—the sun, the night, the ocean, the sky—and embody our own better and worser natures—love, anger, and so on. In these myths we see ourselves, our friends, our enemies, and our histories.

So, on the one hand, highly figurative language involves the audience in a way that literal language does not. On the other hand, how does this language end up in our writing in the first place?

Fables, parables, myths, allegories—they all use symbols that have endured over the centuries and been recapitulated in various ways, but in one way or another, they’re told using a long-lived cultural language. When we tell stories, when we write, we use the basic words of our native language, and with those come the building blocks of our cultural language as well. It is as difficult to avoid using these as it is to write a novel without the letter “e.”

We may not often think about the kinds of cultural language we use because we’re unaware of where it comes from. This is one of the primary goals of studying literature, to learn about the breadth of prior influences so we can study our “cultural language” (I am not sure what a better word for this is, but I am sure there is one). Even when we don’t intend to dwell on influences and allusions, we write with the language of symbols that surrounds us.

What’s interesting to consider is what we’re saying without always thinking about it. Just as we grew up with stories that drew us in using powerful symbolic language, we imbue our original stories with ourselves, using similar symbols.

I’ve realized that different writers tend to perseverate on different kinds of questions and beliefs as their experiences allow, and these emerge as common themes in their writing, the same way certain stock characters persist in an author’s repertoire. If, for example, I find myself primarily concerned with questions of faith, my stories may spontaneously concenter themselves around themes of faith, through no real intentional process. In the process, I might even embed symbols which convey the theme without meaning to (for example, religious trappings such as lambs, crosses, or even clergy).

I have come to identify themes and symbols which are either inherent to the story itself or accidentally embedded by my execution, as part of the planning and editing process in my writing. Once I understand them, I then decide whether to keep them and how to refine and harmonize them. For example, if I do have religious symbols within a story, there are unavoidable allusions this implies, and I have to work through how to harmonize those with my story or cut them out. As another example, if I have a character who is alone on a desert island, the themes of isolation and survival are both inherent parts of the story structure which cannot be avoided and will be addressed in some way or another. If I write about political conflict, then cooperation-versus-competition is lurking behind nearly every character’s motivation.

In a practical sense, how do I develop themes and work in symbols? Generally, editing first occurs at a larger scale and then moves to a smaller scale, so I tend to think in similar terms with themes. I identify whether broader themes already exist and ask myself if they carry through the entire narrative. If there is an inchoate message buried in the subtext that I didn’t intend to put there, I should decide if it belongs or not. If I want to keep it, then I need to clarify what it is and how it works as a theme.

I examine the narrative through the point of view of this theme and see which elements fit and which don’t. Then I see how I can adapt the narrative to fit the theme better—a process I actually love because it often burnishes rough narrative ideas.

To give an idea of what I mean, I’ve been writing a story whose central theme has to do with disintegrity of mind and body—the feeling of being not only under a microscope but flayed open at the same time. I began with a premise that didn’t really involve this theme, but I synthesized ideas from elsewhere that really pushed the story in that direction. When I began considering reworking the style and ending, I realized I needed more narrative disorientation and ambiguity to convey the protagonist’s feeling of disintegrity. The changes I had to make involved researching certain plays and removing dialogue tags to make it uncertain who’s speaking (implying that the protagonist could be speaking when she believes her interrogator to be speaking).

Before I go on, I also ask myself what the theme means for me. If I were making a statement about the theme, what would I want to say? More to the point, what does the story have to say? Sometimes, there are even multiple themes in conflict—hints at determinism here, others at fatalism there—so that the overall picture gets confused. The most important thing is that the narrative contains an internal logic that works at every scale, from the broadest thematic sense to the word-by-word meaning. I consider—at least at some point, even if it’s only after a draft is done—what the overall story is saying and then ensure no smaller element of the story contradicts itself.

After I’ve made some large-scale changes, there may be smaller narrative gaps to fill, and I find I can also add certain ornamentations to settings or characters based on theme. This is where I can use the language of symbolism. I try to be somewhat coy. Like I said, indirect, allegorical language allows for stories that are more interesting because they’re more relatable and let the reader insert themselves. The illusion falls apart if the allegory is naked and obvious.

I don’t mean that I necessarily want to make symbols which are obscure allusions, either. I personally like symbols which have a logic within the narrative. I believe it’s possible both ways. Lord of the Flies is an example of a highly allegorical novel which uses symbols this way. The conch shell symbolizes civilization because it’s used as a rallying point for the boys who remain faithful to the rules. Golding embeds its symbolism completely within the narrative logic—expressed in terms of the story—and the idea it represents erodes as the physical item is neglected and then destroyed.

Sometimes I’m not working with a story but just a premise, and it’s one to which many themes could attach. I could choose a theme, and that choice would influence the direction in which I take the premise. A lot of the ideas I decide to develop end up being conglomerations of ideas, and I’m never quite sure which ones should go together. Themes can sometimes be links which join disparate ideas into a framework, allowing me to decide what to synthesize and how. This way, a premise and a theme together determine how a story grows and what characters, settings, and events I place into it.

It may seem like a lot of effort to run through this exercise for a story which is purely fanciful entertainment, which sets out not to say anything in the first place. Not everyone sets out to write an allegory. However, like I said, I think to some extent it’s not possible to avoid planting some subtextual themes because we all speak with a shared cultural language. My goal is to consider what I say between the lines and harmonize that thematic content. Hopefully, I end up with a story with a wider meaning running through it, giving it some backbone. I never set out to make a moral—maybe at most a statement?—but I do at least try to structure my narrative ideas to make the most impact.

I am extraordinarily grateful to Zuzu O. for the time and care she put into editing this post.

Removing: PGP Key and Content Licensing

I have removed two pages from my website—my PGP key and my content license.

PGP Key

I removed my public PGP key because I no longer intend to use it for signing messages. My same key remains on Keybase and other public key servers, but I no longer sign outgoing mail, nor do I intend to use my key regularly in any way.

I don’t feel that my key has been compromised. However, it does me little good to keep using it, and most encryption in actual use in my daily life doesn’t involve PGP. In the wake of a mostly minor vulnerability called EFAIL earlier this month—which didn’t impact me—I found myself persuaded of the ultimate futility of keeping up with PGP by an op-ed on Ars Technica from a couple of years ago.

Content Licensing

I removed my CC BY-NC 4.0 license notice page from my site. This post hereby serves as notice that from this date forward, I no longer license my existing or new content (writing, photos, videos, or audio) under a CC license, and so all that content falls back to the default copyright of the relevant jurisdiction.

Any works that have already been used under the CC license were licensed irrevocably, so I have no ability to revoke those licenses. They remain licensed until their rights lapse under the copyright laws of those jurisdictions.

If anyone wants to use some of my content, they are certainly welcome. The removal of the license implies only one practical change—you must ask permission. That’s all.

Math, She Rote

My friends often have different educational backgrounds than mine. Some of them are younger, but even when they aren’t, they often come from urban areas that had moved to more modern educational curricula before my school system did. The way I learned basic arithmetic in the late 1980s and early 1990s was unchanged from how it was taught in the early 1980s, because that’s when our books dated from.

I learned during an interesting period in mathematical education history. It represented a kind of educational interbellum—a bit after the “New Math” of the 1960s and 1970s but before the “math wars” instigated by the 1989 Curriculum and Evaluation Standards for School Mathematics. The movement that followed the 1989 publication has been called “reform mathematics,” and it emphasizes processes and concepts over rote correctness and manual computation. In other words, the educators promoting reform mathematics came to believe that the path students took toward the answer mattered more than whether they got the answer right. Many states’ standards and federally funded textbooks followed reform mathematics in the 1990s and beyond.

Reform mathematics emphasized constructivist teaching methods. Under this approach, instead of prescribing to students the best way to solve a problem, teachers pose a problem and allow the student to surmount it by building on their own knowledge, experiences, perspective, and agency. The teacher provides tools and guidance to help the student along the way. Constructivist approaches involve experiments, discussions, trips, films, and hands-on experiences.

One example of a constructivist-influenced math curriculum, used in elementary school to teach basic arithmetic, was known as Investigations in Number, Data, and Space. It came with a heavy emphasis on learning devices called manipulatives: tactile objects which the student can physically see, touch, and move to solve problems. These are items like cubes, spinners, tapes, rulers, weights, and so on.

As another example, someone I know recently described a system they learned in elementary school called TouchMath for adding one-digit numbers, which makes the experience more visual or tactile (analogous to manipulatives). They explained that for each computation, they counted the “TouchPoints” in the operands to arrive at the result.

I had never heard of TouchMath. In fact, I never solved problems using manipulatives, nor any analogue of them. I had little experience with this form of math education. We were given explicit instructions on traditional ways to solve problems (carrying, long division, and so on). Accompanying drawings or diagrams rarely became more elaborate than number lines, grids, or arrangements of abstract groupings of shapes which could be counted. They served only as tools to allow students to internalize the lesson, not to draw their own independent methods or conclusions.

I contrasted my friend’s experience with TouchMath against my own. To add or subtract one-digit numbers, we merely counted. We were given worksheets full of these problems, and since counting through each one would have been tedious and impractical, memorizing each combination of numbers became inevitable. Given the expectations and time constraints, I’m certain rote memorization was the goal.

Within a couple of years, we were multiplying and dividing, and we were adding and subtracting two- or three-digit numbers with carrying—processing the numbers digit-wise. At the same time, we were asked to commit the multiplication tables to memory. These expectations came in third grade, and it would have been nearly impossible to make it out of fourth grade (around age ten for me) without committing the multiplication table and all single-digit addition and subtraction to memory.


Our teachers did not bother to force us to memorize any two-digit arithmetic operations. But I recall, many years ago, my grandma telling me she still had most two-digit additions and subtractions memorized. It was just an offhand remark—maybe something she said as I was reaching for a calculator for something she had already figured out. Maybe we were playing Scrabble.

For context, she would have gone to school in rural Georgia in the 1940s and 1950s, and she graduated high school. (In that time and place, it was commonplace for many who intended to do manual, trade, or agricultural work not to continue through secondary school.)

I remember feeling incredulous at the time about how many two-digit arithmetic facts that would imply memorizing. Of course, many would be trivial (anything plus or minus ten or one, or anything minus itself); others would be commonplace enough to memorize easily, while still others would be rare enough to ignore. But that still leaves several thousand figures to remember.
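
A quick count bears out that estimate, treating each unordered sum and each non-negative difference as one fact to memorize:

```python
# Counting how many two-digit addition and subtraction facts exist.
two_digit = range(10, 100)  # 90 numbers

# Sums, counting each unordered pair once (34 + 56 is the same fact
# as 56 + 34), plus differences that don't go negative.
additions = [(a, b) for a in two_digit for b in two_digit if a <= b]
subtractions = [(a, b) for a in two_digit for b in two_digit if a >= b]

print(len(additions))     # 4095
print(len(subtractions))  # 4095

# Even after discarding the trivial cases, "several thousand" holds up.
```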

The more I thought about it, the more I saw that, in her world, it made better sense to memorize literally thousands of things than to work them out over and over. She had no way of knowing that affordable, handheld calculators would appear within a few decades of her leaving school, after all. Each two-digit addition or subtraction she memorized spared her from working that problem out from scratch for the rest of her life. This saved her effort and time every time she

  • balanced her checkbook,
  • filled out a deposit slip at the bank,
  • calculated the tax or tip on something,
  • tallied up the score for her card game,
  • totaled up a balance sheet for her business,
  • made change for a customer, or
  • checked that the change she was being given was correct,

to say nothing of the hundred little things I can’t think of. She married young and has run small businesses for supplemental income all her life, so managing the purse strings fell squarely into her traditional gender role. Numbers were part of her daily life.

So for the first half of her life, none of this could be automated. There were no portable machines to do the job, and even the non-portable ones were expensive, loud, slow, and needed to be double-checked by hand.

I don’t believe she memorized these all at once for a test, the way I learned the multiplication tables in third grade. It seems likely she memorized them over time. It’s possible that expectations in her school forced a lot of memorization that I didn’t experience when I went to school decades later, but maybe she was just extra studious.


I recall, as I went through school, having to rely more on a calculator as I approached advanced subjects. Before calculators became available to students, appendices of lookup tables contained pre-calculated values for many logarithms, trigonometric functions, radicals, and so on. Students relied on these to solve many problems. Anything else—even if it were just the square root of a number—came from a pen-and-paper calculation. (Many of my early math books did not acknowledge calculators yet, but this changed by the 1990s.)

Charles Babbage reported that he was inspired to attempt to mechanize computation when he observed the fallibility of making tables of logarithms by hand. He began in the 1820s. A hundred and fifty years later, arithmetic computation became handheld and affordable, fomenting new tension around the role rote memorization plays in both learning and daily life.

Today, we’re still trying to resolve that tension. Memorization may feel like it has a diminished role in a post-reform education environment, but it’s by no means dead. Current U.S. Common Core State Standards include expectations that students “[b]y end of Grade 2, know from memory all sums of two one-digit numbers,” and, “[b]y the end of Grade 3, know from memory all products of two one-digit numbers.” That sounds exactly like the pre-reform expectations I had to meet.

All this means is that there has been neither a steady march away from rote memorization nor a retreat back to it. Research is still unclear about what facts are best memorized, when, or how, and so there’s no obvious curriculum that fits all students at all ages. For example, the Common Core Standards cite contributing research from a paper which reports on findings from California, concluding that students are counting more than memorizing when pushed to memorize arithmetic facts earlier. The paper reasons this is probably due to deficiencies in the particulars of the curriculum at the time of the research (2008).


I’m not an expert, and I don’t have easy answers, but my instinct is that rote memorization will always play an inextricable role in math education.

Having learned about the different directions in which the traditional and reform movements of math education have tugged the standards over the years, I tend to lean more traditional, but I attribute this to two things. One is that I was educated with what I remember to be a more traditional-math background, and though I didn’t like it, it seems serviceable to me in retrospect.

The other reason is that, for me, memorization has always come easily. I don’t really know why; it’s just some automatic way I experience the world. Having this point of view, though, I can easily see how beneficial it is to have answers to a set of frequent problems ready at hand. It’s efficient, and its benefits keep paying off over time. The earlier you remember something, the more it helps you, and the better you internalize it. Even for those who can’t remember things as easily, the returns on doing so are just as useful.

I do completely agree with the underlying rationale of the constructivist approach. Its underpinnings are based on Piaget’s model of cognitive development, which is incredibly insightful. It seems useful to learn early how to sit with the discomfort of adapting your internal mental model to new information, and to take an active role in learning new ideas in order to surmount new problems.

I don’t necessarily believe that a constructivist learning approach is intrinsically at odds with rote memorization—that is to say, I don’t believe memorization necessarily requires passive acquisition. In fact, active experimentation and participation may help form stronger memories. It’s more likely the two approaches compete for time in curricula. It takes longer to derive a formula for area or volume by independent invention, for example, than to have it given to you.

In fact, constructivist learning works better when the student has a broader reservoir of knowledge to draw from when trying to find novel solutions to problems. In other words, rote memorization aids constructivist learning, which in turn aids remembering new information.

My feeling is that math will always require a traditional approach at its very heart to set in place a broad foundation of facts, at least at first, before other learning approaches can have success. Though the idea of critical periods in language acquisition has detractors and heavy criticism, there is a kernel of truth to the idea that younger minds undergo a period of innate and intense linguistic fecundity. Maybe as time goes by, we can learn more about math acquisition and find out which kinds of math learning children are more receptive to at which ages. Until then, I feel like we’re figuring out the best way to teach ourselves a second language.

I am grateful to Rachel Kelly for her feedback on a draft of this post.

Privacy Policy Updates: Data Storage

I updated WordPress today to version 4.9.6. I noticed this version comes with support for implementing privacy policies throughout the site. I seem to have been ahead of the curve in implementing my own, but when the GDPR comes into effect in the EU this month, it will clarify and simplify data privacy for much of Europe, and enforcement will become a more direct matter as well. Any web service that is accessible from Europe and does business there has now updated its privacy policy to ensure it complies with the GDPR—which is why everyone has gotten a raft of privacy policy updates.

Most of these privacy policy updates pertain to what rights customers or users have to their own data. Often, they grant new rights or clarify existing rights. This week’s new version of WordPress is yet another GDPR accommodation.

Today, I have to announce my own GDPR update. Yes, I’m just a tiny website no one reads, and I provide no actual services. But having already committed to a privacy policy, which I promised to keep up to date (and announce those changes), I’m here to make another update.

One nice thing that came with the WordPress update is a raft of suggestions for a good privacy policy (and notes on the ways WordPress and its plugins may raise privacy concerns). I found that I had covered most of them, but one thing I needed to revisit was a piece of functionality in Wordfence.

I use Wordfence for security: it monitors malware probes and uses static blacklists of known bad actors. It also, by default, sent cookies to browsers in order to track which visitors were returning ones and which were automated clients. The tracking consisted only of an anonymous, unique token which distinguished visitors from one another. Unfortunately, this functionality had no opt-out and did not respect Do Not Track.

Although my tracking was only for security purposes—not for advertising—and although it did not store any personal information or share anything with anyone else, I realized I would have to disable it.

I had made explicit mention of this tracking in my previous revision of my privacy policy:

I run an extra plugin for security which tracks visits in the database for the website, but these are, again, stored locally, and no one has access to these.

This is unfortunately vaguer than it should have been, since it doesn’t mention cookies. It also makes no provision for consent. It merely states the consequences of visiting my site.

The GDPR makes it clear that all tracking techniques (and specifically cookies) require prior consent. Again, I’m not a company, and I don’t provide any service. I’m not even hosted in the EU’s jurisdiction. My goal, though, is to exist as harmoniously with my visitors as possible, whoever they may be, and to have the lightest possible touch.

So I’ve disabled Wordfence’s cookie tracking. I’ve added a couple of points to my privacy policy which clarify more precisely which data is logged and under which circumstances cookies may be sent to the browser.

This interferes with my analytics, unfortunately—it’s no longer possible to be sure which visitors are human. I think it’s worth it, regardless.

I also made a couple of other changes based on WordPress’s suggestions. I moved a few bullet points around to group related points together more logically. I also added a point which specifies which URL my site uses (meaning the policy would be void if viewed in an archived format, within a frame, or copied elsewhere).

How Transgender Children Are Made and Unmade

Note that the following post discusses the sensitive topic of conversion therapy for transgender children, along with mentions of outmoded terminology and psychodynamic models, ethically questionable studies and treatment practices, and links to some sources which may misgender or mislabel transgender people.

I have also added some clarifications to my final points on 12 May 2018.

Today, a friend pointed me to a news article out of the UK covering a new study by Newhook et al. released in the International Journal of Transgenderism. The study, published a couple of weeks ago, criticizes a handful of other studies from the last decade which bolster a myth that the vast majority (more than 80%) of children who have presented as transgender have since “desisted” (reverted to being cisgender) as adolescents or adults. Those studies, all released since 2008, analyze children who were researched between 1970 and the 2000s.

Those recent desistance studies might hint at a couple of interpretations of transgender children who desist. The most neutral one is that such children were “going through a phase,” playing out the vagaries of youthful whims and later changing their minds. However, these studies also permit a more sinister interpretation—one in which children were subject to external influences that “confused” them about their gender, a confusion which time and therapy later allowed them to outgrow and reject.

It stands to reason that, because each child included in the original studies had contact with researchers, they were likely seeking treatment which included therapy, which might seem to support the latter interpretation. The standard of care for whichever diagnosis they received, which would have varied by location and time—more on this below—may in fact have focused on influencing the child away from transgender or homosexual behaviors. Many research studies and forms of treatment, especially in earlier years, took the form of conversion therapy. That also creates interpretive concerns with the original studies: the treatment affects the very outcome being measured. (This is referenced below as well.)


First, I want to briefly discuss the flaws in the desistance studies so that we can begin to erode the desistance myth. The news article above sums up the new study’s critique quite well.

The ‘desistance’ figure come from studies conducted between the 1970s and the 2000s in the Netherlands and Canada, which assessed whether the kids that sought services at the gender clinic turned out to be trans as adults. The new publication concludes that the figure included all kids that were brought to the clinic, many of who never experienced gender dysphoria in the first place nor saw themselves as trans. Kids that shouldn’t have been a part of the figure were therefore being used to ramp up the numbers.

The news article elaborates that not only is there uncertainty about how many children should have been counted as transgender in the first place, but the earlier studies also make blanket assumptions about what happened to those children afterward.

Another flaw is that in the follow up, all participants that weren’t included for whatever reason were simply brushed off as ‘desisters’. This was done without having any factual evidence or knowledge about the children involved.

In what should have been simple division, both the numerator and the denominator have become suspect. Now the question becomes: do we have the actual figures? Here’s where the real problems start. We need to delve into the primary source, the Newhook et al. study itself.
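
To see how those two flaws compound, here is a small sketch with purely hypothetical numbers (they are not drawn from any of the studies in question) showing how counting every referred child, and assuming everyone lost to follow-up desisted, inflates the figure:

```python
# Hypothetical numbers, chosen only to illustrate the two flaws above;
# they are not figures from any of the actual studies.
referred = 100   # every child brought to the clinic
persisted = 25   # children confirmed transgender at follow-up
desisted = 10    # children confirmed cisgender at follow-up
# Suppose only 60 of the referred children ever experienced gender
# dysphoria at all, and everyone else was never successfully followed.

# Flawed method: count everyone referred, and treat everyone who
# couldn't be reached as having desisted.
flawed_rate = (referred - persisted) / referred
print(f"{flawed_rate:.0%}")  # 75%

# Counting only children who belonged in the sample and were actually
# followed up tells a different story.
known_rate = desisted / (persisted + desisted)
print(f"{known_rate:.0%}")  # 29%
```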


The study is called “A critical commentary on follow-up studies and ‘desistance’ theories about transgender and gender-nonconforming children,” authored by Newhook et al. It contains a methodological meta-analysis of four previous studies. As it states in its introduction,

In the media, among the lay public, and in medical and scientific journals, it has been widely suggested that over 80% of transgender children will come to identify as cisgender once they reach adolescence or early adulthood. This statement largely draws on estimates from four follow-up studies conducted with samples of gender-nonconforming children in one of two clinics in Canada or the Netherlands (Drummond, Bradley, Peterson-Badali, & Zucker, 2008; Steensma, Biemond, de Boer, & Cohen-Kettenis, 2011; Steensma, McGuire, Kreukels, Beekman, & Cohen-Kettenis, 2013; Wallien & Cohen-Kettenis, 2008).

The critiques in the Newhook et al. study aren’t new, and the authors take pains to credit some of their forebears in their introduction as well. They contextualize their new study by explaining that they hope to guide the upcoming eighth version of the WPATH standards of care, which will determine how transgender children are treated for years to come.

Newhook et al. mention older follow-up studies of gender-nonconforming children from before 2008, but the authors explain that those studies are tainted by methodological and sampling problems. They are also likely irrelevant here, since they do not contribute to the 80% figure. So the meta-analysis skips these earlier studies.

We recognize that numerous follow-up studies of gender-nonconforming children have been reported since the mid-20th century (e.g., Green, 1987; Money & Russo, 1979; Zucker & Bradley, 1995; Zuger, 1984). In that era, most research in the domain focused on feminine expression among children assigned male at birth, with the implicit or explicit objective of preventing homosexuality or transsexualism.

(I’d like to draw your attention for a moment to the fact that Kenneth Zucker was an author of both the 1995 study above and the later 2008 study mentioned earlier. We’ll return to him later.)

Now, the Newhook et al. critical commentary notes that the four desistance studies arrive at a figure of over 80% desistance. Then it begins to note what these studies can and cannot tell us. The methodological concerns center on what we can and can’t know, given what information was collected at the time and afterward.

What I found was that, because the studies drew on children seen from 1970 onward, the shifting basis for diagnosis itself seeded the miscategorization: children were miscategorized as transgender in the first place and then miscategorized again at follow-up.

Back in 1970, no formal diagnosis for gender identity disorder or gender dysphoria existed. Doctors and researchers had only informal descriptions. As Newhook et al. explain,

However, the plain-language meaning of gender dysphoria, as distress regarding incongruent physical sex characteristics or ascribed social gender roles, has been established since the 1970s (Fisk, 1973). When these four studies refer to gender dysphoria, they are referring to this plain-language context of distress, and not the newer DSM-5 diagnostic category.

The DSM-III would not exist until 1980, so before then the meanings applied would have varied from person to person, as experience and prejudice allowed. I do not know all the criteria that were applied. (I have been unable to locate the Fisk source, but Fisk appears to have coined the term “gender dysphoria.”)

Then, from the 1980s through the 2000s, the DSM-III, DSM-III-R, DSM-IV, and DSM-IV-TR each included a “gender identity disorder” diagnosis, which came with a “GID/Children Transsexualism” or “gender identity disorder in children” category. The symptomatology of these was similar in general shape and included distress (a gender dysphoria component) but also certain behaviors (e.g., crossdressing), timeframes (e.g., six months), and so on. This is a definite case of moving the goalposts: the diagnostic criteria shifted, and in some ways they became more lax. Diagnostic criteria often state that only a certain number of the listed components need be satisfied over a period of time, so if every component but gender dysphoria is present, the diagnosis of gender identity disorder can still apply.

At the same time, the standards of care were also shifting, evolving over time to match the competing typologies and psychosexual models of the providers. Adults learned to conform to expectations (such as crossdressing for a year before receiving treatment, or professing attraction to men where no such attraction existed).

Children, who may not have been aware of these standards and criteria and were simply acting on their needs and wants, might very well have fallen in and out of the categorizations changing around them. Through no fault of their own, the category of transgender might one day have landed upon a child and then another day slipped away from them.

The Newhook et al. study describes the problem this way:

Due to such shifting diagnostic categories and inclusion criteria over time, these studies included children who, by current DSM-5 standards, would not likely have been categorized as transgender (i.e., they would not meet the criteria for gender dysphoria) and therefore, it is not surprising that they would not identify as transgender at follow-up. Current criteria require identification with a gender other than what was assigned at birth, which was not a necessity in prior versions of the diagnosis. For example, in Drummond et al. (2008) study […] the sample consisted of many children diagnosed with GIDC, as defined in the DSM editions III, III-R, and IV (American Psychiatric Association, 1980, 1987, 1994). Yet the early GIDC category included a broad range of gender-nonconforming behaviors that children might display for a variety of reasons, and not necessarily because they identified as another gender. Evidence of the actual distress of gender dysphoria, defined as distress with physical sex characteristics or associated social gender roles (Fisk, 1973), was dropped as a requirement for GIDC diagnosis in the DSM-IV (American Psychiatric Association, 1994; Bradley et al., 1991). Moreover, it is often overlooked that 40% of the child participants did not even meet the then-current DSM-IV diagnostic criteria. The authors conceded: “…it is conceivable that the childhood criteria for GID may ‘scoop in’ girls who are at relatively low risk for adolescent/adult gender-dysphoria” and that “40% of the girls were not judged to have met the complete DSM criteria for GID at the time of childhood assessment… it could be argued that if some of the girls were subthreshold for GID in childhood, then one might assume that they would not be at risk for GID in adolescence or adulthood” (p. 42). By not distinguishing between gender-non-conforming and transgender subjects, there emerges a significant risk of inflation when reporting that a large proportion of “transgender” children had desisted. As noted by Ehrensaft (2016) and Winters (2014), those young people who did not show indications of identifying as transgender as children would consequently not be expected to identify as transgender later, and hence in much public use of this data there has been a troubling overestimation of desistance.

Because of these meaningful shifts in diagnostic criteria over the last fifty years, there’s little hope of reconstructing the true figures of desistance, such as they may be. We would need detailed notes (interviews and the like) from the original cohorts to assess the children’s self-reported identities, and then the same cohorts’ adult identities, assessed the same way at follow-up, to compare the two. I suspect the paucity of detailed qualitative data from the original studies would undermine such an effort, given the primacy of researchers’ diagnoses over self-described experiences and identities.

In most of the studies, it appears we do not have such detailed notes available. Newhook et al. do cite Steensma et al. (2011) as having some unique qualitative research, but the qualitative data are very limited—only two interviews are mentioned.


The Newhook et al. study also raises many ethical concerns, and here I turn back to the problem of Zucker in particular. The authors identify three ethical concerns, of which the second is particularly insidious: the questionable goals of the treatment itself.

In describing their second concern, the authors write,

A second ethical concern is that many of the children in the Toronto studies (Drummond et al., 2008; Zucker & Bradley, 1995) were enrolled in a treatment program that sought to “lower the odds” that they would grow up to be transgender (Drescher & Pula, 2014; Zucker, Wood, Singh, & Bradley, 2012; Paterson, 2015). Zucker et al. (2012) wrote: “…in our clinic, treatment is recommended to reduce the likelihood of GID persistence” (p. 393).

As I write, Zucker’s words are only six years old. To be clear: he is both espousing and practicing conversion therapy of children.

Zucker is not a marginalized figure in the world of psychiatry. He is not only respected and accepted; he was appointed by the American Psychiatric Association to head the “Sexual and Gender Identity Disorders” work group for the DSM-5. A heartbreaking account of his attempt at conversion therapy may be found in this NPR story (with some misgendering).

He was not the only person in the group to favor controversial theories, either. Blanchard (who favors an outmoded typology of transgender people based on sexual attraction and also attempts conversion therapy) and Lawrence (who has expressed the belief that transgender people have a kind of body integrity identity disorder) also formed part of the group.

Why do I mention their role in shaping the DSM-5? Well, they believe children should be dissuaded from transgender identities, which they regard as pathological or maladaptive. In shaping the diagnostic criteria for children and adults, they moved the goalposts for who fits the model. That allowed follow-up studies to tally how children seen in the past fit current, different diagnostic criteria and to conclude that they had “desisted.” In turn, these fudged figures can be used to justify further conversion therapy, resist affirmative models of care, and influence the WPATH standards of care to inhibit access to treatment and personal safety.

I therefore question whether, after influencing or directly authoring new diagnostic standards for gender dysphoria, advocates for conversion therapy then revisited older studies to conduct follow-ups, aware of how the results would skew toward their desired outcome: an apparent tendency toward desistance, which marks transgender identities as “unnatural” aberrations that only emerge later in life and can be headed off earlier in childhood. Buried underneath this interpretation is an implicit assumption that children form transgender identities due to extrinsic influences; from it, these advocates conclude that they can prescribe a model of care which essentially counteracts those influences with their own.

Wiser people than I have already explained why better models of care, such as the affirmative care model practiced in most North American clinics, provide better outcomes. The news article I began with also concludes with some great sources on treatment outcomes, which I cannot possibly outdo, so I’ll leave you to revisit Owl’s article.

Denying children bodily autonomy and agency over their identity is a form of abuse. The long-lasting confusion may result in self-denial, withdrawal, self-harm, or even suicide later in life. Unlike many forms of abuse, which happen privately, transgender conversion therapy co-opts institutions toward its own ends, shaping the standards of care (via influential studies presented to the WPATH) and writing the diagnostic manual itself. The prevalent myth of desistance of childhood gender dysphoria has been a powerful tool used to abuse children. It must be dismantled. To do so, we must expose pernicious and specious studies, using critical meta-analyses such as Newhook et al.’s.

I am grateful to Zuzu O. for feedback on this post.