Ordinary Synesthesia

For the last fourteen years or so, I have privately described some of my sensory experiences as a phenomenon called synesthesia. I don’t talk about it much because I am not sure whether synesthesia is an accurate description. Over the years, though, I have found that the term still feels appropriate in many ways. Maybe it fits something you experience, too.

To talk meaningfully about what synesthesia is, I’m drawing from the first paper I read on the subject, one called “Synesthesia: Phenomenology and Neuropsychology” by Richard E. Cytowic.1 Research has continued since that paper appeared in 1995, so I’ve looked up some of the later work as well—much of it also by Cytowic.

What is synesthesia? It’s an inextricable linking of distinct senses, such as sight and sound. Cytowic says this with more academic language: “[T]he stimulation of one sensory modality reliably causes a perception in one or more different senses.” It’s not symbolic or metaphorical. It’s a literal, sensory experience that happens reliably.

Importantly, though, it’s not a mental disorder. It cannot be diagnosed using the ICD or DSM. There’s no hallucinatory aspect to synesthesia, and it does not impair those affected by it.

What do I mean by “linking of distinct senses”? There are numerous forms of synesthesia, but as an example, consider the form that associates sounds with visual experiences (like color). When a person who experiences synesthesia (a synesthete) hears a sound which triggers an association, that sound is perceived both as the sound and as the visual event (such as the color green). This isn’t to say that the color green literally and visually appears before the synesthete’s eyes—that would be a hallucination. What I mean instead is that the sound is itself the color green in their brain. By hearing it, they have experienced the color green, with all its appertaining associations.

There is a certain ineffable quality to that mixture of sensory experiences. Consider it for a moment. How would I know, as an unaware synesthete, that the color green is the correct association? I haven’t seen the color green in any literally visual sense.

I might make sense of this by working backwards from the associations green has in my mind—each tied both to the sound and to the color. Or else, I might find the color linked rather directly to the sound, working backwards from what associations the sound has in my mind. Stranger still, I might find associations between sounds and colors I haven’t even seen in reality.

Synesthesia seems to glom things together until the experiences occur not only simultaneously but literally as a unified sensory experience. To experience the trigger is to experience its association.

I believe this causes synesthesia to go under-observed and misunderstood. Many of us experience synesthesia without understanding what it is, how common it is, or how subtly integrated into our sensory experience it can be. I don’t believe it’s universal, but I believe it may be a widespread feature that exists on a spectrum.

I believe synesthesia-like phenomena underlie certain kinds of universal sound symbolism, such as the bouba/kiki effect, which has been found across different ages and cultures across time. Ramachandran and Hubbard did some influential experiments in this area.2

So as for me? I experience compelling visual sensations brought on by specific auditory experiences—in particular, music at certain frequencies. I didn’t have much breadth of exposure to music growing up (only hearing country music on radios around me until I was a teenager), so I didn’t really understand much about myself and music until I was nearly an adult.

I began to put it together in a college music class (one with a powerful sound system). While listening to some baroque music, I found myself instinctively blinking and averting my eyes, and for the first time I realized how forcefully visual the music became for me. I started reading more about synesthesia and thought maybe this was a reality for me. Since then, I’ve learned some of the details of how music affects me.

My experiences have some color components, but I struggle to describe what those colors are, beyond cool or warm. They often have textural or spatial components, disjointed in space nearby.

Percussive sounds cause white or otherwise desaturated interruptions in the visual experience. They are like visual noise—snow, static, walls. I tend to seek out music which avoids or minimizes percussion.

Vocal accompaniment causes almost no visual sensation whatsoever. I tend to ignore vocals in music or seek out purely instrumental music. Highly distorted, distinctly stylistic, or highly polyphonic vocals are an exception.

Higher pitched sounds tend to have stronger associations, but I get fuller, more textured experiences from richer musical arrangements. These can be classical, electronic, guitar bands, or whatever.

Sounds of different pitches or timbres tend to make themselves more or less visually salient. Usually higher pitches layer over or through lower ones and have more compact visual representations, warmer colors. The progressions of melodies and overall chord progressions tend to lead to eddies and swirls.

Chromaticism from modernist compositions causes some of the most interesting visuals. “Clair de lune” starts with such rich, variegated lavenders, which then yield to legato scintillations of all colors, covered with lots of warm notes, like stars embedded in a cool sky. The Tristan chord from Tristan und Isolde felt like a greenish-yellowish blight melting into a veil billowing in the wind as the prelude carried into further dissonances—while the final “Liebestod” glowed like a hot, clean pink for me.3 “Aquarium” from Le carnaval des animaux by Camille Saint-Saëns (you probably know it as “that music that plays in cartoons when someone is underwater”) has all these piano glissandos riding over top which cause indescribable motes of light to flit away.

I don’t believe I’d call synesthesia (if that’s what this is) a blessing or a curse. These experiences simply shape the way I enjoy music. I find them vivid, memorable, and affecting—they add substance. I’m glad they’re there, but I don’t really have any explanation for them, and I enjoy plenty of things without them. I’ve found they give me a better sensory recollection for things that happen while I’m listening to music, but that might be the only benefit.

I don’t really talk about synesthesia. (I searched my Twitter account for mentions, and I see I’ve only ever mentioned the word once before today.) It’s an extremely personal, subjective experience, and part of it is ineffable. It’s like describing a dream—no one really cares but you.

Since there’s no way to convey the affect portion of the experience, it’s hard to communicate your intentions. It sounds like an attempt to make yourself seem special or gifted in some way. Synesthesia has been associated with artists and art since the 1800s, especially musical composers. It became faddish enough for a time that it was even popular to fake aspects of it.

I want to emphasize again that I believe there is a universal quality to sensory crossover. My personal belief is that synesthesia-like experiences exist on a spectrum in many people—some more than others. The more we talk about it for what it is and how it actually is experienced, the more readily others will recognize the experience in themselves and normalize it.

For this reason, I don’t want to state definitively that I have synesthesia. I’m not saying that. I will say that I have experiences that feel like they could be appropriately described by the term, so I wouldn’t rule it out. I imagine that many people feel like I do or have some similar quality to their sensorium. I just want to open us up to the possibility of synesthesia being ordinary.

Thematic Rewriting

I have been revisiting On Thematic Storytelling in my thoughts lately. Part of it is because I’ve been helping a friend story-doctor their writing a little. It’s also because I’ve been dwelling on my own story notes and refining them.

This has led me to questions I had not considered before. First of all, why do we write in symbolic, allegorical ways in the first place? Secondly, how do these themes end up in our stories at all, ready to develop, even if we don’t set out to use those themes at first? I think the answers to these questions are linked.

People have a long history of telling fables and parables to relate messages to one another, using figurative, allusive language. I believe this works because humans are designed to learn by example. We have wiring which internalizes vicarious experiences and reifies them as personal ones.4 Allegorical stories, like fables, adapt our ability to learn through example by employing our imagination.

We respond to fables well because of their indirection. On the one hand, it may be easier just to state the moral of a story outright and not even bother with telling “The Grasshopper and the Ant” or “The Tortoise and the Hare.” However, a moral without its fable is only a command. By telling the stories, the storyteller guides the listener through the journey. Figurative language and characters who represent ideas help to involve the listener and keep them engaged. The moral comes through naturally by the end as a result of the listener following the internal logic of the narrative, so the storyteller’s intended meaning does not need to be inculcated extrinsically.

In this way, indirect stories use symbolic language to draw in listeners. We, listening to the story, relate to the figures and characters because they allow us to involve ourselves. In turn, because we get invested, we take parts of the story and make them about ourselves. We care about what happens and empathize with the characters because we care about ourselves. This is what I actually mean when I say fables work well because of their indirection. We’re not actually interested in grasshoppers and ants, tortoises and hares, but we are interested in representations of our own values, our setbacks, and our triumphs. We put parts of ourselves into the story and get something else back.

And this is why I believe fables and parables have such staying power. Mythologies endure for similar reasons: their pantheons, even if filled with otherworldly gods or spirits, explain the ordinary—the sun, the night, the ocean, the sky—and embody our own better and worse natures—love, anger, and so on. In these myths we see ourselves, our friends, our enemies, and our histories.

So highly figurative language involves the audience in a way that literal language does not. But how do we come to write like this in the first place?

Fables, parables, myths, allegories—they all use symbols that have endured over the centuries and been recapitulated in various ways, but in one way or another, they’re told using a long-lived cultural language. When we tell stories, when we write, we use the basic words of our native language, and with those come the building blocks of our cultural language as well. It is as difficult to avoid using these as it is to write a novel without the letter “e.”5

We may not often think about the kinds of cultural language we use because we’re unaware of where it comes from. This is one of the primary goals of studying literature: to learn about the breadth of prior influences so we can understand our “cultural language” (I am not sure what a better word for this is, but I am sure there is one). Even when we don’t intend to dwell on influences and allusions, we write with the language of symbols that surrounds us.

What’s interesting to consider is what we’re saying without always thinking about it. Just as we grew up with stories that drew us in using powerful symbolic language, we imbue our original stories with ourselves, using similar symbols.

I’ve realized that different writers tend to perseverate on different kinds of questions and beliefs as their experiences allow, and these emerge as common themes in their writing, the same way certain stock characters persist in an author’s repertoire. If, for example, I find myself primarily concerned with questions of faith, my stories may spontaneously concenter themselves around themes of faith, through no real intentional process. In the process, I might even embed symbols which convey the theme without meaning it (for example, religious trappings such as lambs, crosses, or even clergy).

As part of my planning and editing process, I have come to identify themes and symbols which are either inherent to the story itself or accidentally embedded by my execution. Once I understand them, I decide whether to keep them and how to refine and harmonize them. For example, if I do have religious symbols within a story, there are unavoidable allusions this implies, and I have to work through how to harmonize those with my story or cut them out. As another example, if I have a character who is alone on a desert island, the themes of isolation and survival are both inherent parts of the story structure which cannot be avoided and will be addressed in some way or another. If I write about political conflict, then cooperation-versus-competition is lurking behind nearly every character’s motivation.

In a practical sense, how do I develop themes and work in symbols? Generally, editing first occurs at a larger scale and then moves to a smaller scale, so I tend to think in similar terms with themes. I identify whether broader themes already exist and ask myself if they carry through the entire narrative. If there is an inchoate message buried in the subtext that I didn’t intend to put there, I should decide if it belongs or not. If I want to keep it, then I need to clarify what it is and how it works as a theme.

I examine the narrative through the point of view of this theme and see which elements fit and which don’t. Then I see how I can adapt the narrative to better fit the theme—a process I actually love because it often burnishes rough narrative ideas.

To give an idea of what I mean, I’ve been writing a story whose central theme has to do with disintegrity of mind and body—the feeling of being not only under a microscope but flayed open at the same time. I began with a premise that didn’t really involve this theme, but I synthesized ideas from elsewhere that really pushed the story in that direction. When I began considering reworking the style and ending, I realized I needed more narrative disorientation and ambiguity to convey the protagonist’s feeling of disintegrity. The changes I had to make involved researching certain plays and removing dialogue tags to make it uncertain who’s speaking (implying that the protagonist could be speaking when she believes her interrogator to be speaking).

Before I go on, I also ask myself what the theme means for me. If I were making a statement about the theme, what would I want to say? More to the point, what does the story have to say? Sometimes, there are even multiple themes in conflict—hints at determinism here, others at fatalism there—so that the overall picture gets confused. The most important thing is that the narrative contains an internal logic that works at every scale, from the broadest thematic sense to the word-by-word meaning. I consider—at least at some point, even if it’s only after a draft is done—what the overall story is saying and then ensure no smaller element of the story contradicts it.

After I’ve made some large-scale changes, there may be smaller narrative gaps to fill, and I find I can also add certain ornamentations to settings or characters based on theme. This is where I can use the language of symbolism. I try to be somewhat coy. Like I said, indirect, allegorical language allows for stories that are more interesting because they’re more relatable and let the reader insert themselves. The illusion falls apart if the allegory is naked and obvious.

I don’t mean that I necessarily want to make symbols which are obscure allusions, either. I personally like symbols which have a logic within the narrative. I believe it’s possible both ways. Lord of the Flies is an example of a highly allegorical novel which uses symbols this way. The conch shell symbolizes civilization because it’s used as a rallying point for the boys who remain faithful to the rules. Golding embeds its symbolism completely within the narrative logic—expressed in terms of the story—and the idea it represents erodes as the physical item is neglected and then destroyed.

Sometimes I’m not working with a story but just a premise, and it’s one to which many themes could attach. I could choose a theme, and that choice would influence the direction in which I take the premise. A lot of the ideas I decide to develop end up being conglomerations of ideas, and I’m never quite sure which ones should go together. Themes can sometimes be links which join disparate ideas into a framework, allowing me to decide what to synthesize and how. This way, a premise and a theme together determine how a story grows and what characters, settings, and events I place into it.

It may seem like a lot of effort to run through this exercise for a story which is purely fanciful entertainment, which sets out not to say anything in the first place. Not everyone sets out to write an allegory. However, like I said, I think to some extent it’s not possible to avoid planting some subtextual themes because we all speak with a shared cultural language. My goal is to consider what I say between the lines and harmonize that thematic content. Hopefully, I end up with a story with a wider meaning running through it, giving it some backbone. I never set out to make a moral—maybe at most a statement?—but I do at least try to structure my narrative ideas to make the most impact.

I am extraordinarily grateful to Zuzu O. for the time and care she put into editing this post.

Removing: PGP Key and Content Licensing

I have removed two pages from my website—my PGP key and my content license.

PGP Key

I removed my public PGP key because I no longer intend to use it for signing messages. My same key remains on Keybase and other public key servers, but I no longer sign outgoing mail, nor do I intend to use my key regularly in any way.

I don’t feel that my key has been compromised. However, it does me little good to keep using it, and most encryption in actual use in my daily life doesn’t involve PGP. In the wake of a mostly minor vulnerability called EFAIL earlier this month—which didn’t impact me—I found myself persuaded of the ultimate futility of keeping up with PGP by an op-ed on Ars Technica from a couple of years ago.

Content Licensing

I removed my CC BY-NC 4.0 license notice page from my site. This post hereby serves as notice that from this date forward, I no longer license my existing or new content (writing, photos, videos, or audio) under a CC license, and so all that content falls back to the default copyright of the relevant jurisdiction.

Any works that have already been used under the CC license were licensed irrevocably, so I have no ability to revoke those grants. They remain licensed until the relevant rights lapse under the copyright laws of those jurisdictions.

If anyone wants to use some of my content, they are certainly welcome. The removal of the license implies only one practical change—you must ask permission. That’s all.

Math, She Rote

My friends often have different educational backgrounds than mine. Some of them are younger, but even when they aren’t, they’re often from urban areas that had moved to more modern educational curricula before my school system did. The way I learned basic arithmetic in the late 1980s and early 1990s was unchanged from how it had been taught in the early 1980s, because that’s when our books dated from.

I learned during an interesting period in the history of mathematics education. It represented a kind of educational interbellum—a bit after the “New Math” of the 1960s and 1970s6 but before the “math wars” instigated by the 1989 Curriculum and Evaluation Standards for School Mathematics. The approach promoted by that 1989 publication has been called “reform mathematics,” and it emphasizes processes and concepts over correctness and manual thinking. In other words, the educators promoting reform mathematics began to believe that the path students took toward the answer mattered more than whether they got the answer right. Many states’ standards and federally funded textbooks followed reform mathematics in the 1990s and beyond.

Reform mathematics emphasized constructivist teaching methods. Under this approach, instead of prescribing to students the best way to solve a problem, teachers pose a problem and allow the student to surmount it by building on their own knowledge, experiences, perspective, and agency. The teacher provides tools and guidance to help the student along the way. Constructivist approaches involve experiments, discussions, trips, films, and hands-on experiences.

One example of a constructivist-influenced math curriculum, used in elementary school to teach basic arithmetic, was known as Investigations in Number, Data, and Space. It came with a heavy emphasis on learning devices called manipulatives: tactile objects which the student can physically see, touch, and move to solve problems. These are items like cubes, spinners, tapes, rulers, weights, and so on.

As another example, someone I know recently described a system they learned in elementary school called TouchMath for adding one-digit numbers, which makes the experience more visual or tactile (analogous to manipulatives). They explained that for each computation, they counted the “TouchPoints” in the operands to arrive at the result.

I had never heard of TouchMath. In fact, I never solved problems using manipulatives, nor any analogue of them. I had little experience with this form of math education. We were given explicit instructions on traditional ways to solve problems (carrying, long division, and so on). Accompanying drawings or diagrams rarely became more elaborate than number lines, grids, or arrangements of abstract groupings of shapes which could be counted. They served only as tools to allow students to internalize the lesson, not to draw their own independent methods or conclusions.

I contrasted my friend’s experience with TouchMath to my own. To add or subtract one-digit numbers, we merely counted. We were given worksheets full of these problems, and since counting through each one would have been tedious and impractical, memorizing each combination of numbers became inevitable. Given the expectations and time constraints, I’m certain rote memorization was the goal.

In a couple of years, we were multiplying and dividing, and we were adding and subtracting two- or three-digit numbers using carrying—processing the numbers digit-wise. At the same time, we were asked to commit the multiplication tables to memory. These expectations came in third grade, and it would have been nearly impossible to make it out of fourth grade (age ten for me) without committing the multiplication table and all single-digit addition and subtraction to memory.


Our teachers did not bother to force us to memorize any two-digit arithmetic operations. But I recall, from years ago, my grandma telling me she still had most two-digit additions and subtractions memorized. It was just an offhand remark—maybe something she said as I was reaching for a calculator for something she had already figured out. Maybe we were playing Scrabble.

For context, she would have gone to school in rural Georgia in the 1940s and 1950s, and she graduated high school. (In that time and place, it was commonplace for many who intended to do manual, trade, or agricultural work not to continue through secondary school.)

I remember feeling incredulous at the time about how many two-digit arithmetic facts that would imply memorizing. Of course, many would be trivial (anything plus or minus ten or one, or anything minus itself); others would be commonplace enough to memorize easily, while still others would be rare enough to ignore. But that still leaves several thousand figures to remember.
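Out of curiosity, I later did a back-of-the-envelope count. This is only my own rough estimate, assuming “two-digit” means both operands run from 10 to 99 and discounting just the trivial cases:

```python
# Rough count of non-trivial two-digit arithmetic facts.
# "Trivial" follows the cases above: adding or subtracting ten, and
# subtracting a number from itself. (Plus or minus one involves a
# one-digit operand, so it never appears in this range at all.)

def trivial_add(a, b):
    return 10 in (a, b)

def trivial_sub(a, b):
    return b == 10 or a == b

# Additions: count unordered pairs once, since a + b and b + a are one fact.
additions = sum(1 for a in range(10, 100)
                  for b in range(a, 100)
                  if not trivial_add(a, b))

# Subtractions: ordered pairs with a >= b, so the difference is non-negative.
subtractions = sum(1 for a in range(10, 100)
                     for b in range(10, a + 1)
                     if not trivial_sub(a, b))

print(additions, subtractions, additions + subtractions)
# prints: 4005 3916 7921
```

Even before setting aside the commonplace and rare facts, that’s nearly eight thousand of them, so “several thousand” is, if anything, conservative.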

The more I thought about it, the more I saw that, in her world, it would make better sense to memorize literally thousands of things rather than work them out over and over. She had no way of knowing that affordable, handheld calculators would exist in a few decades after she graduated from school, after all. Each time she memorized a two-digit addition or subtraction, she saved herself from working out the problem from scratch over and over again for the rest of her life. This saved her effort and time every time she

  • balanced her checkbook,
  • filled out a deposit slip at the bank,
  • calculated the tax or tip on something,
  • tallied up the score for her card game,
  • totaled up a balance sheet for her business,
  • made change for a customer, or
  • checked that the change she was being given was correct,

to say nothing of all the hundred little things I can’t think of. She married young and has run small businesses for supplemental income all her life, so managing the purse strings fell squarely into her traditional gender role. Numbers were part of her daily life.

So for the first half of her life, none of this could be automated. There were no portable machines to do the job, and even the non-portable ones were expensive, loud, slow, and needed to be double-checked by hand.7

I don’t believe she remembered these all at once for a test, the way I learned the multiplication tables in third grade. It seems likely she memorized them over time. It’s possible that expectations in her school forced a lot of memorization that I didn’t experience when I went many decades later, but maybe she was just extra studious.


I recall, as I went through school, having to rely more on a calculator as I approached advanced subjects. Before calculators became available to students, appendices of lookup tables contained pre-calculated values for many logarithms, trigonometric functions, radicals, and so on. Students relied on these to solve many problems. Anything else—even if it were just the square root of a number—came from a pen-and-paper calculation. (Many of my early math books did not acknowledge calculators yet, but this changed by the 1990s.)

Charles Babbage reported that he was inspired to attempt to mechanize computation when he observed the fallibility of making tables of logarithms by hand. He began in the 1820s. A hundred and fifty years later, arithmetic computation would become handheld and affordable, fomenting new tension around what role rote memorization plays in both learning and daily life.

Today, we’re still trying to resolve that tension. Memorization may feel like it has a diminished role in a post-reform education environment, but it’s by no means dead. Current U.S. Common Core State Standards include expectations that students “[b]y end of Grade 2, know from memory all sums of two one-digit numbers,” and, “[b]y the end of Grade 3, know from memory all products of two one-digit numbers.” That sounds exactly like the pre-reform expectations I had to meet.

All this means is that there has been neither a steady march away from rote memorization nor a retreat back to it. Research is still unclear about what facts are best memorized, when, or how, and so there’s no obvious curriculum that fits all students at all ages. For example, the Common Core Standards cite contributing research from a paper8 which reports on findings from California, concluding that students are counting more than memorizing when pushed to memorize arithmetic facts earlier. The paper reasons this is probably due to deficiencies in the particulars of the curriculum at the time of the research (2008).


I’m not an expert, and I don’t have easy answers, but my instinct is that rote memorization will always play an inextricable role in math education.

Having learned about the different directions in which the traditional and reform movements of math education have tugged the standards over the years, I tend to lean more traditional, but I attribute this to two things. One is that I was educated with what I remember to be a more traditional-math background, and though I didn’t like it, it seems serviceable to me in retrospect.

The other reason is that, for me, memorization has always come easily. I don’t really know why this is. It’s just some automatic way I experience the world. Having this point of view, though, I can easily see how beneficial it is to have answers to a set of frequent problems ready at hand. It’s efficient, and its benefits keep paying off over time. The earlier you remember something, the more it helps you, and the better you internalize it. Even for those who can’t remember things as easily, the returns on doing so are just as useful.

I do completely agree with the underlying rationale of the constructivist approach. Its underpinnings are based on Piaget’s model of cognitive development, which is incredibly insightful. It seems useful to learn early how to accommodate the discomfort of adapting your internal mental model to new information, and to take an active role in learning new ideas in order to surmount new problems.

I don’t necessarily believe that a constructivist learning approach is intrinsically at odds with rote memorization—that is to say, that memorization necessarily requires passive acquisition. In fact, active experimentation and an active role in learning may help form stronger memories. It’s more likely that the two approaches compete for time in curricula. It takes longer to derive a formula for area or volume by independent invention, for example, than to have it given to you.

Moreover, constructivist learning works better when the student already has a broad reservoir of knowledge from which to draw when trying to find novel solutions to problems. In other words, rote memorization aids constructivist learning, which in turn aids remembering new information.

My feeling is that math will always require a traditional approach at its very heart to set in place a broad foundation of facts, at least at first, before other learning approaches can have success. Though the idea of critical periods in language acquisition has detractors and heavy criticism, there is a kernel of truth to the idea that younger minds undergo a period of innate and intense linguistic fecundity. Maybe as time goes by, we can learn more about math acquisition and find out which kinds of math learning children are more receptive to at which ages. Until then, I feel like we’re figuring out the best way to teach ourselves a second language.

I am grateful to Rachel Kelly for her feedback on a draft of this post.

Privacy Policy Updates: Data Storage

I updated WordPress today to version 4.9.6. I noticed this version comes with support for implementing privacy policies throughout the site. I seem to have been ahead of the curve in implementing my own, but when the GDPR comes into effect in the EU this month, it will clarify and simplify data privacy for much of Europe. This implies enforcement will become a more direct matter as well. Any web service that is accessible from Europe and does business there has now updated its privacy policy to ensure it complies with the GDPR—which is why everyone has gotten a raft of privacy policy updates.

Most of these privacy policy updates pertain to what rights customers or users have to their own data. Often, they grant new rights or clarify existing rights. This week’s new version of WordPress is yet another GDPR accommodation.

Today, I have to announce my own GDPR update. Yes, I’m just a tiny website no one reads, and I provide no actual services. But having already committed to a privacy policy, which I promised to keep up to date (and announce those changes), I’m here to make another update.

One nice thing that came with the WordPress update is a raft of suggestions on a good privacy policy (and on the ways WordPress and its plugins may cause privacy concerns). I found that I had covered most of them, but one thing I needed to revisit was a piece of functionality in Wordfence.

I use Wordfence for security: It monitors malware probes and uses some static blacklists of known bad actors. It also, by default, sends cookies to browsers in order to track which users are recurring ones or which users are automated clients. The tracking consisted only of an anonymous, unique token which distinguished visitors from one another. Unfortunately, this functionality had no opt-out and did not respect Do Not Track.

Although my tracking was only for security purposes—not for advertising—and although it did not store any personal information, nor did I share it with anyone else, I realized I would have to disable it.

I had made explicit mention of this tracking in my previous revision of my privacy policy:

I run an extra plugin for security which tracks visits in the database for the website, but these are, again, stored locally, and no one has access to these.

This is unfortunately vaguer than it should have been, since it doesn’t mention cookies. It also makes no provision for consent. It merely states the consequences of visiting my site.

The GDPR makes it clear that all tracking techniques (and specifically cookies) require prior consent. Again, I’m not a company, and I don’t provide any service. I’m not even hosted in the EU’s jurisdiction. My goal, though, is to exist as harmoniously with my visitors as possible, whoever they may be, and have the lightest possible touch.

So I’ve disabled Wordfence’s cookie tracking. I’ve added a couple of points to my privacy policy which clarify more precisely which data is logged and under which circumstances cookies may be sent to the browser.

This interferes with my analytics, unfortunately—it’s no longer possible to be sure which visitors are human. I think it’s worth it, regardless.

I also made a couple of other changes based on WordPress’s suggestions. I moved a few bullet points around so that related points sit closer together. I also added a point which specifies which URL my site uses (meaning the policy would be void if viewed in an archived format, within a frame, or copied elsewhere).

How Transgender Children Are Made and Unmade

Note that the following post discusses the sensitive topic of conversion therapy for transgender children, along with mentions of outmoded terminology and psychodynamic models, ethically questionable studies and treatment practices, and links to some sources which may misgender or mislabel transgender people.

I have also added some clarifications to my final points on 12 May 2018.

Today, a friend pointed me to a news article out of the UK covering a new study by Newhook et al. released in the International Journal of Transgenderism. The study was published a couple of weeks ago and criticizes a handful of studies from the last decade which bolster a myth that the vast majority (more than 80%) of children who have presented as transgender have since “desisted” (reverted to being cisgender9) as adolescents or adults. Those studies, all released since 2008, analyze children who were originally seen between 1970 and the 2000s.

Those recent desistance10 studies might hint at a couple of interpretations of transgender children who desist. The most neutral one is that such children were “going through a phase,” playing out the vagaries of youthful whims and later changing their minds. However, these studies also permit a more sinister interpretation—one in which children were subject to external influences that “confused” them about their gender, a confusion which time and therapy later allowed them to outgrow and reject.

It stands to reason that, because each child included in the original studies had contact with researchers, they were likely seeking treatment which included therapy, which might seem to support the latter interpretation. The standard of care for whichever diagnosis they received—which would have varied by location and time; more on this below—would possibly have focused, in fact, on influencing the child away from transgender or homosexual behaviors. Many research studies and forms of treatment, especially in earlier years, would have taken the form of conversion therapy. That also creates interpretive concerns with the original studies: the treatment affects the very outcome being measured. (This is referenced below as well.)


First, I want to briefly discuss the flaws in the desistance studies so that we can begin to erode the desistance myth. The news article above sums up the critique introduced by the new study quite well.

The ‘desistance’ figure come from studies conducted between the 1970s and the 2000s in the Netherlands and Canada, which assessed whether the kids that sought services at the gender clinic turned out to be trans as adults. The new publication concludes that the figure included all kids that were brought to the clinic, many of who never experienced gender dysphoria in the first place nor saw themselves as trans. Kids that shouldn’t have been a part of the figure were therefore being used to ramp up the numbers.

The news article elaborates that, not only is there uncertainty in how many children should have been counted as transgender in the first place, the earlier studies make blanket assumptions as to what happened to those children afterward.

Another flaw is that in the follow up, all participants that weren’t included for whatever reason were simply brushed off as ‘desisters’. This was done without having any factual evidence or knowledge about the children involved.

In what should have been simple division, the numbers on both sides of the division sign have become suspect. Now the question becomes, do we have the actual figures? Here’s where the real problems start. We need to delve into the primary source, the Newhook et al. study, itself.
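To make the arithmetic concrete, here is a toy calculation with purely hypothetical numbers (none of these figures come from the studies themselves). It shows how an over-broad denominator, plus counting everyone lost to follow-up as a desister, inflates the reported rate:

```python
# Toy illustration with made-up numbers -- not data from any study.
# Suppose a clinic cohort of 100 children referred for gender nonconformity.
cohort = 100

# Suppose only 60 would meet current (DSM-5-style) criteria for gender
# dysphoria; the other 40 were gender-nonconforming but never transgender.
actually_trans = 60

# Suppose 20 identified as transgender at follow-up, and 30 were lost to
# follow-up. The criticized method counts everyone who wasn't confirmed as
# persisting as a "desister" and divides by the whole cohort:
persisted = 20
lost_to_followup = 30

naive_desistance = (cohort - persisted) / cohort
print(f"naive rate: {naive_desistance:.0%}")  # -> naive rate: 80%

# Restricting the denominator to children who plausibly met criteria, and
# excluding both the unknowns and the never-transgender children from the
# numerator, gives a very different picture:
known_desisted = cohort - persisted - lost_to_followup - 40  # = 10
adjusted = known_desisted / actually_trans
print(f"adjusted rate: {adjusted:.0%}")  # -> adjusted rate: 17%
```

The point is not that 17% is the true figure—with these hypothetical inputs it is just as made up as 80%—but that the answer swings wildly depending on who is counted on each side of the division sign.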


The study is called “A critical commentary on follow-up studies and ‘desistance’ theories about transgender and gender-nonconforming children,” authored by Newhook et al.11 It contains a methodological meta-analysis of four previous studies. As it states in its introduction,

In the media, among the lay public, and in medical and scientific journals, it has been widely suggested that over 80% of transgender children will come to identify as cisgender once they reach adolescence or early adulthood. This statement largely draws on estimates from four follow-up studies conducted with samples of gender-nonconforming children in one of two clinics in Canada or the Netherlands (Drummond, Bradley, Peterson-Badali, & Zucker, 2008; Steensma, Biemond, de Boer, & Cohen-Kettenis, 2011; Steensma, McGuire, Kreukels, Beekman, & Cohen-Kettenis, 2013; Wallien & Cohen-Kettenis, 2008).

The critiques in the Newhook et al. study aren’t new, and the authors take pains to mention some of their forebears in their introduction as well. They contextualize their new study by explaining that they hope to guide the upcoming eighth version of the WPATH12 standards of care, which will determine how transgender children are treated for years to come.

Newhook et al. mention older follow-up studies of gender-nonconforming children from before 2008, but the authors explain those studies are tainted by methodological and sampling problems. They are also likely irrelevant here, since they did not contribute to the 80% figure. So the authors skip these earlier studies in their meta-analysis.

We recognize that numerous follow-up studies of gender-nonconforming children have been reported since the mid-20th century (e.g., Green, 1987; Money & Russo, 1979; Zucker & Bradley, 1995; Zuger, 1984). In that era, most research in the domain focused on feminine expression among children assigned male at birth, with the implicit or explicit objective of preventing homosexuality or transsexualism.

(I’d like to draw your attention for a moment to the fact that Kenneth Zucker was an author both in the 1995 study above and in the later 2008 study mentioned earlier. We’ll return to him later.)

Now, the Newhook et al. critical commentary notes that the four desistance studies arrive at a figure of over 80% desistance. Then it begins to catalog what these studies can and cannot tell us. The methodological concerns center on what we can and can’t know, given what information was collected at the time and afterward.

What I found was that, because the studies drew on children seen from 1970 onward, the shifting basis for diagnosis itself seeded the mis-categorization—both when children were first categorized as transgender and again at follow-up.

Back in 1970, no formal diagnosis for gender identity disorder or gender dysphoria existed. Doctors and researchers had only informal descriptions. As Newhook et al. explain,

However, the plain-language meaning of gender dysphoria, as distress regarding incongruent physical sex characteristics or ascribed social gender roles, has been established since the 1970s (Fisk, 1973). When these four studies refer to gender dysphoria, they are referring to this plain-language context of distress, and not the newer DSM-5 diagnostic category.

The DSM-III13 would not exist until 1980, so the meanings applied here may vary from person to person, as experience and prejudice allow. I do not know all the criteria which were applied. (I have been unable to locate the Fisk source, but he appears to be the originator of the term “gender dysphoria.”)

Then, in the 80s and 90s, the DSM-III, DSM-III-R, DSM-IV, and DSM-IV-TR each included a “gender identity disorder” diagnosis which came with a “GID/Children Transsexualism” or “gender identity disorder in children” category. The symptomatology of these was similar in general shape and included distress (a gender dysphoria component) but also certain behaviors (e.g., crossdressing), timeframes (e.g., six months), and so on. This is a definite case of moving the goalposts: the diagnostic criteria shifted, and in some ways became more lax. Diagnostic criteria often state that only a certain number of the listed components need be satisfied over a period of time, so even if every component but gender dysphoria is present, the diagnosis of gender identity disorder can still apply.

At the same time, the standards of care also were shifting, evolving through time to match the competing typologies14 and psychosexual models of the providers. Adults learned to conform to expectations (such as crossdressing for a year before receiving treatment or professing attraction to men where no such attraction existed).

Children who may not have been aware of these standards and criteria, acting on their needs and wants, might have very well fallen in and out of the categorizations changing around them. Through no fault of their own, the category of transgender might one day have landed upon a child and then another day slipped away from them.

The Newhook et al. study describes the problem this way:

Due to such shifting diagnostic categories and inclusion criteria over time, these studies included children who, by current DSM-5 standards, would not likely have been categorized as transgender (i.e., they would not meet the criteria for gender dysphoria) and therefore, it is not surprising that they would not identify as transgender at follow-up. Current criteria require identification with a gender other than what was assigned at birth, which was not a necessity in prior versions of the diagnosis. For example, in Drummond et al. (2008) study […] the sample consisted of many children diagnosed with GIDC, as defined in the DSM editions III, III-R, and IV (American Psychiatric Association, 1980, 1987, 1994). Yet the early GIDC category included a broad range of gender-nonconforming behaviors that children might display for a variety of reasons, and not necessarily because they identified as another gender. Evidence of the actual distress of gender dysphoria, defined as distress with physical sex characteristics or associated social gender roles (Fisk, 1973), was dropped as a requirement for GIDC diagnosis in the DSM-IV (American Psychiatric Association, 1994; Bradley et al., 1991). Moreover, it is often overlooked that 40% of the child participants did not even meet the then-current DSM-IV diagnostic criteria. The authors conceded: “…it is conceivable that the childhood criteria for GID may ‘scoop in’ girls who are at relatively low risk for adolescent/adult gender-dysphoria” and that “40% of the girls were not judged to have met the complete DSM criteria for GID at the time of childhood assessment… it could be argued that if some of the girls were subthreshold for GID in childhood, then one might assume that they would not be at risk for GID in adolescence or adulthood” (p. 42). 
By not distinguishing between gender-non-conforming and transgender subjects, there emerges a significant risk of inflation when reporting that a large proportion of “transgender” children had desisted. As noted by Ehrensaft (2016) and Winters (2014), those young people who did not show indications of identifying as transgender as children would consequently not be expected to identify as transgender later, and hence in much public use of this data there has been a troubling overestimation of desistance.

Because of the meaningful shifts in diagnostic criteria over the last fifty years, there’s little hope of reconstructing the true figures of desistance, such as they may be. We would need both detailed notes (interviews, etc.) from the original cohorts to attempt to assess the children’s self-reported identities15 and then those same cohorts’ adulthood identities, assessed the same way from follow-ups, to compare. I suspect the paucity of detailed qualitative data from the original studies would undermine such an effort, due to the primacy of researchers’ diagnoses over self-described experiences and identities.

In most studies, it appears we do not have such detailed notes available. Newhook et al. do cite Steensma et al. (2011)16 as having some unique qualitative research, but even those data are very limited—only two interviews are mentioned.


The Newhook et al. study also brings up many ethical concerns, and here I turn back to the problem of Zucker in particular. The authors identify three ethical concerns, of which the second is particularly insidious—the questionable goals of treatment itself.17

In describing their second concern, the authors write,

A second ethical concern is that many of the children in the Toronto studies (Drummond et al., 2008; Zucker & Bradley, 1995) were enrolled in a treatment program that sought to “lower the odds” that they would grow up to be transgender (Drescher & Pula, 2014; Zucker, Wood, Singh, & Bradley, 2012; Paterson, 2015). Zucker et al. (2012) wrote: “…in our clinic, treatment is recommended to reduce the likelihood of GID persistence” (p. 393).

As I write, Zucker’s words are only six years old. To be clear: he is both espousing and practicing conversion therapy of children.

Zucker is not a marginalized figure in the world of psychiatry. He is not only respected and accepted; he was the head of the “Sexual and Gender Identity Disorders” group that revised the DSM-5, appointed by the American Psychiatric Association. A heartbreaking account of his attempt at conversion therapy may be found in this NPR story (with some misgendering).

He was not the only person in the group to favor controversial theories, either. Blanchard (who favors an outmoded typology of transgender people based on sexual attraction and also attempts conversion therapy) and Lawrence (who has expressed the belief that transgender people have a kind of body integrity identity disorder) also formed part of the group.

Why do I mention their role in shaping the DSM-5? Well, they believe children should be dissuaded from transgender identities, which they regard as pathological or maladaptive. Under their influence in shaping the diagnostic criteria for children and adults, they moved the goalposts for fitting the model. That then allowed studies to tally up how past children fit current, different diagnostic criteria to determine that they have “desisted.” In turn, these fudged figures can be used to justify further conversion therapy, resist affirmative care models of treatment, and influence the WPATH standards of care to inhibit access to treatment and personal safety.

I therefore question whether—after influencing or directly authoring new diagnostic standards for gender dysphoria—advocates for conversion therapy then revisited older studies to make follow-ups, aware of how the results would skew toward their desired outcome: an interpretation of a seeming tendency toward desistance, which marks transgender identities as “unnatural” aberrations which only emerge later in life and which can be headed off earlier in childhood. Buried underneath this interpretation is an implicit assumption about how children form transgender identities due to extrinsic influences. They conclude that they can prescribe a model of care which essentially counteracts those influences with their own.

Wiser people than I have already explained why better models of care, such as the affirmative care model, practiced in most North American clinics, provide better outcomes.18 The news article I began with also concludes with some great sources on treatment outcomes, which I cannot possibly outdo, so I’ll leave you to revisit Owl’s article.

Denying children bodily autonomy and agency over their identity is a form of abuse. The long-lasting confusion may result in self-denial, withdrawal, self-harm, or even suicide later in life. Unlike many forms of abuse, which happen privately, transgender conversion therapy coopts institutions toward its own ends by shaping the standards of care for treatment (via its influence on the WPATH with influential studies) and by writing the diagnostic manual itself. The prevalent myth of desistance of childhood gender dysphoria has been a powerful tool used to abuse children. It must be dismantled. To do so, we must expose pernicious and specious studies, using critical meta-analysis such as Newhook et al.’s.

I am grateful to Zuzu O. for feedback on this post.

Privacy Policy Update: No Mining

I got a weird spam e-mail overnight asking if I wanted to embed someone’s cryptocurrency miner in my website. They purport to be opt-in only, but all the other examples I’ve read about online have been surreptitious, hijacking the browser for their own ends without asking. The end user only notices when their computer fans switch on or the machine gets too hot.

Such mining scripts have been strongly contentious on other websites. They exert excessive and unilateral control over the visitor’s system. I certainly had such things in mind when I promised never to embed ads and the like in my website, but I had never spelled out that I had no intention of hijacking the browser for my own ends (ad or not).

This morning, I added a new point to my privacy policy.

  • This website does not load software in the user agent (your browser) which serves any purpose beyond displaying the website and its assets—meaning it does not use your browser to mine cryptocurrency, for example.

Most of my privacy policy describes what the website does without mentioning the browser. This point adds a clear expectation for browsers which visit.

I generalized the point a bit to include things which aren’t just cryptocurrency miners. It might be tempting to grab a few of my users’ cycles for SETI@home or the like, for example, but if a user wants to contribute to a project like that, they can do so themselves. I’ll have to rely on persuasive words to bring people around to a cause like that.

The Apology Contract

A binding contract has three elements: offer, consideration, and acceptance—all of which must exist among mutually assenting parties. These elements, in some form or another, have existed since time immemorial. A contract of sale, for example, contains an offer (the good for sale at a price), the consideration (the money exchanged for the good), and the acceptance (the actual mutual agreement to exchange the good for the price).

Many of our social interactions implicitly follow a similar structure because they rely upon offering, considering, and accepting one another’s social cues in more-or-less formulaic ways. Some of these interactions are rigidly ritualistic—”thank you,” “you’re welcome”—and some are not (flirting, for example).

I have read several articles on the best way to apologize, with which I agree, and which address the person giving the apology with humility and sincere intent, acknowledging the harm done, and reducing further harm. (One such popular example was written by John Scalzi. Another good example aimed at children comes from a parenting blog.)

However, I have lately come to worry that the act of apology often still imposes a contract-like, ritualistic exchange. On receiving an apology, I have in the past found myself fighting every instinct in my body to assuage the apologizer who, having recognized their fault and promised in good faith to do better, awaits something like an absolution from me before moving on.

The formula for how we’re taught to apologize, as children, goes:

— I’m sorry.

— It’s okay.

I’ve tried withholding that second part of the exchange as I’ve gotten older. Sometimes I don’t feel okay. Sometimes it’s not okay. Maybe I need space or time to get there. Maybe I just want to move on without needing to perseverate on the feelings of the person who wronged me.

This is especially difficult for an in-person conversation. Without the expected words, “it’s okay,” or, “it’s fine,” in my mouth, what am I to say? I don’t necessarily want to prolong the moment, either. I often have an interest in moving past the moment, but I don’t have some alternative wording that isn’t focused on the feelings of the apologizer.

When I don’t automatically say, “it’s okay,” a loaded pause often seems to follow. The apologizer feels they have done everything right, and I haven’t followed through on my end of the apology. They wait for me to give them some way to get past the moment, and when I don’t offer that back, they also don’t know how to continue.

The ritual of the apology feels a lot like a social contract because we’re conditioned to treat it as such from a young age, to offer some comfort to someone who has apologized and meet them part way. However, this is no contract. The formula, like so many social rituals, instead imposes an expected response on the recipient. There’s not necessarily mutual assent.

What I have read about the best way to offer an apology sometimes, but not always, includes a final step I believe is extremely important: once given, expect nothing back. Any forgiveness, grace, or acceptance on the part of the recipient is a gift, not an exchange. Beyond that, you need not expect any response whatsoever, not even acknowledgement. The apology, for the one giving it, is both the understanding of harm and the promise not to further it. It is not a request.

What’s more, I can’t recall seeing anyone write for the person receiving the apology. I address you now: You owe nothing. Take comfort, if you can, that someone has seen how they have harmed you. Find peace, if you can, in the closure they offer. Exchange what you like, and repair the relationship if you want it. But your duty to them ended when the apologizer wronged you in the first place.

Adding a Privacy Policy

I’ve decided to give my website a privacy policy. It’s maybe more of a privacy promise.

It might sound strange to make a privacy policy for a website with which I don’t intend users to interact, but I’ve realized that even browsing news websites or social media has privacy implications for users who visit them. So I wanted to state what assurances users can have when visiting my website—and set a standard for myself to meet when I make modifications to my website.

Most of the points in it boil down to one thing—if you visit my site, that fact remains between you and my site. No one else will know—not Google, not Facebook, not your ISP, not the airplane WiFi you’re using, not some ad network.

I went to some trouble to make these assurances. For example, I had to create a WordPress child theme which prevents loading stylesheets associated with Google Fonts used by default. Then—since I still wanted to use some of those fonts—I needed to check the licensing on them, download them, convert them to a form I could host locally, and incorporate them into a stylesheet on my own server.

I also needed to audit the source code for all the WordPress plugins I use to see what requests they make, if any, to other parties (and I’ll have to repeat this process if I ever add a new plugin). This was more challenging than I realized.

I needed to ensure I had no malware present and that my website would remain free of malware. I began with WordPress’s hardening guide. I found a very thorough plugin for comparing file versions against known-good versions (Wordfence, which I found recommended in the hardening guide). I also made additional checks of file permissions, excised unused plugins, made sure all server software was up to date, and incorporated additional protections into the web server configuration to limit my attack surface.

Finally, I had to browse my website for a while using the developer tools built into my browser, both to see whether any requests went to a domain other than my own and to inspect what cookies, local storage, and session storage data were created. This turned up a plugin that brought in icons from a third-party site, which I had to replace.
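Part of this audit can be automated. Here is a small, stdlib-only sketch of the idea: parse a page’s HTML and list the external hosts its resources would load from. The sample markup and host names below are hypothetical stand-ins, and a real audit would still need to fetch every page of the site (and watch for requests made by scripts at runtime, which static parsing can’t see).

```python
# Sketch: find third-party hosts referenced by a page's resource tags.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ResourceAuditor(HTMLParser):
    """Collect hosts referenced by src/href attributes on resource tags."""
    RESOURCE_TAGS = {"img", "script", "link", "iframe", "source"}

    def __init__(self, own_host):
        super().__init__()
        self.own_host = own_host
        self.external_hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag not in self.RESOURCE_TAGS:
            return
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                # Relative URLs have no netloc and are same-origin.
                if host and host != self.own_host:
                    self.external_hosts.add(host)

# Hypothetical page fragment: one third-party stylesheet, one local image,
# one local script.
sample = """
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Lato">
<img src="/images/moon.jpg">
<script src="https://example.com/wp-content/themes/child/script.js"></script>
"""
auditor = ResourceAuditor("example.com")
auditor.feed(sample)
print(sorted(auditor.external_hosts))  # -> ['fonts.googleapis.com']
```

Anything this prints beyond an empty list is a lead worth chasing down—in the post’s case, the icon plugin would have shown up here.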

After all that, I feel sure I can make the assurances my privacy policy makes.

Beginning Astrophotography: Crescent Moonset

Computer-controlled, motorized telescope, dimly illuminated, aimed up and to the left at the distant crescent moon in the upper left, with a camera attached where the eyepiece should be at the back, using a baroque contraption.
Last night’s front-porch setup: Celestron NexStar 5 SE powered by a Celestron PowerTank Lithium and Sony α6300 camera connected using an E-mount camera adapter, aimed at the twilight moonset.

I had clear skies again last night, and I remembered to look for the Moon while it was slightly higher in the sky. I set my telescope up on the front porch shortly after sunset. The Moon presented an incandescent, imperceptibly fuller crescent facing the failing twilight.

Because it was higher, I had a better perspective, more time to take photos, and more time to check my settings, and my photos passed through less atmosphere (meaning less distortion). And because the crescent was fuller, I captured more detail in my photos.

Equipment

I always remember to spell out acquisition details in my astrophotography posts, but I’ve found people most often ask instead what equipment I use. I usually don’t list this in detail, both because I’ve usually already mentioned my equipment in earlier posts and because the exact equipment I use on a given night is partly a matter of convenience and whim, not meriting any particular recommendation or endorsement. My photos are within reach of equipment of all kinds and prices, given practice and technique, and the last thing I want to do is give someone the impression they need to spend over a thousand dollars to do what a two-hundred-dollar telescope and a smartphone can do.

However, I’m going to try to make an effort to name what equipment I use now and in the future just because it’s so commonly asked. Maybe I’ll need to reference it myself in the future, too. So last night, I used

  • a Celestron NexStar 5 SE telescope,
  • a Celestron PowerTank Lithium battery,
  • a Sony α6300 camera, and
  • an E-mount camera adapter.

Those are the only four pieces of hardware I used last night.

Technique

I aligned the telescope on the Moon, which let it track roughly. This meant it needed periodic corrections to keep it from drifting out of view (once every several minutes). I concentrated on keeping the extents of the arc within the viewfinder.

View of the LCD display of the camera, zoomed in on a fuzzy section of the Moon, showing bright and dark sections divided by a diagonal line.
Using the Focus Magnifier on the Sony α6300, concentrated on a section of the Moon near its terminator to fine-tune focus.

Once the Moon was centered and roughly focused, I used a feature on my camera called the “Focus Magnifier” to fine-tune the focus. I’ve found this to be indispensable. Using this feature, I zoom in to a close-up view of some section of what the camera sensor is seeing. This way, I can make fine adjustments to the telescope’s focus until I get the best clarity available. I can also get a good idea what kind of seeing I’ll encounter that night—whether the sky will shimmer a lot or remain still. I was lucky last night to find good focus and good seeing.

Once focus is good, it can be left alone. I ensure that the adapter is locked tightly in place so that nothing moves or settles, keeping the focal point cleanly locked on infinity.

Then I turned the ISO up—doubled it. The Moon is a bright object, so I was not keen to use a setting I would reserve for a dark site, and I settled on ISO 1600. My goal was to reach a shutter speed of 1/100 second without losing the picture to noise or dimness, which I did. A higher ISO works great at a dark site, but the Moon has a wide dynamic range, so I felt I had less headroom. In any case, I used a 1/100-second exposure at ISO 1600 for all my photos.

I recorded a short 4K video before I began so I could capture the seeing conditions that night. I recommend viewing it fullscreen, or it will look like a still photo—the sky was as placid as a pond last night.

After taking the video, I realigned the telescope slightly and, using my remote controller so that I could quickly actuate it without shaking the telescope, I took 319 photos, occasionally realigning to correct for drift.

Unfortunately, Venus and Mercury had already sunk too low to get a glimpse, so I packed it up and went inside.

Processing

I moved all the photos, in RAW format, to my computer from the camera. Then I converted them all to TIFF format. These two steps took probably something like an hour and resulted in seven and a half gigabytes of data.

Screenshot of a Windows program called PIPP, using two windows, one showing the Moon highlighted in blood red, the other with several progress bars and a list of files.
Screenshot of PIPP in use, aligning the photos of the Moon and sorting them by brightness.

Because the Moon drifted, due to the rough tracking, the photos needed to be pre-aligned. I used a piece of software called PIPP for that. Without this pre-alignment step, the tracking and alignment built into my stacking software struggled mightily with the photos and created a mess.

Its output was another series of TIFF photos. I found afterwards that two of the photos were significantly overexposed, leaving many details blown out, so I excluded them from the rest of the process, leaving me with 317 photos.

Screenshot of AutoStakkert!3, a baroque program consisting of two windows, one with a large preview of the moon, the other with lots of graphs and buttons and inputs and multiple progress bars. It is 20% through "MAP Analysis."
Screenshot of AutoStakkert!3 stacking the best 50% of the Moon photos I took into a single image.

I opened these 317 photos in AutoStakkert!3 beta. After initial quality analysis, I used the program to align and stack the best 50% of the images (by its determination). This took a bit less than ten minutes and left me with a single TIFF photo as output.

Image stacking leaves behind an intermediate product when it’s complete, which is what this TIFF photo is. It’s blurry, containing an average of the 157 photos composited into it. However, the blur in this photo can be mathematically refined using special filters.19 I used a program called Astra Image to apply this further processing. In particular, I used a feature it calls “wavelet sharpening” (which can be found in other programs) to reduce the blurring. I also applied an unsharp mask and de-noising.
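The select-and-average core of this “lucky imaging” process can be sketched in a few lines. This is a toy, stdlib-only illustration with tiny made-up frames, not what AutoStakkert! actually does—real stackers also align frames and score sharpness per local patch, and the wavelet sharpening happens afterward on the stacked result:

```python
# Toy sketch of lucky-imaging stacking: score each frame's sharpness,
# keep the best fraction, and average them pixel by pixel.
# Frames are lists of rows of brightness values.

def sharpness(frame):
    """Crude sharpness score: total squared horizontal gradient.
    Crisp edges produce large gradients; blur spreads them out."""
    return sum(
        (row[x + 1] - row[x]) ** 2
        for row in frame
        for x in range(len(row) - 1)
    )

def stack_best(frames, keep_fraction=0.5):
    """Average the sharpest `keep_fraction` of the frames."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    best = ranked[: max(1, int(len(ranked) * keep_fraction))]
    height, width = len(best[0]), len(best[0][0])
    return [
        [sum(f[y][x] for f in best) / len(best) for x in range(width)]
        for y in range(height)
    ]

# Two tiny 1x3 "frames" of the same edge: one crisp, one smeared by seeing.
crisp = [[0, 0, 10]]
blurred = [[0, 5, 10]]
stacked = stack_best([crisp, blurred], keep_fraction=0.5)
print(stacked)  # -> [[0.0, 0.0, 10.0]] (only the crisp frame survives)
```

Averaging many selected frames beats down random sensor noise while the selection step discards the moments when the atmosphere smeared the image—which is why the intermediate stack is smooth but soft, and benefits from sharpening afterward.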

Finally, I used Apple Photos to flip the resulting photo vertically (to undo the inversion which the telescope causes) and tweak the contrast and colors.

The Photo

Photo of crescent moon, slender, curving down like a bowl but askew to the right. Terminator stops just short of the Mare Crisium.
Crescent moonset taken at about 8 p.m. PDT on 20 Mar 18. Composited from the highest-quality 50% of a set of 317 photos.

Click to view the photo in fullscreen if you can. There’s a lot of detail. The terminator of the lunar surface stops just short of the Mare Crisium (the Sea of Crises), the round, smooth basalt surface right about the middle of the crescent.

I can’t help but compare this one to the photo from the night before: what a difference a day makes. I had more time to work, more photos to take, and the benefit of yesterday’s experience to help improve.

Now it’s clouded over here again—Portland weather—and I can’t practice anymore for a while.