Math, She Rote

My friends often have educational backgrounds different from mine. Some of them are younger, but even when they aren’t, they’re often from urban areas that moved to more modern educational curricula before my school system did. The way I learned basic arithmetic in the late 1980s and early 1990s was unchanged from how it had been taught in the early 1980s, because that’s when our books dated from.

I learned during an interesting period in the history of math education. It represented a kind of educational interbellum—a bit after the “New Math” of the 1960s and 1970s but before the “math wars” instigated by the 1989 Curriculum and Evaluation Standards for School Mathematics. The approach promoted by that 1989 publication has been called “reform mathematics,” and it emphasizes processes and concepts over correctness and manual computation. In other words, the educators promoting reform mathematics began to believe that the path students took toward the answer mattered more than whether they got the answer right. Many states’ standards and federally funded textbooks followed reform mathematics in the 1990s and beyond.

Reform mathematics emphasized constructivist teaching methods. Under this approach, instead of prescribing to students the best way to solve a problem, teachers pose a problem and allow the student to surmount it by building on their own knowledge, experiences, perspective, and agency. The teacher provides tools and guidance to help the student along the way. Constructivist approaches involve experiments, discussions, trips, films, and hands-on experiences.

One example of a constructivist-influenced math curriculum, used in elementary school to teach basic arithmetic, was known as Investigations in Number, Data, and Space. It came with a heavy emphasis on learning devices called manipulatives: tactile objects which the student can physically see, touch, and move to solve problems. These are items like cubes, spinners, tapes, rulers, weights, and so on.

As another example, someone I know recently described a system they learned in elementary school called TouchMath for adding one-digit numbers, which makes the experience more visual or tactile (analogous to manipulatives). They explained that for each computation, they counted the “TouchPoints” in the operands to arrive at the result.

I had never heard of TouchMath. In fact, I never solved problems using manipulatives, nor any analogue of them; I had little experience with this form of math education. We were given explicit instructions on traditional ways to solve problems (carrying, long division, and so on). Accompanying drawings or diagrams rarely became more elaborate than number lines, grids, or arrangements of abstract groupings of shapes which could be counted. They served only as tools to help students internalize the lesson, not as invitations to draw their own independent methods or conclusions.

I contrasted my friend’s experience with TouchMath to my own. To add or subtract one-digit numbers, we merely counted. We were given worksheets full of these problems to do, and since counting for each one would have been tedious and impractical, memorizing each combination of numbers became inevitable. Given the expectations and time constraints, I’m certain rote memorization was the goal.

Within a couple of years, we were multiplying and dividing, and we were adding and subtracting two- or three-digit numbers using carrying—processing the numbers digit-wise. At the same time, we were asked to commit the multiplication tables to memory. These expectations came in third grade, and it would have been nearly impossible to make it out of fourth grade (age ten for me) without committing the multiplication table and all single-digit addition and subtraction to memory.


Our teachers did not bother to force us to memorize any two-digit arithmetic operations. But I recall, many years ago, my grandma telling me she still had most two-digit additions and subtractions memorized. It was just an offhand remark—maybe something she said as I was reaching for a calculator for something she had already figured out. Maybe we were playing Scrabble.

For context, she would have gone to school in rural Georgia in the 1940s and 1950s, and she graduated high school. (In that time and place, it was commonplace for many who intended to do manual, trade, or agricultural work not to continue through secondary school.)

I remember feeling incredulous at the time about how many two-digit arithmetic facts that would imply memorizing. Of course, many would be trivial (anything plus or minus ten or one, or anything minus itself); others would be commonplace enough to easily memorize, while still others would be rare enough to ignore. But that still leaves several thousand figures to remember.
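Here’s a quick sketch of that count in Python, assuming “two-digit” means both operands run from 10 to 99:

    # Count distinct two-digit addition and subtraction facts.
    additions = {(a, b) for a in range(10, 100) for b in range(10, 100) if a <= b}
    subtractions = {(a, b) for a in range(10, 100) for b in range(10, 100) if a >= b}

    print(len(additions))     # 4095 distinct sums (order doesn't matter)
    print(len(subtractions))  # 4095 differences with non-negative results

Even before discounting the trivial and commonplace cases, that comes to more than eight thousand facts.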

The more I thought about it, the more I saw that, in her world, it made better sense to memorize literally thousands of things than to work them out over and over. She had no way of knowing that affordable, handheld calculators would arrive a few decades after she left school, after all. Each two-digit addition or subtraction she memorized saved her from working out the problem from scratch for the rest of her life. This saved her effort and time every time she

  • balanced her checkbook,
  • filled out a deposit slip at the bank,
  • calculated the tax or tip on something,
  • tallied up the score for her card game,
  • totaled up a balance sheet for her business,
  • made change for a customer, or
  • checked that the change she was being given was correct,

to say nothing of a hundred little things I can’t think of. She married young and has run small businesses for supplemental income all her life, so managing the purse strings fell squarely into her traditional gender role. Numbers were part of her daily life.

So for the first half of her life, none of this could be automated. There were no portable machines to do the job, and even the non-portable ones were expensive, loud, slow, and needed to be double-checked by hand.

I don’t believe she memorized these all at once for a test, the way I learned the multiplication tables in third grade. It seems likely she memorized them over time. It’s possible that expectations in her school forced a lot of memorization that I didn’t experience when I went to school decades later, but maybe she was just extra studious.


I recall, as I went through school, having to rely more on a calculator as I approached advanced subjects. Before calculators became available to students, appendices of lookup tables contained pre-calculated values for many logarithms, trigonometric functions, radicals, and so on. Students relied on these to solve many problems. Anything else—even if it were just the square root of a number—came from a pen-and-paper calculation. (Many of my early math books did not acknowledge calculators yet, but this changed by the 1990s.)

Charles Babbage reported that he was inspired to attempt to mechanize computation when he observed the fallibility of making tables of logarithms by hand. He began in the 1820s. A hundred and fifty years later, arithmetic computation would become handheld and affordable, fomenting new tension around the role rote memorization plays both in learning and in daily life.

Today, we’re still trying to resolve that tension. Memorization may feel like it has a diminished role in a post-reform education environment, but it’s by no means dead. Current U.S. Common Core State Standards include expectations that students “[b]y end of Grade 2, know from memory all sums of two one-digit numbers,” and, “[b]y the end of Grade 3, know from memory all products of two one-digit numbers.” That sounds exactly like the pre-reform expectations I had to meet.

All this means is that there has been neither a steady march away from rote memorization nor a retreat back to it. Research is still unclear about which facts are best memorized, when, or how, and so there’s no obvious curriculum that fits all students at all ages. For example, the Common Core Standards cite contributing research from a paper which reports on findings from California, concluding that students counted more than they memorized when pushed to memorize arithmetic facts earlier. The paper reasons this is probably due to deficiencies in the particulars of the curriculum at the time of the research (2008).


I’m not an expert, and I don’t have easy answers, but my instinct is that rote memorization will always play an inextricable role in math education.

Having learned about the different directions in which the traditional and reform movements of math education have tugged the standards over the years, I tend to lean more traditional, but I attribute this to two things. One is that I was educated with what I remember to be a more traditional-math background, and though I didn’t like it, it seems serviceable to me in retrospect.

The other reason is that, for me, memorization has always come easily. I don’t really know why this is. It’s just some automatic way I experience the world. Having this point of view, though, I can easily see how beneficial it is to have answers to a set of frequent problems ready at hand. It’s efficient, and its benefits keep paying off over time. The earlier you memorize something, the more it helps you, and the better you internalize it. Even for those who can’t memorize things as easily, the returns on doing so are just as useful.

I do completely agree with the underlying rationale of the constructivist approach. Its underpinnings are based on Piaget’s model of cognitive development, which is incredibly insightful. It seems useful to learn early how to accommodate the discomfort of adapting your internal mental model to new information, and to take an active role in learning new ideas in order to surmount new problems.

I don’t necessarily believe that a constructivist learning approach is intrinsically at odds with rote memorization—that is to say, that memorization necessarily requires passive acquisition. In fact, the experience of active experimentation may help form stronger memories. It’s more likely the two compete in curricula for time. It takes longer to derive a formula for area or volume by independent invention, for example, than to have it given to you.

In fact, constructivist learning works better when the student has a broader reservoir of knowledge from which to draw when trying to find novel solutions to problems. In other words, rote memorization aids constructivist learning, which in turn aids remembering new information.

My feeling is that math will always require a traditional approach at its very heart to set in place a broad foundation of facts, at least at first, before other learning approaches can succeed. Though the idea of critical periods in language acquisition has its detractors and heavy criticism, there is a kernel of truth to the idea that younger minds undergo a period of innate and intense linguistic fecundity. Maybe as time goes by, we can learn more about math acquisition and find out which kinds of math learning children are more receptive to at which ages. Until then, I feel like we’re figuring out the best way to teach ourselves a second language.

I am grateful to Rachel Kelly for her feedback on a draft of this post.

Privacy Policy Updates: Data Storage

I updated WordPress today to version 4.9.6. I noticed this version comes with support for implementing privacy policies throughout the site. I seem to have been ahead of the curve in implementing my own, but when the GDPR comes into effect in the EU this month, it will clarify and simplify data privacy for much of Europe. This implies enforcement will become a more direct matter as well. Any web service that is accessible from Europe and does business there has now updated its privacy policy to ensure GDPR compliance—which is why everyone has received a raft of privacy policy updates.

Most of these privacy policy updates pertain to what rights customers or users have to their own data. Often, they grant new rights or clarify existing rights. This week’s new version of WordPress is yet another GDPR accommodation.

Today, I have to announce my own GDPR update. Yes, I’m just a tiny website no one reads, and I provide no actual services. But having already committed to a privacy policy, which I promised to keep up to date (and whose changes I promised to announce), I’m here to make another update.

One nice thing that came with the WordPress update is a raft of suggestions on a good privacy policy (and on the ways WordPress and its plugins may cause privacy concerns). I found that I had covered most of them, but one thing I needed to revisit was a piece of functionality in Wordfence.

I use Wordfence for security: it monitors malware probes and uses some static blacklists of known bad actors. It also, by default, sent cookies to browsers in order to track which users were recurring ones and which were automated clients. The tracking consisted only of an anonymous, unique token which distinguished visitors from one another. Unfortunately, this functionality had no opt-out and did not respect Do Not Track.

Although my tracking was only for security purposes—not for advertising—and although it did not store any personal information, nor did I share it with anyone else, I realized I would have to disable it.

I had made explicit mention of this tracking in my previous revision of my privacy policy:

I run an extra plugin for security which tracks visits in the database for the website, but these are, again, stored locally, and no one has access to these.

This is unfortunately vaguer than it should have been, since it doesn’t mention cookies. It also makes no provision for consent. It merely states the consequences of visiting my site.

The GDPR makes it clear that all tracking techniques (and specifically cookies) require prior consent. Again, I’m not a company, and I don’t provide any service. I’m not even hosted in the EU’s jurisdiction. My goal, though, is to exist as harmoniously with my visitors as possible, whoever they may be, and to have the lightest possible touch.

So I’ve disabled Wordfence’s cookie tracking. I’ve added a couple of points to my privacy policy which clarify more precisely which data is logged and under which circumstances cookies may be sent to the browser.

This interferes with my analytics, unfortunately—it’s no longer possible to be sure which visitors are humans. I think it’s worth it, regardless.

I also made a couple of other changes based on WordPress’s suggestions. I moved a few bullet points around to group related points more logically. I also added a point which specifies which URL my site uses (meaning the policy would be void if viewed in an archived format, within a frame, or copied elsewhere).

How Transgender Children Are Made and Unmade

Note that the following post discusses the sensitive topic of conversion therapy for transgender children, along with mentions of outmoded terminology and psychodynamic models, ethically questionable studies and treatment practices, and links to some sources which may misgender or mislabel transgender people.

I have also added some clarifications to my final points on 12 May 2018.

Today, a friend pointed me to a news article out of the UK covering a new study by Newhook et al. released in the International Journal of Transgenderism. The study was published a couple of weeks ago and criticizes a handful of other studies from the last decade which bolster a myth that the vast majority (more than 80%) of children who have presented as transgender have since “desisted” (reverted to being cisgender) as adolescents or adults. Those studies, all released since 2008, analyze children who were researched from 1970 up until the 2000s.

Those recent desistance studies might hint at a couple of interpretations of transgender children who desist. The most neutral one is that such children were “going through a phase,” playing out the vagaries of youthful whims and later changing their minds. However, these studies also permit a more sinister interpretation—one in which children were subject to external influences that “confused” them about their gender, a confusion which time and therapy later allowed them to outgrow and reject.

It stands to reason that, because each child included in the original studies had contact with researchers, they were likely seeking treatment which included therapy, which might seem to support the latter interpretation. The standard of care for whichever diagnosis they received (which would have varied by location and time; more on this below) may in fact have focused on influencing the child away from transgender or homosexual behaviors. Many research studies and forms of treatment, especially in earlier years, would have taken the form of conversion therapy. That also creates interpretive concerns for the original studies—they affect their own outcome. (This is referenced below as well.)


First, I want to briefly discuss the flaws in the desistance studies so that we can begin to erode the desistance myth. The news article above sums up the critique introduced by the new study quite well.

The ‘desistance’ figure come from studies conducted between the 1970s and the 2000s in the Netherlands and Canada, which assessed whether the kids that sought services at the gender clinic turned out to be trans as adults. The new publication concludes that the figure included all kids that were brought to the clinic, many of who never experienced gender dysphoria in the first place nor saw themselves as trans. Kids that shouldn’t have been a part of the figure were therefore being used to ramp up the numbers.

The news article elaborates that not only is there uncertainty in how many children should have been counted as transgender in the first place, but the earlier studies also make blanket assumptions as to what happened to those children afterward.

Another flaw is that in the follow up, all participants that weren’t included for whatever reason were simply brushed off as ‘desisters’. This was done without having any factual evidence or knowledge about the children involved.

In what should have been simple division, the numbers on both sides of the division sign have become suspect. Now the question becomes, do we have the actual figures? Here’s where the real problems start. We need to delve into the primary source, the Newhook et al. study, itself.
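To see how sensitive that division is, here’s a toy illustration in Python. These numbers are invented purely to show the arithmetic; they are not figures from any of the studies discussed.

    persisted = 25       # children still identifying as trans at follow-up
    actually_trans = 30  # children who met a strict gender-dysphoria criterion
    referred = 100       # every child referred to the clinic, dysphoric or not,
                         # with those lost to follow-up counted as "desisters"

    print(f"{1 - persisted / actually_trans:.0%}")  # 17% desistance
    print(f"{1 - persisted / referred:.0%}")        # 75% "desistance"

Inflating the denominator alone turns a modest desistance rate into an overwhelming one.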


The study is called “A critical commentary on follow-up studies and ‘desistance’ theories about transgender and gender-nonconforming children,” authored by Newhook et al. It contains a methodological meta-analysis of four previous studies. As it states in its introduction,

In the media, among the lay public, and in medical and scientific journals, it has been widely suggested that over 80% of transgender children will come to identify as cisgender once they reach adolescence or early adulthood. This statement largely draws on estimates from four follow-up studies conducted with samples of gender-nonconforming children in one of two clinics in Canada or the Netherlands (Drummond, Bradley, Peterson-Badali, & Zucker, 2008; Steensma, Biemond, de Boer, & Cohen-Kettenis, 2011; Steensma, McGuire, Kreukels, Beekman, & Cohen-Kettenis, 2013; Wallien & Cohen-Kettenis, 2008).

The critiques in the Newhook et al. study aren’t new, and the authors take pains to mention some of their forebears in their introduction as well. They contextualize their new study by explaining that they hope to guide the upcoming eighth version of the WPATH standards of care, which will determine how transgender children are treated for years to come.

Newhook et al. mention older follow-up studies of gender-nonconforming children from before 2008, but the authors explain those studies are tainted by methodological and sampling problems. They are also likely irrelevant, since they are not among the studies cited as contributing to the 80% figure. So the authors skip these earlier studies in their analysis.

We recognize that numerous follow-up studies of gender-nonconforming children have been reported since the mid-20th century (e.g., Green, 1987; Money & Russo, 1979; Zucker & Bradley, 1995; Zuger, 1984). In that era, most research in the domain focused on feminine expression among children assigned male at birth, with the implicit or explicit objective of preventing homosexuality or transsexualism.

(I’d like to draw your attention for a moment to the fact that Kenneth Zucker was an author both in the 1995 study above and in the later 2008 study mentioned earlier. We’ll return to him later.)

Now, the Newhook et al. critical commentary notes that the four desistance studies arrive at a figure of over 80% desistance. Then it begins to note what these studies can and cannot show. The methodological concerns center on what we can and can’t know, given what information was collected at the time and afterward.

What I found was that, because the studies drew on children from 1970 onward, the shifting basis for diagnosis itself seeded the miscategorization—both in categorizing children as transgender in the first place and then again on follow-up.

Back in 1970, no formal diagnosis for gender identity disorder or gender dysphoria existed. Doctors and researchers had only informal descriptions. As Newhook et al. explain,

However, the plain-language meaning of gender dysphoria, as distress regarding incongruent physical sex characteristics or ascribed social gender roles, has been established since the 1970s (Fisk, 1973). When these four studies refer to gender dysphoria, they are referring to this plain-language context of distress, and not the newer DSM-5 diagnostic category.

The DSM-III would not exist until 1980, so the meanings applied here may vary from person to person, as experience and prejudice allow. I do not know all the criteria which were applied. (I have been unable to locate the Fisk source, but he appears to have coined the term “gender dysphoria.”)

Then, from the 1980s on, the DSM-III, DSM-III-R, DSM-IV, and DSM-IV-TR each included a “gender identity disorder” diagnosis which came with a “GID/Children Transsexualism” or “gender identity disorder in children” category. The symptomatology of these was similar in general shape and included distress (a gender dysphoria component) but also certain behaviors (e.g., crossdressing), timeframes (e.g., six months), and so on. This is a very definite case of moving the goalposts: the diagnostic criteria shifted, and in some ways they became more lax. Diagnostic criteria often state that only a certain number out of all of the above need be satisfied over a period of time, so if every component but gender dysphoria is present, the diagnosis of gender identity disorder can still apply.

At the same time, the standards of care were also shifting, evolving through time to match the competing typologies and psychosexual models of the providers. Adults learned to conform to expectations (such as crossdressing for a year before receiving treatment or professing attraction to men where no such attraction existed).

Children who may not have been aware of these standards and criteria, acting on their needs and wants, might have very well fallen in and out of the categorizations changing around them. Through no fault of their own, the category of transgender might one day have landed upon a child and then another day slipped away from them.

The Newhook et al. study describes the problem this way:

Due to such shifting diagnostic categories and inclusion criteria over time, these studies included children who, by current DSM-5 standards, would not likely have been categorized as transgender (i.e., they would not meet the criteria for gender dysphoria) and therefore, it is not surprising that they would not identify as transgender at follow-up. Current criteria require identification with a gender other than what was assigned at birth, which was not a necessity in prior versions of the diagnosis. For example, in Drummond et al. (2008) study […] the sample consisted of many children diagnosed with GIDC, as defined in the DSM editions III, III-R, and IV (American Psychiatric Association, 1980, 1987, 1994). Yet the early GIDC category included a broad range of gender-nonconforming behaviors that children might display for a variety of reasons, and not necessarily because they identified as another gender. Evidence of the actual distress of gender dysphoria, defined as distress with physical sex characteristics or associated social gender roles (Fisk, 1973), was dropped as a requirement for GIDC diagnosis in the DSM-IV (American Psychiatric Association, 1994; Bradley et al., 1991). Moreover, it is often overlooked that 40% of the child participants did not even meet the then-current DSM-IV diagnostic criteria. The authors conceded: “…it is conceivable that the childhood criteria for GID may ‘scoop in’ girls who are at relatively low risk for adolescent/adult gender-dysphoria” and that “40% of the girls were not judged to have met the complete DSM criteria for GID at the time of childhood assessment… it could be argued that if some of the girls were subthreshold for GID in childhood, then one might assume that they would not be at risk for GID in adolescence or adulthood” (p. 42). By not distinguishing between gender-non-conforming and transgender subjects, there emerges a significant risk of inflation when reporting that a large proportion of “transgender” children had desisted. As noted by Ehrensaft (2016) and Winters (2014), those young people who did not show indications of identifying as transgender as children would consequently not be expected to identify as transgender later, and hence in much public use of this data there has been a troubling overestimation of desistance.

Because of the meaningful shifts in diagnostic criteria over the last fifty years, there’s little hope of reconstructing the true figures of desistance, such as they may be. We would need both detailed notes (interviews, etc.) from the original cohorts to attempt to assess the children’s self-reported identities and then those same cohorts’ adulthood identities, assessed the same way from follow-ups, to compare. I suspect the paucity of detailed qualitative data from the original studies would undermine such an effort, due to the primacy of researchers’ diagnoses over self-described experiences and identities.

In most studies, it appears we do not have such detailed notes and the like available. Newhook et al. do cite Steensma et al. (2011) as having some unique qualitative research, but the qualitative data are very limited—there are only two interviews mentioned.


The Newhook et al. study also brings up many ethical concerns, and here I turn back to the problem of Zucker in particular. The authors identify three such concerns, of which the second is particularly insidious—the questionable goals of treatment itself.

In describing their second concern, the authors write,

A second ethical concern is that many of the children in the Toronto studies (Drummond et al., 2008; Zucker & Bradley, 1995) were enrolled in a treatment program that sought to “lower the odds” that they would grow up to be transgender (Drescher & Pula, 2014; Zucker, Wood, Singh, & Bradley, 2012; Paterson, 2015). Zucker et al. (2012) wrote: “…in our clinic, treatment is recommended to reduce the likelihood of GID persistence” (p. 393).

As I write, Zucker’s words are only six years old. To be clear: he is both espousing and practicing conversion therapy of children.

Zucker is not a marginalized figure in the world of psychiatry. He is not only respected and accepted; he was the head of the “Sexual and Gender Identity Disorders” group that revised the DSM-5, appointed by the American Psychiatric Association. A heartbreaking account of his attempt at conversion therapy may be found in this NPR story (with some misgendering).

He was not the only person in the group to favor controversial theories, either. Blanchard (who favors an outmoded typology of transgender people based on sexual attraction and also attempts conversion therapy) and Lawrence (who has expressed the belief that transgender people have a kind of body integrity identity disorder) also formed part of the group.

Why do I mention their role in shaping the DSM-5? Well, they believe children should be dissuaded from transgender identities, which they regard as pathological or maladaptive. Under their influence in shaping the diagnostic criteria for children and adults, they moved the goalposts for fitting the model. That then allowed studies to tally up how past children fit current, different diagnostic criteria to determine that they have “desisted.” In turn, these fudged figures can be used to justify further conversion therapy, resist affirmative care models of treatment, and influence the WPATH standards of care to inhibit access to treatment and personal safety.

I therefore question whether—after influencing or directly authoring new diagnostic standards for gender dysphoria—advocates for conversion therapy then revisited older studies to make follow-ups, aware of how the results would skew toward their desired outcome: an interpretation of a seeming tendency toward desistance, which marks transgender identities as “unnatural” aberrations which only emerge later in life and which can be headed off earlier in childhood. Buried underneath this interpretation is an implicit assumption that children form transgender identities due to extrinsic influences. From it, they conclude that they can prescribe a model of care which essentially counteracts those influences with their own.

Wiser people than I have already explained why better models of care, such as the affirmative care model, practiced in most North American clinics, provide better outcomes. The news article I began with also concludes with some great sources on treatment outcomes, which I cannot possibly outdo, so I’ll leave you to revisit Owl’s article.

Denying children bodily autonomy and agency over their identity is a form of abuse. The long-lasting confusion may result in self-denial, withdrawal, self-harm, or even suicide later in life. Unlike many forms of abuse, which happen privately, transgender conversion therapy co-opts institutions toward its own ends by shaping the standards of care for treatment (via its influence on the WPATH with influential studies) and by writing the diagnostic manual itself. The prevalent myth of desistance of childhood gender dysphoria has been a powerful tool used to abuse children. It must be dismantled. To do so, we must expose pernicious and specious studies, using critical meta-analysis such as Newhook et al.’s.

I am grateful to Zuzu O. for feedback on this post.

Privacy Policy Update: No Mining

I got a weird spam e-mail overnight asking if I wanted to embed someone’s cryptocurrency miner into my website. They purport to be opt-in only, but all the other examples I’ve read about online up to now have been surreptitious, hijacking the browser for their own ends without asking. The end user only notices when their computer’s fans switch on or the machine gets too hot.

Such mining scripts have been highly contentious on other websites. They exert excessive and unilateral control over the visitor’s system. I certainly had such things in mind when I promised never to embed ads and the like in my website, but I had never spelled out that I had no intention of hijacking the browser for my own ends (ad or not).

This morning, I added a new point to my privacy policy.

  • This website does not load software in the user agent (your browser) which serves any purpose beyond displaying the website and its assets—meaning it does not use your browser to mine cryptocurrency, for example.

Most of my privacy policy describes what the website does without mentioning the browser. This point adds a clear expectation for browsers which visit.

I generalized the point a bit to include things which aren’t just cryptocurrency miners. It might be tempting to grab a few of my users’ cycles for SETI@home or the like, for example, but if a user wants to contribute to a project like that, they can do so themselves. I’ll have to rely on persuasive words to bring people around to a cause like that.

The Apology Contract

A binding contract has three elements: offer, consideration, and acceptance—all of which must exist among mutually assenting parties. These elements, in some form or another, have existed since time immemorial. A contract of sale, for example, contains an offer (the good for sale at a price), the consideration (the money exchanged for the good), and the acceptance (the actual mutual agreement to exchange the good for the price).

Many of our social interactions implicitly follow a similar structure because they rely upon offering, considering, and accepting one another’s social cues in more-or-less formulaic ways. Some of these interactions are rigidly ritualistic—”thank you,” “you’re welcome”—and some are not (flirting, for example).

I have read several articles on the best way to apologize, with which I agree; they advise the person giving the apology to act with humility and sincere intent, to acknowledge the harm done, and to reduce further harm. (One such popular example was written by John Scalzi. Another good example aimed at children comes from a parenting blog.)

However, I have lately come to worry that the act of the apology often still imposes a contract-like, ritualistic exchange. On receiving an apology, I have in the past found myself at odds with every instinct in my body to assuage the apologizer who, having recognized their fault and promising in good faith to do better, awaits something like an absolution from me before moving on.

The formula for how we’re taught to apologize, as children, goes:

— I’m sorry.

— It’s okay.

I’ve tried withholding that second part of the exchange as I’ve gotten older. Sometimes I don’t feel okay. Sometimes it’s not okay. Maybe I need space or time to get there. Maybe I just want to move on without needing to perseverate on the feelings of the person who wronged me.

This is especially difficult for an in-person conversation. Without the expected words, “it’s okay,” or, “it’s fine,” in my mouth, what am I to say? I don’t necessarily want to prolong the moment, either. I often have an interest in moving past the moment, but I don’t have some alternative wording that isn’t focused on the feelings of the apologizer.

When I don’t automatically say, “it’s okay,” a loaded pause often seems to follow. The apologizer feels they have done everything right, and I haven’t followed through on my end of the apology. They wait for me to give them some way to get past the moment, and when I don’t offer that back, they also don’t know how to continue.

The ritual of the apology feels a lot like a social contract because we’re conditioned to treat it as such from a young age, to offer some comfort to someone who has apologized and meet them part way. However, this is no contract. The formula, like so many social rituals, instead imposes an expected response on the recipient. There’s not necessarily mutual assent.

What I have read about the best way to offer an apology sometimes, but not always, includes a final step I believe is extremely important—once given, expect nothing back. Any forgiveness, grace, or acceptance on the part of the recipient is a gift, not an exchange. Beyond that, though, you need not expect any response whatsoever, not even acknowledgement. The apology, for the one giving it, is both the understanding of harm and the promise to reject furthering it. It is not a request.

What’s more, I can’t recall seeing anyone write for the person receiving the apology. I address you now: You owe nothing. Take comfort, if you can, that someone has seen how they have harmed you. Find peace, if you can, in the closure they offer. Exchange what you like, and repair the relationship if you want it. But your duty to them ended when the apologizer wronged you in the first place.

Adding a Privacy Policy

I’ve decided to give my website a privacy policy. It’s maybe more of a privacy promise.

It might sound strange to make a privacy policy for a website with which I don’t intend users to interact, but I’ve realized that even browsing news websites or social media has privacy implications for users who visit them. So I wanted to state what assurances users can have when visiting my website—and set a standard for myself to meet when I make modifications to my website.

Most of the points in it boil down to one thing—if you visit my site, that fact remains between you and my site. No one else will know—not Google, not Facebook, not your ISP, not the airplane WiFi you’re using, not some ad network.

I went to some trouble to make these assurances. For example, I had to create a WordPress child theme which prevents loading the stylesheets associated with Google Fonts that are used by default. Then—since I still wanted to use some of those fonts—I needed to check the licensing on them, download them, convert them to a form I could host locally, and incorporate them into a stylesheet on my own server.

I also needed to audit the source code for all the WordPress plugins I use to see what requests they make, if any, to other parties (and I’ll have to repeat this process if I ever add a new plugin). This was more challenging than I realized.

I needed to ensure I had no malware present and that my website would remain free of malware. I began with WordPress’s hardening guide. I found a very thorough plugin for comparing file versions against known-good versions (Wordfence, which I found recommended in the hardening guide). I also made additional checks of file permissions, excised unused plugins, made sure all server software was up to date, and incorporated additional protections into the web server configuration to limit my attack surface.

Finally, I had to browse my website for a while using the developer tools built into my browser, both to see if any requests went to a domain other than my own and to inspect what cookies, local storage, and session storage data were created. This turned up a plugin that brought in icons from a third-party site, which I had to replace.
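This kind of audit can also be roughed out with a script. Here’s a minimal sketch in Python, assuming the requests and beautifulsoup4 packages; example.com stands in for my own domain, and it only checks static tags (not CSS imports or scripted requests):

    from urllib.parse import urlparse

    import requests
    from bs4 import BeautifulSoup

    SITE = "https://example.com/"  # placeholder domain
    own_domain = urlparse(SITE).netloc

    soup = BeautifulSoup(requests.get(SITE, timeout=10).text, "html.parser")

    third_parties = set()
    for tag, attr in (("script", "src"), ("link", "href"), ("img", "src")):
        for element in soup.find_all(tag):
            url = element.get(attr)
            domain = urlparse(url).netloc if url else ""
            if domain and domain != own_domain:
                third_parties.add(domain)

    print(third_parties or "no third-party resources found")

Nothing here replaces watching the browser’s network panel by hand, but it’s a quick way to catch an obvious stray domain after adding a plugin.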

After all that, I feel sure I can make the assurances my privacy policy makes.

Beginning Astrophotography: Crescent Moonset

Computer-controlled, motorized telescope, dimly illuminated, aimed up and to the left at the distant crescent moon in the upper left, with a camera attached where the eyepiece should be at the back, using a baroque contraption.
Last night’s front-porch setup: Celestron NexStar 5 SE powered by a Celestron PowerTank Lithium and Sony α6300 camera connected using an E-mount camera adapter, aimed at the twilight moonset.

I had clear skies again last night, and I remembered to look for the Moon while it was slightly higher in the sky. I set my telescope up on the front porch shortly after sunset. The Moon presented an incandescent, imperceptibly fuller crescent facing the failing twilight.

Because it was higher, I had a better perspective, more time to take photos, and more time to check my settings, and my photos had less atmosphere to shoot through (meaning less distortion). And because the crescent was fuller, I captured more detail in my photos.

Equipment

I always remember to spell out acquisition details in my astrophotography posts, but I’ve found people most often ask instead what equipment I use. I usually don’t list this in detail, both because I’ve usually already mentioned my equipment in earlier posts and because the exact equipment I use on a given night is partly convenience and whim, not meriting any particular recommendation or endorsement. My photos are within reach of all sorts of equipment of various kinds and prices, given practice and technique, and the last thing I want to do is give someone the impression they need to spend over a thousand dollars to do what a two-hundred-dollar telescope and a smartphone can do.

However, I’m going to make an effort to name what equipment I use, now and in the future, just because it’s so commonly asked. Maybe I’ll need to reference it myself in the future, too. So last night, I used

  • a Celestron NexStar 5 SE telescope,
  • a Celestron PowerTank Lithium battery,
  • a Sony α6300 camera, and
  • an E-mount camera adapter.

Those are the only four pieces of hardware I used last night.

Technique

I aligned the telescope on the Moon, which let it track roughly. This meant it needed periodic corrections to keep it from drifting out of view (once every several minutes). I concentrated on keeping the extents of the arc within the viewfinder.

View of the LCD display of the camera, zoomed in on a fuzzy section of the Moon, showing bright and dark sections divided by a diagonal line.
Using the Focus Magnifier on the Sony α6300, concentrated on a section of the Moon near its terminator to fine-tune focus.

Once it was centered and roughly focused, I used a feature on my camera called the “Focus Magnifier” to fine-tune the focus. I’ve found this to be indispensable. Using this feature, I zoom in to a close-up view of some section of what the camera sensor is seeing. This way, I can make fine adjustments to the telescope’s focus until I get the best possible clarity available. I can also get a good idea what kind of seeing I’ll encounter that night—whether the sky will shimmer a lot or remain still. I was lucky last night to find good focus and good seeing.

Once focus is good, it can be left alone. I ensure that the adapter is locked tightly in place so that nothing moves or settles, keeping the focal point cleanly locked on infinity.

Then I turned the ISO up—doubled it. The Moon is a bright object, so I was not keen to use a setting I would use for a dark site, but I settled on ISO 1600. My goal was to reach a shutter speed of 1/100 second, which I did, without losing the picture to noise or dimness. A higher ISO works great at a dark site, but the Moon has quite a dynamic range, so I felt like I had less headroom. In any case, I used a 1/100-second exposure and ISO 1600 for all my photos.
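The bookkeeping here is easiest in stops. A quick sketch of the arithmetic in Python, taking the previous night’s ISO 800 at 1/25 second (from my last post) as the baseline:

    from math import log2

    def stops(iso_a, shutter_a, iso_b, shutter_b):
        # Positive means more total exposure than the baseline; negative, less.
        return log2(iso_b / iso_a) + log2(shutter_b / shutter_a)

    # Previous night: ISO 800 at 1/25 s. Last night: ISO 1600 at 1/100 s.
    print(stops(800, 1/25, 1600, 1/100))  # -1.0: one stop less exposure

Doubling the ISO bought one stop, and the faster shutter spent two, netting one stop less total exposure, which the brighter, higher crescent could afford.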

I captured a short 4K video before I began so I could capture the seeing conditions that night. I recommend viewing it fullscreen, or it will look like a still photo—the sky was placid as a pond last night.

After taking the video, I realigned the telescope slightly and, using my remote controller so that I could quickly actuate it without shaking the telescope, I took 319 photos, occasionally realigning to correct for drift.

Unfortunately, Venus and Mercury had already sunk too low to get a glimpse, so I packed it up and went inside.

Processing

I moved all the photos, in RAW format, to my computer from the camera. Then I converted them all to TIFF format. These two steps took probably something like an hour and resulted in seven and a half gigabytes of data.
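As a minimal sketch of that conversion step in Python (assuming the rawpy and tifffile packages, with the camera’s Sony .ARW files sitting in a raw/ directory; the directory name and workflow here are illustrative, not the exact tools I used):

    from pathlib import Path

    import rawpy
    import tifffile

    for raw_path in sorted(Path("raw").glob("*.ARW")):
        with rawpy.imread(str(raw_path)) as raw:
            rgb = raw.postprocess(output_bps=16)  # demosaic to 16-bit RGB
        tifffile.imwrite(raw_path.with_suffix(".tiff"), rgb)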

Screenshot of a Windows program called PIPP, using two windows, one showing the Moon highlighted in blood red, the other with several progress bars and a list of files.
Screenshot of PIPP in use, aligning the photos of the Moon and sorting them by brightness.

Because the Moon drifted, due to the rough tracking, the photos needed to be pre-aligned. I used a piece of software called PIPP for that. Without this pre-alignment step, the tracking and alignment built into my stacking software struggled mightily with the photos and created a mess.

Its output was another series of TIFF photos. I found afterwards that two of the photos were significantly overexposed, leaving many details blown out, so I excluded them from the rest of the process, leaving me with 317 photos.
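The heart of that pre-alignment is simple enough to sketch. Here’s the rough idea in Python, assuming the numpy, scipy, and scikit-image packages: find the Moon’s brightness centroid in each frame and shift it to the center. (PIPP itself does considerably more, and the filename is a placeholder.)

    import numpy as np
    from scipy.ndimage import center_of_mass, shift
    from skimage import io

    def center_moon(frame):
        # Treat bright pixels as the Moon and find their centroid.
        mask = frame > frame.max() * 0.2
        cy, cx = center_of_mass(mask)
        h, w = frame.shape
        return shift(frame, (h / 2 - cy, w / 2 - cx))

    frame = io.imread("moon_0001.tiff", as_gray=True)  # placeholder filename
    centered = center_moon(frame)
    io.imsave("moon_0001_centered.tiff", (centered * 65535).astype(np.uint16))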

Screenshot of AutoStakkert!3, a baroque program consisting of two windows, one with a large preview of the moon, the other with lots of graphs and buttons and inputs and multiple progress bars. It is 20% through "MAP Analysis."
Screenshot of AutoStakkert!3 stacking the best 50% of the Moon photos I took into a single image.

I opened these 317 photos in AutoStakkert!3 beta. After initial quality analysis, I used the program to align and stack the best 50% of the images (by its determination). This took a bit less than ten minutes and left me with a single TIFF photo as output.
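As a much-simplified stand-in for what the stacking software does, the quality ranking and stacking can be sketched in Python with OpenCV and numpy: score each frame’s sharpness by the variance of its Laplacian (a common proxy), keep the best half, and average them. (AutoStakkert!3 also does multi-point alignment, which this skips; the filenames are placeholders.)

    from pathlib import Path

    import cv2
    import numpy as np

    frames = []
    for p in sorted(Path(".").glob("moon_*_centered.tiff")):
        img = cv2.imread(str(p), cv2.IMREAD_ANYDEPTH | cv2.IMREAD_GRAYSCALE)
        frames.append(img.astype(np.float64))

    def sharpness(img):
        # Variance of the Laplacian: higher means more fine detail.
        return cv2.Laplacian(img, cv2.CV_64F).var()

    frames.sort(key=sharpness, reverse=True)
    best = frames[: len(frames) // 2]   # the best 50%
    stacked = np.mean(best, axis=0)     # averaging beats down the noise
    cv2.imwrite("stacked.tiff", stacked.astype(np.uint16))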

Image stacking leaves behind an intermediate product when it’s complete, which is what this TIFF photo is. It’s blurry, containing an average of all 157 photos which were composited into it. However, the blur in this photo can be mathematically refined more easily using special filters. I used a program called Astra Image to apply this further processing. In particular, I used a feature it calls “wavelet sharpening” (which can be found in other programs) to reduce the blurring. I also applied an unsharp mask and de-noising.
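Of those filters, the unsharp mask is the easiest to sketch. A minimal version in Python with OpenCV (wavelet sharpening itself is more involved, and this is not Astra Image’s exact algorithm):

    import cv2

    stacked = cv2.imread("stacked.tiff", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_GRAYSCALE)
    stacked = stacked.astype("float64")

    # Subtract a blurred copy to isolate fine detail, then add it back in.
    blurred = cv2.GaussianBlur(stacked, (0, 0), sigmaX=3)
    amount = 1.5  # strength of the sharpening
    sharpened = stacked + amount * (stacked - blurred)

    cv2.imwrite("sharpened.tiff", sharpened.clip(0, 65535).astype("uint16"))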

Finally, I used Apple Photos to flip the resulting photo vertically (to undo the inversion which the telescope causes) and tweak the contrast and colors.

The Photo

Photo of crescent moon, slender, curving down like a bowl but askew to the right. Terminator stops just short of the Mare Crisium.
Crescent moonset taken at about 8 p.m., PDT, on 20 Mar 18. Composited from the 50% highest-quality photos of a set of 317.

Click to view the photo in fullscreen if you can. There’s a lot of detail. The terminator of the lunar surface stops just short of the Mare Crisium (the Sea of Crises), the round, smooth basalt surface right about the middle of the crescent.

I can’t help but compare this one to the photo from the night before: what a difference a day makes. I had more time to work, more photos to take, and the benefit of yesterday’s experience to help improve.

Now it’s clouded over here again—Portland weather—and I can’t practice anymore for a while.

Beginning Astrophotography: Cheshire Grin

Thin waxing crescent moon, turned upward like an askew grin, surrounded by blackness
Waxing crescent moon setting in the west, photographed about 8:45 p.m. PDT (image stacked from thirty-nine individual photos taken by a Sony α6300 camera through a Celestron NexStar 5 SE telescope).

Before the waxing crescent moon set tonight, I caught its Cheshire grin among the firs in the west for a few minutes. Then it was gone.

I had to take my telescope (a smaller model, a Celestron NexStar 5 SE) down the sidewalk a little ways to get a view between the branches. I took as many photos as I could before it sank too low in the sky, using my Sony α6300 camera connected to the telescope using an adapter without an eyepiece (the “prime focus” technique). They were all photographed at ISO 800 and exposed for 1/25 second. The photo above was stacked from the best 50% of the seventy-eight photos I took before the Moon subsided among the trees.

The Thing About My Name

I’m just Emily to my friends. I go by “Emily St” in writing whenever someone needs a longer name and there’s no strict, legal reason to give my whole last name. It catches some people up because “St” resembles the abbreviation for a bunch of things which have nothing to do with me.

In this case, “St” is only short for my last name—not “Saint,” not “Street,” not some other thing. I rarely write down my full last name because I’ve found it’s unnecessary in almost every situation.

Think of “St” like a file extension for my first name, if that helps. In cases where I can, I slap on a big asterisk (*) to show I’ve left out a part. Sometimes there’s just a dot instead. Usually, there’s nothing.

It’s surprising how often the full last name isn’t actually required. For years, I’ve managed to have mail delivered without my full last name—useful so I can distinguish mail from people who actually know me from mail sent by those who got my name from some list. I’ve even had credit card transactions go through okay without the whole last name.


The idea that I might not be going by my “real” or “legal” name might cause someone consternation. But a “real name” is a slippery idea. It comes from a combination of assumptions about a person having a single, fixed name which is registered with a single, fixed governmental entity. This assumption is both relatively recent in history and only true in the simplest cases.

Not only may a legal name for a person vary over time, but even in a single moment, disagreement may exist among various legal entities about a legal name. For example, in the U.S., the moment a judge issues a court order granting a name change, you (and not some automatic process) must then take that name change order to all the various entities, public and private (Social Security Administration, DMV, bank, job, and so on) and get them all updated. Until you’re done, those entities disagree about your name. You can hold in your hand a driver’s license in one name, a Social Security card in another, and be totally in the right simply because of bureaucracy. They’re not even the same governments—one’s federal and one’s state. They have little meaningful responsibility to be in accord with one another (and any bills attempting to create a unified federal ID system have been resisted so far in the U.S.).

Setting aside legal technicalities, then, a “real” name is just an idea that can coincide with a legal name or not, and it may be a single name or several. A name may even become someone’s legal name through sheer use—a name change by usage can be recognized legally as well.

There are people who convert their names through religion, use different names to assimilate culturally, or adopt assumed names for performance or pseudonymous reasons. Do you know Mozart’s “real name”? There’s an entire Wikipedia article about it. Would you be surprised to hear Beethoven introduce himself as Luigi or Louis, depending on whether you were in Italy or France at the time?

The process of name change continues today. SAG-AFTRA rules discourage name collisions, so performers often choose new names under which they perform. Names also may have marketing or homage purposes. Diane Keaton loved Buster Keaton. You know Tom Cruise and not Thomas Mapother. Harry Houdini’s greatest escape might have been from the name Erik Weisz.

Seen through the prism of those contexts, what’s a “real” name?


As for why I use “St” and not some other abbreviation, I have a couple of reasons. First, “S” on its own would be even more confusing, I think. It’s less unique, so you couldn’t search for me online, and it might look (in handwriting especially) like I’m just pluralizing my first name.

I also liked the way it looked when I signed it. I could cross the final flourish with a downstroke.

Scripted signature of "Emily St"

It began at my first tech job several years ago, where everyone was assigned a three-letter username. For some reason, I was given “est” instead of my actual initials. I took to expanding that out—I can’t remember where exactly first—so my first name would be included: “emilyst”.

It was pretty unique—easy to find as a username in places. It had no strong flavor of personality beyond being my name, so I probably wouldn’t tire of it. It was short. I managed to find a Web domain version of it online.

It sometimes confuses people that I shorten it this way—it’s not an initial, but it has no vowels, so it’s not a word. That’s why I thought of slapping a big asterisk on the end—Emily St*—so it looks like something is omitted. (Putting a dot just makes people say “Saint” or “Street.”)

That’s all there is to it—it’s just my first name and part of my last name. Nothing more. If you meet me, you can call me by my first name. If you need to, you can sound out the letters “ess tee,” or just ask me my last name in person. I don’t mind people knowing my last name or using it—I’m not Rumpelstiltskin. I just don’t commit it to writing without a good reason.

The Putty

A long time ago, when I was still a young buck in middle school, I was sitting around with my best friend at his trailer, and I noticed a giant tub of what I took to be Silly Putty. Had to have been half a gallon of the stuff, pink, in a white plastic tub.

I thought: hell, yes, tub of putty. Gonna play with some putty. Gonna just scoop up a bunch of this putty, and—it’s a rock. I can’t shove my hand in. I only left finger dimples.

My friend told me it’s putty for physical therapy. “You squeeze it with your hand.” He dipped his hand in slowly, and it gave way to his light touch.

He explained, in middle-school words, that the viscosity makes it resist any flow faster than a fixed rate. You can’t make it flow any faster, no matter how much effort you put in. You can’t speed it up. To shape it, to squeeze it, it doesn’t matter how much force you put in. It always flows at the same speed.

I tried it. He was right. It felt soft and yielding as long as I applied very little force. If I added more force, it responded with obstinate indifference.

He was able to scoop it up smoothly because he allowed his hand time to sink in without shoving. I had thought of it as a liquid like any other that would simply make room for me as I pushed my hand in, but it didn’t. It pushed back. No effort on my part made a difference. Only time mattered.


Early on in my life, many things came easily to me. By that, I mean I learned new information easily and retained it. Some things came more quickly to me than they did to others, and I was encouraged for it. I became accustomed to gliding through tasks superficially. I used my innate aptitude to move past unpleasant work as quickly as possible and attend to my interests. But this was an undisciplined way to live. The more I indulged only what came easily, the more I neglected other aptitudes I should have nurtured.

Later came problems for which I had less inherent aptitude—whether that meant synthesizing existing knowledge to adapt to novel situations, coping with uncertainty or ambiguity, training for physical tasks, or understanding and empathizing with new people. I had no ready-made shortcut here. When the time came that I could no longer ignore these problems, my instinct was again to find some other way to speed up my approach.

I had formed a habit of rushing of which I wasn’t even aware. I also didn’t like being caught off-guard and unprepared.

I figured maybe I could power through these new situations with a burst of concentrated effort. It made sense to me. If I could just summon up one good wind, I could quickly clear whatever problem and—ironically—avoid self-discipline again.

However, I often encountered frustration instead, and I tended to begin by blaming my frustration on extrinsic factors. At work, for example, I blamed the documentation, training material, or managers. I blamed the people around me for confusing me or misleading me. I dismissed or downplayed the subject’s importance. After a while, these excuses stopped working, and my frustration then turned inward. I ended up blaming myself.

My life—one with a relative lack of financial privilege until recently—had a way of forcing me through the hardship of those episodes, just to survive and make my way, and I’m better off for it today. I can look back at times when I finally saw what had to happen, acted on it, and grew from it. I only regret that I had to pass through so much needless, self-inflicted frustration, pain, and blame along the way.

I’ve begun thinking more and more about that physical therapy putty as I get older. I think we’re the putty.

To learn—to grow—we must change, in a real and physical sense, by reshaping our brains and (sometimes) our bodies. This is a process that takes time. Laborious effort makes no dramatic difference in the rate at which this happens, the way a novice cannot just throw a massive amount of weight onto the rack at the gym to get stronger right away. On the other hand, neither can it be slowed by failing to bring all our effort to bear—so long as we devote the time and commit to some progress—nor by an initial lack of innate ability. We inevitably change as a function of time, provided we keep going, bit by bit, every day.

I have thought about this as I learned guitar. I thought about it when I learned French. I thought about it when I taught myself to juggle. I thought about it as I tried to train my eye to see through a telescope. And I thought about it as I recognized the pattern of discomfort I move through as I begin a new job. As long as I kept at it, I improved—usually just about at the same pace from one experience to the next.

I learned that new kinds of growth came from applying myself and then just waiting, and from accommodating within myself the discomfort of that waiting.

I have often avoided uncertainty in my life out of fear, I think. I’ve never been encouraged to be uncertain or doubtful. Not having the answers makes me vulnerable because it undermines the very thing that set me apart early in life and made me feel more capable. With that vulnerability then comes discomfort because I am unkind to myself when I notice I’m unable to meet my own expectations. Worst of all, it feels inescapable in the moment: there’s just no way to get easy answers, an easy fix, a magic word. It’s tempting to believe—after half a lifetime of being addicted to all the answers coming so quickly—that you’re failing, and it’s your fault.

However, I believe uncertainty, discomfort, and self-forgiveness are precisely the traits I need in order to grow beyond superficial knowledge acquisition, so that I may find kindness and connect to new things and people in ways I could not when I was younger. Cultivating these traits allows me to surrender myself in the present to the passage of time and all it brings—and eventually to new circumstances and possibilities I would not have had otherwise. There are matters of experience which I cannot touch intellectually, no matter how hard I try.

The hell of it is, I still don’t know how I will do these things yet. I think that’s okay for now, as long as I keep trying.


(I am grateful to Amy Farrell and to Sophie for their constructive feedback on my earlier drafts of this post.)