In a captioned scene from Loudermilk, a salesclerk says to Loudermilk, "I can't help it. This is my voice."

That is not your voice

This is the fifth post in a series inspired by Lake Bell’s audiobook chapter “Sexy Baby Voice.” In previous posts I’ve covered the three key features she uses to define this vocal style – bright resonance (which Bell refers to as “high pitch”), creaky voice (“vocal fry”) and legato articulation (“slurring”) – and discussed the various ways that we can manipulate our vocal tracts to create or amplify bright or dark resonances. Now I want to talk about your voice.

Not your voice, but what people mean when they say “your voice.” A friend who’s a vocal coach and read my earlier posts sent me a not-very-funny opening scene from a sitcom called Loudermilk, where the title character (played by Ron Livingston of Office Space) mocks and insults a young woman who takes his order at a coffee bar. The salesclerk is friendly, prompt and thorough; Loudermilk has no cause for complaint. His abuse is entirely based on his dislike for the sound of her voice.

Anyone who’s read this series or listened to the “Sexy Baby Voice” chapter will recognize three particular features of the salesclerk’s voice: bright resonance, creaky voice and legato articulation. The Loudermilk scene could have been inspired by the scenes about “sexy baby voice” in Lake Bell’s 2013 film about the voice-over industry, In a World…

Loudermilk mocks the salesclerk’s creaky voice by using creaky voice in his own responses, and the salesclerk asks, “Why are you talking like that?” Loudermilk responds, “This is my voice,” and she says, “No, it’s not.” After mocking her voice more and ranting a bit, he says, “Just stop doing that.” Her response mirrors the earlier exchange: “I can’t help it, this is my voice,” to which he responds, “No, it’s not.”

As Loudermilk receives his coffee and leaves, the salesclerk, infuriated by his abuse, shouts at his back, “You’re a total dick!” Surprise! She doesn’t use legato articulation or creaky voice – because it’s really fucking hard to shout with either of those features. He turns back and says, “There, there you go, you’re talking!” as though she’d proven his point.

Loudermilk’s insistence that the salesclerk’s use of creaky voice is not “your voice” echoes a deleted scene from In a World… that Lake Bell includes in the audiobook chapter. In the scene, Bell’s character conducts “a vocal experiment” on another character who habitually uses “sexy baby voice.” She asks the other character to count to ten, alternating “the lowest point in your register” (i.e. with dark resonance) on odd numbers with “the highest point in your register” (bright resonance) on even numbers, and then say “Here’s my voice.”

Of course, “Here’s my voice” is the eleventh utterance in the sequence. As an odd-numbered utterance, Bell’s character pronounces it with relatively dark resonance, and the other character follows suit. As with the Loudermilk scene, we’re meant to marvel at the transformation: this woman’s True Voice, stripped of all that sexy baby junk! The message of both scenes is the same: that “sexy baby voice” is fake and women only use it because they’re insecure, but maybe they can be tricked into experiencing the power of their True Voices.

I don’t know about you, but when I first heard the deleted scene with the “vocal experiment,” the first thing I thought of was Elizabeth Holmes, the Theranos founder who is currently in prison for defrauding investors with a fake blood-testing technology. In addition to amassing wealth and power through lies and hype, Holmes is famous for having an unusually low voice for a woman – not just dark resonances, but when she speaks publicly, her fundamental frequency is in the range more typically used by American men.

During the height of Holmes’s success, several people felt that her claims were too good to be true, and they suspected her voice of being fake too. When recordings surfaced of Holmes speaking in a more typical pitch range for an American woman, that was presented as casting doubt on her honesty in general. Is her voice as big a fraud as her company?

I’ll have more to say about the notion of “your voice” and what it means to accuse someone of habitually using a fake voice, but astute observers may note that this double bind – don’t talk too “high-pitched,” but don’t talk too low-pitched either! – echoes the double binds imposed on women in all kinds of areas: be assertive but not bossy! be attractive but not slutty!

Slurring sexy babies

Recently I’ve written a few posts in response to the notion of “sexy baby voice” in Lake Bell’s latest audiobook. Bell identifies “sexy baby voice” with three characteristic features: “high pitch” (which I argue is actually bright resonance), “vocal fry” (what phoneticians call creaky voice) and “slurring.” I’ve argued that while bright resonance can be controlled to some degree, it is characteristic of youth and femininity, and that creaky voice is the only way that some young women can add darker resonance (and hence a bit of gravitas) without sounding tomboyish or fussy.

I wanted to write a quick post about Bell’s third criterion, “slurring,” which Gladwell summarizes as “running some words together” and “sentences without spaces.” Bell’s caricature of slurring gets to the point where she sounds like she’s doing an impression of a drunk sorority girl, but in moderation this is a well-documented pattern of speech variation: some people are noted for short, quick transitions from one speech segment to the next and from one intonational pitch to the next, known as “staccato” articulation, while others take these transitions more gradually, designated by the Italian word “legato.”

Guess what the legato vs. staccato articulation patterns are associated with? Gender. I learned it from my voice teachers, Kristy Bissell and Erin Carney, as part of lessons on developing gender expression in the voice. I’m not familiar with research on this in phonetics, if any has been done.

Basically, staccato articulation is stereotypically associated with men barking orders, while legato articulation is associated with women discussing things in soft, flowing ways. Yes, these are stereotypes, and we can all think of women who bark orders and men with soft, legato articulation. But those women are perceived as acting masculine when they speak with staccato articulation, and men speaking legato are perceived as speaking in feminine ways.

It’s understandable why the use of legato articulation bothers Lake Bell so much: it’s the antithesis of a particular voice-over style that she admires. In her chapter she includes an audio clip of a film she made in 2013, In a World… Before listening to this chapter I had never heard of her or the film, but I discovered that it was seen by a fairly large number of people, and generally well appreciated. That film introduced the general public to her idea of “sexy baby voice,” and was discussed by Mark Liberman in a series of LanguageLog posts.

The name of the film references the famous phrase “In a world…” used in voice-over tracks to introduce trailers for science-fiction action films. In the film, Bell’s character is competing to be the first woman to voice these kinds of macho trailers. The thesis of the film is that women are just as capable as men of delivering this punchy, aggressive style of speech, and are being held back from that success by – what else? – “sexy baby voice.”

Even without going to the hypermasculine extent of action film voice-overs, Bell is implicitly endorsing the management-consultant approach to voice and gender that treats any bias against women’s speech as evidence of a deficiency in the women’s speech itself, a deficiency that can be remedied with enough courses in proper speaking. This is extensively debunked by linguists like Deborah Cameron and Lisa Davidson in articles that I linked from previous posts.

So there we have the three features of “sexy baby voice”: bright resonance, which is an indicator of youth and femininity; creaky voice, which is one of a handful of strategies available to young women to darken their resonance; and legato articulation, which is also an indicator of femininity. If we find this in women who are actually young, it basically means that they want to get away from girlish voices without sounding like tomboys or fussy older women. Judging young women for this strikes me as unfair and mean-spirited.

I have to point out, however, that young women are not the main target of Bell’s “sexy baby voice” tirades. Her ire is directed at older women who, she argues, have other ways of accessing dark resonance but use bright resonance with creaky voice anyway. I’ll address that in another post!

Youth, authority, gender and creaky voice

Recently I’ve written two posts about bright resonance in response to Lake Bell’s audiobook chapter, “Sexy Baby Voice.” Bell describes “sexy baby voice” as having three characteristic features: “high pitch”, “vocal fry” and “slurring.” My first post supported Byron Ahn’s analysis that found that Bell’s “sexy baby voice” samples didn’t have reliably higher pitch than the non-“sexy baby voice” samples, and suggested that she’s probably talking about bright resonance. My second post drew on phonetic and pedagogical research to confirm Bell’s claim that while resonance is constrained by the size and shape of our vocal tracts, it can be consciously controlled to a certain degree.

In this post I want to connect bright resonance (what Bell calls “high pitch”) with creaky voice (“vocal fry”). The original reason they’re used together is youth.

Bell’s argument is that “sexy baby voice” keeps women from being taken seriously, so let’s imagine a young woman who wants to be taken seriously when she talks. Let’s say it’s 1990, and this woman is named Heather, and she has important things to say, whether it’s in a speech or in conversation. And importantly for our purposes, Heather is trendy and feminine.

On some level Heather is aware that dark resonance adds gravitas to speech. But she’s young, she’s petite, she hasn’t given birth and she doesn’t smoke, so she has a relatively short vocal tract and thin vocal folds. This means that without using any of the vocal habits I described in my last post, Heather’s voice will sound girlish, and will risk being prejudged as immature and unserious.

Heather may try some of those habits and find them wanting. She’s already avoiding twang and nasal resonance, which would make her voice sound even brighter. She could try rounding and protruding her lips and using the furthest-back tongue articulations, the time-honored strategy of boys and tomboys. But here’s the thing: she doesn’t want to sound too masculine. She wants to be feminine, but taken seriously. And maybe even sexy.

Another strategy, lowering the larynx, also clashes with the style she wants. It sounds too formal, too grande dame, too fussy. Not at all trendy or stylish.

Let’s imagine that after trying all these strategies, Heather’s a little tired and resigned. She relaxes her voice and it drops into creak. And it doesn’t sound fussy or tomboyish, but it has dark resonance. Maybe it even sounds a bit fashionably blasé!

And from a completely personal view, I just want to say that I do find creaky voice adds a bit of gravitas, and it can be very sexy. When I hear a woman with creaky voice combined with bright overtones, I get an impression of smallness in bigness. I think of creaky voice as the oversize sweater, boyfriend shirt or even mom jeans of the voice.

So Heather starts using creak whenever she wants to be taken seriously. And because she’s trendy, other young women imitate her. Heather is Creaker Zero of late twentieth century “vocal fry.”

Is that the way it actually happened? I have no idea. But it’s a possible scenario. And the scorn that’s been heaped on “vocal fry” over the past thirty-plus years has been a potent example of the double bind that women are placed in time and again. Not enough dark resonance? Girlish. Rounded lips? Transgressing gender. Lowered larynx? Fussy. Creaky voice? You’re destroying your voice!

A lot of the politics of women’s voices has been covered by linguists I respect and admire, so for most of this I’ll just refer you to the responses of Deborah Cameron, Penny Eckert and Lisa Davidson to the 2015 “vocal fry” panic, and radio producer Katie Mingle’s all-purpose response to criticism of women’s voices.

This is one area where Malcolm Gladwell failed in this chapter. Gladwell is the producer of Bell’s audiobook and a friend of Bell, and in the chapter she turns to him for feedback. His biggest strength is the ability to find experts and present their ideas in ways that engage a broader audience, but in this chapter he doesn’t talk to Cameron, Eckert, Davidson or even Mingle. He just sits there and gives his own opinions, even conflating “high pitch” with “uptalk.” In his defense, it is possible that he tried to refer Bell to experts, but we don’t hear about it.

Controlling the brightness of the voice

A few weeks ago I posted about “Sexy baby voice,” the topic of a chapter in Lake Bell’s audiobook about the culture and politics of voices. Bell identified three characteristics of “sexy baby voice” in women: high pitch, “vocal fry” (creaky voice) and “slurring.”

In phonetics, “pitch” is generally understood to refer to the fundamental frequency of the speech signal, but on Twitter the phonetician Byron Ahn posted the results of a computer analysis of some of the examples Bell gave for “high pitch” and pointed out that their fundamental frequencies weren’t much higher than the examples she gave for “normal” speech. In my post, I suggested that Bell is probably referring to the frequencies of harmonics in the speech, also called “resonance” or “formants.” It sounds like the most salient feature of “sexy baby voice” is bright resonance.
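To make “fundamental frequency” concrete: it’s the rate at which the vocal folds open and close, and pitch trackers estimate it from the waveform. Ahn would have used real analysis software (tools like Praat use a refined version of this idea); the sketch below is only a toy illustration of the core trick, finding the time lag at which a signal best correlates with itself:

```python
import math

def estimate_f0(samples, sample_rate, f_min=75.0, f_max=500.0):
    """Toy fundamental-frequency estimator via autocorrelation:
    search for the lag (within a plausible voice range) where the
    signal lines up best with a delayed copy of itself."""
    min_lag = int(sample_rate / f_max)   # shortest period to consider
    max_lag = int(sample_rate / f_min)   # longest period to consider
    best_lag, best_corr = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# A synthetic "voice": a pure 220 Hz sine wave sampled at 16 kHz.
rate = 16000
signal = [math.sin(2 * math.pi * 220 * t / rate) for t in range(1600)]
print(estimate_f0(signal, rate))  # within a few Hz of the true 220
```

Real speech is far messier than a sine wave, which is why production pitch trackers add windowing, normalization and octave-error correction on top of this basic idea.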

As I discussed in my post, bright resonance is generally associated with youth and femininity, because it’s usually caused by small vocal folds in small vocal tracts, and women and children tend to be smaller and have smaller vocal folds. Even younger women tend to have brighter resonance than older women, primarily because of the effect of hormonal changes during childbirth and menopause.

Of course, as Bell demonstrates repeatedly in her chapter, bright resonance can also be controlled, either consciously in the moment or subconsciously through habit and training. I’ve learned about these ways over the years, as a linguistics doctoral student, as a transgender woman and as an amateur singer. I’ll go through all the ways I know to do this.

Diagram of the vocal tract produced by the U.S. Centers for Disease Control and Prevention, and distributed by Wikimedia Commons.

As with the previous post in this series, my knowledge comes from training, not reading, so I don’t know who to credit for figuring all this out about the vocal tract. For now I will credit my primary teachers: the vocal coaches Kristy Bissell and Erin Carney, and the phoneticians Jacques Filliolet, Karen Landahl, Alex Francis and Doug Honorof.

For people who haven’t studied the anatomy of the vocal tract, this will get a little technical. In this blog post I’m going to use all the technical language, but if there’s a particular area that you feel could use more explanation for a general audience, please let me know.

Let’s start with the larynx and move through the vocal tract with the breath. The vocal folds generate sound through their vibration. When they close completely they generate a relatively coherent sound wave, but they can add dark resonance by maintaining gaps of particular sizes to allow low-frequency vibrations, what we call creaky voice or “vocal fry.” Similarly, they can add bright resonance by allowing turbulent air to flow through, causing breathy voice.

Just above the larynx is a tube called the pharynx. We can add bright resonance by constricting the pharynx, a practice that vocal coaches call “twang.” The name confused me for a while, because I associate the word “twang” with the Southern vowel shift, but in this case it refers to the narrowing of the pharynx.

The velum is a flap of muscle that we open to allow air to flow through the nose. When air flows through the nose and mouth at the same time, it produces nasal resonance, which brightens the voice.

We use our tongues to produce consonants and vowels, raising a part of the tongue towards the roof of our mouths, so a /d/ sound is formed by touching the front of the mouth, and a /g/ sound by touching further back. For each of these sounds there is a range of positions along the roof of our mouth. When we raise our tongues further forward within the range for that sound, we generate brighter resonance. We can also generate bright resonance by flattening the tongue, allowing it to be raised higher. There is extensive research showing that women and gay men tend to have brighter resonance on their /s/ phonemes, and that people who make brighter /s/ sounds tend to be heard as women or gay men, even if they aren’t.

The lips are the gates that release our voices to the air outside. Rounded and protruded lips can produce darker resonance, and spread lips (in a smile or similar shape) can produce lighter resonance. I remember hearing about a study showing that even before puberty, boys tend to round their lips to sound more masculine.

One thing that makes this confusing is that all these vocal tract configurations have other functions. Creaky voice can be a sign of fatigue. Breathy voice can be a sign of relaxation. Pharyngeal constriction, nasal resonance, place of articulation and lip rounding can each change one word into another word with a completely different meaning, in Arabic, French, English and other languages.

These articulations can also interact with each other and with the fundamental frequency of the voice in different ways. At low frequencies, breathy voice can sound sympathetic or sexy, but at high frequencies it can sound weak and vulnerable. This may be what you want to project, or it may not. Nasal resonance and pharyngeal constriction can sound forced or strident, obnoxious or insensitive.

The bottom line is that these aspects of the voice are all under some degree of conscious control. How much control a speaker has, and how conscious they are, depends on a lot of factors, but the takeaway for Bell’s chapter is that people with smaller vocal tracts can use these techniques to speak with darker resonance than they would without them, and people with larger vocal tracts can use them to speak with brighter resonance than they otherwise would.

Note that I’m using the term “otherwise.” The terms I want to avoid, for this post at least, are “natural,” “authentic,” “real” and “your/my/their voice.” The tension between biological constraints, habit and conscious control is what makes resonance so fraught, politically, culturally and socially, which is why Bell and others have such intense feelings about it. That’s for another post.

Screenshot of the "Compose new Tweet" modal on Twitter, with the "+" button and a tooltip reading "Add another Tweet". The tweet texts reads "blah blah blah bl"

Dialogue and monologue in social media

I wrote most of this post in June 2022, before a lot of us decided to try out Mastodon. I didn’t publish it because I despaired of it making a difference. It felt like so many people were set in particular practices, including not reading blog posts! My experience on Mastodon has been so much better than the past several years on Twitter. I think this is connected with how Twitter and Mastodon handle threads.

A few years ago I wrote a critique of Twitter threads, tweetstorms, essays, and similar forms. I realize now that I didn’t actually talk much about what’s wrong with them. I focused on how difficult they are to read, but I didn’t discuss how the native Twitter website and app actually make them easier to read. So let me tell you some of the deeper problems with threads.

In 2001 I visited some of the computational linguistics labs at Carnegie Mellon University. Unfortunately I don’t remember the researchers’ names, but they described a set of experiments that has informed my thinking about language ever since. They were looking at the size of the input box in a communication app.

These researchers did experiments where they asked people to communicate with each other using a custom application. They presented different users with input boxes of different sizes: some got only a single line, others got three or four, and maybe some got six or eight lines.

What they found was that when someone was presented with a large blank space, as in an email application or the Google Docs application I’m writing this in, they tended to take their time and write long blocks of text, and edit them until they were satisfied. Only then did they hit send. Then the other user would do the same.

When the Carnegie Mellon researchers presented users with only one line, as in a text message app, their behavior was much different. They wrote short messages and sent them off with minimal editing. The short turnaround time resulted in a dialogue that was much closer to the rhythm of spoken conversation.

This echoed my own findings from a few years before. I was searching for features of French that I heard all over the streets of Paris but had never been taught in school, in particular what linguists call right dislocation (“Ils sont fous, ces Romains”) and left dislocation (“L’état, c’est moi”).

In 1998 the easiest place to look was USENET newsgroups, and I found that even casual newsgroups like fr.rec.animaux were heavy on the formal, carefully crafted types of messages I remembered from high school French class. I had already read some prior research on this kind of language variation, so I decided to try something with faster dialogue.

In Internet Relay Chat (IRC) I hit the jackpot. On the IRC channel, left and right dislocations made up between 21% and 38% of all finite clauses. I noticed other features of conversational French like ne-dropping were common as well. I could even see IRC newbies adapting in real time: they would start off trying to write formal sentences the way they were taught in lycée, and soon give up and start writing the way they talked.

At this point I have to say: I love dialogue. Don’t get me wrong: I can get into a nice well-crafted monologue or monograph. And anyone who knows me knows I enjoy telling a good story or tearing off on a rant about something. But dialogue keeps me honest, and it keeps other people honest too.

Dialogue is not inherently or automatically good. On Twitter as in many other places, it is used to harass and intimidate. But when properly structured and regulated it can be a democratizing force. It’s important to remember how long our media has been dominated by monologues: newspapers, films, television. Even when these formats contain dialogues, they are often fictional dialogues written by a single author or team of authors to send a single message.

One of my favorite things about the internet is that it has always favored dialogue. Before large numbers of people were on the internet there was a large gap between privileged media sources and independent ones. Those of us who disagreed with the monologues being thrust upon us by television and newspapers were often reduced to impotently talking back at those powerful media sources, in an empty room.

USENET, email newsletters, personal websites and blogs were democratizing forces because they allowed anyone who could afford the hosting fees (sometimes with the help of advertisers) to command these monologic platforms. They were the equivalent of Speakers’ Corner in London. They were like pamphlets or letters to the editor or cable access television, but they eliminated most of the barriers to entry. But they were focused on monologues.

In the 1990s and early 2000s we had formats that encouraged dialogue, like mailing lists and bulletin boards, but they had large input boxes. As I saw on fr.rec.animaux in 1998, that encouraged long, edited messages.
We did have forums with smaller input boxes, like IRC or the group chats on AOL Instant Messenger. As I found, those encouraged people to write short messages in dialogue with each other. When I first heard about Twitter with its 140-character limit I immediately recognized it as a dialogic forum.

But what sets Twitter apart from IRC or AOL Instant Messenger? Twitter is a broadcast platform. The fact that every tweet is public by default, searchable and assigned a unique URL, makes it a “microblog” site like some popular sites in China.

If someone said something on IRC or AIM in 1999 it was very hard to share it outside that channel. I was able to compile my corpus by creating a “bot” that logged on to the channel every night and logged a copy of all the messages. What Twitter and the sites it copied like Weibo brought was the combination of permanent broadcast, low barrier to entry, and dialogue.

This is why I’m bothered by Twitter threads, by screenshots of text, by the unending demands for an edit button. These are all attempts to overpower the dialogue on Twitter, to remove one of the key elements that make it special.

Without the character limits, Twitter is just a blogging platform. Of course, there’s nothing wrong with blogs! I’ve done a lot of blogging, I’ve done a lot of commenting on blogs and I’ve tweeted a lot of links to blogs. But I want to choose when to follow those links and go read those blog posts or news articles or press releases.

I want a feed full of dialogue or short statements. Threads and screenshots interrupt the dialogue. They aggressively claim the floor, crowding out other tweets. Screenshots interrupt the other tweets with large blocks of text, demanding to be read in their entirety. Threads take up even more of the timeline. The Twitter web app will show as many as three tweets of a thread, interrupting the flow of dialogue.

The experience of threads is much worse on Twitter clients that don’t manipulate the timeline, like TweetDeck (which was bought by Twitter in 2011) and HootSuite. If it’s a long thread, your timeline is screwed, and you have to scroll endlessly to get past it.

One of the things I love the most about Mastodon is the standard practice of making the first toot in a thread public, but publishing all the other toots as unlisted. That broadcasts the toot announcing the thread, and then gives readers the agency to decide whether they want to read the follow-up toots. It’s more or less the equivalent of including a link to a web page or blog post in a toot.

There’s a lot more to say about dialogue and social media, but for now I’m hugely encouraged by the feeling of being on Mastodon, and I’m hoping it leads us in a better direction for dialogue, away from threads and screenshots.

WASHINGTON, DC - OCTOBER 20: Actress and model Paris Hilton speaks during a news conference outside the U.S. Capitol October 20, 2021 in Washington, DC. Congressional Democrats held a news conference with Hilton to discuss child abuse and legislation to establish a “bill of rights” to protect children placed in congregate care facilities. (Photo by Alex Wong/Getty Images)

Listen to the voices of the sexy babies

A few days ago, Byron Ahn drew our attention to an excerpt from a new, six-hour audiobook, Inside Voice by Lake Bell, credited as an “actress/writer/director/producer.” Bell is a friend of author and podcaster Malcolm Gladwell, and Gladwell agreed to serve as a kind of sounding board for Bell’s ideas about something she calls “sexy baby voice,” pointing to the voices of Paris Hilton and Kim Kardashian as paradigm examples of it. Gladwell, whose company is publishing Inside Voice, also published this excerpt as a free bonus episode of his podcast Revisionist History, which I listen to regularly, although I’m almost two years behind.

Bell argues for a few points: that what she calls “sexy baby voice” is a distinct speech style with specific audible features, that it is particularly inauthentic (she claims several times that it requires effort to speak that way, and describes a coaching technique for helping women to find their “true” voices) and that it makes the women who use it sound stupider than Bell knows them to be. She repeatedly assures us that she is not passing judgment, and then uses extremely judgmental language to describe “sexy baby voice,” which I interpret as an application of “love the sinner, hate the sin.”

Ahn posted a series of Twitter threads about the excerpt. He notes that it’s problematic for Bell, a self-identified feminist, to criticize other women’s voices, but he focuses on the terminology that she uses to describe the features of “sexy baby voice,” particularly the word “pitch.” He concludes, “we should encourage public figures talking about voices to consult linguists who have the training.”

I’ve got a lot of thoughts and feelings about this excerpt and Bell’s idea of “sexy baby voice.” I could probably write several blog posts on the practical, cultural and social angles to this. For this post I’m going to stick with Ahn’s focus on what “sexy baby voice” is, phonetically. I sketched some of this out on Ahn’s Twitter thread, and I’ll synthesize and expand that here.

Bell says that the primary feature that defines “sexy baby voice” is “pitch,” and as linguists, we’re trained to interpret “pitch” as the fundamental frequency of the voice – essentially, the lowest pitch produced by the voice at any given time. I’ve been taking singing lessons, and all the singers and singing teachers I’ve talked to use “pitch” in the same way.

Ahn introduces his discussion of the “sexy baby voice” excerpt with a graph of the fundamental frequency of a segment of the recording – throughout the excerpt, Bell uses her own voice to demonstrate the “sexy baby voice” style, even though she says she does not use it in everyday conversation. In the graph he posts, the floor and ceiling of Bell’s fundamental frequency range are not particularly higher when she is using “sexy baby voice” than at other times.

Bell mentions two other factors: “vocal fry” (the linguistic term is “creaky voice”) and “slurring” speech. Ahn speculates that she may be picking up on other factors as well, like “SoCal vowels” or laryngeal constriction. He also acknowledges that “pitch” may refer to other pitch-related features besides fundamental frequency range, such as “uptalk,” a pattern of rising in fundamental frequency at the ends of phrases. Gladwell uses the word “uptalk” when echoing Bell’s explanations, but it’s not clear that he’s referring to phrase-final pitch rise.

So here’s where I come in: my gender expression is fluid, so I’ve been studying differences in vocal quality. When I listen to the samples in the chapter of “sexy baby voice” and … not-sexy-baby-voice (that’s for another post!) given by Bell, both in recordings and her own mimicry, I hear some creaky voice (“vocal fry”), but the main difference I hear is resonance.

This section is going to be a bit of a departure from my normal linguistics blogging, because I have not studied any of the literature on this. My understanding of it comes from practical training, so I don’t know who to cite or credit for any of this besides my teachers, Kristy Bissell and Erin Carney.  Of course, any inaccuracies are most likely due to my misunderstanding of what they’ve tried to teach me!

Resonance is about the pitch of speech, but it’s not about the fundamental frequency. It’s about everything else: the harmonics that result from the way the tones from our vocal folds echo around our bodies and are filtered through different parts of our vocal tracts and nasal passages. Just as plucking a string on an acoustic guitar produces overtones from the guitar body, whenever we arrange our vocal folds to talk or sing we produce overtones: higher pitched frequencies that can harmonize or clash with the fundamental frequency.

There are a ton of things you can do with resonance and it can get really complicated, so let’s focus on the primary resonance difference I’m hearing between Lake Bell’s “sexy baby voice” and the other examples. To me, the “sexy baby voice” examples sound brighter.

Bright and dark are useful terms to evoke the quality of resonance while distinguishing it from fundamental frequency. Bright sounds are ones where we hear more of the higher-pitched harmonics, while in dark sounds the lower harmonics dominate.
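If you want to put a number on “bright” versus “dark,” one common engineering proxy (my framing, not a term from vocal pedagogy) is the spectral centroid: the amplitude-weighted average frequency of the harmonics. The amplitude weightings below are invented for illustration.

```python
# Spectral centroid as a rough "brightness" score: the amplitude-weighted
# mean frequency. A spectrum where upper partials carry more energy gets
# a higher centroid; one dominated by the fundamental gets a lower one.
import numpy as np

def spectral_centroid(freqs, amps) -> float:
    freqs, amps = np.asarray(freqs, dtype=float), np.asarray(amps, dtype=float)
    return float((freqs * amps).sum() / amps.sum())

harmonics = [220.0 * k for k in range(1, 6)]
bright = spectral_centroid(harmonics, [1.0, 0.8, 0.7, 0.6, 0.5])  # upper partials strong
dark = spectral_centroid(harmonics, [1.0, 0.4, 0.2, 0.1, 0.05])   # fundamental dominates
print(bright > dark)  # the "bright" weighting sits higher in the spectrum
```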

As I’ve learned from my teachers, and as Bell demonstrates, there’s a lot we can do with our voices to shift the balance of harmonics towards bright or dark, but a substantial part of resonance comes from the structure of our bones, cartilage, muscles and fat. Higher-pitched harmonics tend to come from shorter vocal tracts, smaller nasal cavities, and in general, from smaller bodies. As a result, the voices of smaller people tend to sound brighter.
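The effect of vocal tract length can be sketched with a textbook simplification from acoustic phonetics (not something from Bell’s chapter): treat the tract as a uniform tube, closed at the glottis and open at the lips. Its resonances fall at odd multiples of c/4L, so shortening the tube pushes every resonance higher.

```python
# Quarter-wavelength resonator model of the vocal tract: a uniform tube
# closed at one end resonates at (2k-1) * c / (4L). A shorter tube (L)
# raises every resonance, which is one physical reason smaller vocal
# tracts sound brighter. The lengths below are illustrative.

SPEED_OF_SOUND = 35000.0  # cm/s in warm, humid air

def tube_resonances(length_cm: float, n: int = 3) -> list[float]:
    """First n resonances (Hz) of a quarter-wavelength resonator."""
    return [(2 * k - 1) * SPEED_OF_SOUND / (4 * length_cm) for k in range(1, n + 1)]

print(tube_resonances(17.5))  # ~17.5 cm tract: [500.0, 1500.0, 2500.0]
print(tube_resonances(14.0))  # shorter tract: every resonance is higher
```

Real vocal tracts are not uniform tubes, of course, which is why we can reshape them to move these resonances around.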

Testosterone during the teenage years also changes the configuration of our vocal tracts: thickening the vocal folds, making the larynx larger and shifting it lower in the throat. This is why men’s and trans women’s voices tend to sound darker than those of women, girls and prepubescent boys, even when singing the same pitch.

Bodies that see an increase in testosterone after puberty do not get larger or lower larynxes, but do tend to develop thicker vocal folds. This is why many trans men’s voices change, but often sound different from typical men’s voices. It is also, as Bell mentions, why women’s voices often change when they give birth or go through menopause.

As you might have guessed, this is where the “baby” in “sexy baby voice” comes from. Children are smaller than adults and tend to have brighter resonances. It’s also why Bell sees “sexy baby voice” as an exaggerated expression of femininity: women tend to be smaller than men and therefore have brighter voices. Women who haven’t given birth or gone through menopause tend to have brighter voices. Bright resonance suggests youth, femininity and immaturity.

As I mentioned above, there are several things that people can do, consciously or unconsciously, to shift their resonances, and I want to talk about them. I would also love to get into a discussion of the sociopolitical issues that Bell identifies around “sexy baby voice” and women’s voices in general. But this is already pretty long for a blog post, so I’ll save those for another time.

Screenshot of LanguageLab displaying the exercise "J'étais certain que j'allais écrire à quinze ans"

Imagining an alternate language service

It’s well known that some languages have multiple national standards, to the point where you can take courses in either Brazilian or European Portuguese, for example. Most language instruction services seem to choose one variety per language: when I studied Portuguese at the University of Paris X-Nanterre it was the European variety, but the online service Duolingo only offers the Brazilian one.

I looked into some of Duolingo’s offerings for this post, because they’re the most talked about language instruction service these days. I was surprised to discover that they use no recordings of human speakers; all their speech samples are synthesized using an Amazon speech synthesis service named Polly. Interestingly, even though Duolingo only offers one variety of each language, Amazon Polly offers multiple varieties of English, Spanish, Portuguese and French.
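To give a sense of how this works: Polly exposes varieties through the LanguageCode and VoiceId parameters of its SynthesizeSpeech request. Here’s a minimal sketch using boto3; the voice names are real Polly voices as of this writing, but the catalogue changes over time, and pick_voice is just my illustrative helper:

```python
# Sketch: requesting a specific regional variety from Amazon Polly.
# The voice names below are real Polly voices at the time of writing,
# but Amazon adds and retires voices; treat the mapping as illustrative.

VARIETY_VOICES = {
    "fr-FR": "Celine",   # Metropolitan French
    "fr-CA": "Chantal",  # Canadian French
    "pt-PT": "Ines",     # European Portuguese
    "pt-BR": "Vitoria",  # Brazilian Portuguese
}

def pick_voice(language_code: str) -> str:
    """Return a Polly VoiceId for the requested variety."""
    try:
        return VARIETY_VOICES[language_code]
    except KeyError:
        raise ValueError(f"No voice listed for {language_code!r}")

def synthesize(text: str, language_code: str) -> bytes:
    """Call Polly; requires AWS credentials and the boto3 package."""
    import boto3  # deferred import: not needed just to inspect the table
    polly = boto3.client("polly")
    resp = polly.synthesize_speech(
        Text=text,
        LanguageCode=language_code,
        VoiceId=pick_voice(language_code),
        OutputFormat="mp3",
    )
    return resp["AudioStream"].read()
```

The point is that the varieties are already there in the synthesis back end; a service like Duolingo simply chooses not to surface more than one of them.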

As an aside, when I first tried Duolingo years ago I had the thought, “Wait, is this synthesized?” but it just seemed too outrageous to think that someone would make a business out of teaching humans to talk like statistical models of corpus speech. It turns out it wasn’t too outrageous, and I’m still thinking through the implications of that.

Synthesized or not, it makes sense for a company with finite resources to focus on one variety. But if that one company controls a commanding market share, or if there’s a significant amount of collusion or groupthink among language instruction services, they can wind up shutting out whole swathes of the world, even while claiming to be inclusive.

This is one of the reasons I created an open LanguageLab platform: to make it easier for people to build their own exercises and lessons, focusing on any variety they choose. You can set up your own LanguageLab server with exercises exclusively based on recordings of the English spoken on Smith Island, Maryland (population 149), if you like.

So what about excluded varieties with a few more speakers? I made a table of all the Duolingo language offerings according to their number of English learners, along with the Amazon Polly dialect that is used on Duolingo. If the variety is only vaguely specified, I made a guess.

For each of these languages I picked another variety, one with a large number of speakers. I tried to find the variety with the largest number of speakers, but these counts are always very imprecise. The result is an imagined alternate language service, one that does not automatically privilege the speakers of the most influential variety. Here are the top ten:

Language          Duolingo dialect  Alternate dialect
English           Midwestern US     India
Spanish           Mexico            Argentina
French            Paris             Quebec
Japanese          Tokyo             Kagoshima
German            Berlin            Bavarian
Korean            Seoul             Pyongyang
Italian           Florence          Rome
Mandarin Chinese  Beijing           Taipei
Hindi             Delhi             Chhattisgarhi
Russian           Moscow            Almaty

To show what could be done with a little volunteer work, I created a sample lesson for a language that I know, the third-most popular language on Duolingo, French. After France, the country with the next largest number of French speakers is Canada. Canadian French is distinct in pronunciation, vocabulary and to some degree grammar.

Canadian French is stigmatized outside Canada, to the point where I’m not aware of any program in the US that teaches it, but it is omnipresent in all forms of media in Canada, and there is quite a bit of local pride. These days at least, it would be as odd for a Canadian to speak French like a Parisian as for an American to speak English like a Londoner. There are upper and lower class accents, but they all share certain features, notably the ranges of the nasal vowels.

I chose a bestselling author and television anchor, Michel Jean, who has one grandmother from the indigenous Innu people and three grandparents presumably descended from white French settlers. I took a small excerpt from an interview with Jean about his latest novel, where he responds spontaneously to the questions of a librarian, Josianne Binette.

The sample lesson in Canadian French based on Michel Jean’s speech is available on the LanguageLab demo site. You are welcome to try it! Just log in with the username demo and the password LanguageLab.

What is “text” for a sign language?

I started writing this post back in August, and I hurried it a little because of a Limping Chicken article guest-written by researchers at the Deafness, Cognition and Language Research Centre (DCAL) at University College London. I’ve known the DCAL folks for years, and they graciously acknowledged some of my previous writings on this issue. I know they don’t think the textual form of British Sign Language is written English, so I was surprised that they used the term “sign-to-text” in the title of their article and in a tweet announcing it. After I brought it up, Dr. Kearsy Cormier acknowledged that there was potential for confusion in that term.

So, what does “sign-to-text” mean, and why do I find it problematic in this context? “Sign-to-text” is an analogy with “speech-to-text,” also known as speech recognition, the technology that enables dictation software like Dragon NaturallySpeaking. Speech recognition is also used by agents like Siri to interpret words we say so that they can act on them.

There are other computer technologies that rely on the concept of text. Speech synthesis is also known as text-to-speech. It’s the technology that enables a computer to read a text aloud. It can also be used by agents like Siri and Alexa to produce sounds we understand as words. Machine translation is another one: it typically proceeds from text in one language to text in another language. When the DCAL researchers wrote “sign-to-text” they meant a sign recognition system hooked up to a BSL-to-English machine translation system.

Years ago I became interested in the possibility of applying these technologies to sign languages, and created a prototype sign synthesis system, SignSynth, and an experimental English-to-American Sign Language system.

I realized that all these technologies make heavy use of text. If we want automated audiobooks or virtual assistants or machine translation with sign languages, we need some kind of text, or we need to figure out a new way of accomplishing these things without text. So what does text mean for a sign language?

One big thing I discovered when working on SignSynth is that many people (unlike the DCAL researchers) really think that the written form of ASL (or BSL) is written English. On one level that makes a certain sense, because when we train ASL signers for literacy we typically teach them to read and write English. On another level, it’s completely nuts if you know anything about sign languages. The syntax of ASL is completely different from that of English, and in some ways resembles Mandarin Chinese or Swahili more than English.

It’s bad enough that we have speakers of languages like Moroccan Arabic and Fujianese that have to write in a related language (written Arabic and written Chinese, respectively) that is different in non-trivial ways that take years of schooling to master. ASL and English are so totally different that it’s like writing Korean or Japanese with Chinese characters. People actually did this for centuries until someone smart invented hangul and katakana, which enabled huge jumps in literacy.

There are real costs to this, serious costs. I spent some time volunteering with Deaf and hard-of-hearing fifth graders in an elementary school, and after years of drills they were able to put English words on paper and pronounce them when they saw them. But it became clear to me that despite their obvious intelligence and curiosity, they had no idea that they could use words on paper to send a message, or that some of the words they saw might have a message for them.

There are a number of Deaf people who are able to master English early on. But from extensive reading and discussions with Deaf people, it is clear to me that the experience of these kids is typical of the vast majority of Deaf people.

It is a tremendous injustice to a child, and a tremendous waste of that child’s time and attention, for them to get to the age of twelve, at normal intelligence, without being able to use writing. This is the result of portraying English as the written form of ASL or BSL.

So what is the written form of ASL? Simply put, it doesn’t have one, despite several writing systems that have been invented, and it won’t have one until Deaf people adopt one. There will be no sign-to-text until signers have text, in their language.

I can say more about that, but I’ll leave it for another post.

Why do people make ASL translations of written documents?

My friend Josh was puzzled to see that the City of New York offers videos of some of its documents, translated from the original English into American Sign Language, on YouTube. I didn’t know of a good, short explainer online, and nobody responded when I asked for one on Twitter, so I figured I’d write one up.

The short answer is that ASL and English are completely different languages, and knowing one is not that much help in learning the other. It’s true that some deaf people are able to lipread, speak and write fluent English, but this is generally because they have some combination of residual hearing, talent, privilege and interest in language. Many deaf people need to sign for daily conversation, even if they grew up with hearing parents.

It is incredibly difficult to learn to read and write a language that you can’t speak, hear, sign or see. As part of my training in sign linguistics I spent time with two deaf fifth grade students in an elementary school in Albuquerque. These were bright, curious children, and they spent hours every day practicing reading, writing, speaking and even listening – they both had cochlear implants.

After visiting these kids several times, talking with them in ASL and observing their reading and writing, I realized that at the age of eleven they did not understand how writing is used to communicate. I asked them to simply pass notes to each other, the way that hearing kids did well before fifth grade. They did each write things on paper that made the other laugh, but when I tried giving them specific messages and asking them to pass those messages on in writing, they had no idea what I was asking for.

These kids are in their thirties now, and they may well be able to read and write English fluently. At least one had a college-educated parent who was fluent in both English and ASL, which helps a lot. Other factors that help are the family’s income level and a general skill with languages. Many deaf people have none of these advantages, and consequently never develop much skill with English.

In principle, the City could even print some of these documents in ASL. Several writing systems have been created for sign languages, some more complete than others. For a variety of reasons they haven’t caught on in Deaf communities, so using one of them would not help the City get the word out about school closures.

The reasons that the City government provides videos in ASL are thus that ASL is a completely different language from English, many deaf people do not have the exceptional language skills necessary to read a language they don’t really speak, and the vast majority of deaf people don’t read ASL.

Remembering Alan Hudson

On Saturday I found out that Alan Hudson died. Alan was my doctoral advisor at the University of New Mexico until his retirement in 2005, and a source of support after that.

I first met Alan when I visited the UNM Linguistics Department in 1997. Alan welcomed me into his office with a broad smile, and asked, “So Angus, have you made up your mind about whether you want to come here?”

“Well…” I said. I had been accepted into the PhD program, but had just come from a very discouraging encounter with another professor, and was ready to give up and go home. Before I could continue, Alan said, “Is there anything I can say to convince you?” I replied, “Well, I guess you just did.”

Alan was not a big name in linguistics; he never published a book. I regularly had to tell people that my advisor was not Dick Hudson. But Alan had a profound insight about the sociology of language that changed my career trajectory and my thinking about language and social justice.

In a seminar on Societal Bilingualism the next year, Alan led us through the case studies laid out by Joshua Fishman, his own advisor, in his book Reversing Language Shift. Fishman’s book is of interest to anyone concerned with language “death” (a problematic metaphor unless the language users themselves are being killed). As a Dubliner who had become fluent in Irish through compulsory government schooling, Alan cared deeply about his national language, but he did not have high hopes for it recovering its status as the primary language of Ireland.

Fishman argues that we can prevent large numbers of people from abandoning a language by establishing “diglossia” – arrangements where language H is used for some functions and language L is used for others. Charles Ferguson had shown in 1959 that diglossic arrangements tend to be stable over time. Fishman believed that if language users can establish similar functional separations, they can stop language shift.

Drawing in part on his own research in Ireland and Switzerland, Alan observed that the cases Fishman categorized as diglossia did not fit with Ferguson’s examples. The key factor in Ferguson’s cases was that there were no children in the speech community who were native speakers of H: no child speakers of High German in Switzerland, no child speakers of Metropolitan French in Haiti, etc. In Ireland, by contrast, there are millions of English-speaking children, and in the Netherlands Frisian-speaking children go to school with Dutch-speaking peers.

The result of this contact is that most of these children eventually shift to the higher-prestige, better-paying language, and will not pass their native languages on to their children. There are only two ways to stop it: reverse the power dynamic (as happened in Finland when Russia conquered it from Sweden, I discovered in a term paper that semester) or isolate the children (as Kamal Sridhar observed in her Thanjavur Marathi community).

This was an important insight, with major implications for linguistics. None of us in the course were interested in segregating language groups from each other, and as linguists we were not positioned to shift the socioeconomic power differentials between groups. If the prescription for reversing language shift can be captured in a single sentence, that leaves no ongoing role for linguists.

Since then I have not been terribly surprised that Alan’s insight has not been enthusiastically embraced by other linguists. As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” Alan published two articles describing his definition of diglossia, but framed it in theoretical terms, downplaying the implications for efforts at language maintenance and revitalization.

Alan Hudson supervised my studies and my comprehensive exams, but retired before I was ready to begin my dissertation. He continued to provide valuable advice, and attended my dissertation defense. He will be remembered as an insightful linguist and a supportive teacher.