Origin of Language (Alan Du, 9/11/13)

First meeting of Linguistics this semester. We learned about the evolution of language, which should provide a good foundation for all the wonderful things we’ll be learning this year. We also got in a little NACLO practice.

Slides are available here.

One of the big questions is how human language evolved. Obviously, this is a hard problem. Unlike the evolution of the eye or something, language doesn’t leave a trace. There’s no fossil sentence we can dig up and study.

To answer the question, we need to consider two other questions: how different is human language from animal “languages”, and how old is human language? The first will tell us what exactly evolved, and the second will tell us about possible evolutionary mechanisms.

Human vs Animal Language

At first glance, it might not seem that human and animal languages are that different. After all, animals can communicate information very effectively (see the waggle dance) and some definitely have some form of grammar (bird song). In fact, we believe that humpback whale songs have hierarchical structure, something long thought unique to humans (the paper is available here; it’s a beautiful piece of information theory, well worth the read). So then, what really is the difference between humans and animals?

The two major differences between humans and animals are vocabulary and grammar. Human vocabulary is much, much richer than an animal’s. Humans know tens of thousands of words, while even the most complex animal languages have only a couple hundred symbols: about one hundredth of the size of ours. Human vocabulary is also very fluid: words are invented, changed, and forgotten all the time. The words in a novel today are very different from the words in a novel just 20 years ago. Animal vocabularies, by contrast, are very static and hardly ever change.

The other major difference is the complexity of the grammar. Although some animals do have complex grammars, human grammar is even more complex. For example, only human language exhibits recursive properties. Human language also has a much greater variety of symbol patterns than animal languages.

Dating Language

One way to date language is through fossil evidence. Although we can’t find fossil sentences, we can find fossil vocal tracts and skulls. By determining when our vocal and auditory systems became similar to modern human ones, we can approximate when language evolved.

Apes have air sacs. Humans don’t.

We focused on something called the laryngeal air sac. The air sac is the round thing that inflates when an ape yells (see picture). While apes have air sacs, humans don’t.

We know that Australopithecus afarensis, which lived ~3.3 million years ago, had air sacs. We also know that Homo heidelbergensis, which lived 600,000 years ago, didn’t. So by air sac dating, language evolved somewhere between 600,000 and 3.3 million years ago (for more information about this, see this paper). We can also look at other adaptations of human vocal tracts to get a narrower window. Unfortunately, fossils are rare, and the vocal tract doesn’t fossilize well, so this approach doesn’t give us all that much information (see here for a more detailed discussion).

We also discussed using the FOXP2 gene for dating. FOXP2 is a gene that is somehow related to language. All humans have essentially the same copy of FOXP2; those with mutations have severe language disorders. This strongly suggests that FOXP2 was crucial for the evolution of language (see here for more info). Using genetic drift analysis, people have dated the human FOXP2 variant to around 300,000 to 400,000 years old (previous studies dated FOXP2 to around 200,000 years old, but newer evidence suggests an older age).

Estimates of language age

Another piece of evidence comes from language drift analysis. In 2012, a team of researchers analyzed the phoneme (sound) variance across languages. By estimating how fast phonetic drift occurs, they estimated language to have arisen between 100,000 and 600,000 years ago (see here). Unfortunately, as the authors themselves admit, the analysis relies on a lot of untested assumptions and an over-simplified methodology. Still, it’s one of the only estimates based on linguistic data.

The last piece of evidence (and the weakest, in my opinion) is archaeological. We know that cave paintings, large communities, and other signs of complexity started around 100,000 years ago. Some people (like Chomsky) take this to mean that language also evolved ~100,000 years ago.

Darwin’s Problem and Chomsky’s Hypothesis

Taking all of this together, we estimate language to have evolved between 100,000 and 700,000 years ago. Unfortunately, this gives us a problem: language is really, really complex. So how on Earth did it evolve in that short amount of time?

This was one of the major criticisms of Darwin’s evolutionary hypothesis. Evolution works essentially through random chance: some small change happens randomly, and if it’s better, it sticks. But how could something really complex evolve just through chance? The classic example is the eye. The eye’s a beautifully crafted precision machine. How could it just appear by chance? (There’s discussion about this here.)

One answer is that language’s evolution occurred over a really, really long period of time. But what started the process? We don’t know.

Chomsky’s answer is to assume that language just suddenly appeared around 100,000 years ago (for reasons that are unclear to me): at that point, humans developed the merge operation (how language followed from that is essentially the study of minimalism).

Logical conclusion of merge process

What is merge? Merge is the operation that combines two words to create a phrase. In other words, it merges two child nodes into a single parent node. For example, in the tree to the right, it combines the I and the VP into an I’.
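To make that concrete, here’s a tiny Python sketch of my own (not from the slides; the labels are just stand-ins for the tree in the picture). Merge takes two things, either bare words or phrases that merge has already built, and wraps them in a new labeled parent node:

```python
# Toy illustration of merge: combine two syntactic objects (words or
# phrases) into a single labeled parent node, here just a tuple.
def merge(label, left, right):
    return (label, left, right)

# Roughly the example from the tree: merging an I head with a VP gives an I'.
vp = merge("VP", "sleep", "furiously")   # [VP sleep furiously]
i_bar = merge("I'", "I", vp)             # [I' I [VP sleep furiously]]

# Since merge's output is a valid input to merge, phrases can nest without bound.
print(i_bar)
```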

There are two key features of merge that Chomsky highlights. First, merge is unbounded. It can combine two phrases from far away (although arguably, it can only combine traces together). The second is that merge works independently of meaning. The classic example is “Colorless green ideas sleep furiously.” We can still parse this sentence even though it doesn’t make any sense.

These properties are crucial for Chomsky’s argument. Merge is useful in lots of other contexts, like merging two numbers and an operation to do arithmetic. In Chomsky’s conception, merge, and language by extension, evolved to help with this kind of symbolic processing, not to communicate with others.
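To illustrate what “merging two numbers and an operation” might look like, here’s another hypothetical sketch of my own (not Chomsky’s): the same toy merge builds an arithmetic expression, and a small recursive evaluator computes its value.

```python
# The same toy merge, reused for arithmetic instead of syntax.
def merge(label, left, right):
    return (label, left, right)

def evaluate(node):
    """Recursively compute the value of a merged arithmetic tree."""
    if not isinstance(node, tuple):        # leaves are plain numbers
        return node
    op, left, right = node
    a, b = evaluate(left), evaluate(right)
    return a + b if op == "+" else a * b

# Merge applies to its own output, so expressions nest without bound.
expr = merge("+", merge("*", 2, 3), 4)     # (2 * 3) + 4
print(evaluate(expr))                      # prints 10
```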

Let me repeat that. He says that language did not evolve for communication. It evolved to help us think better. This is an extremely radical idea. For more than 2000 years, people have thought that language was for communication. And it seems so intuitively true; if you ask random people on the street “Why do we have language?”, they’ll say “So we can tell each other stuff.”

So what’s Chomsky’s support? Well, his main argument is a logical one. He says that evolving language was a kind of miracle, a very rare event. If language is for communication, then to get any evolutionary benefit, at least two people need to have it at the same time. Chomsky argues that for the same miracle to occur twice, in the same place at the same time, is just too improbable.

I personally think that’s kind of a weak argument. But there’s a study that I found much more convincing. In this study, we have a rectangular room. We put food in one corner of the room, and then train a rat to find the food. We then take the rat, disorient it, and put it back in the room to see whether it can find the correct corner.

Food is at A. Notice that A and C are geometrically identical (left corners if you’re facing the short side).

Logically, if the food is hidden in corner A, then the rat has no way of telling whether the food is in A or C. It was disoriented; all it knows is that the food’s in the left corner when you’re facing the short side. So it just guesses one of them. And, as we expect, about half the time, the rat will go to A and about half to C.

What’s interesting is that if you make A different from C somehow, the rats still can’t tell the difference. The researchers gave A a distinct smell, a different texture, a different color, and shined a light on it. Despite all these extra signals, the rat couldn’t combine its “left corner” geometric knowledge with the new distinguishing information. It couldn’t merge those two pieces of information together.

Not surprisingly, if you make a human adult do this, they can combine those two pieces of information. Interestingly, though, a pre-linguistic baby cannot. Even more interesting, under certain conditions adults can’t do it either. To make the task a little more complicated, the researchers made the adults multitask. In one setup, they had the adults play back a rhythm to the researchers. In another, they had the adults repeat speech back to the researchers. During the rhythm shadowing, adults could combine the information. But during the verbal shadowing, they couldn’t.

A possible explanation is that combining the information needs merge. But because the adults were speaking, they were already using up all their merge abilities. So this could be evidence that merge is just a general operation that humans use, not specific to language.

I’m a bit more skeptical. After all, the verbal shadowing task needed very little linguistic knowledge. It could just be that speaking is a lot more taxing than tapping a rhythm. Still, it’s interesting to look at.

Note that Chomsky offers no explanation for the massive difference in vocabulary size. As far as I know, that’s just a feature of our bigger/better brains.

Summary (TLDR)

We looked at animal and human languages, and saw that human languages are more complex and have far more words than animal languages. We then estimated human language to be between 100,000 and 700,000 years old. We also discussed Chomsky’s idea for the evolution of language.

About Alan Du

I'm one of the founders and co-presidents of this club. I also maintain this website. My main interests are all about cognition and intelligence. The idea that a bunch of atoms can combine and form something self-aware is absolutely fascinating. Linguistically, I'm interested in integrating theoretical syntax with NLP, grammar inference, figuring out how the brain processes language, and creating a program with true artificial language capacities.
