Alan told us about the neuroimaging that allows us to learn about sentence processing. Cool techniques, and very useful knowledge. Notes and slideshow coming soon.
Pamela Toman, a data scientist who studied linguistics at Georgetown, talked about the morphology of American Sign Language. We explored various syntactic features in ASL, and talked about sign languages and the deaf community in general.
Brian and Aaron gave a preparatory lecture last week.
Brian and Aaron introduced us to morphology, in preparation for Pamela Toman’s talk on the morphology of American Sign Language next week. Their presentation included a NACLO-style problem.
NOTES (thanks to Brian, as may be guessable):
The basic unit of meaning in a language is the morpheme, of which there are two types: free and bound. There are two types of bound morphemes: inflectional (those that don’t change the inherent meaning of a word, like the plural -s) and derivational (those that do, like the -ness in “happiness”). The null morpheme is a specific type of derivational morpheme that is not pronounced or written, but changes the lexical class of a word, as when the verb “run” becomes the noun “run” with no visible marking.
Languages can be categorized based on how they handle morphology. See the slides for more information.
The presentation ended with a demonstration of a method for solving certain types of morphology problems. See the slides for more information.
I talked about my summer project. We also watched the great “Tono Tono”, and worked on NACLO problems.
About 300,000 people each year suffer some degree of aphasia after strokes. A lot of post-stroke aphasics end up scoring really well on the standard tests after a while, so they’re considered recovered and insurance stops paying for treatment. The problem is that the standard tests only cover phonological, lexical, and syntactic stuff; many of these people (we estimate ~20,000 a year) have lingering discourse-level difficulties that prevent them from having normal conversations or doing their jobs.
We had transcripts of “well-recovered patients” and controls describing Norman Rockwell paintings. I cleaned up the transcripts and scored them for content units. Then I analyzed the data on a word-by-word basis by statistically comparing the usage percent of each word between the two groups (I developed the method). Some 35 words were significant, most of which were detailed descriptors which were used more by controls. We also used a neat statistical method called VLSM to identify brain areas associated with each deficit, which lets us speculate about what might be responsible. The results indicated that patients conserve unnecessary words, as the physical act of speech is difficult. Applied to a much larger corpus, this technique could allow the development of a diagnostic tool to quickly flag patients with discourse difficulties based on a speech sample.
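The post doesn’t name the statistical test, but the word-by-word comparison can be sketched roughly like this: pool each group’s transcripts, then run a two-proportion z-test on each word’s usage rate. This is a minimal sketch under my own assumptions (the function names and the choice of test are illustrative, not the study’s actual method):

```python
from collections import Counter
from math import sqrt, erf

def word_rates(transcripts):
    """Pool a group's transcripts; return (word -> count, total word count)."""
    counts = Counter(w for t in transcripts for w in t.lower().split())
    return counts, sum(counts.values())

def two_proportion_z(k1, n1, k2, n2):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    if se == 0:
        return 0.0, 1.0
    z = (p1 - p2) / se
    # two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2))) / 2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

def compare_usage(patient_transcripts, control_transcripts, alpha=0.05):
    """Flag words whose usage rate differs significantly between the groups."""
    pc, pn = word_rates(patient_transcripts)
    cc, cn = word_rates(control_transcripts)
    results = []
    for word in set(pc) | set(cc):
        z, p = two_proportion_z(pc[word], pn, cc[word], cn)
        if p < alpha:
            results.append((word, pc[word] / pn, cc[word] / cn, p))
    return sorted(results, key=lambda r: r[3])     # most significant first
```

On real data you’d also want a multiple-comparisons correction, since thousands of words are tested at once.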
Polly O’Rourke, a scientist at UMD’s Center for the Advanced Study of Language, told us about aphasia: the types, the causes, the symptoms, and what we can learn from it. I’ll be talking about my summer project, which involved aphasia, at our next meeting.
Notes and slideshow coming soon.
We worked on NACLO problems and ate cookies.
Michael McCourt, who’s a grad student in the Philosophy Department at UMD and studies logic and language, spoke about semantic paradoxes. He assumed the background that I gave in my lecture last week, available in this post (and the linked papers).
NOTES are below (thanks to Hannah Tsai). Last week’s post should help with context. Slideshow hopefully coming soon.
- Semantic paradoxes
- (A) Epimenides the Cretan says that all Cretans are liars.
- (A) is both true and false. This is bad.
- This is both self-referential and assumes bivalence (that every sentence is either true or false)
- (B) This sentence is not true
- (B) is the simple-untruth liar: it assumes sentences are true or untrue, rather than true or false
- Still self referential…
- (A) and (B) are semantically defective.
- (D) (D) is true.
- Not a paradox, but it adds nothing.
- Non-classic logic solution
- Generates a need for a non-classical logic in which the Law of Excluded Middle doesn’t hold for all sentences (there are more than two truth values), or in which contradictions can be both true and false but the principle of explosion (once a contradiction has been asserted, any proposition, or its negation, can be inferred from it) doesn’t hold
- Set theory solution
- Russell’s set-theoretic paradox is resolved by building sets in stages: a set formed at stage n can only contain members formed at stage n-1 or earlier
- Tarski claims that the liar sentence isn’t a genuine sentence: a truth predicate may only apply to sentences of a lower-level language, never to sentences of its own language
- Contextualist solution
- (B) is semantically defective, but some sentences like it are true and some are not true; it all depends on context
- YABLO’S PARADOX
- An infinite sequence of sentences, none of which refers to itself, that together generate a paradox
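The talk didn’t tie the non-classical-logic option to a specific system, but the two moves in the notes correspond to standard three-valued logics: strong Kleene logic (K3), where the Law of Excluded Middle fails, and Priest’s Logic of Paradox (LP), where contradictions can be true but explosion fails. A toy sketch of both (my own illustration, not from the lecture):

```python
from itertools import product

T, B, F = 1.0, 0.5, 0.0   # true, middle value ("both"/"neither"), false

def neg(a): return 1.0 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

def valid(premises, conclusion, n_atoms, designated):
    """An argument is valid iff every valuation making all premises
    designated also makes the conclusion designated."""
    for v in product([T, B, F], repeat=n_atoms):
        if all(p(v) in designated for p in premises) \
                and conclusion(v) not in designated:
            return False
    return True

K3 = {T}      # gaps: middle value undesignated -> excluded middle fails
LP = {T, B}   # gluts: middle value designated  -> explosion fails

# Law of Excluded Middle, A or not-A, as an argument from no premises:
lem = lambda v: disj(v[0], neg(v[0]))
print(valid([], lem, 1, K3))   # False: fails when A takes the middle value
print(valid([], lem, 1, LP))   # True

# Explosion: from (A and not-A), infer an arbitrary Q:
contradiction = lambda v: conj(v[0], neg(v[0]))
q = lambda v: v[1]
print(valid([contradiction], q, 2, K3))  # True: the premise is never designated
print(valid([contradiction], q, 2, LP))  # False: A=both, Q=false is a counterexample
```

The liar gets the middle value in both systems; the two logics just disagree about whether that value counts as a way of being true.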
We talked about some semantic paradoxes and their attempted resolutions, which Michael McCourt will continue with next week. Pretty much everything was based on this paper. Please read it if you weren’t there — it’s not long, but it’ll take some thinking about, so probably budget 20-30 minutes. Try talking each paradox out to yourself as you go along, and reference the notes below as needed. The slideshow (follows pretty directly from the paper) is here.
The paper mentions two interesting paradoxes that it doesn’t go into much detail on, but we talked about a little more. One of them is Yablo’s paradox, which you can read about in this (single-paragraph) journal article.
The other is Curry’s paradox. Take this sentence (call it S): “If this sentence is true, then ducks are blue.” How would we go about deciding if S is true?
Well, S is a statement of if-then form; it’s saying A -> B, where A is “this sentence is true” and B is “ducks are blue.” Whenever we want to prove an if-then statement, we assume that the antecedent (A) is true and try to prove the consequent (B). For example, if we were proving “If 2x + 2 > 8, then x > 3”, we would assume 2x + 2 > 8, then reason that 2x > 6, and therefore that x > 3.
So let’s assume A is true. Then we’re assuming “this sentence is true”. That means “A -> B” is also true, since that’s what the sentence means. Therefore, B is true. We’ve found that if we assume A, then B is true; therefore, A implies B; therefore the whole sentence is true.
So “If this sentence is true, then ducks are blue” is true. The sentence is true, so ducks are blue.
The paradox is that we can use this logic to prove any sentence whatsoever, which can’t be right. Yet none of the steps we used were weird; they’re all ideas that are used all the time in proofs. So what’s the problem?
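Written out step by step (in my own notation, with $T(S)$ for “$S$ is true” and $B$ for “ducks are blue”), the reasoning above is:

```latex
% S is, by construction, the sentence  T(S) \to B.
\begin{align*}
&1.\ T(S) && \text{assume, for conditional proof} \\
&2.\ T(S) \to B && \text{1, T-schema: } T(S) \leftrightarrow (T(S) \to B) \\
&3.\ B && \text{modus ponens, 1, 2} \\
&4.\ T(S) \to B && \text{conditional proof, discharging assumption 1} \\
&5.\ T(S) && \text{4, T-schema} \\
&6.\ B && \text{modus ponens, 4, 5}
\end{align*}
```

Notice that negation never appears, so the gap and glut responses to the liar don’t obviously help here.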
Mr. Rose talked about Intensional Semantics, including why the Extensional approach from last week doesn’t really work. Note that it’s “intensional” as opposed to “extensional”, not “intentional” as in “concerning intent”; linguists have strange naming strategies.
Notes coming soon.
Mr. Rose told us about Formal Extensional Semantics, which is the study of how words in a sentence come together to form meaning.
NOTES (thanks to WL — to be completed):
- This stuff might be slightly out of date
- Semantics is about meaning. This is important, because the whole reason we use language is that it has meaning.
- The form that sentences take is based on their meaning
- To know the meaning of a sentence is to know its truth conditions (when it is true)
- (1) There is a bag of potatoes in my pantry.
- Truth conditions: what must be true about the world for this sentence to be true
- It’s awkward to study objects made of words while using words, so we distinguish the object language (the language of the object under study) from the metalanguage (the language used to study the object).
- The sentence “There is a bag of potatoes in my pantry” is true if and only if there is a bag of potatoes in my pantry.
- Quotes signify the element being studied
- Problem: It’s just unsatisfactory to say that the sentence “_______” is true iff _______. Doesn’t explain anything or give predictions.
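One way to see what a more explanatory theory might offer (my own toy illustration, not from the lecture): if predicate meanings are modeled as sets of entities, the truth conditions of “There is a bag of potatoes in my pantry” fall out compositionally, and the same machinery makes predictions about new sentences and new situations:

```python
# A toy extensional model: predicate meanings are sets of entities.
pantry_model = {
    "bag_of_potatoes": {"bag1"},          # things that are bags of potatoes
    "in_my_pantry": {"bag1", "shelf1"},   # things located in my pantry
}

empty_model = {
    "bag_of_potatoes": set(),
    "in_my_pantry": {"shelf1"},
}

def there_is(noun, place, model):
    """Truth conditions for 'There is a <noun> <place>':
    true iff some entity falls under both predicates."""
    return bool(model[noun] & model[place])

print(there_is("bag_of_potatoes", "in_my_pantry", pantry_model))  # True
print(there_is("bag_of_potatoes", "in_my_pantry", empty_model))   # False
```

Unlike the bare T-sentence, this says *why* the sentence is true in one situation and false in another, which is the kind of prediction the notes complain is missing.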