Upcoming Colloquia
Fall 2024
Georgia Zellou (UC Davis)
Friday, September 13, 2024, 3:30pm to 5pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: Linguistic and social biases impact speech communication in human-computer interaction
Abstract: People are now regularly interacting with voice assistants (VAs), conversational agents that allow users to complete tasks on a machine through spoken language. The widespread adoption and daily use of VAs by millions of people, and their increasing use in financial, healthcare, and educational applications, raise important questions about the linguistic and social factors that affect spoken language interactions with machines.
We are exploring issues of linguistic and social bias that impact speech communication in human-computer interaction, particularly during cross-language transfer, learning, or adaptation of some kind. In this talk, I will present two case studies illustrating some of our most recent work in this area. The first study examines cross-language ASR transfer: we find systematic linguistic and phonetic disparities when machines trained on a source language are transferred to speech recognition of a novel, low-resource target language. The second study examines social bias in word learning by humans using voice-enabled apps: we find that word learning is inhibited when the social cues presented by the voice mismatch the linguistic information.
Together with highlights from other ongoing work in my lab, these studies underscore that human-computer linguistic communication is a rich testing ground for investigating issues in speech and language variation. Examining linguistic variation during HCI can enrich and elaborate linguistic theory, and it presents opportunities for linguists to provide insights for improving both the function and fairness of these technologies.
Linnaea Stockall (Queen Mary University of London)
Friday, November 8, 2024, 3:30pm to 5pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: The neural bases of linguistic structure building vs. interpretation across the world's languages
Abstract: In this talk, I'll discuss a program of research that combines a very simple single-word lexical decision paradigm with concurrent MEG recording to investigate how we assemble complex linguistic units (morphologically complex words and minimal phrases) from their constituent pieces, and how we generate and assess the interpretations that correspond to those structures. I'll survey evidence my SAVANT team (https://savant.qmul.ac.uk/) and I have now collected from 7 languages (with 3 more in progress) that is consistent with a successive-cyclic Y model of grammar, in which initial syntactic composition precedes and feeds subsequent interpretation, and word-internal processes rely on the same neural resources and mechanisms as sentential processes.
Karlos Arregi (University of Chicago)
Friday, November 22, 2024, 3:30pm to 5pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: TBA
Abstract: TBA
Winter 2025
Aron Hirsch (University of Maryland)
Friday, January 17, 2025, 3:30pm to 5pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: TBA
Abstract: TBA
Eleanor Chodroff (University of Zurich)
Friday, January 31, 2025
Location: TBA
Title: TBA
Abstract: TBA
Kathryn Davidson (Harvard University)
Friday, April 4, 2025
Location: TBA
Title: TBA
Abstract: TBA
Previous Colloquia
Fall 2023
Dr. Valentine Hacquard (University of Maryland)
Friday September 29, 2023, at 3:30pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: Endless Possibilities in Child Language
Abstract: Children have been shown to struggle with the force of modals: they tend to accept possibility modals (like might) in environments where adults prefer necessity modals (like must), and necessity modals in environments where adults require possibility modals. In this talk, I present various studies which test the robustness of children’s modal difficulties, probe the potential linguistic and nonlinguistic sources of these difficulties, and point to how children eventually overcome them.
Dr. Meredith Tamminga (University of Pennsylvania)
Friday October 20, 2023, at 3:30pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: Language Users' Expectations Shape Phonetic Flexibility
Abstract: Language users show considerable flexibility in their phonetic perception and production. Phonetic flexibility phenomena such as convergence and perceptual learning are of broad interest because of their connections to questions in language learning, sociolinguistic variation, and diachronic change. In this talk I will present two case studies on how phonetic flexibility is influenced by language users' expectations. The first case study is on expectation-driven convergence, which is when speakers converge toward a regional accent feature that they expect, but crucially do not hear, from an interlocutor. In work with Lacey Wade and Dave Embick, we show that people with different dialect backgrounds exhibit expectation-driven convergence triggered by different kinds of sociolinguistic expectations. The second case study is on cross-talker generalization in perceptual learning, which is when listeners shift a perceptual category boundary based on input from one talker and then, under some circumstances, extend that expectation to a different talker. In work with Wei Lai, we propose that perceptual learning involves learning both a bias toward identifying a particular phonological category and a new phonetic boundary between two phonemes, and that listeners will generalize the shift only when those two aspects of what they have learned match. I conclude with reflections on the thematic parallels between these two case studies and potential implications for understanding larger-scale phenomena like language change.
Dr. Claire Halpert (University of Minnesota)
Friday November 10, 2023, at 3:30pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: What Does It Seem That Hyperraising Blocks?
Abstract: In this talk, I explore an unusual interaction between A-movement (raising out of finite clauses) and A-bar movement (long-distance wh-movement) in the Bantu language Zulu: while both types of movement are independently permitted out of complement clauses, A-bar movement is blocked in raising environments. I demonstrate that this ungrammaticality does not result from interactions between the moving subject and the wh-phrase themselves, but argue instead that it arises from the hyperraising process in Zulu (Halpert 2019). In raising out of a finite clause (hyperraising) in Zulu, finite embedded clauses are implicated in an agreement dependency that their counterparts in non-hyperraising contexts are not; the wh-facts I discuss here suggest that this dependency creates the same opacity that we find in instances of clausal dislocation and object agreement in Zulu. What can we learn from this complex and unexpected opacity profile in Zulu? The simplest approach to these patterns is to treat all instances of opacity in Zulu (and perhaps more generally) as cases of intervention for specific features.
Winter 2024
Dr. Heather Newell (Université du Québec à Montréal)
Friday January 19, 2024, at 3:30pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: An examination of the non-phonological nature of the Prosodic Hierarchy
Abstract: In this talk I will discuss various themes in my recent work that call into question the reality of the Prosodic Hierarchy as a cognitive object (at the Phonological Word and above). I will compare the behaviour of the Prosodic Hierarchy with the behaviour of other phonological structures, and with syntactic structure-building; it will be demonstrated that the Prosodic Hierarchy behaves like neither. Alternative autosegmental analyses of phenomena whose descriptions regularly appeal to the Prosodic Hierarchy will be offered and I will suggest a procedural account of phonological domain delimitation. Phenomena to be discussed will include, time permitting, stress and syllable structure, reduplication, bracketing paradoxes, phrasal boundary marking, function words vs. lexical words and cohering vs. non-cohering affixes.
Dr. Andrei Munteanu (McGill University, Montréal)
Friday February 23, 2024, at 3:30pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: Probabilistic Evaluation of Comparative Reconstructions
Abstract: In this talk, I present Wordlist Distortion Theory, a framework for the probabilistic evaluation of comparative reconstructions in historical linguistics. The framework estimates the likelihood that a randomly generated wordlist merits the same type and number of diachronic transformations (e.g. sound changes, replacements, etc.) as required by the reconstruction. The framework is primarily intended as a platform for objective and accessible debate surrounding spurious reconstruction in historical linguistics. Additionally, it can be used as a tiebreaker between conflicting reconstructions for the same data. Finally, the framework allows for probability-based theoretical arguments in historical linguistics about the interaction of synchronic and diachronic factors with reconstruction reliability.
I also present results from a case study, where Wordlist Distortion Theory was incorporated into a machine learning algorithm. The algorithm suggests comparative reconstructions (i.e. series of sound changes) stochastically while giving preference to those that decrease the probability of a spurious match. When tested on wordlists from 74 Austronesian languages and 5 proto-languages, the algorithm yields reconstructions that appear in line with general knowledge about sound change and about Austronesian historical linguistics.
Dr. Paul H. Portner (Georgetown University)
Friday March 15, 2024, at 3:30pm
Location: 680 Sherbrooke West, room 1041 (10th floor)
Title: Social relations and scalar implicature
Abstract:
Psychologists and philosophers have noted the relevance of social factors to scalar implicature. For example, Bonnefon et al. (2009) argue that (1) is less likely to be understood with an upper-bounding implicature (i.e. ‘not everyone thought you drank too much’) than a more neutral case such as (2) (based on their examples, pp. 249-250):
(1) A: What impression did I make during dinner?
B: Some guests thought you drank too much.
(Potential implicature: Not all guests thought you drank too much.)
(2) Some students stayed on the campus this weekend.
(Implicature: Not all students stayed on the campus this weekend.)
They explain the difference with the idea that using ‘some’ can be understood as a politeness strategy, and that when used for politeness, it does not generate an implicature. Other relevant work in psychology includes Bonnefon and Villejoubert (2006), Mazzarella et al. (2018), and Yoon et al. (2020). Within philosophy, Swanson (2017) focuses on the possibility for the speaker [B] to convey an upper-extending (understatement) implicature like ‘many guests thought you drank too much’, and he shares with the other authors the idea that social factors are crucial to explaining the particular pragmatic profile of such cases.
In this talk, I will discuss how to formalize the role of social relations in semantics and pragmatics and use it to provide a precise neo-Gricean explanation of why the upper-bounding implicature fails to arise in cases like (1). The analysis shows the necessity of integrating social relations into the formal apparatus of semantic theory, and it also leads to a better understanding of the off-record nature of certain implicatures (a point mentioned by Swanson). I may also touch on some consequences for the grammatical approach to scalar implicature.
Fall 2022
Dr. Alan Bale (Concordia University)
Friday December 2nd at 3:30 pm
Location: Education Bldg. Rm. 624
Title: Ignorance and Distraction: A case study that uses experimental methodologies to help distinguish between different theories of inference.
Abstract: Ever since Grice (1975), there has been a vibrant literature on the nature of quantity implicatures (e.g., inferences from “some” to “not all” as well as inferences from “some” to “I don’t know whether all”). Some authors have defended the spirit of Grice’s original proposal, namely that these types of inferences stem from people rationally reasoning about basic principles of conversational cooperation (for a recent defence, see Geurts 2010). Others have amended Grice’s original proposal with various grammatical interventions, special algorithms, and/or default heuristics that sometimes run counter to rational reasoning (e.g., Levinson 2000). Still others have proposed that at least some types of quantity implicatures are purely grammatical and do not involve any type of strengthening outside of the linguistic faculty (e.g., Chierchia, Fox and Spector 2012; Meyer 2013).
In this talk, I will discuss how inducing states of ignorance and employing methods of distraction can allow us to gather experimental evidence that might be able to distinguish between these various theories. In particular, I will argue that the cumulative results from several recent studies, including those from an experiment conducted in my lab, are inconsistent with the grammatical account of quantity implicatures. Rather, such results are more consistent with Grice’s original proposal (with some relatively minor amendments).
Dr. Kyle Johnson (University of Massachusetts Amherst)
Friday November 11th at 3:30 pm
Location: Education Bldg. Rm. 624
Title: Implicit arguments as Incorporated Theta-roles
Abstract: Some verbs are capable of being used without an expression of their arguments. The direct and indirect objects of eat and throw are standard examples.
(1) a. Marlys ate cake. Marlys ate.
b. Marlys threw the ball to Sam. Marlys threw the ball.
The meanings of eat and throw preserve the θ-roles that cake and to Sam bear, even when those arguments are not present. Those θ-roles are understood to be existentially closed. They are said to be implicit when this happens. The ability of a θ-role to be implicit seems to be idiosyncratically controlled by the verb, but it does not extend to external arguments. To make an external θ-role implicit, a valency-changing operation is required. An external/internal argument contrast of this sort is also found in many kinds of Noun Incorporation constructions. And the lexically idiosyncratic nature of making a θ-role implicit also seems to be a feature of some Noun Incorporation constructions. Martí (2015) argues that the syntax and semantics of Noun Incorporation underlie making a θ-role implicit. I will pursue that thesis in this talk. I will suggest that we should think of θ-roles as kinds of nominals, and sketch a syntax that makes sense of that idea. One of its consequences is that θ-roles can undergo Incorporation, and this is how implicit arguments are achieved.
References
Martí, Luisa (2015). “Grammar versus Pragmatics: Carving Nature at the Joints”. In: Mind & Language 30.4, pp. 437–473.
Dr. Beth MacLeod (Carleton University)
Friday October 28th at 3:30 pm
Location: Education Bldg. Rm. 624
Title: The other piece of the imitation puzzle: individual variation in the perception of phonetic imitation
Abstract: Phonetic imitation occurs when, during an interaction, a speaker’s pronunciation shifts to become more like that of the person to whom they are speaking. According to Communication Accommodation Theory (e.g., Giles, 1973), speakers imitate their interlocutors to minimize social distance between themselves and their interlocutors and to influence the interlocutor’s impression of them. Previous work has found that when speakers imitate, they and the interaction are evaluated more positively than when no imitation takes place (Chartrand & Bargh, 1999; Giles & Smith, 1979; Street, 1982). In order for this positive evaluation to occur, however, the interlocutor must perceive the imitation. While many studies on the production of imitation use a perceptual method (often an AXB task) to assess how much the talkers have imitated (e.g., Dias & Rosenblum, 2016; Goldinger, 1998), virtually no study has focused on the behavior of the listeners and whether they might vary in their ability to perceive imitation. If listeners do vary in this ability, they might also vary in their ability to access social cues, both in their own conversations and in conversations they observe between others (e.g., Dias et al., 2021; Giles et al., 1991; Pardo et al., 2012; Shepard et al., 2001). As such, the perception of imitation is a critical piece of the puzzle in understanding the social motivations and consequences of imitation.
In this talk, I discuss my recent research focusing on the variability and consistency of listener performance in an AXB assessment of phonetic imitation. The results suggest that individuals differ in their ability to perceive imitation in this task, but that this variation reflects stable characteristics of the individual listener, rather than random fluctuations. I also discuss how this research will set the stage for future work exploring an individual-level connection between the perception and production of imitation.
Dr. Allyson Ettinger (University of Chicago)
Friday September 16th at 3:30 pm
Abstract: The interaction between "understanding" and prediction is a central theme both in psycholinguistics and in the AI domain of natural language processing (NLP). Evidence indicates that the human brain engages in predictive processing when comprehending the meaning of language in real time, while NLP models use training based on prediction in context to learn strategies of language "understanding". In this talk I will discuss work that tackles key problems in both of these disciplines by exploring and teasing apart effects of compositional meaning extraction and effects of statistical-associative processes associated with prediction. I will begin with work that diagnoses the linguistic capabilities of NLP models, investigating the extent to which these models exhibit robust compositional meaning processing resembling that of humans, versus shallower heuristic sensitivities associated with predictive processes. I will show that with properly controlled tests, we identify important limitations in the capacities of current NLP models to handle compositional meaning as humans do. However, the models' behaviors do show signs of aligning with statistical sensitivities associated with predictive mechanisms in human real-time processing. Leveraging this knowledge, I will then turn to work that directly models the mechanisms underlying human real-time language comprehension, with a focus on understanding how the robust meaning extraction processes exhibited by humans interact with probabilistic predictive mechanisms. I will show that by combining psycholinguistic theory with targeted use of measures from NLP models, we can strengthen the explanatory power of our psycholinguistic models and achieve nuanced accounts of interacting factors underlying a wide range of observed effects in human real-time processing.
Winter 2022
Christopher Potts (Stanford University)
Friday, April 8th at 3:30pm
Title: Inducing Interpretable Causal Structures in Neural Networks
Abstract: Early symbolic NLP models were designed to leverage valuable insights about language and cognition. These insights were expressed directly in hand-designed structures, and this ensured that model behaviors were systematic and interpretable. Unfortunately, these models tended also to be brittle and specialized. By contrast, present-day models are data-driven and can flexibly acquire complex behaviors, which has opened up many new avenues. However, the trade-offs are now evident: these models often find opaque, unsystematic solutions. In this talk, I'll report on our ongoing efforts to combine the best aspects of the old and new using techniques from causal abstraction analysis. In this method, we define high-level causal models, usually in symbolic terms, and then train neural networks to conform to the structure of those models while also learning specific tasks. The central technical piece is interchange intervention training (IIT), in which we swap internal representations in the target neural model in a way that is guided by the input–output behavior of the causal model. Where the IIT objective is minimized, the high-level model is an interpretable, faithful proxy for the underlying neural model. My talk will focus on how and why IIT works, since I am hoping this will help people identify new application areas for it, and I will also briefly review case studies applying IIT to natural language inference, grounded language understanding, and language model distillation.
Jessamyn Schertz (University of Toronto Mississauga)
Friday, March 18th at 3:30pm
Title: Underpinnings of Phonetic Imitation
Abstract: Phonetic imitation is a complex behavior, requiring accurate perception, identification, and (re-)production of relevant features. In this talk, I will present data from the initial stages of a project designed to explore the linguistic and cognitive processes governing accent imitation. Using a paradigm designed to examine both explicit imitation and perception of artificial “accents” differing in a single feature (voice onset time), we test the relative roles of perceptual and articulatory sub-components in predicting individual variability in imitative ability. In addition, we explore factors that may constrain imitation, including the presence of talker variability and the linguistic status of the relevant feature. Finally, we give examples of how the paradigm is currently being used for systematic comparisons of imitation across different linguistic features (e.g. aspiration vs. prevoicing vs. vowel quality) and populations (e.g. teens vs adults; different language backgrounds), as a step toward a fuller understanding of imitation and the factors that facilitate and constrain it.
Gillian Ramchand (University of Tromsø)
Friday, February 18th at 3:30pm
Title: Verbal Symbols and Demonstrations Across Modalities
Abstract: The evidence from syntax and morphology suggests that the extended projection of the verb is divided into typologically robustly attested zones (see Ramchand and Svenonius 2014). The lowest part of each extended projection (whether nominal or verbal) is associated with open class lexical items, and the meanings of those items are notoriously hard to define and integrate into a compositional treatment of meaning without rampant allosemy. In this talk I will argue that the truth conditional, or denotational view of lexical contents is misguided and that we should move to a more mentalistic view of the semantic content of open class lexical items. This will involve reifying the symbol and the demonstrative act of reference as a way of mediating between mentalistic contents and assertions which have worldly truthmakers. I show that the neo-quotational view of compositional semantics (as I will call it) allows for a more satisfying account of iconic and gestural meaning, and gives a new perspective on indexical shift.
Dr. Anne H. Charity Hudley (Stanford University)
Friday, February 11th at 3:30pm
Title: A Model for Linguistic Reparations
Description: This current time of pandemic and protest is a visceral and constant reminder that the racial and economic legacies of the enslavement of Black people are not only unresolved but continue to determine the course of the daily lives of Black people across the world. Diversity and inclusion alone will not repair hundreds of years of injustice. Colleges and universities need to have frank and explicit conversations about anti-Black racism and create plans for educational reparations.
As part of a model for educational reparations, Charity Hudley presents linguistic reparation work from the Talking College Project, a Black student and Black studies centered community-based research project that was designed to document the particular linguistic choices of Black students for Black students. The Project is explicitly focused on empowering Black students to be proud of their cultural and linguistic heritage.
Maribel Romero (University of Konstanz)
Friday, January 21, 2022 at 3:30 pm
Title: A Unified Semantic Analysis of the Q-Particle in Sinhala Across Interrogative Types
Abstract: In a wide variety of languages, Q(uestion)-particles are (optionally or mandatorily) used in the formation of some interrogative clause types (see e.g. Kamali 2015 for Turkish mI; Rudin et al. 1999 for Macedonian li; Hagstrom 1998 for Japanese ka; Kishimoto 2005, Cable 2010 and Slade 2011 for Sinhala də). The present talk concentrates on Sinhala, in which the Q-particle də appears in wh-questions (WhQs), alternative questions (AltQs) and polar questions (PolQs), as illustrated in (1)-(3):
(1) Chitra monəwa də gatte WhQ
Chitra what də bought.E
‘What did Chitra buy?’ [Slade 2011: (2) p. 19]
(2) oyaa maalu.də mas.də kanne? AltQ
you fish.də meat.də eat.E
‘Did you eat meat or fish?’ [Weerasooriya 2019: (36) p. 12]
(3) Chitra [ee potə]F də kieuwe? PolQ-narrow
Chitra that book də read.E
‘Was it that book that Chitra read?’ [Kishimoto 2005: (21a) p. 11]
A prominent line of analysis, developed by Cable (2010) for WhQs and by Slade (2011) for AltQs and PolQs, posits that the Q-particle də introduces a choice function variable coindexed with the question operator in all three interrogative clause types.
Against this background, the goal of our talk is two-fold. First, we present novel data on the distribution of the Q-particle də in interrogatives containing islands which challenge the Cable-Slade choice function analysis. We will see that, while də in AltQs patterns like də in WhQs in being sensitive to islands, də in PolQs is island-insensitive. Second, we develop the first unified semantic analysis of the Q-particle də that accounts for its distribution in all three question types and for the disparity in island-(in)sensitivity between WhQs/AltQs and PolQs.
Fall 2021
Siva Reddy (McGill University)
Friday, September 17, 2021 at 3:30 pm
Title: Universal Linguistic Representations
Abstract: In this talk, I will present varying degrees of universal linguistic representations that can help us analyze and understand a language. These representations can serve two purposes: 1) to answer scientific questions about a language, and 2) to build better language understanding applications like question answering systems. The varying degrees correspond to the linguistic knowledge that is used to build the representations: from almost no linguistic knowledge (based on pure corpus co-occurrences) to syntax and semantics. We will try to develop representations that are widely applicable to many languages. In this process, I will also be proposing a new syntax-semantics interface to validate whether universal syntax is descriptive enough to obtain universal semantics, specifically whether universal dependency syntax can serve as a foundation for obtaining semantic representations. If time permits, I will also connect these ideas to connectionist approaches, which fundamentally challenge the entire linguistic tradition, a question that is inevitable given the enormous progress made by neural models of language. The talk also involves several demos.
Heidi Harley (University of Arizona)
Friday November 5, 2021 at 3:30 pm
Abstract: Hiaki (Yaqui) exhibits an interesting formal overlap between nominalizations which create relative-clause-like structures and nominalizations which create event nominals. The same nominalizer which usually derives a subject relative nominal also, when applied to an argumentless predicate such as a weather verb or an impersonal passive, derives an event nominal. I argue that this is because the event argument IS the ‘subject’ of an argumentless predicate, the only accessible argument for the nominalizer to reify. In the process of proposing a uniform semantics for the relative nominalizers and the event nominalizer, a detailed analysis of both is provided. The nominalizers are argued to select an AspP complement. In entity-referring relative nominals, null operator movement is involved; in the event-referring event nominals, no operator is needed or possible. The syntax and morphology of the relative nominalizers are worked out in detail, with particular attention to the genitive-marked subjects of object, oblique and locative relative nominals.
Ewan Dunbar (University of Toronto)
Friday, November 26 at 3:30pm
Title: Probing state-of-the-art speech representation models using experimental speech perception data from human listeners
Abstract: The strong performance of neural network natural language processing has led to an explosion of research probing systems' linguistic knowledge (whether language models implicitly learn syntactic hierarchy, whether word embeddings understand quantifiers, and so on), in order to understand if the data-crunching power of these models can be harnessed as the basis for serious, theoretically-grounded models of grammatical learning and processing. Much of this "(psycho)linguistics for robots" work has focussed on textual models. Here, I show how we have applied this same approach to phonetics. In particular, we probe state-of-the-art unsupervised speech processing models and compare their behaviour to humans' in order to shed light on the traditionally hazy and ad hoc construct of "acoustic distance."
On the basis of a series of simple, broad-coverage speech perception experiments run on English- and French-speaking participants, I compare human listeners' behaviour (how well they discriminate sounds in the experiment) to the "behaviour" of representations (how well they separate those same stimuli) which come from models trained with the express purpose of building better representations to be used in automatic speech recognition. For example, Facebook AI's recent wav2vec 2.0 model takes large amounts of unlabelled speech as training data, and learns to extract a representation of the audio that is highly predictive of the surrounding context; it has now proven extraordinarily useful for replacing off-the-shelf audio features, to the point that some of the best-performing speech recognition systems today have switched to using these representations, which has substantially reduced the amount of labelled data needed to train high-quality speech recognizers.
We use the comparison with human behaviour to show that, for this and related systems, contrary to what many researchers may have *thought* these systems are doing, they are not really "learning representations of the sound inventory" of the training language so much as learning good representations of the acoustics of speech. These representations are so good that they serve as very good models of "auditory distance" in human speech processing; notably, however, they lack the categorical effects on speech perception which are pervasive in human listening experiments, and they show only very weak effects of the language on which they are trained, unlike our human listeners. As well, I present new evidence that "speech is special" in human auditory processing, by comparing learned representations trained on speech data to the same models trained on non-speech data. We show that representations trained on non-speech are very (very) poor predictors of human speech perception behaviour in experiments.
Winter 2021
Viola Schmitt (Institute for German Studies, University of Vienna)
Title: Are worlds special?
Friday February 26, 2021 at 3:30 pm
Abstract: This talk (which owes a lot to current joint projects with Nina Haslinger, Eva Rosina, Tim Stowell and Valerie Wurm) addresses an apparent gap in an otherwise apparently robust pattern, namely, that all semantic domains contain pluralities (or at least objects with a non-trivial part structure). In the individual domain, plurality-denoting expressions have a number of well-known characteristic properties (see Link 1983 for a general discussion). On the one hand, we have properties that are intuitively related to the presence of a part-whole relation: plurality-denoting expressions can partake in cumulative readings (Scha 1981 a.m.o.) and be targeted by certain adverbs that seem to directly appeal to their part-structure (Link 1987, Zimmermann 2002 a.o.). On the other hand, a subset of plurality-denoting expressions, namely definite plurals and individual conjunctions, can give rise to homogeneity effects (Löbner 2000, Schwarzschild 1993, Križ 2015 a.o.), and some of these expressions sometimes permit non-maximal predication (Brisson 1998, Malamud 2012, Križ 2016 a.o.). My first point will be to show that, if we consider the first set of tests, the notion of plurality (or rather, some form of part-structure) is pretty much persistent across semantic domains: it looks like we find pluralities in the domains of a number of other ‘primitives’, like events, degrees and times (see Landman 2000, Dotlačil & Nouwen 2016, Artstein & Francez 2006 a.m.o. for discussion of different types of such primitives), as well as in ‘functional’ domains like those of predicates of individuals, propositions, question denotations, quantifiers or individual concepts (see Schmitt 2019, 2020, Beck & Sharvit 2002, Haslinger 2019, Haslinger & Schmitt to appear for discussion of different aspects of this claim). I will then argue, based on data from German, that the best candidates for world-pluralities fail these tests. First, it has been argued that the antecedents of (indicative) conditionals denote (definite) pluralities of worlds (see Schlenker 2004, Kaufmann 2017, Križ 2018, 2019) because they exhibit two traits of plurality: homogeneity and non-maximality (see in particular Križ 2018 for these points and Gajewski 2005 for relevant connected observations). Second, neg-raising constructions with attitude verbs have been discussed as potentially involving world-pluralities, with neg-raising being a potential instance of homogeneity (see Križ 2015 for a discussion of this possibility). However, neither construction allows for cumulative readings that appeal to parts of world-pluralities (rather than, say, pluralities of propositions). Furthermore, adverbs sensitive to part-structure, which in all other cases seem to be pretty much category-blind, cannot access parts of world-pluralities. The last part of the talk will probe the consequences of these findings. (Warning: I don’t really have a solution yet, just some speculations.)
Yael Sharvit (Dept. of Linguistics, University of California, Los Angeles)
Title: Thoughts on disjunction in declarative and interrogative clauses
Friday March 26, 2021 at 3:30 pm
Abstract: In this talk I discuss some problems regarding the composition of constituent and non-constituent questions. I show how some possible solutions to these problems are affected by “filtering” presuppositions in declarative as well as interrogative disjunctive clauses.
James Crippen (Dept. of Linguistics, McGill University)
Title: Aspect and related phenomena in Tlingit: Looking down to composition
Friday April 9, 2021 at 3:30 pm
Abstract: I present the basic parameters involved in aspect, tense, mood, and modality in Tlingit, searching for some possible avenues for a formal, compositional analysis that matches the morphosyntax. Na-Dene languages like Tlingit, Navajo, and Ahtna are famous for their “elaborate aspectual systems” (Mithun 1999: 166). The complexity of these systems is opacified by their peculiar descriptive terminology (Cook 1984: 120; Mithun 1999: 362) which evolved apart from mainstream temporal semantics. Given a Minimalist syntactic model of the Tlingit verbal system (Crippen 2019), we would like a semantic model that proceeds compositionally along the same structures. But a compositional approach to aspect is incompatible with the standard non-compositional analyses in the family (Cook 1984: 119; Leer 1991: ch. 8; Axelrod 1993: ch. 3; Smith 1997: 329 n. 7; Young 2000). This suggests that the system needs to be deconstructed and reanalyzed with compositionality in mind. Looming large in the morphosyntax of aspect is the conjugation class system that expresses spatial semantics and which seems to be extended to time in the grammar. In addition, the lexical aspect classes known as “verb theme categories” (Kari 1979; Leer 1991: ch. 7; Axelrod 1993: ch. 5) and their rich systems of derivation (Kari 1992) directly impinge on the realization of aspect and other temporal meanings. I suggest some directions for the analysis of aspect that take into account the spatial and lexical aspect categories and point toward the possibility if not the reality of a compositional semantics for aspect and related phenomena in Tlingit and other Na-Dene languages.
References
Axelrod, Melissa. 1993. The semantics of time: Aspectual categorization in Koyukon Athabaskan. Lincoln, NE: Univ. of Nebraska Press.
Cook, Eung-Do. 1984. A Sarcee grammar. Vancouver: UBC Press.
Crippen, James A. 2019. The syntax in Tlingit verbs. Vancouver: UBC, PhD diss.
Kari, James. 1979. Athabaskan verb theme categories: Ahtna. Fairbanks, AK: ANLC.
Kari, James. 1992. Some concepts in Ahtna Athabaskan word formation. In Morphology Now, M. Aronoff (ed.), pp. 107–131. Albany, NY: SUNY Press.
Leer, Jeff. 1991. The schetic categories of the Tlingit verb. Chicago: Univ. of Chicago, PhD diss.
Mithun, Marianne. 1999. The languages of Native North America. Cambridge: CUP.
Smith, Carlota. 1997. The parameter of aspect. Dordrecht: Kluwer Academic.
Young, Robert W. 2000. The Navajo verb system: An overview. Albuquerque: Univ. of New Mexico Press.
Lisa Matthewson (Dept. of Linguistics, University of British Columbia)
Title: Evidential-temporal interactions do not (always) come for free
Friday April 16, 2021 at 3:30 pm
Abstract: Evidentials are usually assumed to encode the speaker’s source of evidence for their utterance. However, a growing body of research proposes that evidence source does not need to be hardwired into the lexical entry of the evidential morphemes; instead, evidential restrictions can be derived from temporal or aspectual information in the rest of the sentence (e.g., Chung 2007, Lee 2013 for Korean; Koev 2017 for Bulgarian; Bowler 2018 for Tatar; Speas 2021 for Matses).
In this talk we argue that the derivation of evidence source from temporal information is not always tenable. Drawing on data from five languages from four families, we argue that evidentials can lexically encode restrictions on the time the speaker acquired their evidence for the truth of the prejacent proposition (the Evidence Acquisition Time). Evidentials can do this independently of temporal marking elsewhere in the sentence, and they sometimes must encode both temporal and evidence source information.
In particular, we argue that English inferential apparently and seem, the Japanese indirect evidential yooda and reportative sooda, and the St’át’imcets (a.k.a. Lillooet; Salish) perceived-evidence inferential an’ all require that the earliest time their prejacent p becomes true, EARLIEST(p) (cf. Beaver and Condoravdi 2003), precede or coincide with the Evidence Acquisition Time (EAT). Conversely, English epistemic should and the German epistemic modal sollte encode the opposite relation: EARLIEST(p) must follow the EAT. A third group of evidentials encodes no temporal restrictions: the English epistemic modal must, St’át’imcets inferential k’a and reportative ku7, and Gitksan (Tsimshianic) inferential ima and reportative gat. Comparing temporal evidentials with non-temporal ones supports the view that a temporal component is hardwired into the lexical semantics of the former set. Finally, the fact that the temporal contributions cross-cut the evidential ones supports the proposal that one cannot be reduced to the other in these languages.
Duane Watson (Vanderbilt University)
Title: Speaking for thinking: Understanding the link between cognition and speech
Friday April 23, 2021 at 3:30 pm
Abstract: One of the central debates in the language sciences is understanding whether linguistic representations can be divided into those that represent competence, i.e. linguistic knowledge, and those that represent performance, i.e. psychological processes that use that knowledge. Prosody, which is the tone, rhythm, and intonation of speech, is perhaps unique among linguistic representations in that it conveys information about both linguistic structure and psychological processes. In this talk, I will present work from my lab, as well as the language literature more generally, that suggests that prosody is used to optimize the speech signal for listeners as well as provide time for speakers to engage speech processes related to language production. By studying prosody, language scientists can gain insight into language structure (e.g. syntax, semantics, and discourse), psychological processes (e.g. production and comprehension), and how the two interact.
Fall 2020
Laura Dilley (Michigan State University)
Friday, September 18, 2020 at 3:30 pm
Title: Language and social brains: Toward understanding mechanisms and typologies of prosody and tone
Abstract: The past ~70 years of linguistic research have seen dramatic changes in the way researchers frame and conceptualize language as a human capacity and activity. In this talk I will present a synthesis of key insights from these past decades which leads to a view that language structure and meaning are grounded in social dynamics of perception, action, and cognition within ecological niches. Language perception does not entail, as some have argued, mere recovery of abstract linguistic units; rather, the very understanding of what those units are depends on social and ecological contexts. Framed in this way, innate brain mechanisms tuned to extraction of information over language-relevant timescales, together with the history of short- and long-term experiences over a lifetime, give rise to emergent understandings of meaning, as well as the apprehension of linguistic form and content. I will present the case of prosody, long held to be a mere overlay on the implicitly more foundational segmental underpinning, and challenge some long-held assumptions about the structure of prosody and how it contributes to meaning. With the benefit of insights of original thinkers who have come before, as well as the principle of Ockham’s Razor, I will argue that viewing human linguistic capacities as grounded in the inherent temporal dynamics of social brains and bodies fosters novel connections among linguistic sub-disciplines and brings new questions into focus. Viewed through this lens, I assert that it is possible to make headway toward understanding some of the most challenging domains of linguistic inquiry, namely the typology, meaning and structure of tone and prosody.
Peter Jenks (UC Berkeley)
Friday October 16, 2020 at 3:30 pm
Title: Are indices syntactically represented?
Abstract: The status of indices in syntactic representations is unclear. While indices are frequently used for expository purposes, they have no syntactic status in the copy theory of movement (Corver & Nunes 2007) or in Agree-based analyses of binding phenomena (Reuland 2011, Vanden Wyngaerd 2011). In this talk I argue that the presence versus absence of indices explains language-internal splits in definiteness and pronouns in different languages, while the ability of names to violate Condition C in Thai receives a natural explanation if we treat names in Thai, but not English, as contextually restricted indices. The resulting view is one where indices are a component of linguistic representations, but not all referential expressions contain them. This view is consistent with Tanya Reinhart’s approach to Conditions B and C (Grodzinsky & Reinhart 1993), and entails that indices should play a more important role in syntactic theory than they currently do.
Emily Elfner (Dept. of Languages, Literatures and Linguistics, York University)
Friday November 20, 2020 at 3:30 pm
Title: Evaluating evidence for recursive prosodic structure
Abstract: In much recent work on the syntax-prosody interface, the question of whether recursion is present in prosodic structure has played a key role (for example, Wagner 2005, 2010; Selkirk 2009, 2011, among others). In particular, in theories of the syntax-prosody interface such as Match Theory (Selkirk 2009, 2011), which derive prosodic constituents directly from syntactic structure, prosodic structure is predicted to show by default a degree of recursion that arguably is comparable with the depth of the nested hierarchical structure found in syntax.
One major question which has surfaced is the extent to which the level of recursive prosodic structure predicted by syntactic structure is universal. For example, some languages have been argued to show overt phonological and phonetic reflexes of recursion, thus providing apparent empirical support for the recursive structures predicted by syntactic structure in a number of languages, such as Irish (Elfner 2012, 2015), Basque (Elordieta 2015), and Swedish (Myrberg 2013). However, other languages may not show such overt evidence, as it has long been assumed that the ways that languages mark prosodic phrase edges and heads is language-specific; for example, some of the predicted prosodic phrases may be marked overtly only on one edge (left or right), or not at all. Conversely, we cannot always assume that overt evidence of a prosodic boundary indicates the presence of a syntactic boundary.
Therefore, the question remains: if there is no overt evidence of the edges of certain prosodic constituents in a particular language, to what extent can we posit their existence based on theoretical predictions relating to hierarchical structure and syntax-prosody mapping alone? In this talk, I will explore this question in relation to a case study on the prosodic structure of Irish, which presents an apparent conflict between prosodic cues that provide evidence for hierarchical syntactic structure and domain juncture (Elfner 2012, 2016).
Winter 2020
Andrés Salanova: February 28th, 2020, 3:30 to 5:00 pm
Location: Wilson Hall - Wendy Patrick Room (118)
Title: A semantics for frustratives
Laura Dilley: March 13, 2020
Yael Sharvit: April 3, 2020
Fall 2019
John Alderete: November 15, 2019, 3:30 to 5:00 pm
Location: BIRKS room 111
Title: Speech errors and phonological patterns: Integrating insights from psycholinguistic and linguistic theory
Abstract: In large collections of speech errors, phonological patterns emerge. Speech errors are shaped by phonotactic constraints, cross-linguistic markedness, frequency, and phonological representations of prosodic and segmental structure. While insights from both linguistic theory and psycholinguistic models have been brought to bear on these patterns, research on phonological patterns in speech errors rarely attempts to compare and contrast analyses from these different perspectives, much less integrate them as a coherent whole. This talk investigates the phonological patterns in the SFU Speech Error Database (SFUSED) with the goal of combining both processing and linguistic assumptions in an integrated model of speech production. In particular, it examines the impact of language particular phonotactics on speech errors, competing explanations from markedness and frequency, and the role of linguistic representations for syllables and tone. The empirical findings support a model that includes both production processing impacted by frequency and explicit representations of tone and syllables from phonological theory.
Jason Brenier: December 6, 2019, 3:30 to 5:00 pm
Location: ARTS Bldg. W-20
Abstract: From knowledge representation to speech processing and human-computer interaction, linguistic research has been critical to the development of information technology and its widespread adoption in the business world. Using examples from technology startups, enterprise businesses and the venture capital industry, this talk will review the many contributions that linguists have made to the rising AI economy and will explore their increasingly important role in the future.
Winter 2019
Speaker: Susi Wurmbrand (Universität Wien)
Date & Time: March 22nd at 3:30 pm
Place: Education Bldg. rm. 434
Title: Proper and Improper A-Dependencies
Abstract: This talk provides an overview of case and agreement dependencies that are established across clause boundaries, such as raising to subject or object and cross-clausal agreement. We will see that cross-clausal A-dependencies (CCADs) in several languages can apply not only across non-finite but also across finite clause boundaries. Furthermore, it will be shown that the DP entering a CCAD is situated in the specifier of the embedded CP. This poses a challenge for the traditional 'truncation' approach to CCADs, according to which CCADs are restricted to reduced (CP-less) complements. It also poses a challenge for the view that A-dependencies cannot follow A′-dependencies involving the same element. Lastly, we can observe that a clause across which a CCAD applies functions as a true, non-deficient A′-CP for other purposes. The direction proposed to bring the observed properties together is to maintain a universal improper A-after-A′ constraint, but to allow certain positions in certain CPs to qualify as A-positions from which further A-dependencies can be established.
Speaker: Scott Anderbois (Brown University)
Date & Time: April 12th at 3:30 pm
Place: Education Bldg. rm. 434
Title: At-issueness in direct quotation: the case of Mayan quotatives
Abstract: In addition to verba dicendi, languages have a variety of other grammatical devices for encoding reported speech. While not common in Indo-European languages, two of the most common such elements cross-linguistically are reportative evidentials and quotatives. Quotatives have been much less discussed than either verba dicendi or reportatives, both in the descriptive/typological literature and especially in formal semantic work. While quotatives have not, to my knowledge, been formally analyzed in detail before, several recent works on reported speech constructions in general have suggested in passing that they pattern either with verba dicendi or with reportatives. Drawing on data from Yucatec Maya, I argue that they differ from both, since they present direct quotation (like verba dicendi) but make a conventional at-issueness distinction (like reportatives). To account for these facts, I develop an account of quotatives by combining an extended Farkas & Bruce 2010-style discourse scoreboard with bicontextualism (building on Eckardt 2014's work on Free Indirect Discourse).
Fall 2018
Speaker: Jane Stuart-Smith (University of Glasgow)
Date & Time: October 12th at 3:30pm
Place: Education Bldg. rm. 211
Title: Sound perspectives? Speech and speaker dynamics over a century of Scottish English
Abstract: As in many disciplines, in linguistics too, perspective matters. Structured variability in language occurs at all linguistic levels and is governed by a large range of diverse factors. Viewed through a synchronic lens, such variation informs our understanding of linguistic and social-cognitive constraints on language at particular points in time; a diachronic lens expands the focus across time. And, as Weinreich et al (1968) pointed out, structured variability is integral to linguistic description and explanation as a whole, being at once the stuff of the present, the reflexes of the past, and the potential for changes in the future. There is a further dimension which is often not explicit: the role of analytical perspective on linguistic phenomena.
This paper considers a particular kind of structured variability, phonetic and phonological variation, within the sociolinguistic context of the recorded history of Glaswegian vernacular across the 20th century. Two aspects of perspective frame my key research questions:
1. What are the ‘things’ which we observe? How do different analytical perspectives on phonetic variation affect how we interpret that variation? Specifically, how do different kinds of observation — within segment/across a phonological contrast/even beyond segments — auditory/acoustic/articulatory phonetic — shape our interpretations?
2. How are these ‘things’ embedded in time and social space? Specifically, how is this variation linked to contextual perspective, shifts in social events and spaces over the history of the city of Glasgow? How do we know whether, or when, these ‘things’ might be sound changes (following Milroy 2003)?
I consider these questions by reviewing a series of studies (including some ongoing and still unpublished) on two segments in Glaswegian English, the first thought to be stable and not undergoing sound change (/s/), the second thought to be changing (postvocalic /r/).
Speaker: Nico Baier (McGill University)
Date & Time: November 2nd at 3:30 pm
Place: Education Bldg. rm. 211
Title: Unifying Anti-Agreement and wh-Agreement
Abstract: In this talk, I investigate the sensitivity of φ-agreement to features typically associated with Ā-extraction, including those related to wh-questioning, relativization, focus and topicalization. This phenomenon has been referred to as anti-agreement (Ouhalla 1993) or wh-agreement (Chung and Georgopoulos 1988; Georgopoulos 1991; Chung 1994) in the literature. While anti-agreement is commonly held to result from constraints on the Ā-movement of agreeing DPs, I argue that it reduces to an instance of wh-agreement, or the appearance of particular morphological forms in the presence of Ā-features. I develop a unified account of these Ā-sensitive φ-agreement effects in which they arise from the ability of φ-probes to copy both φ-features and Ā-features in the syntax. In the morphological component, partial or total impoverishment may apply to feature bundles containing both φ- and Ā-features, deleting some or all of the φ-features. Impoverishment blocks insertion of an otherwise appropriate, more highly specified agreement exponent. I present case studies of the effect of Ā-features on φ-agreement in three languages: the West Caucasian language Abaza (O’Herin 2002); the Berber language Tarifit (Ouhalla 1993; El Hankari 2010); and the Northern Italian dialect Fiorentino (Brandi and Cordin 1989; Suñer 1992). I show that in all three languages, the agreement exponents that appear in the context of Ā-features are systematically underspecified.
Winter 2018
Speaker: Sharon Goldwater (University of Edinburgh)
Date & Time: January 12th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Bootstrapping language acquisition
Abstract: The semantic bootstrapping hypothesis proposes that children break into the syntactic system of their native language by inferring from the situational context a structured semantic representation for (some) words or utterances. Assuming a correspondence between semantic structure and syntactic structure allows the child to begin to acquire native language syntax. In this talk I will describe a Bayesian probabilistic model of semantically bootstrapped child language acquisition. The model learns from pairs of sentences and their (noisy) meaning representations, extracted from a real child-directed corpus. It *jointly* models both (a) word learning: the mapping between components of the given sentential meaning and wordforms, and (b) syntax learning: word order and the mapping between wordforms and their syntactic categories. I will show how this joint model accounts for several well-documented phenomena from the developmental literature. In particular, the model exhibits syntactic bootstrapping effects (in which previously learned constructions facilitate the learning of novel words), sudden jumps in learning without explicit parameter setting, acceleration of word-learning (the "vocabulary spurt"), an initial bias favoring the learning of nouns over verbs, and one-shot learning of words and their meanings. The learner thus demonstrates how statistical learning over structured representations can provide a unified account for these seemingly disparate phenomena.
Speaker: Karen Jesney (University of Southern California)
Date & Time: April 27th at 3:30 pm
Place: Leacock 210
Title: Constraint Scaling Factors and Patterns of Variation in Phonology
Abstract: Language systems characterized by high levels of variability offer unique possibilities for probing the structure of the phonological grammar. This talk examines data from developing L1 phonologies and loanword adaptation patterns, and argues that scaling of constraint values within a system of weighted constraints offers the most direct means of encoding the attested effects. Two case studies are presented. The first case study looks at words that contain multiple sources of syllable-structure markedness, focusing on data from the twelve Dutch-acquiring children in the CLPF corpus (Fikkert 1994, Levelt 1994). The overall finding is that accurate realization of marked coda structures increases the probability that marked onset structures will be accurately realized by the child. These effects cannot be reduced to either age or the frequency with which the marked structures are attempted. The second case study examines the realization of marginal segments in a corpus of Québec French borrowings from English (Roy 1992), and finds evidence for similar interactions at the level of segmental realization. Given that one marked structure is realized accurately, the probability increases that other marked structures will also be realized accurately. Other loanword data show related implicational patterns. I argue these interactions are best modeled through scaling of constraint values within a probabilistic weighted constraint grammar – either Noisy Harmonic Grammar (Boersma & Pater 2008) or Maximum Entropy OT (Goldwater & Johnson 2003). Constraint scaling factors co-exist with basic constraint weights, and can be keyed both to grammatical factors like prosodic position, and to non-phonological factors like word frequency and attention. The result is a model that captures the attested interactions between marked structures within words while avoiding the pitfalls of previous accounts that are too restrictive to accurately model the full range of variation.
Speaker: Susana Béjar (University of Toronto)
Date & Time: February 23rd at 3:30 pm
Place: Education Bldg. rm. 433
Title: Person, Agree, and Derived Predicates
Abstract: Person features have played a prominent role in models of argument licensing, case and agreement over the past two decades. Within the theory of Agree, person features have been manipulated to account for a range of intricate patterns including non-canonical locality effects (e.g. hierarchy effects), ineffabilities (e.g. PCC effects) and differential argument marking (e.g. DOM). Overwhelmingly, work in this area has been based on structures with verbal predicates. In this talk I put the spotlight on verbless structures — specifically, copular clauses with nominal complements — and challenges that they present to person-driven approaches, in particular unexpected locality patterns and ineffabilities. I argue that both challenges benefit from viewing nominal complements of copular clauses as derived predicates in a sense similar to Landau (2011), that is to say I take them to involve (reduced) clausal complements. The distribution of φ-features in clausal complements, and the operations these are subject to, can explain the unusual locality patterns and ineffabilities alluded to above.
Speaker: Elizabeth Coppock (Boston University)
Date & Time: March 23rd at 3:30 pm
Place: Education Bldg. rm. 433
Title: Speedbumps on the compositional route to proportional MOST
Abstract: Recent work has suggested that the proportional interpretation of English "most" is not lexically arbitrary but rather compositionally derived as the superlative of "many". Based on broad cross-linguistic evidence, we caution that the compositional route there is fraught. Investigation of a geographically, genetically, and typologically diverse set of languages shows that proportional readings of quantity superlatives are highly typologically marked, and relative readings are universal. We argue that proportional interpretations are marked because they depend on violations of certain default semantic principles: (i) quantity words denote gradable predicates of degrees, rather than individuals, and (ii) comparison among any set of entities involves comparison among a set of individuals. We also find that proportional readings arise with a quite limited range of morphosyntactic strategies for forming superlatives, suggesting that analogical pressure from other quantifiers in the lexicon may help in overcoming these hindrances.
Speaker: Daniel Pape (McMaster University)
Date & Time: April 13th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Linking speech production to speech perception: A cross-linguistic comparison of the phonological voicing contrast and its phonetic realization
Abstract: How do we form phonemic categories? How is speech perception linked to the articulatory and acoustic production of speech? These are classic questions in phonetics, but they are still actively debated today. In my talk I will present a number of phonetic experiments that approach these questions. I will discuss (1) how the cognitive system is intricately linked to the speech production system for the phonological voicing contrast; (2) how cross-linguistic differences in Romance languages surface in perception compared to production; and (3) how several acoustic cues of the speech signal are used with varying weights to form a robust phoneme identification. I conclude my talk with an excursion into audio-visual speech perception by presenting a phonetic experiment examining the effect of facial hair on speech intelligibility.
Fall 2017
Speaker: Jie Li (Shantou University)
Date & Time: September 15th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Grammatical Metaphor Theory in Pursuit of Metaphorical Competence
Abstract: Grammatical Metaphor is one of the central concepts in Systemic-Functional Grammar. Halliday (1994) treated grammatical metaphor as a linguistic strategy for “variation in the expression of a meaning”. The language system provides its users with a system of meaning potential, from which they make a series of choices to realize a given semantic function. The relation between the chosen linguistic structure and the meaning expressed can be either congruent or incongruent/metaphorical. Children gradually learn to speak metaphorically, and the emergence of more metaphorical expressions is an important feature of adult language. Danesi (1993) claimed that speaking metaphorically is a basic characteristic of native speakers’ linguistic competence. In other words, the ability to understand and use metaphors can be taken as an important indicator of mastery of a language. It is therefore both necessary and important to value metaphorical competence in language education. Guided by grammatical metaphor theory, this talk analyzes the nature, complexity, and functions of metaphorical forms, with the aim of helping language learners deepen their knowledge and mastery of metaphorical phenomena in their target language and ultimately improve their linguistic competence by enhancing their ability to understand and use metaphors.
Speaker: Aron Hirsch (McGill University)
Date & Time: October 6th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Only and pseudo-clefts
Abstract: This talk motivates a revision to the semantics of only. Only composes with a proposition (its prejacent, p) and a set of alternative propositions. Only presupposes that p is true, and asserts that alternatives are false. In deciding which alternatives to negate, only is selective: to avoid creating a logical contradiction, only negates an alternative q only if ¬q is logically consistent with p. I will argue that only is even more selective than previously thought: in addition to avoiding contradictions, only avoids creating certain meanings which are intuitively paradoxical, though logically contingent. The argument comes from novel data studying the interaction of only with epistemic modals and conditionals. In the second part of the talk, I show how the new, more selective only can shed light on a wider range of data: in particular, I argue that the source of exhaustivity in pseudo-clefts is a covert only, crucially with the revised semantics.
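The selectivity condition can be stated set-theoretically: treating propositions as sets of worlds, only negates an alternative q just in case the prejacent together with not-q is consistent. A minimal sketch with invented toy worlds and propositions (this covers only the contradiction-avoidance part, not the talk's further restriction on paradoxical meanings):

# Propositions as sets of possible worlds; a toy domain of four worlds.
W = {1, 2, 3, 4}

def neg(p):
    return W - p

def consistent(p, q):  # p and q can hold together
    return bool(p & q)

def only(prejacent, alternatives):
    """Worlds where the prejacent holds and every safely negatable alternative fails."""
    result = set(prejacent)  # presupposition: the prejacent is true
    for q in alternatives:
        if consistent(prejacent, neg(q)):  # negating q does not contradict p
            result &= neg(q)
    return result

p = {1, 2}      # e.g., "Mary read Book A"
q1 = {1, 3}     # "Mary read Book B": not-q1 is consistent with p, so q1 is excluded
q2 = {1, 2, 3}  # entailed by p, so not-q2 contradicts p and q2 is skipped
print(only(p, [q1, q2]))  # {2}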
Speaker: Christian DiCanio (University of Buffalo)
Date & Time: November 10th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Phonetic variation and the construction of a Mixtec spoken language corpus
Abstract: The documentation of endangered languages frequently involves the collection and analysis of a corpus of speech data. To ensure continued access to the corpus, researchers must construct additional layers of annotation. This process is often constrained by patterns of phonetic variation, but such patterns also open up new areas of research in both speech production and phonology. In this talk, I discuss the interplay between the construction of a spoken language corpus of Yoloxóchitl Mixtec (Otomanguean: Mexico) from a language documentation project and the patterns of phonetic variation which have been investigated along the way. I address three main issues of relevance to linguistic theory and phonetics: (1) How does speech style influence speech production and how might this affect the creation of a spoken language corpus? (2) How do variable morphophonological rules impact corpus segmentation? and (3) What principles account for surface phonetic variation? Can such variation be predicted and automatically annotated? Together, these topics address issues of increasing importance in the fields of corpus phonetics, speech processing, and language documentation.
Speaker: Lucie Ménard (Université du Québec à Montréal)
Date & Time: December 1st at 3:30 pm
Place: Arts Bldg. W-20
Title: Reaching goals with limited means: Production-perception relationships in blind children and adults
Abstract: In face-to-face conversation, speech is produced and perceived through various modalities. Movements of the lips, jaw, and tongue, for instance, modulate air pressure to produce a complex waveform perceived by the listener’s ears. Visually salient articulatory movements (of the lips and jaw) also contribute to speech perception. Although many studies have been conducted on the role of visual components in speech perception, much less is known about their role in speech production. In this presentation, we discuss the emergence and refinement of production-perception relationships through a series of studies conducted with typically developing and blind individuals (children and adults). Acoustic, kinematic, and perceptual data collected in contexts representing various degrees of saliency requirements will be presented. We will show how sensory templates built from impoverished input influence production strategies.
Winter 2017
Speaker: Dan Lassiter (Stanford University)
Date & Time: January 27th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Epistemic language in indicative and counterfactual conditionals
Abstract: In this talk I'll report on a series of experiments which examine judgments about epistemic modals, both in unembedded contexts and in indicative and counterfactual conditionals. Building on these results and recent probabilistic theories of epistemic language, I propose a probabilistic version of Kratzer's restrictor theory of conditionals that identifies the indicative/counterfactual distinction with Pearl's distinction between conditioning and intervening in probabilistic graphical models. Combining this theory with recent accounts of must, we can also derive a theory of bare conditionals; I describe the predictions and consider their plausibility in light of the experimental data.
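Pearl's distinction can be illustrated with a toy common-cause model; the variables and numbers below are invented and are not the experimental materials or models from the talk. Conditioning on an observed variable changes beliefs about its causes, while intervening on it severs it from those causes.

# Toy causal model: a storm front S causes both a low barometer reading B
# and rain R (S -> B, S -> R). B = 1 encodes "barometer reads low".
def p_s(s):
    return 0.3 if s else 0.7

def p_b_given_s(b, s):
    p = 0.9 if s else 0.1  # storm fronts usually drive the barometer low
    return p if b else 1.0 - p

def p_r_given_s(r, s):
    p = 0.8 if s else 0.2  # storm fronts usually bring rain
    return p if r else 1.0 - p

def p_rain_observing_b(b):
    # Conditioning: the observed reading is evidence about the common cause S.
    num = sum(p_s(s) * p_b_given_s(b, s) * p_r_given_s(1, s) for s in (0, 1))
    den = sum(p_s(s) * p_b_given_s(b, s) for s in (0, 1))
    return num / den

def p_rain_do_b(b):
    # Intervening: setting B by hand cuts the S -> B arrow, so the reading
    # carries no information about S and rain keeps its prior probability.
    return sum(p_s(s) * p_r_given_s(1, s) for s in (0, 1))

print(round(p_rain_observing_b(1), 2))  # 0.68: seeing a low barometer predicts rain
print(round(p_rain_do_b(1), 2))         # 0.38: forcing it low predicts nothing new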
Speaker: Jeremy Hartman (UMass Amherst)
Date & Time: February 3rd at 3:30 pm
Place: Education Bldg. rm. 433
Title: Negation and factivity in acquisition and beyond
Abstract: In this talk, I present joint work with Magda Oiry on the interaction between negation and two types of factive predicates in acquisition. Following work by Léger (2008), we examine children's understanding of sentences with the factive predicates know and be happy, in combination with negation--in the matrix clause, as well as in the embedded clause. In addition to an asymmetry in the understanding of know vs. be happy, we find a new and revealing pattern of errors across different sentence-types with know. We also show that a similar error pattern is found even with adult subjects. I discuss how these findings relate to recent work on the processing of negation.
Speaker: Boris Harizanov (Stanford University)
Date & Time: February 17th at 3:30 pm
Place: Education Bldg. rm. 433
Title: On the nature of syntactic head movement
Abstract: In Harizanov and Gribanova 2017, we argue that head movement phenomena having to do with word formation (affixation, compounding, etc.) must be empirically distinguished from head movement phenomena having to do purely with the displacement of heads or fully formed words (verb initiality, verb-second, etc.). We suggest that the former, word-formation type should be implemented as post-syntactic amalgamation, while the latter, displacement-type should be implemented as regular syntactic movement.
In this talk, I take this result as a starting point for an investigation of the latter, syntactic type of head movement. I show in some detail that such movement has the properties of (Internal) Merge and that it always targets the root. In addition, I suggest that, once a head is merged with the root, there are two available options (traditionally assumed to be incompatible with one another or with other grammatical principles): either (i) the target of movement projects or (ii) the moved head projects. The former scenario yields head movement to a specifier position, while the latter yields head reprojection. I offer participle fronting in Bulgarian as a case study of head movement to a specifier position and show how this analysis explains the apparently dual X- and XP-movement properties of participle fronting in Bulgarian, without stipulating a structure-preservation constraint on movement. As a case study of head reprojection, I discuss free relativization in Bulgarian. A treatment of this phenomenon in terms of reprojection allows for an understanding of why an element that has the distribution of a relative complementizer C in Bulgarian free relatives looks like a determiner D morphologically.
This work brings together and reconciles two strands of research, usually viewed, at least to some degree, as incompatible: head movement to specifier position and head movement as reprojection. Such synthesis is afforded, in large part, by the exclusion of the word-formation type of head movement phenomena from the purview of syntactic head movement, as in Harizanov and Gribanova 2017.
Speaker: Stephanie Shih (University of California Merced)
Date & Time: March 17th at 3:30 pm
Place: Education Bldg. rm. 433
Title: A multilevel approach to lexically-conditioned phonology
Abstract: Lexical classes often exhibit different phonological behaviours, in alternations or phonotactics. This talk takes up two interrelated issues for lexically-conditioned phonological patterns: (1) how the grammar captures the range of phonological variation that stems from lexical conditioning, and (2) whether the relevant lexical classes needed by the grammar can be learned from surface patterns. Previous approaches to lexically-sensitive phonology have focused largely on constraining it; however, only a limited understanding currently exists of the quantitative space of variation possible (i.e., entropy) within a coherent grammar.
In this talk, I present an approach that models lexically-conditioned phonological patterns as a multilevel grammar: each lexical class is a cophonology subgrammar of indexed constraint weight adjustments (i.e., varying slopes) in a multilevel Maximum Entropy Harmonic Grammar. This approach leverages the structure of multilevel statistical models to quantify the space of lexically-conditioned variation in natural language data. Moreover, the approach allows for the deployment of information-theoretic model comparison to assess competing hypotheses of what the phonologically-relevant lexical classes are. I’ll show that under this approach, the relevant lexical classes need not be a priori assumed but can instead be induced from noisy surface input via feature discovery.
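A schematic rendering of the varying-slopes idea, with invented constraints and numbers (the models in the talk are fit to corpus data): each lexical class contributes an additive adjustment to shared constraint weights, and candidate probabilities follow the usual MaxEnt formula.

import math

# Multilevel MaxEnt: the effective weight of constraint c for lexical class k
# is a shared base weight plus a class-specific adjustment (a varying slope).
base = {"*Tone": 1.0, "Ident": 2.0}                          # invented
adjust = {"noun": {"*Tone": -0.5}, "verb": {"*Tone": 1.5}}   # invented

def probs(candidates, lexclass):
    def harmony(viols):
        return sum((base[c] + adjust.get(lexclass, {}).get(c, 0.0)) * v
                   for c, v in viols.items())
    scores = {cand: math.exp(-harmony(v)) for cand, v in candidates.items()}
    z = sum(scores.values())
    return {cand: round(s / z, 3) for cand, s in scores.items()}

cands = {"faithful": {"*Tone": 1, "Ident": 0}, "repaired": {"*Tone": 0, "Ident": 1}}
print(probs(cands, "noun"))  # nouns mostly tolerate the marked tone
print(probs(cands, "verb"))  # verbs favor the repair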
Two case studies are examined: part of speech-conditioned tone patterns in Mende and content versus function word prosodification in English. Both case studies bring to bear new quantitative evidence on classic category-sensitive phenomena. The results illustrate how the multilevel approach proposed here can capture the probabilistic heterogeneity and learnability of lexical conditioning in a phonological system, with potential ramifications for understanding the structure of the developing lexicon in grammar acquisition.
Fall 2016
Speaker: Michael McAuliffe (McGill University)
Date & Time: September 23rd at 3:30 pm
Place: Education Bldg. rm. 433
Title: Dual nature of perceptual learning: Robustness and specificity
Abstract: In perceiving speech and language, listeners need both to perceive specific, highly variable utterances and to generalize to larger linguistic categories. One large source of variability is how individual speakers produce sounds; another is the way in which speech and language are used in a particular task to accomplish a goal. Perceptual learning is a phenomenon in which listeners update their perceptual sound categories when exposed to a novel speaker. Perceptual learning is robust in the sense that most listeners show perceptual learning effects, most sound categories can be easily updated, and most tasks involving speech facilitate perceptual learning. In this talk, I focus on the ways that perceptual learning can be task-specific. I present a series of perceptual learning experiments in which listeners are exposed to a novel talker through single words or longer sentences, varying the task and the linguistic context. The instructions and goals of the task exert a sizeable influence over the amount of perceptual learning that listeners exhibit. In general, listeners adapt less in the course of an experiment if they do not have to rely as much on the acoustic signal. For instance, if listeners are presented with the orthography of the word along with the audio, they will not learn as much as if they had heard the audio alone. In sentence tasks, listeners matching pictures to a word at the end of a predictable sentence (e.g., A deep moat protected the old castle) will not learn as much from the final word as in an unpredictable sentence (e.g., He dreaded the long walk to the castle). However, the inverse is true for sentence transcription tasks, with larger perceptual learning effects from predictable sentences than unpredictable ones. Perceptual learning effects can generally be seen for all listeners and all tasks, but the size of the effect depends on the exposure task and how the linguistic system is engaged.
Speaker: Yvan Rose (Memorial University)
Date & Time: October 28th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Perceptual-Articulatory Relationships in Phonological Development: Implications for Feature Theory
Abstract: In this presentation, I discuss a series of asymmetries in phonological development, the nature of which is difficult to address from a strictly phonological perspective. In particular, I focus on transitional periods between developmental stages. I show that these transitions are best interpreted in terms of phonological categories at both prosodic and segmental levels of representation, including segmental features. Using computer-assisted methods of data classification, I describe the detail of these transitions, highlighting both perceptual and articulatory pressures on the child's developing system of phonological representation. I discuss implications of these findings for Phonological Theory, in particular for traditional models of segmental representation relying on phonological features. While the data support the need for sub-segmental units of phonological representation, these units do not appear to match fully the set of features typically used in the analysis of adult phonological systems.
Speaker: Judith Degen (Stanford University)
Date & Time: November 4th at 3:30 pm
Place: Education Bldg. rm. 433
Title: Beyond "overinformativeness": rationally redundant referring expressions
Abstract: What guides the choice of a referring expression like "the box", "the big box", or "the big red box"? Speakers have a well-documented tendency to add redundant modifiers in referring expressions (e.g., "the big red box" when "the big box" would suffice for uniquely picking out the intended object). This "overinformativeness" poses a challenge for theories of language production, especially those positing rational language use (e.g., in the Gricean tradition). We present a novel production model of referring expressions in the Rational Speech Act framework. Speakers are modeled as rationally trading off the cost of additional modifiers with the amount of information added about the intended referent. The innovation is assuming that truth functions are probabilistic rather than deterministic.
This model captures a number of production phenomena in the realm of overinformativeness, including the color-size asymmetry in probability of overmodification (speakers overmodify more with color than size adjectives); visual scene variation effects on probability of overmodification (increased visual scene variation increases the probability of overmodifying with color); and color typicality effects on probability of overmodification (speakers overmodify less with more typical colors). In addition to demonstrating how the model accounts for these qualitative effects, we present fine-grained quantitative predictions that are beautifully borne out in data from interactive free production reference game experiments.
We conclude that the systematicity with which speakers redundantly use modifiers implicates a system geared towards communicative efficiency rather than towards wasteful overinformativeness.
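The core of the kind of model just described can be sketched in a few lines; the lexicon, graded truth values, costs, and rationality parameter below are invented for illustration, not the fitted model from the experiments.

import math

# Rational Speech Act sketch with probabilistic (graded) truth functions:
# sem[u][o] is the probability that utterance u is judged true of object o.
objects = ["big_red_box", "big_blue_box"]
sem = {  # invented graded semantics; "big" is noisier than "red"
    "big":     {"big_red_box": 0.80, "big_blue_box": 0.80},
    "red":     {"big_red_box": 0.95, "big_blue_box": 0.02},
    "big red": {"big_red_box": 0.80 * 0.95, "big_blue_box": 0.80 * 0.02},
}
cost = {"big": 1.0, "red": 1.0, "big red": 2.0}  # longer utterances cost more
alpha = 5.0                                       # speaker rationality

def literal_listener(u):
    scores = {o: sem[u][o] for o in objects}      # uniform prior over objects
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items()}

def speaker(o):
    scores = {u: math.exp(alpha * (math.log(literal_listener(u)[o]) - cost[u]))
              for u in sem}
    z = sum(scores.values())
    return {u: round(s / z, 3) for u, s in scores.items()}

print(speaker("big_red_box"))  # informative "red" dominates uninformative "big";
                               # redundant "big red" gains ground as semantic noise grows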
Speaker: Jackie Cheung (McGill University)
Date & Time: December 2nd at 3:30 pm
Place: Education Bldg. rm. 624
Title: Generalized Natural Language Generation
Abstract: In popular language generation tasks such as machine translation, automatic systems are typically given pairs of expected input and output (e.g., a sentence in some source language and its translation in the target language). A single task-specific model is then learned from these samples using statistical techniques. However, such training data exists in sufficient quantity and quality for only a small number of high-profile, standardized generation tasks. In this talk, I argue for the need for generic tools in natural language generation, and discuss my lab's work on developing generic generation tasks and methods to solve them. First, I discuss progress on defining a task in sentence aggregation, which involves predicting whether units of semantic content can be meaningfully expressed in the same sentence. Then, I present a system for predicting noun phrase definiteness, and show that an artificial neural network model achieves state-of-the-art performance on this task, learning relevant syntactic and semantic constraints.
Winter 2016
Speaker: Lisa Pearl (UC Irvine)
Date & Time: Friday, March 18th at 3:30 pm
Place: ARTS Bldg. room 260
Title: How to know what’s necessary: Using computational modeling to specify Universal Grammar
Abstract: One explicit motivation for Universal Grammar (UG) is that it’s what allows children to acquire language as effectively and as rapidly as they do. Proposals for the contents of UG typically come from characterizing a learning problem precisely and identifying a potential solution to that problem. One benefit of computational modeling is to see if that solution works when it’s embedded in a learning strategy used during the acquisition process. This includes specifying (i) what the child knows already, (ii) what data the child is learning from, (iii) how long the child has to learn, and (iv) what the child needs to learn along the way.
When we identify successful learning strategies this way, we can then examine their components to see if any are necessarily both innate and domain-specific (and so part of UG). I have previously used this approach to propose new UG components (and remove the necessity of others) for learning both syntactic islands and English anaphoric one. In this talk, I investigate what’s been called the Linking Problem, which concerns where event participants appear syntactically. I’ll discuss some initial findings about when prior (and likely UG) knowledge, such as the Uniformity of Theta Assignment Hypothesis (UTAH), is helpful for learning useful information about the Linking Problem.
Speaker: Pat Keating (UCLA)
Date & Time: Friday, April 8th at 3:30 pm
Place: ARTS Bldg. room 260
Title: Linguistic Voice Quality
Abstract: In this talk I will present several results concerning the production and perception of voice quality (phonation type), from a larger interdisciplinary project at UCLA. First, I compare the acoustic properties of phonation type distinctions in several languages, deriving a simple (low-dimensional) phonetic space for voice quality in which phonation types cluster across languages. Second, I discuss the relation between phonation and lexical tone. In some languages, phonation type is phonemic, and independent of tone, either because the languages are non-tonal (e.g. Gujarati), or because tones and phonation cross-classify (e.g. Mazatec, Yi languages). In other languages, phonation is non-phonemic, instead conditioned by voice pitch and segmental/prosodic contexts (e.g. English). In some such languages (e.g. Mandarin), this relation between voice pitch and voice quality gives voice quality a secondary role in tonal contrasts, increasing the effective size of the tone space. Still other tone languages have both independent phonation and pitch-related phonation (e.g. Hmongic languages); we show that in one such language, White Hmong, the perceptual role of phonation is different for different tones. These cases will be illustrated with acoustic and physiological measures of voice production, obtained with our freely-available tools for voice analysis.
Fall 2015
Speaker: Kie Zuraw (UCLA)
Date & Time: Friday, September 11th at 3:30 pm
Place: Education Building, room 338
Title: Polarized variation
Abstract: The normal distribution--the bell curve--is common in all kinds of data, and is often expected when the quantity being measured results from multiple independent factors. The distribution of phonologically varying words, however, is sharply non-normal in the cases examined in this talk (from English, French, Hungarian, Tagalog, and Samoan). Instead of most words showing some medial rate of variation (say, 50% of a word's tokens regular and 50% irregular), with smaller numbers of words showing extreme behavior, words cluster at the extremes of behavior. That is, a histogram of variant rates is shaped like a U (or sometimes a J) rather than a bell. The U shape cannot be accounted for by positing a binary distinction with some amount of noise over tokens, because some items (though a minority) clearly are variable, even speaker-internally. In some cases (e.g., French "aspirated" words) there is a diachronic explanation: sound change caused some words to become exceptional, so that the starting point for today's situation was already U-shaped. But in other cases, such an explanation is not available, and items seem to be attracted towards extreme behavior.
Two mechanisms for deriving U-shaped distributions will be discussed, with speculation as to why some distributions of variation are U-shaped and others bell-shaped.
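One standard statistical route to both shapes (not necessarily either of the two mechanisms discussed in the talk) is a beta-binomial picture: per-word variant rates drawn from a Beta distribution are U-shaped when both shape parameters fall below 1 and bell-shaped when both are large. A minimal simulation:

import random

# Per-word rates of the regular variant drawn from a Beta distribution.
# alpha = beta < 1 piles mass near 0 and 1 (a U); alpha = beta >> 1 gives a bell.
def rate_histogram(a, b, n_words=2000, bins=10):
    counts = [0] * bins
    for _ in range(n_words):
        rate = random.betavariate(a, b)
        counts[min(int(rate * bins), bins - 1)] += 1
    return counts

random.seed(1)
print(rate_histogram(0.25, 0.25))  # U-shaped: most words near 0% or 100% regular
print(rate_histogram(8.0, 8.0))    # bell-shaped: most words near 50%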
Speaker: Matt Goldrick (Northwestern)
Date & Time: Friday, October 2nd at 1:30 pm
Place: Goodman Cancer Auditorium
Title: Phonetic echoes of cognitive processing
Abstract: For many years, theories of language production assumed a strict functional separation between peripheral phonetic encoding processes and more central cognitive processes. The output of lexical access—the processes mapping intended messages to utterance plans—was assumed to yield a plan that was simply executed by more peripheral processes. Recent work has challenged such proposals, showing that on-line disruptions to lexical access can affect gradient phonetic properties (e.g., phonological speech errors influence the phonetic properties of speech sounds; Goldrick & Blumstein, 2006). I'll discuss two sets of projects from my lab that extend this work. Large data sets, enabled by machine-learning based techniques for automated phonetic analysis, provide new insights into the consequences of cognitive disruptions for monolingual speech. I'll then discuss how cognitive disruptions modulate cross-language interactions in multilingual speakers.
Speaker: Danny Fox (MIT)
Date & Time: Friday, October 23rd at 3:30 pm
Place: ARTS Bldg. room 260
Title: Quantifier Raising as Restrictor Sharing – Evidence from Hydra and Extraposition with Split Antecedents
Abstract: To provide an account of Hydra (Every boy and (every) girl who like each other should have a play date) and Extraposition with Split Antecedents (ESA, A boy came in and a girl left who like each other), along the lines of Zhang 2007.
To explain how the account argues for the following conclusions (Johnson 2011):
a. Quantifier Raising involves movement not of a QP but of the quantifiers restrictor. More specifically:
1. Quantifier words are covert and “late merged” in the QPs scope position
2. Quantifier words are morphologically realized on lower heads in the QP.
b. This should be embedded in a theory in which a moved constituent has more than one mother (multi-dominance).
To provide a semantics for the lower hosting head (inspired by Champollion 2015)
Speaker: Mark Baker (Rutgers)
Date & Time: Friday, November 6th at 3:30 pm
Place: ARTS Bldg. room 260
Title: TBA
Abstract: TBA
Speaker: Meaghan Fowlie (McGill)
Date & Time: Friday, November 20th at 3:30 pm
Place: ARTS Bldg. room 260
Title: Modelling and Learning Adjuncts
Abstract: Adjuncts have among their properties optionality and iterability, which are usually accounted for with a grammar in which the presence or absence of an adjunct does not affect the state of the derivation. For example, in a phrase structure grammar with rules like NP -> AP NP, we have an NP whether or not we have an adjective. However, certain adjuncts like adverbs and adjectives are often quite strictly ordered, which cannot be accounted for with a model that treats a phrase the same regardless of the presence of another adjunct: whether or not a particular adjunct has adjoined affects whether or not another adjunct may adjoin. I present a minimalist model that can handle all of these properties.
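The ordering point can be made concrete with a toy checker; the adjunct classes and ranks are invented, loosely modeled on the English size > color > material order. A memoryless rule like NP -> AP NP cannot make this distinction, because it forgets which classes have already adjoined.

# Toy adjective ordering: building outward from the noun, each new adjunct
# must belong to a class ranked at least as high as the last one adjoined.
RANK = {"material": 0, "color": 1, "size": 2}                 # invented ranks
CLASS = {"wooden": "material", "red": "color", "big": "size"}

def well_ordered(adjectives):
    """adjectives in surface (leftmost-first) order, e.g. ['big', 'red']."""
    last = -1
    for adj in reversed(adjectives):  # the innermost adjunct adjoins first
        rank = RANK[CLASS[adj]]
        if rank < last:
            return False              # a lower-class adjunct adjoined too late
        last = rank
    return True

print(well_ordered(["big", "red", "wooden"]))  # True:  "big red wooden (box)"
print(well_ordered(["red", "big", "wooden"]))  # False: "*red big wooden (box)"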
In terms of learning, I cover three topics: language learning algorithms and how they handle optionality and repetition; an artificial language learning experiment on repetition; and, just for fun, the use of machine learning to analyse the song of the California Thrasher, showing that its unbounded repetition lends itself much better to a human-language-like grammar than to simple transitional probabilities.
Speaker: Elizabeth Smith (UQAM)
Date & Time: Friday, December 4th at 3:30 pm
Place: ARTS Bldg. room 260
Title: Just say 'no': Cross-linguistic differences in the felicity of disagreements over issues of taste and possibility
Abstract: Semanticists, pragmaticists, philosophers, and others have recently been interested in disagreements arising from evaluative propositions (especially those containing so-called "predicates of personal taste"), as in (1), and their theoretical implications, especially the mechanism behind the difference between (1) and (2).
(1) A: This soup is tasty. B: No it isn't.
(2) A: This soup is tasty, in my opinion. B: # No it isn't.
In this talk, I will present experimental data (in the form of offline felicity judgments) collected from English, Catalan, French, and Spanish two-turn oral dialogues, showing that there are cross-linguistic differences with respect to (1) v. (2) and other similar judgments, which create a further puzzle. I will compare various explanations for these new data, drawing on ideas present in Stojanovic 2007, von Fintel & Gillies 2007, Bouchard 2012, Umbach 2012 and others. I will further discuss the interplay of various factors in these data, including a comparison with another dialect of Spanish with known differences in cultural norms as compared to Iberian Spanish. Finally, I will propose an analysis in which different types of content affect the number and type of propositions attributed to a speaker's discourse commitment set v. those being proposed for admission to the conversational common ground.
Winter 2015
Speaker: James Kirby (University of Edinburgh)
Date & Time: Monday, January 19 at 3:30 pm
Place: Education Building, room 627
Title: Dialect variation and phonetic change: Incipient tonogenesis in Khmer
Abstract: Unlike many languages of Southeast Asia, Khmer (Cambodian) is not a tone language, but an incipient tone contrast has been noted in several Khmer dialects for at least 50 years. While the process of tonogenesis is reasonably well-understood, the manner by which it seems to be taking place in Khmer - conditioned by loss of onset /r/ - has not been reported for any other language. In this talk, I will compare new acoustic and perceptual data on the emergence of tone in two varieties of Khmer: the colloquial speech of the capital Phnom Penh, and the dialect spoken in Kiên Giang province, Vietnam. I will show how this sound change may have been set in motion by devoicing of /r/, and sketch a statistical learning account of how differences in the perception of devoicing might help explain the observed differences between dialects. Finally, I will briefly discuss the implications of these findings for our understanding of tonogenesis and phonetic change more generally.
Speaker: Chris Carignan (North Carolina State University)
Date & Time: Friday, January 23 at 3:30 pm
Place: Education Building, room 433
Title: An oral articulatory approach to vowel nasalization: Searching for the "oral" in "nasal"
Abstract: Vowel nasalization, by definition, is characterized by some degree of coupling of the nasal cavity to the oral cavity via an opening of the velo-pharyngeal (VP) port, otherwise referred to as VP coupling, a lowering of the velum or, more generally, “nasalization”. In acoustic studies of vowel nasalization, it is sometimes assumed that the primary articulatory difference between an oral vowel and a nasal(ized) vowel is VP coupling and, thus, observed acoustic changes are customarily attributed to the effect of nasalization itself on the acoustic signal. The work presented in this talk starts from the assumption that the production of vowel nasalization may also involve changes to the shape of the oral tract. Inferring these oral articulatory changes from the acoustic signal may be an intractable problem due to the conflation of the respective acoustic transfer functions associated with the nasal and oral tracts. Because of this issue, I explore the oral articulation of vowel nasalization by studying the shape of the oral tract itself. The findings from four such studies are presented in this talk---two studies on phonemic vowel nasalization (European French) and two on phonetic vowel nasalization (American English). The results suggest that---without being deterministic---the effect of nasalization on a vowel's acoustic output creates a condition where misapprehension of the articulatory source is possible and, as a result, modification of the oral tract is likely. Within this framework, explanations for diachronic patterns of nasal vowel systems can be motivated, our understanding of the synchronic effects of nasalization on vowel production and perception can be deepened, and plausible predictions about nasal vowel systems can be made.
Speaker: Holger Mitterer (University of Malta)
Date & Time: Monday, February 2 at 3:30 pm
Place: Education Building, room 627
Title: When is a phone a phoneme?
Abstract: The glottal stop is viewed as a phoneme in some languages (e.g., Maltese) but as an optional prosodic boundary marker in others (e.g., Dutch). German is an intermediate case, in which the glottal stop is assumed to form the onset of “vowel-initial” words canonically (in contrast to Dutch). Nevertheless, most phonological analyses agree that the phonotactic restrictions on the glottal stop—mostly restricted to morpheme-initial position—make it unnecessary to view it as a phoneme in German (in contrast to Maltese). Such assumptions are critical for our understanding of what is “lexical”, “phonological”, and “phonetic”. In this talk, I will present several production and perception studies in Maltese, German, and Dutch investigating this issue. The production experiments showed that glottalization of vowel-initial words functions similarly in German and Dutch, contrasting with the view that glottal stops are canonical in German and optional in Dutch. The perception experiments then tested the consequences of deleting an initial glottal stop or an initial /h/. The comparison with /h/ is motivated by the fact that /h/ is considered a phoneme in German, despite phonotactic restrictions similar to those on the glottal stop. The results showed that deleting the Dutch glottal stop, the German glottal stop, the Maltese glottal stop, and German /h/ has very similar consequences in perception. These results thus favour the assumption that the glottal stop is part of the lexical representation of words in all three languages, rather than being lexically represented in Maltese but post-lexically inserted in the Germanic languages.
Speaker: Jessamyn Schertz (University of Toronto)
Date & Time: Friday, February 6 at 3:30 pm
Place: Education Building, room 433
Title: Learning different things from the same input: How initial category structure shapes phonetic adaptation
Abstract: Listeners are confronted with a large amount of redundancy in the language input. On the level of phonetic categories, sound contrasts often covary systematically on multiple dimensions, providing listeners with options of what to pay attention to (and what to ignore), in principle allowing for different individual “grammars.” In this talk, I present a series of experiments demonstrating the different choices made by native Korean listeners when categorizing the (L2) English stop voicing contrast. Korean speakers used both pitch and VOT to distinguish the contrast, showing relatively homogeneous use of the two cues in production. However, perceptual patterns varied widely, with some listeners using pitch as a primary cue, some using VOT, and some using a combination of the two. These different choices were stable across sessions and determined how listeners modified their phonetic categories when confronted with a novel accent. The fact that individual differences in phonetic structure predict categorically different adaptation patterns highlights the importance of integrating initial listener biases into models of distributional learning and phonetic adaptation.
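Listener-specific cue weighting of this kind can be sketched as a logistic combination of standardized cues; the weights and stimulus values below are invented, not the estimates from the experiments.

import math

# Two listeners categorize the same stimulus with different cue weights:
# P(aspirated) = sigmoid(w_vot * VOT_z + w_f0 * F0_z). Invented numbers.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

listeners = {"VOT-reliant": (3.0, 0.5), "pitch-reliant": (0.5, 3.0)}
stimulus = {"vot_z": -1.0, "f0_z": 1.0}  # short-lag VOT but high pitch

for name, (w_vot, w_f0) in listeners.items():
    p = sigmoid(w_vot * stimulus["vot_z"] + w_f0 * stimulus["f0_z"])
    print(name, round(p, 2))  # same token, opposite categorizations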
Speaker: Francisco Torreira (Max Planck Institute for Psycholinguistics)
Date & Time: Monday, February 9 at 3:30 pm
Place: Education Building, room 627
Title: Unraveling the time course of language production in conversational interaction
Abstract: In conversation, turn transitions between speakers often occur smoothly, most typically within a time window of 100 to 300 milliseconds. Since speech planning usually takes over half a second (ca. 600 ms for picture naming, Indefrey & Levelt, 2004; ca. 1500 ms for simple sentences, Griffin & Bock, 2000), it appears that participants in conversation often plan their utterances in overlap with their interlocutor’s turns. It is not clear, however, how they manage to launch their own turns in a timely manner (i.e., without excessive overlaps or long silent gaps). On the basis of psycholinguistic experiments (e.g., De Ruiter, Mitterer & Enfield, 2006), and against a long tradition of observational studies, it has been argued that participants in conversation rely mainly on anticipating morphosyntactic structure when timing and producing their turns, and that they do not need to make use of prosodic information in order to achieve smooth floor transitions. In this talk, I will present a series of new psycholinguistic, phonetic, and corpus studies challenging this view, and sketch an efficient turn-taking mechanism of language production involving two separate processes: a) early planning of content, based among other things on morphosyntactic prediction, and often carried out in overlap with the incoming turn, and b) late launching of articulation, mainly based on the identification of turn-final prosodic cues (e.g., phrase-final melodic patterns, final lengthening, sharp intensity drops).
Speaker: Florian Jaeger (University of Rochester)
Date & Time: Friday, February 20 at 3:30 pm
Place: Education Building, room 433
Title: The doubly-hierarchical structure of linguistic knowledge
Abstract: It is now broadly recognized that language understanding and production are probabilistic. For example, multiple instances of the same sound produced in the same context by the same speaker form a distribution over acoustic dimensions, rather than a single point. I discuss data from speech perception and language processing suggesting that the ideas of gradience and inference over noisy input, while an important step forward, do not go far enough in characterizing the cognitive architecture underlying language.
Much of the noise and variability in linguistic behavior is structured: part of the difference in speakers' gradient preferences is systematically conditioned on social indexical variables (e.g., gender, age, dialects and accents). This structured variability contributes to the infamous ‘lack of invariance’ problem in speech perception.
Listeners overcome the lack of invariance by learning to represent environment-specific linguistic statistics (e.g., talker-specific pronunciation, lexical, and syntactic preferences). Specifically, I propose that comprehenders recognize previously encountered language environments (such as a familiar speaker) and adapt to the statistics of novel environments while generalizing based on similar previous experiences. In this view, grammatical knowledge is conditioned on hierarchically organized indexical structure that captures speaker-specificity as well as generalizations across groups of speakers (sociolects, dialects, etc.). These representations can be thought of as allowing the efficient parameterization (in the stochastic sense) of grammars for different language environments.
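A toy version of such hierarchical parameterization, with invented numbers: talker-specific estimates are shrunk toward a group-level mean in proportion to how much data each talker supplies, the standard partial-pooling behavior of hierarchical models.

# Partial pooling: estimate each talker's mean VOT (ms) as a precision-weighted
# blend of that talker's sample mean and the group-level mean. Invented numbers.
group_mean, group_var, noise_var = 60.0, 100.0, 225.0

talkers = {
    "A": [45.0, 50.0, 48.0],  # several tokens: estimate tracks the talker
    "B": [75.0],              # one token: estimate stays near the group mean
}

def talker_estimate(tokens):
    n = len(tokens)
    sample_mean = sum(tokens) / n
    weight = (n / noise_var) / (n / noise_var + 1.0 / group_var)  # shrinkage
    return weight * sample_mean + (1.0 - weight) * group_mean

for name, tokens in talkers.items():
    print(name, round(talker_estimate(tokens), 1))  # A: 53.0, B: 64.6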
For this talk I will first briefly summarize evidence from speech perception (Kleinschmidt and Jaeger, in press). Then I will focus on sentence processing to demonstrate rapid expectation adaptation during language understanding (Fine et al., 2010, 2013; Farmer et al., 2014). Finally, I’ll present evidence from implicit motor learning that we can indeed learn the indexical structure underlying varying statistics in our environment (Qian et al, submitted).
Speaker: Lisa Matthewson (University of British Columbia)
Date & Time: Friday, April 10 at 3:30 pm
Place: Education Building, room 433
Title: TBA
Abstract: TBA
Fall 2014
Speaker: Anne-Michelle Tessier (University of Alberta)
Date & Time: Friday, September 12 at 3:30 pm
Place: Education Building, room 433
Title: Lexical Avoidance and Sources of Complexity in Phonological Acquisition
Abstract: This talk is about the phenomenon of lexical avoidance in children's early linguistic development, whereby a child avoids producing words which contain some complex (or marked?) phonological structure (as discussed in Ferguson and Farwell, 1975; Menn 1976, 1983; Schwartz and Leonard, 1982; Schwartz et al., 1987; Storkel 2004, 2006; Adam and Bat-El, 2009; inter alia). The basic question of this research is to what extent a child's developing grammar is responsible for lexical avoidance, and more specifically what kinds of linguistic complexity can drive this avoidance. The increase in complexity I will focus on is the transition from one-word to two-word utterances – which might be either driven or delayed by a child's phonology – and I will assess the nature of lexical avoidance related to this transition in two case studies: one taken from Donahue (1986), and another based on a novel corpus analysis. The central claim will be that phonological grammar is indeed crucial to explaining the kinds of lexical avoidance which are attested and unattested, illustrated using OT constraint interaction to yield typologically-reasonable patterns, and I will discuss some of the predictions, implications, and open questions that emerge from this approach.
Speaker: Kristine Onishi (McGill)
Date & Time: Friday, September 26 at 3:30 pm
Place: Education Building, room 433
Title: Infants' understanding of communicative intention
Abstract: Language is a tool that allows us to convey information quickly and efficiently. For example, to let you know where I left your keys, saying "your keys are on the table" is often more efficient than grunting and waving my arms. Even when we do not understand a language, as adults we infer that speakers of that unknown language can use it to convey information. When observing interactions between two people, what types of behavior do infants think can be used to convey information, and what types of information do they think can be conveyed? I will describe some recent experiments demonstrating that infants, even before speaking much, understand that speech can be used to convey information, suggesting that they realize that speech can be a tool for gathering knowledge.
Speaker: Benjamin Bruening (University of Delaware)
Date & Time: Friday, October 3 at 3:30 pm
Place: Education Building, room 433
Title: Subject-Verb Inversion as Generalized Alignment
Abstract: I suggest that the driving force behind subject-verb inversion, which takes place in questions in many languages, is the need for phonological alignment, as in the theory of Generalized Alignment in phonology and morphology. Specifically, I propose that many languages have a version of the following constraint:
Align V-C: Align(C(x), L/R, V(tense), L/R)
This constraint says that the left/right edge of some projection of C must be aligned with the left/right edge of the tensed verb. In the relevant context, say questions, this constraint holds. If the subject is in between the relevant projection of C and the tensed verb, they have to invert or the constraint is violated. The specifics of the inversion will vary from language to language and even from context to context within a language. For instance, in English the inversion is sometimes head movement, sometimes phrasal movement. In the Romance languages it is generally phrasal movement. I show that variation in how the constraint is stated in each language and how the language responds to meet it can account for an array of facts both within a single language and across languages. Languages vary in exactly the way this theory predicts they should, and a variety of seemingly obscure adjacency constraints simply falls out.
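A schematic evaluation of the constraint on toy strings (the counting scheme is a simplification for illustration, not the formal definition from the talk):

# Toy gradient evaluation of Align V-C for a yes/no question: violations =
# number of words separating the left edge of the C projection (here taken
# to be the start of the clause) from the tensed verb.
def align_violations(clause, tensed_verb):
    return clause.index(tensed_verb)

print(align_violations(["you", "will", "eat"], "will"))  # 1: subject intervenes
print(align_violations(["will", "you", "eat"], "will"))  # 0: inversion satisfies Align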
Speaker: Hadas Kotek (McGill)
Date & Time: Friday, October 24 at 3:30 pm
Place: Education Building, room 433
Title: TBA
Abstract: TBA
Speaker: Yoonjung Kang (University of Toronto)
Date & Time: Friday, November 14 at 3:30 pm
Place: Education Building, room 433
Title: Laryngeal classification of Korean fricatives: evidence from sound change and dialect variation
Abstract: Korean has a three-way contrast of voiceless stops among aspirated, lenis, and fortis stops. Recent studies converge to show that Seoul Korean is undergoing a tonogenetic sound change whereby the VOT distinction between lenis and aspirated stops is neutralized and the tone on the following vowel becomes the primary phonetic distinction. Korean fricatives, on the other hand, show a two-way contrast between a fortis and a “non-fortis” fricative. The laryngeal classification of the non-fortis fricative has been a topic of much debate, as its phonetic patterning is ambiguous between the aspirated and lenis categories. In this talk, I will bring additional evidence to the debate by examining the patterning of the fricatives in the on-going sound change in Seoul. I will also compare the Seoul data with data collected from two major North Korean dialects as spoken by ethnic Koreans in China, where the stop contrast retains the “older” VOT pattern.
Winter 2014
Speaker: Julie Legate (University of Pennsylvania)
Date & Time: Friday, January 10, 3:30 pm
Place: Education Building Rm. 433
Title: Acehnese causatives and the structure of the verb phrase
Abstract: In this talk, I provide evidence from Acehnese (Malayo-Chamic: Aceh Province, Indonesia) for a distinction between VoiceP, which introduces the external argument and assigns accusative case, and causative vP, which introduces causative semantics (Alexiadou, Anagnostopoulou, & Schäfer 2006; Pylkkänen 2008; inter alia). In Acehnese, VoiceP and causative vP are morphologically overt and occur both independently and simultaneously. Focussing on causativization of roots that are normally used as unergative or transitive verbs, I argue that the causative head does not embed an active, passive, or object voice VoiceP, but instead embeds an applicative VoiceP. Thus, the causee is introduced as an applicative object, not as an agent. Implications for the general theory of causatives and the structure of the verb phrase are considered.
Speaker: Jakob Leimgruber (McGill)
Date & Time: Friday, February 7, 3:30 pm
Place: Education Building Rm. 433
Title: Language policy in multilingual cities: effects on the linguistic landscape of Singapore and Montreal
Speaker: Marc Brunelle (University of Ottawa)
Date & Time: Friday, February 21, 3:30 pm
Place: Education Building Rm. 433
Title: An incipient tone sandhi in Northern Vietnamese?
Abstract: Synchronic tone sandhis are well attested and described, but their development is largely a matter of speculation. In this study, we look at an instance of apparent tone sandhi in progress and examine the interplay between coarticulation, reduction and perception in its formation.
In Northern Vietnamese (NVN), the low rising tone (sắc) often loses its rise in non-final position, making it perceptually very similar to the low falling tone (huyền). This gradient change does not normally result in contrast neutralization, as the rise is recoverable from a strong progressive coarticulation on the following tone. However, over the past decade, the authors have noticed that many speakers neutralize the rising tone and the low falling tone before the high level tone (ngang), an observation confirmed by native speaking linguists. This is characteristic of young female Hanoians, but seems more and more common among other gender and age groups, as well as outside Hanoi.
We conducted an acoustic investigation of this incipient sandhi in six young female NVN speakers. They were recorded while completing a map task designed to obtain target words controlled for tone and microprosody in semi-spontaneous speech. Our results show that although none of our speakers exhibits full neutralization, they all show some degree of tone change. Based on these results and those of previous studies, we infer phonetic scenarios that could account for the initial development of the tone change. We then highlight similarities between this incipient sandhi and more established cases in Chinese and Hmong.
Speaker: Norvin Richards (MIT)
Date & Time: Friday, February 28, 3:30 pm
Place: Education Building Rm. 433
Title: Pied-piping and Selectional Contiguity
Abstract: Cable (2007, 2010) argues, on the basis of data from Tlingit, that wh-questions involve three participants: an interrogative C, a wh-word, and a head Q, which is visible in Tlingit but invisible in English. In Cable's account, QP standardly dominates the wh-word, and wh-movement is always of QP. The question of how much material pied-pipes under wh-movement, on Cable's account, is essentially a question about the distribution of QP. Cable offers several conditions and parameters governing the distribution of QP.
I will try to derive Cable's conditions on the distribution of QP from Contiguity Theory, a series of proposals about the interaction of syntax with phonology that I have been developing in recent work.
Speaker: Thomas Ede Zimmermann (University of Frankfurt)
Date & Time: Friday, March 14, 3:30 pm
Place: Education Building Rm. 433
Title: On the ontological status of semantic values
Abstract: The following three theses will be defended, and connections between them will be established:
1. Model-theoretic natural language semantics is not a theory of meaning.
2. Extensions ("generalized" quantifiers, truth values, …) must be distinguished from referents.
3. Intension must be distinguished from content.
Speaker: Amy Rose Deal (UC Santa Cruz)
Date & Time: Friday, March 28, 3:30 pm
Place: Education Building Rm. 433
Title: Cyclicity and connectivity in Nez Perce relative clauses
Abstract: This talk centers on two aspects of movement in relative clauses, focusing on evidence from Nez Perce.
First, I argue that relativization involves _cyclic_ A’ movement, even in monoclausal relatives. Rather than moving directly to Spec,CP, the relative element moves there via an intermediate position in an A’ outer specifier of the TP immediately subjacent to relative C. Cyclicity of this type suggests that the TP sister of relative C constitutes a phase – a result whose implications extend to an ill-understood corner of the English that-trace effect.
Second, I argue that Nez Perce relativization provides new evidence for an ambiguity thesis for relative clauses, according to which some but not all relatives are derived by a head-raising analysis. The argument comes from connectivity and anticonnectivity in morphological case. These new data complement the range of standard arguments for head-raising, which draw primarily on connectivity effects at the syntax-semantics interface.
Fall 2013
Speaker: Richard Compton (McGill)
Date & Time: Friday, September 20, 3:30 pm
Place: Education Building Rm. 433
Title: Evidence for phrasal words in Inuit
Abstract: In this talk I argue that data from noun incorporation, conjunction, ellipsis, and a VP pro-form in Inuit provide evidence for word-internal XPs inside polysynthetic words. Such data provide a potential counter-example to Piggott & Travis’s (2012) proposal (following Baker 1996) that phonological words cross-linguistically correspond to syntactic heads—simplex or complex—with morphologically complex words being derived via head movement, head-adjunction, or PF movement.
Speaker: Emily Elfner (McGill)
Date & Time: Friday, October 4, 3:30 pm
Place: Education Building Rm. 433
Title: Recursion in prosodic phrasing: Evidence from Connemara Irish
Abstract: One function of prosodic phrasing is its role in aiding the recoverability of syntactic structure. In recent years, a growing body of work suggests it is possible to find concrete phonetic and phonological evidence that recursion in syntactic structure is preserved in the prosodic organization of utterances (Ladd 1986, 1988; Kubozono 1989, 1992; Féry & Truckenbrodt 2005; Wagner 2005, 2010). In this talk, I argue that the distribution of phrase-level tonal accents in Connemara Irish provides a new type of evidence in favour of this hypothesis: that, under ideal conditions, syntactic constituents are mapped onto prosodic constituents in a one-to-one fashion, such that information about the nested relationships between syntactic constituents is preserved through the recursion of prosodic domains. Through an empirical investigation of both clausal and nominal constructions, I argue that the distribution of phrase accents in Connemara Irish can be used to identify recursive bracketing in prosodic structure.
Speaker: Alan Yu (University of Chicago)
Date & Time: Friday, November 15, 3:30 pm
Place: Education Building Rm. 433
Title: Individual differences in speech perception and production
Abstract: Linguists often discuss language in terms of groups of speakers, even though it is also acknowledged that no two individuals speak alike. The focus on language as a group-level phenomenon can obscure important insights that are only apparent when systematic individual variation is taken into account. In this talk, I offer cross-linguistic experimental evidence showing that speakers vary significantly and systematically along certain individual-difference dimensions, including autistic-like traits, in their responses to the effects of the lexicon and coarticulation in speech perception and production. I will argue that understanding the nature of such individual linguistic differences is crucial for understanding the inception (and possibly the propagation) of sound change, the primary source of sound patterns in language.
Speaker: Laurent Dekydtspotter (Indiana University)
Date & Time: Friday, November 22, 3:30 pm
Place: Education Building Rm. 216
Title: Parsing second languages: Anaphora in real time cycles of computations
Abstract: A body of research proposes that second language (L2) sentence processing is strongly semantically guided as a result of shallow structures lacking syntactic details in real time (Clahsen & Felser, 2006a, b; Felser & Roberts, 2007; Felser, Cunnings, Batterham, & Clahsen, 2012; Felser, Roberts, Gross, & Marinis, 2003; Felser, Sato, & Bertenshaw, 2009; Marinis, Roberts, Felser, & Clahsen, 2005; Papadopoulou & Clahsen, 2003). A second body of research argues for a strong structural reflex (Dekydtspotter & Miller, 2012; Juffs, 2005; Juffs & Harrington, 1995; Hopp, 2006; Williams, Möbius & Kim, 2001; Williams, 2006; inter alia). In this case, working memory capacity, proficiency, lexical access, etc. qualify the manner in which such information is acted upon in the conceptual-intentional and sensory-motor systems in an L2 (Dekydtspotter & Miller, 2012; Dekydtspotter & Renaud, 2009; Dekydtspotter, Schwartz, & Sprouse, 2006; Hopp, 2012; Miller, 2011; Williams, 2006).
In view of new experimental evidence, the talk addresses the etiology of L2 sentence processing in a modular system consisting of autonomous components. The empirical focus is on anaphora under reconstruction, as in (1).
(1) Which story about him(self) did Ben say that Anna told?
New evidence from reading experiments strongly suggests that L2 sentence processing includes an incremental syntactic analysis according to cycles of computations. Specifically, I argue that such L2 parsing follows default structural computations that select specified information and guide aspects of the deployment of semantic processes in real time. Hence, to the extent that minimality, locality, and chains supporting binding constitute good-design signatures of language architecture given limited processing resources (Chomsky, 2005; Reuland, 2001, 2011; Rizzi, 2013), these design features seem available in L2 sentence processing. In view of these findings, a path for future research will be charted.
Winter 2013
Speaker: Kai von Fintel (MIT)
Date & Time: Friday, January 25 at 3:30 pm
Place: Education Building, room 433
Title: Hedging your ifs and vice versa (Kai von Fintel & Anthony S. Gillies)
Abstract: How does the word “if” help things we say mean what they mean? It can work together with other words like “maybe” and “probably” to make things we say less strong. But how does it do that? Many people have tried to find out how this works, but we will show that they face a big problem when one looks at people talking to each other and pointing to things the other said. Can we do better?
Speaker: Jennifer Cole (Univ. of Illinois, Urbana-Champaign)
Date & Time: Friday, February 22 at 3:30 pm
Place: Education Building, room 433
Title: Memory for prosody
Speaker: Kevin Russell (Manitoba/McGill)
Date & Time: Friday, March 1 at 3:30 pm
Place: Education Building, room 433
Title: When phonology goes bad
Abstract: The consensus on dyslexia, to the extent there is one, is that the core deficit lies in the reader having poor phonological representations or a poor ability to use those representations. Yet most dyslexic readers show no obvious problems in using phonology during everyday speaking and listening. This talk addresses the question of what it could possibly mean for a phonological representation to be poor. It synthesizes current findings in spoken word recognition and in the development of phonological categories in infants and young children to determine what the phonological representations of beginning readers are probably like and how, in some readers, they can be adequate for spoken communication but still be a poor match for the assumptions of an alphabetic orthography.
Speaker: Gillian Gallagher (NYU)
Date & Time: Friday, March 22 at 3:30 pm
Place: Education Building, room 433
Title: Identity bias and phonetic grounding in Quechua phonotactics
Abstract: Many languages distinguish between identical and non-identical segments with respect to some phonotactic restriction. For example, in several unrelated languages, roots with pairs of non-identical ejectives are unattested while pairs of identical ejectives are common (e.g., Bolivian Aymara t'ant'a 'bread' vs. *t'ank'a). In other languages, like Cochabamba Quechua, pairs of non-identical and identical ejectives are both unattested. This talk explores the basis for an identity exemption to phonotactics by testing Quechua speakers' production and perception of non-identical and identical ejective pairs. If identical pairs of ejectives (or segments in general) benefit from some bias, then this bias should be latent in speakers of languages that don't grammatically distinguish identical from non-identical ejectives. It is found that Quechua speakers are more accurate at repeating nonce words with pairs of identical ejectives (e.g., p'ap'u) than pairs of non-identical ejectives (e.g., k'ap'u), though no distinction is found in a perception task. These results suggest that identical ejectives have an articulatory advantage over non-identical ejectives. Further evidence that articulation is central to the cooccurrence restriction comes from a production task with real phrases of Quechua. Ejectives can cooccur across word boundaries in Quechua (e.g., misk'i t'anta 'good bread'), though speakers de-ejectivize one of the two ejectives in phrases of this type at a small but significant rate. Implications of these results for the analysis of cooccurrence restrictions and the role of phonetic effects in the grammar are discussed.
Speaker: Colin Phillips (Univ. of Maryland); CRBLM/Linguistics Distinguished Lecturer
Date & Time: Friday, April 12 at 3:30 pm
Place: Education Building, room 433
Title: Generating Expectations and Meanings in Language Comprehension and Production
Abstract: We often have expectations about utterances before they are uttered. How we do this, in language production and comprehension alike, has implications for practical concerns and for theoretical questions about language architecture. The ability to generate reliable expectations may be a key enabler of robust language understanding in noisy environments. Understanding the (non-)parallels between the generative mechanisms engaged in comprehension and production is essential for any attempt to close the gap between grammatical 'knowledge' and language use systems. In this talk I explore how we generate expectations about word-level and sentence-level meanings. One set of studies uses behavioral interference paradigms to examine the time-course of verb generation when Japanese speakers plan their utterances. Two other series of studies focus on electrophysiological evidence for the generation of verb expectations in Chinese, Spanish, and English. Evidence for advance generation of verb meanings is found in comprehension and production alike. But we find that different types of linguistic information drive expectations on different time scales. In verb-final clauses, verb expectations are initially driven only by lexical associations, and effects of compositional interpretations are observed only after a delay. Similar mechanisms operate in production and comprehension, but they yield different outputs, depending on the information available to the language user in a specific task.
Fall 2012
Speaker: Robert Henderson (UCSC/McGill)
Date & Time: Friday, September 7 at 3:30 pm
Place: Education Building, room 211
Title: The morphosemantics of Mayan positional derivation
Speaker: Bryan Gick (UBC/CRBLM/McGill)
Date & Time: Friday, September 14 at 3:30 pm
Place: Education Building, room 211
Title: How humans don't have lips
Abstract: Researchers concerned with speech and related functions of the vocal tract have long relied on lay conceptions of terms like "lips" and "tongue" to describe ostensible parts of the anatomy. Close examination of these and other vocal tract structures strongly suggests that they are anatomically ill-defined, culture-specific concepts (which partly explains why researchers have never agreed on how to describe them). Nevertheless, they remain fundamental building blocks in our otherwise highly formalized theories of phonology, phonetics, sound change, language acquisition, and so on. Biomechanical modeling and production experiments will be used to show that, in addition to being anatomically indistinct, these structures are not straightforwardly definable in terms of their mechanical or articulatory function. So, how DO humans have lips? It will be argued that cultural concepts like "lips" (and concomitant phonological categories like [labial]) are indeed useful and relevant, but only in a robust, multidimensional, real-world setting - the setting where language happens. Implications for sound change, language acquisition, and the emergence of phonological categories will be discussed.
Speaker: Alex Drummond (McGill)
Date & Time: Friday, October 5 at 3:30 pm
Place: Education Building, room 211
Title: Parallelism and Dahl's paradigm
Abstract: I will attempt to defend the following two hypotheses: (i) that the binding constraints are stated in terms of a general notion of covaluation which subsumes binding and coreference; and (ii) that VP ellipsis is constrained by a strict parallelism requirement. My starting point is a 2007 paper by Irene Heim, which sketches a formulation of the binding theory consistent with hypothesis (i). The primary empirical problem for Heim’s theory is Dahl’s paradigm, which appears to necessitate the rejection of hypothesis (ii). I will argue that certain proposals in Tanya Reinhart’s 2006 monograph can be adapted to overcome this problem.
Speaker: Martina Wiltschko (UBC)
Date & Time: Friday, November 30 at 3:30 pm
Place: Education Building, room 434
Title: The structure of universal categories: Towards a formal typology
Abstract: When it comes to the nature of categories within syntactic theory, we can identify two opposing positions: (i) the universalist position, according to which categories are universal; and (ii) the variance position, according to which languages differ in the morpho-syntactic categories they make use of. My goal for this talk is to develop a model of grammar that allows us to reconcile these seemingly contradictory positions. I first show that we want to maintain both positions. On the one hand, I review some properties of functional categories that suggest that there is a universal set of hierarchically organized categories. On the other hand, I review properties of categories across different languages that suggest that they are indeed language-specific. In fact, I shall argue that categories defined based on word class (i.e., determiner), morphological type (i.e., inflection), or substantive content (i.e., tense) cannot be universal on principled grounds. Instead, I propose a model according to which universal categories are defined based on their core function: classification, anchoring, and discourse linking. I refer to this as the Universal Spine Hypothesis. Variance in the inventory of categories across languages arises via different strategies for mapping form and meaning onto the syntactic spine. This will allow us to formulate a formal typology of functional categories.