Description, theory and linguistics as a science – an interview with William B. McGregor

Professor William B. McGregor is an Australian-born linguist who works at Aarhus University in Denmark. He has published various books on linguistic theory and Australian languages, which include Semiotic Grammar (1997), Verb classification in Australian languages (2002), The Languages of the Kimberley, Western Australia (2004), Linguistics: An Introduction (2009, 2015 second edition) and Sign Languages of the World: A Comparative Handbook (2015, coeditor with Julie Bakken Jepsen, Goedele A. M. De Clerck and Sam Lutalo-Kiingi). He has written extensively on a wide range of topics which include optional case marking, zero-markers, Australian historical linguistics and Shua syntax. He is also the author of various grammatical descriptions of Australian languages including Warrwa, Gooniyandi and Nyulnyul.

I interviewed him in Aarhus about linguistic theory.

Could you briefly summarise what it is you do in linguistics?

I suppose I regard myself primarily as a descriptive linguist – but one who thinks that descriptive linguistics has to be informed by theory, and that it also has implications for linguistic theory. So I’m not just interested in describing what I see in languages, but also in what it implies theoretically.

Currently I’m working on a grammar of Warrwa, which is a language of the far Northwest of Australia. I began working on it in the late 1980s and wrote a grammar sketch and various articles on specific topics over the years. Now I’m aiming to extend the grammar sketch into a comprehensive grammar. This is then written within the framework of the theory I’ve been working on since the late nineties, Semiotic Grammar.

How did Semiotic Grammar arise, and why did you feel the need to develop your own theory?

I guess primarily it arose due to my attempting to apply Halliday’s Systemic Functional Grammar to an Australian language. The difficulties I encountered led me to refine his theory in various ways. And I guess the work that I’ve been doing over the past 20-or-so years is really within that framework, and trying to develop it in various ways when I address descriptive problems. So for instance, one thing I’ve always been very interested in is optional case marking, where languages that have case have a choice between using the case and leaving it out. How do we account for that? And what is the relevance of this optional case marking to grammatical theory?

In Semiotic Grammar I propose a multi-dimensional shape to linguistic structures, for example a sentence, where the sentence doesn’t emerge as having a single structure, but multiple structures, depending on the perspective you look at it from. I consider linguistic units to be effectively four-dimensional objects that look different depending on your perspective.

Do these different perspectives loosely correspond to the different components of Systemic Functional Grammar, and if so, what was your dissatisfaction with this theory?

Indeed, they correspond to Halliday’s components, which he called metafunctions, but somewhat loosely in that I distinguish them in different ways to Halliday – primarily for syntagmatic rather than paradigmatic reasons – and draw the boundaries between them in different places.

There are a number of things that dissatisfied me with Systemic Functional Grammar. For instance, Halliday never worried too much about things like internal consistency and comprehensiveness, which are central to most scientific theories. So I started to develop it in ways that made it more economical, and to impose more precise and rigorous conditions on things. Halliday was always satisfied with characterisations that were very loose, whereas in my view it is better to try and tighten things up and get a more rigid specification. Take metafunctions, for instance; for Halliday, different metafunctions were identified because networks of paradigmatic relations turned out, in his view, to fall into components that were tightly connected internally, but showed few interconnections; moreover, they were loosely connected with different structural features. But I wanted to get a more precise connection between these.

Is that where your emphasis on the interconnectedness of form and function in the sign comes in?

That’s right. The way I see it, Halliday was always talking about signs, but effectively refused to acknowledge them as such. He preferred in a way to operate with a theory where the sign has no place, even though as far as I can see the sign occupied a central place in his theory.

Is this also because he was so function-oriented?

In a way, yes. But he also wanted to move away from loose conceptualisations of function and try to group them into significant types, and that’s where his notion of metafunctions comes in – which I think is one of his most significant contributions to linguistics.

I’m not very familiar with Semiotic Grammar as you know, but the way you describe it – taking the linguistic sign as the starting point, an interest in both form and function – apart from perhaps a lesser focus on cognitive aspects, to me it looks rather close to Construction Grammar. Could you point out some important differences?

Well I think you’re right. The construction in Construction Grammar is effectively also a linguistic sign in the sense of Ferdinand de Saussure. Now the main difference is that Construction Grammar tends to see constructions as effectively unanalysable, and often doesn’t look at the grammatical structure at all. Sometimes we have grammatical constructions defined by the lexical or grammatical items that occur in them, but there is no real place for grammatical relations as far as I can see.

It’s even been argued by some Construction Grammarians that there is no such thing as a grammatical relation at all. This is one of the theses in William Croft’s Radical Construction Grammar.

Yes, and on this matter I diverge completely from Construction Grammar, because I think there are many more linguistic signs there than they would admit. Surely constructions are in general at least partly analysable, and are also made up of other things like grammatical relations, lexical items, grammatical items and so on, all of which make some kind of contribution to the meaning of a particular instance. To the extent that Construction Grammars, like for example Goldberg’s, bring grammatical relations in, they only really assume the basic sort of Latinate ones like subject and so on, and these have no real place and no effective meaning. So there are some commonalities, but that’s one point where we differ.

Chomsky said linguistics is effectively a branch of psychology, whereas Halliday said linguistics is effectively a branch of sociology. What is your take?

Well, recently I was reading a paper by Dick Hudson, who said that “everything cognitive in language is functional, and everything functional in language is cognitive”. Now Halliday’s perspective on this was largely sociological, because he believed – at least that’s what he said to me – that we really don’t know enough about the human mind to comment on the relation between language and the mind, but we know a lot more about language in its social context. So I don’t think he believed language is not a psychological or cognitive phenomenon, but rather that we take the sociological perspective because that’s the perspective that’s most readily available. Going back to Hudson, he definitely believes that language is a cognitive phenomenon, but in my view we can have both – society depends on people with brains. So the brain is absolutely essential to language, but it is important to realise that cognition is not just an individual phenomenon, it’s also a social phenomenon. So the cognitive perspective is both individual and social. And the relevance for language is that this duality of the cognitive and the social, in my view, structures the grammar of languages. So grammars show phenomena that iconically represent features of interpersonal interaction.

Most if not all extralinguistic factors are ultimately grounded in cognition.

Which would include social cognition, that’s right. One of the objections to ‘psychology’ in this domain arises when we think of psychology as what goes on in an individual mind, in abstraction from the social context – a view I certainly don’t like, and which Halliday wouldn’t approve of either.

So you’re a functional linguist-

-hold on, I don’t like that label (laughter). I’ve always disliked that label, because it’s not function that’s central to my interest, it’s the combination of form and function. While in practical terms I’m more theoretically aligned with what functionalists are doing, I wouldn’t consider myself a functionalist.

Alright, let me rephrase that. Your work deals with function extensively, but whereas most function-oriented linguists also deal with many cognitive aspects (cognitive linguistics and functional linguistics go hand in hand), as far as I know you don’t tend to include many cognitive aspects. Is there a particular motivation for that?

Well yes I think so, because like Halliday I feel like I’m in the dark as far as psychology goes, although I have tried to rectify that situation somewhat over the years. Now that we know more about cognition I feel a little bit happier making hypotheses about certain aspects of cognition in relation to language, but one thing I’ve never liked about theories that call themselves cognitive, like Langacker’s [Cognitive Grammar], is the number of times cognition is alluded to and hypothesised, when one doesn’t really get the feeling it’s any more than plausible. The statement may be plausible as far as cognition goes, but there is a serious gap in actually drawing that association – in demonstrating it. It’s very easy, if you look at a grammatical construction, to come up with cognitive bases for it, but very hard to actually motivate them.

A good example of this might be the fact that we attribute unmarked word orders to a cognitive perception of events. So for instance, Agents tend to occur as the first clausal argument because we perceive the event as starting with an Agent. This of course sounds very good, but it is actually very hard to prove.

Yes, that’s right. I mean, in all these domains there’s been limited experimental stuff, for instance by looking at attention, but I’m always left with the feeling that the claim sounds plausible, but that’s as far as we get.

How do you foresee the future for linguistics, and where do you think it should be headed in terms of theory? Are there any theories you think are viable or should be dispensed with?

I was afraid you’d ask something like that (laughter). As to where it should be headed, I think there’s a serious lack of descriptive work going on in the world that is theoretically informed. I would like to see a lot more of that, where linguists adopt and test their theories rather than trying to write atheoretical, ‘framework-free’ grammars, which of course are not atheoretical at all.

I think many linguists construe the act of grammar-writing as the first step in the scientific process, similar to a biologist observing and collecting organisms. So you get the data, and you can always do stuff with it later.

Yes, that’s true. Models of grammar-writing that people like Nikolaus Himmelmann propose [where a sharp distinction is made between documentation and description] are presented as though the written grammar is there in the data. So all you need to do is record stuff, and you can come back to it a hundred years later and pull the grammar out. I think nothing could be further from the truth; grammar-writing is a process of interacting with data and trying to understand it. And that involves understanding both the form and the meaning of the linguistic sign. There is no such thing as uncontaminated data: it has always gone through at least two minds, that of the linguist and that of the person being recorded. And that’s what makes this task so exciting I think, that in the process of writing a grammar you continually refine your understanding of the utterances that you’ve written down in your field notes. So you’ll eventually understand that some word you wrote down and glossed as ‘father’ turns out to mean something very different from what it means in English.

And you wouldn’t ever find this out if you’re the guy extracting elicited sentences from a database a hundred years later.

Yes, it’s not extractable from the raw data that you’ve recorded in your notebooks. So that’s why I really don’t think there should be a difference between documentation and description.

Something else I would like to see improved in grammar-writing concerns the fact that we’re still ignorant of many things going on in languages, even in better-described languages like English – and this is all the more true of other languages. Linguists just haven’t perceived the full richness of languages. Although I think we can say that the grammar of any language is delimited and finite, I don’t think any written grammar has reached the point of being truly comprehensive. That’s something we should be aiming for in the future: to increase the scale of our descriptions by an order of magnitude, which I think would take us to about the full extent of the grammars of human languages – I don’t think they’re wildly more complex than that. But we do need to move up from our current 1,000-page grammars to around 10,000-page mega-grammars. Then I think we have a hope, or at least some possibility, of having described the grammar of a language. (I say all this with some qualifications, since the target is forever changing; grammar is inherently dynamic – as Halliday put it, every pass through the system reconfigures its probabilities, thus providing the ground for change.)

And a lot of grammar writers are of course heavily influenced by current topics in typology, they tend to include what’s currently of interest. That’s also why I mentioned that many linguists view grammar-writing as the first step in the scientific process, like collecting an organism. They see descriptive linguistics as feeding into linguistic theory, but not necessarily the reverse.

Yes, that’s right. The thing there is that one has to be awake to interesting things, and when you’ve found them, to explore them. I think that’s a valuable lesson from McGurk [see McGurk effect]: if you find something anomalous, don’t throw it away, but explore what it means and its implications. As for theory, well, I don’t see much future for generative theories; I simply find the narrow focus on certain phenomena of language to the exclusion of others uninteresting. But whether or not they will succeed in the future, who knows. I think linguists are increasingly adopting cognitive and functional-type approaches, so there are many more of ‘us’ around than there were in the past. I think this is a good thing, and may be the salvation of linguistics in a sense (laughter).

A paradigm shift?

Yes. But the reality is, I don’t think there will ever be a single paradigm in linguistics; it’s just not the sort of subject that lends itself to a unified theory.

That might be true for any subject at the human level perhaps.

Yes, I think once you’re dealing with humans, as distinct from the physical sciences, you will always have some diversity of opinion. And once you accept this as a fact, diversity of opinion is actually a good thing. I think we should definitely be working within a theoretical framework, but it’s also important to appreciate and acknowledge what others are doing. What I particularly like about doing descriptive linguistics is that you are forced to take a broad perspective on the subject, to be a jack-of-all-trades (while not necessarily being a master of none). You need knowledge not just of all the core areas of linguistics, but also an understanding of the social dimension, anthropology, archaeology, history, geography, and cognition – increasingly, linguists studying small languages are addressing questions of conceptualisation and cognition, e.g. about space, time, colour and number. Language is not a separate, encapsulated phenomenon isolated from the rest of human life; it is something that human beings use.

Thank you for your time.

No worries.

Jeroen Willemsen is a PhD student in Linguistics at Aarhus University. He conducted this interview for his previous linguistics blog fuzzygrammar.wordpress.com in 2016. 
The picture from Lingvistiska Samlarbilder was used with permission from Michael Parkvall.
