Contents
1 An Interdisciplinary Approach
2 Theoretical Concepts and Issues
2.1 Language in the oral and the written mode
2.1.1 Lexical density and grammatical intricacy
2.1.2 Involvement and detachment
2.1.3 Textual dimensions and a grammar of conversation
2.1.4 Conclusion
2.2 Near-synonymy and the lexicon
2.2.1 Synonymy as an emergent phenomenon
2.2.2 Synonymy as a matter of absoluteness and degree
2.2.3 Variations across near-synonyms
2.2.4 Conclusion
2.3 A multi-dimensional point of view
2.3.1 Introducing semantic space
2.3.2 Collocations as distance determinants
2.3.3 Conclusion
3 A Statistical Model of Near-synonymy in the Oral and Written Mode
3.1 The Hypothesis
3.2 The Methodology
3.2.1 Composing the subcorpora of spoken and written registers
3.2.2 Selecting the targets
3.2.3 Selecting the context elements
3.2.4 The lexical association function and the similarity measures
3.2.5 Some caveats
4 Results
4.1 Spontaneous Conversation
4.2 Context-governed speech
4.3 Written language of medium formality
4.4 Highly formal written language
4.5 The dimensions of variation
5 Discussion
5.1 Evaluation of the model
5.2 Is near-synonymy just the usual sort of standard stuff?
5.3 The determinants of variation
6 Summary
7 References
Appendix A: Context Elements
Appendix B: Similarities and Distances
Confirmation of Authorship
1 An Interdisciplinary Approach
Any attempt to analyse the phenomenon of near-synonymy in the oral and written mode implicitly expresses the expectation of finding a tendential, or even significant, variation among the lexical realisations of a given underlying semantic concept in diverse contexts of linguistic performance. However, this task requires a precise determination of the kind of variation we hope to uncover in the course of analysis. One option would be to ask how often the lexical items under discussion are used in speech and writing, respectively, and to determine that an item, or target, t1 is used twice as often in written language as in spoken language, while, conversely, the opposite holds for target t2. But such an observation would merely scratch the surface of the actual problem, namely the assessment of variation among the items concerning their paradigmatic relations to each other in both modes of communication. Put more precisely: presupposing that speech and writing differ considerably along dimensions exceeding the mere distinction of being produced in the oral or written mode, does this difference also affect the paradigmatic relations between particular lexical items in a significant way? The answer to this question would ultimately give rise to a more dynamic understanding of how the items are used in language in general.
However, this consideration still leaves us with the problem of how to operationalise the items’ paradigmatic relations to each other in such a way that we are able to measure them in different situations of linguistic performance. In the present thesis, we will meet this challenge by pursuing the following argument: an inherent part of a lexical item’s meaning is constituted by its collocational potential, that is, its syntagmatic readiness with respect to a given set of locally co-occurring lexical context; consequently, near-synonyms share a certain, measurable amount of this potential with each other, on the assumption that if they are semantically similar, they may also be similar in respect of their syntagmatic readiness. That is, if we consider a set of near-synonyms with respect to their collocational patterns in diverse situational contexts of linguistic performance, are we able to assert a significant difference concerning their similarity across these contexts? If we are able to observe such a difference, what are the proper dimensions and determinants of this variation? As we see, an interdisciplinary approach to the phenomena of near-synonymy and performance variation raises many questions begging for empirical research.
In order to uncover and illustrate performance-dependent variations among near-synonymous lexical items realising a performance-independent concept in as reliable a manner as possible, merely comparing the items’ concordances qualitatively in large samples of linguistic performance appears to be a highly time-consuming and competence-biased undertaking. Rather, it seems promising to quantitatively assess the items’ collocational patterns in the neighbourhood of a fixed set of locally co-occurring lexical context. This is the objective of the thesis at hand, in the course of which we will develop a statistical model of collocational variation among a set of near-synonymous lexical items that accounts for a flexible and dynamic view on the subject of ‘proximity in meaning’. Figure 1.1 depicts the basic conceptual design underlying the present thesis’ course of analysis.
[Figure 1.1 not reproduced in this excerpt.]
“By analysing the collocational patterns of a set of near-synonymous lexical items in different situational contexts of linguistic performance, we are able to quantitatively assess and graphically represent a hypothesised variation concerning the items’ paradigmatic relations to each other.”
Figure 1.1. The interdisciplinary approach to the analysis of near-synonyms in the oral and written mode.
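To anticipate what such a quantitative assessment might look like in practice, the following sketch builds, for each target, a vector of raw co-occurrence counts with a fixed set of context elements and compares targets by cosine similarity. The thesis’ actual association function and similarity measures are specified in section 3.2.4; every name, the window size, and the toy ‘corpus’ below are invented stand-ins.

```python
from collections import Counter
from math import sqrt

TARGETS = {"mistake", "error"}                 # hypothetical near-synonyms
CONTEXT = ["make", "made", "big", "fatal", "correct", "system"]
WINDOW = 4                                     # tokens on either side

def collocation_vector(tokens, target):
    """Count how often each fixed context element occurs within
    WINDOW tokens of an occurrence of the target."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            for neigh in tokens[max(0, i - WINDOW): i + WINDOW + 1]:
                if neigh in CONTEXT:
                    counts[neigh] += 1
    return [counts[c] for c in CONTEXT]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy 'register' sample; in the thesis, one such sample per register.
tokens = ("i made a big mistake yesterday and the system "
          "reported a fatal error we could not correct").lower().split()

vecs = {t: collocation_vector(tokens, t) for t in TARGETS}
print(cosine(vecs["mistake"], vecs["error"]))  # proxy for paradigmatic proximity
```

Comparing the same pair of vectors register by register would then show whether the targets’ collocational similarity varies across situational contexts, which is the question the model in chapter 3 formalises.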
In order to put the analysis on a solid foundation, the subsequent chapter will provide an accurate overview of the various theoretical and methodological perspectives involved in the setup of the statistical model of near-synonymy in the oral and written mode. The latter will be carefully established in chapter 3, comprising the formalisation of a proper research hypothesis and the selection of the samples of linguistic performance as well as of an appropriately salient set of near-synonymous items to be analysed. Chapter 4 presents the model’s output and provides a qualitative analysis of the semantic dimensions unfolding the selected items’ underlying concept. Chapter 5 analyses the output in a more aggregated manner and derives the proper determinants of variation among the various performance samples under discussion. The final chapter critically summarises the findings of the present thesis and provides suggestions for future research.
2 Theoretical Concepts and Issues
This chapter will provide an examination of the most salient topics touched upon in the present thesis: a review of prior studies on the differences between language in the written and the oral mode, the notion of near-synonymy and its subsumption into the field of lexical semantics, and, finally, the motivation for adopting a multi-dimensional perspective in the analysis of near-synonymy in written and spoken language.
2.1 Language in the oral and the written mode
Talking is more like dancing than it is like playing chess.
Michael Halliday (1987: 57)
Historically, academics considered written instances of language the reference point of appropriate language use itself and condemned any oral discourse not fitting into such a prescriptive scheme as ungrammatical, or even illegitimate. Speech was regarded as a degenerate and sloppy insult to the structurally elaborated instances of literacy, not even worthy of study. With the rise of phonetics as a separate discipline in linguistics at the end of the nineteenth century, researchers began to change their minds and to study speech in its own right, presuming that language is first and foremost a sound system. By the early twentieth century, linguists considered writing not to be “language, but merely a way of recording language by visible marks” (Bloomfield 1933: 21) and postulated the fundamental primacy of speech, which in turn caused structural linguistics to exclude any comparison of written and spoken language from its scope of study. This bias has persisted in some works of contemporary linguistics, building on “the undoubtedly correct observation that spoken language is ‘true’ language, while written language is an artifact” (Aronoff 1985: 28).
But, as so often, theory and practice tend to differ considerably. Within the Chomskyan paradigm, spoken language still has the flavour of being inappropriate for the systematic and representative study of language due to its accumulation of hesitations, false starts, and slips of the tongue. Instead, generativists substitute(d) grammatical intuitions (i.e. internalised language) for externalised language as the primary data to be analysed (Chomsky 1988). Michael Halliday commiserates with linguists dedicated to this kind of research because they are unlikely ever to come across a verbal construction like the final one in the following dialogue, which he claims to be a regular product of spoken English:
[Dialogue example not reproduced in this excerpt.]
John Lyons has substantiated the primacy of speech by reference to its historical, structural, and functional priority over written language. While the historical priority of spoken language appears quite obvious, since “there is no human society known to exist or to have existed at any time in the past without the capacity of speech” (Lyons 1981: 12), the other dimensions of pre-eminence need some elaboration. At least for our alphabetic system, which ideally provides a one-to-one correspondence of particular letters (and their combinations) to particular sounds (and their combinations), we can assume that an unacceptable sequence of characters in a given language (e.g. * lmot in English) derives from the constraints that the nature of the human speech apparatus and, more saliently, the phonology of that given language impose on the distribution of phonemes (i.e. there are no acceptable forms of English beginning with [lm]), and not vice versa. This is known as the structural priority of speech and entails that the combinability of particular alphabetic signs is completely unpredictable in terms of their shape, but relies, to a greater or lesser extent, on the combinability of the particular sounds they represent. The functional priority of spoken language is more like a double-edged sword: on the one hand, we can safely argue that, on average, the six billion residents of planet earth produce speech uncountably more often than writing at any given point in time; on the other hand, we cannot be quite sure whether this asymmetry also holds for the reception of language. The exponential growth of information technology in recent decades has promoted an equally sharp rise in the supply and availability of mainly text-based, that is, read-only information resources such as the World Wide Web. A further point relates to the comparably increasing ubiquity of mobile communication systems, which contribute to both speaking and writing (cf. ‘short message service’); we will not, however, venture to estimate the in- and output of either communicative mode. A rather distinct implication of the functional dimension relates to the fact that written language has historically been used for reliable communication across long distances as well as for the maintenance of religious, legal, and commercial documents. Thus, the development of writing systems as institutionalised, recordable knowledge has contributed to writing’s social priority, especially in those cultures belonging to the catchment areas of the major monotheistic religions.
In addition to, and partly in rejection of, the Chomskyan competence/performance dichotomy, linguists in the neo-Firthian tradition (cf. section 2.3) such as Dell Hymes (1971) and Michael Halliday (1973) postulate a formal knowledge about the appropriateness of linguistic variation due to contextual circumstances of language use. Such a notion of communicative competence considers neither speech nor writing to be primary to the other, but rather includes both communicative modes, and particularly their comparison, in the scope of analysis. This point of view also accounts for the observation of a language’s medium-transferability, that is, the obvious phenomenon that a language, given a writing system, is to a large extent transferable from one mode of production into the other. Consequently, aspects of written language are borrowed by speakers when suitable, just as aspects of speech may be borrowed by writers.
Thus, given such a variety of points of view on the particular significance of either spoken or written language, what are the proper differences between them? Written language is generally considered structurally elaborated, formal, depersonalised, and detached from its spatio-temporal context. It is typically associated with planned and deliberate text editing, primarily used in fictional literature and academic prose. In contrast, spoken language is typically associated with conversation that is spontaneously processed and evaluated in the context of interpersonal relationships at a structurally simple level, maintaining cohesion primarily by means of deictic references, prosodic cues, and paralinguistic devices. These are the distinctive features most predominantly found in the introductory chapters of any volume dedicated to the comparative study of speech and writing (e.g., Tannen 1982). In the following sections we will examine some dimensions of divergence in more detail.
2.1.1 Lexical density and grammatical intricacy
We have already come across Michael Halliday’s (1987, 1989, 1992) fondness for verbal groups such as “it’ll’ve been going to’ve been being tested every day for about a fortnight soon”, or even “they said they’d been going to’ve been paying me all this time, only the funds just kept on not coming through”, which he encountered when he started systematically observing natural spontaneous discourse in English in the late 1950s. Halliday claims that referring to constructions of that kind as linguistic failures would fall short of the unconscious and incidental manner in which they are generated on the part of the speaker and processed on the listener’s part. If asked to recall such an utterance, speaker as well as listener would most probably offer a paraphrase adequately reproducing its meaning, but by no means its actual wording. Other examples show that the clauses of spontaneously uttered language can run to considerable length and depth and that speakers normally do not show any sign of disorientation, “but emerge at the end with all brackets closed and all structural promises fulfilled” (1987: 58).
In a rather introspective manner (i.e., N=1), Halliday (1987) exemplifies the continuous quality of language, in which speech and writing merely mark the prototypical poles, by transferring both spoken and written samples stepwise into their complementary modes. He finds that many of the differences between the two can be accounted for as the effect of two related lexicosyntactic variables, namely lexical density and grammatical intricacy. Halliday defines lexical density as the ratio of lexical items (i.e., content words) to the number of non-embedded clauses in the total discourse and shows that this ratio tends to be higher in a written text, while it decreases as the text approximates spontaneous speech. It is important to note that it is not the number of lexical items that increases in the written mode – in fact, it remains fairly constant in the model – but the number of clauses that decreases. Halliday examines the grammatical intricacy of each mode in terms of systemic functional grammar (Halliday 1985) and argues that while “spoken language tends to accommodate more clauses in the syntagm …, with fewer lexical items in the clause, written language tends to accommodate more lexical items in the clause …, with fewer clauses in the syntagm” (1987: 71). The crucial divergence between speech and writing is grounded in their particular way of encoding lexical information: written language exhibits a considerable potential for embedding within the noun phrase, thereby allowing the lion’s share of the lexical content to be nominalised; in contrast, spoken language realises low lexical density by means of hypotaxis[1], thus distributing the lexical content more or less evenly among interdependent clauses.
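To make the measure concrete, here is a minimal sketch of Halliday’s ratio. It assumes the hard analytical work (classifying tokens as content words and segmenting the discourse into non-embedded clauses) has already been done; the tag set and the example clauses are invented for illustration, though the written example is adapted from the kind Halliday discusses.

```python
# Minimal sketch of Halliday's lexical density, assuming the sample has
# already been segmented into non-embedded clauses and each token has
# been tagged. CONTENT_TAGS is one possible operationalisation of
# 'lexical item'; Halliday's own criteria differ in detail.
CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}

def lexical_density(clauses):
    """Ratio of lexical (content) items to non-embedded clauses."""
    content_items = sum(
        1 for clause in clauses for (_, tag) in clause if tag in CONTENT_TAGS
    )
    return content_items / len(clauses)

# A written-style sentence packed into one clause vs. a spoken-style
# paraphrase spreading the same lexical content over three clauses:
written = [
    [("every", "DET"), ("previous", "ADJ"), ("visit", "NOUN"),
     ("had", "AUX"), ("left", "VERB"), ("me", "PRON"),
     ("with", "ADP"), ("a", "DET"), ("sense", "NOUN"),
     ("of", "ADP"), ("futility", "NOUN")],
]
spoken = [
    [("whenever", "SCONJ"), ("I", "PRON"), ("visited", "VERB")],
    [("I", "PRON"), ("came", "VERB"), ("away", "ADV")],
    [("feeling", "VERB"), ("it", "PRON"), ("was", "AUX"), ("useless", "ADJ")],
]

print(lexical_density(written))  # 5.0: five content words, one clause
print(lexical_density(spoken))   # ~1.67: five content words, three clauses
```

Note that the content-word count is the same in both versions; only the clause count differs, which is exactly Halliday’s observation quoted above.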
Moreover, Halliday uses the notion of grammatical metaphor to account for the organisation of information in written discourse. It backgrounds and objectifies “ideational content”, for example by means of nominalisation, and thus compensates for written language’s lack of other resources for structuring the message, such as prosodic cues and paralinguistic devices. Although he points out that grammatical metaphor is not confined to writing, he considers it writing’s most distinctive characteristic as opposed to spoken language. He concludes that a dichotomous point of view on the differences between speech and writing does not accommodate their continuous nature. Rather, both modes of communication should be considered as revealing different kinds of complexity, “crystalline” in the case of written language and “choreographic” in the case of spoken language. Such a point of view would be unlikely to be comprehensible “through the lens of a grammar designed for writing” (1987: 67). And this sets up Halliday’s cardinal criticism of approaches such as Chafe’s distinction between involvement and detachment, which we will discuss in turn.
2.1.2 Involvement and detachment
Wallace Chafe (1982; Chafe and Danielewicz 1987) presents an analysis based on four samples of linguistic performance: informal dinner-table conversations, formal spoken language from lectures, informal personal letters, and formal academic papers. As a basic determinant, the cognitive effort required in each kind of language affects the variety and level of vocabulary, as well as the construction of clauses and sentences. Building on the observation that casual spoken language is uttered in relatively brief spurts, Chafe (1987: 95) establishes the notion of the “intonation unit”, which he claims to be the speaker’s “focus of consciousness”, that is, the content of short-term memory at the very time of language production, processing information that can be expressed in about six words. From this perspective, verbal disfluencies such as false starts, hesitations, and repetitions are considered to result from cognitive constraints giving rise to the trade-off between syntactic complexity and thematic contiguity. Written language is assumed to have a covert prosody analogous to the intonation units of speech (Chafe 1988), but, with much more processing time available, “writing frees intonation units from the limitations of short-term memory” (1987: 96).
Variety and level of lexical choice in the four language samples are assessed in terms of type-token ratios[2], obviously disregarding any distinction between content and function words, and in terms of appropriateness judgements (N=3), classifying the lexical data under discussion into distinctly literary or distinctly colloquial vocabulary. The lexical variety of spoken language is considered to be limited by the rapidity of speech production, whereas the on average 24% higher type-token ratio of written language can be interpreted in terms of the additional editing possibilities it provides. In contrast, the level of lexical choice is not considered to be constrained by the speaking and writing processes themselves, but rather to vary with respect to particular contexts, purposes, and topics. Relative frequency counts show that the appropriateness of lexical choice unfolds a continuum from distinctly informal spoken language at one extreme to formal written language at the other, with lectures being more literary than conversations, and personal letters more conversational than academic papers. Concerning lexical choice, Chafe concludes that, although speakers and writers do not choose from the same “store of words and phrases” (1987: 91), it is open to “speakers to borrow liberally from the written lexicon, or conversely for writers to borrow from the spoken” (1987: 94).
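The measure defined in footnote 2 can be stated in a few lines. The sketch below uses invented sample strings, so the numbers merely show how the ratio behaves rather than reproducing Chafe’s figures.

```python
import re

def type_token_ratio(text: str) -> float:
    """Distinct word forms (types) as a percentage of running words
    (tokens); like Chafe, no content/function-word distinction."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return 100 * len(set(tokens)) / len(tokens)

# Repetitive, speech-like wording vs. a more varied written paraphrase.
# (Invented examples; the raw ratio is also sensitive to sample length,
# so real comparisons should be made over equal-sized samples.)
spoken_like = "well it was good it was really really good you know"
written_like = "the performance proved genuinely remarkable throughout"

print(type_token_ratio(spoken_like))   # ~63.6: many repeated tokens
print(type_token_ratio(written_like))  # 100.0: every token is distinct
```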
Frequency counts of words per intonation unit across all language samples show that written intonation units tend to be markedly longer, especially in academic writing, which exhibits a mean intonation-unit length of about nine words. The devices for expanding intonation units are most frequently prepositional phrases, nominalisations, and attributive adjectives. Chafe considers none of these resources to be cognitively challenging, but believes them to require the writer’s patience in order to be combined in quantity. Concerning the assembly of intonation units into larger portions of meaning, spoken language is found to consist of simple sequences, predominantly linked together by the coordinating conjunction and, while writing shows more elaborate syntactic structures, once again ascribable to the additional time and effort required for their construction. Chafe points out that “in conversational language the largest number of sentences consists of one word, a slightly smaller number of two words, and so on” (1987: 104), which is markedly similar to the distribution of one-syllable words in texts observed by George K. Zipf in 1935 (Zipf 1968: 22f). In contrast, the language of academic papers exhibits sentence lengths normally distributed around a mean of 24 words, which suggests that writers, unlike speakers, intuitively know about the ‘normal length’ of a sentence.
Building on these findings, Chafe establishes the distinction between involvement and detachment, probably the most familiar notion related to his work on spoken and written language. The contextual targets to be involved with, or detached from, are most obviously the addressees of the communicative act, but also the addresser himself and the concrete reality of what is being communicated. The different language samples under discussion show different degrees of involvement and detachment, resulting from unequal use of linguistic features such as first person pronouns, locative and temporal adverbials, as well as abstract subjects (as in This concept needs some elaboration), passive voice constructions, and probabilistic generalisations (e.g. normally, primarily, or virtually). While involvement with the audience emerges as a dominating aspect of conversations and as a minimal characteristic of lectures, involvement with oneself is primarily observed in personal letters, along with references to specific points in time and space. In contrast, academic writing detaches itself from concrete reality by means of extensive use of abstract subjects, passive voice, and probabilistic generalisations.
Objections to Chafe’s model, apart from the methodological ones already mentioned, arise with the distinction between a written and a spoken lexicon, “the fact that speakers and writers do not choose from the same supply” (1987: 91). Psycholinguistic theory (cf. Levelt 1989) proposes a subdivision of lexical items in the mental lexicon into a lemma partition containing the item’s semantic and syntactic information and a form partition providing morphological and phonological information. In the course of linguistic production, lexical choice is undoubtedly determined by considerations concerning discourse organisation at a conceptual level, but is unlikely to resort to different semantic systems from which communicators choose according to the particular mode of communication. As Handke (1995: 19) points out, “it would be a wasteful duplication of information if the language processor made use of two mental lexicons.” Rather, neurolinguistic studies on aphasia and agraphia show that such a distinction seems to be more plausible at the level of linguistic output, where speakers and writers, respectively, have to rely on sublexical conversion processes involving both a phonological and an orthographic output lexicon. This assumption is supported by case studies of patients producing semantic errors primarily or exclusively in spoken output with relatively preserved writing, or vice versa (cf. Hillis et al. 1999).
2.1.3 Textual dimensions and a grammar of conversation
A rather differentiated image of the divergence between speech and writing is drawn by Douglas Biber (1986, 1988) in his methodologically sophisticated analysis of 23 registers of spoken and written English, including telephone and face-to-face conversations, spontaneous speeches, press reportage, several kinds of fictional literature, and official documents, amongst others.[3] Building on a corpus of 960,000 running words compiled from the “London-Oslo-Bergen Corpus of British English” and the “London-Lund Corpus of Spoken English”, Biber identified six textual dimensions of variation by means of factor analysis. These dimensions aggregate co-occurrence data of 67 linguistic features representing 16 major grammatical classes, among them tense and aspect markers, nominal forms, subordination features, lexical specificity, and coordination. Probably the most interesting conclusion of Biber’s analysis is the negation of any absolute distinction between spoken and written language. Rather, the relations between speech and writing depend on a variety of communicative purposes as well as on particular configurations of cultural, physical, and psychological characteristics defining the situational contexts of texts. The most salient dimensions of the model will be discussed in turn.
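For readers unfamiliar with the technique, the following sketch shows the general shape of such a dimension extraction. It is not Biber’s original procedure – he normalised feature counts per 1,000 words and used an oblique rotation – and the feature matrix here is random stand-in data, not his counts.

```python
# Sketch of a Biber-style dimension extraction: rows are texts, columns
# are per-text frequencies of linguistic features, and each factor's
# large positive and negative loadings pick out the complementary
# feature groups that are interpreted as a 'textual dimension'.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_texts, n_features = 481, 67  # the sizes reported for Biber (1988)
X = rng.poisson(lam=5.0, size=(n_texts, n_features)).astype(float)

X_std = StandardScaler().fit_transform(X)        # z-score each feature
fa = FactorAnalysis(n_components=6, rotation="varimax",  # orthogonal here;
                    random_state=0)                      # Biber rotated obliquely
scores = fa.fit_transform(X_std)   # each text as a point in 6-dim space
loadings = fa.components_          # shape (6, 67): feature loadings

for d in range(6):
    order = np.argsort(loadings[d])
    print(f"dimension {d}: high-loading features {order[-3:]}, "
          f"low-loading features {order[:3]}")
```

With real counts, the high- and low-loading feature groups of each factor are what gets labelled, for example, ‘informational versus involved production’.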
The dimension of ‘informational versus involved production’ represents a strong correlation between the speaker’s or writer’s communicative purpose and the circumstances of language production. That is, discourse characterised by careful editing possibilities and lexical precision typically has a distinctly informational purpose, while more interactional and affective discourse is associated with the constraints of real-time production and comprehension. Biber points out that the large number of linguistic features distributed in complementary fashion on this dimension identifies it as a fundamental parameter of variation among texts in English. This argument seems to be eminently validated by the opposition of telephone and face-to-face conversations to academic prose and official documents due to unequal use of private verbs, that-deletions, contractions, and second person pronouns. However, Biber explicitly rejects the interpretation of this parameter as a dichotomy between written and spoken language. The dimension of ‘narrative versus non-narrative concerns’ distinguishes between active, event-oriented discourse and static, descriptive types of discourse. The genres of fictional literature load very highly on this dimension, suggesting the frequent use of past tense and third person pronouns as its marked value. The distinction between explicit, context-independent reference and nonspecific, situation-dependent reference, as indicated by the need for referential inferences on the part of the addressee, is encoded in the dimension of ‘explicit versus situation-dependent reference’. On this dimension, the extremes are occupied by official documents, making extensive use of relative clauses, on the one end and broadcasts, encouraging direct reference to the events actually in progress, on the other. The dimension of ‘overt expression of persuasion’ marks the degree to which the addresser’s persuasive intention towards the addressee is marked overtly, whether in terms of the expression of his or her own point of view, or by means of argumentative discourse designed to persuade the addressee. Unequal use of modals expressing likelihood or advisability and of persuasive verbs determines the opposition of professional letters and editorials to press reviews and broadcasts. The distinction between informational discourse that is abstract, technical, and formal and other types of discourse, such as fictional literature and conversations, is encoded in the dimension of ‘abstract versus non-abstract information’, which shows a complementary distribution of passives and lexical variety, the latter most probably due to the limited set of precise technical vocabulary. Finally, the dimension of ‘on-line informational elaboration’ distinguishes discourse that is informational but produced under real-time constraints from other types of discourse. The most outstanding instances of the former are prepared speeches, interviews, and spontaneous speeches, while the latter most typically include general fiction as well as mystery and adventure fiction. These texts differ most notably in their employment of that-complements to verbs and adjectives, indicating diverse strategies of informational elaboration.
Unfolding such a six-dimensional space of similarities and differences among the registers under discussion, Biber is able to compare any two registers with respect to their particular values on each dimension, once again accounting for the absence of any absolute distinction between spoken and written language. For instance, spontaneous speeches and broadcasts, both being spoken, emerge as not having very much in common, except for their comparable type/token ratios and rather modest use of passive voice constructions. Similarly, the written registers of general fiction and professional letters show very striking differences on most dimensions, being indistinguishable only on the dimension of ‘informational versus involved production’.
Biber’s multi-feature/multi-dimensional approach has given rise to further studies investigating, among other phenomena, the typology of English texts (1989), referential strategies of spoken and written texts (1992), and, most recently, the variation of speaking and writing in the university (2002). The methodological thoroughness of these studies and their differentiated insights into the grammatical patterns as well as the lexico-grammatical associations of English in use have ultimately culminated in a grammar placing emphasis on register variations resulting from different communicative priorities and circumstances (Biber et al. 1999). The grammar is based on a corpus of about 40 million running words, representing four main registers of contemporary spoken and written English, namely conversation, fiction, newspaper language, and academic prose. From this data, Biber and his colleagues were able to “explore the interface between grammar and discourse analysis, lexis, and pragmatics” (1999: 45) and, moreover, to describe a special “Grammar of Conversation” (1999: 1037ff), thus addressing the question whether spoken language obeys laws different from those of written language.
The fact that conversation is encoded and decoded online, ‘negotiates’ meaning and processing convenience interactively between interlocutors (cf. section 2.2.1), and takes place in a shared context concerning time and space as well as social and cultural knowledge accounts for many of the phenomena observed in spoken discourse by Biber and his team. In order to relieve the online planning pressure imposed by human memory limitations, conversationalists make extensive use of contractions and ‘situational ellipses’ (i.e. the linguistic omission of information that is retrievable through situational knowledge) and avoid detailed nominal reference as well as being specific about quantity (e.g., round about the fortyish age) and quality (e.g., sort of, something like that). The markedly low type-token ratio of conversation compared with written registers is considered to confirm the repetitive nature of spoken discourse, in that it relies on ‘lexical bundles’ (i.e., prefabricated sequences of words, such as I don’t know why; cf. section 2.3.2) being readily accessible from the speaker’s memory. Since conversation is co-constructed between interlocutors in order to achieve a communicative win-win situation, participants apply several strategies of discourse management, either to ensure a common semantic ground, for example by adding question tags to declarative clauses, or to dynamically shape an utterance to the ongoing exchange by attaching particular pragmatic or discoursal markers, such as the speaker’s attitude to what is said, for example by means of stance adverbials. Syntactically, the interlocutors’ limited opportunities for planning ahead and their intention to keep the conversation moving forward result in the avoidance of elaborate structures at the beginning or in the middle of a clause, in the subject prevalently consisting of a monosyllabic pronoun, and in the strategy of compiling information along a linear sequence of finite clause-like chunks, which is consistent with Chafe’s (1987) notion of the ‘intonation unit’. Biber and his colleagues accommodate Halliday’s (1989) demand for a ‘choreographic grammar’ by establishing the “C-unit” (1999: 1070) as an umbrella term for syntactically independent pieces of speech and show that, in the data under discussion, non-clausal units account for over one-third of the C-units in the register of conversation.
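As a small illustration of how such ‘lexical bundles’ can be recovered from performance data, the sketch below counts recurrent four-word sequences in a token stream. Biber et al.’s actual procedure additionally imposes frequency and text-dispersion thresholds, which this toy version omits; the sample string is invented.

```python
from collections import Counter

def lexical_bundles(tokens, n=4, min_freq=2):
    """Count contiguous n-word sequences; report those recurring at
    least min_freq times. (Biber et al. also require a bundle to occur
    across several different texts, omitted here.)"""
    grams = zip(*(tokens[i:] for i in range(n)))
    counts = Counter(" ".join(g) for g in grams)
    return {g: c for g, c in counts.items() if c >= min_freq}

tokens = ("i don't know why he said that i don't know why "
          "she left i don't know why").split()
print(lexical_bundles(tokens))  # {"i don't know why": 3}
```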
2.1.4 Conclusion
In this section, we have seen the significance of comparing speech and writing for the systematic study of language and have examined three approaches in more detail, the most elaborate of which appears to be Biber’s (1988) six-dimensional model of textual variation in written and spoken English. The studies under discussion suggest, to a greater or lesser extent, a rather continuous nature of linguistic performance to be accounted for. Not really surprisingly, the most outstanding determinant of this continuum turns out to be the degree of cognitive strain imposed by the situational context of communication, affecting the speaker’s or writer’s strategy of compiling lexical information within the clausal unit or sentence. This argument has been exhaustively put forward by Chafe and Danielewicz (1987) and has been empirically validated on a large scale by Biber (1988; Biber et al. 1999).
To the present thesis, these findings contribute the conclusion that a mode-sensitive analysis of near-synonyms does not appear promising if it merely builds on a dichotomous perspective on written and spoken language. Speech and writing neither resort to distinct semantic systems nor do they operate by completely different grammars. Rather, the analysis has to account for the distinctive situational contexts and intentions determining the communicative act, which appear to be well reflected by different types of register. Moreover, the notion of a ‘focus of consciousness’ denoting a span of about six words (Chafe and Danielewicz 1987, Biber et al. 1999), as the presumed clock rate of human working memory, will undoubtedly contribute to a deeper understanding of collocational variations in the performance data of spoken and written registers. Finally, we have encountered various methodological hints, which will emerge as viable reference points for the study at hand (cf. section 2.3). But, for the time being, we have to address another major theoretical topic to be investigated in the present thesis, namely the notion of near-synonymy and its subsumption into the field of lexical semantics.
2.2 Near-synonymy and the lexicon
Any working definition of near-synonymy logically presupposes a set of characteristics that determine a lexical item’s meaning, in relation to which it is or is not near-synonymous to any other item. We will avoid pursuing the question of “the meaning of meaning” (Ogden and Richards 1946), for, on the one hand, it exhibits an indigestibly paradoxical sense and, on the other hand, both philosophy and linguistics have provided a cornucopia of theories addressing the nature of meaning, some of which we will encounter in the course of the present section.[4] Instead, before we have a closer look at the various levels of description and variation among near-synonyms, we will consider a population-dynamic model accounting for the emergence and elimination of synonymy in language systems.
2.2.1 Synonymy as an emergent phenomenon
Geared to the methodological approaches of scientific domains such as biology and economics, Luc Steels (2000) promotes an evolutionary perspective on the study of language in order to gain an “understanding how language users construct and reconstruct their language as they adapt to the language spoken in their environment and try to keep up with the ever changing communicative challenges arising in their community” (Steels 2000: 143). In order to explore the origins and evolution of grounded word-meaning, Steels has conducted an experiment building on a multi-agent approach, that is, modelling a speech community in terms of about 1,000 autonomous and distributed software agents. These agents are able to visually perceive and focus on their model world of magnetic whiteboards (i.e., the ‘context’) pasted with various coloured geometric shapes (i.e., the ‘topics’) and are instructed to perform communication games about their shared environment.
In order to attain the game’s goal of a completely successful act of communication, two agents alternately play the roles of speaker and hearer. Agents take turns playing games, so all of them develop the capacity to be speaker or hearer. Both speaker and hearer segment their environment into perceptual features, such as horizontal position or colour, and ‘talk’ about it. In the experiment, talking denotes the transmission of a word, or even a multi-word phrase, from the speaker to the hearer, and since the agents start the game with no prior lexicon agreed upon, but just a set of syllables from which words are combined at random, or words taught by human users logged in through the internet, the hearer has to guess which of the topics the speaker has chosen to talk about. The communication game succeeds if the topic chosen by the speaker is identical to the topic guessed by the hearer. If the game fails, the speaker gives an extra-linguistic hint to the hearer by pointing to the topic, that is, by transmitting the direction in which he was looking at the very moment of talking, and both agents reconfigure their internal structures so as to be more successful in future games.
Although all the individual agents must acquire their lexicon autonomously, the experiment has shown that lexical coherence arises in the system after a series of up to 100,000 games. Steels (2000: 148) considers synonymy to “arise naturally in a group of distributed agents because agents do not have a global view” of the system and thus sometimes coin new words for a particular meaning, not knowing that there are already words for that meaning available in the population. Figure 2.1 shows the emergence and elimination of competing synonymous words for the meaning “to the left”. The ordinate denotes the extent of a word’s use within the population on a scale from 0 (the word is not being used at all for a particular meaning) to 1 (the word is being used exclusively for a particular meaning), whereas the abscissa denotes the number of games played. The synonyms get damped due to the effect of ‘causal circularity’, that is, the positive feedback between the use of a particular word and successful communication. After a struggle among different words, partly artificial (e.g., bevuwu, xomove, or danuve) and partly natural (e.g., links, gauche), one word (wogglesplat) stands out and is subsequently preserved. Analyses of polysemy show even more dramatic diagrams of meanings competing for the same word (cf. Steels 2000: 149).
[Figure 2.1 not reproduced in this excerpt.]
Figure 2.1. Word-competition diagram showing the emergence and elimination of synonymy. It graphs the words competing for the meaning of “to the left” for a series of 100,000 communication games (from Steels 2000: 148).
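Steels’ experimental loop can be condensed into a toy simulation. The sketch below is a drastically simplified naming game for the single meaning “to the left”: it drops perception, grounding, and multi-word phrases, keeping only word invention, adoption, and the score-based positive feedback (‘causal circularity’) described above; all numeric parameters are invented.

```python
import random

random.seed(2)
SYLLABLES = ["wo", "ggle", "splat", "be", "vu", "xo", "mo", "ve", "da", "nu"]

def new_word():
    return "".join(random.choices(SYLLABLES, k=3))

# Each agent keeps a score per candidate word for the single meaning
# "to the left"; scores stand in for Steels' word-meaning weights.
agents = [dict() for _ in range(100)]

def best_word(lexicon):
    return max(lexicon, key=lexicon.get)

for game in range(20000):
    speaker, hearer = random.sample(agents, 2)
    if not speaker:                      # speaker has no word yet: coin one
        speaker[new_word()] = 0.5
    word = best_word(speaker)
    if word in hearer:                   # success: 'causal circularity' --
        for agent in (speaker, hearer):  # reinforce the word used and
            agent[word] += 0.1           # damp its competitors
            for w in list(agent):
                if w != word:
                    agent[w] -= 0.1
                    if agent[w] <= 0:
                        del agent[w]
    else:                                # failure: hearer adopts the word
        hearer[word] = 0.5

# After enough games, one word should dominate the whole population:
winners = [best_word(a) for a in agents if a]
print(max(set(winners), key=winners.count))
```

Plotting each word’s share of the population over the course of the games reproduces, in miniature, the competition dynamics of Figure 2.1: several coined synonyms rise, get damped, and a single winner remains.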
The experiment’s results support the assumption that members of a speech community tend to cooperate in shaping their language towards lexical coherence. This coherence seems to be necessary for successful communication, though it does not need to be absolutely complete. The following section will discuss the notion of absoluteness in more detail and account for a rather continuous perspective on synonymy, ultimately proposing a working definition of near-synonymy suited to the present thesis’ research objective.
2.2.2 Synonymy as a matter of absoluteness and degree
The notion of absolute synonymy as intersubstitutability in all contexts without changing the truth value of a sentence is attributed to the German philosopher Gottfried Wilhelm von Leibniz[5], while a definition placing more emphasis on the equivalence (i.e., the bilateral implication) of sentences refers to the interchangeability of lexical items in a sentence without affecting its set of logical entailments (cf. Lyons 1985). Quine (1951) considers such definitions problematic, since circularities are likely to arise from the interrelated notions of meaning, synonymy, and truth. He argues that the statement of any change in a sentence’s meaning when substituting a content word by its synonym presupposes an adequate way of specifying that the unsubstituted and the substituted sentences have the same meaning. According to Quine, the only possibility of making such a statement is to reduce one sentence to the other by interchanging synonyms, which appears to be circular.[6] From the perspective of referential extension, Goodman (1952) asserts the impossibility of absolute synonymy: no two words can have the same extension, because one can always find a context in which two putative synonyms are not synonymous. Lyons (1981: 50f) grants a pair of synonyms the quality of absoluteness if, and only if, “all their meanings are identical”, “they are synonymous in all contexts”, and “they are identical on all (relevant) dimensions of meaning“.
We realise that absolute synonymy, if present at all, appears to be quite rare in natural language. Cruse (1986: 270) points out that “natural languages abhor absolute synonyms just as nature abhors a vacuum.” From the perspective of pragmatics, Clark (1992: 176) suggests that the principles of “Conventionality and Contrast work together to eliminate synonyms.” That is, communicators abide by the agreement that “for certain meanings there is a form that speakers expect to be used in the language community” (p. 171) and avoid coining words with meanings that are ‘pre-empted’ in the conventional lexicon by already established expressions. Clark argues that, even when synonyms enter a language system (e.g., as a result of language contact, as in such pairs as cattle/beef, calf/veal, or pig/pork), members of a speech community cooperate in differentiating the meanings of each pair (e.g., by referring to different kinds of food and different kinds of farm animals, respectively) in order to maintain the principle that “every two forms contrast in meaning” (p. 172). This view is markedly consistent with the basic assumption of structuralism that “in language there are only differences and no positive terms” (Saussure 1916: 118) and could also account for the findings of Steels’ (2000) experiment discussed earlier.
Thus, in real situations of language production, there are usually several words to choose from that are nearly absolute synonyms, where a different choice, however, would make a difference in the meaning conveyed. Building on a classification of synonyms whose definitions we have already encountered above, Lyons (1995) suggests a distinction between ‘partial synonyms’, which satisfy at least one, but not all three, of the criteria, and ‘near-synonyms’, which are “more or less similar, but not identical, in meaning” (1995: 60), where meaning is taken to include both propositional meaning, or the set of logical entailments, and expressive meaning, that is, an indication of the communicator’s attitude towards the propositional content. However, the distinction remains somewhat fuzzy. As Cruse (1986: 292) points out,
although Lyons insists that near-synonymy is not the same as partial synonymy, it should be noted that by his definitions near-synonyms qualify as incomplete synonyms [i.e., not being identical on all relevant dimensions of meaning], and therefore as partial synonyms.
Instead, he proposes to differentiate ‘cognitive synonyms’, which yield sentences with equivalent truth conditions, though with likely deviations in expressive meaning, style, or register, from ‘plesionyms’, which change truth conditions but still produce semantically similar sentences. Thus, the two or more sentences obtained by substitution of cognitive synonyms must logically entail one another, as (1) and (2) do:
(1) He did play the violin and didn’t wear socks. (FNW: 3622)[7]
(2) He did play the fiddle and didn’t wear socks.
In addition to the principal semantic modes of propositional and expressive meaning, Cruse suggests the semantic properties of ‘presupposed meaning’ (i.e., semantic traits taken for granted in the use of a lexical item) and ‘evoked meaning’ (i.e., dialect and register variation; cf. section 2.2.3), whose primary function is to place restrictions on which linguistic items can normally occur together within the same sentence, and to contribute to discourse cohesion. Cognitive synonyms are insensitive to arbitrary co-occurrence (collocational) restrictions resulting from the presupposed meaning of the lexical items involved, because these restrictions are irrelevant to truth conditions.
(3) Have you grilled the bread?
Cruse points out that, when one has toasted the bread, a cooperative answer to sentence (3) would be either ‘Yes’ or something mediative like ‘You can’t say that, you have to say “toast the bread” – but the answer to your question is “yes”.’ Thus, though collocational restrictions determine a good deal of judgements concerning collocational acceptability, they do not have any impact on the truth conditions of odd sentences. For that reason, to grill and to toast should be classified as cognitive synonyms. In contrast, plesionymy allows for the assertion of one member of a pair, without paradox, while simultaneously denying the other member, as in sentence (4):
(4) This isn’t really a custom – it’s more of a habit.
The low degree of semantic distance, or contrastiveness, of the pair custom / habit facilitates the adjustment of presented meaning, and, “as the semantic distance between lexical items increases, plesionymy shades imperceptibly into non-synonymy” (Cruse 1986: 286). However, the notion of ‘semantic distance’ does not appear to be well motivated in this context. Cruse shows with the examples fog, mist, and haze that fog and mist, and mist and haze, are plesionyms, but that fog and haze are not, due to the fact that the latter pair is not semantically adjacent. If these items exhibit distinctive specifications on a one-dimensional continuum, such as “density of condensed particles”, their relative distance to each other should be measurable on a metrical scale and be frameable in a statement such as “the pair fog and mist is x times as distant as the pair mist and haze.” Cruse concedes that the more dimensions of variation have to be taken into account, the more difficult the notion of ‘distance’ becomes (cf. section 2.3).
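Cruse’s point can be made concrete with a few invented numbers: on a single dimension the ratio of distances is well defined, whereas on several dimensions it would additionally require a choice of metric.

```python
# Purely hypothetical positions on Cruse's one-dimensional continuum
# 'density of condensed particles' (values invented for illustration):
density = {"haze": 1.0, "mist": 2.0, "fog": 4.5}

def distance(a, b):
    return abs(density[a] - density[b])

x = distance("fog", "mist") / distance("mist", "haze")
print(f"fog/mist is {x:.1f} times as distant as mist/haze")
# With more than one dimension of variation, no single ratio like this
# exists without first choosing a metric -- Cruse's concession above.
```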
Both Lyons and Cruse approach the phenomenon of synonymy as a matter of degree from the point of view of truth-conditional semantics. From a more application-oriented perspective, lexicography has always treated synonymy as near-synonymy. Consider the definition put forward by the lexicographers of Webster’s Collegiate Thesaurus:
A word is construed as a synonym if and only if it or one of its senses shares with another word or sense of a word one or more elementary meanings, … [being their] discrete objective denotation uncolored by such peripheral aspects as connotations, implications, or quirks of idiomatic usage. (Kay 1988: 9a)
A similar definition is suggested by the editor of the Webster’s New Dictionary of Synonyms:
A synonym, in this dictionary, will always mean one of two or more words in the English language which have the same or very nearly the same essential meaning … the two or more words which are synonyms can be defined in the same terms up to a certain point. (Webster 1968: 24a)
Both definitions hold the view that near-synonyms must have the same essential meaning but may differ in peripheral or subordinate ideas. This idea has, in fact, also been suggested by Cruse, who holds that synonyms of all kinds are words that are identical in ‘central semantic traits’ and differ, if at all, only in ‘peripheral traits’ (1986: 267). But, as Edmonds and Hirst (2002) point out, specifying formally how much similarity of central traits and difference of peripheral traits is allowed can be a problem. They argue that, at an absurdly coarse level of similarity, any two lexical items denoting a physical object or an event could be considered cognitive synonyms in Cruse’s sense. At the other extreme, no two items could ever be classified as cognitive synonyms, because they might always be further distinguishable by a still more peripheral representation.[8]
In order to resolve this shortcoming, Edmonds and Hirst (2002: 116) introduce the notion of “granularity of representation”, which allows one to pin down the essential meaning of an item as the portion of meaning that is representable only above a certain level of granularity, and peripheral meanings as those portions representable only below that level. The appropriate level of granularity could be derived from one’s intuitions about lexical meaning, or, more appropriately, the intuitions of lexicographers, which are filtered by expertise. Being concerned with the representation of lexical knowledge in multilingual machine translation, Edmonds and Hirst propose that the adequate level of granularity divides a language-independent conceptual representation from its various language-specific instantiations. From this point of view, they argue, a set of near-synonyms would be a set of items that all link to the same language-independent concept and all share the same propositional meaning just up to the point in granularity defined by language dependence. In Edmonds and Hirst’s approach, language specification is a special case of a more general notion, namely that of context:
The meaning of an open-class content word, however it manifests itself in text or speech, arises out of a context-dependent combination of a basic inherent context-independent denotation and a set of explicit differences to its near-synonyms. (Edmonds and Hirst 2002: 117f)
Taking the position that a lexical item’s meaning is not explicitly represented in the lexicon but is generated by the consideration of differences when an item is used, Edmonds and Hirst combine salient aspects of classical theories such as structuralism (Saussure 1916) and generativism (Pustejovsky 1995).
Such a combined point of view emerges as a viable contribution to our discussion and will be adopted with minor adjustments. Since, in the present thesis, we are concerned not with a computational model of lexical choice but with a descriptive statistical model of lexical variation in written and spoken registers on the basis of corpus analysis, a modification of Edmonds and Hirst’s definition appears inevitable. We will refrain from the question of whether to assume a generative or a feature-based lexicon[9] and focus on a descriptive analysis of linguistic performance at the syntagmatic level. Contextualism (Firth 1957, Sinclair 1966, Sinclair 1991) provides the theoretical and methodological basis for our understanding of register-governed variations among near-synonyms and will be discussed in more detail in section 2.3.2.
Preliminarily, and for our purposes, the appropriate level of granularity divides a performance-independent core meaning from performance-dependent variations emerging as a consequence of distinctive collocational patterns in various samples of linguistic performance. A near-synonym will thus be defined as one of two or more lexical items that realise a common performance-independent concept and exhibit a certain extent of performance-dependent differences from any other lexical item denoting that concept. Having thus established a working definition of near-synonymy suited to the objective of the present thesis, we now turn to the examination of potential variations among near-synonyms, before completing the theoretical foundation of the present research with an account of a multi-dimensional point of view.
[...]
[1] Halliday proposes to substitute the term rank shift for embedding in order “to refer just to embedding in the strict sense, and [to] distinguish it from the interdependency relation of hypotaxis, where one element is dependent on another but is not a constituent of it” (1987: 73f).
[2] The type-token ratio measures the number of different words (types) in a sample as the percentage of the total number of words (tokens) in that sample.
[3] In the original study, Biber uses the term ‘genre’ to refer to different types of discourse. However, he substitutes ‘register’ for this term in subsequent summaries and enhancements of his study (cf. Biber 1998, Biber et al. 1999), referring to language produced in different situations as opposed to language produced by different socially or geographically determined clusters of people (‘dialect’). In the present thesis, we will adopt the term ‘register’ to refer to different occurrences of spoken and written language.
[4] The bibliographical overview in this section owes much to Edmonds (1999).
[5] “Eadem sunt quorum unum potest substitui alteri salva veritate (‘Two things are identical if one can be substituted for the other without affecting the truth’)” (Church et al. 1994: 154).
[6] See Sparck Jones (1986: 82) for a discussion of Quine’s position.
[7] All language examples not generated in an introspective manner have been taken from the British National Corpus. The following convention will be adopted for references from the British National Corpus: (File ID: sequential number of the sentence unit quoted).
[8] Decompositional theory (cf. Katz and Fodor (1963), Bierwisch (1969), Clark (1973)) solves this problem by assuming a finite inventory of semantic primitives. Cruse (1986: 22) prefers the term ‘semantic trait’ “as a deliberate act of distancing.”
[9] For a critical discussion of Pustejovsky’s approach, see Fodor and Lepore (1998).