Contents
1 Introduction
2 Feature Semantics vs. Prototype Theory
2.1 Traditional Feature Theory: 'Checklists' and 'Atomic Globules'
2.2 Prototype Theory
2.3 Need FS and PT Really Be Conflicting Views?
2.4 Lipka's Typology of Semantic Features
3 An Attempt to Decompose the Meaning Elements of Selected English Verbs
3.1 Preliminary Remarks
3.2 The Word-Field of 'to attack'
3.3 The Word-Field of 'to cry'
3.4 The Word-Field of 'to throw'
4 Summary
Bibliography
1 Introduction
According to Aristotle, every word is to be defined by naming its genus proximum and differentia specifica.
The linguistic debate concerning the issue of word-meaning and its adequate description has split researchers into two opposing camps. Traditional linguists, whose position is relatively close to Aristotle's idea, favour a theory called Feature Semantics (hereafter FS), whereas many other researchers support a more modern approach which can be labelled Prototype Theory (hereafter PT).
The aim of this research paper is to describe and compare these two concepts. As a conclusion of the first (theoretical) part, it will try to show that the two approaches are not incompatible but in fact seem to function on a complementary basis. In the second (more practical) part, I will try to decompose the meaning elements of some verbs from selected English semantic fields and thus give an example of the use (and usefulness) of semantic features.
2 Feature Semantics vs. Prototype Theory
2.1 Traditional Feature Theory: 'Checklists' and 'Atomic Globules'
Traditional semantic researchers believed that the meaning of every single word of a language is built up from a kind of pool of absolutely basic meaning components. These components were called atomic globules (or semantic primitives) since they were considered to be so basic that they could not be analysed or decomposed any further. Thus, the efforts of researchers focused on finding those atoms of meaning in order to arrive at a basic set of meaning elements, some kind of stencil that should enable them to describe and define any given word simply by naming the appropriate list of 'atomic' features.
This idea mainly evolved from the methods and structures which had been developed by the Prague School in its functional approach to phonology in order to describe sounds. A phoneme can be unambiguously described by a bundle of certain distinctive features. This means that, for example, only the phoneme /p/ corresponds to the following list of features: [bilabial], [fortis], [plosive]. The analogous use of this technique of definition in the field of semantics also works for the meaning of some words. The meaning of 'boy' can be decomposed into these elements: [+human], [+male], [–adult][1], whereas the corresponding description of 'girl' would be [+human], [–male], [–adult] and that of 'woman' [+human], [–male], [+adult].[2] If this mechanism worked with all lexemes of a language, it would mean that it is sufficient to check some ('atomic') categories and decide whether they can be affirmed ('+') or have to be negated ('–') in order to define the meaning of a word exactly. (This checking procedure is the reason why atomic globule theories are also called checklist theories[3].)
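The checklist mechanism can be illustrated with a minimal Python sketch: each lexeme is stored as a bundle of binary features, and defining or retrieving a word amounts to ticking the relevant categories off. The feature names simply mirror the 'boy'/'girl'/'woman' examples above; the lookup function is purely illustrative and not part of any particular feature-semantic formalism.

```python
# Minimal sketch of the checklist model: word meanings as bundles of
# binary semantic features (illustrative feature names only).
LEXICON = {
    "boy":   {"human": True,  "male": True,  "adult": False},
    "girl":  {"human": True,  "male": False, "adult": False},
    "woman": {"human": True,  "male": False, "adult": True},
    "man":   {"human": True,  "male": True,  "adult": True},
}

def lexemes_matching(features):
    """Return every lexeme whose checklist agrees with the given feature values."""
    return [word for word, bundle in LEXICON.items()
            if all(bundle.get(name) == value for name, value in features.items())]

print(lexemes_matching({"human": True, "adult": False}))                # ['boy', 'girl']
print(lexemes_matching({"human": True, "male": False, "adult": True})) # ['woman']
```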
As I have mentioned, this approach works fine with some words, but what if you tried to apply the mentioned categories – which proved to be sufficient for words like boy, girl etc. – to the lexeme 'window', for example? You could say that this word contains the feature [–human], but the categories [±adult] and [±male] are clearly inappropriate. Thus, in order to pin down the meaning of this lexeme, it is necessary to introduce additional categories, for instance [±square shape] and [±made of glass]. But these new categories – which enabled you to define 'window' – also turn out to be useless if you try to apply them to a word like 'sound'. The analysis of this lexeme would again require additional categories. Just like that, your once relatively neat and concise set of features gradually develops into an immense, confused and complicated agglomeration of the most diverse semantic features and categories. In his book Praxis der englischen Semantik, Ernst Leisi distinguishes various 'Bedeutungselemente'[4] ('meaning elements') for each English word-class. Among those he lists for nouns are the following: shape ('Form'), material ('Substanz'), colour ('Farbe'), size ('Größe'), number ('Anzahl'), dynamic condition, purpose ('Zweck'), various time conditions and a norm-oriented point of reference ('Bezugsnorm').[5] The enormous amount of individual and exclusive meaning elements (i.e. elements that can be applied to some words but turn out to be useless for the analysis of the remainder of the lexicon) clearly shows the great limitations of the checklist approach. Since there are far more words and meaning-bundles than phonemes, the number of required descriptive categories inevitably gets so large that it is impossible to find an at least somewhat complete but still clear and practically manageable set of features. Thus, the notion of the existence of one universal semantic stencil or simple checklist which is applicable to any word in a language cannot be maintained. The reason for this is the fact that "...längst nicht alle Bedeutungen gleich aufgebaut sind..." ('...by no means are all meanings structured in the same way...')[6], which means that any undertaking to create a fixed set of so-called basic and primitive elements is doomed to failure.
The second aspect that shows the weakness of traditional FS is its inability to cope with so-called 'fuzzy meanings'. This term describes the phenomenon that the meaning of words is not as clear-cut and fixed as many supporters of traditional FS claimed. The fluidness of meaning and the fuzziness of its edges is probably best illustrated by the 'tiger example' in Jean Aitchison's book Words in the Mind.[7] Aitchison quotes usual dictionary definitions of the word 'tiger' (e.g. COD: 'large Asian yellow-brown, black-striped carnivorous maneless feline') and then tries to extract those characteristics from the list that are absolutely necessary for describing or recognizing a tiger. The result of her efforts, however, appears to be that virtually none of the mentioned features can be evaluated as really indispensable. The yellow-brown colour and the black stripes, for example, are not essential, because everyone would still call a white tiger a tiger. The same is true for the feature 'carnivorous'. Almost any person would agree that a vegetarian tiger is nonetheless still a tiger. This "permissiveness over core characteristics"[8] demonstrates the fuzzy meaning of words. Meanings are not fixed and clear-cut (e.g. by some obligatory characteristics). There are many more or less grey areas which still belong to the meaning of a word even if several 'core characteristics' are missing. "It's not at all hard to convince the man on the street that there are three-legged, lame, toothless albino tigers, that are tigers all the same..."[9]. This example shows that ticking off certain features on a semantic checklist is not an adequate way to describe the meaning of words and the mechanisms of meaning attribution in the mental lexicon. The definitions of traditional FS are too narrow, and the characteristics that are declared to be absolutely basic cannot be empirically verified.
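In terms of the checklist sketch above, the problem can be stated very simply: a strict check that requires every listed feature rejects referents that speakers happily accept. The 'tiger' features below are invented for illustration and are not taken verbatim from the COD definition.

```python
# Illustrative only: a strict checklist definition of 'tiger' with invented features.
TIGER_CHECKLIST = {"feline": True, "striped": True,
                   "yellow_brown": True, "carnivorous": True}

def is_tiger_strict(animal):
    # Every 'core' characteristic must be present and affirmed.
    return all(animal.get(name) == value for name, value in TIGER_CHECKLIST.items())

white_vegetarian_tiger = {"feline": True, "striped": False,
                          "yellow_brown": False, "carnivorous": False}

print(is_tiger_strict(white_vegetarian_tiger))  # False, although speakers still call it a tiger
```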
Summary:
- Efforts to find a universally applicable core set of semantic features failed because there is a huge number of highly heterogeneous categories and almost every lexeme requires individual attention (i.e. its own tailor-made set of features).
- FS is not able to cope with the fuzzy meaning of words because its definitions are too narrow and restrictive.
2.2 Prototype Theory
A more efficient and adequate way of dealing with fuzzy meanings is presented by Prototype Theory. Supporters of this school of thought suggest that people have some typical basic examples of each meaning category in their minds. These so-called exemplars (i.e. prototypes) represent a kind of standard model of a category, an image which is immediately conjured up in a person’s mind when they hear a certain word.
Eleanor Rosch carried out a number of experiments to reveal some of these prototypical images. In her probably most famous experiment, she asked her subjects to classify different kinds of birds according to their 'birdiness', i.e. their degree of typicality for the category 'bird'. Not surprisingly, creatures like the ostrich or the penguin ranked significantly lower than the sparrow or the blackbird. The best example of a 'bird', and thus the prototype for that category, turned out to be the robin.[10] This means that a robin is the standard mental representation for the meaning class 'bird' and therefore serves as the basis for any further recognizing and labelling activities. Any other object in question is compared with, and matched against, a robin. If this match is satisfactorily good, the object is accepted as a bird. The word 'satisfactorily' here is of major importance. The match only has to be reasonably, i.e. approximately, good; the object need not be a perfect lookalike of the prototype to be classified as a bird. This is the essential difference from FS and the great achievement of PT. With the use of one prototype, you can cover a wide range of other similar (but clearly different) referents that belong to the same category, even if they happen to be located in the grey areas or near the fuzzy edges of the word-meaning. Thus, the problem of how to deal with fuzzy meaning is effectively solved.
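The contrast with the strict checklist can be sketched as follows. This is a minimal illustration only: the prototype features, the overlap measure and the threshold are invented for the example and are not Rosch's actual experimental materials. Instead of demanding that every feature be present, membership merely requires a sufficiently good overlap with the prototype.

```python
# Illustrative prototype matching: graded 'birdiness' instead of a strict checklist.
PROTOTYPE_BIRD = {"feathers", "beak", "wings", "lays_eggs", "flies", "small", "sings"}

def birdiness(candidate):
    """Share of prototype features the candidate exhibits (between 0.0 and 1.0)."""
    return len(candidate & PROTOTYPE_BIRD) / len(PROTOTYPE_BIRD)

def is_bird(candidate, threshold=0.5):
    # A 'satisfactorily good' match is enough; a perfect match is not required.
    return birdiness(candidate) >= threshold

robin   = {"feathers", "beak", "wings", "lays_eggs", "flies", "small", "sings"}
penguin = {"feathers", "beak", "wings", "lays_eggs"}

print(birdiness(robin), is_bird(robin))      # 1.0  True (the prototype itself)
print(birdiness(penguin), is_bird(penguin))  # 0.57 True (atypical, but still a bird)
```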
[...]
[1] '±' indicates that this category is binary.
[2] This scheme of definition still functions with words like 'bull' ([bovine], [+male], [+adult]) or 'tomcat' ([feline], [+male], [+adult]).
[3] The term was originally coined by Fillmore.
[4] A term which I consider to be almost synonymous with our semantic components.
[5] cf. Ernst Leisi, Praxis der englischen Semantik, 2nd revised edition (Heidelberg, 1985), pp. 47-52.
[6] Leisi, p. 52.
[7] cf. Jean Aitchison, Words in the Mind (Oxford, 1987), p. 45.
[8] Aitchison, p. 45.
[9] Aitchison, p. 45.
[10] cf. Aitchison, p. 51ff.