In present-day English lexicography, corpora are used for dictionary-making, and concordances and collocations play an important role in this process. This essay explains the terms collocation and concordance, provides examples, and discusses problems that may occur during the linguistic research process.
Table of Contents
- Introduction
- Collocation
- Concordance
- Problems Concerning the Research of Collocations
- Advanced Methods in Corpus Linguistics
- Example for Dictionary-Making: The Lexeme Sweet
- Using Statistical Data Exemplified on BNCWeb and LDOCE
- Including Statistics in Dictionary-Making
- Conclusion
- References
Objectives and Key Themes
This essay provides an overview of the terms collocation and concordance, which are essential for dictionary-making in present-day English. It delves into the historical development of these concepts, explores their practical applications, and highlights potential challenges encountered during linguistic research.
- The role of collocations and concordances in dictionary-making
- The historical development of collocation theory
- The importance of statistical analysis in corpus linguistics
- Challenges related to the research of collocations
- Advanced methods in corpus linguistics and their implications for dictionary-making
Chapter Summaries
- Introduction: This chapter introduces the concept of corpus linguistics and its application in dictionary-making, emphasizing the importance of collocations and concordances.
- Collocation: This chapter defines collocations as the characteristic co-occurrence of words and explores their historical development, including Firth's concept of "habitual or customary places" of words and Halliday's introduction of probability and lexical sets.
- Concordance: This chapter explains how concordances, which display a target word together with the words immediately surrounding it, can be used to identify collocations and to understand the contexts in which words occur. It also highlights the use of concordances in identifying language patterns.
- Problems Concerning the Research of Collocations: This chapter addresses challenges encountered in researching collocations, such as the exclusion of grammatical lexemes and the difficulty in achieving consensus among native speakers regarding acceptable collocations.
- Advanced Methods in Corpus Linguistics: This chapter discusses how corpus-linguistic research methods have evolved from informant-based texts to computer-based corpora, and it explores the use of lemmatization, part-of-speech tagging, and frequency lists (a minimal frequency-list sketch follows this list).
- Example for Dictionary-Making: The Lexeme Sweet: This chapter presents a practical example of how statistical data from resources such as BNCWeb and the LDOCE can be used for dictionary-making, focusing on the lexeme "sweet".
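As a rough illustration of the frequency-list idea mentioned in the chapter summary above, the sketch below counts word forms in a toy text with plain Python. The sample text, variable names, and the `Counter`-based approach are my own illustration and do not come from the essay; real frequency lists would be computed over a full corpus such as the BNC.

```python
import re
from collections import Counter

# A tiny stand-in corpus; real corpus work would use the BNC or a similar
# large, balanced collection of texts.
corpus = (
    "The sweet smell of success. She gave him a sweet smile. "
    "The tea was far too sweet for my taste."
)

# Crude tokenisation: lowercase and keep only letter sequences.
tokens = re.findall(r"[a-z]+", corpus.lower())

# A frequency list simply ranks word forms by how often they occur.
freq_list = Counter(tokens).most_common()

for word, count in freq_list[:10]:
    print(f"{word}\t{count}")
```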
Keywords
This essay focuses on key terms and concepts in corpus linguistics, including collocation, concordance, corpus-based research, dictionary-making, statistical analysis, frequency lists, lemmatization, and part-of-speech tagging. The work also explores the relationship between language patterns, semantics, and the development of linguistic categories within the context of corpus analysis.
Frequently Asked Questions
What is the role of corpus linguistics in modern dictionary-making?
Corpus linguistics provides real-world data from large collections of text, allowing lexicographers to base dictionary entries on actual language usage rather than intuition.
What are collocations in linguistics?
Collocations are the characteristic or habitual co-occurrence of words, describing how certain words tend to appear together with higher probability.
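To make the notion of habitual co-occurrence concrete, here is a minimal, hypothetical sketch that counts which words appear within a small window around a chosen node word. The mini-text, the node word "strong", and the window size are illustrative assumptions, not material from the essay.

```python
import re
from collections import Counter

# Hypothetical mini-corpus; actual collocation studies draw on corpora
# such as the British National Corpus.
text = (
    "She drank strong tea. He prefers strong coffee. "
    "The strong wind blew all night. They serve strong coffee here."
)
tokens = re.findall(r"[a-z]+", text.lower())

node = "strong"  # the word whose collocates we want
span = 2         # look 2 words to the left and right of the node

collocates = Counter()
for i, tok in enumerate(tokens):
    if tok == node:
        window = tokens[max(0, i - span):i] + tokens[i + 1:i + 1 + span]
        collocates.update(window)

# Words that repeatedly co-occur with the node are collocation candidates.
print(collocates.most_common(5))
```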
How do concordances help in linguistic research?
Concordances show a target word within its immediate context (surrounding words), helping researchers identify patterns, language usage, and collocations.
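A concordancer essentially produces key-word-in-context (KWIC) lines. The following sketch is a simplified, assumed implementation over a toy text; tools such as BNCWeb perform the same kind of search over millions of words.

```python
import re

# Illustrative text only; a concordancer such as BNCWeb would run the same
# kind of search over a full corpus.
text = (
    "The cake was sweet and moist. She has a sweet voice. "
    "Nothing is as sweet as revenge, or so they say."
)
tokens = re.findall(r"\w+|[^\w\s]", text.lower())

keyword = "sweet"
context = 4  # number of tokens shown on each side

# A KWIC concordance lines up every hit of the keyword with its
# immediate left and right context.
for i, tok in enumerate(tokens):
    if tok == keyword:
        left = " ".join(tokens[max(0, i - context):i])
        right = " ".join(tokens[i + 1:i + 1 + context])
        print(f"{left:>30} | {keyword} | {right}")
```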
What are the common problems when researching collocations?
Challenges include the exclusion of grammatical lexemes and difficulty achieving a consensus among native speakers regarding which word combinations are "acceptable."
What is lemmatization in the context of corpus analysis?
Lemmatization is a method used to group together different inflected forms of a word so they can be analyzed as a single item (the lemma).
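As a small illustration, the sketch below uses NLTK's WordNet-based lemmatizer to map inflected forms onto their lemmas. The choice of NLTK and the example word forms are my own; the essay itself does not prescribe a particular tool.

```python
# Requires: pip install nltk
import nltk
from nltk.stem import WordNetLemmatizer

# The WordNet data is needed once; the download is a no-op if already present.
nltk.download("wordnet", quiet=True)

lemmatizer = WordNetLemmatizer()

# Different inflected forms are mapped to a single lemma so that a corpus
# query for the lemma covers all of them.
for form, pos in [("mice", "n"), ("was", "v"), ("running", "v")]:
    print(f"{form:>8} -> {lemmatizer.lemmatize(form, pos=pos)}")
```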
Which specific lexeme is used as an example for dictionary-making in this essay?
The essay uses the lexeme "sweet" as a practical example, drawing on data from BNCWeb and the LDOCE.
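To hint at what statistical data from a corpus interface can look like, the sketch below computes a Mutual Information (MI) score, one of the association measures such tools report, from observed and chance-expected co-occurrence frequencies. All counts are placeholder values chosen for illustration, not actual BNC figures, and the simple formula shown ignores refinements such as window-size correction.

```python
import math

# Placeholder counts, NOT actual BNC figures - they only illustrate the
# arithmetic behind a basic Mutual Information (MI) score.
N = 100_000_000   # total tokens in the corpus
f_node = 12_000   # frequency of the node word, e.g. "sweet"
f_coll = 8_000    # frequency of a candidate collocate
f_pair = 150      # how often the two occur together in the chosen span

expected = f_node * f_coll / N     # co-occurrences predicted by chance alone
mi = math.log2(f_pair / expected)  # higher MI = stronger association

print(f"expected by chance: {expected:.2f}")
print(f"MI score:           {mi:.2f}")
```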
- Quote paper: Marie Wieslert (Author), 2009, Corpus Linguistics: Lexicography and Semantics: Introduction to Concordance and Collocations, Munich, GRIN Verlag, https://www.grin.com/document/171915