Search (67 results, page 1 of 4)

  • Filter: theme_ss:"Wissensrepräsentation"
  1. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.23
    0.2251322 = product of:
      0.4502644 = sum of:
        0.064323485 = product of:
          0.19297044 = sum of:
            0.19297044 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.19297044 = score(doc=400,freq=2.0), product of:
                0.34335276 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04049921 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.19297044 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.19297044 = score(doc=400,freq=2.0), product of:
            0.34335276 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04049921 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
        0.19297044 = weight(_text_:2f in 400) [ClassicSimilarity], result of:
          0.19297044 = score(doc=400,freq=2.0), product of:
            0.34335276 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04049921 = queryNorm
            0.56201804 = fieldWeight in 400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.5 = coord(3/6)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
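The explain tree shown for this first result follows Lucene's ClassicSimilarity (TF-IDF) scoring. A minimal sketch that reproduces the arithmetic from the values in the tree (per-clause score = queryWeight · fieldWeight, with queryWeight = idf · queryNorm and fieldWeight = tf · idf · fieldNorm; coord factors scale partial matches):

```python
import math

# Constants taken from the explain tree for doc 400.
idf = 8.478011          # idf(docFreq=24, maxDocs=44218)
query_norm = 0.04049921
field_norm = 0.046875
freq = 2.0

tf = math.sqrt(freq)                      # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm           # 0.34335276
field_weight = tf * idf * field_norm      # 0.56201804
term_score = query_weight * field_weight  # 0.19297044 per clause

# The "_text_:3a" clause is scaled by coord(1/3); the two "_text_:2f"
# clauses count in full. coord(3/6) then scales the sum of all clauses.
total = (term_score / 3 + term_score + term_score) * 0.5
print(round(total, 7))  # ≈ 0.2251322, the score shown for result 1
```

The same formula, with the per-document freq and fieldNorm values, yields every score in this listing.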
  2. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.20
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.15
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Green, R.: WordNet (2009) 0.02
    Abstract
    WordNet, a lexical database for English, is organized around semantic and lexical relationships between synsets, concepts represented by sets of synonymous word senses. Offering reasonably comprehensive coverage of the nouns, verbs, adjectives, and adverbs of general English, WordNet is a widely used resource for dealing with the ambiguity that arises from homonymy, polysemy, and synonymy. WordNet is used in many information-related tasks and applications (e.g., word sense disambiguation, semantic similarity, lexical chaining, alignment of parallel corpora, text segmentation, sentiment and subjectivity analysis, text classification, information retrieval, text summarization, question answering, information extraction, and machine translation).
  5. Quillian, M.R.: Word concepts : a theory and simulation of some basic semantic capabilities. (1967) 0.02
    Abstract
    In order to discover design principles for a large memory that can enable it to serve as the base of knowledge underlying human-like language behavior, experiments with a model memory are being performed. This model is built up within a computer by "recoding" a body of information from an ordinary dictionary into a complex network of elements and associations interconnecting them. Then, the ability of a program to use the resulting model memory effectively for simulating human performance provides a test of its design. One simulation program, now running, is given the model memory and is required to compare and contrast the meanings of arbitrary pairs of English words. For each pair, the program locates any relevant semantic information within the model memory, draws inferences on the basis of this, and thereby discovers various relationships between the meanings of the two words. Finally, it creates English text to express its conclusions. The design principles embodied in the memory model, together with some of the methods used by the program, constitute a theory of how human memory for semantic and other conceptual material may be formatted, organized, and used.
  6. Collard, J.; Paiva, V. de; Fong, B.; Subrahmanian, E.: Extracting mathematical concepts from text (2022) 0.02
    Abstract
    We investigate different systems for extracting mathematical entities from English texts in the mathematical field of category theory as a first step for constructing a mathematical knowledge graph. We consider four different term extractors and compare their results. This small experiment showcases some of the issues with the construction and evaluation of terms extracted from noisy domain text. We also make available two open corpora in research mathematics, in particular in category theory: a small corpus of 755 abstracts from the journal TAC (3188 sentences), and a larger corpus from the nLab community wiki (15,000 sentences).
  7. Drexel, G.: Knowledge engineering for intelligent information retrieval (2001) 0.01
    Abstract
    This paper presents a clustered approach to designing an overall ontological model together with a general rule-based component that serves as a mapping device. By observational criteria, a multi-lingual team of experts excerpts concepts from general communication in the media. The team, then, finds equivalent expressions in English, German, French, and Spanish. On the basis of a set of ontological and lexical relations, a conceptual network is built up. Concepts are thought to be universal. Objects unique in time and space are identified by names and will be explained by the universals as their instances. Our approach relies on multi-relational descriptions of concepts. It provides a powerful tool for documentation and conceptual language learning. First and foremost, our multi-lingual, polyhierarchical ontology fills the gap of semantically-based information retrieval by generating enhanced and improved queries for internet search
  8. Girju, R.; Beamer, B.; Rozovskaya, A.; Fister, A.; Bhat, S.: ¬A knowledge-rich approach to identifying semantic relations between nominals (2010) 0.01
    Abstract
    This paper describes a state-of-the-art supervised, knowledge-intensive approach to the automatic identification of semantic relations between nominals in English sentences. The system employs a combination of rich and varied sets of new and previously used lexical, syntactic, and semantic features extracted from various knowledge sources such as WordNet and additional annotated corpora. The system ranked first at the third most popular SemEval 2007 Task - Classification of Semantic Relations between Nominals and achieved an F-measure of 72.4% and an accuracy of 76.3%. We also show that some semantic relations are better suited for WordNet-based models than other relations. Additionally, we make a distinction between out-of-context (regular) examples and those that require sentence context for relation identification and show that contextual data are important for the performance of a noun-noun semantic parser. Finally, learning curves show that the task difficulty varies across relations and that our learned WordNet-based representation is highly accurate so the performance results suggest the upper bound on what this representation can do.
  9. Yu, L.-C.; Wu, C.-H.; Chang, R.-Y.; Liu, C.-H.; Hovy, E.H.: Annotation and verification of sense pools in OntoNotes (2010) 0.01
    Abstract
    The paper describes the OntoNotes, a multilingual (English, Chinese and Arabic) corpus with large-scale semantic annotations, including predicate-argument structure, word senses, ontology linking, and coreference. The underlying semantic model of OntoNotes involves word senses that are grouped into so-called sense pools, i.e., sets of near-synonymous senses of words. Such information is useful for many applications, including query expansion for information retrieval (IR) systems, (near-)duplicate detection for text summarization systems, and alternative word selection for writing support systems. Although a sense pool provides a set of near-synonymous senses of words, there is still no knowledge about whether two words in a pool are interchangeable in practical use. Therefore, this paper devises an unsupervised algorithm that incorporates Google n-grams and a statistical test to determine whether a word in a pool can be substituted by other words in the same pool. The n-gram features are used to measure the degree of context mismatch for a substitution. The statistical test is then applied to determine whether the substitution is adequate based on the degree of mismatch. The proposed method is compared with a supervised method, namely Linear Discriminant Analysis (LDA). Experimental results show that the proposed unsupervised method can achieve comparable performance with the supervised method.
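The unsupervised substitution test described in this abstract can be sketched in miniature. This is only an illustrative toy under stated assumptions: the n-gram counts below are invented stand-ins for the Google n-grams, and the paper's actual statistical test is not reproduced, only the idea of measuring context mismatch from n-gram support:

```python
from collections import Counter

# Invented bigram counts standing in for Google n-grams (illustrative only).
ngram_counts = Counter({
    ("strong", "coffee"): 80,
    ("powerful", "coffee"): 2,
    ("strong", "argument"): 30,
    ("powerful", "argument"): 45,
})

def mismatch(context, word, substitute):
    """Degree of context mismatch when substituting `word` by `substitute`:
    the relative drop in n-gram support within the shared context."""
    orig = ngram_counts[(word, context)]
    sub = ngram_counts[(substitute, context)]
    if orig == 0:
        return 1.0
    return max(0.0, (orig - sub) / orig)

# "powerful" substitutes poorly for "strong" before "coffee",
# but well before "argument".
print(mismatch("coffee", "strong", "powerful"))    # high mismatch
print(mismatch("argument", "strong", "powerful"))  # low mismatch
```

A real implementation would then apply a statistical test to the mismatch scores to decide whether the substitution is adequate.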
  10. Bast, H.; Bäurle, F.; Buchhold, B.; Haussmann, E.: Broccoli: semantic full-text search at your fingertips (2012) 0.01
    Abstract
    We present Broccoli, a fast and easy-to-use search engine for what we call semantic full-text search. Semantic full-text search combines the capabilities of standard full-text search and ontology search. The search operates on four kinds of objects: ordinary words (e.g., edible), classes (e.g., plants), instances (e.g., Broccoli), and relations (e.g., occurs-with or native-to). Queries are trees, where nodes are arbitrary bags of these objects, and arcs are relations. The user interface guides the user in incrementally constructing such trees by instant (search-as-you-type) suggestions of words, classes, instances, or relations that lead to good hits. Both standard full-text search and pure ontology search are included as special cases. In this paper, we describe the query language of Broccoli, a new kind of index that enables fast processing of queries from that language as well as fast query suggestion, the natural language processing required, and the user interface. We evaluated query times and result quality on the full version of the English Wikipedia (32 GB XML dump) combined with the YAGO ontology (26 million facts). We have implemented a fully functional prototype based on our ideas, see: http://broccoli.informatik.uni-freiburg.de.
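The tree-shaped queries this abstract describes (nodes as bags of objects, arcs as relations) can be sketched as a small data structure. This is a hypothetical sketch for illustration; the class and relation names are assumptions, not Broccoli's actual query language or API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QueryNode:
    """A node is a bag of objects: words, classes, or instances."""
    objects: List[str]
    children: List["QueryArc"] = field(default_factory=list)

@dataclass
class QueryArc:
    """An arc connects a node to a child node via a relation."""
    relation: str
    target: QueryNode

# Roughly: plants that occur with the word "edible" and are native to Europe.
query = QueryNode(
    objects=["class:plant"],
    children=[
        QueryArc("occurs-with", QueryNode(["word:edible"])),
        QueryArc("native-to", QueryNode(["instance:Europe"])),
    ],
)

def leaves(node):
    # Collect the object bags at the leaves of the query tree.
    if not node.children:
        return [node.objects]
    out = []
    for arc in node.children:
        out.extend(leaves(arc.target))
    return out

print(leaves(query))  # [['word:edible'], ['instance:Europe']]
```

Standard full-text search and pure ontology search fall out as the special cases where the bags contain only words, or only classes and instances.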
  11. Hauer, M.: Mehrsprachige semantische Netze leichter entwickeln (2002) 0.01
    Abstract
    AGI - Information Management Consultants have been supplying software for developing thesauri and classifications for 16 years now, formerly named INDEX and redeveloped two and a half years ago as IC INDEX. Such terminologies are often also called glossaries, lexicons, topic maps, RDF, semantic networks, classification schemes, file plans, or nomenclatures. The software has always allowed such terminological works to be multilingual, but there were no special tools to ease translation. Globalization increasingly makes specialized terminologies multilingual, as ongoing projects demonstrate. IC INDEX 5.08 therefore implements a dedicated translation workflow that processes word fields and, largely automatically but under the translator's control, creates the correct links between the terms in the other languages. This workflow alone substantially speeds up translation work. But it gets faster still: Linguatec's eTranslation Server automatically generates translation suggestions for German/English and German/French, with German/Spanish and German/Italian to follow shortly. Especially for multi-word terms, class labels, and compounds, automatic translation shows its strength over dictionary lookup. Dictionary lookup is of course also implemented, both against the Linguatec dictionary and against any dictionary addressable via a URL. Every translation suggestion must be confirmed by the terminology developer. As part of quality control, we tested against existing multilingual thesauri; the automatic suggestions were often identical to, and almost always very close to, the desired translation. Words that are no longer comprehensible to people of average education also cause problems for machine translation, e.g. technical terms from medicine, chemistry, and other sciences. But a human translator without the relevant specialist training would be overwhelmed here as well. So it does not work without subject and language competence, but with both it goes quite quickly. IC INDEX is based on Lotus Notes & Domino 5.08. Arbitrary relations between terms are permitted; the ANSI standards are implemented and supplemented with additional relations, 26 of which ship with the product. Output conforming to Topic Maps or RDF, two closely related standards, will be developed on request. Output formats include HTML, XML, an attractive print version under MS Word 2000, and formats for various search engines. AGI - Information Management Consultants, Neustadt an der Weinstraße, have advised companies and organizations since 1983 in the field now known as knowledge management. Since 1994 they have supplied a comprehensive, highly integrative solution, "Information Center", of which IC INDEX is a stand-alone module supporting multilingual indexing and multilingual semantic retrieval. Linguatec, Munich, once emerged from IBM's linguistic research labs and is widely known for its Personal Translator.
  12. Vlachidis, A.; Binding, C.; Tudhope, D.; May, K.: Excavating grey literature : a case study on the rich indexing of archaeological documents via natural language-processing techniques and knowledge-based resources (2010) 0.01
    Abstract
    Purpose - This paper sets out to discuss the use of information extraction (IE), a natural language-processing (NLP) technique to assist "rich" semantic indexing of diverse archaeological text resources. The focus of the research is to direct a semantic-aware "rich" indexing of diverse natural language resources with properties capable of satisfying information retrieval from online publications and datasets associated with the Semantic Technologies for Archaeological Resources (STAR) project. Design/methodology/approach - The paper proposes use of the English Heritage extension (CRM-EH) of the standard core ontology in cultural heritage, CIDOC CRM, and exploitation of domain thesauri resources for driving and enhancing an Ontology-Oriented Information Extraction process. The process of semantic indexing is based on a rule-based Information Extraction technique, which is facilitated by the General Architecture of Text Engineering (GATE) toolkit and expressed by Java Annotation Pattern Engine (JAPE) rules. Findings - Initial results suggest that the combination of information extraction with knowledge resources and standard conceptual models is capable of supporting semantic-aware term indexing. Additional efforts are required for further exploitation of the technique and adoption of formal evaluation methods for assessing the performance of the method in measurable terms. Originality/value - The value of the paper lies in the semantic indexing of 535 unpublished online documents often referred to as "Grey Literature", from the Archaeological Data Service OASIS corpus (Online AccesS to the Index of archaeological investigationS), with respect to the CRM ontological concepts E49.Time Appellation and P19.Physical Object.
  13. Coladangelo, L.P.: Organizing controversy : toward cultural hospitality in controlled vocabularies through semantic annotation (2021) 0.01
    Abstract
    This research explores current controversies within country dance communities and the implications of cultural and ethical issues related to representation of gender and race in a KOS for an ICH, while investigating the importance of context and the applicability of semantic approaches in the implementation of synonym rings. During development of a controlled vocabulary to represent dance concepts for country dance choreography, this study encountered and considered the importance of history and culture regarding synonymous and near-synonymous terms used to describe dance roles and choreographic elements. A subset of names for the same choreographic concepts across four subdomains of country dance (English country dance, Scottish country dance, contra dance, and modern western square dance) were used as a case study. These concepts included traditionally gendered dance roles and choreographic terms with a racially pejorative history. Through the lens of existing research on ethical knowledge organization, this study focused on principles and methods of transparency, multivocality, cultural warrant, cultural hospitality, and intersectionality to conduct a domain analysis of country dance resources. The analysis revealed differing levels of engagement and distinction among dance practitioners and communities for their preferences to use different terms for the same concept. Various lexical, grammatical, affective, social, political, and cultural aspects also emerged as important contextual factors for the use and assignment of terms. As a result, this study proposes the use of semantic annotation to represent those contextual factors and to allow mechanisms of user choice in the design of a country dance knowledge organization system.
Future research arising from this study would focus on expanding examination to other country dance genres and continued exploration of the use of semantic approaches to represent contextual factors in controlled vocabulary development.
  14. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.01
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies. This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts. This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template. 
The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
  15. Tang, X.-B.; Wei Wei, G.-C.L.; Zhu, J.: An inference model of medical insurance fraud detection : based on ontology and SWRL (2017) 0.01
    Abstract
    Medical insurance fraud is common in many countries' medical insurance systems and represents a serious threat to the insurance funds and the benefits of patients. In this paper, we present an inference model of medical insurance fraud detection, based on a medical detection domain ontology that incorporates the knowledge base provided by the Medical Terminology, NKIMed, and Chinese Library Classification systems. Through analyzing the behaviors of irregular and fraudulent medical services, we defined the scope of the medical domain ontology relevant to the task and built the ontology about medical sciences and medical service behaviors. The model then applies Semantic Web Rule Language (SWRL) rules, executed in the Java Expert System Shell (JESS), to detect medical irregularities and mine implicit knowledge. The system can be used to improve the management of medical insurance risks.
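The rule-based detection the abstract describes can be sketched in miniature. The actual system expresses its rules in SWRL over an OWL ontology and executes them in JESS; the plain-Python sketch below only illustrates the shape of such a rule. The ontology fragment, claim records, and the drug/diagnosis mismatch rule are all hypothetical:

```python
# Toy fragment of a drug-indication ontology (invented data, not NKIMed).
DRUG_FOR_DIAGNOSIS = {
    "metformin": {"type 2 diabetes"},
    "amoxicillin": {"bacterial infection"},
}

def irregular(claim: dict) -> bool:
    """Hypothetical rule: flag a claim whose drug fits none of its diagnoses,
    mirroring the kind of mismatch an SWRL rule over the ontology would catch."""
    indicated = DRUG_FOR_DIAGNOSIS.get(claim["drug"], set())
    return not (indicated & set(claim["diagnoses"]))

claims = [
    {"id": 1, "drug": "metformin", "diagnoses": ["type 2 diabetes"]},
    {"id": 2, "drug": "amoxicillin", "diagnoses": ["type 2 diabetes"]},
]
flagged = [c["id"] for c in claims if irregular(c)]
print(flagged)  # → [2]: an antibiotic claimed with no matching diagnosis
```

In the real system this logic lives in declarative SWRL rules rather than procedural code, which is what lets the same ontology both classify medical knowledge and drive irregularity detection.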
  16. Jansen, L.: Four rules for classifying social entities (2014) 0.01
    Abstract
    Many top-level ontologies like Basic Formal Ontology (BFO) have been developed as a framework for ontologies in the natural sciences. The aim of the present essay is to extend the account of BFO to a very special layer of reality, the world of social entities. While natural entities like bacteria, thunderstorms or temperatures exist independently of human action and thought, social entities like countries, hospitals or money come into being only through human collective intentions and collective actions. Recently, the regional ontology of the social world has attracted considerable research interest in philosophy - witness, e.g., the pioneering work by Gilbert, Tuomela and Searle. There is a considerable class of phenomena that require the participation of more than one human agent: nobody can tango alone, play tennis against oneself, or set up a parliamentary democracy for oneself. Through cooperation and coordination of their wills and actions, agents can act together - they can perform social actions and group actions. An important kind of social action is the establishment of an institution (e.g. a hospital, a research agency or a marriage) through mutual promise or (social) contract. Another important kind of social action is the imposition of a social status on certain entities. For example, a society can impose the status of being a 20 Euro note on certain pieces of paper, or the status of being an approved medication on a certain chemical substance.
  17. Schmitz-Esser, W.: Language of general communication and concept compatibility (1996) 0.00
    Pages
    S.11-22
  18. Drewer, P.; Massion, F.; Pulitano, D.: Was haben Wissensmodellierung, Wissensstrukturierung, künstliche Intelligenz und Terminologie miteinander zu tun? [What do knowledge modeling, knowledge structuring, artificial intelligence, and terminology have to do with one another?] (2017) 0.00
    Date
    13.12.2017 14:17:22
  19. Tudhope, D.; Hodge, G.: Terminology registries (2007) 0.00
    Date
    26.12.2011 13:22:07
  20. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation [Mapping methods for knowledge organization] (2002) 0.00
    Date
    30. 5.2010 16:22:35
