Search (15 results, page 1 of 1)

  • Active filter: language_ss:"e"
  • Active filter: type_ss:"x"
  1. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.04
    0.04078495 = product of:
      0.20392476 = sum of:
        0.20392476 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
          0.20392476 = score(doc=4997,freq=2.0), product of:
            0.43541256 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051357865 = queryNorm
            0.46834838 = fieldWeight in 4997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4997)
      0.2 = coord(1/5)
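
    The explain tree above is Lucene ClassicSimilarity (classic TF-IDF) output. As a minimal sketch, its arithmetic can be reproduced in Python from the values shown in the listing, assuming Lucene's idf formula 1 + ln(maxDocs / (docFreq + 1)):

      import math

      freq, doc_freq, max_docs = 2.0, 24, 44218   # values from the explain tree
      field_norm = 0.0390625                      # stored length norm of the field
      query_norm = 0.051357865                    # query normalization factor
      coord = 1 / 5                               # 1 of 5 query terms matched

      tf = math.sqrt(freq)                           # 1.4142135 = tf(freq=2.0)
      idf = 1 + math.log(max_docs / (doc_freq + 1))  # 8.478011
      query_weight = idf * query_norm                # 0.43541256
      field_weight = tf * idf * field_norm           # 0.46834838
      print(coord * query_weight * field_weight)     # ~0.04078495, the score of hit 1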
    
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf.
  2. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.03
    0.034272335 = product of:
      0.17136167 = sum of:
        0.17136167 = weight(_text_:thesaurus in 3222) [ClassicSimilarity], result of:
          0.17136167 = score(doc=3222,freq=16.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.7220435 = fieldWeight in 3222, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
      0.2 = coord(1/5)
    
    Abstract
    The use of thesaurus-based indexing is a common approach for increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements to existing thesauri to obtain better search results. In this area, we focus on two aspects: 1. We present an analysis of the indexing results achieved by an automatic document indexer and the thesaurus involved. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to the available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the results as a treemap. We give examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
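    The abstract does not spell out the statistical measures used; as a hedged sketch of the kind of analysis it describes, one can count how often each thesaurus descriptor is actually assigned by the indexer and flag unused concepts as potential problems. All document and term names below are hypothetical:

      from collections import Counter

      # Hypothetical output of an automatic indexer: document -> assigned descriptors
      indexing = {
          "doc1": ["Thesaurus", "Indexing"],
          "doc2": ["Indexing", "Retrieval"],
          "doc3": ["Indexing"],
      }
      thesaurus = ["Thesaurus", "Indexing", "Retrieval", "Ontology"]

      usage = Counter(t for terms in indexing.values() for t in terms)
      for term in thesaurus:
          note = "  <- never assigned: potential problem" if usage[term] == 0 else ""
          print(f"{term}: {usage[term]}{note}")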
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.03
    0.032627963 = product of:
      0.1631398 = sum of:
        0.1631398 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
          0.1631398 = score(doc=701,freq=2.0), product of:
            0.43541256 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051357865 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.2 = coord(1/5)
    
    Content
    Cf.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
  4. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    0.032627963 = product of:
      0.1631398 = sum of:
        0.1631398 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
          0.1631398 = score(doc=5820,freq=2.0), product of:
            0.43541256 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.051357865 = queryNorm
            0.3746787 = fieldWeight in 5820, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=5820)
      0.2 = coord(1/5)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf.
  5. Francu, V.: Multilingual access to information using an intermediate language (2003) 0.03
    0.025647065 = product of:
      0.12823533 = sum of:
        0.12823533 = weight(_text_:thesaurus in 1742) [ClassicSimilarity], result of:
          0.12823533 = score(doc=1742,freq=14.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.5403279 = fieldWeight in 1742, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.03125 = fieldNorm(doc=1742)
      0.2 = coord(1/5)
    
    Abstract
    While theoretically so widely available, information can be restricted from more general use by linguistic barriers. The linguistic aspects of information languages, and particularly the prospects for enhanced access to information by means of multilingual access facilities, form the substance of this thesis. The main problem of this research is thus to demonstrate that information retrieval can be improved by searching with multilingual thesaurus terms based on an intermediate or switching language. Universal classification systems in general can play the role of switching languages, for reasons dealt with in the forthcoming pages. The Universal Decimal Classification (UDC) in particular is the classification system used as an example of a switching language for our purposes. The question may arise: why a universal classification system and not another thesaurus? Because the UDC, like most classification systems, uses symbols. It is therefore language-independent, and the problems of compatibility between such a thesaurus and various other thesauri in different languages are avoided. Another question may still arise: why not, then, assign running numbers to the descriptors in a thesaurus and make a switching language out of the resulting enumerative system? Because of other characteristics of the UDC: hierarchical structure and terminological richness, consistency and control. One big question to answer is: can a thesaurus be built on the basis of a classification system, in any and all of its parts? To what extent can this question be given an affirmative answer? This depends largely on the attributes of the universal classification system that can favourably be used for this purpose. Examples of different situations will be given and discussed, beginning with those classes of the UDC which are best suited for building a thesaurus structure out of them (classes which are both hierarchical and faceted)...
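    A minimal sketch of the switching-language idea: descriptors in several languages are mapped to one language-independent UDC notation, so a query in one language can be switched into any other. The sample notations and descriptors below are hypothetical illustrations, not entries from the thesis:

      # Hypothetical sample of a UDC-based multilingual thesaurus
      UDC_INDEX = {
          "025.43": {"en": "thesauri", "fr": "thésaurus", "ro": "tezaure"},
          "81'374": {"en": "lexicography", "fr": "lexicographie", "ro": "lexicografie"},
      }

      def to_udc(term):
          """UDC notations whose descriptor matches `term` in any language."""
          return [udc for udc, labels in UDC_INDEX.items()
                  if term.lower() in labels.values()]

      def switch(term, target_lang):
          """Translate a descriptor via the UDC switching language."""
          return [UDC_INDEX[udc][target_lang] for udc in to_udc(term)]

      print(switch("thésaurus", "en"))  # ['thesauri']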
    Content
    Contents: INFORMATION LANGUAGES: A LINGUISTIC APPROACH; MULTILINGUAL ASPECTS IN INFORMATION STORAGE AND RETRIEVAL; COMPATIBILITY AND CONVERTIBILITY OF INFORMATION LANGUAGES; CURRENT TRENDS IN MULTILINGUAL ACCESS; BUILDING UDC-BASED MULTILINGUAL THESAURI; ONLINE APPLICATIONS OF THE UDC-BASED MULTILINGUAL THESAURI; THE IMPACT OF SPECIFICITY ON THE RETRIEVAL POWER OF A UDC-BASED MULTILINGUAL THESAURUS; FINAL REMARKS AND GENERAL CONCLUSIONS. Thesis submitted for the degree of Doctor in Language and Literature at the Universiteit Antwerpen. - Cf.: http://dlist.sir.arizona.edu/1862/.
  6. Tavakolizadeh-Ravari, M.: Analysis of the long term dynamics in thesaurus developments and its consequences (2017) 0.02
    0.023744568 = product of:
      0.11872284 = sum of:
        0.11872284 = weight(_text_:thesaurus in 3081) [ClassicSimilarity], result of:
          0.11872284 = score(doc=3081,freq=12.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.5002464 = fieldWeight in 3081, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.03125 = fieldNorm(doc=3081)
      0.2 = coord(1/5)
    
    Abstract
    The thesis analyzes the dynamic development and use of thesaurus terms and, in addition, focuses on the factors that influence the number of index terms per document or journal. MeSH and the corresponding database MEDLINE served as the objects of study. The most important findings are: 1. The MeSH thesaurus has developed logarithmically in three distinct phases. Such a thesaurus should follow the equation T = 3076.6 ln(d) - 22695 + 0.0039d (T = terms, ln = natural logarithm, d = documents). To construct such a thesaurus, one accordingly needs about 1,600 documents covering the various topics of the thesaurus's domain. The dynamic development of thesauri such as MeSH requires the introduction of one new term per 256 newly indexed documents. 2. The distribution of thesaurus terms yields three categories: heavily, normally and rarely used headings. The last group is in a test phase, while in the first and second categories the newly added descriptors drive the thesaurus's growth. 3. There is a logarithmic relationship between the number of index terms per article and its page count, for articles of between one and twenty-one pages. 4. Journal articles that appear in MEDLINE with abstracts receive almost two more descriptors. 5. The findability of non-English-language documents in MEDLINE is lower than that of English documents. 6. Articles from journals with an impact factor of zero to fifteen do not receive more index terms than those of the other journals covered by MEDLINE. 7. Within an indexing system, different journals carry more or less weight in their findability. The distribution of index terms per page showed that MEDLINE contains three categories of publications; moreover, there are a few strongly favoured journals.
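    A worked check of the reported growth law: note that its linear coefficient 0.0039 is approximately 1/256, which matches the stated rate of one new term per 256 newly indexed documents once the logarithmic term's slope has flattened out for large d. A minimal sketch:

      import math

      def mesh_terms(d):
          """Reported growth law: T = 3076.6 ln(d) - 22695 + 0.0039 d."""
          return 3076.6 * math.log(d) - 22695 + 0.0039 * d

      # T first becomes positive near the ~1,600 documents the abstract cites:
      print(mesh_terms(1500))  # still negative
      print(mesh_terms(1600))  # ~ +10 terms
      print(1 / 0.0039)        # ~256 documents per new term (asymptotic slope)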
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Gordon, T.J.; Helmer-Hirschberg, O.: Report on a long-range forecasting study (1964) 0.02
    0.015744796 = product of:
      0.078723975 = sum of:
        0.078723975 = weight(_text_:22 in 4204) [ClassicSimilarity], result of:
          0.078723975 = score(doc=4204,freq=4.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.4377287 = fieldWeight in 4204, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=4204)
      0.2 = coord(1/5)
    
    Date
    22. 6.2018 13:24:08
    22. 6.2018 13:54:52
  8. Schwarz, K.: Domain model enhanced search : a comparison of taxonomy, thesaurus and ontology (2005) 0.01
    0.013708933 = product of:
      0.06854466 = sum of:
        0.06854466 = weight(_text_:thesaurus in 4569) [ClassicSimilarity], result of:
          0.06854466 = score(doc=4569,freq=4.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2888174 = fieldWeight in 4569, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.03125 = fieldNorm(doc=4569)
      0.2 = coord(1/5)
    
    Abstract
    The results of this thesis are intended to support the information architect in designing a solution for improved search in a corporate environment. Specifically, we have examined the types of search problems that require a domain model to enhance the search process. There are several approaches to modeling a domain. We have considered three types of domain modeling scheme: taxonomy, thesaurus and ontology. The intention is to support the information architect in making an informed choice between one or more of these schemes. In our opinion the main criteria for this choice are the modeling characteristics of a scheme and its suitability for application in the search process. The second chapter is a discussion of the modeling characteristics of each scheme, followed by a comparison between them. This should give an information architect an idea of which aspects of a domain can be modeled with each scheme. What is missing here is an indication of the effort required to model a domain with each scheme. There are too many factors that influence the amount of required effort, ranging from measurable factors like domain size and resource characteristics to cultural matters such as the willingness to share knowledge and the existence of a project champion in the team to keep the project running. The third chapter shows what role domain models can play in each part of the search process. This gives an idea of the problems that domain models can solve. We have split the search process into individual parts to show that domain models can be applied very differently in the process. The fourth chapter makes recommendations about the suitability of each individual domain modeling scheme for improving search. Each scheme has particular characteristics that make it especially suitable for a given domain or search problem. In the appendix each case study is described in detail. These descriptions are intended to serve as a benchmark: the current problem of the enterprise can be compared to those described to see which case study is most similar, which solution was chosen, which problems arose and how they were dealt with. An important issue that we have not touched upon in this thesis is that of maintenance. The real problems of a domain model are revealed when it is applied in a search system and its deficits and wrong assumptions become clear. Adaptation and maintenance are always required. Unfortunately we have not been able to glean sufficient information about maintenance issues from our case studies to draw any meaningful conclusions.
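    As an illustrative sketch (with hypothetical entries, not taken from the thesis), the three schemes differ mainly in which relations they can express: a taxonomy captures only broader/narrower links, a thesaurus adds equivalence (UF) and associative (RT) relations, and an ontology allows arbitrary typed relations:

      # Taxonomy: hierarchical (broader -> narrower) relations only
      taxonomy = {"vehicle": ["car", "bicycle"]}

      # Thesaurus: hierarchy plus equivalence (UF) and associative (RT) links
      thesaurus = {
          "car": {"BT": ["vehicle"], "UF": ["automobile"], "RT": ["engine"]},
      }

      # Ontology: arbitrary typed relations between classes
      ontology = [
          ("Car", "subClassOf", "Vehicle"),
          ("Car", "hasPart", "Engine"),
          ("Engine", "consumes", "Fuel"),
      ]

      print([o for s, p, o in ontology if s == "Car" and p == "hasPart"])  # ['Engine']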
  9. Chen, X.: Indexing consistency between online catalogues (2008) 0.01
    0.012117098 = product of:
      0.06058549 = sum of:
        0.06058549 = weight(_text_:thesaurus in 2209) [ClassicSimilarity], result of:
          0.06058549 = score(doc=2209,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2552809 = fieldWeight in 2209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2209)
      0.2 = coord(1/5)
    
    Abstract
    In the global online environment, many bibliographic services provide integrated access to different internet-based OPACs. In such an environment, users expect to see greater consistency within and between the systems. The purpose of this study is to investigate the indexing consistency between systems and, in the process, to examine several factors that can influence it. The study's most important goal is to identify the causes of the inconsistencies, so that sensible recommendations can be made for improving indexing consistency. A sample of 3,307 monographs was drawn from two Chinese bibliographic catalogues. By Hooper's formula, the average indexing consistency was 64.2% for index terms and 61.6% for class numbers; by Rolling's formula, it was 70.7% for index terms and 63.4% for class numbers. Several factors influencing indexing consistency were examined: (1) indexing exhaustivity; (2) indexing specificity; (3) length of the monographs; (4) category of the indexing language; (5) subject area of the monographs; (6) development of the disciplines; (7) structure of the thesaurus or classification; (8) year of publication. The causes of the inconsistencies were also analyzed. The analysis showed that: (1) indexers lack subject expertise and familiarity with the indexing languages and indexing rules, which caused many inconsistencies; (2) the lack of unified or precise rules likewise produced inconsistencies; (3) delayed revision of the indexing languages, lack of terminological control, too few scope notes and "see also" references, and the high semantic freedom in choosing descriptors or classes also caused inconsistencies.
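    The two consistency measures named above have standard forms: Hooper's c / (a + b - c) (the Jaccard coefficient) and Rolling's 2c / (a + b) (the Dice coefficient), where a and b are the numbers of terms assigned by each catalogue and c the number they share. A minimal sketch with hypothetical descriptor sets:

      def hooper(a, b):
          """Hooper's measure: c / (a + b - c)."""
          c = len(a & b)
          return c / (len(a) + len(b) - c)

      def rolling(a, b):
          """Rolling's measure: 2c / (a + b)."""
          c = len(a & b)
          return 2 * c / (len(a) + len(b))

      cat1 = {"information retrieval", "indexing", "thesauri"}
      cat2 = {"information retrieval", "indexing", "classification"}
      print(hooper(cat1, cat2))   # 0.5
      print(rolling(cat1, cat2))  # 0.666...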
  10. Ziemba, L.: Information retrieval with concept discovery in digital collections for agriculture and natural resources (2011) 0.01
    0.009693679 = product of:
      0.048468396 = sum of:
        0.048468396 = weight(_text_:thesaurus in 4728) [ClassicSimilarity], result of:
          0.048468396 = score(doc=4728,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.20422474 = fieldWeight in 4728, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.03125 = fieldNorm(doc=4728)
      0.2 = coord(1/5)
    
    Abstract
    The amount and complexity of information available in digital form is already huge, and new information is being produced every day. Retrieving information relevant to a particular need becomes a significant issue. This work utilizes knowledge organization systems (KOS), such as thesauri and ontologies, and applies information extraction (IE) and computational linguistics (CL) techniques to organize, manage and retrieve information stored in digital collections in the agricultural domain. Two real-world applications of the approach have been developed and are available and actively used by the public. An ontology is used to manage the Water Conservation Digital Library, holding a dynamic collection of various types of digital resources in the domain of urban water conservation in Florida, USA. The ontology-based back-end powers a fully operational web interface, available at http://library.conservefloridawater.org. The system has demonstrated numerous benefits of the ontology application, including accurate retrieval of resources, information sharing and reuse, and has proved to effectively facilitate information management. The major difficulty encountered with the approach is that the large and dynamic number of concepts makes it difficult to keep the ontology consistent and to accurately catalog resources manually. To address these issues, a combination of IE and CL techniques, such as the Vector Space Model and probabilistic parsing, together with the Agricultural Thesaurus, was adapted to automatically extract concepts important for each of the texts in the Best Management Practices (BMP) Publication Library, a collection of documents in the domain of agricultural BMPs in Florida available at http://lyra.ifas.ufl.edu/LIB. A new approach to domain-specific concept discovery using an Internet search engine was developed. Initial evaluation of the results indicates significant improvement in the precision of information extraction. The approach presented in this work focuses on problems unique to the agriculture and natural resources domain, such as domain-specific concepts and vocabularies, but should be applicable to any collection of texts in digital format. It may be of interest to anyone who needs to effectively manage a collection of digital resources.
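    As a hedged sketch of the vector-space side of the approach (the actual pipeline also uses probabilistic parsing, omitted here), thesaurus concepts and a document can be compared as bag-of-words vectors, assigning the best-scoring concept. The concept labels and tokens below are hypothetical:

      import math
      from collections import Counter

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a)
          norm = (math.sqrt(sum(v * v for v in a.values()))
                  * math.sqrt(sum(v * v for v in b.values())))
          return dot / norm if norm else 0.0

      # Hypothetical thesaurus concepts, each a small bag of label/synonym tokens
      concepts = {
          "irrigation management": Counter("irrigation water scheduling management".split()),
          "soil erosion": Counter("soil erosion sediment runoff".split()),
      }

      doc = Counter("best practices for irrigation scheduling and water use".split())
      scores = {c: cosine(vec, doc) for c, vec in concepts.items()}
      print(max(scores, key=scores.get))  # 'irrigation management'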
  11. Markó, K.G.: Foundation, implementation and evaluation of the MorphoSaurus system (2008) 0.01
    0.00848197 = product of:
      0.04240985 = sum of:
        0.04240985 = weight(_text_:thesaurus in 4415) [ClassicSimilarity], result of:
          0.04240985 = score(doc=4415,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.17869665 = fieldWeight in 4415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4415)
      0.2 = coord(1/5)
    
    Abstract
    The proper handling of acronyms plays a crucial role in medical texts, e.g. in patient records, as well as in the scientific literature. Chapter six presents an approach in which acronyms are automatically acquired from (bio-)medical literature. Furthermore, acronyms and their definitions in different languages are linked to each other using the MorphoSaurus text processing system. Automatic word sense disambiguation is still one of the most challenging tasks in Natural Language Processing. In Chapter seven, cross-lingual considerations lead to a new methodology for automatic disambiguation applied to subwords. Beginning with Chapter eight, a series of applications based on MorphoSaurus is introduced. Firstly, the implementation of the subword approach within a cross-language information retrieval setting for the medical domain is described and evaluated on standard test document collections. In Chapter nine, this methodology is extended to multilingual information retrieval on the Web, for which user queries are translated into target languages based on their segmentation into subwords and the subwords' interlingual mappings. The cross-lingual, automatic assignment of document descriptors to documents is the topic of Chapter ten. A large-scale evaluation of a heuristic as well as a statistical algorithm is carried out using a prominent medical thesaurus as a controlled vocabulary. In Chapter eleven, it is shown how MorphoSaurus can be used to map monolingual lexical resources across different languages. As a result, a large multilingual medical lexicon with high coverage and complete lexical information is built and evaluated against a comparable, already available and commonly used lexical repository for the medical domain. Chapter twelve sketches a few further applications based on MorphoSaurus. The generality and applicability of the subword approach to other domains is outlined, and proofs of concept in real-world scenarios are presented. Finally, Chapter thirteen recapitulates the most important aspects of MorphoSaurus, and the potential benefit of its employment in medical information systems is carefully assessed, both for medical experts in their everyday work and with regard to health care consumers and their existential information needs.
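    A minimal sketch of the subword/interlingua idea behind MorphoSaurus: words are segmented into meaning-bearing subwords, and equivalent subwords across languages share one interlingual identifier. The lexicon entries and identifiers below are made up for illustration and are not MorphoSaurus data:

      # Hypothetical subword lexicon: (language, subword) -> interlingual id
      SUBWORD_LEXICON = {
          ("en", "gastr"): "#stomach", ("de", "magen"): "#stomach",
          ("en", "itis"): "#inflamm", ("de", "entzuend"): "#inflamm",
      }

      def segment(word, lang):
          """Greedy longest-match segmentation against the subword lexicon."""
          subs = [k[1] for k in SUBWORD_LEXICON if k[0] == lang]
          i, ids = 0, []
          while i < len(word):
              match = max((s for s in subs if word.startswith(s, i)),
                          key=len, default=None)
              if match:
                  ids.append(SUBWORD_LEXICON[(lang, match)])
                  i += len(match)
              else:
                  i += 1  # skip characters not covered by the lexicon
          return ids

      # "gastritis" (en) and "Magenentzuendung" (de) share one id sequence:
      print(segment("gastritis", "en"))         # ['#stomach', '#inflamm']
      print(segment("magenentzuendung", "de"))  # ['#stomach', '#inflamm']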
  12. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.01
    0.008349938 = product of:
      0.04174969 = sum of:
        0.04174969 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
          0.04174969 = score(doc=563,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.23214069 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
      0.2 = coord(1/5)
    
    Date
    10. 1.2013 19:22:47
  13. Geisriegler, E.: Enriching electronic texts with semantic metadata : a use case for the historical Newspaper Collection ANNO (Austrian Newspapers Online) of the Austrian National Library (2012) 0.01
    0.006958282 = product of:
      0.03479141 = sum of:
        0.03479141 = weight(_text_:22 in 595) [ClassicSimilarity], result of:
          0.03479141 = score(doc=595,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.19345059 = fieldWeight in 595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=595)
      0.2 = coord(1/5)
    
    Date
    3. 2.2013 18:00:22
  14. Makewita, S.M.: Investigating the generic information-seeking function of organisational decision-makers : perspectives on improving organisational information systems (2002) 0.01
    0.006958282 = product of:
      0.03479141 = sum of:
        0.03479141 = weight(_text_:22 in 642) [ClassicSimilarity], result of:
          0.03479141 = score(doc=642,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.19345059 = fieldWeight in 642, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=642)
      0.2 = coord(1/5)
    
    Date
    22. 7.2022 12:16:58
  15. Kiren, T.: A clustering based indexing technique of modularized ontologies for information retrieval (2017) 0.01
    0.0055666254 = product of:
      0.027833126 = sum of:
        0.027833126 = weight(_text_:22 in 4399) [ClassicSimilarity], result of:
          0.027833126 = score(doc=4399,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.15476047 = fieldWeight in 4399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=4399)
      0.2 = coord(1/5)
    
    Date
    20. 1.2015 18:30:22