Search (57 results, page 1 of 3)

  • theme_ss:"Semantisches Umfeld in Indexierung u. Retrieval"
  1. Boyack, K.W.; Wylie, B.N.; Davidson, G.S.: Information Visualization, Human-Computer Interaction, and Cognitive Psychology : Domain Visualizations (2002) 0.03
    0.027163425 = product of:
      0.05432685 = sum of:
        0.017862841 = product of:
          0.053588524 = sum of:
            0.053588524 = weight(_text_:k in 1352) [ClassicSimilarity], result of:
              0.053588524 = score(doc=1352,freq=2.0), product of:
                0.13587062 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.038061365 = queryNorm
                0.39440846 = fieldWeight in 1352, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1352)
          0.33333334 = coord(1/3)
        0.03646401 = product of:
          0.07292802 = sum of:
            0.07292802 = weight(_text_:22 in 1352) [ClassicSimilarity], result of:
              0.07292802 = score(doc=1352,freq=4.0), product of:
                0.13328442 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038061365 = queryNorm
                0.54716086 = fieldWeight in 1352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1352)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
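The nested output above is Lucene's ClassicSimilarity (TF-IDF) explain tree for the first hit. A minimal sketch that recomputes that score from the printed constants (function and variable names are illustrative, not part of the Lucene API):

```python
import math

def clause_score(freq, idf, query_norm, field_norm, coord=1.0):
    """One term clause of a ClassicSimilarity explain tree:
    score = queryWeight * fieldWeight, scaled by a coord factor, with
    queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm."""
    tf = math.sqrt(freq)            # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return coord * query_weight * field_weight

# Constants taken from the explain tree for result 1 (doc 1352).
query_norm = 0.038061365
k_part = clause_score(2.0, 3.569778, query_norm, 0.078125, coord=1/3)    # _text_:k
t22_part = clause_score(4.0, 3.5018296, query_norm, 0.078125, coord=1/2) # _text_:22
total = (k_part + t22_part) * 0.5   # outer coord(2/4)
print(f"{total:.9f}")               # ~ 0.027163425
```

The same four-factor decomposition (tf, idf, queryNorm, fieldNorm, times coord) accounts for every score tree in this result list.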
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:17:40
    Source
     Visual Interfaces to Digital Libraries. Eds.: Börner, K. and C. Chen
  2. Rädler, K.: In Bibliothekskatalogen "googlen" : Integration von Inhaltsverzeichnissen, Volltexten und WEB-Ressourcen in Bibliothekskataloge (2004) 0.02
    Object
    Intelligent capture
  3. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie (2005) 0.02
    Abstract
     Ontologies are employed to give document retrieval in particular a fundamentally better, semantically grounded basis than the current state of the art provides. We present an ontology, developed and deployed at the FH Darmstadt, that is intended both to cover the subject area of the university broadly and to describe it in a semantically differentiated way. The problem with semantic search is that it must be as easy for information seekers to use as common search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. We describe the facilities the software K-Infinity provides and the concept by which these facilities are used for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:58
    Object
    K-Infinity
  4. Knorz, G.; Rein, B.: Semantische Suche in einer Hochschulontologie : Ontologie-basiertes Information-Filtering und -Retrieval mit relationalen Datenbanken (2005) 0.02
    Abstract
     Ontologies are employed to give document retrieval in particular a fundamentally better, semantically grounded basis than the current state of the art provides. We present an ontology, developed and deployed at the FH Darmstadt, that is intended both to cover the subject area of the university broadly and to describe it in a semantically differentiated way. The problem with semantic search is that it must be as easy for information seekers to use as common search engines, while at the same time delivering high-quality results on the basis of the elaborate information model. We describe the facilities the software K-Infinity provides and the concept by which these facilities are used for a semantic search for documents and other information units (persons, events, projects, etc.).
    Date
    11. 2.2011 18:22:25
    Object
    K-Infinity
  5. Lund, K.; Burgess, C.; Atchley, R.A.: Semantic and associative priming in high-dimensional semantic space (1995) 0.02
    Source
     Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society: July 22 - 25, 1995, University of Pittsburgh / ed. by Johanna D. Moore and Jill Fain Lehman
  6. Drexel, G.: Knowledge engineering for intelligent information retrieval (2001) 0.01
    Source
     Computational linguistics and intelligent text processing: Second International Conference, CICLing 2001, Mexico City, Mexico, 18.-24.2.2001. Proceedings. Ed.: Alexander Gelbukh
  7. Berry, M.W.; Dumais, S.T.; O'Brien, G.W.: Using linear algebra for intelligent information retrieval (1995) 0.01
    Abstract
    Currently, most approaches to retrieving textual materials from scientific databases depend on a lexical match between words in users' requests and those in or assigned to documents in a database. Because of the tremendous diversity in the words people use to describe the same document, lexical methods are necessarily incomplete and imprecise. Using the singular value decomposition (SVD), one can take advantage of the implicit higher-order structure in the association of terms with documents by determining the SVD of large sparse term by document matrices. Terms and documents represented by 200-300 of the largest singular vectors are then matched against user queries. We call this retrieval method Latent Semantic Indexing (LSI) because the subspace represents important associative relationships between terms and documents that are not evident in individual documents. LSI is a completely automatic yet intelligent indexing method, widely applicable, and a promising way to improve users...
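     The SVD-based method this abstract describes, Latent Semantic Indexing, can be sketched as follows. The toy term-document matrix, the vocabulary, and k=2 are illustrative stand-ins for the large sparse matrices and the 200-300 singular vectors the authors work with:

```python
import numpy as np

# Toy term-document count matrix (terms x docs); a real LSI system would
# use a large sparse, typically tf-idf weighted, matrix.
A = np.array([
    [1, 1, 0, 0],   # "retrieval"
    [1, 0, 1, 0],   # "indexing"
    [0, 1, 1, 0],   # "semantic"
    [0, 0, 1, 1],   # "ontology"
], dtype=float)

# Truncated SVD: keep only the k largest singular vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

# Fold a query ("retrieval" + "semantic") into the latent concept space
# and rank documents by cosine similarity there, so documents can match
# even without a literal lexical overlap.
q = np.array([1, 0, 1, 0], dtype=float)
q_hat = q @ Uk / sk                       # query in concept space
docs = Vtk.T                              # each row: one doc in concept space
sims = docs @ q_hat / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q_hat))
ranking = np.argsort(-sims)
print(ranking)
```

By the Eckart-Young theorem the rank-k reconstruction is the best rank-k approximation of A, which is what lets the subspace capture the "implicit higher-order structure" the abstract mentions.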
  8. Järvelin, K.; Kristensen, J.; Niemi, T.; Sormunen, E.; Keskustalo, H.: ¬A deductive data model for query expansion (1996) 0.01
    Source
    Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR '96), Zürich, Switzerland, August 18-22, 1996. Eds.: H.P. Frei et al
  9. Sanderson, M.; Lawrie, D.: Building, testing, and applying concept hierarchies (2000) 0.01
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
  10. Smeaton, A.F.; Rijsbergen, C.J. van: ¬The retrieval effects of query expansion on a feedback document retrieval system (1983) 0.01
    Date
    30. 3.2001 13:32:22
  11. Chen, H.; Lally, A.M.; Zhu, B.; Chau, M.: HelpfulMed : Intelligent searching for medical information over the Internet (2003) 0.01
  12. Jun, W.: ¬A knowledge network constructed by integrating classification, thesaurus and metadata in a digital library (2003) 0.01
    Abstract
     Knowledge management in digital libraries is a universal problem. Keyword-based searching is applied everywhere no matter whether the resources are indexed databases or full-text Web pages. In keyword matching, the valuable content description and indexing of the metadata, such as the subject descriptors and the classification notations, are merely treated as common keywords to be matched with the user query. Without the support of vocabulary control tools, such as classification systems and thesauri, the intelligent labor of content analysis, description and indexing in metadata production is seriously wasted. New retrieval paradigms are needed to exploit the potential of the metadata resources. Could classification and thesauri, which contain the condensed intelligence of generations of librarians, be used in a digital library to organize the networked information, especially metadata, to facilitate their usability and change the digital library into a knowledge management environment? To examine that question, we designed and implemented a new paradigm that incorporates a classification system, a thesaurus and metadata. The classification and the thesaurus are merged into a concept network, and the metadata are distributed into the nodes of the concept network according to their subjects. The abstract concept node instantiated with the related metadata records becomes a knowledge node. A coherent and consistent knowledge network is thus formed. It is not only a framework for resource organization but also a structure for knowledge navigation, retrieval and learning. We have built an experimental system based on the Chinese Classification and Thesaurus, which is the most comprehensive and authoritative in China, and we have incorporated more than 5000 bibliographic records in the computing domain from the Peking University Library. The result is encouraging. In this article, we review the tools, the architecture and the implementation of our experimental system, which is called Vision.
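     The concept-network idea the abstract describes can be sketched as a small data structure: classification headings and thesaurus terms become concept nodes, and metadata records attached to a node turn it into a knowledge node. All class and concept names below are illustrative, not taken from the Vision system:

```python
from collections import defaultdict

class ConceptNetwork:
    def __init__(self):
        self.edges = defaultdict(set)     # concept -> related concepts
        self.records = defaultdict(list)  # concept -> attached metadata records

    def relate(self, a, b):
        """Merge classification/thesaurus relations into one undirected graph."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def attach(self, concept, record):
        """Distribute a metadata record to the node of its subject."""
        self.records[concept].append(record)

    def knowledge_node(self, concept):
        """An abstract concept node instantiated with its related metadata."""
        return {"concept": concept,
                "related": sorted(self.edges[concept]),
                "records": self.records[concept]}

net = ConceptNetwork()
net.relate("information retrieval", "indexing")
net.relate("information retrieval", "thesauri")
net.attach("information retrieval",
           {"title": "A deductive data model for query expansion"})
node = net.knowledge_node("information retrieval")
print(node["related"])   # ['indexing', 'thesauri']
```

Navigation and retrieval then both become graph operations: follow `edges` to browse, and read `records` at each visited node.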
  13. Schek, M.: Automatische Klassifizierung in Erschließung und Recherche eines Pressearchivs (2006) 0.01
    Abstract
     Since its founding in 1945, the Süddeutsche Zeitung (SZ) has maintained a press archive that documents the texts of its own editors and of numerous national and international publications and makes them available for research. The DIZ press database (www.medienport.de) enables browser-based searching for editors and external customers on the intranet and Internet, as well as customer-specific content feeds for publishers, broadcasters and portals. The DIZ press database currently contains 7.8 million articles, each retrievable as HTML or PDF. About 3,500 articles are added daily, of which about 1,000 are intellectually indexed by documentalists. At DIZ, subject indexing is done not by assigning descriptors to the document but by linking articles to "virtual folders", the dossiers. In total, the DIZ press database contains about 90,000 dossiers, which are interlinked to form the "DIZ knowledge network". DIZ regards the knowledge network as its unique selling point and devotes considerable staff resources to updating the dossiers and assuring their quality. In the course of the media crisis, DIZ faced the challenge of maintaining the quality of subject indexing on the input side despite shrinking editorial capacity. On the output side, a demanding target group, among them the editors of the Süddeutsche Zeitung itself, must be supplied precisely and promptly with the information they need for their daily work. Starting from this situation in the documentation department of the Süddeutsche Zeitung, DIZ identified three approaches to optimizing the effort on the input side (editorial indexing) while marketing the knowledge network more effectively on the output side: (semi-)automatic classification of press texts (a suggestion system), visualization of the knowledge network, and new retrieval options (similarity search, clustering). For visualization, DIZ relies on the Net-Navigator from intelligent views, an interactive visualization of general graphs based on a physical model. For automatic classification, similarity search and clustering, DIZ chose the product nextBot from Brainbot.
  14. Rekabsaz, N. et al.: Toward optimized multimodal concept indexing (2016) 0.01
    Date
    1. 2.2016 18:25:22
  15. Kozikowski, P. et al.: Support of part-whole relations in query answering (2016) 0.01
    Date
    1. 2.2016 18:25:22
  16. Marx, E. et al.: Exploring term networks for semantic search over RDF knowledge graphs (2016) 0.01
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  17. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.01
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
  18. Sacco, G.M.: Dynamic taxonomies and guided searches (2006) 0.01
    Date
    22. 7.2006 17:56:22
  19. Gauch, S.; Chong, M.K.: Automatic word similarity detection for TREC 4 query expansion (1996) 0.01
    Source
     The Fourth Text Retrieval Conference (TREC-4). Ed.: D.K. Harman
  20. Efthimiadis, E.N.: End-users' understanding of thesaural knowledge structures in interactive query expansion (1994) 0.01
    Date
    30. 3.2001 13:35:22

Languages

  • English 48
  • German 8