Search (71 results, page 1 of 4)

  • theme_ss:"Automatisches Indexieren"
  1. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.09
    0.08891679 = product of:
      0.2134003 = sum of:
        0.050840456 = weight(_text_:web in 2673) [ClassicSimilarity], result of:
          0.050840456 = score(doc=2673,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.040716566 = weight(_text_:world in 2673) [ClassicSimilarity], result of:
          0.040716566 = score(doc=2673,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 2673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.05410469 = weight(_text_:wide in 2673) [ClassicSimilarity], result of:
          0.05410469 = score(doc=2673,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 2673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.050840456 = weight(_text_:web in 2673) [ClassicSimilarity], result of:
          0.050840456 = score(doc=2673,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.43716836 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.016898135 = product of:
          0.03379627 = sum of:
            0.03379627 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.03379627 = score(doc=2673,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.41666666 = coord(5/12)
    
    Abstract
    Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW.
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
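The relevance figures shown with each hit are Lucene ClassicSimilarity (TF-IDF) explanations. As a rough check, a minimal sketch assuming Lucene's classic formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, plus the coord factors reported above) reproduces the numbers of the first result:

```python
import math

def classic_tf(freq: float) -> float:
    # ClassicSimilarity term frequency: square root of the raw count
    return math.sqrt(freq)

def classic_idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
    idf = classic_idf(doc_freq, max_docs)
    field_weight = classic_tf(freq) * idf * field_norm   # fieldWeight
    query_weight = idf * query_norm                      # queryWeight
    return query_weight * field_weight

# Values taken from the first explanation tree above (doc 2673)
query_norm, field_norm, max_docs = 0.035634913, 0.0546875, 44218
web    = term_score(6.0, 4597, max_docs, field_norm, query_norm)        # ~0.0508
world  = term_score(2.0, 2573, max_docs, field_norm, query_norm)        # ~0.0407
wide   = term_score(2.0, 1430, max_docs, field_norm, query_norm)        # ~0.0541
nested = 0.5 * term_score(2.0, 3622, max_docs, field_norm, query_norm)  # coord(1/2), ~0.0169

score = (web + world + wide + web + nested) * 5 / 12  # outer coord(5/12)
print(round(score, 8))  # ~0.0889168, i.e. the displayed 0.09
```

The same arithmetic applies to the other explanation trees below, with their respective freq, docFreq, fieldNorm, and coord values.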
  2. Koch, T.: Experiments with automatic classification of WAIS databases and indexing of WWW : some results from the Nordic WAIS/WWW project (1994) 0.07
    0.06858142 = product of:
      0.20574425 = sum of:
        0.02935275 = weight(_text_:web in 7209) [ClassicSimilarity], result of:
          0.02935275 = score(doc=7209,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 7209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
        0.070523165 = weight(_text_:world in 7209) [ClassicSimilarity], result of:
          0.070523165 = score(doc=7209,freq=6.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.5148846 = fieldWeight in 7209, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
        0.076515585 = weight(_text_:wide in 7209) [ClassicSimilarity], result of:
          0.076515585 = score(doc=7209,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.4846142 = fieldWeight in 7209, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
        0.02935275 = weight(_text_:web in 7209) [ClassicSimilarity], result of:
          0.02935275 = score(doc=7209,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 7209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
      0.33333334 = coord(4/12)
    
    Abstract
    The Nordic WAIS/WWW project sponsored by NORDINFO is a joint project between Lund University Library and the National Technological Library of Denmark. It aims to improve the existing networked information discovery and retrieval tools Wide Area Information System (WAIS) and World Wide Web (WWW), and to move towards unifying WWW and WAIS. Details current results focusing on the WAIS side of the project. Describes research into automatic indexing and classification of WAIS sources, development of an orientation tool for WAIS, and development of a WAIS index of WWW resources
    Source
    Internet world and document delivery world international 94: Proceedings of the 2nd Annual Conference, London, May 1994
  3. Rasmussen, E.M.: Indexing and retrieval for the Web (2002) 0.06
    0.0642412 = product of:
      0.19272359 = sum of:
        0.054913964 = weight(_text_:web in 4285) [ClassicSimilarity], result of:
          0.054913964 = score(doc=4285,freq=28.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.47219574 = fieldWeight in 4285, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
        0.028790962 = weight(_text_:world in 4285) [ClassicSimilarity], result of:
          0.028790962 = score(doc=4285,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.21020076 = fieldWeight in 4285, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
        0.05410469 = weight(_text_:wide in 4285) [ClassicSimilarity], result of:
          0.05410469 = score(doc=4285,freq=8.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 4285, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
        0.054913964 = weight(_text_:web in 4285) [ClassicSimilarity], result of:
          0.054913964 = score(doc=4285,freq=28.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.47219574 = fieldWeight in 4285, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
      0.33333334 = coord(4/12)
    
    Abstract
    The introduction and growth of the World Wide Web (WWW, or Web) have resulted in a profound change in the way individuals and organizations access information. In terms of volume, nature, and accessibility, the characteristics of electronic information are significantly different from those of even five or six years ago. Control of, and access to, this flood of information rely heavily on automated techniques for indexing and retrieval. According to Gudivada, Raghavan, Grosky, and Kasanagottu (1997, p. 58), "The ability to search and retrieve information from the Web efficiently and effectively is an enabling technology for realizing its full potential." Almost 93 percent of those surveyed consider the Web an "indispensable" Internet technology, second only to e-mail (Graphics, Visualization & Usability Center, 1998). Although there are other ways of locating information on the Web (browsing or following directory structures), 85 percent of users identify Web pages by means of a search engine (Graphics, Visualization & Usability Center, 1998). A more recent study conducted by the Stanford Institute for the Quantitative Study of Society confirms the finding that searching for information is second only to e-mail as an Internet activity (Nie & Erbring, 2000, online). In fact, Nie and Erbring conclude, "... the Internet today is a giant public library with a decidedly commercial tilt. The most widespread use of the Internet today is as an information search utility for products, travel, hobbies, and general information. Virtually all users interviewed responded that they engaged in one or more of these information gathering activities."
    Techniques for automated indexing and information retrieval (IR) have been developed, tested, and refined over the past 40 years, and are well documented (see, for example, Agosti & Smeaton, 1996; Baeza-Yates & Ribeiro-Neto, 1999a; Frakes & Baeza-Yates, 1992; Korfhage, 1997; Salton, 1989; Witten, Moffat, & Bell, 1999). With the introduction of the Web, and the capability to index and retrieve via search engines, these techniques have been extended to a new environment. They have been adopted, altered, and in some cases extended to include new methods. "In short, search engines are indispensable for searching the Web, they employ a variety of relatively advanced IR techniques, and there are some peculiar aspects of search engines that make searching the Web different than more conventional information retrieval" (Gordon & Pathak, 1999, p. 145). The environment for information retrieval on the World Wide Web differs from that of "conventional" information retrieval in a number of fundamental ways. The collection is very large and changes continuously, with pages being added, deleted, and altered. Wide variability between the size, structure, focus, quality, and usefulness of documents makes Web documents much more heterogeneous than a typical electronic document collection. The wide variety of document types includes images, video, audio, and scripts, as well as many different document languages. Duplication of documents and sites is common. Documents are interconnected through networks of hyperlinks. Because of the size and dynamic nature of the Web, preprocessing all documents requires considerable resources and is often not feasible, certainly not on the frequent basis required to ensure currency. Query length is usually much shorter than in other environments (only a few words), and user behavior differs from that in other environments. These differences make the Web a novel environment for information retrieval (Baeza-Yates & Ribeiro-Neto, 1999b; Bharat & Henzinger, 1998; Huang, 2000).
  4. Fauzi, F.; Belkhatir, M.: Multifaceted conceptual image indexing on the world wide web (2013) 0.06
    0.056143478 = product of:
      0.16843043 = sum of:
        0.043577533 = weight(_text_:web in 2721) [ClassicSimilarity], result of:
          0.043577533 = score(doc=2721,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 2721, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
        0.034899916 = weight(_text_:world in 2721) [ClassicSimilarity], result of:
          0.034899916 = score(doc=2721,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.25480178 = fieldWeight in 2721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
        0.046375446 = weight(_text_:wide in 2721) [ClassicSimilarity], result of:
          0.046375446 = score(doc=2721,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.29372054 = fieldWeight in 2721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
        0.043577533 = weight(_text_:web in 2721) [ClassicSimilarity], result of:
          0.043577533 = score(doc=2721,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.37471575 = fieldWeight in 2721, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
      0.33333334 = coord(4/12)
    
    Abstract
    In this paper, we describe a user-centered design of an automated multifaceted concept-based indexing framework which analyzes the semantics of the Web image contextual information and classifies it into five broad semantic concept facets: signal, object, abstract, scene, and relational; and identifies the semantic relationships between the concepts. An important aspect of our indexing model is that it relates to the users' levels of image descriptions. Also, a major contribution lies in the fact that the classification is performed automatically with the raw image contextual information extracted from any general webpage and is not solely based on image tags like state-of-the-art solutions. Human Language Technology techniques and an external knowledge base are used to analyze the information both syntactically and semantically. Experimental results on a human-annotated Web image collection and corresponding contextual information indicate that our method outperforms empirical frameworks employing tf-idf and location-based tf-idf weighting schemes as well as n-gram indexing in a recall/precision based evaluation framework.
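For reference, the tf-idf weighting named as a baseline in the abstract above can be sketched minimally. This assumes scikit-learn and two invented contextual snippets; it does not reproduce the paper's facet classification or location-based weighting:

```python
# Minimal tf-idf baseline over image contextual text (invented snippets;
# the paper's facet classification and location-based weighting are not
# reproduced here).
from sklearn.feature_extraction.text import TfidfVectorizer

contexts = [
    "Sunset over the harbour, photo taken from the old lighthouse",
    "Diagram of the harbour crane, technical drawing and caption",
]
vectorizer = TfidfVectorizer(stop_words="english")
weights = vectorizer.fit_transform(contexts)

terms = vectorizer.get_feature_names_out()
for row in weights.toarray():
    top = sorted(zip(terms, row), key=lambda pair: -pair[1])[:3]
    print(top)  # highest-weighted index terms per image context
```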
  5. Krüger, C.: Evaluation des WWW-Suchdienstes GERHARD unter besonderer Beachtung automatischer Indexierung (1999) 0.05
    0.05169515 = product of:
      0.15508544 = sum of:
        0.029650755 = weight(_text_:web in 1777) [ClassicSimilarity], result of:
          0.029650755 = score(doc=1777,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.041129943 = weight(_text_:world in 1777) [ClassicSimilarity], result of:
          0.041129943 = score(doc=1777,freq=4.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.30028677 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.054653995 = weight(_text_:wide in 1777) [ClassicSimilarity], result of:
          0.054653995 = score(doc=1777,freq=4.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.34615302 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.029650755 = weight(_text_:web in 1777) [ClassicSimilarity], result of:
          0.029650755 = score(doc=1777,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25496176 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
      0.33333334 = coord(4/12)
    
    Abstract
    This thesis describes and evaluates the WWW search service GERHARD (German Harvest Automated Retrieval and Directory). GERHARD is a search and navigation system for the German World Wide Web which collects exclusively documents of scholarly relevance and classifies them automatically, on the basis of computational-linguistic and statistical methods, using a library classification system. With the DFG project GERHARD, an attempt was made to develop an alternative to conventional methods of Internet resource description by means of a World Wide Web service based on an automatic classification procedure. GERHARD is the only directory of Internet resources in the German-speaking area whose creation and updating are carried out fully automatically (i.e. by machine). GERHARD restricts itself to recording documents on scholarly WWW servers. The underlying idea was to replace cost-intensive intellectual description and classification of Internet pages with computational-linguistic and statistical methods, in order to map the recorded Internet resources automatically onto the vocabulary of a library classification system. GERHARD stands for German Harvest Automated Retrieval and Directory. The WWW address (URL) of GERHARD is: http://www.gerhard.de. This diploma thesis describes the service, with particular emphasis on the underlying indexing and classification system, and then uses a small retrieval test to assess GERHARD's effectiveness.
  6. Daudaravicius, V.: ¬A framework for keyphrase extraction from scientific journals (2016) 0.05
    0.051175587 = product of:
      0.15352675 = sum of:
        0.02935275 = weight(_text_:web in 2930) [ClassicSimilarity], result of:
          0.02935275 = score(doc=2930,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.040716566 = weight(_text_:world in 2930) [ClassicSimilarity], result of:
          0.040716566 = score(doc=2930,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.05410469 = weight(_text_:wide in 2930) [ClassicSimilarity], result of:
          0.05410469 = score(doc=2930,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.02935275 = weight(_text_:web in 2930) [ClassicSimilarity], result of:
          0.02935275 = score(doc=2930,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.33333334 = coord(4/12)
    
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
  7. Gábor, K.; Zargayouna, H.; Tellier, I.; Buscaldi, D.; Charnois, T.: ¬A typology of semantic relations dedicated to scientific literature analysis (2016) 0.05
    0.051175587 = product of:
      0.15352675 = sum of:
        0.02935275 = weight(_text_:web in 2933) [ClassicSimilarity], result of:
          0.02935275 = score(doc=2933,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.040716566 = weight(_text_:world in 2933) [ClassicSimilarity], result of:
          0.040716566 = score(doc=2933,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.05410469 = weight(_text_:wide in 2933) [ClassicSimilarity], result of:
          0.05410469 = score(doc=2933,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.342674 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.02935275 = weight(_text_:web in 2933) [ClassicSimilarity], result of:
          0.02935275 = score(doc=2933,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
      0.33333334 = coord(4/12)
    
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
  8. Shafer, K.: Scorpion Project explores using Dewey to organize the Web (1996) 0.03
    0.030934671 = product of:
      0.123738684 = sum of:
        0.04151106 = weight(_text_:web in 6750) [ClassicSimilarity], result of:
          0.04151106 = score(doc=6750,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35694647 = fieldWeight in 6750, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6750)
        0.040716566 = weight(_text_:world in 6750) [ClassicSimilarity], result of:
          0.040716566 = score(doc=6750,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.29726875 = fieldWeight in 6750, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6750)
        0.04151106 = weight(_text_:web in 6750) [ClassicSimilarity], result of:
          0.04151106 = score(doc=6750,freq=4.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.35694647 = fieldWeight in 6750, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6750)
      0.25 = coord(3/12)
    
    Abstract
    As the amount of accessible information on the WWW increases, so will the cost of accessing it, even if search services remain free, due to the increasing amount of time users will have to spend to find needed items. Considers what the seemingly unorganized Web and the organized world of libraries can offer each other. The OCLC Scorpion Project is attempting to combine indexing and cataloguing, specifically focusing on building tools for automatic subject recognition using the techniques of library science and information retrieval. If subject headings or concept domains can be automatically assigned to electronic items, improved filtering tools for searching can be produced.
  9. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.03
    0.029243192 = product of:
      0.08772957 = sum of:
        0.016773 = weight(_text_:web in 2596) [ClassicSimilarity], result of:
          0.016773 = score(doc=2596,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.02326661 = weight(_text_:world in 2596) [ClassicSimilarity], result of:
          0.02326661 = score(doc=2596,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.030916965 = weight(_text_:wide in 2596) [ClassicSimilarity], result of:
          0.030916965 = score(doc=2596,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.1958137 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.016773 = weight(_text_:web in 2596) [ClassicSimilarity], result of:
          0.016773 = score(doc=2596,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
      0.33333334 = coord(4/12)
    
    Content
    Ramana Rao (Inxight, Palo Alto, CA): 7 ± 2 Insights on achieving Effective Information Access
    Session One: Updates and a twelve month perspective
    Danny Sullivan (Search Engine Watch, US / England): Portalization and other search trends
    Carol Tenopir (University of Tennessee): Search realities faced by end users and professional searchers
    Session Two: Today's search engines and beyond
    Daniel Hoogterp (Retrieval Technologies, McLean, VA): Effective presentation and utilization of search techniques
    Rick Kenny (Fulcrum Technologies, Ontario, Canada): Beyond document clustering: The knowledge impact statement
    Gary Stock (Ingenius, Kalamazoo, MI): Automated change monitoring
    Gary Culliss (Direct Hit, Wellesley Hills, MA): User popularity ranked search engines
    Byron Dom (IBM, CA): Automatically finding the best pages on the World Wide Web (CLEVER)
    Peter Tomassi (LookSmart, San Francisco, CA): Adding human intellect to search technology
    Session Three: Panel discussion: Human v automated categorization and editing - Ev Brenner (New York, NY), Chairman; James Callan (University of Massachusetts, MA); Marc Krellenstein (Northern Light Technology, Cambridge, MA); Dan Miller (Ask Jeeves, Berkeley, CA)
    Session Four: Updates and a twelve month perspective
    Steve Arnold (AIT, Harrods Creek, KY): Review: The leading edge in search and retrieval software
    Ellen Voorhees (NIST, Gaithersburg, MD): TREC update
    Session Five: Search engines now and beyond
    Intelligent agents - John Snyder (Muscat, Cambridge, England): Practical issues behind intelligent agents
    Text summarization - Therese Firmin (Dept of Defense, Ft George G. Meade, MD): The TIPSTER/SUMMAC evaluation of automatic text summarization systems
    Cross language searching - Elizabeth Liddy (TextWise, Syracuse, NY): A conceptual interlingua approach to cross-language retrieval
    Video search and retrieval - Armon Amir (IBM, Almaden, CA): CueVideo: Modular system for automatic indexing and browsing of video/audio
    Speech recognition - Michael Witbrock (Lycos, Waltham, MA): Retrieval of spoken documents
    Visualization - James A. Wise (Integral Visuals, Richland, WA): Information visualization in the new millennium: Emerging science or passing fashion?
    Text mining - David Evans (Claritech, Pittsburgh, PA): Text mining - towards decision support
  10. Groß, T.; Faden, M.: Automatische Indexierung elektronischer Dokumente an der Deutschen Zentralbibliothek für Wirtschaftswissenschaften : Bericht über die Jahrestagung der Internationalen Buchwissenschaftlichen Gesellschaft (2010) 0.03
    0.029243192 = product of:
      0.08772957 = sum of:
        0.016773 = weight(_text_:web in 4051) [ClassicSimilarity], result of:
          0.016773 = score(doc=4051,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.02326661 = weight(_text_:world in 4051) [ClassicSimilarity], result of:
          0.02326661 = score(doc=4051,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.16986786 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.030916965 = weight(_text_:wide in 4051) [ClassicSimilarity], result of:
          0.030916965 = score(doc=4051,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.1958137 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.016773 = weight(_text_:web in 4051) [ClassicSimilarity], result of:
          0.016773 = score(doc=4051,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
      0.33333334 = coord(4/12)
    
    Abstract
    The increasing availability of digital information in recent years, together with the prospect of a further rise in the so-called data flood, culminates in a fundamental and steadily worsening information-structuring problem. The constant growth of digital information resources on the World Wide Web guarantees access to a wide range of information at any time and from any place; what remains open is structured access, particularly to scholarly resources. Given the rising number of electronic documents, and against the background of stagnating or shrinking staff resources for subject indexing, no library or library network can any longer manage, now or in the future, to record all digital data, to structure it, and to relate it to one another. In the information society of the 21st century, however, it is becoming ever more important to structure the scholarly information that has disappeared in this flood promptly, appropriately, and completely, and thus to make it usable again as a basis for generating knowledge. Standardized subject indexing of digital information resources is therefore a decisive and success-critical aspect for the Deutsche Zentralbibliothek für Wirtschaftswissenschaften (ZBW), as an important information infrastructure institution in this field, in its competition with other information service providers. Because traditional intellectual subject indexing does not scale arbitrarily (as the number of online documents rises, the staffing requirement for subject specialists rises proportionally if a certain quality standard is to be maintained), other indexing methods will be needed in the future. Automated subject-indexing methods are regarded as the only way to make library subject indexing future-proof in the digital age as well. In addition, machine-based approaches can help to level out the heterogeneities (indexing inconsistencies) between individual indexers and thus contribute to a more homogeneous indexing of the library's holdings.
  11. Carevic, Z.: Semi-automatische Verschlagwortung zur Integration externer semantischer Inhalte innerhalb einer medizinischen Kooperationsplattform (2012) 0.02
    0.02210968 = product of:
      0.08843872 = sum of:
        0.054892723 = weight(_text_:tagging in 897) [ClassicSimilarity], result of:
          0.054892723 = score(doc=897,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.2609168 = fieldWeight in 897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.03125 = fieldNorm(doc=897)
        0.016773 = weight(_text_:web in 897) [ClassicSimilarity], result of:
          0.016773 = score(doc=897,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=897)
        0.016773 = weight(_text_:web in 897) [ClassicSimilarity], result of:
          0.016773 = score(doc=897,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.14422815 = fieldWeight in 897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=897)
      0.25 = coord(3/12)
    
    Abstract
    This thesis deals with the integration of external semantic content on the basis of a medical terminology system. The underlying assumption is that using a uniform terminology on both the query-system side and the knowledge-base side leads to high-quality results. To achieve this, the query system must provide a mapping from natural language onto the terminology used. This is done by (semi-)automatic keyword assignment to text-based content. The main research questions are the following: Automatic keyword assignment to text-based content - can automatic keyword assignment to text-based content be optimized on the basis of a terminology system? The central aspect of this thesis is the (semi-)automatic keyword assignment to text-based content on the basis of a medical terminology system. To this end, the current state of research is reviewed. A number of tokenizers are compared in order to learn which algorithms are suitable for determining word boundaries, and in particular how word-boundary detection can be applied in a domain-specific environment. Based on the tokens identified in a text, the effects of stemming and POS tagging on the total amount of content to be analyzed are observed. Finally, it is evaluated how a controlled vocabulary can increase precision in keyword assignment, under the assumption that domain-specific content is also defined within a domain-specific terminology system. For this purpose, a general process model is developed on the basis of which keyword assignment is carried out (see the sketch after this abstract).
    Integration of external content - to what extent can the use of a uniform terminology between query system and knowledge base support the process of information gathering? In a first phase, it is determined which knowledge bases from the medical domain are available in the Linked Data Cloud. Building on these results, information from several decentralized knowledge bases is integrated by way of example. The focus here is on the terminology used and on the use of Semantic Web technologies. In addition to information from the Linked Data Cloud, a search for medical literature is carried out in PubMed. As with the Linked Data Cloud, the integration is performed using a uniform terminology. A further question is how information from a total of 21 million article citations in PubMed can be integrated in a meaningful way, and which mechanisms can be used to optimize the precision of the results. Suitability of medical terminology systems - which medical terminology systems exist, and how suitable are they as the underlying vocabulary for automatic keyword assignment and the integration of semantic content? The focus here is specifically on assessing the richness of terminology systems, with the level of detail being of particular interest: is a given system specific or general, and is it also suited to describing particular subfields of medicine, such as surgery or anesthesia, in sufficient depth?
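The keyword-assignment step outlined above (tokenize, stem, then match against a controlled vocabulary) can be illustrated by a minimal, purely illustrative sketch. The tiny vocabulary and NLTK's German Snowball stemmer are assumptions standing in for the thesis's actual terminology system and tokenizing components:

```python
# Illustrative only: map free text onto a small controlled vocabulary via
# stems. The vocabulary below is an invented toy example, not the medical
# terminology system used in the thesis.
from nltk.stem.snowball import GermanStemmer

stemmer = GermanStemmer()
controlled_vocabulary = {
    "anasthesi": "Anästhesie",
    "chirurg": "Chirurgie",
    "tomographi": "Tomographie",
}

def assign_keywords(text: str) -> set[str]:
    # Tokenize on whitespace, stem each token, keep stems found in the vocabulary
    stems = (stemmer.stem(token) for token in text.lower().split())
    return {controlled_vocabulary[s] for s in stems if s in controlled_vocabulary}

print(assign_keywords("Bericht über eine chirurgische Anästhesie"))
# -> {'Chirurgie', 'Anästhesie'} (set order may vary)
```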
  12. Alexander, M.: Retrieving digital data with fuzzy matching (1996) 0.02
    0.01806119 = product of:
      0.108367145 = sum of:
        0.04653322 = weight(_text_:world in 6961) [ClassicSimilarity], result of:
          0.04653322 = score(doc=6961,freq=2.0), product of:
            0.13696888 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.035634913 = queryNorm
            0.33973572 = fieldWeight in 6961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0625 = fieldNorm(doc=6961)
        0.06183393 = weight(_text_:wide in 6961) [ClassicSimilarity], result of:
          0.06183393 = score(doc=6961,freq=2.0), product of:
            0.1578897 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.035634913 = queryNorm
            0.3916274 = fieldWeight in 6961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=6961)
      0.16666667 = coord(2/12)
    
    Abstract
    Briefly describes the Excalibur EFS system which makes use of adaptive pattern recognition technology as an aid to automatic indexing and how it is being tested at the British Library for the indexing and retrieval of scanned images from the library's holdings. Notes how Excalibur EFS can support a wide degree of fuzzy searching, compensate for the errors produced by OCR conversion of scanned images, reduce the costs of indexing, and require far less storage space than more traditional indexes
    Source
    New library world. 97(1996) no.1131, S.28-31
  13. Golub, K.; Lykke, M.; Tudhope, D.: Enhancing social tagging with automated keywords from the Dewey Decimal Classification (2014) 0.02
    0.015128385 = product of:
      0.18154062 = sum of:
        0.18154062 = weight(_text_:tagging in 2918) [ClassicSimilarity], result of:
          0.18154062 = score(doc=2918,freq=14.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.8629013 = fieldWeight in 2918, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2918)
      0.083333336 = coord(1/12)
    
    Abstract
    Purpose - The purpose of this paper is to explore the potential of applying the Dewey Decimal Classification (DDC) as an established knowledge organization system (KOS) for enhancing social tagging, with the ultimate purpose of improving subject indexing and information retrieval. Design/methodology/approach - Over 11,000 Intute metadata records in politics were used. A total of 28 politics students were each given four tasks, in which a total of 60 resources were tagged in two different configurations, one with uncontrolled social tags only and another with uncontrolled social tags as well as suggestions from a controlled vocabulary. The controlled vocabulary was the DDC, which also comprised mappings from the Library of Congress Subject Headings. Findings - The results demonstrate the importance of controlled vocabulary suggestions for indexing and retrieval: to help produce ideas of which tags to use, to make it easier to find focus for the tagging, to ensure consistency and to increase the number of access points in retrieval. The value and usefulness of the suggestions proved to be dependent on the quality of the suggestions, both as to conceptual relevance to the user and as to appropriateness of the terminology. Originality/value - No research has investigated the enhancement of social tagging with suggestions from the DDC, an established KOS, in a user trial, comparing social tagging only and social tagging enhanced with the suggestions. This paper is a final reflection on all aspects of the study.
    Theme
    Social tagging
  14. McKiernan, G.: Automated categorisation of Web resources : a profile of selected projects, research, products, and services (1996) 0.01
    0.013977501 = product of:
      0.083865 = sum of:
        0.0419325 = weight(_text_:web in 2533) [ClassicSimilarity], result of:
          0.0419325 = score(doc=2533,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.36057037 = fieldWeight in 2533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2533)
        0.0419325 = weight(_text_:web in 2533) [ClassicSimilarity], result of:
          0.0419325 = score(doc=2533,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.36057037 = fieldWeight in 2533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2533)
      0.16666667 = coord(2/12)
    
  15. Hauer, M.: Silicon Valley Vorarlberg : Maschinelle Indexierung und semantisches Retrieval verbessert den Katalog der Vorarlberger Landesbibliothek (2004) 0.01
    0.012104871 = product of:
      0.07262922 = sum of:
        0.03631461 = weight(_text_:web in 2489) [ClassicSimilarity], result of:
          0.03631461 = score(doc=2489,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3122631 = fieldWeight in 2489, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2489)
        0.03631461 = weight(_text_:web in 2489) [ClassicSimilarity], result of:
          0.03631461 = score(doc=2489,freq=6.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.3122631 = fieldWeight in 2489, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2489)
      0.16666667 = coord(2/12)
    
    Abstract
    Ten years of the Internet have greatly changed the world around libraries. The web OPAC was one answer from the libraries, but is a web OPAC still sufficient in the age of the Internet? Apart from the web front end, it is still the old catalog. About 90% of all library searches by users are subject searches, and a share of these searches returns no result. It is easy to measure that zero media were found, and the reasons for this have been examined again and again: plural instead of singular forms, overly specific search terms, spelling or handling errors. Too little examined, however, are the searches that do not end in a loan, because in many of these cases a retrieval failure can also be assumed. Finally, of the books that are borrowed, many librarians estimate that 80% are read no further than the table of contents (except in reference libraries) and are returned only after weeks. A politician would call this, in modern parlance, "a problem of communication"; a controller would call it insufficient use of capital. More and more students and researchers take the easier path: their exchange of knowledge increasingly takes place elsewhere. Libraries (as a function) are indispensable for scholarly communication. The task is therefore to find, and also to follow, paths that bring the treasures of libraries (as institutions) to their target group more efficiently. The use of information retrieval technology, new indexing methods, and new content are approaches to this. Yet the existing consortial structures and dependencies have by no means fostered the innovative project presented here. As innovation research shows, innovation in fact always arises at the periphery: it began in Bregenz.
  16. Hauer, M.: Neue Qualitäten in Bibliotheken : Durch Content-Ergänzung, maschinelle Indexierung und modernes Information Retrieval können Recherchen in Bibliothekskatalogen deutlich verbessert werden (2004) 0.01
    0.011182001 = product of:
      0.067092 = sum of:
        0.033546 = weight(_text_:web in 886) [ClassicSimilarity], result of:
          0.033546 = score(doc=886,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.2884563 = fieldWeight in 886, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=886)
        0.033546 = weight(_text_:web in 886) [ClassicSimilarity], result of:
          0.033546 = score(doc=886,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.2884563 = fieldWeight in 886, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=886)
      0.16666667 = coord(2/12)
    
    Abstract
    Dandelon.com has been in operation since spring 2004 as a new, open, international science portal. First retrieval tests attest to considerably better search results than in conventional OPACs or union catalog systems. Its data come from intelligentCAPTURE and library catalogs. intelligentCAPTURE captures content via scanning, file import, or web spidering and indexes it using morphosyntactic and semantic methods. The processed content and the index entries go to library systems and to dandelon.com. Dandelon.com is freely accessible to end users and at the same time serves as an exchange hub and catalog extension for the participating libraries. New content can thus be indexed cost-effectively and with good performance.
  17. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: ¬The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.01
    0.010758134 = product of:
      0.0645488 = sum of:
        0.054892723 = weight(_text_:tagging in 1441) [ClassicSimilarity], result of:
          0.054892723 = score(doc=1441,freq=2.0), product of:
            0.21038401 = queryWeight, product of:
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.035634913 = queryNorm
            0.2609168 = fieldWeight in 1441, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.9038734 = idf(docFreq=327, maxDocs=44218)
              0.03125 = fieldNorm(doc=1441)
        0.009656077 = product of:
          0.019312155 = sum of:
            0.019312155 = weight(_text_:22 in 1441) [ClassicSimilarity], result of:
              0.019312155 = score(doc=1441,freq=2.0), product of:
                0.12478739 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.035634913 = queryNorm
                0.15476047 = fieldWeight in 1441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1441)
          0.5 = coord(1/2)
      0.16666667 = coord(2/12)
    
    Abstract
    This paper presents research on syntactic structures known as noun phrases (NPs) applied to increase the effectiveness and efficiency of document classification mechanisms. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of words the classification system has to handle without losing semantic coverage and thereby increasing its efficiency. The experiment divided the document classification process into three phases: a) NP preprocessing, b) system training, and c) classification experiments. In the first step, a corpus of digitized texts was submitted to a natural language processing platform in which part-of-speech tagging was done, and then Perl scripts from the PALAVRAS package were used to extract the noun phrases. The preprocessing also involved the tasks of a) removing NP pre-modifiers of low meaning, such as quantifiers; b) identifying synonyms and substituting them with common hypernyms; and c) stemming the relevant words contained in the NPs, for similarity checking with other NPs. The first tests with the resulting documents have demonstrated the approach's effectiveness. We compared the structural similarity of the documents before and after the pre-processing steps of phase one: the texts remained consistent with the originals and kept their readability. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents; the classification rules are to be established using a machine learning approach. Finally, tests will be conducted to check the effectiveness of the whole process.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
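The three-phase pipeline outlined in the abstract above (noun-phrase extraction as preprocessing, then training a support-vector classifier on the reduced representation) can be illustrated by a minimal sketch. It assumes spaCy noun-phrase chunking and scikit-learn's LinearSVC as stand-ins for the PALAVRAS/Perl tooling and the authors' own setup, with invented toy documents:

```python
# Minimal sketch of NP-based classification: spaCy noun_chunks stand in for
# the PALAVRAS noun-phrase extraction, scikit-learn for the SVM step; the
# documents and labels are invented toy examples.
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

nlp = spacy.load("en_core_web_sm")

def noun_phrases(text: str) -> str:
    # Phase (a): keep only the noun phrases as semantic aggregators.
    return " ".join(chunk.lemma_.lower() for chunk in nlp(text).noun_chunks)

docs = [
    "Automatic indexing assigns subject headings to electronic documents.",
    "The library catalogue records books, journals and conference proceedings.",
]
labels = ["indexing", "cataloguing"]

# Phases (b) and (c): vectorize the reduced texts and train the classifier.
model = Pipeline([("tfidf", TfidfVectorizer()), ("svm", LinearSVC())])
model.fit([noun_phrases(d) for d in docs], labels)
print(model.predict([noun_phrases("Subject headings for new electronic documents")]))
```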
  18. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2012) 0.01
    0.0097842505 = product of:
      0.0587055 = sum of:
        0.02935275 = weight(_text_:web in 1717) [ClassicSimilarity], result of:
          0.02935275 = score(doc=1717,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
        0.02935275 = weight(_text_:web in 1717) [ClassicSimilarity], result of:
          0.02935275 = score(doc=1717,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 1717, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1717)
      0.16666667 = coord(2/12)
    
    Content
    Contribution to the conference "Beyond libraries - subject metadata in the digital environment and semantic web", IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn. See: http://www.nlib.ee/index.php?id=17763.
  19. Junger, U.: Can indexing be automated? : the example of the Deutsche Nationalbibliothek (2014) 0.01
    0.0097842505 = product of:
      0.0587055 = sum of:
        0.02935275 = weight(_text_:web in 1969) [ClassicSimilarity], result of:
          0.02935275 = score(doc=1969,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 1969, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1969)
        0.02935275 = weight(_text_:web in 1969) [ClassicSimilarity], result of:
          0.02935275 = score(doc=1969,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 1969, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1969)
      0.16666667 = coord(2/12)
    
    Footnote
    Contribution to a special issue "Beyond libraries: Subject metadata in the digital environment and Semantic Web", containing papers from the IFLA Satellite Post-Conference of the same name, 17-18 August 2012, Tallinn.
  20. Lichtenstein, A.; Plank, M.; Neumann, J.: TIB's portal for audiovisual media : combining manual and automatic indexing (2014) 0.01
    0.0097842505 = product of:
      0.0587055 = sum of:
        0.02935275 = weight(_text_:web in 1981) [ClassicSimilarity], result of:
          0.02935275 = score(doc=1981,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1981)
        0.02935275 = weight(_text_:web in 1981) [ClassicSimilarity], result of:
          0.02935275 = score(doc=1981,freq=2.0), product of:
            0.11629491 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.035634913 = queryNorm
            0.25239927 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1981)
      0.16666667 = coord(2/12)
    
    Abstract
    The German National Library of Science and Technology (TIB) developed a Web-based platform for audiovisual media. The audiovisual portal optimizes access to scientific videos such as computer animations and lecture and conference recordings. TIB's AV-Portal combines traditional cataloging and automatic indexing of audiovisual media. The article describes metadata standards for audiovisual media and introduces the TIB's metadata schema in comparison to other metadata standards for non-textual materials. Additionally, we give an overview of multimedia retrieval technologies used for the Portal and present the AV-Portal in detail as well as the additional value for libraries and their users.

Years

Languages

  • e 40
  • d 29
  • f 1
  • ru 1

Types

  • a 61
  • el 7
  • x 5
  • m 2
  • s 1
  • More… Less…