Search (68 results, page 1 of 4)

  • theme_ss:"Automatisches Indexieren"
  1. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.09
    0.09136297 = product of:
      0.18272594 = sum of:
        0.04576759 = weight(_text_:world in 2673) [ClassicSimilarity], result of:
          0.04576759 = score(doc=2673,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.29726875 = fieldWeight in 2673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.060816556 = weight(_text_:wide in 2673) [ClassicSimilarity], result of:
          0.060816556 = score(doc=2673,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.342674 = fieldWeight in 2673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.057147387 = weight(_text_:web in 2673) [ClassicSimilarity], result of:
          0.057147387 = score(doc=2673,freq=6.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.43716836 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.018994406 = product of:
          0.03798881 = sum of:
            0.03798881 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.03798881 = score(doc=2673,freq=2.0), product of:
                0.14026769 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.040055543 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
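The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: each term score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = sqrt(freq), and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch reproducing the weight(_text_:world in 2673) entry from the values above (function names are illustrative, not the Lucene API):

```python
import math

def classic_tf(freq: float) -> float:
    # ClassicSimilarity tf: square root of the term frequency
    return math.sqrt(freq)

def classic_idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               field_norm: float, query_norm: float) -> float:
    idf = classic_idf(doc_freq, max_docs)
    query_weight = idf * query_norm                      # queryWeight line
    field_weight = classic_tf(freq) * idf * field_norm   # fieldWeight line
    return query_weight * field_weight

# Values taken from the explain tree for _text_:world in doc 2673
score = term_score(freq=2.0, doc_freq=2573, max_docs=44218,
                   field_norm=0.0546875, query_norm=0.040055543)
print(score)  # ≈ 0.04576759
```

The document-level score then sums these per-term scores and multiplies by the coordination factor, here coord(4/8) = 0.5 because four of eight query clauses matched.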
    
    Abstract
    Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW
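The word-cluster extraction step can be illustrated with a simple co-occurrence grouping: words that appear together in enough documents are merged into one cluster. This is a generic sketch of the idea, not Wolfekuhler and Punch's actual algorithm:

```python
from itertools import combinations

def word_clusters(docs: list[set[str]], min_cooccur: int = 2) -> list[set[str]]:
    # Merge words that co-occur in at least `min_cooccur` documents,
    # using a small union-find over the vocabulary.
    parent: dict[str, str] = {}

    def find(w: str) -> str:
        parent.setdefault(w, w)
        while parent[w] != w:
            parent[w] = parent[parent[w]]  # path halving
            w = parent[w]
        return w

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    vocab = set().union(*docs)
    for a, b in combinations(sorted(vocab), 2):
        if sum(1 for d in docs if a in d and b in d) >= min_cooccur:
            union(a, b)

    groups: dict[str, set[str]] = {}
    for w in vocab:
        groups.setdefault(find(w), set()).add(w)
    # Keep only genuine clusters, not singleton words
    return [g for g in groups.values() if len(g) > 1]

docs = [{"neural", "network", "learning"},
        {"neural", "network", "data"},
        {"stock", "market"},
        {"stock", "market", "crash"}]
print(word_clusters(docs))
```

Words that recur together ({"neural", "network"}, {"stock", "market"}) form clusters; words seen together only once are left out.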
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
  2. Koch, T.: Experiments with automatic classification of WAIS databases and indexing of WWW : some results from the Nordic WAIS/WWW project (1994) 0.07
    0.07435255 = product of:
      0.19827345 = sum of:
        0.07927179 = weight(_text_:world in 7209) [ClassicSimilarity], result of:
          0.07927179 = score(doc=7209,freq=6.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.5148846 = fieldWeight in 7209, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
        0.0860076 = weight(_text_:wide in 7209) [ClassicSimilarity], result of:
          0.0860076 = score(doc=7209,freq=4.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.4846142 = fieldWeight in 7209, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
        0.032994058 = weight(_text_:web in 7209) [ClassicSimilarity], result of:
          0.032994058 = score(doc=7209,freq=2.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.25239927 = fieldWeight in 7209, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7209)
      0.375 = coord(3/8)
    
    Abstract
    The Nordic WAIS/WWW project sponsored by NORDINFO is a joint project between Lund University Library and the National Technological Library of Denmark. It aims to improve the existing networked information discovery and retrieval tools Wide Area Information System (WAIS) and World Wide Web (WWW), and to move towards unifying WWW and WAIS. Details current results focusing on the WAIS side of the project. Describes research into automatic indexing and classification of WAIS sources, development of an orientation tool for WAIS, and development of a WAIS index of WWW resources
    Source
    Internet world and document delivery world international 94: Proceedings of the 2nd Annual Conference, London, May 1994
  3. Rasmussen, E.M.: Indexing and retrieval for the Web (2002) 0.06
    0.05808951 = product of:
      0.15490536 = sum of:
        0.032362577 = weight(_text_:world in 4285) [ClassicSimilarity], result of:
          0.032362577 = score(doc=4285,freq=4.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.21020076 = fieldWeight in 4285, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
        0.060816556 = weight(_text_:wide in 4285) [ClassicSimilarity], result of:
          0.060816556 = score(doc=4285,freq=8.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.342674 = fieldWeight in 4285, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
        0.061726224 = weight(_text_:web in 4285) [ClassicSimilarity], result of:
          0.061726224 = score(doc=4285,freq=28.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.47219574 = fieldWeight in 4285, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4285)
      0.375 = coord(3/8)
    
    Abstract
    The introduction and growth of the World Wide Web (WWW, or Web) have resulted in a profound change in the way individuals and organizations access information. In terms of volume, nature, and accessibility, the characteristics of electronic information are significantly different from those of even five or six years ago. Control of, and access to, this flood of information rely heavily on automated techniques for indexing and retrieval. According to Gudivada, Raghavan, Grosky, and Kasanagottu (1997, p. 58), "The ability to search and retrieve information from the Web efficiently and effectively is an enabling technology for realizing its full potential." Almost 93 percent of those surveyed consider the Web an "indispensable" Internet technology, second only to e-mail (Graphics, Visualization & Usability Center, 1998). Although there are other ways of locating information on the Web (browsing or following directory structures), 85 percent of users identify Web pages by means of a search engine (Graphics, Visualization & Usability Center, 1998). A more recent study conducted by the Stanford Institute for the Quantitative Study of Society confirms the finding that searching for information is second only to e-mail as an Internet activity (Nie & Erbring, 2000, online). In fact, Nie and Erbring conclude, "... the Internet today is a giant public library with a decidedly commercial tilt. The most widespread use of the Internet today is as an information search utility for products, travel, hobbies, and general information. Virtually all users interviewed responded that they engaged in one or more of these information gathering activities."
    Techniques for automated indexing and information retrieval (IR) have been developed, tested, and refined over the past 40 years, and are well documented (see, for example, Agosti & Smeaton, 1996; Baeza-Yates & Ribeiro-Neto, 1999a; Frakes & Baeza-Yates, 1992; Korfhage, 1997; Salton, 1989; Witten, Moffat, & Bell, 1999). With the introduction of the Web, and the capability to index and retrieve via search engines, these techniques have been extended to a new environment. They have been adopted, altered, and in some cases extended to include new methods. "In short, search engines are indispensable for searching the Web, they employ a variety of relatively advanced IR techniques, and there are some peculiar aspects of search engines that make searching the Web different than more conventional information retrieval" (Gordon & Pathak, 1999, p. 145). The environment for information retrieval on the World Wide Web differs from that of "conventional" information retrieval in a number of fundamental ways. The collection is very large and changes continuously, with pages being added, deleted, and altered. Wide variability between the size, structure, focus, quality, and usefulness of documents makes Web documents much more heterogeneous than a typical electronic document collection. The wide variety of document types includes images, video, audio, and scripts, as well as many different document languages. Duplication of documents and sites is common. Documents are interconnected through networks of hyperlinks. Because of the size and dynamic nature of the Web, preprocessing all documents requires considerable resources and is often not feasible, certainly not on the frequent basis required to ensure currency. Query length is usually much shorter than in other environments (only a few words), and user behavior differs from that in other environments. These differences make the Web a novel environment for information retrieval (Baeza-Yates & Ribeiro-Neto, 1999b; Bharat & Henzinger, 1998; Huang, 2000).
  4. Krüger, C.: Evaluation des WWW-Suchdienstes GERHARD unter besonderer Beachtung automatischer Indexierung (1999) 0.05
    0.052873235 = product of:
      0.1409953 = sum of:
        0.046232246 = weight(_text_:world in 1777) [ClassicSimilarity], result of:
          0.046232246 = score(doc=1777,freq=4.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.30028677 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.061434004 = weight(_text_:wide in 1777) [ClassicSimilarity], result of:
          0.061434004 = score(doc=1777,freq=4.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.34615302 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
        0.033329032 = weight(_text_:web in 1777) [ClassicSimilarity], result of:
          0.033329032 = score(doc=1777,freq=4.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.25496176 = fieldWeight in 1777, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1777)
      0.375 = coord(3/8)
    
    Abstract
    This thesis describes and evaluates the WWW search service GERHARD (German Harvest Automated Retrieval and Directory). GERHARD is a search and navigation system for the German World Wide Web which collects exclusively scholarly relevant documents and classifies them automatically, using computational-linguistic and statistical methods, against a library classification system. With the DFG project GERHARD, an attempt was made to develop an alternative to conventional methods of organizing the Internet: a World Wide Web service based on an automatic classification procedure. GERHARD is the only directory of Internet resources in the German-speaking world whose creation and updating are carried out fully automatically (i.e. by machine). GERHARD restricts itself to documents on scholarly WWW servers. The basic idea was to replace cost-intensive intellectual indexing and classification of Internet pages with computational-linguistic and statistical methods, so that the recorded Internet resources are automatically mapped onto the vocabulary of a library classification system. GERHARD stands for German Harvest Automated Retrieval and Directory. The WWW address (URL) of GERHARD is: http://www.gerhard.de. This diploma thesis describes the service, with particular emphasis on the underlying indexing and classification system, and then tests the effectiveness of GERHARD by means of a small retrieval test.
  5. Fauzi, F.; Belkhatir, M.: Multifaceted conceptual image indexing on the world wide web (2013) 0.05
    0.052627992 = product of:
      0.14034131 = sum of:
        0.039229363 = weight(_text_:world in 2721) [ClassicSimilarity], result of:
          0.039229363 = score(doc=2721,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.25480178 = fieldWeight in 2721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
        0.052128475 = weight(_text_:wide in 2721) [ClassicSimilarity], result of:
          0.052128475 = score(doc=2721,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.29372054 = fieldWeight in 2721, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
        0.048983473 = weight(_text_:web in 2721) [ClassicSimilarity], result of:
          0.048983473 = score(doc=2721,freq=6.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.37471575 = fieldWeight in 2721, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2721)
      0.375 = coord(3/8)
    
    Abstract
    In this paper, we describe a user-centered design of an automated multifaceted concept-based indexing framework which analyzes the semantics of the Web image contextual information and classifies it into five broad semantic concept facets: signal, object, abstract, scene, and relational; and identifies the semantic relationships between the concepts. An important aspect of our indexing model is that it relates to the users' levels of image descriptions. A major contribution is that the classification is performed automatically on the raw image contextual information extracted from any general webpage, rather than relying solely on image tags as state-of-the-art solutions do. Human Language Technology techniques and an external knowledge base are used to analyze the information both syntactically and semantically. Experimental results on a human-annotated Web image collection and corresponding contextual information indicate that our method outperforms empirical frameworks employing tf-idf and location-based tf-idf weighting schemes as well as n-gram indexing in a recall/precision based evaluation framework.
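The location-based tf-idf baseline named in the abstract weights a term by where it occurs on the page (title, alt text, body). A generic sketch of that idea follows; the boost values and function names are invented for illustration and are not the authors' exact scheme:

```python
import math

# Hypothetical location boosts: terms found in more prominent HTML
# contexts (title, alt text) count more than terms in body text.
LOCATION_BOOST = {"title": 3.0, "alt": 2.0, "body": 1.0}

def location_tf(term: str, fields: dict[str, list[str]]) -> float:
    # Sum boosted occurrences of the term across page locations
    return sum(LOCATION_BOOST[loc] * tokens.count(term)
               for loc, tokens in fields.items())

def idf(term: str, corpus: list[dict[str, list[str]]]) -> float:
    # Smoothed inverse document frequency over the page collection
    df = sum(1 for page in corpus
             if any(term in toks for toks in page.values()))
    return math.log((1 + len(corpus)) / (1 + df)) + 1.0

def location_tf_idf(term, page, corpus):
    return location_tf(term, page) * idf(term, corpus)

corpus = [{"title": ["cat"], "alt": [], "body": ["dog"]},
          {"title": ["fish"], "alt": [], "body": ["cat"]}]
page = corpus[0]
# "cat" in the title outweighs "dog" in the body of the same page
print(location_tf_idf("cat", page, corpus) > location_tf_idf("dog", page, corpus))  # True
```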
  6. Daudaravicius, V.: ¬A framework for keyphrase extraction from scientific journals (2016) 0.05
    0.052341826 = product of:
      0.13957821 = sum of:
        0.04576759 = weight(_text_:world in 2930) [ClassicSimilarity], result of:
          0.04576759 = score(doc=2930,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.29726875 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.060816556 = weight(_text_:wide in 2930) [ClassicSimilarity], result of:
          0.060816556 = score(doc=2930,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.342674 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
        0.032994058 = weight(_text_:web in 2930) [ClassicSimilarity], result of:
          0.032994058 = score(doc=2930,freq=2.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.25239927 = fieldWeight in 2930, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2930)
      0.375 = coord(3/8)
    
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
  7. Gábor, K.; Zargayouna, H.; Tellier, I.; Buscaldi, D.; Charnois, T.: ¬A typology of semantic relations dedicated to scientific literature analysis (2016) 0.05
    0.052341826 = product of:
      0.13957821 = sum of:
        0.04576759 = weight(_text_:world in 2933) [ClassicSimilarity], result of:
          0.04576759 = score(doc=2933,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.29726875 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.060816556 = weight(_text_:wide in 2933) [ClassicSimilarity], result of:
          0.060816556 = score(doc=2933,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.342674 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
        0.032994058 = weight(_text_:web in 2933) [ClassicSimilarity], result of:
          0.032994058 = score(doc=2933,freq=2.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.25239927 = fieldWeight in 2933, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2933)
      0.375 = coord(3/8)
    
    Content
    Vortrag, "Semantics, Analytics, Visualisation: Enhancing Scholarly Data Workshop co-located with the 25th International World Wide Web Conference April 11, 2016 - Montreal, Canada", Montreal 2016.
  8. Alexander, M.: Retrieving digital data with fuzzy matching (1996) 0.03
    0.030452613 = product of:
      0.12181045 = sum of:
        0.052305818 = weight(_text_:world in 6961) [ClassicSimilarity], result of:
          0.052305818 = score(doc=6961,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.33973572 = fieldWeight in 6961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.0625 = fieldNorm(doc=6961)
        0.06950463 = weight(_text_:wide in 6961) [ClassicSimilarity], result of:
          0.06950463 = score(doc=6961,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.3916274 = fieldWeight in 6961, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0625 = fieldNorm(doc=6961)
      0.25 = coord(2/8)
    
    Abstract
    Briefly describes the Excalibur EFS system which makes use of adaptive pattern recognition technology as an aid to automatic indexing and how it is being tested at the British Library for the indexing and retrieval of scanned images from the library's holdings. Notes how Excalibur EFS can support a wide degree of fuzzy searching, compensate for the errors produced by OCR conversion of scanned images, reduce the costs of indexing, and require far less storage space than more traditional indexes
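Excalibur's adaptive pattern recognition technology is proprietary; a common way to illustrate the kind of fuzzy matching that tolerates OCR conversion errors is edit distance. A generic sketch under that assumption, not the EFS algorithm itself:

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(query: str, term: str, max_errors: int = 2) -> bool:
    # Accept index terms within a small edit distance of the query,
    # absorbing typical OCR misreads
    return edit_distance(query, term) <= max_errors

print(fuzzy_match("indexing", "indcxing"))  # True: one OCR substitution
```

A fuzzy index built this way retrieves scanned-and-OCRed documents even when the recognized text differs slightly from the query term.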
    Source
    New library world. 97(1996) no.1131, S.28-31
  9. Search Engines and Beyond : Developing efficient knowledge management systems, April 19-20 1999, Boston, Mass (1999) 0.03
    0.029909614 = product of:
      0.07975897 = sum of:
        0.026152909 = weight(_text_:world in 2596) [ClassicSimilarity], result of:
          0.026152909 = score(doc=2596,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.16986786 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.034752317 = weight(_text_:wide in 2596) [ClassicSimilarity], result of:
          0.034752317 = score(doc=2596,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.1958137 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
        0.018853746 = weight(_text_:web in 2596) [ClassicSimilarity], result of:
          0.018853746 = score(doc=2596,freq=2.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.14422815 = fieldWeight in 2596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2596)
      0.375 = coord(3/8)
    
    Content
    Ramana Rao (Inxight, Palo Alto, CA) 7 ± 2 Insights on achieving Effective Information Access Session One: Updates and a twelve month perspective Danny Sullivan (Search Engine Watch, US / England) Portalization and other search trends Carol Tenopir (University of Tennessee) Search realities faced by end users and professional searchers Session Two: Today's search engines and beyond Daniel Hoogterp (Retrieval Technologies, McLean, VA) Effective presentation and utilization of search techniques Rick Kenny (Fulcrum Technologies, Ontario, Canada) Beyond document clustering: The knowledge impact statement Gary Stock (Ingenius, Kalamazoo, MI) Automated change monitoring Gary Culliss (Direct Hit, Wellesley Hills, MA) User popularity ranked search engines Byron Dom (IBM, CA) Automatically finding the best pages on the World Wide Web (CLEVER) Peter Tomassi (LookSmart, San Francisco, CA) Adding human intellect to search technology Session Three: Panel discussion: Human v automated categorization and editing Ev Brenner (New York, NY)- Chairman James Callan (University of Massachusetts, MA) Marc Krellenstein (Northern Light Technology, Cambridge, MA) Dan Miller (Ask Jeeves, Berkeley, CA) Session Four: Updates and a twelve month perspective Steve Arnold (AIT, Harrods Creek, KY) Review: The leading edge in search and retrieval software Ellen Voorhees (NIST, Gaithersburg, MD) TREC update Session Five: Search engines now and beyond Intelligent Agents John Snyder (Muscat, Cambridge, England) Practical issues behind intelligent agents Text summarization Therese Firmin, (Dept of Defense, Ft George G. Meade, MD) The TIPSTER/SUMMAC evaluation of automatic text summarization systems Cross language searching Elizabeth Liddy (TextWise, Syracuse, NY) A conceptual interlingua approach to cross-language retrieval. 
Video search and retrieval Arnon Amir (IBM, Almaden, CA) CueVideo: Modular system for automatic indexing and browsing of video/audio Speech recognition Michael Witbrock (Lycos, Waltham, MA) Retrieval of spoken documents Visualization James A. Wise (Integral Visuals, Richland, WA) Information visualization in the new millennium: Emerging science or passing fashion? Text mining David Evans (Claritech, Pittsburgh, PA) Text mining - towards decision support
  10. Groß, T.; Faden, M.: Automatische Indexierung elektronischer Dokumente an der Deutschen Zentralbibliothek für Wirtschaftswissenschaften : Bericht über die Jahrestagung der Internationalen Buchwissenschaftlichen Gesellschaft (2010) 0.03
    0.029909614 = product of:
      0.07975897 = sum of:
        0.026152909 = weight(_text_:world in 4051) [ClassicSimilarity], result of:
          0.026152909 = score(doc=4051,freq=2.0), product of:
            0.15396032 = queryWeight, product of:
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.040055543 = queryNorm
            0.16986786 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.8436708 = idf(docFreq=2573, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.034752317 = weight(_text_:wide in 4051) [ClassicSimilarity], result of:
          0.034752317 = score(doc=4051,freq=2.0), product of:
            0.17747644 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.040055543 = queryNorm
            0.1958137 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.018853746 = weight(_text_:web in 4051) [ClassicSimilarity], result of:
          0.018853746 = score(doc=4051,freq=2.0), product of:
            0.13072169 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.040055543 = queryNorm
            0.14422815 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
      0.375 = coord(3/8)
    
    Abstract
    The growing availability of digital information in recent years, and the prospect of a further rise in the so-called data deluge, culminate in a fundamental and steadily worsening problem of structuring information. The constant growth of digital information resources on the World Wide Web does secure access to a variety of information at any time and from any place, but structured access, particularly to scholarly resources, remains an open problem. Given the rising number of electronic documents, and against the background of stagnating or shrinking staff resources in subject indexing, no library or library network can manage, now or in the future, to record all digital data, to structure them, and to relate them to one another. In the information society of the 21st century, however, it becomes increasingly important to structure the scholarly information lost in this flood promptly, appropriately, and completely, and thus to make it usable again as a basis for generating knowledge. Standardized subject indexing of digital information resources is therefore a decisive and success-critical issue for the Deutsche Zentralbibliothek für Wirtschaftswissenschaften (ZBW), an important information infrastructure institution in this field, in its competition with other information providers. Because traditional intellectual subject indexing does not scale arbitrarily (as the number of online documents rises, the staff of subject specialists required to maintain a given standard of quality rises proportionally), other indexing methods are needed for the future. Automated subject indexing methods are regarded as the only way to keep library subject indexing viable in the digital age. Moreover, machine-based approaches can help to level out the heterogeneities (indexing inconsistencies) between individual subject indexers, and thus contribute to a more homogeneous indexing of the library's holdings.
  11. Shafer, K.: Scorpion Project explores using Dewey to organize the Web (1996)
    Abstract
    As the amount of accessible information on the WWW increases, so will the cost of accessing it, even if search services remain free, because of the growing amount of time users must spend to find the items they need. Considers what the seemingly unorganized Web and the organized world of libraries can offer each other. The OCLC Scorpion Project is attempting to combine indexing and cataloguing, specifically focusing on building tools for automatic subject recognition using the techniques of library science and information retrieval. If subject headings or concept domains can be assigned to electronic items automatically, improved filtering tools for searching can be produced.
  12. Milstead, J.L.: Thesauri in a full-text world (1998)
    Date
    22. 9.1997 19:16:05
  13. Chartron, G.; Dalbin, S.; Monteil, M.-G.; Verillon, M.: Indexation manuelle et indexation automatique : dépasser les oppositions (1989)
    Abstract
    Report of a study comparing two methods of indexing: LEXINET, a computerised system for indexing titles and summaries only; and manual indexing of full texts, using the thesaurus developed by French Electricity (EDF). Both systems were applied to a collection of approximately 2,000 documents on artificial intelligence from the EDF database. The results were then analysed to compare quantitative performance (number and range of terms) and qualitative performance (ambiguity of terms, specificity, variability, consistency). Overall, neither system proved ideal: LEXINET suffered from poor accessibility and excessive ambiguity, while the manual system gave rise to an over-wide variation of terms. On the evidence produced here, the ideal system would appear to be a combination of the automatic and manual approaches.
  14. Souza, R.R.; Gil-Leiva, I.: Automatic indexing of scientific texts : a methodological comparison (2016)
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
  15. Goller, C.; Löning, J.; Will, T.; Wolff, W.: Automatic document classification : a thorough evaluation of various methods (2000)
    Abstract
    (Automatic) document classification is generally defined as content-based assignment of one or more predefined categories to documents. Usually, machine learning, statistical pattern recognition, or neural network approaches are used to construct classifiers automatically. In this paper we thoroughly evaluate a wide variety of these methods on a document classification task for German text. We evaluate different feature construction and selection methods and various classifiers. Our main results are: (1) feature selection is necessary not only to reduce learning and classification time, but also to avoid overfitting (even for Support Vector Machines); (2) surprisingly, our morphological analysis does not improve classification quality compared to a letter 5-gram approach; (3) Support Vector Machines are significantly better than all other classification methods
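The letter 5-gram approach the abstract refers to can be sketched as a simple feature extractor: the classifier sees counts of overlapping character 5-grams rather than morphologically analysed word forms. A minimal sketch under that assumption (the sample string is illustrative, not from the paper's corpus):

```python
from collections import Counter

def char_ngrams(text, n=5):
    """Counts of overlapping character n-grams over a lowercased,
    whitespace-normalized string; a common surrogate for morphology."""
    s = " ".join(text.lower().split())
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

# German compound words share many 5-grams with their components,
# which is one reason such features can rival morphological analysis.
feats = char_ngrams("Dokumentklassifikation")
print(len(feats))
```

Each document's Counter would then be mapped into a feature vector for the classifiers the paper evaluates (e.g. a Support Vector Machine).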
  16. McKiernan, G.: Automated categorisation of Web resources : a profile of selected projects, research, products, and services (1996)
  17. Golub, K.: Automatic subject indexing of text (2019)
    Abstract
    Automatic subject indexing addresses problems of scale and sustainability and can at the same time be used to enrich existing metadata records, establish more connections across and between resources from various metadata and resource collections, and enhance the consistency of the metadata. In this work, automatic subject indexing focuses on assigning index terms or classes from established knowledge organization systems (KOSs) for subject indexing, such as thesauri, subject headings systems and classification systems. The following major approaches are discussed in terms of their similarities and differences, advantages and disadvantages for automatic assigned indexing from KOSs: "text categorization," "document clustering," and "document classification." Text categorization is perhaps the most widespread machine-learning approach, with generally good reported performance. Document clustering automatically both creates groups of related documents and extracts names of subjects depicting the group at hand. Document classification reuses the intellectual effort invested into creating a KOS for subject indexing, and even simple string-matching algorithms have been reported to achieve good results, because one concept can be described using a number of different terms, including equivalent, related, narrower and broader terms. Finally, the applicability of automatic subject indexing to operative information systems and the challenges of evaluation are outlined, suggesting the need for more research.
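The string-matching flavour of document classification described above can be sketched in a few lines: a tiny, made-up thesaurus maps preferred headings to their equivalent terms, and a document receives every heading whose terms occur in its text. The thesaurus and document are hypothetical; a real KOS would also contribute related, narrower and broader terms.

```python
# Toy thesaurus: preferred heading -> equivalent entry terms (all invented).
THESAURUS = {
    "Automatic indexing": ["automatic indexing", "machine indexing",
                           "automated subject indexing"],
    "Thesauri": ["thesaurus", "thesauri", "controlled vocabulary"],
}

def assign_subjects(text, thesaurus=THESAURUS):
    """Assign every heading whose entry terms appear in the text."""
    lowered = text.lower()
    return sorted(heading for heading, terms in thesaurus.items()
                  if any(term in lowered for term in terms))

doc = "This paper evaluates machine indexing against a controlled vocabulary."
print(assign_subjects(doc))  # ['Automatic indexing', 'Thesauri']
```

Even this naive substring matcher illustrates the point made above: because each concept is reachable through several entry terms, exact-match rules can cover many surface variants.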
  18. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986)
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  19. Hauer, M.: Silicon Valley Vorarlberg : Maschinelle Indexierung und semantisches Retrieval verbessert den Katalog der Vorarlberger Landesbibliothek (2004)
    Abstract
    Ten years of the Internet have profoundly changed the world around libraries. The web OPAC was one answer from the libraries, but is a web OPAC still enough in the Internet age? Apart from the web front end, it is still the old catalogue. About 90% of all library searches by users are subject searches, and a share of these searches returns no result. It is easy to measure that zero media were found, and the reasons for this have been investigated again and again: plural instead of singular forms, overly specific search terms, typing or usage errors. Far less studied, however, are the searches that do not end in a loan, for in many of these cases a retrieval failure can also be assumed. Finally, according to the estimate of many librarians, 80% of the books borrowed are read no further than the table of contents (except in reference libraries) and are returned only after weeks. A politician would call this "a communication problem"; a controller, an insufficient use of capital. More and more students and researchers take the easier route: their exchange of knowledge increasingly takes place elsewhere. Libraries (as a function) are indispensable for scholarly communication. The task is therefore to find, and to take, paths that bring the treasures of libraries (as an institution) to their target group more efficiently. The use of information retrieval technology, new indexing methods and new content are approaches to this. Yet the existing union-catalogue structures and dependencies have by no means fostered the innovative project presented here. As innovation research shows, innovation almost always arises at the periphery: in Bregenz it began.
  20. Pirkola, A.: Morphological typology of languages for IR (2001)
    Abstract
    This paper presents a morphological classification of languages from the IR perspective. Linguistic typology research has shown that the morphological complexity of every language in the world can be described by two variables, index of synthesis and index of fusion. These variables provide a theoretical basis for IR research handling morphological issues. A common theoretical framework is needed in particular because of the increasing significance of cross-language retrieval research and CLIR systems processing different languages. The paper elaborates the linguistic morphological typology for the purposes of IR research. It studies how the indexes of synthesis and fusion could be used as practical tools in mono- and cross-lingual IR research. The need for semantic and syntactic typologies is discussed. The paper also reviews studies made in different languages on the effects of morphology and stemming in IR.
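The effect of morphology on IR that the abstract mentions is easiest to see with stemming: a stemmer conflates morphological variants onto a shared stem so that query and document forms match. A minimal sketch using a toy suffix-stripping stemmer (the rules are invented for illustration, not those of a real algorithm such as Porter's):

```python
# Toy suffix-stripping stemmer: strip the longest matching suffix,
# but only if a stem of at least three letters remains.
SUFFIXES = ["ations", "ation", "ings", "ing", "es", "s"]

def stem(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

variants = ["index", "indexes", "indexing", "indexation"]
print({w: stem(w) for w in variants})  # all four map to 'index'
```

For a highly synthetic or fusional language in Pirkola's typology, far more (and less regular) rules would be needed, which is exactly why the indexes of synthesis and fusion are useful predictors of how much stemming matters.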

Languages

  • e 39
  • d 27
  • f 1
  • ru 1

Types

  • a 60
  • el 6
  • x 4
  • m 2
  • s 1