Search (267 results, page 3 of 14)

  • theme_ss:"Automatisches Indexieren"
  • type_ss:"a"
  1. Hersh, W.R.; Hickam, D.H.: A comparison of two methods for indexing and retrieval from a full-text medical database (1992) 0.01
    0.0077391705 = product of:
      0.054174192 = sum of:
        0.012233062 = weight(_text_:information in 4526) [ClassicSimilarity], result of:
          0.012233062 = score(doc=4526,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23515764 = fieldWeight in 4526, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4526)
        0.04194113 = weight(_text_:retrieval in 4526) [ClassicSimilarity], result of:
          0.04194113 = score(doc=4526,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.46789268 = fieldWeight in 4526, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4526)
      0.14285715 = coord(2/14)
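    The score breakdowns shown with each hit are Lucene "explain" trees for the classic TF-IDF similarity. As a cross-check, a minimal Python sketch that recomputes the first entry's score from the values in the tree (function and variable names are our own):

```python
import math

def classic_tfidf(freq, doc_freq, max_docs, field_norm, query_norm):
    """One weight(_text_:term) branch of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # queryWeight
    field_weight = tf * idf * field_norm             # fieldWeight
    return query_weight * field_weight               # score(doc, freq)

QUERY_NORM = 0.029633347  # as reported in the trees above

# Entry 1 (doc 4526): "information" (freq=6) and "retrieval" (freq=8)
w_info = classic_tfidf(6.0, 20772, 44218, 0.0546875, QUERY_NORM)
w_retr = classic_tfidf(8.0, 5836, 44218, 0.0546875, QUERY_NORM)

# coord(2/14): only 2 of the 14 query clauses matched this document
print((w_info + w_retr) * 2 / 14)  # ~0.0077391705
```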
    
    Abstract
    Reports results of a study of 2 information retrieval systems on a 2,000-document full-text medical database. The first system, SAPHIRE, features concept-based automatic indexing and statistical retrieval techniques, while the second system, SWORD, features traditional word-based Boolean techniques. 16 medical students at Oregon Health Sciences Univ. each performed 10 searches, and their results, recorded in terms of recall and precision, showed nearly equal performance for both systems. SAPHIRE was also compared with a version of SWORD modified to use automatic indexing and ranked retrieval. Using batch input of queries, the latter method performed slightly better.
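    For readers unfamiliar with the two metrics, a minimal sketch of recall and precision as used in such comparisons (document IDs are invented):

```python
def recall_precision(retrieved: set, relevant: set):
    """Recall: share of relevant documents found; precision: share of hits that are relevant."""
    hits = len(retrieved & relevant)
    return hits / len(relevant), hits / len(retrieved)

r, p = recall_precision({"d1", "d2", "d3"}, {"d2", "d3", "d4", "d5"})
print(f"recall={r:.2f} precision={p:.2f}")  # recall=0.50 precision=0.67
```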
    Imprint
    Medford, NJ : Learned Information Inc.
    Source
    Proceedings of the 55th Annual Meeting of the American Society for Information Science, Pittsburgh, 26.-29.10.92. Ed.: D. Shaw
  2. Knorz, G.: Automatische Indexierung (1994) 0.01
    0.007581752 = product of:
      0.05307226 = sum of:
        0.01712272 = weight(_text_:information in 4254) [ClassicSimilarity], result of:
          0.01712272 = score(doc=4254,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3291521 = fieldWeight in 4254, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4254)
        0.03594954 = weight(_text_:retrieval in 4254) [ClassicSimilarity], result of:
          0.03594954 = score(doc=4254,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 4254, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=4254)
      0.14285715 = coord(2/14)
    
    Series
    Berufsbegleitendes Ergänzungsstudium im Tätigkeitsfeld wissenschaftliche Information und Dokumentation (BETID): Lehrmaterialien; Nr.3
    Source
    Wissensrepräsentation und Information Retrieval. R.-D. Hennings u.a
  3. Rapke, K.: Automatische Indexierung von Volltexten für die Gruner+Jahr Pressedatenbank (2001) 0.01
    0.007154687 = product of:
      0.050082806 = sum of:
        0.0060537956 = weight(_text_:information in 6386) [ClassicSimilarity], result of:
          0.0060537956 = score(doc=6386,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.116372846 = fieldWeight in 6386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
        0.044029012 = weight(_text_:retrieval in 6386) [ClassicSimilarity], result of:
          0.044029012 = score(doc=6386,freq=12.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.49118498 = fieldWeight in 6386, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=6386)
      0.14285715 = coord(2/14)
    
    Abstract
    Retrieval tests are the most widely accepted method for justifying new content indexing methods against traditional ones. As part of a diploma thesis, two fundamentally different systems for automatic subject indexing were tested and evaluated on the press database of the publishing house Gruner + Jahr (G+J). The study compared natural-language retrieval with Boolean retrieval. The two systems in question are Autonomy, from Autonomy Inc., and DocCat, which IBM adapted to the database structure of the G+J press database. The former is a probabilistic system based on natural-language retrieval; DocCat, by contrast, is based on Boolean retrieval and is a learning system that indexes on the basis of an intellectually created training template. Methodologically, the evaluation starts from the real application context of G+J's text documentation. The tests are assessed from both statistical and qualitative points of view. One finding is that DocCat shows some deficiencies compared with intellectual subject indexing that still need to be remedied, while Autonomy's natural-language retrieval, in this setting and for the specific requirements of G+J's text documentation, is not usable as is.
    Source
    nfd Information - Wissenschaft und Praxis. 52(2001) H.5, S.251-262
  4. Kim, P.K.: An automatic indexing of compound words based on mutual information for Korean text retrieval (1995) 0.01
    0.007148144 = product of:
      0.050037004 = sum of:
        0.016143454 = weight(_text_:information in 620) [ClassicSimilarity], result of:
          0.016143454 = score(doc=620,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3103276 = fieldWeight in 620, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=620)
        0.033893548 = weight(_text_:retrieval in 620) [ClassicSimilarity], result of:
          0.033893548 = score(doc=620,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37811437 = fieldWeight in 620, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=620)
      0.14285715 = coord(2/14)
    
    Abstract
    Presents an automatic indexing technique for compound words suitable for an agglutinative language, specifically Korean. Discusses some construction conditions for compound words and the rules for decomposing compound words to enhance the exhaustivity of indexing, demonstrating that this system, based on mutual information, enhances both the exhaustivity of indexing and the specificity of terms. Suggests that the construction conditions and decomposition rules presented may be used in multilingual information retrieval systems to translate the indexing terms of the specific language into those of the language required.
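    One plausible reading of the mutual-information criterion is sketched below; this is an illustrative pointwise-MI scorer with invented counts and threshold, not Kim's published formulation:

```python
import math

def pmi(pair_count, x_count, y_count, total):
    """Pointwise mutual information between two morphemes forming a compound."""
    p_xy = pair_count / total
    return math.log2(p_xy / ((x_count / total) * (y_count / total)))

def decompose(compound, split, counts, pair_counts, total, threshold=3.0):
    """Split a compound at position `split` only if the parts are strongly associated."""
    x, y = compound[:split], compound[split:]
    if (x, y) in pair_counts and pmi(pair_counts[(x, y)], counts[x], counts[y], total) >= threshold:
        return (x, y)
    return (compound,)

# invented corpus statistics for the Korean compound "정보검색" (information retrieval)
counts = {"정보": 120, "검색": 95}
pair_counts = {("정보", "검색"): 40}
print(decompose("정보검색", 2, counts, pair_counts, total=10_000))  # ('정보', '검색')
```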
    Source
    Library and information science. 1995, no.34, S.29-38
  5. Hafer, M.A.; Weiss, S.F.: Word segmentation by letter successor varieties (1974) 0.01
    0.0069364496 = product of:
      0.048555143 = sum of:
        0.012233062 = weight(_text_:information in 4997) [ClassicSimilarity], result of:
          0.012233062 = score(doc=4997,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23515764 = fieldWeight in 4997, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4997)
        0.036322083 = weight(_text_:retrieval in 4997) [ClassicSimilarity], result of:
          0.036322083 = score(doc=4997,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40520695 = fieldWeight in 4997, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4997)
      0.14285715 = coord(2/14)
    
    Abstract
    This paper describes a method for automatically segmenting words into their stems and affixes. The process uses certain statistical properties of the corpus (successor and predecessor letter variety counts) to indicate where words should be divided. Consequently, this process is less reliant on human intervention than are other methods for automated stemming. The segmentation system is used to construct stem dictionaries for document classification. Information retrieval experiments are then performed using documents and queries so classified. Results show not only that this method is capable of high-quality word segmentation, but also that its use in information retrieval produces results at least as good as those obtained using the more traditional stemming process.
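    A minimal sketch of the successor-variety count itself (the boundary heuristic, here a simple peak, is only one of several cutoff strategies Hafer and Weiss evaluate):

```python
from collections import defaultdict

def successor_varieties(word, corpus):
    """Number of distinct letters that follow each prefix of `word` in the corpus."""
    followers = defaultdict(set)
    for w in corpus:
        for i in range(1, len(w) + 1):
            prefix, nxt = w[:i], w[i:i + 1]
            if word.startswith(prefix) and nxt:
                followers[i].add(nxt)
    return [len(followers[i]) for i in range(1, len(word))]

corpus = ["readable", "reading", "reads", "red", "rope", "ripe"]
print(successor_varieties("readable", corpus))
# [3, 2, 1, 3, 1, 1, 1] -- the peak after prefix length 4 suggests the boundary "read|able"
```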
    Source
    Information storage and retrieval. 10(1974) H.11/12, S.371-385
  6. Salton, G.: Future prospects for text-based information retrieval (1990) 0.01
    0.006865305 = product of:
      0.04805713 = sum of:
        0.012107591 = weight(_text_:information in 2327) [ClassicSimilarity], result of:
          0.012107591 = score(doc=2327,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 2327, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=2327)
        0.03594954 = weight(_text_:retrieval in 2327) [ClassicSimilarity], result of:
          0.03594954 = score(doc=2327,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 2327, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=2327)
      0.14285715 = coord(2/14)
    
  7. Salton, G.; Araya, J.: On the use of clustered file organizations in information search and retrieval (1990) 0.01
    0.006865305 = product of:
      0.04805713 = sum of:
        0.012107591 = weight(_text_:information in 2409) [ClassicSimilarity], result of:
          0.012107591 = score(doc=2409,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 2409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=2409)
        0.03594954 = weight(_text_:retrieval in 2409) [ClassicSimilarity], result of:
          0.03594954 = score(doc=2409,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 2409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=2409)
      0.14285715 = coord(2/14)
    
  8. Reimer, U.: Verfahren der automatischen Indexierung : benötigtes Vorwissen und Ansätze zu seiner automatischen Akquisition, ein Überblick (1992) 0.01
    0.006865305 = product of:
      0.04805713 = sum of:
        0.012107591 = weight(_text_:information in 7858) [ClassicSimilarity], result of:
          0.012107591 = score(doc=7858,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 7858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=7858)
        0.03594954 = weight(_text_:retrieval in 7858) [ClassicSimilarity], result of:
          0.03594954 = score(doc=7858,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 7858, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=7858)
      0.14285715 = coord(2/14)
    
    Source
    Experimentelles und praktisches Information Retrieval: Festschrift für Gerhard Lustig. Hrsg. R. Kuhlen
  9. Porter, M.F.: An algorithm for suffix stripping (1980) 0.01
    0.006865305 = product of:
      0.04805713 = sum of:
        0.012107591 = weight(_text_:information in 3122) [ClassicSimilarity], result of:
          0.012107591 = score(doc=3122,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 3122, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=3122)
        0.03594954 = weight(_text_:retrieval in 3122) [ClassicSimilarity], result of:
          0.03594954 = score(doc=3122,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 3122, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=3122)
      0.14285715 = coord(2/14)
    
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones u. P. Willett. San Francisco: Morgan Kaufmann 1997. S.313-316.
  10. Anderson, J.D.; Pérez-Carballo, J.: The nature of indexing: how humans and machines analyze messages and texts for retrieval : Part I: Research and the nature of human indexing (2001) 0.01
    0.006865305 = product of:
      0.04805713 = sum of:
        0.012107591 = weight(_text_:information in 3136) [ClassicSimilarity], result of:
          0.012107591 = score(doc=3136,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 3136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=3136)
        0.03594954 = weight(_text_:retrieval in 3136) [ClassicSimilarity], result of:
          0.03594954 = score(doc=3136,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 3136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=3136)
      0.14285715 = coord(2/14)
    
    Source
    Information processing and management. 37(2001) no.2, S.231-254
  11. Milstead, J.L.: Thesauri in a full-text world (1998) 0.01
    0.0068057464 = product of:
      0.03176015 = sum of:
        0.010089659 = weight(_text_:information in 2337) [ClassicSimilarity], result of:
          0.010089659 = score(doc=2337,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 2337, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2337)
        0.014978974 = weight(_text_:retrieval in 2337) [ClassicSimilarity], result of:
          0.014978974 = score(doc=2337,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 2337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2337)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 2337) [ClassicSimilarity], result of:
              0.020074548 = score(doc=2337,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 2337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2337)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Despite early claims to the contrary, thesauri continue to find use as access tools for information in the full-text environment. Their mode of use is changing, but this change actually represents an expansion rather than a contradiction of their utility. Thesauri and similar vocabulary tools can complement full-text access by aiding users in focusing their searches, by supplementing the linguistic analysis of the text search engine, and even by serving as one of the tools used by the linguistic engine for its analysis. While human indexing continues to be used for many databases, the trend is to increase the use of machine aids for this purpose. All machine-aided indexing (MAI) systems rely on thesauri as the basis for term selection. In the 21st century, the balance of effort between human and machine will change at both input and output, but thesauri will continue to play an important role for the foreseeable future.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
    Theme
    Verbale Doksprachen im Online-Retrieval
  12. Buckley, C.; Allan, J.; Salton, G.: Automatic routing and retrieval using Smart : TREC-2 (1995) 0.01
    0.0066335746 = product of:
      0.04643502 = sum of:
        0.0104854815 = weight(_text_:information in 5699) [ClassicSimilarity], result of:
          0.0104854815 = score(doc=5699,freq=6.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.20156369 = fieldWeight in 5699, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5699)
        0.03594954 = weight(_text_:retrieval in 5699) [ClassicSimilarity], result of:
          0.03594954 = score(doc=5699,freq=8.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.40105087 = fieldWeight in 5699, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5699)
      0.14285715 = coord(2/14)
    
    Abstract
    The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. The work in the TREC-2 environment continues, performing both routing and ad hoc experiments. The ad hoc work extends investigations into combining global similarities, which give an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document that matches the query. The performance of the ad hoc runs is good, but it is clear that full advantage is not yet being taken of the available local information. The routing experiments use conventional relevance feedback approaches to routing, but with a much greater degree of query expansion than was previously done. The length of a query vector is increased by a factor of 5 to 10 by adding terms found in previously seen relevant documents. This approach improves effectiveness by 30-40% over the original query.
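    The expansion step can be pictured with a generic Rocchio-style update, a sketch under the assumption of simple term-weight vectors; alpha, beta, and the 50-term cap are illustrative, not the actual SMART settings:

```python
from collections import Counter

def expand_query(query, relevant_docs, alpha=1.0, beta=0.75, extra_terms=50):
    """Rocchio-style expansion: boost the query with the centroid of relevant docs."""
    centroid = Counter()
    for doc in relevant_docs:                     # average term weights over docs
        for term, w in doc.items():
            centroid[term] += w / len(relevant_docs)
    expanded = Counter({t: alpha * w for t, w in query.items()})
    for term, w in centroid.most_common(extra_terms):  # grow the query vector
        expanded[term] += beta * w
    return expanded

query = Counter({"automatic": 1.0, "routing": 1.0})
relevant = [Counter({"routing": 3, "filter": 2, "profile": 1}),
            Counter({"filter": 1, "dissemination": 2})]
print(expand_query(query, relevant).most_common(4))
```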
    Source
    Information processing and management. 31(1995) no.3, S.315-326
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  13. Lichtenstein, A.; Plank, M.; Neumann, J.: TIB's portal for audiovisual media : combining manual and automatic indexing (2014) 0.01
    0.006482826 = product of:
      0.04537978 = sum of:
        0.024409214 = weight(_text_:web in 1981) [ClassicSimilarity], result of:
          0.024409214 = score(doc=1981,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.25239927 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1981)
        0.020970564 = weight(_text_:retrieval in 1981) [ClassicSimilarity], result of:
          0.020970564 = score(doc=1981,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.23394634 = fieldWeight in 1981, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1981)
      0.14285715 = coord(2/14)
    
    Abstract
    The German National Library of Science and Technology (TIB) developed a Web-based platform for audiovisual media. The audiovisual portal optimizes access to scientific videos such as computer animations and lecture and conference recordings. TIB's AV-Portal combines traditional cataloging and automatic indexing of audiovisual media. The article describes metadata standards for audiovisual media and introduces TIB's metadata schema in comparison to other metadata standards for non-textual materials. Additionally, we give an overview of the multimedia retrieval technologies used for the portal, and present the AV-Portal in detail as well as its added value for libraries and their users.
  14. Salton, G.: SMART System: 1961-1976 (2009) 0.01
    0.006472671 = product of:
      0.045308694 = sum of:
        0.011415146 = weight(_text_:information in 3879) [ClassicSimilarity], result of:
          0.011415146 = score(doc=3879,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.21943474 = fieldWeight in 3879, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=3879)
        0.033893548 = weight(_text_:retrieval in 3879) [ClassicSimilarity], result of:
          0.033893548 = score(doc=3879,freq=4.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.37811437 = fieldWeight in 3879, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=3879)
      0.14285715 = coord(2/14)
    
    Abstract
    While a number of researchers had experimented during the 1950s with automatic indexing and retrieval in various forms, it was Gerard Salton who brought the information retrieval experimental paradigm to full fruition with his "SMART" system. His work has been enormously influential.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  15. McKiernan, G.: Automated categorisation of Web resources : a profile of selected projects, research, products, and services (1996) 0.01
    0.006422852 = product of:
      0.044959962 = sum of:
        0.034870304 = weight(_text_:web in 2533) [ClassicSimilarity], result of:
          0.034870304 = score(doc=2533,freq=2.0), product of:
            0.09670874 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.029633347 = queryNorm
            0.36057037 = fieldWeight in 2533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.078125 = fieldNorm(doc=2533)
        0.010089659 = weight(_text_:information in 2533) [ClassicSimilarity], result of:
          0.010089659 = score(doc=2533,freq=2.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.19395474 = fieldWeight in 2533, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.078125 = fieldNorm(doc=2533)
      0.14285715 = coord(2/14)
    
    Source
    New review of information networking. 1996, no.2, S.15-40
  16. Lepsky, K.: Automatische Indexierung des Reallexikons zur Deutschen Kunstgeschichte (2006) 0.01
    0.006305383 = product of:
      0.04413768 = sum of:
        0.03624127 = weight(_text_:elektronische in 6080) [ClassicSimilarity], result of:
          0.03624127 = score(doc=6080,freq=4.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.258616 = fieldWeight in 6080, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6080)
        0.0078964075 = weight(_text_:information in 6080) [ClassicSimilarity], result of:
          0.0078964075 = score(doc=6080,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.1517936 = fieldWeight in 6080, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=6080)
      0.14285715 = coord(2/14)
    
    Abstract
    Digitization projects are making content that was previously available only in print ever more accessible, increasingly including entire books. Projects such as "Google Print" promise the complete electronic availability of information almost independently of time and place, and they strike fear into the guardians of conventional information, the libraries, which dread the loss of their traditional role. The debate rarely turns on the question of what concrete benefit the electronic full text actually delivers: the benefit is simply taken for granted; full texts are regarded as useful in principle. This is too optimistic, in that the mere availability of information by no means ensures its meaningful use - the mere availability of the full text of Kant's "Kritik der reinen Vernunft" does not remove the need to read the work and to want to understand it. And reading is better done not on screen but in the print edition. Electronic full texts of books are not meant for reading. Unless their purpose is purely promotional (the "Publishers Program" of Google Print does indeed create this impression), their potential use is as a reference tool. Only the full text offers the possibility of finding information in a work that has not been explicitly indexed, for instance through a table of contents or, an even more favorable starting point, through a subject index. Most works, however, were not written for such a purpose; that is, one cannot expect full-text access to turn a work on the "Geschichte des Römischen Reiches" into an encyclopedia of the history of the Roman Empire. Does the view behind Google Print and countless other digitization initiatives therefore correspond to a rather naive picture of the usefulness of printed information?
    Reliable information is what one may expect when consulting reputable reference works. At least for the subject matter accessible through the primary order (headword/lemma), printed editions permit targeted access, and cross-references between articles provide further entry points. It is reasonable to assume that the usefulness of reference works in electronic form can be increased considerably further: products such as "Brockhaus multimedial" or "Encyclopedia Britannica" use powerful techniques to provide numerous navigation and search options beyond free full-text access. It is therefore natural, when digitizing conventionally published reference works, also to improve their usability and to extend substantially the access options available in print. Examples of this approach are the digitization of the "Oekonomische Encyklopädie" of Johann Georg Krünitz, which was carried out at great expense not by machine (scanning and OCR) but manually, and the numerous digitizations undertaken in "Projekt Runeberg", which also include reference works. Whether simple full-text indexing of such extensive and - because they were conceived as reference works in the first place - extremely condensed sources suffices for the greatest possible benefit of the electronic version may rightly be doubted. In commercial products, additional techniques therefore provide, on the one hand, thematically targeted access beyond the headwords and, on the other, cross-connections to further articles of possible interest (the "Wissensnetz" of the Brockhaus, the "Knowledge Navigator" of the Britannica). It may be assumed that such techniques can draw on information (article structure, tagged personal names, cross-references, etc.) that is present in usable form in the articles to be processed. For digitized print editions, such methods are out of the question, because all that is available are flat texts, usually riddled with OCR errors. The access options therefore range between a 1:1 rendering of the print edition, i.e. primary access via the headword, and full-text search over the complete text of the encyclopedia. Given the possibilities inherent in the electronic full text, neither is the method of choice. For the digitization of the "Reallexikon zur Deutschen Kunstgeschichte" within the project "RDKWeb", funded by the Deutsche Forschungsgemeinschaft, an attempt is therefore made to use automatic indexing to achieve a solution that offers, beyond a plain full-text search, search support oriented towards (though not measured against!) the capabilities of commercial products.
    Source
    Information und Sprache: Beiträge zu Informationswissenschaft, Computerlinguistik, Bibliothekswesen und verwandten Fächern. Festschrift für Harald H. Zimmermann. Herausgegeben von Ilse Harms, Heinz-Dirk Luckhardt und Hans W. Giessen
  17. Lepsky, K.: Automatisierung in der Sacherschließung : Maschinelles Indexieren von Titeldaten (1996) 0.01
    0.006275865 = product of:
      0.087862104 = sum of:
        0.087862104 = weight(_text_:elektronische in 3418) [ClassicSimilarity], result of:
          0.087862104 = score(doc=3418,freq=2.0), product of:
            0.14013545 = queryWeight, product of:
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.029633347 = queryNorm
            0.6269798 = fieldWeight in 3418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.728978 = idf(docFreq=1061, maxDocs=44218)
              0.09375 = fieldNorm(doc=3418)
      0.071428575 = coord(1/14)
    
    Source
    85. Deutscher Bibliothekartag in Göttingen 1995: Die Herausforderung der Bibliotheken durch elektronische Medien und neue Organisationsformen. Hrsg.: S. Wefers
  18. Hmeidi, I.; Kanaan, G.; Evens, M.: Design and implementation of automatic indexing for information retrieval with Arabic documents (1997) 0.01
    0.0061772587 = product of:
      0.043240808 = sum of:
        0.012107591 = weight(_text_:information in 1660) [ClassicSimilarity], result of:
          0.012107591 = score(doc=1660,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 1660, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1660)
        0.031133216 = weight(_text_:retrieval in 1660) [ClassicSimilarity], result of:
          0.031133216 = score(doc=1660,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.34732026 = fieldWeight in 1660, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=1660)
      0.14285715 = coord(2/14)
    
    Abstract
    A corpus of 242 abstracts of Arabic documents on computer science and information systems was assembled, using the Proceedings of the Saudi Arabian National Conferences as a source. Reports on the design and building, from scratch, of an automatic information retrieval system to handle Arabic data. Both automatic and manual indexing techniques were implemented. Experiments using measures of recall and precision have demonstrated that automatic indexing is at least as effective as manual indexing, and more effective in some cases. Automatic indexing is both cheaper and faster. Results suggest that a wider coverage of the literature can be achieved with less money, with results as good as those of manual indexing. Compares the retrieval results using words as index terms versus stems and roots, and confirms the results obtained by Al-Kharashi and Abu-Salem with smaller corpora, namely that root indexing is more effective than word indexing.
    Source
    Journal of the American Society for Information Science. 48(1997) no.10, S.867-881
  19. Plaunt, C.; Norgard, B.A.: An association-based method for automatic indexing with a controlled vocabulary (1998) 0.01
    0.0061724912 = product of:
      0.028804958 = sum of:
        0.0071344664 = weight(_text_:information in 1794) [ClassicSimilarity], result of:
          0.0071344664 = score(doc=1794,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.13714671 = fieldWeight in 1794, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1794)
        0.014978974 = weight(_text_:retrieval in 1794) [ClassicSimilarity], result of:
          0.014978974 = score(doc=1794,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.16710453 = fieldWeight in 1794, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1794)
        0.0066915164 = product of:
          0.020074548 = sum of:
            0.020074548 = weight(_text_:22 in 1794) [ClassicSimilarity], result of:
              0.020074548 = score(doc=1794,freq=2.0), product of:
                0.103770934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.029633347 = queryNorm
                0.19345059 = fieldWeight in 1794, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1794)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    In this article, we describe and test a two-stage algorithm based on a lexical collocation technique which maps from the lexical clues contained in a document representation into a controlled vocabulary list of subject headings. Using a collection of 4,626 INSPEC documents, we create a 'dictionary' of associations between the lexical items contained in the titles, authors, and abstracts and the controlled vocabulary subject headings assigned to those records by human indexers, using a likelihood ratio statistic as the measure of association. In the deployment stage, we use the dictionary to predict which of the controlled vocabulary subject headings best describe new documents when they are presented to the system. Our evaluation of this algorithm, in which we compare the automatically assigned subject headings to the subject headings assigned to the test documents by human catalogers, shows that we can obtain results comparable to, and consistent with, human cataloging. In effect we have cast this as a classic partial-match information retrieval problem. We consider the problem to be one of 'retrieving' (or assigning) the most probably 'relevant' (or correct) controlled vocabulary subject headings to a document, based on the clues contained in that document.
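    A common choice for such an association measure is Dunning's log-likelihood ratio over a 2x2 contingency table; the sketch below illustrates the idea with invented counts (the article's exact statistic and thresholds may differ):

```python
import math

def llr(k11, k12, k21, k22):
    """Log-likelihood ratio (G^2) for a term/heading 2x2 contingency table.
    k11: records with both term and heading; k12: term only;
    k21: heading only; k22: neither."""
    def h(*ks):  # entropy-style helper: sum of k * log(k / n)
        n = sum(ks)
        return sum(k * math.log(k / n) for k in ks if k > 0)
    return 2 * (h(k11, k12, k21, k22) - h(k11 + k12, k21 + k22) - h(k11 + k21, k12 + k22))

# term in 100 of 1000 records, heading in 50; co-occurrence of 30 is far above chance
print(llr(30, 70, 20, 880))  # large value: strong association
print(llr(5, 95, 45, 855))   # ~0: exactly what independence predicts
```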
    Date
    11. 9.2000 19:53:22
    Source
    Journal of the American Society for Information Science. 49(1998) no.10, S.888-902
  20. Polity, Y.: Vers une ergonomie linguistique (1994) 0.01
    0.006002185 = product of:
      0.04201529 = sum of:
        0.01804893 = weight(_text_:information in 36) [ClassicSimilarity], result of:
          0.01804893 = score(doc=36,freq=10.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.3469568 = fieldWeight in 36, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=36)
        0.023966359 = weight(_text_:retrieval in 36) [ClassicSimilarity], result of:
          0.023966359 = score(doc=36,freq=2.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.26736724 = fieldWeight in 36, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=36)
      0.14285715 = coord(2/14)
    
    Abstract
    Analyzed a special type of man-machine interaction, that of searching an information system with natural language. A model of full-text processing for information retrieval was proposed that considers the system's users and how they employ information. Describes how INIST (the National Institute for Scientific and Technical Information) is developing computer-assisted indexing as an aid to improving relevance when retrieving information from bibliographic data banks.
