Search (327 results, page 1 of 17)

  • theme_ss:"Automatisches Indexieren"
  1. Stankovic, R. et al.: Indexing of textual databases based on lexical resources : a case study for Serbian (2016) 0.08
    0.08098909 = product of:
      0.12148364 = sum of:
        0.010677542 = weight(_text_:in in 2759) [ClassicSimilarity], result of:
          0.010677542 = score(doc=2759,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.15028831 = fieldWeight in 2759, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2759)
        0.1108061 = sum of:
          0.04004071 = weight(_text_:science in 2759) [ClassicSimilarity], result of:
            0.04004071 = score(doc=2759,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.2910318 = fieldWeight in 2759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.078125 = fieldNorm(doc=2759)
          0.07076539 = weight(_text_:22 in 2759) [ClassicSimilarity], result of:
            0.07076539 = score(doc=2759,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.38690117 = fieldWeight in 2759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2759)
      0.6666667 = coord(2/3)
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
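Every relevance score in this listing is an expanded Lucene explain tree for ClassicSimilarity (TF-IDF): per matching term, score = queryWeight × fieldWeight, with tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), the term scores summed and scaled by coord (the fraction of query clauses matched). As a sanity check, a minimal Python sketch that reproduces the 0.08098909 of result 1 above; queryNorm and fieldNorm are copied verbatim from the tree, since they depend on the full query and on index-time field statistics.

```python
import math

# ClassicSimilarity building blocks (Lucene's classic TF-IDF).
def idf(doc_freq, max_docs=44218):
    return 1.0 + math.log(max_docs / (doc_freq + 1))   # e.g. ~1.3602545 for "in"

def tf(freq):
    return math.sqrt(freq)                             # e.g. ~1.4142135 for freq=2

QUERY_NORM = 0.052230705   # copied from the tree; normalizes across the query
FIELD_NORM = 0.078125      # copied from the tree; index-time field length norm

def term_score(doc_freq, freq):
    query_weight = idf(doc_freq) * QUERY_NORM              # "queryWeight" above
    field_weight = tf(freq) * idf(doc_freq) * FIELD_NORM   # "fieldWeight" above
    return query_weight * field_weight

# The three matching terms of result 1 (doc 2759), docFreq taken from the tree:
s_in      = term_score(doc_freq=30841, freq=2.0)       # ~0.010677542
s_science = term_score(doc_freq=8627,  freq=2.0)       # ~0.04004071
s_22      = term_score(doc_freq=3622,  freq=2.0)       # ~0.07076539

total = (s_in + s_science + s_22) * (2.0 / 3.0)        # coord(2/3)
print(f"{total:.8f}")                                  # ~0.08098909
```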
  2. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.07
    0.065803155 = product of:
      0.098704726 = sum of:
        0.021140454 = weight(_text_:in in 5001) [ClassicSimilarity], result of:
          0.021140454 = score(doc=5001,freq=16.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.29755569 = fieldWeight in 5001, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.07756427 = sum of:
          0.028028497 = weight(_text_:science in 5001) [ClassicSimilarity], result of:
            0.028028497 = score(doc=5001,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.20372227 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
          0.049535774 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
            0.049535774 = score(doc=5001,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.2708308 = fieldWeight in 5001, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5001)
      0.6666667 = coord(2/3)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate as closely as possible actual searching conditions. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves, user and information specialist ignorance of the subject vocabulary in use, and general language problems. Across fields it was found that the social sciences had the best retrieval rate, with science having the next best, and arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
  3. Newman, D.J.; Block, S.: Probabilistic topic decomposition of an eighteenth-century American newspaper (2006) 0.06
    0.058756333 = product of:
      0.0881345 = sum of:
        0.010570227 = weight(_text_:in in 5291) [ClassicSimilarity], result of:
          0.010570227 = score(doc=5291,freq=4.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.14877784 = fieldWeight in 5291, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5291)
        0.07756427 = sum of:
          0.028028497 = weight(_text_:science in 5291) [ClassicSimilarity], result of:
            0.028028497 = score(doc=5291,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.20372227 = fieldWeight in 5291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5291)
          0.049535774 = weight(_text_:22 in 5291) [ClassicSimilarity], result of:
            0.049535774 = score(doc=5291,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.2708308 = fieldWeight in 5291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5291)
      0.6666667 = coord(2/3)
    
    Abstract
    We use a probabilistic mixture decomposition method to determine topics in the Pennsylvania Gazette, a major colonial U.S. newspaper from 1728-1800. We assess the value of several topic decomposition techniques for historical research and compare the accuracy and efficacy of various methods. After determining the topics covered by the 80,000 articles and advertisements in the entire 18th century run of the Gazette, we calculate how the prevalence of those topics changed over time, and give historically relevant examples of our findings. This approach reveals important information about the content of this colonial newspaper, and suggests the value of such approaches to a more complete understanding of early American print culture and society.
    Date
    22. 7.2006 17:32:00
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.6, S.753-767
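Entry 3 describes a probabilistic topic decomposition of a newspaper corpus. As a rough illustration of the technique, a minimal sketch using scikit-learn's LDA; the Gazette corpus and the paper's exact mixture model are not given in this record, so the three `articles` below are hypothetical stand-ins.

```python
# A rough stand-in for the probabilistic topic decomposition described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = [
    "ship arrived from London with goods and wares for sale",
    "runaway servant reward offered by the subscriber",
    "assembly debates an act concerning paper currency",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(articles)                  # document-term counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)                # per-document topic mixture

# Top words per topic; averaging doc_topics per publication year would give
# the prevalence-over-time curves the abstract mentions.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```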
  4. Ward, M.L.: The future of the human indexer (1996) 0.05
    0.053872727 = product of:
      0.08080909 = sum of:
        0.014325427 = weight(_text_:in in 7244) [ClassicSimilarity], result of:
          0.014325427 = score(doc=7244,freq=10.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.20163295 = fieldWeight in 7244, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=7244)
        0.06648366 = sum of:
          0.024024425 = weight(_text_:science in 7244) [ClassicSimilarity], result of:
            0.024024425 = score(doc=7244,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.17461908 = fieldWeight in 7244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.046875 = fieldNorm(doc=7244)
          0.042459235 = weight(_text_:22 in 7244) [ClassicSimilarity], result of:
            0.042459235 = score(doc=7244,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.23214069 = fieldWeight in 7244, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=7244)
      0.6666667 = coord(2/3)
    
    Abstract
    Considers the principles of indexing and the intellectual skills involved, in order to determine what would be required of automatic indexing systems to supplant or complement the human indexer. Good indexing requires: considerable prior knowledge of the literature; judgement as to what to index and to what depth; reading skills; abstracting skills; and classification skills. Illustrates these features with a detailed description of the abstracting and indexing processes involved in generating entries for the mechanical engineering database POWERLINK. Briefly assesses the possibility of replacing human indexers with specialist indexing software, with particular reference to the Object Analyzer from the InTEXT automatic indexing system, using the criteria described for human indexers. At present it is unlikely that the automatic indexer will replace the human indexer, but when more primary texts are available in electronic form it may be a useful productivity tool for dealing with large quantities of low-grade texts (should they be wanted in the database).
    Date
    9. 2.1997 18:44:22
    Source
    Journal of librarianship and information science. 28(1996) no.4, S.217-225
  5. Milstead, J.L.: Thesauri in a full-text world (1998) 0.05
    0.049582195 = product of:
      0.07437329 = sum of:
        0.010677542 = weight(_text_:in in 2337) [ClassicSimilarity], result of:
          0.010677542 = score(doc=2337,freq=8.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.15028831 = fieldWeight in 2337, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2337)
        0.06369575 = sum of:
          0.028313057 = weight(_text_:science in 2337) [ClassicSimilarity], result of:
            0.028313057 = score(doc=2337,freq=4.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.20579056 = fieldWeight in 2337, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2337)
          0.035382695 = weight(_text_:22 in 2337) [ClassicSimilarity], result of:
            0.035382695 = score(doc=2337,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.19345059 = fieldWeight in 2337, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2337)
      0.6666667 = coord(2/3)
    
    Abstract
    Despite early claims to the contrary, thesauri continue to find use as access tools for information in the full-text environment. Their mode of use is changing, but this change actually represents an expansion rather than a contradiction of their utility. Thesauri and similar vocabulary tools can complement full-text access by aiding users in focusing their searches, by supplementing the linguistic analysis of the text search engine, and even by serving as one of the tools used by the linguistic engine for its analysis. While human indexing continues to be used for many databases, the trend is to increase the use of machine aids for this purpose. All machine-aided indexing (MAI) systems rely on thesauri as the basis for term selection. In the 21st century, the balance of effort between human and machine will change at both input and output, but thesauri will continue to play an important role for the foreseeable future.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
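Entry 5 notes that machine-aided indexing (MAI) systems rely on thesauri for term selection. A minimal sketch of that idea: string matches in the text are mapped to the thesaurus's preferred terms and proposed to the indexer. The four-entry THESAURUS below is hypothetical.

```python
# Thesaurus-based term suggestion: entry terms (including synonyms) map to
# preferred terms, which become candidate index terms for the document.
THESAURUS = {          # entry term (lowercase) -> preferred term
    "cars": "Automobiles",
    "automobiles": "Automobiles",
    "smog": "Air pollution",
    "air pollution": "Air pollution",
}

def suggest_terms(text: str) -> set[str]:
    low = text.lower()
    return {pref for entry, pref in THESAURUS.items() if entry in low}

print(suggest_terms("Smog from cars remains a problem in large cities."))
# -> {'Air pollution', 'Automobiles'} (set order may vary)
```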
  6. Voorhees, E.M.: Implementing agglomerative hierarchic clustering algorithms for use in document retrieval (1986) 0.05
    0.049130917 = product of:
      0.073696375 = sum of:
        0.017084066 = weight(_text_:in in 402) [ClassicSimilarity], result of:
          0.017084066 = score(doc=402,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.24046129 = fieldWeight in 402, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=402)
        0.056612313 = product of:
          0.113224626 = sum of:
            0.113224626 = weight(_text_:22 in 402) [ClassicSimilarity], result of:
              0.113224626 = score(doc=402,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.61904186 = fieldWeight in 402, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=402)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Information processing and management. 22(1986) no.6, S.465-476
  7. Plaunt, C.; Norgard, B.A.: An association-based method for automatic indexing with a controlled vocabulary (1998) 0.05
    0.046352074 = product of:
      0.06952811 = sum of:
        0.014125061 = weight(_text_:in in 1794) [ClassicSimilarity], result of:
          0.014125061 = score(doc=1794,freq=14.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.19881277 = fieldWeight in 1794, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1794)
        0.05540305 = sum of:
          0.020020355 = weight(_text_:science in 1794) [ClassicSimilarity], result of:
            0.020020355 = score(doc=1794,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.1455159 = fieldWeight in 1794, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1794)
          0.035382695 = weight(_text_:22 in 1794) [ClassicSimilarity], result of:
            0.035382695 = score(doc=1794,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.19345059 = fieldWeight in 1794, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1794)
      0.6666667 = coord(2/3)
    
    Abstract
    In this article, we describe and test a two-stage algorithm based on a lexical collocation technique which maps from the lexical clues contained in a document representation into a controlled vocabulary list of subject headings. Using a collection of 4,626 INSPEC documents, we create a 'dictionary' of associations between the lexical items contained in the titles, authors, and abstracts, and the controlled vocabulary subject headings assigned to those records by human indexers, using a likelihood ratio statistic as the measure of association. In the deployment stage, we use the dictionary to predict which of the controlled vocabulary subject headings best describe new documents when they are presented to the system. Our evaluation of this algorithm, in which we compare the automatically assigned subject headings to the subject headings assigned to the test documents by human catalogers, shows that we can obtain results comparable to, and consistent with, human cataloging. In effect we have cast this as a classic partial match information retrieval problem. We consider the problem to be one of 'retrieving' (or assigning) the most probably 'relevant' (or correct) controlled vocabulary subject headings to a document based on the clues contained in that document.
    Date
    11. 9.2000 19:53:22
    Source
    Journal of the American Society for Information Science. 49(1998) no.10, S.888-902
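Entry 7 maps lexical clues to controlled vocabulary headings via a likelihood ratio statistic. A toy sketch under that description: (term, heading) pairs from a hypothetical training set are scored with Dunning's log-likelihood ratio, standing in for the paper's statistic (the exact details are not given in this record), and headings for a new document are ranked by summed score.

```python
import math
from collections import Counter
from itertools import product

train = [   # (document terms, headings assigned by human indexers) -- hypothetical
    ({"neural", "network", "training"}, {"Neural networks"}),
    ({"neural", "classifier"},          {"Neural networks", "Pattern recognition"}),
    ({"database", "query", "index"},    {"Information retrieval"}),
]

n = len(train)
term_n, head_n, pair_n = Counter(), Counter(), Counter()
for terms, heads in train:
    term_n.update(terms)
    head_n.update(heads)
    pair_n.update(product(terms, heads))

def _ll(k, m, p):
    # Log-likelihood of k successes in m trials at rate p (degenerate-safe).
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return k * math.log(p) + (m - k) * math.log(1.0 - p)

def llr(term, head):
    # Dunning's G^2, kept only when the term-heading association is positive.
    k1, n1 = pair_n[(term, head)], term_n[term]
    k2, n2 = head_n[head] - k1, n - n1
    p, p1 = head_n[head] / n, k1 / n1
    p2 = k2 / n2 if n2 else 0.0
    if p1 <= p:
        return 0.0
    return 2.0 * (_ll(k1, n1, p1) + _ll(k2, n2, p2)
                  - _ll(k1, n1, p) - _ll(k2, n2, p))

def assign(doc_terms):
    # Rank headings for a new document by summed association over its terms.
    scores = Counter()
    for t, h in product(doc_terms & term_n.keys(), head_n):
        scores[h] += llr(t, h)
    return scores.most_common(2)

print(assign({"neural", "training"}))   # "Neural networks" ranks first
```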
  8. Junger, U.; Schwens, U.: Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.04
    0.044893935 = product of:
      0.0673409 = sum of:
        0.011937855 = weight(_text_:in in 3780) [ClassicSimilarity], result of:
          0.011937855 = score(doc=3780,freq=10.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.16802745 = fieldWeight in 3780, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3780)
        0.05540305 = sum of:
          0.020020355 = weight(_text_:science in 3780) [ClassicSimilarity], result of:
            0.020020355 = score(doc=3780,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.1455159 = fieldWeight in 3780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3780)
          0.035382695 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
            0.035382695 = score(doc=3780,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.19345059 = fieldWeight in 3780, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3780)
      0.6666667 = coord(2/3)
    
    Abstract
    We live in the 21st century, and much of what would have been dismissed as science fiction a hundred or even fifty years ago has since become reality. Space probes fly to Mars, conduct experiments there, and send data back to Earth. Robots take over routine tasks, for example in industry or in medicine. Digitization, artificial intelligence, and automated processes have become an integral part of our everyday lives, and learning algorithms underlie many of these processes. The advancing digital transformation is global and spans all areas of life and work: the economy, society, and politics. It opens up new possibilities from which libraries can profit as well. The sharp rise in digital publications, which constitute an important and proportionally ever-growing part of our cultural heritage, should prompt libraries to take up and apply these possibilities actively. The machine analyzability of digital content, for example through text and data mining (TDM), and the development of technical methods for interlinking content and relating it semantically create room to rethink library indexing procedures as well. The German National Library (Deutsche Nationalbibliothek, DNB) has therefore been investigating for several years how the processes for the subject indexing of publications can be improved and supported by machines. In doing so it maintains a regular collegial exchange with other libraries that are actively working on the same question, as well as with European national libraries that are in turn interested in the topic and in the DNB's experience. As a national library with extensive holdings of digital publications, the DNB has also built up expertise in digital long-term preservation and is valued as a competent partner within its network.
    Date
    19. 8.2017 9:24:22
  9. Greiner-Petter, A.; Schubotz, M.; Cohl, H.S.; Gipp, B.: Semantic preserving bijective mappings for expressions involving special functions between computer algebra systems and document preparation systems (2019) 0.04
    0.038090326 = product of:
      0.05713549 = sum of:
        0.012813049 = weight(_text_:in in 5499) [ClassicSimilarity], result of:
          0.012813049 = score(doc=5499,freq=18.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.18034597 = fieldWeight in 5499, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=5499)
        0.04432244 = sum of:
          0.016016284 = weight(_text_:science in 5499) [ClassicSimilarity], result of:
            0.016016284 = score(doc=5499,freq=2.0), product of:
              0.1375819 = queryWeight, product of:
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.052230705 = queryNorm
              0.11641272 = fieldWeight in 5499, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.6341193 = idf(docFreq=8627, maxDocs=44218)
                0.03125 = fieldNorm(doc=5499)
          0.028306156 = weight(_text_:22 in 5499) [ClassicSimilarity], result of:
            0.028306156 = score(doc=5499,freq=2.0), product of:
              0.18290302 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052230705 = queryNorm
              0.15476047 = fieldWeight in 5499, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5499)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose: Modern mathematicians and scientists of math-related disciplines often use Document Preparation Systems (DPS) to write and Computer Algebra Systems (CAS) to calculate mathematical expressions. Usually, they translate the expressions manually between DPS and CAS. This process is time-consuming and error-prone. The purpose of this paper is to automate this translation. This paper uses Maple and Mathematica as the CAS, and LaTeX as the DPS. Design/methodology/approach: Bruce Miller at the National Institute of Standards and Technology (NIST) developed a collection of special LaTeX macros that create links from mathematical symbols to their definitions in the NIST Digital Library of Mathematical Functions (DLMF). The authors use these macros to perform rule-based translations between the formulae in the DLMF and CAS. Moreover, the authors develop software to ease the creation of new rules and to discover inconsistencies. Findings: The authors created 396 mappings and translated 58.8 percent of DLMF formulae (2,405 expressions) successfully between Maple and DLMF. For a significant percentage, the special function definitions in Maple and the DLMF were different. An atomic symbol in one system maps to a composite expression in the other system. The translator was also successfully used for automatic verification of mathematical online compendia and CAS. The evaluation techniques discovered two errors in the DLMF and one defect in Maple. Originality/value: This paper introduces the first translation tool for special functions between LaTeX and CAS. The approach improves error-prone manual translations and can be used to verify mathematical online compendia and CAS.
    Date
    20. 1.2015 18:30:22
    Footnote
    Contribution in a Special Issue: Information Science in the German-speaking Countries.
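Entry 9 performs rule-based translation between semantic LaTeX macros and CAS syntax. A toy sketch of the idea: a hand-made rule table rewrites macros (written here in the style of DLMF markup) into Maple calls. The two rules are invented for illustration and are not the paper's actual 396 mappings.

```python
import re

RULES = [  # (LaTeX pattern, Maple replacement template)
    (re.compile(r"\\BesselJ\{(?P<nu>[^}]*)\}@\{(?P<z>[^}]*)\}"),
     r"BesselJ(\g<nu>, \g<z>)"),
    (re.compile(r"\\EulerGamma@\{(?P<z>[^}]*)\}"),
     r"GAMMA(\g<z>)"),
]

def latex_to_maple(expr: str) -> str:
    # Apply each rewrite rule in turn; unmatched text passes through unchanged.
    for pattern, template in RULES:
        expr = pattern.sub(template, expr)
    return expr

print(latex_to_maple(r"\EulerGamma@{z+1} - \BesselJ{n}@{x}"))
# -> GAMMA(z+1) - BesselJ(n, x)
```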
  10. Research and development in information retrieval : Proc., Berlin, 18.-20.5.1982 (1983) 0.04
    0.03746206 = product of:
      0.056193087 = sum of:
        0.02416052 = weight(_text_:in in 2332) [ClassicSimilarity], result of:
          0.02416052 = score(doc=2332,freq=4.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.34006363 = fieldWeight in 2332, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.125 = fieldNorm(doc=2332)
        0.032032568 = product of:
          0.064065136 = sum of:
            0.064065136 = weight(_text_:science in 2332) [ClassicSimilarity], result of:
              0.064065136 = score(doc=2332,freq=2.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.4656509 = fieldWeight in 2332, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.125 = fieldNorm(doc=2332)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Series
    Lecture notes in computer science; vol.146
  11. Hauer, M.: Automatische Indexierung (2000) 0.04
    0.03684819 = product of:
      0.055272285 = sum of:
        0.012813049 = weight(_text_:in in 5887) [ClassicSimilarity], result of:
          0.012813049 = score(doc=5887,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.18034597 = fieldWeight in 5887, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=5887)
        0.042459235 = product of:
          0.08491847 = sum of:
            0.08491847 = weight(_text_:22 in 5887) [ClassicSimilarity], result of:
              0.08491847 = score(doc=5887,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.46428138 = fieldWeight in 5887, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5887)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Wissen in Aktion: Wege des Knowledge Managements. 22. Online-Tagung der DGI, Frankfurt am Main, 2.-4.5.2000. Proceedings. Ed.: R. Schmidt
  12. Biebricher, N.; Fuhr, N.; Lustig, G.; Schwantner, M.; Knorz, G.: The automatic indexing system AIR/PHYS : from research to application (1988) 0.04
    0.03591783 = product of:
      0.053876743 = sum of:
        0.018494045 = weight(_text_:in in 1952) [ClassicSimilarity], result of:
          0.018494045 = score(doc=1952,freq=6.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.260307 = fieldWeight in 1952, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1952)
        0.035382695 = product of:
          0.07076539 = sum of:
            0.07076539 = weight(_text_:22 in 1952) [ClassicSimilarity], result of:
              0.07076539 = score(doc=1952,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.38690117 = fieldWeight in 1952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1952)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    16. 8.1998 12:51:22
    Footnote
    Reprinted in: Readings in information retrieval. Ed.: K. Sparck Jones and P. Willett. San Francisco: Morgan Kaufmann 1997. S.513-517.
    Source
    Proceedings of the 11th annual conference on research and development in information retrieval. Ed.: Y. Chiaramella
  13. Renz, M.: Automatische Inhaltserschließung im Zeichen von Wissensmanagement (2001) 0.03
    0.028717373 = product of:
      0.043076057 = sum of:
        0.01830817 = weight(_text_:in in 5671) [ClassicSimilarity], result of:
          0.01830817 = score(doc=5671,freq=12.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.2576908 = fieldWeight in 5671, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5671)
        0.024767887 = product of:
          0.049535774 = sum of:
            0.049535774 = weight(_text_:22 in 5671) [ClassicSimilarity], result of:
              0.049535774 = score(doc=5671,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2708308 = fieldWeight in 5671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5671)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Methods of automatic content indexing have been under development for more than 30 years without meeting with noticeable acceptance in the information and documentation community. At present, however, the growing flood of information and the need for efficient access methods in information and knowledge management are generating, among a broad range of users, increasing interest in these methods, intensified efforts in research and development, and new products. This article discusses various approaches to intelligent, content-based retrieval and to automatic content indexing, and presents commercially distributed software tools and solutions. It concludes that in the near future an increasing automation of certain components of information and knowledge management is to be expected, with software tools for automatic content indexing being integrated into the workflow.
    Date
    22. 3.2001 13:14:48
  14. Griffiths, A.; Luckhurst, H.C.; Willett, P.: Using interdocument similarity information in document retrieval systems (1986) 0.03
    0.02865137 = product of:
      0.042977054 = sum of:
        0.014948557 = weight(_text_:in in 2415) [ClassicSimilarity], result of:
          0.014948557 = score(doc=2415,freq=2.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.21040362 = fieldWeight in 2415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.109375 = fieldNorm(doc=2415)
        0.028028497 = product of:
          0.056056995 = sum of:
            0.056056995 = weight(_text_:science in 2415) [ClassicSimilarity], result of:
              0.056056995 = score(doc=2415,freq=2.0), product of:
                0.1375819 = queryWeight, product of:
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.052230705 = queryNorm
                0.40744454 = fieldWeight in 2415, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.6341193 = idf(docFreq=8627, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2415)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Source
    Journal of the American Society for Information Science. 37(1986) no.1, S.3-11
  15. Riloff, E.: An empirical study of automated dictionary construction for information extraction in three domains (1996) 0.03
    0.026924279 = product of:
      0.040386416 = sum of:
        0.01208026 = weight(_text_:in in 6752) [ClassicSimilarity], result of:
          0.01208026 = score(doc=6752,freq=4.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.17003182 = fieldWeight in 6752, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=6752)
        0.028306156 = product of:
          0.056612313 = sum of:
            0.056612313 = weight(_text_:22 in 6752) [ClassicSimilarity], result of:
              0.056612313 = score(doc=6752,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.30952093 = fieldWeight in 6752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6752)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    AutoSlog is a system that addresses the knowledge engineering bottleneck for information extraction. AutoSlog automatically creates domain-specific dictionaries for information extraction, given an appropriate training corpus. Describes experiments with AutoSlog in the terrorism, joint ventures, and microelectronics domains. Compares the performance of AutoSlog across the 3 domains, discusses the lessons learned, and presents results from 2 experiments which demonstrate that novice users can generate effective dictionaries using AutoSlog.
    Date
    6. 3.1997 16:22:15
  16. Glaesener, L.: Automatisches Indexieren einer informationswissenschaftlichen Datenbank mit Mehrwortgruppen (2012) 0.03
    0.026924279 = product of:
      0.040386416 = sum of:
        0.01208026 = weight(_text_:in in 401) [ClassicSimilarity], result of:
          0.01208026 = score(doc=401,freq=4.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.17003182 = fieldWeight in 401, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=401)
        0.028306156 = product of:
          0.056612313 = sum of:
            0.056612313 = weight(_text_:22 in 401) [ClassicSimilarity], result of:
              0.056612313 = score(doc=401,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.30952093 = fieldWeight in 401, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=401)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A report on the results and the process analysis of an automatic indexing run with multiword groups. This bachelor's thesis describes to what extent the content of information science texts can and should be represented by information science vocabulary, and shows that in these scholarly texts a large share of the subject matter occurs in multiword groups. The results were obtained by automatically indexing an information science database with multiword groups using the program Lingo.
    Date
    11. 9.2012 19:43:22
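Entry 16 indexes with multiword groups. Lingo itself works with dictionaries and grammar rules; as a rough frequency-based stand-in for the idea, recurring adjacent word pairs in a text can serve as candidate multiword index terms. The sample `text` below is invented.

```python
import re
from collections import Counter

text = """Automatic indexing assigns index terms to documents. Multiword
groups such as information retrieval carry much of the subject content,
so automatic indexing should treat such groups as units. Automatic
indexing benefits from multiword terms."""

# Tokenize, count adjacent word pairs, and keep those that recur.
words = re.findall(r"[a-zäöüß]+", text.lower())
bigrams = Counter(zip(words, words[1:]))

candidates = [" ".join(b) for b, c in bigrams.most_common() if c >= 2]
print(candidates)   # -> ['automatic indexing']
```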
  17. Bordoni, L.; Pazienza, M.T.: Documents automatic indexing in an environmental domain (1997) 0.03
    0.026477631 = product of:
      0.039716445 = sum of:
        0.014948557 = weight(_text_:in in 530) [ClassicSimilarity], result of:
          0.014948557 = score(doc=530,freq=8.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.21040362 = fieldWeight in 530, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=530)
        0.024767887 = product of:
          0.049535774 = sum of:
            0.049535774 = weight(_text_:22 in 530) [ClassicSimilarity], result of:
              0.049535774 = score(doc=530,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2708308 = fieldWeight in 530, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=530)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Describes an application of Natural Language Processing (NLP) techniques in HIRMA (Hypertextual Information Retrieval Managed by ARIOSTO) to the problem of document indexing: the system uses NLP to determine the subject of document texts and to associate them with relevant semantic indexes. Describes briefly the overall system, the details of its implementation on a corpus of scientific abstracts related to environmental topics, and experimental evidence of the system's behaviour. Analyzes in detail an experiment designed to evaluate the system's retrieval ability in terms of recall and precision.
    Source
    International forum on information and documentation. 22(1997) no.1, S.17-28
  18. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.03
    0.02514248 = product of:
      0.037713718 = sum of:
        0.012945832 = weight(_text_:in in 2673) [ClassicSimilarity], result of:
          0.012945832 = score(doc=2673,freq=6.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.1822149 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.024767887 = product of:
          0.049535774 = sum of:
            0.049535774 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.049535774 = score(doc=2673,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW.
    Date
    1. 8.1996 22:08:06
  19. Kasprzik, A.: Voraussetzungen und Anwendungspotentiale einer präzisen Sacherschließung aus Sicht der Wissenschaft (2018) 0.03
    0.02514248 = product of:
      0.037713718 = sum of:
        0.012945832 = weight(_text_:in in 5195) [ClassicSimilarity], result of:
          0.012945832 = score(doc=5195,freq=6.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.1822149 = fieldWeight in 5195, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5195)
        0.024767887 = product of:
          0.049535774 = sum of:
            0.049535774 = weight(_text_:22 in 5195) [ClassicSimilarity], result of:
              0.049535774 = score(doc=5195,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2708308 = fieldWeight in 5195, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5195)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Considerable attention is currently being paid to the potential of automated methods in subject indexing and to the possibilities for their interaction with intellectual methods. In this context, the present article addresses the following questions: What are the requirements for library metadata from the perspective of the scholarly community? What is needed to serve the information needs of the subject communities? And what does that imply for the automation of metadata creation and maintenance? This article summarizes the position taken by the author in an opening talk and in the panel discussion at the workshop of the FAG "Erschließung und Informationsvermittlung" of the GBV. The workshop took place within the framework of the 22nd GBV network conference (Verbundkonferenz).
  20. Franke-Maier, M.: Anforderungen an die Qualität der Inhaltserschließung im Spannungsfeld von intellektuell und automatisch erzeugten Metadaten (2018) 0.03
    0.02514248 = product of:
      0.037713718 = sum of:
        0.012945832 = weight(_text_:in in 5344) [ClassicSimilarity], result of:
          0.012945832 = score(doc=5344,freq=6.0), product of:
            0.07104705 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.052230705 = queryNorm
            0.1822149 = fieldWeight in 5344, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5344)
        0.024767887 = product of:
          0.049535774 = sum of:
            0.049535774 = weight(_text_:22 in 5344) [ClassicSimilarity], result of:
              0.049535774 = score(doc=5344,freq=2.0), product of:
                0.18290302 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052230705 = queryNorm
                0.2708308 = fieldWeight in 5344, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5344)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Since the Deutscher Bibliothekartag 2018 at the latest, the discussion of the German National Library's automatic indexing procedures has turned from a politically driven debate into a debate about quality. The following article deals with questions of the quality of subject indexing in the digital age, in which heterogeneous products of different procedures meet, and attempts to define key requirements for quality. This conference paper summarizes the ideas presented by the author as an opening talk at the workshop of the FAG "Erschließung und Informationsvermittlung" of the GBV on 29 August 2018 in Kiel. The workshop took place within the framework of the 22nd GBV network conference (Verbundkonferenz).

Types

  • a 280
  • el 36
  • x 20
  • m 12
  • s 6
  • d 2
  • p 1