Search (382 results, page 2 of 20)

  • Filter: theme_ss:"Automatisches Indexieren"
  1. Hodges, P.R.: Keyword in title indexes : effectiveness of retrieval in computer searches (1983) 0.01
    0.010281704 = product of:
      0.02570426 = sum of:
        0.006032446 = weight(_text_:a in 5001) [ClassicSimilarity], result of:
          0.006032446 = score(doc=5001,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.12611452 = fieldWeight in 5001, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5001)
        0.019671815 = product of:
          0.03934363 = sum of:
            0.03934363 = weight(_text_:22 in 5001) [ClassicSimilarity], result of:
              0.03934363 = score(doc=5001,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.2708308 = fieldWeight in 5001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5001)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    A study was done to test the effectiveness of retrieval using title word searching. It was based on actual search profiles used in the Mechanized Information Center at Ohio State University, in order to replicate actual searching conditions as closely as possible. Fewer than 50% of the relevant titles were retrieved by keywords in titles. The low rate of retrieval can be attributed to three sources: the titles themselves; user and information specialist ignorance of the subject vocabulary in use; and general language problems. Across fields it was found that the social sciences had the best retrieval rate, science the next best, and the arts and humanities the lowest. Ways to enhance and supplement keyword-in-title searching on the computer and in printed indexes are discussed.
    Date
    14. 3.1996 13:22:21
    Type
    a
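    The indented breakdown under each hit is Lucene explain output for the ClassicSimilarity (TF-IDF) ranking formula: each weight(_text_:term) node is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the coord(...) factors down-weight partial matches. A minimal Python sketch (function and variable names are ours) that recomputes the 0.01 score of this entry from the figures shown above:

      import math

      def weight(freq, doc_freq, max_docs, field_norm, query_norm):
          # One weight(_text_:term) node: queryWeight * fieldWeight
          tf = math.sqrt(freq)                             # tf(freq) = sqrt(termFreq)
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
          return (idf * query_norm) * (tf * idf * field_norm)

      q_norm, f_norm = 0.04148407, 0.0546875               # queryNorm, fieldNorm(doc=5001)
      w_a  = weight(4.0, 37942, 44218, f_norm, q_norm)     # ~0.006032446
      w_22 = weight(2.0, 3622, 44218, f_norm, q_norm)      # ~0.03934363
      score = 0.4 * (w_a + 0.5 * w_22)                     # coord(2/5) and coord(1/2)
      print(score)                                         # ~0.010281704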
  2. Kasprzik, A.: Voraussetzungen und Anwendungspotentiale einer präzisen Sacherschließung aus Sicht der Wissenschaft (2018) 0.01
    0.010281704 = product of:
      0.02570426 = sum of:
        0.006032446 = weight(_text_:a in 5195) [ClassicSimilarity], result of:
          0.006032446 = score(doc=5195,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.12611452 = fieldWeight in 5195, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5195)
        0.019671815 = product of:
          0.03934363 = sum of:
            0.03934363 = weight(_text_:22 in 5195) [ClassicSimilarity], result of:
              0.03934363 = score(doc=5195,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.2708308 = fieldWeight in 5195, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5195)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Much attention is currently directed at the potential of automated methods in subject indexing and at their possibilities for interaction with intellectual methods. In this context, the present contribution addresses the following questions: What are the requirements for library metadata from the perspective of research? What is needed to serve the information needs of the scholarly communities? And what does that imply for the automation of metadata creation and maintenance? This contribution summarizes the position taken by the author in an impulse talk and the panel discussion at the workshop of the FAG "Erschließung und Informationsvermittlung" of the GBV. The workshop took place as part of the 22nd Verbundkonferenz of the GBV.
    Type
    a
  3. Plaunt, C.; Norgard, B.A.: ¬An association-based method for automatic indexing with a controlled vocabulary (1998) 0.01
    0.009662616 = product of:
      0.02415654 = sum of:
        0.010105243 = weight(_text_:a in 1794) [ClassicSimilarity], result of:
          0.010105243 = score(doc=1794,freq=22.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.21126054 = fieldWeight in 1794, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1794)
        0.014051297 = product of:
          0.028102593 = sum of:
            0.028102593 = weight(_text_:22 in 1794) [ClassicSimilarity], result of:
              0.028102593 = score(doc=1794,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.19345059 = fieldWeight in 1794, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1794)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In this article, we describe and test a two-stage algorithm based on a lexical collocation technique which maps from the lexical clues contained in a document representation into a controlled vocabulary list of subject headings. Using a collection of 4,626 INSPEC documents, we create a 'dictionary' of associations between the lexical items contained in the titles, authors, and abstracts, and the controlled vocabulary subject headings assigned to those records by human indexers, using a likelihood ratio statistic as the measure of association. In the deployment stage, we use the dictionary to predict which of the controlled vocabulary subject headings best describe new documents when they are presented to the system. Our evaluation of this algorithm, in which we compare the automatically assigned subject headings to the subject headings assigned to the test documents by human catalogers, shows that we can obtain results comparable to, and consistent with, human cataloging. In effect, we have cast this as a classic partial-match information retrieval problem: we consider the problem to be one of 'retrieving' (or assigning) the most probably 'relevant' (or correct) controlled vocabulary subject headings to a document, based on the clues contained in that document.
    Date
    11. 9.2000 19:53:22
    Type
    a
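    The association stage described above lends itself to a compact sketch. The following assumes Dunning's log-likelihood ratio as the "likelihood ratio statistic" and uses invented co-occurrence counts; the names and the cut-off value are ours, not the authors':

      import math

      def llr(k11, k12, k21, k22):
          # Dunning's log-likelihood ratio over a 2x2 contingency table:
          # k11 = records with both lexical item and heading, k12 = item only,
          # k21 = heading only, k22 = neither.
          def s(*ks):
              n = sum(ks)
              return sum(k * math.log(k / n) for k in ks if k > 0)
          return 2.0 * (s(k11, k12, k21, k22)
                        - s(k11 + k12, k21 + k22)
                        - s(k11 + k21, k12 + k22))

      # Association 'dictionary': keep (item, heading) pairs scoring above a cut-off.
      pairs = {("neural", "Neural networks"): llr(40, 10, 15, 4561),
               ("neural", "Databases"):       llr(2, 48, 200, 4376)}
      dictionary = {p: v for p, v in pairs.items() if v > 10.0}

      # Deployment: rank headings for a new document by summed association.
      doc_items = ["neural"]
      scores = {}
      for (item, heading), v in dictionary.items():
          if item in doc_items:
              scores[heading] = scores.get(heading, 0.0) + v
      print(max(scores, key=scores.get))   # 'Neural networks'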
  4. Renz, M.: Automatische Inhaltserschließung im Zeichen von Wissensmanagement (2001) 0.01
    0.009574959 = product of:
      0.023937397 = sum of:
        0.004265583 = weight(_text_:a in 5671) [ClassicSimilarity], result of:
          0.004265583 = score(doc=5671,freq=2.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.089176424 = fieldWeight in 5671, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5671)
        0.019671815 = product of:
          0.03934363 = sum of:
            0.03934363 = weight(_text_:22 in 5671) [ClassicSimilarity], result of:
              0.03934363 = score(doc=5671,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.2708308 = fieldWeight in 5671, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5671)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 3.2001 13:14:48
    Type
    a
  5. Franke-Maier, M.: Anforderungen an die Qualität der Inhaltserschließung im Spannungsfeld von intellektuell und automatisch erzeugten Metadaten (2018) 0.01
    0.009574959 = product of:
      0.023937397 = sum of:
        0.004265583 = weight(_text_:a in 5344) [ClassicSimilarity], result of:
          0.004265583 = score(doc=5344,freq=2.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.089176424 = fieldWeight in 5344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5344)
        0.019671815 = product of:
          0.03934363 = sum of:
            0.03934363 = weight(_text_:22 in 5344) [ClassicSimilarity], result of:
              0.03934363 = score(doc=5344,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.2708308 = fieldWeight in 5344, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5344)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    At the latest since the 2018 Deutscher Bibliothekartag, the discussion of the automatic subject indexing procedures of the Deutsche Nationalbibliothek has turned from a politically driven debate into a debate about quality. The following contribution deals with questions of the quality of subject indexing in the digital age, where heterogeneous products of different procedures meet, and attempts to define important requirements for quality. This conference contribution summarizes the ideas presented by the author as impulses at the workshop of the FAG "Erschließung und Informationsvermittlung" of the GBV on 29 August 2018 in Kiel. The workshop took place as part of the 22nd Verbundkonferenz of the GBV.
    Type
    a
  6. Ward, M.L.: ¬The future of the human indexer (1996) 0.01
    0.009277722 = product of:
      0.023194304 = sum of:
        0.006332749 = weight(_text_:a in 7244) [ClassicSimilarity], result of:
          0.006332749 = score(doc=7244,freq=6.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.13239266 = fieldWeight in 7244, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=7244)
        0.016861554 = product of:
          0.03372311 = sum of:
            0.03372311 = weight(_text_:22 in 7244) [ClassicSimilarity], result of:
              0.03372311 = score(doc=7244,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.23214069 = fieldWeight in 7244, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7244)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Considers the principles of indexing and the intellectual skills involved, in order to determine what would be required of automatic indexing systems if they were to supplant or complement the human indexer. Good indexing requires: considerable prior knowledge of the literature; judgement as to what to index and at what depth; reading skills; abstracting skills; and classification skills. Illustrates these features with a detailed description of the abstracting and indexing processes involved in generating entries for the mechanical engineering database POWERLINK. Briefly assesses the possibility of replacing human indexers with specialist indexing software, with particular reference to the Object Analyzer from the InTEXT automatic indexing system, using the criteria described for human indexers. At present, it is unlikely that the automatic indexer will replace the human indexer, but when more primary texts are available in electronic form, it may be a useful productivity tool for dealing with large quantities of low-grade texts (should they be wanted in the database).
    Date
    9. 2.1997 18:44:22
    Type
    a
  7. Mesquita, L.A.P.; Souza, R.R.; Baracho Porto, R.M.A.: Noun phrases in automatic indexing : a structural analysis of the distribution of relevant terms in doctoral theses (2014) 0.01
    0.008272537 = product of:
      0.020681342 = sum of:
        0.009440305 = weight(_text_:a in 1442) [ClassicSimilarity], result of:
          0.009440305 = score(doc=1442,freq=30.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.19735932 = fieldWeight in 1442, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1442)
        0.011241037 = product of:
          0.022482075 = sum of:
            0.022482075 = weight(_text_:22 in 1442) [ClassicSimilarity], result of:
              0.022482075 = score(doc=1442,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.15476047 = fieldWeight in 1442, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1442)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The main objective of this research was to analyze whether relevant terms show a characteristic distribution behavior over a scientific text that could serve as a criterion for their automatic indexing. The terms considered in this study were only the full noun phrases contained in the texts themselves. The texts considered were a total of 98 doctoral theses from the eight areas of knowledge at a single university. Initially, 20 full noun phrases were automatically extracted from each text as candidates for its most relevant terms, and the author of each text assigned a relevance value from 0 to 6 (not relevant to highly relevant, respectively) to each of the 20 noun phrases sent. Only 22.1% of the noun phrases were considered not relevant. The relevance values assigned by the authors were then associated with the positions of the terms in the text, each full noun phrase found in the text counting as a valid linear position. The resulting distributions were examined under two types of position: linear, with values consolidated into ten equal consecutive parts; and structural, considering parts of the text (such as introduction, development and conclusion). As a result of considerable importance, all areas of knowledge related to the Natural Sciences showed a characteristic behavior in the distribution of relevant terms, and all areas of knowledge related to the Social Sciences likewise showed a shared characteristic distribution behavior, but one distinct from that of the Natural Sciences. The difference in distribution behavior between the Natural and Social Sciences can be clearly visualized in graphs. All behaviors, including the general behavior of all areas of knowledge together, were characterized as polynomial equations and can be applied in the future as criteria for automatic indexing. To date this work is novel for two reasons: it presents a method for characterizing the distribution of relevant terms in a scientific text, and, through this method, it points out a quantitative difference between the Natural and Social Sciences.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
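    The positional analysis described above (relevance consolidated into ten equal parts, then characterized as a polynomial) can be sketched in a few lines; the relevance figures here are invented for illustration, not taken from the study:

      import numpy as np

      # Invented data: mean author-assigned relevance (0-6) of the noun
      # phrases falling in each tenth of the text, for one area of knowledge.
      position = np.linspace(0.05, 0.95, 10)   # midpoints of the ten parts
      relevance = np.array([4.8, 4.1, 3.5, 3.2, 3.0, 3.1, 3.3, 3.6, 4.0, 4.6])

      # Characterize the distribution as a polynomial, as the study does.
      model = np.poly1d(np.polyfit(position, relevance, deg=2))
      print(model)        # the fitted equation, usable as an indexing criterion
      print(model(0.5))   # predicted relevance of a term in mid-text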
  8. Busch, D.: Domänenspezifische hybride automatische Indexierung von bibliographischen Metadaten (2019) 0.01
    0.008207108 = product of:
      0.020517768 = sum of:
        0.003656214 = weight(_text_:a in 5628) [ClassicSimilarity], result of:
          0.003656214 = score(doc=5628,freq=2.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.07643694 = fieldWeight in 5628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=5628)
        0.016861554 = product of:
          0.03372311 = sum of:
            0.03372311 = weight(_text_:22 in 5628) [ClassicSimilarity], result of:
              0.03372311 = score(doc=5628,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.23214069 = fieldWeight in 5628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5628)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    B.I.T.online. 22(2019) H.6, S.465-469
    Type
    a
  9. Milstead, J.L.: Thesauri in a full-text world (1998) 0.01
    0.007731435 = product of:
      0.019328587 = sum of:
        0.005277291 = weight(_text_:a in 2337) [ClassicSimilarity], result of:
          0.005277291 = score(doc=2337,freq=6.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.11032722 = fieldWeight in 2337, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2337)
        0.014051297 = product of:
          0.028102593 = sum of:
            0.028102593 = weight(_text_:22 in 2337) [ClassicSimilarity], result of:
              0.028102593 = score(doc=2337,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.19345059 = fieldWeight in 2337, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2337)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Despite early claims to the contrary, thesauri continue to find use as access tools for information in the full-text environment. Their mode of use is changing, but this change actually represents an expansion rather than a contradiction of their utility. Thesauri and similar vocabulary tools can complement full-text access by aiding users in focusing their searches, by supplementing the linguistic analysis of the text search engine, and even by serving as one of the tools used by the linguistic engine for its analysis. While human indexing continues to be used for many databases, the trend is to increase the use of machine aids for this purpose. All machine-aided indexing (MAI) systems rely on thesauri as the basis for term selection. In the 21st century, the balance of effort between human and machine will change at both input and output, but thesauri will continue to play an important role for the foreseeable future.
    Date
    22. 9.1997 19:16:05
    Type
    a
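    One of the complementary uses named above, focusing a search by expanding a query term with thesaurus relations, reduces to a small lookup. The thesaurus entries below are invented for illustration:

      # Invented thesaurus fragment: USE = preferred term, RT = related terms.
      THESAURUS = {
          "indexing": {"USE": "automatic indexing",
                       "RT": ["classification", "thesauri"]},
      }

      def expand(term):
          entry = THESAURUS.get(term, {})
          return [entry.get("USE", term)] + entry.get("RT", [])

      print(expand("indexing"))  # ['automatic indexing', 'classification', 'thesauri']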
  10. Martins, A.L.; Souza, R.R.; Ribeiro de Mello, H.: ¬The use of noun phrases in information retrieval : proposing a mechanism for automatic classification (2014) 0.01
    0.0075796056 = product of:
      0.018949013 = sum of:
        0.0077079763 = weight(_text_:a in 1441) [ClassicSimilarity], result of:
          0.0077079763 = score(doc=1441,freq=20.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.16114321 = fieldWeight in 1441, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1441)
        0.011241037 = product of:
          0.022482075 = sum of:
            0.022482075 = weight(_text_:22 in 1441) [ClassicSimilarity], result of:
              0.022482075 = score(doc=1441,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.15476047 = fieldWeight in 1441, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1441)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper presents research on the syntactic structures known as noun phrases (NPs), applied to increase the effectiveness and efficiency of mechanisms for document classification. Our hypothesis is that NPs can be used instead of single words as semantic aggregators, reducing the number of words the classification system must handle without losing semantic coverage, and thereby increasing its efficiency. The experiment divided the document classification process into three phases: a) NP preprocessing; b) system training; and c) classification experiments. In the first step, a corpus of digitized texts was submitted to a natural language processing platform in which part-of-speech tagging was done, and then Perl scripts belonging to the PALAVRAS package were used to extract the noun phrases. The preprocessing also involved: a) removing low-meaning NP pre-modifiers, such as quantifiers; b) identifying synonyms and substituting common hyperonyms for them; and c) stemming the relevant words contained in the NPs, for similarity checking against other NPs. The first tests with the resulting documents demonstrated the approach's effectiveness: comparing the structural similarity of the documents before and after the whole preprocessing of phase one, the texts remained consistent with the originals and kept their readability. The second phase involves submitting the modified documents to an SVM algorithm to identify clusters and classify the documents, with the classification rules established using a machine learning approach. Finally, tests will be conducted to check the effectiveness of the whole process.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
    Type
    a
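    Phases b) and c) above amount to training a classifier on NP features. A minimal scikit-learn sketch, assuming each document has already been reduced to its preprocessed noun phrases (the documents and labels are invented):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # NPs joined with '_' so each phrase survives tokenization as one feature.
      docs = ["information_retrieval noun_phrase semantic_aggregat",
              "svm_algorithm document_classif machine_learn"]
      labels = ["information retrieval", "machine learning"]

      # Train an SVM on TF-IDF-weighted noun-phrase features, then classify.
      clf = make_pipeline(TfidfVectorizer(token_pattern=r"\S+"), LinearSVC())
      clf.fit(docs, labels)
      print(clf.predict(["noun_phrase information_retrieval"]))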
  11. Junger, U.; Schwens, U.: ¬Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.01
    0.0068392567 = product of:
      0.017098142 = sum of:
        0.0030468449 = weight(_text_:a in 3780) [ClassicSimilarity], result of:
          0.0030468449 = score(doc=3780,freq=2.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.06369744 = fieldWeight in 3780, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3780)
        0.014051297 = product of:
          0.028102593 = sum of:
            0.028102593 = weight(_text_:22 in 3780) [ClassicSimilarity], result of:
              0.028102593 = score(doc=3780,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.19345059 = fieldWeight in 3780, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3780)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    19. 8.2017 9:24:22
    Type
    a
  12. Greiner-Petter, A.; Schubotz, M.; Cohl, H.S.; Gipp, B.: Semantic preserving bijective mappings for expressions involving special functions between computer algebra systems and document preparation systems (2019) 0.01
    0.00667656 = product of:
      0.0166914 = sum of:
        0.0054503623 = weight(_text_:a in 5499) [ClassicSimilarity], result of:
          0.0054503623 = score(doc=5499,freq=10.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.11394546 = fieldWeight in 5499, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=5499)
        0.011241037 = product of:
          0.022482075 = sum of:
            0.022482075 = weight(_text_:22 in 5499) [ClassicSimilarity], result of:
              0.022482075 = score(doc=5499,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.15476047 = fieldWeight in 5499, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5499)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose
    Modern mathematicians and scientists of math-related disciplines often use Document Preparation Systems (DPS) to write and Computer Algebra Systems (CAS) to calculate mathematical expressions. Usually, they translate the expressions manually between DPS and CAS. This process is time-consuming and error-prone. The purpose of this paper is to automate this translation. This paper uses Maple and Mathematica as the CAS, and LaTeX as the DPS.
    Design/methodology/approach
    Bruce Miller at the National Institute of Standards and Technology (NIST) developed a collection of special LaTeX macros that create links from mathematical symbols to their definitions in the NIST Digital Library of Mathematical Functions (DLMF). The authors are using these macros to perform rule-based translations between the formulae in the DLMF and CAS. Moreover, the authors develop software to ease the creation of new rules and to discover inconsistencies.
    Findings
    The authors created 396 mappings and translated 58.8 percent of DLMF formulae (2,405 expressions) successfully between Maple and DLMF. For a significant percentage, the special function definitions in Maple and the DLMF differed, an atomic symbol in one system mapping to a composite expression in the other. The translator was also successfully used for automatic verification of mathematical online compendia and CAS; the evaluation techniques discovered two errors in the DLMF and one defect in Maple.
    Originality/value
    This paper introduces the first translation tool for special functions between LaTeX and CAS. The approach improves on error-prone manual translation and can be used to verify mathematical online compendia and CAS.
    Date
    20. 1.2015 18:30:22
    Type
    a
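    The rule-based translation described above is, at its core, a mapping from semantic LaTeX macros to CAS expressions. A toy sketch; the macro spellings and the two rules are illustrative, not taken from the paper's 396 mappings:

      # Illustrative rules: semantic DLMF-style LaTeX macros -> Maple calls.
      RULES = {
          r"\EulerGamma@{z}": "GAMMA(z)",
          r"\BesselJ{\nu}@{z}": "BesselJ(nu, z)",
      }

      def to_maple(latex_expr):
          # Rewrite every known macro occurrence; unknown spans pass through.
          for macro, maple in RULES.items():
              latex_expr = latex_expr.replace(macro, maple)
          return latex_expr

      print(to_maple(r"\EulerGamma@{z} + \BesselJ{\nu}@{z}"))
      # GAMMA(z) + BesselJ(nu, z)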
  13. Glaesener, L.: Automatisches Indexieren einer informationswissenschaftlichen Datenbank mit Mehrwortgruppen (2012) 0.00
    0.004496415 = product of:
      0.022482075 = sum of:
        0.022482075 = product of:
          0.04496415 = sum of:
            0.04496415 = weight(_text_:22 in 401) [ClassicSimilarity], result of:
              0.04496415 = score(doc=401,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.30952093 = fieldWeight in 401, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=401)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    11. 9.2012 19:43:22
  14. Lorenz, S.: Konzeption und prototypische Realisierung einer begriffsbasierten Texterschließung (2006) 0.00
    0.003372311 = product of:
      0.016861554 = sum of:
        0.016861554 = product of:
          0.03372311 = sum of:
            0.03372311 = weight(_text_:22 in 1746) [ClassicSimilarity], result of:
              0.03372311 = score(doc=1746,freq=2.0), product of:
                0.14527014 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04148407 = queryNorm
                0.23214069 = fieldWeight in 1746, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1746)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 3.2015 9:17:30
  15. Jones, K.P.: Natural-language processing and automatic indexing : a reply (1990) 0.00
    0.0027576897 = product of:
      0.013788448 = sum of:
        0.013788448 = weight(_text_:a in 394) [ClassicSimilarity], result of:
          0.013788448 = score(doc=394,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.28826174 = fieldWeight in 394, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=394)
      0.2 = coord(1/5)
    
    Type
    a
  16. Salton, G.; Wong, A.: Generation and search of clustered files (1978) 0.00
    0.0027576897 = product of:
      0.013788448 = sum of:
        0.013788448 = weight(_text_:a in 2411) [ClassicSimilarity], result of:
          0.013788448 = score(doc=2411,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.28826174 = fieldWeight in 2411, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=2411)
      0.2 = coord(1/5)
    
    Type
    a
  17. Griffiths, A.; Robinson, L.A.; Willett, P.: Hierarchic agglomerative clustering methods for automatic document classification (1984) 0.00
    0.0027576897 = product of:
      0.013788448 = sum of:
        0.013788448 = weight(_text_:a in 2414) [ClassicSimilarity], result of:
          0.013788448 = score(doc=2414,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.28826174 = fieldWeight in 2414, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=2414)
      0.2 = coord(1/5)
    
    Type
    a
  18. Willett, P.: Recent trends in hierarchic document clustering : a critical review (1988) 0.00
    0.0027576897 = product of:
      0.013788448 = sum of:
        0.013788448 = weight(_text_:a in 2604) [ClassicSimilarity], result of:
          0.013788448 = score(doc=2604,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.28826174 = fieldWeight in 2604, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=2604)
      0.2 = coord(1/5)
    
    Type
    a
  19. Rijsbergen, C.J. van: ¬A fast hierarchic clustering algorithm (1970) 0.00
    0.0027576897 = product of:
      0.013788448 = sum of:
        0.013788448 = weight(_text_:a in 3300) [ClassicSimilarity], result of:
          0.013788448 = score(doc=3300,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.28826174 = fieldWeight in 3300, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=3300)
      0.2 = coord(1/5)
    
    Type
    a
  20. Luhn, H.P.: ¬A statistical approach to the mechanical encoding and searching of literary information (1957) 0.00
    0.0027576897 = product of:
      0.013788448 = sum of:
        0.013788448 = weight(_text_:a in 5453) [ClassicSimilarity], result of:
          0.013788448 = score(doc=5453,freq=4.0), product of:
            0.04783308 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.04148407 = queryNorm
            0.28826174 = fieldWeight in 5453, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.125 = fieldNorm(doc=5453)
      0.2 = coord(1/5)
    
    Type
    a

Types

  • a 364
  • el 31
  • x 5
  • m 4
  • s 3
  • d 1
  • p 1
