Search (1438 results, page 1 of 72)

  • Active filter: year_i:[2000 TO 2010}
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.38
    0.38179284 = product of:
      0.47724104 = sum of:
        0.065772705 = product of:
          0.1973181 = sum of:
            0.1973181 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.1973181 = score(doc=562,freq=2.0), product of:
                0.35108855 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041411664 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.1973181 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1973181 = score(doc=562,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.1973181 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.1973181 = score(doc=562,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.016832126 = product of:
          0.033664253 = sum of:
            0.033664253 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.033664253 = score(doc=562,freq=2.0), product of:
                0.1450166 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041411664 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Content
     Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
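     The score breakdowns in this listing are Lucene "explain" trees for ClassicSimilarity (TF-IDF): each matching term contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm, and partially matched Boolean queries are scaled by coord(matching clauses / total clauses). The short Python sketch below only re-does the arithmetic for result 1 (doc 562) from the constants printed above; it is an illustration of the formula, not a call into Lucene itself.

       import math

       # Constants copied from the explain tree for doc 562 above.
       query_norm = 0.041411664              # queryNorm
       idf        = 8.478011                 # idf = 1 + ln(maxDocs/(docFreq+1)) = 1 + ln(44218/25)
       freq       = 2.0                      # termFreq of the clause in the field
       field_norm = 0.046875                 # fieldNorm(doc=562)

       tf           = math.sqrt(freq)        # 1.4142135
       query_weight = idf * query_norm       # 0.35108855
       field_weight = tf * idf * field_norm  # 0.56201804
       term_score   = query_weight * field_weight   # 0.1973181, the "_text_:2f" weight

       # One identically scored clause sits in a nested query that matched 1 of 3
       # sub-clauses (coord 1/3), the "_text_:22" clause is halved by coord(1/2),
       # and the outer query matched 4 of its 5 clauses, hence the final 0.8.
       total = (term_score / 3 + term_score + term_score + 0.033664253 / 2) * (4 / 5)
       print(round(term_score, 7), round(total, 8))
       # ~0.1973181 and ~0.3817928, matching the listed scores up to rounding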
  2. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.37
    0.36832717 = product of:
      0.6138786 = sum of:
        0.08769694 = product of:
          0.26309082 = sum of:
            0.26309082 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.26309082 = score(doc=140,freq=2.0), product of:
                0.35108855 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041411664 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.33333334 = coord(1/3)
        0.26309082 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.26309082 = score(doc=140,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
        0.26309082 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.26309082 = score(doc=140,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
      0.6 = coord(3/5)
    
    Content
     See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http%3A%2F%2Fwww.univie.ac.at%2FGermanistik%2Fschrodt%2Fvorlesung%2Fwissenschaftssprache.doc&usg=AOvVaw1lDLDR6NFf1W0-oC9mEUJf.
  3. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.32
    0.32228628 = product of:
      0.53714377 = sum of:
        0.076734826 = product of:
          0.23020446 = sum of:
            0.23020446 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.23020446 = score(doc=306,freq=2.0), product of:
                0.35108855 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041411664 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.33333334 = coord(1/3)
        0.23020446 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.23020446 = score(doc=306,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.23020446 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.23020446 = score(doc=306,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.6 = coord(3/5)
    
    Content
     Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  4. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.28
    0.27624536 = product of:
      0.46040893 = sum of:
        0.065772705 = product of:
          0.1973181 = sum of:
            0.1973181 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.1973181 = score(doc=2918,freq=2.0), product of:
                0.35108855 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041411664 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
        0.1973181 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.1973181 = score(doc=2918,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.1973181 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.1973181 = score(doc=2918,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.6 = coord(3/5)
    
    Footnote
     Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
  5. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.23
    0.23020446 = product of:
      0.3836741 = sum of:
        0.054810584 = product of:
          0.16443175 = sum of:
            0.16443175 = weight(_text_:3a in 5895) [ClassicSimilarity], result of:
              0.16443175 = score(doc=5895,freq=2.0), product of:
                0.35108855 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041411664 = queryNorm
                0.46834838 = fieldWeight in 5895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5895)
          0.33333334 = coord(1/3)
        0.16443175 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.16443175 = score(doc=5895,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
        0.16443175 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.16443175 = score(doc=5895,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
      0.6 = coord(3/5)
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https%3A%2F%2Fwww.dgfe.de%2Ffileadmin%2FOrdnerRedakteure%2FSektionen%2FSek02_AEW%2FKWF%2FPublikationen_Reihe_1989-2003%2FBand_17%2FBd_17_1994_355-406_A.pdf&usg=AOvVaw2KcbRsHy5UQ9QRIUyuOLNi]
  6. Hiller, H.; Füssel, S.: Wörterbuch des Buches : mit online Aktualisierung (2006) 0.20
    0.2031014 = product of:
      0.5077535 = sum of:
        0.37543333 = weight(_text_:dictionaries in 6005) [ClassicSimilarity], result of:
          0.37543333 = score(doc=6005,freq=12.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            1.3105223 = fieldWeight in 6005, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6005)
        0.13232014 = product of:
          0.26464027 = sum of:
            0.26464027 = weight(_text_:german in 6005) [ClassicSimilarity], result of:
              0.26464027 = score(doc=6005,freq=12.0), product of:
                0.24051933 = queryWeight, product of:
                  5.808009 = idf(docFreq=360, maxDocs=44218)
                  0.041411664 = queryNorm
                1.100287 = fieldWeight in 6005, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.808009 = idf(docFreq=360, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6005)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    LCSH
    Book industries and trade / Dictionaries / German
    Printing / Dictionaries / German
    Bibliography / Dictionaries / German
    Subject
    Book industries and trade / Dictionaries / German
    Printing / Dictionaries / German
    Bibliography / Dictionaries / German
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.18
    0.18416359 = product of:
      0.3069393 = sum of:
        0.04384847 = product of:
          0.13154541 = sum of:
            0.13154541 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.13154541 = score(doc=701,freq=2.0), product of:
                0.35108855 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.041411664 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.33333334 = coord(1/3)
        0.13154541 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.13154541 = score(doc=701,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.13154541 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.13154541 = score(doc=701,freq=2.0), product of:
            0.35108855 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.041411664 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.6 = coord(3/5)
    
    Content
     Cf.: http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F1627&ei=tAtYUYrBNoHKtQb3l4GYBw&usg=AFQjCNHeaxKkKU3-u54LWxMNYGXaaDLCGw&sig2=8WykXWQoDKjDSdGtAakH2Q&bvm=bv.44442042,d.Yms.
  8. Chylkowska, E.: Implementation of information exchange : online dictionaries (2005) 0.10
    0.10353134 = product of:
      0.25882834 = sum of:
        0.24480157 = weight(_text_:dictionaries in 3011) [ClassicSimilarity], result of:
          0.24480157 = score(doc=3011,freq=10.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            0.854527 = fieldWeight in 3011, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3011)
        0.014026772 = product of:
          0.028053544 = sum of:
            0.028053544 = weight(_text_:22 in 3011) [ClassicSimilarity], result of:
              0.028053544 = score(doc=3011,freq=2.0), product of:
                0.1450166 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041411664 = queryNorm
                0.19345059 = fieldWeight in 3011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3011)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     We are living in a society in which using the Internet is part of everyday life. People use the Internet at schools, at universities, and at work in small and large companies. The Web offers a huge amount of information from every possible field of knowledge, and one of the problems one can face when searching the web is that this information may be written in many different languages that one does not understand. That is why web site designers came up with the idea of creating on-line dictionaries to make surfing the Web easier. The most popular are bilingual dictionaries (in Poland the best known are LING.pl, LEKSYKA.pl, and Dict.pl), but one can also find multilingual ones (Logos.com, Lexicool.com). Nowadays, when using the Internet in education is becoming more and more popular, on-line dictionaries are the best supplement to good-quality work. The purpose of this paper is to present, compare and recommend the best (from the author's point of view) multilingual dictionaries that can be found on the Internet and that can serve educational purposes well.
    Date
    22. 7.2009 11:05:56
  9. Münnich, M.: REUSE or rule harmonization : just a project? (2000) 0.05
    0.051913112 = product of:
      0.25956556 = sum of:
        0.25956556 = sum of:
          0.23151202 = weight(_text_:german in 192) [ClassicSimilarity], result of:
            0.23151202 = score(doc=192,freq=18.0), product of:
              0.24051933 = queryWeight, product of:
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.041411664 = queryNorm
              0.9625506 = fieldWeight in 192, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.0390625 = fieldNorm(doc=192)
          0.028053544 = weight(_text_:22 in 192) [ClassicSimilarity], result of:
            0.028053544 = score(doc=192,freq=2.0), product of:
              0.1450166 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041411664 = queryNorm
              0.19345059 = fieldWeight in 192, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=192)
      0.2 = coord(1/5)
    
    Abstract
     German academic libraries acquire a large number of books from British and American publishers. The bibliographic records of the Library of Congress and the British National Bibliography are offered in most German library networks. Thus, projects REUSE and REUSE+ were undertaken when there was a demand for harmonization of German cataloging rules with AACR2 (Anglo-American Cataloguing Rules). Experts in the United States and Germany systematically analyzed bibliographic data and compared the codes on which the data were based. Major and minor differences in cataloging rules were identified. The REUSE group proposed German participation in international authority files and changes in RAK, the German cataloging rules. In REUSE+ the different types of hierarchical bibliographic structures in USMARC and MAB2 and other German formats were analyzed. The German project group made suggestions concerning both the German formats and the USMARC format. Steps toward rule alignment and harmonization of online requirements were made when the German Cataloging Rules Conference made decisions on resolutions prepared by the Working Groups on Descriptive Cataloging that dealt with titles, encoding of form titles and conference terms, prefixes in names, hierarchies, entries under persons and corporate bodies, and the conceptual basis of RAK2 in the context of harmonization. Although problems remain, German rule makers have made progress toward internationality.
    Date
    10. 9.2000 17:38:22
  10. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.05
    0.051726174 = product of:
      0.12931544 = sum of:
        0.109478585 = weight(_text_:dictionaries in 2541) [ClassicSimilarity], result of:
          0.109478585 = score(doc=2541,freq=2.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            0.38215607 = fieldWeight in 2541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.019836852 = product of:
          0.039673705 = sum of:
            0.039673705 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
              0.039673705 = score(doc=2541,freq=4.0), product of:
                0.1450166 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041411664 = queryNorm
                0.27358043 = fieldWeight in 2541, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2541)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
     The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon, and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response, and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes, and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
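     The spelling-suggestion idea described in the abstract above can be illustrated in a few lines of Python. This is only a hypothetical sketch using the standard library's difflib similarity matcher and an invented mini-vocabulary; the actual AZdict/ChemSpell components are not described here in enough detail to reproduce them.

       import difflib

       # Toy vocabulary standing in for a TOXNET-style word list (illustrative only).
       vocabulary = ["toxicology", "benzene", "toluene", "formaldehyde", "arsenic"]

       def suggest(term, vocab, n=3, cutoff=0.6):
           # Rank vocabulary entries by string similarity to the (possibly
           # misspelled) query term and return the closest candidates.
           return difflib.get_close_matches(term.lower(), vocab, n=n, cutoff=cutoff)

       print(suggest("tolulene", vocabulary))   # -> ['toluene']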
  11. Larkey, L.S.; Connell, M.E.: Structured queries, language modelling, and relevance modelling in cross-language information retrieval (2005) 0.05
    0.049402144 = product of:
      0.123505354 = sum of:
        0.109478585 = weight(_text_:dictionaries in 1022) [ClassicSimilarity], result of:
          0.109478585 = score(doc=1022,freq=2.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            0.38215607 = fieldWeight in 1022, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1022)
        0.014026772 = product of:
          0.028053544 = sum of:
            0.028053544 = weight(_text_:22 in 1022) [ClassicSimilarity], result of:
              0.028053544 = score(doc=1022,freq=2.0), product of:
                0.1450166 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041411664 = queryNorm
                0.19345059 = fieldWeight in 1022, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1022)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Two probabilistic approaches to cross-lingual retrieval are in wide use today, those based on probabilistic models of relevance, as exemplified by INQUERY, and those based on language modeling. INQUERY, as a query net model, allows the easy incorporation of query operators, including a synonym operator, which has proven to be extremely useful in cross-language information retrieval (CLIR), in an approach often called structured query translation. In contrast, language models incorporate translation probabilities into a unified framework. We compare the two approaches on Arabic and Spanish data sets, using two kinds of bilingual dictionaries--one derived from a conventional dictionary, and one derived from a parallel corpus. We find that structured query processing gives slightly better results when queries are not expanded. On the other hand, when queries are expanded, language modeling gives better results, but only when using a probabilistic dictionary derived from a parallel corpus. We pursue two additional issues inherent in the comparison of structured query processing with language modeling. The first concerns query expansion, and the second is the role of translation probabilities. We compare conventional expansion techniques (pseudo-relevance feedback) with relevance modeling, a new IR approach which fits into the formal framework of language modeling. We find that relevance modeling and pseudo-relevance feedback achieve comparable levels of retrieval and that good translation probabilities confer a small but significant advantage.
    Date
    26.12.2007 20:22:11
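     As a concrete illustration of the structured query translation the abstract above refers to, the sketch below wraps the dictionary translations of each query term in an INQUERY-style #syn operator, so that alternative translations share one slot instead of competing as independent terms. The tiny bilingual dictionary and the textual #and/#syn rendering are assumptions made for this example only.

       # Illustrative bilingual dictionary (source term -> candidate translations).
       bilingual_dict = {
           "library": ["biblioteca"],
           "digital": ["digital", "numerica"],
       }

       def structured_translation(query_terms, dictionary):
           # Build an INQUERY-style structured query string from per-term translations.
           clauses = []
           for term in query_terms:
               translations = dictionary.get(term, [term])   # pass unknown terms through
               if len(translations) == 1:
                   clauses.append(translations[0])
               else:
                   clauses.append("#syn(" + " ".join(translations) + ")")
           return "#and(" + " ".join(clauses) + ")"

       print(structured_translation(["digital", "library"], bilingual_dict))
       # -> #and(#syn(digital numerica) biblioteca)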
  12. Heiner-Freiling, M.: DDC German - the project, the aims, the methods : new ideas for a well-established traditional classification system (2006) 0.05
    0.04814698 = product of:
      0.2407349 = sum of:
        0.2407349 = sum of:
          0.20707065 = weight(_text_:german in 5779) [ClassicSimilarity], result of:
            0.20707065 = score(doc=5779,freq=10.0), product of:
              0.24051933 = queryWeight, product of:
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.041411664 = queryNorm
              0.8609314 = fieldWeight in 5779, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.046875 = fieldNorm(doc=5779)
          0.033664253 = weight(_text_:22 in 5779) [ClassicSimilarity], result of:
            0.033664253 = score(doc=5779,freq=2.0), product of:
              0.1450166 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041411664 = queryNorm
              0.23214069 = fieldWeight in 5779, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5779)
      0.2 = coord(1/5)
    
    Abstract
     The paper will give a short outline of the project DDC German. The project is not limited to a mere translation of DDC 22, but aims at the implementation of Dewey in the library networks of the German-language countries. Use of DDC mainly for retrieval purposes, not for shelving, leads to certain new aspects in classifying with Dewey, which are described in detail and presented together with the German web service Melvil. Based on the German experience of cooperation and data exchange in the field of verbal indexing, the paper develops some ideas on future Dewey cooperation between European and American libraries.
  13. Sokirko, A.V.: Obzor zarubezhnykh sistem avtomaticheskoi obrabotki teksta, ispol'zuyushchikh poverkhnosto-semanticheskoe predstavlenie, i mashinnykh sematicheskikh slovarei (2000) 0.04
    0.043791436 = product of:
      0.21895717 = sum of:
        0.21895717 = weight(_text_:dictionaries in 8870) [ClassicSimilarity], result of:
          0.21895717 = score(doc=8870,freq=2.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            0.76431215 = fieldWeight in 8870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.078125 = fieldNorm(doc=8870)
      0.2 = coord(1/5)
    
    Footnote
     Translation of the title: Review of foreign systems for automated text processing using semantic presentations and electronic semantic dictionaries
  14. Egger, W.: Helferlein für jedermann : Elektronische Wörterbücher (2004) 0.04
    0.043791436 = product of:
      0.21895717 = sum of:
        0.21895717 = weight(_text_:dictionaries in 1501) [ClassicSimilarity], result of:
          0.21895717 = score(doc=1501,freq=2.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            0.76431215 = fieldWeight in 1501, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.078125 = fieldNorm(doc=1501)
      0.2 = coord(1/5)
    
    Abstract
     Countless online dictionaries and individual, in some cases excellent, electronic dictionaries are deliberately not discussed here, because their advantages are partly offset by the following drawbacks: an Internet connection or a CD-ROM is required, and calling up the dictionaries or switching the language direction is time-consuming.
  15. Copeland, A.; Hamburger, S.; Hamilton, J.; Robinson, K.J.: Cataloging and digitizing ephemera : one team's experience with Pennsylvania German broadsides and fraktur (2006) 0.04
    0.03841302 = product of:
      0.1920651 = sum of:
        0.1920651 = sum of:
          0.15279014 = weight(_text_:german in 768) [ClassicSimilarity], result of:
            0.15279014 = score(doc=768,freq=4.0), product of:
              0.24051933 = queryWeight, product of:
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.041411664 = queryNorm
              0.635251 = fieldWeight in 768, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.0546875 = fieldNorm(doc=768)
          0.03927496 = weight(_text_:22 in 768) [ClassicSimilarity], result of:
            0.03927496 = score(doc=768,freq=2.0), product of:
              0.1450166 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041411664 = queryNorm
              0.2708308 = fieldWeight in 768, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=768)
      0.2 = coord(1/5)
    
    Abstract
    The growing interest in ephemera collections within libraries will necessitate the bibliographic control of materials that do not easily fall into traditional categories. This paper discusses the many challenges confronting catalogers when approaching a mixed collection of unique materials of an ephemeral nature. Based on their experience cataloging a collection of Pennsylvania German broadsides and Fraktur at the Pennsylvania State University, the authors describe the process of deciphering handwriting, preserving genealogical information, deciding on cataloging approaches at the format and field level, and furthering access to the materials through digitization and the Encoded Archival Description finding aid. Observations are made on expanding the skills of traditional book catalogers to include manuscript cataloging, and on project management.
    Date
    10. 9.2000 17:38:22
  16. Nakashima, M.; Sato, K.; Qu, Y.; Ito, T.: Browsing-based conceptual information retrieval incorporating dictionary term relations, keyword associations, and a user's interest (2003) 0.04
    0.037158266 = product of:
      0.18579133 = sum of:
        0.18579133 = weight(_text_:dictionaries in 5147) [ClassicSimilarity], result of:
          0.18579133 = score(doc=5147,freq=4.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            0.6485404 = fieldWeight in 5147, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.046875 = fieldNorm(doc=5147)
      0.2 = coord(1/5)
    
    Abstract
     A model of browsing-based conceptual information retrieval is proposed employing two different types of dictionaries, a global dictionary and a local dictionary. A global dictionary with the authorized terms is utilized to capture the commonly acknowledged conceptual relation between a query and a document by replacing their keywords with the dictionary terms. The documents are ranked by their conceptual closeness to a query, and are arranged in the form of a user's personal digital library, or pDL. In a pDL a user can browse the arranged documents based on a suggestion about which documents are worth examining. This suggestion is made from the information in a local dictionary that is organized so as to reflect a user's interest and the association of keywords with the documents. Experiments testing the retrieval performance of utilizing the two types of dictionaries were also performed using standard test collections.
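     A toy version of the conceptual matching sketched in the abstract above: query and document keywords are first replaced by authorized terms from a global dictionary, and documents are then ranked by the overlap of the resulting concept sets. The mapping table and the Jaccard overlap used below are illustrative assumptions, not the authors' actual model.

       # Keyword -> authorized concept (illustrative global dictionary).
       global_dictionary = {
           "car": "automobile", "auto": "automobile",
           "price": "cost", "cost": "cost",
       }

       def concepts(keywords):
           # Map keywords to authorized terms; unknown keywords stand for themselves.
           return {global_dictionary.get(k, k) for k in keywords}

       def conceptual_closeness(query_kw, doc_kw):
           # Jaccard overlap of the two concept sets as a stand-in for closeness.
           q, d = concepts(query_kw), concepts(doc_kw)
           return len(q & d) / len(q | d) if q | d else 0.0

       docs = {"d1": ["auto", "cost"], "d2": ["train", "price"]}
       query = ["car", "price"]
       ranking = sorted(docs, key=lambda d: conceptual_closeness(query, docs[d]), reverse=True)
       print(ranking)   # -> ['d1', 'd2']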
  17. Santana Suárez, O.; Carreras Riudavets, F.J.; Hernández Figueroa, Z.; González Cabrera, A.C.: Integration of an XML electronic dictionary with linguistic tools for natural language processing (2007) 0.04
    0.037158266 = product of:
      0.18579133 = sum of:
        0.18579133 = weight(_text_:dictionaries in 921) [ClassicSimilarity], result of:
          0.18579133 = score(doc=921,freq=4.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            0.6485404 = fieldWeight in 921, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.046875 = fieldNorm(doc=921)
      0.2 = coord(1/5)
    
    Abstract
     This study proposes the codification of lexical information in electronic dictionaries, in accordance with a generic and extendable XML scheme model, and its conjunction with linguistic tools for the processing of natural language. Our approach is different from other similar studies in that we propose XML coding of those items from a dictionary of meanings that are less related to the lexical units. Linguistic information, such as morphology, syllables, phonology, etc., will be included by means of specific linguistic tools. The use of XML as a container for the information allows the use of other XML tools for carrying out searches or for enabling presentation of the information in different resources. This model is particularly important as it combines two parallel paradigms (extendable labelling of documents and computational linguistics) and it is also applicable to other languages. We have included a comparison with the labelling proposal for printed dictionaries carried out by the Text Encoding Initiative (TEI). The proposed design has been validated with a dictionary of more than 145 000 accepted meanings.
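     To make the XML-coding idea above more tangible, here is a minimal sketch of a meaning-oriented dictionary entry built with Python's ElementTree; morphology, syllabification and phonology would be supplied by external linguistic tools rather than stored in the entry. The element and attribute names are invented for illustration and are not the scheme proposed by the authors.

       import xml.etree.ElementTree as ET

       # One meaning-oriented entry; linguistic detail is deliberately left out.
       entry = ET.Element("entry", lemma="banco")
       sense = ET.SubElement(entry, "sense", n="1")
       ET.SubElement(sense, "definition").text = "financial institution"
       ET.SubElement(sense, "domain").text = "economy"

       print(ET.tostring(entry, encoding="unicode"))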
  18. El-Sherbini, M.: Selected cataloging tools on the Internet (2003) 0.04
    0.035033148 = product of:
      0.17516573 = sum of:
        0.17516573 = weight(_text_:dictionaries in 1997) [ClassicSimilarity], result of:
          0.17516573 = score(doc=1997,freq=2.0), product of:
            0.2864761 = queryWeight, product of:
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.041411664 = queryNorm
            0.6114497 = fieldWeight in 1997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.9177637 = idf(docFreq=118, maxDocs=44218)
              0.0625 = fieldNorm(doc=1997)
      0.2 = coord(1/5)
    
    Abstract
     This bibliography contains selected cataloging tools on the Internet. It is divided into seven sections as follows: authority management and subject headings tools; cataloging tools by type of material; dictionaries, encyclopedias, and place names; listservs and workshops; software and vendors; technical service professional organizations; and journals and newsletters. Resources are arranged in alphabetical order under each topic. Selected cataloging tools are annotated. There is some overlap, since a given web site can cover many tools.
  19. Neuroth, H.; Pianos, T.: VASCODA: a German scientific portal for cross-searching distributed digital resource collections (2003) 0.03
    0.032925446 = product of:
      0.16462722 = sum of:
        0.16462722 = sum of:
          0.13096297 = weight(_text_:german in 2420) [ClassicSimilarity], result of:
            0.13096297 = score(doc=2420,freq=4.0), product of:
              0.24051933 = queryWeight, product of:
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.041411664 = queryNorm
              0.5445008 = fieldWeight in 2420, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.046875 = fieldNorm(doc=2420)
          0.033664253 = weight(_text_:22 in 2420) [ClassicSimilarity], result of:
            0.033664253 = score(doc=2420,freq=2.0), product of:
              0.1450166 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041411664 = queryNorm
              0.23214069 = fieldWeight in 2420, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2420)
      0.2 = coord(1/5)
    
    Abstract
     The German information science community - with the support of the two main funding agencies in Germany - will develop a scientific portal, vascoda, for cross-searching distributed metadata collections. Put simply, one of the services of vascoda is going to be a "Google"-like search for the academic community: an easy-to-use yet sophisticated search engine to supply information on high-quality resources from different media and technical environments. Reaching this objective requires considerable standardisation activity amongst the main players to harmonise the already existing services (e.g. regarding metadata, protocols, etc.). The co-operation amongst the participants, including both of the funding agencies, is creating a unique team-work situation in Germany, thus strengthening the information science community.
    Source
    Research and advanced technology for digital libraries : 7th European Conference, proceedings / ECDL 2003, Trondheim, Norway, August 17-22, 2003
  20. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.03
    0.032343417 = product of:
      0.16171709 = sum of:
        0.16171709 = sum of:
          0.13366355 = weight(_text_:german in 1291) [ClassicSimilarity], result of:
            0.13366355 = score(doc=1291,freq=6.0), product of:
              0.24051933 = queryWeight, product of:
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.041411664 = queryNorm
              0.5557289 = fieldWeight in 1291, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.808009 = idf(docFreq=360, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
          0.028053544 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
            0.028053544 = score(doc=1291,freq=2.0), product of:
              0.1450166 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041411664 = queryNorm
              0.19345059 = fieldWeight in 1291, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1291)
      0.2 = coord(1/5)
    
    Abstract
     More and more users index everything on their own in the Web 2.0. There are services for links, videos, pictures, books, encyclopaedic articles and scientific articles. All these services are library-independent. But must that really be so? Can't libraries help, with their experience and tools, to make user indexing better? Based on the experience of a project of the German-language Wikipedia together with the German person name authority file (Personennamendatei, PND) maintained by the German National Library (Deutsche Nationalbibliothek), I would like to show what is possible: how users can and will use the authority files, if we let them. We will take a look at how the project worked and what we can learn for future projects. Conclusions: authority files can have a role in the Web 2.0; there must be an open interface/service for retrieval; everything that is indexed on the net with authority files can be easily integrated in a federated search; and, following O'Reilly, you have to find ways for your data to become more important the more it is used.
    Content
     Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.

Types

  • a 1203
  • m 159
  • el 84
  • s 53
  • b 27
  • x 15
  • i 11
  • n 2
  • r 2
