Search (4446 results, page 1 of 223)

  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.102087736 = sum of:
      0.08128564 = product of:
        0.24385692 = sum of:
          0.24385692 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24385692 = score(doc=562,freq=2.0), product of:
              0.4338952 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05117889 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.0208021 = product of:
        0.0416042 = sum of:
          0.0416042 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.0416042 = score(doc=562,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
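    The score breakdown above is standard Lucene ClassicSimilarity "explain" output. A minimal sketch of how the numbers compose, assuming Lucene's classic TF-IDF formulas (the helper functions below are illustrative, not Lucene's API):

```python
import math

# Sketch of Lucene ClassicSimilarity scoring, matching the explain tree above.
def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                                      # tf = sqrt(termFreq)
    query_weight = idf(doc_freq, max_docs) * query_norm       # queryWeight
    field_weight = tf * idf(doc_freq, max_docs) * field_norm  # fieldWeight
    return query_weight * field_weight

# The "_text_:3a" leg of doc 562: freq=2, docFreq=24, maxDocs=44218.
leg = term_score(2.0, 24, 44218, query_norm=0.05117889, field_norm=0.046875)
print(leg)      # ~0.24385692, as in the explain output
print(leg / 3)  # coord(1/3) applied -> ~0.08128564
```

    Summing the two per-term legs after their coord factors, 0.08128564 + 0.0208021, reproduces the document total of 0.102087736 shown above.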
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  2. Godby, J.: WordSmith research project bridges gap between tokens and indexes (1998) 0.09
    0.091471136 = product of:
      0.18294227 = sum of:
        0.18294227 = sum of:
          0.13440405 = weight(_text_:identify in 4729) [ClassicSimilarity], result of:
            0.13440405 = score(doc=4729,freq=4.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.53594446 = fieldWeight in 4729, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4729)
          0.04853823 = weight(_text_:22 in 4729) [ClassicSimilarity], result of:
            0.04853823 = score(doc=4729,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.2708308 = fieldWeight in 4729, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4729)
      0.5 = coord(1/2)
    
    Abstract
    Reports on an OCLC natural language processing research project to develop methods for identifying terminology in unstructured electronic text, especially material associated with new cultural trends and emerging subjects. Current OCLC production software can only identify single words as indexable terms in full-text documents, so a major goal of the WordSmith project is to develop software that can automatically identify and intelligently organize phrases for use in database indexes. By analyzing user terminology from local newspapers in the USA, the latest cultural trends and technical developments, as well as personal and geographic names, have been drawn out. Notes that this new vocabulary can also be mapped into reference works.
    Source
    OCLC newsletter. 1998, no.234, Jul/Aug, S.22-24
  3. O'Neill, E.T.; Chan, L.M.; Childress, E.; Dean, R.; El-Hoshy, L.M.; Vizine-Goetz, D.: Form subdivisions : their identification and use in LCSH (2001) 0.09
    0.09134953 = product of:
      0.18269905 = sum of:
        0.18269905 = sum of:
          0.14109486 = weight(_text_:identify in 2205) [ClassicSimilarity], result of:
            0.14109486 = score(doc=2205,freq=6.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.5626245 = fieldWeight in 2205, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.046875 = fieldNorm(doc=2205)
          0.0416042 = weight(_text_:22 in 2205) [ClassicSimilarity], result of:
            0.0416042 = score(doc=2205,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.23214069 = fieldWeight in 2205, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2205)
      0.5 = coord(1/2)
    
    Abstract
    Form subdivisions have always been an important part of the Library of Congress Subject Headings. However, when the MARC format was developed, no separate subfield code to identify form subdivisions was defined. Form and topical subdivisions were both included within a general subdivision category. In 1995, the USMARC Advisory Group approved a proposal defining subfield v for form subdivisions, and in 1999 the Library of Congress (LC) began identifying form subdivisions with the new code. However, there are millions of older bibliographic records lacking the explicit form subdivision coding. Identifying form subdivisions retrospectively is not a simple task. An algorithmic method was developed to identify form subdivisions coded as general subdivisions. The algorithm was used to identify 2,563 unique form subdivisions or combinations of form subdivisions in OCLC's WorldCat. The algorithm proved to be highly accurate with an error rate estimated to be less than 0.1%. The observed usage of the form subdivisions was highly skewed with the 100 most used form subdivisions or combinations of subdivisions accounting for 90% of the assignments.
    Date
    10. 9.2000 17:38:22
  4. Artymiuk, P.J.; Spriggs, R.V.; Willett, P.: Graph theoretic methods for the analysis of structural relationships in biological macromolecules (2005) 0.09
    0.09134953 = product of:
      0.18269905 = sum of:
        0.18269905 = sum of:
          0.14109486 = weight(_text_:identify in 5258) [ClassicSimilarity], result of:
            0.14109486 = score(doc=5258,freq=6.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.5626245 = fieldWeight in 5258, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.046875 = fieldNorm(doc=5258)
          0.0416042 = weight(_text_:22 in 5258) [ClassicSimilarity], result of:
            0.0416042 = score(doc=5258,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.23214069 = fieldWeight in 5258, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5258)
      0.5 = coord(1/2)
    
    Abstract
    Subgraph isomorphism and maximum common subgraph isomorphism algorithms from graph theory provide an effective and an efficient way of identifying structural relationships between biological macromolecules. They thus provide a natural complement to the pattern matching algorithms that are used in bioinformatics to identify sequence relationships. Examples are provided of the use of graph theory to analyze proteins for which three-dimensional crystallographic or NMR structures are available, focusing on the use of the Bron-Kerbosch clique detection algorithm to identify common folding motifs and of the Ullmann subgraph isomorphism algorithm to identify patterns of amino acid residues. Our methods are also applicable to other types of biological macromolecule, such as carbohydrate and nucleic acid structures.
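    The Bron-Kerbosch algorithm named in this abstract is compact enough to sketch. A minimal version of its basic form, without the pivoting used in practical implementations and with an invented toy graph (not the paper's own data or code):

```python
# Basic Bron-Kerbosch maximal-clique enumeration (no pivoting); adj maps
# each vertex to its set of neighbours. Illustrative sketch only.
def bron_kerbosch(r, p, x, adj, out):
    if not p and not x:
        out.append(r)    # r cannot be extended: it is a maximal clique
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
        p.remove(v)      # v's cliques are enumerated; exclude it from now on
        x.add(v)

# Toy graph: two triangles sharing the edge (2, 3).
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(sorted(c) for c in cliques))  # [[1, 2, 3], [2, 3, 4]]
```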
    Date
    22. 7.2006 14:40:10
  5. Fachsystematik Bremen nebst Schlüssel 1970 ff. (1970 ff) 0.09
    0.08507313 = sum of:
      0.06773804 = product of:
        0.20321411 = sum of:
          0.20321411 = weight(_text_:3a in 3577) [ClassicSimilarity], result of:
            0.20321411 = score(doc=3577,freq=2.0), product of:
              0.4338952 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05117889 = queryNorm
              0.46834838 = fieldWeight in 3577, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3577)
        0.33333334 = coord(1/3)
      0.017335083 = product of:
        0.034670167 = sum of:
          0.034670167 = weight(_text_:22 in 3577) [ClassicSimilarity], result of:
            0.034670167 = score(doc=3577,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.19345059 = fieldWeight in 3577, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3577)
        0.5 = coord(1/2)
    
    Content
    1. Agrarwissenschaften 1981. - 3. Allgemeine Geographie 2.1972. - 3a. Allgemeine Naturwissenschaften 1.1973. - 4. Allgemeine Sprachwissenschaft, Allgemeine Literaturwissenschaft 2.1971. - 6. Allgemeines. 5.1983. - 7. Anglistik 3.1976. - 8. Astronomie, Geodäsie 4.1977. - 12. bio Biologie, bcp Biochemie-Biophysik, bot Botanik, zoo Zoologie 1981. - 13. Bremensien 3.1983. - 13a. Buch- und Bibliothekswesen 3.1975. - 14. Chemie 4.1977. - 14a. Elektrotechnik 1974. - 15. Ethnologie 2.1976. - 16,1. Geowissenschaften. Sachteil 3.1977. - 16,2. Geowissenschaften. Regionaler Teil 3.1977. - 17. Germanistik 6.1984. - 17a,1. Geschichte. Teilsystematik hil. - 17a,2. Geschichte. Teilsystematik his Neuere Geschichte. - 17a,3. Geschichte. Teilsystematik hit Neueste Geschichte. - 18. Humanbiologie 2.1983. - 19. Ingenieurwissenschaften 1974. - 20. siehe 14a. - 21. Klassische Philologie 3.1977. - 22. Klinische Medizin 1975. - 23. Kunstgeschichte 2.1971. - 24. Kybernetik. 2.1975. - 25. Mathematik 3.1974. - 26. Medizin 1976. - 26a. Militärwissenschaft 1985. - 27. Musikwissenschaft 1978. - 27a. Noten 2.1974. - 28. Ozeanographie 3.1977. - 29. Pädagogik 8.1985. - 30. Philosophie 3.1974. - 31. Physik 3.1974. - 33. Politik, Politische Wissenschaft, Sozialwissenschaft. Soziologie. Länderschlüssel. Register 1981. - 34. Psychologie 2.1972. - 35. Publizistik und Kommunikationswissenschaft 1985. - 36. Rechtswissenschaften 1986. - 37. Regionale Geographie 3.1975. - 37a. Religionswissenschaft 1970. - 38. Romanistik 3.1976. - 39. Skandinavistik 4.1985. - 40. Slavistik 1977. - 40a. Sonstige Sprachen und Literaturen 1973. - 43. Sport 4.1983. - 44. Theaterwissenschaft 1985. - 45. Theologie 2.1976. - 45a. Ur- und Frühgeschichte, Archäologie 1970. - 47. Volkskunde 1976. - 47a. Wirtschaftswissenschaften 1971 // Schlüssel: 1. Länderschlüssel 1971. - 2. Formenschlüssel (Kurzform) 1974. - 3. Personenschlüssel Literatur 5. Fassung 1968
  6. Burnett, I.S.: Quality, speed and access : alternative cataloguing sources (1994) 0.08
    0.082043566 = product of:
      0.16408713 = sum of:
        0.16408713 = sum of:
          0.10861487 = weight(_text_:identify in 2336) [ClassicSimilarity], result of:
            0.10861487 = score(doc=2336,freq=2.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.4331085 = fieldWeight in 2336, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0625 = fieldNorm(doc=2336)
          0.055472266 = weight(_text_:22 in 2336) [ClassicSimilarity], result of:
            0.055472266 = score(doc=2336,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.30952093 = fieldWeight in 2336, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2336)
      0.5 = coord(1/2)
    
    Abstract
    Offers advice on evaluating alternative cataloguing sources. The steps should be: identify the possible providers; network for advice; test or sample attractive systems; develop criteria based on library size, type and location (e.g. cost and equipment needs, currency of records, types of materials accessed, customer service and reputation of vendor, impact on staff/time and other library services, and ability to share or network information); evaluate the possible services; and implement the new service.
    Date
    17.10.1995 18:22:54
  7. Robin, J.; McKeown, K.: Empirically designing and evaluating a new revision-based model for summary generation (1996) 0.08
    0.082043566 = product of:
      0.16408713 = sum of:
        0.16408713 = sum of:
          0.10861487 = weight(_text_:identify in 6751) [ClassicSimilarity], result of:
            0.10861487 = score(doc=6751,freq=2.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.4331085 = fieldWeight in 6751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0625 = fieldNorm(doc=6751)
          0.055472266 = weight(_text_:22 in 6751) [ClassicSimilarity], result of:
            0.055472266 = score(doc=6751,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.30952093 = fieldWeight in 6751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=6751)
      0.5 = coord(1/2)
    
    Abstract
    Presents a system for summarizing quantitative data in natural language, focusing on the use of a corpus of basketball game summaries, drawn from online news services, to empirically shape the system design and to evaluate the approach. Initial corpus analysis revealed characteristics of textual summaries that challenge the capabilities of current language generation systems. A revision-based corpus analysis was used to identify and encode the revision rules of the system. Presents a quantitative evaluation, using several test corpora, to measure the robustness of the new revision-based model.
    Date
    6. 3.1997 16:22:15
  8. Jensen, M.: Digital structure, digital design : issues in designing electronic publications (1996) 0.08
    0.082043566 = product of:
      0.16408713 = sum of:
        0.16408713 = sum of:
          0.10861487 = weight(_text_:identify in 7481) [ClassicSimilarity], result of:
            0.10861487 = score(doc=7481,freq=2.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.4331085 = fieldWeight in 7481, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0625 = fieldNorm(doc=7481)
          0.055472266 = weight(_text_:22 in 7481) [ClassicSimilarity], result of:
            0.055472266 = score(doc=7481,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.30952093 = fieldWeight in 7481, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=7481)
      0.5 = coord(1/2)
    
    Abstract
    In print publications, content elements are representable in visual form, but in digital presentations function may be shown through hypertext. Good design must be a tool to illuminate content, not an arbitrary add-on. Sets out elements of good digital design. Consideration of the purpose of the publication, the use of the publication, the audience, and the market will help to identify appropriate design choices.
    Source
    Journal of scholarly publishing. 28(1996) no.1, S.13-22
  9. Bates, M.E.: Finding the question behind the question (1998) 0.08
    0.082043566 = product of:
      0.16408713 = sum of:
        0.16408713 = sum of:
          0.10861487 = weight(_text_:identify in 3048) [ClassicSimilarity], result of:
            0.10861487 = score(doc=3048,freq=2.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.4331085 = fieldWeight in 3048, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0625 = fieldNorm(doc=3048)
          0.055472266 = weight(_text_:22 in 3048) [ClassicSimilarity], result of:
            0.055472266 = score(doc=3048,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.30952093 = fieldWeight in 3048, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3048)
      0.5 = coord(1/2)
    
    Abstract
    Discusses the art of the reference interview, suggesting that although the ability to conduct a good reference interview may only be learned through experience, there are some useful pointers that can help librarians hone their skills and identify possible problem areas: these are discussed. Points out that time invested in the primary reference interview is time that does not have to be spent later on when it turns out the client really wanted something different.
    Date
    22. 2.1999 19:19:54
  10. Alexander, M.: Digitising books, manuscripts and scholarly materials : preparation, handling, scanning, recognition, compression, storage formats (1998) 0.08
    0.082043566 = product of:
      0.16408713 = sum of:
        0.16408713 = sum of:
          0.10861487 = weight(_text_:identify in 3686) [ClassicSimilarity], result of:
            0.10861487 = score(doc=3686,freq=2.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.4331085 = fieldWeight in 3686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0625 = fieldNorm(doc=3686)
          0.055472266 = weight(_text_:22 in 3686) [ClassicSimilarity], result of:
            0.055472266 = score(doc=3686,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.30952093 = fieldWeight in 3686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3686)
      0.5 = coord(1/2)
    
    Abstract
    The British Library's Initiatives for Access programme (1993-) aims to identify the impact and value of digital and networking technologies on the Library's collections and services. Describes the projects: the Electronic Beowulf, digitisation of ageing microfilm, digital photographic images, and use of the Excalibur retrieval software. Examines the ways in which the issues of preparation, scanning, and storage have been tackled, and problems raised by the use of recognition technologies and compression.
    Date
    22. 5.1999 19:00:52
  11. El-Sherbini, M.: Metadata and the future of cataloging (2001) 0.08
    0.082043566 = product of:
      0.16408713 = sum of:
        0.16408713 = sum of:
          0.10861487 = weight(_text_:identify in 751) [ClassicSimilarity], result of:
            0.10861487 = score(doc=751,freq=2.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.4331085 = fieldWeight in 751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0625 = fieldNorm(doc=751)
          0.055472266 = weight(_text_:22 in 751) [ClassicSimilarity], result of:
            0.055472266 = score(doc=751,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.30952093 = fieldWeight in 751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=751)
      0.5 = coord(1/2)
    
    Abstract
    This article is a survey of representative metadata efforts, comparing them to MARC 21 metadata in order to determine whether new electronic formats require the development of a new set of standards. The study surveys ongoing metadata projects in order to identify what types of metadata exist and how they are used, and also compares and analyzes selected metadata elements in an attempt to illustrate how they are related to MARC 21 metadata format elements.
    Date
    23. 1.2007 11:22:30
  12. Madison, O.M.A.: Utilizing the FRBR framework in designing user-focused digital content and access systems (2006) 0.08
    0.082043566 = product of:
      0.16408713 = sum of:
        0.16408713 = sum of:
          0.10861487 = weight(_text_:identify in 1085) [ClassicSimilarity], result of:
            0.10861487 = score(doc=1085,freq=2.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.4331085 = fieldWeight in 1085, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0625 = fieldNorm(doc=1085)
          0.055472266 = weight(_text_:22 in 1085) [ClassicSimilarity], result of:
            0.055472266 = score(doc=1085,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.30952093 = fieldWeight in 1085, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1085)
      0.5 = coord(1/2)
    
    Abstract
    This paper discusses the rapidly expanding environment of emerging electronic content and the importance of librarians partnering with new research and teaching communities to meet users' needs to find, identify, select, and obtain the information and resources they need. The methodology and framework of the International Federation of Library Associations and Institutions' Functional Requirements for Bibliographic Records could serve as a useful tool in building expanded access and content systems.
    Date
    10. 9.2000 17:38:22
  13. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.08
    0.08128564 = product of:
      0.16257128 = sum of:
        0.16257128 = product of:
          0.48771384 = sum of:
            0.48771384 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.48771384 = score(doc=973,freq=2.0), product of:
                0.4338952 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05117889 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf.
  14. Moore, N.: Neo-liberal or dirigiste? : Policies for an information society (1997) 0.08
    0.07840383 = product of:
      0.15680766 = sum of:
        0.15680766 = sum of:
          0.11520347 = weight(_text_:identify in 685) [ClassicSimilarity], result of:
            0.11520347 = score(doc=685,freq=4.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.45938095 = fieldWeight in 685, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.046875 = fieldNorm(doc=685)
          0.0416042 = weight(_text_:22 in 685) [ClassicSimilarity], result of:
            0.0416042 = score(doc=685,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.23214069 = fieldWeight in 685, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=685)
      0.5 = coord(1/2)
    
    Abstract
    Notes the profound changes that are affecting countries worldwide and driving the development of information policies intended to shape their own particular information societies. Although it is possible to identify differences in the motivating factors, the goals of these policies are nevertheless remarkably similar. It is possible to identify 2 broadly divergent models. One is based on neo-liberal economic philosophies and emphasizes the importance of market-led solutions, exploiting private capital. The alternative model can be described as dirigiste and is based on a much greater degree of intervention by the state, emphasizing the role of the state as a participant rather than as a facilitator. Argues that the neo-liberal policy mechanisms, with their emphasis on narrow economic solutions, are likely to be inadequate, and that the more holistic approach of the dirigiste model seems most appropriate.
    Source
    Understanding information policy. Proceedings of a British Library funded Information Policy Unit Workshop, Cumberland Lodge, UK, 22-24 July 1996. Ed. by Ian Rowlands
  15. Huang, M.-H.; Huang, W.-T.; Chang, C.-C.; Chen, D.Z.; Lin, C.-P.: The greater scattering phenomenon beyond Bradford's law in patent citation (2014) 0.08
    0.07840383 = product of:
      0.15680766 = sum of:
        0.15680766 = sum of:
          0.11520347 = weight(_text_:identify in 1352) [ClassicSimilarity], result of:
            0.11520347 = score(doc=1352,freq=4.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.45938095 = fieldWeight in 1352, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.046875 = fieldNorm(doc=1352)
          0.0416042 = weight(_text_:22 in 1352) [ClassicSimilarity], result of:
            0.0416042 = score(doc=1352,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.23214069 = fieldWeight in 1352, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1352)
      0.5 = coord(1/2)
    
    Abstract
    Patent analysis has become important for management as it offers timely and valuable information for evaluating R&D performance and identifying the prospects of patents. This study explores the scattering patterns of patent impact based on citations in 3 distinct technological areas (liquid crystal, semiconductor, and drug) to identify the core patents in each area. The research follows the approach of Bradford's law, which equally divides total citations into 3 zones. While the results suggest that the scattering of patent citations corresponded with features of Bradford's law, the proportion of patents in the 3 zones did not match the proportion proposed by the law. As a result, the study shows that the distributions of citations in all 3 areas were more concentrated than what Bradford's law proposes. The Groos (1967) droop also appeared in the scattering of patent citations, and the growth rate of cumulative citations decreased in the third zone.
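    A rough sketch of the three-zone split described in this abstract: rank items by citations, then cut where cumulative citations cross each third of the total. The citation counts below are invented for illustration, not the study's data:

```python
# Rank items by citation count, then assign each to the zone in which its
# cumulative-citation span begins; counts here are invented for illustration.
def bradford_zones(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    third = sum(counts) / 3 or 1.0   # guard against an empty input
    zones, cum = [[], [], []], 0
    for c in counts:
        zones[min(2, int(cum // third))].append(c)
        cum += c
    return zones

zones = bradford_zones([120, 60, 30, 15, 10, 8, 7, 6, 5, 4, 3, 2])
print([len(z) for z in zones])   # [1, 1, 10]: few patents carry most citations
print([sum(z) for z in zones])   # [120, 60, 90]: citations per zone
```

    Here the top zone overshoots its third of the citations, the kind of concentration beyond Bradford's proposal that the study reports.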
    Date
    22. 8.2014 17:11:29
  16. Clarke, R.I.: Cataloging research by design : a taxonomic approach to understanding research questions in cataloging (2018) 0.08
    0.07840383 = product of:
      0.15680766 = sum of:
        0.15680766 = sum of:
          0.11520347 = weight(_text_:identify in 5188) [ClassicSimilarity], result of:
            0.11520347 = score(doc=5188,freq=4.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.45938095 = fieldWeight in 5188, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.046875 = fieldNorm(doc=5188)
          0.0416042 = weight(_text_:22 in 5188) [ClassicSimilarity], result of:
            0.0416042 = score(doc=5188,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.23214069 = fieldWeight in 5188, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5188)
      0.5 = coord(1/2)
    
    Abstract
    This article asserts that many research questions (RQs) in cataloging reflect design-based RQs, rather than traditional scientific ones. To support this idea, a review of existing discussions of RQs is presented to identify prominent types of RQs, including design-based RQs. RQ types are then classified into a taxonomic framework and compared with RQs from the Everyday Cataloger Concerns project, which aimed to identify important areas of research from the perspective of practicing catalogers. This comparative method demonstrates the ways in which the research areas identified by cataloging practitioners reflect design RQs, and therefore require design approaches and methods to answer them.
    Date
    30. 5.2019 19:14:22
  17. Miksa, S.D.; Burnett, K.; Bonnici, L.J.; Kim, J.: The development of a facet analysis system to identify and measure the dimensions of interaction in online learning (2007) 0.08
    0.07612461 = product of:
      0.15224922 = sum of:
        0.15224922 = sum of:
          0.11757905 = weight(_text_:identify in 581) [ClassicSimilarity], result of:
            0.11757905 = score(doc=581,freq=6.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.46885374 = fieldWeight in 581, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0390625 = fieldNorm(doc=581)
          0.034670167 = weight(_text_:22 in 581) [ClassicSimilarity], result of:
            0.034670167 = score(doc=581,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.19345059 = fieldWeight in 581, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=581)
      0.5 = coord(1/2)
    
    Abstract
    The development of a facet analysis system to code and analyze data in a mixed-method study is discussed. The research goal was to identify the dimensions of interaction that contribute to student satisfaction in online Web-supported courses. The study was conducted between 2000 and 2002 at the Florida State University School of Information Studies. The researchers developed a facet analysis system that meets S.R. Ranganathan's (1967) requirements for articulation on three planes (idea, verbal, and notational). This system includes a codebook (verbal), coding procedures, and formulae (notational) for quantitative analysis of logs of chat sessions and postings to discussion boards for eight master's level courses taught online during the fall 2000 semester. Focus group interviews were subsequently held with student participants to confirm that results of the facet analysis reflected their experiences with the courses. The system was developed through a process of emergent coding. The researchers have been unable to identify any prior use of facet analysis for the analysis of research data as in this study. Identifying the facet analysis system was a major breakthrough in the research process, which, in turn, provided the researchers with a lens through which to analyze and interpret the data. In addition, identification of the faceted nature of the system opens up new possibilities for automation of the coding process.
    Date
    2.11.2007 10:22:40
  18. Thelwall, M.; Buckley, K.; Paltoglou, G.; Cai, D.; Kappas, A.: Sentiment strength detection in short informal text (2010) 0.08
    0.07612461 = product of:
      0.15224922 = sum of:
        0.15224922 = sum of:
          0.11757905 = weight(_text_:identify in 4200) [ClassicSimilarity], result of:
            0.11757905 = score(doc=4200,freq=6.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.46885374 = fieldWeight in 4200, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4200)
          0.034670167 = weight(_text_:22 in 4200) [ClassicSimilarity], result of:
            0.034670167 = score(doc=4200,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.19345059 = fieldWeight in 4200, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4200)
      0.5 = coord(1/2)
    
    Abstract
    A huge number of informal messages are posted every day in social network sites, blogs, and discussion forums. Emotions seem to be frequently important in these texts for expressing friendship, showing social support or as part of online arguments. Algorithms to identify sentiment and sentiment strength are needed to help understand the role of emotion in this informal communication and also to identify inappropriate or anomalous affective utterances, potentially associated with threatening behavior to the self or others. Nevertheless, existing sentiment detection algorithms tend to be commercially oriented, designed to identify opinions about products rather than user behaviors. This article partly fills this gap with a new algorithm, SentiStrength, to extract sentiment strength from informal English text, using new methods to exploit the de facto grammars and spelling styles of cyberspace. Applied to MySpace comments and with a lookup table of term sentiment strengths optimized by machine learning, SentiStrength is able to predict positive emotion with 60.6% accuracy and negative emotion with 72.8% accuracy, both based upon strength scales of 1-5. The former, but not the latter, is better than baseline and a wide range of general machine learning approaches.
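    A minimal sketch of the lexicon-lookup idea behind such dual-scale scoring, where each text receives a positive strength (1 to 5) and a negative strength (-1 to -5). The tiny lexicon and booster rule below are invented assumptions, not the released SentiStrength resources:

```python
# Dual positive/negative sentiment strength via lexicon lookup; the lexicon
# and booster table are toy illustrations, not SentiStrength's data.
LEXICON = {"love": 3, "great": 2, "happy": 2, "hate": -4, "awful": -3, "sad": -2}
BOOSTERS = {"very": 1, "really": 1}

def sentiment_strength(text):
    pos, neg, boost = 1, -1, 0
    for word in text.lower().split():
        if word in BOOSTERS:
            boost += BOOSTERS[word]   # strengthen the next sentiment word
            continue
        s = LEXICON.get(word, 0)
        if s > 0:
            pos = max(pos, min(5, s + boost))
        elif s < 0:
            neg = min(neg, max(-5, s - boost))
        boost = 0
    return pos, neg

print(sentiment_strength("really love it but the ending was awful"))  # (4, -3)
```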
    Date
    22. 1.2011 14:29:23
  19. Yang, F.; Zhang, X.: Focal fields in literature on the information divide : the USA, China, UK and India (2020) 0.08
    0.07612461 = product of:
      0.15224922 = sum of:
        0.15224922 = sum of:
          0.11757905 = weight(_text_:identify in 5835) [ClassicSimilarity], result of:
            0.11757905 = score(doc=5835,freq=6.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.46885374 = fieldWeight in 5835, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5835)
          0.034670167 = weight(_text_:22 in 5835) [ClassicSimilarity], result of:
            0.034670167 = score(doc=5835,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.19345059 = fieldWeight in 5835, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5835)
      0.5 = coord(1/2)
    
    Abstract
    Purpose: The purpose of this paper is to identify key countries and their focal research fields on the information divide. Design/methodology/approach: Literature was retrieved to identify key countries and their primary focus. The literature research method was adopted to identify aspects of the primary focus in each key country. Findings: The key countries with literature on the information divide are the USA, China, the UK and India. The problem of health is prominent in the USA, and solutions include providing information, distinguishing users' profiles and improving eHealth literacy. Economic and political factors led to the urban-rural information divide in China, and policy is the most powerful solution. Under the influence of humanism, research on the information divide in the UK focuses on all age groups, and solutions differ according to age. Deep-rooted patriarchal concepts and traditional marriage customs make the gender information divide prominent in India, and increasing women's information consciousness is a feasible way to reduce this divide. Originality/value: This paper is an extensive review study on the information divide, which clarifies the key countries and their focal fields in research on this topic. More important, the paper innovatively analyzes and summarizes existing literature from a country perspective.
    Date
    13. 2.2020 18:22:13
  20. Mandel, C.A.; Wolven, R.: Intellectual access to digital documents : joining proven principles with new technologies (1996) 0.07
    0.07178812 = product of:
      0.14357623 = sum of:
        0.14357623 = sum of:
          0.09503801 = weight(_text_:identify in 597) [ClassicSimilarity], result of:
            0.09503801 = score(doc=597,freq=2.0), product of:
              0.2507798 = queryWeight, product of:
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.05117889 = queryNorm
              0.37896994 = fieldWeight in 597, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.9000635 = idf(docFreq=894, maxDocs=44218)
                0.0546875 = fieldNorm(doc=597)
          0.04853823 = weight(_text_:22 in 597) [ClassicSimilarity], result of:
            0.04853823 = score(doc=597,freq=2.0), product of:
              0.17921975 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05117889 = queryNorm
              0.2708308 = fieldWeight in 597, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=597)
      0.5 = coord(1/2)
    
    Abstract
    This paper considers the relevance of Charles Ammi Cutter's principles of bibliographic access to the universe of Internet-accessible digital objects and explores new methods for applying these principles in the context of new information technologies. The paper examines the value for retrieval of collecting authors' names, identifying authors' roles, collocating works and versions, and providing subject access through classification and controlled vocabularies for digital resources available through the World Wide Web. The authors identify emerging techniques and technologies that can be used in lieu of or as a supplement to traditional cataloging to achieve these functions in organizing access to Internet resources.
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.25-42

Types

  • a 3796
  • m 361
  • el 208
  • s 144
  • b 39
  • x 39
  • r 27
  • i 23
  • ? 9
  • p 5
  • n 4
  • d 3
  • l 2
  • u 2
  • z 2
  • au 1
  • h 1