Search (1574 results, page 1 of 79)

  • year_i:[2000 TO 2010}
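
  Note: the active filter uses Lucene range syntax, in which a square bracket is an inclusive bound and a curly brace an exclusive one, so year_i:[2000 TO 2010} keeps publication years 2000 through 2009. A minimal sketch of building such a filter string, assuming a Lucene/Solr-style backend (the helper function below is hypothetical):

      # Hypothetical helper: format a half-open Lucene/Solr range filter.
      # [ or ] marks an inclusive bound, { or } an exclusive one.
      def year_filter(field: str, start: int, end_exclusive: int) -> str:
          return f"{field}:[{start} TO {end_exclusive}}}"

      print(year_filter("year_i", 2000, 2010))  # -> year_i:[2000 TO 2010}

  The per-result score breakdowns below are Lucene ClassicSimilarity "explain" output; a sketch reproducing that arithmetic follows the result list.
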
  1. Weitz, J.: Cataloger's judgment : music cataloging questions and answers from the music OCLC users group newsletter (2003) 0.15
    0.14991523 = product of:
      0.29983047 = sum of:
        0.29983047 = sum of:
          0.21922971 = weight(_text_:light in 4591) [ClassicSimilarity], result of:
            0.21922971 = score(doc=4591,freq=2.0), product of:
              0.34357315 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.059490006 = queryNorm
              0.63808745 = fieldWeight in 4591, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.078125 = fieldNorm(doc=4591)
          0.080600746 = weight(_text_:22 in 4591) [ClassicSimilarity], result of:
            0.080600746 = score(doc=4591,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.38690117 = fieldWeight in 4591, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4591)
      0.5 = coord(1/2)
    
    Abstract
    In this light-hearted and practical compilation, Weitz collects and updates music cataloguing questions and answers featured in the Music OCLC Users Group (MOUG) Newsletter.
    Date
    25.11.2005 18:22:29
  2. Gardner, T.; Iannella, R.: Architecture and software solutions (2000) 0.12
    0.11993219 = product of:
      0.23986438 = sum of:
        0.23986438 = sum of:
          0.17538378 = weight(_text_:light in 4867) [ClassicSimilarity], result of:
            0.17538378 = score(doc=4867,freq=2.0), product of:
              0.34357315 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.059490006 = queryNorm
              0.51047 = fieldWeight in 4867, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0625 = fieldNorm(doc=4867)
          0.064480595 = weight(_text_:22 in 4867) [ClassicSimilarity], result of:
            0.064480595 = score(doc=4867,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.30952093 = fieldWeight in 4867, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4867)
      0.5 = coord(1/2)
    
    Abstract
    Current subject gateways evolved at a time when the discipline of Internet resource discovery was in its infancy. This is reflected in the lack of well-established, lightweight, deployable, easy-to-use standards for metadata and information retrieval. We provide an introduction to the architecture, standards, and software solutions in use by subject gateways, and to the issues that must be addressed to support future subject gateways.
    Date
    22. 6.2002 19:38:24
  3. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.11866613 = sum of:
      0.0944859 = product of:
        0.2834577 = sum of:
          0.2834577 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2834577 = score(doc=562,freq=2.0), product of:
              0.5043569 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.059490006 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.024180222 = product of:
        0.048360445 = sum of:
          0.048360445 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.048360445 = score(doc=562,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  4. Walker, K.; Kwasnik, B.: Providing access to collected works (2002) 0.10
    0.102905095 = sum of:
      0.037136175 = product of:
        0.111408524 = sum of:
          0.111408524 = weight(_text_:objects in 5630) [ClassicSimilarity], result of:
            0.111408524 = score(doc=5630,freq=2.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.35234275 = fieldWeight in 5630, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=5630)
        0.33333334 = coord(1/3)
      0.06576892 = product of:
        0.13153784 = sum of:
          0.13153784 = weight(_text_:light in 5630) [ClassicSimilarity], result of:
            0.13153784 = score(doc=5630,freq=2.0), product of:
              0.34357315 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.059490006 = queryNorm
              0.3828525 = fieldWeight in 5630, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.046875 = fieldNorm(doc=5630)
        0.5 = coord(1/2)
    
    Abstract
    How are the boundaries of information objects to be defined in the networked electronic environment and what is the role of our retrieval systems in providing access where these boundaries are uncertain? The authors consider these questions in light of longstanding problems surrounding the definition of the "work" in the print environment. In particular, they examine the role of the index in providing access to the collected works of the individual writer. They review the discussion in the indexing literature of the "long index," and the close relationship between the functions of indexer and editor in collected works projects. And they treat the role of the index in constituting as a self-contained corpus the disparate types of text that make up a writer's lifetime output. Finally, by way of example, the authors turn to the extensive indexes to Sigmund Freud's psychoanalytic writings.
  5. Proffitt, M.: Pulling it all together : use of METS in RLG cultural materials service (2004) 0.10
    0.10226494 = sum of:
      0.07002465 = product of:
        0.21007393 = sum of:
          0.21007393 = weight(_text_:objects in 767) [ClassicSimilarity], result of:
            0.21007393 = score(doc=767,freq=4.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.6643839 = fieldWeight in 767, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0625 = fieldNorm(doc=767)
        0.33333334 = coord(1/3)
      0.032240298 = product of:
        0.064480595 = sum of:
          0.064480595 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
            0.064480595 = score(doc=767,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.30952093 = fieldWeight in 767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=767)
        0.5 = coord(1/2)
    
    Abstract
    RLG has used METS for a particular application: as a wrapper for structural metadata. When RLG Cultural Materials (RCM) was launched, there was no single way to deal with "complex digital objects". METS provides a standard means of encoding metadata regarding the digital objects represented in RCM, and it has now been fully integrated into the workflow for this service.
    Source
    Library hi tech. 22(2004) no.1, S.65-68
  6. Khoo, C.S.G.; Ng, K.; Ou, S.: ¬An exploratory study of human clustering of Web pages (2003) 0.09
    0.09206356 = product of:
      0.18412712 = sum of:
        0.18412712 = sum of:
          0.15188682 = weight(_text_:light in 2741) [ClassicSimilarity], result of:
            0.15188682 = score(doc=2741,freq=6.0), product of:
              0.34357315 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.059490006 = queryNorm
              0.44208 = fieldWeight in 2741, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.03125 = fieldNorm(doc=2741)
          0.032240298 = weight(_text_:22 in 2741) [ClassicSimilarity], result of:
            0.032240298 = score(doc=2741,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.15476047 = fieldWeight in 2741, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2741)
      0.5 = coord(1/2)
    
    Abstract
    This study seeks to find out how human beings cluster Web pages naturally. Twenty Web pages retrieved by the Northern Light search engine for each of 10 queries were sorted by 3 subjects into categories that were natural or meaningful to them. It was found that different subjects clustered the same set of Web pages quite differently and created different categories. The average inter-subject similarity of the clusters created was a low 0.27. Subjects created an average of 5.4 clusters for each sorting. The categories constructed can be divided into 10 types. About 1/3 of the categories created were topical. Another 20% of the categories relate to the degree of relevance or usefulness. The rest of the categories were subject-independent categories such as format, purpose, authoritativeness and direction to other sources. The authors plan to develop automatic methods for categorizing Web pages using the common categories created by the subjects. It is hoped that the techniques developed can be used by Web search engines to automatically organize Web pages retrieved into categories that are natural to users. 1. Introduction The World Wide Web is an increasingly important source of information for people globally because of its ease of access, the ease of publishing, its ability to transcend geographic and national boundaries, its flexibility and heterogeneity and its dynamic nature. However, Web users also find it increasingly difficult to locate relevant and useful information in this vast information storehouse. Web search engines, despite their scope and power, appear to be quite ineffective. They retrieve too many pages, and though they attempt to rank retrieved pages in order of probable relevance, often the relevant documents do not appear in the top-ranked 10 or 20 documents displayed. Several studies have found that users do not know how to use the advanced features of Web search engines, and do not know how to formulate and re-formulate queries. Users also typically exert minimal effort in performing, evaluating and refining their searches, and are unwilling to scan more than 10 or 20 items retrieved (Jansen, Spink, Bateman & Saracevic, 1998). This suggests that the conventional ranked-list display of search results does not satisfy user requirements, and that better ways of presenting and summarizing search results have to be developed. One promising approach is to group retrieved pages into clusters or categories to allow users to navigate immediately to the "promising" clusters where the most useful Web pages are likely to be located. This approach has been adopted by a number of search engines (notably Northern Light) and search agents.
    Date
    12. 9.2004 9:56:22
    Object
    Northern Light
  7. Strader, C.R.: Author-assigned keywords versus Library of Congress Subject Headings : implications for the cataloging of electronic theses and dissertations (2009) 0.09
    0.089949146 = product of:
      0.17989829 = sum of:
        0.17989829 = sum of:
          0.13153784 = weight(_text_:light in 3602) [ClassicSimilarity], result of:
            0.13153784 = score(doc=3602,freq=2.0), product of:
              0.34357315 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.059490006 = queryNorm
              0.3828525 = fieldWeight in 3602, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.046875 = fieldNorm(doc=3602)
          0.048360445 = weight(_text_:22 in 3602) [ClassicSimilarity], result of:
            0.048360445 = score(doc=3602,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.23214069 = fieldWeight in 3602, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3602)
      0.5 = coord(1/2)
    
    Abstract
    This study is an examination of the overlap between author-assigned keywords and cataloger-assigned Library of Congress Subject Headings (LCSH) for a set of electronic theses and dissertations in Ohio State University's online catalog. The project is intended to contribute to the literature on the issue of keywords versus controlled vocabularies in the use of online catalogs and databases. Findings support previous studies' conclusions that both keywords and controlled vocabularies complement one another. Further, even in the presence of bibliographic record enhancements, such as abstracts or summaries, keywords and subject headings provided a significant number of unique terms that could affect the success of keyword searches. Implications for the maintenance of controlled vocabularies such as LCSH also are discussed in light of the patterns of matches and nonmatches found between the keywords and their corresponding subject headings.
    Date
    10. 9.2000 17:38:22
  8. Srinivasan, R.; Boast, R.; Becvar, K.M.; Furner, J.: Blobgects : digital museum catalogs and diverse user communities (2009) 0.09
    0.089349374 = sum of:
      0.06919919 = product of:
        0.20759755 = sum of:
          0.20759755 = weight(_text_:objects in 2754) [ClassicSimilarity], result of:
            0.20759755 = score(doc=2754,freq=10.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.656552 = fieldWeight in 2754, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2754)
        0.33333334 = coord(1/3)
      0.020150186 = product of:
        0.040300373 = sum of:
          0.040300373 = weight(_text_:22 in 2754) [ClassicSimilarity], result of:
            0.040300373 = score(doc=2754,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.19345059 = fieldWeight in 2754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2754)
        0.5 = coord(1/2)
    
    Abstract
    This article presents an exploratory study of Blobgects, an experimental interface for an online museum catalog that enables social tagging and blogging activity around a set of cultural heritage objects held by a preeminent museum of anthropology and archaeology. This study attempts to understand not just whether social tagging and commenting about these objects is useful but rather whose tags and voices matter in presenting different expert perspectives around digital museum objects. Based on an empirical comparison between two different user groups (Canadian Inuit high-school students and museum studies students in the United States), we found that merely adding the ability to tag and comment to the museum's catalog does not sufficiently allow users to learn about or engage with the objects represented by catalog entries. Rather, the specialist language of the catalog provides too little contextualization for users to enter into the sort of dialog that proponents of Web 2.0 technologies promise. Overall, we propose a more nuanced application of Web 2.0 technologies within museums - one which provides a contextual basis that gives users a starting point for engagement and permits users to make sense of objects in relation to their own needs, uses, and understandings.
    Date
    22. 3.2009 18:52:32
  9. Bates, M.J.: Fundamental forms of information (2006) 0.08
    0.08322087 = sum of:
      0.04332554 = product of:
        0.12997662 = sum of:
          0.12997662 = weight(_text_:objects in 2746) [ClassicSimilarity], result of:
            0.12997662 = score(doc=2746,freq=2.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.41106653 = fieldWeight in 2746, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2746)
        0.33333334 = coord(1/3)
      0.039895333 = product of:
        0.07979067 = sum of:
          0.07979067 = weight(_text_:22 in 2746) [ClassicSimilarity], result of:
            0.07979067 = score(doc=2746,freq=4.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.38301262 = fieldWeight in 2746, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2746)
        0.5 = coord(1/2)
    
    Abstract
    Fundamental forms of information, as well as the term information itself, are defined and developed for the purposes of information science/studies. Concepts of natural and represented information (taking an unconventional sense of representation), encoded and embodied information, as well as experienced, enacted, expressed, embedded, recorded, and trace information are elaborated. The utility of these terms for the discipline is illustrated with examples from the study of information-seeking behavior and of information genres. Distinctions between the information and curatorial sciences with respect to their social (and informational) objects of study are briefly outlined.
    Date
    22. 3.2009 18:15:22
  10. Understanding metadata (2004) 0.08
    0.0817552 = sum of:
      0.0495149 = product of:
        0.1485447 = sum of:
          0.1485447 = weight(_text_:objects in 2686) [ClassicSimilarity], result of:
            0.1485447 = score(doc=2686,freq=2.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.46979034 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
        0.33333334 = coord(1/3)
      0.032240298 = product of:
        0.064480595 = sum of:
          0.064480595 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
            0.064480595 = score(doc=2686,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.30952093 = fieldWeight in 2686, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2686)
        0.5 = coord(1/2)
    
    Abstract
    Metadata (structured information about an object or collection of objects) is increasingly important to libraries, archives, and museums. Although librarians are familiar with a number of issues that apply to creating and using metadata (e.g., authority control, controlled vocabularies, etc.), the world of metadata is nonetheless different from library cataloging, with its own set of challenges. Therefore, whether you are new to these concepts or quite experienced with classic cataloging, this short (20-page) introductory paper on metadata can be helpful.
    Date
    10. 9.2004 10:22:40
  11. Yee, R.; Beaubien, R.: ¬A preliminary crosswalk from METS to IMS content packaging (2004) 0.08
    0.076698706 = sum of:
      0.052518487 = product of:
        0.15755546 = sum of:
          0.15755546 = weight(_text_:objects in 4752) [ClassicSimilarity], result of:
            0.15755546 = score(doc=4752,freq=4.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.49828792 = fieldWeight in 4752, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=4752)
        0.33333334 = coord(1/3)
      0.024180222 = product of:
        0.048360445 = sum of:
          0.048360445 = weight(_text_:22 in 4752) [ClassicSimilarity], result of:
            0.048360445 = score(doc=4752,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.23214069 = fieldWeight in 4752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4752)
        0.5 = coord(1/2)
    
    Abstract
    As educational technology becomes pervasive, demand will grow for library content to be incorporated into courseware. Among the barriers impeding interoperability between libraries and educational tools is the difference in specifications commonly used for the exchange of digital objects and metadata. Among libraries, Metadata Encoding and Transmission Standard (METS) is a new but increasingly popular standard; the IMS content-package (IMS-CP) plays a parallel role in educational technology. This article describes how METS-encoded library content can be converted into digital objects for IMS-compliant systems through an XSLT-based crosswalk. The conceptual models behind METS and IMS-CP are compared, the design and limitations of an XSLT-based translation are described, and the crosswalks are related to other techniques to enhance interoperability.
    Source
    Library hi tech. 22(2004) no.1, S.69-81
  12. Song, D.; Bruza, P.D.: Towards context sensitive information inference (2003) 0.07
    0.07495762 = product of:
      0.14991523 = sum of:
        0.14991523 = sum of:
          0.10961486 = weight(_text_:light in 1428) [ClassicSimilarity], result of:
            0.10961486 = score(doc=1428,freq=2.0), product of:
              0.34357315 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.059490006 = queryNorm
              0.31904373 = fieldWeight in 1428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1428)
          0.040300373 = weight(_text_:22 in 1428) [ClassicSimilarity], result of:
            0.040300373 = score(doc=1428,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.19345059 = fieldWeight in 1428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1428)
      0.5 = coord(1/2)
    
    Abstract
    Humans can make hasty but generally robust judgements about what a text fragment is, or is not, about. Such judgements are termed information inference. This article furnishes an account of information inference from a psychologistic stance. By drawing on theories from nonclassical logic and applied cognition, an information inference mechanism is proposed that makes inferences via computations of information flow through an approximation of a conceptual space. Within a conceptual space, information is represented geometrically. In this article, geometric representations of words are realized as vectors in a high dimensional semantic space, which is automatically constructed from a text corpus. Two approaches are presented for priming vector representations according to context. The first approach uses a concept combination heuristic to adjust the vector representation of a concept in the light of the representation of another concept. The second approach computes a prototypical concept on the basis of exemplar trace texts and moves it in the dimensional space according to the context. Information inference is evaluated by measuring the effectiveness of query models derived by information flow computations. Results show that information flow contributes significantly to query model effectiveness, particularly with respect to precision. Moreover, retrieval effectiveness compares favorably with two probabilistic query models, and another based on semantic association. More generally, this article can be seen as a contribution towards realizing operational systems that mimic text-based human reasoning.
    Date
    22. 3.2003 19:35:46
  13. Dominich, S.: Mathematical foundations of information retrieval (2001) 0.07
    0.07495762 = product of:
      0.14991523 = sum of:
        0.14991523 = sum of:
          0.10961486 = weight(_text_:light in 1753) [ClassicSimilarity], result of:
            0.10961486 = score(doc=1753,freq=2.0), product of:
              0.34357315 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.059490006 = queryNorm
              0.31904373 = fieldWeight in 1753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1753)
          0.040300373 = weight(_text_:22 in 1753) [ClassicSimilarity], result of:
            0.040300373 = score(doc=1753,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.19345059 = fieldWeight in 1753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1753)
      0.5 = coord(1/2)
    
    Abstract
    This book offers a comprehensive and consistent mathematical approach to information retrieval (IR) without which no implementation is possible, and sheds an entirely new light upon the structure of IR models. It contains the descriptions of all IR models in a unified formal style and language, along with examples for each, thus offering a comprehensive overview of them. The book also creates mathematical foundations and a consistent mathematical theory (including all mathematical results achieved so far) of IR as a stand-alone mathematical discipline, which thus can be read and taught independently. Also, the book contains all necessary mathematical knowledge on which IR relies, to help the reader avoid searching different sources. The book will be of interest to computer or information scientists, librarians, mathematicians, undergraduate students and researchers whose work involves information retrieval.
    Date
    22. 3.2008 12:26:32
  14. Lubas, R.L.; Wolfe, R.H.W.; Fleischman, M.: Creating metadata practices for MIT's OpenCourseWare Project (2004) 0.07
    0.071535796 = sum of:
      0.04332554 = product of:
        0.12997662 = sum of:
          0.12997662 = weight(_text_:objects in 2843) [ClassicSimilarity], result of:
            0.12997662 = score(doc=2843,freq=2.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.41106653 = fieldWeight in 2843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2843)
        0.33333334 = coord(1/3)
      0.02821026 = product of:
        0.05642052 = sum of:
          0.05642052 = weight(_text_:22 in 2843) [ClassicSimilarity], result of:
            0.05642052 = score(doc=2843,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.2708308 = fieldWeight in 2843, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2843)
        0.5 = coord(1/2)
    
    Abstract
    The MIT libraries were called upon to recommend a metadata scheme for the resources contained in MIT's OpenCourseWare (OCW) project. The resources in OCW needed descriptive, structural, and technical metadata. The SCORM standard, which uses IEEE Learning Object Metadata for its descriptive standard, was selected for its focus on educational objects. However, it was clear that the Libraries would need to recommend how the standard would be applied and adapted to accommodate needs that were not addressed in the standard's specifications. The newly formed MIT Libraries Metadata Unit adapted established practices from AACR2 and MARC traditions when facing situations in which there were no precedents to follow.
    Source
    Library hi tech. 22(2004) no.2, S.138-143
  15. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.07
    0.06904767 = product of:
      0.13809533 = sum of:
        0.13809533 = sum of:
          0.11391511 = weight(_text_:light in 1184) [ClassicSimilarity], result of:
            0.11391511 = score(doc=1184,freq=6.0), product of:
              0.34357315 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.059490006 = queryNorm
              0.33156 = fieldWeight in 1184, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
          0.024180222 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
            0.024180222 = score(doc=1184,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.116070345 = fieldWeight in 1184, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1184)
      0.5 = coord(1/2)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include: * Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries? * Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant? * Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright? * Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap? * Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type? These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
  16. Madison, O.M.A.: ¬The IFLA Functional Requirements for Bibliographic Records : international standards for bibliographic control (2000) 0.06
    0.063915595 = sum of:
      0.043765407 = product of:
        0.13129622 = sum of:
          0.13129622 = weight(_text_:objects in 187) [ClassicSimilarity], result of:
            0.13129622 = score(doc=187,freq=4.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.41523993 = fieldWeight in 187, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=187)
        0.33333334 = coord(1/3)
      0.020150186 = product of:
        0.040300373 = sum of:
          0.040300373 = weight(_text_:22 in 187) [ClassicSimilarity], result of:
            0.040300373 = score(doc=187,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.19345059 = fieldWeight in 187, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=187)
        0.5 = coord(1/2)
    
    Abstract
    The formal charge for the IFLA study involving international bibliography standards was to delineate the functions that are performed by the bibliographic record with respect to various media, applications, and user needs. The method used was the entity relationship analysis technique. Three groups of entities that are the key objects of interest to users of bibliographic records were defined. The primary group contains four entities: work, expression, manifestation, and item. The second group includes entities responsible for the intellectual or artistic content, production, or ownership of entities in the first group. The third group includes entities that represent concepts, objects, events, and places. In the study we identified the attributes associated with each entity and the relationships that are most important to users. The attributes and relationships were mapped to the functional requirements for bibliographic records that were defined in terms of four user tasks: to find, identify, select, and obtain. Basic requirements for national bibliographic records were recommended based on the entity analysis. The recommendations of the study are compared with two standards, AACR (Anglo-American Cataloguing Rules) and the Dublin Core, to place them into pragmatic context. The results of the study are being used in the review of the complete set of ISBDs as the initial benchmark in determining data elements for each format.
    Date
    10. 9.2000 17:38:22
  17. Raper, J.: Geographic relevance (2007) 0.06
    0.063915595 = sum of:
      0.043765407 = product of:
        0.13129622 = sum of:
          0.13129622 = weight(_text_:objects in 846) [ClassicSimilarity], result of:
            0.13129622 = score(doc=846,freq=4.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.41523993 = fieldWeight in 846, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=846)
        0.33333334 = coord(1/3)
      0.020150186 = product of:
        0.040300373 = sum of:
          0.040300373 = weight(_text_:22 in 846) [ClassicSimilarity], result of:
            0.040300373 = score(doc=846,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.19345059 = fieldWeight in 846, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=846)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - This paper concerns the dimensions of relevance in information retrieval systems and their completeness in new retrieval contexts such as mobile search. Geography as a factor in relevance is little understood and information seeking is assumed to take place in indoor environments. Yet the rise of information seeking on the move using mobile devices implies the need to better understand the kind of situational relevance operating in this kind of context. Design/methodology/approach - The paper outlines and explores a geographic information seeking process in which geographic information needs (conditioned by needs and tasks, in context) drive the acquisition and use of geographic information objects, which in turn influence geographic behaviour in the environment. Geographic relevance is defined as "a relation between a geographic information need" (like an attention span) and "the spatio-temporal expression of the geographic information objects needed to satisfy it" (like an area of influence). Some empirical examples are given to indicate the theoretical and practical application of this work. Findings - The paper sets out definitions of geographical information needs based on cognitive and geographic criteria, and proposes four canonical cases, which might be theorised as anomalous states of geographic knowledge (ASGK). The paper argues that geographic relevance is best defined as a spatio-temporally extended relation between information need (an "attention" span) and geographic information object (a zone of "influence"), and it defines four domains of geographic relevance. Finally, a model of geographic relevance is suggested in which attention and influence are modelled as map layers whose intersection can define the nature of the relation. Originality/value - Geographic relevance is a new field of research that has so far been poorly defined and little researched. This paper sets out new principles for the study of geographic information behaviour.
    Date
    23.12.2007 14:22:24
  18. Ku, L.-W.; Ho, H.-W.; Chen, H.-H.: Opinion mining and relationship discovery using CopeOpi opinion analysis system (2009) 0.06
    0.063915595 = sum of:
      0.043765407 = product of:
        0.13129622 = sum of:
          0.13129622 = weight(_text_:objects in 2938) [ClassicSimilarity], result of:
            0.13129622 = score(doc=2938,freq=4.0), product of:
              0.3161936 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.059490006 = queryNorm
              0.41523993 = fieldWeight in 2938, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2938)
        0.33333334 = coord(1/3)
      0.020150186 = product of:
        0.040300373 = sum of:
          0.040300373 = weight(_text_:22 in 2938) [ClassicSimilarity], result of:
            0.040300373 = score(doc=2938,freq=2.0), product of:
              0.20832387 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.059490006 = queryNorm
              0.19345059 = fieldWeight in 2938, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2938)
        0.5 = coord(1/2)
    
    Abstract
    We present CopeOpi, an opinion-analysis system, which extracts from the Web opinions about specific targets, summarizes the polarity and strength of these opinions, and tracks opinion variations over time. Objects that yield similar opinion tendencies over a certain time period may be correlated due to the latent causal events. CopeOpi discovers relationships among objects based on their opinion-tracking plots and collocations. Event bursts are detected from the tracking plots, and the strength of opinion relationships is determined by the coverage of these plots. To evaluate opinion mining, we use the NTCIR corpus annotated with opinion information at sentence and document levels. CopeOpi achieves sentence- and document-level f-measures of 62% and 74%. For relationship discovery, we collected 1.3M economics-related documents from 93 Web sources over 22 months, and analyzed collocation-based, opinion-based, and hybrid models. We consider as correlated company pairs that demonstrate similar stock-price variations, and selected these as the gold standard for evaluation. Results show that opinion-based and collocation-based models complement each other, and that integrated models perform the best. The top 25, 50, and 100 pairs discovered achieve precision rates of 1, 0.92, and 0.79, respectively.
  19. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.06
    0.062990606 = product of:
      0.12598121 = sum of:
        0.12598121 = product of:
          0.3779436 = sum of:
            0.3779436 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.3779436 = score(doc=140,freq=2.0), product of:
                0.5043569 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.059490006 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf. also: https://studylibde.com/doc/13053640/richard-schrodt. Cf. also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  20. Stock, M.; Stock, W.G.: Internet-Suchwerkzeuge im Vergleich (III) : Informationslinguistik und -statistik: AltaVista, FAST und Northern Light (2001) 0.06
    0.06200753 = product of:
      0.12401506 = sum of:
        0.12401506 = product of:
          0.24803013 = sum of:
            0.24803013 = weight(_text_:light in 5578) [ClassicSimilarity], result of:
              0.24803013 = score(doc=5578,freq=4.0), product of:
                0.34357315 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.059490006 = queryNorm
                0.7219136 = fieldWeight in 5578, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5578)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Search engines on the World Wide Web operate automatically: they track down documents, index them, keep the database (more or less) current, and offer retrieval interfaces to their users. In our known-item retrieval test (Password 11/2000), Google, Alta Vista, Northern Light, and FAST (All the Web) performed best, in that order. The latter three systems work with a combination of information-linguistic and information-statistical algorithms, which is why we discuss them together here. Our information science analyses centre on the "highlights" of the respective search tools.
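
  The per-result breakdowns above are Lucene "explain" output for the ClassicSimilarity (TF-IDF) formula: each matching term contributes queryWeight x fieldWeight, where queryWeight = idf x queryNorm, fieldWeight = sqrt(tf) x idf x fieldNorm, idf = 1 + ln(maxDocs/(docFreq+1)), and coord(n/m) scales a clause by the fraction of query clauses matched. A minimal sketch that reproduces result 1's score from the constants in its breakdown (queryNorm is taken as a given, since it depends on the full query):

      import math

      def idf(doc_freq: int, max_docs: int) -> float:
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq: float, doc_freq: int, max_docs: int,
                     field_norm: float, query_norm: float) -> float:
          # One term's contribution: queryWeight * fieldWeight.
          w = idf(doc_freq, max_docs)
          query_weight = w * query_norm                    # e.g. 0.34357315
          field_weight = math.sqrt(freq) * w * field_norm  # tf = sqrt(freq)
          return query_weight * field_weight

      MAX_DOCS, QUERY_NORM = 44218, 0.059490006            # from the breakdowns

      light = term_score(2.0, 372,  MAX_DOCS, 0.078125, QUERY_NORM)  # 0.21922971
      t22   = term_score(2.0, 3622, MAX_DOCS, 0.078125, QUERY_NORM)  # 0.08060075
      print(round(0.5 * (light + t22), 8))  # coord(1/2) -> ~0.14991523 (result 1)

  The same arithmetic accounts for every breakdown in the list; for example, result 3 multiplies its "3a" clause by coord(1/3) = 0.33333334 before summing.
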

Types

  • a 1324
  • m 168
  • el 97
  • s 63
  • b 26
  • x 15
  • i 8
  • n 3
  • r 2
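
  Facet counts such as the type distribution above are value frequencies over the full result set. A minimal sketch, with a hypothetical record layout:

      from collections import Counter

      # Hypothetical records; "types" holds the catalogue's type codes.
      records = [
          {"title": "Cataloger's judgment", "types": ["m"]},
          {"title": "Architecture and software solutions", "types": ["a"]},
          {"title": "Understanding metadata", "types": ["el", "a"]},
      ]

      facet = Counter(code for rec in records for code in rec["types"])
      for code, count in facet.most_common():
          print(code, count)   # e.g. "a 2", "m 1", "el 1"

  Because a record can carry more than one type code, facet counts may sum to more than the number of results; the codes above total 1706 against 1574 results.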
