Search (626 results, page 32 of 32)

  • Active filter: type_ss:"a"
  • Active filter: type_ss:"el"
  1. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 1: frameworks for linking (1999) 0.00
    Score: 1.7430348E-4 = weight(_text_:p in 1244) [ClassicSimilarity] with tf=1.4142135 (freq=2.0), idf=3.5955126 (docFreq=3298, maxDocs=44218), fieldNorm=0.03125, queryNorm=0.021966046, coord(1/3), coord(1/24)
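    The displayed 0.00 is simply this tiny score rounded for display; the factors above follow Lucene's ClassicSimilarity TF-IDF formula. As a sanity check, here is a minimal Python sketch that reproduces this entry's score purely from the values in the explain output (no values are assumed beyond those shown; Lucene computes in 32-bit floats, so the last digits can differ slightly):

      import math

      # Factors from the explain output for doc 1244, term "p"
      freq       = 2.0          # termFreq
      doc_freq   = 3298
      max_docs   = 44218
      field_norm = 0.03125      # length normalization, quantized by Lucene
      query_norm = 0.021966046

      tf  = math.sqrt(freq)                            # 1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.5955126

      query_weight = idf * query_norm                  # 0.078979194
      field_weight = tf * idf * field_norm             # 0.15890071

      score = query_weight * field_weight              # 0.01254985
      score *= 1.0 / 3.0    # coord(1/3): one of three query clauses matched
      score *= 1.0 / 24.0   # coord(1/24): outer coordination factor

      print(f"{score:.7E}")  # ~1.7430E-04, matching the explain output
                             # up to float32 rounding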
    
  2. Edmunds, J.: Zombrary apocalypse!? : RDA, LRM, and the death of cataloging (2017) 0.00
    Score: 1.7430348E-4 = weight(_text_:p in 3818) [ClassicSimilarity] with tf=1.4142135 (freq=2.0), idf=3.5955126 (docFreq=3298, maxDocs=44218), fieldNorm=0.03125, queryNorm=0.021966046, coord(1/3), coord(1/24)
    
    Abstract
    Equally fallacious is the statement that support for "clustering bibliographic records to show relationships between works and their creators" is an "important new feature" of RDA. AACR2 bibliographic records and the systems housing them can, did, and do show such relationships. Finally, whether users want or care to be made "more aware of a work's different editions, translations, or physical formats" is debatable. As an aim, it sounds less like what a user wants and more like what a cataloging librarian thinks a user should want. As Amanda Cossham writes in her recently issued doctoral thesis: "The explicit focus on user needs in the FRBR model, the International Cataloguing Principles, and RDA: Resource Description and Access does not align well with the ways that users use, understand, and experience library catalogues nor with the ways that they understand and experience the wider information environment. User tasks, as constituted in the FRBR model and RDA, are insufficient to meet users' needs." (p. 11, emphasis in the original)
  3. Markey, K.: The online library catalog : paradise lost and paradise regained? (2007) 0.00
    Score: 1.5251554E-4 = weight(_text_:p in 1172) [ClassicSimilarity] with tf=1.4142135 (freq=2.0), idf=3.5955126 (docFreq=3298, maxDocs=44218), fieldNorm=0.02734375, queryNorm=0.021966046, coord(1/3), coord(1/24)
    
    Abstract
    The impetus for this essay is the library community's uncertainty regarding the present and future direction of the library catalog in the era of Google and mass digitization projects. The uncertainty is evident at the highest levels. Deanna Marcum, Associate Librarian for Library Services at the Library of Congress (LC), is struck by undergraduate students who favor digital resources over the online library catalog because such resources are available at any time and from anywhere (Marcum, 2006). She suggests that "the detailed attention that we have been paying to descriptive cataloging may no longer be justified ... retooled catalogers could give more time to authority control, subject analysis, [and] resource identification and evaluation" (Marcum, 2006, 8). In an abrupt about-face, LC terminated series added entries in cataloging records, one of the few subject-rich fields in such records (Cataloging Policy and Support Office, 2006). Mann (2006b) and Schniderman (2006) cite evidence of LC's prevailing viewpoint in favor of simplifying cataloging at the expense of subject cataloging. LC commissioned Karen Calhoun (2006) to prepare a report on "revitalizing" the online library catalog. Calhoun's directive is clear: divert resources from cataloging mass-produced formats (e.g., books) to cataloging unique primary sources (e.g., archives, special collections, teaching objects, research by-products). She sums up her rationale for such a directive: "The existing local catalog's market position has eroded to the point where there is real concern for its ability to weather the competition for information seekers' attention" (p. 10). At the University of California Libraries (2005), a task force's recommendations parallel those in the Calhoun report, especially regarding the elimination of subject headings in favor of automatically generated metadata. Contemplating these events prompted me to revisit the glorious past of the online library catalog. For a decade and a half beginning in the early 1980s, the online library catalog was the jewel in the crown when people eagerly queued at its terminals to find information written by the world's experts. I despair at how eagerly people now embrace Google, given the suspect provenance of the information Google retrieves. Long ago, we could have added more value to the online library catalog, but the only thing we changed was the catalog's medium. Our failure to act back then cost the online catalog the crown. Now that the era of mass digitization has begun, we have a second chance at redesigning the online library catalog, getting it right, coaxing back old users, and attracting new ones. Let's revisit the past, reconsidering missed opportunities, reassessing their merits, combining them with new directions, making bold decisions, and acting decisively on them.
  4. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.00
    Score: 1.3072761E-4 = weight(_text_:p in 1202) [ClassicSimilarity] with tf=1.4142135 (freq=2.0), idf=3.5955126 (docFreq=3298, maxDocs=44218), fieldNorm=0.0234375, queryNorm=0.021966046, coord(1/3), coord(1/24)
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification, Library of Congress Classification (LCC) and Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together ... You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience ... those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another ..."
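    Beagle's "keyword vector clusters" are described only conceptually here. As a rough illustration of the underlying idea, this toy Python sketch builds term-frequency vectors for a few invented catalog surrogates and groups them by cosine similarity; the titles and the 0.3 threshold are assumptions for the example, not details of the Scholastica Project:

      import math
      from collections import Counter

      def keyword_vector(text: str) -> Counter:
          """Bag-of-words term-frequency vector for a text."""
          return Counter(text.lower().split())

      def cosine(a: Counter, b: Counter) -> float:
          """Cosine similarity between two sparse term vectors."""
          dot = sum(a[t] * b[t] for t in a)
          norm = (math.sqrt(sum(v * v for v in a.values()))
                  * math.sqrt(sum(v * v for v in b.values())))
          return dot / norm if norm else 0.0

      # Hypothetical catalog surrogates (titles only, for illustration)
      docs = [
          "online library catalog design",
          "library catalog retrieval study",
          "keyword visualization in digital libraries",
      ]
      vectors = [keyword_vector(d) for d in docs]

      # Greedy single-pass clustering: join the first cluster whose seed
      # document is similar enough, otherwise start a new cluster.
      THRESHOLD = 0.3
      clusters = []
      for i, v in enumerate(vectors):
          for cluster in clusters:
              if cosine(v, vectors[cluster[0]]) >= THRESHOLD:
                  cluster.append(i)
                  break
          else:
              clusters.append([i])

      print(clusters)  # [[0, 1], [2]]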
  5. Bertolucci, K.: Happiness is taxonomy : four structures for Snoopy - libraries' method of categorizing and classification (2003) 0.00
    Score: 1.3072761E-4 = weight(_text_:p in 1212) [ClassicSimilarity] with tf=1.4142135 (freq=2.0), idf=3.5955126 (docFreq=3298, maxDocs=44218), fieldNorm=0.0234375, queryNorm=0.021966046, coord(1/3), coord(1/24)
    
    Content
    Cf.: http://findarticles.com/p/articles/mi_m0FWE/is_3_7/ai_99011617.
  6. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    Score: 1.08939676E-4 = weight(_text_:p in 1182) [ClassicSimilarity] with tf=1.4142135 (freq=2.0), idf=3.5955126 (docFreq=3298, maxDocs=44218), fieldNorm=0.01953125, queryNorm=0.021966046, coord(1/3), coord(1/24)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN); with the crucial exception of geographic location, however, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
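    The "predators at a water hole" strategy, waiting for facts to recur in a standard linguistic pattern, can be made concrete in a few lines. This Python sketch matches the "NAME1 born at NAME2 in DATE" pattern quoted in the abstract against invented sentences; the regex and sample text are illustrative assumptions, not the authors' implementation (real systems such as GATE or UIMA use much richer grammars and gazetteers):

      import re

      # Toy surface pattern for "NAME1 born at NAME2 in DATE"
      PATTERN = re.compile(
          r"(?P<person>[A-Z][a-z]+(?: [A-Z][a-z]+)*) born at "
          r"(?P<place>[A-Z][a-z]+(?: [A-Z][a-z]+)*) in (?P<date>\d{4})"
      )

      text = ("Herman Melville born at New York in 1819. "
              "Emily Dickinson born at Amherst in 1830.")

      for m in PATTERN.finditer(text):
          # Convert the surface match into a structured proposition.
          print({"person": m.group("person"),
                 "place": m.group("place"),
                 "date": m.group("date")})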

Languages

  • d 489
  • e 127
  • a 1
  • f 1
  • i 1
  • no 1