Search (246 results, page 1 of 13)

  • Filter: type_ss:"el"
  • Filter: type_ss:"a"
  1. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.22
    0.22028741 = product of:
      0.5874331 = sum of:
        0.08391901 = product of:
          0.25175703 = sum of:
            0.25175703 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.25175703 = score(doc=230,freq=2.0), product of:
                0.3359639 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03962768 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
        0.25175703 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.25175703 = score(doc=230,freq=2.0), product of:
            0.3359639 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962768 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
        0.25175703 = weight(_text_:2f in 230) [ClassicSimilarity], result of:
          0.25175703 = score(doc=230,freq=2.0), product of:
            0.3359639 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03962768 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.375 = coord(3/8)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
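    Note
    The explain tree above is standard Lucene "ClassicSimilarity" (TF-IDF) debug output, and its numbers can be recomputed from the printed components: tf = sqrt(termFreq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, and each term score = queryWeight * fieldWeight. A minimal Python sketch (all constants are copied from the output above; the variable names are ours, not Lucene's):

      import math

      # Components copied from the explain output for doc 230 (term "_text_:3a").
      freq = 2.0               # termFreq
      idf = 8.478011           # idf(docFreq=24, maxDocs=44218)
      query_norm = 0.03962768  # queryNorm
      field_norm = 0.0625      # fieldNorm(doc=230)

      tf = math.sqrt(freq)                  # 1.4142135 = tf(freq=2.0)
      query_weight = idf * query_norm       # 0.3359639 = queryWeight
      field_weight = tf * idf * field_norm  # 0.7493574 = fieldWeight
      term_score = query_weight * field_weight
      print(term_score)                     # ~0.25175703, as in the tree

      # The hit's score combines the three identical term scores: the first
      # is scaled by coord(1/3), and the sum by coord(3/8) = 0.375.
      doc_score = (term_score / 3 + term_score + term_score) * (3 / 8)
      print(doc_score)                      # ~0.22028741, the score of hit 1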
  2. Mitchell, J.S.; Zeng, M.L.; Zumer, M.: Modeling classification systems in multicultural and multilingual contexts (2012) 0.07
    0.07186526 = product of:
      0.14373052 = sum of:
        0.02834915 = weight(_text_:libraries in 1967) [ClassicSimilarity], result of:
          0.02834915 = score(doc=1967,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.05077526 = weight(_text_:case in 1967) [ClassicSimilarity], result of:
          0.05077526 = score(doc=1967,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.04182736 = weight(_text_:studies in 1967) [ClassicSimilarity], result of:
          0.04182736 = score(doc=1967,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 1967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1967)
        0.022778753 = product of:
          0.045557506 = sum of:
            0.045557506 = weight(_text_:22 in 1967) [ClassicSimilarity], result of:
              0.045557506 = score(doc=1967,freq=4.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.32829654 = fieldWeight in 1967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1967)
          0.5 = coord(1/2)
      0.5 = coord(4/8)
    
    Abstract
    This paper reports on the second part of an initiative of the authors on researching classification systems with the conceptual model defined by the Functional Requirements for Subject Authority Data (FRSAD) final report. In an earlier study, the authors explored whether the FRSAD conceptual model could be extended beyond subject authority data to model classification data. The focus of the current study is to determine if classification data modeled using FRSAD can be used to solve real-world discovery problems in multicultural and multilingual contexts. The paper discusses the relationships between entities (same type or different types) in the context of classification systems that involve multiple translations and/or multicultural implementations. Results of two case studies are presented in detail: (a) two instances of the DDC (DDC 22 in English, and the Swedish-English mixed translation of DDC 22), and (b) Chinese Library Classification. The use cases of conceptual models in practice are also discussed.
    Source
    Beyond libraries - subject metadata in the digital environment and semantic web. IFLA Satellite Post-Conference, 17-18 August 2012, Tallinn
  3. Fagan, J.C.: Usability studies of faceted browsing : a literature review (2010) 0.04
    0.03815141 = product of:
      0.15260564 = sum of:
        0.03307401 = weight(_text_:libraries in 4396) [ClassicSimilarity], result of:
          0.03307401 = score(doc=4396,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.25406548 = fieldWeight in 4396, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4396)
        0.11953163 = weight(_text_:studies in 4396) [ClassicSimilarity], result of:
          0.11953163 = score(doc=4396,freq=12.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.75592977 = fieldWeight in 4396, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4396)
      0.25 = coord(2/8)
    
    Abstract
    Faceted browsing is a common feature of new library catalog interfaces. But to what extent does it improve user performance in searching within today's library catalog systems? This article reviews the literature for user studies involving faceted browsing and user studies of "next-generation" library catalogs that incorporate faceted browsing. Both the results and the methods of these studies are analyzed by asking, What do we currently know about faceted browsing? How can we design better studies of faceted browsing in library catalogs? The article proposes methodological considerations for practicing librarians and provides examples of goals, tasks, and measurements for user studies of faceted browsing in library catalogs.
    Source
    Information technology and libraries. 2010, June, S.58-66
  4. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.03
    0.033441134 = product of:
      0.13376454 = sum of:
        0.10155052 = weight(_text_:case in 3895) [ClassicSimilarity], result of:
          0.10155052 = score(doc=3895,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.5828877 = fieldWeight in 3895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.09375 = fieldNorm(doc=3895)
        0.03221402 = product of:
          0.06442804 = sum of:
            0.06442804 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.06442804 = score(doc=3895,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Date
    24. 8.2005 19:20:22
  5. Wake, S.; Nicholson, D.: HILT: High-Level Thesaurus Project : building consensus for interoperable subject access across communities (2001) 0.03
    0.030688863 = product of:
      0.08183697 = sum of:
        0.026727835 = weight(_text_:libraries in 1224) [ClassicSimilarity], result of:
          0.026727835 = score(doc=1224,freq=4.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2053159 = fieldWeight in 1224, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03125 = fieldNorm(doc=1224)
        0.033850174 = weight(_text_:case in 1224) [ClassicSimilarity], result of:
          0.033850174 = score(doc=1224,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.1942959 = fieldWeight in 1224, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=1224)
        0.02125896 = product of:
          0.04251792 = sum of:
            0.04251792 = weight(_text_:area in 1224) [ClassicSimilarity], result of:
              0.04251792 = score(doc=1224,freq=2.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.21775553 = fieldWeight in 1224, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1224)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    This article provides an overview of the work carried out by the HILT Project <http://hilt.cdlr.strath.ac.uk> in making recommendations towards interoperable subject access, or cross-searching and browsing distributed services amongst the archives, libraries, museums and electronic services sectors. The article details consensus achieved at the 19 June 2001 HILT Workshop and discusses the HILT Stakeholder Survey. In 1999 Péter Jascó wrote that "savvy searchers" are asking for direction. Three years later the scenario he describes, that of searchers cross-searching databases where the subject vocabulary used in each case is different, still rings true. Jascó states that, in many cases, databases do not offer the necessary aids required to use the "preferred terms of the subject-controlled vocabulary". The databases to which Jascó refers are Dialog and DataStar. However, the situation he describes applies as well to the area that HILT is researching: that of cross-searching and browsing by subject across databases and catalogues in archives, libraries, museums and online information services. So how does a user access information on a particular subject when it is indexed across a multitude of services under different, but quite often similar, subject terms? Also, if experienced searchers are having problems, what about novice searchers? As information professionals, it is our role to investigate such problems and recommend solutions. Although there is no hard empirical evidence one way or another, HILT participants agree that the problem for users attempting to search across databases is real. There is a strong likelihood that users are disadvantaged by the use of different subject terminology combined with a multitude of different practices taking place within the archive, library, museums and online communities. Arguably, failure to address this problem of interoperability undermines the value of cross-searching and browsing facilities, and wastes public money because relevant resources are 'hidden' from searchers. HILT is charged with analysing this broad problem through qualitative methods, with the main aim of presenting a set of recommendations on how to make it easier to cross-search and browse distributed services. Because this is a very large problem composed of many strands, HILT recognizes that any proposed solutions must address a host of issues. Recommended solutions must be affordable, sustainable, politically acceptable, useful, future-proof and international in scope. It also became clear to the HILT team that progress toward finding solutions to the interoperability problem could only be achieved through direct dialogue with other parties keen to solve this problem, and that the problem was as much about consensus building as it was about finding a solution. This article describes how HILT approached the cross-searching problem; how it investigated the nature of the problem, detailing results from the HILT Stakeholder Survey; and how it achieved consensus through the recent HILT Workshop.
  6. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.03
    0.029276006 = product of:
      0.07806935 = sum of:
        0.04910217 = weight(_text_:libraries in 1184) [ClassicSimilarity], result of:
          0.04910217 = score(doc=1184,freq=24.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.3771894 = fieldWeight in 1184, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1184)
        0.02091368 = weight(_text_:studies in 1184) [ClassicSimilarity], result of:
          0.02091368 = score(doc=1184,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.13226016 = fieldWeight in 1184, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1184)
        0.008053505 = product of:
          0.01610701 = sum of:
            0.01610701 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.01610701 = score(doc=1184,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.116070345 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    Google's December 2004 announcement of its intention to collaborate with five major research libraries - Harvard University, the University of Michigan, Stanford University, the University of Oxford, and the New York Public Library - to digitize and surface their print book collections in the Google searching universe has, predictably, stirred conflicting opinion, with some viewing the project as a welcome opportunity to enhance the visibility of library collections in new environments, and others wary of Google's prospective role as gateway to these collections. The project has been vigorously debated on discussion lists and blogs, with the participating libraries commonly referred to as "the Google 5". One point most observers seem to concede is that the questions raised by this initiative are both timely and significant. The Google Print Library Project (GPLP) has galvanized a long overdue, multi-faceted discussion about library print book collections. The print book is core to library identity and practice, but in an era of zero-sum budgeting, it is almost inevitable that print book budgets will decline as budgets for serials, digital resources, and other materials expand. As libraries re-allocate resources to accommodate changing patterns of user needs, print book budgets may be adversely impacted. Of course, the degree of impact will depend on a library's perceived mission. A public library may expect books to justify their shelf-space, with de-accession the consequence of minimal use. A national library, on the other hand, has a responsibility to the scholarly and cultural record and may seek to collect comprehensively within particular areas, with the attendant obligation to secure the long-term retention of its print book collections. The combination of limited budgets, changing user needs, and differences in library collection strategies underscores the need to think about a collective, or system-wide, print book collection - in particular, how can an inter-institutional system be organized to achieve goals that would be difficult, and/or prohibitively expensive, for any one library to undertake individually [4]? Mass digitization programs like GPLP cast new light on these and other issues surrounding the future of library print book collections, but at this early stage, it is light that illuminates only dimly. It will be some time before GPLP's implications for libraries and library print book collections can be fully appreciated and evaluated. But the strong interest and lively debate generated by this initiative suggest that some preliminary analysis - premature though it may be - would be useful, if only to undertake a rough mapping of the terrain over which GPLP potentially will extend. At the least, some early perspective helps shape interesting questions for the future, when the boundaries of GPLP become settled, workflows for producing and managing the digitized materials become systematized, and usage patterns within the GPLP framework begin to emerge.
    This article offers some perspectives on GPLP in light of what is known about library print book collections in general, and those of the Google 5 in particular, from information in OCLC's WorldCat bibliographic database and holdings file. Questions addressed include:
    • Coverage: What proportion of the system-wide print book collection will GPLP potentially cover? What is the degree of holdings overlap across the print book collections of the five participating libraries?
    • Language: What is the distribution of languages associated with the print books held by the GPLP libraries? Which languages are predominant?
    • Copyright: What proportion of the GPLP libraries' print book holdings are out of copyright?
    • Works: How many distinct works are represented in the holdings of the GPLP libraries? How does a focus on works impact coverage and holdings overlap?
    • Convergence: What are the effects on coverage of using a different set of five libraries? What are the effects of adding the holdings of additional libraries to those of the GPLP libraries, and how do these effects vary by library type?
    These questions certainly do not exhaust the analytical possibilities presented by GPLP. More in-depth analysis might look at Google 5 coverage in particular subject areas; it also would be interesting to see how many books covered by the GPLP have already been digitized in other contexts. However, these questions are left to future studies. The purpose here is to explore a few basic questions raised by GPLP, and in doing so, provide an empirical context for the debate that is sure to continue for some time to come. A secondary objective is to lay some groundwork for a general set of questions that could be used to explore the implications of any mass digitization initiative. A suggested list of questions is provided in the conclusion of the article.
    Date
    26.12.2011 14:08:22
  7. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.03
    0.025926981 = product of:
      0.06913862 = sum of:
        0.014174575 = weight(_text_:libraries in 1202) [ClassicSimilarity], result of:
          0.014174575 = score(doc=1202,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.1088852 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
        0.02538763 = weight(_text_:case in 1202) [ClassicSimilarity], result of:
          0.02538763 = score(doc=1202,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.14572193 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
        0.02957641 = weight(_text_:studies in 1202) [ClassicSimilarity], result of:
          0.02957641 = score(doc=1202,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.18704411 = fieldWeight in 1202, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
      0.375 = coord(3/8)
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980's. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification: Library of Congress Classification (LCC) and Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together...You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience...those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
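    Note
    The title-keyword arithmetic in the passage above is easy to make concrete. A toy sketch (the title strings are hypothetical; only the 50/35/15 proportions come from the text):

      # Hypothetical titles mirroring the 100 books Beagle scanned under
      # LCC subclass HT 101-395 (Urban groups. The City. Urban sociology.).
      titles = (["The urban experience"] * 50        # "urban" or variants
                + ["Inside the city"] * 35           # "city" or variants
                + ["Town and community life"] * 15)  # related terms only

      def title_keyword_search(keyword):
          """Naive title-keyword search: substring match on the title."""
          return [t for t in titles if keyword in t.lower()]

      print(len(title_keyword_search("urban")))  # 50 -> half the subclass
      print(len(title_keyword_search("city")))   # 35 -> just over a third
      # No title contains both words, so even the union misses 15 titles;
      # browsing the whole HT 101-395 subclass would surface all 100.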
  8. Lynch, C.A.: ¬The Z39.50 information retrieval standard : part I: a strategic view of its past, present and future (1997) 0.02
    0.024706101 = product of:
      0.065882936 = sum of:
        0.024551084 = weight(_text_:libraries in 1262) [ClassicSimilarity], result of:
          0.024551084 = score(doc=1262,freq=6.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.1885947 = fieldWeight in 1262, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1262)
        0.02538763 = weight(_text_:case in 1262) [ClassicSimilarity], result of:
          0.02538763 = score(doc=1262,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.14572193 = fieldWeight in 1262, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1262)
        0.015944218 = product of:
          0.031888437 = sum of:
            0.031888437 = weight(_text_:area in 1262) [ClassicSimilarity], result of:
              0.031888437 = score(doc=1262,freq=2.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.16331664 = fieldWeight in 1262, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1262)
          0.5 = coord(1/2)
      0.375 = coord(3/8)
    
    Abstract
    The Z39.50 standard for information retrieval is important from a number of perspectives. While still not widely known within the computer networking community, it is a mature standard that represents the culmination of two decades of thinking and debate about how information retrieval functions can be modeled, standardized, and implemented in a distributed systems environment. And, importantly, it has been tested through substantial deployment experience. Z39.50 is one of the few examples we have to date of a protocol that actually goes beyond codifying mechanism and moves into the area of standardizing shared semantic knowledge. The extent to which this should be a goal of the protocol has been an ongoing source of controversy and tension within the developer community, and differing views on this issue can be seen both in the standard itself and the way that it is used in practice. Given the growing emphasis on issues such as "semantic interoperability" as part of the research agenda for digital libraries (see Clifford A. Lynch and Hector Garcia-Molina, Interoperability, Scaling, and the Digital Libraries Research Agenda, Report on the May 18-19, 1995 IITA Libraries Workshop, <http://www-diglib.stanford.edu/diglib/pub/reports/iita-dlw/main.html>), the insights gained by the Z39.50 community into the complex interactions among various definitions of semantics and interoperability are particularly relevant. The development process for the Z39.50 standard is also of interest in its own right. Its history, dating back to the 1970s, spans a period that saw the eclipse of formal standards-making agencies by groups such as the Internet Engineering Task Force (IETF) and informal standards development consortia. Moreover, in order to achieve meaningful implementation, Z39.50 had to move beyond its origins in the OSI debacle of the 1980s. Z39.50 has also been, to some extent, a victim of its own success, or at least its promise. Recent versions of the standard are highly extensible, and the consensus process of standards development has made it hospitable to an ever-growing set of new communities and requirements. As this process of extension has proceeded, it has become ever less clear what the appropriate scope and boundaries of the protocol should be, and what expectations one should have of practical interoperability among implementations of the standard. Z39.50 thus offers an excellent case study of the problems involved in managing the evolution of a standard over time. It may well offer useful lessons for the future of other standards such as HTTP and HTML, which seem to be facing some of the same issues.
  9. Lossau, N.: Search engine technology and digital libraries : libraries need to discover the academic internet (2004) 0.02
    0.02362226 = product of:
      0.09448904 = sum of:
        0.057285864 = weight(_text_:libraries in 1161) [ClassicSimilarity], result of:
          0.057285864 = score(doc=1161,freq=6.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.4400543 = fieldWeight in 1161, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1161)
        0.037203178 = product of:
          0.074406356 = sum of:
            0.074406356 = weight(_text_:area in 1161) [ClassicSimilarity], result of:
              0.074406356 = score(doc=1161,freq=2.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.38107216 = fieldWeight in 1161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1161)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    With the development of the World Wide Web, the "information search" has grown to be a significant business sector of a global, competitive and commercial market. Powerful players have entered this market, such as commercial internet search engines, information portals, multinational publishers and online content integrators. Will Google, Yahoo or Microsoft be the only portals to global knowledge in 2010? If libraries do not want to become marginalized in a key area of their traditional services, they need to acknowledge the challenges that come with the globalisation of scholarly information, and the existence and further growth of the academic internet.
  10. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 4640) [ClassicSimilarity], result of:
          0.05077526 = score(doc=4640,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 4640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
        0.04182736 = weight(_text_:studies in 4640) [ClassicSimilarity], result of:
          0.04182736 = score(doc=4640,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 4640, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=4640)
      0.25 = coord(2/8)
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical data bases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
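    Note
    As an illustration of the kind of conversion the paper describes (a sketch, not the authors' actual pipeline), a thesaurus record with a preferred term, a synonym, and a broader term maps naturally onto RDF triples. The sketch below uses Python's rdflib with SKOS-style properties and a made-up, MeSH-like record:

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, SKOS

      EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace
      g = Graph()
      g.bind("skos", SKOS)

      # Made-up thesaurus record, loosely MeSH-like.
      term = EX["heartDiseases"]
      g.add((term, RDF.type, SKOS.Concept))
      g.add((term, SKOS.prefLabel, Literal("Heart Diseases", lang="en")))
      g.add((term, SKOS.altLabel, Literal("Cardiac Diseases", lang="en")))
      g.add((term, SKOS.broader, EX["cardiovascularDiseases"]))

      print(g.serialize(format="turtle"))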
  11. Wongthontham, P.; Abu-Salih, B.: Ontology-based approach for semantic data extraction from social big data : state-of-the-art and research directions (2018) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 4097) [ClassicSimilarity], result of:
          0.05077526 = score(doc=4097,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 4097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4097)
        0.04182736 = weight(_text_:studies in 4097) [ClassicSimilarity], result of:
          0.04182736 = score(doc=4097,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 4097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=4097)
      0.25 = coord(2/8)
    
    Abstract
    The challenge of managing and extracting useful knowledge from social media data sources has attracted much attention from academia and industry. To address this challenge, this paper focuses on the semantic analysis of textual data. We propose an ontology-based approach to extract the semantics of textual data and define the domain of the data. In other words, we semantically analyse the social data at two levels, i.e. the entity level and the domain level. We have chosen Twitter as the social channel for a proof of concept. Domain knowledge is captured in ontologies, which are then used to enrich the semantics of tweets with specific semantic conceptual representations of the entities that appear in them. Case studies are used to demonstrate this approach. We experiment with and evaluate our proposed approach on a public dataset collected from Twitter in the politics domain. The ontology-based approach leverages entity extraction and concept mappings in terms of the quantity and accuracy of concept identification.
  12. Combs, A.; Krippner, S.: Collective consciousness and the social brain (2008) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 5622) [ClassicSimilarity], result of:
          0.05077526 = score(doc=5622,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 5622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=5622)
        0.04182736 = weight(_text_:studies in 5622) [ClassicSimilarity], result of:
          0.04182736 = score(doc=5622,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 5622, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=5622)
      0.25 = coord(2/8)
    
    Abstract
    This paper discusses supportive neurological and social evidence for 'collective consciousness', here understood as a shared sense of being together with others in a single or unified experience. Mirror neurons in the premotor and posterior parietal cortices respond to the intentions as well as the actions of other individuals. There are also mirror neurons in the anterior insula and anterior cingulate cortices which have been implicated in empathy. Many authors have considered the likely role of such mirror systems in the development of uniquely human aspects of sociality including language. Though not without criticism, Menant has made the case that mirror-neuron assisted exchanges aided the original advent of self-consciousness and intersubjectivity. Combining these ideas with social mirror theory it is not difficult to imagine the creation of similar dynamical patterns in the emotional and even cognitive neuronal activity of individuals in human groups, creating a feeling in which the participating members experience a unified sense of consciousness. Such instances pose a kind of 'binding problem' in which participating individuals exhibit a degree of 'entanglement'.
    Source
    Journal of consciousness studies. 15(2008) no.10-11, S.264-276
  13. Buckland, M.; Lancaster, L.: Combining place, time, and topic : the Electronic Cultural Atlas Initiative (2004) 0.02
    0.023147853 = product of:
      0.09259141 = sum of:
        0.055769812 = weight(_text_:studies in 1194) [ClassicSimilarity], result of:
          0.055769812 = score(doc=1194,freq=8.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.35269377 = fieldWeight in 1194, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=1194)
        0.0368216 = product of:
          0.0736432 = sum of:
            0.0736432 = weight(_text_:area in 1194) [ClassicSimilarity], result of:
              0.0736432 = score(doc=1194,freq=6.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.37716365 = fieldWeight in 1194, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1194)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The Electronic Cultural Atlas Initiative was formed to encourage scholarly communication and the sharing of data among researchers who emphasize the relationships between place, time, and topic in the study of culture and history. In an effort to develop better tools and practices, The Electronic Cultural Atlas Initiative has sponsored the collaborative development of software for downloading and editing geo-temporal data to create dynamic maps, a clearinghouse of shared datasets accessible through a map-based interface, projects on format and content standards for gazetteers and time period directories, studies to improve geo-temporal aspects in online catalogs, good practice guidelines for preparing e-publications with dynamic geo-temporal displays, and numerous international conferences. The Electronic Cultural Atlas Initiative (ECAI) grew out of discussions among an international group of scholars interested in religious history and area studies. It was established as a unit under the Dean of International and Area Studies at the University of California, Berkeley in 1997. ECAI's mission is to promote an international collaborative effort to transform humanities scholarship through use of the digital environment to share data and by placing greater emphasis on the notions of place and time. Professor Lewis Lancaster is the Director. Professor Michael Buckland, with a library and information studies background, joined the effort as Co-Director in 2000. Assistance from the Lilly Foundation, the California Digital Library (University of California), and other sources has enabled ECAI to nurture a community; to develop a catalog ("clearinghouse") of Internet-accessible georeferenced resources; to support the development of software for obtaining, editing, manipulating, and dynamically visualizing geo-temporally encoded data; and to undertake research and development projects as needs and resources determine. Several hundred scholars worldwide, from a wide range of disciplines, are informally affiliated with ECAI, all interested in shared use of historical and cultural data. The Academia Sinica (Taiwan), The British Library, and the Arts and Humanities Data Service (UK) are among the well-known affiliates. However, ECAI mainly comprises individual scholars and small teams working on their own small projects on a very wide range of cultural, social, and historical topics. Numerous specialist committees have been fostering standardization and collaboration by area and by themes such as trade-routes, cities, religion, and sacred sites.
  14. Buttò, S.: RDA: analyses, considerations and activities by the Central Institute for the Union Catalogue of Italian Libraries and Bibliographic Information (ICCU) (2016) 0.02
    0.02111029 = product of:
      0.08444116 = sum of:
        0.05786746 = weight(_text_:libraries in 2958) [ClassicSimilarity], result of:
          0.05786746 = score(doc=2958,freq=12.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.44452196 = fieldWeight in 2958, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2958)
        0.0265737 = product of:
          0.0531474 = sum of:
            0.0531474 = weight(_text_:area in 2958) [ClassicSimilarity], result of:
              0.0531474 = score(doc=2958,freq=2.0), product of:
                0.1952553 = queryWeight, product of:
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.03962768 = queryNorm
                0.27219442 = fieldWeight in 2958, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.927245 = idf(docFreq=870, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2958)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The report analyzes the applicability of Resource Description and Access (RDA) within Italian public libraries, archives and museums, in order to contribute to the discussion at the international level. The Central Institute for the Union Catalogue of Italian Libraries (ICCU) manages the online catalogue of the Italian libraries and the network of bibliographic services, and has the institutional task of coordinating cataloging and documentation activities for the Italian libraries. On March 31st, 2014, the Institute signed an agreement with the American Library Association (ALA Publishing) for the Italian translation rights of RDA, now available and published in the RDA Toolkit. The Italian translation was carried out by the Technical Working Group, made up of the main national and academic libraries, cultural institutions and bibliographic agencies. The Group started by studying the new code in its textual detail, to better understand its principles, purposes and applicability, and finally its sustainability within the national context in relation to the area of bibliographic control. At the international level, starting from the publication of the Italian version of RDA and through the research carried out by ICCU and the national working groups, the aim is a more direct comparison with the experiences of other European countries, also within the EURIG international context, for an exchange of experiences aimed at strengthening the informational content of cataloging data, with respect to the history, cultural traditions and national identities of the different countries.
  15. Schaefer, A.; Jordan, M.; Klas, C.-P.; Fuhr, N.: Active support for query formulation in virtual digital libraries : a case study with DAFFODIL (2005) 0.02
    0.020807797 = product of:
      0.08323119 = sum of:
        0.040918473 = weight(_text_:libraries in 4296) [ClassicSimilarity], result of:
          0.040918473 = score(doc=4296,freq=6.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.3143245 = fieldWeight in 4296, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4296)
        0.042312715 = weight(_text_:case in 4296) [ClassicSimilarity], result of:
          0.042312715 = score(doc=4296,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 4296, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4296)
      0.25 = coord(2/8)
    
    Abstract
    Daffodil is a front-end to federated, heterogeneous digital libraries aimed at strategic support of users during the information-seeking process. This is done by offering a variety of functions for searching, exploring and managing digital library objects. However, the distributed search increases response time, and the conceptual model of the underlying search processes is inherently weaker. This makes query formulation harder, and the resulting waiting times can be frustrating. In this paper, we investigate the concept of proactive support during the user's query formulation. For improving user efficiency and satisfaction, we implemented annotations, proactive support and error markers on the query form itself. These functions decrease the probability of syntactical or semantical errors in queries. Furthermore, the user is able to make better tactical decisions and feels more confident that the system handles the query properly. Evaluations with 30 subjects showed that user satisfaction is improved, whereas no conclusive results were obtained for efficiency.
    Source
    Research and advanced technology for digital libraries : 9th European conference, ECDL 2005, Vienna, Austria, September 18-23, 2005 ; proceedings / Andreas Rauber ... (eds.)
  16. Farney, T.: Using Google Tag Manager to share code : designing shareable tags (2019) 0.02
    0.020807797 = product of:
      0.08323119 = sum of:
        0.040918473 = weight(_text_:libraries in 5443) [ClassicSimilarity], result of:
          0.040918473 = score(doc=5443,freq=6.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.3143245 = fieldWeight in 5443, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5443)
        0.042312715 = weight(_text_:case in 5443) [ClassicSimilarity], result of:
          0.042312715 = score(doc=5443,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 5443, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5443)
      0.25 = coord(2/8)
    
    Abstract
    Sharing code between libraries is not a new phenomenon and neither is Google Tag Manager (GTM). GTM launched in 2012 as a JavaScript and HTML manager with the intent of easing the implementation of different analytics trackers and marketing scripts on a website. However, it can be used to load other code using its tag system onto a website. It's a simple process to export and import tags facilitating the code sharing process without requiring a high degree of coding experience. The entire process involves creating the script tag in GTM, exporting the GTM content into a sharable export file for someone else to import into their library's GTM container, and finally publishing that imported file to push the code to the website it was designed for. This case study provides an example of designing and sharing a GTM container loaded with advanced Google Analytics configurations such as event tracking and custom dimensions for other libraries using the Summon discovery service. It also discusses processes for designing GTM tags for export, best practices on importing and testing GTM content created by other libraries and concludes with evaluating the pros and cons of encouraging GTM use.
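    Note
    A rough sketch of the export/import flow the abstract describes, written as Python for concreteness. The JSON field names follow our reading of GTM's container export format and are assumptions, not a verified schema:

      import json

      shareable_tag = {
          "exportFormatVersion": 2,  # assumed GTM export field
          "containerVersion": {
              "tag": [{
                  "name": "Summon - GA event tracking",  # hypothetical tag
                  "type": "html",  # a Custom HTML tag holding the script
                  "parameter": [{
                      "type": "TEMPLATE",
                      "key": "html",
                      "value": "<script>/* event-tracking snippet */</script>",
                  }],
              }],
          },
      }

      # Export: write the file and share it with another library...
      with open("summon-tags-export.json", "w") as fh:
          json.dump(shareable_tag, fh, indent=2)
      # ...the recipient imports it into their own GTM container through the
      # GTM admin interface, tests it, and publishes to push the code live.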
  17. Mäkelä, E.; Hyvönen, E.; Ruotsalo, T.: How to deal with massively heterogeneous cultural heritage data : lessons learned in CultureSampo (2012) 0.02
    0.019781101 = product of:
      0.079124406 = sum of:
        0.02834915 = weight(_text_:libraries in 3263) [ClassicSimilarity], result of:
          0.02834915 = score(doc=3263,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 3263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=3263)
        0.05077526 = weight(_text_:case in 3263) [ClassicSimilarity], result of:
          0.05077526 = score(doc=3263,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 3263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=3263)
      0.25 = coord(2/8)
    
    Abstract
    This paper presents the CultureSampo system for publishing heterogeneous linked data as a service. It discusses the problems of converting legacy data into linked data, as well as the challenge of making massively heterogeneous yet interlinked cultural heritage content interoperable on a semantic level. Novel user interface concepts for utilizing the content are also presented. In the approach described, the data is published not only for human use, but also as intelligent services for other computer systems, which can then provide interfaces of their own for the linked data. As a concrete use case of CultureSampo as a service, the BookSampo system for publishing Finnish fiction literature on the semantic web is presented.
    Content
    Contribution to a special issue: Semantic Web and Reasoning for Cultural Heritage and Digital Libraries: http://www.semantic-web-journal.net/content/how-deal-massively-heterogeneous-cultural-heritage-data-%E2%80%93-lessons-learned-culturesampo http://www.semantic-web-journal.net/sites/default/files/swj160_0.pdf.
  18. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.02
    0.019292213 = product of:
      0.07716885 = sum of:
        0.042312715 = weight(_text_:case in 3869) [ClassicSimilarity], result of:
          0.042312715 = score(doc=3869,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 3869, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3869)
        0.034856133 = weight(_text_:studies in 3869) [ClassicSimilarity], result of:
          0.034856133 = score(doc=3869,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 3869, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3869)
      0.25 = coord(2/8)
    
    Abstract
    The changes in KO systems induced by sociocultural influences may include changes in both classificatory principles and cultural features. The proposed study will examine the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. The study therefore aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a stage preceding the comparison, a descriptive analysis was conducted of the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing their knowledge structures in terms of the quantity of class numbers that represent concepts, and of their relationships, within each of the individual main classes. The most effective analytic strategy for showing the patterns of the comparison was to visualize the similarities and differences between the two systems. Increasing or decreasing tendencies in each class across successive editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC empirically discloses the differences in their knowledge structures. This phase of quantitative analysis and visualization generates empirical evidence leading to interpretation.
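    A sketch of the quantitative step described above: given flat lists of class numbers from each scheme, count how many fall under each main class (the leading digit of the notation) and compare. The toy data is invented and only illustrates the shape of the computation; it is shown in JavaScript for consistency with the other examples on this page.

      // Count class numbers per main class (leading digit of the notation).
      function countByMainClass(classNumbers) {
        return classNumbers.reduce(function (acc, n) {
          var main = String(n).charAt(0);
          acc[main] = (acc[main] || 0) + 1;
          return acc;
        }, {});
      }

      // Invented toy notations standing in for KDC and DDC schedules.
      var kdc = ['325.1', '327.2', '811.1', '813.6', '895.7'];
      var ddc = ['325.1', '813.52', '895.7'];
      var kdcByMain = countByMainClass(kdc);
      var ddcByMain = countByMainClass(ddc);
      ['0','1','2','3','4','5','6','7','8','9'].forEach(function (m) {
        console.log('main class ' + m + ': KDC=' + (kdcByMain[m] || 0) +
                    ', DDC=' + (ddcByMain[m] || 0));
      });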
  19. Lavoie, B.; Henry, G.; Dempsey, L.: A service framework for libraries (2006) 0.02
    0.01862245 = product of:
      0.0744898 = sum of:
        0.04910217 = weight(_text_:libraries in 1175) [ClassicSimilarity], result of:
          0.04910217 = score(doc=1175,freq=24.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.3771894 = fieldWeight in 1175, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1175)
        0.02538763 = weight(_text_:case in 1175) [ClassicSimilarity], result of:
          0.02538763 = score(doc=1175,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.14572193 = fieldWeight in 1175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1175)
      0.25 = coord(2/8)
    
    Abstract
    Much progress has been made in aligning library services with changing (and increasingly digital and networked) research and learning environments. At times, however, this progress has been uneven, fragmented, and reactive. As libraries continue to engage with an ever-shifting information landscape, it is apparent that their efforts would be facilitated by a shared view of how library services should be organized and surfaced in these new settings and contexts. Recent discussions in a variety of areas underscore this point:
    * Institutional repositories: what is the role of the library in collecting, managing, and preserving institutional scholarly output, and what services should be offered to faculty and students in this regard?
    * Metasearch: how can the fragmented pieces of library collections be brought together to simplify and improve the search experience of the user?
    * E-learning and course management systems: how can library services be lifted out of traditional library environments and inserted into the emerging workflows of "e-scholars" and "e-learners"?
    * Exposing library collections to search engines: how can libraries surface their collections in the general Web search environment, and how can users be provisioned with better tools to navigate an increasingly complex information landscape?
    In each case, there is as yet no shared picture of the library to bring to bear on these questions; there is little consensus on the specific library services that should be expected in these environments, how they should be organized, and how they should be presented.
    Libraries have not been idle in the face of the changes re-shaping their environments: in fact, much work is underway and major advances have already been achieved. But these efforts lack a unifying framework, a means for libraries, as a community, to gather the strands of individual projects and weave them into a cohesive whole. A framework of this kind would help in articulating collective expectations, assessing progress, and identifying critical gaps. As the information landscape continually shifts and changes, a framework would promote the design and implementation of flexible, interoperable library systems that can respond more quickly to the needs of libraries in serving their constituents. It would provide a port of entry for organizations outside the library domain, and help them understand the critical points of contact between their services and those of libraries. Perhaps most importantly, a framework would assist libraries in strategic planning. It would provide a tool to help them establish priorities, guide investment, and anticipate future needs in uncertain environments.
    It was in this context, and in recognition of efforts already underway to align library services with emerging information environments, that the Digital Library Federation (DLF) in 2005 sponsored the formation of the Service Framework Group (SFG) [1] to consider a more systematic, community-based approach to aligning the functions of libraries with increasingly automated information environments. The SFG seeks to understand and model the research library in today's environment by developing a framework within which the services offered by libraries, represented both as business logic and computer processes, can be understood in relation to other parts of the institutional and external information landscape. This framework will help research institutions plan wisely for providing the services needed to meet the current and emerging information needs of their constituents.
    A service framework is a tool for documenting a shared view of library services in changing environments, communicating it among libraries and others, and applying it to best advantage in meeting library goals. It is a means of focusing attention and organizing discussion. It is not, however, a substitute for innovation and creativity. It does not supply the answers, but facilitates the process by which answers are sought, found, and applied. This paper discusses the SFG's vision of a service framework for libraries, its approach to developing the framework, and the group's work agenda going forward.
  20. Lamb, I.; Larson, C.: Shining a light on scientific data : building a data catalog to foster data sharing and reuse (2016) 0.02
    0.017544128 = product of:
      0.07017651 = sum of:
        0.02834915 = weight(_text_:libraries in 3195) [ClassicSimilarity], result of:
          0.02834915 = score(doc=3195,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 3195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=3195)
        0.04182736 = weight(_text_:studies in 3195) [ClassicSimilarity], result of:
          0.04182736 = score(doc=3195,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 3195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=3195)
      0.25 = coord(2/8)
    
    Abstract
    The scientific community's growing eagerness to make research data available to the public provides libraries, with our expertise in metadata and discovery, an interesting new opportunity. This paper details the in-house creation of a "data catalog" which describes datasets ranging from population-level studies like the US Census to small, specialized datasets created by researchers at our own institution. Based on Symfony2 and Solr, the data catalog provides a powerful search interface to help researchers locate the data that can help them, and an administrative interface so librarians can add, edit, and manage metadata elements at will. This paper outlines the successes, failures, and total redos that culminated in the current manifestation of our data catalog.
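    The catalog's application layer is Symfony2 (PHP), but the search itself bottoms out in a plain Solr HTTP query. The sketch below shows, in JavaScript for consistency with the other examples on this page, roughly what such a request looks like. The core name, filter field, and field list are assumptions; q, fq, fl, and wt are standard Solr parameters, and response.docs is the standard Solr response shape.

      // Hypothetical Solr query against a data-catalog core.
      var params = new URLSearchParams({
        q: 'census population',               // user's search terms
        fq: 'subject:"public health"',        // assumed facet filter field
        fl: 'title,description,access_url',   // assumed stored fields
        wt: 'json'                            // standard JSON response writer
      });
      fetch('/solr/datacatalog/select?' + params.toString())
        .then(function (res) { return res.json(); })
        .then(function (data) {
          // Standard Solr response shape: response.docs is the result list.
          data.response.docs.forEach(function (doc) {
            console.log(doc.title);
          });
        });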

Years

Languages

  • e 194
  • d 48
  • a 1
  • es 1
  • i 1
  • sp 1