Search (226 results, page 1 of 12)

  • type_ss:"el"
  • year_i:[2000 TO 2010}
  1. OWL Web Ontology Language Guide (2004) 0.06
    0.058129102 = product of:
      0.14532275 = sum of:
        0.13287885 = weight(_text_:readable in 4687) [ClassicSimilarity], result of:
          0.13287885 = score(doc=4687,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.47999436 = fieldWeight in 4687, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4687)
        0.012443894 = product of:
          0.024887787 = sum of:
            0.024887787 = weight(_text_:data in 4687) [ClassicSimilarity], result of:
              0.024887787 = score(doc=4687,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.17468026 = fieldWeight in 4687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4687)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
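    A note on the scores: the indented breakdown above (and under each result below) is Lucene's "explain" output for ClassicSimilarity, i.e. TF-IDF scoring with coordination factors. As a cross-check, the following sketch recomputes the top result's 0.06 from the values printed in the explanation; the arithmetic is the standard ClassicSimilarity formula (queryWeight = idf * queryNorm, fieldWeight = sqrt(freq) * idf * fieldNorm).

      import math

      # Values taken from the explanation for doc 4687 above
      idf_readable, idf_data = 6.1439276, 3.1620505
      query_norm = 0.04505818
      field_norm = 0.0390625

      def clause(idf, freq):
          # ClassicSimilarity: clause score = queryWeight * fieldWeight
          query_weight = idf * query_norm                     # 0.2768342 for "readable"
          field_weight = math.sqrt(freq) * idf * field_norm   # tf(freq) = sqrt(freq)
          return query_weight * field_weight

      readable = clause(idf_readable, 4.0)         # ~0.13287885
      data = clause(idf_data, 2.0) * 0.5           # coord(1/2) over the nested sum
      total = (readable + data) * 0.4              # coord(2/5) over the outer sum
      print(round(total, 9))                       # ~0.058129102 -> displayed as 0.06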
    
    Abstract
    The World Wide Web as it is currently constituted resembles a poorly mapped geography. Our insight into the documents and capabilities available is based on keyword searches, abetted by clever use of document connectivity and usage patterns. The sheer mass of this data is unmanageable without powerful tool support. In order to map this terrain more precisely, computational agents require machine-readable descriptions of the content and capabilities of Web accessible resources. These descriptions must be in addition to the human-readable versions of that information. The OWL Web Ontology Language is intended to provide a language that can be used to describe the classes and relations between them that are inherent in Web documents and applications. This document demonstrates the use of the OWL language to - formalize a domain by defining classes and properties of those classes, - define individuals and assert properties about them, and - reason about these classes and individuals to the degree permitted by the formal semantics of the OWL language. The sections are organized to present an incremental definition of a set of classes, properties and individuals, beginning with the fundamentals and proceeding to more complex language components.
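    The three activities the abstract lists (formalize a domain with classes and properties, define individuals and assert properties about them, reason over them) can be sketched for the first two steps with the Python rdflib library. This is only an illustrative sketch, not taken from the Guide itself; the Wine/Region names and URIs are invented for the example.

      from rdflib import Graph, Namespace
      from rdflib.namespace import OWL, RDF, RDFS

      EX = Namespace("http://example.org/ontology#")
      g = Graph()
      g.bind("ex", EX)

      # Formalize a domain: two classes and an object property with domain/range
      g.add((EX.Wine, RDF.type, OWL.Class))
      g.add((EX.Region, RDF.type, OWL.Class))
      g.add((EX.locatedIn, RDF.type, OWL.ObjectProperty))
      g.add((EX.locatedIn, RDFS.domain, EX.Wine))
      g.add((EX.locatedIn, RDFS.range, EX.Region))

      # Define an individual and assert a property about it
      g.add((EX.ChateauExample, RDF.type, EX.Wine))
      g.add((EX.ChateauExample, EX.locatedIn, EX.Bordeaux))

      print(g.serialize(format="turtle"))

    Reasoning "to the degree permitted by the formal semantics" would require an OWL reasoner on top of such a graph, which this sketch does not attempt.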
  2. O'Neill, E.T.: The FRBRization of Humphry Clinker : a case study in the application of IFLA's Functional Requirements for Bibliographic Records (FRBR) (2002) 0.06
    0.05719 = product of:
      0.142975 = sum of:
        0.12804233 = weight(_text_:bibliographic in 2433) [ClassicSimilarity], result of:
          0.12804233 = score(doc=2433,freq=16.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.7299458 = fieldWeight in 2433, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=2433)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 2433) [ClassicSimilarity], result of:
              0.029865343 = score(doc=2433,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 2433, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2433)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The goal of OCLC's FRBR projects is to examine issues associated with the conversion of a set of bibliographic records to conform to FRBR requirements (a process referred to as "FRBRization"). The goals of this FRBR project were to: - examine issues associated with creating an entity-relationship model for (i.e., "FRBRizing") a non-trivial work - better understand the relationship between the bibliographic records and the bibliographic objects they represent - determine if the information available in the bibliographic record is sufficient to reliably identify the FRBR entities - develop a data set that could be used to evaluate FRBRization algorithms. Using an exemplary work as a case study, lead scientist Ed O'Neill sought to: - better understand the relationship between bibliographic records and the bibliographic objects they represent - determine if the information available in the bibliographic records is sufficient to reliably identify FRBR entities.
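    For readers unfamiliar with the model the abstract refers to, FRBR's Group 1 entities form a work - expression - manifestation - item hierarchy, and "FRBRization" means fitting existing bibliographic records into it. A schematic sketch of that hierarchy is shown below; it is only a data-structure illustration, not O'Neill's algorithm or data set, and the example values are invented.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Item:                 # a single physical or digital copy
          barcode: str

      @dataclass
      class Manifestation:        # a publication; what a bibliographic record usually describes
          publisher: str
          year: int
          items: List[Item] = field(default_factory=list)

      @dataclass
      class Expression:           # a particular text, edition or translation of the work
          language: str
          manifestations: List[Manifestation] = field(default_factory=list)

      @dataclass
      class Work:                 # the abstract creation, e.g. Smollett's Humphry Clinker
          title: str
          expressions: List[Expression] = field(default_factory=list)

      work = Work("The Expedition of Humphry Clinker",
                  [Expression("eng", [Manifestation("Example Press", 2000, [Item("X0001")])])])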
  3. Vizine-Goetz, D.; Hickey, C.; Houghton, A.; Thompson, R.: Vocabulary mapping for terminology services (2004) 0.05
    0.051073648 = product of:
      0.12768412 = sum of:
        0.11275144 = weight(_text_:readable in 918) [ClassicSimilarity], result of:
          0.11275144 = score(doc=918,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.4072887 = fieldWeight in 918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.046875 = fieldNorm(doc=918)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 918) [ClassicSimilarity], result of:
              0.029865343 = score(doc=918,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=918)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The paper describes a project to add value to controlled vocabularies by making inter-vocabulary associations. A methodology for mapping terms from one vocabulary to another is presented in the form of a case study applying the approach to the Educational Resources Information Center (ERIC) Thesaurus and the Library of Congress Subject Headings (LCSH). Our approach to mapping involves encoding vocabularies according to Machine-Readable Cataloging (MARC) standards, machine matching of vocabulary terms, and categorizing candidate mappings by likelihood of valid mapping. Mapping data is then stored as machine links. Vocabularies with associations to other schemes will be a key component of Web-based terminology services. The paper briefly describes how the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is used to provide access to a vocabulary with mappings.
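    The matching step described above (machine matching of vocabulary terms, then categorizing candidate mappings by likelihood of a valid mapping) can be caricatured with plain string similarity. The sketch below is a toy under that assumption, using invented example terms; it does not reproduce the project's actual matching or categorization rules.

      from difflib import SequenceMatcher

      eric_terms = ["Educational Technology", "School Libraries", "Reading Instruction"]
      lcsh_headings = ["Educational technology", "School libraries", "Reading"]

      def candidate_mappings(term, headings, threshold=0.6):
          # Rank candidate target headings by a crude similarity ratio
          scored = [(h, SequenceMatcher(None, term.lower(), h.lower()).ratio())
                    for h in headings]
          return sorted([s for s in scored if s[1] >= threshold],
                        key=lambda s: s[1], reverse=True)

      for term in eric_terms:
          print(term, "->", candidate_mappings(term, lcsh_headings))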
  4. Dini, L.: CACAO : multilingual access to bibliographic records (2007) 0.05
    0.05086726 = product of:
      0.12716815 = sum of:
        0.09053959 = weight(_text_:bibliographic in 126) [ClassicSimilarity], result of:
          0.09053959 = score(doc=126,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.5161496 = fieldWeight in 126, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=126)
        0.036628567 = product of:
          0.07325713 = sum of:
            0.07325713 = weight(_text_:22 in 126) [ClassicSimilarity], result of:
              0.07325713 = score(doc=126,freq=2.0), product of:
                0.15778607 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04505818 = queryNorm
                0.46428138 = fieldWeight in 126, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=126)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    Talk given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  5. Nicholson, D.: Help us make HILT's terminology services useful in your information service (2008) 0.05
    0.04871397 = product of:
      0.121784925 = sum of:
        0.09395953 = weight(_text_:readable in 3654) [ClassicSimilarity], result of:
          0.09395953 = score(doc=3654,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 3654, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3654)
        0.027825395 = product of:
          0.05565079 = sum of:
            0.05565079 = weight(_text_:data in 3654) [ClassicSimilarity], result of:
              0.05565079 = score(doc=3654,freq=10.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.39059696 = fieldWeight in 3654, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3654)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The JISC-funded HILT project is looking to make contact with staff in information services or projects interested in helping it test and refine its developing terminology services. The project is currently working to create pilot web services that will deliver machine-readable terminology and cross-terminology mappings data likely to be useful to information services wishing to extend or enhance the efficacy of their subject search or browse services. Based on SRW/U, SOAP, and SKOS, the HILT facilities, when fully operational, will permit such services to improve their own subject search and browse mechanisms by using HILT data in a fashion transparent to their users. On request, HILT will serve up machine-processable data on individual subject schemes (broader terms, narrower terms, hierarchy information, preferred and non-preferred terms, and so on) and interoperability data (usually intellectual or automated mappings between schemes, but the architecture allows for the use of other methods) - data that can be used to enhance user services. The project is also developing an associated toolkit that will help service technical staff to embed HILT-related functionality into their services. The primary aim is to serve JISC funded information services or services at JISC institutions, but information services outside the JISC domain may also find the proposed services useful and wish to participate in the test and refine process.
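    Since the abstract names SKOS as one of HILT's building blocks, the kind of data it would serve (preferred terms, broader and narrower terms) can be read with the Python rdflib library as sketched below. The local file name and the concept URI are assumptions for illustration only; this is not the HILT/SRW/U interface itself.

      from rdflib import Graph, URIRef
      from rdflib.namespace import SKOS

      g = Graph()
      g.parse("scheme.ttl", format="turtle")   # hypothetical SKOS dump of one subject scheme

      concept = URIRef("http://example.org/scheme/Cataloguing")   # hypothetical concept URI
      print("prefLabel:", [str(o) for o in g.objects(concept, SKOS.prefLabel)])
      print("broader:  ", [str(o) for o in g.objects(concept, SKOS.broader)])
      print("narrower: ", [str(o) for o in g.objects(concept, SKOS.narrower)])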
  6. Anderson, C.: The end of theory : the data deluge makes the scientific method obsolete (2008) 0.04
    0.044623144 = product of:
      0.111557856 = sum of:
        0.09395953 = weight(_text_:readable in 2819) [ClassicSimilarity], result of:
          0.09395953 = score(doc=2819,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.33940727 = fieldWeight in 2819, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2819)
        0.017598324 = product of:
          0.035196647 = sum of:
            0.035196647 = weight(_text_:data in 2819) [ClassicSimilarity], result of:
              0.035196647 = score(doc=2819,freq=4.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24703519 = fieldWeight in 2819, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2819)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    "All models are wrong, but some are useful." So proclaimed statistician George Box 30 years ago, and he was right. But what choice did we have? Only models, from cosmological equations to theories of human behavior, seemed to be able to consistently, if imperfectly, explain the world around us. Until now. Today companies like Google, which have grown up in an era of massively abundant data, don't have to settle for wrong models. Indeed, they don't have to settle for models at all. Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age. The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to - well, at petabytes we ran out of organizational analogies.
  7. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.04
    0.038102854 = product of:
      0.09525713 = sum of:
        0.042680774 = weight(_text_:bibliographic in 1447) [ClassicSimilarity], result of:
          0.042680774 = score(doc=1447,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.24331525 = fieldWeight in 1447, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=1447)
        0.052576363 = sum of:
          0.028157318 = weight(_text_:data in 1447) [ClassicSimilarity], result of:
            0.028157318 = score(doc=1447,freq=4.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.19762816 = fieldWeight in 1447, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.03125 = fieldNorm(doc=1447)
          0.024419045 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
            0.024419045 = score(doc=1447,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.15476047 = fieldWeight in 1447, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1447)
      0.4 = coord(2/5)
    
    Abstract
    The BAER Web Resources Group was charged in October 1999 with defining and describing the parameters of electronic resources that do not clearly belong to the categories being defined by the BAER Digital Group or the BAER Electronic Journals Group. After some difficulty identifying precisely which resources fell under the Group's charge, we finally named the following types of resources for our consideration: web sites, electronic texts, indexes, databases and abstracts, online reference resources, and networked and non-networked CD-ROMs. Electronic resources are a vast and growing collection that touches nearly every department within the Library. It is unrealistic to think one department can effectively administer all aspects of the collection. The Group then began to focus on the concern of bibliographic access to these varied resources, and to define parameters for handling or processing them within the Library. Some key elements became evident as the work progressed:
    - Selection process of resources to be acquired for the collection
    - Duplication of effort
    - Use of CORC
    - Resource Finder design
    - Maintenance of Resource Finder
    - CD-ROMs not networked
    - Communications
    - Voyager search limitations
    An unexpected collaboration with the Web Development Committee on the Resource Finder helped to steer the Group to more detailed descriptions of bibliographic access. This collaboration included development of data elements for the Resource Finder database, and some discussions on Library staff processing of the resources. The Web Resources Group invited expert testimony to help the Group broaden its view to envision public use of the resources and discuss concerns related to technical services processing. The first testimony came from members of the Resource Finder Committee. Some background information on the Web Development Resource Finder Committee was shared. The second testimony was from librarians who select electronic texts. Three main themes were addressed: accessing CD-ROMs; the issue of including non-networked CD-ROMs in the Resource Finder; and some special concerns about electronic texts. The third testimony came from librarians who select indexes and abstracts and also provide Reference services. Appendices to this report include minutes of the meetings with the experts (Appendix A), a list of proposed data elements to be used in the Resource Finder (Appendix B), and recommendations made to the Resource Finder Committee (Appendix C). Below are summaries of the key elements.
    Date
    21. 4.2002 10:22:31
  8. Hjoerland, B.: Arguments for 'the bibliographical paradigm' : some thoughts inspired by the new English edition of the UDC (2007) 0.04
    0.037336905 = product of:
      0.09334226 = sum of:
        0.07840959 = weight(_text_:bibliographic in 552) [ClassicSimilarity], result of:
          0.07840959 = score(doc=552,freq=6.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.44699866 = fieldWeight in 552, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.046875 = fieldNorm(doc=552)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 552) [ClassicSimilarity], result of:
              0.029865343 = score(doc=552,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 552, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=552)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The term 'the bibliographic paradigm' is used in the literature of library and information science, but it is used only rarely and is almost always described negatively. This paper reconsiders this concept. Method. The method is mainly 'analytical'. Empirical data concerning the current state of the UDC-classification system are also presented in order to illuminate the connection between theory and practice. Analysis. The bibliographic paradigm is understood as a perspective in library and information science focusing on documents and information resources, their description, organization, mediation and use. This perspective is examined as one among other metatheories of library and information science and its philosophical assumptions and implications are outlined. Results. The neglect and misunderstanding of 'the bibliographic paradigm' as well as the quality of the new UDC-classification indicate that both the metatheoretical discourses on library and information science and its concrete practice seem to be in a state of crisis.
  9. Report on the future of bibliographic control : draft for public comment (2007) 0.04
    0.037336905 = product of:
      0.09334226 = sum of:
        0.07840959 = weight(_text_:bibliographic in 1271) [ClassicSimilarity], result of:
          0.07840959 = score(doc=1271,freq=24.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.44699866 = fieldWeight in 1271, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1271)
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 1271) [ClassicSimilarity], result of:
              0.029865343 = score(doc=1271,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 1271, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1271)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The future of bibliographic control will be collaborative, decentralized, international in scope, and Web-based. Its realization will occur in cooperation with the private sector, and with the active collaboration of library users. Data will be gathered from multiple sources; change will happen quickly; and bibliographic control will be dynamic, not static. The underlying technology that makes this future possible and necessary-the World Wide Web-is now almost two decades old. Libraries must continue the transition to this future without delay in order to retain their relevance as information providers. The Working Group on the Future of Bibliographic Control encourages the library community to take a thoughtful and coordinated approach to effecting significant changes in bibliographic control. Such an approach will call for leadership that is neither unitary nor centralized. Nor will the responsibility to provide such leadership fall solely to the Library of Congress (LC). That said, the Working Group recognizes that LC plays a unique role in the library community of the United States, and the directions that LC takes have great impact on all libraries. We also recognize that there are many other institutions and organizations that have the expertise and the capacity to play significant roles in the bibliographic future. Wherever possible, those institutions must step forward and take responsibility for assisting with navigating the transition and for playing appropriate ongoing roles after that transition is complete. To achieve the goals set out in this document, we must look beyond individual libraries to a system wide deployment of resources. We must realize efficiencies in order to be able to reallocate resources from certain lower-value components of the bibliographic control ecosystem into other higher-value components of that same ecosystem. The recommendations in this report are directed at a number of parties, indicated either by their common initialism (e.g., "LC" for Library of Congress, "PCC" for Program for Cooperative Cataloging) or by their general category (e.g., "Publishers," "National Libraries"). When the recommendation is addressed to "All," it is intended for the library community as a whole and its close collaborators.
    The Library of Congress must begin by prioritizing the recommendations that are directed in whole or in part at LC. Some define tasks that can be achieved immediately and with moderate effort; others will require analysis and planning that will have to be coordinated broadly and carefully. The Working Group has consciously not associated time frames with any of its recommendations. The recommendations fall into five general areas:
    1. Increase the efficiency of bibliographic production for all libraries through increased cooperation and increased sharing of bibliographic records, and by maximizing the use of data produced throughout the entire "supply chain" for information resources.
    2. Transfer effort into higher-value activity. In particular, expand the possibilities for knowledge creation by "exposing" rare and unique materials held by libraries that are currently hidden from view and, thus, underused.
    3. Position our technology for the future by recognizing that the World Wide Web is both our technology platform and the appropriate platform for the delivery of our standards. Recognize that people are not the only users of the data we produce in the name of bibliographic control, but so too are machine applications that interact with those data in a variety of ways.
    4. Position our community for the future by facilitating the incorporation of evaluative and other user-supplied information into our resource descriptions. Work to realize the potential of the FRBR framework for revealing and capitalizing on the various relationships that exist among information resources.
    5. Strengthen the library profession through education and the development of metrics that will inform decision-making now and in the future.
    The Working Group intends what follows to serve as a broad blueprint for the Library of Congress and its colleagues in the library and information technology communities for extending and promoting access to information resources.
    Editor
    Library of Congress / Working Group on the Future of Bibliographic Control
    Source
    http://www.loc.gov/bibliographic-future/news/lcwg-report-draft-11-30-07-final.pdf
  10. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.04
    0.037206076 = product of:
      0.18603037 = sum of:
        0.18603037 = weight(_text_:readable in 1155) [ClassicSimilarity], result of:
          0.18603037 = score(doc=1155,freq=4.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.67199206 = fieldWeight in 1155, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
      0.2 = coord(1/5)
    
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
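    The model described in the abstract associates three things (the crosswalk, the source metadata standard and the target metadata standard), each of which may carry a machine-readable encoding and a human-readable description. A plain Python rendering of just that association is sketched below; the real repository encodes it as METS, which this sketch does not attempt, and the example URLs are invented.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class DescribedObject:
          human_readable: str                        # prose description
          machine_readable: Optional[str] = None     # e.g. a URL to an XSD or XSLT file

      @dataclass
      class CrosswalkRecord:
          source_standard: DescribedObject
          target_standard: DescribedObject
          crosswalk: DescribedObject

      record = CrosswalkRecord(
          source_standard=DescribedObject("Dublin Core", "http://example.org/dc.xsd"),
          target_standard=DescribedObject("MARC 21", "http://example.org/marc21.xsd"),
          crosswalk=DescribedObject("DC-to-MARC mapping", "http://example.org/dc2marc.xsl"),
      )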
  11. Radhakrishnan, A.: Swoogle : an engine for the Semantic Web (2007) 0.04
    0.03696416 = product of:
      0.09241039 = sum of:
        0.075167626 = weight(_text_:readable in 4709) [ClassicSimilarity], result of:
          0.075167626 = score(doc=4709,freq=2.0), product of:
            0.2768342 = queryWeight, product of:
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.04505818 = queryNorm
            0.2715258 = fieldWeight in 4709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1439276 = idf(docFreq=257, maxDocs=44218)
              0.03125 = fieldNorm(doc=4709)
        0.017242765 = product of:
          0.03448553 = sum of:
            0.03448553 = weight(_text_:data in 4709) [ClassicSimilarity], result of:
              0.03448553 = score(doc=4709,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24204408 = fieldWeight in 4709, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4709)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    "Swoogle, the Semantic web search engine, is a research project carried out by the ebiquity research group in the Computer Science and Electrical Engineering Department at the University of Maryland. It's an engine tailored towards finding documents on the semantic web. The whole research paper is available here. Semantic web is touted as the next generation of online content representation where the web documents are represented in a language that is not only easy for humans but is machine readable (easing the integration of data as never thought possible) as well. And the main elements of the semantic web include data model description formats such as Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, Turtle, N-Triples), and notations such as RDF Schema (RDFS), the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain (Wikipedia). And Swoogle is an attempt to mine and index this new set of web documents. The engine performs crawling of semantic documents like most web search engines and the search is available as web service too. The engine is primarily written in Java with the PHP used for the front-end and MySQL for database. Swoogle is capable of searching over 10,000 ontologies and indexes more that 1.3 million web documents. It also computes the importance of a Semantic Web document. The techniques used for indexing are the more google-type page ranking and also mining the documents for inter-relationships that are the basis for the semantic web. For more information on how the RDF framework can be used to relate documents, read the link here. Being a research project, and with a non-commercial motive, there is not much hype around Swoogle. However, the approach to indexing of Semantic web documents is an approach that most engines will have to take at some point of time. When the Internet debuted, there were no specific engines available for indexing or searching. The Search domain only picked up as more and more content became available. One fundamental question that I've always wondered about it is - provided that the search engines return very relevant results for a query - how to ascertain that the documents are indeed the most relevant ones available. There is always an inherent delay in indexing of document. Its here that the new semantic documents search engines can close delay. Experimenting with the concept of Search in the semantic web can only bore well for the future of search technology."
  12. Parent, I.: The importance of national bibliographies in the digital age (2007) 0.03
    0.032107983 = product of:
      0.080269955 = sum of:
        0.060359728 = weight(_text_:bibliographic in 687) [ClassicSimilarity], result of:
          0.060359728 = score(doc=687,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.34409973 = fieldWeight in 687, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=687)
        0.01991023 = product of:
          0.03982046 = sum of:
            0.03982046 = weight(_text_:data in 687) [ClassicSimilarity], result of:
              0.03982046 = score(doc=687,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2794884 = fieldWeight in 687, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=687)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Technological developments are introducing both challenges and opportunities for the future production of national bibliographies. There are new complex issues which must be addressed collectively by national bibliographic agencies. As an international community, we must consider new methods and models for the on-going provision of authoritative data in national bibliographies, which continue to play an essential role in the control of and access to each country's published heritage.
  13. Landry, P.: Providing multilingual subject access through linking of subject heading languages : the MACS approach (2009) 0.03
    0.032107983 = product of:
      0.080269955 = sum of:
        0.060359728 = weight(_text_:bibliographic in 2787) [ClassicSimilarity], result of:
          0.060359728 = score(doc=2787,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.34409973 = fieldWeight in 2787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=2787)
        0.01991023 = product of:
          0.03982046 = sum of:
            0.03982046 = weight(_text_:data in 2787) [ClassicSimilarity], result of:
              0.03982046 = score(doc=2787,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2794884 = fieldWeight in 2787, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2787)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The MACS project aims at providing multilingual subject access to library catalogues through the use of concordances between subject headings from LCSH, RAMEAU and SWD. The manual approach, as used by MACS, has been up to now the most reliable method for ensuring accurate multilingual subject access to bibliographic data. The presentation will give an overview on the development of the project and will outline the strategy and methods used by the MACS project. The presentation will also include a demonstration of the search interface developed by The European Library (TEL).
  14. Patton, G.E.: From FRBR to FRAD : extending the Model (2009) 0.03
    0.032107983 = product of:
      0.080269955 = sum of:
        0.060359728 = weight(_text_:bibliographic in 3030) [ClassicSimilarity], result of:
          0.060359728 = score(doc=3030,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.34409973 = fieldWeight in 3030, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0625 = fieldNorm(doc=3030)
        0.01991023 = product of:
          0.03982046 = sum of:
            0.03982046 = weight(_text_:data in 3030) [ClassicSimilarity], result of:
              0.03982046 = score(doc=3030,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2794884 = fieldWeight in 3030, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3030)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    A report on the completion of the work of the IFLA Working Group on Functional Requirements and Numbering of Authority Records, which was charged by the IFLA Division of Bibliographic Control to extend the FRBR model to authority data.
  15. Gorman, M.: Bibliographic control or chaos : an agenda for national bibliographic services in the 21st century (2001) 0.03
    0.029876543 = product of:
      0.14938271 = sum of:
        0.14938271 = weight(_text_:bibliographic in 6899) [ClassicSimilarity], result of:
          0.14938271 = score(doc=6899,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.8516034 = fieldWeight in 6899, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.109375 = fieldNorm(doc=6899)
      0.2 = coord(1/5)
    
  16. Functional Requirements for Subject Authority Data (FRSAD) : a conceptual model (2009) 0.03
    0.029775355 = product of:
      0.074438386 = sum of:
        0.060359728 = weight(_text_:bibliographic in 3573) [ClassicSimilarity], result of:
          0.060359728 = score(doc=3573,freq=8.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.34409973 = fieldWeight in 3573, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.03125 = fieldNorm(doc=3573)
        0.014078659 = product of:
          0.028157318 = sum of:
            0.028157318 = weight(_text_:data in 3573) [ClassicSimilarity], result of:
              0.028157318 = score(doc=3573,freq=4.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.19762816 = fieldWeight in 3573, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3573)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Subject access to information has been the predominant approach of users to satisfy their information needs. Research demonstrates that the integration of controlled vocabulary information with an information retrieval system helps users perform more effective subject searches. This integration becomes possible when subject authority data (information about subjects from authority files) are linked to bibliographic files and are made available to users. The purpose of authority control is to ensure consistency in representing a value-a name of a person, a place name, or a subject term-in the elements used as access points in information retrieval. For example, "World War, 1939-1945" has been established as an authorized subject heading in the Library of Congress Subject Headings (LCSH). When using LCSH, in cataloging or indexing, all publications about World War II are assigned the established heading regardless of whether a publication refers to the war as the "European War, 1939-1945", "Second World War", "World War 2", "World War II", "WWII", "World War Two", or "2nd World War." The synonymous expressions are referred to by the authorized heading. This ensures that all publications about World War II can be retrieved by and displayed under the same subject heading, either in an individual institution's own catalog or database or in a union catalog that contains bibliographic records from a number of individual libraries or databases. In almost all large bibliographic databases, authority control is achieved manually or semi-automatically by means of an authority file. The file contains records of headings or access points - names, titles, or subjects - that have been authorized for use in bibliographic records. In addition to ensuring consistency in subject representation, a subject authority record also records and maintains semantic relationships among subject terms and/or their labels. Records in a subject authority file are connected through semantic relationships, which may be expressed statically in subject authority records or generated dynamically according to the specific needs (e.g., presenting the broader and narrower terms) of printed or online display of thesauri, subject headings lists, classification schemes, and other knowledge organization systems.
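    The World War II example above is, in essence, a lookup from variant expressions to one authorized heading. A toy version of that lookup is sketched below; the dictionary merely stands in for a real subject authority file and is not how LCSH authority data are actually distributed.

      # Variant expressions -> authorized LCSH heading (illustrative subset from the abstract)
      authority = {
          "european war, 1939-1945": "World War, 1939-1945",
          "second world war":        "World War, 1939-1945",
          "world war 2":             "World War, 1939-1945",
          "world war ii":            "World War, 1939-1945",
          "wwii":                    "World War, 1939-1945",
          "world war two":           "World War, 1939-1945",
          "2nd world war":           "World War, 1939-1945",
      }

      def authorized_heading(term: str) -> str:
          # Fall back to the term itself if it is not a known variant
          return authority.get(term.strip().lower(), term)

      print(authorized_heading("WWII"))   # -> World War, 1939-1945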
  17. Miller, D.R.: XML: Libraries' strategic opportunity (2001) 0.03
    0.027282408 = product of:
      0.06820602 = sum of:
        0.03772483 = weight(_text_:bibliographic in 1467) [ClassicSimilarity], result of:
          0.03772483 = score(doc=1467,freq=2.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.21506234 = fieldWeight in 1467, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1467)
        0.03048119 = product of:
          0.06096238 = sum of:
            0.06096238 = weight(_text_:data in 1467) [ClassicSimilarity], result of:
              0.06096238 = score(doc=1467,freq=12.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.4278775 = fieldWeight in 1467, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1467)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    XML (eXtensible Markup Language) is fast gaining favor as the universal format for data and document exchange -- in effect becoming the lingua franca of the Information Age. Currently, "library information" is at a particular disadvantage on the rapidly evolving World Wide Web. Why? Despite libraries' explorations of web catalogs, scanning projects, digital data repositories, and creation of web pages galore, there remains a digital divide. The core of libraries' data troves is stored in proprietary formats of integrated library systems (ILS) and in the complex and arcane MARC formats -- both restricted chiefly to the province of technical services and systems librarians. Even they are hard-pressed to extract and integrate this wealth of data with resources from outside this rarefied environment. Segregation of library information underlies many difficulties: producing standard bibliographic citations from MARC data, automatically creating new materials lists (including new web resources) on a particular topic, exchanging data with our vendors, and even migrating from one ILS to another. Why do we continue to hobble our potential by embracing these self-imposed limitations? Most ILSs began in libraries, which soon recognized the pitfalls of do-it-yourself solutions. Thus, we wisely anticipated the necessity for standards. However, with the advent of the web, we soon found "our" collections and a flood of new resources appearing in digital format on opposite sides of the divide. If we do not act quickly to integrate library resources with mainstream web resources, we are in grave danger of becoming marginalized.
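    One of the difficulties named above, producing a standard bibliographic citation from MARC data, can be illustrated with a mock record. The field tags (100, 245, 260) are genuine MARC tags, but the flattened dictionary form, the record content and the formatting rule are all invented for this sketch and use no ILS-specific API.

      # Mock flattened MARC record: tag -> subfield code -> value
      record = {
          "100": {"a": "O'Neill, E.T."},
          "245": {"a": "The FRBRization of Humphry Clinker :", "b": "a case study"},
          "260": {"c": "2002"},
      }

      def citation(rec):
          author = rec.get("100", {}).get("a", "").rstrip(", ")
          title = " ".join(p for p in (rec.get("245", {}).get("a", ""),
                                       rec.get("245", {}).get("b", "")) if p).rstrip(" /:")
          year = rec.get("260", {}).get("c", "")
          return f"{author} ({year}). {title}."

      print(citation(record))
      # -> O'Neill, E.T. (2002). The FRBRization of Humphry Clinker : a case study.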
  18. Byrum, J.D.: The birth and re-birth of the ISBD's : process and procedures for creating and revising the International Standard Bibliographic Description (2000) 0.03
    0.025608465 = product of:
      0.12804233 = sum of:
        0.12804233 = weight(_text_:bibliographic in 5399) [ClassicSimilarity], result of:
          0.12804233 = score(doc=5399,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.7299458 = fieldWeight in 5399, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=5399)
      0.2 = coord(1/5)
    
    Footnote
    Paper presented at the IFLA General Conference, Division IV: Bibliographic Control, Jerusalem, 2000.
  19. Kedar, R.: Bibliographic projects and tools in Israel (2000) 0.03
    0.025608465 = product of:
      0.12804233 = sum of:
        0.12804233 = weight(_text_:bibliographic in 5406) [ClassicSimilarity], result of:
          0.12804233 = score(doc=5406,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.7299458 = fieldWeight in 5406, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=5406)
      0.2 = coord(1/5)
    
    Footnote
    Paper presented at the IFLA General Conference, Division IV: Bibliographic Control, Jerusalem, 2000.
  20. Madsen, M.: Teaching bibliography, bibliographic control and bibliographical competence (2000) 0.03
    0.025608465 = product of:
      0.12804233 = sum of:
        0.12804233 = weight(_text_:bibliographic in 5408) [ClassicSimilarity], result of:
          0.12804233 = score(doc=5408,freq=4.0), product of:
            0.17541347 = queryWeight, product of:
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.04505818 = queryNorm
            0.7299458 = fieldWeight in 5408, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.893044 = idf(docFreq=2449, maxDocs=44218)
              0.09375 = fieldNorm(doc=5408)
      0.2 = coord(1/5)
    
    Footnote
    Paper presented at the IFLA General Conference, Division IV: Bibliographic Control, Jerusalem, 2000.

Languages

  • e 193
  • d 26
  • a 2
  • el 2

Types

  • a 62
  • p 22
  • i 7
  • n 6
  • r 4
  • s 2
  • m 1