Search (173 results, page 8 of 9)

  • theme_ss:"Information Gateway"
  1. Cristán, A.L.: SACO and subject gateways (2004) 0.00
    0.0023918552 = product of:
      0.0071755657 = sum of:
        0.0071755657 = product of:
          0.014351131 = sum of:
            0.014351131 = weight(_text_:of in 5679) [ClassicSimilarity], result of:
              0.014351131 = score(doc=5679,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20947541 = fieldWeight in 5679, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5679)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This presentation attempts to fit the subject contribution mechanism used in the Program for Cooperative Cataloging's SACO Program into the context of subject gateways. The discussion points to several subject gateways and concludes that there is no similarity between the two. Subject gateways are a mechanism for facilitating searching, while the SACO Program is a cooperative venture that provides a "gateway" for developing LCSH (Library of Congress Subject Headings) into an international authority file for subject headings.
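The nested score breakdowns attached to each result are Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, the numbers in entry 1 can be reproduced from the constants in its explain tree (idf, queryNorm, and fieldNorm are copied verbatim from the listing above):

```python
import math

# Constants copied from the explain tree of entry 1 (field "_text_:of", doc 5679).
idf = 1.5637573         # ClassicSimilarity: 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 25163)
query_norm = 0.043811057
field_norm = 0.0546875  # length normalization encoded for this field
freq = 6.0              # term frequency of "of" in the document

tf = math.sqrt(freq)                  # 2.4494898
query_weight = idf * query_norm       # 0.06850986
field_weight = tf * idf * field_norm  # 0.20947541
weight = query_weight * field_weight  # 0.014351131
# coord(1/2) and coord(1/3) down-weight queries where only some clauses match:
score = weight * (1 / 2) * (1 / 3)    # ~ 0.0023918552, the entry's final score
print(score)
```

The same arithmetic, with different freq and fieldNorm values, accounts for every score tree on this page.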
  2. Sharma, R.K.; Vishwanathan, K.R.: Digital libraries : development and challenges (2001) 0.00
    0.0023918552 = product of:
      0.0071755657 = sum of:
        0.0071755657 = product of:
          0.014351131 = sum of:
            0.014351131 = weight(_text_:of in 754) [ClassicSimilarity], result of:
              0.014351131 = score(doc=754,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20947541 = fieldWeight in 754, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=754)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Digital libraries are here to stay, and the conversion of traditional libraries to digital ones is inevitable. Appropriate care should be taken to develop both systems and managerial skills. Globalisation of the digital concept will not be possible until the technological gap between developed and developing countries is overcome. Measures are needed to counter the menace of computer viruses and unauthorised use. Sufficient thought has not been given to attaining self-sustained growth; it is therefore essential to explore new avenues for funding, particularly since both the initial investment in digital libraries and their maintenance costs are high.
  3. Seadle, M.; Greifeneder, E.: Defining a digital library (2007) 0.00
    0.0023918552 = product of:
      0.0071755657 = sum of:
        0.0071755657 = product of:
          0.014351131 = sum of:
            0.014351131 = weight(_text_:of in 2540) [ClassicSimilarity], result of:
              0.014351131 = score(doc=2540,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20947541 = fieldWeight in 2540, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2540)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - This editorial seeks to examine the definition of a "digital library" to see whether one can be constructed that usefully distinguishes a digital library from other types of electronic resources. Design/methodology/approach - The primary methodology compares definitions from multiple settings, including formal institutional settings, working definitions from articles, and a synthesis created in a seminar at Humboldt University in Berlin. Findings - At this point, digital libraries are evolving too fast for any lasting definition. Definitions that users readily understand are too broad and imprecise, and definitions with more technical precision quickly grow too obscure for common use. Originality/value - A functional definition of a digital library would add clarity to a burgeoning field, especially when trying to evaluate a resource. The student perspective provides a fresh look at the problem.
  4. Broughton, V.: Organizing a national humanities portal : a model for the classification and subject management of digital resources (2002) 0.00
    0.0023673228 = product of:
      0.0071019684 = sum of:
        0.0071019684 = product of:
          0.014203937 = sum of:
            0.014203937 = weight(_text_:of in 4607) [ClassicSimilarity], result of:
              0.014203937 = score(doc=4607,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20732689 = fieldWeight in 4607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4607)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  5. Hickey, T.; Vizine-Goetz, D.: The role of classification in CORC (1999) 0.00
    0.0023673228 = product of:
      0.0071019684 = sum of:
        0.0071019684 = product of:
          0.014203937 = sum of:
            0.014203937 = weight(_text_:of in 385) [ClassicSimilarity], result of:
              0.014203937 = score(doc=385,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20732689 = fieldWeight in 385, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.09375 = fieldNorm(doc=385)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  6. Howarth, L.C.: Modelling a natural language gateway to metadata-enabled resources (2004) 0.00
    0.0023673228 = product of:
      0.0071019684 = sum of:
        0.0071019684 = product of:
          0.014203937 = sum of:
            0.014203937 = weight(_text_:of in 2626) [ClassicSimilarity], result of:
              0.014203937 = score(doc=2626,freq=8.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20732689 = fieldWeight in 2626, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2626)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Even as the number of Web-enabled resources and knowledge repositories continues its unabated climb, both general purpose and domain-specific metadata schemas are in vigorous development. While this might be viewed as a promising direction for more precise access to disparate metadata-enabled resources, semantically-oriented tools to facilitate cross-domain searching by end-users unfamiliar with structured approaches to language or particular metadata schema conventions have received little attention. This paper describes findings from a focus group assessment of a natural language "gateway" previously derived from mapping, then categorizing terminology from nine metadata schemas. Semantic ambiguities identified in relation to three core metadata elements, namely, "Names", "Title", and "Subject", are discussed relative to data collection techniques employed in the research. Implications for further research, and particularly that pertaining to the design of an Interlingua gateway to multilingual, metadata-enabled resources, are addressed.
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  7. Ohly, H.P.: The organization of Internet links in a social science clearing house (2004) 0.00
    0.0023673228 = product of:
      0.0071019684 = sum of:
        0.0071019684 = product of:
          0.014203937 = sum of:
            0.014203937 = weight(_text_:of in 2641) [ClassicSimilarity], result of:
              0.014203937 = score(doc=2641,freq=8.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20732689 = fieldWeight in 2641, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2641)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The German Internet Clearinghouse SocioGuide has changed to a database management system. Accordingly the metadata description scheme has become more detailed. The main information types are: institutions, persons, literature, tools, data sets, objects, topics, processes and services. Some of the description elements, such as title, resource identifier, and creator are universal, whereas others, such as primary/secondary information, and availability are specific to information type and cannot be generalized by referring to Dublin Core elements. The quality of Internet sources is indicated implicitly by characteristics, such as extent, restriction, or status. The SocioGuide is managed in DBClear, a generic system that can be adapted to different source types. It makes distributed input possible and contains workflow components.
    Source
    Knowledge organization and the global information society: Proceedings of the 8th International ISKO Conference 13-16 July 2004, London, UK. Ed.: I.C. McIlwaine
  8. Hellweg, H.; Hermes, B.; Stempfhuber, M.; Enderle, W.; Fischer, T.: DBClear : a generic system for clearinghouses (2002) 0.00
    0.0023673228 = product of:
      0.0071019684 = sum of:
        0.0071019684 = product of:
          0.014203937 = sum of:
            0.014203937 = weight(_text_:of in 3605) [ClassicSimilarity], result of:
              0.014203937 = score(doc=3605,freq=8.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20732689 = fieldWeight in 3605, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3605)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Clearinghouses - or subject gateways - are domain-specific collections of links to resources on the Internet. The links are described with metadata and structured according to a domain-specific subject hierarchy. Users access the information by searching the metadata or by browsing the subject hierarchy. The standards for metadata vary across existing clearinghouses, and different technologies for storing and accessing the metadata are used. This makes it difficult to distribute the editorial or administrative work involved in maintaining a clearinghouse, or to exchange information with other systems. DBClear is a generic, platform-independent clearinghouse system whose metadata schema can be adapted to different standards. The data is stored in a relational database. It includes a workflow component to support distributed maintenance, and automation modules for link checking and metadata extraction. The presentation of the clearinghouse on the Web can be modified to allow seamless integration into existing web sites.
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29-31, 2002. Eds: W. Adamczak and A. Nase
  9. Birmingham, W.; Pardo, B.; Meek, C.; Shifrin, J.: The MusArt music-retrieval system (2002) 0.00
    0.0023673228 = product of:
      0.007101968 = sum of:
        0.007101968 = product of:
          0.014203936 = sum of:
            0.014203936 = weight(_text_:of in 1205) [ClassicSimilarity], result of:
              0.014203936 = score(doc=1205,freq=18.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20732687 = fieldWeight in 1205, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1205)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Music websites are ubiquitous, and music downloads, such as MP3, are a major source of Web traffic. As the amount of musical content increases and the Web becomes an important mechanism for distributing music, we expect to see a rising demand for music search services. Many currently available music search engines rely on file names, song title, composer or performer as the indexing and retrieval mechanism. These systems do not make use of the musical content. We believe that a more natural, effective, and usable music-information retrieval (MIR) system should have audio input, where the user can query with musical content. We are developing a system called MusArt for audio-input MIR. With MusArt, as with other audio-input MIR systems, a user sings or plays a theme, hook, or riff from the desired piece of music. The system transcribes the query and searches for related themes in a database, returning the most similar themes, given some measure of similarity. We call this "retrieval by query." In this paper, we describe the architecture of MusArt. An important element of MusArt is metadata creation: we believe that it is essential to automatically abstract important musical elements, particularly themes. Theme extraction is performed by a subsystem called MME, which we describe later in this paper. Another important element of MusArt is its support for a variety of search engines, as we believe that MIR is too complex for a single approach to work for all queries. Currently, MusArt supports a dynamic time-warping search engine that has high recall, and a complementary stochastic search engine that searches over themes, emphasizing speed and relevancy. The stochastic search engine is discussed in this paper.
  10. Hyning, V. Van; Lintott, C.; Blickhan, S.; Trouille, L.: Transforming libraries and archives through crowdsourcing (2017) 0.00
    0.0023673228 = product of:
      0.0071019684 = sum of:
        0.0071019684 = product of:
          0.014203937 = sum of:
            0.014203937 = weight(_text_:of in 2526) [ClassicSimilarity], result of:
              0.014203937 = score(doc=2526,freq=8.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.20732689 = fieldWeight in 2526, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2526)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article showcases the aims and research goals of the project entitled "Transforming Libraries and Archives through Crowdsourcing", recipient of a 2016 Institute of Museum and Library Services grant. The grant will fund the creation of four bespoke text and audio transcription projects hosted on the Zooniverse, the world's leading research crowdsourcing platform. These transcription projects, while supporting the research of four separate institutions, will also serve to expand and enhance the Zooniverse platform to better support galleries, libraries, archives and museums (GLAM institutions) in unlocking their data and engaging the public through crowdsourcing.
  11. Internet searching and indexing : the subject approach (2000) 0.00
    0.0022319334 = product of:
      0.0066958 = sum of:
        0.0066958 = product of:
          0.0133916 = sum of:
            0.0133916 = weight(_text_:of in 1468) [ClassicSimilarity], result of:
              0.0133916 = score(doc=1468,freq=4.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.19546966 = fieldWeight in 1468, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1468)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This comprehensive volume offers usable information for people at all levels of Internet savvy. It can teach librarians, students, and patrons how to search the Internet more systematically. It also helps information professionals design more efficient, effective search engines and Web pages.
    Series
    Journal of internet cataloging; 2, nos. 1/2
  12. Mustafa El Hadi, W.; Roszkowski, M.: The role of digital libraries as virtual research environments for the digital humanities (2016) 0.00
    0.0022319334 = product of:
      0.0066958 = sum of:
        0.0066958 = product of:
          0.0133916 = sum of:
            0.0133916 = weight(_text_:of in 4934) [ClassicSimilarity], result of:
              0.0133916 = score(doc=4934,freq=4.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.19546966 = fieldWeight in 4934, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4934)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Knowledge organization for a sustainable world: challenges and perspectives for cultural, scientific, and technological sharing in a connected society : proceedings of the Fourteenth International ISKO Conference 27-29 September 2016, Rio de Janeiro, Brazil / organized by International Society for Knowledge Organization (ISKO), ISKO-Brazil, São Paulo State University ; edited by José Augusto Chaves Guimarães, Suellen Oliveira Milani, Vera Dodebei
  13. Aksoy, C.; Can, F.; Kocberber, S.: Novelty detection for topic tracking (2012) 0.00
    0.0022056228 = product of:
      0.006616868 = sum of:
        0.006616868 = product of:
          0.013233736 = sum of:
            0.013233736 = weight(_text_:of in 51) [ClassicSimilarity], result of:
              0.013233736 = score(doc=51,freq=10.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.19316542 = fieldWeight in 51, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=51)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Multisource web news portals provide various advantages such as richness in news content and an opportunity to follow developments from different perspectives. However, in such environments, news variety and quantity can have an overwhelming effect. New-event detection and topic-tracking studies address this problem. They examine news streams and organize stories according to their events; however, several tracking stories of an event/topic may contain no new information (i.e., no novelty). We study the novelty detection (ND) problem on the tracking news of a particular topic. For this purpose, we build a Turkish ND test collection called BilNov-2005 and propose the usage of three ND methods: a cosine-similarity (CS)-based method, a language-model (LM)-based method, and a cover-coefficient (CC)-based method. For the LM-based ND method, we show that a simpler smoothing approach, Dirichlet smoothing, can have similar performance to a more complex smoothing approach, Shrinkage smoothing. We introduce a baseline that shows the performance of a system with random novelty decisions. In addition, a category-based threshold learning method is used for the first time in ND literature. The experimental results show that the LM-based ND method significantly outperforms the CS- and CC-based methods, and category-based threshold learning achieves promising results when compared to general threshold learning.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.4, S.777-795
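The abstract above compares three novelty-detection methods at a high level. As an illustrative sketch only (the function names, tokenization, and threshold below are assumptions, not taken from the paper), the cosine-similarity-based decision amounts to flagging a tracking story as novel when its similarity to every previously seen story of the topic stays below a threshold:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def is_novel(story: str, seen: list[str], threshold: float = 0.5) -> bool:
    # A tracking story is "novel" if it is not too similar to any earlier story.
    vec = Counter(story.lower().split())
    return all(cosine(vec, Counter(s.lower().split())) < threshold for s in seen)

seen = ["earthquake hits city center", "rescue teams search city center"]
print(is_novel("earthquake hits city center again", seen))    # similar -> False
print(is_novel("government announces new budget plan", seen)) # dissimilar -> True
```

The language-model and cover-coefficient methods described in the abstract replace the cosine measure with likelihood- and coverage-based scores, but the thresholding structure is analogous.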
  14. Crane, G.: The Perseus Project and beyond : how building a digital library challenges the humanities and technology (1998) 0.00
    0.0020501618 = product of:
      0.006150485 = sum of:
        0.006150485 = product of:
          0.01230097 = sum of:
            0.01230097 = weight(_text_:of in 1251) [ClassicSimilarity], result of:
              0.01230097 = score(doc=1251,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.17955035 = fieldWeight in 1251, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1251)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    For more than ten years, the Perseus Project has been developing a digital library in the humanities. Initial work concentrated exclusively on ancient Greek culture, using this domain as a case study for a compact, densely hypertextual library on a single, but interdisciplinary, subject. Since it has achieved its initial goals with the Greek materials, however, Perseus is using the existing library to study the new possibilities (and limitations) of the electronic medium and to serve as the foundation for work in new cultural domains: Perseus has begun coverage of Roman and now Renaissance materials, with plans for expansion into other areas of the humanities as well. Our goal is not only to help traditional scholars conduct their research more effectively but, more importantly, to help humanists use the technology to redefine the relationship between their work and the broader intellectual community.
  15. Johannsen, J.: InetBib 2004 in Bonn : Tagungsbericht (2005) 0.00
    0.0019785978 = product of:
      0.0059357933 = sum of:
        0.0059357933 = product of:
          0.011871587 = sum of:
            0.011871587 = weight(_text_:22 in 3125) [ClassicSimilarity], result of:
              0.011871587 = score(doc=3125,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.07738023 = fieldWeight in 3125, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3125)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2005 19:05:37
  16. Zhu, X.; Freeman, M.A.: An evaluation of U.S. municipal open data portals : a user interaction framework (2019) 0.00
    0.001972769 = product of:
      0.0059183068 = sum of:
        0.0059183068 = product of:
          0.0118366135 = sum of:
            0.0118366135 = weight(_text_:of in 5502) [ClassicSimilarity], result of:
              0.0118366135 = score(doc=5502,freq=8.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.17277241 = fieldWeight in 5502, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5502)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    As an increasing number of open government data (OGD) portals are created, an evaluation method is needed to assess these portals. In this study, we drew from the existing principles and evaluation methods to develop a User Interaction Framework, with concrete criteria in five dimensions: Access, Trust, Understand, Engage-integrate, and Participate. The framework was then used to evaluate the current OGD sites created and maintained by 34 U.S. municipal government agencies. The results show that, overall, portals perform well in terms of providing access, but not so well in helping users understand and engage with data. These findings indicate room for improvement in multiple areas and suggest potential roles for information professionals as data mediators. The study also reveals that portals using the Socrata platform performed better, regarding user access, trust, engagement, and participation. However, the variability among portals indicates that some portals should improve their platforms to achieve greater user engagement and participation. In addition, city governments need to develop clear plans about what data should be available and how to make them available to their public.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.1, S.27-37
  17. Zeeman, D.; Turner, G.: Resource discovery in the Government of Canada using the Dewey Decimal Classification (2006) 0.00
    0.0019529418 = product of:
      0.005858825 = sum of:
        0.005858825 = product of:
          0.01171765 = sum of:
            0.01171765 = weight(_text_:of in 5782) [ClassicSimilarity], result of:
              0.01171765 = score(doc=5782,freq=4.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.17103596 = fieldWeight in 5782, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5782)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Library and Archives Canada (LAC) has capitalized on the Dewey Decimal Classification (DDC) potential for organizing Web resources in two projects. Since 1995, LAC has been providing a service that offers links to authoritative Web resources about Canada categorized according to the DDC via its Web site. More recently, LAC has partnered with the federal government Department of Canadian Heritage to manage Web content related to Canadian culture in a DDC-based subject tree. Although the DDC works well to organize a broadly-based collection, challenges have been encountered in adapting it for a specific subject domain.
  18. Fang, L.: A developing search service : heterogeneous resources integration and retrieval system (2004) 0.00
    0.0017084682 = product of:
      0.0051254043 = sum of:
        0.0051254043 = product of:
          0.010250809 = sum of:
            0.010250809 = weight(_text_:of in 1193) [ClassicSimilarity], result of:
              0.010250809 = score(doc=1193,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.1496253 = fieldWeight in 1193, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1193)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This article describes two approaches for searching heterogeneous resources, as used in two corresponding existing systems - RIRS (Resource Integration Retrieval System) and HRUSP (Heterogeneous Resource Union Search Platform). On analyzing the existing systems, a possible framework - the MUSP (Multimetadata-Based Union Search Platform) - is presented. Libraries now face a dilemma. On one hand, libraries subscribe to many types of database retrieval systems produced by various providers, and build their data and information systems independently. This results in highly heterogeneous and distributed systems at the technical level (e.g., different operating systems and user interfaces) and at the conceptual level (e.g., the same objects are named using different terms). On the other hand, end users want to access all these heterogeneous data via a union interface, without having to know the structure of each information system or the different retrieval methods used by the systems. Libraries must achieve a harmony between information providers and users. In order to bridge the gap between the service providers and the users, it would seem that all source databases would need to be rebuilt according to a uniform data structure and query language, but this seems impossible. Fortunately, however, libraries and information and technology providers are now making an effort to find a middle course that meets the requirements of both data providers and users. They are doing this through resource integration.
  19. Prasad, A.R.D.; Madalli, D.P.: Faceted infrastructure for semantic digital libraries (2008) 0.00
    0.0017084682 = product of:
      0.0051254043 = sum of:
        0.0051254043 = product of:
          0.010250809 = sum of:
            0.010250809 = weight(_text_:of in 1905) [ClassicSimilarity], result of:
              0.010250809 = score(doc=1905,freq=6.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.1496253 = fieldWeight in 1905, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1905)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - The paper aims to argue that digital library retrieval should be based on semantic representations and to propose a semantic infrastructure for digital libraries. Design/methodology/approach - The approach taken is a formal model based on subject representation for digital libraries. Findings - Search engines and search techniques have fallen short of user expectations, as they do not offer context-based retrieval. Deploying semantic web technologies would lead to more efficient and precise representation of digital library content and hence better retrieval. Though digital libraries often have metadata for information resources that can be accessed through OAI-PMH, much remains to be accomplished in making digital libraries semantic web compliant. This paper presents a semantic infrastructure for digital libraries that will go a long way towards providing them, and web-based information services, with products highly customised to users' needs. Research limitations/implications - Only a model for a semantic infrastructure is proposed here, after studying current user-centric, top-down models adopted in digital library service architectures. Originality/value - This paper gives a generic model for building a semantic infrastructure for digital libraries. Faceted ontologies for digital libraries are just one approach; the same model may be adopted by groups building ontologies with different approaches to realise efficient retrieval in digital libraries.
  20. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H.: The Europeana Data Model (EDM) (2010) 0.00
    0.0016739499 = product of:
      0.0050218496 = sum of:
        0.0050218496 = product of:
          0.010043699 = sum of:
            0.010043699 = weight(_text_:of in 3967) [ClassicSimilarity], result of:
              0.010043699 = score(doc=3967,freq=4.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.14660224 = fieldWeight in 3967, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3967)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The Europeana Data Model (EDM) is a new approach towards structuring and representing data delivered to Europeana by the various contributing cultural heritage institutions. The model aims at greater expressivity and flexibility in comparison to the current Europeana Semantic Elements (ESE), which it is destined to replace. The design principles underlying the EDM are based on the core principles and best practices of the Semantic Web and Linked Data efforts to which Europeana wants to contribute. The model itself builds upon established standards like RDF(S), OAI-ORE, SKOS, and Dublin Core. It acts as a common top-level ontology which retains original data models and information perspectives while at the same time enabling interoperability. The paper elaborates on the aforementioned aspects and the design principles which drove the development of the EDM.

Languages

  • e 148
  • d 25

Types

  • a 155
  • el 38
  • m 8
  • s 6
  • x 1