Search (30 results, page 1 of 2)

  • type_ss:"el"
  • theme_ss:"Information Gateway"
  1. Birmingham, W.; Pardo, B.; Meek, C.; Shifrin, J.: The MusArt music-retrieval system (2002) 0.04
    0.035714075 = product of:
      0.053571112 = sum of:
        0.010669114 = weight(_text_:on in 1205) [ClassicSimilarity], result of:
          0.010669114 = score(doc=1205,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 1205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1205)
        0.042902 = product of:
          0.085804 = sum of:
            0.085804 = weight(_text_:demand in 1205) [ClassicSimilarity], result of:
              0.085804 = score(doc=1205,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2756524 = fieldWeight in 1205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1205)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
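    Reading the explain tree: Lucene's ClassicSimilarity combines, for each matching query term, a queryWeight (idf x queryNorm) and a fieldWeight (sqrt(tf) x idf x fieldNorm), multiplies the two, sums over terms, and scales by the coord factors. A worked reconstruction of the values shown above, in LaTeX notation:

    \[
    \mathrm{score}(q,d) \;=\; \mathrm{coord}(q,d)\sum_{t\in q}\underbrace{\mathrm{idf}(t)\,\mathrm{queryNorm}}_{\text{queryWeight}}\;\cdot\;\underbrace{\sqrt{\mathrm{tf}(t,d)}\,\mathrm{idf}(t)\,\mathrm{fieldNorm}(d)}_{\text{fieldWeight}}
    \]

    For "demand": 0.31127608 x 0.2756524 = 0.085804, halved by the inner coord(1/2) to 0.042902. For "on": 0.109763056 x 0.097201325 = 0.010669114. Their sum, 0.053571112, times the outer coord(2/3) gives 0.035714075, displayed above as 0.04. The same structure repeats in every score breakdown below; only tf, idf and fieldNorm change.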
    
    Abstract
    Music websites are ubiquitous, and music downloads, such as MP3, are a major source of Web traffic. As the amount of musical content increases and the Web becomes an important mechanism for distributing music, we expect to see a rising demand for music search services. Many currently available music search engines rely on file names, song title, composer or performer as the indexing and retrieval mechanism. These systems do not make use of the musical content. We believe that a more natural, effective, and usable music-information retrieval (MIR) system should have audio input, where the user can query with musical content. We are developing a system called MusArt for audio-input MIR. With MusArt, as with other audio-input MIR systems, a user sings or plays a theme, hook, or riff from the desired piece of music. The system transcribes the query and searches for related themes in a database, returning the most similar themes, given some measure of similarity. We call this "retrieval by query." In this paper, we describe the architecture of MusArt. An important element of MusArt is metadata creation: we believe that it is essential to automatically abstract important musical elements, particularly themes. Theme extraction is performed by a subsystem called MME, which we describe later in this paper. Another important element of MusArt is its support for a variety of search engines, as we believe that MIR is too complex for a single approach to work for all queries. Currently, MusArt supports a dynamic time-warping search engine that has high recall, and a complementary stochastic search engine that searches over themes, emphasizing speed and relevancy. The stochastic search engine is discussed in this paper.
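
    The abstract mentions a dynamic time-warping (DTW) search engine with high recall. As an illustration of the alignment idea only, and not of MusArt's actual implementation, here is a minimal sketch of a DTW distance between two pitch sequences in Python:

    def dtw_distance(query, theme):
        """Minimal dynamic time warping distance between two pitch sequences.

        Illustrative only: a sung query may be locally stretched or compressed
        relative to the stored theme, which note-by-note comparison cannot
        absorb but DTW can.
        """
        n, m = len(query), len(theme)
        INF = float("inf")
        dp = [[INF] * (m + 1) for _ in range(n + 1)]
        dp[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(query[i - 1] - theme[j - 1])     # local pitch difference
                dp[i][j] = cost + min(dp[i - 1][j],         # insertion
                                      dp[i][j - 1],         # deletion
                                      dp[i - 1][j - 1])     # match / substitution
        return dp[n][m]

    # Toy query: the user repeats one note while singing the stored theme.
    theme = [60, 62, 64, 65, 67]                # MIDI pitches of an indexed theme
    query = [60, 62, 62, 64, 65, 67]
    print(dtw_distance(query, theme))           # small distance -> likely match

    A retrieval engine of this kind would rank all indexed themes by such a distance (or by the stochastic model the abstract mentions) and return the closest ones.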
  2. Neubauer, W.: The Knowledge portal or the vision of easy access to information (2009) 0.04
    0.035714075 = product of:
      0.053571112 = sum of:
        0.010669114 = weight(_text_:on in 2812) [ClassicSimilarity], result of:
          0.010669114 = score(doc=2812,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 2812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=2812)
        0.042902 = product of:
          0.085804 = sum of:
            0.085804 = weight(_text_:demand in 2812) [ClassicSimilarity], result of:
              0.085804 = score(doc=2812,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.2756524 = fieldWeight in 2812, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2812)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In quantitative and qualitative terms, the ETH Library offers its users an extensive choice of information services; researchers, scientists and students have access to nearly all relevant information. That is one side of the coin. On the other hand, this broad but heterogeneous bundle of information sources has disadvantages that should not be underestimated: the more information services and channels there are, the more complex it becomes to find what you need for your scientific work. A portal-like integration of all the different information resources is still missing. The vision and main goal of the project "Knowledge Portal" is to develop a central access system, a "single point of access", for all electronic information services. This means that all these sources, from the library's catalogue and its full-text in-house applications to external licensed sources, should be accessible via one central Web service. Although the primary target group for this vision is the science community of ETH Zurich, the interested public should also be taken into account, since the library also has a nation-wide responsibility. The idea of launching a project of this complexity came from a survey the library conducted one and a half years ago. We asked a defined sample of scientists what they expected from their library, and one constant answer was that they wanted a single point of access to all electronic library services and, beyond that, search processes that are as simple as possible. We took this demand as a mandate to develop a "single point of access" to all electronic services the library provides. The presentation gives an overview of the general idea of the project and describes its current status.
  3. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.02
    0.02324084 = product of:
      0.03486126 = sum of:
        0.021338228 = weight(_text_:on in 1447) [ClassicSimilarity], result of:
          0.021338228 = score(doc=1447,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 1447, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1447)
        0.013523032 = product of:
          0.027046064 = sum of:
            0.027046064 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
              0.027046064 = score(doc=1447,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.15476047 = fieldWeight in 1447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1447)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The BAER Web Resources Group was charged in October 1999 with defining and describing the parameters of electronic resources that do not clearly belong to the categories being defined by the BAER Digital Group or the BAER Electronic Journals Group. After some difficulty identifying precisely which resources fell under the Group's charge, we finally named the following types of resources for our consideration: web sites, electronic texts, indexes, databases and abstracts, online reference resources, and networked and non-networked CD-ROMs. Electronic resources are a vast and growing collection that touches nearly every department within the Library. It is unrealistic to think one department can effectively administer all aspects of the collection. The Group then began to focus on the concern of bibliographic access to these varied resources, and to define parameters for handling or processing them within the Library. Some key elements became evident as the work progressed:
    * Selection process of resources to be acquired for the collection
    * Duplication of effort
    * Use of CORC
    * Resource Finder design
    * Maintenance of Resource Finder
    * CD-ROMs not networked
    * Communications
    * Voyager search limitations
    An unexpected collaboration with the Web Development Committee on the Resource Finder helped to steer the Group to more detailed descriptions of bibliographic access. This collaboration included development of data elements for the Resource Finder database, and some discussions on Library staff processing of the resources. The Web Resources Group invited expert testimony to help the Group broaden its view to envision public use of the resources and discuss concerns related to technical services processing. The first testimony came from members of the Resource Finder Committee, and some background information on the Web Development Resource Finder Committee was shared. The second testimony was from librarians who select electronic texts; three main themes were addressed: accessing CD-ROMs; the issue of including non-networked CD-ROMs in the Resource Finder; and some special concerns about electronic texts. The third testimony came from librarians who select indexes and abstracts and also provide Reference services. Appendices to this report include minutes of the meetings with the experts (Appendix A), a list of proposed data elements to be used in the Resource Finder (Appendix B), and recommendations made to the Resource Finder Committee (Appendix C). Below are summaries of the key elements.
    Date
    21. 4.2002 10:22:31
  4. Fife, E.D.; Husch, L.: The Mathematics Archives : making mathematics easy to find on the Web (1999) 0.01
    0.011761595 = product of:
      0.035284784 = sum of:
        0.035284784 = weight(_text_:on in 1239) [ClassicSimilarity], result of:
          0.035284784 = score(doc=1239,freq=14.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.3214632 = fieldWeight in 1239, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1239)
      0.33333334 = coord(1/3)
    
    Abstract
    Do a search on AltaVista for "algebra". What do you get? Nearly 700,000 hits, of which AltaVista will allow you to view only what it determines is the top 200. Major search engines such as AltaVista, Excite, HotBot, Lycos, and the like continue to provide a valuable service, but with the recent growth of the Internet, topic-specific sites that provide some organization to the topic are increasingly important. It is the goal of the Mathematics Archives to make it easier for the ordinary user to find useful mathematical information on the Web. The Mathematics Archives (http://archives.math.utk.edu) is a multipurpose site for mathematics on the Internet. The focus is on materials which can be used in mathematics education (primarily at the undergraduate level). Resources available range from shareware and public domain software to electronic proceedings of various conferences, to an extensive collection of annotated links to other mathematical sites. All materials on the Archives are categorized and cross-referenced for the convenience of the user. Several search mechanisms are provided. The Harvest search engine is implemented to provide a full-text search of most of the pages on the Archives. The software we house and our list of annotated links to mathematical sites are both categorized by subject matter. Each of these collections has a specialized search engine to assist the user in locating desired material. Services at the Mathematics Archives are divided into five broad topics:
    * Links organized by Mathematical Topics
    * Software
    * Teaching Materials
    * Other Math Archives Features
    * Other Links
  5. Janée, G.; Frew, J.; Hill, L.L.: Issues in georeferenced digital libraries (2004) 0.01
    0.010779679 = product of:
      0.032339036 = sum of:
        0.032339036 = weight(_text_:on in 1165) [ClassicSimilarity], result of:
          0.032339036 = score(doc=1165,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29462588 = fieldWeight in 1165, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1165)
      0.33333334 = coord(1/3)
    
    Abstract
    Based on a decade's experience with the Alexandria Digital Library Project, seven issues are presented that arise in creating georeferenced digital libraries, and that appear to be intrinsic to the problem of creating any library-like information system that operates on georeferenced and geospatial resources. The first and foremost issue is providing discovery of georeferenced resources. Related to discovery are the issues of gazetteer integration and specialized ranking of search results. Strong data typing and scalability are implementation issues. Providing spatial context is a critical user interface issue. Finally, sophisticated resource access mechanisms are necessary to operate on geospatial resources.
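
    The "specialized ranking of search results" mentioned above typically weighs how well a resource's spatial footprint matches the query region. The following is a generic illustration of one such measure (query-box coverage), not the ranking actually used by the Alexandria Digital Library:

    def overlap_ratio(query_box, footprint):
        """Fraction of the query box covered by a resource footprint.

        Boxes are (min_lon, min_lat, max_lon, max_lat); returns 0.0 when the
        two boxes are disjoint. Illustrative scoring only.
        """
        qx0, qy0, qx1, qy1 = query_box
        fx0, fy0, fx1, fy1 = footprint
        w = min(qx1, fx1) - max(qx0, fx0)
        h = min(qy1, fy1) - max(qy0, fy0)
        if w <= 0 or h <= 0:
            return 0.0
        return (w * h) / ((qx1 - qx0) * (qy1 - qy0))

    query = (-120.5, 34.0, -119.5, 35.0)                     # query region (lon/lat box)
    footprints = {"map_a": (-121.0, 33.5, -119.0, 35.5),     # covers the whole query box
                  "map_b": (-118.0, 33.0, -117.0, 34.0)}     # disjoint from the query box
    ranked = sorted(footprints, key=lambda k: overlap_ratio(query, footprints[k]), reverse=True)
    print(ranked)                                            # ['map_a', 'map_b']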
  6. Arms, W.Y.; Blanchi, C.; Overly, E.A.: An architecture for information in digital libraries (1997) 0.01
    0.009335475 = product of:
      0.028006425 = sum of:
        0.028006425 = weight(_text_:on in 1260) [ClassicSimilarity], result of:
          0.028006425 = score(doc=1260,freq=18.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.25515348 = fieldWeight in 1260, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1260)
      0.33333334 = coord(1/3)
    
    Abstract
    Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework. The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995). In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page. The National Digital Library Program (NDLP) at the Library of Congress is a large scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program. Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.
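
    To picture the three building blocks the abstract names (digital objects, handles, repositories), here is a deliberately simplified sketch; it illustrates the concepts only, not the NDLP pilot's implementation or the Handle System protocol, and the identifier shown is invented:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class DigitalObject:
        """Managed digital material: an identifier plus metadata and content."""
        handle: str                          # e.g. "example/coolidge-0001" (invented)
        metadata: Dict[str, str]
        content: List[bytes] = field(default_factory=list)

    class Repository:
        """Stores digital objects and resolves handles back to them."""
        def __init__(self) -> None:
            self._objects: Dict[str, DigitalObject] = {}

        def deposit(self, obj: DigitalObject) -> None:
            self._objects[obj.handle] = obj

        def resolve(self, handle: str) -> DigitalObject:
            return self._objects[handle]

    repo = Repository()
    repo.deposit(DigitalObject(handle="example/coolidge-0001",
                               metadata={"title": "Photograph, Coolidge Consumerism"},
                               content=[b"<digitized image bytes>"]))
    print(repo.resolve("example/coolidge-0001").metadata["title"])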
  7. Fang, L.: A developing search service : heterogeneous resources integration and retrieval system (2004) 0.01
    0.0076997704 = product of:
      0.02309931 = sum of:
        0.02309931 = weight(_text_:on in 1193) [ClassicSimilarity], result of:
          0.02309931 = score(doc=1193,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.21044704 = fieldWeight in 1193, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1193)
      0.33333334 = coord(1/3)
    
    Abstract
    This article describes two approaches for searching heterogeneous resources, as they are used in two existing systems: RIRS (Resource Integration Retrieval System) and HRUSP (Heterogeneous Resource Union Search Platform). Building on an analysis of these systems, a possible framework, the MUSP (Multimetadata-Based Union Search Platform), is presented. Libraries now face a dilemma. On one hand, libraries subscribe to many types of database retrieval systems produced by various providers, and they build their data and information systems independently. This results in systems that are highly heterogeneous and distributed at the technical level (e.g., different operating systems and user interfaces) and at the conceptual level (e.g., the same objects are named using different terms). On the other hand, end users want to access all these heterogeneous data via a union interface, without having to know the structure of each information system or the different retrieval methods used by the systems. Libraries must achieve a harmony between information providers and users. In order to bridge the gap between the service providers and the users, it would seem that all source databases would need to be rebuilt according to a uniform data structure and query language, but this seems impossible. Fortunately, however, libraries and information and technology providers are now making an effort to find a middle course that meets the requirements of both data providers and users. They are doing this through resource integration.
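
    The union-search idea sketched in the abstract, searching heterogeneous sources through one common metadata layer, can be illustrated in a few lines; the field names and sources here are assumptions for illustration, not the actual MUSP schema:

    # Each source's native record is mapped to a shared schema, then searched once.
    def from_opac(rec):              # library catalogue record (assumed field names)
        return {"title": rec["ti"], "creator": rec["au"], "source": "OPAC"}

    def from_ejournal(rec):          # licensed e-journal database record (assumed field names)
        return {"title": rec["article_title"], "creator": rec["authors"], "source": "e-journal DB"}

    def union_search(query, records):
        q = query.lower()
        return [r for r in records if q in r["title"].lower()]

    records = [from_opac({"ti": "Digital library architectures", "au": "Arms, W."}),
               from_ejournal({"article_title": "Heterogeneous resource integration",
                              "authors": "Fang, L."})]
    for hit in union_search("integration", records):
        print(hit["source"], "->", hit["title"])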
  8. Müller, B.; Poley, C.; Pössel, J.; Hagelstein, A.; Gübitz, T.: LIVIVO - the vertical search engine for life sciences (2017) 0.01
    0.0076997704 = product of:
      0.02309931 = sum of:
        0.02309931 = weight(_text_:on in 3368) [ClassicSimilarity], result of:
          0.02309931 = score(doc=3368,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.21044704 = fieldWeight in 3368, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3368)
      0.33333334 = coord(1/3)
    
    Abstract
    The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life sciences, such as the One Health paradigm, require integrated methodologies in order to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by trans-disciplinary methodologies comprising molecular and cell biology, genetics, and genomic, epigenomic, transcriptional and proteomic high-throughput technologies, with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high-throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life-science entities. The service-oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is developed by ZB MED to deal with the heterogeneity of data as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept-based relevancy rankings in LIVIVO.
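
    The abstract notes standardized application programming interfaces and entity-based filters. Purely to show what calling such a search API can look like, the sketch below uses Python's requests library against a placeholder endpoint with invented parameter names; it is not LIVIVO's actual API, whose real URL and parameters are documented by ZB MED:

    import requests

    SEARCH_URL = "https://example.org/search"        # placeholder endpoint, not LIVIVO's
    params = {
        "q": "zoonotic influenza",                   # free-text query
        "filter.organism": "Sus scrofa",             # invented life-science entity facet
        "format": "json",
        "rows": 20,
    }
    response = requests.get(SEARCH_URL, params=params, timeout=10)
    response.raise_for_status()
    for doc in response.json().get("docs", []):      # assumed response layout
        print(doc.get("title"))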
  9. Crane, G.: The Perseus Project and beyond : how building a digital library challenges the humanities and technology (1998) 0.01
    0.0075442037 = product of:
      0.02263261 = sum of:
        0.02263261 = weight(_text_:on in 1251) [ClassicSimilarity], result of:
          0.02263261 = score(doc=1251,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 1251, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=1251)
      0.33333334 = coord(1/3)
    
    Abstract
    For more than ten years, the Perseus Project has been developing a digital library in the humanities. Initial work concentrated exclusively on ancient Greek culture, using this domain as a case study for a compact, densely hypertextual library on a single, but interdisciplinary, subject. Since it has achieved its initial goals with the Greek materials, however, Perseus is using the existing library to study the new possibilities (and limitations) of the electronic medium and to serve as the foundation for work in new cultural domains: Perseus has begun coverage of Roman and now Renaissance materials, with plans for expansion into other areas of the humanities as well. Our goal is not only to help traditional scholars conduct their research more effectively but, more importantly, to help humanists use the technology to redefine the relationship between their work and the broader intellectual community.
  10. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H.: The Europeana Data Model (EDM) (2010) 0.01
    0.0075442037 = product of:
      0.02263261 = sum of:
        0.02263261 = weight(_text_:on in 3967) [ClassicSimilarity], result of:
          0.02263261 = score(doc=3967,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 3967, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=3967)
      0.33333334 = coord(1/3)
    
    Abstract
    The Europeana Data Model (EDM) is a new approach towards structuring and representing data delivered to Europeana by the various contributing cultural heritage institutions. The model aims at greater expressivity and flexibility in comparison to the current Europeana Semantic Elements (ESE), which it is destined to replace. The design principles underlying the EDM are based on the core principles and best practices of the Semantic Web and Linked Data efforts to which Europeana wants to contribute. The model itself builds upon established standards like RDF(S), OAI-ORE, SKOS, and Dublin Core. It acts as a common top-level ontology which retains original data models and information perspectives while at the same time enabling interoperability. The paper elaborates on the aforementioned aspects and the design principles which drove the development of the EDM.
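
    Since EDM builds on RDF(S), OAI-ORE, SKOS and Dublin Core, a tiny RDF example may help readers picture a record. The sketch below (Python with rdflib) uses the commonly cited edm:ProvidedCHO / ore:Aggregation pattern, but it is a simplified illustration with an invented object, not a normative EDM record; the authoritative class and property definitions are in the published EDM documentation:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC, RDF

    EDM = Namespace("http://www.europeana.eu/schemas/edm/")
    ORE = Namespace("http://www.openarchives.org/ore/terms/")

    g = Graph()
    g.bind("edm", EDM); g.bind("ore", ORE); g.bind("dc", DC)

    cho = URIRef("http://example.org/object/123")         # the cultural heritage object
    agg = URIRef("http://example.org/aggregation/123")    # the provider's aggregation

    g.add((cho, RDF.type, EDM.ProvidedCHO))
    g.add((cho, DC.title, Literal("Etching of a cathedral facade")))
    g.add((agg, RDF.type, ORE.Aggregation))
    g.add((agg, EDM.aggregatedCHO, cho))
    g.add((agg, EDM.isShownAt, URIRef("http://example.org/view/123")))

    print(g.serialize(format="turtle"))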
  11. Buckland, M.; Lancaster, L.: Combining place, time, and topic : the Electronic Cultural Atlas Initiative (2004) 0.01
    0.007112743 = product of:
      0.021338228 = sum of:
        0.021338228 = weight(_text_:on in 1194) [ClassicSimilarity], result of:
          0.021338228 = score(doc=1194,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19440265 = fieldWeight in 1194, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1194)
      0.33333334 = coord(1/3)
    
    Abstract
    The Electronic Cultural Atlas Initiative was formed to encourage scholarly communication and the sharing of data among researchers who emphasize the relationships between place, time, and topic in the study of culture and history. In an effort to develop better tools and practices, The Electronic Cultural Atlas Initiative has sponsored the collaborative development of software for downloading and editing geo-temporal data to create dynamic maps, a clearinghouse of shared datasets accessible through a map-based interface, projects on format and content standards for gazetteers and time period directories, studies to improve geo-temporal aspects in online catalogs, good practice guidelines for preparing e-publications with dynamic geo-temporal displays, and numerous international conferences. The Electronic Cultural Atlas Initiative (ECAI) grew out of discussions among an international group of scholars interested in religious history and area studies. It was established as a unit under the Dean of International and Area Studies at the University of California, Berkeley in 1997. ECAI's mission is to promote an international collaborative effort to transform humanities scholarship through use of the digital environment to share data and by placing greater emphasis on the notions of place and time. Professor Lewis Lancaster is the Director. Professor Michael Buckland, with a library and information studies background, joined the effort as Co-Director in 2000. Assistance from the Lilly Foundation, the California Digital Library (University of California), and other sources has enabled ECAI to nurture a community; to develop a catalog ("clearinghouse") of Internet-accessible georeferenced resources; to support the development of software for obtaining, editing, manipulating, and dynamically visualizing geo-temporally encoded data; and to undertake research and development projects as needs and resources determine. Several hundred scholars worldwide, from a wide range of disciplines, are informally affiliated with ECAI, all interested in shared use of historical and cultural data. The Academia Sinica (Taiwan), The British Library, and the Arts and Humanities Data Service (UK) are among the well-known affiliates. However, ECAI mainly comprises individual scholars and small teams working on their own small projects on a very wide range of cultural, social, and historical topics. Numerous specialist committees have been fostering standardization and collaboration by area and by themes such as trade-routes, cities, religion, and sacred sites.
  12. Thaller, M.: From the digitized to the digital library (2001) 0.01
    0.0070569566 = product of:
      0.02117087 = sum of:
        0.02117087 = weight(_text_:on in 1159) [ClassicSimilarity], result of:
          0.02117087 = score(doc=1159,freq=14.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.19287792 = fieldWeight in 1159, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1159)
      0.33333334 = coord(1/3)
    
    Abstract
    The author holds a chair in Humanities Computer Science at the University of Cologne. For a number of years, he has been responsible for digitization projects, either as project director or as the person responsible for the technology being employed on the projects. The "Duderstadt project" (http://www.archive.geschichte.mpg.de/duderstadt/dud-e.htm) is one such project. It is one of the early large-scale manuscript servers, finished at the end of 1998, with approximately 80,000 high resolution documents representing the holdings of a city archive before the year 1600. The digital library of the Max-Planck-Institut für Europäische Rechtsgeschichte in Frankfurt (http://www.mpier.uni-frankfurt.de/dlib) is another project on which the author has worked, with currently approximately 900,000 pages. The author is currently project director of the project "Codices Electronici Ecclesiae Colonensis" (CEEC), which has just started and will ultimately consist of approximately 130,000 very high resolution color pages representing the complete holdings of the manuscript library of a medieval cathedral. It is being designed in close cooperation with the user community of such material. The project site (http://www.ceec.uni-koeln.de), while not yet officially opened, currently holds about 5,000 pages and is growing by 100 - 150 pages per day. Parallel to the CEEC model project, a conceptual project, the "Codex Electronicus Colonensis" (CEC), is at work on the definition of an abstract model for the representation of medieval codices in digital form. The following paper has grown out of the design considerations for the mentioned CEC project. The paper reflects a growing concern of the author's that some of the recent advances in digital (research) libraries are being diluted because it is not clear whether the advances really reach the audience for whom the projects would be most useful. Many, if not most, digitization projects have aimed at existing collections as individual servers. A digital library, however, should be more than a digitized one. It should be built according to principles that are not necessarily the same as those employed for paper collections, and it should be evaluated according to different measures which are not yet totally clear. The paper takes the form of six theses on various aspects of the ongoing transition to digital libraries. These theses have been presented at a forum on the German "retrodigitization" program. The program aims at the systematic conversion of library resources into digital form, concentrates for a number of reasons on material primarily of interest to the Humanities, and is funded by the German research council. As such this program is directly aimed at improving the overall infrastructure of academic research; other users of libraries are of interest, but are not central to the program.
    Content
    Theses:
    1. Who should be addressed by digital libraries? How shall we measure whether we have reached the desired audience? Thesis: The primary audience for a digital library is neither the leading specialist in the respective field, nor the freshman, but the advanced student or young researcher and the "almost specialist". The primary topic of digitization projects should not be the absolute top range of the "treasures" of a collection, but those materials that we always have wanted to promote if they were just marginally more important. Whether we effectively serve them to the appropriate community of serious users can only be measured according to criteria that have yet to be developed.
    2. The appropriate size of digital libraries and their access tools. Thesis: Digital collections need a critical, minimal size to make their access worthwhile. In the end, users want to access information, not metadata or gimmicks.
    3. The quality of digital objects. Thesis: If digital library resources are to be integrated into the daily work of the research community, they must appear on the screen of the researcher in a quality that is useful in actual work.
    4. The granularity / modularity of digital repositories. Thesis: While digital libraries are self-contained bodies of information, they are not the basic unit that most users want to access. Users are, as a rule, more interested in the individual objects in the library and need a straightforward way to access them.
    5. Digital collections as integrated reference systems. Thesis: Traditional libraries support their collections with reference material. Digital collections need to find appropriate models to replicate this functionality.
    6. Library and teaching. Thesis: The use of multimedia in teaching is as much of a current buzzword as the creation of digital collections. It is obvious that they should be connected. A clear-cut separation of the two approaches is nevertheless necessary.
  13. Place, E.: Internationale Zusammenarbeit bei Internet Subject Gateways (1999) 0.01
    0.0067615155 = product of:
      0.020284547 = sum of:
        0.020284547 = product of:
          0.040569093 = sum of:
            0.040569093 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
              0.040569093 = score(doc=4189,freq=2.0), product of:
                0.1747608 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04990557 = queryNorm
                0.23214069 = fieldWeight in 4189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4189)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2002 19:35:09
  14. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using harvest (2000) 0.01
    0.0062868367 = product of:
      0.01886051 = sum of:
        0.01886051 = weight(_text_:on in 6470) [ClassicSimilarity], result of:
          0.01886051 = score(doc=6470,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 6470, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6470)
      0.33333334 = coord(1/3)
    
    Abstract
    PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all the information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular metadata harvesting. One service is PhysDoc. It consists of a Harvest-based online network of information brokers and gatherers, which harvests information from the local web servers of professional physics institutions worldwide (mostly in Europe and the USA so far). PhysDoc focuses on scientific information posted by individual scientists on their local servers, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors, who are responsible for the content and quality of their documents. PhysDis is an analogous service, but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high-quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. Continuously involving authors, research groups, and national societies is considered crucial for a future stable service.
  15. Goodchild, M.F.: The Alexandria Digital Library Project : review, assessment, and prospects (2004) 0.01
    0.00622365 = product of:
      0.01867095 = sum of:
        0.01867095 = weight(_text_:on in 1153) [ClassicSimilarity], result of:
          0.01867095 = score(doc=1153,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 1153, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1153)
      0.33333334 = coord(1/3)
    
    Abstract
    The Alexandria Digital Library (ADL) was established in the late 1990s as a response to several perceived problems of traditional map libraries, notably access and organization. By 1999 it had evolved into an operational digital library, offering a well-defined set of services to a broad user community, based on an extensive collection of georeferenced information objects. The vision of ADL continues to evolve, as technology makes new services possible, as its users become more sophisticated and demanding, and as the broader field of geographic information science (GIScience) identifies new avenues for research and application.
  16. Borgman, C.L.: Multi-media, multi-cultural, and multi-lingual digital libraries : or how do we exchange data In 400 languages? (1997) 0.01
    0.00622365 = product of:
      0.01867095 = sum of:
        0.01867095 = weight(_text_:on in 1263) [ClassicSimilarity], result of:
          0.01867095 = score(doc=1263,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 1263, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1263)
      0.33333334 = coord(1/3)
    
    Abstract
    The Internet would not be very useful if communication were limited to textual exchanges between speakers of English located in the United States. Rather, its value lies in its ability to enable people from multiple nations, speaking multiple languages, to employ multiple media in interacting with each other. While computer networks broke through national boundaries long ago, they remain much more effective for textual communication than for exchanges of sound, images, or mixed media -- and more effective for communication in English than for exchanges in most other languages, much less interactions involving multiple languages. Supporting searching and display in multiple languages is an increasingly important issue for all digital libraries accessible on the Internet. Even if a digital library contains materials in only one language, the content needs to be searchable and displayable on computers in countries speaking other languages. We need to exchange data between digital libraries, whether in a single language or in multiple languages. Data exchanges may be large batch updates or interactive hyperlinks. In any of these cases, character sets must be represented in a consistent manner if exchanges are to succeed. Issues of interoperability, portability, and data exchange related to multi-lingual character sets have received surprisingly little attention in the digital library community or in discussions of standards for information infrastructure, except in Europe. The landmark collection of papers on Standards Policy for Information Infrastructure, for example, contains no discussion of multi-lingual issues except for a passing reference to the Unicode standard. The goal of this short essay is to draw attention to the multi-lingual issues involved in designing digital libraries accessible on the Internet. Many of the multi-lingual design issues parallel those of multi-media digital libraries, a topic more familiar to most readers of D-Lib Magazine. This essay draws examples from multi-media DLs to illustrate some of the urgent design challenges in creating a globally distributed network serving people who speak many languages other than English. First we introduce some general issues of medium, culture, and language, then discuss the design challenges in the transition from local to global systems, lastly addressing technical matters. The technical issues involve the choice of character sets to represent languages, similar to the choices made in representing images or sound. However, the scale of the language problem is far greater. Standards for multi-media representation are being adopted fairly rapidly, in parallel with the availability of multi-media content in electronic form. By contrast, we have hundreds (and sometimes thousands) of years worth of textual materials in hundreds of languages, created long before data encoding standards existed. Textual content from past and present is being encoded in language and application-specific representations that are difficult to exchange without losing data -- if they exchange at all. We illustrate the multi-language DL challenge with examples drawn from the research library community, which typically handles collections of materials in 400 or so languages. These are problems faced not only by developers of digital libraries, but by those who develop and manage any communication technology that crosses national or linguistic boundaries.
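
    The character-set problem described above is easy to demonstrate: the same text serialized under different encodings yields different bytes, and exchanging those bytes without agreeing on the encoding corrupts or rejects the data. A minimal Python illustration:

    title = "Bibliothèque nationale de France"

    utf8_bytes = title.encode("utf-8")
    latin1_bytes = title.encode("latin-1")
    print(utf8_bytes == latin1_bytes)        # False: the byte representations differ

    # Decoding with the wrong assumption silently produces mojibake ...
    print(utf8_bytes.decode("latin-1"))      # "BibliothÃ¨que nationale de France"

    # ... and many scripts cannot be represented in a legacy character set at all.
    try:
        "日本語の資料".encode("latin-1")
    except UnicodeEncodeError as err:
        print("cannot be encoded:", err)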
  17. Niggemann, E.: Europeana: connecting cultural heritage (2009) 0.01
    0.006159817 = product of:
      0.01847945 = sum of:
        0.01847945 = weight(_text_:on in 2816) [ClassicSimilarity], result of:
          0.01847945 = score(doc=2816,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.16835764 = fieldWeight in 2816, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=2816)
      0.33333334 = coord(1/3)
    
    Abstract
    The European Commission's goal for Europeana is to make European information resources easier to use in an online environment. It will build on Europe's rich heritage, combining multicultural and multilingual environments with technological advances and new business models. The Europeana prototype is the result of a 2-year project that began in July 2007. Europeana.eu went live on 20 November 2008, launched by Viviane Reding, European Commissioner for Information Society and Media. Europeana.eu is about ideas and inspiration. It links the user to 2 million digital items: images, text, sounds and videos. Europeana is a Thematic Network funded by the European Commission under the eContentplus programme, as part of the i2010 policy. Originally known as the European digital library network - EDLnet - it is a partnership of 100 representatives of heritage and knowledge organisations and IT experts from throughout Europe. They contribute to the work packages that are solving the technical and usability issues. The project is run by a core team based in the national library of the Netherlands, the Koninklijke Bibliotheek. It builds on the project management and technical expertise developed by The European Library, which is a service of the Conference of European National Librarians. Content is added via so-called aggregators: national or domain-specific portals aggregating digital content and channelling it to Europeana. Most of these portals are being developed in the framework of EU-funded projects, e.g. European Film Gateway, Athena and EuropeanaLocal. Overseeing the project is the EDL Foundation, which includes key European cultural heritage associations from the four domains. The Foundation's statutes commit members to:
    * providing access to Europe's cultural and scientific heritage through a cross-domain portal;
    * co-operating in the delivery and sustainability of the joint portal;
    * stimulating initiatives to bring together existing digital content;
    * supporting digitisation of Europe's cultural and scientific heritage.
    Europeana.eu is a prototype. Europeana Version 1.0 is being developed and will be launched in 2010 with links to over 6 million digital objects.
  18. Heery, R.; Carpenter, L.; Day, M.: Renardus project developments and the wider digital library context (2001) 0.01
    0.0058807973 = product of:
      0.017642392 = sum of:
        0.017642392 = weight(_text_:on in 1219) [ClassicSimilarity], result of:
          0.017642392 = score(doc=1219,freq=14.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1607316 = fieldWeight in 1219, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1219)
      0.33333334 = coord(1/3)
    
    Abstract
    For those building digital library services, the organisational barriers are sometimes far more intractable than the technological issues. This was firmly flagged in one of the first workshops focusing specifically on the digital library research agenda: digital libraries are not simply technological constructs; they exist within a rich legal, social, and economic context, and will succeed only to the extent that they meet these broader needs. The innovatory drive within the development of digital library services thrives on the tension between meeting both technical and social imperatives. The Renardus project partners have previously taken part in projects establishing the technical basis for subject gateways (e.g., ROADS, DESIRE, EELS) and are aware that technical barriers to interoperability are outweighed by challenges relating to the organisational and business models used. Within the Renardus project there has been a determination to address these organisational and business issues from the beginning. Renardus intends initially to create a pilot service, targeting the European scholar with a single point of access to quality selected Web resources. Looking ahead beyond current project funding, it aims to create the organisational and technological infrastructure for a sustainable service. This means the project is concerned with the range of processes required to establish a viable service, and is explicitly addressing business issues as well as providing a technical infrastructure. The overall aim of Renardus is to establish a collaborative framework for European subject gateways that will benefit both users, in terms of enhanced services, and the gateways themselves, in terms of shared solutions. In order to achieve this aim, Renardus will first provide a pilot service for the European academic and research communities, brokering access to those European-based information gateways that currently participate in the project; in other words, brokering to gateways that are already in existence. Secondly, the project will explore ways to establish the organisational basis for co-operative efforts such as metadata sharing, joint technical solutions and agreement on standardisation. It is intended that this exploration will feed back valuable experience to the individual participating gateways to suggest ways their services can be enhanced.
    The Renardus project has brought together gateways that are 'large-scale national initiatives'. Within the European context this immediately introduces a diversity of organisations, as responsibility for national gateway initiatives is located differently, for example, in national libraries, national agencies with responsibility for educational technology infrastructure, and within universities or consortia of universities. Within the project, gateways are in some cases represented directly by their own personnel, in some cases by other departments or research centres, but not always by the people responsible for providing the gateway service. For example, the UK Resource Discovery Network (RDN) is represented in the project by UKOLN (formerly part of the Resource Discovery Network Centre) and the Institute of Learning and Research Technology (ILRT), University of Bristol -- an RDN 'hub' service provider -- who are primarily responsible for dissemination. Since the start of the project there have been changes within the organisational structures providing gateways and within the service ambitions of gateways themselves. Such lack of stability is inherent within the Internet service environment, and this presents challenges to Renardus activity that has to be planned for a three-year period. For example, within the gateway's funding environment there is now an exploration of 'subject portals' offering more extended services than gateways. There is also potential commercial interest for including gateways as a value-added component to existing commercial services, and new offerings from possible competitors such as Google's Web Directory and country based services. This short update on the Renardus project intends to inform the reader of progress within the project and to give some wider context to its main themes by locating the project within the broader arena of digital library activity. There are twelve partners in the project from Denmark, Finland, France, Germany, the Netherlands and Sweden, as well as the UK. In particular we will focus on the specific activity in which UKOLN is involved: the architectural design, the specification of functional requirements, reaching consensus on a collaborative business model, etc. We will also consider issues of metadata management where all partners have interests. We will highlight implementation issues that connect to areas of debate elsewhere. In particular we see connections with activity related to establishing architectural models for digital library services, connections to the services that may emerge from metadata sharing using the Open Archives Initiative metadata sharing protocol, and links with work elsewhere on navigation of digital information spaces by means of controlled vocabularies.
  19. Hyning, V. Van; Lintott, C.; Blickhan, S.; Trouille, L.: Transforming libraries and archives through crowdsourcing (2017) 0.01
    0.0053345575 = product of:
      0.016003672 = sum of:
        0.016003672 = weight(_text_:on in 2526) [ClassicSimilarity], result of:
          0.016003672 = score(doc=2526,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 2526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2526)
      0.33333334 = coord(1/3)
    
    Abstract
    This article will showcase the aims and research goals of the project entitled "Transforming Libraries and Archives through Crowdsourcing", recipient of a 2016 Institute for Museum and Library Services grant. This grant will be used to fund the creation of four bespoke text and audio transcription projects which will be hosted on the Zooniverse, the world-leading research crowdsourcing platform. These transcription projects, while supporting the research of four separate institutions, will also function as a means to expand and enhance the Zooniverse platform to better support galleries, libraries, archives and museums (GLAM institutions) in unlocking their data and engaging the public through crowdsourcing.
  20. Summann, F.; Lossau, N.: Search engine technology and digital libraries : moving from theory to practice (2004) 0.01
    0.005029469 = product of:
      0.015088406 = sum of:
        0.015088406 = weight(_text_:on in 1196) [ClassicSimilarity], result of:
          0.015088406 = score(doc=1196,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.13746344 = fieldWeight in 1196, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1196)
      0.33333334 = coord(1/3)
    
    Abstract
    This article describes the journey from the conception of and vision for a modern search-engine-based search environment to its technological realisation. In doing so, it takes up the thread of an earlier article on this subject, this time from a technical viewpoint. As well as presenting the conceptual considerations of the initial stages, this article will principally elucidate the technological aspects of this journey. The starting point for the deliberations about the development of an academic search engine was the experience we gained through the generally successful project "Digital Library NRW", in which, from 1998 to 2000, with Bielefeld University Library in overall charge, we designed a system model for an Internet-based library portal with an improved academic search environment at its core. At the heart of this system was a metasearch with an availability function, to which we added a user interface integrating all relevant source material for study and research. The deficiencies of this approach were felt soon after the system was launched in June 2001. There were problems with the stability and performance of the database retrieval system, with the integration of full-text documents and Internet pages, and with acceptance by users, because users increasingly perform searches themselves using search engines rather than going to the library for help. Since a long list of problems is also encountered when using commercial search engines for academic purposes (in particular the retrieval of academic information and long-term availability), the idea was born of a search engine configured specifically for academic use. We also hoped that with one single access point founded on improved search engine technology, we could access the heterogeneous academic resources of subject-based bibliographic databases, catalogues, electronic newspapers, document servers and academic web pages.