Search (79 results, page 2 of 4)

  • theme_ss:"Information Gateway"
  1. Jacob, E.K.; Albrechtsen, H.; George, N.: Empirical analysis and evaluation of a metadata scheme for representing pedagogical resources in a digital library for educators (2006) 0.01
    0.0061176866 = product of:
      0.048941493 = sum of:
        0.048941493 = weight(_text_:work in 2518) [ClassicSimilarity], result of:
          0.048941493 = score(doc=2518,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.3440991 = fieldWeight in 2518, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=2518)
      0.125 = coord(1/8)
    
    Abstract
    This paper introduces the Just-in-Time Teaching (JiTT) digital library and describes the pedagogical nature of the resources that make up this library for educators. Because resources in this library are stored in the form of metadata records, the utility of the metadata scheme, its elements and its relationships is central to the ability of the library to address the pedagogical needs of instructors in the work domain of the classroom. The analytic framework provided by cognitive work analysis (CWA) is proposed as an innovative approach for evaluating the effectiveness of the JiTT metadata scheme. CWA is also discussed as an approach to assessing the ability of this extensive networked library to create a common digital environment that fosters cooperation and collaboration among instructors.
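The indented numeric blocks under each hit are Lucene "explain" traces for ClassicSimilarity (TF-IDF) scoring. As a sketch of how the numbers fit together, the first entry's score (0.0061176866, displayed as 0.01) can be reproduced from the leaf values of its trace; the variable names below are illustrative, not taken from any Lucene API:

```python
import math

# Leaf values copied from the first hit's explain trace (doc 2518, term "work").
freq = 4.0              # termFreq: occurrences of "work" in the field
idf = 3.6703904         # idf(docFreq=3060, maxDocs=44218)
query_norm = 0.03875087 # queryNorm
field_norm = 0.046875   # fieldNorm(doc=2518)
coord = 1 / 8           # coord(1/8): 1 of 8 query clauses matched

tf = math.sqrt(freq)                  # ClassicSimilarity tf = sqrt(freq) = 2.0
query_weight = idf * query_norm       # ~ 0.14223081
field_weight = tf * idf * field_norm  # ~ 0.3440991
score = query_weight * field_weight * coord  # ~ 0.0061176866
```

The same decomposition applies to every trace on the page; hits matching only the date term "22" carry an extra coord(1/2) factor from a nested clause.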
  2. Collier, M.: ¬The business aims of eight national libraries in digital library co-operation : a study carried out for the business plan of The European Library (TEL) project (2005) 0.01
    0.005098072 = product of:
      0.040784575 = sum of:
        0.040784575 = weight(_text_:work in 4951) [ClassicSimilarity], result of:
          0.040784575 = score(doc=4951,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28674924 = fieldWeight in 4951, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4951)
      0.125 = coord(1/8)
    
    Abstract
    Purpose - To describe the process and results of the business-planning workpackage of The European Library (TEL) project, in which eight national libraries collaborated on a joint approach to access to their digital libraries. Design/methodology/approach - The methodology was in three parts: first, a literature review and the mapping of the partners' existing and planned digital products and services; second, a structured interview or survey to determine the partners' business requirements from TEL, followed by a harmonization process; third, the results were combined with normal business planning elements to produce a mission and final business plan. Findings - Business planning for digital libraries has hitherto not been widely reported. The methodology proved to be an effective means of achieving mutual agreement among partners with widely different aims and characteristics. Eleven harmonized service aspirations and five categories of business aims were agreed. Research limitations/implications - The study focused on the business aims of national libraries, but the methodology can be relevant to other collaborative projects. Together with the few other existing reports, it can form the basis for a new field of work. Practical implications - The work described led directly to the creation of an operational service, which will be open to all European national libraries. Originality/value - As far as is known, this is the first report of a collaborative international planning process for a digital library, and perhaps the first multi-partner business plan between national libraries.
  3. Arms, W.Y.; Blanchi, C.; Overly, E.A.: ¬An architecture for information in digital libraries (1997) 0.01
    0.005046834 = product of:
      0.04037467 = sum of:
        0.04037467 = weight(_text_:work in 1260) [ClassicSimilarity], result of:
          0.04037467 = score(doc=1260,freq=8.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28386727 = fieldWeight in 1260, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1260)
      0.125 = coord(1/8)
    
    Abstract
    Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework. The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995). In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page. 
The National Digital Library Program (NDLP) at the Library of Congress is a large scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program. Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.
  4. Banwell, L.: Developing an evaluation framework for a supranational digital library (2003) 0.00
    0.00499507 = product of:
      0.03996056 = sum of:
        0.03996056 = weight(_text_:work in 2769) [ClassicSimilarity], result of:
          0.03996056 = score(doc=2769,freq=6.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28095573 = fieldWeight in 2769, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2769)
      0.125 = coord(1/8)
    
    Abstract
    The paper will explore the issues surrounding the development of an evaluation framework for a supranational digital library system, as seen through the TEL (The European Library) project. It will describe work on the project to date, and seek to establish what the key drivers, priorities and barriers are in developing such a framework. TEL is being funded by the EU as an Accompanying Measure in the IST Programme. Its main focus is on consensus building, but it also includes preparatory technical work to develop testbeds, which will gauge to what extent interoperability is achievable. In order for TEL to take its place as a major Information Society initiative of the EU, it needs to be closely attuned to the needs, expectations and realities of its user communities, which comprise the citizens of the project's national partners. To this end, the evaluation framework described in this paper is being developed by establishing the users' viewpoints and priorities in relation to the key project themes. A summary of the issues to be used in the baseline, and to be expanded upon in the paper, follows: - Establishing the differing contexts of the national library partners, and the differing national priorities, which will impact on TEL - Exploring the differing expectations relating to building and using the hybrid library - Exploring the differing expectations relating to TEL. TEL needs to add value - what does this mean in each partner state, and for the individuals within them? 1. Introduction to TEL TEL (The European Library) is a thirty-month project, funded by the European Commission as part of its Fifth Framework Programme for research. It aims to set up a co-operative framework for access to the major national, mainly digital, collections in European national libraries. 
TEL is funded as an Accompanying Measure, designed to support the work of the IST (Information Society Technologies) Programme on the development of access to cultural and scientific knowledge. TEL will stop short of becoming a live service during the lifetime of the project, and is focused on ensuring co-operative and concerted approaches to technical and business issues associated with large-scale content development. It will lay the policy and technical groundwork for a pan-European digital library based on distributed digital collections, providing seamless access to the digital resources of major European national libraries. It began in February 2001, and has eight national library partners: Finland, Germany, Italy, the Netherlands, Portugal, Slovenia, Switzerland and the United Kingdom. It is also seeking to encourage the participation of all European national libraries in due course.
  5. MacLeod, R.: Promoting a subject gateway : a case study from EEVL (Edinburgh Engineering Virtual Library) (2000) 0.00
    0.004640572 = product of:
      0.037124574 = sum of:
        0.037124574 = product of:
          0.07424915 = sum of:
            0.07424915 = weight(_text_:22 in 4872) [ClassicSimilarity], result of:
              0.07424915 = score(doc=4872,freq=4.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.54716086 = fieldWeight in 4872, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4872)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:40:22
  6. Subject gateways (2000) 0.00
    0.004593932 = product of:
      0.036751457 = sum of:
        0.036751457 = product of:
          0.07350291 = sum of:
            0.07350291 = weight(_text_:22 in 6483) [ClassicSimilarity], result of:
              0.07350291 = score(doc=6483,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.5416616 = fieldWeight in 6483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6483)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:43:01
  7. Parker, D.; Gow, E.; Lim, E.: AARLIN : seamless information delivery to researchers (2002) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 3602) [ClassicSimilarity], result of:
          0.034606863 = score(doc=3602,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 3602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=3602)
      0.125 = coord(1/8)
    
    Abstract
    The Australian Academic Research Library Information Network (AARLIN) aims to provide seamless access to Australian and international information resources for researchers via their personal computers through a personally customisable portal. The project has funding from the Australian Government. AARLIN commenced in the year 2000 with a pilot project and will develop into a fully operational service in Australian universities over the next three years. During the pilot project Ex Libris' Metalib and SFX software have been used to trial the AARLIN portal concept with a group of researchers. The results of a survey of the researchers are presented. It is concluded that the portal has the potential to enhance the work of researchers by improving their success in information searching.
  8. Hellweg, H.; Hermes, B.; Stempfhuber, M.; Enderle, W.; Fischer, T.: DBClear : a generic system for clearinghouses (2002) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 3605) [ClassicSimilarity], result of:
          0.034606863 = score(doc=3605,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 3605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=3605)
      0.125 = coord(1/8)
    
    Abstract
    Clearinghouses - or subject gateways - are domain-specific collections of links to resources on the Internet. The links are described with metadata and structured according to a domain-specific subject hierarchy. Users access the information by searching in the metadata or by browsing the subject hierarchy. The standards for metadata vary across existing clearinghouses, and different technologies for storing and accessing the metadata are used. This makes it difficult to distribute the editorial or administrative work involved in maintaining a clearinghouse, or to exchange information with other systems. DBClear is a generic, platform-independent clearinghouse system whose metadata schema can be adapted to different standards. The data is stored in a relational database. It includes a workflow component to support distributed maintenance, and automation modules for link checking and metadata extraction. The presentation of the clearinghouse on the Web can be modified to allow seamless integration into existing web sites.
  9. Meyyappan, N.; Foo, F.; Chowdhury, G.G.: Design and evaluation of a task-based digital library for the academic community (2004) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 4425) [ClassicSimilarity], result of:
          0.034606863 = score(doc=4425,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 4425, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=4425)
      0.125 = coord(1/8)
    
    Abstract
    The paper discusses the design, development and evaluation of a task-based digital library, the Digital Work Environment (DWE), for the academic community of higher education institutions (HEI) with Nanyang Technological University, Singapore, as a test case. Three different information organisation approaches (alphabetical, subject category and task-based) were used to organise the wide range of heterogeneous information resources that were interfaced to DWE. A user evaluation study using a series of task scenarios was carried out to gauge the effectiveness and usefulness of DWE and these information organisation approaches. The time taken by respondents to identify and access the relevant information resources for individual tasks was also measured. The findings show that the task-based approach took the least time in identifying information resources. Regression analysis of information resource location time with gender, age, computer experience and digital resource experience of the participants are also reported.
  10. Kilner, K.: ¬The AustLit Gateway and scholarly bibliography : a specialist implementation of the FRBR (2004) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 5851) [ClassicSimilarity], result of:
          0.034606863 = score(doc=5851,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 5851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=5851)
      0.125 = coord(1/8)
    
    Abstract
    This paper discusses how the AustLit: Australian Literature Gateway's interpretation, enhancement and implementation of the International Federation of Library Associations' Functional Requirements for Bibliographic Records (FRBR Final Report 1998) model is meeting the needs of Australian literature scholars for accurate bibliographic representation of the histories of literary texts. It also explores how the AustLit Gateway's underpinning research principles, which are based on the tradition of scholarly enumerative and descriptive bibliography, with enhancements from analytical bibliography and literary biography, have impacted upon our implementation of the FRBR model. The major enhancement or alteration to the model is the use of enhanced manifestations, which allow the full representation of all agents' contributions to be shown in a highly granular format by enabling creation events to be incorporated at all levels of the Work, Expression and Manifestation nexus.
  11. Brahms, E.: Digital library initiatives of the Deutsche Forschungsgemeinschaft (2001) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 1190) [ClassicSimilarity], result of:
          0.034606863 = score(doc=1190,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 1190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=1190)
      0.125 = coord(1/8)
    
    Abstract
    The Deutsche Forschungsgemeinschaft (DFG) is the central public funding organization for academic research in Germany. It is thus comparable to a research council or a national research foundation. According to its statutes, DFG's mandate is to serve science and the arts in all fields by supporting research projects carried out at universities and public research institutions in Germany, to promote cooperation between researchers, and to forge and support links between German academic science, industry and partners in foreign countries. In the fulfillment of its tasks, the DFG pays special attention to the education and support of young scientists and scholars. DFG's mandate and operations follow the principle of territoriality. This means that its funding activities are restricted, with very few exceptions, to individuals and institutions with permanent addresses in Germany. Fellowships are granted for work in other countries, but most fellowship programs are restricted to German citizens, with a few exceptions for permanent residents of Germany holding foreign passports.
  12. Gore, E.; Bitta, M.D.; Cohen, D.: ¬The Digital Public Library of America and the National Digital Platform (2017) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 3655) [ClassicSimilarity], result of:
          0.034606863 = score(doc=3655,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 3655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=3655)
      0.125 = coord(1/8)
    
    Abstract
    The Digital Public Library of America brings together the riches of America's libraries, archives, and museums, and makes them freely available to the world. In order to do this, DPLA has had to build elements of the national digital platform to connect to those institutions and to serve their digitized materials to audiences. In this article, we detail the construction of two critical elements of our work: the decentralized national network of "hubs," which operate in states across the country; and a version of the Hydra repository software that is tailored to the needs of our community. This technology and the organizations that make use of it serve as the foundation of the future of DPLA and other projects that seek to take advantage of the national digital platform.
  13. Severiens, T.: ¬A distributed portal for physics (2002) 0.00
    0.0040784576 = product of:
      0.03262766 = sum of:
        0.03262766 = weight(_text_:work in 3620) [ClassicSimilarity], result of:
          0.03262766 = score(doc=3620,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2293994 = fieldWeight in 3620, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=3620)
      0.125 = coord(1/8)
    
    Abstract
    Many subject-specific portals were built during the last year. Most of these are simple user interfaces to databases of subject-specific information, supplemented with several lists of links. This centralised type of portal often looks fine with its consistent appearance, but is hard to keep up to date and expensive to maintain. Users expect a service to be maintained and available 24 hours a day, 365 days a year, for at least 10 years, and all this free of charge. On the one hand, it seems impossible to set up a service meeting all these demands; on the other hand, many institutions offer information and services which could be parts of a portal, which are maintained frequently and paid for by the public via these institutions. The idea is to collect the existing information and present it in a structured and consistent way. This idea matches excellently the way knowledge is produced in Physics. Physicists work all over the world, often on different continents, on the same topic, knowing each other's work only from publications, conferences and online communication. Information in Physics is published in quite different ways: as journal articles, which can be reviewed, sometimes by peers, or as pre-prints. Much information is available in non-textual genres such as software sources, datasets or mathematical formulae. Distributed portals make use of the existing information on the web. In the early days of the web, the very popular link lists were a kind of portal, linking to (all) pages with information on a specific topic. Indeed, these link lists had many properties of modern portals, offering information in a structured and selected way. But they did not offer the information under a common layout (desktop) and did not offer user-specific views onto the information. 
Modern distributed portals combine the advantages of centralised portals (high information structure, common layout, easy navigation through all the information) with the possibilities of distributed portals (up to date information, low budget implementation, good knowledge coverage).
  14. EuropeanaTech and Multilinguality : Issue 1 of EuropeanaTech Insight (2015) 0.00
    0.0040784576 = product of:
      0.03262766 = sum of:
        0.03262766 = weight(_text_:work in 1832) [ClassicSimilarity], result of:
          0.03262766 = score(doc=1832,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2293994 = fieldWeight in 1832, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=1832)
      0.125 = coord(1/8)
    
    Abstract
    Welcome to the very first issue of EuropeanaTech Insight, a multimedia publication about research and development within the EuropeanaTech community. EuropeanaTech is a very active community. It spans all of Europe and is made up of technical experts from the various disciplines within digital cultural heritage. At any given moment, members can be found presenting their work in project meetings, seminars and conferences around the world. Now, through EuropeanaTech Insight, we can share that inspiring work with the whole community. In our first three issues, we're showcasing topics discussed at the EuropeanaTech 2015 Conference, an exciting event that gave rise to lots of innovative ideas and fruitful conversations on the themes of data quality, data modelling, open data, data re-use, multilingualism and discovery. Welcome, bienvenue, bienvenido, Välkommen, Tervetuloa to the first issue of EuropeanaTech Insight. Are we talking your language? No? Well I can guarantee you Europeana is. One of the European Union's great beauties and strengths is its diversity. That diversity is perhaps most evident in the 24 different languages spoken in the EU. Making it possible for all European citizens to easily and seamlessly communicate in their native language with others who do not speak that language is a huge technical undertaking. Translating documents, news, speeches and historical texts was once exclusively done manually. Clearly, that takes a huge amount of time and resources and means that not everything can be translated... However, with the advances in machine and automatic translation, it's becoming more possible to provide instant and pretty accurate translations. Europeana provides access to over 40 million digitised cultural heritage objects, offering content in over 33 languages. But what value does Europeana provide if people can only find results in their native language? None. 
That's why the EuropeanaTech community is collectively working towards making it more possible for everyone to discover our collections in their native language. In this issue of EuropeanaTech Insight, we hear from community members who are making great strides in machine translation and enrichment tools to help improve not only access to data, but also how we retrieve, browse and understand it.
  15. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.00
    0.0039376556 = product of:
      0.031501245 = sum of:
        0.031501245 = product of:
          0.06300249 = sum of:
            0.06300249 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.06300249 = score(doc=4865,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:41:59
  16. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.00
    0.0039376556 = product of:
      0.031501245 = sum of:
        0.031501245 = product of:
          0.06300249 = sum of:
            0.06300249 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
              0.06300249 = score(doc=6040,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.46428138 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:42:47
  17. Thaller, M.: From the digitized to the digital library (2001) 0.00
    0.0037463026 = product of:
      0.02997042 = sum of:
        0.02997042 = weight(_text_:work in 1159) [ClassicSimilarity], result of:
          0.02997042 = score(doc=1159,freq=6.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2107168 = fieldWeight in 1159, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1159)
      0.125 = coord(1/8)
    
    Abstract
    The author holds a chair in Humanities Computer Science at the University of Cologne. For a number of years, he has been responsible for digitization projects, either as project director or as the person responsible for the technology being employed on the projects. The "Duderstadt project" (http://www.archive.geschichte.mpg.de/duderstadt/dud-e.htm) is one such project. It is one of the early large-scale manuscript servers, finished at the end of 1998, with approximately 80,000 high resolution documents representing the holdings of a city archive before the year 1600. The digital library of the Max-Planck-Institut für Europäische Rechtsgeschichte in Frankfurt (http://www.mpier.uni-frankfurt.de/dlib) is another project on which the author has worked, with currently approximately 900,000 pages. The author is currently project director of the project "Codices Electronici Ecclesiae Colonensis" (CEEC), which has just started and will ultimately consist of approximately 130,000 very high resolution color pages representing the complete holdings of the manuscript library of a medieval cathedral. It is being designed in close cooperation with the user community of such material. The project site (http://www.ceec.uni-koeln.de), while not yet officially opened, currently holds about 5,000 pages and is growing by 100 - 150 pages per day. Parallel to the CEEC model project, a conceptual project, the "Codex Electronicus Colonensis" (CEC), is at work on the definition of an abstract model for the representation of medieval codices in digital form. The following paper has grown out of the design considerations for the mentioned CEC project. The paper reflects a growing concern of the author's that some of the recent advances in digital (research) libraries are being diluted because it is not clear whether the advances really reach the audience for whom the projects would be most useful. 
Many, if not most, digitization projects have aimed at existing collections as individual servers. A digital library, however, should be more than a digitized one. It should be built according to principles that are not necessarily the same as those employed for paper collections, and it should be evaluated according to different measures which are not yet totally clear. The paper takes the form of six theses on various aspects of the ongoing transition to digital libraries. These theses have been presented at a forum on the German "retrodigitization" program. The program aims at the systematic conversion of library resources into digital form, concentrates for a number of reasons on material primarily of interest to the Humanities, and is funded by the German research council. As such this program is directly aimed at improving the overall infrastructure of academic research; other users of libraries are of interest, but are not central to the program.
    Content
    Theses:
    1. Who should be addressed by digital libraries? How shall we measure whether we have reached the desired audience? Thesis: The primary audience for a digital library is neither the leading specialist in the respective field, nor the freshman, but the advanced student or young researcher and the "almost specialist". The primary topic of digitization projects should not be the absolute top range of the "treasures" of a collection, but those materials that we have always wanted to promote if they were just marginally more important. Whether we effectively serve them to the appropriate community of serious users can only be measured according to criteria that have yet to be developed.
    2. The appropriate size of digital libraries and their access tools. Thesis: Digital collections need a critical, minimal size to make their access worthwhile. In the end, users want to access information, not metadata or gimmicks.
    3. The quality of digital objects. Thesis: If digital library resources are to be integrated into the daily work of the research community, they must appear on the screen of the researcher in a quality that is useful in actual work.
    4. The granularity / modularity of digital repositories. Thesis: While digital libraries are self-contained bodies of information, they are not the basic unit that most users want to access. Users are, as a rule, more interested in the individual objects in the library and need a straightforward way to access them.
    5. Digital collections as integrated reference systems. Thesis: Traditional libraries support their collections with reference material. Digital collections need to find appropriate models to replicate this functionality.
    6. Library and teaching. Thesis: The use of multimedia in teaching is as much of a current buzzword as the creation of digital collections. It is obvious that they should be connected. A clear-cut separation of the two approaches is nevertheless necessary.
  18. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using harvest (2000) 0.00
    
    Abstract
    PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular the use of metadata harvesting. One service is PhysDoc. This consists of a Harvest-based online information broker- and gatherer-network, which harvests information from the local web-servers of professional physics institutions worldwide (mostly in Europe and USA so far). PhysDoc focuses on scientific information posted by the individual scientist at his local server, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors who are responsible for the content and quality of their documents. PhysDis is an analogous service but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. To continuously involve authors, research groups, and national societies is considered crucial for a future stable service.
  19. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: Compound descriptors in context : a matching function for classifications and thesauri (2002) 0.00
    
    Abstract
    There are many advantages for Digital Libraries in indexing with classifications or thesauri, but a current disincentive is the lack of flexible retrieval tools that deal with compound descriptors. This paper discusses a matching function for compound descriptors, or multi-concept subject headings, that does not rely on exact matching but incorporates term expansion via thesaurus semantic relationships to produce ranked results that take account of missing and partially matching terms. The matching function is based on a measure of semantic closeness between terms, which has the potential to help with recall problems. The work reported is part of the ongoing FACET project in collaboration with the National Museum of Science and Industry and its collections database. The architecture of the prototype system and its interface are outlined. The matching problem for compound descriptors is reviewed and the FACET implementation described. Results are discussed from scenarios using the faceted Getty Art and Architecture Thesaurus. We argue that automatic traversal of thesaurus relationships can augment the user's browsing possibilities. The techniques can be applied both to unstructured multi-concept subject headings and potentially to more syntactically structured strings. The notion of a focus term is used by the matching function to model AAT modified descriptors (noun phrases). The relevance of the approach to precoordinated indexing and matching faceted strings is discussed.
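    The matching idea described in this abstract can be sketched in a few lines. This is purely an illustrative toy, not the FACET project's actual function: the thesaurus, the relationship step weights, and all term names below are invented assumptions; only the overall scheme (expand query terms through thesaurus relationships with decaying closeness, then rank compound descriptors so partial matches score below full ones) follows the abstract.

    ```python
    # Illustrative sketch of thesaurus-based compound-descriptor matching.
    # THESAURUS and STEP_WEIGHT are hypothetical examples, not AAT data.
    THESAURUS = {
        "casting": [("moulding", "RT"), ("metalworking", "BT")],
        "bronze": [("copper alloy", "BT")],
    }

    # Traversing a broader/narrower/related link reduces semantic closeness;
    # the weights here are arbitrary choices for the sketch.
    STEP_WEIGHT = {"BT": 0.8, "NT": 0.8, "RT": 0.6}

    def expand(term, depth=2):
        """Expand a term through thesaurus relationships, giving each
        reached term a closeness in (0, 1]; the seed term itself is 1.0."""
        closeness = {term: 1.0}
        frontier = [(term, 1.0)]
        for _ in range(depth):
            next_frontier = []
            for t, w in frontier:
                for related, rel in THESAURUS.get(t, []):
                    w2 = w * STEP_WEIGHT.get(rel, 0.5)
                    if w2 > closeness.get(related, 0.0):
                        closeness[related] = w2
                        next_frontier.append((related, w2))
            frontier = next_frontier
        return closeness

    def match(query_terms, descriptor_terms):
        """Score a compound descriptor against a compound query: each query
        term contributes its best closeness to any descriptor term, and a
        query term with no match contributes 0, so partial matches rank
        below exact ones instead of being discarded."""
        total = 0.0
        for q in query_terms:
            expanded = expand(q)
            total += max((expanded.get(d, 0.0) for d in descriptor_terms),
                         default=0.0)
        return total / len(query_terms)
    ```

    With these toy weights, an exact match scores 1.0, a descriptor reachable only through thesaurus links scores somewhere in between, and an unrelated one scores 0, which is the ranked, non-exact behaviour the abstract describes.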
  20. Collins, L.M.; Hussell, J.A.T.; Hettinga, R.K.; Powell, J.E.; Mane, K.K.; Martinez, M.L.B.: Information visualization and large-scale repositories (2007) 0.00
    
    Abstract
    Purpose - To describe how information visualization can be used in the design of interface tools for large-scale repositories.
    Design/methodology/approach - One challenge for designers in the context of large-scale repositories is to create interface tools that help users find specific information of interest. In order to be most effective, these tools need to leverage the cognitive characteristics of the target users. At the Los Alamos National Laboratory, the authors' target users are scientists and engineers who can be characterized as higher-order, analytical thinkers. In this paper, the authors describe a visualization tool they have created for making the authors' large-scale digital object repositories more usable for them: SearchGraph, which facilitates data set analysis by displaying search results in the form of a two- or three-dimensional interactive scatter plot.
    Findings - Using SearchGraph, users can view a condensed, abstract visualization of search results. They can view the same dataset from multiple perspectives by manipulating several display, sort, and filter options. Doing so allows them to see different patterns in the dataset. For example, they can apply a logarithmic transformation in order to create more scatter in a dense cluster of data points, or they can apply filters in order to focus on a specific subset of data points.
    Originality/value - SearchGraph is a creative solution to the problem of how to design interface tools for large-scale repositories. It is particularly appropriate for the authors' target users, who are scientists and engineers. It extends the work of the first two authors on ActiveGraph, a read-write digital library visualization tool.

Languages

  • e 56
  • d 23

Types

  • a 67
  • el 16
  • s 5
  • m 4
  • x 1