Search (15 results, page 1 of 1)

  • × language_ss:"e"
  • × theme_ss:"Information Gateway"
  • × type_ss:"el"
1. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.01
    0.010760134 = product of:
      0.0322804 = sum of:
        0.0322804 = product of:
          0.048420597 = sum of:
            0.0207691 = weight(_text_:online in 1447) [ClassicSimilarity], result of:
              0.0207691 = score(doc=1447,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.13412495 = fieldWeight in 1447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1447)
            0.027651496 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
              0.027651496 = score(doc=1447,freq=2.0), product of:
                0.17867287 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051022716 = queryNorm
                0.15476047 = fieldWeight in 1447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1447)
          0.6666667 = coord(2/3)
      0.33333334 = coord(1/3)
    
    Abstract
The BAER Web Resources Group was charged in October 1999 with defining and describing the parameters of electronic resources that do not clearly belong to the categories being defined by the BAER Digital Group or the BAER Electronic Journals Group. After some difficulty identifying precisely which resources fell under the Group's charge, we finally named the following types of resources for our consideration: web sites, electronic texts, indexes, databases and abstracts, online reference resources, and networked and non-networked CD-ROMs. Electronic resources are a vast and growing collection that touches nearly every department within the Library. It is unrealistic to think one department can effectively administer all aspects of the collection. The Group then began to focus on the concern of bibliographic access to these varied resources, and to define parameters for handling or processing them within the Library. Some key elements became evident as the work progressed:
* Selection process of resources to be acquired for the collection
* Duplication of effort
* Use of CORC
* Resource Finder design
* Maintenance of Resource Finder
* CD-ROMs not networked
* Communications
* Voyager search limitations
An unexpected collaboration with the Web Development Committee on the Resource Finder helped to steer the Group to more detailed descriptions of bibliographic access. This collaboration included development of data elements for the Resource Finder database, and some discussions on Library staff processing of the resources. The Web Resources Group invited expert testimony to help the Group broaden its view to envision public use of the resources and discuss concerns related to technical services processing. The first testimony came from members of the Resource Finder Committee. Some background information on the Web Development Resource Finder Committee was shared. The second testimony was from librarians who select electronic texts. Three main themes were addressed: accessing CD-ROMs; the issue of including non-networked CD-ROMs in the Resource Finder; and some special concerns about electronic texts. The third testimony came from librarians who select indexes and abstracts and also provide Reference services. Appendices to this report include minutes of the meetings with the experts (Appendix A), a list of proposed data elements to be used in the Resource Finder (Appendix B), and recommendations made to the Resource Finder Committee (Appendix C). Below are summaries of the key elements.
    Date
    21. 4.2002 10:22:31
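The relevance value 0.01 shown for this record is the rounded ClassicSimilarity score whose full breakdown appears above. As a minimal sketch (the helper names are ours, not part of the retrieval software), the factors from that breakdown can be recombined like this:

```python
from math import sqrt

def term_weight(term_freq, idf, query_norm, field_norm):
    """ClassicSimilarity per-term weight: tf * idf^2 * queryNorm * fieldNorm."""
    tf = sqrt(term_freq)                   # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm        # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.051022716
FIELD_NORM = 0.03125                       # fieldNorm(doc=1447)

w_online = term_weight(2.0, 3.0349014, QUERY_NORM, FIELD_NORM)   # ~0.0207691
w_22 = term_weight(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)       # ~0.0276515

# coord(2/3) for the inner boolean clause, coord(1/3) for the outer one
score = (w_online + w_22) * (2 / 3) * (1 / 3)
print(f"{score:.9f}")                      # ~0.010760134, the figure shown above
```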
2. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H.: The Europeana Data Model (EDM) (2010) 0.01
    0.009009165 = product of:
      0.027027493 = sum of:
        0.027027493 = weight(_text_:im in 3967) [ClassicSimilarity], result of:
          0.027027493 = score(doc=3967,freq=2.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.18739122 = fieldWeight in 3967, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.046875 = fieldNorm(doc=3967)
      0.33333334 = coord(1/3)
    
    Content
Lecture given in Session 93, Cataloguing, of the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
3. Fang, L.: A developing search service : heterogeneous resources integration and retrieval system (2004) 0.01
    0.005731291 = product of:
      0.017193872 = sum of:
        0.017193872 = product of:
          0.051581617 = sum of:
            0.051581617 = weight(_text_:retrieval in 1193) [ClassicSimilarity], result of:
              0.051581617 = score(doc=1193,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.33420905 = fieldWeight in 1193, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1193)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
This article describes two approaches for searching heterogeneous resources, which are explained as they are used in two corresponding existing systems: RIRS (Resource Integration Retrieval System) and HRUSP (Heterogeneous Resource Union Search Platform). On analyzing the existing systems, a possible framework, the MUSP (Multimetadata-Based Union Search Platform), is presented. Libraries now face a dilemma. On the one hand, libraries subscribe to many types of database retrieval systems that are produced by various providers. The libraries build their data and information systems independently. This results in highly heterogeneous and distributed systems at the technical level (e.g., different operating systems and user interfaces) and at the conceptual level (e.g., the same objects are named using different terms). On the other hand, end users want to access all these heterogeneous data via a union interface, without having to know the structure of each information system or the different retrieval methods used by the systems. Libraries must achieve a harmony between information providers and users. In order to bridge the gap between the service providers and the users, it would seem that all source databases would need to be rebuilt according to a uniform data structure and query language, but this seems impossible. Fortunately, however, libraries and information and technology providers are now making an effort to find a middle course that meets the requirements of both data providers and users. They are doing this through resource integration.
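The union interface the abstract argues for can be pictured as a broker that fans a query out to each source behind its own adapter and merges what comes back. The sketch below is illustrative only, with made-up adapter names and toy results; it is not code from RIRS, HRUSP or MUSP:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical source adapters: each hides one system's interface and query
# syntax behind the same call signature and returns plain dicts.
def search_catalog(query):
    return [{"source": "catalog", "title": f"{query} (OPAC record)"}]

def search_abstracts(query):
    return [{"source": "abstracts", "title": f"{query} (A&I record)"}]

def search_fulltext(query):
    return [{"source": "fulltext", "title": f"{query} (repository record)"}]

ADAPTERS = [search_catalog, search_abstracts, search_fulltext]

def union_search(query):
    """Fan the query out to every source in parallel and merge the results."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda adapter: adapter(query), ADAPTERS)
    # A real platform would de-duplicate and re-rank here; we simply concatenate.
    return [hit for hits in result_lists for hit in hits]

print(union_search("digital libraries"))
```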
  4. Veen, T. van; Oldroyd, B.: Search and retrieval in The European Library : a new approach (2004) 0.00
    0.0049634436 = product of:
      0.014890331 = sum of:
        0.014890331 = product of:
          0.04467099 = sum of:
            0.04467099 = weight(_text_:retrieval in 1164) [ClassicSimilarity], result of:
              0.04467099 = score(doc=1164,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.28943354 = fieldWeight in 1164, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1164)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
The objective of the European Library (TEL) project [TEL] was to set up a co-operative framework and specify a system for integrated access to the major collections of the European national libraries. This has been achieved by successfully applying a new approach for search and retrieval via URL (SRU) [ZiNG] combined with a new metadata paradigm. One aim of the TEL approach is to have a low barrier of entry into TEL, and this has driven our choice of the technical solution described here. The solution comprises portal and client functionality running completely in the browser, resulting in a low implementation barrier and maximum scalability, as well as giving users control over the search interface and which collections to search. In this article we will describe, step by step, the development of both the search and retrieval architecture and the metadata infrastructure in the European Library project. We will show that SRU is a good alternative to the Z39.50 protocol and can be implemented without losing investments in current Z39.50 implementations. The metadata model being used by TEL is a Dublin Core Application Profile, and we have taken into account that functional requirements will change over time and that the metadata model will therefore need to be able to evolve in a controlled way. We make this possible by means of a central metadata registry containing all characteristics of the metadata in TEL. Finally, we provide two scenarios to show how the TEL concept can be developed and extended, with applications capable of increasing their functionality by "learning" new metadata or protocol options.
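Since the abstract's central point is that an SRU request is nothing more than a URL, a minimal sketch may help. The endpoint below is a placeholder, not The European Library's actual address, and the request itself is therefore left commented out:

```python
from urllib.parse import urlencode
from urllib.request import urlopen          # only needed for the real call below
import xml.etree.ElementTree as ET

BASE_URL = "https://sru.example.org/search"  # placeholder SRU endpoint

params = {
    "operation": "searchRetrieve",
    "version": "1.1",
    "query": 'dc.title = "digital library"',  # the query is plain CQL
    "maximumRecords": "10",
    "recordSchema": "dc",                     # ask for Dublin Core records
}
request_url = f"{BASE_URL}?{urlencode(params)}"
print(request_url)   # the whole search/retrieve operation is carried in the URL

# Against a real endpoint, the XML response could be fetched and parsed with:
# with urlopen(request_url) as response:
#     records = ET.parse(response)
```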
  5. Müller, B.; Poley, C.; Pössel, J.; Hagelstein, A.; Gübitz, T.: LIVIVO - the vertical search engine for life sciences (2017) 0.00
    0.0049634436 = product of:
      0.014890331 = sum of:
        0.014890331 = product of:
          0.04467099 = sum of:
            0.04467099 = weight(_text_:retrieval in 3368) [ClassicSimilarity], result of:
              0.04467099 = score(doc=3368,freq=6.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.28943354 = fieldWeight in 3368, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3368)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life sciences, such as the One Health paradigm, require integrated methodologies in order to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by the employment of trans-disciplinary methodologies comprising molecular and cell biology, genetics, and genomic, epigenomic, transcriptional and proteomic high-throughput technologies with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high-throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life science entities. The service-oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is developed by ZB MED to deal with the heterogeneity of data as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept-based relevancy rankings in LIVIVO.
6. Birmingham, W.; Pardo, B.; Meek, C.; Shifrin, J.: The MusArt music-retrieval system (2002) 0.00
    0.004585033 = product of:
      0.013755098 = sum of:
        0.013755098 = product of:
          0.041265294 = sum of:
            0.041265294 = weight(_text_:retrieval in 1205) [ClassicSimilarity], result of:
              0.041265294 = score(doc=1205,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.26736724 = fieldWeight in 1205, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1205)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    Music websites are ubiquitous, and music downloads, such as MP3, are a major source of Web traffic. As the amount of musical content increases and the Web becomes an important mechanism for distributing music, we expect to see a rising demand for music search services. Many currently available music search engines rely on file names, song title, composer or performer as the indexing and retrieval mechanism. These systems do not make use of the musical content. We believe that a more natural, effective, and usable music-information retrieval (MIR) system should have audio input, where the user can query with musical content. We are developing a system called MusArt for audio-input MIR. With MusArt, as with other audio-input MIR systems, a user sings or plays a theme, hook, or riff from the desired piece of music. The system transcribes the query and searches for related themes in a database, returning the most similar themes, given some measure of similarity. We call this "retrieval by query." In this paper, we describe the architecture of MusArt. An important element of MusArt is metadata creation: we believe that it is essential to automatically abstract important musical elements, particularly themes. Theme extraction is performed by a subsystem called MME, which we describe later in this paper. Another important element of MusArt is its support for a variety of search engines, as we believe that MIR is too complex for a single approach to work for all queries. Currently, MusArt supports a dynamic time-warping search engine that has high recall, and a complementary stochastic search engine that searches over themes, emphasizing speed and relevancy. The stochastic search engine is discussed in this paper.
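The "dynamic time-warping search engine" mentioned above can be illustrated with a short sketch: DTW aligns a sung query against each stored theme even when the query is stretched or slightly off. This is a generic textbook version with made-up pitch sequences, not the MusArt implementation:

```python
def dtw_distance(query, theme):
    """Dynamic time warping distance between two pitch sequences (MIDI note numbers)."""
    n, m = len(query), len(theme)
    INF = float("inf")
    # dp[i][j] = cost of aligning query[:i] with theme[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - theme[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # repeat a theme note
                                  dp[i][j - 1],      # skip a theme note
                                  dp[i - 1][j - 1])  # match the notes
    return dp[n][m]

# A stretched rendition of a melody still matches its own theme best.
sung_query = [60, 60, 62, 64, 64, 65]
themes = {"theme A": [60, 62, 64, 65], "theme B": [67, 65, 64, 62]}
print(min(themes, key=lambda name: dtw_distance(sung_query, themes[name])))  # theme A
```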
  7. Peters, C.; Picchi, E.: Across languages, across cultures : issues in multilinguality and digital libraries (1997) 0.00
    0.004585033 = product of:
      0.013755098 = sum of:
        0.013755098 = product of:
          0.041265294 = sum of:
            0.041265294 = weight(_text_:retrieval in 1233) [ClassicSimilarity], result of:
              0.041265294 = score(doc=1233,freq=2.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.26736724 = fieldWeight in 1233, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1233)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    With the recent rapid diffusion over the international computer networks of world-wide distributed document bases, the question of multilingual access and multilingual information retrieval is becoming increasingly relevant. We briefly discuss just some of the issues that must be addressed in order to implement a multilingual interface for a Digital Library system and describe our own approach to this problem.
  8. Urs, S.R.; Angrosh, M.A.: Ontology-based knowledge organization systems in digital libraries : a comparison of experiments in OWL and KAON ontologies (2006 (?)) 0.00
    0.004585033 = product of:
      0.013755098 = sum of:
        0.013755098 = product of:
          0.041265294 = sum of:
            0.041265294 = weight(_text_:retrieval in 2799) [ClassicSimilarity], result of:
              0.041265294 = score(doc=2799,freq=8.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.26736724 = fieldWeight in 2799, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2799)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
Grounded in a strong belief that ontologies enhance the performance of information retrieval systems, there has been an upsurge of interest in ontologies. Their importance is recognized in diverse research fields such as knowledge engineering, knowledge representation, qualitative modeling, language engineering, database design, information integration, object-oriented analysis, information retrieval and extraction, knowledge management and agent-based systems design (Guarino, 1998). While the role played by ontologies automatically lends these tools a place of legitimacy, research in this area gains greater significance in the wake of the various challenges faced in the contemporary digital environment. With the objective of overcoming various pitfalls associated with current search mechanisms, ontologies are increasingly used for developing efficient information retrieval systems. An indicator of research interest in the area of ontology is Swoogle, a search engine for Semantic Web documents, terms and data found on the Web (Ding, Li et al., 2004). Given the complex nature of the digital content archived in digital libraries, ontologies can be employed for designing efficient forms of information retrieval in digital libraries. Knowledge representation assumes greater significance due to its crucial role in ontology development. These systems aid in developing intelligent information systems, wherein the notion of intelligence implies the ability of the system to find implicit consequences of its explicitly represented knowledge (Baader and Nutt, 2003). Knowledge representation formalisms such as Description Logics are used to obtain an explicit knowledge representation of the subject domain. These representations are developed into ontologies, which are used for developing intelligent information systems. Against this backdrop, the paper examines the use of Description Logics for conceptually modeling a chosen domain, which is then utilized for developing domain ontologies. The knowledge representation languages identified for this purpose are the Web Ontology Language (OWL) and the KArlsruhe ONtology (KAON) language. Drawing upon the various technical constructs involved in developing ontology-based information systems, the paper explains the working of the prototypes and also presents a comparative study of the two prototypes.
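As a small illustration of the kind of domain modelling the paper describes (not the authors' OWL or KAON prototypes), the following sketch uses the rdflib package to assert two classes, a subclass axiom and an object property for a toy digital-library ontology; the namespace and term names are invented:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

DL = Namespace("http://example.org/diglib#")  # hypothetical ontology namespace
g = Graph()
g.bind("dl", DL)

# Two OWL classes and a subclass axiom: an electronic thesis is a kind of document.
g.add((DL.Document, RDF.type, OWL.Class))
g.add((DL.ElectronicThesis, RDF.type, OWL.Class))
g.add((DL.ElectronicThesis, RDFS.subClassOf, DL.Document))

# An object property linking documents to the subject concepts they are about.
g.add((DL.hasSubject, RDF.type, OWL.ObjectProperty))
g.add((DL.hasSubject, RDFS.domain, DL.Document))
g.add((DL.hasSubject, RDFS.comment, Literal("Links a document to a subject concept.")))

print(g.serialize(format="turtle"))   # the same axioms, written out as Turtle
```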
  9. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using harvest (2000) 0.00
    0.0040794373 = product of:
      0.012238312 = sum of:
        0.012238312 = product of:
          0.036714934 = sum of:
            0.036714934 = weight(_text_:online in 6470) [ClassicSimilarity], result of:
              0.036714934 = score(doc=6470,freq=4.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23710167 = fieldWeight in 6470, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6470)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all the information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular the use of metadata harvesting. One service is PhysDoc. This consists of a Harvest-based online information broker and gatherer network, which harvests information from the local web servers of professional physics institutions worldwide (mostly in Europe and the USA so far). PhysDoc focuses on scientific information posted by the individual scientist at his local server, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors, who are responsible for the content and quality of their documents. PhysDis is an analogous service, but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high-quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. Continuously involving authors, research groups, and national societies is considered crucial for a future stable service.
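The gathering step described above boils down to pulling descriptive metadata out of pages on institutional servers. The sketch below is a generic stand-in, assuming ordinary HTML <meta> tags and standard-library tools only; the real Harvest-based gatherer works differently in detail, and the sample page is made up:

```python
from html.parser import HTMLParser
from urllib.request import urlopen   # for fetching real institution pages

class MetaTagGatherer(HTMLParser):
    """Collects <meta name="..." content="..."> pairs from one page."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.metadata[attrs["name"]] = attrs["content"]

def gather(url):
    """Fetch one institutional page and return its metadata fields."""
    with urlopen(url, timeout=10) as response:
        page = response.read().decode("utf-8", errors="replace")
    parser = MetaTagGatherer()
    parser.feed(page)
    return parser.metadata

# Offline demonstration on a made-up publication-list page:
sample_page = ('<html><head>'
               '<meta name="DC.title" content="Preprint 42: Lattice results">'
               '<meta name="DC.creator" content="A. Physicist">'
               '</head></html>')
demo = MetaTagGatherer()
demo.feed(sample_page)
print(demo.metadata)   # a broker would merge such records into a searchable index
```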
  10. Oard, D.W.: Serving users in many languages : cross-language information retrieval for digital libraries (1997) 0.00
    0.004052635 = product of:
      0.012157904 = sum of:
        0.012157904 = product of:
          0.03647371 = sum of:
            0.03647371 = weight(_text_:retrieval in 1261) [ClassicSimilarity], result of:
              0.03647371 = score(doc=1261,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23632148 = fieldWeight in 1261, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1261)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    We are rapidly constructing an extensive network infrastructure for moving information across national boundaries, but much remains to be done before linguistic barriers can be surmounted as effectively as geographic ones. Users seeking information from a digital library could benefit from the ability to query large collections once using a single language, even when more than one language is present in the collection. If the information they locate is not available in a language that they can read, some form of translation will be needed. At present, multilingual thesauri such as EUROVOC help to address this challenge by facilitating controlled vocabulary search using terms from several languages, and services such as INSPEC produce English abstracts for documents in other languages. On the other hand, support for free text searching across languages is not yet widely deployed, and fully automatic machine translation is presently neither sufficiently fast nor sufficiently accurate to adequately support interactive cross-language information seeking. An active and rapidly growing research community has coalesced around these and other related issues, applying techniques drawn from several fields - notably information retrieval and natural language processing - to provide access to large multilingual collections.
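Term-by-term query translation is one simple way of bridging languages in the spirit of the vocabulary-based approaches discussed above. The sketch below is illustrative only and is not the article's method; the three lexicon entries are invented rather than taken from EUROVOC:

```python
# Toy English-to-French lexicon (invented entries, not EUROVOC data).
EN_TO_FR = {
    "library": ["bibliothèque"],
    "digital": ["numérique"],
    "copyright": ["droit d'auteur"],
}

def translate_query(terms, lexicon):
    """Dictionary-based query translation: replace each term by all of its
    known translations and keep untranslatable terms unchanged."""
    translated = []
    for term in terms:
        translated.extend(lexicon.get(term, [term]))
    return translated

print(translate_query(["digital", "library", "preservation"], EN_TO_FR))
# ['numérique', 'bibliothèque', 'preservation']
```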
  11. Lossau, N.: Search engine technology and digital libraries : libraries need to discover the academic internet (2004) 0.00
    0.004038437 = product of:
      0.01211531 = sum of:
        0.01211531 = product of:
          0.03634593 = sum of:
            0.03634593 = weight(_text_:online in 1161) [ClassicSimilarity], result of:
              0.03634593 = score(doc=1161,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.23471867 = fieldWeight in 1161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1161)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
With the development of the World Wide Web, the "information search" has grown to be a significant business sector of a global, competitive and commercial market. Powerful players have entered this market, such as commercial internet search engines, information portals, multinational publishers and online content integrators. Will Google, Yahoo or Microsoft be the only portals to global knowledge in 2010? If libraries do not want to become marginalized in a key area of their traditional services, they need to acknowledge the challenges that come with the globalisation of scholarly information and the existence and further growth of the academic internet.
  12. Summann, F.; Lossau, N.: Search engine technology and digital libraries : moving from theory to practice (2004) 0.00
    0.0032421078 = product of:
      0.009726323 = sum of:
        0.009726323 = product of:
          0.029178968 = sum of:
            0.029178968 = weight(_text_:retrieval in 1196) [ClassicSimilarity], result of:
              0.029178968 = score(doc=1196,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.18905719 = fieldWeight in 1196, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1196)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
This article describes the journey from the conception of and vision for a modern search-engine-based search environment to its technological realisation. In doing so, it takes up the thread of an earlier article on this subject, this time from a technical viewpoint. As well as presenting the conceptual considerations of the initial stages, this article will principally elucidate the technological aspects of this journey. The starting point for the deliberations about development of an academic search engine was the experience we gained through the generally successful project "Digital Library NRW", in which, from 1998 to 2000, with Bielefeld University Library in overall charge, we designed a system model for an Internet-based library portal with an improved academic search environment at its core. At the heart of this system was a metasearch with an availability function, to which we added a user interface integrating all relevant source material for study and research. The deficiencies of this approach were felt soon after the system was launched in June 2001. There were problems with the stability and performance of the database retrieval system, with the integration of full-text documents and Internet pages, and with acceptance by users, because users are increasingly performing their searches directly with search engines rather than going to the library for help. Since a long list of problems is also encountered when using commercial search engines for academic purposes (in particular the retrieval of academic information and long-term availability), the idea was born for a search engine configured specifically for academic use. We also hoped that with one single access point founded on improved search engine technology, we could access the heterogeneous academic resources of subject-based bibliographic databases, catalogues, electronic newspapers, document servers and academic web pages.
  13. Spink, A.; Wilson, T.; Ellis, D.; Ford, N.: Modeling users' successive searches in digital environments : a National Science Foundation/British Library funded study (1998) 0.00
    0.002836844 = product of:
      0.008510532 = sum of:
        0.008510532 = product of:
          0.025531596 = sum of:
            0.025531596 = weight(_text_:retrieval in 1255) [ClassicSimilarity], result of:
              0.025531596 = score(doc=1255,freq=4.0), product of:
                0.15433937 = queryWeight, product of:
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16542503 = fieldWeight in 1255, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.024915 = idf(docFreq=5836, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1255)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
As digital libraries become a major source of information for many people, we need to know more about how people seek and retrieve information in digital environments. Quite commonly, users with a problem-at-hand and an associated question-in-mind repeatedly search a literature for answers, and seek information in stages over extended periods from a variety of digital information resources. The process of repeatedly searching over time in relation to a specific, but possibly evolving, information problem (including changes or shifts in a variety of variables) is called the successive search phenomenon. The study outlined in this paper is currently investigating this new and little explored line of inquiry for information retrieval, Web searching, and digital libraries. The purpose of the research project is to investigate the nature, manifestations, and behavior of successive searching by users in digital environments, and to derive criteria for use in the design of information retrieval interfaces and systems supporting successive searching behavior. This study includes two related projects. The first project is based in the School of Library and Information Sciences at the University of North Texas and is funded by a National Science Foundation POWRE Grant <http://www.nsf.gov/cgi-bin/show?award=9753277>. The second project is based at the Department of Information Studies at the University of Sheffield (UK) and is funded by a grant from the British Library <http://www.shef.ac.uk/~is/research/imrg/uncerty.html> Research and Innovation Center. The broad objectives of each project are to examine the nature and extent of successive search episodes in digital environments by real users over time. The specific aim of the current project is twofold:
* To characterize progressive changes and shifts that occur in: user situational context; user information problem; uncertainty reduction; user cognitive styles; cognitive and affective states of the user, and consequently in their queries; and
* To characterize related changes over time in the type and use of information resources and search strategies, particularly related to the given capabilities of IR systems and IR search engines, and to examine changes in users' relevance judgments and criteria and characterize their differences.
The study is an observational, longitudinal data collection in the U.S. and U.K. Three questionnaires are used to collect data: reference, client post-search and searcher post-search questionnaires. Each successive search episode with a search intermediary for textual materials on the DIALOG Information Service is audiotaped, and search transaction logs are recorded. Quantitative analysis includes statistical analysis of the Likert-scale data from the questionnaires and log-linear analysis of the sequential data. Qualitative methods include content analysis, structuring taxonomies, and diagrams to describe shifts and transitions within and between each search episode. Outcomes of the study are the development of appropriate model(s) for IR interactions in successive search episodes and the derivation of a set of design criteria for interfaces and systems supporting successive searching.
  14. Buckland, M.; Lancaster, L.: Combining place, time, and topic : the Electronic Cultural Atlas Initiative (2004) 0.00
    0.002307678 = product of:
      0.006923034 = sum of:
        0.006923034 = product of:
          0.0207691 = sum of:
            0.0207691 = weight(_text_:online in 1194) [ClassicSimilarity], result of:
              0.0207691 = score(doc=1194,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.13412495 = fieldWeight in 1194, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1194)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The Electronic Cultural Atlas Initiative was formed to encourage scholarly communication and the sharing of data among researchers who emphasize the relationships between place, time, and topic in the study of culture and history. In an effort to develop better tools and practices, The Electronic Cultural Atlas Initiative has sponsored the collaborative development of software for downloading and editing geo-temporal data to create dynamic maps, a clearinghouse of shared datasets accessible through a map-based interface, projects on format and content standards for gazetteers and time period directories, studies to improve geo-temporal aspects in online catalogs, good practice guidelines for preparing e-publications with dynamic geo-temporal displays, and numerous international conferences. The Electronic Cultural Atlas Initiative (ECAI) grew out of discussions among an international group of scholars interested in religious history and area studies. It was established as a unit under the Dean of International and Area Studies at the University of California, Berkeley in 1997. ECAI's mission is to promote an international collaborative effort to transform humanities scholarship through use of the digital environment to share data and by placing greater emphasis on the notions of place and time. Professor Lewis Lancaster is the Director. Professor Michael Buckland, with a library and information studies background, joined the effort as Co-Director in 2000. Assistance from the Lilly Foundation, the California Digital Library (University of California), and other sources has enabled ECAI to nurture a community; to develop a catalog ("clearinghouse") of Internet-accessible georeferenced resources; to support the development of software for obtaining, editing, manipulating, and dynamically visualizing geo-temporally encoded data; and to undertake research and development projects as needs and resources determine. Several hundred scholars worldwide, from a wide range of disciplines, are informally affiliated with ECAI, all interested in shared use of historical and cultural data. The Academia Sinica (Taiwan), The British Library, and the Arts and Humanities Data Service (UK) are among the well-known affiliates. However, ECAI mainly comprises individual scholars and small teams working on their own small projects on a very wide range of cultural, social, and historical topics. Numerous specialist committees have been fostering standardization and collaboration by area and by themes such as trade-routes, cities, religion, and sacred sites.
  15. Niggemann, E.: Europeana: connecting cultural heritage (2009) 0.00
    0.002307678 = product of:
      0.006923034 = sum of:
        0.006923034 = product of:
          0.0207691 = sum of:
            0.0207691 = weight(_text_:online in 2816) [ClassicSimilarity], result of:
              0.0207691 = score(doc=2816,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.13412495 = fieldWeight in 2816, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2816)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
The European Commission's goal for Europeana is to make European information resources easier to use in an online environment. It will build on Europe's rich heritage, combining multicultural and multilingual environments with technological advances and new business models. The Europeana prototype is the result of a 2-year project that began in July 2007. Europeana.eu went live on 20 November 2008, launched by Viviane Reding, European Commissioner for Information Society and Media. Europeana.eu is about ideas and inspiration. It links the user to 2 million digital items: images, text, sounds and videos. Europeana is a Thematic Network funded by the European Commission under the eContentplus programme, as part of the i2010 policy. Originally known as the European digital library network - EDLnet - it is a partnership of 100 representatives of heritage and knowledge organisations and IT experts from throughout Europe. They contribute to the work packages that are solving the technical and usability issues. The project is run by a core team based in the national library of the Netherlands, the Koninklijke Bibliotheek. It builds on the project management and technical expertise developed by The European Library, which is a service of the Conference of European National Librarians. Content is added via so-called aggregators: national or domain-specific portals aggregating digital content and channelling it to Europeana. Most of these portals are being developed in the framework of EU-funded projects, e.g. the European Film Gateway, Athena and EuropeanaLocal. Overseeing the project is the EDL Foundation, which includes key European cultural heritage associations from the four domains. The Foundation's statutes commit members to:
* providing access to Europe's cultural and scientific heritage through a cross-domain portal;
* co-operating in the delivery and sustainability of the joint portal;
* stimulating initiatives to bring together existing digital content;
* supporting digitisation of Europe's cultural and scientific heritage.
Europeana.eu is a prototype. Europeana Version 1.0 is being developed and will be launched in 2010 with links to over 6 million digital objects.