Search (16 results, page 1 of 1)

  • × theme_ss:"Information Gateway"
  • × type_ss:"el"
  1. Zapilko, B.: InFoLiS (2017) 0.01
    0.012000851 = product of:
      0.09600681 = sum of:
        0.09600681 = weight(_text_:hochschule in 1031) [ClassicSimilarity], result of:
          0.09600681 = score(doc=1031,freq=2.0), product of:
            0.23689921 = queryWeight, product of:
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.03875087 = queryNorm
            0.40526438 = fieldWeight in 1031, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.113391 = idf(docFreq=265, maxDocs=44218)
              0.046875 = fieldNorm(doc=1031)
      0.125 = coord(1/8)
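The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: the final score is coord × queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, with tf = sqrt(termFreq). A minimal sketch reproducing the arithmetic of this first explanation:

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm, coord):
    """Reproduce Lucene's ClassicSimilarity explain tree for one term."""
    tf = math.sqrt(freq)               # 1.4142135 for freq=2.0
    query_weight = idf * query_norm    # idf * queryNorm
    field_weight = tf * idf * field_norm
    return coord * query_weight * field_weight

# Values taken directly from the explanation for doc 1031 above.
score = classic_similarity(freq=2.0, idf=6.113391,
                           query_norm=0.03875087,
                           field_norm=0.046875, coord=1 / 8)
print(score)
```

Plugging in the numbers shown for `_text_:hochschule` yields the displayed document score of about 0.012000851.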
    
    Abstract
    The DFG-funded InFoLiS project series was successfully completed this year. The projects were carried out by GESIS - Leibniz Institute for the Social Sciences, Mannheim University Library, and the Hochschule der Medien Stuttgart. The goal of the projects InFoLiS I and InFoLiS II was to develop methods for linking research data and literature. Such links can add considerable value to retrieval systems in information infrastructures such as libraries and research data centres. The individual project results are: - Development of methods for the automatic linking of publications and research data - Integration of these links into the project partners' retrieval systems - Automatic subject indexing of research data - Transfer of the developed methods into a reusable Linked Open Data-based infrastructure with web services and APIs - Application of the methods to a cross-disciplinary and multilingual data basis - Reusability of the links through the use of a research data ontology. Further information can be found on the project homepage [http://infolis.github.io/]. All project results, including source code, are available Open Source on our GitHub page [http://www.github.com/infolis/] for reuse. If you are interested in reuse or further development, please contact benjamin.zapilko@gesis.org.
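The publication-to-dataset links the abstract describes could be modelled roughly as follows; this is a hypothetical sketch, and the class, property, and URI names are illustrative placeholders, not the actual InFoLiS research data ontology terms.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EntityLink:
    publication_uri: str   # the citing publication
    dataset_uri: str       # the referenced research dataset
    confidence: float      # confidence of the automatic link extraction

    def to_triple(self):
        """Serialise as an N-Triples-style Linked Open Data statement."""
        return (f"<{self.publication_uri}> "
                f"<http://example.org/ontology#references> "
                f"<{self.dataset_uri}> .")

link = EntityLink("http://example.org/pub/123",
                  "http://example.org/data/alb-2014", 0.87)
print(link.to_triple())
```

Publishing such links as triples is what makes them reusable by third parties, as the abstract's last project result suggests.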
  2. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.01
    0.008392914 = product of:
      0.033571657 = sum of:
        0.02307124 = weight(_text_:work in 1447) [ClassicSimilarity], result of:
          0.02307124 = score(doc=1447,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 1447, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=1447)
        0.010500416 = product of:
          0.021000832 = sum of:
            0.021000832 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
              0.021000832 = score(doc=1447,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.15476047 = fieldWeight in 1447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1447)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    The BAER Web Resources Group was charged in October 1999 with defining and describing the parameters of electronic resources that do not clearly belong to the categories being defined by the BAER Digital Group or the BAER Electronic Journals Group. After some difficulty identifying precisely which resources fell under the Group's charge, we finally named the following types of resources for our consideration: web sites, electronic texts, indexes, databases and abstracts, online reference resources, and networked and non-networked CD-ROMs. Electronic resources are a vast and growing collection that touches nearly every department within the Library. It is unrealistic to think one department can effectively administer all aspects of the collection. The Group then began to focus on the concern of bibliographic access to these varied resources, and to define parameters for handling or processing them within the Library. Some key elements became evident as the work progressed. * Selection process of resources to be acquired for the collection * Duplication of effort * Use of CORC * Resource Finder design * Maintenance of Resource Finder * CD-ROMs not networked * Communications * Voyager search limitations. An unexpected collaboration with the Web Development Committee on the Resource Finder helped to steer the Group to more detailed descriptions of bibliographic access. This collaboration included development of data elements for the Resource Finder database, and some discussions on Library staff processing of the resources. The Web Resources Group invited expert testimony to help the Group broaden its view to envision public use of the resources and discuss concerns related to technical services processing. The first testimony came from members of the Resource Finder Committee. Some background information on the Web Development Resource Finder Committee was shared. The second testimony was from librarians who select electronic texts.
Three main themes were addressed: accessing CD-ROMs; the issue of including non-networked CD-ROMs in the Resource Finder; and, some special concerns about electronic texts. The third testimony came from librarians who select indexes and abstracts and also provide Reference services. Appendices to this report include minutes of the meetings with the experts (Appendix A), a list of proposed data elements to be used in the Resource Finder (Appendix B), and recommendations made to the Resource Finder Committee (Appendix C). Below are summaries of the key elements.
    Date
    21. 4.2002 10:22:31
  3. Crane, G.: ¬The Perseus Project and beyond : how building a digital library challenges the humanities and technology (1998) 0.01
    0.007492605 = product of:
      0.05994084 = sum of:
        0.05994084 = weight(_text_:work in 1251) [ClassicSimilarity], result of:
          0.05994084 = score(doc=1251,freq=6.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.4214336 = fieldWeight in 1251, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=1251)
      0.125 = coord(1/8)
    
    Abstract
    For more than ten years, the Perseus Project has been developing a digital library in the humanities. Initial work concentrated exclusively on ancient Greek culture, using this domain as a case study for a compact, densely hypertextual library on a single, but interdisciplinary, subject. Since it has achieved its initial goals with the Greek materials, however, Perseus is using the existing library to study the new possibilities (and limitations) of the electronic medium and to serve as the foundation for work in new cultural domains: Perseus has begun coverage of Roman and now Renaissance materials, with plans for expansion into other areas of the humanities as well. Our goal is not only to help traditional scholars conduct their research more effectively but, more importantly, to help humanists use the technology to redefine the relationship between their work and the broader intellectual community.
  4. Shechtman, N.; Chung, M.; Roschelle, J.: Supporting member collaboration in the Math Tools digital library : a formative user study (2004) 0.01
    0.0061176866 = product of:
      0.048941493 = sum of:
        0.048941493 = weight(_text_:work in 1163) [ClassicSimilarity], result of:
          0.048941493 = score(doc=1163,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.3440991 = fieldWeight in 1163, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=1163)
      0.125 = coord(1/8)
    
    Abstract
    In this paper, we discuss a user study done at the formative stage of development of a Math Tools developers' community. The Math Tools digital library, which aims to collect software tools to support K-12 and university mathematics instruction, has two synergistic purposes. One is to support federated search and the other is to create a community of practice in which developers and users can work together. While much research has explored the technical problem of federated search, there has been little investigation into how to grow a creative, working community around a digital library. To this end, we surveyed and interviewed members of the Math Tools community in order to elicit concerns and priorities. These data led to rich descriptions of the teachers, developers, and researchers who comprise this community. Insights from these descriptions were then used to inform the creation of a set of metaphors and design principles that the Math Tools team could use in their continuing design work.
  5. Arms, W.Y.; Blanchi, C.; Overly, E.A.: ¬An architecture for information in digital libraries (1997) 0.01
    0.005046834 = product of:
      0.04037467 = sum of:
        0.04037467 = weight(_text_:work in 1260) [ClassicSimilarity], result of:
          0.04037467 = score(doc=1260,freq=8.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.28386727 = fieldWeight in 1260, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1260)
      0.125 = coord(1/8)
    
    Abstract
    Flexible organization of information is one of the key design challenges in any digital library. For the past year, we have been working with members of the National Digital Library Project (NDLP) at the Library of Congress to build an experimental system to organize and store library collections. This is a report on the work. In particular, we describe how a few technical building blocks are used to organize the material in collections, such as the NDLP's, and how these methods fit into a general distributed computing framework. The technical building blocks are part of a framework that evolved as part of the Computer Science Technical Reports Project (CSTR). This framework is described in the paper, "A Framework for Distributed Digital Object Services", by Robert Kahn and Robert Wilensky (1995). The main building blocks are: "digital objects", which are used to manage digital material in a networked environment; "handles", which identify digital objects and other network resources; and "repositories", in which digital objects are stored. These concepts are amplified in "Key Concepts in the Architecture of the Digital Library", by William Y. Arms (1995). In summer 1995, after earlier experimental development, work began on the implementation of a full digital library system based on this framework. In addition to Kahn/Wilensky and Arms, several working papers further elaborate on the design concepts. A paper by Carl Lagoze and David Ely, "Implementation Issues in an Open Architectural Framework for Digital Object Services", delves into some of the repository concepts. The initial repository implementation was based on a paper by Carl Lagoze, Robert McGrath, Ed Overly and Nancy Yeager, "A Design for Inter-Operable Secure Object Stores (ISOS)". Work on the handle system, which began in 1992, is described in a series of papers that can be found on the Handle Home Page. 
The National Digital Library Program (NDLP) at the Library of Congress is a large scale project to convert historic collections to digital form and make them widely available over the Internet. The program is described in two articles by Caroline R. Arms, "Historical Collections for the National Digital Library". The NDLP itself draws on experience gained through the earlier American Memory Program. Based on this work, we have built a pilot system that demonstrates how digital objects can be used to organize complex materials, such as those found in the NDLP. The pilot was demonstrated to members of the library in July 1996. The pilot system includes the handle system for identifying digital objects, a pilot repository to store them, and two user interfaces: one designed for librarians to manage digital objects in the repository, the other for library patrons to access the materials stored in the repository. Materials from the NDLP's Coolidge Consumerism compilation have been deposited into the pilot repository. They include a variety of photographs and texts, converted to digital form. The pilot demonstrates the use of handles for identifying such material, the use of meta-objects for managing sets of digital objects, and the choice of metadata. We are now implementing an enhanced prototype system for completion in early 1997.
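The building blocks the report names, digital objects identified by handles, meta-objects grouping them, and repositories that store them, can be sketched as a toy model; the class names, fields, and handle strings below are illustrative assumptions, not the actual CNRI handle system or NDLP pilot API.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    handle: str        # globally unique, resolvable identifier
    metadata: dict     # descriptive metadata chosen for the object
    data: bytes = b""  # the digitised content itself

@dataclass
class MetaObject(DigitalObject):
    """A digital object whose content is a set of other objects."""
    members: list = field(default_factory=list)  # member handles

class Repository:
    """Stores digital objects and resolves handles to them."""
    def __init__(self):
        self._store = {}

    def deposit(self, obj: DigitalObject):
        self._store[obj.handle] = obj

    def resolve(self, handle: str) -> DigitalObject:
        return self._store[handle]

repo = Repository()
repo.deposit(DigitalObject("loc.ndlp/photo-001", {"title": "Photograph"}))
repo.deposit(MetaObject("loc.ndlp/coolidge-set", {"title": "Compilation"},
                        members=["loc.ndlp/photo-001"]))
print(repo.resolve("loc.ndlp/coolidge-set").members)
```

The meta-object is the piece that lets complex materials, a compilation of photographs and texts, say, be managed as one unit while each part keeps its own handle.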
  6. Brahms, E.: Digital library initiatives of the Deutsche Forschungsgemeinschaft (2001) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 1190) [ClassicSimilarity], result of:
          0.034606863 = score(doc=1190,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 1190, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=1190)
      0.125 = coord(1/8)
    
    Abstract
    The Deutsche Forschungsgemeinschaft (DFG) is the central public funding organization for academic research in Germany. It is thus comparable to a research council or a national research foundation. According to its statutes, DFG's mandate is to serve science and the arts in all fields by supporting research projects carried out at universities and public research institutions in Germany, to promote cooperation between researchers, and to forge and support links between German academic science, industry and partners in foreign countries. In the fulfillment of its tasks, the DFG pays special attention to the education and support of young scientists and scholars. DFG's mandate and operations follow the principle of territoriality. This means that its funding activities are restricted, with very few exceptions, to individuals and institutions with permanent addresses in Germany. Fellowships are granted for work in other countries, but most fellowship programs are restricted to German citizens, with a few exceptions for permanent residents of Germany holding foreign passports.
  7. Gore, E.; Bitta, M.D.; Cohen, D.: ¬The Digital Public Library of America and the National Digital Platform (2017) 0.00
    0.004325858 = product of:
      0.034606863 = sum of:
        0.034606863 = weight(_text_:work in 3655) [ClassicSimilarity], result of:
          0.034606863 = score(doc=3655,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2433148 = fieldWeight in 3655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=3655)
      0.125 = coord(1/8)
    
    Abstract
    The Digital Public Library of America brings together the riches of America's libraries, archives, and museums, and makes them freely available to the world. In order to do this, DPLA has had to build elements of the national digital platform to connect to those institutions and to serve their digitized materials to audiences. In this article, we detail the construction of two critical elements of our work: the decentralized national network of "hubs," which operate in states across the country; and a version of the Hydra repository software that is tailored to the needs of our community. This technology and the organizations that make use of it serve as the foundation of the future of DPLA and other projects that seek to take advantage of the national digital platform.
  8. EuropeanaTech and Multilinguality : Issue 1 of EuropeanaTech Insight (2015) 0.00
    0.0040784576 = product of:
      0.03262766 = sum of:
        0.03262766 = weight(_text_:work in 1832) [ClassicSimilarity], result of:
          0.03262766 = score(doc=1832,freq=4.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2293994 = fieldWeight in 1832, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=1832)
      0.125 = coord(1/8)
    
    Abstract
    Welcome to the very first issue of EuropeanaTech Insight, a multimedia publication about research and development within the EuropeanaTech community. EuropeanaTech is a very active community. It spans all of Europe and is made up of technical experts from the various disciplines within digital cultural heritage. At any given moment, members can be found presenting their work in project meetings, seminars and conferences around the world. Now, through EuropeanaTech Insight, we can share that inspiring work with the whole community. In our first three issues, we're showcasing topics discussed at the EuropeanaTech 2015 Conference, an exciting event that gave rise to lots of innovative ideas and fruitful conversations on the themes of data quality, data modelling, open data, data re-use, multilingualism and discovery. Welcome, bienvenue, bienvenido, Välkommen, Tervetuloa to the first Issue of EuropeanaTech Insight. Are we talking your language? No? Well I can guarantee you Europeana is. One of the European Union's great beauties and strengths is its diversity. That diversity is perhaps most evident in the 24 different languages spoken in the EU. Making it possible for all European citizens to easily and seamlessly communicate in their native language with others who do not speak that language is a huge technical undertaking. Translating documents, news, speeches and historical texts was once exclusively done manually. Clearly, that takes a huge amount of time and resources and means that not everything can be translated... However, with the advances in machine and automatic translation, it's becoming more possible to provide instant and pretty accurate translations. Europeana provides access to over 40 million digitised cultural heritage items, offering content in over 33 languages. But what value does Europeana provide if people can only find results in their native language? None.
That's why the EuropeanaTech community is collectively working towards making it more possible for everyone to discover our collections in their native language. In this issue of EuropeanaTech Insight, we hear from community members who are making great strides in machine translation and enrichment tools to help improve not only access to data, but also how we retrieve, browse and understand it.
  9. Thaller, M.: From the digitized to the digital library (2001) 0.00
    0.0037463026 = product of:
      0.02997042 = sum of:
        0.02997042 = weight(_text_:work in 1159) [ClassicSimilarity], result of:
          0.02997042 = score(doc=1159,freq=6.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.2107168 = fieldWeight in 1159, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1159)
      0.125 = coord(1/8)
    
    Abstract
    The author holds a chair in Humanities Computer Science at the University of Cologne. For a number of years, he has been responsible for digitization projects, either as project director or as the person responsible for the technology being employed on the projects. The "Duderstadt project" (http://www.archive.geschichte.mpg.de/duderstadt/dud-e.htm) is one such project. It is one of the early large-scale manuscript servers, finished at the end of 1998, with approximately 80,000 high resolution documents representing the holdings of a city archive before the year 1600. The digital library of the Max-Planck-Institut für Europäische Rechtsgeschichte in Frankfurt (http://www.mpier.uni-frankfurt.de/dlib) is another project on which the author has worked, with currently approximately 900,000 pages. The author is currently project director of the project "Codices Electronici Ecclesiae Colonensis" (CEEC), which has just started and will ultimately consist of approximately 130,000 very high resolution color pages representing the complete holdings of the manuscript library of a medieval cathedral. It is being designed in close cooperation with the user community of such material. The project site (http://www.ceec.uni-koeln.de), while not yet officially opened, currently holds about 5,000 pages and is growing by 100 - 150 pages per day. Parallel to the CEEC model project, a conceptual project, the "Codex Electronicus Colonensis" (CEC), is at work on the definition of an abstract model for the representation of medieval codices in digital form. The following paper has grown out of the design considerations for the mentioned CEC project. The paper reflects a growing concern of the author's that some of the recent advances in digital (research) libraries are being diluted because it is not clear whether the advances really reach the audience for whom the projects would be most useful. 
Many, if not most, digitization projects have aimed at existing collections as individual servers. A digital library, however, should be more than a digitized one. It should be built according to principles that are not necessarily the same as those employed for paper collections, and it should be evaluated according to different measures which are not yet totally clear. The paper takes the form of six theses on various aspects of the ongoing transition to digital libraries. These theses have been presented at a forum on the German "retrodigitization" program. The program aims at the systematic conversion of library resources into digital form, concentrates for a number of reasons on material primarily of interest to the Humanities, and is funded by the German research council. As such this program is directly aimed at improving the overall infrastructure of academic research; other users of libraries are of interest, but are not central to the program.
    Content
    Theses: 1. Who should be addressed by digital libraries? How shall we measure whether we have reached the desired audience? Thesis: The primary audience for a digital library is neither the leading specialist in the respective field, nor the freshman, but the advanced student or young researcher and the "almost specialist". The primary topic of digitization projects should not be the absolute top range of the "treasures" of a collection, but those materials that we always have wanted to promote if they were just marginally more important. Whether we effectively serve them to the appropriate community of serious users can only be measured according to criteria that have yet to be developed. 2. The appropriate size of digital libraries and their access tools Thesis: Digital collections need a critical, minimal size to make their access worthwhile. In the end, users want to access information, not metadata or gimmicks. 3. The quality of digital objects Thesis: If digital library resources are to be integrated into the daily work of the research community, they must appear on the screen of the researcher in a quality that is useful in actual work. 4. The granularity / modularity of digital repositories Thesis: While digital libraries are self-contained bodies of information, they are not the basic unit that most users want to access. Users are, as a rule, more interested in the individual objects in the library and need a straightforward way to access them. 5. Digital collections as integrated reference systems Thesis: Traditional libraries support their collections with reference material. Digital collections need to find appropriate models to replicate this functionality. 6. Library and teaching Thesis: The use of multimedia in teaching is as much of a current buzzword as the creation of digital collections. It is obvious that they should be connected. A clear-cut separation of the two approaches is nevertheless necessary.
  10. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using harvest (2000) 0.00
    0.0036048815 = product of:
      0.028839052 = sum of:
        0.028839052 = weight(_text_:work in 6470) [ClassicSimilarity], result of:
          0.028839052 = score(doc=6470,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.20276234 = fieldWeight in 6470, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6470)
      0.125 = coord(1/8)
    
    Abstract
    PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular the use of metadata harvesting. One service is PhysDoc. This consists of a Harvest-based online information broker- and gatherer-network, which harvests information from the local web-servers of professional physics institutions worldwide (mostly in Europe and USA so far). PhysDoc focuses on scientific information posted by the individual scientist at his local server, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors who are responsible for the content and quality of their documents. PhysDis is an analogous service but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. To continuously involve authors, research groups, and national societies is considered crucial for a future stable service.
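The broker/gatherer pattern the abstract describes, gatherers collecting document metadata from institutional web servers and a broker merging them into one searchable index, can be sketched schematically; the server names and record fields below are invented for illustration, and the real PhysDoc is built on the Harvest toolkit rather than this toy code.

```python
def gather(server_records):
    """One gatherer: normalise the records of a single institution."""
    return [{"title": r["title"].strip(), "source": r["source"]}
            for r in server_records]

def broker(*gatherer_outputs):
    """Merge gatherer outputs into a single keyword-to-source index."""
    index = {}
    for records in gatherer_outputs:
        for rec in records:
            for word in rec["title"].lower().split():
                index.setdefault(word, []).append(rec["source"])
    return index

# Two hypothetical institutional servers contribute document lists.
inst_a = gather([{"title": "Quantum optics preprint ", "source": "uni-a"}])
inst_b = gather([{"title": "Optics of thin films", "source": "uni-b"}])
index = broker(inst_a, inst_b)
print(sorted(index["optics"]))
```

The point of the split is that quality control stays local (each institution curates its own server) while the broker provides the single search entry point.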
  11. Müller, B.; Poley, C.; Pössel, J.; Hagelstein, A.; Gübitz, T.: LIVIVO - the vertical search engine for life sciences (2017) 0.00
    0.0036048815 = product of:
      0.028839052 = sum of:
        0.028839052 = weight(_text_:work in 3368) [ClassicSimilarity], result of:
          0.028839052 = score(doc=3368,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.20276234 = fieldWeight in 3368, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3368)
      0.125 = coord(1/8)
    
    Abstract
    The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life sciences, like the One Health paradigm, require integrated methodologies to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by the employment of trans-disciplinary methodologies comprising molecular and cell biology, genetics, genomic, epigenomic, transcriptional and proteomic high throughput technologies with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life science entities. The service-oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is developed by ZB MED to deal with the heterogeneity of data as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept based relevancy rankings in LIVIVO.
  12. Birmingham, W.; Pardo, B.; Meek, C.; Shifrin, J.: ¬The MusArt music-retrieval system (2002) 0.00
    0.002883905 = product of:
      0.02307124 = sum of:
        0.02307124 = weight(_text_:work in 1205) [ClassicSimilarity], result of:
          0.02307124 = score(doc=1205,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 1205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=1205)
      0.125 = coord(1/8)
    
    Abstract
    Music websites are ubiquitous, and music downloads, such as MP3, are a major source of Web traffic. As the amount of musical content increases and the Web becomes an important mechanism for distributing music, we expect to see a rising demand for music search services. Many currently available music search engines rely on file names, song title, composer or performer as the indexing and retrieval mechanism. These systems do not make use of the musical content. We believe that a more natural, effective, and usable music-information retrieval (MIR) system should have audio input, where the user can query with musical content. We are developing a system called MusArt for audio-input MIR. With MusArt, as with other audio-input MIR systems, a user sings or plays a theme, hook, or riff from the desired piece of music. The system transcribes the query and searches for related themes in a database, returning the most similar themes, given some measure of similarity. We call this "retrieval by query." In this paper, we describe the architecture of MusArt. An important element of MusArt is metadata creation: we believe that it is essential to automatically abstract important musical elements, particularly themes. Theme extraction is performed by a subsystem called MME, which we describe later in this paper. Another important element of MusArt is its support for a variety of search engines, as we believe that MIR is too complex for a single approach to work for all queries. Currently, MusArt supports a dynamic time-warping search engine that has high recall, and a complementary stochastic search engine that searches over themes, emphasizing speed and relevancy. The stochastic search engine is discussed in this paper.
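The dynamic time-warping search the abstract mentions can be illustrated with a minimal DTW distance over pitch sequences; the pitch values and the absolute-difference cost below are simplifications for illustration, not the actual MusArt engine.

```python
def dtw_distance(query, theme):
    """Alignment cost between two pitch sequences (lower = more similar)."""
    n, m = len(query), len(theme)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - theme[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a query note
                                 cost[i][j - 1],      # skip a theme note
                                 cost[i - 1][j - 1])  # align the two notes
    return cost[n][m]

# A sung query matches its own theme more closely than an unrelated one.
query = [60, 62, 64, 60]   # MIDI pitches, e.g. C D E C
theme = [60, 62, 64, 60]
other = [72, 71, 69, 67]
print(dtw_distance(query, theme) < dtw_distance(query, other))
```

DTW tolerates the timing wobble of sung input, which is why the abstract pairs it (high recall, slower) with a faster stochastic engine over extracted themes.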
  13. Neubauer, W.: ¬The Knowledge portal or the vision of easy access to information (2009) 0.00
    0.002883905 = product of:
      0.02307124 = sum of:
        0.02307124 = weight(_text_:work in 2812) [ClassicSimilarity], result of:
          0.02307124 = score(doc=2812,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 2812, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2812)
      0.125 = coord(1/8)
    
    Abstract
    From a quantitative and qualitative point of view, the ETH Library offers its users an extensive choice of information services. In this respect all researchers, scientists and students have access to nearly all relevant information. This is one side of the coin. On the other hand, this broad but heterogeneous bundle of information sources has disadvantages which should not be underestimated: the more information services and information channels you have, the more complex it becomes to find what you need for your scientific work. A portal-like integration of all the different information resources is still missing. The vision and main goal of the project "Knowledge Portal" is to develop a central access system, a "single point of access", for all electronic information services. This means that all these sources - from the library's catalogue and the full-text in-house applications to external, licensed sources - should be accessible via one central Web service. Although the primary target group for this vision is the science community of ETH Zurich, the interested public should also be taken into account, since the library also has a nation-wide responsibility. The general idea of launching a complex project like this comes from a survey the library conducted one and a half years ago. We asked a defined sample of scientists what they expected from their library, and one constant answer was that they wanted a single point of access to all the electronic library services and, besides this, search processes that are as simple as possible. We took this demand as a mandate to develop a "single point of access" to all electronic services the library provides. The presentation gives an overview of the general idea of the project and describes its current status.
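A "single point of access" over heterogeneous backends is essentially federated search. A minimal sketch, with hypothetical stub backends standing in for the catalogue, full-text store and licensed databases (not the ETH implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def federated_search(query, sources):
    """Query several heterogeneous sources in parallel and merge the hits.
    Each source is any callable: query -> list of (title, score) pairs."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda source: source(query), sources)
    merged = [hit for hits in result_lists for hit in hits]
    # one ranked list, regardless of which backend produced each hit
    return sorted(merged, key=lambda hit: hit[1], reverse=True)

# illustrative stub backends
catalogue = lambda q: [("Catalogue record for " + q, 0.9)]
fulltext = lambda q: [("Full-text match for " + q, 0.7)]
results = federated_search("graphene", [catalogue, fulltext])
```

The user issues one query and never needs to know which backend answered, which is exactly the simplification the surveyed scientists asked for.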
  14. Niggemann, E.: Europeana: connecting cultural heritage (2009) 0.00
    0.002883905 = product of:
      0.02307124 = sum of:
        0.02307124 = weight(_text_:work in 2816) [ClassicSimilarity], result of:
          0.02307124 = score(doc=2816,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.16220987 = fieldWeight in 2816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2816)
      0.125 = coord(1/8)
    
    Abstract
    The European Commission's goal for Europeana is to make European information resources easier to use in an online environment. It will build on Europe's rich heritage, combining multicultural and multilingual environments with technological advances and new business models. The Europeana prototype is the result of a 2-year project that began in July 2007. Europeana.eu went live on 20 November 2008, launched by Viviane Reding, European Commissioner for Information Society and Media. Europeana.eu is about ideas and inspiration. It links the user to 2 million digital items: images, text, sounds and videos. Europeana is a Thematic Network funded by the European Commission under the eContentplus programme, as part of the i2010 policy. Originally known as the European digital library network - EDLnet - it is a partnership of 100 representatives of heritage and knowledge organisations and IT experts from throughout Europe. They contribute to the work packages that are solving the technical and usability issues. The project is run by a core team based in the national library of the Netherlands, the Koninklijke Bibliotheek. It builds on the project management and technical expertise developed by The European Library, which is a service of the Conference of European National Librarians. Content is added via so-called aggregators: national or domain-specific portals that aggregate digital content and channel it to Europeana. Most of these portals are being developed in the framework of EU-funded projects, e.g. the European Film Gateway, Athena and EuropeanaLocal. Overseeing the project is the EDL Foundation, which includes key European cultural heritage associations from the four domains.
The Foundation's statutes commit members to:
* providing access to Europe's cultural and scientific heritage through a cross-domain portal;
* co-operating in the delivery and sustainability of the joint portal;
* stimulating initiatives to bring together existing digital content;
* supporting digitisation of Europe's cultural and scientific heritage.
Europeana.eu is a prototype. Europeana Version 1.0 is being developed and will be launched in 2010 with links to over 6 million digital objects.
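The aggregator model described above, with national and domain portals channelling records into one central index, can be sketched as a simple record pipeline. The field names and identifiers here are illustrative, not Europeana's actual schema:

```python
def aggregate(portals):
    """Merge records channelled by national/domain aggregators into one
    central index, de-duplicating on identifier and keeping provenance."""
    index = {}
    for portal_name, records in portals.items():
        for record in records:
            record = dict(record, provider=portal_name)  # remember the aggregator
            index.setdefault(record["id"], record)       # first copy wins
    return index

# illustrative records from two aggregators named in the abstract
portals = {
    "EuropeanaLocal": [
        {"id": "urn:1", "title": "Map of Delft", "type": "image"},
    ],
    "European Film Gateway": [
        {"id": "urn:2", "title": "Newsreel 1928", "type": "video"},
        {"id": "urn:1", "title": "Map of Delft", "type": "image"},  # duplicate
    ],
}
index = aggregate(portals)
```

De-duplication and provenance tracking are the two jobs any such aggregation layer has to do before the merged records can be searched as one collection.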
  15. Place, E.: Internationale Zusammenarbeit bei Internet Subject Gateways (1999) 0.00
    0.0019688278 = product of:
      0.015750622 = sum of:
        0.015750622 = product of:
          0.031501245 = sum of:
            0.031501245 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
              0.031501245 = score(doc=4189,freq=2.0), product of:
                0.13569894 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03875087 = queryNorm
                0.23214069 = fieldWeight in 4189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4189)
          0.5 = coord(1/2)
      0.125 = coord(1/8)
    
    Date
    22. 6.2002 19:35:09
  16. Heery, R.; Carpenter, L.; Day, M.: Renardus project developments and the wider digital library context (2001) 0.00
    0.0018024407 = product of:
      0.014419526 = sum of:
        0.014419526 = weight(_text_:work in 1219) [ClassicSimilarity], result of:
          0.014419526 = score(doc=1219,freq=2.0), product of:
            0.14223081 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03875087 = queryNorm
            0.10138117 = fieldWeight in 1219, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1219)
      0.125 = coord(1/8)
    
    Abstract
    The Renardus project has brought together gateways that are 'large-scale national initiatives'. Within the European context this immediately introduces a diversity of organisations, as responsibility for national gateway initiatives is located differently, for example, in national libraries, national agencies with responsibility for educational technology infrastructure, and within universities or consortia of universities. Within the project, gateways are in some cases represented directly by their own personnel, in some cases by other departments or research centres, but not always by the people responsible for providing the gateway service. For example, the UK Resource Discovery Network (RDN) is represented in the project by UKOLN (formerly part of the Resource Discovery Network Centre) and the Institute of Learning and Research Technology (ILRT), University of Bristol -- an RDN 'hub' service provider -- who are primarily responsible for dissemination. Since the start of the project there have been changes within the organisational structures providing gateways and within the service ambitions of the gateways themselves. Such lack of stability is inherent in the Internet service environment, and it presents challenges to Renardus activity that has to be planned for a three-year period. For example, within the gateways' funding environment there is now an exploration of 'subject portals' offering more extended services than gateways. There is also potential commercial interest in including gateways as a value-added component of existing commercial services, and new offerings from possible competitors such as Google's Web Directory and country-based services. This short update on the Renardus project intends to inform the reader of progress within the project and to give some wider context to its main themes by locating the project within the broader arena of digital library activity.
There are twelve partners in the project from Denmark, Finland, France, Germany, the Netherlands and Sweden, as well as the UK. In particular we will focus on the specific activity in which UKOLN is involved: the architectural design, the specification of functional requirements, reaching consensus on a collaborative business model, etc. We will also consider issues of metadata management where all partners have interests. We will highlight implementation issues that connect to areas of debate elsewhere. In particular we see connections with activity related to establishing architectural models for digital library services, connections to the services that may emerge from metadata sharing using the Open Archives Initiative metadata sharing protocol, and links with work elsewhere on navigation of digital information spaces by means of controlled vocabularies.
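Cross-gateway navigation by controlled vocabulary, mentioned at the end of the abstract, amounts to mapping each gateway's local classes onto a common scheme and browsing the merged records through that scheme. A minimal sketch with invented gateway names and mappings (Renardus itself used mappings to a common classification for cross-browsing):

```python
# hypothetical local-class -> common-scheme mappings, one table per gateway
MAPPINGS = {
    "gateway_a": {"Phys": "530 Physics", "Chem": "540 Chemistry"},
    "gateway_b": {"NatSci/Physics": "530 Physics"},
}

def browse(common_class, records):
    """Collect records from all gateways whose local class maps to the
    requested class in the shared scheme."""
    hits = []
    for record in records:
        mapped = MAPPINGS[record["gateway"]].get(record["local_class"])
        if mapped == common_class:
            hits.append(record)
    return hits

# illustrative records from two gateways with different local schemes
records = [
    {"gateway": "gateway_a", "local_class": "Phys", "title": "Preprint server"},
    {"gateway": "gateway_b", "local_class": "NatSci/Physics", "title": "Physics portal"},
    {"gateway": "gateway_a", "local_class": "Chem", "title": "Spectra database"},
]
hits = browse("530 Physics", records)
```

One browse request in the common scheme thus reaches records classified quite differently by each partner gateway, which is the point of maintaining the vocabulary mappings.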