Search (86 results, page 1 of 5)

  • theme_ss:"Information Gateway"
  1. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.06
    0.056690704 = product of:
      0.11338141 = sum of:
        0.029262928 = weight(_text_:data in 1447) [ClassicSimilarity], result of:
          0.029262928 = score(doc=1447,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.19762816 = fieldWeight in 1447, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=1447)
        0.08411848 = sum of:
          0.058740605 = weight(_text_:processing in 1447) [ClassicSimilarity], result of:
            0.058740605 = score(doc=1447,freq=6.0), product of:
              0.18956426 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046827413 = queryNorm
              0.30987173 = fieldWeight in 1447, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.03125 = fieldNorm(doc=1447)
          0.025377871 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
            0.025377871 = score(doc=1447,freq=2.0), product of:
              0.16398162 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046827413 = queryNorm
              0.15476047 = fieldWeight in 1447, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1447)
      0.5 = coord(2/4)
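    The relevance breakdowns shown with each result are Lucene ClassicSimilarity "explain" trees. As a reading aid, the minimal Python sketch below re-derives this first score using only the numbers printed above (tf = sqrt(termFreq), idf, queryNorm, fieldNorm and the coord factor); it is illustrative arithmetic, not part of the bibliographic record.

      import math

      query_norm = 0.046827413   # queryNorm shared by all terms
      field_norm = 0.03125       # fieldNorm(doc=1447)

      def term_score(freq, idf):
          # ClassicSimilarity per-term score = queryWeight * fieldWeight
          query_weight = idf * query_norm                      # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm    # tf * idf * fieldNorm
          return query_weight * field_weight

      data_term = term_score(4.0, 3.1620505)        # ~0.029262928
      processing_term = term_score(6.0, 4.048147)   # ~0.058740605
      term_22 = term_score(2.0, 3.5018296)          # ~0.025377871

      # coord(2/4) = 0.5: only two of the four query clauses matched this document.
      print((data_term + (processing_term + term_22)) * 0.5)   # ~0.056690704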
    
    Abstract
    The BAER Web Resources Group was charged in October 1999 with defining and describing the parameters of electronic resources that do not clearly belong to the categories being defined by the BAER Digital Group or the BAER Electronic Journals Group. After some difficulty identifying precisely which resources fell under the Group's charge, we finally named the following types of resources for our consideration: web sites, electronic texts, indexes, databases and abstracts, online reference resources, and networked and non-networked CD-ROMs. Electronic resources are a vast and growing collection that touches nearly every department within the Library. It is unrealistic to think one department can effectively administer all aspects of the collection. The Group then began to focus on the concern of bibliographic access to these varied resources, and to define parameters for handling or processing them within the Library. Some key elements became evident as the work progressed. * Selection process of resources to be acquired for the collection * Duplication of effort * Use of CORC * Resource Finder design * Maintenance of Resource Finder * CD-ROMs not networked * Communications * Voyager search limitations. An unexpected collaboration with the Web Development Committee on the Resource Finder helped to steer the Group to more detailed descriptions of bibliographic access. This collaboration included development of data elements for the Resource Finder database, and some discussions on Library staff processing of the resources. The Web Resources Group invited expert testimony to help the Group broaden its view to envision public use of the resources and discuss concerns related to technical services processing. The first testimony came from members of the Resource Finder Committee. Some background information on the Web Development Resource Finder Committee was shared. The second testimony was from librarians who select electronic texts. Three main themes were addressed: accessing CD-ROMs; the issue of including non-networked CD-ROMs in the Resource Finder; and some special concerns about electronic texts. The third testimony came from librarians who select indexes and abstracts and also provide Reference services. Appendices to this report include minutes of the meetings with the experts (Appendix A), a list of proposed data elements to be used in the Resource Finder (Appendix B), and recommendations made to the Resource Finder Committee (Appendix C). Below are summaries of the key elements.
    Date
    21. 4.2002 10:22:31
  2. Sasaki, H.; Kiyoki, Y.: A formulation for patenting content-based retrieval processes in digital libraries (2005) 0.04
    0.043974925 = product of:
      0.08794985 = sum of:
        0.043894395 = weight(_text_:data in 998) [ClassicSimilarity], result of:
          0.043894395 = score(doc=998,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 998, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=998)
        0.044055454 = product of:
          0.08811091 = sum of:
            0.08811091 = weight(_text_:processing in 998) [ClassicSimilarity], result of:
              0.08811091 = score(doc=998,freq=6.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.4648076 = fieldWeight in 998, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=998)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    In this paper, we present a formulation and case studies of the conditions for patenting content-based retrieval processes in digital libraries, especially in image libraries. Inventors and practitioners demand a formulation of the conditions for patenting data-processing processes as computer-related inventions in the form of computer programs. A process for content-based retrieval often consists of a combination of prior disclosed means and also comprises means for parameter setting that is adjusted to retrieve specific kinds of images in certain narrow domains. We focus on the requirements for technical advancement (nonobviousness) in the combination of data-processing means, i.e., processes, and for specification (enablement) of the means for parameter setting in computer programs. Our formulation follows the standards of patent examination and litigation on computer-related inventions in the US. We confirm the feasibility and accountability of our formulation by applying it to several inventions patented in the US.
    Source
    Information processing and management. 41(2005) no.1, S.57-74
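    To picture what the abstract above calls a "combination of prior disclosed means" together with "means for parameter setting", the sketch below shows a generic content-based image retrieval process with one domain-tunable threshold. The feature descriptor, similarity measure and threshold value are placeholder choices for illustration, not the authors' formulation.

      import numpy as np

      def extract_features(image):
          # Prior-disclosed means no. 1 (placeholder): a grey-level histogram as the image descriptor.
          hist, _ = np.histogram(image, bins=16, range=(0, 255))
          return hist / hist.sum()   # normalise so the bins sum to 1

      def similarity(a, b):
          # Prior-disclosed means no. 2 (placeholder): histogram intersection.
          return float(np.minimum(a, b).sum())

      def retrieve(query_image, collection, threshold):
          # Means for parameter setting: 'threshold' is the knob that would be tuned to a
          # specific narrow image domain (e.g. radiographs vs. trademark images).
          q = extract_features(query_image)
          return [image_id for image_id, image in collection.items()
                  if similarity(q, extract_features(image)) >= threshold]

      # Usage with random stand-in images:
      rng = np.random.default_rng(0)
      collection = {i: rng.integers(0, 256, size=(32, 32)) for i in range(5)}
      print(retrieve(rng.integers(0, 256, size=(32, 32)), collection, threshold=0.8))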
  3. Fischer, T.; Neuroth, H.: SSG-FI - special subject gateways to high quality Internet resources for scientific users (2000) 0.03
    0.025035713 = product of:
      0.050071426 = sum of:
        0.031038022 = weight(_text_:data in 4873) [ClassicSimilarity], result of:
          0.031038022 = score(doc=4873,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 4873, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4873)
        0.019033402 = product of:
          0.038066804 = sum of:
            0.038066804 = weight(_text_:22 in 4873) [ClassicSimilarity], result of:
              0.038066804 = score(doc=4873,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.23214069 = fieldWeight in 4873, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4873)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Project SSG-FI at SUB Göttingen provides special subject gateways to international high-quality Internet resources for scientific users. Internet sites are selected by subject specialists and described using an extension of qualified Dublin Core metadata. A basic evaluation is added. These descriptions are freely available and can be searched and browsed. There are now subject gateways for three subject areas: earth sciences (GeoGuide); mathematics (MathGuide); and Anglo-American culture (split into HistoryGuide and AnglistikGuide). Together they receive about 3,300 'hard' requests per day, thus reaching over 1 million requests per year. The project SSG-FI behind these guides is open to collaboration. Institutions and private persons wishing to contribute can notify the SSG-FI team or send full data sets. Regular contributors can request registration with the project to access the database via the Internet and create and edit records.
    Date
    22. 6.2002 19:40:42
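    The SSG-FI descriptions are based on an extension of qualified Dublin Core. A minimal sketch of what such a record might look like follows; apart from the standard DC/DCTERMS elements, the extension fields and all values are assumptions for illustration, not the project's actual schema.

      # Hypothetical SSG-FI-style description: qualified Dublin Core plus an evaluation note.
      record = {
          "dc:title": "Example earth-sciences portal",
          "dc:subject": ["earth sciences", "geophysics"],        # chosen by a subject specialist
          "dc:identifier": "https://example.org/geo-portal",     # placeholder URL
          "dc:language": "en",
          "dcterms:audience": "scientific users",
          "ssgfi:evaluation": "basic evaluation: content ***, presentation **",  # assumed extension field
          "ssgfi:guide": "GeoGuide",
      }
      for field, value in record.items():
          print(f"{field}: {value}")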
  4. Borgman, C.L.; Smart, L.J.; Millwood, K.A.; Finley, J.R.; Champeny, L.; Gilliland, A.J.; Leazer, G.H.: Comparing faculty information seeking in teaching and research : implications for the design of digital libraries (2005) 0.02
    0.02426428 = product of:
      0.04852856 = sum of:
        0.035839625 = weight(_text_:data in 3231) [ClassicSimilarity], result of:
          0.035839625 = score(doc=3231,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24204408 = fieldWeight in 3231, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=3231)
        0.012688936 = product of:
          0.025377871 = sum of:
            0.025377871 = weight(_text_:22 in 3231) [ClassicSimilarity], result of:
              0.025377871 = score(doc=3231,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.15476047 = fieldWeight in 3231, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3231)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    ADEPT is a 5-year project whose goals are to develop, deploy, and evaluate inquiry learning capabilities for the Alexandria Digital Library, an extant digital library of primary sources in geography. We interviewed nine geography faculty members who teach undergraduate courses about their information seeking for research and teaching and their use of information resources in teaching. These data were supplemented by interviews with four faculty members from another ADEPT study about the nature of knowledge in geography. Among our key findings are that geography faculty are more likely to encounter useful teaching resources while seeking research resources than vice versa, although the influence goes in both directions. Their greatest information needs are for research data, maps, and images. They desire better searching by concept or theme, in addition to searching by location and place name. They make extensive use of their own research resources in their teaching. Among the implications for functionality and architecture of geographic digital libraries for educational use are that personal digital libraries are essential, because individual faculty members have personalized approaches to selecting, collecting, and organizing teaching resources. Digital library services for research and teaching should include the ability to import content from common office software and to store content in standard formats that can be exported to other applications. Digital library services can facilitate sharing among faculty but cannot overcome barriers such as intellectual property rights, access to proprietary research data, or the desire of individuals to maintain control over their own resources. Faculty use of primary and secondary resources needs to be better understood if we are to design successful digital libraries for research and teaching.
    Date
    3. 6.2005 20:40:22
  5. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H.: The Europeana Data Model (EDM) (2010) 0.02
    0.017350782 = product of:
      0.06940313 = sum of:
        0.06940313 = weight(_text_:data in 3967) [ClassicSimilarity], result of:
          0.06940313 = score(doc=3967,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.46871632 = fieldWeight in 3967, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3967)
      0.25 = coord(1/4)
    
    Abstract
    The Europeana Data Model (EDM) is a new approach towards structuring and representing data delivered to Europeana by the various contributing cultural heritage institutions. The model aims at greater expressivity and flexibility in comparison to the current Europeana Semantic Elements (ESE), which it is destined to replace. The design principles underlying the EDM are based on the core principles and best practices of the Semantic Web and Linked Data efforts to which Europeana wants to contribute. The model itself builds upon established standards like RDF(S), OAI-ORE, SKOS, and Dublin Core. It acts as a common top-level ontology which retains original data models and information perspectives while at the same time enabling interoperability. The paper elaborates on the aforementioned aspects and the design principles which drove the development of the EDM.
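    As a small illustration of the modelling style described above, the sketch below builds a three-part EDM description (provided object, web resource, aggregation) with rdflib. The class and property names follow the published EDM specification as commonly documented; the item URIs and the title are invented.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DC, RDF

      EDM = Namespace("http://www.europeana.eu/schemas/edm/")
      ORE = Namespace("http://www.openarchives.org/ore/terms/")

      g = Graph()
      g.bind("edm", EDM); g.bind("ore", ORE); g.bind("dc", DC)

      cho = URIRef("http://example.org/item/123#cho")          # the provided cultural heritage object
      web = URIRef("http://example.org/item/123.jpg")          # one digital representation
      agg = URIRef("http://example.org/item/123#aggregation")  # ties the provider's data about both together

      g.add((cho, RDF.type, EDM.ProvidedCHO))
      g.add((cho, DC.title, Literal("Example painting")))
      g.add((web, RDF.type, EDM.WebResource))
      g.add((agg, RDF.type, ORE.Aggregation))
      g.add((agg, EDM.aggregatedCHO, cho))
      g.add((agg, EDM.isShownBy, web))

      print(g.serialize(format="turtle"))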
  6. Zhu, X.; Freeman, M.A.: An evaluation of U.S. municipal open data portals : a user interaction framework (2019) 0.01
    0.014458986 = product of:
      0.057835944 = sum of:
        0.057835944 = weight(_text_:data in 5502) [ClassicSimilarity], result of:
          0.057835944 = score(doc=5502,freq=10.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.39059696 = fieldWeight in 5502, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5502)
      0.25 = coord(1/4)
    
    Abstract
    As an increasing number of open government data (OGD) portals are created, an evaluation method is needed to assess these portals. In this study, we drew from the existing principles and evaluation methods to develop a User Interaction Framework, with concrete criteria in five dimensions: Access, Trust, Understand, Engage-integrate, and Participate. The framework was then used to evaluate the current OGD sites created and maintained by 34 U.S. municipal government agencies. The results show that, overall, portals perform well in terms of providing access, but not so well in helping users understand and engage with data. These findings indicate room for improvement in multiple areas and suggest potential roles for information professionals as data mediators. The study also reveals that portals using the Socrata platform performed better, regarding user access, trust, engagement, and participation. However, the variability among portals indicates that some portals should improve their platforms to achieve greater user engagement and participation. In addition, city governments need to develop clear plans about what data should be available and how to make them available to their public.
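    The framework above rates portals along five named dimensions. The toy sketch below shows how such dimension ratings could be aggregated into an overall portal score; the ratings and the unweighted mean are invented for illustration, only the five dimension names come from the abstract.

      # Five dimensions of the User Interaction Framework (names from the abstract above).
      DIMENSIONS = ["Access", "Trust", "Understand", "Engage-integrate", "Participate"]

      def overall_score(ratings):
          # Simple unweighted mean of per-dimension ratings on a 0-1 scale (illustrative choice).
          return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

      example_portal = {"Access": 0.9, "Trust": 0.7, "Understand": 0.5,
                        "Engage-integrate": 0.4, "Participate": 0.3}
      print(f"overall: {overall_score(example_portal):.2f}")   # 0.56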
  7. EuropeanaTech and Multilinguality : Issue 1 of EuropeanaTech Insight (2015) 0.01
    0.013686483 = product of:
      0.05474593 = sum of:
        0.05474593 = weight(_text_:data in 1832) [ClassicSimilarity], result of:
          0.05474593 = score(doc=1832,freq=14.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.36972845 = fieldWeight in 1832, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=1832)
      0.25 = coord(1/4)
    
    Abstract
    Welcome to the very first issue of EuropeanaTech Insight, a multimedia publication about research and development within the EuropeanaTech community. EuropeanaTech is a very active community. It spans all of Europe and is made up of technical experts from the various disciplines within digital cultural heritage. At any given moment, members can be found presenting their work in project meetings, seminars and conferences around the world. Now, through EuropeanaTech Insight, we can share that inspiring work with the whole community. In our first three issues, we're showcasing topics discussed at the EuropeanaTech 2015 Conference, an exciting event that gave rise to lots of innovative ideas and fruitful conversations on the themes of data quality, data modelling, open data, data re-use, multilingualism and discovery. Welcome, bienvenue, bienvenido, Välkommen, Tervetuloa to the first Issue of EuropeanaTech Insight. Are we talking your language? No? Well I can guarantee you Europeana is. One of the European Union's great beauties and strengths is its diversity. That diversity is perhaps most evident in the 24 different languages spoken in the EU. Making it possible for all European citizens to easily and seamlessly communicate in their native language with others who do not speak that language is a huge technical undertaking. Translating documents, news, speeches and historical texts was once exclusively done manually. Clearly, that takes a huge amount of time and resources and means that not everything can be translated... However, with the advances in machine and automatic translation, it's becoming more possible to provide instant and pretty accurate translations. Europeana provides access to over 40 million digitised cultural heritage objects offering content in over 33 languages. But what value does Europeana provide if people can only find results in their native language? None. That's why the EuropeanaTech community is collectively working towards making it more possible for everyone to discover our collections in their native language. In this issue of EuropeanaTech Insight, we hear from community members who are making great strides in machine translation and enrichment tools to help improve not only access to data, but also how we retrieve, browse and understand it.
    Content
    Stiller, J.: Automatic Solutions to Improve Multilingual Access in Europeana / Vila-Suero, D. and A. Gómez-Pérez: Multilingual Linked Data / Pilos, S.: Automated Translation: Connecting Culture / Karlgren, J.: Big Data, Libraries, and Multilingual New Text / Ziedins, J.: Latvia translates with hugo.lv
  8. Stempfhuber, M.; Zapilko, B.: Modelling text-fact-integration in digital libraries (2009) 0.01
    0.013439858 = product of:
      0.053759433 = sum of:
        0.053759433 = weight(_text_:data in 3393) [ClassicSimilarity], result of:
          0.053759433 = score(doc=3393,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.3630661 = fieldWeight in 3393, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3393)
      0.25 = coord(1/4)
    
    Abstract
    Digital Libraries currently face the challenge of integrating many different types of research information (e.g. publications, primary data, experts' profiles, institutional profiles, project information etc.) according to their scientific users' needs. To date no general, integrated model for knowledge organization and retrieval in Digital Libraries exists. This causes the problem of structural and semantic heterogeneity due to the wide range of metadata standards, indexing vocabularies and indexing approaches used for different types of information. The research presented in this paper focuses on areas in which activities are being undertaken in the field of Digital Libraries in order to treat semantic interoperability problems. We present a model for the integrated retrieval of factual and textual data which combines multiple approaches to semantic interoperability and sets them into context. Embedded in the research cycle, traditional content indexing methods for publications meet the newer, but rarely used ontology-based approaches which seem to be better suited for representing complex information like the one contained in survey data. The benefits of our model are (1) easy re-use of available knowledge organisation systems and (2) reduced efforts for domain modelling with ontologies.
  9. Fang, L.: A developing search service : heterogeneous resources integration and retrieval system (2004) 0.01
    0.01293251 = product of:
      0.05173004 = sum of:
        0.05173004 = weight(_text_:data in 1193) [ClassicSimilarity], result of:
          0.05173004 = score(doc=1193,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 1193, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1193)
      0.25 = coord(1/4)
    
    Abstract
    This article describes two approaches for searching heterogeneous resources, which are explained as they are used in two corresponding existing systems: RIRS (Resource Integration Retrieval System) and HRUSP (Heterogeneous Resource Union Search Platform). On analyzing the existing systems, a possible framework, the MUSP (Multimetadata-Based Union Search Platform), is presented. Libraries now face a dilemma. On one hand, libraries subscribe to many types of database retrieval systems that are produced by various providers. The libraries build their data and information systems independently. This results in highly heterogeneous and distributed systems at the technical level (e.g., different operating systems and user interfaces) and at the conceptual level (e.g., the same objects are named using different terms). On the other hand, end users want to access all these heterogeneous data via a union interface, without having to know the structure of each information system or the different retrieval methods used by the systems. Libraries must achieve a harmony between information providers and users. In order to bridge the gap between the service providers and the users, it would seem that all source databases would need to be rebuilt according to a uniform data structure and query language, but this seems impossible. Fortunately, however, libraries and information and technology providers are now making an effort to find a middle course that meets the requirements of both data providers and users. They are doing this through resource integration.
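    The idea of a union search over heterogeneous backends, each with its own field names, can be sketched as below: every source-specific adapter maps native records onto one shared structure before the results are merged. The adapters and field names are invented for illustration and do not describe RIRS, HRUSP or MUSP themselves.

      from dataclasses import dataclass

      @dataclass
      class UnionRecord:
          title: str
          creator: str
          source: str

      # Hypothetical adapters: each wraps one backend and maps its native fields
      # onto the shared record structure so results can be merged into one list.
      def search_catalog(query):
          hits = [{"ti": "Digital libraries today", "au": "Doe, J."}]   # stand-in backend response
          return [UnionRecord(h["ti"], h["au"], "OPAC") for h in hits]

      def search_ejournals(query):
          hits = [{"article_title": "Union searching", "author": "Roe, R."}]
          return [UnionRecord(h["article_title"], h["author"], "E-journal index") for h in hits]

      def union_search(query):
          results = []
          for backend in (search_catalog, search_ejournals):
              results.extend(backend(query))
          return results

      for rec in union_search("digital library"):
          print(rec)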
  10. Müller, B.; Poley, C.; Pössel, J.; Hagelstein, A.; Gübitz, T.: LIVIVO - the vertical search engine for life sciences (2017) 0.01
    0.01293251 = product of:
      0.05173004 = sum of:
        0.05173004 = weight(_text_:data in 3368) [ClassicSimilarity], result of:
          0.05173004 = score(doc=3368,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 3368, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3368)
      0.25 = coord(1/4)
    
    Abstract
    The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life sciences, like the One Health paradigm, require integrated methodologies in order to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by the employment of trans-disciplinary methodologies comprising molecular and cell biology, genetics, genomic, epigenomic, transcriptional and proteomic high-throughput technologies with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high-throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life science entities. The service-oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is developed by ZB MED to deal with the heterogeneity of data as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept-based relevancy rankings in LIVIVO.
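    The entity-based filtering mentioned above can be pictured with a generic faceted-filter sketch; the records, entity types and values below are invented and do not reflect LIVIVO's actual data model or interfaces.

      # Stand-in citation records with pre-extracted life-science entity annotations.
      records = [
          {"title": "Zoonotic influenza surveillance", "entities": {"disease": {"influenza"}, "species": {"swine"}}},
          {"title": "Wheat genome assembly", "entities": {"species": {"wheat"}}},
      ]

      def facet_filter(recs, entity_type, value):
          # Keep only records annotated with the requested entity value (one facet restriction).
          return [r for r in recs if value in r["entities"].get(entity_type, set())]

      for r in facet_filter(records, "disease", "influenza"):
          print(r["title"])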
  11. Buckland, M.; Lancaster, L.: Combining place, time, and topic : the Electronic Cultural Atlas Initiative (2004) 0.01
    0.012671219 = product of:
      0.050684877 = sum of:
        0.050684877 = weight(_text_:data in 1194) [ClassicSimilarity], result of:
          0.050684877 = score(doc=1194,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.342302 = fieldWeight in 1194, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=1194)
      0.25 = coord(1/4)
    
    Abstract
    The Electronic Cultural Atlas Initiative was formed to encourage scholarly communication and the sharing of data among researchers who emphasize the relationships between place, time, and topic in the study of culture and history. In an effort to develop better tools and practices, The Electronic Cultural Atlas Initiative has sponsored the collaborative development of software for downloading and editing geo-temporal data to create dynamic maps, a clearinghouse of shared datasets accessible through a map-based interface, projects on format and content standards for gazetteers and time period directories, studies to improve geo-temporal aspects in online catalogs, good practice guidelines for preparing e-publications with dynamic geo-temporal displays, and numerous international conferences. The Electronic Cultural Atlas Initiative (ECAI) grew out of discussions among an international group of scholars interested in religious history and area studies. It was established as a unit under the Dean of International and Area Studies at the University of California, Berkeley in 1997. ECAI's mission is to promote an international collaborative effort to transform humanities scholarship through use of the digital environment to share data and by placing greater emphasis on the notions of place and time. Professor Lewis Lancaster is the Director. Professor Michael Buckland, with a library and information studies background, joined the effort as Co-Director in 2000. Assistance from the Lilly Foundation, the California Digital Library (University of California), and other sources has enabled ECAI to nurture a community; to develop a catalog ("clearinghouse") of Internet-accessible georeferenced resources; to support the development of software for obtaining, editing, manipulating, and dynamically visualizing geo-temporally encoded data; and to undertake research and development projects as needs and resources determine. Several hundred scholars worldwide, from a wide range of disciplines, are informally affiliated with ECAI, all interested in shared use of historical and cultural data. The Academia Sinica (Taiwan), The British Library, and the Arts and Humanities Data Service (UK) are among the well-known affiliates. However, ECAI mainly comprises individual scholars and small teams working on their own small projects on a very wide range of cultural, social, and historical topics. Numerous specialist committees have been fostering standardization and collaboration by area and by themes such as trade-routes, cities, religion, and sacred sites.
  12. MacLeod, R.: Promoting a subject gateway : a case study from EEVL (Edinburgh Engineering Virtual Library) (2000) 0.01
    0.011215541 = product of:
      0.044862162 = sum of:
        0.044862162 = product of:
          0.089724325 = sum of:
            0.089724325 = weight(_text_:22 in 4872) [ClassicSimilarity], result of:
              0.089724325 = score(doc=4872,freq=4.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.54716086 = fieldWeight in 4872, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4872)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 6.2002 19:40:22
  13. Collins, L.M.; Hussell, J.A.T.; Hettinga, R.K.; Powell, J.E.; Mane, K.K.; Martinez, M.L.B.: Information visualization and large-scale repositories (2007) 0.01
    0.011199882 = product of:
      0.04479953 = sum of:
        0.04479953 = weight(_text_:data in 2596) [ClassicSimilarity], result of:
          0.04479953 = score(doc=2596,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.30255508 = fieldWeight in 2596, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2596)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - To describe how information visualization can be used in the design of interface tools for large-scale repositories. Design/methodology/approach - One challenge for designers in the context of large-scale repositories is to create interface tools that help users find specific information of interest. In order to be most effective, these tools need to leverage the cognitive characteristics of the target users. At the Los Alamos National Laboratory, the authors' target users are scientists and engineers who can be characterized as higher-order, analytical thinkers. In this paper, the authors describe a visualization tool they have created for making the authors' large-scale digital object repositories more usable for them: SearchGraph, which facilitates data set analysis by displaying search results in the form of a two- or three-dimensional interactive scatter plot. Findings - Using SearchGraph, users can view a condensed, abstract visualization of search results. They can view the same dataset from multiple perspectives by manipulating several display, sort, and filter options. Doing so allows them to see different patterns in the dataset. For example, they can apply a logarithmic transformation in order to create more scatter in a dense cluster of data points or they can apply filters in order to focus on a specific subset of data points. Originality/value - SearchGraph is a creative solution to the problem of how to design interface tools for large-scale repositories. It is particularly appropriate for the authors' target users, who are scientists and engineers. It extends the work of the first two authors on ActiveGraph, a read-write digital library visualization tool.
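    The log transformation described in the findings above (spreading out a dense cluster of points) can be illustrated with a small matplotlib sketch; the metadata fields and random values are stand-ins, not SearchGraph itself.

      import numpy as np
      import matplotlib.pyplot as plt

      # Stand-in search-result metadata: one point per retrieved document.
      rng = np.random.default_rng(0)
      year = rng.integers(1990, 2007, size=300)
      size_kb = rng.lognormal(mean=4.0, sigma=1.5, size=300)

      fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
      ax1.scatter(year, size_kb, s=8)
      ax1.set(title="raw values", xlabel="year", ylabel="size (KB)")

      # The log transform spreads the dense cluster of small documents apart.
      ax2.scatter(year, np.log10(size_kb), s=8)
      ax2.set(title="log-transformed", xlabel="year", ylabel="log10 size (KB)")

      plt.tight_layout()
      plt.show()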
  14. Subject gateways (2000) 0.01
    0.011102819 = product of:
      0.044411276 = sum of:
        0.044411276 = product of:
          0.08882255 = sum of:
            0.08882255 = weight(_text_:22 in 6483) [ClassicSimilarity], result of:
              0.08882255 = score(doc=6483,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.5416616 = fieldWeight in 6483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6483)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 6.2002 19:43:01
  15. Borgman, C.L.: Multi-media, multi-cultural, and multi-lingual digital libraries : or how do we exchange data In 400 languages? (1997) 0.01
    0.011087317 = product of:
      0.044349268 = sum of:
        0.044349268 = weight(_text_:data in 1263) [ClassicSimilarity], result of:
          0.044349268 = score(doc=1263,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29951423 = fieldWeight in 1263, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1263)
      0.25 = coord(1/4)
    
    Abstract
    The Internet would not be very useful if communication were limited to textual exchanges between speakers of English located in the United States. Rather, its value lies in its ability to enable people from multiple nations, speaking multiple languages, to employ multiple media in interacting with each other. While computer networks broke through national boundaries long ago, they remain much more effective for textual communication than for exchanges of sound, images, or mixed media -- and more effective for communication in English than for exchanges in most other languages, much less interactions involving multiple languages. Supporting searching and display in multiple languages is an increasingly important issue for all digital libraries accessible on the Internet. Even if a digital library contains materials in only one language, the content needs to be searchable and displayable on computers in countries speaking other languages. We need to exchange data between digital libraries, whether in a single language or in multiple languages. Data exchanges may be large batch updates or interactive hyperlinks. In any of these cases, character sets must be represented in a consistent manner if exchanges are to succeed. Issues of interoperability, portability, and data exchange related to multi-lingual character sets have received surprisingly little attention in the digital library community or in discussions of standards for information infrastructure, except in Europe. The landmark collection of papers on Standards Policy for Information Infrastructure, for example, contains no discussion of multi-lingual issues except for a passing reference to the Unicode standard. The goal of this short essay is to draw attention to the multi-lingual issues involved in designing digital libraries accessible on the Internet. Many of the multi-lingual design issues parallel those of multi-media digital libraries, a topic more familiar to most readers of D-Lib Magazine. This essay draws examples from multi-media DLs to illustrate some of the urgent design challenges in creating a globally distributed network serving people who speak many languages other than English. First we introduce some general issues of medium, culture, and language, then discuss the design challenges in the transition from local to global systems, lastly addressing technical matters. The technical issues involve the choice of character sets to represent languages, similar to the choices made in representing images or sound. However, the scale of the language problem is far greater. Standards for multi-media representation are being adopted fairly rapidly, in parallel with the availability of multi-media content in electronic form. By contrast, we have hundreds (and sometimes thousands) of years worth of textual materials in hundreds of languages, created long before data encoding standards existed. Textual content from past and present is being encoded in language and application-specific representations that are difficult to exchange without losing data -- if they exchange at all. We illustrate the multi-language DL challenge with examples drawn from the research library community, which typically handles collections of materials in 400 or so languages. These are problems faced not only by developers of digital libraries, but by those who develop and manage any communication technology that crosses national or linguistic boundaries.
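    The character-set problem described above, multilingual text that survives in one encoding but is silently damaged in another, can be seen in a few lines of Python; the sample string is arbitrary.

      text = "Bibliothèque nationale / 図書館 / المكتبة"

      # UTF-8 (a Unicode encoding) round-trips all of these scripts without loss.
      utf8_bytes = text.encode("utf-8")
      print(len(utf8_bytes), "bytes; round-trips:", utf8_bytes.decode("utf-8") == text)

      # A legacy single-byte character set cannot represent them at all ...
      try:
          text.encode("latin-1")
      except UnicodeEncodeError as err:
          print("latin-1 cannot encode:", err)

      # ... and a lossy fallback silently discards the non-Latin characters.
      print(text.encode("latin-1", errors="replace").decode("latin-1"))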
  16. Gradmann, S.; Iwanowa, J.; Dröge, E.; Hennicke, S.; Trkulja, V.; Olensky, M.; Stein, C.; Struck, A.; Baierer, K.: Modellierung und Ontologien im Wissensmanagement : Erfahrungen aus drei Projekten im Umfeld von Europeana und des DFG-Exzellenzclusters Bild Wissen Gestaltung an der Humboldt-Universität zu Berlin (2013) 0.01
    0.010973599 = product of:
      0.043894395 = sum of:
        0.043894395 = weight(_text_:data in 904) [ClassicSimilarity], result of:
          0.043894395 = score(doc=904,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.29644224 = fieldWeight in 904, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=904)
      0.25 = coord(1/4)
    
    Abstract
    This article describes ongoing work and results of the Knowledge Management research group. They arose mainly from the projects Europeana v2.0 and Digitised Manuscripts to Europeana (DM2E), both based at the Chair of Knowledge Management, as well as from subprojects of the recently launched DFG excellence cluster Bild Wissen Gestaltung. The projects deal with specialisations of the Europeana Data Model, the conversion of metadata into RDF, and the automated and user-driven semantic enrichment of these data using applications developed in-house or adapted for this purpose, as well as with the modelling of research activities, which is currently tailored to the digital humanities. Common to all projects is the conceptual or technical modelling of information entities or user activities, which are ultimately represented in the Linked Data Web.
  17. Hommrich, D.; Pasucha, B.; Razum, M.; Riehm, U.: Normdaten und Datenanreicherung beim Fachportal openTA (2018) 0.01
    0.0103460075 = product of:
      0.04138403 = sum of:
        0.04138403 = weight(_text_:data in 2585) [ClassicSimilarity], result of:
          0.04138403 = score(doc=2585,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2794884 = fieldWeight in 2585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0625 = fieldNorm(doc=2585)
      0.25 = coord(1/4)
    
    Abstract
    openTA is a web-based subject portal for the interdisciplinary research field of technology assessment (TA). The article first discusses the history of openTA and presents its main features. The focus is on the planned use of authority data to enrich the data of the openTA services and to publish them as Linked Open Data. Both intellectual and (semi-)automatic methods are to be used to uniquely identify entities such as persons, organisations, publications and subject headings.
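    As a toy illustration of the authority-data enrichment described above, the sketch below attaches an authority identifier to a person entity in a record; the lookup table and the identifier are invented placeholders, not the openTA implementation.

      # Hypothetical local lookup: preferred name -> authority record URI (placeholder identifier).
      AUTHORITY_IDS = {
          "Mustermann, Erika": "https://d-nb.info/gnd/000000000",
      }

      def enrich(record):
          # Attach an authority URI to every author that can be identified unambiguously.
          record["author_ids"] = [AUTHORITY_IDS.get(name) for name in record.get("authors", [])]
          return record

      print(enrich({"title": "Example TA study", "authors": ["Mustermann, Erika"]}))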
  18. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.01
    0.009516701 = product of:
      0.038066804 = sum of:
        0.038066804 = product of:
          0.07613361 = sum of:
            0.07613361 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.07613361 = score(doc=4865,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 6.2002 19:41:59
  19. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.01
    0.009516701 = product of:
      0.038066804 = sum of:
        0.038066804 = product of:
          0.07613361 = sum of:
            0.07613361 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
              0.07613361 = score(doc=6040,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.46428138 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 6.2002 19:42:47
  20. Bartolo, L.M.; Lowe, C.S.; Sadoway, D.R.; Powell, A.C.; Glotzer, S.C.: NSDL MatDL : exploring digital library roles (2005) 0.01
    0.009144665 = product of:
      0.03657866 = sum of:
        0.03657866 = weight(_text_:data in 1181) [ClassicSimilarity], result of:
          0.03657866 = score(doc=1181,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 1181, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1181)
      0.25 = coord(1/4)
    
    Abstract
    A primary goal of the NSDL Materials Digital Library (MatDL) is to bring materials science research and education closer together. MatDL is exploring the various roles digital libraries can serve in the materials science community including: 1) supporting a virtual lab, 2) developing markup language applications, and 3) building tools for metadata capture. MatDL is being integrated into an MIT virtual laboratory experience. Early student self-assessment survey results expressed positive opinions of the potential value of MatDL in supporting a virtual lab and in accomplishing additional educational objectives. A separate survey suggested that the effectiveness of a virtual lab may approach that of a physical lab on some laboratory learning objectives. MatDL is collaboratively developing a materials property grapher (KSU and MIT) and a submission tool (KSU and U-M). MatML is an extensible markup language for exchanging materials information developed by materials data experts in industry, government, standards organizations, and professional societies. The web-based MatML grapher allows students to compare selected materials properties across approximately 80 MatML-tagged materials. The MatML grapher adds value in this educational context by allowing students to utilize real property data to make optimal material selection decisions. The submission tool has been integrated into the regular workflow of U-M students and researchers generating nanostructure images. It prompts users for domain-specific information, automatically generating and attaching keywords and editable descriptions.
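    The MatML grapher described above compares a property across MatML-tagged materials. The sketch below parses a small MatML-like snippet and reads out one property for comparison; the element and attribute names only approximate the idea and are not guaranteed to match the actual MatML schema, and the values are illustrative.

      import xml.etree.ElementTree as ET

      # Illustrative MatML-like data; element and attribute names are simplified stand-ins.
      xml_doc = """
      <Materials>
        <Material name="Aluminium alloy"><Property name="Density" units="g/cm3">2.70</Property></Material>
        <Material name="Titanium alloy"><Property name="Density" units="g/cm3">4.43</Property></Material>
      </Materials>
      """

      root = ET.fromstring(xml_doc)
      for material in root.findall("Material"):
          prop = material.find("Property[@name='Density']")
          print(material.get("name"), "->", float(prop.text), prop.get("units"))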

Languages

  • e 62
  • d 24

Types

  • a 76
  • el 16
  • m 2
  • s 2
  • x 1