Search (58 results, page 1 of 3)

  • theme_ss:"Information Gateway"
  1. Fischer, T.; Neuroth, H.: SSG-FI - special subject gateways to high quality Internet resources for scientific users (2000) 0.06
    0.06394671 = product of:
      0.12789342 = sum of:
        0.12789342 = sum of:
          0.086370535 = weight(_text_:core in 4873) [ClassicSimilarity], result of:
            0.086370535 = score(doc=4873,freq=2.0), product of:
              0.25797358 = queryWeight, product of:
                5.0504966 = idf(docFreq=769, maxDocs=44218)
                0.051078856 = queryNorm
              0.3348038 = fieldWeight in 4873, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.0504966 = idf(docFreq=769, maxDocs=44218)
                0.046875 = fieldNorm(doc=4873)
          0.04152288 = weight(_text_:22 in 4873) [ClassicSimilarity], result of:
            0.04152288 = score(doc=4873,freq=2.0), product of:
              0.17886946 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051078856 = queryNorm
              0.23214069 = fieldWeight in 4873, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4873)
      0.5 = coord(1/2)
    
    Abstract
    Project SSG-FI at SUB Göttingen provides special subject gateways to international high-quality Internet resources for scientific users. Internet sites are selected by subject specialists and described using an extension of qualified Dublin Core metadata; a basic evaluation is added. These descriptions are freely available and can be searched and browsed. There are now subject gateways for three subject areas: earth sciences (GeoGuide), mathematics (MathGuide), and Anglo-American culture (split into HistoryGuide and AnglistikGuide). Together they receive about 3,300 'hard' requests per day, thus reaching over 1 million requests per year. The SSG-FI project behind these guides is open to collaboration. Institutions and private persons wishing to contribute can notify the SSG-FI team or send full data sets. Regular contributors can request registration with the project to access the database via the Internet and create and edit records.
    Date
    22. 6.2002 19:40:42
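The explain tree shown for record 1 can be reproduced with a few lines of arithmetic. Below is a minimal sketch of Lucene's ClassicSimilarity scoring; all inputs (freq, docFreq, maxDocs, queryNorm, fieldNorm) are taken from the tree above, while the function itself is our reconstruction, not Lucene code.

```python
import math

# Reconstruction of the ClassicSimilarity score shown for record 1 (doc 4873).
def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
    tf = math.sqrt(freq)                              # tf(freq)
    query_weight = idf * query_norm                   # queryWeight
    field_weight = tf * idf * field_norm              # fieldWeight
    return query_weight * field_weight

QUERY_NORM = 0.051078856
w_core = term_weight(freq=2.0, doc_freq=769,  max_docs=44218,
                     query_norm=QUERY_NORM, field_norm=0.046875)
w_22   = term_weight(freq=2.0, doc_freq=3622, max_docs=44218,
                     query_norm=QUERY_NORM, field_norm=0.046875)

# coord(1/2) from the tree: one of two top-level query clauses matched
score = 0.5 * (w_core + w_22)
print(round(w_core, 6), round(w_22, 6), round(score, 6))
```

The printed total agrees with the 0.06 headline score of record 1.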
  2. Cremer, M.; Neuroth, H.: ¬Eine Zukunftswerkstatt des Katalogisierens? : Das CORC-Projekt: Erfahrungen an der SUB Göttingen (2000) 0.04
    Abstract
    With the Dublin Core set, a globally recognized format for describing Internet resources is available. Following the model of cooperative book cataloguing in union catalogues, descriptions are created cooperatively within a project and recorded centrally in a shared database. The initiator is the US library network OCLC.
    Object
    Dublin Core
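Records like those described in the two entries above follow the 15-element Dublin Core scheme. A minimal sketch of such a description, serialized with Python's standard library; the element names are from the standard DC 1.1 set, while the sample values (title, creator, URL) are invented placeholders:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# Hypothetical sample record; only the element names are standard DC 1.1.
record = ET.Element("record")
for element, value in [
    ("title",      "MathGuide"),                     # placeholder value
    ("creator",    "SUB Goettingen"),                # placeholder value
    ("subject",    "Mathematics"),
    ("type",       "Subject gateway"),
    ("identifier", "http://www.example.org/guide"),  # placeholder URL
]:
    ET.SubElement(record, f"{{{DC}}}{element}").text = value

print(ET.tostring(record, encoding="unicode"))
```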
  3. Cremer, M.; Neuroth, H.: ¬Das CORC-Projekt von OCLC an der Niedersächsischen Staats- und Universitätsbibliothek Göttingen (2000) 0.04
    Object
    Dublin Core
  4. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H.: ¬The Europeana Data Model (EDM) (2010) 0.03
    Abstract
    The Europeana Data Model (EDM) is a new approach towards structuring and representing data delivered to Europeana by the various contributing cultural heritage institutions. The model aims at greater expressivity and flexibility in comparison to the current Europeana Semantic Elements (ESE), which it is destined to replace. The design principles underlying the EDM are based on the core principles and best practices of the Semantic Web and Linked Data efforts to which Europeana wants to contribute. The model itself builds upon established standards like RDF(S), OAI-ORE, SKOS, and Dublin Core. It acts as a common top-level ontology which retains original data models and information perspectives while at the same time enabling interoperability. The paper elaborates on the aforementioned aspects and the design principles which drove the development of the EDM.
  5. Hickey, T.R.: CORC : a system for gateway creation (2000) 0.03
    Abstract
    CORC is an OCLC project that is developing tools and systems to enable libraries to provide enhanced access to Internet resources. By adapting and extending library techniques and procedures, we are developing a self-supporting system capable of describing a large and useful subset of the Web. CORC is more a system for hosting and supporting subject gateways than a gateway itself and relies on large-scale cooperation among libraries to maintain a centralized database. By supporting emerging metadata standards such as Dublin Core and other standards such as Unicode and RDF, CORC broadens the range of libraries and librarians able to participate. Current plans are for CORC to become a full OCLC service in July 2000.
  6. Christof, J.: Metadata sharing : Die Verbunddatenbank Internetquellen der Virtuellen Fachbibliothek Politikwissenschaft und der Virtuellen Fachbibliothek Wirtschaftswissenschaften (2003) 0.03
    Abstract
    In the context of the projects "Virtuelle Fachbibliothek Politikwissenschaft" and "Virtuelle Fachbibliothek Wirtschaftswissenschaften", both funded by the Deutsche Forschungsgemeinschaft (DFG), a subject information guide for online resources is being built for each discipline. The responsible institutions, the Staats- und Universitätsbibliothek Hamburg (SUB Hamburg), the Universitäts- und Stadtbibliothek Köln (USB Köln), and the Deutsche Zentralbibliothek für Wirtschaftswissenschaften (ZBW Kiel), have developed a metadata concept based on Dublin Core and aligned with national and international developments, and have implemented this concept in building the shared database of Internet resources (Verbunddatenbank Internetquellen).
  7. MacLeod, R.: Promoting a subject gateway : a case study from EEVL (Edinburgh Engineering Virtual Library) (2000) 0.02
    Date
    22. 6.2002 19:40:22
  8. Subject gateways (2000) 0.02
    Date
    22. 6.2002 19:43:01
  9. Becker, H.J.; Neuroth, H.: Crosssearchen und crossbrowsen von "Quality-controlled Subject Gateways" im EU-Projekt Renardus (2002) 0.02
    Abstract
    The Renardus project, funded by the European Union since January 2000, aims to build a service for using the quality-controlled subject gateways available in Europe, i.e. to offer cross-searching and cross-browsing through a single access point and interface. For cross-browsing, the Dewey Decimal Classification (DDC) is used for navigation. The article describes the individual development steps and presents the necessary mapping processes in detail. These are, on the one hand, mappings from the local metadata formats of the individual subject gateways to the common core set of metadata used in Renardus for searching, a core set based on the Dublin Core Metadata Set; on the other hand, the creation of concordances between the local classes of the partners' classification systems and the DDC classes for browsing. The article also describes new underlying definitions and theoretical concepts currently under discussion in the metadata community (e.g. application profile, namespace, registry). Finally, the functionalities of the Renardus service (searching, browsing) are presented in more detail.
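The first kind of mapping described in this abstract (local metadata formats onto a shared Dublin-Core-based core set) amounts to a field-renaming table per gateway. A minimal sketch; the local field names on the left are invented for illustration, only the right-hand side uses Dublin Core element names:

```python
# Hypothetical local-to-core mapping for one gateway.
FIELD_MAP = {
    "titel":      "title",
    "autor":      "creator",
    "schlagwort": "subject",
    "url":        "identifier",
}

def to_core_set(local_record):
    """Map a local record (dict) onto a shared Dublin-Core-based core set."""
    return {core: local_record[local]
            for local, core in FIELD_MAP.items() if local in local_record}

print(to_core_set({"titel": "Beispiel", "url": "http://example.org"}))
# → {'title': 'Beispiel', 'identifier': 'http://example.org'}
```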
  10. Howarth, L.C.: Modelling a natural language gateway to metadata-enabled resources (2004) 0.02
    Abstract
    Even as the number of Web-enabled resources and knowledge repositories continues its unabated climb, both general purpose and domain-specific metadata schemas are in vigorous development. While this might be viewed as a promising direction for more precise access to disparate metadata-enabled resources, semantically-oriented tools to facilitate cross-domain searching by end-users unfamiliar with structured approaches to language or particular metadata schema conventions have received little attention. This paper describes findings from a focus group assessment of a natural language "gateway" previously derived from mapping, then categorizing terminology from nine metadata schemas. Semantic ambiguities identified in relation to three core metadata elements, namely, "Names", "Title", and "Subject", are discussed relative to data collection techniques employed in the research. Implications for further research, and particularly that pertaining to the design of an Interlingua gateway to multilingual, metadata-enabled resources, are addressed.
  11. Ohly, H.P.: ¬The organization of Internet links in a social science clearing house (2004) 0.02
    Abstract
    The German Internet Clearinghouse SocioGuide has changed to a database management system. Accordingly the metadata description scheme has become more detailed. The main information types are: institutions, persons, literature, tools, data sets, objects, topics, processes and services. Some of the description elements, such as title, resource identifier, and creator are universal, whereas others, such as primary/secondary information, and availability are specific to information type and cannot be generalized by referring to Dublin Core elements. The quality of Internet sources is indicated implicitly by characteristics, such as extent, restriction, or status. The SocioGuide is managed in DBClear, a generic system that can be adapted to different source types. It makes distributed input possible and contains workflow components.
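The split described above between universal description elements and type-specific ones can be sketched as a small schema lookup. The exact element sets below are assumptions for illustration, not DBClear's actual schema:

```python
# Universal elements apply to every information type (per the abstract above);
# the type-specific sets here are illustrative assumptions.
UNIVERSAL = {"title", "identifier", "creator"}

TYPE_SPECIFIC = {
    "literature": {"primary_or_secondary"},
    "data_set":   {"availability"},
}

def elements_for(info_type):
    """Return the description elements valid for one information type."""
    return UNIVERSAL | TYPE_SPECIFIC.get(info_type, set())

print(sorted(elements_for("data_set")))
# → ['availability', 'creator', 'identifier', 'title']
```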
  12. Kruk, S.R.; Westerki, A.; Kruk, E.: Architecture of semantic digital libraries (2009) 0.02
    Abstract
    The main motivation of this chapter was to gather existing requirements and solutions, and to present a generic architectural design of semantic digital libraries. This design is meant to answer a number of requirements, such as interoperability or ability to exchange resources and solutions, and set up the foundations for the best practices in the new domain of semantic digital libraries. We start by presenting the library from different high-level perspectives, i.e., user (see Sect. 2) and metadata (see Sect. 1) perspective; this overview narrows the scope and puts emphasis on certain aspects related to the system perspective, i.e., the architecture of the actual digital library management system. We conclude by presenting the system architecture from three perspectives: top-down layered architecture (see Sect. 3), vertical architecture of core services (see Sect. 4), and stack of enabling infrastructures (see Sect. 5); based upon the observations and evaluation of the contemporary state of the art presented in the previous sections, these last three subsections will describe an in-depth model of the digital library management system.
  13. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.02
    Date
    22. 6.2002 19:41:59
  14. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    Date
    22. 6.2002 19:42:47
  15. Veen, T. van; Oldroyd, B.: Search and retrieval in The European Library : a new approach (2004) 0.02
    Abstract
    The objective of the European Library (TEL) project [TEL] was to set up a co-operative framework and specify a system for integrated access to the major collections of the European national libraries. This has been achieved by successfully applying a new approach, Search and Retrieve via URL (SRU) [ZiNG], combined with a new metadata paradigm. One aim of the TEL approach is to have a low barrier of entry into TEL, and this has driven our choice for the technical solution described here. The solution comprises portal and client functionality running completely in the browser, resulting in a low implementation barrier and maximum scalability, as well as giving users control over the search interface and what collections to search. In this article we will describe, step by step, the development of both the search and retrieval architecture and the metadata infrastructure in the European Library project. We will show that SRU is a good alternative to the Z39.50 protocol and can be implemented without losing investments in current Z39.50 implementations. The metadata model being used by TEL is a Dublin Core Application Profile, and we have taken into account that functional requirements will change over time and therefore the metadata model will need to be able to evolve in a controlled way. We make this possible by means of a central metadata registry containing all characteristics of the metadata in TEL. Finally, we provide two scenarios to show how the TEL concept can be developed and extended, with applications capable of increasing their functionality by "learning" new metadata or protocol options.
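The SRU approach described in this abstract boils down to expressing a search as a plain URL. A minimal sketch of building such a request; the endpoint is a placeholder, while the parameter names follow the SRU 1.1 searchRetrieve operation:

```python
from urllib.parse import urlencode

BASE = "http://example.org/sru"  # placeholder endpoint, not a real TEL URL

def sru_search_url(query, max_records=10):
    """Build an SRU 1.1 searchRetrieve request URL for a CQL query."""
    params = {
        "operation":      "searchRetrieve",
        "version":        "1.1",
        "query":          query,
        "maximumRecords": max_records,
        "recordSchema":   "dc",   # ask for Dublin Core records
    }
    return BASE + "?" + urlencode(params)

print(sru_search_url('dc.title = "gateway"'))
```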
  16. Mayr, P.; Mutschke, P.; Petras, V.: Reducing semantic complexity in distributed digital libraries : Treatment of term vagueness and document re-ranking (2008) 0.02
    Abstract
    Purpose - The general science portal "vascoda" merges structured, high-quality information collections from more than 40 providers on the basis of search engine technology (FAST) and a concept which treats semantic heterogeneity between different controlled vocabularies. First experiences with the portal show some weaknesses of this approach which come out in most metadata-driven Digital Libraries (DLs) or subject specific portals. The purpose of the paper is to propose models to reduce the semantic complexity in heterogeneous DLs. The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that gain a certain quality in DLs if they are combined with heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Design/methodology/approach - Two methods, which are derived from scientometrics and network analysis, will be implemented with the objective to re-rank result sets by the following structural properties: the ranking of the results by core journals (so-called Bradfordizing) and ranking by centrality of authors in co-authorship networks. Findings - The methods, which will be implemented, focus on the query and on the result side of a search and are designed to positively influence each other. Conceptually, they will improve the search quality and guarantee that the most relevant documents in result sets will be ranked higher. Originality/value - The central impact of the paper focuses on the integration of three structural value-adding methods, which aim at reducing the semantic complexity represented in distributed DLs at several stages in the information retrieval process: query construction, search and ranking and re-ranking.
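Of the two re-ranking methods named in this abstract, Bradfordizing is the simpler: documents from the journals that contribute the most hits to a result set are moved to the top. A minimal sketch; the record structure (dicts with a "journal" key) is an assumption:

```python
from collections import Counter

def bradfordize(results):
    """Re-rank a result set so documents from 'core' journals, i.e. the
    journals contributing the most hits to this result set, come first."""
    freq = Counter(doc["journal"] for doc in results)
    return sorted(results, key=lambda doc: -freq[doc["journal"]])

hits = [{"id": 1, "journal": "A"}, {"id": 2, "journal": "B"},
        {"id": 3, "journal": "A"}]
print([doc["id"] for doc in bradfordize(hits)])  # → [1, 3, 2]
```

Since `sorted` is stable, documents from equally frequent journals keep their original relevance order.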
  17. Castelli, D.: Digital libraries of the future - and the role of libraries (2006) 0.02
    Abstract
    Purpose - The purpose of this article is to introduce the digital libraries of the future, their enabling technologies and their organisational models. Design/methodology/approach - The paper first discusses the requirements for the digital libraries of the future, then presents the DILIGENT infrastructure as a technological response to these requirements and, finally, it discusses the role that libraries can play in the organisational framework envisioned by DILIGENT. Findings - Digital libraries of the future will give access to a large variety of multimedia and multi-type documents created by integrating content from many different heterogeneous sources that range from repositories of text, images, and audio-video, to scientific data archives, and databases. The digital library will provide a seamless environment where the co-operative access, filtering, manipulation, generation, and preservation of these documents will be supported as a continuous cycle. Users of the library will be both consumers and producers of information, either by themselves or in collaboration with other users. Policy-enforcement mechanisms will guarantee that the information produced is visible only to those who have the appropriate rights to access it. The realisation of these new digital libraries requires both the provision of a new technology and a change in the role played by the libraries in the information access-production cycle. Practical implications - Digital libraries of the future will be core instruments for serving a large class of applications, especially in the research field. Originality/value - The paper briefly introduces one of the most innovative technologies for digital libraries, and it discusses how it contributes to the realisation of a novel digital libraries scenario.
  18. Zia, L.L.: new projects and a progress report : ¬The NSF National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program (2001) 0.02
    Abstract
    The National Science Foundation's (NSF) National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program comprises a set of projects engaged in a collective effort to build a national digital library of high quality science, technology, engineering, and mathematics (STEM) educational materials for students and teachers at all levels, in both formal and informal settings. By providing broad access to a rich, reliable, and authoritative collection of interactive learning and teaching resources and associated services in a digital environment, the NSDL will encourage and sustain continual improvements in the quality of STEM education for all students, and serve as a resource for lifelong learning. Though the program is relatively new, its vision and operational framework have been developed over a number of years through various workshops and planning meetings. The NSDL program held its first formal funding cycle during fiscal year 2000 (FY00), accepting proposals in four tracks: Core Integration System, Collections, Services, and Targeted Research. Twenty-nine awards were made across these tracks in September 2000. Brief descriptions of each FY00 project appeared in an October 2000 D-Lib Magazine article; full abstracts are available from the Awards Section at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl/>. In FY01 the program received one hundred-nine proposals across its four tracks with the number of proposals in the collections, services, and targeted research tracks increasing to one hundred-one from the eighty received in FY00. In September 2001 grants were awarded to support 35 new projects: 1 project in the core integration track, 18 projects in the collections track, 13 in the services track, and 3 in targeted research. 
Two NSF directorates, the Directorate for Geosciences (GEO) and the Directorate for Mathematical and Physical Sciences (MPS) are both providing significant co-funding on several projects, illustrating the NSDL program's facilitation of the integration of research and education, an important strategic objective of the NSF. Thus far across both fiscal years of the program fifteen projects have enjoyed this joint support. Following is a list of the FY01 awards indicating the official NSF award number (each beginning with DUE), the project title, the grantee institution, and the name of the Principal Investigator (PI). A condensed description of the project is also included. Full abstracts are available from the Awards Section at the NSDL program site at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl/>. (Grants with shared titles are formal collaborations and are grouped together.) The projects are displayed by track and are listed by award number. In addition, six of these projects have explicit relevance and application to K-12 education. Six others clearly have potential for application to the K-12 arena. The NSDL program will have another funding cycle in fiscal year 2002 with the next program solicitation expected to be available in January 2002, and an anticipated deadline for proposals in mid-April 2002.
  19. Price, A.: Five new Danish subject gateways under development (2000) 0.02
    Date
    22. 6.2002 19:41:31
  20. Hjoerland, B.: ¬The methodology of constructing classification schemes : a discussion of the state-of-the-art (2003) 0.01
    Abstract
    Special classifications have been somewhat neglected in KO compared to general classifications. The methodology of constructing special classifications is important, however, also for the methodology of constructing general classification schemes. The methodology of constructing special classifications can be regarded as one among about a dozen approaches to domain analysis. The methodology of (special) classification in LIS has been dominated by the rationalistic facet-analytic tradition, which, however, neglects the question of the empirical basis of classification. The empirical basis is much better grasped by, for example, bibliometric methods. Even the combination of rational and empirical methods is insufficient. This presentation will provide evidence for the necessity of historical and pragmatic methods for the methodology of classification and will point to the necessity of analyzing "paradigms". The presentation covers the methods of constructing classifications from Ranganathan to the design of ontologies in computer science and further to the recent "paradigm shift" in classification research.
    1. Introduction. Classification of a subject field is one among about eleven approaches to analyzing a domain that are specific to information science and, in my opinion, define the special competencies of information specialists (Hjoerland, 2002a). Classification and knowledge organization are commonly regarded as core qualifications of librarians and information specialists. Seen from this perspective, one expects a firm methodological basis for the field. This paper tries to explore the state of the art concerning the methodology of classification.
    2. Classification: Science or non-science? As it is part of the curriculum at universities and a subject of scientific journals and conferences like ISKO, one expects classification/knowledge organization to be a scientific or scholarly activity and a scientific field. However, very often when information specialists classify or index documents and when they revise classification systems, the methods seem to be rather ad hoc. Research libraries or scientific databases may employ people with adequate subject knowledge. When information scientists construct or evaluate systems, they very often elicit the knowledge from "experts" (Hjoerland, 2002b, p. 260). Mostly, no specific arguments are provided for the specific decisions in these processes.
