Search (21 results, page 1 of 2)

  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2000 TO 2010} (Solr/Lucene range syntax: 2000 inclusive, 2010 exclusive)
  1. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.024650764 = product of:
      0.049301527 = sum of:
        0.049301527 = product of:
          0.098603055 = sum of:
            0.098603055 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.098603055 = score(doc=3925,freq=4.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
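    The tree above is standard Lucene ClassicSimilarity (TF-IDF) explain output: the term weight is queryWeight x fieldWeight, and the two coord(1/2) factors then halve it twice to give the displayed 0.02. As a worked check, the following minimal Python sketch re-derives the numbers from the printed constants, assuming the usual ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); it does not call Lucene itself.
      import math

      # Re-derive the explain tree for doc 3925 and the query term "22".
      freq       = 4.0         # termFreq: "22" occurs 4 times in the field
      doc_freq   = 3622        # docFreq for the term
      max_docs   = 44218       # maxDocs in the index
      query_norm = 0.05146125  # queryNorm as printed above
      field_norm = 0.078125    # fieldNorm (length normalisation) for doc 3925
      coord      = 0.5         # each coord(1/2) factor in the tree

      tf  = math.sqrt(freq)                              # 2.0
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # ~3.5018296

      query_weight = idf * query_norm             # ~0.18020853
      field_weight = tf * idf * field_norm        # ~0.54716086
      term_score   = query_weight * field_weight  # ~0.098603055, the weight(_text_:22 ...) line

      # The two nested coord(1/2) factors halve the score twice.
      final_score = term_score * coord * coord    # ~0.024650764, shown rounded as 0.02
      print(final_score)
    The same decomposition, with tf = sqrt(2) for freq=2.0 and different fieldNorm values, accounts for the scores of the remaining hits.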
  2. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.02
    0.020916866 = product of:
      0.041833732 = sum of:
        0.041833732 = product of:
          0.083667465 = sum of:
            0.083667465 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.083667465 = score(doc=3895,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24. 8.2005 19:20:22
  3. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using Harvest (2000) 0.02
    0.01993374 = product of:
      0.03986748 = sum of:
        0.03986748 = product of:
          0.07973496 = sum of:
            0.07973496 = weight(_text_:network in 6470) [ClassicSimilarity], result of:
              0.07973496 = score(doc=6470,freq=4.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.34791988 = fieldWeight in 6470, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6470)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular the use of metadata harvesting. One service is PhysDoc. This consists of a Harvest-based online information broker- and gatherer-network, which harvests information from the local web-servers of professional physics institutions worldwide (mostly in Europe and USA so far). PhysDoc focuses on scientific information posted by the individual scientist at his local server, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors who are responsible for the content and quality of their documents. PhysDis is an analogous service but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. To continuously involve authors, research groups, and national societies is considered crucial for a future stable service.
  4. Dekkers, M.; Weibel, S.L.: State of the Dublin Core Metadata Initiative April 2003 (2003) 0.02
    0.019733394 = product of:
      0.039466787 = sum of:
        0.039466787 = product of:
          0.078933574 = sum of:
            0.078933574 = weight(_text_:network in 2795) [ClassicSimilarity], result of:
              0.078933574 = score(doc=2795,freq=2.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.3444231 = fieldWeight in 2795, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2795)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Dublin Core Metadata Initiative continues to grow in participation and recognition as the predominant resource discovery metadata standard on the Internet. With its approval as ISO 15836, DC is firmly established as a foundation block of modular, interoperable metadata for distributed resources. This report summarizes developments in DCMI over the past year, including the annual conference, progress of working groups, new developments in encoding methods, and advances in documentation and dissemination. New developments in broadening the community to commercial users of metadata are discussed, and plans for an international network of national affiliates are described.
  5. Zia, L.L.: Growing a national learning environments and resources network for science, mathematics, engineering, and technology education : current issues and opportunities for the NSDL program (2001) 0.02
    0.019530997 = product of:
      0.039061993 = sum of:
        0.039061993 = product of:
          0.07812399 = sum of:
            0.07812399 = weight(_text_:network in 1217) [ClassicSimilarity], result of:
              0.07812399 = score(doc=1217,freq=6.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.34089047 = fieldWeight in 1217, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1217)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The National Science Foundation's (NSF) National Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL) program seeks to create, develop, and sustain a national digital library supporting science, mathematics, engineering, and technology (SMET) education at all levels -- preK-12, undergraduate, graduate, and life-long learning. The resulting virtual institution is expected to catalyze and support continual improvements in the quality of science, mathematics, engineering, and technology (SMET) education in both formal and informal settings. The vision for this program has been explored through a series of workshops over the past several years and documented in accompanying reports and monographs. (See [1-7, 10, 12, and 13].) These efforts have led to a characterization of the digital library as a learning environments and resources network for science, mathematics, engineering, and technology education, that is: * designed to meet the needs of learners, in both individual and collaborative settings; * constructed to enable dynamic use of a broad array of materials for learning primarily in digital format; and * managed actively to promote reliable anytime, anywhere access to quality collections and services, available both within and without the network. Underlying the NSDL program are several working assumptions. First, while there is currently no lack of "great piles of content" on the Web, there is an urgent need for "piles of great content". The difficulties in discovering and verifying the authority of appropriate Web-based material are certainly well known, yet there are many examples of learning resources of great promise available (particularly those exploiting the power of multiple media), with more added every day. The breadth and interconnectedness of the Web are simultaneously a great strength and shortcoming. Second, the "unit" or granularity of educational content can and will shrink, affording the opportunity for users to become creators and vice versa, as learning objects are reused, repackaged, and repurposed. To be sure, this scenario cannot take place without serious attention to intellectual property and digital rights management concerns. But new models and technologies are being explored (see a number of recent articles in the January issue of D-Lib Magazine). Third, there is a need for an "organizational infrastructure" that facilitates connections between distributed users and distributed content, as alluded to in the third bullet above. Finally, while much of the ongoing use of the library is envisioned to be "free" in the sense of the public good, there is an opportunity and a need to consider multiple alternative models of sustainability, particularly in the area of services offered by the digital library. More details about the NSDL program including information about proposal deadlines and current awards may be found at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl>.
  6. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.01
    0.014648248 = product of:
      0.029296497 = sum of:
        0.029296497 = product of:
          0.058592994 = sum of:
            0.058592994 = weight(_text_:network in 1195) [ClassicSimilarity], result of:
              0.058592994 = score(doc=1195,freq=6.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.25566787 = fieldWeight in 1195, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1195)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Sponsored by the German Ministry of Education and Research with funding of 800.000 EURO, the German Network of Expertise in long-term storage of digital resources (nestor) began in June 2003 as a cooperative effort of 6 partners representing different players within the field of long-term preservation. The partners include: * The German National Library (Die Deutsche Bibliothek) as the lead institution for the project * The State and University Library of Lower Saxony Göttingen (Staats- und Universitätsbibliothek Göttingen) * The Computer and Media Service and the University Library of Humboldt-University Berlin (Humboldt-Universität zu Berlin) * The Bavarian State Library in Munich (Bayerische Staatsbibliothek) * The Institute for Museum Information in Berlin (Institut für Museumskunde) * General Directorate of the Bavarian State Archives (GDAB) As in other countries, long-term preservation of digital resources has become an important issue in Germany in recent years. Nevertheless, coming to agreement with institutions throughout the country to cooperate on tasks for a long-term preservation effort has taken a great deal of effort. Although there had been considerable attention paid to the preservation of physical media like CD-ROMS, technologies available for the long-term preservation of digital publications like e-books, digital dissertations, websites, etc., are still lacking. Considering the importance of the task within the federal structure of Germany, with the responsibility of each federal state for its science and culture activities, it is obvious that the approach to a successful solution of these issues in Germany must be a cooperative approach. Since 2000, there have been discussions about strategies and techniques for long-term archiving of digital information, particularly within the distributed structure of Germany's library and archival institutions. A key part of all the previous activities was focusing on using existing standards and analyzing the context in which those standards would be applied. One such activity, the Digital Library Forum Planning Project, was done on behalf of the German Ministry of Education and Research in 2002, where the vision of a digital library in 2010 that can meet the changing and increasing needs of users was developed and described in detail, including the infrastructure required and how the digital library would work technically, what it would contain and how it would be organized. The outcome was a strategic plan for certain selected specialist areas, where, amongst other topics, a future call for action for long-term preservation was defined, described and explained against the background of practical experience.
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29-30 October 2002, where major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants reached agreement to start work on application-oriented projects and to address the following topics:
    * Overlapping problems: collection and preservation of digital objects (selection criteria, preservation policy); definition of criteria for trusted repositories; creation of models of cooperation, etc.
    * Digital objects production process: analysis of potential conflicts between production and long-term preservation; documentation of existing document models and recommendations for standard models to be used for long-term preservation; identification systems for digital objects, etc.
    * Transfer of digital objects: object data and metadata; transfer protocols and interoperability; handling of different document types, e.g. dynamic publications, etc.
    * Long-term preservation of digital objects: design and prototype implementation of depot systems for digital objects (OAIS was chosen as the best functional model); authenticity; functional requirements on user interfaces of a depot system; identification systems for digital objects, etc.
    At the end of the workshop, participants decided to establish a permanent distributed infrastructure for long-term preservation and long-term accessibility of digital resources in Germany, comparable, e.g., to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up by the above-mentioned 3-year funding project.
  7. Hitchcock, S.; Bergmark, D.; Brody, T.; Gutteridge, C.; Carr, L.; Hall, W.; Lagoze, C.; Harnad, S.: Open citation linking : the way forward (2002) 0.01
    0.014095282 = product of:
      0.028190564 = sum of:
        0.028190564 = product of:
          0.05638113 = sum of:
            0.05638113 = weight(_text_:network in 1207) [ClassicSimilarity], result of:
              0.05638113 = score(doc=1207,freq=2.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.2460165 = fieldWeight in 1207, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1207)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The speed of scientific communication - the rate of ideas affecting other researchers' ideas - is increasing dramatically. The factor driving this is free, unrestricted access to research papers. Measurements of user activity in mature eprint archives of research papers such as arXiv have shown, for the first time, the degree to which such services support an evolving network of texts commenting on, citing, classifying, abstracting, listing and revising other texts. The Open Citation project has built tools to measure this activity, to build new archives, and has been closely involved with the development of the infrastructure to support open access on which these new services depend. This is the story of the project, intertwined with the concurrent emergence of the Open Archives Initiative (OAI). The paper describes the broad scope of the project's work, showing how it has progressed from early demonstrators of reference linking to produce Citebase, a Web-based citation and impact-ranked search service, and how it has supported the development of the EPrints.org software for building OAI-compliant archives. The work has been underpinned by analysis and experiments on the semantics of documents (digital objects) to determine the features required for formally perfect linking - instantiated as an application programming interface (API) for reference linking - that will enable other applications to build on this work in broader digital library information environments.
  8. Decimal Classification Editorial Policy Committee (2002) 0.01
    0.012325382 = product of:
      0.024650764 = sum of:
        0.024650764 = product of:
          0.049301527 = sum of:
            0.049301527 = weight(_text_:22 in 236) [ClassicSimilarity], result of:
              0.049301527 = score(doc=236,freq=4.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.27358043 = fieldWeight in 236, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=236)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Decimal Classification Editorial Policy Committee (EPC) held its Meeting 117 at the Library Dec. 3-5, 2001, with chair Andrea Stamm (Northwestern University) presiding. Through its actions at this meeting, significant progress was made toward publication of DDC unabridged Edition 22 in mid-2003 and Abridged Edition 14 in early 2004. For Edition 22, the committee approved the revisions to two major segments of the classification: Table 2 through 55 Iran (the first half of the geographic area table) and 900 History and geography. EPC approved updates to several parts of the classification it had already considered: 004-006 Data processing, Computer science; 340 Law; 370 Education; 510 Mathematics; 610 Medicine; Table 3 issues concerning treatment of scientific and technical themes, with folklore, arts, and printing ramifications at 398.2 - 398.3, 704.94, and 758; Table 5 and Table 6 Ethnic Groups and Languages (portions concerning American native peoples and languages); and tourism issues at 647.9 and 790. Reports on the results of testing the approved 200 Religion and 305-306 Social groups schedules were received, as was a progress report on revision work for the manual being done by Ross Trotter (British Library, retired). Revisions for Abridged Edition 14 that received committee approval included 010 Bibliography; 070 Journalism; 150 Psychology; 370 Education; 380 Commerce, communications, and transportation; 621 Applied physics; 624 Civil engineering; and 629.8 Automatic control engineering. At the meeting the committee received print versions of _DC&_ numbers 4 and 5. Primarily for the use of Dewey translators, these cumulations list changes, substantive and cosmetic, to DDC Edition 21 and Abridged Edition 13 for the period October 1999 - December 2001. EPC will hold its Meeting 118 at the Library May 15-17, 2002.
  9. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.012201506 = product of:
      0.024403011 = sum of:
        0.024403011 = product of:
          0.048806023 = sum of:
            0.048806023 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.048806023 = score(doc=759,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 5.2013 19:22:18
  10. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.01
    0.010458433 = product of:
      0.020916866 = sum of:
        0.020916866 = product of:
          0.041833732 = sum of:
            0.041833732 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
              0.041833732 = score(doc=4820,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.23214069 = fieldWeight in 4820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4820)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3.12.2016 18:39:22
  11. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.01
    0.010458433 = product of:
      0.020916866 = sum of:
        0.020916866 = product of:
          0.041833732 = sum of:
            0.041833732 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
              0.041833732 = score(doc=3261,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.23214069 = fieldWeight in 3261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3261)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28.11.2016 12:43:22
  12. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.01
    0.010458433 = product of:
      0.020916866 = sum of:
        0.020916866 = product of:
          0.041833732 = sum of:
            0.041833732 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
              0.041833732 = score(doc=479,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.23214069 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    23. 1.2022 10:22:18
  13. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.01
    0.00996687 = product of:
      0.01993374 = sum of:
        0.01993374 = product of:
          0.03986748 = sum of:
            0.03986748 = weight(_text_:network in 1166) [ClassicSimilarity], result of:
              0.03986748 = score(doc=1166,freq=4.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.17395994 = fieldWeight in 1166, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1166)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files), which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years. The World Wide Web is generally understood to be poorly structured-both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general or any particular structured database would greatly benefit from increased authority control, it should be noted that our following considerations only refer to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we exclusively refer to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in similar ways to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records-as are all other "authorities"-are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually either kept in independent databases or in separate tables in the database containing the descriptive records. This practice points at a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results. On one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service. In Germany, the Personal Name Authority File (PND, Personennamendatei) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable. Two important current initiatives should be mentioned here: The Name Authority Cooperative (NACO) and Virtual International Authority File (VIAF).
    NACO was established in 1976 and is hosted by the Library of Congress. At the beginning of 2003, nearly 400 institutions were involved in this undertaking, including 43 institutions from outside the United States. Despite the enormous success of NACO and the impressive annual growth of the initiative, there are requirements for participation that form an obstacle for many institutions: they have to follow the Anglo-American Cataloguing Rules (AACR2) and employ the MARC21 data format. Participating institutions also have to belong to either OCLC (Online Computer Library Center) or RLG (Research Libraries Group) in order to be able to contribute records, and they have to provide a specified minimum number of authority records per year. A recent proof of concept project of the Library of Congress, OCLC and the German National Library-Virtual International Authority File (VIAF)-will, in its first phase, test automatic linking of the records of the Library of Congress Name Authority File (LCNAF) and the German Personal Name Authority File by using matching algorithms and software developed by OCLC. The results are expected to form the basis of a "Virtual International Authority File". The project will then test the maintenance of the virtual authority file by employing the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest the metadata for new, updated, and deleted records. When using the "Virtual International Authority File" a cataloguer will be able to check the system to see whether the authority record he wants to establish already exists. The final phase of the project will test possibilities for displaying records in the preferred language and script of the end user. Currently, there are still some clear limitations associated with the ways in which authority records are used by memory institutions. One of the main problems has to do with limited access: generally only large institutions or those that are part of a library network have unlimited online access to permanently updated authority records. Smaller institutions outside these networks usually have to fall back on less efficient ways of obtaining authority data, or have no access at all. Cross-domain sharing of authority data between libraries, archives, museums and other memory institutions simply does not happen at present. Public users are, by and large, not even aware that such things as name authority records exist and are excluded from access to these information resources.
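    The maintenance step described above (keeping the virtual authority file current via OAI-PMH) amounts to periodic ListRecords requests with a date window, following resumptionTokens and treating records whose header carries status="deleted" as removals. The sketch below illustrates that flow under stated assumptions: the endpoint URL is a placeholder and the print statements stand in for whatever a harvester would actually do with each record; this is not the actual LEAF/VIAF implementation.
      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      OAI = "{http://www.openarchives.org/OAI/2.0/}"
      BASE_URL = "https://example.org/oai"  # placeholder endpoint, not a real VIAF source

      def harvest(from_date, until_date, metadata_prefix="oai_dc"):
          """Pull new, updated, and deleted records for a date window via OAI-PMH ListRecords."""
          params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix,
                    "from": from_date, "until": until_date}
          while True:
              url = BASE_URL + "?" + urllib.parse.urlencode(params)
              with urllib.request.urlopen(url) as response:
                  root = ET.parse(response).getroot()
              for record in root.iter(OAI + "record"):
                  header = record.find(OAI + "header")
                  identifier = header.findtext(OAI + "identifier")
                  if header.get("status") == "deleted":
                      print("delete from virtual file:", identifier)
                  else:
                      print("add or update:", identifier)
              # Follow the resumptionToken until the repository reports no more pages.
              token = root.find(OAI + "ListRecords/" + OAI + "resumptionToken")
              if token is None or not (token.text or "").strip():
                  break
              params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

      harvest("2003-01-01", "2003-06-30")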
  14. Heery, R.; Carpenter, L.; Day, M.: Renardus project developments and the wider digital library context (2001) 0.01
    0.00996687 = product of:
      0.01993374 = sum of:
        0.01993374 = product of:
          0.03986748 = sum of:
            0.03986748 = weight(_text_:network in 1219) [ClassicSimilarity], result of:
              0.03986748 = score(doc=1219,freq=4.0), product of:
                0.22917621 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.05146125 = queryNorm
                0.17395994 = fieldWeight in 1219, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1219)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Renardus project has brought together gateways that are 'large-scale national initiatives'. Within the European context this immediately introduces a diversity of organisations, as responsibility for national gateway initiatives is located differently, for example, in national libraries, national agencies with responsibility for educational technology infrastructure, and within universities or consortia of universities. Within the project, gateways are in some cases represented directly by their own personnel, in some cases by other departments or research centres, but not always by the people responsible for providing the gateway service. For example, the UK Resource Discovery Network (RDN) is represented in the project by UKOLN (formerly part of the Resource Discovery Network Centre) and the Institute of Learning and Research Technology (ILRT), University of Bristol -- an RDN 'hub' service provider -- who are primarily responsible for dissemination. Since the start of the project there have been changes within the organisational structures providing gateways and within the service ambitions of gateways themselves. Such lack of stability is inherent within the Internet service environment, and this presents challenges to Renardus activity that has to be planned for a three-year period. For example, within the gateway's funding environment there is now an exploration of 'subject portals' offering more extended services than gateways. There is also potential commercial interest for including gateways as a value-added component to existing commercial services, and new offerings from possible competitors such as Google's Web Directory and country based services. This short update on the Renardus project intends to inform the reader of progress within the project and to give some wider context to its main themes by locating the project within the broader arena of digital library activity. There are twelve partners in the project from Denmark, Finland, France, Germany, the Netherlands and Sweden, as well as the UK. In particular we will focus on the specific activity in which UKOLN is involved: the architectural design, the specification of functional requirements, reaching consensus on a collaborative business model, etc. We will also consider issues of metadata management where all partners have interests. We will highlight implementation issues that connect to areas of debate elsewhere. In particular we see connections with activity related to establishing architectural models for digital library services, connections to the services that may emerge from metadata sharing using the Open Archives Initiative metadata sharing protocol, and links with work elsewhere on navigation of digital information spaces by means of controlled vocabularies.
  15. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.01
    0.008715361 = product of:
      0.017430723 = sum of:
        0.017430723 = product of:
          0.034861445 = sum of:
            0.034861445 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
              0.034861445 = score(doc=2564,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.19345059 = fieldWeight in 2564, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2564)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 1.2016 10:22:28
  16. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.01
    0.008715361 = product of:
      0.017430723 = sum of:
        0.017430723 = product of:
          0.034861445 = sum of:
            0.034861445 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
              0.034861445 = score(doc=2565,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.19345059 = fieldWeight in 2565, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2565)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    16. 1.2016 10:22:28
  17. Baker, T.: A grammar of Dublin Core (2000) 0.01
    0.006972289 = product of:
      0.013944578 = sum of:
        0.013944578 = product of:
          0.027889157 = sum of:
            0.027889157 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
              0.027889157 = score(doc=1236,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.15476047 = fieldWeight in 1236, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1236)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    26.12.2011 14:01:22
  18. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.01
    0.006972289 = product of:
      0.013944578 = sum of:
        0.013944578 = product of:
          0.027889157 = sum of:
            0.027889157 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.027889157 = score(doc=3284,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2010 14:41:24
  19. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.01
    0.006972289 = product of:
      0.013944578 = sum of:
        0.013944578 = product of:
          0.027889157 = sum of:
            0.027889157 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.027889157 = score(doc=1163,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  20. Foerster, H. von; Müller, A.; Müller, K.H.: Rück- und Vorschauen : Heinz von Foerster im Gespräch mit Albert Müller und Karl H. Müller (2001) 0.01
    0.0052292165 = product of:
      0.010458433 = sum of:
        0.010458433 = product of:
          0.020916866 = sum of:
            0.020916866 = weight(_text_:22 in 5988) [ClassicSimilarity], result of:
              0.020916866 = score(doc=5988,freq=2.0), product of:
                0.18020853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05146125 = queryNorm
                0.116070345 = fieldWeight in 5988, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=5988)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2006 17:22:54