Search (1094 results, page 1 of 55)

  • Active filter: type_ss:"el"
  1. Woods, E.W.; IFLA Section on Classification and Indexing and Section on Information Technology; Joint Working Group on a Classification Format: Requirements for a format of classification data : Final report, July 1996 (1996) 0.22
    0.22314863 = product of:
      0.2789358 = sum of:
        0.18403849 = weight(_text_:section in 3008) [ClassicSimilarity], result of:
          0.18403849 = score(doc=3008,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.69962364 = fieldWeight in 3008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.09375 = fieldNorm(doc=3008)
        0.045214903 = weight(_text_:on in 3008) [ClassicSimilarity], result of:
          0.045214903 = score(doc=3008,freq=4.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.4123903 = fieldWeight in 3008, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.09375 = fieldNorm(doc=3008)
        0.020367749 = weight(_text_:information in 3008) [ClassicSimilarity], result of:
          0.020367749 = score(doc=3008,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.23274569 = fieldWeight in 3008, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=3008)
        0.029314637 = product of:
          0.058629274 = sum of:
            0.058629274 = weight(_text_:technology in 3008) [ClassicSimilarity], result of:
              0.058629274 = score(doc=3008,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.39488205 = fieldWeight in 3008, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3008)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
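    The tree above is standard Lucene ClassicSimilarity "explain" output: each matching term contributes queryWeight (idf x queryNorm) times fieldWeight (sqrt(termFreq) x idf x fieldNorm), the per-term contributions are summed, and the sum is scaled by coord (the fraction of query clauses that matched). As a hedged illustration, the following Python sketch (our own naming; the constants are copied from the tree, and the same arithmetic applies to every explain tree in this result list) reproduces the 0.22 shown for this record:

      import math

      query_norm = 0.049850095
      field_norm = 0.09375  # fieldNorm(doc=3008)

      def term_score(freq, idf, inner_coord=1.0):
          tf = math.sqrt(freq)                   # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm        # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight * inner_coord

      score = (term_score(2.0, 5.276892)             # "section"
               + term_score(4.0, 2.199415)           # "on"
               + term_score(2.0, 1.7554779)          # "information"
               + term_score(2.0, 2.978387, 0.5))     # "technology", inner coord(1/2)
      score *= 4 / 5                                 # coord(4/5): 4 of 5 query clauses matched
      print(f"{score:.8f}")                          # ~0.22314863, the listed 0.22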
    
  2. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.12
    0.12160253 = product of:
      0.20267087 = sum of:
        0.18403849 = weight(_text_:section in 754) [ClassicSimilarity], result of:
          0.18403849 = score(doc=754,freq=32.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.69962364 = fieldWeight in 754, product of:
              5.656854 = tf(freq=32.0), with freq of:
                32.0 = termFreq=32.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.0234375 = fieldNorm(doc=754)
        0.011303726 = weight(_text_:on in 754) [ClassicSimilarity], result of:
          0.011303726 = score(doc=754,freq=4.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.10309757 = fieldWeight in 754, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0234375 = fieldNorm(doc=754)
        0.0073286593 = product of:
          0.014657319 = sum of:
            0.014657319 = weight(_text_:technology in 754) [ClassicSimilarity], result of:
              0.014657319 = score(doc=754,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.09872051 = fieldWeight in 754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=754)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The preliminary intended and approved list was: Section 1: To concentrate on Google as a virtual monopoly, and Google's reported support of Wikipedia. To find experimental evidence of this support or show that the reports are not more than rumours. Section 2: To address the copy-paste syndrome with the socio-cultural consequences associated with it. Section 3: To deal with plagiarism and IPR violations as two intertwined topics: how they affect various players (teachers and pupils in school; academia; corporations; governmental studies, etc.). To establish that not enough is done concerning these issues, partially due to just plain ignorance. We will propose some ways to alleviate the problem. Section 4: To discuss the usual tools to fight plagiarism and their shortcomings. Section 5: To propose ways to overcome most of the above problems according to proposals by Maurer/Zaka. To give examples, but to make it clear that to do this more seriously a pilot project is necessary beyond this particular study. Section 6: To briefly analyze various views of plagiarism as it is quite different in different fields (journalism, engineering, architecture, painting, etc.) and to present a concept that avoids plagiarism from the very beginning. Section 7: To point out the many other dangers of Google or Google-like undertakings: opportunistic ranking, analysis of data as a window into the commercial future. Section 8: To outline the need for new international laws. Section 9: To mention the feeble European attempts to fight Google, despite Google's growing power. Section 10: To argue that there is no way to catch up with Google in a frontal attack.
    Section 11: To argue that fighting large search engines and plagiarism slice-by-slice by using dedicated servers combined by one hub could eventually decrease the importance of other global search engines. Section 12: To argue that global search engines are an area that cannot be left to the free market, but require some government control or at least non-profit institutions. We will mention other areas where similar, if not as glaring, phenomena are visible. Section 13: We will mention in passing the potential role of virtual worlds, such as the currently overhyped system "Second Life". Section 14: To elaborate and try out a model for knowledge workers that does not require special search engines, with a description of a simple demonstrator. Section 15 (not originally part of the proposal): To propose concrete actions and to describe an Austrian effort that could, with moderate support, minimize the role of Google for Austria. Section 16: References (not originally part of the proposal). In what follows, we will stick to Sections 1-14 plus the new Sections 15 and 16 as listed, plus a few Appendices.
    We believe that the importance has shifted considerably since the approval of the project. We thus will emphasize some aspects much more than originally planned, and treat others in a shorter fashion. We believe and hope that this is also seen as an unexpected benefit by BMVIT. This report is structured as follows: After an Executive Summary that highlights why the topic is of such paramount importance, we explain in an introduction possible optimal ways to study the report and its appendices. We can report with some pride that many of the ideas have been accepted by the international scene at conferences and by journals as being of such crucial importance that a number of papers (constituting the appendices and elaborating the various sections) have been considered high-quality material for publication. We want to thank the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT) for making this study possible. We would be delighted if the study could be distributed widely to European decision makers, as some of the issues involved do indeed involve all of Europe, if not the world.
  3. McIlwaine, I.: Section on Classification and Indexing : review of activities, 2000-2001 (2001) 0.12
    0.11520548 = product of:
      0.2880137 = sum of:
        0.24538466 = weight(_text_:section in 6905) [ClassicSimilarity], result of:
          0.24538466 = score(doc=6905,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.9328315 = fieldWeight in 6905, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.125 = fieldNorm(doc=6905)
        0.04262902 = weight(_text_:on in 6905) [ClassicSimilarity], result of:
          0.04262902 = score(doc=6905,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.3888053 = fieldWeight in 6905, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.125 = fieldNorm(doc=6905)
      0.4 = coord(2/5)
    
  4. Witt, M.: Section on cataloguing : report of activities 2000-2001 (2001) 0.12
    0.11520548 = product of:
      0.2880137 = sum of:
        0.24538466 = weight(_text_:section in 6913) [ClassicSimilarity], result of:
          0.24538466 = score(doc=6913,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.9328315 = fieldWeight in 6913, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.125 = fieldNorm(doc=6913)
        0.04262902 = weight(_text_:on in 6913) [ClassicSimilarity], result of:
          0.04262902 = score(doc=6913,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.3888053 = fieldWeight in 6913, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.125 = fieldNorm(doc=6913)
      0.4 = coord(2/5)
    
  5. Benito, M.: Better consistency of the UDC system moving medicine from section 61 to section 4 (2007) 0.11
    0.10510413 = product of:
      0.17517355 = sum of:
        0.15336542 = weight(_text_:section in 554) [ClassicSimilarity], result of:
          0.15336542 = score(doc=554,freq=8.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.58301973 = fieldWeight in 554, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.0390625 = fieldNorm(doc=554)
        0.013321568 = weight(_text_:on in 554) [ClassicSimilarity], result of:
          0.013321568 = score(doc=554,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.121501654 = fieldWeight in 554, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=554)
        0.0084865615 = weight(_text_:information in 554) [ClassicSimilarity], result of:
          0.0084865615 = score(doc=554,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.09697737 = fieldWeight in 554, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=554)
      0.6 = coord(3/5)
    
    Abstract
    Over 45 years ago it was decided to move class 4 (language) to section 8, together with literature. Since then class 4 has not been used. A recent master's thesis at the school of librarianship in Borås, "UDC, A Proposal to Basic Class 4" by Fredrik Hultqvist (Magisteruppsats; 2006:39), demonstrated the possibility of moving Medicine from section 61 to the empty class 4. This is not a new idea, but it has never been implemented. There are now new reasons that can facilitate the change. Medicine has been subject to readjustments and proposals for change over recent years, and this work is more or less complete. Changing notation 61 to notation 4 does not make the work done for the revision of Medicine obsolete; on the contrary, it facilitates the change in libraries, as they need to change the notations of the entire discipline anyway. The change to 4 makes medicine a digit shorter in all the subdivisions. This is an opportunity which will not come again for years. This is the practical reason. The theoretical reason can be found by analysing other classification systems. It seems that only the Dewey system, and therefore the UDC, has Medicine together with other practical disciplines in the same division. Most systems have Medicine as a main discipline with a division of its own.
    Content
    Presentation given at the 'UDC Seminar: Information Access for the Global Community', The Hague, 4-5 June 2007.
  6. Byrum, J.D. Jr.: Section on bibliography : report of the activities 2000-2001 (2001) 0.10
    0.10080478 = product of:
      0.25201195 = sum of:
        0.21471158 = weight(_text_:section in 6896) [ClassicSimilarity], result of:
          0.21471158 = score(doc=6896,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.81622756 = fieldWeight in 6896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.109375 = fieldNorm(doc=6896)
        0.03730039 = weight(_text_:on in 6896) [ClassicSimilarity], result of:
          0.03730039 = score(doc=6896,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.34020463 = fieldWeight in 6896, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.109375 = fieldNorm(doc=6896)
      0.4 = coord(2/5)
    
  7. Payette, S.; Blanchi, C.; Lagoze, C.; Overly, E.A.: Interoperability for digital objects and repositories : the Cornell/CNRI experiments (1999) 0.09
    0.09226704 = product of:
      0.15377839 = sum of:
        0.12269233 = weight(_text_:section in 1248) [ClassicSimilarity], result of:
          0.12269233 = score(doc=1248,freq=8.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.46641576 = fieldWeight in 1248, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.03125 = fieldNorm(doc=1248)
        0.02131451 = weight(_text_:on in 1248) [ClassicSimilarity], result of:
          0.02131451 = score(doc=1248,freq=8.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.19440265 = fieldWeight in 1248, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1248)
        0.009771545 = product of:
          0.01954309 = sum of:
            0.01954309 = weight(_text_:technology in 1248) [ClassicSimilarity], result of:
              0.01954309 = score(doc=1248,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.13162735 = fieldWeight in 1248, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1248)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    For several years the Digital Library Research Group at Cornell University and the Corporation for National Research Initiatives (CNRI) have been engaged in research focused on the design and development of infrastructures for open architecture, confederated digital libraries. The goal of this effort is to achieve interoperability and extensibility of digital library systems through the definition of key digital library services and their open interfaces, allowing flexible interaction of existing services and augmentation of the infrastructure with new services. Some aspects of this research have included the development and deployment of the Dienst software, the Handle System®, and the architecture of digital objects and repositories. In this paper, we describe the joint effort by Cornell and CNRI to prototype a rich and deployable architecture for interoperable digital objects and repositories. This effort has challenged us to move theories of interoperability closer to practice. The Cornell/CNRI collaboration builds on two existing projects focusing on the development of interoperable digital libraries. Details relating to the technology of these projects are described elsewhere. Both projects were strongly influenced by the fundamental abstractions of repositories and digital objects as articulated by Kahn and Wilensky in A Framework for Distributed Digital Object Services. Furthermore, both programs were influenced by the container architecture described in the Warwick Framework, and by the notions of distributed dynamic objects presented by Lagoze and Daniel in their Distributed Active Relationship work. With these common roots, one would expect that the CNRI and Cornell repositories would be at least theoretically interoperable. However, the actual test would be the extent to which our independently developed repositories were practically interoperable. This paper focuses on the definition of interoperability in the joint Cornell/CNRI work and the set of experiments conducted to formally test it. Our motivation for this work is the eventual deployment of formally tested reference implementations of the repository architecture for experimentation and development by fellow digital library researchers. In Section 2, we summarize the digital object and repository approach that was the focus of our interoperability experiments. In Section 3, we describe the set of experiments that progressively tested interoperability at increasing levels of functionality. In Section 4, we discuss general conclusions, and in Section 5, we give a preview of our future work, including our plans to evolve our experimentation to the point of defining a set of formal metrics for measuring interoperability for repositories and digital objects. This is still a work in progress that is expected to undergo additional refinements during its development.
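    The abstract above frames interoperability as a property of independently built repositories that expose the same open interface. A minimal, hypothetical sketch of that testing idea (our own class and function names, not the Cornell/CNRI code) might look like this:

      from typing import Protocol

      class Repository(Protocol):
          """Minimal repository interface: store and retrieve a digital object by handle."""
          def put(self, handle: str, data: bytes) -> None: ...
          def get(self, handle: str) -> bytes: ...

      class CornellStyleRepo:
          def __init__(self):
              self._store = {}
          def put(self, handle, data):
              self._store[handle] = data
          def get(self, handle):
              return self._store[handle]

      class CNRIStyleRepo:
          def __init__(self):
              self._objects = {}
          def put(self, handle, data):
              self._objects[handle] = data
          def get(self, handle):
              return self._objects[handle]

      def interoperability_check(repo: Repository) -> bool:
          # The same client code must work against either independently developed implementation.
          repo.put("hdl:test/1", b"digital object payload")
          return repo.get("hdl:test/1") == b"digital object payload"

      assert interoperability_check(CornellStyleRepo())
      assert interoperability_check(CNRIStyleRepo())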
  8. McIlwaine, I.C.: Section on Classification and Indexing : review of activities 1999-2000 (2000) 0.09
    0.08640411 = product of:
      0.21601026 = sum of:
        0.18403849 = weight(_text_:section in 5409) [ClassicSimilarity], result of:
          0.18403849 = score(doc=5409,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.69962364 = fieldWeight in 5409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.09375 = fieldNorm(doc=5409)
        0.031971764 = weight(_text_:on in 5409) [ClassicSimilarity], result of:
          0.031971764 = score(doc=5409,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.29160398 = fieldWeight in 5409, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.09375 = fieldNorm(doc=5409)
      0.4 = coord(2/5)
    
  9. Zia, L.L.: new projects and a progress report : ¬The NSF National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program (2001) 0.08
    0.08478953 = product of:
      0.10598691 = sum of:
        0.075912006 = weight(_text_:section in 1227) [ClassicSimilarity], result of:
          0.075912006 = score(doc=1227,freq=4.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.28858003 = fieldWeight in 1227, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1227)
        0.009325097 = weight(_text_:on in 1227) [ClassicSimilarity], result of:
          0.009325097 = score(doc=1227,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.08505116 = fieldWeight in 1227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1227)
        0.0059405933 = weight(_text_:information in 1227) [ClassicSimilarity], result of:
          0.0059405933 = score(doc=1227,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.06788416 = fieldWeight in 1227, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1227)
        0.014809214 = product of:
          0.029618427 = sum of:
            0.029618427 = weight(_text_:technology in 1227) [ClassicSimilarity], result of:
              0.029618427 = score(doc=1227,freq=6.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.19948712 = fieldWeight in 1227, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1227)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Abstract
    The National Science Foundation's (NSF) National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program comprises a set of projects engaged in a collective effort to build a national digital library of high quality science, technology, engineering, and mathematics (STEM) educational materials for students and teachers at all levels, in both formal and informal settings. By providing broad access to a rich, reliable, and authoritative collection of interactive learning and teaching resources and associated services in a digital environment, the NSDL will encourage and sustain continual improvements in the quality of STEM education for all students, and serve as a resource for lifelong learning. Though the program is relatively new, its vision and operational framework have been developed over a number of years through various workshops and planning meetings. The NSDL program held its first formal funding cycle during fiscal year 2000 (FY00), accepting proposals in four tracks: Core Integration System, Collections, Services, and Targeted Research. Twenty-nine awards were made across these tracks in September 2000. Brief descriptions of each FY00 project appeared in an October 2000 D-Lib Magazine article; full abstracts are available from the Awards Section at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl/>. In FY01 the program received one hundred-nine proposals across its four tracks with the number of proposals in the collections, services, and targeted research tracks increasing to one hundred-one from the eighty received in FY00. In September 2001 grants were awarded to support 35 new projects: 1 project in the core integration track, 18 projects in the collections track, 13 in the services track, and 3 in targeted research. Two NSF directorates, the Directorate for Geosciences (GEO) and the Directorate for Mathematical and Physical Sciences (MPS) are both providing significant co-funding on several projects, illustrating the NSDL program's facilitation of the integration of research and education, an important strategic objective of the NSF. Thus far across both fiscal years of the program fifteen projects have enjoyed this joint support. Following is a list of the FY01 awards indicating the official NSF award number (each beginning with DUE), the project title, the grantee institution, and the name of the Principal Investigator (PI). A condensed description of the project is also included. Full abstracts are available from the Awards Section at the NSDL program site at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl/>. (Grants with shared titles are formal collaborations and are grouped together.) The projects are displayed by track and are listed by award number. In addition, six of these projects have explicit relevance and application to K-12 education. Six others clearly have potential for application to the K-12 arena. The NSDL program will have another funding cycle in fiscal year 2002 with the next program solicitation expected to be available in January 2002, and an anticipated deadline for proposals in mid-April 2002.
    Theme
    Information Gateway
  10. Daniel Jr., R.; Lagoze, C.: Extending the Warwick framework : from metadata containers to active digital objects (1997) 0.08
    0.08378517 = product of:
      0.13964194 = sum of:
        0.12002743 = weight(_text_:section in 1264) [ClassicSimilarity], result of:
          0.12002743 = score(doc=1264,freq=10.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.45628512 = fieldWeight in 1264, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1264)
        0.009325097 = weight(_text_:on in 1264) [ClassicSimilarity], result of:
          0.009325097 = score(doc=1264,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.08505116 = fieldWeight in 1264, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1264)
        0.01028941 = weight(_text_:information in 1264) [ClassicSimilarity], result of:
          0.01028941 = score(doc=1264,freq=6.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.11757882 = fieldWeight in 1264, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1264)
      0.6 = coord(3/5)
    
    Abstract
    Defining metadata as "data about data" provokes more questions than it answers. What are the forms of the data and metadata? Can we be more specific about the manner in which the metadata is "about" the data? Are data and metadata distinguished only in the context of their relationship? Is the nature of the relationship between the datasets declarative or procedural? Can the metadata itself be described by other data? Over the past several years, we have been engaged in a number of efforts examining the role, format, composition, and architecture of metadata for networked resources. During this time, we have noticed the tendency to be led astray by comfortable, but somewhat inappropriate, models in the non-digital information environment. Rather than pursuing familiar models, there is the need for a new model that fully exploits the unique combination of computation and connectivity that characterizes the digital library. In this paper, we describe an extension of the Warwick Framework that we call Distributed Active Relationships (DARs). DARs provide a powerful model for representing data and metadata in digital library objects. They explicitly express the relationships between networked resources, and even allow those relationships to be dynamically downloadable and executable. The DAR model is based on the following principles, which our examination of the "data about data" definition has led us to regard as axiomatic: * There is no essential distinction between data and metadata. We can only make such a distinction in terms of a particular "about" relationship. As a result, what is metadata in the context of one "about" relationship may be data in another. * There is no single "about" relationship. There are many different and important relationships between data resources. * Resources can be related without regard for their location. The connectivity in networked information architectures makes it possible to have data in one repository describe data in another repository. * The computational power of the networked information environment makes it possible to consider active or dynamic relationships between data sets. This adds considerable power to the "data about data" definition. First, data about another data set may not physically exist, but may be automatically derived. Second, the "about" relationship may be an executable object -- in a sense interpretable metadata. As will be shown, this provides useful mechanisms for handling complex metadata problems such as rights management of digital objects. The remainder of this paper describes the development and consequences of the DAR model. Section 2 reviews the Warwick Framework, which is the basis for the model described in this paper. Section 3 examines the concept of the Warwick Framework Catalog, which provides a mechanism for expressing the relationships between the packages in a Warwick Framework container. With that background established, section 4 generalizes the Warwick Framework by removing the restriction that it only contains "metadata". This allows us to consider digital library objects that are aggregations of (possibly distributed) data sets, with the relationships between the data sets expressed using a Warwick Framework Catalog. Section 5 further extends the model by describing Distributed Active Relationships (DARs). DARs are the explicit relationships that have the potential to be executable, as alluded to earlier. Finally, section 6 describes two possible implementations of these concepts.
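    As a rough, hypothetical sketch of the Distributed Active Relationship idea described above (all names are ours), an "about" relationship can either point at stored data or be an executable object that derives the related data on demand:

      from dataclasses import dataclass
      from typing import Callable, Union

      @dataclass
      class Resource:
          uri: str
          content: dict

      @dataclass
      class ActiveRelationship:
          kind: str                                            # e.g. "describes", "rights-for"
          source: Resource
          target: Union[Resource, Callable[[Resource], dict]]  # stored data or a deriver

          def resolve(self) -> dict:
              # If the relationship is executable, derive the data; otherwise return it as stored.
              if callable(self.target):
                  return self.target(self.source)
              return self.target.content

      article = Resource("hdl:1813/article-42", {"words": 5400})

      # Static "about" relationship: descriptive metadata that physically exists.
      stored = ActiveRelationship("describes", article,
                                  Resource("hdl:1813/record-42", {"title": "Extending Warwick"}))

      # Dynamic relationship: the "metadata" does not exist until it is computed on request.
      derived = ActiveRelationship("summarizes", article,
                                   lambda r: {"approx_reading_minutes": r.content["words"] // 200})

      print(stored.resolve())   # {'title': 'Extending Warwick'}
      print(derived.resolve())  # {'approx_reading_minutes': 27}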
  11. Noerr, P.: ¬The Digital Library Tool Kit (2001) 0.08
    0.083551876 = product of:
      0.13925312 = sum of:
        0.12269233 = weight(_text_:section in 6774) [ClassicSimilarity], result of:
          0.12269233 = score(doc=6774,freq=8.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.46641576 = fieldWeight in 6774, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.03125 = fieldNorm(doc=6774)
        0.0067892494 = weight(_text_:information in 6774) [ClassicSimilarity], result of:
          0.0067892494 = score(doc=6774,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.0775819 = fieldWeight in 6774, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=6774)
        0.009771545 = product of:
          0.01954309 = sum of:
            0.01954309 = weight(_text_:technology in 6774) [ClassicSimilarity], result of:
              0.01954309 = score(doc=6774,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.13162735 = fieldWeight in 6774, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6774)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    This second edition is an update and expansion of the original April 1998 edition. It contains more of everything. In particular, the resources section has been expanded and updated. This document is designed to help those who are contemplating setting up a digital library. Whether this is a first-time computerization effort or an extension of an existing library's services, there are questions to be answered, decisions to be made, and work to be done. This document covers all those stages and more. The first section (Chapter 1) is a series of questions to ask yourself and your organization. The questions are designed generally to raise issues rather than to provide definitive answers. The second section (Chapters 2-5) discusses the planning and implementation of a digital library. It raises some issues which are specific, and contains information to help answer the specifics and a host of other aspects of a digital library project. The third section (Chapters 6-7) includes resources and a look at current research, existing digital library systems, and the future. These chapters enable you to find additional resources and help, as well as show you where to look for interesting examples of the current state of the art.
    Footnote
    This Digital Library Tool Kit was sponsored by Sun Microsystems, Inc. to address some of the leading questions that academic institutions, public libraries, government agencies, and museums face in trying to develop, manage, and distribute digital content. The evolution of Java programming, digital object standards, Internet access, electronic commerce, and digital media management models is causing educators, CIOs, and librarians to rethink many of their traditional goals and modes of operation. New audiences, continuous access to collections, and enhanced services to user communities are enabled. As one of the leading technology providers to education and library communities, Sun is pleased to present this comprehensive introduction to digital libraries
  12. Louie, A.J.; Maddox, E.L.; Washington, W.: Using faceted classification to provide structure for information architecture (2003) 0.08
    0.07538647 = product of:
      0.12564412 = sum of:
        0.092019245 = weight(_text_:section in 2471) [ClassicSimilarity], result of:
          0.092019245 = score(doc=2471,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.34981182 = fieldWeight in 2471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.046875 = fieldNorm(doc=2471)
        0.015985882 = weight(_text_:on in 2471) [ClassicSimilarity], result of:
          0.015985882 = score(doc=2471,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.14580199 = fieldWeight in 2471, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2471)
        0.017638987 = weight(_text_:information in 2471) [ClassicSimilarity], result of:
          0.017638987 = score(doc=2471,freq=6.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.20156369 = fieldWeight in 2471, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2471)
      0.6 = coord(3/5)
    
    Abstract
    This is a short, but very thorough and very interesting, report on how the writers built a faceted classification for some legal information and used it to structure a web site with navigation and searching. There is a good summary of why facets work well and how they fit into bibliographic control in general. The last section is about their implementation of a web site for the Washington State Bar Association's Council for Legal Public Education. Their classification uses three facets: Purpose (the general aim of the document, e.g. Resources for K-12 Teachers), Topic (the subject of the document), and Type (the legal format of the document). See Example Web Sites, below, for a discussion of the site and a problem with its design.
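    A hedged sketch of the three-facet scheme described above (Purpose, Topic, Type) shows how navigation reduces to filtering on any combination of facets; the facet values below are invented examples, not the actual WSBA vocabulary:

      documents = [
          {"title": "Mock Trial Kit",        "purpose": "Resources for K-12 Teachers",
           "topic": "Courts",                "type": "Lesson plan"},
          {"title": "Tenant Rights FAQ",     "purpose": "Public Legal Education",
           "topic": "Housing law",           "type": "FAQ"},
          {"title": "Juvenile Justice Unit", "purpose": "Resources for K-12 Teachers",
           "topic": "Juvenile law",          "type": "Curriculum"},
      ]

      def browse(docs, **facets):
          """Return the documents matching every supplied facet value."""
          return [d for d in docs if all(d.get(f) == v for f, v in facets.items())]

      for d in browse(documents, purpose="Resources for K-12 Teachers"):
          print(d["title"])   # Mock Trial Kit, Juvenile Justice Unit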
    Footnote
    Paper presented at the ASIS&T 2003 Information Architecture Summit, Portland, OR, 21-23 March 2003.
  13. Paskin, N.: Identifier interoperability : a report on two recent ISO activities (2006) 0.07
    0.0727616 = product of:
      0.090952 = sum of:
        0.054222863 = weight(_text_:section in 1179) [ClassicSimilarity], result of:
          0.054222863 = score(doc=1179,freq=4.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.2061286 = fieldWeight in 1179, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1179)
        0.022091324 = weight(_text_:on in 1179) [ClassicSimilarity], result of:
          0.022091324 = score(doc=1179,freq=22.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.2014877 = fieldWeight in 1179, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1179)
        0.0060009053 = weight(_text_:information in 1179) [ClassicSimilarity], result of:
          0.0060009053 = score(doc=1179,freq=4.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.068573356 = fieldWeight in 1179, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1179)
        0.008636908 = product of:
          0.017273815 = sum of:
            0.017273815 = weight(_text_:technology in 1179) [ClassicSimilarity], result of:
              0.017273815 = score(doc=1179,freq=4.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.116343245 = fieldWeight in 1179, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1179)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Abstract
    Two significant activities within ISO, the International Organisation for Standardization, are underway, each of which has potential implications for the management of content by digital libraries and their users. Moreover these two activities are complementary and have the potential to provide tools for significantly improved identifier interoperability. This article presents a report on these: the first activity investigates the practical implications of interoperability across the family of ISO TC46/SC9 identifiers (better known as the ISBN and related identifiers); the second activity is the implementation of an ontology-based data dictionary that could provide a mechanism for this, the ISO/IEC 21000-6 standard. ISO/TC 46 is the ISO Technical Committee responsible for standards of "Information and documentation". Subcommittee 9 (SC9) of that body is responsible for "Presentation, identification and description of documents": the standards that it manages are identifiers familiar to the content and digital library communities, including the International Standard Book Number (ISBN); International Standard Serial Number (ISSN); International Standard Recording Code (ISRC); International Standard Music Number (ISMN); International Standard Audio-visual Number (ISAN) and the related Version identifier for Audio-visual Works (V-ISAN); and the International Standard Musical Work Code (ISWC). Most recently ISO has introduced the International Standard Text Code (ISTC), and is about to consider standardisation of the DOI system. The ISO identifier schemes provide numbering schemes as labels of entities of "content": many of the identifiers have as referents abstract content entities ("works" rather than a specific physical or digital form: e.g., ISAN, ISWC, ISTC). The existing schemes are numbering management schemes, not tied to any specific implementation (hence for internet "actionability", these identifiers may be incorporated into URN, URI, or DOI formats, etc.). Recently SC9 has requested that new and revised identifier schemes specify mandatory structured metadata to specify the item identified; that metadata is now becoming key to interoperability.
    There has been continuing discussion over a number of years within ISO TC46 SC9 of the need for interoperability between the various standard identifiers for which this committee is responsible. However, the nature of what that interoperability might mean - and how it might be achieved - has not been well explored. Considerable amounts of work have been done on standardising the identification schemes within each media sector, by creating standard identifiers that can be used within that sector. Equally, much work has been done on creating standard or reference metadata sets that can be used to associate key metadata descriptors with content. Much less work has been done on the impact of cross-sector working. Relatively little is understood about the effect of using one industry's identifiers in another industry, or on attempting to import metadata from one identification scheme into a system based on another. In the long term it is clear that interoperability of all these media identifiers and metadata schemes will be required. What is not clear is what initial steps are likely to deliver this soonest. Under the auspices of ISO TC46, an ad hoc group of representatives of TC46 SC9 Registration Authorities and invited experts met in London in late 2005, in a facilitated workshop funded by the registration agencies (RAs) responsible for ISAN, ISWC, ISRC and DOI, to develop definitions and use cases, with the intention of providing a framework within which a more structured exploration of the issues might be undertaken. A report of the workshop prepared by Mark Bide of Rightscom Ltd. was used as the input for a wider discussion at the ISO TC46 meeting held in Thailand in February 2006, at which ISO TC46/SC9 agreed that Registration Authorities for ISRC, ISWC, ISAN, ISBN, ISSN and ISMN and the proposed RAs for ISTC and DOI should continue working on common issues relating to interoperability of identifier systems developed within TC46/SC9; some of the use cases have been selected for further in-depth investigation, in parallel with discussions on potential solutions.
    Section 2 below is based extensively on the report of the output from that workshop, with minor editorial changes to reflect points raised in the subsequent discussion. The second activity, not yet widely appreciated as being related, is the development of a content-focussed data dictionary within MPEG. ISO/IEC JTC 1/SC29, The Moving Picture Experts Group (MPEG), is formally a joint working group of ISO and the International Electrotechnical Commission. Originally best known for compression standards for audio, MPEG now includes the MPEG-21 "Multimedia Framework", which includes several components of digital rights management technology standardisation. Some of the components are already being used in digital library activities. One component is a Rights Data Dictionary that was established as a component to support activities such as the MPEG Rights Expression Language. In April 2005, the ISO/IEC Technical Management Board appointed a Registration Authority for the MPEG 21 Rights Data Dictionary (ISO/IEC Information technology - Multimedia framework (MPEG-21) - Part 6: Rights Data Dictionary, ISO/IEC 21000-6), and an implementation of the dictionary is about to be launched. However, the Dictionary design is based on a generic interoperability framework, and it will offer extensive additional possibilities. The design of the dictionary goes back to one of the major studies of the conceptual model of interoperability, <indecs>. Section 3 below provides a brief summary of the origins and possible applications of the ISO/IEC 21000-6 Dictionary.
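    On the "actionability" point made in this abstract - the ISO schemes are plain numbering schemes, which only become resolvable when wrapped in a URN, URI or DOI form - a hedged sketch (our own function; urn:isbn and urn:issn are registered URN namespaces, and https://doi.org/ is the public DOI proxy) could look like this:

      def actionable(scheme: str, value: str) -> str:
          """Wrap a bare identifier value in an actionable URN/URI form."""
          scheme = scheme.lower()
          if scheme == "isbn":
              return f"urn:isbn:{value}"
          if scheme == "issn":
              return f"urn:issn:{value}"
          if scheme == "doi":
              return f"https://doi.org/{value}"
          raise ValueError(f"no actionable form configured for {scheme}")

      print(actionable("ISBN", "978-0-12-345678-9"))   # urn:isbn:978-0-12-345678-9
      print(actionable("DOI", "10.1000/182"))          # https://doi.org/10.1000/182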
  14. Gayathri, R.; Uma, V.: Ontology based knowledge representation technique, domain modeling languages and planners for robotic path planning : a survey (2018) 0.07
    0.070913404 = product of:
      0.11818901 = sum of:
        0.092019245 = weight(_text_:section in 5605) [ClassicSimilarity], result of:
          0.092019245 = score(doc=5605,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.34981182 = fieldWeight in 5605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.046875 = fieldNorm(doc=5605)
        0.015985882 = weight(_text_:on in 5605) [ClassicSimilarity], result of:
          0.015985882 = score(doc=5605,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.14580199 = fieldWeight in 5605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5605)
        0.0101838745 = weight(_text_:information in 5605) [ClassicSimilarity], result of:
          0.0101838745 = score(doc=5605,freq=2.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.116372846 = fieldWeight in 5605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5605)
      0.6 = coord(3/5)
    
    Abstract
    Knowledge Representation and Reasoning (KR&R) has become one of the most promising fields of Artificial Intelligence. KR is dedicated to representing information about the domain that can be utilized in path planning. Ontology-based knowledge representation and reasoning techniques provide sophisticated knowledge about the environment for processing tasks or methods. Ontologies help in representing knowledge about the environment, events and actions that support path planning and make robots more autonomous. Knowledge reasoning techniques can infer new conclusions and thus aid planning dynamically in a non-deterministic environment. In the initial sections, the representation of knowledge using ontologies and the reasoning techniques that could contribute to path planning are discussed in detail. In the following section, we also provide a comparison of various planning domain modeling languages, ontology editors, planners and robot simulation tools.
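    As a hedged toy example of the idea in this abstract (our own fact base and names): environment knowledge is stored as ontology-like facts, simple reasoning derives the connectivity graph, and the path planner searches over it:

      from collections import deque

      # Ontology-like facts about the environment: (predicate, subject, object)
      facts = {("connected", "lab", "corridor"),
               ("connected", "corridor", "kitchen"),
               ("connected", "kitchen", "storage")}

      def neighbours(place):
          # Treat "connected" as symmetric - an inference the raw facts do not state explicitly.
          return ({b for (p, a, b) in facts if p == "connected" and a == place}
                  | {a for (p, a, b) in facts if p == "connected" and b == place})

      def plan_path(start, goal):
          """Breadth-first search over the inferred connectivity graph."""
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == goal:
                  return path
              for nxt in neighbours(path[-1]) - seen:
                  seen.add(nxt)
                  queue.append(path + [nxt])
          return None

      print(plan_path("lab", "storage"))   # ['lab', 'corridor', 'kitchen', 'storage']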
    Content
    Part of special issue: SI on Artificial Intelligence and Machine Learning. See: https://doi.org/10.1016/j.icte.2018.04.008.
  15. Genetasio, G.: ¬The International Cataloguing Principles and their future, in: JLIS.it 3/1 (2012) (2012) 0.07
    0.070139915 = product of:
      0.17534979 = sum of:
        0.13013488 = weight(_text_:section in 2625) [ClassicSimilarity], result of:
          0.13013488 = score(doc=2625,freq=4.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.49470866 = fieldWeight in 2625, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.046875 = fieldNorm(doc=2625)
        0.045214903 = weight(_text_:on in 2625) [ClassicSimilarity], result of:
          0.045214903 = score(doc=2625,freq=16.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.4123903 = fieldWeight in 2625, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2625)
      0.4 = coord(2/5)
    
    Abstract
    The article aims to provide an update on the 2009 Statement of International Cataloguing Principles (ICP) and on the status of work on the Statement by the IFLA Cataloguing Section. The article begins with a summary of the drafting process of the ICP by the IME ICC, International Meeting of Experts on an International Cataloguing Code, focusing in particular on the first meeting (IME ICC1) and on the earlier drafts of the 2009 Statement. It then analyzes both the major innovations and the unsatisfactory aspects of the ICP. Finally, it explains and comments on the recent documents by the IFLA Cataloguing Section relating to the ICP, which express their intention to revise the Statement and to verify the convenience of drawing up an international cataloguing code. The latter intention is considered in detail and criticized by the author in the light of the recent publication of the RDA, Resource Description and Access. The article is complemented by an updated bibliography on the ICP.
  16. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.06
    0.0634407 = product of:
      0.15860176 = sum of:
        0.13195862 = product of:
          0.39587584 = sum of:
            0.39587584 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.39587584 = score(doc=1826,freq=2.0), product of:
                0.42262965 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049850095 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.026643137 = weight(_text_:on in 1826) [ClassicSimilarity], result of:
          0.026643137 = score(doc=1826,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.24300331 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.4 = coord(2/5)
    
    Content
    Presentation given at the European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2-4, 2014, LIS workshop.
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  17. Styltsvig, H.B.: Ontology-based information retrieval (2006) 0.06
    0.058660645 = product of:
      0.09776774 = sum of:
        0.061346166 = weight(_text_:section in 1154) [ClassicSimilarity], result of:
          0.061346166 = score(doc=1154,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.23320788 = fieldWeight in 1154, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
        0.018458908 = weight(_text_:on in 1154) [ClassicSimilarity], result of:
          0.018458908 = score(doc=1154,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.16835764 = fieldWeight in 1154, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
        0.017962666 = weight(_text_:information in 1154) [ClassicSimilarity], result of:
          0.017962666 = score(doc=1154,freq=14.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.20526241 = fieldWeight in 1154, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1154)
      0.6 = coord(3/5)
    
    Abstract
    In this thesis, we will present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies involves a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing that reveals knowledge to the level of noun phrases. Furthermore, we briefly cover the identification of semantic relations inside and between noun phrases, as well as discuss which kinds of problems are caused by an increase in compoundness with respect to the structure of concepts in the evaluation of queries. Measuring similarity between concepts based on distances in the structure of the ontology is discussed. In addition, a shared nodes measure is introduced and, based on a set of intuitive similarity properties, compared to a number of different measures. In this comparison the shared nodes measure appears to be superior, though more computationally complex. Some of the major problems with shared nodes are discussed; they relate to the way relations differ in the degree to which they bring the concepts they connect closer together. A generalized measure called weighted shared nodes is introduced to deal with these problems. Finally, the utilization of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems, it appears that the fuzzy set model offers the flexibility needed when generalizing to an ontology-based retrieval model and, with the introduction of a hierarchical fuzzy aggregation principle, compound concepts can be handled in a straightforward and natural manner.
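    To make the two ideas at the core of this abstract more concrete, here is a toy sketch of distance-based concept similarity and a shared-nodes-style overlap over a tiny is-a hierarchy. It is only an illustration under assumed, simplified definitions (unweighted edges, Jaccard overlap of ancestor sets); the thesis's own weighted shared nodes measure is defined differently.
      # Toy is-a hierarchy: child -> parent. Concepts and structure are invented for illustration.
      PARENT = {
          "espresso": "coffee", "latte": "coffee",
          "coffee": "beverage", "tea": "beverage",
          "beverage": "thing",
      }

      def ancestors(concept):
          """The concept itself plus every node on the upward path to the root."""
          nodes = {concept}
          while concept in PARENT:
              concept = PARENT[concept]
              nodes.add(concept)
          return nodes

      def path_similarity(a, b):
          """Similarity from the shortest is-a path: 1 / (1 + number of edges)."""
          anc_a, anc_b = ancestors(a), ancestors(b)
          common = anc_a & anc_b
          dist = len(anc_a - common) + len(anc_b - common)   # path via the lowest common ancestor
          return 1.0 / (1.0 + dist)

      def shared_nodes_overlap(a, b):
          """Crude stand-in for a shared nodes measure: Jaccard overlap of reachable ancestors."""
          anc_a, anc_b = ancestors(a), ancestors(b)
          return len(anc_a & anc_b) / len(anc_a | anc_b)

      print(path_similarity("espresso", "latte"))       # 1/(1+2) ~ 0.33
      print(shared_nodes_overlap("espresso", "latte"))  # 3/5 = 0.6
      print(shared_nodes_overlap("espresso", "tea"))    # 2/5 = 0.4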
    Imprint
    Roskilde : Roskilde University, Computer Science Section
  18. Hill, L.L.; Frew, J.; Zheng, Q.: Geographic names : the implementation of a gazetteer in a georeferenced digital library (1999) 0.06
    0.056030147 = product of:
      0.09338357 = sum of:
        0.061346166 = weight(_text_:section in 1240) [ClassicSimilarity], result of:
          0.061346166 = score(doc=1240,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.23320788 = fieldWeight in 1240, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.03125 = fieldNorm(doc=1240)
        0.018458908 = weight(_text_:on in 1240) [ClassicSimilarity], result of:
          0.018458908 = score(doc=1240,freq=6.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.16835764 = fieldWeight in 1240, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1240)
        0.013578499 = weight(_text_:information in 1240) [ClassicSimilarity], result of:
          0.013578499 = score(doc=1240,freq=8.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.1551638 = fieldWeight in 1240, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1240)
      0.6 = coord(3/5)
    
    Abstract
    The Alexandria Digital Library (ADL) Project has developed a content standard for gazetteer objects and a hierarchical type scheme for geographic features. Both of these developments are based on ADL experience with an earlier gazetteer component for the Library, which drew on two gazetteers maintained by the U.S. federal government. We define the minimum components of a gazetteer entry as (1) a geographic name, (2) a geographic location represented by coordinates, and (3) a type designation. With these attributes, a gazetteer can function as a tool for indirect spatial location identification through names and types. The ADL Gazetteer Content Standard supports contribution and sharing of gazetteer entries with rich descriptions beyond the minimum requirements. This paper describes the content standard, the feature type thesaurus, and the implementation and research issues. A gazetteer is a list of geographic names, together with their geographic locations and other descriptive information. A geographic name is a proper name for a geographic place or feature, such as Santa Barbara County, Mount Washington, St. Francis Hospital, and Southern California. There are many types of printed gazetteers. For example, the New York Times Atlas has a gazetteer section that can be used to look up a geographic name and find the page(s) and grid reference(s) where the corresponding feature is shown. Some gazetteers provide information about places and features; for example, a history of the locale, population data, physical data such as elevation, or the pronunciation of the name. Some lists of geographic names are available as hierarchical term sets (thesauri) designed for information retrieval; these are used to describe bibliographic or museum materials. Examples include the authority files of the U.S. Library of Congress and the GeoRef Thesaurus produced by the American Geological Institute. The Getty Museum has recently made its Thesaurus of Geographic Names available online. This is a major project to develop a controlled vocabulary of current and historical names to describe (i.e., catalog) art and architecture literature. U.S. federal government mapping agencies maintain gazetteers containing the official names of places and/or the names that appear on map series. Examples include the U.S. Geological Survey's Geographic Names Information System (GNIS) and the National Imagery and Mapping Agency's Geographic Names Processing System (GNPS). Both of these are maintained in cooperation with the U.S. Board of Geographic Names (BGN). Many other examples could be cited: for local areas, for other countries, and for special purposes. There is remarkable diversity in approaches to the description of geographic places and no standardization beyond authoritative sources for the geographic names themselves.
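    The minimum gazetteer entry described in the abstract (a name, a coordinate location, and a type designation) can be pictured as a small record type with a name-and-type lookup. The sketch below is illustrative only: the field names, the lookup function and the coordinate values are assumptions made for this note, not part of the ADL Gazetteer Content Standard.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class GazetteerEntry:
          """Minimum components of a gazetteer entry: name, coordinate location, type."""
          name: str
          lat: float
          lon: float
          feature_type: str          # e.g. a term from a feature-type thesaurus

      # Coordinates are rough, purely illustrative values.
      GAZETTEER = [
          GazetteerEntry("Mount Washington", 44.27, -71.30, "summits"),
          GazetteerEntry("Santa Barbara County", 34.70, -120.00, "counties"),
          GazetteerEntry("St. Francis Hospital", 41.77, -72.70, "hospitals"),
      ]

      def lookup(name_part: str, feature_type: Optional[str] = None):
          """Indirect spatial lookup: from a (partial) name and optional type to coordinates."""
          return [
              (e.name, e.lat, e.lon)
              for e in GAZETTEER
              if name_part.lower() in e.name.lower()
              and (feature_type is None or e.feature_type == feature_type)
          ]

      print(lookup("washington"))           # [('Mount Washington', 44.27, -71.3)]
      print(lookup("santa", "counties"))    # [('Santa Barbara County', 34.7, -120.0)]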
  19. Dodge, M.: ¬A map of Yahoo! (2000) 0.06
    0.055014312 = product of:
      0.06876789 = sum of:
        0.030673083 = weight(_text_:section in 1555) [ClassicSimilarity], result of:
          0.030673083 = score(doc=1555,freq=2.0), product of:
            0.26305357 = queryWeight, product of:
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.049850095 = queryNorm
            0.11660394 = fieldWeight in 1555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.276892 = idf(docFreq=613, maxDocs=44218)
              0.015625 = fieldNorm(doc=1555)
        0.01921264 = weight(_text_:on in 1555) [ClassicSimilarity], result of:
          0.01921264 = score(doc=1555,freq=26.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.17523219 = fieldWeight in 1555, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.015625 = fieldNorm(doc=1555)
        0.013996395 = weight(_text_:information in 1555) [ClassicSimilarity], result of:
          0.013996395 = score(doc=1555,freq=34.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.15993917 = fieldWeight in 1555, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=1555)
        0.0048857727 = product of:
          0.009771545 = sum of:
            0.009771545 = weight(_text_:technology in 1555) [ClassicSimilarity], result of:
              0.009771545 = score(doc=1555,freq=2.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.065813676 = fieldWeight in 1555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1555)
          0.5 = coord(1/2)
      0.8 = coord(4/5)
    
    Content
    "Introduction Yahoo! is the undisputed king of the Web directories, providing one of the key information navigation tools on the Internet. It has maintained its popularity over many Internet-years as the most visited Web site, against intense competition. This is because it does a good job of sifting, cataloguing and organising the Web [1]. But what would a map of Yahoo!'s hierarchical classification of the Web look like? Would an interactive map of Yahoo!, rather than the conventional listing of sites, be more useful as a navigational tool? We can get some idea of what a map of Yahoo! might be like by taking a look at ET-Map, a prototype developed by Hsinchun Chen and colleagues in the Artificial Intelligence Lab [2] at the University of Arizona. ET-Map was developed in 1995 as part of innovative research in automatic Internet homepage categorization and it charts a large chunk of Yahoo!, from the entertainment section, representing some 110,000 different Web links. The map is a two-dimensional, multi-layered category map; its aim is to provide an intuitive visual information browsing tool. ET-Map can be browsed interactively, explored and queried, using the familiar point-and-click navigation style of the Web to find information of interest.
    The View From Above Browsing for a particular piece of information on the Web can often feel like being stuck in an unfamiliar part of town walking around at street level looking for a particular store. You know the store is around there somewhere, but your viewpoint at ground level is constrained. What you really want is to get above the streets, hovering half a mile or so up in the air, to see the whole neighbourhood. This kind of bird's-eye view function has been memorably described by David D. Clark, Senior Research Scientist at MIT's Laboratory for Computer Science and the Chairman of the Invisible Worlds Protocol Advisory Board, as the missing "up button" on the browser [3]. ET-Map is a nice example of a prototype for Clark's "up-button" view of an information space. The goal of information maps, like ET-Map, is to provide the browser with a sense of the lie of the information landscape, what is where, the location of clusters and hotspots, what is related to what. Ideally, this 'big-picture' all-in-one visual summary needs to fit on a single standard computer screen. ET-Map is one of my favourite examples, but there are many other interesting information maps being developed by other researchers and companies (see inset at the bottom of this page). How does ET-Map work? Here is a sequence of screenshots of a typical browsing session with ET-Map, which ends with access to Web pages on jazz musician Miles Davis. You can also try out ET-Map for yourself, using a fully working demo on the AI Lab's website [4]. We begin with the top-level map showing forty-odd broad entertainment 'subject regions' represented by regularly shaped tiles. Each tile is a visual summary of a group of Web pages with similar content. These tiles are shaded different colours to differentiate them, while labels identify the subject of the tile and a number in brackets tells you how many individual Web page links it contains. ET-Map uses two important, but common-sense, spatial concepts in its organisation and representation of the Web. Firstly, the size of a 'subject region' is directly related to the number of Web pages in that category. For example, the 'MUSIC' subject area contains over 11,000 pages and so has a much larger area than the neighbouring area of 'LIVE', which has only 4,300-odd pages. This is intuitively meaningful, as the largest tiles are visually more prominent on the map and are likely to be more significant as they contain the most links. In addition, a second spatial concept, that of neighbourhood proximity, is applied so 'subject regions' closely related in terms of content are plotted close to each other on the map. For example, 'FILM' and 'YEAR'S OSCARS', at the bottom left, are neighbours in both semantic and spatial space. This makes sense, as many things in the real world are ordered in this way, with things that are alike being spatially close together (e.g. layout of goods in a store, or books in a library). Importantly, ET-Map is also a multi-layer map, with sub-maps showing greater informational resolution through a finer degree of categorization. So for any subject region that contains more than two hundred Web pages, a second-level map, with more detailed categories, is generated. This subdivision of information space is repeated down the hierarchy as far as necessary. In the example, the user selected the 'MUSIC' subject region which, not surprisingly, contained many thousands of pages. A second-level map with numerous different music categories is then presented to the user.
    Delving deeper, the user wants to learn more about jazz music, so clicking on the 'JAZZ' tile leads to a third-level map, a fine-grained map of jazz-related Web pages. Finally, selecting the 'MILES DAVIS' subject region leads to a more conventional-looking ranking of pages from which the user selects one to download.
    ET-Map was created using a sophisticated AI technique called the Kohonen self-organizing map, a neural network approach that has been used for automatic analysis and classification of the semantic content of text documents like Web pages. I do not pretend to fully understand how this technique works; I tend to think of it as a clever 'black-box' that groups together things that are alike [5]. It is a real challenge to automatically classify pages from a very heterogeneous information collection like the Web into categories that will match the conceptions of a typical user. Directories like Yahoo! tend to rely on the skill of human editors to achieve this. ET-Map is an interesting prototype that I think highlights well the potential for a map-based approach to Web browsing. I am surprised none of the major search engines or directories have introduced the option of mapping results, although I am sure many are working on ideas. People certainly need all the help they can get, as Web growth shows no sign of slowing. Just last month it was reported that the Web had surpassed one billion indexable pages [6].
    Information Maps There are many other fascinating examples that employ two-dimensional interactive maps to provide a 'bird's-eye' view of information. They use various underlying techniques of textual analysis and clustering to turn the mass of information into a useful summary map (see "Mining in Textual Mountains" in Mappa.Mundi Magazine). In terms of visual representation they can be divided into two groups: those that generate smooth surfaces and those that produce regular, tiled maps. Unfortunately, we don't have space to examine them in detail, but they are well worth spending some time exploring. I will be covering some of them in future columns.
    Research Prototypes:
    • Visual SiteMap: developed by Xia Lin, based at the College of Library and Information Science, Drexel University.
    • CVG: Cyberspace geography visualization, developed by Luc Girardin at The Graduate Institute of International Studies, Switzerland.
    • WEBSOM: maps the thousands of articles posted on Usenet newsgroups; it is being developed by researchers at the Neural Networks Research Centre, Helsinki University of Technology, Finland.
    • TreeMaps: developed by Brian Johnson, Ben Shneiderman and colleagues in the Human-Computer Interaction Lab at the University of Maryland.
    Commercial Information Maps:
    • NewsMaps: provides interactive information landscapes summarizing daily news stories, developed by Cartia, Inc.
    • Web Squirrel: creates maps known as information farms, developed by Eastgate Systems, Inc.
    • Umap: produces interactive maps of Web searches.
    • Map of the Market: an interactive map of the market performance of the stocks of major US corporations, developed by SmartMoney.com."
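    For readers curious about the 'clever black-box' mentioned in the piece, a Kohonen self-organizing map can be written down quite compactly. The sketch below is a generic, minimal SOM training loop over random toy vectors in NumPy, not the ET-Map implementation; the grid size, learning-rate schedule and input data are all assumptions made for this illustration.
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy "documents" as tiny term vectors (rows); in a system like ET-Map these
      # would be bag-of-words vectors for Web pages. Purely illustrative data.
      docs = rng.random((200, 20))

      grid_h, grid_w, dim = 6, 6, docs.shape[1]
      weights = rng.random((grid_h, grid_w, dim))   # one prototype vector per map tile
      coords = np.dstack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"))

      def train(weights, docs, epochs=20, lr0=0.5, radius0=3.0):
          for epoch in range(epochs):
              lr = lr0 * (1 - epoch / epochs)                    # decaying learning rate
              radius = max(radius0 * (1 - epoch / epochs), 0.5)  # shrinking neighbourhood
              for x in docs:
                  # best matching unit: the tile whose prototype is closest to the document
                  dists = np.linalg.norm(weights - x, axis=2)
                  bmu = np.unravel_index(dists.argmin(), dists.shape)
                  # neighbourhood function: nearby tiles are pulled toward the document too,
                  # which is what places similar content in adjacent map regions
                  grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
                  h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))[..., None]
                  weights += lr * h * (x - weights)
          return weights

      weights = train(weights, docs)

      # After training, each document is assigned to its best matching tile; tile
      # occupancy counts would drive the relative region sizes on a map like ET-Map.
      assignments = [
          np.unravel_index(np.linalg.norm(weights - x, axis=2).argmin(), weights.shape[:2])
          for x in docs
      ]
      print(len(set(assignments)), "occupied tiles out of", grid_h * grid_w)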
  20. Kirk, J.: Theorising information use : managers and their work (2002) 0.05
    0.051403057 = product of:
      0.08567176 = sum of:
        0.018650195 = weight(_text_:on in 560) [ClassicSimilarity], result of:
          0.018650195 = score(doc=560,freq=2.0), product of:
            0.109641045 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.049850095 = queryNorm
            0.17010231 = fieldWeight in 560, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=560)
        0.042838223 = weight(_text_:information in 560) [ClassicSimilarity], result of:
          0.042838223 = score(doc=560,freq=26.0), product of:
            0.08751074 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.049850095 = queryNorm
            0.4895196 = fieldWeight in 560, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=560)
        0.024183342 = product of:
          0.048366684 = sum of:
            0.048366684 = weight(_text_:technology in 560) [ClassicSimilarity], result of:
              0.048366684 = score(doc=560,freq=4.0), product of:
                0.14847288 = queryWeight, product of:
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.049850095 = queryNorm
                0.32576108 = fieldWeight in 560, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.978387 = idf(docFreq=6114, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=560)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    The focus of this thesis is information use. Although a key concept in information behaviour, information use has received little attention from information science researchers. Studies of other key concepts such as information need and information seeking are dominant in information behaviour research. Information use is an area of interest to information professionals who rely on research outcomes to shape their practice. There are few empirical studies of how people actually use information that might guide and refine the development of information systems, products and services.
    Content
    A thesis submitted to the University of Technology, Sydney, in fulfilment of the requirements for the degree of Doctor of Philosophy. - See: http://epress.lib.uts.edu.au/dspace/bitstream/2100/309/2/02whole.pdf.
    Imprint
    Sydney : University of Technology / Faculty of Humanities and Social Sciences
    Theme
    Information
