Search (258 results, page 2 of 13)

  • × type_ss:"el"
  • × year_i:[2000 TO 2010}
  1. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.03
    0.02582543 = product of:
      0.06456357 = sum of:
        0.032278713 = weight(_text_:context in 4746) [ClassicSimilarity], result of:
          0.032278713 = score(doc=4746,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.18316938 = fieldWeight in 4746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
        0.032284863 = weight(_text_:system in 4746) [ClassicSimilarity], result of:
          0.032284863 = score(doc=4746,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24108742 = fieldWeight in 4746, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
      0.4 = coord(2/5)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone and 3D hyperbolic geometry for a Focus+Context view, and it provides a fluid interactive experience through guaranteed frame-rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries.
The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
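The score breakdowns shown with each result follow Lucene's ClassicSimilarity: each matching term contributes weight = queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the sum over terms is scaled by a coordination factor for the fraction of query clauses matched. A minimal Python sketch reproducing the numbers of result 1 (the function name is ours, not Lucene's):

```python
import math

def explain_term(freq, idf, query_norm, field_norm):
    """Reproduce one term clause of Lucene's ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight    # weight = queryWeight * fieldWeight

# Values from the "context" clause of result 1 (doc 4746) above:
w_context = explain_term(freq=2.0, idf=4.14465,
                         query_norm=0.04251826, field_norm=0.03125)

# The "system" clause of the same document:
w_system = explain_term(freq=6.0, idf=3.1495528,
                        query_norm=0.04251826, field_norm=0.03125)

coord = 2 / 5                             # 2 of 5 query clauses matched
score = coord * (w_context + w_system)    # ~0.0258, the displayed score
```

The same arithmetic, with the freq, idf, and fieldNorm values printed in each tree, reproduces every score on this page.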
  2. Landry, P.: MACS update : moving toward a link management production database (2003) 0.03
    0.025715468 = product of:
      0.06428867 = sum of:
        0.045648996 = weight(_text_:context in 2864) [ClassicSimilarity], result of:
          0.045648996 = score(doc=2864,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.25904062 = fieldWeight in 2864, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.03125 = fieldNorm(doc=2864)
        0.018639674 = weight(_text_:system in 2864) [ClassicSimilarity], result of:
          0.018639674 = score(doc=2864,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 2864, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=2864)
      0.4 = coord(2/5)
    
    Abstract
    Introduction Multilingualism has long been an issue discussed and debated at ELAG conferences, and members of ELAG have generally considered automation an important factor in the development of multilingual subject access solutions. It is quite fitting, in the context of this year's theme of "Cross language applications and the web", that the latest development of the MACS project be presented. As the title indicates, this presentation will focus on the latest development of the Link Management Interface (LMI), which is the pivotal tool of the MACS multilingual subject access solution. It updates the presentation given by Genevieve Clavel-Merrin at last year's ELAG 2002 Conference in Rome, which gave a thorough description of the work undertaken since 1997. In particular, G. Clavel-Merrin described the development of the MACS prototype, in which the mechanisms for the establishment and management of links between subject heading languages (SHLs) and the user search interface had been implemented.
    Conclusion After a few years of design work and testing, it now appears that the MACS project is almost ready to move to production. The latest LMI release has shown that it can be used in a federated work network and that it is robust enough to manage many thousands of links. Once in the production phase, consideration should be given to extending MACS to other SHLs in other languages. There is still great interest among other CENL members in participating in this project, and the consortium structure will need to be finalised in order to incorporate new partners gradually and successfully into the MACS system. Work will also continue on improving the Search Interface (SI) before it can be successfully integrated into each of the partners' OPACs. In this context, some form of access to the local authority files should be investigated so that users can select the most appropriate heading within each subject hierarchy before sending their search to the different target databases. Testing of Z39.50 access to the partners' library catalogues will also continue, to further refine search results. The long-range prospects of the MACS initiative will have to be addressed in the foreseeable future. Financial as well as institutional support will need to be reinforced, and possibly new types of partnership identified. As the need to improve subject access continues to be an issue for many European national libraries, MACS will hopefully remain a viable tool for ensuring cross-language access. One potential target is the TEL project: within the scope of that initiative, is it possible and useful to envisage the integration of MACS in TEL as an additional access point? The question is worth stating in light of the challenge to European national libraries to offer improved access to their collections.
  3. Harzing, A.-W.: Comparing the Google Scholar h-index with the ISI Journal Impact Factor (2008) 0.03
    0.025116233 = product of:
      0.12558116 = sum of:
        0.12558116 = weight(_text_:index in 855) [ClassicSimilarity], result of:
          0.12558116 = score(doc=855,freq=8.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.67591333 = fieldWeight in 855, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0546875 = fieldNorm(doc=855)
      0.2 = coord(1/5)
    
    Abstract
    Publication in academic journals is a key criterion for appointment, tenure and promotion in universities. Many universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF). This paper proposes an alternative metric - Hirsch's h-index - and data source - Google Scholar - to assess journal impact. Using a systematic comparison between the Google Scholar h-index and the ISI JIF for a sample of 838 journals in Economics & Business, we argue that the former provides a more accurate and comprehensive measure of journal impact.
    Object
    h-index
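The h-index that Harzing proposes as a journal metric is simple to compute: a set of papers has index h if h of them have received at least h citations each. A small sketch (our own illustration, not code from the paper):

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i          # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

# A journal whose articles were cited 10, 8, 5, 4 and 3 times has h = 4:
# four articles have at least four citations each, but not five with five.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Applied to a journal, the citation counts of its articles (e.g. as reported by Google Scholar) feed directly into this function.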
  4. Bladow, N.; Dorey, C.; Frederickson, L.; Grover, P.; Knudtson, Y.; Krishnamurthy, S.; Lazarou, V.: What's the Buzz about? : An empirical examination of Search on Yahoo! (2005) 0.02
    0.024069263 = product of:
      0.12034631 = sum of:
        0.12034631 = weight(_text_:index in 3072) [ClassicSimilarity], result of:
          0.12034631 = score(doc=3072,freq=10.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.64773786 = fieldWeight in 3072, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=3072)
      0.2 = coord(1/5)
    
    Abstract
    We present an analysis of the Yahoo Buzz Index over a period of 45 weeks. Our key findings are that: (1) It is most common for a search term to show up on the index for one week, followed by two weeks, three weeks, etc. Only two terms persist for all 45 weeks studied - Britney Spears and Jennifer Lopez. Search term longevity follows a power-law distribution or a winner-take-all structure; (2) Most search terms focus on entertainment. Search terms related to serious topics are found less often. The Buzz Index does not necessarily follow the "news cycle"; and, (3) We provide two ways to determine "star power" of various search terms - one that emphasizes staying power on the Index and another that emphasizes rank. In general, the methods lead to dramatically different results. Britney Spears performs well in both methods. We conclude that the data available on the Index is symptomatic of a celebrity-crazed, entertainment-centered culture.
  5. Reiner, U.: DDC-basierte Suche in heterogenen digitalen Bibliotheks- und Wissensbeständen (2005) 0.02
    0.024017572 = product of:
      0.06004393 = sum of:
        0.04841807 = weight(_text_:context in 4854) [ClassicSimilarity], result of:
          0.04841807 = score(doc=4854,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 4854, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=4854)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 4854) [ClassicSimilarity], result of:
              0.034877572 = score(doc=4854,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 4854, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4854)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    While search by subject terms was unified some time ago, the unification of subject search by classification for end users is still largely outstanding. As a member of the DFG-funded consortium DDC Deutsch, the GBV is establishing the prerequisites for thematic search by means of DDC notations. Within the VZG project Colibri (COntext Generation and LInguistic Tools for Bibliographic Retrieval Interfaces), the goal is to assign DDC notations to all title records of the union catalogue (GVK) and to the more than 20 million article titles of the Online Contents database that have not yet been intellectually DDC-classified, either on the basis of concordances to other indexing systems or automatically. To this end, a DDC base is built from GVK title records; both molecular and atomic DDC notations form part of it. The state of research is reviewed, in particular Songqiao Liu's dissertation on the automatic decomposition of DDC notations (Univ. of California, Los Angeles, Calif., 1993). Examples show that Liu's results can be reproduced. Furthermore, the state of the VZG Colibri work on modelling and concept formation, classification, and implementation is presented. Finally, it is shown how DDC notations can be used for the systematic exploration of heterogeneous digital library and knowledge collections.
    Date
    19. 1.2006 19:15:29
  6. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.02
    0.02397943 = product of:
      0.05994857 = sum of:
        0.048427295 = weight(_text_:system in 3261) [ClassicSimilarity], result of:
          0.048427295 = score(doc=3261,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.36163113 = fieldWeight in 3261, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3261)
        0.011521274 = product of:
          0.03456382 = sum of:
            0.03456382 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
              0.03456382 = score(doc=3261,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23214069 = fieldWeight in 3261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3261)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Ontologies improve IR systems' retrieval and presentation of information, making the task of finding information more effective, efficient, and interactive. In this paper we argue that ontologies also greatly improve the engineering of such systems. We created a framework that uses an ontology to drive the process of engineering an IR system. We developed a prototype that shows how a domain specialist without expertise in the IR field can build an IR system with interactive components. The resulting system supports users not only in satisfying their information needs but also in extending their state of knowledge. This way, our approach to ontology-enabled information retrieval addresses both the engineering aspect described here and the usability aspect described elsewhere.
    Date
    28.11.2016 12:43:22
  7. Anderson, T.D.: Situating relevance : exploring individual relevance assessments in context (2001) 0.02
    0.0225951 = product of:
      0.1129755 = sum of:
        0.1129755 = weight(_text_:context in 3909) [ClassicSimilarity], result of:
          0.1129755 = score(doc=3909,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.64109284 = fieldWeight in 3909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.109375 = fieldNorm(doc=3909)
      0.2 = coord(1/5)
    
  8. Danowski, P.: Authority files and Web 2.0 : Wikipedia and the PND. An Example (2007) 0.02
    0.021780593 = product of:
      0.05445148 = sum of:
        0.044850416 = weight(_text_:index in 1291) [ClassicSimilarity], result of:
          0.044850416 = score(doc=1291,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.24139762 = fieldWeight in 1291, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.009601062 = product of:
          0.028803186 = sum of:
            0.028803186 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
              0.028803186 = score(doc=1291,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 1291, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1291)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    More and more users index everything on their own in the Web 2.0. There are services for links, videos, pictures, books, encyclopaedic articles and scientific articles. All these services are independent of libraries. But must that really be so? Can't libraries contribute their experience and tools to make user indexing better? Drawing on the experience of a project of the German-language Wikipedia together with the German personal name authority file (Personennamendatei - PND) maintained at the German National Library (Deutsche Nationalbibliothek), I would like to show what is possible, and how users can and will use the authority files if we let them. We will look at how the project worked and what we can learn for future projects. Conclusions: authority files can have a role in the Web 2.0; there must be an open interface/service for retrieval; everything on the net that is indexed with authority files can easily be integrated into a federated search; and, following O'Reilly, you have to find ways for your data to become more important the more it is used.
    Content
    Vortrag anlässlich des Workshops: "Extending the multilingual capacity of The European Library in the EDL project Stockholm, Swedish National Library, 22-23 November 2007".
  9. Mas, S.; Zaher, L'H.; Zacklad, M.: Design & evaluation of multi-viewed knowledge system for administrative electronic document organization (2008) 0.02
    0.021112198 = product of:
      0.052780494 = sum of:
        0.03727935 = weight(_text_:system in 2480) [ClassicSimilarity], result of:
          0.03727935 = score(doc=2480,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.27838376 = fieldWeight in 2480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0625 = fieldNorm(doc=2480)
        0.015501143 = product of:
          0.04650343 = sum of:
            0.04650343 = weight(_text_:29 in 2480) [ClassicSimilarity], result of:
              0.04650343 = score(doc=2480,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.31092256 = fieldWeight in 2480, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2480)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    29. 8.2009 21:15:48
  10. Lindholm, J.; Schönthal, T.; Jansson , K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.02
    0.020296982 = product of:
      0.10148491 = sum of:
        0.10148491 = weight(_text_:index in 4088) [ClassicSimilarity], result of:
          0.10148491 = score(doc=4088,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.5462205 = fieldWeight in 4088, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0625 = fieldNorm(doc=4088)
      0.2 = coord(1/5)
    
    Abstract
    The authors describe the background and the work involved in setting up Engine-e, a Web index that uses automatic classification as a means of selecting resources in engineering. Considerations in offering a robot-generated Web index as a successor to a manually indexed, quality-controlled subject gateway are also discussed.
  11. Linden, E.J. van der; Vliegen, R.; Wijk, J.J. van: Visual Universal Decimal Classification (2007) 0.02
    0.020017719 = product of:
      0.0500443 = sum of:
        0.04035608 = weight(_text_:system in 548) [ClassicSimilarity], result of:
          0.04035608 = score(doc=548,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.30135927 = fieldWeight in 548, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=548)
        0.009688215 = product of:
          0.029064644 = sum of:
            0.029064644 = weight(_text_:29 in 548) [ClassicSimilarity], result of:
              0.029064644 = score(doc=548,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19432661 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=548)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    UDC aims to be a consistent and complete classification system that enables practitioners to classify documents swiftly and smoothly. The eventual goal of UDC is to enable the public at large to retrieve documents from large collections that are classified with UDC. The large size of the UDC Master Reference File (MRF), with over 66,000 records, makes it difficult to obtain an overview and to understand its structure. Moreover, finding the right classification in the MRF turns out to be difficult in practice. Last but not least, retrieval of documents requires insight into and understanding of the coding system. Visualization is an effective means to support the development of UDC as well as its use by practitioners; moreover, it offers possibilities to use the classification without the coding system as such. MagnaView has developed an application which demonstrates the use of interactive visualization to face these challenges. In our presentation, we discuss these challenges and give a demonstration of the way the application helps meet them.
    Source
    Extensions and corrections to the UDC. 29(2007), S.297-300
  12. Balikova, M.: ¬The national bibliography of a small country in international context (2000) 0.02
    0.01936723 = product of:
      0.09683614 = sum of:
        0.09683614 = weight(_text_:context in 5397) [ClassicSimilarity], result of:
          0.09683614 = score(doc=5397,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.54950815 = fieldWeight in 5397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.09375 = fieldNorm(doc=5397)
      0.2 = coord(1/5)
    
  13. Stoklasova, B.: ¬The national bibliography of a small country in international context (2000) 0.02
    0.01936723 = product of:
      0.09683614 = sum of:
        0.09683614 = weight(_text_:context in 5415) [ClassicSimilarity], result of:
          0.09683614 = score(doc=5415,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.54950815 = fieldWeight in 5415, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.09375 = fieldNorm(doc=5415)
      0.2 = coord(1/5)
    
  14. Pitti, D.V.: Creator description : Encoded Archival Context (2003) 0.02
    0.01936723 = product of:
      0.09683614 = sum of:
        0.09683614 = weight(_text_:context in 3797) [ClassicSimilarity], result of:
          0.09683614 = score(doc=3797,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.54950815 = fieldWeight in 3797, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.09375 = fieldNorm(doc=3797)
      0.2 = coord(1/5)
    
  15. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.02
    0.018424368 = product of:
      0.04606092 = sum of:
        0.03261943 = weight(_text_:system in 540) [ClassicSimilarity], result of:
          0.03261943 = score(doc=540,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 540, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=540)
        0.013441487 = product of:
          0.04032446 = sum of:
            0.04032446 = weight(_text_:22 in 540) [ClassicSimilarity], result of:
              0.04032446 = score(doc=540,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.2708308 = fieldWeight in 540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=540)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    With the increasing requirement of establishing semantic mappings between different vocabularies, further development of these encoding formats is becoming more and more important. For this reason, four types of knowledge representation formats were assessed: MARC21 for Classification Data in XML, Zthes XML Schema, XTM (XML Topic Map), and SKOS (Simple Knowledge Organisation System). This paper explores the potential of adapting these representation formats to support different semantic mapping methods, and discusses the implications of extending them to represent more complex KOS.
    Date
    26.12.2011 13:22:27
  16. Van de Sompel, H.; Beit-Arie, O.: Generalizing the OpenURL framework beyond references to scholarly works : the Bison-Futé model (2001) 0.02
    0.01804435 = product of:
      0.09022175 = sum of:
        0.09022175 = weight(_text_:context in 1223) [ClassicSimilarity], result of:
          0.09022175 = score(doc=1223,freq=10.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.511974 = fieldWeight in 1223, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1223)
      0.2 = coord(1/5)
    
    Abstract
    This paper introduces the Bison-Futé model, a conceptual generalization of the OpenURL framework for open and context-sensitive reference linking in the web-based scholarly information environment. The Bison-Futé model is an abstract framework that identifies and defines components that are required to enable open and context-sensitive linking on the web in general. It is derived from experience gathered from the deployment of the OpenURL framework over the course of the past year. It is a generalization of the current OpenURL framework in several aspects. It aims to extend the scope of open and context-sensitive linking beyond web-based scholarly information. In addition, it offers a generalization of the manner in which referenced items -- as well as the context in which these items are referenced -- can be described for the specific purpose of open and context-sensitive linking. The Bison-Futé model is not suggested as a replacement of the OpenURL framework. On the contrary: it confirms the conceptual foundations of the OpenURL framework and, at the same time, it suggests directions and guidelines as to how the current OpenURL specifications could be extended to become applicable beyond the scholarly information environment.
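In OpenURL practice, a reference is carried as key/value metadata appended to a link resolver's address; the Bison-Futé model generalizes this description of the referenced item and the context in which it is referenced. A minimal sketch of such a link (the resolver URL is hypothetical; the keys follow common OpenURL 0.1 usage):

```python
from urllib.parse import urlencode

def openurl(resolver_base, **metadata):
    """Build an OpenURL 0.1-style link: a resolver address plus key/value
    pairs describing the referenced item."""
    return resolver_base + "?" + urlencode(metadata)

# Hypothetical resolver address; keys describe a journal article.
link = openurl("https://resolver.example.org/openurl",
               genre="article",
               issn="1082-9873",
               volume="7",
               issue="7",
               date="2001")
```

The resolver receiving such a link can then offer services appropriate to the user's institutional context, which is the "context-sensitive" half of open linking.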
  17. Veltman, K.H.: From Recorded World to Recording Worlds (2007) 0.02
    0.017821437 = product of:
      0.044553593 = sum of:
        0.028243875 = weight(_text_:context in 512) [ClassicSimilarity], result of:
          0.028243875 = score(doc=512,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.16027321 = fieldWeight in 512, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.02734375 = fieldNorm(doc=512)
        0.016309716 = weight(_text_:system in 512) [ClassicSimilarity], result of:
          0.016309716 = score(doc=512,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.1217929 = fieldWeight in 512, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02734375 = fieldNorm(doc=512)
      0.4 = coord(2/5)
    
    Abstract
    The range, depth and limits of what we know depend on the media with which we attempt to record our knowledge. This essay begins with a brief review of developments in media: stone, manuscripts, books and digital media, to trace how collections of recorded knowledge grew to 235,000 by 1837 and have since expanded to over 100 million unique titles in a single database, including over 1 billion individual listings in 2007. The advent of digital media has brought full-text scanning and electronic networks, which enable us to consult digital books and images from our office, home or potentially even our cell phones. These magnificent developments raise a number of concerns and new challenges. An historical survey of major projects that changed the world reveals that they have taken from one to eight centuries. This helps explain why commercial offerings, which provide useful and even profitable short-term solutions, often undermine a long-term vision. New technologies have the potential to transform our approach to knowledge, but require a vision of a systematic new approach to knowledge. This paper outlines four ingredients for such a vision in the European context. First, the scope of European observatories should be expanded to inform memory institutions of the latest technological developments. Second, the quest for a European Digital Library should be expanded to include a distributed repository, a digital reference room and a virtual agora, whereby memory institutions will be linked with current research. Third, there is need for an institute on Knowledge Organization that takes up Otlet's vision anew, along with the pioneering efforts of the Mundaneum (Brussels) and the Bridge (Berlin). Fourth, we need to explore requirements for a Universal Digital Library, which works with countries around the world rather than simply imposing on them an external system. Here, the efforts of the proposed European University of Culture could be useful. 
Ultimately we need new systems, which open research into multiple ways of knowing, multiple "knowledges". In the past, we went to libraries to study the recorded world. In a world where cameras and sensors are omnipresent we have new recording worlds. In future, we may also use these recording worlds to study the riches of libraries.
  18. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie (2009) 0.02
    0.017424472 = product of:
      0.04356118 = sum of:
        0.03588033 = weight(_text_:index in 3284) [ClassicSimilarity], result of:
          0.03588033 = score(doc=3284,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.1931181 = fieldWeight in 3284, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=3284)
        0.0076808496 = product of:
          0.023042548 = sum of:
            0.023042548 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
              0.023042548 = score(doc=3284,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.15476047 = fieldWeight in 3284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3284)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Date
    22. 1.2010 14:41:24
    Footnote
    Lecture delivered on 03.06.2009 at the 98. Bibliothekartag 2009 in Erfurt; to appear in: Dialog mit Bibliotheken. Cf. also: http://www.gbv.de/vgm/info/biblio/01VZG/06Publikationen/2009/index.
  19. San Segundo Manuel, R.: ¬The use of the UDC in Spain, and related issues of education, training and research (2007) 0.02
    0.017055526 = product of:
      0.042638816 = sum of:
        0.032950602 = weight(_text_:system in 2529) [ClassicSimilarity], result of:
          0.032950602 = score(doc=2529,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 2529, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2529)
        0.009688215 = product of:
          0.029064644 = sum of:
            0.029064644 = weight(_text_:29 in 2529) [ClassicSimilarity], result of:
              0.029064644 = score(doc=2529,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19432661 = fieldWeight in 2529, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2529)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    The Decimal System first began to be disseminated in Spain from 1895 onwards, the year in which the First International Bibliography Conference was held and the system began to be implemented primarily on a European scale. The introduction of the UDC (Universal Decimal Classification) scheme was initially subject to numerous difficulties owing to isolated incidents with librarians, but it subsequently received the support of the Spanish Administration. In 1939 the UDC was officially implemented in all Spanish libraries, although what the decree introduced was the 1934 German version. Nevertheless, in practical implementation in libraries, the latest version of the UDC tables was used. Finally, from 1989 onwards, the obligation to use the UDC to classify collections and catalogues was repealed, although its implementation in libraries, catalogues and bibliographies remains almost complete. The UDC is taught within the framework of regulated Library and Information Science courses, from both a theoretical and a practical point of view. Research on the UDC in Spain is already substantial: translations, adaptations and versions of the tables have been undertaken, and there are also analytical works on different aspects of the UDC system.
    Source
    Extensions and corrections to the UDC. 29(2007), S.285-296
  20. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.02
    0.017020667 = product of:
      0.042551666 = sum of:
        0.032950602 = weight(_text_:system in 3628) [ClassicSimilarity], result of:
          0.032950602 = score(doc=3628,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.24605882 = fieldWeight in 3628, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.009601062 = product of:
          0.028803186 = sum of:
            0.028803186 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
              0.028803186 = score(doc=3628,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the web, the proposed middleware service is essential to technically and semantically integrate the different terminology resources in order to facilitate subject cross-browsing. A set of recommendations is also made outlining the important approaches and features that support such a cross-browsing middleware service.
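The spine-based mapping described above can be pictured as a small lookup structure: each vocabulary's terms map onto DDC notations, and cross-browsing translates a term from one vocabulary into another via the shared notation. A minimal sketch, assuming toy vocabularies (the terms, notations and function names here are invented for illustration, not taken from the prototype):

```python
# DDC notations act as the spine; captions are human-readable labels.
DDC_SPINE = {
    "004": "Computer science",
    "005.1": "Programming",
}

# Per-vocabulary mappings of terms onto spine notations (hypothetical).
TO_DDC = {
    "UKAT": {"Computer programming": "005.1"},
    "ACM-CCS": {"D. Software": "005.1"},
}

def cross_browse(term, source, target):
    """Translate `term` from the `source` vocabulary into equivalent
    terms in the `target` vocabulary via the shared DDC notation."""
    notation = TO_DDC[source].get(term)
    if notation is None:
        return None
    return {
        "spine": (notation, DDC_SPINE.get(notation)),
        "matches": [t for t, n in TO_DDC[target].items() if n == notation],
    }

result = cross_browse("Computer programming", "UKAT", "ACM-CCS")
print(result)
```

The design point the abstract makes is that this mediation layer lives in middleware, so portal systems need not hold every pairwise vocabulary mapping themselves: each vocabulary maps once onto the spine.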
    Content
    This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For published proceedings see special issue of Aslib Proceedings journal.

Languages

  • e 199
  • d 51
  • a 2
  • el 2
  • i 1

Types

  • a 80
  • i 7
  • x 7
  • p 4
  • r 4
  • n 2
  • b 1
  • m 1
  • s 1