Search (170 results, page 1 of 9)

  • × year_i:[2000 TO 2010}
  • × type_ss:"el"
  1. Facet analytical theory for managing knowledge structure in the humanities : FATKS (2003) 0.03
    0.034122285 = product of:
      0.2047337 = sum of:
        0.2047337 = sum of:
          0.119337276 = weight(_text_:theory in 2526) [ClassicSimilarity], result of:
            0.119337276 = score(doc=2526,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.7351069 = fieldWeight in 2526, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.125 = fieldNorm(doc=2526)
          0.08539642 = weight(_text_:29 in 2526) [ClassicSimilarity], result of:
            0.08539642 = score(doc=2526,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.6218451 = fieldWeight in 2526, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.125 = fieldNorm(doc=2526)
      0.16666667 = coord(1/6)
    
    Date
    29. 8.2004 9:17:18
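The explain tree above follows Lucene's ClassicSimilarity (TF-IDF) scoring. A small sketch, assuming ClassicSimilarity's standard definitions (tf = √freq, idf = 1 + ln(maxDocs/(docFreq+1))), reproduces the numbers shown; every constant is taken from the explain output, nothing is invented:

```python
import math

# Re-derivation of the explain tree for hit 1 (doc 2526), assuming
# Lucene ClassicSimilarity's standard formulas.

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # ClassicSimilarity: tf = sqrt(termFreq)
    return math.sqrt(freq)

query_norm = 0.03903913
field_norm = 0.125          # fieldNorm(doc=2526)

# weight(_text_:theory) = queryWeight * fieldWeight
idf_theory = idf(1878, 44218)                        # ~4.1583924
query_weight = idf_theory * query_norm               # ~0.16234003
field_weight = tf(2.0) * idf_theory * field_norm     # ~0.7351069
score_theory = query_weight * field_weight           # ~0.119337276

# weight(_text_:29): same shape with its own idf
idf_29 = idf(3565, 44218)                            # ~3.5176873
score_29 = (idf_29 * query_norm) * (tf(2.0) * idf_29 * field_norm)

# Only 1 of the 6 query clauses matched, hence coord(1/6)
total = (score_theory + score_29) * (1.0 / 6.0)      # ~0.034122285
print(total)
```

The same arithmetic accounts for every other explain tree on this page; only freq, idf, fieldNorm and the coord fraction change per hit.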
  2. Spero, S.: Dashed suspicious (2008) 0.03
    0.025216494 = product of:
      0.15129896 = sum of:
        0.15129896 = weight(_text_:graphic in 2626) [ClassicSimilarity], result of:
          0.15129896 = score(doc=2626,freq=2.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.5852823 = fieldWeight in 2626, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.0625 = fieldNorm(doc=2626)
      0.16666667 = coord(1/6)
    
    Content
    "This is the latest version of the Doorbell -> Mammal graph; it shows the direct and indirect broader terms of doorbells in LCSH. This incarnation of the graphic adds one new piece of visual information that seems to be very very suggestive. Dashed lines are used to indicate broader term references that have never been validated since BT and NT references were automatically generated from the old SA (See Also) links in 1988."
  3. Broughton, V.: Facet analysis as a fundamental theory for structuring subject organization tools (2007) 0.02
    0.021180402 = product of:
      0.12708241 = sum of:
        0.12708241 = sum of:
          0.084384196 = weight(_text_:theory in 537) [ClassicSimilarity], result of:
            0.084384196 = score(doc=537,freq=4.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.51979905 = fieldWeight in 537, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.0625 = fieldNorm(doc=537)
          0.04269821 = weight(_text_:29 in 537) [ClassicSimilarity], result of:
            0.04269821 = score(doc=537,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.31092256 = fieldWeight in 537, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0625 = fieldNorm(doc=537)
      0.16666667 = coord(1/6)
    
    Abstract
    The presentation will examine the potential of facet analysis as a basis for determining status and relationships of concepts in subject based tools using a controlled vocabulary, and the extent to which it can be used as a general theory of knowledge organization as opposed to a methodology for structuring classifications only.
    Date
    26.12.2011 13:21:29
  4. Gödert, W.: Knowledge organization and information retrieval in times of change : concepts for education in Germany (2001) 0.02
    0.016835473 = product of:
      0.05050642 = sum of:
        0.026105028 = product of:
          0.052210055 = sum of:
            0.052210055 = weight(_text_:theory in 3413) [ClassicSimilarity], result of:
              0.052210055 = score(doc=3413,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.32160926 = fieldWeight in 3413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
          0.5 = coord(1/2)
        0.024401393 = product of:
          0.048802786 = sum of:
            0.048802786 = weight(_text_:methods in 3413) [ClassicSimilarity], result of:
              0.048802786 = score(doc=3413,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.31093797 = fieldWeight in 3413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3413)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    A survey is given of how modifications in the field of information processing and technology have influenced the concepts for teaching and studying knowledge organization and information retrieval at German universities for library and information science. The discussion distinguishes between fields of modification and fields of stability. The fields of modification are characterised by procedures and applications in libraries; the fields of stability are characterised by theory and methods.
  5. Si, L.: Encoding formats and consideration of requirements for mapping (2007) 0.01
    0.014304606 = product of:
      0.085827634 = sum of:
        0.085827634 = sum of:
          0.048802786 = weight(_text_:methods in 540) [ClassicSimilarity], result of:
            0.048802786 = score(doc=540,freq=2.0), product of:
              0.15695344 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.03903913 = queryNorm
              0.31093797 = fieldWeight in 540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.0546875 = fieldNorm(doc=540)
          0.037024844 = weight(_text_:22 in 540) [ClassicSimilarity], result of:
            0.037024844 = score(doc=540,freq=2.0), product of:
              0.1367084 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03903913 = queryNorm
              0.2708308 = fieldWeight in 540, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=540)
      0.16666667 = coord(1/6)
    
    Abstract
    With the increasing requirement of establishing semantic mappings between different vocabularies, further development of these encoding formats is becoming more and more important. For this reason, four types of knowledge representation formats were assessed: MARC21 for Classification Data in XML, Zthes XML Schema, XTM (XML Topic Map), and SKOS (Simple Knowledge Organisation System). This paper explores the potential of adapting these representation formats to support different semantic mapping methods, and discusses the implications of extending them to represent more complex KOS.
    Date
    26.12.2011 13:22:27
  6. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.01
    0.014268155 = product of:
      0.042804465 = sum of:
        0.014917159 = product of:
          0.029834319 = sum of:
            0.029834319 = weight(_text_:theory in 1178) [ClassicSimilarity], result of:
              0.029834319 = score(doc=1178,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.18377672 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
        0.027887305 = product of:
          0.05577461 = sum of:
            0.05577461 = weight(_text_:methods in 1178) [ClassicSimilarity], result of:
              0.05577461 = score(doc=1178,freq=8.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.35535768 = fieldWeight in 1178, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing.
In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
  7. Hajdu Barat, A.: Multilevel education, training, traditions and research in Hungary (2007) 0.01
    0.012795856 = product of:
      0.07677513 = sum of:
        0.07677513 = sum of:
          0.044751476 = weight(_text_:theory in 545) [ClassicSimilarity], result of:
            0.044751476 = score(doc=545,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.27566507 = fieldWeight in 545, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.046875 = fieldNorm(doc=545)
          0.032023653 = weight(_text_:29 in 545) [ClassicSimilarity], result of:
            0.032023653 = score(doc=545,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.23319192 = fieldWeight in 545, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=545)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper aims to explore the theory and practice of education in schools and in further education as two levels of the Information Society in Hungary. LIS education is considered the third level, above the previous two. I attempt to survey the curriculum and content of different subjects in school, and the division of the programme for librarians. There is a great and long history of UDC usage in Hungary. The lecture sketches the stages of this tradition from the beginning to the situation nowadays. Szabó Ervin began to teach the UDC at the Municipal Library in Budapest from 1910. He not only used the UDC but also taught it to librarians in his courses. As a consequence of Szabó Ervin's activity, librarians knew and used the UDC very early, and all libraries would use it. The article gives a short overview of recent developments and duties, the situation after the new Hungarian edition, UDC usage in Hungarian OPACs, and the possibility of UDC visualization.
    Source
    Extensions and corrections to the UDC. 29(2007), S.273-284
  8. Naudet, Y.; Latour, T.; Chen, D.: ¬A Systemic approach to Interoperability formalization (2009) 0.01
    0.012795856 = product of:
      0.07677513 = sum of:
        0.07677513 = sum of:
          0.044751476 = weight(_text_:theory in 2740) [ClassicSimilarity], result of:
            0.044751476 = score(doc=2740,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.27566507 = fieldWeight in 2740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.046875 = fieldNorm(doc=2740)
          0.032023653 = weight(_text_:29 in 2740) [ClassicSimilarity], result of:
            0.032023653 = score(doc=2740,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.23319192 = fieldWeight in 2740, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.046875 = fieldNorm(doc=2740)
      0.16666667 = coord(1/6)
    
    Abstract
    With a first version developed last year, the Ontology of Interoperability (OoI) aims at formally describing concepts relating to problems and solutions in the domain of interoperability. From the beginning, the OoI has had its foundations in systemic theory and addresses interoperability from the general point of view of a system, whether or not it is composed of other systems (systems-of-systems). In this paper, we present the latest OoI, focusing on the systemic approach. We then integrate a classification of interoperability knowledge provided by the Framework for Enterprise Interoperability. In this way, we contextualize the OoI with a vocabulary specific to the enterprise domain, where solutions to interoperability problems are characterized according to the interoperability approaches defined in ISO 14258, and both solutions and problems can be localized to enterprise levels and characterized by interoperability levels, as defined in the European Interoperability Framework.
    Date
    29. 1.2016 18:48:14
  9. Furner, J.: User tagging of library resources : toward a framework for system evaluation (2007) 0.01
    0.012309102 = product of:
      0.036927305 = sum of:
        0.016011827 = product of:
          0.032023653 = sum of:
            0.032023653 = weight(_text_:29 in 703) [ClassicSimilarity], result of:
              0.032023653 = score(doc=703,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.23319192 = fieldWeight in 703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=703)
          0.5 = coord(1/2)
        0.020915478 = product of:
          0.041830957 = sum of:
            0.041830957 = weight(_text_:methods in 703) [ClassicSimilarity], result of:
              0.041830957 = score(doc=703,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.26651827 = fieldWeight in 703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=703)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Although user tagging of library resources shows substantial promise as a means of improving the quality of users' access to those resources, several important questions about the level and nature of the warrant for basing retrieval tools on user tagging are yet to receive full consideration by library practitioners and researchers. Among these is the simple evaluative question: What, specifically, are the factors that determine whether or not user-tagging services will be successful? If success is to be defined in terms of the effectiveness with which systems perform the particular functions expected of them (rather than simply in terms of popularity), an understanding is needed both of the multifunctional nature of tagging tools, and of the complex nature of users' mental models of that multifunctionality. In this paper, a conceptual framework is developed for the evaluation of systems that integrate user tagging with more traditional methods of library resource description.
    Date
    26.12.2011 13:29:31
  10. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.01
    0.012309102 = product of:
      0.036927305 = sum of:
        0.016011827 = product of:
          0.032023653 = sum of:
            0.032023653 = weight(_text_:29 in 4640) [ClassicSimilarity], result of:
              0.032023653 = score(doc=4640,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.23319192 = fieldWeight in 4640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4640)
          0.5 = coord(1/2)
        0.020915478 = product of:
          0.041830957 = sum of:
            0.041830957 = weight(_text_:methods in 4640) [ClassicSimilarity], result of:
              0.041830957 = score(doc=4640,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.26651827 = fieldWeight in 4640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4640)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    29. 7.2011 14:44:56
  11. Hollink, L.; Assem, M. van; Wang, S.; Isaac, A.; Schreiber, G.: Two variations on ontology alignment evaluation : methodological issues (2008) 0.01
    0.012309102 = product of:
      0.036927305 = sum of:
        0.016011827 = product of:
          0.032023653 = sum of:
            0.032023653 = weight(_text_:29 in 4645) [ClassicSimilarity], result of:
              0.032023653 = score(doc=4645,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.23319192 = fieldWeight in 4645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4645)
          0.5 = coord(1/2)
        0.020915478 = product of:
          0.041830957 = sum of:
            0.041830957 = weight(_text_:methods in 4645) [ClassicSimilarity], result of:
              0.041830957 = score(doc=4645,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.26651827 = fieldWeight in 4645, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4645)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Evaluation of ontology alignments is in practice done in two ways: (1) assessing individual correspondences and (2) comparing the alignment to a reference alignment. However, this type of evaluation does not guarantee that an application which uses the alignment will perform well. In this paper, we contribute to the current ontology alignment evaluation practices by proposing two alternative evaluation methods that take into account some characteristics of a usage scenario without doing a full-fledged end-to-end evaluation. We compare different evaluation approaches in three case studies, focussing on methodological issues. Each case study considers an alignment between a different pair of ontologies, ranging from rich and well-structured to small and poorly structured. This enables us to conclude on the use of different evaluation approaches in different settings.
    Date
    29. 7.2011 14:44:56
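The first evaluation style the abstract mentions, comparing an alignment to a reference alignment, is conventionally scored as precision and recall over sets of correspondences. A minimal sketch with invented concept names (not taken from the paper's case studies):

```python
# An alignment is modelled as a set of correspondences
# (source concept, target concept, relation), scored against a
# gold-standard reference alignment. All identifiers are hypothetical.

def precision_recall(alignment, reference):
    correct = len(alignment & reference)
    precision = correct / len(alignment) if alignment else 0.0
    recall = correct / len(reference) if reference else 0.0
    return precision, recall

alignment = {("A:Cat", "B:Feline", "="), ("A:Dog", "B:Canine", "="),
             ("A:Car", "B:Feline", "=")}
reference = {("A:Cat", "B:Feline", "="), ("A:Dog", "B:Canine", "="),
             ("A:Bird", "B:Avian", "=")}

p, r = precision_recall(alignment, reference)
print(p, r)  # 2 of 3 proposed are correct; 2 of 3 reference pairs found
```

As the paper argues, a good reference-based score like this still says nothing about how the alignment performs in a concrete usage scenario, which motivates its alternative evaluation methods.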
  12. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.01062654 = product of:
      0.03187962 = sum of:
        0.016011827 = product of:
          0.032023653 = sum of:
            0.032023653 = weight(_text_:29 in 1289) [ClassicSimilarity], result of:
              0.032023653 = score(doc=1289,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.23319192 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
        0.01586779 = product of:
          0.03173558 = sum of:
            0.03173558 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.03173558 = score(doc=1289,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
    Date
    20. 1.2008 17:28:29
  13. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.01
    0.010217575 = product of:
      0.06130545 = sum of:
        0.06130545 = sum of:
          0.034859132 = weight(_text_:methods in 2564) [ClassicSimilarity], result of:
            0.034859132 = score(doc=2564,freq=2.0), product of:
              0.15695344 = queryWeight, product of:
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.03903913 = queryNorm
              0.22209854 = fieldWeight in 2564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.0204134 = idf(docFreq=2156, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2564)
          0.026446318 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
            0.026446318 = score(doc=2564,freq=2.0), product of:
              0.1367084 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03903913 = queryNorm
              0.19345059 = fieldWeight in 2564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2564)
      0.16666667 = coord(1/6)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t**k*alpha**t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
    Date
    16. 1.2016 10:22:28
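The damped Power Method iteration the abstract analyses can be sketched on a toy graph; the graph and iteration count below are illustrative, not from the paper:

```python
# PageRank via the Power Method: each step computes
#   rank' = alpha * (rank pushed along out-links) + (1 - alpha)/n
# on a tiny hand-made three-node graph.

def pagerank(out_links, alpha=0.85, iters=100):
    nodes = sorted(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - alpha) / n for v in nodes}
        for v in nodes:
            targets = out_links[v] or nodes  # dangling nodes spread uniformly
            share = alpha * rank[v] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print({v: round(s, 3) for v, s in ranks.items()})
```

Varying `alpha` here and watching the ranking change is exactly the sensitivity the paper studies analytically.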
  14. Panzer, M.: Designing identifiers for the DDC (2007) 0.01
    0.008582216 = product of:
      0.025746645 = sum of:
        0.008005913 = product of:
          0.016011827 = sum of:
            0.016011827 = weight(_text_:29 in 1752) [ClassicSimilarity], result of:
              0.016011827 = score(doc=1752,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.11659596 = fieldWeight in 1752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1752)
          0.5 = coord(1/2)
        0.017740732 = product of:
          0.035481464 = sum of:
            0.035481464 = weight(_text_:22 in 1752) [ClassicSimilarity], result of:
              0.035481464 = score(doc=1752,freq=10.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.2595412 = fieldWeight in 1752, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1752)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Content
    Some examples of identifiers for concepts follow:
    <http://dewey.info/concept/338.4/en/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the English-language version of Edition 22.
    <http://dewey.info/concept/338.4/de/edn/22/> This identifier is used to retrieve or identify the 338.4 concept in the German-language version of Edition 22.
    <http://dewey.info/concept/333.7-333.9/> This identifier is used to retrieve or identify the 333.7-333.9 concept across all editions and language versions.
    <http://dewey.info/concept/333.7-333.9/about.skos> This identifier is used to retrieve a SKOS representation of the 333.7-333.9 concept (using the "resource" element).
    There are several open issues at this preliminary stage of development:
    Use cases: URIs need to represent the range of statements or questions that could be submitted to a Dewey web service. Some general questions therefore have to be answered first: What information does an agent have when coming to a Dewey web service? What kinds of questions will such an agent ask?
    Placement of the {locale} component: It is still an open question whether the {locale} component should instead be placed after the {version} component (<http://dewey.info/concept/338.4/edn/22/en>) to emphasize that the most important instantiation of a Dewey class is its edition, not its language version. From a services point of view, however, it could make more sense to keep the current arrangement, because users are more likely to come to the service knowing which language version they are seeking without knowing the specifics of the edition in which they are trying to find topics.
    Identification of other Dewey entities: The goal is to create a locator that answers not all, but many of the questions that could be asked about the DDC. Which entities are missing but should be surfaced for services or user agents? How will those services or agents interact with them? Should some entities be rendered differently than presented here? For example, (how) should the DDC Summaries be retrievable? Would it be necessary to make the DDC Manual accessible through this identifier structure?"
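    The identifier pattern described above can be sketched as a small helper. This is a minimal illustration only: the function name and keyword parameters are hypothetical, not part of any dewey.info API; the URI template assumed is http://dewey.info/concept/{notation}[/{locale}][/edn/{edition}]/ as in the examples.

```python
def build_concept_uri(notation, locale=None, edition=None, representation=None):
    """Assemble a dewey.info-style concept URI from its components.

    Omitting locale and edition yields the cross-edition, cross-language
    identifier; representation (e.g. "about.skos") selects a serialization.
    """
    parts = ["http://dewey.info/concept", notation]
    if locale:
        parts.append(locale)
    if edition:
        parts.extend(["edn", edition])
    uri = "/".join(parts) + "/"
    if representation:
        uri += representation
    return uri

print(build_concept_uri("338.4", locale="en", edition="22"))
# http://dewey.info/concept/338.4/en/edn/22/
```

    Reordering the {locale} component, as discussed above, would only change where parts.append(locale) occurs in the assembly.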
    Date
    21. 3.2008 19:29:28
  15. Ramisch, C.; Schreiner, P.; Idiart, M.; Villavicencio, A.: ¬An evaluation of methods for the extraction of multiword expressions (20xx) 0.01
    0.008050373 = product of:
      0.048302233 = sum of:
        0.048302233 = product of:
          0.09660447 = sum of:
            0.09660447 = weight(_text_:methods in 962) [ClassicSimilarity], result of:
              0.09660447 = score(doc=962,freq=6.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.6154976 = fieldWeight in 962, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0625 = fieldNorm(doc=962)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
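    The nested breakdown above is Lucene's classic TF-IDF "explain" output. A minimal sketch of how the innermost term weight is recomputed from its factors (function name is illustrative; the final document score additionally multiplies in the coord factor shown on the last line):

```python
import math

def classic_similarity_weight(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute a Lucene ClassicSimilarity term weight as in the
    explain output: tf(freq) = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)),
    score = (idf * queryNorm) * (tf * idf * fieldNorm)."""
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm   # "queryWeight" line in the output
    field_weight = tf * idf * field_norm  # "fieldWeight" line in the output
    return query_weight * field_weight
```

    Plugging in the values from the _text_:methods entry above (freq=6.0, docFreq=2156, maxDocs=44218, queryNorm=0.03903913, fieldNorm=0.0625) reproduces the reported weight of about 0.0966.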
    
    Abstract
    This paper focuses on the evaluation of some methods for the automatic acquisition of Multiword Expressions (MWEs). First we investigate the hypothesis that MWEs can be detected solely by the distinct statistical properties of their component words, regardless of their type, comparing 3 statistical measures: Mutual Information, Chi**2 and Permutation Entropy. Moreover, we also look at the impact that the addition of type-specific linguistic information has on the performance of these methods.
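    One of the association measures the abstract compares, (pointwise) Mutual Information, can be sketched for adjacent word pairs as follows. This is an illustrative toy implementation, not the paper's code; the corpus handling (no smoothing, bigrams only) is an assumption for brevity.

```python
import math
from collections import Counter

def pmi_scores(tokens):
    """Pointwise mutual information for adjacent word pairs:
    PMI(w1, w2) = log2( p(w1, w2) / (p(w1) * p(w2)) ).

    High-PMI pairs co-occur more often than their individual
    frequencies predict, a cue for multiword expressions.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    n_bi = n - 1
    scores = {}
    for (w1, w2), count in bigrams.items():
        p_pair = count / n_bi
        p1 = unigrams[w1] / n
        p2 = unigrams[w2] / n
        scores[(w1, w2)] = math.log2(p_pair / (p1 * p2))
    return scores
```

    The Chi**2 and Permutation Entropy measures mentioned in the abstract would slot into the same loop, replacing only the per-pair statistic.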
  16. Hjoerland, B.: Theory of knowledge organization and the feasibility of universal solutions (2004) 0.01
    0.0074585797 = product of:
      0.044751476 = sum of:
        0.044751476 = product of:
          0.08950295 = sum of:
            0.08950295 = weight(_text_:theory in 2404) [ClassicSimilarity], result of:
              0.08950295 = score(doc=2404,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.55133015 = fieldWeight in 2404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2404)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
  17. ws: ¬Das Große Wissen.de Lexikon 2004 (2003) 0.01
    0.00708436 = product of:
      0.02125308 = sum of:
        0.010674552 = product of:
          0.021349104 = sum of:
            0.021349104 = weight(_text_:29 in 1079) [ClassicSimilarity], result of:
              0.021349104 = score(doc=1079,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.15546128 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
        0.010578527 = product of:
          0.021157054 = sum of:
            0.021157054 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.021157054 = score(doc=1079,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.15476047 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    20. 3.2004 12:58:22
    Footnote
    Reviewed under the title "Die Welt ist eine Scheibe" in: CD-Info. 2004, H.1, S.29 (ws): "With its 117,000 entries, the encyclopedia corresponds in scope to a printed encyclopedia of roughly 24 volumes and combines up-to-date content with a wealth of multimedia elements such as audio documents, images, and videos. Thanks to sophisticated search functions, an online update service, and supplementary links to the Internet, the encyclopedia is suited both for looking things up and for browsing. In addition to the encyclopedia, the DVD also contains a dictionary of foreign words, a four-language dictionary (E, F, I, E), and a current world atlas. The clearly arranged user interface offers the user several entry points: "Wissen A - Z" provides keyword and full-text search; "Timeline" presents the history of humankind, from the ancient Egyptians to the fall of Baghdad, on a time axis; "Themenreisen" (theme tours) presents special topic areas, such as "Aufstieg und Fall der Sowjetunion" (the rise and fall of the Soviet Union), compactly with all associated encyclopedia entries and Internet links. And in the "Mediengalerie" (media gallery), the more than 16,000 included media elements are made accessible to the user, neatly sorted by topic area or media type."
  18. Poli, R.: Steps towards a synthetic methodology (2006) 0.01
    0.0070320163 = product of:
      0.042192098 = sum of:
        0.042192098 = product of:
          0.084384196 = sum of:
            0.084384196 = weight(_text_:theory in 1094) [ClassicSimilarity], result of:
              0.084384196 = score(doc=1094,freq=16.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.51979905 = fieldWeight in 1094, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1094)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    Three of the principal theories which can be used to understand, categorize and organize the many aspects of reality prima facie have unexpected interdependencies. The theories to which I refer are those concerned with the causal connections among the items that make up the real world, the space and the time in which they grow, and the levels of reality. What matters most is the discovery that the difficulties internal to theories of causation and to theories of space and time can be understood better, and perhaps dealt with, in the categorial context furnished by the theory of the levels of reality. The structural condition for this development to be possible is that the first two theories be opportunely generalized. In other words, the thesis outlined in this position paper has two aspects. The first is the hypothesis that the theory of levels can function as a general categorial framework within which to recast our understanding of causal and spatio-temporal phenomena. The second aspect is that the best-known and most usual categorizations of causal, spatial and temporal dependencies are not sufficiently generic and are structurally constrained to express only some of the relevant phenomena. Explicit consideration of the theory of the levels of reality furnishes the keystone for generalization of both the theory of causes and the theory of times and spaces. To assert that a theory is not sufficiently generic is to say that the manner in which it is configured may hamper rather than help full understanding of the relevant phenomena. From this assertion follow two of the three obstructions mentioned in the title to this paper. The third obstruction is easier to specify. Whilst the theories of causality and space-time are robust and well-structured - whatever criticisms one might wish to make of them - the situation of the theory of the levels of reality is entirely different, in that it is not at all widely endorsed or thoroughly developed. 
On the contrary, it is a decidedly minority proposal, and it still has many obscure, or simply under-developed, aspects. The theory of levels is the third obstruction cited in the title. Nonetheless, the approach outlined in what follows seems to be the most promising route to follow.
  19. Wilson, T.D.: Recent trends in user studies : action research and qualitative methods (2000) 0.01
    0.0069718263 = product of:
      0.041830957 = sum of:
        0.041830957 = product of:
          0.083661914 = sum of:
            0.083661914 = weight(_text_:methods in 6115) [ClassicSimilarity], result of:
              0.083661914 = score(doc=6115,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.53303653 = fieldWeight in 6115, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6115)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
  20. Notess, M.: Three looks at users : a comparison of methods for studying digital library use (2004) 0.01
    0.0069718263 = product of:
      0.041830957 = sum of:
        0.041830957 = product of:
          0.083661914 = sum of:
            0.083661914 = weight(_text_:methods in 4167) [ClassicSimilarity], result of:
              0.083661914 = score(doc=4167,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.53303653 = fieldWeight in 4167, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4167)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    

Languages

  • e 136
  • d 30
  • el 2
  • a 1
  • i 1
  • More… Less…

Types

  • a 61
  • i 7
  • m 1
  • n 1
  • p 1
  • r 1
  • s 1
  • x 1
  • More… Less…