Search (31 results, page 1 of 2)

  • type_ss:"a"
  • type_ss:"el"
  • year_i:[2000 TO 2010}
  1. Kaser, R.T.: If information wants to be free . . . then who's going to pay for it? (2000) 0.03
    0.029609075 = product of:
      0.05921815 = sum of:
        0.05921815 = product of:
          0.1184363 = sum of:
            0.1184363 = weight(_text_:i in 1234) [ClassicSimilarity], result of:
              0.1184363 = score(doc=1234,freq=22.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.6910539 = fieldWeight in 1234, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1234)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    I have become "brutally honest" of late, at least according to one listener who heard my remarks during a recent whistle-stop speaking tour of publishing conventions. This comment caught me a little off guard. Not that I haven't always been frank, but I do try never to be brutal. The truth, I guess, can be painful, even if the intention of the teller is simply objectivity. This paper is based on a "brutally honest" talk I have been giving to publishers, first, in February, to the Association of American Publishers' Professional and Scholarly Publishing Division, at which point I was calling the piece, "The Illusion of Free Information." It was this initial rendition that led to the invitation to publish something here. Since then I've been working on the talk. I gave a second version of it in March to the assembly of the American Society of Information Dissemination Centers, where I called it, "When Sectors Clash: Public Access vs. Private Interest." And, most recently, I gave yet a third version of it to the governing board of the American Institute of Physics. This time I called it: "The Future of Society Publishing." The notion of free information, our government's proper role in distributing free information, and the future of scholarly publishing in a world of free information . . . these are the issues that are floating around in my head. My goal here is to tell you where my thinking is only at this moment, for I reserve the right to continue thinking and developing new permutations on this mentally challenging theme.
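The indented breakdown above is a Lucene "explain" trace for the ClassicSimilarity (TF-IDF) ranking model. Below is a minimal sketch that reproduces its arithmetic, assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); every constant is copied from the trace for hit 1.

```python
import math

# Constants copied from the explain trace for hit 1 (term "i", doc 1234).
freq       = 22.0         # termFreq within the field
doc_freq   = 2765         # docFreq of the term in the index
max_docs   = 44218        # maxDocs in the index
query_norm = 0.045439374  # queryNorm (depends on the whole query)
field_norm = 0.0390625    # fieldNorm (field-length normalization)

tf  = math.sqrt(freq)                            # 4.690416
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.7717297

query_weight = idf * query_norm                  # 0.17138503
field_weight = tf * idf * field_norm             # 0.6910539
weight = query_weight * field_weight             # 0.1184363

# Two coord(1/2) factors: only one of two query clauses matched at each level.
score = weight * 0.5 * 0.5
print(f"{score:.9f}")                            # 0.029609075, as in the trace
```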
  2. Baker, T.: A grammar of Dublin Core (2000) 0.03
    0.02659677 = product of:
      0.05319354 = sum of:
        0.05319354 = sum of:
          0.02856791 = weight(_text_:i in 1236) [ClassicSimilarity], result of:
            0.02856791 = score(doc=1236,freq=2.0), product of:
              0.17138503 = queryWeight, product of:
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.045439374 = queryNorm
              0.16668847 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.7717297 = idf(docFreq=2765, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
          0.024625631 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
            0.024625631 = score(doc=1236,freq=2.0), product of:
              0.15912095 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.045439374 = queryNorm
              0.15476047 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
      0.5 = coord(1/2)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms (its registry), and at how statements can be used to build the metadata equivalent of paragraphs and compositions (the application profile).
    Date
    26.12.2011 14:01:22
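Baker's central move is to read a Dublin Core description not as a record-shaped whole but as a set of statements, each pairing a noun-like element (optionally narrowed by an adjective-like qualifier) with a value. A minimal sketch of that statement pattern; the record content below is an invented example, not one from the paper.

```python
# A statement pairs an element (noun-like) with an optional qualifier
# (adjective-like) and a value; every element is optional and repeatable.
Statement = tuple[str, str | None, str]

description: list[Statement] = [
    ("Title",   None,     "A grammar of Dublin Core"),
    ("Creator", None,     "Baker, Thomas"),
    ("Date",    "Issued", "2000"),
    ("Subject", None,     "metadata"),
    ("Subject", None,     "pidgin languages"),  # repeated element
]

for element, qualifier, value in description:
    label = f"{element}.{qualifier}" if qualifier else element
    print(f"{label} = {value}")
```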
  3. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.02176619 = product of:
      0.04353238 = sum of:
        0.04353238 = product of:
          0.08706476 = sum of:
            0.08706476 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.08706476 = score(doc=3925,freq=4.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  4. Negrini, G.: Principi filosofici per classificare : una teoria per la scienza (2003) 0.02
    0.020200564 = product of:
      0.040401127 = sum of:
        0.040401127 = product of:
          0.080802254 = sum of:
            0.080802254 = weight(_text_:i in 4137) [ClassicSimilarity], result of:
              0.080802254 = score(doc=4137,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.4714662 = fieldWeight in 4137, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4137)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ingetraut Dahlberg occupies a position of the first rank in documentation, both for the important initiatives she has carried out and for the theoretical contribution she has made to the organization of knowledge. Critical of the possibilities for development of today's universal classification systems, founded as they are on historico-philosophical, discipline-based paradigms, Dahlberg introduces a concept-based theory directed at the organization of the individual field of knowledge and of universal knowledge. The article briefly presents the principles underlying this theory and the universal system she conceived.
    Language
    i (Italian)
  5. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.02
    0.018469224 = product of:
      0.036938448 = sum of:
        0.036938448 = product of:
          0.073876895 = sum of:
            0.073876895 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.073876895 = score(doc=3895,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24. 8.2005 19:20:22
  6. Weibel, S.L.: Border crossings : reflections on a decade of metadata consensus building (2005) 0.02
    0.017854942 = product of:
      0.035709884 = sum of:
        0.035709884 = product of:
          0.07141977 = sum of:
            0.07141977 = weight(_text_:i in 1187) [ClassicSimilarity], result of:
              0.07141977 = score(doc=1187,freq=8.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.41672117 = fieldWeight in 1187, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1187)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In June of this year, I performed my final official duties as part of the Dublin Core Metadata Initiative management team. It is a happy irony to affix a seal on that service in this journal, as both D-Lib Magazine and the Dublin Core celebrate their tenth anniversaries. This essay is a personal reflection on some of the achievements and lessons of that decade. The OCLC-NCSA Metadata Workshop took place in March of 1995, and as we tried to understand what it meant and who would care, D-Lib magazine came into being and offered a natural venue for sharing our work. I recall a certain skepticism when Bill Arms said "We want D-Lib to be the first place people look for the latest developments in digital library research." These were the early days in the evolution of electronic publishing, and the goal was ambitious. By any measure, a decade of high-quality electronic publishing is an auspicious accomplishment, and D-Lib (and its host, CNRI) deserve congratulations for having achieved their goal. I am grateful to have been a contributor. That first DC workshop led to further workshops, a community, a variety of standards in several countries, an ISO standard, a conference series, and an international consortium. Looking back on this evolution is both satisfying and wistful. While I am pleased that the achievements are substantial, the unmet challenges also provide a rich till in which to cultivate insights on the development of digital infrastructure.
  7. Rogers, I.: The Google Pagerank algorithm and how it works (2002) 0.02
    0.017854942 = product of:
      0.035709884 = sum of:
        0.035709884 = product of:
          0.07141977 = sum of:
            0.07141977 = weight(_text_:i in 2548) [ClassicSimilarity], result of:
              0.07141977 = score(doc=2548,freq=8.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.41672117 = fieldWeight in 2548, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2548)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    PageRank is a topic much discussed by Search Engine Optimisation (SEO) experts. At the heart of PageRank is a mathematical formula that seems scary to look at but is actually fairly simple to understand. Despite this, many people seem to get it wrong! In particular, "Chris Ridings of www.searchenginesystems.net" has written a paper entitled "PageRank Explained: Everything you've always wanted to know about PageRank", pointed to by many people, that contains a fundamental mistake early on in the explanation! Unfortunately this means some of the recommendations in the paper are not quite accurate. By showing code to correctly calculate real PageRank I hope to achieve several things in this response: - Clearly explain how PageRank is calculated. - Go through every example in Chris' paper, and add some more of my own, showing the correct PageRank for each diagram. By showing the code used to calculate each diagram I've opened myself up to peer review - mostly in an effort to make sure the examples are correct, but also because the code can help explain the PageRank calculations. - Describe some principles and observations on website design based on these correctly calculated examples. Any good web designer should take the time to fully understand how PageRank really works - if you don't, then your site's layout could be seriously hurting your Google listings! [Note: I have nothing in particular against Chris. If I find any other papers on the subject I'll try to comment evenly]
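The formula Rogers walks through is the original Brin/Page one, PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), where C(T) is the number of outbound links on page T and d is the damping factor. A minimal iterative sketch of that calculation; the three-page link graph is an invented example, not one of the diagrams from either paper.

```python
# Iterative PageRank with the original (non-normalized) formula:
# PR(A) = (1 - d) + d * sum(PR(T) / C(T)) over pages T that link to A.
def pagerank(links: dict[str, list[str]], d: float = 0.85, iters: int = 50) -> dict[str, float]:
    pr = {page: 1.0 for page in links}  # common starting guess: PR = 1 everywhere
    for _ in range(iters):
        pr = {
            page: (1 - d) + d * sum(pr[t] / len(links[t]) for t in links if page in links[t])
            for page in links
        }
    return pr

# Hypothetical graph: A -> B, B -> A and C, C -> A.
print(pagerank({"A": ["B"], "B": ["A", "C"], "C": ["A"]}))
```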
  8. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part I : achieving interoperability at the schema level (2006) 0.01
    0.012625352 = product of:
      0.025250703 = sum of:
        0.025250703 = product of:
          0.050501406 = sum of:
            0.050501406 = weight(_text_:i in 1176) [ClassicSimilarity], result of:
              0.050501406 = score(doc=1176,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29466638 = fieldWeight in 1176, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1176)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The rapid growth of Internet resources and digital collections has been accompanied by a proliferation of metadata schemas, each of which has been designed based on the requirements of particular user communities, intended users, types of materials, subject domains, project needs, etc. Problems arise when building large digital libraries or repositories with metadata records that were prepared according to diverse schemas. This article (published in two parts) contains an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and applications, for the purposes of facilitating conversion and exchange of metadata and enabling cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level, record level, and repository level. Part I of the article intends to explain possible situations in which metadata schemas may be created or implemented, whether in individual projects or in integrated repositories. It also discusses approaches used at the schema level. Part II of the article will discuss metadata interoperability efforts at the record and repository levels.
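The main schema-level devices the article analyzes (derived element sets, crosswalks, application profiles, registries) all revolve around mapping one schema's elements onto another's. A minimal crosswalk sketch; the handful of MARC-to-Dublin-Core pairings below is a simplified illustration, not the article's (or the Library of Congress's) full mapping.

```python
# Toy crosswalk from a few MARC 21 fields to unqualified Dublin Core.
# Real crosswalks are far more detailed; these pairings are illustrative only.
MARC_TO_DC = {
    "245": "Title",
    "100": "Creator",
    "260": "Publisher",
    "650": "Subject",
    "520": "Description",
}

def crosswalk(marc_record: dict[str, list[str]]) -> dict[str, list[str]]:
    dc: dict[str, list[str]] = {}
    for tag, values in marc_record.items():
        element = MARC_TO_DC.get(tag)
        if element:  # fields without a mapped element are dropped
            dc.setdefault(element, []).extend(values)
    return dc

print(crosswalk({"245": ["Semantic interoperability on the Web"], "650": ["Metadata"]}))
```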
  9. Francu, V.: Does convenience trump accuracy? : the avatars of the UDC in Romania (2007) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 544) [ClassicSimilarity], result of:
              0.049993843 = score(doc=544,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=544)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper will concentrate on some major issues regarding the potential of the UDC and the current controversy about its use in Romania: i) the importance of hierarchical structures in controlled vocabularies, with a direct impact on improved information retrieval given by the browsing function, which enables visualizing the hierarchies in subject areas rather than just locating a particular topic; ii) the lack of popularity of the UDC as an indexing and information retrieval language among its users, be they librarians or end users of library OPACs; and iii) the situation of UDC teachers and teaching in Romanian universities.
  10. Paralic, J.; Kostial, I.: Ontology-based information retrieval (2003) 0.01
    0.012498461 = product of:
      0.024996921 = sum of:
        0.024996921 = product of:
          0.049993843 = sum of:
            0.049993843 = weight(_text_:i in 1153) [ClassicSimilarity], result of:
              0.049993843 = score(doc=1153,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.29170483 = fieldWeight in 1153, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1153)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.01
    0.012370269 = product of:
      0.024740538 = sum of:
        0.024740538 = product of:
          0.049481075 = sum of:
            0.049481075 = weight(_text_:i in 1178) [ClassicSimilarity], result of:
              0.049481075 = score(doc=1178,freq=6.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.28871292 = fieldWeight in 1178, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
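Cohen's aside about sampling deserves a concrete illustration: even a crude statistic such as character-level entropy separates Borges's random letter sequences from real prose, which is the kind of one-fell-swoop judgment he attributes to computational methods. A minimal sketch; the 4.5 bits-per-character threshold is an arbitrary assumption for illustration.

```python
import math
import random
import string
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy, in bits per character, of the letter distribution."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    n = len(letters)
    return -sum(k / n * math.log2(k / n) for k in counts.values())

# A "Babel" page: uniformly random letters, entropy near log2(26) = 4.70.
babel = "".join(random.choices(string.ascii_lowercase, k=5000))

# English prose typically lands around 4.1-4.2 bits per character, so even a
# simple threshold flags incoherent rooms without reading a single book.
for name, text in [("babel", babel), ("prose", "the careful reading of books one after the other" * 20)]:
    h = char_entropy(text)
    print(name, round(h, 2), "incoherent" if h > 4.5 else "plausible text")
```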
  12. Decimal Classification Editorial Policy Committee (2002) 0.01
    0.010883095 = product of:
      0.02176619 = sum of:
        0.02176619 = product of:
          0.04353238 = sum of:
            0.04353238 = weight(_text_:22 in 236) [ClassicSimilarity], result of:
              0.04353238 = score(doc=236,freq=4.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.27358043 = fieldWeight in 236, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=236)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Decimal Classification Editorial Policy Committee (EPC) held its Meeting 117 at the Library Dec. 3-5, 2001, with chair Andrea Stamm (Northwestern University) presiding. Through its actions at this meeting, significant progress was made toward publication of DDC unabridged Edition 22 in mid-2003 and Abridged Edition 14 in early 2004. For Edition 22, the committee approved the revisions to two major segments of the classification: Table 2 through 55 Iran (the first half of the geographic area table) and 900 History and geography. EPC approved updates to several parts of the classification it had already considered: 004-006 Data processing, Computer science; 340 Law; 370 Education; 510 Mathematics; 610 Medicine; Table 3 issues concerning treatment of scientific and technical themes, with folklore, arts, and printing ramifications at 398.2 - 398.3, 704.94, and 758; Table 5 and Table 6 Ethnic Groups and Languages (portions concerning American native peoples and languages); and tourism issues at 647.9 and 790. Reports on the results of testing the approved 200 Religion and 305-306 Social groups schedules were received, as was a progress report on revision work for the manual being done by Ross Trotter (British Library, retired). Revisions for Abridged Edition 14 that received committee approval included 010 Bibliography; 070 Journalism; 150 Psychology; 370 Education; 380 Commerce, communications, and transportation; 621 Applied physics; 624 Civil engineering; and 629.8 Automatic control engineering. At the meeting the committee received print versions of _DC&_ numbers 4 and 5. Primarily for the use of Dewey translators, these cumulations list changes, substantive and cosmetic, to DDC Edition 21 and Abridged Edition 13 for the period October 1999 - December 2001. EPC will hold its Meeting 118 at the Library May 15-17, 2002.
  13. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.01
    0.010773714 = product of:
      0.021547427 = sum of:
        0.021547427 = product of:
          0.043094855 = sum of:
            0.043094855 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
              0.043094855 = score(doc=759,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.2708308 = fieldWeight in 759, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=759)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 5.2013 19:22:18
  14. Hajdu Barat, A.: Multilevel education, training, traditions and research in Hungary (2007) 0.01
    0.010712966 = product of:
      0.021425933 = sum of:
        0.021425933 = product of:
          0.042851865 = sum of:
            0.042851865 = weight(_text_:i in 545) [ClassicSimilarity], result of:
              0.042851865 = score(doc=545,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.25003272 = fieldWeight in 545, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=545)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper aims to explore the theory and practice of education in schools and in further education as two levels of the Information Society in Hungary; LIS education is considered a third level above them. I attempt to survey the curriculum and content of different school subjects, and the structure of the programme for librarians. There is a great and long history of UDC usage in Hungary, and the lecture sketches the stages of this tradition from its beginnings to the present situation. Szabó Ervin began teaching the UDC at the Municipal Library in Budapest in 1910; he not only used the UDC but also taught it to librarians in his courses. As a consequence of Szabó Ervin's activity, Hungarian librarians knew and used the UDC very early, and it came into use in virtually all libraries. The article gives a short overview of recent developments and duties, the situation after the new Hungarian edition, UDC usage in Hungarian OPACs, and the possibility of UDC visualization.
  15. Wang, Y.-H.; Jhuo, P.-S.: ¬A semantic faceted search with rule-based inference (2009) 0.01
    0.010712966 = product of:
      0.021425933 = sum of:
        0.021425933 = product of:
          0.042851865 = sum of:
            0.042851865 = weight(_text_:i in 540) [ClassicSimilarity], result of:
              0.042851865 = score(doc=540,freq=2.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.25003272 = fieldWeight in 540, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.046875 = fieldNorm(doc=540)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Proceedings of the International MultiConference of Engineers and Computer Scientists 2009 Vol I, IMECS 2009, March 18 - 20, 2009, Hong Kong
  16. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part II : achieving interoperability at the record and repository levels (2006) 0.01
    0.010100282 = product of:
      0.020200564 = sum of:
        0.020200564 = product of:
          0.040401127 = sum of:
            0.040401127 = weight(_text_:i in 1177) [ClassicSimilarity], result of:
              0.040401127 = score(doc=1177,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.2357331 = fieldWeight in 1177, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1177)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is the second part of an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and their applications in order to facilitate the conversion and exchange of metadata and to enable cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level (discussed in Part I of the article), record level (discussed in Part II of the article), and repository level (also discussed in Part II). The results of efforts to improve interoperability may be observed from different perspectives as well, including element-based and value-based approaches. As discussed in Part I of this study, the results of efforts to improve interoperability can be observed at different levels: 1. Schema level - Efforts are focused on the elements of the schemas, being independent of any applications. The results usually appear as derived element sets or encoded schemas, crosswalks, application profiles, and element registries. 2. Record level - Efforts are intended to integrate the metadata records through the mapping of the elements according to the semantic meanings of these elements. Common results include converted records and new records resulting from combining values of existing records. 3. Repository level - With harvested or integrated records from varying sources, efforts at this level focus on mapping value strings associated with particular elements (e.g., terms associated with subject or format elements). The results enable cross-collection searching. In the following sections, we will continue to analyze interoperability efforts and methodologies, focusing on the record level and the repository level. It should be noted that the models to be discussed in this article are not always mutually exclusive. Sometimes, within a particular project, more than one method may be used.
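At the record level, the conversion the authors describe moves values from source elements into target elements according to a semantic mapping, sometimes combining several source values into one new value. A minimal sketch of such a converter; the field names are invented for illustration.

```python
# Record-level conversion: map values by element semantics, combining
# several source values into one target value where the schemas differ.
def convert_record(source: dict[str, str]) -> dict[str, str]:
    target = {
        "title":   source.get("main_title", ""),
        "creator": source.get("author", ""),
        # Two source elements collapse into one target element.
        "date":    "-".join(v for v in (source.get("year"), source.get("month")) if v),
    }
    return {k: v for k, v in target.items() if v}  # drop empty elements

print(convert_record({"main_title": "Metadata interoperability", "author": "Chan, L.M.", "year": "2006"}))
```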
  17. Bittner, T.; Donnelly, M.; Winter, S.: Ontology and semantic interoperability (2006) 0.01
    0.009234612 = product of:
      0.018469224 = sum of:
        0.018469224 = product of:
          0.036938448 = sum of:
            0.036938448 = weight(_text_:22 in 4820) [ClassicSimilarity], result of:
              0.036938448 = score(doc=4820,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.23214069 = fieldWeight in 4820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4820)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3.12.2016 18:39:22
  18. Beppler, F.D.; Fonseca, F.T.; Pacheco, R.C.S.: Hermeneus: an architecture for an ontology-enabled information retrieval (2008) 0.01
    0.009234612 = product of:
      0.018469224 = sum of:
        0.018469224 = product of:
          0.036938448 = sum of:
            0.036938448 = weight(_text_:22 in 3261) [ClassicSimilarity], result of:
              0.036938448 = score(doc=3261,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.23214069 = fieldWeight in 3261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3261)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    28.11.2016 12:43:22
  19. Atran, S.; Medin, D.L.; Ross, N.: Evolution and devolution of knowledge : a tale of two biologies (2004) 0.01
    0.009234612 = product of:
      0.018469224 = sum of:
        0.018469224 = product of:
          0.036938448 = sum of:
            0.036938448 = weight(_text_:22 in 479) [ClassicSimilarity], result of:
              0.036938448 = score(doc=479,freq=2.0), product of:
                0.15912095 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045439374 = queryNorm
                0.23214069 = fieldWeight in 479, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=479)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    23. 1.2022 10:22:18
  20. Hammond, T.; Hannay, T.; Lund, B.; Scott, J.: Social bookmarking tools (I) : a general review (2005) 0.01
    0.008837746 = product of:
      0.017675493 = sum of:
        0.017675493 = product of:
          0.035350986 = sum of:
            0.035350986 = weight(_text_:i in 1188) [ClassicSimilarity], result of:
              0.035350986 = score(doc=1188,freq=4.0), product of:
                0.17138503 = queryWeight, product of:
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.045439374 = queryNorm
                0.20626646 = fieldWeight in 1188, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.7717297 = idf(docFreq=2765, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1188)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Because, to paraphrase a pop music lyric from a certain rock and roll band of yesterday, "the Web is old, the Web is new, the Web is all, the Web is you", it seems like we might have to face up to some of these stark realities. With the introduction of new social software applications such as blogs, wikis, newsfeeds, social networks, and bookmarking tools (the subject of this paper), the claim that Shelley Powers makes in a Burningbird blog entry seems apposite: "This is the user's web now, which means it's my web and I can make the rules." Reinvention is revolution - it brings us always back to beginnings. We are here going to remind you of hyperlinks in all their glory, sell you on the idea of bookmarking hyperlinks, point you at other folks who are doing the same, and tell you why this is a good thing. Just as long as those hyperlinks (or let's call them plain old links) are managed, tagged, commented upon, and published onto the Web, they represent a user's own personal library placed on public record, which - when aggregated with other personal libraries - allows for rich, social networking opportunities. Why spill any ink (digital or not) in rewriting what someone else has already written about instead of just pointing at the original story and adding the merest of titles, descriptions and tags for future reference? More importantly, why not make these personal 'link playlists' available to oneself and to others from whatever browser or computer one happens to be using at the time? This paper reviews some current initiatives, as of early 2005, in providing public link management applications on the Web - utilities that are often referred to under the general moniker of 'social bookmarking tools'. There are a couple of things going on here: 1) server-side software aimed specifically at managing links with, crucially, a strong, social networking flavour, and 2) an unabashedly open and unstructured approach to tagging, or user classification, of those links.
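Underneath the rhetoric, the "personal library placed on public record" is a simple structure: per-user bookmarks carrying free-form tags, aggregated by URL to yield the social layer. A minimal in-memory sketch of that aggregation, assuming no particular tool's API; the user names and the example.org URL are illustrative.

```python
from collections import defaultdict

# (user, url, free-form tags): the open, unstructured classification
# ("tagging") that the paper contrasts with formal taxonomies.
bookmarks = [
    ("alice", "http://www.dlib.org/", {"metadata", "journals"}),
    ("bob",   "http://www.dlib.org/", {"digital-libraries", "metadata"}),
    ("bob",   "http://example.org/",  {"misc"}),
]

# Aggregating personal libraries by URL yields the social layer: who else
# bookmarked a link, and which tags the community attached to it.
by_url: dict[str, dict[str, set[str]]] = defaultdict(lambda: {"users": set(), "tags": set()})
for user, url, tags in bookmarks:
    by_url[url]["users"].add(user)
    by_url[url]["tags"].update(tags)

print(sorted(by_url["http://www.dlib.org/"]["tags"]))  # community tags for one link
```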