Search (5774 results, page 1 of 289)

  • Active filter: type_ss:"a"
  1. Yee, M.M.: What is a work? : part 1: the user and the objects of the catalog (1994) 0.25
    0.2506367 = sum of:
      0.06354813 = product of:
        0.1906444 = sum of:
          0.1906444 = weight(_text_:objects in 735) [ClassicSimilarity], result of:
            0.1906444 = score(doc=735,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.5813359 = fieldWeight in 735, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=735)
        0.33333334 = coord(1/3)
      0.18708855 = sum of:
        0.1285717 = weight(_text_:work in 735) [ClassicSimilarity], result of:
          0.1285717 = score(doc=735,freq=8.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.56773454 = fieldWeight in 735, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=735)
        0.058516845 = weight(_text_:22 in 735) [ClassicSimilarity], result of:
          0.058516845 = score(doc=735,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.2708308 = fieldWeight in 735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=735)
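
     The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output: each leaf score is queryWeight (idf * queryNorm) multiplied by fieldWeight (tf * idf * fieldNorm), the leaves are summed per clause, and coord(m/n) scales a sub-query by the fraction of its clauses that matched. As a minimal sketch, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = ln(maxDocs / (docFreq + 1)) + 1 (the helper function below is illustrative, not Lucene's API), the first leaf of this hit can be reproduced from the numbers shown:

       from math import sqrt, log

       def classic_leaf_score(freq, doc_freq, max_docs, query_norm, field_norm):
           # ClassicSimilarity leaf score = queryWeight * fieldWeight
           tf = sqrt(freq)                            # 2.0 for freq=4.0
           idf = log(max_docs / (doc_freq + 1)) + 1   # ~5.315071 for docFreq=590, maxDocs=44218
           query_weight = idf * query_norm            # ~0.3279419
           field_weight = tf * idf * field_norm       # ~0.5813359
           return query_weight * field_weight

       # Values taken from weight(_text_:objects in 735) above:
       print(classic_leaf_score(freq=4.0, doc_freq=590, max_docs=44218,
                                query_norm=0.061700378, field_norm=0.0546875))
       # ~0.1906444, matching the explain output

     The document score is the coord-weighted sum of such leaves; queryNorm is a query-wide normalisation factor and does not change the ranking within a single query.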
    
    Abstract
     Part 1 of a series of articles exploring the concept of 'the work' in cataloguing practice and attempting to construct a definition of the term based on AACR theory and practice. The study begins with a consideration of the objects of the catalogue, their history, and the evidence bearing on the degree to which the user needs access to the work, as opposed to a particular edition of the work.
    Footnote
     See also: Pt.2: Cataloging and classification quarterly. 19(1994) no.2, S.5-22; Pt.3: Cataloging and classification quarterly. 20(1995) no.1, S.25-46; Pt.4: Cataloging and classification quarterly. 20(1995) no.2, S.3-24
  2. Klas, C.-P.; Fuhr, N.; Schaefer, A.: Evaluating strategic support for information access in the DAFFODIL system (2004) 0.20
    0.1988776 = sum of:
      0.038515985 = product of:
        0.11554795 = sum of:
          0.11554795 = weight(_text_:objects in 2419) [ClassicSimilarity], result of:
            0.11554795 = score(doc=2419,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.35234275 = fieldWeight in 2419, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=2419)
        0.33333334 = coord(1/3)
      0.16036162 = sum of:
        0.11020432 = weight(_text_:work in 2419) [ClassicSimilarity], result of:
          0.11020432 = score(doc=2419,freq=8.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.4866296 = fieldWeight in 2419, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=2419)
        0.050157297 = weight(_text_:22 in 2419) [ClassicSimilarity], result of:
          0.050157297 = score(doc=2419,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.23214069 = fieldWeight in 2419, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2419)
    
    Abstract
     The digital library system Daffodil is targeted at strategic support of users during the information search process. For searching, exploring and managing digital library objects, it provides user-customisable information seeking patterns over a federation of heterogeneous digital libraries. In this paper, evaluation results with respect to retrieval effectiveness, efficiency and user satisfaction are presented. The analysis focuses on strategic support for the scientific work-flow. Daffodil supports the whole work-flow, from data source selection through information seeking to the representation, organisation and reuse of information. By embedding high-level search functionality into the scientific work-flow, the user experiences better strategic system support due to a more systematic work process. These ideas have been implemented in Daffodil and followed by a qualitative evaluation. The evaluation was conducted with 28 participants, ranging from information seeking novices to experts. The results are promising, as they support the chosen model.
    Date
    16.11.2008 16:22:48
  3. Varela, C.A.; Agha, G.A.: What after Java? : From objects to actors (1998) 0.17
    0.167738 = sum of:
      0.044935312 = product of:
        0.13480593 = sum of:
          0.13480593 = weight(_text_:objects in 3612) [ClassicSimilarity], result of:
            0.13480593 = score(doc=3612,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.41106653 = fieldWeight in 3612, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3612)
        0.33333334 = coord(1/3)
      0.1228027 = sum of:
        0.06428585 = weight(_text_:work in 3612) [ClassicSimilarity], result of:
          0.06428585 = score(doc=3612,freq=2.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.28386727 = fieldWeight in 3612, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3612)
        0.058516845 = weight(_text_:22 in 3612) [ClassicSimilarity], result of:
          0.058516845 = score(doc=3612,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.2708308 = fieldWeight in 3612, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3612)
    
    Abstract
     Discusses drawbacks of the Java programming language and proposes some potential improvements for concurrent object-oriented software development. Java's passive object model does not provide an effective means for building distributed applications, which are critical to the future of Web-based, next-generation information systems. Suggests improvements to Java's existing mechanisms for maintaining consistency across multiple threads, sending asynchronous messages and controlling resources. Drives the discussion with examples and suggestions from work on the Actor model of computation.
    Date
    1. 8.1996 22:08:06
  4. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.14
    0.14377543 = sum of:
      0.038515985 = product of:
        0.11554795 = sum of:
          0.11554795 = weight(_text_:objects in 2623) [ClassicSimilarity], result of:
            0.11554795 = score(doc=2623,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.35234275 = fieldWeight in 2623, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=2623)
        0.33333334 = coord(1/3)
      0.105259456 = sum of:
        0.05510216 = weight(_text_:work in 2623) [ClassicSimilarity], result of:
          0.05510216 = score(doc=2623,freq=2.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.2433148 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.050157297 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
          0.050157297 = score(doc=2623,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.23214069 = fieldWeight in 2623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
    
    Abstract
     Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories (attribute/value-propagation, value-propagation, and value-constraint) and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
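
     As a minimal first-order sketch of one of these categories, offered only as an illustration of the naive reading of value-propagation and not the paper's own formalization, let Described(x, v) state that x carries metadata value v and In(x, c) that item x belongs to collection c; then
       \forall c\,\forall v\,\forall x\,\big(\mathrm{Described}(c, v) \land \mathrm{In}(x, c) \rightarrow \mathrm{Described}(x, v)\big)
     says that a value recorded at collection level propagates to every item. The abstract's point is that even such simple patterns, stated precisely, require modal notions beyond plain first-order logic.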
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  5. Kruk, S.R.; Kruk, E.; Stankiewicz, K.: Evaluation of semantic and social technologies for digital libraries (2009) 0.14
    0.14377543 = sum of:
      0.038515985 = product of:
        0.11554795 = sum of:
          0.11554795 = weight(_text_:objects in 3387) [ClassicSimilarity], result of:
            0.11554795 = score(doc=3387,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.35234275 = fieldWeight in 3387, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=3387)
        0.33333334 = coord(1/3)
      0.105259456 = sum of:
        0.05510216 = weight(_text_:work in 3387) [ClassicSimilarity], result of:
          0.05510216 = score(doc=3387,freq=2.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.2433148 = fieldWeight in 3387, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.046875 = fieldNorm(doc=3387)
        0.050157297 = weight(_text_:22 in 3387) [ClassicSimilarity], result of:
          0.050157297 = score(doc=3387,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.23214069 = fieldWeight in 3387, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3387)
    
    Abstract
     Libraries are the tools we use to learn and to answer our questions. The quality of our work depends, among other things, on the quality of the tools we use. Recent research in digital libraries is focused, on the one hand, on improving the infrastructure of digital library management systems (DLMS) and, on the other, on improving the metadata models used to annotate the collections of objects maintained by a DLMS. The latter includes, among others, the semantic web and social networking technologies, which have recently been introduced to the digital libraries domain. The expected outcome is that the overall quality of information discovery in digital libraries can be improved by employing social and semantic technologies. In this chapter we present the results of an evaluation of social and semantic end-user information discovery services for digital libraries.
    Date
    1. 8.2010 12:35:22
  6. Frâncu, V.: ¬An interpretation of the FRBR model (2004) 0.14
    0.14125696 = sum of:
      0.025677323 = product of:
        0.07703197 = sum of:
          0.07703197 = weight(_text_:objects in 2647) [ClassicSimilarity], result of:
            0.07703197 = score(doc=2647,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.23489517 = fieldWeight in 2647, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.03125 = fieldNorm(doc=2647)
        0.33333334 = coord(1/3)
      0.11557964 = sum of:
        0.082141444 = weight(_text_:work in 2647) [ClassicSimilarity], result of:
          0.082141444 = score(doc=2647,freq=10.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.3627123 = fieldWeight in 2647, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.03125 = fieldNorm(doc=2647)
        0.0334382 = weight(_text_:22 in 2647) [ClassicSimilarity], result of:
          0.0334382 = score(doc=2647,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.15476047 = fieldWeight in 2647, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2647)
    
    Abstract
     Despite the existence of a logical structural model for bibliographic records that integrates any record type, library catalogues persist in offering catalogue records at the level of 'items'. Such records, however, do not clearly indicate which works they contain, so the search possibilities of the end user are unduly limited. The Functional Requirements for Bibliographic Records (FRBR) present, through a conceptual model independent of any cataloguing code or implementation, a globalized view of the bibliographic universe. This model, a synthesis of the existing cataloguing rules, consists of clearly structured entities and well-defined types of relationships among them. From a theoretical viewpoint, the model is likely to be a good knowledge organiser, with great potential for identifying the author and the work represented by an item or publication, and it is able to link different works of an author with different editions, translations or adaptations of those works, aiming at better answering user needs. This paper presents an interpretation of the FRBR model, contrasting it with a traditional bibliographic record of complex library material.
    Content
     1. Introduction. With the diversification of the material available in library collections, such as music, film, 3D objects, cartographic material and electronic resources like CD-ROMs and Web sites, the existing cataloguing principles and codes are no longer adequate to enable the user to find, identify, select and obtain a particular entity. The problem is not only that material fails to be appropriately represented in catalogue records, but also that access to such material, or to parts of it, is difficult if it is possible at all. Consequently, the need emerged to develop new rules and to build a new conceptual model able to cope with all the requirements posed by the existing library material. The Functional Requirements for Bibliographic Records, developed by an IFLA Study Group from 1992 through 1997, present a generalised view of the bibliographic universe and are intended to be independent of any cataloguing code or implementation (Tillett, 2002). Outstanding scholars like Antonio Panizzi, Charles A. Cutter and Seymour Lubetzky formulated the basic cataloguing principles, some of which, as Denton (2003) argues, can be retrieved in updated form between the basic lines of the FRBR model:
     - the relation work-author groups all the works of an author
     - all the editions, translations and adaptations of a work are clearly separated (as expressions and manifestations)
     - all the expressions and manifestations of a work are collocated with their related works in bibliographic families
     - any document (manifestation and item) can be found if the author, title or subject of that document is known
     - the author is authorised by the authority control
     - the title is an intrinsic part of the work + authority control entity
    Date
    17. 6.2015 14:40:22
  7. Walker, S.: Improving subject access painlessly : recent work on the Okapi online catalogue projects (1988) 0.14
    0.14034593 = product of:
      0.28069186 = sum of:
        0.28069186 = sum of:
          0.14693908 = weight(_text_:work in 7403) [ClassicSimilarity], result of:
            0.14693908 = score(doc=7403,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.6488395 = fieldWeight in 7403, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.125 = fieldNorm(doc=7403)
          0.1337528 = weight(_text_:22 in 7403) [ClassicSimilarity], result of:
            0.1337528 = score(doc=7403,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.61904186 = fieldWeight in 7403, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.125 = fieldNorm(doc=7403)
      0.5 = coord(1/2)
    
    Source
    Program. 22(1988), S.21-31
  8. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.14
    0.13695967 = sum of:
      0.097996555 = product of:
        0.29398966 = sum of:
          0.29398966 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
            0.29398966 = score(doc=862,freq=2.0), product of:
              0.5230965 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.061700378 = queryNorm
              0.56201804 = fieldWeight in 862, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.33333334 = coord(1/3)
      0.03896311 = product of:
        0.07792622 = sum of:
          0.07792622 = weight(_text_:work in 862) [ClassicSimilarity], result of:
            0.07792622 = score(doc=862,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.3440991 = fieldWeight in 862, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=862)
        0.5 = coord(1/2)
    
    Abstract
     This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges (summary and question answering) prompt ChatGPT to produce original content (98-99%) from a single text entry and sequential questions initially posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019, and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, overall quality, and plagiarism risks. While Turing's original prose scores at least 14% below the machine-generated output, whether an algorithm displays hints of Turing's true initial thoughts (the "Lovelace 2.0" test) remains unanswerable.
    Source
     https://arxiv.org/abs/2212.06721
  9. Madison, O.M.A.: ¬The IFLA Functional Requirements for Bibliographic Records : international standards for bibliographic control (2000) 0.13
    0.13310774 = sum of:
      0.045391526 = product of:
        0.13617457 = sum of:
          0.13617457 = weight(_text_:objects in 187) [ClassicSimilarity], result of:
            0.13617457 = score(doc=187,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.41523993 = fieldWeight in 187, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=187)
        0.33333334 = coord(1/3)
      0.087716214 = sum of:
        0.045918465 = weight(_text_:work in 187) [ClassicSimilarity], result of:
          0.045918465 = score(doc=187,freq=2.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.20276234 = fieldWeight in 187, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=187)
        0.04179775 = weight(_text_:22 in 187) [ClassicSimilarity], result of:
          0.04179775 = score(doc=187,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.19345059 = fieldWeight in 187, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=187)
    
    Abstract
    The formal charge for the IFLA study involving international bibliography standards was to delineate the functions that are performed by the bibliographic record with respect to various media, applications, and user needs. The method used was the entity relationship analysis technique. Three groups of entities that are the key objects of interest to users of bibliographic records were defined. The primary group contains four entities: work, expression, manifestation, and item. The second group includes entities responsible for the intellectual or artistic content, production, or ownership of entities in the first group. The third group includes entities that represent concepts, objects, events, and places. In the study we identified the attributes associated with each entity and the relationships that are most important to users. The attributes and relationships were mapped to the functional requirements for bibliographic records that were defined in terms of four user tasks: to find, identify, select, and obtain. Basic requirements for national bibliographic records were recommended based on the entity analysis. The recommendations of the study are compared with two standards, AACR (Anglo-American Cataloguing Rules) and the Dublin Core, to place them into pragmatic context. The results of the study are being used in the review of the complete set of ISBDs as the initial benchmark in determining data elements for each format.
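
     The primary group described here maps naturally onto a nested data structure. The following is a minimal sketch of the Group 1 entity chain (work, expression, manifestation, item); the class and field names are illustrative and are not taken from the study or from any FRBR implementation:

       from dataclasses import dataclass, field
       from typing import List

       @dataclass
       class Item:
           identifier: str                      # a single exemplar, e.g. one copy on a shelf

       @dataclass
       class Manifestation:
           edition: str                         # a published embodiment of an expression
           items: List[Item] = field(default_factory=list)

       @dataclass
       class Expression:
           form: str                            # e.g. a particular translation or revision
           manifestations: List[Manifestation] = field(default_factory=list)

       @dataclass
       class Work:
           title: str                           # the abstract intellectual or artistic creation
           expressions: List[Expression] = field(default_factory=list)

       # The four user tasks (find, identify, select, obtain) traverse this chain
       # from the Work down to an Item the user can actually obtain.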
    Date
    10. 9.2000 17:38:22
  10. Raper, J.: Geographic relevance (2007) 0.13
    0.13310774 = sum of:
      0.045391526 = product of:
        0.13617457 = sum of:
          0.13617457 = weight(_text_:objects in 846) [ClassicSimilarity], result of:
            0.13617457 = score(doc=846,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.41523993 = fieldWeight in 846, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0390625 = fieldNorm(doc=846)
        0.33333334 = coord(1/3)
      0.087716214 = sum of:
        0.045918465 = weight(_text_:work in 846) [ClassicSimilarity], result of:
          0.045918465 = score(doc=846,freq=2.0), product of:
            0.22646447 = queryWeight, product of:
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.061700378 = queryNorm
            0.20276234 = fieldWeight in 846, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6703904 = idf(docFreq=3060, maxDocs=44218)
              0.0390625 = fieldNorm(doc=846)
        0.04179775 = weight(_text_:22 in 846) [ClassicSimilarity], result of:
          0.04179775 = score(doc=846,freq=2.0), product of:
            0.21606421 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.061700378 = queryNorm
            0.19345059 = fieldWeight in 846, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=846)
    
    Abstract
     Purpose - This paper concerns the dimensions of relevance in information retrieval systems and their completeness in new retrieval contexts such as mobile search. Geography as a factor in relevance is little understood, and information seeking is usually assumed to take place in indoor environments. Yet the rise of information seeking on the move using mobile devices implies the need to better understand the kind of situational relevance operating in this context.
     Design/methodology/approach - The paper outlines and explores a geographic information seeking process in which geographic information needs (conditioned by needs and tasks, in context) drive the acquisition and use of geographic information objects, which in turn influence geographic behaviour in the environment. Geographic relevance is defined as "a relation between a geographic information need" (like an attention span) and "the spatio-temporal expression of the geographic information objects needed to satisfy it" (like an area of influence). Some empirical examples are given to indicate the theoretical and practical application of this work.
     Findings - The paper sets out definitions of geographical information needs based on cognitive and geographic criteria and proposes four canonical cases, which might be theorised as anomalous states of geographic knowledge (ASGK). It argues that geographic relevance is best defined as a spatio-temporally extended relation between an information need (an "attention" span) and a geographic information object (a zone of "influence"), and it defines four domains of geographic relevance. Finally, a model of geographic relevance is suggested in which attention and influence are modelled as map layers whose intersection can define the nature of the relation.
     Originality/value - Geographic relevance is a new field of research that has so far been poorly defined and little researched. This paper sets out new principles for the study of geographic information behaviour.
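
     A toy illustration of the Findings, under the assumption (made here purely for illustration; the paper speaks of map layers) that the attention span and the zone of influence are rasterized into sets of grid cells whose intersection delimits where the relevance relation holds:

       # Toy sketch: attention span and zone of influence as sets of (x, y) grid cells.
       attention_span = {(x, y) for x in range(0, 5) for y in range(0, 5)}      # where the need applies
       zone_of_influence = {(x, y) for x in range(3, 8) for y in range(3, 8)}   # where the object matters

       relevant_region = attention_span & zone_of_influence
       print(sorted(relevant_region))   # the cells where need and object overlap
       print(bool(relevant_region))     # True -> the object is geographically relevant here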
    Date
    23.12.2007 14:22:24
  11. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.13
    0.12554763 = sum of:
      0.097996555 = product of:
        0.29398966 = sum of:
          0.29398966 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
            0.29398966 = score(doc=400,freq=2.0), product of:
              0.5230965 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.061700378 = queryNorm
              0.56201804 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.33333334 = coord(1/3)
      0.02755108 = product of:
        0.05510216 = sum of:
          0.05510216 = weight(_text_:work in 400) [ClassicSimilarity], result of:
            0.05510216 = score(doc=400,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.5 = coord(1/2)
    
    Abstract
     In a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values that form a group of child concepts. We call these attributes facets: classification, for example, has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy, as sketched below.
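
     The conflict-resolution constraint can be pictured with a generic acyclicity check of the kind such a growth step might use; this is a sketch of the constraint only, not the authors' published algorithm, and the graph representation and function names are assumptions:

       from collections import defaultdict

       def reaches(graph, src, dst):
           # Depth-first check whether dst is reachable from src.
           stack, seen = [src], set()
           while stack:
               node = stack.pop()
               if node == dst:
                   return True
               if node not in seen:
                   seen.add(node)
                   stack.extend(graph[node])
           return False

       def add_edge_if_acyclic(graph, parent, child):
           # Insert parent -> child unless it would close a cycle.
           if reaches(graph, child, parent):
               return False   # conflict: the edge would make the hierarchy cyclic
           graph[parent].append(child)
           return True

       hierarchy = defaultdict(list)
       print(add_edge_if_acyclic(hierarchy, "classification", "svm"))   # True, edge accepted
       print(add_edge_if_acyclic(hierarchy, "svm", "classification"))   # False, edge rejected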
    Content
     See: https://aclanthology.org/D19-5317.pdf.
  12. Marchese, C.; Smiraglia, R.P.: Boundary objects: CWA, an HR Firm, and emergent vocabulary (2013) 0.12
    0.123287216 = sum of:
      0.077830255 = product of:
        0.23349077 = sum of:
          0.23349077 = weight(_text_:objects in 1066) [ClassicSimilarity], result of:
            0.23349077 = score(doc=1066,freq=6.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.7119882 = fieldWeight in 1066, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1066)
        0.33333334 = coord(1/3)
      0.04545696 = product of:
        0.09091392 = sum of:
          0.09091392 = weight(_text_:work in 1066) [ClassicSimilarity], result of:
            0.09091392 = score(doc=1066,freq=4.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.40144894 = fieldWeight in 1066, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1066)
        0.5 = coord(1/2)
    
    Abstract
     Knowledge organization structures are dependent upon domain-analytical processes for determining ontological imperatives. Boundary objects, terms used in multiple domains but understood differently in each, are ontological clash points. Cognitive Work Analysis (CWA) is an effective qualitative methodology for domain analysis of a group of people who work together. CWA was used recently to understand the ontology of a human resources firm. Boundary objects from the taxonomy that emerged from the narrative analysis are presented here for individual analysis.
  13. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.1230752 = sum of:
      0.097996555 = product of:
        0.29398966 = sum of:
          0.29398966 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.29398966 = score(doc=562,freq=2.0), product of:
              0.5230965 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.061700378 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.025078649 = product of:
        0.050157297 = sum of:
          0.050157297 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.050157297 = score(doc=562,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
     See: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  14. Davis, M.: Building a global legal index : a work in progress (2001) 0.12
    0.1228027 = product of:
      0.2456054 = sum of:
        0.2456054 = sum of:
          0.1285717 = weight(_text_:work in 6443) [ClassicSimilarity], result of:
            0.1285717 = score(doc=6443,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.56773454 = fieldWeight in 6443, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.109375 = fieldNorm(doc=6443)
          0.11703369 = weight(_text_:22 in 6443) [ClassicSimilarity], result of:
            0.11703369 = score(doc=6443,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.5416616 = fieldWeight in 6443, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=6443)
      0.5 = coord(1/2)
    
    Source
    Indexer. 22(2001) no.3, S.123-127
  15. Bueno-de-la-Fuente, G.; Hernández-Pérez, T.; Rodríguez-Mateos, D.; Méndez-Rodríguez, E.M.; Martín-Galán, B.: Study on the use of metadata for digital learning objects in University Institutional Repositories (MODERI) (2009) 0.12
    0.12189559 = sum of:
      0.09434451 = product of:
        0.28303352 = sum of:
          0.28303352 = weight(_text_:objects in 2981) [ClassicSimilarity], result of:
            0.28303352 = score(doc=2981,freq=12.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.86305994 = fieldWeight in 2981, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=2981)
        0.33333334 = coord(1/3)
      0.02755108 = product of:
        0.05510216 = sum of:
          0.05510216 = weight(_text_:work in 2981) [ClassicSimilarity], result of:
            0.05510216 = score(doc=2981,freq=2.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.2433148 = fieldWeight in 2981, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=2981)
        0.5 = coord(1/2)
    
    Abstract
     Metadata is a core issue for the creation of repositories. Different institutional repositories have chosen and use different metadata models, elements and values for describing the range of digital objects they store. This paper therefore analyzes the current use of metadata describing the Learning Objects that some open higher education institutions' repositories include in their collections. The goal of this work is to identify and analyze the different metadata models being used to describe educational features of those specific digital educational objects (such as audience, type of educational material, learning objectives, etc.). Also discussed are the concept and typology of Learning Objects (LOs) and their use in university repositories. We also examine the usefulness of specifically describing those learning objects, setting them apart from other kinds of documents included in the repository, mainly scholarly publications and research results of the higher education institution.
  16. Malan, C.: Personal strategies in reference work (1992) 0.12
    0.121330865 = product of:
      0.24266173 = sum of:
        0.24266173 = sum of:
          0.15906623 = weight(_text_:work in 5689) [ClassicSimilarity], result of:
            0.15906623 = score(doc=5689,freq=6.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.70238936 = fieldWeight in 5689, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.078125 = fieldNorm(doc=5689)
          0.0835955 = weight(_text_:22 in 5689) [ClassicSimilarity], result of:
            0.0835955 = score(doc=5689,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.38690117 = fieldWeight in 5689, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5689)
      0.5 = coord(1/2)
    
    Abstract
     Most of the skills required in reference work can be learnt or developed over a period of time. Examines the following: skills and knowledge pertaining to stock, reference tools and the communities served in South Africa; and interpersonal skills in traditional reference work and in a computerized library.
    Source
    Cape librarian. 36(1992) no.10, S.22-23
  17. Day, R.E.: Works and representation (2008) 0.12
    0.12116922 = sum of:
      0.038515985 = product of:
        0.11554795 = sum of:
          0.11554795 = weight(_text_:objects in 2007) [ClassicSimilarity], result of:
            0.11554795 = score(doc=2007,freq=2.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.35234275 = fieldWeight in 2007, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.046875 = fieldNorm(doc=2007)
        0.33333334 = coord(1/3)
      0.08265323 = product of:
        0.16530646 = sum of:
          0.16530646 = weight(_text_:work in 2007) [ClassicSimilarity], result of:
            0.16530646 = score(doc=2007,freq=18.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.72994435 = fieldWeight in 2007, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.046875 = fieldNorm(doc=2007)
        0.5 = coord(1/2)
    
    Abstract
     The concept of the work in art differs from and challenges traditional concepts of the work in bibliography. Whereas the traditional bibliographic concept of the work takes an ideational approach that incorporates mentalist epistemologies, container-content metaphors, and the conduit metaphor of information transfer and re-presentation, the concept of the work of art, as presented here, begins with the site-specific and time-valued nature of the object as a product of human labor and as an event that is emergent through cultural forms and from social situations. The account of the work here is thus materialist and expressionist rather than ideational. This article takes up the discussion of the work in the philosopher Martin Heidegger's philosophical-historical account and joins it with the concept of the work in the modern avant-garde, toward bringing the traditional bibliographic conception of the work into critique and toward illuminating a materialist perspective that may be useful in understanding cultural work-objects as well as texts proper.
  18. Palmer, C.L.: Information work at the boundaries of science : linking library services to research practices (1996) 0.12
    0.119221315 = sum of:
      0.06354813 = product of:
        0.1906444 = sum of:
          0.1906444 = weight(_text_:objects in 7175) [ClassicSimilarity], result of:
            0.1906444 = score(doc=7175,freq=4.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.5813359 = fieldWeight in 7175, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=7175)
        0.33333334 = coord(1/3)
      0.05567318 = product of:
        0.11134636 = sum of:
          0.11134636 = weight(_text_:work in 7175) [ClassicSimilarity], result of:
            0.11134636 = score(doc=7175,freq=6.0), product of:
              0.22646447 = queryWeight, product of:
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.061700378 = queryNorm
              0.49167252 = fieldWeight in 7175, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.6703904 = idf(docFreq=3060, maxDocs=44218)
                0.0546875 = fieldNorm(doc=7175)
        0.5 = coord(1/2)
    
    Abstract
     Examines the information seeking practices and strategies used by interdisciplinary scientists as they work at the 'boundaries' between disciplines. As researchers gather and disseminate information outside their core knowledge domains through personal networks, conferences and the literature, they interact with objects, methods, people and words. Much of their information work is devoted to probing and learning new subject areas, and they often rely on intermediaries to help collect and translate materials from unfamiliar subjects. Libraries that wish to facilitate cross-disciplinary enquiry will need to design information environments that support learning, provide tools that function as 'boundary objects', and offer intermediary services that assist in the transfer and translation of information across scientific communities.
  19. Dick, S.J.: Astronomy's Three Kingdom System : a comprehensive classification system of celestial objects (2019) 0.12
    0.11912905 = sum of:
      0.089870624 = product of:
        0.26961187 = sum of:
          0.26961187 = weight(_text_:objects in 5455) [ClassicSimilarity], result of:
            0.26961187 = score(doc=5455,freq=8.0), product of:
              0.3279419 = queryWeight, product of:
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.061700378 = queryNorm
              0.82213306 = fieldWeight in 5455, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                5.315071 = idf(docFreq=590, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.33333334 = coord(1/3)
      0.029258423 = product of:
        0.058516845 = sum of:
          0.058516845 = weight(_text_:22 in 5455) [ClassicSimilarity], result of:
            0.058516845 = score(doc=5455,freq=2.0), product of:
              0.21606421 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.061700378 = queryNorm
              0.2708308 = fieldWeight in 5455, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5455)
        0.5 = coord(1/2)
    
    Abstract
    Although classification has been an important aspect of astronomy since stellar spectroscopy in the late nineteenth century, to date no comprehensive classification system has existed for all classes of objects in the universe. Here we present such a system, and lay out its foundational definitions and principles. The system consists of the "Three Kingdoms" of planets, stars and galaxies, eighteen families, and eighty-two classes of objects. Gravitation is the defining organizing principle for the families and classes, and the physical nature of the objects is the defining characteristic of the classes. The system should prove useful for both scientific and pedagogical purposes.
    Date
    21.11.2019 18:46:22
  20. Malsburg, C. von der: ¬The correlation theory of brain function (1981) 0.11
    0.11376046 = product of:
      0.22752091 = sum of:
        0.22752091 = product of:
          0.34128135 = sum of:
            0.09628996 = weight(_text_:objects in 76) [ClassicSimilarity], result of:
              0.09628996 = score(doc=76,freq=2.0), product of:
                0.3279419 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.061700378 = queryNorm
                0.29361898 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
            0.2449914 = weight(_text_:3a in 76) [ClassicSimilarity], result of:
              0.2449914 = score(doc=76,freq=2.0), product of:
                0.5230965 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.061700378 = queryNorm
                0.46834838 = fieldWeight in 76, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=76)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Abstract
     A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (as for instance their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces flexibility into cerebral networks, which is necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory.
    Source
     http://cogprints.org/1380/1/vdM_correlation.pdf
