Search (1252 results, page 2 of 63)

  • Active filter: type_ss:"a"
  • Active filter: year_i:[2000 TO 2010}  (Solr range syntax: 2000 inclusive, 2010 exclusive)
  1. Hjoerland, B.: ¬The controversy over the concept of information : a rejoinder to Professor Bates (2009) 0.04
    0.036787182 = product of:
      0.073574364 = sum of:
        0.073574364 = sum of:
          0.058385678 = weight(_text_:e.g in 2748) [ClassicSimilarity], result of:
            0.058385678 = score(doc=2748,freq=6.0), product of:
              0.23393378 = queryWeight, product of:
                5.2168427 = idf(docFreq=651, maxDocs=44218)
                0.044842023 = queryNorm
              0.24958208 = fieldWeight in 2748, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.2168427 = idf(docFreq=651, maxDocs=44218)
                0.01953125 = fieldNorm(doc=2748)
          0.0151886875 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
            0.0151886875 = score(doc=2748,freq=2.0), product of:
              0.15702912 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044842023 = queryNorm
              0.09672529 = fieldWeight in 2748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01953125 = fieldNorm(doc=2748)
      0.5 = coord(1/2)
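The score breakdowns on this page are Lucene ClassicSimilarity (TF-IDF) explain trees. As a minimal sketch, not the engine's code, the leaf and total scores above can be recomputed from the listed factors: tf(freq) = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, with coord(1/2) scaling the sum of the matched-term scores.

```python
import math

# Illustrative recomputation of the explain tree above (Lucene
# ClassicSimilarity); all constants are copied from that output.
def leaf_score(freq, idf, query_norm, field_norm):
    tf = math.sqrt(freq)                  # tf(freq=6.0) = 2.4494898
    query_weight = idf * query_norm       # idf * queryNorm
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

e_g = leaf_score(6.0, 5.2168427, 0.044842023, 0.01953125)  # _text_:e.g
t22 = leaf_score(2.0, 3.5018296, 0.044842023, 0.01953125)  # _text_:22
total = 0.5 * (e_g + t22)                 # coord(1/2) * sum of the leaves
print(e_g, t22, total)  # ~0.05838568, ~0.01518869, ~0.03678718
```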
    
    Content
    "This letter considers some main arguments in Professor Bates' article (2008), which is part of our former debate (Bates, 2005,2006; Hjoerland, 2007). Bates (2008) does not write much to restate or enlarge on her theoretical position but is mostly arguing about what she claims Hjorland (2007) ignored or misinterpreted in her two articles. Bates (2008, p. 842) wrote that my arguments did not reflect "a standard of coherence, consistency, and logic that is expected of an argument presented in a scientific journal." My argumentation below will refute this statement. This controversy is whether information should be understood as a subjective phenomenon (alone), as an objective phenomenon (alone), or as a combined objective and a subjective phenomenon ("having it both ways"). Bates (2006) defined "information" (sometimes, e.g., termed "information 1," p. 1042) as an objective phenomenon and "information 2" as a subjective phenomenon. However, sometimes the term "information" is also used as a synonym for "information 2," e.g., "the term information is understood to refer to one or both senses" (p. 1042). Thus, Professor Bates is not consistent in using the terminology that she herself introduces, and confusion in this controversy may be caused by Professor Bates' ambiguity in her use of the term "information." Bates (2006, p. 1033) defined information as an objective phenomenon by joining a definition by Edwin Parker: "Information is the pattern of organization of matter and energy." The argument in Hjoerland (2007) is, by contrast, that information should be understood as a subjective phenomenon all the way down: That neither the objective definition of information nor "having it both ways" is fruitful. This is expressed, for example, by joining Karpatschof's (2000) definition of information as a physical signal relative to a certain release mechanism, which implies that information is not something objective that can be understood independently of an observer or independently of other kinds of mechanism that are programmed to be sensitive to specific attributes of a signal: There are many differences in the world, and each of them is potentially informative in given situations. Regarding Parker's definition, "patterns of organization of matter and energy" are no more than that until they inform somebody about something. When they inform somebody about something, they may be considered information. The following quote is part of the argumentation in Bates (2008): "He contrasts my definition of information as 'observer-independent' with his position that information is 'situational' and adds a list of respected names on the situational side (Hjoerland, 2007, p. 1448). What this sentence, and much of the remainder of his argument, ignores is the fact that my approach accounts for both an observer-independent and a contextual, situational sense of information." Yes, it is correct that I mostly concentrated on refuting Bates' objective definition of information. It is as if Bates expects an overall appraisal of her work rather than providing a specific analysis of the points on which there are disagreements. I see Bates' "having it both ways": a symptom of inconsistence in argumentation.
Bates (2008, p. 843) further writes about her definition of information: "This is the objectivist foundation, the rock bottom minimum of the meaning of information; it informs both articles throughout." This is exactly the focus of my disagreement. If we take a word in a language, it is understood as both being a "pattern of organization of matter and energy" (e.g., a sound) and carrying meaning. But the relation between the physical sign and its meaning is considered an arbitrary relation in linguistics. Any physical material has the potential to carry any meaning and to inform somebody. The physical stuff in itself is not information until it is used as a sign. An important issue in this debate is whether Bates' examples demonstrate the usefulness of her own position as opposed to mine. Her example about information seeking concerning navigation and how "the very layout of the ship and the design of the bridge promoted the smooth flow of information from the exterior of the ship to the crew and among the crewmembers" (Bates, 2006, pp. 1042-1043) does not justify Bates' definition of information as an objective phenomenon. The design is made for a purpose, and this purpose determines how information should be defined in this context. Bates' view on "curatorial sciences" (2006, p. 1043) is close to Hjoerland's suggestions (2000) about "memory institutions," which are based on the subjective understanding of information. However, she does not address this proposal, and she does not argue how the objective understanding of information is related to this example. I therefore conclude that Bates' practical examples do not support her objective definition of information, nor do they support her "having it both ways." Finally, I exemplify the consequences of my understanding of information by showing how an archaeologist and a geologist might represent the same stone differently in information systems. Bates (2008, p. 843) writes about this example: "This position is completely consistent with mine." However, this "consistency" was not recognized by Bates until I published my objections and, therefore, this is an indication that my criticism was needed. I certainly share Professor Bates' (2008) advice to read her original articles: They contain much important stuff. I just recommend that the reader ignore the parts that argue about information being an objective phenomenon."
    Date
    22. 3.2009 18:13:27
  2. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.04
    0.035610512 = product of:
      0.071221024 = sum of:
        0.071221024 = product of:
          0.2848841 = sum of:
            0.2848841 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.2848841 = score(doc=140,freq=2.0), product of:
                0.38017118 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044842023 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Content
See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  3. Buckland, M.; Shaw, R.: 4W vocabulary mapping across diverse reference genres (2008) 0.04
    0.035031408 = product of:
      0.070062816 = sum of:
        0.070062816 = product of:
          0.14012563 = sum of:
            0.14012563 = weight(_text_:e.g in 2258) [ClassicSimilarity], result of:
              0.14012563 = score(doc=2258,freq=6.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.598997 = fieldWeight in 2258, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2258)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    This paper examines three themes in the design of search support services: linking different genres of reference resources (e.g. bibliographies, biographical dictionaries, catalogs, encyclopedias, place name gazetteers); the division of vocabularies by facet (e.g. What, Where, When, and Who); and mapping between both similar and dissimilar vocabularies. Different vocabularies within a facet can be used in conjunction, e.g. a place name combined with spatial coordinates for Where. In practice, vocabularies of different facets are used in combination in the representation or description of complex topics. Rich opportunities arise from mapping across vocabularies of dissimilar reference genres to recreate the amenities of a reference library. In a network environment, in which vocabulary control cannot be imposed, semantic correspondence across diverse vocabularies is a challenge and an opportunity.
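A toy sketch of the 4W split described above, with invented data rather than the paper's: each facet keeps its own vocabulary, vocabularies within a facet can be used in conjunction (a place name plus spatial coordinates for Where), and a complex topic combines values across facets.

```python
# Invented illustration of a faceted 4W description; not from the paper.
topic = {
    "What":  ["vocabulary mapping", "reference genres"],
    "Where": [{"name": "Berkeley, CA", "lat": 37.8716, "lon": -122.2728}],
    "When":  ["2008"],
    "Who":   ["Buckland, M.", "Shaw, R."],
}

# A complex topic is the conjunction of its facet values:
for facet, values in topic.items():
    print(f"{facet}: {values}")
```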
  4. RAK-NBM : Interpretationshilfe zu NBM 3b,3 (2000) 0.03
    0.034368075 = product of:
      0.06873615 = sum of:
        0.06873615 = product of:
          0.1374723 = sum of:
            0.1374723 = weight(_text_:22 in 4362) [ClassicSimilarity], result of:
              0.1374723 = score(doc=4362,freq=4.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.8754574 = fieldWeight in 4362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4362)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2000 19:22:27
  5. Diederichs, A.: Wissensmanagement ist Macht : Effektiv und kostenbewußt arbeiten im Informationszeitalter (2005) 0.03
    0.034368075 = product of:
      0.06873615 = sum of:
        0.06873615 = product of:
          0.1374723 = sum of:
            0.1374723 = weight(_text_:22 in 3211) [ClassicSimilarity], result of:
              0.1374723 = score(doc=3211,freq=4.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.8754574 = fieldWeight in 3211, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=3211)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2005 9:16:22
  6. Hawking, D.; Robertson, S.: On collection size and retrieval effectiveness (2003) 0.03
    0.034368075 = product of:
      0.06873615 = sum of:
        0.06873615 = product of:
          0.1374723 = sum of:
            0.1374723 = weight(_text_:22 in 4109) [ClassicSimilarity], result of:
              0.1374723 = score(doc=4109,freq=4.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.8754574 = fieldWeight in 4109, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4109)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    14. 8.2005 14:22:22
  7. Khoo, C.S.G.; Wan, K.-W.: ¬A simple relevancy-ranking strategy for an interface to Boolean OPACs (2004) 0.03
    0.03422837 = product of:
      0.06845674 = sum of:
        0.06845674 = sum of:
          0.04719258 = weight(_text_:e.g in 2509) [ClassicSimilarity], result of:
            0.04719258 = score(doc=2509,freq=2.0), product of:
              0.23393378 = queryWeight, product of:
                5.2168427 = idf(docFreq=651, maxDocs=44218)
                0.044842023 = queryNorm
              0.20173478 = fieldWeight in 2509, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2168427 = idf(docFreq=651, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2509)
          0.021264162 = weight(_text_:22 in 2509) [ClassicSimilarity], result of:
            0.021264162 = score(doc=2509,freq=2.0), product of:
              0.15702912 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044842023 = queryNorm
              0.1354154 = fieldWeight in 2509, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2509)
      0.5 = coord(1/2)
    
    Content
    "Most Web search engines accept natural language queries, perform some kind of fuzzy matching and produce ranked output, displaying first the documents that are most likely to be relevant. On the other hand, most library online public access catalogs (OPACs) an the Web are still Boolean retrieval systems that perform exact matching, and require users to express their search requests precisely in a Boolean search language and to refine their search statements to improve the search results. It is well-documented that users have difficulty searching Boolean OPACs effectively (e.g. Borgman, 1996; Ensor, 1992; Wallace, 1993). One approach to making OPACs easier to use is to develop a natural language search interface that acts as a middleware between the user's Web browser and the OPAC system. The search interface can accept a natural language query from the user and reformulate it as a series of Boolean search statements that are then submitted to the OPAC. The records retrieved by the OPAC are ranked by the search interface before forwarding them to the user's Web browser. The user, then, does not need to interact directly with the Boolean OPAC but with the natural language search interface or search intermediary. The search interface interacts with the OPAC system an the user's behalf. The advantage of this approach is that no modification to the OPAC or library system is required. Furthermore, the search interface can access multiple OPACs, acting as a meta search engine, and integrate search results from various OPACs before sending them to the user. The search interface needs to incorporate a method for converting the user's natural language query into a series of Boolean search statements, and for ranking the OPAC records retrieved. The purpose of this study was to develop a relevancyranking algorithm for a search interface to Boolean OPAC systems. This is part of an on-going effort to develop a knowledge-based search interface to OPACs called the E-Referencer (Khoo et al., 1998, 1999; Poo et al., 2000). E-Referencer v. 2 that has been implemented applies a repertoire of initial search strategies and reformulation strategies to retrieve records from OPACs using the Z39.50 protocol, and also assists users in mapping query keywords to the Library of Congress subject headings."
    Source
    Electronic library. 22(2004) no.2, S.112-120
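A minimal sketch of the intermediary pipeline described in the record above; this is illustrative, not the E-Referencer code, and the stopword list and overlap scoring are assumptions.

```python
# Hypothetical middleware steps: reformulate a natural language query as
# Boolean search statements of decreasing strictness, then rank the
# records the OPAC returns by keyword overlap with the query.
STOPWORDS = {"a", "an", "the", "for", "of", "to", "in", "and"}

def keywords(query):
    return [t for t in query.lower().split() if t not in STOPWORDS]

def boolean_statements(query):
    terms = keywords(query)
    return [" AND ".join(terms),  # initial strategy: all keywords
            " OR ".join(terms)]   # reformulation: any keyword

def rank(records, query):
    terms = keywords(query)
    return sorted(records,
                  key=lambda rec: sum(t in rec.lower() for t in terms),
                  reverse=True)

print(boolean_statements("ranking strategy for Boolean OPACs"))
```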
  8. Egghe, L.: Properties of the n-overlap vector and n-overlap similarity theory (2006) 0.03
    0.03370899 = product of:
      0.06741798 = sum of:
        0.06741798 = product of:
          0.13483596 = sum of:
            0.13483596 = weight(_text_:e.g in 194) [ClassicSimilarity], result of:
              0.13483596 = score(doc=194,freq=8.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.57638514 = fieldWeight in 194, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=194)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
In the first part of this article the author defines the n-overlap vector whose coordinates consist of the fraction of the objects (e.g., books, N-grams, etc.) that belong to 1, 2, ..., n sets (more generally: families) (e.g., libraries, databases, etc.). With the aid of the Lorenz concentration theory, a theory of n-overlap similarity is conceived together with corresponding measures, such as the generalized Jaccard index (generalizing the well-known Jaccard index in case n = 2). Next, the distributional form of the n-overlap vector is determined assuming certain distributions of the object's and of the set (family) sizes. In this section the decreasing power law and decreasing exponential distribution are explained for the n-overlap vector. Both item (token) n-overlap and source (type) n-overlap are studied. The n-overlap properties of objects indexed by a hierarchical system (e.g., books indexed by numbers from a UDC or Dewey system or by N-grams) are presented in the final section. The author shows how the results given in the previous section can be applied as well as how the Lorenz order of the n-overlap vector is respected by an increase or a decrease of the level of refinement in the hierarchical system (e.g., the value N in N-grams).
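A small worked example of the n-overlap vector just defined, with invented data: for n = 3 sets, coordinate k is the fraction of all objects belonging to exactly k of the sets.

```python
from collections import Counter

# Three toy "libraries" holding books b1..b5 (invented data).
libraries = [{"b1", "b2", "b3"}, {"b2", "b3", "b4"}, {"b3", "b5"}]

objects = set().union(*libraries)
in_k_sets = Counter(sum(obj in lib for lib in libraries) for obj in objects)
n = len(libraries)
overlap_vector = [in_k_sets[k] / len(objects) for k in range(1, n + 1)]
print(overlap_vector)  # [0.6, 0.2, 0.2]: b1,b4,b5 in one set; b2 in two; b3 in all three
```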
  9. Given, L.M.; Ruecker, S.; Simpson, H.; Sadler, E.; Ruskin, A.: Inclusive interface design for seniors : Image-browsing for a health information context (2007) 0.03
    0.033370197 = product of:
      0.06674039 = sum of:
        0.06674039 = product of:
          0.13348079 = sum of:
            0.13348079 = weight(_text_:e.g in 579) [ClassicSimilarity], result of:
              0.13348079 = score(doc=579,freq=4.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.57059216 = fieldWeight in 579, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=579)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study explores an image-based retrieval interface for drug information, focusing on usability for a specific population - seniors. Qualitative, task-based interviews examined participants' health information behaviors and documented search strategies using an existing database (www.drugs.com) and a new prototype that uses similarity-based clustering of pill images for retrieval. Twelve participants (aged 65 and older), reflecting a diversity of backgrounds and experience with Web-based resources, located pill information using the interfaces and discussed navigational and other search preferences. Findings point to design features (e.g., image enlargement) that meet seniors' needs in the context of other health-related information-seeking strategies (e.g., contacting pharmacists).
  10. Nottelmann, H.; Straccia, U.: Information retrieval and machine learning for probabilistic schema matching (2007) 0.03
    0.033370197 = product of:
      0.06674039 = sum of:
        0.06674039 = product of:
          0.13348079 = sum of:
            0.13348079 = weight(_text_:e.g in 911) [ClassicSimilarity], result of:
              0.13348079 = score(doc=911,freq=4.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.57059216 = fieldWeight in 911, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=911)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Schema matching is the problem of finding correspondences (mapping rules, e.g. logical formulae) between heterogeneous schemas, e.g. in the data exchange domain, or for distributed IR in federated digital libraries. This paper introduces a probabilistic framework, called sPLMap, for automatically learning schema mapping rules, based on given instances of both schemas. Different techniques, mostly from the IR and machine learning fields, are combined for finding suitable mapping candidates. Our approach gives a probabilistic interpretation of the prediction weights of the candidates, selects the rule set with the highest matching probability, and outputs probabilistic rules which are capable of dealing with the intrinsic uncertainty of the mapping process. Our approach, with different variants, has been evaluated on several test sets.
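As a heavily simplified sketch of the selection step described above (not the sPLMap implementation; the rules and probabilities are invented), each candidate mapping rule carries a learned matching probability and, per source attribute, the most probable target is kept.

```python
# Invented candidate rules "source -> target" with learned probabilities.
candidates = [
    ("author -> dc:creator", 0.92),
    ("author -> dc:contributor", 0.31),
    ("title -> dc:title", 0.97),
]

# Keep, for each source attribute, the most probable mapping rule.
best = {}
for rule, p in candidates:
    source, target = (s.strip() for s in rule.split("->"))
    if source not in best or p > best[source][1]:
        best[source] = (target, p)

for source, (target, p) in best.items():
    print(f"{source} -> {target}  (P = {p})")
```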
  11. Bache, R.; Baillie, M.; Crestani, F.: Measuring the likelihood property of scoring functions in general retrieval models (2009) 0.03
    0.033370197 = product of:
      0.06674039 = sum of:
        0.06674039 = product of:
          0.13348079 = sum of:
            0.13348079 = weight(_text_:e.g in 2860) [ClassicSimilarity], result of:
              0.13348079 = score(doc=2860,freq=4.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.57059216 = fieldWeight in 2860, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2860)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Although retrieval systems based on probabilistic models will rank the objects (e.g., documents) being retrieved according to the probability of some matching criterion (e.g., relevance), they rarely yield an actual probability, and the scoring function is interpreted to be purely ordinal within a given retrieval task. In this brief communication, it is shown that some scoring functions possess the likelihood property, which means that the scoring function indicates the likelihood of matching when compared to other retrieval tasks, which is potentially more useful than pure ranking although it cannot be interpreted as an actual probability. This property can be detected by using two modified effectiveness measures: entire precision and entire recall.
  12. Buzydlowski, J.W.; White, H.D.; Lin, X.: Term Co-occurrence Analysis as an Interface for Digital Libraries (2002) 0.03
    0.031569093 = product of:
      0.06313819 = sum of:
        0.06313819 = product of:
          0.12627637 = sum of:
            0.12627637 = weight(_text_:22 in 1339) [ClassicSimilarity], result of:
              0.12627637 = score(doc=1339,freq=6.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.804159 = fieldWeight in 1339, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1339)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2003 17:25:39
    22. 2.2003 18:16:22
  13. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.03
    0.031159198 = product of:
      0.062318396 = sum of:
        0.062318396 = product of:
          0.24927358 = sum of:
            0.24927358 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.24927358 = score(doc=306,freq=2.0), product of:
                0.38017118 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.044842023 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Content
See: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  14. Pesch, K.: ¬Eine gigantische Informationsfülle : "Brockhaus multimedial 2004" kann jedoch nicht rundum überzeugen (2003) 0.03
    0.030072069 = product of:
      0.060144138 = sum of:
        0.060144138 = product of:
          0.120288275 = sum of:
            0.120288275 = weight(_text_:22 in 502) [ClassicSimilarity], result of:
              0.120288275 = score(doc=502,freq=4.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.76602525 = fieldWeight in 502, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=502)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3. 5.1997 8:44:22
    22. 9.2003 10:02:00
  15. Hemminger, B.M.: Introduction to the special issue on bioinformatics (2005) 0.03
    0.030072069 = product of:
      0.060144138 = sum of:
        0.060144138 = product of:
          0.120288275 = sum of:
            0.120288275 = weight(_text_:22 in 4189) [ClassicSimilarity], result of:
              0.120288275 = score(doc=4189,freq=4.0), product of:
                0.15702912 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044842023 = queryNorm
                0.76602525 = fieldWeight in 4189, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4189)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 14:19:22
  16. Solomon, P.: Exploring structuration in knowledge organization : implications for managing the tension between stability and dynamism (2000) 0.03
    0.029192839 = product of:
      0.058385678 = sum of:
        0.058385678 = product of:
          0.116771355 = sum of:
            0.116771355 = weight(_text_:e.g in 148) [ClassicSimilarity], result of:
              0.116771355 = score(doc=148,freq=6.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.49916416 = fieldWeight in 148, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=148)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
This paper builds on numerous suggestions of the need for a theoretical basis for knowledge organization from the point of view of interest, concern, or problem (e.g., domain, ecology, use environment, or language game). This is accomplished by first developing a possible theoretical understanding of why knowledge organization schemes tend toward stability through structuration and autopoiesis. In understanding this tendency, the possibility of promoting (desirable) change is also considered through activity. Second, the paper considers the requirements for the contextualization provided by such mappings. Finally, the case of the Internet is briefly explored. All of this provides a recipe for a theory-for-practice 'stew,' which would highlight the possibility that just as structures (e.g., classification schemes) enable actions (e.g., information retrieval, knowledge transfer), actions enable structures. For this theoretical stew to influence practice, rules and resources-the structures of a knowledge organization scheme or system-must support both self-reflection and needs for consistency and adaptability. The virtuality of the developing electronic information world suggests the possibility of both coexisting through, for instance, mappings or crosswalks
  17. Egghe, L.: Type/Token-Taken informetrics (2003) 0.03
    0.029192839 = product of:
      0.058385678 = sum of:
        0.058385678 = product of:
          0.116771355 = sum of:
            0.116771355 = weight(_text_:e.g in 1608) [ClassicSimilarity], result of:
              0.116771355 = score(doc=1608,freq=6.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.49916416 = fieldWeight in 1608, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1608)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects that are produced by the sources (e.g., journals producing articles, authors producing papers, etc.). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of words in texts). In informetrics, types that occur often, for example, in a database will also be requested often, for example, in information retrieval. The relative use of these occurrences will be higher than their relative occurrences themselves; hence, the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from the one of Type/Token informetrics (i.e., source-item relationships). We also study the average number μ* of item uses in Type/Token-Taken informetrics and compare this with the classical average number μ in Type/Token informetrics. We show that μ* >= μ always, and that μ* is an increasing function of μ. A method is presented to actually calculate μ* from μ and a given α, which is the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
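A numeric illustration of μ* >= μ under a size-biased reading assumed here (a type occurring n times is also requested proportionally to n); this assumption is this note's, not necessarily the article's exact formalism.

```python
# Toy Lotka-like source-item distribution f(n) ~ 1/n^2 (invented data).
f = {n: 1.0 / n**2 for n in range(1, 101)}

sources = sum(f.values())                  # total number of sources
items = sum(n * fn for n, fn in f.items())  # total number of items
mu = items / sources                        # classical average mu
mu_star = sum(n * n * fn for n, fn in f.items()) / items  # use-weighted average
print(mu, mu_star)  # mu* (~19.3) exceeds mu (~3.2), as the article proves in general
```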
  18. McKechnie, L.(E.F.); Pettigrew, K.E.: Surveying the use of theory in library and information science research : a disciplinary perspective (2002) 0.03
    0.029192839 = product of:
      0.058385678 = sum of:
        0.058385678 = product of:
          0.116771355 = sum of:
            0.116771355 = weight(_text_:e.g in 815) [ClassicSimilarity], result of:
              0.116771355 = score(doc=815,freq=6.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.49916416 = fieldWeight in 815, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=815)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A content analysis of 1,160 Library and Information Science (LIS) articles published in six LIS journals between 1993 and 1998 was conducted to examine the use of theory in LIS research. Overall, 34.2 percent of articles incorporated theory in either the title, abstract, or text for a total of 1,083 theory incidents or an average of .93 incidents per article. Articles dealing with topics from the humanities (e.g., information policy, history) had the highest rate of theory use with 1.81 incidents per article, followed by social science papers (e.g., information behavior, management) with .98 incidents per article and science articles (e.g., bibliometrics, information retrieval) with .75 theory incidents per article. These findings imply that differences exist in the use of theory in LIS that are associated with the broad disciplinary content of the research. These differences may arise from variant conceptions of and approaches to the use of theory in the research traditions of the humanities, social sciences, and sciences. It is suggested that the multidisciplinary background of LIS researchers provides a rich but still under-utilized opportunity for the use and development of theory within LIS.
  19. Egghe, L.: Relations between the continuous and the discrete Lotka power function (2005) 0.03
    0.028603025 = product of:
      0.05720605 = sum of:
        0.05720605 = product of:
          0.1144121 = sum of:
            0.1144121 = weight(_text_:e.g in 3464) [ClassicSimilarity], result of:
              0.1144121 = score(doc=3464,freq=4.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.489079 = fieldWeight in 3464, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
The discrete Lotka power function describes the number of sources (e.g., authors) with n = 1, 2, 3, ... items (e.g., publications). As in econometrics, informetrics theory requires functions of a continuous variable j, replacing the discrete variable n. Now j represents item densities instead of numbers of items. The continuous Lotka power function describes the density of sources with item density j. The discrete Lotka function is the one obtained empirically from data; the continuous Lotka function is the one needed when one wants to apply Lotkaian informetrics, i.e., to determine properties that can be derived from the (continuous) model. It is, hence, important to know the relations between the two models. We show that the exponents of the discrete Lotka function (if not too high, i.e., within limits encountered in practice) and of the continuous Lotka function are approximately the same. This is important to know in applying theoretical results (from the continuous model), derived from practical data.
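In generic Lotkaian notation (constants and exponents assumed here, not quoted from the article), the two functions being related are:

```latex
% Discrete Lotka power function: number of sources with exactly n items
f(n) = \frac{C}{n^{\alpha}}, \qquad n = 1, 2, 3, \ldots
% Continuous Lotka power function: density of sources with item density j
\varphi(j) = \frac{C'}{j^{\alpha'}}, \qquad j \ge 1
```

The abstract's result is then that α' is approximately equal to α as long as the exponents stay within the ranges encountered in practice.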
  20. Mukhopadhyay, S.; Peng, S.; Raje, R.; Mostafa, J.; Palakal, M.: Distributed multi-agent information filtering : a comparative study (2005) 0.03
    0.028603025 = product of:
      0.05720605 = sum of:
        0.05720605 = product of:
          0.1144121 = sum of:
            0.1144121 = weight(_text_:e.g in 3559) [ClassicSimilarity], result of:
              0.1144121 = score(doc=3559,freq=4.0), product of:
                0.23393378 = queryWeight, product of:
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.044842023 = queryNorm
                0.489079 = fieldWeight in 3559, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2168427 = idf(docFreq=651, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3559)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Information filtering is a technique to identify, in large collections, information that is relevant according to some criteria (e.g., a user's personal interests, or a research project objective). As such, it is a key technology for providing efficient user services in any large-scale information infrastructure, e.g., digital libraries. To provide large-scale information filtering services, both computational and knowledge management issues need to be addressed. A centralized (single-agent) approach to information filtering suffers from serious drawbacks in terms of speed, accuracy, and economic considerations, and becomes unrealistic even for medium-scale applications. In this article, we discuss two distributed (multiagent) information filtering approaches, that are distributed with respect to knowledge or functionality, to overcome the limitations of single-agent centralized information filtering. Large-scale experimental studies involving the well-known TREC data set are also presented to illustrate the advantages of distributed filtering as well as to compare the different distributed approaches.
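A minimal single-agent sketch of the filtering step described above (profile, documents, and threshold are invented); a distributed variant would partition either the profiles (knowledge) or the pipeline stages (functionality) across agents.

```python
# Score each incoming document against an interest profile and keep the
# ones above a relevance threshold (Jaccard overlap; illustrative only).
def relevance(doc_terms, profile):
    return len(doc_terms & profile) / len(doc_terms | profile)

profile = {"digital", "libraries", "filtering"}
stream = [{"digital", "libraries", "access"}, {"protein", "folding"}]

kept = [d for d in stream if relevance(d, profile) > 0.2]
print(kept)  # only the first document passes (overlap 2/4 = 0.5)
```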
