Search (2213 results, page 1 of 111)

  • Filter: language_ss:"e"
  1. Fensel, D.: Ontologies : a silver bullet for knowledge management and electronic commerce (2004) 0.12
    0.12052073 = product of:
      0.30130184 = sum of:
        0.25483155 = weight(_text_:theorem in 1949) [ClassicSimilarity], result of:
          0.25483155 = score(doc=1949,freq=4.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.6406546 = fieldWeight in 1949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1949)
        0.046470284 = weight(_text_:22 in 1949) [ClassicSimilarity], result of:
          0.046470284 = score(doc=1949,freq=4.0), product of:
            0.16985968 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04850598 = queryNorm
            0.27358043 = fieldWeight in 1949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1949)
      0.4 = coord(2/5)
    
    DDC
    004.67/8 22
    RVK
    ST 304 Informatik / Monographien / Künstliche Intelligenz / Automatisches Programmieren, Deduction and theorem proving, Wissensrepräsentation
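    The score breakdowns in this result list follow Lucene's ClassicSimilarity (TF-IDF) arithmetic: per matching term, tf = sqrt(freq), queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm; the term score is queryWeight * fieldWeight, the term scores are summed, and the sum is multiplied by the coordination factor coord(matched/total). A minimal Python sketch that recomputes the explain tree of result 1 above (function and parameter names are illustrative, not Lucene's API):

      import math

      def classic_similarity(terms, coord_matched, coord_total):
          """Recompute a Lucene ClassicSimilarity score from its explain components."""
          total = 0.0
          for t in terms:
              tf = math.sqrt(t["freq"])                        # tf(freq) = sqrt(freq)
              query_weight = t["idf"] * t["query_norm"]        # queryWeight = idf * queryNorm
              field_weight = tf * t["idf"] * t["field_norm"]   # fieldWeight = tf * idf * fieldNorm
              total += query_weight * field_weight             # weight(term in doc)
          return total * (coord_matched / coord_total)         # coord factor, e.g. coord(2/5) = 0.4

      # Values taken from the explain output of result 1 (doc 1949):
      score = classic_similarity(
          [
              {"idf": 8.200379, "query_norm": 0.04850598, "freq": 4.0, "field_norm": 0.0390625},  # "theorem"
              {"idf": 3.5018296, "query_norm": 0.04850598, "freq": 4.0, "field_norm": 0.0390625},  # "22"
          ],
          coord_matched=2, coord_total=5,
      )
      print(score)  # ~0.1205, matching the 0.12052073 shown above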
  2. Chaitin, G.J.: Gödel's theorem and information (1982) 0.12
    0.1153236 = product of:
      0.576618 = sum of:
        0.576618 = weight(_text_:theorem in 2448) [ClassicSimilarity], result of:
          0.576618 = score(doc=2448,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            1.449636 = fieldWeight in 2448, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.125 = fieldNorm(doc=2448)
      0.2 = coord(1/5)
    
  3. Raan, A.F.J. van: Statistical properties of bibliometric indicators : research group indicator distributions and correlations (2006) 0.11
    0.10879844 = product of:
      0.27199608 = sum of:
        0.21623175 = weight(_text_:theorem in 5275) [ClassicSimilarity], result of:
          0.21623175 = score(doc=5275,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.5436135 = fieldWeight in 5275, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.046875 = fieldNorm(doc=5275)
        0.055764344 = weight(_text_:22 in 5275) [ClassicSimilarity], result of:
          0.055764344 = score(doc=5275,freq=4.0), product of:
            0.16985968 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04850598 = queryNorm
            0.32829654 = fieldWeight in 5275, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=5275)
      0.4 = coord(2/5)
    
    Abstract
    In this article we present an empirical approach to the study of the statistical properties of bibliometric indicators on a very relevant but not simply available aggregation level: the research group. We focus on the distribution functions of a coherent set of indicators that are used frequently in the analysis of research performance. In this sense, the coherent set of indicators acts as a measuring instrument. Better insight into the statistical properties of a measuring instrument is necessary to enable assessment of the instrument itself. The most basic distribution in bibliometric analysis is the distribution of citations over publications, and this distribution is very skewed. Nevertheless, we clearly observe the working of the central limit theorem and find that at the level of research groups the distribution functions of the main indicators, particularly the journal-normalized and the field-normalized indicators, approach normal distributions. The results of our study underline the importance of the idea of group oeuvre, that is, the role of sets of related publications as a unit of analysis.
    Date
    22. 7.2006 16:20:22
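    The result above rests on the central limit theorem: citation counts per publication are highly skewed, yet averaging over a research group's publications drives group-level indicators towards a normal distribution. A minimal simulation sketch of that effect (the lognormal parameters and group sizes below are illustrative assumptions, not values from the paper):

      import numpy as np

      rng = np.random.default_rng(0)

      def citations(n_publications):
          """Skewed 'citations per publication' sample (illustrative lognormal model)."""
          return rng.lognormal(mean=1.0, sigma=1.2, size=n_publications)

      def skewness(x):
          x = np.asarray(x, dtype=float)
          return float(np.mean(((x - x.mean()) / x.std()) ** 3))

      # Publication level: strongly skewed distribution.
      print("publication-level skewness:", round(skewness(citations(100_000)), 2))

      # Group level: mean citation rate of 2000 groups of 150 publications each.
      group_means = np.array([citations(150).mean() for _ in range(2000)])
      print("group-level skewness:", round(skewness(group_means), 2))  # much smaller, approaching normality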
  4. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.10822096 = product of:
      0.2705524 = sum of:
        0.23112105 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
          0.23112105 = score(doc=562,freq=2.0), product of:
            0.41123423 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04850598 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.03943134 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.03943134 = score(doc=562,freq=2.0), product of:
            0.16985968 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04850598 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  5. Akerele, O.; David, A.; Osofisan, A.: Using the concepts of Case Based Reasoning and Basic Categories for enhancing adaptation to the user's level of knowledge in Decision Support System (2014) 0.10
    0.10226524 = product of:
      0.2556631 = sum of:
        0.21623175 = weight(_text_:theorem in 1449) [ClassicSimilarity], result of:
          0.21623175 = score(doc=1449,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.5436135 = fieldWeight in 1449, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.046875 = fieldNorm(doc=1449)
        0.03943134 = weight(_text_:22 in 1449) [ClassicSimilarity], result of:
          0.03943134 = score(doc=1449,freq=2.0), product of:
            0.16985968 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04850598 = queryNorm
            0.23214069 = fieldWeight in 1449, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1449)
      0.4 = coord(2/5)
    
    Abstract
    In most search systems, mapping queries to documents employs techniques such as the vector space model, naïve Bayes, Bayes' theorem, etc. to classify the resulting documents. In this research study, we propose using the concept of basic categories to represent the user's level of knowledge based on the concepts he employed during his search activities, so that the system can propose adapted results based on the observed user's level of knowledge. Our hypothesis is that this approach will enhance the decision support system for solving decisional problems in which information retrieval constitutes the backbone technical problem.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
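    The abstract above mentions classifying retrieved documents with the vector space model or naïve Bayes, i.e. scoring classes c for a document d by Bayes' theorem, P(c|d) proportional to P(c) times the product over terms t of P(t|c). A minimal multinomial naïve Bayes sketch (the training data, class labels, and tokens are purely illustrative):

      from collections import Counter
      import math

      # Toy training data: (tokens, class) pairs; classes and tokens are illustrative.
      train = [
          (["ranking", "query", "retrieval"], "IR"),
          (["index", "query", "document"], "IR"),
          (["neuron", "training", "gradient"], "ML"),
          (["gradient", "loss", "training"], "ML"),
      ]

      classes = {c for _, c in train}
      prior = {c: sum(1 for _, y in train if y == c) / len(train) for c in classes}
      counts = {c: Counter(t for toks, y in train if y == c for t in toks) for c in classes}
      vocab = {t for toks, _ in train for t in toks}

      def log_posterior(tokens, c):
          """log P(c) + sum_t log P(t | c), with add-one (Laplace) smoothing."""
          total = sum(counts[c].values())
          lp = math.log(prior[c])
          for t in tokens:
              lp += math.log((counts[c][t] + 1) / (total + len(vocab)))
          return lp

      new_doc = ["query", "ranking", "document"]
      print(max(classes, key=lambda c: log_posterior(new_doc, c)))  # -> "IR"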
  6. Falkowski, B.-J.: On certain generalizations of inner product similarity measures (1998) 0.10
    0.09987318 = product of:
      0.49936587 = sum of:
        0.49936587 = weight(_text_:theorem in 3555) [ClassicSimilarity], result of:
          0.49936587 = score(doc=3555,freq=6.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            1.2554216 = fieldWeight in 3555, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0625 = fieldNorm(doc=3555)
      0.2 = coord(1/5)
    
    Abstract
    Introduces linear similarity measures and defines an acceptable ranking function. Proves the existence theorem for acceptable ranking functions in the case of a linear measure. Defines an asymptotic inner product measure. Obtains an existence theorem for acceptable ranking functions. Exhibits contour sets for large query vectors in 2 cases of interest. Includes a brief proof of the separating hyperplane theorem.
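    The abstract above concerns ranking documents by inner product similarity measures. A minimal sketch of plain inner-product and length-normalized (cosine) ranking over sparse term-weight vectors (the query, documents, and weights are illustrative):

      import math

      def inner_product(q, d):
          """Inner-product similarity over sparse term-weight dictionaries."""
          return sum(w * d.get(t, 0.0) for t, w in q.items())

      def cosine(q, d):
          """Length-normalized inner product (cosine similarity)."""
          norm = math.sqrt(sum(w * w for w in q.values())) * math.sqrt(sum(w * w for w in d.values()))
          return inner_product(q, d) / norm if norm else 0.0

      query = {"theorem": 0.8, "ranking": 0.6}
      docs = {
          "doc_a": {"theorem": 0.9, "proof": 0.4},
          "doc_b": {"ranking": 0.7, "retrieval": 0.7},
      }
      print(sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True))  # ['doc_a', 'doc_b']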
  7. Egghe, L.: ¬A new short proof of Naranan's theorem, explaining Lotka's law and Zipf's law (2010) 0.09
    0.08738903 = product of:
      0.43694514 = sum of:
        0.43694514 = weight(_text_:theorem in 3432) [ClassicSimilarity], result of:
          0.43694514 = score(doc=3432,freq=6.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            1.0984939 = fieldWeight in 3432, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3432)
      0.2 = coord(1/5)
    
    Abstract
    Naranan's important theorem, published in Nature in 1970, states that if the number of journals grows exponentially and if the number of articles in each journal grows exponentially (at the same rate for each journal), then the system satisfies Lotka's law, and a formula for Lotka's exponent is given as a function of the growth rates of the journals and the articles. This brief communication re-proves this result by showing that the system satisfies Zipf's law, which is equivalent to Lotka's law. The proof is short and algebraic and does not use infinitesimal arguments.
    Object
    Naranan-Theorem
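    The argument summarized in the abstract can be written out in a few lines. The notation below is a hedged reconstruction of the standard derivation, not taken from Egghe's paper:

      \[
      J(t) = J_0 e^{a t} \quad \text{(journals founded up to time } t\text{)}, \qquad
      n(s) = n_0 e^{b s} \quad \text{(articles in a journal of age } s\text{)}.
      \]
      \[
      P(\text{age} > s) \propto e^{-a s}
      \;\Longrightarrow\;
      P(N > n) \propto e^{-\frac{a}{b}\ln(n/n_0)} = \left(\frac{n}{n_0}\right)^{-a/b},
      \]
      \[
      \text{so the size density is } f(n) \propto n^{-(1 + a/b)},
      \text{ i.e. Lotka's law with exponent } \alpha = 1 + \tfrac{a}{b}.
      \]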
  8. Octavio, A.: ¬The '¬indexed' theorem (1996) 0.09
    0.0864927 = product of:
      0.4324635 = sum of:
        0.4324635 = weight(_text_:theorem in 377) [ClassicSimilarity], result of:
          0.4324635 = score(doc=377,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            1.087227 = fieldWeight in 377, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.09375 = fieldNorm(doc=377)
      0.2 = coord(1/5)
    
  9. Velleman, D.J.: Fermat's last theorem and Hilbert's program (1997) 0.08
    0.0815461 = product of:
      0.4077305 = sum of:
        0.4077305 = weight(_text_:theorem in 6443) [ClassicSimilarity], result of:
          0.4077305 = score(doc=6443,freq=4.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            1.0250474 = fieldWeight in 6443, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0625 = fieldNorm(doc=6443)
      0.2 = coord(1/5)
    
    Abstract
    Most mathematicians are aware of the controversies in the foundations of mathematics that occurred earlier in this century: the debate between the logicists and the intuitionists, and Hilbert's proposal of a program that he hoped would resolve the issue. But the controversies have died down since then, and few mathematicians worry about these issues anymore. The recent excitement over the proof of Fermat's Last Theorem provides an opportunity to reexamine these issues, because Wiles's proof is an excellent example of precisely the kind of mathematics that Hilbert hoped to justify with his program.
  10. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.08
    0.07704036 = product of:
      0.38520178 = sum of:
        0.38520178 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
          0.38520178 = score(doc=1826,freq=2.0), product of:
            0.41123423 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04850598 = queryNorm
            0.93669677 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.2 = coord(1/5)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  11. Bruss, F.T.: 250 years of "An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, F.R.S. communicated by Mr. Price, in a letter to John Canton, A.M.F.R.S." (2014) 0.07
    0.07207725 = product of:
      0.36038625 = sum of:
        0.36038625 = weight(_text_:theorem in 808) [ClassicSimilarity], result of:
          0.36038625 = score(doc=808,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.9060225 = fieldWeight in 808, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.078125 = fieldNorm(doc=808)
      0.2 = coord(1/5)
    
    Abstract
    A historical contribution on Bayes' theorem for conditional probabilities.
  12. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.06
    0.061632283 = product of:
      0.3081614 = sum of:
        0.3081614 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
          0.3081614 = score(doc=230,freq=2.0), product of:
            0.41123423 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04850598 = queryNorm
            0.7493574 = fieldWeight in 230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=230)
      0.2 = coord(1/5)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  13. Duff, A.: ¬The Rawls-Tawney theorem and the digital divide in postindustrial society (2011) 0.06
    0.061159577 = product of:
      0.30579787 = sum of:
        0.30579787 = weight(_text_:theorem in 4352) [ClassicSimilarity], result of:
          0.30579787 = score(doc=4352,freq=4.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.7687856 = fieldWeight in 4352, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.046875 = fieldNorm(doc=4352)
      0.2 = coord(1/5)
    
    Abstract
    The digital divide continues to challenge political and academic circles worldwide. A range of policy solutions is briefly evaluated, from laissez-faire on the right to "arithmetic" egalitarianism on the left. The article recasts the digital divide as a problem for the social distribution of presumptively important information (e.g., electoral data, news, science) within postindustrial society. Endorsing in general terms the left-liberal approach of differential or "geometric" egalitarianism, it seeks to invest this with greater precision, and therefore utility, by means of a possibly original synthesis of the ideas of John Rawls and R. H. Tawney. It is argued that, once certain categories of information are accorded the status of "primary goods," their distribution must then comply with principles of justice as articulated by those major 20th century exponents of ethical social democracy. The resultant Rawls-Tawney theorem, if valid, might augment the portfolio of options for interventionist information policy in the 21st century.
  14. Zadeh, L.A.: Fuzzy sets (1965) 0.06
    0.0576618 = product of:
      0.288309 = sum of:
        0.288309 = weight(_text_:theorem in 5460) [ClassicSimilarity], result of:
          0.288309 = score(doc=5460,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.724818 = fieldWeight in 5460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0625 = fieldNorm(doc=5460)
      0.2 = coord(1/5)
    
    Abstract
    A fuzzy set is a class of objects with a continuum of grades of membership. Such a set is characterized by a membership (characteristic) function which assigns to each object a grade of membership ranging between zero and one. The notions of inclusion, union, intersection, complement, relation, convexity, etc., are extended to such sets, and various properties of these notions in the context of fuzzy sets are established. In particular, a separation theorem for convex fuzzy sets is proved without requiring that the fuzzy sets be disjoint.
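    Zadeh's definitions above (membership grades in [0, 1], with union, intersection, and complement extended as pointwise max, min, and one minus the grade) translate directly into code. A minimal sketch over finite universes (the example sets and membership grades are illustrative):

      def f_union(a, b):
          """Union of fuzzy sets: pointwise max of membership grades."""
          return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

      def f_intersection(a, b):
          """Intersection of fuzzy sets: pointwise min of membership grades."""
          return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

      def f_complement(a):
          """Complement of a fuzzy set: one minus the membership grade."""
          return {x: 1.0 - mu for x, mu in a.items()}

      tall = {"anna": 0.9, "ben": 0.4, "cara": 0.1}
      fast = {"anna": 0.3, "ben": 0.8, "cara": 0.5}
      print(f_union(tall, fast))         # anna -> 0.9, ben -> 0.8, cara -> 0.5
      print(f_intersection(tall, fast))  # anna -> 0.3, ben -> 0.4, cara -> 0.1
      print(f_complement(tall))          # anna -> ~0.1, ben -> 0.6, cara -> 0.9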
  15. Parkes, A.P.: ¬A study of problem solving activities in hypermedia representation (1994) 0.06
    0.0576618 = product of:
      0.288309 = sum of:
        0.288309 = weight(_text_:theorem in 765) [ClassicSimilarity], result of:
          0.288309 = score(doc=765,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.724818 = fieldWeight in 765, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0625 = fieldNorm(doc=765)
      0.2 = coord(1/5)
    
    Abstract
    Presents a study of problem solving activities in a hypermedia representation of a theorem proving problem. The users interacted with a system called HUGH&ME, which presented users with 2 representations simultaneously. The representations were linked such that any operations carried out on one were reflected in the other. Describes a quantitative analysis of user activities, and a qualitative analysis of users' responses to questions asked during the session. Discusses the need for hypermedia-based tools to support expression and refinement of users' reasoning when engaged in hypermedia-based problem solving activities.
  16. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.05
    0.053928252 = product of:
      0.26964125 = sum of:
        0.26964125 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
          0.26964125 = score(doc=306,freq=2.0), product of:
            0.41123423 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04850598 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.2 = coord(1/5)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  17. Sautoy, M. du: What we cannot know (2016) 0.05
    0.05113262 = product of:
      0.12783155 = sum of:
        0.108115874 = weight(_text_:theorem in 3034) [ClassicSimilarity], result of:
          0.108115874 = score(doc=3034,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.27180675 = fieldWeight in 3034, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3034)
        0.01971567 = weight(_text_:22 in 3034) [ClassicSimilarity], result of:
          0.01971567 = score(doc=3034,freq=2.0), product of:
            0.16985968 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04850598 = queryNorm
            0.116070345 = fieldWeight in 3034, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3034)
      0.4 = coord(2/5)
    
    Date
    22. 6.2016 16:08:54
    Footnote
    Review in: Economist, 18.06.2016 [http://www.economist.com/news/books-and-arts/21700611-circle-circle]: "Everyone by nature desires to know," wrote Aristotle more than 2,000 years ago. But are there limits to what human beings can know? This is the question that Marcus du Sautoy, the British mathematician who succeeded Richard Dawkins as the Simonyi professor for the public understanding of science at Oxford University, explores in "What We Cannot Know", his fascinating book on the limits of scientific knowledge. As Mr du Sautoy argues, this is a golden age of scientific knowledge. Remarkable achievements stretch across the sciences, from the Large Hadron Collider and the sequencing of the human genome to the proof of Fermat's Last Theorem. And the rate of progress is accelerating: the number of scientific publications has doubled every nine years since the second world war. But even bigger challenges await. Can cancer be cured? Ageing beaten? Is there a "Theory of Everything" that will include all of physics? Can we know it all? One limit to people's knowledge is practical. In theory, if you throw a die, Newton's laws of motion make it possible to predict what number will come up. But the calculations are too long to be practicable. What is more, many natural systems, such as the weather, are "chaotic" or sensitive to small changes: a tiny nudge now can lead to vastly different behaviour later. Since people cannot measure with complete accuracy, they can't forecast far into the future. The problem was memorably articulated by Edward Lorenz, an American scientist, in 1972 in a famous paper called "Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?"
  18. Cole, C.: Operationalizing the notion of information as a subjective construct (1994) 0.05
    0.050454076 = product of:
      0.25227037 = sum of:
        0.25227037 = weight(_text_:theorem in 7747) [ClassicSimilarity], result of:
          0.25227037 = score(doc=7747,freq=2.0), product of:
            0.39776745 = queryWeight, product of:
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.04850598 = queryNorm
            0.6342157 = fieldWeight in 7747, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.200379 = idf(docFreq=32, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7747)
      0.2 = coord(1/5)
    
    Abstract
    We discuss information by attempting to operationalize it using: (1) Dervin and Nilan's idea that information is a subjective construct rather than an objective thing; (2) Brookes's idea that information is that which modifies knowledge structure; and (3) Neisser's idea that perception is top-down or schemata driven to the point of paradox. De Mey, Minsky's theorem of frames, and top-down and bottom-up models from reading theory are discussed. We conclude that information must be rare because only rare information can modify knowledge structure at its upper levels, and that to modify knowledge structure at its upper levels (its essence) information may have to enter the perception cycle in 2 stages.
  19. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.05
    0.04622421 = product of:
      0.23112105 = sum of:
        0.23112105 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
          0.23112105 = score(doc=2918,freq=2.0), product of:
            0.41123423 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04850598 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.2 = coord(1/5)
    
    Footnote
    Cf.: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  20. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents 0.05
    0.04622421 = product of:
      0.23112105 = sum of:
        0.23112105 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
          0.23112105 = score(doc=2514,freq=2.0), product of:
            0.41123423 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.04850598 = queryNorm
            0.56201804 = fieldWeight in 2514, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
      0.2 = coord(1/5)
    
    Content
    Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.

Types

  • a 1941
  • m 156
  • s 99
  • el 69
  • b 31
  • r 10
  • x 8
  • i 3
  • n 2
  • p 2
  • h 1
