Search (2901 results, page 1 of 146)

  • year_i:[2000 TO 2010}
  1. RAK-NBM : Interpretationshilfe zu NBM 3b,3 (2000) 0.44
    0.43564838 = product of:
      1.5247693 = sum of:
        0.49968433 = weight(_text_:3b in 4362) [ClassicSimilarity], result of:
          0.49968433 = score(doc=4362,freq=2.0), product of:
            0.2667077 = queryWeight, product of:
              10.598275 = idf(docFreq=2, maxDocs=44218)
              0.025165197 = queryNorm
            1.873528 = fieldWeight in 4362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              10.598275 = idf(docFreq=2, maxDocs=44218)
              0.125 = fieldNorm(doc=4362)
        0.49968433 = weight(_text_:3b in 4362) [ClassicSimilarity], result of:
          0.49968433 = score(doc=4362,freq=2.0), product of:
            0.2667077 = queryWeight, product of:
              10.598275 = idf(docFreq=2, maxDocs=44218)
              0.025165197 = queryNorm
            1.873528 = fieldWeight in 4362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              10.598275 = idf(docFreq=2, maxDocs=44218)
              0.125 = fieldNorm(doc=4362)
        0.49968433 = weight(_text_:3b in 4362) [ClassicSimilarity], result of:
          0.49968433 = score(doc=4362,freq=2.0), product of:
            0.2667077 = queryWeight, product of:
              10.598275 = idf(docFreq=2, maxDocs=44218)
              0.025165197 = queryNorm
            1.873528 = fieldWeight in 4362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              10.598275 = idf(docFreq=2, maxDocs=44218)
              0.125 = fieldNorm(doc=4362)
        0.025716338 = product of:
          0.07714901 = sum of:
            0.07714901 = weight(_text_:22 in 4362) [ClassicSimilarity], result of:
              0.07714901 = score(doc=4362,freq=4.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.8754574 = fieldWeight in 4362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=4362)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Date
    22. 1.2000 19:22:27
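The explain tree above can be reproduced by hand. Below is a minimal sketch of how Lucene's ClassicSimilarity computes a single weight(...) clause, assuming the standard formulas tf = sqrt(freq) and idf = ln(maxDocs / (docFreq + 1)) + 1; the function names are illustrative, not Lucene API:

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: ln(maxDocs / (docFreq + 1)) + 1
    return math.log(max_docs / (doc_freq + 1)) + 1.0

def clause_score(freq: float, doc_freq: int, max_docs: int,
                 field_norm: float, query_norm: float) -> float:
    # one "weight(...) = queryWeight * fieldWeight" node
    idf = classic_idf(doc_freq, max_docs)
    query_weight = idf * query_norm           # 0.2667077 in the tree above
    tf = math.sqrt(freq)                      # 1.4142135 for freq=2.0
    field_weight = tf * idf * field_norm      # 1.873528 in the tree above
    return query_weight * field_weight

# values from the "_text_:3b" clause of result 1 (doc 4362)
print(clause_score(freq=2.0, doc_freq=2, max_docs=44218,
                   field_norm=0.125, query_norm=0.025165197))  # ≈ 0.49968
```

The same function also reproduces the "_text_:22" clause of this record (freq=4.0, docFreq=3622) as ≈ 0.0771, matching the tree.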
  2. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.11140175 = product of:
      0.3119249 = sum of:
        0.029976752 = product of:
          0.11990701 = sum of:
            0.11990701 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.11990701 = score(doc=562,freq=2.0), product of:
                0.21335082 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.025165197 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.25 = coord(1/4)
        0.11990701 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.11990701 = score(doc=562,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.11990701 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.11990701 = score(doc=562,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.03531506 = weight(_text_:representation in 562) [ClassicSimilarity], result of:
          0.03531506 = score(doc=562,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.3050057 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.006819073 = product of:
          0.02045722 = sum of:
            0.02045722 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.02045722 = score(doc=562,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
      0.35714287 = coord(5/14)
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for the actual classification. Experimental evaluations on two well-known text corpora support our approach through consistent improvement of the results.
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
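At the top level, each hit's score is the sum of its matching clause scores scaled by a coordination factor coord(m/n) = m/n, the fraction of query clauses that matched; nested "product of" groups apply their own inner coord the same way. A sketch with hypothetical names, checked against result 2 above:

```python
def coord(matched: int, total: int) -> float:
    # coordination factor: fraction of query clauses that matched
    return matched / total

def hit_score(clause_scores: list, matched: int, total: int) -> float:
    # top-level explain node: (sum of clause scores) * coord(matched/total)
    return sum(clause_scores) * coord(matched, total)

# the five summed clauses of result 2 (doc 562); the first value is
# itself a sub-group already scaled by its inner coord(1/4) = 0.25
clauses = [0.029976752, 0.11990701, 0.11990701, 0.03531506, 0.006819073]
print(hit_score(clauses, matched=5, total=14))  # ≈ 0.11140175
```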
  3. Budd, J.M.: Phenomenology and information studies (2005) 0.11
    0.10774526 = product of:
      0.5028112 = sum of:
        0.13639407 = weight(_text_:edmund in 4410) [ClassicSimilarity], result of:
          0.13639407 = score(doc=4410,freq=2.0), product of:
            0.24926448 = queryWeight, product of:
              9.905128 = idf(docFreq=5, maxDocs=44218)
              0.025165197 = queryNorm
            0.54718614 = fieldWeight in 4410, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              9.905128 = idf(docFreq=5, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4410)
        0.19290914 = weight(_text_:phenomenology in 4410) [ClassicSimilarity], result of:
          0.19290914 = score(doc=4410,freq=8.0), product of:
            0.20961581 = queryWeight, product of:
              8.329592 = idf(docFreq=28, maxDocs=44218)
              0.025165197 = queryNorm
            0.9202986 = fieldWeight in 4410, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              8.329592 = idf(docFreq=28, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4410)
        0.17350797 = weight(_text_:intentionality in 4410) [ClassicSimilarity], result of:
          0.17350797 = score(doc=4410,freq=4.0), product of:
            0.23640947 = queryWeight, product of:
              9.394302 = idf(docFreq=9, maxDocs=44218)
              0.025165197 = queryNorm
            0.7339299 = fieldWeight in 4410, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              9.394302 = idf(docFreq=9, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4410)
      0.21428572 = coord(3/14)
    
    Abstract
    Purpose - To examine work on phenomenology and determine what information studies can learn and use from that work. Design/methodology/approach - The paper presents a literature-based conceptual analysis of pioneering work in phenomenology (including that of Edmund Husserl, Martin Heidegger, Paul Ricoeur, and others), application of such ideas as intentionality and being in information studies work, and the potential for greater application of the information seeker as other. Findings - The literature on phenomenology contains thought that is directly relevant to information studies and information work. Close examination of perception, intentionality, and interpretation is integral to individuals' activities related to searching for and retrieving information, determining relevance, and using technology. Essential to the realization of phenomenology's potential is adoption of communication by dialogue so that an information seeker is able both to conceptualize need and to articulate that need. Some promising work in information studies demonstrates an openness to the ongoing and continuous perceptual experiences of information seekers and the relation of that process of perceiving to the growth of knowledge. Originality/value - Offers a different way of thinking about human-information relationships and the ways that information professionals can interact with information seekers.
  4. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.08
    0.07904907 = product of:
      0.27667174 = sum of:
        0.029976752 = product of:
          0.11990701 = sum of:
            0.11990701 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
              0.11990701 = score(doc=2918,freq=2.0), product of:
                0.21335082 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.025165197 = queryNorm
                0.56201804 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.25 = coord(1/4)
        0.11990701 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.11990701 = score(doc=2918,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.11990701 = weight(_text_:2f in 2918) [ClassicSimilarity], result of:
          0.11990701 = score(doc=2918,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
        0.006880972 = product of:
          0.020642916 = sum of:
            0.020642916 = weight(_text_:29 in 2918) [ClassicSimilarity], result of:
              0.020642916 = score(doc=2918,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.23319192 = fieldWeight in 2918, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2918)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Date
    29. 8.2009 21:15:48
    Footnote
    Cf.: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4755313%2F4755314%2F04755480.pdf%3Farnumber%3D4755480&authDecision=-203.
  5. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.08
    0.07708308 = product of:
      0.35972103 = sum of:
        0.039969005 = product of:
          0.15987602 = sum of:
            0.15987602 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.15987602 = score(doc=140,freq=2.0), product of:
                0.21335082 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.025165197 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.25 = coord(1/4)
        0.15987602 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.15987602 = score(doc=140,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
        0.15987602 = weight(_text_:2f in 140) [ClassicSimilarity], result of:
          0.15987602 = score(doc=140,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
      0.21428572 = coord(3/14)
    
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http%3A%2F%2Fwww.univie.ac.at%2FGermanistik%2Fschrodt%2Fvorlesung%2Fwissenschaftssprache.doc&usg=AOvVaw1lDLDR6NFf1W0-oC9mEUJf.
  6. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.07
    0.0674477 = product of:
      0.31475592 = sum of:
        0.03497288 = product of:
          0.13989152 = sum of:
            0.13989152 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
              0.13989152 = score(doc=306,freq=2.0), product of:
                0.21335082 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.025165197 = queryNorm
                0.65568775 = fieldWeight in 306, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=306)
          0.25 = coord(1/4)
        0.13989152 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.13989152 = score(doc=306,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
        0.13989152 = weight(_text_:2f in 306) [ClassicSimilarity], result of:
          0.13989152 = score(doc=306,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.21428572 = coord(3/14)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5386707&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5386707.
  7. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.06
    0.0581154 = product of:
      0.20340389 = sum of:
        0.019984502 = product of:
          0.07993801 = sum of:
            0.07993801 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
              0.07993801 = score(doc=701,freq=2.0), product of:
                0.21335082 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.025165197 = queryNorm
                0.3746787 = fieldWeight in 701, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=701)
          0.25 = coord(1/4)
        0.07993801 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.07993801 = score(doc=701,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.07993801 = weight(_text_:2f in 701) [ClassicSimilarity], result of:
          0.07993801 = score(doc=701,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.3746787 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
        0.023543375 = weight(_text_:representation in 701) [ClassicSimilarity], result of:
          0.023543375 = score(doc=701,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.20333713 = fieldWeight in 701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=701)
      0.2857143 = coord(4/14)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem reaches a level of complexity that can no longer be managed by traditional modelling approaches. Due to their purely syntactical nature, traditional information retrieval approaches did not succeed in treating the content itself (i.e. its meaning, not its representation). This leads to very low usefulness of the results of a retrieval process for a user's task at hand. In the last ten years ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the very ambiguous nature of the retrieval process, in which a user, due to unfamiliarity with the underlying repository and/or query syntax, only approximates his information need in a query, implies a necessity to include the user in the retrieval process more actively in order to close the gap between the meaning of the content and the meaning of a user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with a user in order to conceptually interpret the meaning of his query, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between a user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively.
    Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Cf.: http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F1627&ei=tAtYUYrBNoHKtQb3l4GYBw&usg=AFQjCNHeaxKkKU3-u54LWxMNYGXaaDLCGw&sig2=8WykXWQoDKjDSdGtAakH2Q&bvm=bv.44442042,d.Yms.
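Incidentally, the high-idf query terms _text_:2f and _text_:3a that dominate several scores above are simply the percent-encodings of "/" and ":": the indexed records contain percent-encoded URLs like the one in the Content field, and tokenization splits them into fragments such as 2F and 3A. A quick check with Python's standard library (URL fragment taken from the record above):

```python
from urllib.parse import unquote

encoded = "http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F1627"
print(unquote(encoded))  # http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627
```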
  8. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.05
    0.048176926 = product of:
      0.22482565 = sum of:
        0.024980627 = product of:
          0.09992251 = sum of:
            0.09992251 = weight(_text_:3a in 5895) [ClassicSimilarity], result of:
              0.09992251 = score(doc=5895,freq=2.0), product of:
                0.21335082 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.025165197 = queryNorm
                0.46834838 = fieldWeight in 5895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5895)
          0.25 = coord(1/4)
        0.09992251 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.09992251 = score(doc=5895,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
        0.09992251 = weight(_text_:2f in 5895) [ClassicSimilarity], result of:
          0.09992251 = score(doc=5895,freq=2.0), product of:
            0.21335082 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.025165197 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
      0.21428572 = coord(3/14)
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https%3A%2F%2Fwww.dgfe.de%2Ffileadmin%2FOrdnerRedakteure%2FSektionen%2FSek02_AEW%2FKWF%2FPublikationen_Reihe_1989-2003%2FBand_17%2FBd_17_1994_355-406_A.pdf&usg=AOvVaw2KcbRsHy5UQ9QRIUyuOLNi]
  9. Hommen, D.L.: Collective intentionality and the structure of scientific theories (2007) 0.03
    0.028043123 = product of:
      0.3926037 = sum of:
        0.3926037 = weight(_text_:intentionality in 848) [ClassicSimilarity], result of:
          0.3926037 = score(doc=848,freq=2.0), product of:
            0.23640947 = queryWeight, product of:
              9.394302 = idf(docFreq=9, maxDocs=44218)
              0.025165197 = queryNorm
            1.6606936 = fieldWeight in 848, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              9.394302 = idf(docFreq=9, maxDocs=44218)
              0.125 = fieldNorm(doc=848)
      0.071428575 = coord(1/14)
    
  10. Cole, C.; Lin, Y.; Leide, J.; Large, A.; Beheshti, J.: A classification of mental models of undergraduates seeking information for a course essay in history and psychology : preliminary investigations into aligning their mental models with online thesauri (2007) 0.03
    0.025847128 = product of:
      0.18092988 = sum of:
        0.15738651 = weight(_text_:mental in 625) [ClassicSimilarity], result of:
          0.15738651 = score(doc=625,freq=22.0), product of:
            0.16438161 = queryWeight, product of:
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.025165197 = queryNorm
            0.957446 = fieldWeight in 625, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.03125 = fieldNorm(doc=625)
        0.023543375 = weight(_text_:representation in 625) [ClassicSimilarity], result of:
          0.023543375 = score(doc=625,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.20333713 = fieldWeight in 625, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.03125 = fieldNorm(doc=625)
      0.14285715 = coord(2/14)
    
    Abstract
    The article reports a field study which examined the mental models of 80 undergraduates seeking information for either a history or psychology course essay when they were in an early, exploration stage of researching their essay. This group is presently at a disadvantage when using thesaurus-type schemes in indexes and online search engines because there is a disconnect between how domain novice users of IR systems represent a topic space and how this space is represented in the standard IR system thesaurus. The study attempted to (a) ascertain the coding language used by the 80 undergraduates in the study to mentally represent their topic and then (b) align the mental models with the hierarchical structure found in many thesauri. The intervention focused the undergraduates' thinking about their topic from a topic statement to a thesis statement. The undergraduates were asked to produce three mental model diagrams for their real-life course essay at the beginning, middle, and end of the interview, for a total of 240 mental model diagrams, from which we created a 12-category mental model classification scheme. Findings indicate that at the end of the intervention, (a) the percentage of vertical mental models increased from 24 to 35% of all mental models; but that (b) 3rd-year students had fewer vertical mental models than did 1st-year undergraduates in the study, which is counterintuitive. The results indicate that there is justification for pursuing our research based on the hypothesis that rotating a domain novice's mental model into a vertical position would make it easier for him or her to cognitively connect with the thesaurus's hierarchical representation of the topic area.
  11. Murphy, M.L.: Semantic relations and the lexicon : antonymy, synonymy and other paradigms (2008) 0.03
    0.025499802 = product of:
      0.11899908 = sum of:
        0.083887294 = weight(_text_:mental in 997) [ClassicSimilarity], result of:
          0.083887294 = score(doc=997,freq=4.0), product of:
            0.16438161 = queryWeight, product of:
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.025165197 = queryNorm
            0.5103204 = fieldWeight in 997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.0390625 = fieldNorm(doc=997)
        0.02942922 = weight(_text_:representation in 997) [ClassicSimilarity], result of:
          0.02942922 = score(doc=997,freq=2.0), product of:
            0.11578492 = queryWeight, product of:
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.025165197 = queryNorm
            0.25417143 = fieldWeight in 997, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.600994 = idf(docFreq=1206, maxDocs=44218)
              0.0390625 = fieldNorm(doc=997)
        0.0056825615 = product of:
          0.017047685 = sum of:
            0.017047685 = weight(_text_:22 in 997) [ClassicSimilarity], result of:
              0.017047685 = score(doc=997,freq=2.0), product of:
                0.08812423 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.025165197 = queryNorm
                0.19345059 = fieldWeight in 997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=997)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Abstract
    Semantic Relations and the Lexicon explores the many paradigmatic semantic relations between words, such as synonymy, antonymy and hyponymy, and their relevance to the mental organization of our vocabularies. Drawing on a century's research in linguistics, psychology, philosophy, anthropology and computer science, M. Lynne Murphy proposes a pragmatic approach to these relations. Whereas traditional approaches have claimed that paradigmatic relations are part of our lexical knowledge, Dr Murphy argues that they constitute metalinguistic knowledge, which can be derived through a single relational principle, and may also be stored as part of our extra-lexical, conceptual representations of a word. Part I shows how this approach can account for the properties of lexical relations in ways that traditional approaches cannot, and Part II examines particular relations in detail. This book will serve as an informative handbook for all linguists and cognitive scientists interested in the mental representation of vocabulary.
    Date
    22. 7.2013 10:53:30
  12. Zhang, X.; Chignell, M.: Assessment of the effects of user characteristics on mental models of information retrieval systems (2001) 0.02
    0.01859566 = product of:
      0.13016962 = sum of:
        0.12328864 = weight(_text_:mental in 5753) [ClassicSimilarity], result of:
          0.12328864 = score(doc=5753,freq=6.0), product of:
            0.16438161 = queryWeight, product of:
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.025165197 = queryNorm
            0.7500148 = fieldWeight in 5753, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.532101 = idf(docFreq=174, maxDocs=44218)
              0.046875 = fieldNorm(doc=5753)
        0.006880972 = product of:
          0.020642916 = sum of:
            0.020642916 = weight(_text_:29 in 5753) [ClassicSimilarity], result of:
              0.020642916 = score(doc=5753,freq=2.0), product of:
                0.08852329 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.025165197 = queryNorm
                0.23319192 = fieldWeight in 5753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5753)
          0.33333334 = coord(1/3)
      0.14285715 = coord(2/14)
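
    The explain tree above is the standard Lucene classic (TF-IDF) similarity breakdown. As a sanity check, its arithmetic can be reproduced in a few lines; this is only a sketch with the constants copied from the explain output for doc 5753, not Lucene's actual API:

```python
import math

def tf(freq):
    # Classic Lucene similarity: term-frequency factor is sqrt(freq)
    return math.sqrt(freq)

def term_score(freq, idf, field_norm, query_norm):
    # score = queryWeight * fieldWeight
    #       = (idf * queryNorm) * (tf * idf * fieldNorm)
    query_weight = idf * query_norm
    field_weight = tf(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.025165197   # normalizes scores across queries
FIELD_NORM = 0.046875      # length normalization for this field

# term "mental": freq=6, idf=6.532101  ->  0.12328864
s_mental = term_score(6.0, 6.532101, FIELD_NORM, QUERY_NORM)

# term "29": freq=2, idf=3.5176873, scaled by coord(1/3)  ->  0.006880972
s_29 = term_score(2.0, 3.5176873, FIELD_NORM, QUERY_NORM) * (1 / 3)

# final score: sum of matching clauses, scaled by coord(2/14)  ->  0.01859566
final = (s_mental + s_29) * (2 / 14)
```

    The coord factors simply down-weight documents that match only a fraction of the query's clauses (here 2 of 14).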
    
    Abstract
    This article reports the results of a study that investigated effects of four user characteristics on users' mental models of information retrieval systems: educational and professional status, first language, academic background, and computer experience. The repertory grid technique was used in the study. Using this method, important components of information retrieval systems were represented by nine concepts, based on four IR experts' judgments. Users' mental models were represented by factor scores that were derived from users' matrices of concept ratings on different attributes of the concepts. The study found that educational and professional status, academic background, and computer experience had significant effects in differentiating users on their factor scores. First language had a borderline effect, but the effect did not reach significance at the α = 0.05 level. Specific differences in views regarding IR systems among different groups of users are described and discussed. Implications of the study for information science and IR system design are suggested.
    Date
    29. 9.2001 14:00:33
  13. Lin, S.-j.; Belkin, N.: Validation of a model of information seeking over multiple search sessions (2005) 0.02
    
    Abstract
    Most information systems share a common assumption: information seeking is discrete. Such an assumption neither reflects real-life information seeking processes nor conforms to the perspective of phenomenology, "life is a journey constituted by continuous acquisition of knowledge." Thus, this study develops and validates a theoretical model that explains successive search experience for essentially the same information problem. The proposed model is called Multiple Information Seeking Episodes (MISE), which consists of four dimensions: problematic situation, information problem, information seeking process, episodes. Eight modes of multiple information seeking episodes are identified and specified with properties of the four dimensions of MISE. The results partially validate MISE by finding that the original MISE model is highly accurate, but less sufficient in characterizing successive searches; all factors in the MISE model are empirically confirmed, but new factors are identified as well. The revised MISE model is shifted from the user-centered to the interaction-centered perspective, taking into account factors of searcher, system, search activity, search context, information attainment, and information use activities.
    Date
    10. 4.2005 14:52:22
  14. Silva, S.M. de; Zainab, A.N.: ¬An adviser for cataloguing conference proceedings : design and development of CoPAS (2000) 0.02
    
    Abstract
    This article describes the design and development of an expert adviser to catalogue published conference proceedings. The Conference Proceeding Adviser System (CoPAS) was designed to educate novice cataloguers in creating bibliographic records for published conference proceedings as well as to improve conventional instruction in the cataloguing of conference proceedings. The development tool was Asymetrix ToolBook II. The knowledge base of the expert system was in the domain of cataloguing published conference proceedings and consists of public and private knowledge. Public/published knowledge comprises the relevant AACR2R rules that were identified based on the nine types of published conference proceedings. Private knowledge, or heuristics, was elicited from three human expert cataloguers through a multiple-observation approach. The elicited personal knowledge was then modelled into a mental map of their thought processes on how to provide a bibliographic description for published conference proceedings. Based on the mental mapping of the experts, the expert adviser system was designed and developed.
    Date
    4. 9.2002 9:29:37
    Source
    Cataloging and classification quarterly. 29(2000) no.3, S.63-80
  15. Irandoust, H.; Moulin, B.: Pragmatic representation of argumentative text : a challenge for the conceptual graph approach (2000) 0.02
    
    Abstract
    Currently, there is growing interest in using diagrams such as Argumentation Maps to represent arguments in various domains. In this paper we show how various structures can be used to describe the argumentative process as a mental path across an abstract space in which domain-specific topics are used as conceptual landmarks, and to re-construct the discursive goals structure that underlies the text content. Our approach for modeling the argumentative process comprises several components: the Thematic Map, the Goal Structure, the Text and the Conceptual Map, the Evaluation Map and the Discursive Goals Structure. Because of their strong argumentative framework, we chose movie reviews to illustrate our approach. Finally, we challenge interested CG researchers to find adequate CG structures to represent and manipulate the models that support our analysis of argumentative discourse.
  16. Gnoli, C.; Poli, R.: Levels of reality and levels of representation (2004) 0.02
    
    Abstract
    Ontology, in its philosophical meaning, is the discipline investigating the structure of reality. Its findings can be relevant to knowledge organization, and models of knowledge can, in turn, offer relevant ontological suggestions. Several philosophers have pointed out that reality is structured into a series of integrative levels, like the physical, the biological, the mental, and the cultural, and that each level serves as a base for the emergence of more complex levels. More detailed theories of levels have been developed by Nicolai Hartmann and James K. Feibleman, and these have been considered as a source for structuring principles in bibliographic classification by both the Classification Research Group (CRG) and Ingetraut Dahlberg. CRG's analysis of levels, and of their possible application to a new general classification scheme based on phenomena instead of disciplines, as it was formulated by Derek Austin in 1969, is examined in detail. Both benefits and open problems in applying integrative levels to bibliographic classification are pointed out.
  17. Popst, H.; Croissant, C.R.: ¬The development of descriptive cataloging in Germany (2002) 0.01
    
    Abstract
    This article discusses the development of descriptive cataloging in Germany and the evolution of cataloging principles. The Instruktionen für die alphabetischen Kataloge der preußischen Bibliotheken (Instructions for the Alphabetic Catalogs of the Prussian Libraries, known as the Prussian Instructions, or PI, for short) were published in 1899. The so-called Berliner Anweisungen ("Berlin Instructions," Instructions for the Alphabetic Catalog in Public Libraries) appeared in 1938. Discussion for reform of cataloging rules began in the 1950s and received impetus from the International Conference on Cataloging Principles in Paris in 1961 and from the International Meeting of Cataloging Experts in Copenhagen in 1969. Preliminary drafts of the new Regeln für die alphabetische Katalogisierung, RAK (Rules for Descriptive Cataloging) were issued between 1969 and 1976; the complete edition of the RAK was published in the German Democratic Republic (East Germany) in 1976 and in a slightly different version in 1977 for the Federal Republic of Germany (West Germany). A version for academic libraries appeared in 1983, followed by a version for public libraries in 1986. Between 1987 and 1997, supplementary rules for special categories of materials were published.
    Date
    29. 7.2006 19:47:05
  18. Couvering, E. van: ¬The economy of navigation : search engines, search optimisation and search results (2007) 0.01
    
    Abstract
    The political economy of communication focuses critically on what structural issues in mass media - ownership, labour practices, professional ethics, and so on - mean for products of those mass media and thus for society more generally. In the case of new media, recent political economic studies have looked at the technical infrastructure of the Internet and also at Internet usage. However, political economic studies of Internet content are only beginning. Recent studies on the phenomenology of the Web, that is, the way the Web is experienced from an individual user's perspective, highlight the centrality of the search engine to most users' experiences of the Web, particularly when they venture beyond familiar Web sites. Search engines are therefore an obvious place to begin the analysis of Web content. An important assumption of this chapter is that Internet search engines are media businesses and that the tools developed in media studies can be profitably brought to bear on them. This focus on the search engine as an industry comes from the critical tradition of the political economy of communications in rejecting the notion that the market alone should be the arbiter of the structure of the media industry, as might be appropriate for other types of products.
    Date
    13. 5.2007 10:29:29
  19. Day, R.E.: Clearing up "Implicit Knowledge" : implications for knowledge management, information science, psychology, and social epistemology (2005) 0.01
    
    Abstract
    "Implicit knowledge" and "tacit knowledge" in Knowledge Management (KM) are important, often synonymous, terms. In KM they often refer to private or personal knowledge that needs to be made public. The original reference of "tacit knowledge" is to the work of the late scientist and philosopher, Michael Polanyi (Polanyi, 1969), but there is substantial evidence that the KM discourse has poorly understood Polanyi's term. Two theoretical problems in Knowledge Management's notion of "implicit knowledge," which undermine empirical work in this area, are examined. The first problem involves understanding the term "knowledge" according to a folk-psychology of mental representation to model expression. The second is epistemological and social: understanding Polanyi's term, tacit knowing, as a psychological concept instead of as an epistemological problem, in general, and one of social epistemology and of the epistemology of the sciences, in particular. Further, exploring Polanyi's notion of tacit knowing in more detail yields important insights into the role of knowledge in science, including empirical work in information science. This article has two parts: first, there is a discussion of the folk-psychology model of representation and the need to replace this with a more expressionist model. In the second part, Polanyi's concept of tacit knowledge in relation to the role of analogical thought in expertise is examined. The works of philosophers, particularly Harré and Wittgenstein, are brought to bear on these problems. Conceptual methods play several roles in information science that cannot satisfactorily be performed empirically at all or alone. Among these roles, such methods may examine historical issues, they may critically engage foundational assumptions, and they may deploy new concepts. In this article the last two roles are examined.
  20. Lee, J.; Boling, E.: Information-conveying approaches and cognitive styles of mental modeling in a hypermedia-based learning environment (2008) 0.01
    
    Abstract
    The increasing spread of Internet technology has highlighted the need for a better understanding of the fundamental issues concerning human users in a virtual space. Despite the great degree of navigational freedom, however, not all hypermedia users have the capability to locate information or assimilate internal knowledge. Research findings suggest that this type of problem could be solved if users were able to hold a cognitive overview of the hypermedia structure. How a learner can acquire the correct structural knowledge of online information has become an important factor in learning performance in a hypermedia environment. Variables that might influence learners' abilities in structuring a cognitive overview, such as users' cognitive styles and the different ways of representing information, should be carefully taken into account. The results of this study show that the interactions between information representation approaches and learners' cognitive styles have significant effects on learners' performance in terms of structural knowledge and feelings of disorientation. Learners' performance could decline if a representational approach that contradicts their cognitive style is used. Finally, the results of the present study may apply only when the learner's knowledge level is in the introductory stage. It is not clear how and what type of cognitive styles, as well as information representation approaches, will affect the performance of advanced and expert learners.
