Search (1378 results, page 1 of 69)

  • year_i:[2000 TO 2010}
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.113088265 = product of:
      0.28272066 = sum of:
        0.24151587 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
          0.24151587 = score(doc=562,freq=2.0), product of:
            0.42972976 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.050687566 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.04120479 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
          0.04120479 = score(doc=562,freq=2.0), product of:
            0.17749922 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050687566 = queryNorm
            0.23214069 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
      0.4 = coord(2/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
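    The relevance figures shown for each result are Lucene ClassicSimilarity explanations: for every matching term, queryWeight (idf x queryNorm) is multiplied by fieldWeight (tf x idf x fieldNorm), the per-term products are summed, and the sum is scaled by the coordination factor. The following is a minimal Python sketch that reproduces the 0.11 score of result 1 from the numbers printed above; it is plain arithmetic mirroring the explain output, not a call into any Lucene API.

    import math

    def term_score(freq, idf, query_norm, field_norm):
        # tf(freq) = sqrt(freq); queryWeight = idf * queryNorm;
        # fieldWeight = tf * idf * fieldNorm, exactly as in the explain output.
        tf = math.sqrt(freq)
        return (idf * query_norm) * (tf * idf * field_norm)

    score_3a = term_score(2.0, 8.478011, 0.050687566, 0.046875)   # ~0.24151587
    score_22 = term_score(2.0, 3.5018296, 0.050687566, 0.046875)  # ~0.04120479
    coord = 2 / 5                                                 # coord(2/5)
    print((score_3a + score_22) * coord)                          # ~0.113088265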
  2. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.06
    0.06440424 = product of:
      0.3220212 = sum of:
        0.3220212 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
          0.3220212 = score(doc=140,freq=2.0), product of:
            0.42972976 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.050687566 = queryNorm
            0.7493574 = fieldWeight in 140, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=140)
      0.2 = coord(1/5)
    
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  3. Chylkowska, E.: Implementation of information exchange : online dictionaries (2005) 0.06
    0.06354797 = product of:
      0.15886992 = sum of:
        0.124532595 = weight(_text_:line in 3011) [ClassicSimilarity], result of:
          0.124532595 = score(doc=3011,freq=4.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.43811268 = fieldWeight in 3011, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3011)
        0.034337327 = weight(_text_:22 in 3011) [ClassicSimilarity], result of:
          0.034337327 = score(doc=3011,freq=2.0), product of:
            0.17749922 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050687566 = queryNorm
            0.19345059 = fieldWeight in 3011, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3011)
      0.4 = coord(2/5)
    
    Abstract
    We are living in a society in which using the Internet is part of everyday life. People use the Internet at schools and universities, and at work in small and large companies. The Web offers a huge amount of information from every possible field of knowledge, and one of the problems one can face when searching the web is that this information may be written in many different languages that one does not understand. That is why web site designers came up with the idea of creating on-line dictionaries to make surfing the Web easier. The most popular are bilingual dictionaries (in Poland the best known are LING.pl, LEKSYKA.pl, and Dict.pl), but one can also find multilingual ones (Logos.com, Lexicool.com). Nowadays, when using the Internet in education is becoming more and more popular, on-line dictionaries are an excellent supplement to good-quality work. The purpose of this paper is to present, compare, and recommend the best (from the author's point of view) multilingual dictionaries that can be found on the Internet and that can serve educational purposes well.
    Date
    22. 7.2009 11:05:56
  4. Vetere, G.; Lenzerini, M.: Models for semantic interoperability in service-oriented architectures (2005) 0.06
    0.056353707 = product of:
      0.28176853 = sum of:
        0.28176853 = weight(_text_:3a in 306) [ClassicSimilarity], result of:
          0.28176853 = score(doc=306,freq=2.0), product of:
            0.42972976 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.050687566 = queryNorm
            0.65568775 = fieldWeight in 306, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0546875 = fieldNorm(doc=306)
      0.2 = coord(1/5)
    
    Content
    Cf.: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5386707.
  5. Doszkocs, T.E.; Zamora, A.: Dictionary services and spelling aids for Web searching (2004) 0.05
    0.054647263 = product of:
      0.13661815 = sum of:
        0.08805784 = weight(_text_:line in 2541) [ClassicSimilarity], result of:
          0.08805784 = score(doc=2541,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.30979243 = fieldWeight in 2541, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
        0.048560314 = weight(_text_:22 in 2541) [ClassicSimilarity], result of:
          0.048560314 = score(doc=2541,freq=4.0), product of:
            0.17749922 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050687566 = queryNorm
            0.27358043 = fieldWeight in 2541, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2541)
      0.4 = coord(2/5)
    
    Abstract
    The Specialized Information Services Division (SIS) of the National Library of Medicine (NLM) provides Web access to more than a dozen scientific databases on toxicology and the environment on TOXNET. Search queries on TOXNET often include misspelled or variant English words, medical and scientific jargon, and chemical names. Following the example of search engines like Google and ClinicalTrials.gov, we set out to develop a spelling "suggestion" system for increased recall and precision in TOXNET searching. This paper describes the development of dictionary technology that can be used in a variety of applications such as orthographic verification, writing aid, natural language processing, and information storage and retrieval. The design of the technology allows building complex applications using the components developed in the earlier phases of the work in a modular fashion without extensive rewriting of computer code. Since many of the potential applications envisioned for this work have on-line or web-based interfaces, the dictionaries and other computer components must have fast response and must be adaptable to open-ended database vocabularies, including chemical nomenclature. The dictionary vocabulary for this work was derived from SIS and other databases and specialized resources, such as NLM's Unified Medical Language System (UMLS). The resulting technology, A-Z Dictionary (AZdict), has three major constituents: 1) the vocabulary list, 2) the word attributes that define part of speech and morphological relationships between words in the list, and 3) a set of programs that implements the retrieval of words and their attributes and determines similarity between words (ChemSpell). These three components can be used in various applications such as spelling verification, spelling aid, part-of-speech tagging, paraphrasing, and many other natural language processing functions.
    Date
    14. 8.2004 17:22:56
    Source
    Online. 28(2004) no.3, S.22-29
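    The abstract above outlines a dictionary-backed spelling-suggestion pipeline (AZdict/ChemSpell). Those components are not documented here, so what follows is only a hedged sketch of the general idea (matching a misspelled query term against a controlled vocabulary), using the standard-library difflib ratio rather than whatever similarity measure ChemSpell actually implements; the vocabulary terms are invented examples.

    import difflib

    # Toy vocabulary standing in for a controlled word list (invented examples).
    VOCABULARY = ["toxicology", "benzene", "toluene", "formaldehyde", "arsenic"]

    def suggest(term, vocabulary=VOCABULARY, n=3, cutoff=0.6):
        # Return up to n vocabulary terms whose difflib similarity to the
        # input term is at least `cutoff`.
        return difflib.get_close_matches(term.lower(), vocabulary, n=n, cutoff=cutoff)

    print(suggest("toxocology"))  # e.g. ['toxicology']
    print(suggest("benzine"))     # e.g. ['benzene']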
  6. Wang, Z.; Chaudhry, A.S.; Khoo, C.S.G.: Using classification schemes and thesauri to build an organizational taxonomy for organizing content and aiding navigation (2008) 0.05
    0.050838377 = product of:
      0.12709594 = sum of:
        0.09962608 = weight(_text_:line in 2346) [ClassicSimilarity], result of:
          0.09962608 = score(doc=2346,freq=4.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.35049015 = fieldWeight in 2346, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
        0.027469862 = weight(_text_:22 in 2346) [ClassicSimilarity], result of:
          0.027469862 = score(doc=2346,freq=2.0), product of:
            0.17749922 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050687566 = queryNorm
            0.15476047 = fieldWeight in 2346, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2346)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - The potential and benefits of classification schemes and thesauri in building organizational taxonomies cannot be fully utilized by organizations. Empirical data on building an organizational taxonomy by the top-down approach of using classification schemes and thesauri appear to be lacking. The paper seeks to make a contribution in this regard. Design/methodology/approach - A case study of building an organizational taxonomy was conducted in the information studies domain for the Division of Information Studies at Nanyang Technological University, Singapore. The taxonomy was built by using the Dewey Decimal Classification, the Information Science Taxonomy, two information systems taxonomies, and three thesauri (ASIS&T, LISA, and ERIC). Findings - Classification schemes and thesauri were found to be helpful in creating the structure and categories related to the subject facet of the taxonomy, but organizational community sources had to be consulted and several methods had to be employed. The organizational activities and stakeholders' needs had to be identified to determine the objectives, facets, and the subject coverage of the taxonomy. Main categories were determined by identifying the stakeholders' interests and consulting organizational community sources and domain taxonomies. Category terms were selected from the terminologies of classification schemes, domain taxonomies, and thesauri against the stakeholders' interests. Hierarchical structures of the main categories were constructed in line with the stakeholders' perspectives and the navigational role, taking advantage of structures/term relationships from classification schemes and thesauri. Categories were determined in line with the concepts and the hierarchical levels. The format of the categories was made uniform according to a commonly used standard. The consistency principle was employed to make the taxonomy structure and categories neater. Validation of the draft taxonomy through consultations with the stakeholders further refined the taxonomy. Originality/value - No similar study could be traced in the literature. The steps and methods used in the taxonomy development, and the information studies taxonomy itself, will be helpful for library and information schools and other similar organizations in their efforts to develop taxonomies for organizing content and aiding navigation on organizational sites.
    Date
    7.11.2008 15:22:04
  7. Buxton, A.; Hopkinson, A.: ¬The CDS/ISIS for Windows handbook (2001) 0.05
    0.04981304 = product of:
      0.24906519 = sum of:
        0.24906519 = weight(_text_:line in 775) [ClassicSimilarity], result of:
          0.24906519 = score(doc=775,freq=4.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.87622535 = fieldWeight in 775, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.078125 = fieldNorm(doc=775)
      0.2 = coord(1/5)
    
    COMPASS
    Information retrieval / Use of / On-line computers
    Subject
    Information retrieval / Use of / On-line computers
  8. O'Connor, L.: Approaching the challenges and costs of the North American Industrial Classification System (2000) 0.05
    0.049312394 = product of:
      0.24656196 = sum of:
        0.24656196 = weight(_text_:line in 3380) [ClassicSimilarity], result of:
          0.24656196 = score(doc=3380,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.8674188 = fieldWeight in 3380, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.109375 = fieldNorm(doc=3380)
      0.2 = coord(1/5)
    
    Source
    Bottom line. 13(2000) no.2, S.83-89
  9. Line, M.B.: Social science information : the poor relation (2000) 0.05
    0.049312394 = product of:
      0.24656196 = sum of:
        0.24656196 = weight(_text_:line in 6330) [ClassicSimilarity], result of:
          0.24656196 = score(doc=6330,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.8674188 = fieldWeight in 6330, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.109375 = fieldNorm(doc=6330)
      0.2 = coord(1/5)
    
  10. Scott, M.: Legal deposit of on-line materials and national bibliographies (2001) 0.05
    0.049312394 = product of:
      0.24656196 = sum of:
        0.24656196 = weight(_text_:line in 6909) [ClassicSimilarity], result of:
          0.24656196 = score(doc=6909,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.8674188 = fieldWeight in 6909, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.109375 = fieldNorm(doc=6909)
      0.2 = coord(1/5)
    
  11. Computational linguistics for the new millennium : divergence or synergy? Proceedings of the International Symposium held at the Ruprecht-Karls Universität Heidelberg, 21-22 July 2000. Festschrift in honour of Peter Hellwig on the occasion of his 60th birthday (2002) 0.05
    0.048958067 = product of:
      0.122395165 = sum of:
        0.08805784 = weight(_text_:line in 4900) [ClassicSimilarity], result of:
          0.08805784 = score(doc=4900,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.30979243 = fieldWeight in 4900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4900)
        0.034337327 = weight(_text_:22 in 4900) [ClassicSimilarity], result of:
          0.034337327 = score(doc=4900,freq=2.0), product of:
            0.17749922 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050687566 = queryNorm
            0.19345059 = fieldWeight in 4900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4900)
      0.4 = coord(2/5)
    
    Abstract
    The two seemingly conflicting tendencies, synergy and divergence, are both fundamental to the advancement of any science. Their interplay defines the demarcation line between application-oriented and theoretical research. The papers in this festschrift in honour of Peter Hellwig are geared to answering the questions that arise from this insight: where the discipline of Computational Linguistics currently stands, what has been achieved so far, and what should be done next. Given the complexity of such questions, no simple answers can be expected. However, each of the practitioners and researchers contributes, from their very own perspective, a piece of insight into the overall picture of today's and tomorrow's computational linguistics.
  12. Talja, S.: ¬The social and discursive construction of computing skills (2005) 0.05
    0.048958067 = product of:
      0.122395165 = sum of:
        0.08805784 = weight(_text_:line in 4902) [ClassicSimilarity], result of:
          0.08805784 = score(doc=4902,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.30979243 = fieldWeight in 4902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4902)
        0.034337327 = weight(_text_:22 in 4902) [ClassicSimilarity], result of:
          0.034337327 = score(doc=4902,freq=2.0), product of:
            0.17749922 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050687566 = queryNorm
            0.19345059 = fieldWeight in 4902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4902)
      0.4 = coord(2/5)
    
    Abstract
    In this article a social constructionist approach to information technology (IT) literacy is introduced. This approach contributes to the literature on IT literacy by introducing the concept of IT self as a description of the momentary, context-dependent, and multilayered nature of interpretations of IT competencies. In the research literature, IT literacy is often defined as sets of basic skills to be learned and competencies to be demonstrated. In line with this approach, research on IT competencies conventionally develops models for explaining user acceptance and for measuring computer-related attitudes and skills. The assumption is that computer-related attitudes and self-efficacy impact IT adoption and success in computer use. Computer self-efficacy measures are, however, often based on self-assessments that measure interpretations of skills rather than performance in practice. An analysis of empirical interview data in which academic researchers discuss their relationships with computers and IT competence shows how a self-assessment such as "computer anxiety" presented in one discussion context can in another discussion context be consigned to the past in favor of a different and more positive version. Here it is argued that descriptions of IT competencies and computer-related attitudes are dialogic social constructs and closely tied to more general implicit understandings of the nature of technical artifacts and technical knowledge. These implicit theories and assumptions are rarely subjected to scrutiny in discussions of IT literacy, yet they have profound implications for the aims and methods of teaching computer skills.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.1, S.13-22
  13. Näppilä, T.; Järvelin, K.; Niemi, T.: ¬A tool for data cube construction from structurally heterogeneous XML documents (2008) 0.05
    0.048958067 = product of:
      0.122395165 = sum of:
        0.08805784 = weight(_text_:line in 1369) [ClassicSimilarity], result of:
          0.08805784 = score(doc=1369,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.30979243 = fieldWeight in 1369, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
        0.034337327 = weight(_text_:22 in 1369) [ClassicSimilarity], result of:
          0.034337327 = score(doc=1369,freq=2.0), product of:
            0.17749922 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050687566 = queryNorm
            0.19345059 = fieldWeight in 1369, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
      0.4 = coord(2/5)
    
    Abstract
    Data cubes for OLAP (On-Line Analytical Processing) often need to be constructed from data located in several distributed and autonomous information sources. Such a data integration process is challenging due to semantic, syntactic, and structural heterogeneity among the data. While XML (extensible markup language) is the de facto standard for data exchange, the three types of heterogeneity remain. Moreover, popular path-oriented XML query languages, such as XQuery, require the user to know the structure of the documents to be processed in much detail and are, thus, effectively impractical in many real-world data integration tasks. Several Lowest Common Ancestor (LCA)-based XML query evaluation strategies have recently been introduced to provide a more structure-independent way to access XML documents. We shall, however, show that in the context of certain - not uncommon - types of XML documents this approach leads to undesirable results. This article introduces a novel high-level data extraction primitive that utilizes the purpose-built Smallest Possible Context (SPC) query evaluation strategy. We demonstrate, through a system prototype for OLAP data cube construction and a sample application in informetrics, that our approach has real advantages in data integration.
    Date
    9. 2.2008 17:22:42
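    The abstract above contrasts path-oriented XML querying with Lowest Common Ancestor (LCA)-based access. The paper's own SPC strategy is not reproduced here; the sketch below only illustrates the generic LCA idea with the Python standard library, on an invented two-field document.

    import xml.etree.ElementTree as ET

    def lowest_common_ancestor(root, a, b):
        # ElementTree stores no parent pointers, so build a child -> parent map.
        parent = {child: p for p in root.iter() for child in p}
        def ancestors(node):
            chain = [node]
            while node in parent:
                node = parent[node]
                chain.append(node)
            return chain
        ancestors_of_a = set(ancestors(a))
        for node in ancestors(b):
            if node in ancestors_of_a:
                return node  # deepest element containing both a and b
        return None

    doc = ET.fromstring("<articles><article><title>OLAP</title>"
                        "<year>2008</year></article></articles>")
    print(lowest_common_ancestor(doc, doc.find(".//title"),
                                 doc.find(".//year")).tag)  # article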
  14. Mas, S.; Marleau, Y.: Proposition of a faceted classification model to support corporate information organization and digital records management (2009) 0.05
    0.048303176 = product of:
      0.24151587 = sum of:
        0.24151587 = weight(_text_:3a in 2918) [ClassicSimilarity], result of:
          0.24151587 = score(doc=2918,freq=2.0), product of:
            0.42972976 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.050687566 = queryNorm
            0.56201804 = fieldWeight in 2918, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2918)
      0.2 = coord(1/5)
    
    Footnote
    Cf.: http://ieeexplore.ieee.org/iel5/4755313/4755314/04755480.pdf?arnumber=4755480.
  15. Hildreth, C.R.: Accounting for users' inflated assessments of on-line catalogue search performance and usefulness : an experimental study (2001) 0.04
    0.042267766 = product of:
      0.21133882 = sum of:
        0.21133882 = weight(_text_:line in 4130) [ClassicSimilarity], result of:
          0.21133882 = score(doc=4130,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.74350184 = fieldWeight in 4130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.09375 = fieldNorm(doc=4130)
      0.2 = coord(1/5)
    
  16. Donsbach, W.: Wahrheit in den Medien : über den Sinn eines methodischen Objektivitätsbegriffes (2001) 0.04
    0.04025265 = product of:
      0.20126323 = sum of:
        0.20126323 = weight(_text_:3a in 5895) [ClassicSimilarity], result of:
          0.20126323 = score(doc=5895,freq=2.0), product of:
            0.42972976 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.050687566 = queryNorm
            0.46834838 = fieldWeight in 5895, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5895)
      0.2 = coord(1/5)
    
    Source
    Politische Meinung. 381(2001) Nr.1, S.65-74 [https://www.dgfe.de/fileadmin/OrdnerRedakteure/Sektionen/Sek02_AEW/KWF/Publikationen_Reihe_1989-2003/Band_17/Bd_17_1994_355-406_A.pdf]
  17. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.04
    0.04025265 = product of:
      0.20126323 = sum of:
        0.20126323 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
          0.20126323 = score(doc=692,freq=2.0), product of:
            0.42972976 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.050687566 = queryNorm
            0.46834838 = fieldWeight in 692, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0390625 = fieldNorm(doc=692)
      0.2 = coord(1/5)
    
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Below this, further pointers to related contributions. Also in: Learning Group Publication 5(2001) no.3, S.438.
  18. Schamber, L.: Time-line interviews and inductive content analysis : their effectiveness for exploring cognitive behaviors (2000) 0.04
    0.039850432 = product of:
      0.19925216 = sum of:
        0.19925216 = weight(_text_:line in 4808) [ClassicSimilarity], result of:
          0.19925216 = score(doc=4808,freq=4.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.7009803 = fieldWeight in 4808, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.0625 = fieldNorm(doc=4808)
      0.2 = coord(1/5)
    
    Abstract
    In studies of information users' cognitive behaviors, it is widely recognised that users' perceptions of their information problem situations play a major role. Time-line interviewing and inductive content analysis are two research methods that, used together, have proven extremely useful for exploring and describing users' perceptions in various situational contexts. This article describes advantages and disadvantages of the methods, using examples from a study of users' criteria for evaluation in a multimedia context.
  19. Herrero-Solana, V.; Moya Anegón, F. de: Graphical Table of Contents (GTOC) for library collections : the application of UDC codes for the subject maps (2003) 0.04
    0.039166458 = product of:
      0.09791614 = sum of:
        0.070446275 = weight(_text_:line in 2758) [ClassicSimilarity], result of:
          0.070446275 = score(doc=2758,freq=2.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.24783395 = fieldWeight in 2758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.03125 = fieldNorm(doc=2758)
        0.027469862 = weight(_text_:22 in 2758) [ClassicSimilarity], result of:
          0.027469862 = score(doc=2758,freq=2.0), product of:
            0.17749922 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.050687566 = queryNorm
            0.15476047 = fieldWeight in 2758, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2758)
      0.4 = coord(2/5)
    
    Abstract
    The representation of information contents by graphical maps is an extended ongoing research topic. In this paper we introduce the application of UDC codes for the development of subject maps. We use the following graphic representation methodologies: 1) Multidimensional scaling (MDS), 2) Cluster analysis, 3) Neural networks (Self-Organizing Map - SOM). Finally, we draw conclusions about the viability of applying each kind of map. 1. Introduction Advanced techniques for Information Retrieval (IR) currently make up one of the most active areas of research in the field of library and information science. New models representing document content are replacing the classic systems in which the search terms supplied by the user were compared against the indexing terms existing in the inverted files of a database. One of the topics most often studied in recent years is bibliographic browsing, a good complement to querying strategies. Since the 80's, many authors have treated this topic. For example, Ellis establishes that browsing is based on three different types of tasks: identification, familiarization and differentiation (Ellis, 1989). On the other hand, Cove indicates three different browsing types: searching browsing, general purpose browsing and serendipity browsing (Cove, 1988). Marcia Bates presents six different types (Bates, 1989), although the classification of Bawden is the one that really interests us: 1) similarity comparison, 2) structure driven, 3) global vision (Bawden, 1993). Global vision browsing implies the use of graphic representations, which we will call map displays, that allow the user to get a global idea of the nature and structure of the information in the database. In the 90's, several authors worked on this research line, developing different types of maps. One of the most active was Xia Lin, who introduced the concept of the Graphical Table of Contents (GTOC), comparing the maps to true tables of contents based on graphic representations (Lin 1996). Lin applies the SOM algorithm to his own personal bibliography, analyzed in terms of the words of the title and abstract fields, and represented in a two-dimensional map (Lin 1997). Later on, Lin applied this type of maps to create website GTOCs, through a Java application.
    Date
    12. 9.2004 14:31:22
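    Of the three map techniques the abstract above lists, multidimensional scaling is the easiest to show compactly. A minimal classical-MDS sketch in Python/NumPy follows; the dissimilarity matrix is an invented toy example, not data from the paper.

    import numpy as np

    def classical_mds(dist, k=2):
        # Embed a symmetric dissimilarity matrix into k dimensions via
        # double centering and an eigendecomposition.
        n = dist.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (dist ** 2) @ J
        eigval, eigvec = np.linalg.eigh(B)
        order = np.argsort(eigval)[::-1][:k]
        scale = np.sqrt(np.clip(eigval[order], 0, None))
        return eigvec[:, order] * scale

    # Toy dissimilarities between four hypothetical UDC classes.
    D = np.array([[0.0, 0.2, 0.8, 0.9],
                  [0.2, 0.0, 0.7, 0.8],
                  [0.8, 0.7, 0.0, 0.3],
                  [0.9, 0.8, 0.3, 0.0]])
    print(classical_mds(D))  # 4 x 2 map coordinates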
  20. Palmquist, R.A.; Kim, K.-S.: Cognitive style and on-line database search experience as predictors of Web search performance (2000) 0.04
    0.036604963 = product of:
      0.18302481 = sum of:
        0.18302481 = weight(_text_:line in 4605) [ClassicSimilarity], result of:
          0.18302481 = score(doc=4605,freq=6.0), product of:
            0.28424788 = queryWeight, product of:
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.050687566 = queryNorm
            0.6438916 = fieldWeight in 4605, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.6078424 = idf(docFreq=440, maxDocs=44218)
              0.046875 = fieldNorm(doc=4605)
      0.2 = coord(1/5)
    
    Abstract
    This study sought to investigate the effects of cognitive style (field dependent and field independent) and on-line database search experience (novice and experienced) on the WWW search performance of undergraduate college students (n=48). It also attempted to find user factors that could be used to predict search efficiency. Search performance, the dependent variable, was defined in two ways: (1) the time required for retrieving a relevant information item, and (2) the number of nodes traversed in retrieving a relevant information item. The search tasks were carried out on a university Web site and included a factual task and a topical search task of interest to the participant. Results indicated that while cognitive style (FD/FI) significantly influenced the search performance of novice searchers, the influence was greatly reduced in searchers who had on-line database search experience. Based on the findings, suggestions for possible changes to the design of the current Web interface and to user training programs are provided.

Types

  • a 1161
  • m 151
  • el 70
  • s 51
  • b 26
  • x 13
  • i 9
  • n 2
  • r 1
