Search (16926 results, page 847 of 847)

  • Filter: type_ss:"a"
  1. Dole, J.A.; Sinatra, G.M.: Reconceptualizing change in the cognitive construction of knowledge (1998) 0.00
    1.5251554E-4 = product of:
      0.003660373 = sum of:
        0.003660373 = product of:
          0.010981118 = sum of:
            0.010981118 = weight(_text_:p in 2632) [ClassicSimilarity], result of:
              0.010981118 = score(doc=2632,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.13903812 = fieldWeight in 2632, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2632)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
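
    The tree above is Lucene's ClassicSimilarity (TF-IDF) explanation of why this record matched the query term "p". As a cross-check, the sketch below recomputes the reported value from the constants shown in the tree; only the standard ClassicSimilarity formulas for tf, idf and the norm/coord factors are assumed, everything else is copied from the explanation:

      import math

      # Constants copied from the explanation tree for hit 1 (doc 2632, term "p").
      freq = 2.0               # termFreq of "p" in the field
      doc_freq = 3298          # docFreq of "p"
      max_docs = 44218         # maxDocs in the index
      query_norm = 0.021966046
      field_norm = 0.02734375

      tf = math.sqrt(freq)                             # 1.4142135
      idf = math.log(max_docs / (doc_freq + 1)) + 1    # 3.5955126

      query_weight = idf * query_norm                  # 0.078979194 (queryWeight)
      field_weight = tf * idf * field_norm             # 0.13903812  (fieldWeight)
      weight = query_weight * field_weight             # 0.010981118 (weight of _text_:p)

      # coord: 1 of 3 clauses matched in the inner query, 1 of 24 overall.
      score = weight * (1 / 3) * (1 / 24)
      print(f"{score:.7e}")                            # ~1.5251554e-04

    The remaining hits decompose in the same way; only the matched term, docFreq, fieldNorm and the coord fractions differ from record to record.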
    
    Abstract
    A major contribution of cognitive psychology has been the conceptualization of knowledge as memory representations in the form of scripts, frames, or schemata (Anderson & Pearson, 1984; Rumelhart & Ortony, 1977; Schank & Abelson, 1977; Spiro, 1980). Schemata are defined as "packets of integrated information on various topics" (Hunt, 1993, p. 530). Throughout the 1970s and 1980s, cognitive psychologists were interested in describing the nature of these packets of information. Spiro (1980) demonstrated the constructive and complex nature of schemata and highlighted contextual factors--including tasks, texts, and situational contexts--that influence how knowledge is organized in memory. More recently, cognitive researchers have come to view knowledge and schemata as multidimensional (Jetton, Rupley, & Willson, 1995). For example, researchers have differentiated novices' and experts' knowledge structures in subject-matter domains (Chase & Simon, 1973; Chi, Glaser, & Rees, 1982; Larkin, McDermott, Simon, & Simon, 1981; Voss, Greene, Post, & Penner, 1983). Researchers have examined discourse knowledge--knowledge about language and how it works (McCutchen, 1986). Another aspect of knowledge that has been extensively studied is strategic knowledge--knowledge about procedures for accomplishing a goal or task (Alexander & Judy, 1988; J. R. Anderson, 1983a; Prawat, 1989).
  2. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.00
    1.3072761E-4 = product of:
      0.0031374625 = sum of:
        0.0031374625 = product of:
          0.009412387 = sum of:
            0.009412387 = weight(_text_:p in 1202) [ClassicSimilarity], result of:
              0.009412387 = score(doc=1202,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.11917553 = fieldWeight in 1202, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1202)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification, the Library of Congress Classification (LCC) and the Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together... You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience... those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
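
    The abstract does not spell out how "keyword vector clusters" would be computed. One plausible reading, sketched below purely as an assumption, is to represent each catalogued item as a TF-IDF keyword vector and cluster those vectors; clusters that mix items from different subclasses would then be candidates for the "semantic bleeding" the author mentions. The sample records, subclass codes and use of scikit-learn are illustrative, not taken from the article:

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.cluster import KMeans

      # Hypothetical records: (classification subclass, title/abstract text).
      records = [
          ("QA76", "neural networks for information retrieval"),
          ("QA76", "machine learning and text classification"),
          ("Z699", "online catalogs and subject retrieval"),
          ("Z699", "classification schemes in digital libraries"),
      ]

      texts = [text for _, text in records]
      vectors = TfidfVectorizer().fit_transform(texts)    # keyword vectors

      # Cluster the keyword vectors; clusters spanning several subclasses
      # hint at semantic bleeding between neighbouring classes.
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

      for (subclass, _), label in zip(records, labels):
          print(subclass, "-> cluster", label)
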
  3. Bertolucci, K.: Happiness is taxonomy : four structures for Snoopy - libraries' method of categorizing and classification (2003) 0.00
    1.3072761E-4 = product of:
      0.0031374625 = sum of:
        0.0031374625 = product of:
          0.009412387 = sum of:
            0.009412387 = weight(_text_:p in 1212) [ClassicSimilarity], result of:
              0.009412387 = score(doc=1212,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.11917553 = fieldWeight in 1212, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1212)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Content
    Cf.: http://findarticles.com/p/articles/mi_m0FWE/is_3_7/ai_99011617.
  4. Ewbank, L.: Crisis in subject cataloging and retrieval (1996) 0.00
    1.24004E-4 = product of:
      0.002976096 = sum of:
        0.002976096 = product of:
          0.005952192 = sum of:
            0.005952192 = weight(_text_:22 in 5580) [ClassicSimilarity], result of:
              0.005952192 = score(doc=5580,freq=2.0), product of:
                0.07692135 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021966046 = queryNorm
                0.07738023 = fieldWeight in 5580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=5580)
          0.5 = coord(1/2)
      0.041666668 = coord(1/24)
    
    Source
    Cataloging and classification quarterly. 22(1996) no.2, p.90-97
  5. Smiraglia, R.P.: Curating and virtual shelves : an editorial (2006) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 409) [ClassicSimilarity], result of:
              0.007843656 = score(doc=409,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 409, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=409)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Content
    Librarianship incorporates the tools of knowledge organization as part of its role as cultural disseminator. Subject headings and classification were both intended by their 19th-century promulgators - perhaps most notably Dewey and Cutter - to facilitate learning by grouping materials of high quality together. We might call this enhanced serendipity if we think it happens by accident or act of fate, or we might call it curatorship if we realize the responsibility inherent in our social role. The cataloger's job always has been to place each work sensitively among other works related to it, and to make the relationships explicit to facilitate and even encourage selection (see Miksa 1983). Schallier (2004) reported on the use of classification in an online catalog to enhance just such a curatorial purpose. UDC classification codes were exploded into linguistic strings to allow users to search, not just for a given term, but for the terms that occur around it - that is, terms that are adjacent in the classification. These displays are used alongside LCSH to provide enhanced serendipity for users. What caught my attention was the intention of the project (p. 271): "UDC permits librarians to build virtual library shelves, where a document's subjects can be described in thematic categories rather than in detailed verbal terms." And: "It is our experience that most end users are not familiar with large controlled vocabularies. UDC could be an answer to this, since its alphanumeric makeup could be used to build a tree structure of terms, which would guide end users in their searches." There are other implications from this project, including background linkage from the UDC codes that drive the "virtual shelves" to the subject terms that drive the initial classification. Knowledge organization has consequences in both theory and application.
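
    Schallier's "virtual shelves", as described above, amount to exploding a UDC notation into the chain of broader classes around it. A minimal sketch of that explosion follows; the caption table is invented for illustration (a real system would draw captions from the UDC Master Reference File) and the handling of the decimal point is deliberately simplified:

      # Illustrative captions only - not an authoritative UDC table.
      CAPTIONS = {
          "0": "Science and knowledge. Organization",
          "02": "Librarianship",
          "025": "Operations of libraries",
          "025.4": "Classification and indexing",
      }

      def explode(udc_code: str):
          """Yield every prefix of a UDC notation, broadest class first."""
          digits = udc_code.replace(".", "")
          for i in range(1, len(digits) + 1):
              prefix = digits[:i]
              if len(prefix) > 3:                         # re-insert the point
                  prefix = prefix[:3] + "." + prefix[3:]  # after the third digit
              yield prefix

      # A "virtual shelf" for a document classed at 025.4:
      for code in explode("025.4"):
          print(code, "-", CAPTIONS.get(code, "(caption not in sample table)"))
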
  6. Crane, G.; Jones, A.: Text, information, knowledge and the evolving record of humanity (2006) 0.00
    1.08939676E-4 = product of:
      0.0026145522 = sum of:
        0.0026145522 = product of:
          0.007843656 = sum of:
            0.007843656 = weight(_text_:p in 1182) [ClassicSimilarity], result of:
              0.007843656 = score(doc=1182,freq=2.0), product of:
                0.078979194 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.021966046 = queryNorm
                0.099312946 = fieldWeight in 1182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1182)
          0.33333334 = coord(1/3)
      0.041666668 = coord(1/24)
    
    Abstract
    Consider a sentence such as "the current price of tea in China is 35 cents per pound." In a library with millions of books we might find many statements of the above form that we could capture today with relatively simple rules: rather than pursuing every variation of a statement, programs can wait, like predators at a water hole, for their informational prey to reappear in a standard linguistic pattern. We can make inferences from sentences such as "NAME1 born at NAME2 in DATE" that NAME1 more likely than not represents a person and NAME2 a place, and then convert the statement into a proposition about a person born at a given place and time. The changing price of tea in China, pedestrian birth and death dates, or other basic statements may not be truth and beauty in the Phaedrus, but a digital library that could plot the prices of various commodities in different markets over time, plot the various lifetimes of individuals, or extract and classify many events would be very useful. Services such as the Syllabus Finder and H-Bot (which Dan Cohen describes elsewhere in this issue of D-Lib) represent examples of information extraction already in use. H-Bot, in particular, builds on our evolving ability to extract information from very large corpora such as the billions of web pages available through the Google API. Aside from identifying higher-order statements, however, users also want to search and browse named entities: they want to read about "C. P. E. Bach" rather than his father "Johann Sebastian", or about "Cambridge, Maryland" without hearing about "Cambridge, Massachusetts", Cambridge in the UK, or any of the other Cambridges scattered around the world. Named entity identification is a well-established area with an ongoing literature. The Natural Language Processing Research Group at the University of Sheffield has developed its open-source General Architecture for Text Engineering (GATE) for years, while IBM's Unstructured Information Management Architecture (UIMA) is "available as open source software to provide a common foundation for industry and academia." Powerful tools are thus freely available, and more demanding users can draw upon the published literature to develop their own systems. Major search engines such as Google and Yahoo also integrate increasingly sophisticated tools to categorize and identify places. The software resources are rich and expanding. The reference works on which these systems depend, however, are ill-suited for historical analysis. First, simple gazetteers and similar authority lists quickly grow too big for useful information extraction. They provide us with potential entities against which to match textual references, but existing electronic reference works assume that human readers can use their knowledge of geography and of the immediate context to pick the right Boston from the Bostons in the Getty Thesaurus of Geographic Names (TGN). With the crucial exception of geographic location, however, the TGN records do not provide any machine-readable clues: we cannot tell which Bostons are large or small. If we are analyzing a document published in 1818, we cannot filter out those places that did not yet exist or that had different names: "Jefferson Davis" is not the name of a parish in Louisiana (tgn,2000880) or a county in Mississippi (tgn,2001118) until after the Civil War.
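
    The "predators at a water hole" strategy described above - wait for a statement to reappear in a fixed linguistic pattern, then convert it into a proposition - can be illustrated with a toy pattern matcher. The pattern and the sample sentence are invented for this sketch; production systems such as GATE or UIMA, mentioned in the abstract, are of course far more robust:

      import re

      # Toy pattern for statements of the form "NAME1 was born in/at NAME2 in DATE".
      PATTERN = re.compile(
          r"(?P<person>[A-Z][a-zA-Z. ]+?) was born (?:at|in) "
          r"(?P<place>[A-Z][a-zA-Z ]+?) (?:in|on) (?P<date>\d{4})"
      )

      text = "Johann Sebastian Bach was born in Eisenach in 1685."

      for m in PATTERN.finditer(text):
          # Convert the surface match into a (person, place, date) proposition.
          print((m.group("person"), m.group("place"), m.group("date")))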

Types

  • el 626
  • b 38
  • p 3
  • i 1