Search (3980 results, page 2 of 199)

  1. Nyholm, J.: ¬The code in the light of the critics (1941) 0.07
    0.07453426 = product of:
      0.14906852 = sum of:
        0.14906852 = product of:
          0.29813704 = sum of:
            0.29813704 = weight(_text_:light in 6196) [ClassicSimilarity], result of:
              0.29813704 = score(doc=6196,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                1.02094 = fieldWeight in 6196, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.125 = fieldNorm(doc=6196)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
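The explain tree above is Lucene ClassicSimilarity (TF-IDF) output. A minimal sketch reproducing its numbers for result 1 (doc 6196, term "light"), assuming the standard Classic formula tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm:

```python
import math

# Values copied from the explain tree of result 1 (doc 6196, term "light").
freq       = 2.0
idf        = 5.7753086    # idf(docFreq=372, maxDocs=44218)
query_norm = 0.050563898  # queryNorm
field_norm = 0.125        # fieldNorm(doc=6196)

tf           = math.sqrt(freq)              # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm             # 0.2920221 = queryWeight
field_weight = tf * idf * field_norm        # 1.02094   = fieldWeight
leaf_score   = query_weight * field_weight  # 0.29813704 = weight(_text_:light)

# Two coord(1/2) factors on the way up the tree halve the leaf score twice.
final_score = leaf_score * 0.5 * 0.5        # 0.07453426

print(round(final_score, 8))
```

Every product and sum printed in the tree can be checked the same way.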
    
  2. Notess, G.R.: Northern Light : new search engine for the Web and full-text articles (1998) 0.07
    0.07453426 = product of:
      0.14906852 = sum of:
        0.14906852 = product of:
          0.29813704 = sum of:
            0.29813704 = weight(_text_:light in 3274) [ClassicSimilarity], result of:
              0.29813704 = score(doc=3274,freq=8.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                1.02094 = fieldWeight in 3274, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3274)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Northern Light Search, from Northern Light Technology, is an Internet search engine that searches both WWW pages and full-text articles and sorts its results into folders based on keywords, source and other criteria. Prices for articles range from free to $14 an article; full citations are free. Details its scope and search syntax, and describes its use of custom folders, record structure, and advanced searching
    Object
    Northern Light
  3. Laan, H. van der: Northern Light : nieuw licht aan een veranderend firmament (1997) 0.07
    0.07291536 = product of:
      0.14583072 = sum of:
        0.14583072 = product of:
          0.29166144 = sum of:
            0.29166144 = weight(_text_:light in 1479) [ClassicSimilarity], result of:
              0.29166144 = score(doc=1479,freq=10.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.99876493 = fieldWeight in 1479, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1479)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Northern Light Internet search engine was inaugurated in Aug 1997. The service differs from existing services in that it classifies search results into a set of custom search folders and, for a modest fee, provides access to a range of bibliographical resources not normally available via the Web. Payment may be made online via a simple transaction. The custom search folders enable users to refine searches more easily than with other search engines. However, the search commands do not permit truncated or wild-card searches
    Footnote
    Translation of the title: Northern Light: new light in a changing firmament
    Object
    Northern Light
  4. Kandel, E.R.: Reductionism in art and brain science : bridging the two cultures (2016) 0.07
    0.068468735 = product of:
      0.13693747 = sum of:
        0.13693747 = sum of:
          0.11295999 = weight(_text_:light in 5305) [ClassicSimilarity], result of:
            0.11295999 = score(doc=5305,freq=6.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.38682 = fieldWeight in 5305, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.02734375 = fieldNorm(doc=5305)
          0.023977486 = weight(_text_:22 in 5305) [ClassicSimilarity], result of:
            0.023977486 = score(doc=5305,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.1354154 = fieldWeight in 5305, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=5305)
      0.5 = coord(1/2)
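This entry matches two query clauses ("light" and "22"), so the explain tree sums the two leaf weights before applying the coord factor (in Classic scoring, the fraction of query clauses that matched). A minimal sketch of that combination step, using the weights shown above:

```python
# Leaf weights from the explain tree of result 4 (doc 5305).
weight_light = 0.11295999   # weight(_text_:light in 5305)
weight_22    = 0.023977486  # weight(_text_:22 in 5305)

# A BooleanQuery sums the weights of its matching clauses,
# then multiplies by the coord factor shown in the tree (1/2 here).
coord = 0.5
score = (weight_light + weight_22) * coord  # 0.068468735

print(round(score, 9))
```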
    
    Abstract
    Are art and science separated by an unbridgeable divide? Can they find common ground? In this new book, neuroscientist Eric R. Kandel, whose remarkable scientific career and deep interest in art give him a unique perspective, demonstrates how science can inform the way we experience a work of art and seek to understand its meaning. Kandel illustrates how reductionism (the distillation of larger scientific or aesthetic concepts into smaller, more tractable components) has been used by scientists and artists alike to pursue their respective truths. He draws on his Nobel Prize-winning work revealing the neurobiological underpinnings of learning and memory in sea slugs to shed light on the complex workings of the mental processes of higher animals. In Reductionism in Art and Brain Science, Kandel shows how this radically reductionist approach, applied to the most complex puzzle of our time, the brain, has been employed by modern artists who distill their subjective world into color, form, and light. Kandel demonstrates through bottom-up sensory and top-down cognitive functions how science can explore the complexities of human perception and help us to perceive, appreciate, and understand great works of art. At the heart of the book is an elegant elucidation of the contribution of reductionism to the evolution of modern art and its role in a monumental shift in artistic perspective. Reductionism steered the transition from figurative art to the first explorations of abstract art reflected in the works of Turner, Monet, Kandinsky, Schoenberg, and Mondrian. Kandel explains how, in the postwar era, Pollock, de Kooning, Rothko, Louis, Turrell, and Flavin used a reductionist approach to arrive at their abstract expressionism and how Katz, Warhol, Close, and Sandback built upon the advances of the New York School to reimagine figurative and minimal art. Featuring captivating drawings of the brain alongside full-color reproductions of modern art masterpieces, this book draws out the common concerns of science and art and how they illuminate each other.
    Content
    The emergence of a reductionist school of abstract art in New York -- The Beginning of a Scientific Approach to Art -- The Biology of the Beholder's Share: Visual Perception and Bottom-Up Processing in Art -- The Biology of Learning and Memory: Top-Down Processing in Art -- A Reductionist Approach to Art. Reductionism in the Emergence of Abstract Art -- Mondrian and the Radical Reduction of the Figurative Image -- The New York School of Painters -- How the Brain Processes and Perceives Abstract Images -- From Figuration to Color Abstraction -- Color and the Brain -- A Focus on Light -- A Reductionist Influence on Figuration -- The Emerging Dialogue Between Abstract Art and Science. Why Is Reductionism Successful in Art? -- A Return to the Two Cultures
    Date
    14. 6.2019 12:22:37
  5. #220 0.07
    0.067818575 = product of:
      0.13563715 = sum of:
        0.13563715 = product of:
          0.2712743 = sum of:
            0.2712743 = weight(_text_:22 in 219) [ClassicSimilarity], result of:
              0.2712743 = score(doc=219,freq=4.0), product of:
                0.17706616 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050563898 = queryNorm
                1.5320505 = fieldWeight in 219, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=219)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 20:02:22
  6. #1387 0.07
    0.067818575 = product of:
      0.13563715 = sum of:
        0.13563715 = product of:
          0.2712743 = sum of:
            0.2712743 = weight(_text_:22 in 1386) [ClassicSimilarity], result of:
              0.2712743 = score(doc=1386,freq=4.0), product of:
                0.17706616 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050563898 = queryNorm
                1.5320505 = fieldWeight in 1386, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=1386)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 20:02:22
  7. #2103 0.07
    0.067818575 = product of:
      0.13563715 = sum of:
        0.13563715 = product of:
          0.2712743 = sum of:
            0.2712743 = weight(_text_:22 in 2102) [ClassicSimilarity], result of:
              0.2712743 = score(doc=2102,freq=4.0), product of:
                0.17706616 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050563898 = queryNorm
                1.5320505 = fieldWeight in 2102, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.21875 = fieldNorm(doc=2102)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1998 20:02:22
  8. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.066924065 = product of:
      0.13384813 = sum of:
        0.13384813 = product of:
          0.40154436 = sum of:
            0.40154436 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40154436 = score(doc=1826,freq=2.0), product of:
                0.42868128 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050563898 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
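The idf values repeated throughout these trees are consistent with the ClassicSimilarity formula idf = 1 + ln(maxDocs / (docFreq + 1)). A quick numeric check against the three document frequencies that occur on this page:

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    # Lucene ClassicSimilarity inverse document frequency.
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# (docFreq, idf as printed in the explain trees), all with maxDocs=44218.
cases = [(372, 5.7753086), (3622, 3.5018296), (24, 8.478011)]
for doc_freq, expected in cases:
    print(doc_freq, round(classic_idf(doc_freq, 44218), 7))
```

The rarest term on the page ("3a", docFreq=24) accordingly gets the highest idf.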
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
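The Source link above is a Google redirect; the actual target is percent-encoded in its url parameter. A small sketch recovering it with the standard library (the tracking parameters are truncated here with "..." placeholders):

```python
from urllib.parse import urlparse, parse_qs

redirect = ("http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5"
            "&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de"
            "%2Fvolltexte%2Fdocuments%2F3131107&ei=...&usg=...")

# parse_qs percent-decodes each parameter value automatically.
target = parse_qs(urlparse(redirect).query)["url"][0]
print(target)  # http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
```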
  9. Hann, W.: Search insider (1997) 0.07
    0.0658796 = product of:
      0.1317592 = sum of:
        0.1317592 = product of:
          0.2635184 = sum of:
            0.2635184 = weight(_text_:light in 942) [ClassicSimilarity], result of:
              0.2635184 = score(doc=942,freq=4.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.90239197 = fieldWeight in 942, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.078125 = fieldNorm(doc=942)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Explains how to use the search engine Northern Light, the first to offer simultaneous access to a 'special collection' of 1,800 'quality sources' and the Internet, and which collects results and ranks them for relevance. Describes the purchasing procedure and pricing
    Object
    Northern Light
  10. Wiggins, R.: Vendors future : Northern Light - delivering high-quality content to a large Internet audience (1997) 0.07
    0.06521748 = product of:
      0.13043496 = sum of:
        0.13043496 = product of:
          0.26086992 = sum of:
            0.26086992 = weight(_text_:light in 2890) [ClassicSimilarity], result of:
              0.26086992 = score(doc=2890,freq=8.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.89332247 = fieldWeight in 2890, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2890)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A new Web-based service, Northern Light, aims to serve large populations of users by delivering high-quality content on both general and narrow topics. Analyzes the trends that have led to an explosion of access to information on the Internet, but also to difficulties in finding relevant, quality information. Describes the Northern Light search engine, which improves naive users' searching through its innovative refinement scheme, Custom Search folders, but also offers a more sophisticated search syntax for finer control. Searching is free, as is access to many Web sites, but access to full-text articles from a special collection of journals is fee-based. Advocates this free search / pay-for-content payment model for the wider information industry
    Object
    Northern Light
  11. Madison, O.M.A.: Standards in light of new technologies : functional requirements for bibliographic records (1999) 0.07
    0.06521748 = product of:
      0.13043496 = sum of:
        0.13043496 = product of:
          0.26086992 = sum of:
            0.26086992 = weight(_text_:light in 4182) [ClassicSimilarity], result of:
              0.26086992 = score(doc=4182,freq=2.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.89332247 = fieldWeight in 4182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4182)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Northern Light bridges two worlds : innovative search service expands possibilities for ordinary Web users (1998) 0.06
    0.06454857 = product of:
      0.12909713 = sum of:
        0.12909713 = product of:
          0.25819427 = sum of:
            0.25819427 = weight(_text_:light in 2607) [ClassicSimilarity], result of:
              0.25819427 = score(doc=2607,freq=6.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.88416 = fieldWeight in 2607, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2607)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Northern Light is an innovative service which creatively blends Web and proprietary online searching. It simultaneously searches the Web and a large, multidisciplinary, full text database, using a relevance system with some clever tweaks. Its risky pricing scheme depends upon users' willingness to pay for proprietary content
    Object
    Northern Light
  13. Jacsó, P.: Northern Light (1998) 0.06
    0.06454857 = product of:
      0.12909713 = sum of:
        0.12909713 = product of:
          0.25819427 = sum of:
            0.25819427 = weight(_text_:light in 3310) [ClassicSimilarity], result of:
              0.25819427 = score(doc=3310,freq=6.0), product of:
                0.2920221 = queryWeight, product of:
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.050563898 = queryNorm
                0.88416 = fieldWeight in 3310, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.7753086 = idf(docFreq=372, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Northern Light is part WWW search engine and part full-text database. The latter is called the Special Collection and consists of full-text articles from 1,800 journals, newswires and other resources. Searching, bibliographic information and summaries are free, with prices per article ranging from $1 to $4, or a monthly subscription for 50 documents from an 880-journal subset. Highlights weaknesses with the software
    Object
    Northern Light
  14. Byström, K.: Information seekers in context : an analysis of the 'doer' in INSU studies (1999) 0.06
    0.06371069 = product of:
      0.12742138 = sum of:
        0.12742138 = sum of:
          0.09316782 = weight(_text_:light in 297) [ClassicSimilarity], result of:
            0.09316782 = score(doc=297,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.31904373 = fieldWeight in 297, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=297)
          0.034253553 = weight(_text_:22 in 297) [ClassicSimilarity], result of:
            0.034253553 = score(doc=297,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.19345059 = fieldWeight in 297, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=297)
      0.5 = coord(1/2)
    
    Abstract
    In information needs, seeking and use (INSU) research, individuals have most commonly been perceived as users (e.g., Kuhlthau, 1991; Dervin & Nilan, 1986; Dervin, 1989; Belkin, 1980). The concept 'user' originates from the user of libraries and other information services and information systems. Over the years the scope of the concept has become wider, and it is nowadays often understood in the sense of seekers of information (e.g., Wilson, 1981; Marchionini, 1995) and users of information (e.g., Streatfield, 1983). Nevertheless, the concept has remained ambiguous by being on the one hand universal and on the other hand extremely specific. The purpose of this paper is to map and evaluate views on people whose information behaviour has been in one way or another the core of our research area. The goal is to shed some light on various relationships between the different aspects of doers in INSU studies. The paper is inspired by Dervin's (1997) analysis of context, where she identified among other themes the nature of the subject by contrasting a 'transcendental individual' with a 'decentered subject', and Talja's (1997) presentation about constituting 'information' and 'user' from the discourse analytic viewpoint as opposed to the cognitive viewpoint. Instead of the metatheoretical approach applied by Dervin and Talja, a more concrete approach is valid in the present analysis, where no direct arguments for or against the underlying metatheories are itemised. The focus is on doers in INSU studies, leaving other, even closely-related concepts (i.e., information, information seeking, knowledge etc.) outside the scope of the paper.
    Date
    22. 3.2002 9:55:52
  15. Song, D.; Bruza, P.D.: Towards context sensitive information inference (2003) 0.06
    0.06371069 = product of:
      0.12742138 = sum of:
        0.12742138 = sum of:
          0.09316782 = weight(_text_:light in 1428) [ClassicSimilarity], result of:
            0.09316782 = score(doc=1428,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.31904373 = fieldWeight in 1428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1428)
          0.034253553 = weight(_text_:22 in 1428) [ClassicSimilarity], result of:
            0.034253553 = score(doc=1428,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.19345059 = fieldWeight in 1428, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1428)
      0.5 = coord(1/2)
    
    Abstract
    Humans can make hasty, but generally robust, judgements about what a text fragment is, or is not, about. Such judgements are termed information inference. This article furnishes an account of information inference from a psychologistic stance. By drawing on theories from nonclassical logic and applied cognition, an information inference mechanism is proposed that makes inferences via computations of information flow through an approximation of a conceptual space. Within a conceptual space information is represented geometrically. In this article, geometric representations of words are realized as vectors in a high-dimensional semantic space, which is automatically constructed from a text corpus. Two approaches are presented for priming vector representations according to context. The first approach uses a concept combination heuristic to adjust the vector representation of a concept in the light of the representation of another concept. The second approach computes a prototypical concept on the basis of exemplar trace texts and moves it in the dimensional space according to the context. Information inference is evaluated by measuring the effectiveness of query models derived by information flow computations. Results show that information flow contributes significantly to query model effectiveness, particularly with respect to precision. Moreover, retrieval effectiveness compares favorably with two probabilistic query models, and another based on semantic association. More generally, this article can be seen as a contribution towards realizing operational systems that mimic text-based human reasoning.
    Date
    22. 3.2003 19:35:46
  16. Dominich, S.: Mathematical foundations of information retrieval (2001) 0.06
    0.06371069 = product of:
      0.12742138 = sum of:
        0.12742138 = sum of:
          0.09316782 = weight(_text_:light in 1753) [ClassicSimilarity], result of:
            0.09316782 = score(doc=1753,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.31904373 = fieldWeight in 1753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1753)
          0.034253553 = weight(_text_:22 in 1753) [ClassicSimilarity], result of:
            0.034253553 = score(doc=1753,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.19345059 = fieldWeight in 1753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1753)
      0.5 = coord(1/2)
    
    Abstract
    This book offers a comprehensive and consistent mathematical approach to information retrieval (IR) without which no implementation is possible, and sheds an entirely new light upon the structure of IR models. It contains the descriptions of all IR models in a unified formal style and language, along with examples for each, thus offering a comprehensive overview of them. The book also creates mathematical foundations and a consistent mathematical theory (including all mathematical results achieved so far) of IR as a stand-alone mathematical discipline, which thus can be read and taught independently. Also, the book contains all necessary mathematical knowledge on which IR relies, to help the reader avoid searching different sources. The book will be of interest to computer or information scientists, librarians, mathematicians, undergraduate students and researchers whose work involves information retrieval.
    Date
    22. 3.2008 12:26:32
  17. Dousa, T.M.: ¬The simple and the complex in E. C. Richardson's theory of classification : observations on an early KO model of the relationship between ontology and epistemology (2010) 0.06
    0.06371069 = product of:
      0.12742138 = sum of:
        0.12742138 = sum of:
          0.09316782 = weight(_text_:light in 3509) [ClassicSimilarity], result of:
            0.09316782 = score(doc=3509,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.31904373 = fieldWeight in 3509, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3509)
          0.034253553 = weight(_text_:22 in 3509) [ClassicSimilarity], result of:
            0.034253553 = score(doc=3509,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.19345059 = fieldWeight in 3509, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3509)
      0.5 = coord(1/2)
    
    Abstract
    In light of ongoing debates about ontological vs. epistemological approaches to knowledge organization (KO), this paper examines E. C. Richardson's treatment of ontology and epistemology in his theory of classification. According to Richardson, there is a natural order of things in the world accessible to human cognition, which may be expressed in two classificatory orders: evolutionary classification, which ranges classes of things from the most simple to the most complex, and logical classification, which ranges classes of things in the inverse order, from the most complex to the most simple. Evolutionary classification reflects ontological order and logical classification reflects epistemological order: both are faces of a single natural order. Such a view requires adherence to a representationalist, or, in Hjørland's (2008) terms, positivist understanding of epistemology, wherein human knowledge faithfully mirrors the structure of the external world. Richardson's harmonization of ontology and epistemology will find little favor among proponents of the currently fashionable pragmatist approach to KO. Nevertheless, it constitutes an early version of what Gnoli (2004) terms a naturalistic epistemology, which, once deepened and clarified, offers the best prospects for an explicit expression of both the ontological and epistemological dimensions of knowledge within a single classification of general scope.
    Pages
    S.15-22
  18. Stvilia, B.; Hinnant, C.C.; Schindler, K.; Worrall, A.; Burnett, G.; Burnett, K.; Kazmer, M.M.; Marty, P.F.: Composition of scientific teams and publication productivity at a national science lab (2011) 0.06
    0.06371069 = product of:
      0.12742138 = sum of:
        0.12742138 = sum of:
          0.09316782 = weight(_text_:light in 4191) [ClassicSimilarity], result of:
            0.09316782 = score(doc=4191,freq=2.0), product of:
              0.2920221 = queryWeight, product of:
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.050563898 = queryNorm
              0.31904373 = fieldWeight in 4191, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.7753086 = idf(docFreq=372, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4191)
          0.034253553 = weight(_text_:22 in 4191) [ClassicSimilarity], result of:
            0.034253553 = score(doc=4191,freq=2.0), product of:
              0.17706616 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050563898 = queryNorm
              0.19345059 = fieldWeight in 4191, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4191)
      0.5 = coord(1/2)
    
    Abstract
    The production of scientific knowledge has evolved from a process of inquiry largely based on the activities of individual scientists to one grounded in the collaborative efforts of specialized research teams. This shift brings to light a new question: how the composition of scientific teams affects their production of knowledge. This study employs data from 1,415 experiments conducted at the National High Magnetic Field Laboratory (NHMFL) between 2005 and 2008 to identify and select a sample of 89 teams and examine whether team diversity and network characteristics affect productivity. The study examines how the diversity of science teams along several variables affects overall team productivity. Results indicate that several diversity measures are associated with network position and team productivity. Teams with mixed institutional associations were more central to the overall network compared with teams that primarily comprised NHMFL's own scientists. Team cohesion was positively related to productivity. The study indicates that high productivity in teams is associated with high disciplinary diversity and low seniority diversity of team membership. Finally, an increase in the share of senior members negatively affects productivity, and teams with members in central structural positions perform better than other teams.
    Date
    22. 1.2011 13:19:42
  19. Baião Salgado Silva, G.; Lima, G.Â. Borém de Oliveira: Using topic maps in establishing compatibility of semantically structured hypertext contents (2012) 0.06
    Abstract
    Considering the characteristics of hypertext systems and problems such as cognitive overload and the disorientation of users, this project studies subject hypertext documents that have undergone conceptual structuring using facets for content representation and improvement of information retrieval during navigation. The main objective was to assess the possibility of the application of topic map technology for automating the compatibilization process of these structures. For this purpose, two dissertations from the UFMG Information Science Post-Graduation Program were adopted as samples. Both dissertations had been duly analyzed and structured on the MHTX (Hypertextual Map) prototype database. The faceted structures of both dissertations, which had been represented in conceptual maps, were then converted into topic maps. It was then possible to use the merge property of the topic maps to promote the semantic interrelationship between the maps and, consequently, between the hypertextual information resources proper. The merge results were then analyzed in the light of theories dealing with the compatibilization of languages developed within the realm of information technology and librarianship from the 1960s on. The main goals accomplished were: (a) the detailed conceptualization of the merge process of the topic maps, considering the possible compatibilization levels and the applicability of this technology in the integration of faceted structures; and (b) the production of a detailed sequence of steps that may be used in the implementation of topic maps based on faceted structures.
    Date
    22. 2.2013 11:39:23
  20. Lee, J.H.; Price, R.: User experience with commercial music services : an empirical exploration (2016) 0.06
    Abstract
The music information retrieval (MIR) community has long understood the role of evaluation as a critical component of successful information retrieval systems. Over the past several years, it has also become evident that user-centered evaluation based on realistic tasks is essential for creating systems that are commercially marketable. Although user-oriented research has been increasing, the MIR field still lacks holistic, user-centered approaches to evaluating music services beyond measuring the performance of search or classification algorithms. In light of this need, we conducted a user study exploring how users evaluate their overall experience with existing popular commercial music services, asking about their interactions with the system as well as situational and personal characteristics. In this paper, we present a qualitative heuristic evaluation of commercial music services based on Jakob Nielsen's 10 usability heuristics for user interface design, and also discuss 8 additional criteria that may be used for the holistic evaluation of user experience in MIR systems. Finally, we recommend areas of future user research raised by trends and patterns that surfaced from this user study.
    Date
    17. 3.2016 19:22:15
