Search (3769 results, page 1 of 189)

  1. Swigon, M.: Information limits : definition, typology and types (2011) 0.26
    0.2626864 = product of:
      0.5253728 = sum of:
        0.5253728 = sum of:
          0.46858665 = weight(_text_:limits in 300) [ClassicSimilarity], result of:
            0.46858665 = score(doc=300,freq=10.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              1.3295548 = fieldWeight in 300, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0625 = fieldNorm(doc=300)
          0.056786157 = weight(_text_:22 in 300) [ClassicSimilarity], result of:
            0.056786157 = score(doc=300,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.30952093 = fieldWeight in 300, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=300)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - This paper seeks to organize the extensive field and to compile the complete list of information limits. Design/methodology/approach - A thorough analysis of the field's literature from the 1960s to the present has been performed. Findings - A universal typology of information limits has been proposed. A list of barriers mentioned in the literature of the subject has been compiled. Research limitations/implications - The term "information limits" is not commonly used. Originality/value - The complete list of information limits with bibliographical hints (helpful for future research) is presented.
    Date
    12. 7.2011 18:22:52
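    The explain tree above can be reproduced numerically. A minimal sketch (Python), assuming the classic Lucene TF-IDF formulas named in the output; queryNorm is copied from the explain itself, since it depends on query clauses that are not all visible in this excerpt:

```python
import math

# Reproduce the ClassicSimilarity explain tree for hit 1 (doc 300,
# query terms "limits" and "22"). queryNorm is taken verbatim from the
# explain output; it is derived from every clause of the query.
QUERY_NORM = 0.052391093

def term_score(freq, doc_freq, max_docs, field_norm, query_norm=QUERY_NORM):
    tf = math.sqrt(freq)                             # tf(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # queryWeight
    field_weight = tf * idf * field_norm             # fieldWeight
    return query_weight * field_weight

limits = term_score(freq=10.0, doc_freq=143,  max_docs=44218, field_norm=0.0625)
tt     = term_score(freq=2.0,  doc_freq=3622, max_docs=44218, field_norm=0.0625)
score  = (limits + tt) * 0.5        # coord(1/2): 1 of 2 top-level clauses matched

print(round(limits, 7))             # ≈ 0.4685866
print(round(score, 7))              # ≈ 0.2626864
```

    Each intermediate matches the explain output: queryWeight = idf × queryNorm (0.35243878), fieldWeight = tf × idf × fieldNorm (1.3295548), and the final score is the clause sum scaled by coord(1/2).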
  2. Marchiori, M.: The limits of Web metadata, and beyond (1998) 0.15
    0.15450154 = product of:
      0.30900308 = sum of:
        0.30900308 = sum of:
          0.2593152 = weight(_text_:limits in 3383) [ClassicSimilarity], result of:
            0.2593152 = score(doc=3383,freq=4.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.73577374 = fieldWeight in 3383, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3383)
          0.04968789 = weight(_text_:22 in 3383) [ClassicSimilarity], result of:
            0.04968789 = score(doc=3383,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.2708308 = fieldWeight in 3383, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3383)
      0.5 = coord(1/2)
    
    Abstract
    Highlights 2 major problems of WWW metadata: it will take some time before a reasonable number of people start using metadata to provide a better Web classification, and no one can guarantee that a majority of Web objects will ever be properly classified via metadata. Addresses the problem of how to cope with the intrinsic limits of Web metadata, proposes a method to solve these problems, and shows evidence of its effectiveness. Examines the important problem of the critical mass required in the WWW for metadata to be really useful.
    Date
    1. 8.1996 22:08:06
  3. Bachiochi, D.: Usability studies and designing navigational aids for the World Wide Web (1997) 0.13
    0.13317224 = product of:
      0.2663445 = sum of:
        0.2663445 = sum of:
          0.20955832 = weight(_text_:limits in 2402) [ClassicSimilarity], result of:
            0.20955832 = score(doc=2402,freq=2.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.59459496 = fieldWeight in 2402, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0625 = fieldNorm(doc=2402)
          0.056786157 = weight(_text_:22 in 2402) [ClassicSimilarity], result of:
            0.056786157 = score(doc=2402,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.30952093 = fieldWeight in 2402, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2402)
      0.5 = coord(1/2)
    
    Abstract
    Describes how usability testing was used to validate design recommendations for WWW navigation aids. The results show a need for navigational aids that are related to the particular Website and located beneath the browser buttons. Usability criteria were established that limit page changes to 4 and search times to 60 seconds for information retrieval.
    Date
    1. 8.1996 22:08:06
  4. Lespinasse, K.: TREC: une conference pour l'evaluation des systemes de recherche d'information (1997) 0.13
    0.13317224 = product of:
      0.2663445 = sum of:
        0.2663445 = sum of:
          0.20955832 = weight(_text_:limits in 744) [ClassicSimilarity], result of:
            0.20955832 = score(doc=744,freq=2.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.59459496 = fieldWeight in 744, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0625 = fieldNorm(doc=744)
          0.056786157 = weight(_text_:22 in 744) [ClassicSimilarity], result of:
            0.056786157 = score(doc=744,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.30952093 = fieldWeight in 744, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=744)
      0.5 = coord(1/2)
    
    Abstract
    TREC is an annual conference held in the USA devoted to electronic systems for large full-text information searching. The conference deals with evaluation and comparison techniques developed since 1992 by participants from the research and industrial fields. The work of the conference is intended for designers (rather than users) of systems which access full-text information. Describes the context, objectives, organization, evaluation methods and limits of TREC.
    Date
    1. 8.1996 22:01:00
  5. Farley, L.: Together at last : regeneration and merging of the MELVYL catalog and periodicals databases (1997) 0.13
    0.13317224 = product of:
      0.2663445 = sum of:
        0.2663445 = sum of:
          0.20955832 = weight(_text_:limits in 1834) [ClassicSimilarity], result of:
            0.20955832 = score(doc=1834,freq=2.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.59459496 = fieldWeight in 1834, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0625 = fieldNorm(doc=1834)
          0.056786157 = weight(_text_:22 in 1834) [ClassicSimilarity], result of:
            0.056786157 = score(doc=1834,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.30952093 = fieldWeight in 1834, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1834)
      0.5 = coord(1/2)
    
    Abstract
    A Serials Task Force at the University of California, USA, is currently working on merging the MELVYL catalogue and periodicals database. Details its design principles and discusses the major design issues of: name authority control, subject authority files, subsets, long searches, consolidation, sorting, and creation of possible new indexes for keywords, notes, titles, conferences, call numbers, combined author/title, music, geospatial searching, and form limits.
    Source
    DLA bulletin. 17(1997) no.1, S.18-22
  6. Regimbeau, G.: Acces thématiques aux oeuvres d'art contemporaines dans les banques de données (1998) 0.13
    0.13317224 = product of:
      0.2663445 = sum of:
        0.2663445 = sum of:
          0.20955832 = weight(_text_:limits in 2237) [ClassicSimilarity], result of:
            0.20955832 = score(doc=2237,freq=2.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.59459496 = fieldWeight in 2237, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0625 = fieldNorm(doc=2237)
          0.056786157 = weight(_text_:22 in 2237) [ClassicSimilarity], result of:
            0.056786157 = score(doc=2237,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.30952093 = fieldWeight in 2237, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2237)
      0.5 = coord(1/2)
    
    Abstract
    Discusses the possibilities and difficulties encountered when using a thematic index to search contemporary art databanks. Joconde and Videomuseum, 2 French databanks, are used as examples. The core problems found in the study are the methods and limits of indexing in both systems. A thematic index should be developed that is better adapted to 20th century art, based on the complementary and reciprocal relationship between text and image, and which fully exploits hypertext.
    Date
    1. 8.1996 22:01:00
  7. Agosto, D.E.: Bounded rationality and satisficing in young people's Web-based decision making (2002) 0.13
    0.1324299 = product of:
      0.2648598 = sum of:
        0.2648598 = sum of:
          0.22227018 = weight(_text_:limits in 177) [ClassicSimilarity], result of:
            0.22227018 = score(doc=177,freq=4.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.6306632 = fieldWeight in 177, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.046875 = fieldNorm(doc=177)
          0.042589616 = weight(_text_:22 in 177) [ClassicSimilarity], result of:
            0.042589616 = score(doc=177,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.23214069 = fieldWeight in 177, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=177)
      0.5 = coord(1/2)
    
    Abstract
    This study investigated Simon's behavioral decision-making theories of bounded rationality and satisficing in relation to young people's decision making in the World Wide Web, and considered the role of personal preferences in Web-based decisions. It employed a qualitative research methodology involving group interviews with 22 adolescent females. Data analysis took the form of iterative pattern coding using QSR NUD*IST Vivo qualitative data analysis software. Data analysis revealed that the study participants did operate within the limits of bounded rationality. These limits took the form of time constraints, information overload, and physical constraints. Data analysis also uncovered two major satisficing behaviors: reduction and termination. Personal preference was found to play a major role in Web site evaluation in the areas of graphic/multimedia and subject content preferences. This study has related implications for Web site designers and for adult intermediaries who work with young people and the Web.
  8. Moulaison, H.L.: OPAC queries at a medium-sized academic library : a transaction log analysis (2008) 0.12
    0.11652571 = product of:
      0.23305142 = sum of:
        0.23305142 = sum of:
          0.18336353 = weight(_text_:limits in 3599) [ClassicSimilarity], result of:
            0.18336353 = score(doc=3599,freq=2.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.5202706 = fieldWeight in 3599, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3599)
          0.04968789 = weight(_text_:22 in 3599) [ClassicSimilarity], result of:
            0.04968789 = score(doc=3599,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.2708308 = fieldWeight in 3599, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3599)
      0.5 = coord(1/2)
    
    Abstract
    Patron queries at a four-year comprehensive college's online public access catalog were examined via transaction logs from March 2007. Three representative days were isolated for a more detailed examination of search characteristics. The results show that library users employed an average of one to three terms in a search, did not use Boolean operators, and made use of limits one-tenth of the time. Failed queries remained problematic, as a full one-third of searches resulted in zero hits. Implications and recommendations for improvements in the online public access catalog are discussed.
    Date
    10. 9.2000 17:38:22
  9. Kurth, M.: The limits and limitations of transaction log analysis (1993) 0.11
    0.11113509 = product of:
      0.22227018 = sum of:
        0.22227018 = product of:
          0.44454035 = sum of:
            0.44454035 = weight(_text_:limits in 5313) [ClassicSimilarity], result of:
              0.44454035 = score(doc=5313,freq=4.0), product of:
                0.35243878 = queryWeight, product of:
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.052391093 = queryNorm
                1.2613264 = fieldWeight in 5313, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5313)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Examines both the practical limitations and the natural or logical limits of the methodology, including the ethical and legal issues involved
  10. Burgin, R.: ¬The retrieval effectiveness of 5 clustering algorithms as a function of indexing exhaustivity (1995) 0.11
    0.110358246 = product of:
      0.22071649 = sum of:
        0.22071649 = sum of:
          0.18522514 = weight(_text_:limits in 3365) [ClassicSimilarity], result of:
            0.18522514 = score(doc=3365,freq=4.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.5255527 = fieldWeight in 3365, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3365)
          0.035491347 = weight(_text_:22 in 3365) [ClassicSimilarity], result of:
            0.035491347 = score(doc=3365,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.19345059 = fieldWeight in 3365, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3365)
      0.5 = coord(1/2)
    
    Abstract
    The retrieval effectiveness of 5 hierarchical clustering methods (single link, complete link, group average, Ward's method, and weighted average) is examined as a function of indexing exhaustivity with 4 test collections (CR, Cranfield, Medlars, and Time). Evaluations of retrieval effectiveness, based on 3 measures of optimal retrieval performance, confirm earlier findings that the performance of a retrieval system based on single link clustering varies as a function of indexing exhaustivity, but fail to find similar patterns for the other clustering methods. The data also confirm earlier findings regarding the poor performance of single link clustering in a retrieval environment, which appears to derive from that method's tendency to produce a small number of large, ill-defined document clusters. By contrast, the retrieval performance of the other clustering methods was found to be generally comparable. The data presented also provide an opportunity to examine the theoretical limits of cluster-based retrieval and to compare these theoretical limits to the effectiveness of operational implementations. Performance standards of the 4 document collections examined were found to vary widely, and the effectiveness of operational implementations was found to be in the range defined as unacceptable. Further improvements in search strategies and document representations warrant investigation.
    Date
    22. 2.1996 11:20:06
  11. Sautoy, M. du: What we cannot know (2016) 0.11
    0.106893204 = product of:
      0.21378641 = sum of:
        0.21378641 = sum of:
          0.1924916 = weight(_text_:limits in 3034) [ClassicSimilarity], result of:
            0.1924916 = score(doc=3034,freq=12.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.54617035 = fieldWeight in 3034, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3034)
          0.021294808 = weight(_text_:22 in 3034) [ClassicSimilarity], result of:
            0.021294808 = score(doc=3034,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.116070345 = fieldWeight in 3034, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3034)
      0.5 = coord(1/2)
    
    Abstract
    Britain's most famous mathematician takes us to the edge of knowledge to show us what we cannot know. Science is king. Every week, headlines announce new breakthroughs in our understanding of the universe, new technologies that will transform our environment, new medical advances that will extend our lives. Science is giving us unprecedented insight into some of the big questions that have challenged humanity ever since we've been able to formulate those questions. Where did we come from? What is the ultimate destiny of the universe? What are the building blocks of the physical world? What is consciousness? This book asks us to rein in this unbridled enthusiasm for the power of science. Marcus du Sautoy explores the limits of human knowledge, to probe whether there is anything we truly cannot know
    Date
    22. 6.2016 16:08:54
    Footnote
    Review in: Economist, 18.06.2016 [http://www.economist.com/news/books-and-arts/21700611-circle-circle]: "Everyone by nature desires to know," wrote Aristotle more than 2,000 years ago. But are there limits to what human beings can know? This is the question that Marcus du Sautoy, the British mathematician who succeeded Richard Dawkins as the Simonyi Professor for the Public Understanding of Science at Oxford University, explores in "What We Cannot Know", his fascinating book on the limits of scientific knowledge. As Mr du Sautoy argues, this is a golden age of scientific knowledge. Remarkable achievements stretch across the sciences, from the Large Hadron Collider and the sequencing of the human genome to the proof of Fermat's Last Theorem. And the rate of progress is accelerating: the number of scientific publications has doubled every nine years since the second world war. But even bigger challenges await. Can cancer be cured? Ageing beaten? Is there a "Theory of Everything" that will include all of physics? Can we know it all? One limit to people's knowledge is practical. In theory, if you throw a die, Newton's laws of motion make it possible to predict what number will come up. But the calculations are too long to be practicable. What is more, many natural systems, such as the weather, are "chaotic" or sensitive to small changes: a tiny nudge now can lead to vastly different behaviour later. Since people cannot measure with complete accuracy, they can't forecast far into the future. The problem was memorably articulated by Edward Lorenz, an American scientist, in 1972 in a famous paper called "Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?"
    Even if the future cannot be predicted, people can still hope to uncover the laws of physics. As Stephen Hawking wrote in his 1988 bestseller "A Brief History of Time", "I still believe there are grounds for cautious optimism that we may be near the end of the search for the ultimate laws of nature." But how can people know when they have got there? They have been wrong before: Lord Kelvin, a great physicist, confidently announced in 1900: "There is nothing new to be discovered in physics now." Just a few years later, physics was upended by the new theories of relativity and quantum physics. Quantum physics presents particular limits on human knowledge, as it suggests that there is a basic randomness or uncertainty in the universe. For example, electrons exist as a "wave function", smeared out across space, and do not have a definite position until you observe them (which "collapses" the wave function). At the same time there seems to be an absolute limit on how much people can know. This is quantified by Heisenberg's Uncertainty Principle, which says that there is a trade-off between knowing the position and momentum of a particle. So the more you know about where an electron is, the less you know about which way it is going. Even scientists find this weird. As Niels Bohr, a Danish physicist, said: "If quantum physics hasn't profoundly shocked you, you haven't understood it yet."
    Mr du Sautoy probes these limits throughout his book. He talks about the origins of the universe in the Big Bang, the discovery of subatomic particles (starting with the positron in the 1930s) and the disappearance of matter and information into black holes. There are also fascinating details about the human brain, where his discussion ranges from the structure of neurons to the problem of consciousness. Eventually, he turns to his own field of mathematics. If people cannot know everything about the physical world, then perhaps they can at least rely on mathematical truth? But even here there are limits. Mathematicians have shown that some theorems have proofs so long that it would take the lifetime of the universe to finish them. And no mathematical system is complete: as Kurt Gödel, an Austrian logician, showed in the 1930s, there are always true statements that the system is not strong enough to prove. Where does this leave us? In the end, Mr du Sautoy has an optimistic message. There may be things people will never know, but they don't know what they are. And ultimately, it is the desire to know the unknown that inspires humankind's search for knowledge in the first place."
  12. Maltby, A.: Classification : logic, limits, levels (1976) 0.10
    0.10477916 = product of:
      0.20955832 = sum of:
        0.20955832 = product of:
          0.41911665 = sum of:
            0.41911665 = weight(_text_:limits in 290) [ClassicSimilarity], result of:
              0.41911665 = score(doc=290,freq=2.0), product of:
                0.35243878 = queryWeight, product of:
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.052391093 = queryNorm
                1.1891899 = fieldWeight in 290, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.125 = fieldNorm(doc=290)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Wilbur, W.J.: Human subjectivity and performance limits in document retrieval (1996) 0.10
    0.10477916 = product of:
      0.20955832 = sum of:
        0.20955832 = product of:
          0.41911665 = sum of:
            0.41911665 = weight(_text_:limits in 6607) [ClassicSimilarity], result of:
              0.41911665 = score(doc=6607,freq=8.0), product of:
                0.35243878 = queryWeight, product of:
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.052391093 = queryNorm
                1.1891899 = fieldWeight in 6607, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6607)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Test sets for the document retrieval task composed of human relevance judgments have been constructed that allow one to compare human performance directly with that of automatic methods and that place absolute limits on performance by any method. Current retrieval systems are found to generate only about half of the information allowed by these absolute limits. The data suggest that most of the improvement possible within these limits can only be achieved by incorporating specific subject information into retrieval systems.
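    Note that fieldWeight is not bounded by 1.0: entry 13 above (doc 6607, termFreq = 8.0) reaches 1.1891899. A quick check under the same assumed ClassicSimilarity formulas:

```python
import math

# fieldWeight = tf * idf * fieldNorm; with a rare term (idf ~ 6.73) and a
# short field (fieldNorm = 0.0625) the product can exceed 1.0, as in doc 6607.
tf = math.sqrt(8.0)                        # termFreq = 8.0
idf = 1.0 + math.log(44218 / (143 + 1))    # "limits": docFreq=143, maxDocs=44218
field_weight = tf * idf * 0.0625
print(round(field_weight, 7))              # ≈ 1.1891899
```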
  14. Argyris, C.: Reasons and rationalizations : the limits to organizational knowledge (2004) 0.10
    0.10477916 = product of:
      0.20955832 = sum of:
        0.20955832 = product of:
          0.41911665 = sum of:
            0.41911665 = weight(_text_:limits in 4326) [ClassicSimilarity], result of:
              0.41911665 = score(doc=4326,freq=2.0), product of:
                0.35243878 = queryWeight, product of:
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.052391093 = queryNorm
                1.1891899 = fieldWeight in 4326, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.125 = fieldNorm(doc=4326)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.10
    0.104505755 = sum of:
      0.083210945 = product of:
        0.24963282 = sum of:
          0.24963282 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24963282 = score(doc=562,freq=2.0), product of:
              0.44417226 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052391093 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.021294808 = product of:
        0.042589616 = sum of:
          0.042589616 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.042589616 = score(doc=562,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
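    Four distinct query terms appear across the explain trees in this result list ("limits", "22", "3a", "object's"). The assumed classic idf formula, 1 + ln(maxDocs / (docFreq + 1)), reproduces each reported idf from its docFreq:

```python
import math

# idf as reported by ClassicSimilarity: the rarer the term, the higher the idf.
def idf(doc_freq, max_docs=44218):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

for term, doc_freq, reported in [
    ("limits",   143,  6.727074),
    ("22",       3622, 3.5018296),
    ("3a",       24,   8.478011),
    ("object's", 5,    9.905128),
]:
    assert abs(idf(doc_freq) - reported) < 1e-4
    print(f"{term}: idf = {idf(doc_freq):.6f}")
```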
  16. Chung, W.; Chen, H.: Browsing the underdeveloped Web : an experiment on the Arabic Medical Web Directory (2009) 0.10
    0.09987918 = product of:
      0.19975837 = sum of:
        0.19975837 = sum of:
          0.15716875 = weight(_text_:limits in 2733) [ClassicSimilarity], result of:
            0.15716875 = score(doc=2733,freq=2.0), product of:
              0.35243878 = queryWeight, product of:
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.052391093 = queryNorm
              0.44594622 = fieldWeight in 2733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.727074 = idf(docFreq=143, maxDocs=44218)
                0.046875 = fieldNorm(doc=2733)
          0.042589616 = weight(_text_:22 in 2733) [ClassicSimilarity], result of:
            0.042589616 = score(doc=2733,freq=2.0), product of:
              0.18346468 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052391093 = queryNorm
              0.23214069 = fieldWeight in 2733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2733)
      0.5 = coord(1/2)
    
    Abstract
    While the Web has grown significantly in recent years, some portions of the Web remain largely underdeveloped, as shown in a lack of high-quality content and functionality. An example is the Arabic Web, in which a lack of well-structured Web directories limits users' ability to browse for Arabic resources. In this research, we proposed an approach to building Web directories for the underdeveloped Web and developed a proof-of-concept prototype called the Arabic Medical Web Directory (AMedDir) that supports browsing of over 5,000 Arabic medical Web sites and pages organized in a hierarchical structure. We conducted an experiment involving Arab participants and found that the AMedDir significantly outperformed two benchmark Arabic Web directories in terms of browsing effectiveness, efficiency, information quality, and user satisfaction. Participants expressed strong preference for the AMedDir and provided many positive comments. This research thus contributes to developing a useful Web directory for organizing the information in the Arabic medical domain and to a better understanding of how to support browsing on the underdeveloped Web.
    Date
    22. 3.2009 17:57:50
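The explain trees in these entries are Lucene ClassicSimilarity (TF-IDF) breakdowns, and each leaf can be recomputed from the printed quantities. A minimal sketch, assuming the classic formulas tf = sqrt(termFreq) and idf = 1 + ln(maxDocs/(docFreq+1)); the function name is ours, not a Lucene API:

```python
import math

MAX_DOCS = 44218          # collection size shown in the explain output
QUERY_NORM = 0.052391093  # queryNorm as reported by the engine

def classic_term_score(freq, doc_freq, field_norm):
    """Recompute one leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # tf(freq)
    idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * QUERY_NORM                  # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm             # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight               # leaf score = queryWeight * fieldWeight

# the weight(_text_:22 in 2733) leaf of entry 16 above
score = classic_term_score(freq=2.0, doc_freq=3622, field_norm=0.046875)
```

Plugging in the values for the "limits" leaf (docFreq=143) reproduces its score of 0.15716875 the same way; only the document frequency and field norm change between leaves.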
  17. Bruce, H.: ¬The user's view of the Internet (2002) 0.09
    0.094610654 = sum of:
      0.028395703 = product of:
        0.08518711 = sum of:
          0.08518711 = weight(_text_:object's in 4344) [ClassicSimilarity], result of:
            0.08518711 = score(doc=4344,freq=2.0), product of:
              0.51894045 = queryWeight, product of:
                9.905128 = idf(docFreq=5, maxDocs=44218)
                0.052391093 = queryNorm
              0.16415584 = fieldWeight in 4344, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                9.905128 = idf(docFreq=5, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
        0.33333334 = coord(1/3)
      0.06621495 = sum of:
        0.055567544 = weight(_text_:limits in 4344) [ClassicSimilarity], result of:
          0.055567544 = score(doc=4344,freq=4.0), product of:
            0.35243878 = queryWeight, product of:
              6.727074 = idf(docFreq=143, maxDocs=44218)
              0.052391093 = queryNorm
            0.1576658 = fieldWeight in 4344, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.727074 = idf(docFreq=143, maxDocs=44218)
              0.01171875 = fieldNorm(doc=4344)
        0.010647404 = weight(_text_:22 in 4344) [ClassicSimilarity], result of:
          0.010647404 = score(doc=4344,freq=2.0), product of:
            0.18346468 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052391093 = queryNorm
            0.058035173 = fieldWeight in 4344, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.01171875 = fieldNorm(doc=4344)
    
    Footnote
    Rez. in: JASIST. 54(2003) no.9, S.906-908 (E.G. Ackermann): "In this book Harry Bruce provides a construct or view of "how and why people are using the Internet," which can be used "to inform the design of new services and to augment our usings of the Internet" (pp. viii-ix; see also pp. 183-184). In the process, he develops an analytical tool that I term the Metatheory of Circulating Usings, and provides an impressive distillation of a vast quantity of research data from previous studies. The book's perspective is explicitly user-centered, as is its theoretical bent. The book is organized into a preface, acknowledgments, and five chapters (Chapter 1, "The Internet Story;" Chapter 2, "Technology and People;" Chapter 3, "A Focus on Usings;" Chapter 4, "Users of the Internet;" Chapter 5, "The User's View of the Internet"), followed by an extensive bibliography and a short index. Any notes are found at the end of the relevant chapter. The book is illustrated with figures and tables, which are clearly presented and labeled. The text is clearly written in a conversational style, relatively jargon-free, and contains no quantification. The intellectual structure follows that of the book for the most part, with some exceptions. The definitions of several key concepts or terms are scattered throughout the book, often appearing long after the terms' first use. For example, "stakeholders," used repeatedly from p. viii onward, remains undefined until late in the book (pp. 175-176). The study's method is presented in Chapter 3 (p. 34), relatively late in the book. Its metatheoretical basis is developed in two widely separated places (Chapter 3, pp. 56-61, and Chapter 5, pp. 157-159) for no apparent reason. The goal or purpose of presenting the data in Chapter 4 is explained after its presentation (p. 129) rather than earlier, with the limits of the data (p. 69). 
Although none of these problems cripples the book, they introduce an element of unevenness into the flow of the narrative that can confuse the reader and unnecessarily obscure the author's intent. Bruce provides the contextual background of the book in Chapter 1 (The Internet Story) in the form of a brief history of the Internet followed by a brief delineation of the early popular views of the Internet as an information superstructure. His recapitulation of the Internet's development from its origins as ARPANET in 1957 to 1995 touches on the highlights of this familiar story that will not be retold here. The early popular views or characterizations of the Internet as an "information society" or "information superhighway" revolved primarily around its function as an information infrastructure (p. 13). These views shared three main components (technology, political values, and implied information values) as well as a set of common assumptions. The technology aspect focused on the Internet as a "common ground on which digital information products and services achieve interoperability" (p. 14). The political values provided a "vision of universal access to distributed information resources and the benefits that this will bring to the lives of individual people and to society in general" (p. 14). The implied communication and information values portrayed the Internet as a "medium for human creativity and innovation" (p. 14). These popular views also assumed that "good decisions arise from good information," that "good democracy is based on making information available to all sectors of society," and that "wisdom is the by-product of effective use of information" (p. 15). Therefore, because the Internet is an information infrastructure, it must be "good and using the Internet will benefit individuals and society in general" (p. 15).
    Chapter 2 (Technology and People) focuses on several theories of technological acceptance and diffusion. Unfortunately, Bruce's presentation is somewhat confusing as he moves from one theory to the next, never quite connecting them into a logical sequence or coherent whole. Two theories are of particular interest to Bruce: the Theory of Diffusion of Innovations and the Theory of Planned Behavior. The Theory of Diffusion of Innovations is an "information-centric view of technology acceptance" in which technology adopters are placed in the information flows of society from which they learn about innovations and "drive innovation adoption decisions" (p. 20). The Theory of Planned Behavior maintains that the "performance of a behavior is a joint function of intentions and perceived behavioral control" (i.e., how much control a person thinks they have) (pp. 22-23). Bruce combines these two theories to form the basis for the Technology Acceptance Model. This model posits that "an individual's acceptance of information technology is based on beliefs, attitudes, intentions, and behaviors" (p. 24). Through all these theories and models echoes a recurring theme: "individual perceptions of the innovation or technology are critical" in terms of both its characteristics and its use (pp. 24-25). From these, in turn, Bruce derives a predictive theory of the role personal perceptions play in technology adoption: Personal Innovativeness of Information Technology Adoption (PIITA). Personal innovativeness is defined as "the willingness of an individual to try out any new information technology" (p. 26). In general, the PIITA theory predicts that information technology will be adopted by individuals who have greater exposure to mass media, rely less on the evaluation of information technology by others, exhibit a greater ability to cope with uncertainty and take risks, and require a less positive perception of an information technology prior to its adoption. 
Chapter 3 (A Focus on Usings) introduces the User-Centered Paradigm (UCP). The UCP is characteristic of the shift of emphasis from technology to users as the driving force behind technology and research agendas for Internet development [for a dissenting view, see Andrew Dillon's (2003) challenge to the utility of user-centeredness for design guidance]. It entails the "broad acceptance of the user-oriented perspective across a range of disciplines and professional fields," such as business, education, cognitive engineering, and information science (p. 34).
    The UCP's effect on business practices is seen mainly in the management and marketing areas. Marketing experienced a shift from "product-oriented operations," with its focus on "selling the products' features" and customer contact only at the point of sale, toward more service-centered business practice ("customer demand orientation") and the development of one-to-one customer relationships (pp. 35-36). For management, the adoption of the UCP caused a shift from "mechanistic, bureaucratic, top-down organizational structures" to "flatter, inclusive, and participative" ones (p. 37). In education, practice shifted from the teacher-centered model, where the "teacher is responsible for and makes all the decisions related to the learning environment," to a learner-centered model, where the student is "responsible for his or her own learning" and the teacher focuses on "matching learning events to the individual skills, aptitudes, and interests of the individual learner" (pp. 38-39). Cognitive engineering saw the rise of "user-centered design" and human factors concerned with applying "scientific knowledge of humans to the design of man-machine interface systems" (p. 44). The UCP had a great effect on information science in the "design of information systems" (p. 47). Prior to the UCP's explicit proposal by Brenda Dervin and M. Nilan in 1986, systems design was dominated by the "physical or system-oriented paradigm" (p. 48). The physical paradigm held a positivistic and materialistic view of technology and (passive) human interaction, as exemplified by the 1953 Cranfield tests of information retrieval mechanisms. Instead, the UCP focuses on "users rather than systems" by making the perceptions of individual information users the "centerpiece consideration for information service and system design" (pp. 47-48). 
Bruce briefly touches on the various schools of thought within the user-oriented paradigm, such as the cognitive/self studies approach, with its emphasis on an individual's knowledge structures or model of the world [e.g., Belkin (1990)]; the cognitive/context studies approach, which focuses on "context in explaining variations in information behavior" [e.g., Savolainen (1995) and Dervin's (1999) sensemaking]; and the social constructionism/discourse analytic theory, with its focus on language, not mental/knowledge constructs, as the primary shaper of the world as a system of intersubjective meanings [e.g., Talja (1996)] (pp. 53-54). Drawing from the rich tradition of user-oriented research, Bruce attempts to gain a metatheoretical understanding of the Internet as a phenomenon by combining Dervin's (1996) "micromoments of human usings" with the French philosopher Bruno Latour's (1999) "conception of circulating reference" to form what I term the Metatheory of Circulating Usings (pp. ix, 56, 60). According to Bruce, Latour's concept is designed to bridge "the gap between mind and object" by engaging in a "succession of finely grained transformations that construct and transfer truth about the object" through a chain of "microtranslations" from "matter to form," thereby connecting mind and object (p. 56). The connection works as long as the chain remains unbroken. The nature of this chain of "information producing translations" is such that as one moves away from the object, one experiences a "reduction" of the object's "locality, particularity, materiality, multiplicity and continuity," while simultaneously gaining the "amplification" of its "compatibility, standardization, text, calculation, circulation, and relative universality" (p. 57).
    Bruce points out that Dervin is also concerned about how "we look at the world" in terms of "information needs and seeking" (p. 60). She maintains that information scientists traditionally view information seeking and needs in terms of "contexts, users, and systems." Dervin questions whether or not, from a user's point of view, these three "points of interest" even exist. Rather, it is the "micromoments of human usings" [emphasis original], and the "world viewings, seekings, and valuings" that comprise them, that are real (p. 60). Using his metatheory, Bruce represents the Internet, the "object" of study, as a "chain of transformations made up of the micromoments of human usings" (p. 60). The Internet then is a "composite of usings" that, through research and study, is continuously reduced in complexity while its "essence" and "explanation" are amplified (p. 60). Bruce plans to use the Metatheory of Circulating Usings as an analytical "lens" to "tease out a characterization of the micromoments of Internet usings" from previous research on the Internet, thereby exposing "the user's view of the Internet" (pp. 60-61). In Chapter 4 (Users of the Internet), Bruce presents the research data for the study. He begins with an explanation of the limits of the data and, to a certain extent, the study itself. The perspective is that of the Internet user, with a focus on use, not nonuse, thereby excluding issues such as the digital divide and universal service. The research is limited to Internet users "in modern economies around the world" (p. 60). The data is a synthesis of research from many disciplines, but mainly from those "associated with the information field," with its traditional focus on users, systems, and context rather than usings (p. 70). Bruce then presents an extensive summary of the research results from a massive literature review of available Internet studies. 
He examines the research for each study group in order of the amount of data available, starting with the most studied group, professional users ("academics, librarians, and teachers"), followed by "the younger generation" ("college students, youths, and young adults"), users of e-government information and e-business services, and ending with the general public (the least studied group) (p. 70). Bruce does a masterful job of condensing and summarizing a vast amount of research data in 49 pages. Although there is too much to recapitulate here, one can get a sense of the results by looking at the areas of data examined for one of the study groups: academic Internet users. There is data on their frequency of use, reasons for nonuse, length of use, specific types of use (e.g., research, teaching, administration), use of discussion lists, use of e-journals, use of Web browsers and search engines, how academics learn to use Web tools and services (mainly by self-instruction), factors affecting use, and information-seeking habits. Bruce's goal in presenting all this research data is to provide "the foundation for constructs of the Internet that can inform stakeholders who will play a role in determining how the Internet will develop" (p. 129). These constructs are presented in Chapter 5.
  18. Miller, G.A.: ¬The magical number seven, plus or minus two : some limits on our capacity for processing information (1956) 0.09
    0.09168176 = product of:
      0.18336353 = sum of:
        0.18336353 = product of:
          0.36672705 = sum of:
            0.36672705 = weight(_text_:limits in 2752) [ClassicSimilarity], result of:
              0.36672705 = score(doc=2752,freq=2.0), product of:
                0.35243878 = queryWeight, product of:
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.052391093 = queryNorm
                1.0405412 = fieldWeight in 2752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2752)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Sormunen, E.: Free-text searching in full-text databases : probing system limits (1993) 0.09
    0.09168176 = product of:
      0.18336353 = sum of:
        0.18336353 = product of:
          0.36672705 = sum of:
            0.36672705 = weight(_text_:limits in 7120) [ClassicSimilarity], result of:
              0.36672705 = score(doc=7120,freq=2.0), product of:
                0.35243878 = queryWeight, product of:
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.052391093 = queryNorm
                1.0405412 = fieldWeight in 7120, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7120)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Wilbur, W.J.: Human subjectivity and performance limits in document retrieval (1999) 0.09
    0.09168176 = product of:
      0.18336353 = sum of:
        0.18336353 = product of:
          0.36672705 = sum of:
            0.36672705 = weight(_text_:limits in 4539) [ClassicSimilarity], result of:
              0.36672705 = score(doc=4539,freq=2.0), product of:
                0.35243878 = queryWeight, product of:
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.052391093 = queryNorm
                1.0405412 = fieldWeight in 4539, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.727074 = idf(docFreq=143, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4539)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
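Entries 18-20 score identically because each matches only the "limits" clause of the query: the leaf weight is multiplied by two coord(1/2) factors, Lucene's coordination factor rewarding documents that match more query clauses. A minimal sketch under the same ClassicSimilarity assumptions as above (helper names are ours, not a Lucene API):

```python
import math

MAX_DOCS = 44218
QUERY_NORM = 0.052391093

def term_weight(freq, doc_freq, field_norm):
    # ClassicSimilarity leaf: (idf * queryNorm) * (sqrt(tf) * idf * fieldNorm)
    idf = 1.0 + math.log(MAX_DOCS / (doc_freq + 1))
    return (idf * QUERY_NORM) * (math.sqrt(freq) * idf * field_norm)

def coord(overlap, max_overlap):
    # coordination factor: fraction of query clauses matched by the document
    return overlap / max_overlap

# entry 18 above: one matching clause ("limits"), coord(1/2) applied twice
weight = term_weight(freq=2.0, doc_freq=143, field_norm=0.109375)
score = weight * coord(1, 2) * coord(1, 2)
```

The high fieldNorm of 0.109375 (a very short field) is what pushes the raw "limits" weight above 1.0 before the coord factors halve it twice, yielding the 0.09 displayed for these entries.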
    

Types

  • a 3143
  • m 365
  • el 168
  • s 142
  • b 39
  • x 36
  • i 25
  • r 17
  • ? 8
  • p 4
  • d 3
  • n 3
  • u 2
  • z 2
  • ag 1
  • au 1
  • h 1
