Search (1744 results, page 1 of 88)

  • year_i:[2000 TO 2010}
  1. Bates, M.J.: Information (2009) 0.15
    0.15157056 = sum of:
      0.01696103 = product of:
        0.06784412 = sum of:
          0.06784412 = weight(_text_:authors in 3721) [ClassicSimilarity], result of:
            0.06784412 = score(doc=3721,freq=2.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.30220953 = fieldWeight in 3721, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=3721)
        0.25 = coord(1/4)
      0.13460954 = product of:
        0.26921907 = sum of:
          0.26921907 = weight(_text_:a.d in 3721) [ClassicSimilarity], result of:
            0.26921907 = score(doc=3721,freq=4.0), product of:
              0.37604806 = queryWeight, product of:
                7.636444 = idf(docFreq=57, maxDocs=44218)
                0.04924387 = queryNorm
              0.71591663 = fieldWeight in 3721, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                7.636444 = idf(docFreq=57, maxDocs=44218)
                0.046875 = fieldNorm(doc=3721)
        0.5 = coord(1/2)
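The score breakdowns in this listing are standard Lucene ClassicSimilarity explain output, and each leaf can be recomputed from the values shown. A minimal Python sketch, using Lucene's classic TF-IDF formulas and the numbers from the "authors" leaf of the entry above:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # tf(freq) = sqrt(termFreq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
    query_weight = idf * query_norm                  # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm             # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight               # leaf score

# "authors" leaf of the first entry (doc 3721):
# freq=2.0, docFreq=1258, maxDocs=44218, queryNorm=0.04924387, fieldNorm=0.046875
score = classic_term_score(2.0, 1258, 44218, 0.04924387, 0.046875)
```

Multiplying the leaf score by the coord(1/4) factor shown above gives the 0.01696103 summand of the total.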
    
    Abstract
    A selection of representative definitions of information is drawn from information science and related disciplines, and discussed and compared. Defining information remains such a contested project that any claim to present a unified, singular vision of the topic would be disingenuous. Seven categories of definitions are described: Communicatory or semiotic; activity-based (i.e., information as event); propositional; structural; social; multitype; and deconstructionist. The impact of Norbert Wiener and Claude Shannon is discussed, as well as the widespread influence of Karl Popper's ideas. The data-information-knowledge-wisdom (DIKW) continuum is also addressed. Work of these authors is reviewed: Marcia J. Bates, Gregory Bateson, B.C. Brookes, Michael Buckland, Ian Cornelius, Ronald Day, Richard Derr, Brenda Dervin, Fred Dretske, Jason Farradane, Christopher Fox, Bernd Frohmann, Jonathan Furner, J.A. Goguen, Robert Losee, A.D. Madden, D.M. MacKay, Doede Nauta, A.D. Pratt, Frederick Thompson.
  2. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.13
    0.1252729 = product of:
      0.2505458 = sum of:
        0.2505458 = product of:
          0.5010916 = sum of:
            0.19553082 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.19553082 = score(doc=692,freq=2.0), product of:
                0.4174901 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04924387 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
            0.30556077 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.30556077 = score(doc=692,freq=2.0), product of:
                0.5219001 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.04924387 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(2/4)
      0.5 = coord(1/2)
    
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Further pointers to related contributions are given there. Also in: Learning Group Publication 5(2001) no.3, p.438.
  3. Koutsomitropoulos, D.A.; Solomou, G.D.; Alexopoulos, A.D.; Papatheodorou, T.S.: Semantic metadata interoperability and inference-based querying in digital repositories (2009) 0.12
    0.11916983 = sum of:
      0.023986518 = product of:
        0.09594607 = sum of:
          0.09594607 = weight(_text_:authors in 3731) [ClassicSimilarity], result of:
            0.09594607 = score(doc=3731,freq=4.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.42738882 = fieldWeight in 3731, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=3731)
        0.25 = coord(1/4)
      0.09518331 = product of:
        0.19036663 = sum of:
          0.19036663 = weight(_text_:a.d in 3731) [ClassicSimilarity], result of:
            0.19036663 = score(doc=3731,freq=2.0), product of:
              0.37604806 = queryWeight, product of:
                7.636444 = idf(docFreq=57, maxDocs=44218)
                0.04924387 = queryNorm
              0.5062295 = fieldWeight in 3731, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.636444 = idf(docFreq=57, maxDocs=44218)
                0.046875 = fieldNorm(doc=3731)
        0.5 = coord(1/2)
    
    Abstract
    Metadata applications have evolved over time into highly structured "islands of information" about digital resources, often bearing a strong semantic interpretation. Rarely, however, are these semantics communicated in machine-readable and understandable ways. At the same time, the process of transforming the implied metadata knowledge into explicit Semantic Web descriptions can be problematic and is not always evident. In this article we take up the well-established Dublin Core metadata standard, as well as other metadata schemata that often appear in digital repository set-ups, and suggest a proper Semantic Web OWL ontology. In this process the authors cope with the discrepancies and incompatibilities typical of such attempts in novel ways. Moreover, we show the potential and necessity of this approach by demonstrating inferences on the resulting ontology, instantiated with actual metadata records. The authors conclude by presenting a working prototype that provides inference-based querying on top of digital repositories.
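The transformation the abstract describes, from flat metadata records to explicit Semantic Web statements, can be illustrated in miniature. This is not the authors' ontology; the record, resource URI, and mapping below are invented for illustration, using only the standard Dublin Core element namespace:

```python
# Illustrative mapping of a flat Dublin Core record to RDF-style triples.
DC = "http://purl.org/dc/elements/1.1/"

def dc_record_to_triples(resource_uri, record):
    """Turn a {dc_element: value-or-list} dict into (subject, predicate, object) triples."""
    triples = []
    for element, values in record.items():
        for value in (values if isinstance(values, list) else [values]):
            triples.append((resource_uri, DC + element, value))
    return triples

triples = dc_record_to_triples(
    "http://repository.example.org/item/42",          # invented resource URI
    {"title": "Semantic metadata interoperability",
     "creator": ["Koutsomitropoulos, D.A.", "Solomou, G.D."]},
)
```

A real DC-to-OWL mapping must additionally decide, per element, between datatype and object properties, which is one source of the discrepancies the abstract mentions.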
  4. Information ethics : privacy, property, and power (2005) 0.11
    0.1129024 = sum of:
      0.009994383 = product of:
        0.039977532 = sum of:
          0.039977532 = weight(_text_:authors in 2392) [ClassicSimilarity], result of:
            0.039977532 = score(doc=2392,freq=4.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.17807868 = fieldWeight in 2392, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=2392)
        0.25 = coord(1/4)
      0.10290802 = sum of:
        0.07931942 = weight(_text_:a.d in 2392) [ClassicSimilarity], result of:
          0.07931942 = score(doc=2392,freq=2.0), product of:
            0.37604806 = queryWeight, product of:
              7.636444 = idf(docFreq=57, maxDocs=44218)
              0.04924387 = queryNorm
            0.21092895 = fieldWeight in 2392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.636444 = idf(docFreq=57, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2392)
        0.023588603 = weight(_text_:22 in 2392) [ClassicSimilarity], result of:
          0.023588603 = score(doc=2392,freq=4.0), product of:
            0.17244364 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.04924387 = queryNorm
            0.13679022 = fieldWeight in 2392, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2392)
    
    Classification
    323.44/5 22 (GBV;LoC)
    DDC
    323.44/5 22 (GBV;LoC)
    Editor
    Moore, A.D.
    Footnote
    The book also includes an index, a selected bibliography, and endnotes for each article. More information on the authors of the articles would have been useful, however. One of the best features of Information Ethics is the discussion cases at the end of each chapter. For instance, in the discussion cases, Moore asks questions like: Would you allow one person to die to save nine? Should a scientist be allowed to experiment on people without their knowledge if there is no harm? Should marriages between people carrying a certain gene be outlawed? These discussion cases really add to the value of the readings. The only suggestion would be to have put them at the beginning of each section so the reader could have the questions floating in their heads as they read the material. Information Ethics is a well thought out and organized collection of articles. Moore has done an excellent job of finding articles to provide a fair and balanced look at a variety of complicated and far-reaching topics. Further, the work has breadth and depth. Moore is careful to include enough historical articles, like the 1890 Warren article, to give balance and perspective to new and modern topics like E-mail surveillance, biopiracy, and genetics. This provides a reader with just enough philosophy and history theory to work with the material. The articles are written by a variety of authors from differing fields so they range in length, tone, and style, creating a rich tapestry of ideas and arguments. However, this is not a quick or easy read. The subject matter is complex and one should plan to spend time with the book. The book is well worth the effort though. Overall, this is a highly recommended work for all libraries especially academic ones."
  5. Madden, A.D.: ¬A definition of information (2000) 0.11
    0.11214434 = sum of:
      0.01696103 = product of:
        0.06784412 = sum of:
          0.06784412 = weight(_text_:authors in 713) [ClassicSimilarity], result of:
            0.06784412 = score(doc=713,freq=2.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.30220953 = fieldWeight in 713, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=713)
        0.25 = coord(1/4)
      0.09518331 = product of:
        0.19036663 = sum of:
          0.19036663 = weight(_text_:a.d in 713) [ClassicSimilarity], result of:
            0.19036663 = score(doc=713,freq=2.0), product of:
              0.37604806 = queryWeight, product of:
                7.636444 = idf(docFreq=57, maxDocs=44218)
                0.04924387 = queryNorm
              0.5062295 = fieldWeight in 713, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.636444 = idf(docFreq=57, maxDocs=44218)
                0.046875 = fieldNorm(doc=713)
        0.5 = coord(1/2)
    
    Abstract
    One difficulty faced by students on many information management courses is the lack of any attempt to teach concepts of information. Therefore, if a core module does not fit in with a student's existing concept of information, it can make it hard for the student to recognise the relevance of that module. This paper addresses that problem by summarising concepts of information, and by presenting a simple model that attempts to unite the various concepts listed. The model is based on the idea that the meaning in a message depends on the context in which the message originated (the authorial context), and the context in which it is interpreted (the readership context). Characteristics of authors, readers and messages are discussed. The impact of the 'knowledge' of 'information' users, and of their community, is considered. Implications of the model are discussed. A definition of information is suggested, which attempts to encapsulate the nature of information implied by the model.
  6. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.09
    0.09306985 = sum of:
      0.07639019 = product of:
        0.30556077 = sum of:
          0.30556077 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
            0.30556077 = score(doc=193,freq=2.0), product of:
              0.5219001 = queryWeight, product of:
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.04924387 = queryNorm
              0.5854775 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.25 = coord(1/4)
      0.016679661 = product of:
        0.033359323 = sum of:
          0.033359323 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
            0.033359323 = score(doc=193,freq=2.0), product of:
              0.17244364 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04924387 = queryNorm
              0.19345059 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.5 = coord(1/2)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
  7. Friederici, A.D.: ¬Der Lauscher im Kopf (2003) 0.08
    0.07931942 = product of:
      0.15863883 = sum of:
        0.15863883 = product of:
          0.31727767 = sum of:
            0.31727767 = weight(_text_:a.d in 4542) [ClassicSimilarity], result of:
              0.31727767 = score(doc=4542,freq=2.0), product of:
                0.37604806 = queryWeight, product of:
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.04924387 = queryNorm
                0.8437158 = fieldWeight in 4542, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4542)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Brown, E.W.; Carmel, D.; Franz, M.; Ittycheriah, A.; Kanungo, T.; Maarek, Y.; McCarley, J.S.; Mack, R.L.; Prager, J.M.; Smith, J.R.; Soffer, A.; Zien, J.Y.; Marwick, A.D.: IBM research activities at TREC (2005) 0.08
    0.07931942 = product of:
      0.15863883 = sum of:
        0.15863883 = product of:
          0.31727767 = sum of:
            0.31727767 = weight(_text_:a.d in 5093) [ClassicSimilarity], result of:
              0.31727767 = score(doc=5093,freq=2.0), product of:
                0.37604806 = queryWeight, product of:
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.04924387 = queryNorm
                0.8437158 = fieldWeight in 5093, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5093)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Cleveland, D.B.; Cleveland, A.D.: Introduction to abstracting and indexing (2001) 0.08
    0.07931942 = product of:
      0.15863883 = sum of:
        0.15863883 = product of:
          0.31727767 = sum of:
            0.31727767 = weight(_text_:a.d in 316) [ClassicSimilarity], result of:
              0.31727767 = score(doc=316,freq=2.0), product of:
                0.37604806 = queryWeight, product of:
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.04924387 = queryNorm
                0.8437158 = fieldWeight in 316, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.078125 = fieldNorm(doc=316)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.07867484 = sum of:
      0.05865924 = product of:
        0.23463696 = sum of:
          0.23463696 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23463696 = score(doc=562,freq=2.0), product of:
              0.4174901 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.04924387 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.020015594 = product of:
        0.040031187 = sum of:
          0.040031187 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.040031187 = score(doc=562,freq=2.0), product of:
              0.17244364 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04924387 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  11. Auf dem Stand von Jägern und Sammlern : Auszüge aus dem Manifest elf führender Neurowissenschaftler über Gegenwart und Zukunft der Hirnforschung (2004) 0.06
    0.06345554 = product of:
      0.12691107 = sum of:
        0.12691107 = product of:
          0.25382215 = sum of:
            0.25382215 = weight(_text_:a.d in 2309) [ClassicSimilarity], result of:
              0.25382215 = score(doc=2309,freq=2.0), product of:
                0.37604806 = queryWeight, product of:
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.04924387 = queryNorm
                0.67497265 = fieldWeight in 2309, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2309)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contributors: C.E. Elger; A.D. Friederici; C. Koch; H. Luhmann; C. von der Malsburg; R. Menzel; H. Monyer; F. Rösler; G. Roth; H. Scheich; W. Singer
  12. Hickey, T.B.; Toves, J.; O'Neill, E.T.: NACO normalization : a detailed examination of the authority file comparison rules (2006) 0.06
    0.057625122 = sum of:
      0.034273595 = product of:
        0.13709438 = sum of:
          0.13709438 = weight(_text_:authors in 5760) [ClassicSimilarity], result of:
            0.13709438 = score(doc=5760,freq=6.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.61068267 = fieldWeight in 5760, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
        0.25 = coord(1/4)
      0.023351526 = product of:
        0.04670305 = sum of:
          0.04670305 = weight(_text_:22 in 5760) [ClassicSimilarity], result of:
            0.04670305 = score(doc=5760,freq=2.0), product of:
              0.17244364 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04924387 = queryNorm
              0.2708308 = fieldWeight in 5760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
        0.5 = coord(1/2)
    
    Abstract
    Normalization rules are essential for interoperability between bibliographic systems. In the process of working with Name Authority Cooperative Program (NACO) authority files to match records with Functional Requirements for Bibliographic Records (FRBR) and developing the Faceted Application of Subject Terminology (FAST) subject heading schema, the authors found inconsistencies in independently created NACO normalization implementations. Investigating these, the authors found ambiguities in the NACO standard that need resolution, and came to conclusions on how the procedure could be simplified with little impact on matching headings. To encourage others to test their software for compliance with the current rules, the authors have established a Web site that has test files and interactive services showing their current implementation.
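A flavor of what heading normalization involves can be given in a few lines. This is emphatically not the NACO rule set the article examines, just a simplified sketch (case folding, diacritic stripping, punctuation handling) of the kind of transformation whose ambiguities the authors investigate:

```python
import unicodedata

def normalize_heading(heading):
    """Simplified, illustrative heading normalization (not the actual NACO rules)."""
    # Decompose accented characters and drop the combining marks.
    decomposed = unicodedata.normalize("NFD", heading)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Uppercase; keep letters and digits, map space and hyphen to space, drop the rest.
    kept = []
    for c in stripped.upper():
        if c.isalnum():
            kept.append(c)
        elif c in " -":
            kept.append(" ")
    # Collapse runs of whitespace.
    return " ".join("".join(kept).split())
```

Even in this toy version, choices such as whether a comma becomes a space or is deleted change which headings match, which is exactly the kind of ambiguity the article reports in independent NACO implementations.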
    Date
    10. 9.2000 17:38:22
  13. Dworman, G.O.; Kimbrough, S.O.; Patch, C.: On pattern-directed search of archives and collections (2000) 0.06
    0.056087304 = product of:
      0.11217461 = sum of:
        0.11217461 = product of:
          0.22434922 = sum of:
            0.22434922 = weight(_text_:a.d in 4289) [ClassicSimilarity], result of:
              0.22434922 = score(doc=4289,freq=4.0), product of:
                0.37604806 = queryWeight, product of:
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.04924387 = queryNorm
                0.5965972 = fieldWeight in 4289, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4289)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article begins by presenting and discussing the distinction between record-oriented and pattern-oriented search. Examples of record-oriented (or item-oriented) questions include: "What (or how many, etc.) glass items made prior to 100 A.D. do we have in our collection?" and "How many paintings featuring dogs do we have that were painted during the 19th century, and who painted them?" Standard database systems are well suited to answering such questions, based on the data in, for example, a collections management system. Examples of pattern-oriented questions include: "How does the (apparent) production of glass objects vary over time between 400 B.C. and 100 A.D.?" and "What other animals are present in paintings with dogs (painted during the 19th century and in our collection)?" Standard database systems are not well suited to answering these sorts of questions, even though the basic data is properly stored in them. To answer pattern-oriented questions, the accepted solution is to transform the underlying (relational) data into what is called the data cube, or cross-tabulation, form. We discuss how this can be done for non-numeric data, such as are found in museum collections and archives.
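The pattern-oriented question about co-occurring animals can be sketched as a tiny cross tabulation over flat records; the data and field names below are invented for illustration:

```python
from collections import Counter

# Invented flat collection records, one per painting.
paintings = [
    {"id": 1, "animals": {"dog", "horse"}},
    {"id": 2, "animals": {"dog", "cat"}},
    {"id": 3, "animals": {"horse"}},
    {"id": 4, "animals": {"dog"}},
]

def co_occurring(records, anchor):
    """Count animals that appear together with `anchor` across the records."""
    counts = Counter()
    for rec in records:
        if anchor in rec["animals"]:
            counts.update(rec["animals"] - {anchor})
    return counts

counts = co_occurring(paintings, "dog")
```

A record-oriented query would return the matching paintings themselves; the pattern-oriented answer is the tabulated counts across them, which is what the data-cube transformation generalizes.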
  14. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: ¬A decision theoretic approach to combining information filters : an analytical and empirical evaluation. (2006) 0.05
    0.051335797 = sum of:
      0.02798427 = product of:
        0.11193708 = sum of:
          0.11193708 = weight(_text_:authors in 5267) [ClassicSimilarity], result of:
            0.11193708 = score(doc=5267,freq=4.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.49862027 = fieldWeight in 5267, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
        0.25 = coord(1/4)
      0.023351526 = product of:
        0.04670305 = sum of:
          0.04670305 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
            0.04670305 = score(doc=5267,freq=2.0), product of:
              0.17244364 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04924387 = queryNorm
              0.2708308 = fieldWeight in 5267, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
        0.5 = coord(1/2)
    
    Abstract
    The outputs of several information filtering (IF) systems can be combined to improve filtering performance. In this article the authors propose and explore a framework based on the so-called information structure (IS) model, which is frequently used in Information Economics, for combining the output of multiple IF systems according to each user's preferences (profile). The combination seeks to maximize the expected payoff to that user. The authors show analytically that the proposed framework increases users' expected payoff from the combined filtering output for any user preferences. An experiment using the TREC-6 test collection confirms the theoretical findings.
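The decision-theoretic idea can be sketched in a few lines. The article's IS-model combination is more specific; this sketch uses an invented weighted average of filter estimates and an invented payoff matrix, only to show how an expected-payoff rule turns combined estimates into a show/suppress decision:

```python
def expected_payoff_decision(prob_relevant, payoff):
    """Pick the action with higher expected payoff; payoff[(action, relevant)]."""
    p = prob_relevant
    show = p * payoff[("show", True)] + (1 - p) * payoff[("show", False)]
    suppress = p * payoff[("suppress", True)] + (1 - p) * payoff[("suppress", False)]
    return "show" if show >= suppress else "suppress"

def combine_filters(filter_probs, weights):
    """Illustrative combination: weighted average of the filters' estimates."""
    return sum(p * w for p, w in zip(filter_probs, weights)) / sum(weights)

# Invented user profile: missing a relevant document costs more than seeing junk.
payoffs = {("show", True): 1.0, ("show", False): -0.5,
           ("suppress", True): -1.0, ("suppress", False): 0.0}
p = combine_filters([0.9, 0.6, 0.7], [1.0, 1.0, 2.0])
decision = expected_payoff_decision(p, payoffs)
```

With a different payoff matrix (e.g. a user who heavily penalizes false alarms) the same combined estimate can yield the opposite decision, which is the sense in which the combination is tuned to each user's preferences.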
    Date
    22. 7.2006 15:05:39
  15. Madden, A.D.: Evolution and information (2004) 0.05
    0.047591656 = product of:
      0.09518331 = sum of:
        0.09518331 = product of:
          0.19036663 = sum of:
            0.19036663 = weight(_text_:a.d in 4439) [ClassicSimilarity], result of:
              0.19036663 = score(doc=4439,freq=2.0), product of:
                0.37604806 = queryWeight, product of:
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.04924387 = queryNorm
                0.5062295 = fieldWeight in 4439, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4439)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. McCulloch, E.; Shiri, A.; Nicholson, A.D.: Subject searching requirements : the HILT II experience (2004) 0.05
    0.047591656 = product of:
      0.09518331 = sum of:
        0.09518331 = product of:
          0.19036663 = sum of:
            0.19036663 = weight(_text_:a.d in 4758) [ClassicSimilarity], result of:
              0.19036663 = score(doc=4758,freq=2.0), product of:
                0.37604806 = queryWeight, product of:
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.04924387 = queryNorm
                0.5062295 = fieldWeight in 4758, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.636444 = idf(docFreq=57, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4758)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. LeBlanc, J.; Kurth, M.: ¬An operational model for library metadata maintenance (2008) 0.05
    0.04526736 = sum of:
      0.01696103 = product of:
        0.06784412 = sum of:
          0.06784412 = weight(_text_:authors in 101) [ClassicSimilarity], result of:
            0.06784412 = score(doc=101,freq=2.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.30220953 = fieldWeight in 101, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.25 = coord(1/4)
      0.028306326 = product of:
        0.05661265 = sum of:
          0.05661265 = weight(_text_:22 in 101) [ClassicSimilarity], result of:
            0.05661265 = score(doc=101,freq=4.0), product of:
              0.17244364 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04924387 = queryNorm
              0.32829654 = fieldWeight in 101, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.5 = coord(1/2)
    
    Abstract
    Libraries pay considerable attention to the creation, preservation, and transformation of descriptive metadata in both MARC and non-MARC formats. Little evidence suggests that they devote as much time, energy, and financial resources to the ongoing maintenance of non-MARC metadata, especially with regard to updating and editing existing descriptive content, as they do to maintenance of such information in the MARC-based online public access catalog. In this paper, the authors introduce a model, derived loosely from J. A. Zachman's framework for information systems architecture, with which libraries can identify and inventory components of catalog or metadata maintenance and plan interdepartmental, even interinstitutional, workflows. The model draws on the notion that the expertise and skills that have long been the hallmark for the maintenance of libraries' catalog data can and should be parlayed towards metadata maintenance in a broader set of information delivery systems.
    Date
    10. 9.2000 17:38:22
    19. 6.2010 19:22:28
  18. Resnick, M.L.; Vaughan, M.W.: Best practices and future visions for search user interfaces (2006) 0.04
    0.044002112 = sum of:
      0.023986518 = product of:
        0.09594607 = sum of:
          0.09594607 = weight(_text_:authors in 5293) [ClassicSimilarity], result of:
            0.09594607 = score(doc=5293,freq=4.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.42738882 = fieldWeight in 5293, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=5293)
        0.25 = coord(1/4)
      0.020015594 = product of:
        0.040031187 = sum of:
          0.040031187 = weight(_text_:22 in 5293) [ClassicSimilarity], result of:
            0.040031187 = score(doc=5293,freq=2.0), product of:
              0.17244364 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04924387 = queryNorm
              0.23214069 = fieldWeight in 5293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5293)
        0.5 = coord(1/2)
    
    Abstract
    The authors describe a set of best practices that were developed to assist in the design of search user interfaces. Search user interfaces represent a challenging design domain because they are often used by novices who have no desire to learn the mechanics of search engine architecture or algorithms. This mismatch can lead to frustration and task failure when it is not addressed by the user interface. The best practices are organized into five domains: the corpus, search algorithms, user and task context, the search interface, and mobility. In each section the authors present an introduction to the design challenges of the domain and a set of best practices for creating a user interface that facilitates effective use across a broad population of users and tasks.
    Date
    22. 7.2006 17:38:51
  19. Camacho-Miñano, M.-del-Mar; Núñez-Nickel, M.: ¬The multilayered nature of reference selection (2009) 0.04
    0.044002112 = sum of:
      0.023986518 = product of:
        0.09594607 = sum of:
          0.09594607 = weight(_text_:authors in 2751) [ClassicSimilarity], result of:
            0.09594607 = score(doc=2751,freq=4.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.42738882 = fieldWeight in 2751, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=2751)
        0.25 = coord(1/4)
      0.020015594 = product of:
        0.040031187 = sum of:
          0.040031187 = weight(_text_:22 in 2751) [ClassicSimilarity], result of:
            0.040031187 = score(doc=2751,freq=2.0), product of:
              0.17244364 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04924387 = queryNorm
              0.23214069 = fieldWeight in 2751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2751)
        0.5 = coord(1/2)
    
    Abstract
    Why authors choose some references in preference to others is a question that remains only partly answered, despite its interest to scientists. The relevance of references is twofold: they are a mechanism for tracing the evolution of science, and, because they enhance the image of the cited authors, citations are a widely known and used indicator of scientific endeavor. Following an extensive review of the literature, we selected all papers that seek to answer the central question and demonstrate that the existing theories are not sufficient: neither citation theory nor indicator theory provides a complete and convincing answer. Some perspectives in this arena remain isolated from the core literature. The purpose of this article is to offer a fresh perspective on a 30-year-old problem by extending the context of the discussion. We suggest reviving the discussion of citation theories from a new perspective, that of readers, who act by layers or phases in the final choice of references, allowing for a new classification in which any paper to date could be included.
    Date
    22. 3.2009 19:05:07
  20. Kavcic-Colic, A.: Archiving the Web : some legal aspects (2003) 0.04
    0.043139394 = sum of:
      0.019787868 = product of:
        0.079151474 = sum of:
          0.079151474 = weight(_text_:authors in 4754) [ClassicSimilarity], result of:
            0.079151474 = score(doc=4754,freq=2.0), product of:
              0.22449365 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04924387 = queryNorm
              0.35257778 = fieldWeight in 4754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4754)
        0.25 = coord(1/4)
      0.023351526 = product of:
        0.04670305 = sum of:
          0.04670305 = weight(_text_:22 in 4754) [ClassicSimilarity], result of:
            0.04670305 = score(doc=4754,freq=2.0), product of:
              0.17244364 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04924387 = queryNorm
              0.2708308 = fieldWeight in 4754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4754)
        0.5 = coord(1/2)
    
    Abstract
    Technological developments have changed the concepts of publication, reproduction and distribution. However, legislation, and in particular the Legal Deposit Law, has not adjusted to these changes: it remains very restrictive in the sense of protecting the rights of authors of electronic publications. National libraries and national archival institutions, aware of their important role in preserving the written and spoken cultural heritage, try to find different legal ways to live up to these responsibilities. This paper presents some legal aspects of archiving Web pages; it examines the harvesting of Web pages, the provision of public access to them, and their long-term preservation.
    Date
    10.12.2005 11:22:13
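For reference, the per-term `fieldWeight` values in the score breakdowns above follow Lucene's ClassicSimilarity formula: tf(freq) = sqrt(freq), multiplied by the term's idf and the field norm. A minimal sketch, assuming only the numbers reported in entry 18 (the `coord` and `queryNorm` factors are applied separately at query level):

```python
import math

def field_weight(freq: float, idf: float, field_norm: float) -> float:
    """Lucene ClassicSimilarity per-term field weight:
    tf(freq) = sqrt(freq), multiplied by idf and the field norm."""
    return math.sqrt(freq) * idf * field_norm

# "authors" term in entry 18 (freq=4.0, idf=4.558814, fieldNorm=0.046875):
w_authors = field_weight(4.0, 4.558814, 0.046875)   # ~ 0.42738882

# "22" term in the same entry (freq=2.0, idf=3.5018296, fieldNorm=0.046875):
w_22 = field_weight(2.0, 3.5018296, 0.046875)       # ~ 0.23214069
```

Both values reproduce the `fieldWeight` lines shown in the explain output for doc 5293.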

Types

  • a 1459
  • m 204
  • el 83
  • s 76
  • b 26
  • x 14
  • i 8
  • r 4
  • n 2
