Search (1746 results, page 1 of 88)

  • year_i:[2000 TO 2010}
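The active facet filter uses Solr/Lucene range syntax, in which square brackets are inclusive and curly braces exclusive, so `year_i:[2000 TO 2010}` matches 2000 <= year < 2010. As a minimal sketch (the helper name is hypothetical), such a filter string could be built like this:

```python
def year_filter(lower, upper, include_lower=True, include_upper=False):
    """Build a Solr/Lucene range filter: '[' / ']' are inclusive, '{' / '}' exclusive."""
    left = "[" if include_lower else "{"
    right = "]" if include_upper else "}"
    return f"year_i:{left}{lower} TO {upper}{right}"

print(year_filter(2000, 2010))  # -> year_i:[2000 TO 2010}
```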
  1. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.14
    0.14063114 = product of:
      0.28126228 = sum of:
        0.28126228 = sum of:
          0.23476142 = weight(_text_:g.g in 5273) [ClassicSimilarity], result of:
            0.23476142 = score(doc=5273,freq=2.0), product of:
              0.38578537 = queryWeight, product of:
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.049030673 = queryNorm
              0.60852855 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
          0.046500854 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
            0.046500854 = score(doc=5273,freq=2.0), product of:
              0.17169707 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049030673 = queryNorm
              0.2708308 = fieldWeight in 5273, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5273)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
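The score breakdowns shown for each result are Lucene explain() output for ClassicSimilarity (TF-IDF): tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), and each leaf term score is queryWeight × fieldWeight. A sketch reproducing the printed factors for the term "g.g" in result 1 (queryNorm is taken as printed, since it depends on the full query):

```python
import math

def classic_tf(freq):
    # ClassicSimilarity term frequency factor: sqrt(freq)
    return math.sqrt(freq)

def classic_idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Factors printed for term "g.g" in doc 5273 (result 1)
query_norm = 0.049030673           # depends on the whole query; taken from the output
idf = classic_idf(45, 44218)       # ~7.8682456
query_weight = idf * query_norm    # ~0.38578537
field_norm = 0.0546875
field_weight = classic_tf(2.0) * idf * field_norm  # tf * idf * fieldNorm, ~0.60852855
score = query_weight * field_weight                # ~0.23476142
```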
  2. Chowdhury, G.G.; Chowdhury, S.: An overview of the information retrieval features of twenty digital libraries (2000) 0.13
    0.13414939 = product of:
      0.26829877 = sum of:
        0.26829877 = product of:
          0.53659755 = sum of:
            0.53659755 = weight(_text_:g.g in 519) [ClassicSimilarity], result of:
              0.53659755 = score(doc=519,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                1.3909224 = fieldWeight in 519, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.125 = fieldNorm(doc=519)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.12
    0.12473053 = product of:
      0.24946105 = sum of:
        0.24946105 = product of:
          0.4989221 = sum of:
            0.19468427 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.19468427 = score(doc=692,freq=2.0), product of:
                0.41568258 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049030673 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
            0.30423784 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.30423784 = score(doc=692,freq=2.0), product of:
                0.51964056 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.049030673 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(2/4)
      0.5 = coord(1/2)
    
    Content
    Cf.: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Further references to related contributions are listed there. Also in: Learning Group Publication 5(2001) no.3, p.438.
  4. Belogonov, G.G.: Metod analogii v komputernoi (2000) 0.12
    0.11738071 = product of:
      0.23476142 = sum of:
        0.23476142 = product of:
          0.46952283 = sum of:
            0.46952283 = weight(_text_:g.g in 979) [ClassicSimilarity], result of:
              0.46952283 = score(doc=979,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                1.2170571 = fieldWeight in 979, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.109375 = fieldNorm(doc=979)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Belonogov, G.G.: Sistemy frazeologicheskogo machinnogo perevoda RETRANS i ERTRANS v seti Internet (2000) 0.10
    0.10061203 = product of:
      0.20122406 = sum of:
        0.20122406 = product of:
          0.40244812 = sum of:
            0.40244812 = weight(_text_:g.g in 183) [ClassicSimilarity], result of:
              0.40244812 = score(doc=183,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                1.0431918 = fieldWeight in 183, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.09375 = fieldNorm(doc=183)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.09
    0.09266691 = sum of:
      0.07605946 = product of:
        0.30423784 = sum of:
          0.30423784 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
            0.30423784 = score(doc=193,freq=2.0), product of:
              0.51964056 = queryWeight, product of:
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.049030673 = queryNorm
              0.5854775 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.25 = coord(1/4)
      0.016607448 = product of:
        0.033214897 = sum of:
          0.033214897 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
            0.033214897 = score(doc=193,freq=2.0), product of:
              0.17169707 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049030673 = queryNorm
              0.19345059 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.5 = coord(1/2)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
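In multi-clause entries like result 6, each sub-sum is scaled by a coord factor (matching clauses / total clauses) before the outer sum, so the final score is simply the sum of the scaled clause scores. A sketch checking the printed numbers for doc 193:

```python
import math

# Printed leaf scores and coord factors for doc 193 (result 6)
clause_2c = 0.30423784 * 0.25    # weight(_text_:2c) * coord(1/4) -> 0.07605946
clause_22 = 0.033214897 * 0.5    # weight(_text_:22) * coord(1/2) -> 0.016607448
total = clause_2c + clause_22    # -> 0.09266691, the score shown for result 6
```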
  7. Chowdhury, G.G.: Information sources and searching on the World Wide Web (2001) 0.08
    0.083843365 = product of:
      0.16768673 = sum of:
        0.16768673 = product of:
          0.33537346 = sum of:
            0.33537346 = weight(_text_:g.g in 6136) [ClassicSimilarity], result of:
              0.33537346 = score(doc=6136,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                0.86932653 = fieldWeight in 6136, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.078125 = fieldNorm(doc=6136)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.07833421 = sum of:
      0.058405276 = product of:
        0.2336211 = sum of:
          0.2336211 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2336211 = score(doc=562,freq=2.0), product of:
              0.41568258 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.049030673 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.019928938 = product of:
        0.039857876 = sum of:
          0.039857876 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.039857876 = score(doc=562,freq=2.0), product of:
              0.17169707 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049030673 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
  9. Chowdhury, S.; Chowdhury, G.G.: Using DDC to create a visual knowledge map as an aid to online information retrieval (2004) 0.08
    0.078333095 = sum of:
      0.011258399 = product of:
        0.045033596 = sum of:
          0.045033596 = weight(_text_:authors in 2643) [ClassicSimilarity], result of:
            0.045033596 = score(doc=2643,freq=2.0), product of:
              0.22352172 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.049030673 = queryNorm
              0.20147301 = fieldWeight in 2643, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=2643)
        0.25 = coord(1/4)
      0.067074694 = product of:
        0.13414939 = sum of:
          0.13414939 = weight(_text_:g.g in 2643) [ClassicSimilarity], result of:
            0.13414939 = score(doc=2643,freq=2.0), product of:
              0.38578537 = queryWeight, product of:
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.049030673 = queryNorm
              0.3477306 = fieldWeight in 2643, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.8682456 = idf(docFreq=45, maxDocs=44218)
                0.03125 = fieldNorm(doc=2643)
        0.5 = coord(1/2)
    
    Content
    1. Introduction
    Web search engines and digital libraries usually expect the users to use search terms that most accurately represent their information needs. Finding the most appropriate search terms to represent an information need is an age-old problem in information retrieval. Keyword or phrase search may produce good search results as long as the search terms or phrase(s) match those used by the authors and have been chosen for indexing by the concerned information retrieval system. Since this does not always happen, a large number of false drops are produced by information retrieval systems. The retrieval results become worse in very large systems that deal with millions of records, such as the Web search engines and digital libraries. Vocabulary control tools are used to improve the performance of text retrieval systems. Thesauri, the most common type of vocabulary control tool used in information retrieval, appeared in the late fifties, designed for use with the emerging post-coordinate indexing systems of that time. They are used to exert terminology control in indexing, and to aid in searching by allowing the searcher to select appropriate search terms. A large volume of literature exists describing the design features, and experiments with the use, of thesauri in various types of information retrieval systems (see, for example, Furnas et al., 1987; Bates, 1986, 1998; Milstead, 1997; and Shiri et al., 2002).
  10. Haisken-DeNew, J.; Pischner, R.; Wagner, G.G.: Wer nutzt eigentlich Computer und Internet? : Einkommen und Bildung entscheiden (2000) 0.06
    0.058690354 = product of:
      0.11738071 = sum of:
        0.11738071 = product of:
          0.23476142 = sum of:
            0.23476142 = weight(_text_:g.g in 2866) [ClassicSimilarity], result of:
              0.23476142 = score(doc=2866,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                0.60852855 = fieldWeight in 2866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2866)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Lee, C.; Lee, G.G.: Probabilistic information retrieval model for a dependence structured indexing system (2005) 0.06
    0.058690354 = product of:
      0.11738071 = sum of:
        0.11738071 = product of:
          0.23476142 = sum of:
            0.23476142 = weight(_text_:g.g in 1004) [ClassicSimilarity], result of:
              0.23476142 = score(doc=1004,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                0.60852855 = fieldWeight in 1004, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Cho, B.-H.; Lee, C.; Lee, G.G.: Exploring term dependences in probabilistic information retrieval model (2003) 0.06
    0.058690354 = product of:
      0.11738071 = sum of:
        0.11738071 = product of:
          0.23476142 = sum of:
            0.23476142 = weight(_text_:g.g in 1077) [ClassicSimilarity], result of:
              0.23476142 = score(doc=1077,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                0.60852855 = fieldWeight in 1077, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1077)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Hickey, T.B.; Toves, J.; O'Neill, E.T.: NACO normalization : a detailed examination of the authority file comparison rules (2006) 0.06
    0.057375636 = sum of:
      0.03412521 = product of:
        0.13650084 = sum of:
          0.13650084 = weight(_text_:authors in 5760) [ClassicSimilarity], result of:
            0.13650084 = score(doc=5760,freq=6.0), product of:
              0.22352172 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.049030673 = queryNorm
              0.61068267 = fieldWeight in 5760, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
        0.25 = coord(1/4)
      0.023250427 = product of:
        0.046500854 = sum of:
          0.046500854 = weight(_text_:22 in 5760) [ClassicSimilarity], result of:
            0.046500854 = score(doc=5760,freq=2.0), product of:
              0.17169707 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049030673 = queryNorm
              0.2708308 = fieldWeight in 5760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
        0.5 = coord(1/2)
    
    Abstract
    Normalization rules are essential for interoperability between bibliographic systems. In the process of working with Name Authority Cooperative Program (NACO) authority files to match records with Functional Requirements for Bibliographic Records (FRBR) and developing the Faceted Application of Subject Terminology (FAST) subject heading schema, the authors found inconsistencies in independently created NACO normalization implementations. Investigating these, the authors found ambiguities in the NACO standard that need resolution, and came to conclusions on how the procedure could be simplified with little impact on matching headings. To encourage others to test their software for compliance with the current rules, the authors have established a Web site that has test files and interactive services showing their current implementation.
    Date
    10. 9.2000 17:38:22
  14. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: A decision theoretic approach to combining information filters : an analytical and empirical evaluation (2006) 0.05
    0.051113542 = sum of:
      0.027863115 = product of:
        0.11145246 = sum of:
          0.11145246 = weight(_text_:authors in 5267) [ClassicSimilarity], result of:
            0.11145246 = score(doc=5267,freq=4.0), product of:
              0.22352172 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.049030673 = queryNorm
              0.49862027 = fieldWeight in 5267, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
        0.25 = coord(1/4)
      0.023250427 = product of:
        0.046500854 = sum of:
          0.046500854 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
            0.046500854 = score(doc=5267,freq=2.0), product of:
              0.17169707 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049030673 = queryNorm
              0.2708308 = fieldWeight in 5267, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
        0.5 = coord(1/2)
    
    Abstract
    The outputs of several information filtering (IF) systems can be combined to improve filtering performance. In this article the authors propose and explore a framework based on the so-called information structure (IS) model, which is frequently used in Information Economics, for combining the output of multiple IF systems according to each user's preferences (profile). The combination seeks to maximize the expected payoff to that user. The authors show analytically that the proposed framework increases users' expected payoff from the combined filtering output for any user preferences. An experiment using the TREC-6 test collection confirms the theoretical findings.
    Date
    22. 7.2006 15:05:39
  15. Chowdhury, G.G.: Natural language processing (2002) 0.05
    0.050306015 = product of:
      0.10061203 = sum of:
        0.10061203 = product of:
          0.20122406 = sum of:
            0.20122406 = weight(_text_:g.g in 4284) [ClassicSimilarity], result of:
              0.20122406 = score(doc=4284,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                0.5215959 = fieldWeight in 4284, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4284)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Meyyappan, N.; Foo, F.; Chowdhury, G.G.: Design and evaluation of a task-based digital library for the academic community (2004) 0.05
    0.050306015 = product of:
      0.10061203 = sum of:
        0.10061203 = product of:
          0.20122406 = sum of:
            0.20122406 = weight(_text_:g.g in 4425) [ClassicSimilarity], result of:
              0.20122406 = score(doc=4425,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                0.5215959 = fieldWeight in 4425, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4425)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  17. Yoon, Y.; Lee, G.G.: Efficient implementation of associative classifiers for document classification (2007) 0.05
    0.050306015 = product of:
      0.10061203 = sum of:
        0.10061203 = product of:
          0.20122406 = sum of:
            0.20122406 = weight(_text_:g.g in 909) [ClassicSimilarity], result of:
              0.20122406 = score(doc=909,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                0.5215959 = fieldWeight in 909, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.046875 = fieldNorm(doc=909)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Jung, H.; Yi, E.; Kim, D.; Lee, G.G.: Information extraction with automatic knowledge expansion (2005) 0.05
    0.050306015 = product of:
      0.10061203 = sum of:
        0.10061203 = product of:
          0.20122406 = sum of:
            0.20122406 = weight(_text_:g.g in 1008) [ClassicSimilarity], result of:
              0.20122406 = score(doc=1008,freq=2.0), product of:
                0.38578537 = queryWeight, product of:
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.049030673 = queryNorm
                0.5215959 = fieldWeight in 1008, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.8682456 = idf(docFreq=45, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1008)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. LeBlanc, J.; Kurth, M.: An operational model for library metadata maintenance (2008) 0.05
    0.045071375 = sum of:
      0.016887598 = product of:
        0.06755039 = sum of:
          0.06755039 = weight(_text_:authors in 101) [ClassicSimilarity], result of:
            0.06755039 = score(doc=101,freq=2.0), product of:
              0.22352172 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.049030673 = queryNorm
              0.30220953 = fieldWeight in 101, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.25 = coord(1/4)
      0.028183777 = product of:
        0.056367554 = sum of:
          0.056367554 = weight(_text_:22 in 101) [ClassicSimilarity], result of:
            0.056367554 = score(doc=101,freq=4.0), product of:
              0.17169707 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049030673 = queryNorm
              0.32829654 = fieldWeight in 101, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.5 = coord(1/2)
    
    Abstract
    Libraries pay considerable attention to the creation, preservation, and transformation of descriptive metadata in both MARC and non-MARC formats. Little evidence suggests that they devote as much time, energy, and financial resources to the ongoing maintenance of non-MARC metadata, especially with regard to updating and editing existing descriptive content, as they do to maintenance of such information in the MARC-based online public access catalog. In this paper, the authors introduce a model, derived loosely from J. A. Zachman's framework for information systems architecture, with which libraries can identify and inventory components of catalog or metadata maintenance and plan interdepartmental, even interinstitutional, workflows. The model draws on the notion that the expertise and skills that have long been the hallmark for the maintenance of libraries' catalog data can and should be parlayed towards metadata maintenance in a broader set of information delivery systems.
    Date
    10. 9.2000 17:38:22
    19. 6.2010 19:22:28
  20. Resnick, M.L.; Vaughan, M.W.: Best practices and future visions for search user interfaces (2006) 0.04
    0.04381161 = sum of:
      0.023882672 = product of:
        0.09553069 = sum of:
          0.09553069 = weight(_text_:authors in 5293) [ClassicSimilarity], result of:
            0.09553069 = score(doc=5293,freq=4.0), product of:
              0.22352172 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.049030673 = queryNorm
              0.42738882 = fieldWeight in 5293, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=5293)
        0.25 = coord(1/4)
      0.019928938 = product of:
        0.039857876 = sum of:
          0.039857876 = weight(_text_:22 in 5293) [ClassicSimilarity], result of:
            0.039857876 = score(doc=5293,freq=2.0), product of:
              0.17169707 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049030673 = queryNorm
              0.23214069 = fieldWeight in 5293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5293)
        0.5 = coord(1/2)
    
    Abstract
    The authors describe a set of best practices that were developed to assist in the design of search user interfaces. Search user interfaces represent a challenging design domain because they are often used by novices who have no desire to learn the mechanics of search engine architecture or algorithms, which can lead to frustration and task failure when the user interface does not address their needs. The best practices are organized into five domains: the corpus, search algorithms, user and task context, the search interface, and mobility. In each section the authors present an introduction to the design challenges related to the domain and a set of best practices for creating a user interface that facilitates effective use by a broad population of users and tasks.
    Date
    22. 7.2006 17:38:51

Types

  • a 1462
  • m 203
  • el 83
  • s 76
  • b 26
  • x 14
  • i 8
  • r 4
  • n 2
