Search (1739 results, page 1 of 87)

  • year_i:[2000 TO 2010}
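The active filter above uses Lucene/Solr range syntax with mixed brackets: the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive, so `year_i:[2000 TO 2010}` restricts hits to publication years 2000-2009. As a minimal sketch of issuing the same filtered search over Solr's HTTP API (the endpoint URL, core name, and query string are assumptions, not taken from this page):

```python
import requests

# Hypothetical Solr endpoint; core name and query are placeholders.
SOLR_SELECT = "http://localhost:8983/solr/lis/select"

params = {
    "q": "*:*",                     # the page does not show the original query string
    "fq": "year_i:[2000 TO 2010}",  # inclusive 2000, exclusive 2010 (i.e. 2000-2009)
    "rows": 20,                     # 20 hits per page, consistent with "page 1 of 87"
    "start": 0,
    "debugQuery": "true",           # asks Solr for the per-hit score explanations
}

response = requests.get(SOLR_SELECT, params=params)
print(response.json()["response"]["numFound"])  # e.g. 1739
```
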
  1. Ackermann, E.: Piaget's constructivism, Papert's constructionism : what's the difference? (2001) 0.12
    0.124338835 = product of:
      0.24867767 = sum of:
        0.24867767 = product of:
          0.49735534 = sum of:
            0.1940729 = weight(_text_:3a in 692) [ClassicSimilarity], result of:
              0.1940729 = score(doc=692,freq=2.0), product of:
                0.4143772 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0488767 = queryNorm
                0.46834838 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
            0.30328244 = weight(_text_:2c in 692) [ClassicSimilarity], result of:
              0.30328244 = score(doc=692,freq=2.0), product of:
                0.5180087 = queryWeight, product of:
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0488767 = queryNorm
                0.5854775 = fieldWeight in 692, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  10.598275 = idf(docFreq=2, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=692)
          0.5 = coord(2/4)
      0.5 = coord(1/2)
    
    Content
    See: https://www.semanticscholar.org/paper/Piaget-%E2%80%99-s-Constructivism-%2C-Papert-%E2%80%99-s-%3A-What-%E2%80%99-s-Ackermann/89cbcc1e740a4591443ff4765a6ae8df0fdf5554. Further references to related contributions are listed there. Also in: Learning Group Publication 5(2001) no.3, p.438.
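
The indented breakdown under each hit is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) ranking. Every leaf combines the same factors: tf = sqrt(termFreq), idf = 1 + ln(maxDocs / (docFreq + 1)), a queryWeight of idf x queryNorm, a fieldWeight of tf x idf x fieldNorm, and coord factors for the fraction of query clauses that matched. As a check on how the numbers compose, the sketch below recomputes the 0.124338835 shown for the first hit purely from the values printed in its tree; the tf/idf formulas are the standard ClassicSimilarity ones, nothing else is assumed.

```python
import math

def tf(freq):
    # ClassicSimilarity term-frequency factor
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    # ClassicSimilarity inverse document frequency
    return 1.0 + math.log(max_docs / (doc_freq + 1))

MAX_DOCS = 44218
QUERY_NORM = 0.0488767            # taken from the explain tree above

# The two matching clauses for hit 1 (doc 692): _text_:3a and _text_:2c
clauses = [
    # (docFreq, termFreq, fieldNorm)
    (24, 2.0, 0.0390625),         # _text_:3a
    (2,  2.0, 0.0390625),         # _text_:2c
]

total = 0.0
for doc_freq, freq, field_norm in clauses:
    query_weight = idf(doc_freq, MAX_DOCS) * QUERY_NORM            # e.g. ~0.4143772
    field_weight = tf(freq) * idf(doc_freq, MAX_DOCS) * field_norm
    total += query_weight * field_weight                           # ~0.1940729 and ~0.3032824

score = total * (2 / 4) * (1 / 2)  # coord(2/4) and coord(1/2) from the tree
print(score)                       # ~0.1243388, the 0.12 displayed for hit 1
```

The same composition accounts for every score on this page; only the per-term docFreq, termFreq, fieldNorm, and coord fractions differ from hit to hit.
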
  2. Matoria, R.K.; Upadhyay, P.K.: Migration of data from one library management system to another : a case study in India (2004) 0.12
    0.12197353 = product of:
      0.24394706 = sum of:
        0.24394706 = product of:
          0.48789412 = sum of:
            0.48789412 = weight(_text_:r.k in 4200) [ClassicSimilarity], result of:
              0.48789412 = score(doc=4200,freq=2.0), product of:
                0.3926424 = queryWeight, product of:
                  8.033325 = idf(docFreq=38, maxDocs=44218)
                  0.0488767 = queryNorm
                1.2425915 = fieldWeight in 4200, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.033325 = idf(docFreq=38, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4200)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Collins, L.M.; Hussell, J.A.T.; Hettinga, R.K.; Powell, J.E.; Mane, K.K.; Martinez, M.L.B.: Information visualization and large-scale repositories (2007) 0.12
    0.11849331 = sum of:
      0.03136936 = product of:
        0.12547743 = sum of:
          0.12547743 = weight(_text_:authors in 2596) [ClassicSimilarity], result of:
            0.12547743 = score(doc=2596,freq=10.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.5631342 = fieldWeight in 2596, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2596)
        0.25 = coord(1/4)
      0.08712395 = product of:
        0.1742479 = sum of:
          0.1742479 = weight(_text_:r.k in 2596) [ClassicSimilarity], result of:
            0.1742479 = score(doc=2596,freq=2.0), product of:
              0.3926424 = queryWeight, product of:
                8.033325 = idf(docFreq=38, maxDocs=44218)
                0.0488767 = queryNorm
              0.4437827 = fieldWeight in 2596, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.033325 = idf(docFreq=38, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2596)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - To describe how information visualization can be used in the design of interface tools for large-scale repositories. Design/methodology/approach - One challenge for designers in the context of large-scale repositories is to create interface tools that help users find specific information of interest. In order to be most effective, these tools need to leverage the cognitive characteristics of the target users. At the Los Alamos National Laboratory, the authors' target users are scientists and engineers who can be characterized as higher-order, analytical thinkers. In this paper, the authors describe a visualization tool they have created to make their large-scale digital object repositories more usable for these users: SearchGraph, which facilitates dataset analysis by displaying search results in the form of a two- or three-dimensional interactive scatter plot. Findings - Using SearchGraph, users can view a condensed, abstract visualization of search results. They can view the same dataset from multiple perspectives by manipulating several display, sort, and filter options. Doing so allows them to see different patterns in the dataset. For example, they can apply a logarithmic transformation in order to create more scatter in a dense cluster of data points, or they can apply filters in order to focus on a specific subset of data points. Originality/value - SearchGraph is a creative solution to the problem of how to design interface tools for large-scale repositories. It is particularly appropriate for the authors' target users, who are scientists and engineers. It extends the work of the first two authors on ActiveGraph, a read-write digital library visualization tool.
  4. Gödert, W.; Hubrich, J.; Boteram, F.: Thematische Recherche und Interoperabilität : Wege zur Optimierung des Zugriffs auf heterogen erschlossene Dokumente (2009) 0.09
    0.092375904 = sum of:
      0.07582061 = product of:
        0.30328244 = sum of:
          0.30328244 = weight(_text_:2c in 193) [ClassicSimilarity], result of:
            0.30328244 = score(doc=193,freq=2.0), product of:
              0.5180087 = queryWeight, product of:
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0488767 = queryNorm
              0.5854775 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                10.598275 = idf(docFreq=2, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.25 = coord(1/4)
      0.016555294 = product of:
        0.03311059 = sum of:
          0.03311059 = weight(_text_:22 in 193) [ClassicSimilarity], result of:
            0.03311059 = score(doc=193,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.19345059 = fieldWeight in 193, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=193)
        0.5 = coord(1/2)
    
    Source
    https://opus4.kobv.de/opus4-bib-info/frontdoor/index/index/searchtype/authorsearch/author/%22Hubrich%2C+Jessica%22/docId/703/start/0/rows/20
  5. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.07808822 = sum of:
      0.058221865 = product of:
        0.23288746 = sum of:
          0.23288746 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23288746 = score(doc=562,freq=2.0), product of:
              0.4143772 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0488767 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.019866353 = product of:
        0.039732706 = sum of:
          0.039732706 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.039732706 = score(doc=562,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    See: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
  6. Sharma, R.K.; Vishwanathan, K.R.: Digital libraries : development and challenges (2001) 0.06
    0.060986765 = product of:
      0.12197353 = sum of:
        0.12197353 = product of:
          0.24394706 = sum of:
            0.24394706 = weight(_text_:r.k in 754) [ClassicSimilarity], result of:
              0.24394706 = score(doc=754,freq=2.0), product of:
                0.3926424 = queryWeight, product of:
                  8.033325 = idf(docFreq=38, maxDocs=44218)
                  0.0488767 = queryNorm
                0.62129575 = fieldWeight in 754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.033325 = idf(docFreq=38, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=754)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Hickey, T.B.; Toves, J.; O'Neill, E.T.: NACO normalization : a detailed examination of the authority file comparison rules (2006) 0.06
    0.057195455 = sum of:
      0.034018043 = product of:
        0.13607217 = sum of:
          0.13607217 = weight(_text_:authors in 5760) [ClassicSimilarity], result of:
            0.13607217 = score(doc=5760,freq=6.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.61068267 = fieldWeight in 5760, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
        0.25 = coord(1/4)
      0.023177411 = product of:
        0.046354823 = sum of:
          0.046354823 = weight(_text_:22 in 5760) [ClassicSimilarity], result of:
            0.046354823 = score(doc=5760,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.2708308 = fieldWeight in 5760, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5760)
        0.5 = coord(1/2)
    
    Abstract
    Normalization rules are essential for interoperability between bibliographic systems. In the process of working with Name Authority Cooperative Program (NACO) authority files to match records with Functional Requirements for Bibliographic Records (FRBR) and developing the Faceted Application of Subject Terminology (FAST) subject heading schema, the authors found inconsistencies in independently created NACO normalization implementations. Investigating these, the authors found ambiguities in the NACO standard that need resolution, and came to conclusions on how the procedure could be simplified with little impact on matching headings. To encourage others to test their software for compliance with the current rules, the authors have established a Web site that has test files and interactive services showing their current implementation.
    Date
    10. 9.2000 17:38:22
  8. Gerbé, O.; Mineau, G.W.; Keller, R.K.: Conceptual graphs, metamodelling, and notation of concepts : fundamental issues (2000) 0.05
    0.052274372 = product of:
      0.104548745 = sum of:
        0.104548745 = product of:
          0.20909749 = sum of:
            0.20909749 = weight(_text_:r.k in 5078) [ClassicSimilarity], result of:
              0.20909749 = score(doc=5078,freq=2.0), product of:
                0.3926424 = queryWeight, product of:
                  8.033325 = idf(docFreq=38, maxDocs=44218)
                  0.0488767 = queryNorm
                0.53253925 = fieldWeight in 5078, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.033325 = idf(docFreq=38, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5078)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Elovici, Y.; Shapira, Y.B.; Kantor, P.B.: A decision theoretic approach to combining information filters : an analytical and empirical evaluation (2006) 0.05
    0.050953023 = sum of:
      0.027775614 = product of:
        0.111102454 = sum of:
          0.111102454 = weight(_text_:authors in 5267) [ClassicSimilarity], result of:
            0.111102454 = score(doc=5267,freq=4.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.49862027 = fieldWeight in 5267, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
        0.25 = coord(1/4)
      0.023177411 = product of:
        0.046354823 = sum of:
          0.046354823 = weight(_text_:22 in 5267) [ClassicSimilarity], result of:
            0.046354823 = score(doc=5267,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.2708308 = fieldWeight in 5267, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5267)
        0.5 = coord(1/2)
    
    Abstract
    The outputs of several information filtering (IF) systems can be combined to improve filtering performance. In this article, the authors propose and explore a framework based on the so-called information structure (IS) model, which is frequently used in Information Economics, for combining the output of multiple IF systems according to each user's preferences (profile). The combination seeks to maximize the expected payoff to that user. The authors show analytically that the proposed framework increases users' expected payoff from the combined filtering output for any user preferences. An experiment using the TREC-6 test collection confirms the theoretical findings.
    Date
    22. 7.2006 15:05:39
  10. LeBlanc, J.; Kurth, M.: An operational model for library metadata maintenance (2008) 0.04
    0.044929832 = sum of:
      0.016834565 = product of:
        0.06733826 = sum of:
          0.06733826 = weight(_text_:authors in 101) [ClassicSimilarity], result of:
            0.06733826 = score(doc=101,freq=2.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.30220953 = fieldWeight in 101, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.25 = coord(1/4)
      0.028095268 = product of:
        0.056190535 = sum of:
          0.056190535 = weight(_text_:22 in 101) [ClassicSimilarity], result of:
            0.056190535 = score(doc=101,freq=4.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.32829654 = fieldWeight in 101, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=101)
        0.5 = coord(1/2)
    
    Abstract
    Libraries pay considerable attention to the creation, preservation, and transformation of descriptive metadata in both MARC and non-MARC formats. Little evidence suggests that they devote as much time, energy, and financial resources to the ongoing maintenance of non-MARC metadata, especially with regard to updating and editing existing descriptive content, as they do to maintenance of such information in the MARC-based online public access catalog. In this paper, the authors introduce a model, derived loosely from J. A. Zachman's framework for information systems architecture, with which libraries can identify and inventory components of catalog or metadata maintenance and plan interdepartmental, even interinstitutional, workflows. The model draws on the notion that the expertise and skills that have long been the hallmark for the maintenance of libraries' catalog data can and should be parlayed towards metadata maintenance in a broader set of information delivery systems.
    Date
    10. 9.2000 17:38:22
    19. 6.2010 19:22:28
  11. Resnick, M.L.; Vaughan, M.W.: Best practices and future visions for search user interfaces (2006) 0.04
    0.043674022 = sum of:
      0.02380767 = product of:
        0.09523068 = sum of:
          0.09523068 = weight(_text_:authors in 5293) [ClassicSimilarity], result of:
            0.09523068 = score(doc=5293,freq=4.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.42738882 = fieldWeight in 5293, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=5293)
        0.25 = coord(1/4)
      0.019866353 = product of:
        0.039732706 = sum of:
          0.039732706 = weight(_text_:22 in 5293) [ClassicSimilarity], result of:
            0.039732706 = score(doc=5293,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.23214069 = fieldWeight in 5293, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5293)
        0.5 = coord(1/2)
    
    Abstract
    The authors describe a set of best practices that were developed to assist in the design of search user interfaces. Search user interfaces represent a challenging design domain because novices who have no desire to learn the mechanics of search engine architecture or algorithms often use them. This can lead to frustration and task failure when it is not addressed by the user interface. The best practices are organized into five domains: the corpus, search algorithms, user and task context, the search interface, and mobility. In each section, the authors present an introduction to the design challenges related to the domain and a set of best practices for creating a user interface that facilitates effective use by a broad population of users and tasks.
    Date
    22. 7.2006 17:38:51
  12. Camacho-Miñano, M.-del-Mar; Núñez-Nickel, M.: The multilayered nature of reference selection (2009) 0.04
    0.043674022 = sum of:
      0.02380767 = product of:
        0.09523068 = sum of:
          0.09523068 = weight(_text_:authors in 2751) [ClassicSimilarity], result of:
            0.09523068 = score(doc=2751,freq=4.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.42738882 = fieldWeight in 2751, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=2751)
        0.25 = coord(1/4)
      0.019866353 = product of:
        0.039732706 = sum of:
          0.039732706 = weight(_text_:22 in 2751) [ClassicSimilarity], result of:
            0.039732706 = score(doc=2751,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.23214069 = fieldWeight in 2751, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2751)
        0.5 = coord(1/2)
    
    Abstract
    Why authors choose some references in preference to others is a question that is still not wholly answered despite its being of interest to scientists. The relevance of references is twofold: They are a mechanism for tracing the evolution of science, and because they enhance the image of the cited authors, citations are a widely known and used indicator of scientific endeavor. Following an extensive review of the literature, we selected all papers that seek to answer the central question and demonstrate that the existing theories are not sufficient: Neither citation nor indicator theory provides a complete and convincing answer. Some perspectives in this arena remain, which are isolated from the core literature. The purpose of this article is to offer a fresh perspective on a 30-year-old problem by extending the context of the discussion. We suggest reviving the discussion about citation theories with a new perspective, that of the readers, by layers or phases, in the final choice of references, allowing for a new classification in which any paper, to date, could be included.
    Date
    22. 3.2009 19:05:07
  13. Kavcic-Colic, A.: Archiving the Web : some legal aspects (2003) 0.04
    0.042817734 = sum of:
      0.019640325 = product of:
        0.0785613 = sum of:
          0.0785613 = weight(_text_:authors in 4754) [ClassicSimilarity], result of:
            0.0785613 = score(doc=4754,freq=2.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.35257778 = fieldWeight in 4754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4754)
        0.25 = coord(1/4)
      0.023177411 = product of:
        0.046354823 = sum of:
          0.046354823 = weight(_text_:22 in 4754) [ClassicSimilarity], result of:
            0.046354823 = score(doc=4754,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.2708308 = fieldWeight in 4754, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4754)
        0.5 = coord(1/2)
    
    Abstract
    Technological developments have changed the concepts of publication, reproduction and distribution. However, legislation, and in particular the Legal Deposit Law, has not adjusted to these changes - it is very restrictive in the sense of protecting the rights of authors of electronic publications. National libraries and national archival institutions, being aware of their important role in preserving the written and spoken cultural heritage, try to find different legal ways to live up to these responsibilities. This paper presents some legal aspects of archiving Web pages, examines the harvesting of Web pages, the provision of public access to them, and their long-term preservation.
    Date
    10.12.2005 11:22:13
  14. Jones, M.; Buchanan, G.; Cheng, T.-C.; Jain, P.: Changing the pace of search : supporting background information seeking (2006) 0.04
    0.042817734 = sum of:
      0.019640325 = product of:
        0.0785613 = sum of:
          0.0785613 = weight(_text_:authors in 5287) [ClassicSimilarity], result of:
            0.0785613 = score(doc=5287,freq=2.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.35257778 = fieldWeight in 5287, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5287)
        0.25 = coord(1/4)
      0.023177411 = product of:
        0.046354823 = sum of:
          0.046354823 = weight(_text_:22 in 5287) [ClassicSimilarity], result of:
            0.046354823 = score(doc=5287,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.2708308 = fieldWeight in 5287, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=5287)
        0.5 = coord(1/2)
    
    Abstract
    Almost all Web searches are carried out while the user is sitting at a conventional desktop computer connected to the Internet. Although online, handheld, mobile search offers new possibilities, the fast-paced, focused style of interaction may not be appropriate for all user search needs. The authors explore an alternative, relaxed style for Web searching that asynchronously combines an offline handheld computer and an online desktop personal computer. They discuss the role and utility of such an approach, present a tool to meet these user needs, and discuss its relation to other systems.
    Date
    22. 7.2006 18:37:49
  15. Horn, M.E.: "Garbage" in, "refuse and refuse disposal" out : making the most of the subject authority file in the OPAC (2002) 0.04
    0.042817734 = sum of:
      0.019640325 = product of:
        0.0785613 = sum of:
          0.0785613 = weight(_text_:authors in 156) [ClassicSimilarity], result of:
            0.0785613 = score(doc=156,freq=2.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.35257778 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
        0.25 = coord(1/4)
      0.023177411 = product of:
        0.046354823 = sum of:
          0.046354823 = weight(_text_:22 in 156) [ClassicSimilarity], result of:
            0.046354823 = score(doc=156,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.2708308 = fieldWeight in 156, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=156)
        0.5 = coord(1/2)
    
    Abstract
    Subject access in the OPAC, as discussed in this article, is predicated on two different kinds of searching: subject (authority, alphabetic, or controlled vocabulary searching) or keyword (uncontrolled, free text, natural language vocabulary). The literature has focused on demonstrating that both approaches are needed, but very few authors address the need to integrate keyword into authority searching. The article discusses this difference and compares, with a query on the term garbage, search results in two online catalogs, one that performs keyword searches through the authority file and one where only bibliographic records are included in keyword searches.
    Date
    10. 9.2000 17:38:22
  16. Copeland, A.; Hamburger, S.; Hamilton, J.; Robinson, K.J.: Cataloging and digitizing ephemera : one team's experience with Pennsylvania German broadsides and fraktur (2006) 0.04
    0.042817734 = sum of:
      0.019640325 = product of:
        0.0785613 = sum of:
          0.0785613 = weight(_text_:authors in 768) [ClassicSimilarity], result of:
            0.0785613 = score(doc=768,freq=2.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.35257778 = fieldWeight in 768, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=768)
        0.25 = coord(1/4)
      0.023177411 = product of:
        0.046354823 = sum of:
          0.046354823 = weight(_text_:22 in 768) [ClassicSimilarity], result of:
            0.046354823 = score(doc=768,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.2708308 = fieldWeight in 768, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=768)
        0.5 = coord(1/2)
    
    Abstract
    The growing interest in ephemera collections within libraries will necessitate the bibliographic control of materials that do not easily fall into traditional categories. This paper discusses the many challenges confronting catalogers when approaching a mixed collection of unique materials of an ephemeral nature. Based on their experience cataloging a collection of Pennsylvania German broadsides and Fraktur at the Pennsylvania State University, the authors describe the process of deciphering handwriting, preserving genealogical information, deciding on cataloging approaches at the format and field level, and furthering access to the materials through digitization and the Encoded Archival Description finding aid. Observations are made on expanding the skills of traditional book catalogers to include manuscript cataloging, and on project management.
    Date
    10. 9.2000 17:38:22
  17. Feinberg, M.: Classificationist as author : the case of the Prelinger Library (2008) 0.04
    0.042817734 = sum of:
      0.019640325 = product of:
        0.0785613 = sum of:
          0.0785613 = weight(_text_:authors in 2237) [ClassicSimilarity], result of:
            0.0785613 = score(doc=2237,freq=2.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.35257778 = fieldWeight in 2237, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2237)
        0.25 = coord(1/4)
      0.023177411 = product of:
        0.046354823 = sum of:
          0.046354823 = weight(_text_:22 in 2237) [ClassicSimilarity], result of:
            0.046354823 = score(doc=2237,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.2708308 = fieldWeight in 2237, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2237)
        0.5 = coord(1/2)
    
    Content
    Within information science, neutrality and objectivity have been standard design goals for knowledge organization schemes; designers have seen themselves as compilers, rather than as authors or creators. The organization of resources in the Prelinger Library in San Francisco, however, shows a distinct authorial voice, or unique sense of expression and vision. This voice, in turn, works as a persuasive mechanism, facilitating a rhetorical purpose for the collection.
    Pages
    S.22-28
  18. Creider, L.S.: Family names and the cataloger (2007) 0.04
    0.042817734 = sum of:
      0.019640325 = product of:
        0.0785613 = sum of:
          0.0785613 = weight(_text_:authors in 2285) [ClassicSimilarity], result of:
            0.0785613 = score(doc=2285,freq=2.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.35257778 = fieldWeight in 2285, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2285)
        0.25 = coord(1/4)
      0.023177411 = product of:
        0.046354823 = sum of:
          0.046354823 = weight(_text_:22 in 2285) [ClassicSimilarity], result of:
            0.046354823 = score(doc=2285,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.2708308 = fieldWeight in 2285, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2285)
        0.5 = coord(1/2)
    
    Abstract
    The Joint Steering Committee for the Revision of the Anglo-American Cataloguing Rules, to be known as Resource Description and Access (RDA), has indicated that the replacement for the Anglo-American Cataloguing Rules (AACR2) will allow the use of family names as authors and will provide rules for their formation. This paper discusses what a family name describes; examines how information seekers look for family names and what they expect to find; describes the ways in which family names have been established in Anglo-American cataloging and archival traditions; asks how adequately the headings established under these rules help users seek such information; and suggests how revised cataloging rules might better enable users to identify resources that meet their needs.
    Date
    10. 9.2000 17:38:22
  19. Schrodt, R.: Tiefen und Untiefen im wissenschaftlichen Sprachgebrauch (2008) 0.04
    0.03881458 = product of:
      0.07762916 = sum of:
        0.07762916 = product of:
          0.31051663 = sum of:
            0.31051663 = weight(_text_:3a in 140) [ClassicSimilarity], result of:
              0.31051663 = score(doc=140,freq=2.0), product of:
                0.4143772 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0488767 = queryNorm
                0.7493574 = fieldWeight in 140, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=140)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Content
    See also: https://studylibde.com/doc/13053640/richard-schrodt. See also: http://www.univie.ac.at/Germanistik/schrodt/vorlesung/wissenschaftssprache.doc.
  20. Ahlgren, P.; Jarneving, B.; Rousseau, R.: Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient (2003) 0.04
    0.038339723 = sum of:
      0.025095487 = product of:
        0.10038195 = sum of:
          0.10038195 = weight(_text_:authors in 5171) [ClassicSimilarity], result of:
            0.10038195 = score(doc=5171,freq=10.0), product of:
              0.22281978 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0488767 = queryNorm
              0.45050737 = fieldWeight in 5171, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=5171)
        0.25 = coord(1/4)
      0.013244236 = product of:
        0.026488472 = sum of:
          0.026488472 = weight(_text_:22 in 5171) [ClassicSimilarity], result of:
            0.026488472 = score(doc=5171,freq=2.0), product of:
              0.17115787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0488767 = queryNorm
              0.15476047 = fieldWeight in 5171, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5171)
        0.5 = coord(1/2)
    
    Abstract
    Ahlgren, Jarneving, and Rousseau review accepted procedures for author co-citation analysis, first pointing out that since the row and column values in the raw data matrix are identical, i.e., the co-citation count of two authors, there is no clear choice for the diagonal values. They suggest using the number of times an author has been co-cited with himself, excluding self-citation, rather than the common treatment as zeros or as missing values. When the matrix is converted to a similarity matrix, the normal procedure is to create a matrix of Pearson's r coefficients between data vectors. Ranking by r, by co-citation frequency, and by intuition can easily yield three different orders. It would seem necessary that adding zeros to the matrix should not affect the value or the relative order of similarity measures, but it is shown that this is not the case with Pearson's r. Using 913 bibliographic descriptions from the Web of Science of articles from JASIS and Scientometrics, authors' names were extracted and edited, and 12 information retrieval authors and 12 bibliometric authors, each from the top 100 most cited, were selected. Co-citation and r-value (diagonal elements treated as missing) matrices were constructed, and then reconstructed in expanded form. Adding zeros can change both the r value and the ordering of the authors based upon that value. A chi-squared distance measure would not violate these requirements, nor would the cosine coefficient. It is also argued that co-citation data are ordinal data, since there is no assurance of an absolute zero number of co-citations, and thus Pearson is not appropriate. The number of ties in co-citation data makes the use of the Spearman rank order coefficient problematic.
    Date
    9. 7.2006 10:22:35
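
The last abstract above makes a claim that is easy to verify numerically: padding two authors' co-citation vectors with matching zeros changes Pearson's r (the means shift) while leaving the cosine coefficient untouched. A minimal sketch with invented co-citation counts, purely for illustration (the numbers are not from the paper):

```python
import numpy as np
from scipy.stats import pearsonr

def cosine(x, y):
    # Cosine coefficient between two co-citation profiles
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Invented co-citation profiles of two authors against five other authors
a = np.array([12.0, 7.0, 3.0, 9.0, 4.0])
b = np.array([10.0, 6.0, 5.0, 8.0, 2.0])

r, _ = pearsonr(a, b)
print(round(r, 4), round(cosine(a, b), 4))

# Enlarge the matrix: both authors have zero co-citations with three more authors
a_pad = np.concatenate([a, np.zeros(3)])
b_pad = np.concatenate([b, np.zeros(3)])

r_pad, _ = pearsonr(a_pad, b_pad)
print(round(r_pad, 4), round(cosine(a_pad, b_pad), 4))  # r changes, cosine does not
```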

Types

  • a 1454
  • m 204
  • el 83
  • s 77
  • b 26
  • x 14
  • i 8
  • r 4
  • n 2