Search (207 results, page 1 of 11)

  • Active filter: type_ss:"el"
  1. Mai, F.; Galke, L.; Scherp, A.: Using deep learning for title-based semantic subject indexing to reach competitive performance to full-text (2018) 0.07
    0.07445337 = product of:
      0.22336009 = sum of:
        0.22336009 = weight(_text_:title in 4093) [ClassicSimilarity], result of:
          0.22336009 = score(doc=4093,freq=14.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.8141054 = fieldWeight in 4093, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4093)
      0.33333334 = coord(1/3)
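
    The indented tree above is Lucene's ClassicSimilarity explanation; every hit on this page carries one with the same structure. As a sanity check, the arithmetic for this first hit can be reproduced directly. A minimal sketch in Python, with every constant copied from the tree (the idf line uses Lucene's classic formula 1 + ln(maxDocs/(docFreq+1))):

        import math

        tf         = math.sqrt(14.0)                    # 3.7416575 = tf(freq=14.0)
        idf        = 1.0 + math.log(44218 / (457 + 1))  # 5.570018 = idf(docFreq=457, maxDocs=44218)
        query_norm = 0.049257044
        field_norm = 0.0390625
        coord      = 1.0 / 3.0                          # coord(1/3): 1 of 3 query clauses matched

        query_weight = idf * query_norm                 # 0.27436262 = queryWeight
        field_weight = tf * idf * field_norm            # 0.8141054  = fieldWeight
        print(query_weight * field_weight * coord)      # ~0.07445337, the displayed score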
    
    Abstract
    For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. It is therefore desirable to have text mining and text classification algorithms that already perform well on the title of a publication alone. So far, classification performance on titles is not competitive with performance on full-texts if the same number of training samples is used for training. However, title data is much easier to obtain in large quantities for training than full-text data. In this paper, we investigate how models trained on increasing amounts of title data compare to models trained on a constant number of full-texts. We evaluate this question on large-scale datasets from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by factors of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three title-based classifiers outperform their full-text counterparts by a large margin; the best title-based classifier outperforms the best full-text method by 9.9%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
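
    The abstract does not spell out the model architectures; purely as an illustration of title-based multi-label subject indexing (a bag-of-words baseline, not the authors' deep learning classifiers), a sketch might look like this, with invented toy data standing in for title/annotation pairs:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.preprocessing import MultiLabelBinarizer

        # Toy stand-ins for e.g. EconBiz title/annotation pairs.
        titles = ["economic growth in developing countries",
                  "deep learning for text classification"]
        labels = [{"growth", "development"}, {"machine learning", "text"}]

        mlb = MultiLabelBinarizer()
        Y = mlb.fit_transform(labels)            # one indicator column per subject label
        vec = TfidfVectorizer(ngram_range=(1, 2))
        X = vec.fit_transform(titles)            # titles only, no full-text needed

        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
        print(mlb.inverse_transform(clf.predict(vec.transform(["growth and text"]))))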
  2. Brinkman's cumulative catalogue on CD-ROM (1996-) 0.06
    0.06461962 = product of:
      0.19385886 = sum of:
        0.19385886 = sum of:
          0.12712237 = weight(_text_:catalogue in 6474) [ClassicSimilarity], result of:
            0.12712237 = score(doc=6474,freq=2.0), product of:
              0.23806341 = queryWeight, product of:
                4.8330836 = idf(docFreq=956, maxDocs=44218)
                0.049257044 = queryNorm
              0.5339854 = fieldWeight in 6474, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.8330836 = idf(docFreq=956, maxDocs=44218)
                0.078125 = fieldNorm(doc=6474)
          0.0667365 = weight(_text_:22 in 6474) [ClassicSimilarity], result of:
            0.0667365 = score(doc=6474,freq=2.0), product of:
              0.17248978 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049257044 = queryNorm
              0.38690117 = fieldWeight in 6474, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=6474)
      0.33333334 = coord(1/3)
    
    Date
    16.2.1997 16:22:51
  3. Mas, S.; Zaher, L'H.; Zacklad, M.: Design & evaluation of multi-viewed knowledge system for administrative electronic document organization (2008) 0.05
    0.045025162 = product of:
      0.13507548 = sum of:
        0.13507548 = weight(_text_:title in 2480) [ClassicSimilarity], result of:
          0.13507548 = score(doc=2480,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.49232465 = fieldWeight in 2480, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0625 = fieldNorm(doc=2480)
      0.33333334 = coord(1/3)
    
    Abstract
    This communication describes part of current research carried out at the Université de Technologie de Troyes and funded by a postdoctoral grant from the Fonds québécois de la recherche sur la société et la culture. Under the title "Design and evaluation of a faceted classification for uniform and personal organization of administrative electronic documents", our research investigates the feasibility of creating a faceted, multi-points-of-view classification scheme for administrative document organization and retrieval in online environments.
  4. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.04
    0.043462913 = product of:
      0.13038874 = sum of:
        0.13038874 = product of:
          0.3911662 = sum of:
            0.3911662 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.3911662 = score(doc=1826,freq=2.0), product of:
                0.41760176 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049257044 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  5. FictionFinder : a FRBR-based prototype for fiction in WorldCat (o.J.) 0.04
    0.039397016 = product of:
      0.11819105 = sum of:
        0.11819105 = weight(_text_:title in 2432) [ClassicSimilarity], result of:
          0.11819105 = score(doc=2432,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.43078408 = fieldWeight in 2432, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2432)
      0.33333334 = coord(1/3)
    
    Abstract
    FictionFinder is a FRBR-based prototype that provides access to over 2.9 million bibliographic records for fiction books, eBooks, and audio materials described in OCLC WorldCat. This project applies principles of the FRBR model to aggregate bibliographic information above the manifestation level. Records are clustered into works using the OCLC FRBR Work-Set Algorithm. The algorithm collects bibliographic records into groups based on author and title information from bibliographic and authority records. Author names and titles are normalized to construct a key. All records with the same key are grouped together in a work set.
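
    A minimal sketch of the key-based clustering step described above (the real OCLC FRBR Work-Set Algorithm is considerably more elaborate and also draws on authority records; the normalization rules here are simplifying assumptions):

        import re
        from collections import defaultdict

        def normalize(s: str) -> str:
            s = re.sub(r"[^a-z0-9 ]", " ", s.lower())          # drop punctuation
            s = re.sub(r"^(the|a|an)\s+", "", " ".join(s.split()))
            return s.replace(" ", "")

        def work_key(author: str, title: str) -> str:
            return normalize(author) + "/" + normalize(title)

        records = [("Tolkien, J.R.R.", "The Hobbit"),
                   ("TOLKIEN, J. R. R.", "Hobbit"),
                   ("Austen, Jane", "Pride and Prejudice")]

        # All records with the same key land in the same work set.
        work_sets = defaultdict(list)
        for author, title in records:
            work_sets[work_key(author, title)].append((author, title))

        print(dict(work_sets))   # the two Tolkien records share "tolkienjrr/hobbit"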
  6. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.03
    0.034770332 = product of:
      0.10431099 = sum of:
        0.10431099 = product of:
          0.31293297 = sum of:
            0.31293297 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.31293297 = score(doc=230,freq=2.0), product of:
                0.41760176 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049257044 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  7. Marcum, D.B.: ¬The future of cataloging (2005) 0.03
    0.033768874 = product of:
      0.10130662 = sum of:
        0.10130662 = weight(_text_:title in 1086) [ClassicSimilarity], result of:
          0.10130662 = score(doc=1086,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3692435 = fieldWeight in 1086, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=1086)
      0.33333334 = coord(1/3)
    
    Abstract
    This thought piece on the future of cataloging is long on musings and short on predictions. But that isn't to denigrate it, only to clarify its role given the possible connotations of the title. Rather than coming up with solutions or predictions, Marcum ponders the proper role of cataloging in a Google age. Marcum cites the Google project to digitize much or all of the contents of a selected set of major research libraries as evidence that the world of cataloging is changing dramatically, and she briefly identifies ways in which the Library of Congress is responding to this new environment. But, Marcum cautions, "the future of cataloging is not something that the Library of Congress, or even the small library group with which we will meet, can or expects to resolve alone." She then poses some specific questions that should be considered, including how we can massively change our current MARC/AACR2 system without creating chaos.
  8. Auer, S.; Lehmann, J.: What have Innsbruck and Leipzig in common? : extracting semantics from Wiki content (2007) 0.03
    0.033768874 = product of:
      0.10130662 = sum of:
        0.10130662 = weight(_text_:title in 2481) [ClassicSimilarity], result of:
          0.10130662 = score(doc=2481,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3692435 = fieldWeight in 2481, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=2481)
      0.33333334 = coord(1/3)
    
    Abstract
    Wikis are established means for the collaborative authoring, versioning and publishing of textual articles. The Wikipedia project, for example, succeeded in creating by far the largest encyclopedia on the basis of a wiki alone. Recently, several approaches have been proposed for extending wikis to allow the creation of structured and semantically enriched content. However, the means for creating semantically enriched structured content are already available and are, if unconsciously, even used by Wikipedia authors. In this article, we present a method for revealing this structured content by extracting information from template instances. We suggest ways to efficiently query the vast amount of extracted information (e.g. more than 8 million RDF statements for the English Wikipedia version alone), leading to astonishing query answering possibilities (such as for the question in the title). We analyze the quality of the extracted content and propose strategies for quality improvements with only minor modifications of the wiki systems currently in use.
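
    A rough sketch of the template-instance idea: the key/value pairs inside a wiki template become RDF-like triples about the article. The wiki syntax handled here is deliberately simplified, and the article, template name and values are invented examples:

        import re

        article = "Innsbruck"
        wikitext = """{{Infobox Town
        | name       = Innsbruck
        | state      = Tyrol
        | population = 132493
        }}"""

        # Each "| key = value" line inside the template instance yields
        # one (subject, predicate, object) triple about the article.
        triples = [(article, key, value.strip())
                   for key, value in re.findall(r"\|\s*(\w+)\s*=\s*([^\n|]+)", wikitext)]
        print(triples)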
  9. Gatenby, J.; Thornburg, G.; Weitz, J.: Collected work clustering in WorldCat : three techniques for maintaining records (2015) 0.03
    0.033768874 = product of:
      0.10130662 = sum of:
        0.10130662 = weight(_text_:title in 2276) [ClassicSimilarity], result of:
          0.10130662 = score(doc=2276,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3692435 = fieldWeight in 2276, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=2276)
      0.33333334 = coord(1/3)
    
    Abstract
    WorldCat records are clustered into works, and within works, into content and manifestation clusters. A recent project revisited the clustering of collected works that had been previously sidelined because of the challenges posed by their complexity. Attention was given to both the identification of collected works and to the determination of the component works within them. By extensively analysing cast-list information, performance notes, contents notes, titles, uniform titles and added entries, the contents of collected works could be identified and differentiated so that correct clustering was achieved. Further work is envisaged in the form of refining the tests and weights and also in the creation and use of name/title authority records and other knowledge cards in clustering. There is a requirement to link collected works with their component works for use in search and retrieval.
  10. Guerrini, M.: Cataloguing based on bibliographic axiology (2010) 0.03
    0.033768874 = product of:
      0.10130662 = sum of:
        0.10130662 = weight(_text_:title in 2624) [ClassicSimilarity], result of:
          0.10130662 = score(doc=2624,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3692435 = fieldWeight in 2624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=2624)
      0.33333334 = coord(1/3)
    
    Abstract
    The article presents the work of Elaine Svenonius, The Intellectual Foundation of Information Organization, translated into Italian and published by Le Lettere of Florence within the series Pinakes, under the title Il fondamento intellettuale dell'organizzazione dell'informazione. The Intellectual Foundation of Information Organization defines the theoretical aspects of library science, its philosophical basics and principles, and the purposes that must be kept in mind, abstracting from the technology used in a library. The book first deals with information organization and the bibliographic universe, in particular using the bibliographic entities defined in FRBR. It then analyzes all the specific languages by which works and subjects are treated. This work, already acknowledged as a classic, organizes, synthesizes and makes easily understood the whole complex of knowledge, practices and procedures developed in the last 150 years.
  11. Poli, R.: Steps towards a synthetic methodology (2006) 0.03
    0.031837597 = product of:
      0.09551279 = sum of:
        0.09551279 = weight(_text_:title in 1094) [ClassicSimilarity], result of:
          0.09551279 = score(doc=1094,freq=4.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3481261 = fieldWeight in 1094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.03125 = fieldNorm(doc=1094)
      0.33333334 = coord(1/3)
    
    Abstract
    Three of the principal theories that can be used to understand, categorize and organize the many aspects of reality have, prima facie, unexpected interdependencies. The theories to which I refer are those concerned with the causal connections among the items that make up the real world, the space and time in which they grow, and the levels of reality. What matters most is the discovery that the difficulties internal to theories of causation and to theories of space and time can be better understood, and perhaps dealt with, in the categorial context furnished by the theory of the levels of reality. The structural condition for this development to be possible is that the first two theories be suitably generalized. In other words, the thesis outlined in this position paper has two aspects. The first is the hypothesis that the theory of levels can function as a general categorial framework within which to recast our understanding of causal and spatio-temporal phenomena. The second is that the best-known and most usual categorizations of causal, spatial and temporal dependencies are not sufficiently generic and are structurally constrained to express only some of the relevant phenomena. Explicit consideration of the theory of the levels of reality furnishes the keystone for generalizing both the theory of causes and the theory of times and spaces. To assert that a theory is not sufficiently generic is to say that the manner in which it is configured may hamper rather than help full understanding of the relevant phenomena. From this assertion follow two of the three obstructions mentioned in the title of this paper. The third obstruction is easier to specify. Whilst the theories of causality and space-time are robust and well-structured, whatever criticisms one might wish to make of them, the situation of the theory of the levels of reality is entirely different: it is not at all widely endorsed or thoroughly developed. On the contrary, it is a decidedly minority proposal, and it still has many obscure, or simply under-developed, aspects. The theory of levels is the third obstruction cited in the title. Nonetheless, the approach outlined in what follows seems to be the most promising route to follow.
  12. Putkey, T.: Using SKOS to express faceted classification on the Semantic Web (2011) 0.03
    0.031837597 = product of:
      0.09551279 = sum of:
        0.09551279 = weight(_text_:title in 311) [ClassicSimilarity], result of:
          0.09551279 = score(doc=311,freq=4.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3481261 = fieldWeight in 311, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.03125 = fieldNorm(doc=311)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper looks at the Simple Knowledge Organization System (SKOS) to investigate how a faceted classification can be expressed in RDF and shared on the Semantic Web. Statement of the problem: a faceted classification outlines facets as well as subfacets and facet values, and establishes hierarchical and associative relationships among them. RDF is used to describe how a specific URI relates to a facet value. Not only does RDF decompose "information into pieces," but by incorporating facet values RDF also gives the URI the hierarchical and associative relationships expressed in the faceted classification. Combining faceted classification and RDF creates more knowledge than if the two stood alone: an application understands the subject-predicate-object relationship in RDF and can display hierarchical and associative relationships based on the object (facet) value. This paper continues to investigate whether the above idea is indeed useful, used, and applicable. If so, how can a faceted classification be expressed in RDF, and what would this expression look like? Literature review: this paper used the same articles as A Survey of Faceted Classification: History, Uses, Drawbacks and the Semantic Web (Putkey, 2010). In that paper, appropriate resources were discovered by searching various databases for "faceted classification" and "faceted search," in either the descriptor or title fields. Citations were also followed to find further articles, and the Internet was searched for the same terms. To retrieve the documents about RDF, searches combined "faceted classification" and "RDF," looking for these words in either the descriptor or title.
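
    A minimal sketch of one facet expressed in SKOS, emitted as Turtle by hand so the vocabulary usage stays easy to inspect (the facet and its values are invented examples, not taken from the paper):

        BASE = "http://example.org/facets/"
        facet = "material"
        values = {"wood": [], "metal": ["steel", "copper"]}   # value -> narrower values

        lines = ["@prefix skos: <http://www.w3.org/2004/02/skos/core#> .",
                 f"<{BASE}{facet}> a skos:ConceptScheme ."]
        for value, narrower in values.items():
            lines.append(f"<{BASE}{value}> a skos:Concept ; skos:inScheme <{BASE}{facet}> .")
            for nv in narrower:
                # hierarchical relationship within the facet
                lines.append(f"<{BASE}{nv}> a skos:Concept ; skos:broader <{BASE}{value}> .")
        print("\n".join(lines))

    The associative relationships mentioned in the abstract would map onto skos:related in the same fashion.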
  13. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.03
    0.029244704 = product of:
      0.08773411 = sum of:
        0.08773411 = weight(_text_:title in 1202) [ClassicSimilarity], result of:
          0.08773411 = score(doc=1202,freq=6.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.31977427 = fieldWeight in 1202, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
      0.33333334 = coord(1/3)
    
    Abstract
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
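
    The retrieval gap Beagle describes is easy to demonstrate; a toy sketch with invented stand-ins for the HT 101-395 sample titles:

        titles = ["Urban sociology in transition",
                  "The city in history",
                  "Urbanization and inequality",
                  "Skyscrapers and the making of downtown"]

        urban = {t for t in titles if "urban" in t.lower()}
        city = {t for t in titles if "city" in t.lower()}
        # Disjoint result sets, and one relevant title missed by both keywords.
        print(urban & city, [t for t in titles if t not in urban | city])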
  14. Robertson, S.E.; Sparck Jones, K.: Simple, proven approaches to text retrieval (1997) 0.03
    0.028140724 = product of:
      0.08442217 = sum of:
        0.08442217 = weight(_text_:title in 4532) [ClassicSimilarity], result of:
          0.08442217 = score(doc=4532,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3077029 = fieldWeight in 4532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4532)
      0.33333334 = coord(1/3)
    
    Abstract
    This technical note describes straightforward techniques for document indexing and retrieval that have been solidly established through extensive testing and are easy to apply. They are useful for many different types of text material, are viable for very large files, and have the advantage that they do not require special skills or training for searching, but are easy for end users. The document and text retrieval methods described here have a sound theoretical basis, are well established by extensive testing, and the ideas involved are now implemented in some commercial retrieval systems. Testing in the last few years has, in particular, shown that the methods presented here work very well with full texts, not only titles and abstracts, and with large files of texts containing three quarters of a million documents. These tests, the TREC tests (see Harman 1993-1997; IP&M 1995), have been rigorous comparative evaluations involving many different approaches to information retrieval. These techniques depend on the use of simple terms for indexing both request and document texts; on term weighting exploiting statistical information about term occurrences; on scoring for request-document matching, using these weights, to obtain a ranked search output; and on relevance feedback to modify request weights or term sets in iterative searching. The normal implementation is via an inverted file organisation using a term list with linked document identifiers, plus counting data, and pointers to the actual texts. The user's request can be a word list, phrases, sentences or extended text.
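
    A compact sketch of the classic setup the note describes: an inverted file mapping each term to (document identifier, count) postings, idf-style term weighting, and a ranked request-document match. The exact weighting formulas in this lineage differ in detail; this is just the skeleton:

        import math
        from collections import Counter, defaultdict

        docs = {1: "simple proven text retrieval",
                2: "text retrieval with term weighting",
                3: "relevance feedback in iterative searching"}

        index = defaultdict(list)                        # term -> [(doc_id, freq)]
        for doc_id, text in docs.items():
            for term, freq in Counter(text.split()).items():
                index[term].append((doc_id, freq))

        def search(request: str):
            scores = Counter()
            for term in request.split():
                postings = index.get(term, [])
                if postings:
                    idf = math.log(len(docs) / len(postings))
                    for doc_id, freq in postings:
                        scores[doc_id] += freq * idf     # accumulate matching weight
            return scores.most_common()                  # ranked search output

        print(search("text retrieval weighting"))        # doc 2 ranks first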
  15. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 2: SFX, a generic linking solution (1999) 0.03
    0.028140724 = product of:
      0.08442217 = sum of:
        0.08442217 = weight(_text_:title in 1241) [ClassicSimilarity], result of:
          0.08442217 = score(doc=1241,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3077029 = fieldWeight in 1241, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1241)
      0.33333334 = coord(1/3)
    
    Abstract
    This is the second part of two articles about reference linking in hybrid digital libraries. The first part, Frameworks for Linking, described the current state of the art and contrasted various approaches to the problem. It identified static and dynamic linking solutions, as well as open and closed linking frameworks, and included an extensive bibliography. The second part describes our work at the University of Ghent to address these issues. SFX is a generic linking system that we have developed for our own needs, but its underlying concepts can be applied in a wide range of digital libraries. This is a description of the approach to the creation of extended services in a hybrid library environment taken by the Library Automation team at the University of Ghent. The ongoing research has been grouped under the working title Special Effects (SFX). In order to explain the SFX concepts in a comprehensive way, the discussion starts with a brief description of pre-SFX experiments. Thereafter, the basics of the SFX approach are explained briefly, in combination with the concrete implementation choices made for the Elektron SFX linking experiment. Elektron was the name of a modest digital library collaboration between the universities of Ghent, Louvain and Antwerp.
  16. Eversberg, B.: Zum Thema "Migration" - Beispiel USA (2018) 0.03
    0.028140724 = product of:
      0.08442217 = sum of:
        0.08442217 = weight(_text_:title in 4386) [ClassicSimilarity], result of:
          0.08442217 = score(doc=4386,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3077029 = fieldWeight in 4386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4386)
      0.33333334 = coord(1/3)
    
    Abstract
    For the KOHA and FOLIO systems, the following current demos are available and can be tried out with all functions: KOHA: complete demo application from Bywater Solutions: https://bywatersolutions.com/koha-demo user = bywater / password = bywater. Recommended: Cataloguing, with the MARC forms and direct data retrieval via Z39. FOLIO (GBV: "The Next-Generation Library System") demo: https://folio-demo.gbv.de/ user = diku_admin / password = admin. Recommended: "Inventory", then the "New" button for cataloguing, then "Title Data" for a new record. This part, however, still appears to be in a beta state. Also: FOLIO presentation, Göttingen, April 2018: https://www.zbw-mediatalk.eu/de/2018/05/folio-info-day-a-look-at-the-next-generation-library-system/
  17. Cathro, W.: New frameworks for resource discovery and delivery : the changing role of the catalogue (2006) 0.03
    0.028027851 = product of:
      0.08408355 = sum of:
        0.08408355 = product of:
          0.1681671 = sum of:
            0.1681671 = weight(_text_:catalogue in 6107) [ClassicSimilarity], result of:
              0.1681671 = score(doc=6107,freq=14.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                0.7063962 = fieldWeight in 6107, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6107)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    There is currently a lively debate about the role of the library catalogue and its relationship to other resource discovery tools. An example of this debate is the recent publication of a report commissioned by the Library of Congress on "the changing nature of the catalogue". As part of this debate, the role of union catalogues is also being re-examined. Some commentators have suggested that union catalogues, by virtue of their size, can aggregate both supply and demand, thus increasing the chance that a relatively little-used resource will be discovered by somebody for whom it is relevant. During the past year, the National Library of Australia (NLA) has been considering the future of its catalogue and its role in the resource discovery and delivery process. The review was prompted, in part, by the redevelopment of the Australian union catalogue and its exposure on the web as a free public service, badged as Libraries Australia. The NLA examined the enablers and inhibitors of the proposition "that it replace its catalogue with Libraries Australia, as the primary database to be searched by users". Flowing from this review, the NLA aims to undertake a number of tasks to move in the medium to long term towards a scenario in which it could deprecate its local catalogue. (Related: the Calhoun report.)
  18. Catalogue général, imprimés des origines à 1970 (1996) 0.03
    0.025424477 = product of:
      0.07627343 = sum of:
        0.07627343 = product of:
          0.15254685 = sum of:
            0.15254685 = weight(_text_:catalogue in 6472) [ClassicSimilarity], result of:
              0.15254685 = score(doc=6472,freq=2.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                0.6407824 = fieldWeight in 6472, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6472)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  19. Hider, P.: ¬The bibliographic advantages of a centralised union catalogue for ILL and resource sharing (2003) 0.03
    0.025424477 = product of:
      0.07627343 = sum of:
        0.07627343 = product of:
          0.15254685 = sum of:
            0.15254685 = weight(_text_:catalogue in 1737) [ClassicSimilarity], result of:
              0.15254685 = score(doc=1737,freq=2.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                0.6407824 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  20. Hildreth, C.R.: Accounting for users' inflated assessments of on-line catalogue search performance and usefulness : an experimental study (2001) 0.03
    0.025424477 = product of:
      0.07627343 = sum of:
        0.07627343 = product of:
          0.15254685 = sum of:
            0.15254685 = weight(_text_:catalogue in 4130) [ClassicSimilarity], result of:
              0.15254685 = score(doc=4130,freq=2.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                0.6407824 = fieldWeight in 4130, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4130)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    

Languages

  • e 108
  • d 86
  • a 4
  • el 2
  • f 1
  • i 1
  • m 1
  • nl 1

Types

  • a 90
  • i 10
  • b 5
  • m 5
  • r 2
  • s 2
  • x 2
  • n 1