Search (768 results, page 1 of 39)

  • Filter: year_i:[2010 TO 2020}
  1. Aslanidi, M.; Papadakis, I.; Stefanidakis, M.: Name and title authorities in the music domain : alignment of UNIMARC authorities format with RDA (2018) 0.13
    0.12700345 = product of:
      0.19050516 = sum of:
        0.16714738 = weight(_text_:title in 5178) [ClassicSimilarity], result of:
          0.16714738 = score(doc=5178,freq=4.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.6092207 = fieldWeight in 5178, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5178)
        0.023357773 = product of:
          0.046715546 = sum of:
            0.046715546 = weight(_text_:22 in 5178) [ClassicSimilarity], result of:
              0.046715546 = score(doc=5178,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.2708308 = fieldWeight in 5178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5178)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article discusses and highlights alignment issues that arise between UNIMARC Authorities Format and Resource Description and Access (RDA) regarding the creation of name and title authorities for musical works and creators. More specifically, RDA, as an implementation of the FRAD model, is compared with the UNIMARC Authorities Format (Updates 2012 and 2016) in an effort to highlight various cases where the discovery of equivalent fields between the two standards is not obvious. The study is envisioned as a first step in an ongoing process of working with the UNIMARC community throughout RDA's advancement and progression regarding the entities [musical] Work and Names.
    Date
    19. 3.2019 12:17:22
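    Code sketch
    The indented figures above are Lucene ClassicSimilarity "explain" output for this record. As a minimal sketch of that arithmetic (plain Python, no Lucene dependency), the contributions printed for doc 5178 can be reproduced from the tf, idf, fieldNorm, queryNorm, and coord factors shown; nothing below touches an actual index.
      import math

      QUERY_NORM = 0.049257044  # queryNorm printed in the explain output above

      def term_score(freq, idf, field_norm):
          # score = queryWeight * fieldWeight, with tf = sqrt(freq)
          query_weight = idf * QUERY_NORM
          field_weight = math.sqrt(freq) * idf * field_norm
          return query_weight * field_weight

      title_part = term_score(freq=4.0, idf=5.570018, field_norm=0.0546875)
      t22_part = term_score(freq=2.0, idf=3.5018296, field_norm=0.0546875) * 0.5  # coord(1/2)
      total = (title_part + t22_part) * (2.0 / 3.0)  # coord(2/3): two of three query clauses matched

      print(round(title_part, 8))  # ~0.16714738
      print(round(total, 8))       # ~0.12700345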
  2. Thelwall, M.; Sud, P.; Wilkinson, D.: Link and co-inlink network diagrams with URL citations or title mentions (2012) 0.12
    0.12368565 = product of:
      0.18552847 = sum of:
        0.16884434 = weight(_text_:title in 57) [ClassicSimilarity], result of:
          0.16884434 = score(doc=57,freq=8.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.6154058 = fieldWeight in 57, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=57)
        0.016684124 = product of:
          0.03336825 = sum of:
            0.03336825 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.03336825 = score(doc=57,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.19345059 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=57)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Webometric network analyses have been used to map the connectivity of groups of websites to identify clusters, important sites or overall structure. Such analyses have mainly been based upon hyperlink counts, the number of hyperlinks between a pair of websites, although some have used title mentions or URL citations instead. The ability to automatically gather hyperlink counts from Yahoo! ceased in April 2011 and the ability to manually gather such counts was due to cease by early 2012, creating a need for alternatives. This article assesses URL citations and title mentions as possible replacements for hyperlinks in both binary and weighted direct link and co-inlink network diagrams. It also assesses three different types of data for the network connections: hit count estimates, counts of matching URLs, and filtered counts of matching URLs. Results from analyses of U.S. library and information science departments and U.K. universities give evidence that metrics based upon URLs or titles can be appropriate replacements for metrics based upon hyperlinks for both binary and weighted networks, although filtered counts of matching URLs are necessary to give the best results for co-title mention and co-URL citation network diagrams.
    Date
    6. 4.2012 18:16:22
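    Code sketch
    The co-inlink networks in the record above connect two sites whenever a third source URL-cites (or title-mentions) both of them, with the edge weight equal to the number of such common sources. A small sketch with networkx, using invented citation data rather than the study's filtered search-engine counts:
      from itertools import combinations
      import networkx as nx

      # Hypothetical URL-citation data: which target sites each source page mentions.
      citations = {
          "blog.example.org": {"lis.dept-a.edu", "lis.dept-b.edu"},
          "news.example.com": {"lis.dept-a.edu", "lis.dept-b.edu", "lis.dept-c.edu"},
          "wiki.example.net": {"lis.dept-b.edu", "lis.dept-c.edu"},
      }

      co_inlink = nx.Graph()
      for source, targets in citations.items():
          # every pair of targets cited by the same source gains one unit of co-inlink weight
          for a, b in combinations(sorted(targets), 2):
              weight = co_inlink.get_edge_data(a, b, default={"weight": 0})["weight"]
              co_inlink.add_edge(a, b, weight=weight + 1)

      for a, b, data in co_inlink.edges(data=True):
          print(a, "--", b, "weight:", data["weight"])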
  3. Smiraglia, R.P.; Lee, H.-L.: Rethinking the authorship principle (2012) 0.11
    0.11157423 = product of:
      0.16736135 = sum of:
        0.10130662 = weight(_text_:title in 5575) [ClassicSimilarity], result of:
          0.10130662 = score(doc=5575,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3692435 = fieldWeight in 5575, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=5575)
        0.06605473 = product of:
          0.13210946 = sum of:
            0.13210946 = weight(_text_:catalogue in 5575) [ClassicSimilarity], result of:
              0.13210946 = score(doc=5575,freq=6.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                0.5549339 = fieldWeight in 5575, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5575)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The fundamental principle of order in the library catalogue is the authorship principle, which serves as the organizing node of an alphabetico-classed system, in which "texts" of "works" are organized first alphabetically by uniform title of the progenitor work and then are subarranged using titles for variant instantiations, under the heading for an "author." We analyze case studies of entries from (1) the first documented imperial library catalogue, the Seven Epitomes (Qilue 七略), in China; (2) Abelard's Works, which featured prominently in the 1848 testimony of Antonio Panizzi; and (3) The French Chef and the large family of instantiated works associated with it. Our analysis shows that the catalogue typically contains many large superwork sets. A more pragmatic approach to the design of catalogues is to array descriptions of resources in relation to the superwork sets to which they might belong. In all cases, a multidimensional faceted arrangement incorporating ideational nodes from the universe of recorded knowledge holds promise for greatly enhanced retrieval capability.
  4. Blackman, C.; Moore, E.R.; Seikel, M.; Smith, M.: WorldCat and SkyRiver (2014) 0.11
    0.10886009 = product of:
      0.16329013 = sum of:
        0.14326918 = weight(_text_:title in 2602) [ClassicSimilarity], result of:
          0.14326918 = score(doc=2602,freq=4.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.52218914 = fieldWeight in 2602, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=2602)
        0.020020949 = product of:
          0.040041897 = sum of:
            0.040041897 = weight(_text_:22 in 2602) [ClassicSimilarity], result of:
              0.040041897 = score(doc=2602,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.23214069 = fieldWeight in 2602, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2602)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In 2009, a new company, SkyRiver, began offering bibliographic utility services to libraries in direct competition with OCLC's WorldCat. This study examines the differences between the two databases in terms of hit rates, total number of records found for each title in the sample, number of non-English language records, and the presence and completeness of several elements in the most-held bibliographic record for each title. While this study discovered that the two databases had virtually the same hit rates and record fullness for the sample used, with encoding levels as the sole exception, the study results do indicate meaningful differences in the number of duplicate records and non-English-language records available in each database for recently published scholarly monographs.
    Date
    10. 9.2000 17:38:22
  5. Green, R.: Facet detection using WorldCat and WordNet (2014) 0.09
    0.09436589 = product of:
      0.14154883 = sum of:
        0.11819105 = weight(_text_:title in 1419) [ClassicSimilarity], result of:
          0.11819105 = score(doc=1419,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.43078408 = fieldWeight in 1419, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1419)
        0.023357773 = product of:
          0.046715546 = sum of:
            0.046715546 = weight(_text_:22 in 1419) [ClassicSimilarity], result of:
              0.046715546 = score(doc=1419,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.2708308 = fieldWeight in 1419, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1419)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Because procedures for establishing facets tend toward subjectivity, this pilot project investigates whether the facet structure of a subject literature can be discerned automatically on the basis of its own metadata. Nouns found in the titles of works retrieved from the WorldCat bibliographic database based on Dewey number are mapped against the nodes of the WordNet noun network. Density measures are computed for these nodes to identify the nodes that best summarize the title noun data and best correspond to facets of the subject. Results of the work to date are promising enough to warrant further investigation.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
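    Code sketch
    A rough sketch of the mapping step in the record above, assuming NLTK's WordNet interface: nouns taken from retrieved titles are mapped to synsets, every node on a hypernym path is credited, and heavily credited nodes become candidate facet summaries. The study's actual density measures are not given here, so a plain frequency count stands in for them, and the title nouns are invented.
      from collections import Counter
      from nltk.corpus import wordnet as wn  # requires a prior nltk.download('wordnet')

      # Hypothetical nouns extracted from WorldCat titles for one Dewey class.
      title_nouns = ["dog", "cat", "horse", "training", "breeding", "nutrition"]

      node_counts = Counter()
      for noun in title_nouns:
          synsets = wn.synsets(noun, pos=wn.NOUN)
          if not synsets:
              continue
          # credit every hypernym of the first (most frequent) sense
          for path in synsets[0].hypernym_paths():
              for node in path:
                  node_counts[node.name()] += 1

      # Nodes credited by many different title nouns are candidate facet summaries.
      for name, count in node_counts.most_common(5):
          print(count, name)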
  6. Maurer, M.B.; Shakeri, S.: Disciplinary differences : LCSH and keyword assignment for ETDs from different disciplines (2016) 0.09
    0.09436589 = product of:
      0.14154883 = sum of:
        0.11819105 = weight(_text_:title in 5122) [ClassicSimilarity], result of:
          0.11819105 = score(doc=5122,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.43078408 = fieldWeight in 5122, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5122)
        0.023357773 = product of:
          0.046715546 = sum of:
            0.046715546 = weight(_text_:22 in 5122) [ClassicSimilarity], result of:
              0.046715546 = score(doc=5122,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.2708308 = fieldWeight in 5122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5122)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This research concerns the frequency of the assignment of author-supplied keyword strings and cataloger-supplied subject heading strings within a library catalog. The results reveal that, on average, more author-assigned keywords and more cataloger-assigned Library of Congress Subject Headings were assigned to works emerging from the arts & humanities than to works emerging from the social sciences and the science, technology, engineering, and mathematics (STEM) disciplines. The STEM disciplines in particular received less topical metadata, in part because of the under-assignment of name/title, geographical, and corporate subject headings. These findings reveal how librarians could increase their understanding of how topical access is functioning within academic disciplines.
    Date
    17. 3.2019 18:04:22
  7. Wagner, B.: ¬Die ältesten Drucke im Internet : vom lokalen Inkunabelkatalog zu einem koordinierten nationalen Digitalisierungsprojekt (2011) 0.09
    0.09296223 = product of:
      0.13944334 = sum of:
        0.10130662 = weight(_text_:title in 4502) [ClassicSimilarity], result of:
          0.10130662 = score(doc=4502,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3692435 = fieldWeight in 4502, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=4502)
        0.038136713 = product of:
          0.07627343 = sum of:
            0.07627343 = weight(_text_:catalogue in 4502) [ClassicSimilarity], result of:
              0.07627343 = score(doc=4502,freq=2.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                0.3203912 = fieldWeight in 4502, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4502)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    For more than 20 years, the Deutsche Forschungsgemeinschaft has funded the editorial office of the »Incunabula Short Title Catalogue« of the British Library, London, at the Bayerische Staatsbibliothek in Munich. The forthcoming completion of this undertaking is an occasion to review the current state of the description of incunabula in electronic finding aids from a local, regional, and international perspective. Over the past decade, these often long-term undertakings have increasingly been joined by projects to digitize incunabula; within a DFG project at the Bayerische Staatsbibliothek, for example, more than 4,000 incunabula have been fully digitized since 2008. The article sets out the consequences this has for the recording of incunabula in union catalogues and local OPACs and for the mutual interlinking of these resources, and outlines perspectives for a future coordinated digitization of the incunabula holdings of German libraries.
  8. Baker, T.: Dublin Core Application Profiles : current approaches (2010) 0.08
    0.080885045 = product of:
      0.121327564 = sum of:
        0.10130662 = weight(_text_:title in 3737) [ClassicSimilarity], result of:
          0.10130662 = score(doc=3737,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3692435 = fieldWeight in 3737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=3737)
        0.020020949 = product of:
          0.040041897 = sum of:
            0.040041897 = weight(_text_:22 in 3737) [ClassicSimilarity], result of:
              0.040041897 = score(doc=3737,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.23214069 = fieldWeight in 3737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3737)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Dublin Core Metadata Initiative currently defines a Dublin Core Application Profile as a set of specifications about the metadata design of a particular application or for a particular domain or community of users. The current approach to application profiles is summarized in the Singapore Framework for Application Profiles [SINGAPORE-FRAMEWORK] (see Figure 1). While the approach was originally developed as a means of specifying customized applications based on the fifteen elements of the Dublin Core Element Set (e.g., Title, Date, Subject), it has evolved into a generic approach to creating metadata that meets specific local requirements while integrating coherently with other RDF-based metadata.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
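    Code sketch
    As a minimal illustration of the kind of record an application profile constrains, the snippet below builds a three-element Dublin Core description with rdflib; the resource URI and values are invented, and the profile constraints themselves (which properties are mandatory, repeatable, and bound to which vocabularies) are only indicated in a comment.
      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DC

      g = Graph()
      book = URIRef("http://example.org/item/42")  # hypothetical resource

      # Three of the fifteen Dublin Core elements named above: Title, Date, Subject.
      g.add((book, DC.title, Literal("A hypothetical example record")))
      g.add((book, DC.date, Literal("2010")))
      g.add((book, DC.subject, Literal("Metadata application profiles")))
      # An application profile would additionally state which properties are mandatory,
      # which are repeatable, and which controlled vocabularies their values must come from.

      print(g.serialize(format="turtle"))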
  9. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.08
    0.080885045 = product of:
      0.121327564 = sum of:
        0.10130662 = weight(_text_:title in 3197) [ClassicSimilarity], result of:
          0.10130662 = score(doc=3197,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3692435 = fieldWeight in 3197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=3197)
        0.020020949 = product of:
          0.040041897 = sum of:
            0.040041897 = weight(_text_:22 in 3197) [ClassicSimilarity], result of:
              0.040041897 = score(doc=3197,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.23214069 = fieldWeight in 3197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3197)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information sciences a broad and comprehensible introduction to indexing. This title consists of twelve chapters: an Introduction to subject headings and thesauri; Automatic indexing versus manual indexing; Techniques applied in automatic indexing of text material; Automatic indexing of images; The black art of indexing moving images; Automatic indexing of music; Taxonomies and ontologies; Metadata formats and indexing; Tagging; Topic maps; Indexing the web; and The Semantic Web.
    Date
    24. 8.2016 14:03:22
  10. Mai, F.; Galke, L.; Scherp, A.: Using deep learning for title-based semantic subject indexing to reach competitive performance to full-text (2018) 0.07
    0.07445337 = product of:
      0.22336009 = sum of:
        0.22336009 = weight(_text_:title in 4093) [ClassicSimilarity], result of:
          0.22336009 = score(doc=4093,freq=14.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.8141054 = fieldWeight in 4093, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4093)
      0.33333334 = coord(1/3)
    
    Abstract
    For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question of how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.9%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
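    Code sketch
    The classifiers in the record above are deep learning models; as a far simpler stand-in, the sketch below shows the title-only setup with a TF-IDF plus one-vs-rest logistic regression baseline from scikit-learn. The titles and subject labels are invented placeholders, not PubMed or EconBiz data.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import MultiLabelBinarizer

      # Invented toy data: publication titles with multi-label subject annotations.
      titles = [
          "Deep learning for medical image segmentation",
          "Monetary policy and inflation expectations",
          "Neural networks for economic time series forecasting",
          "Randomized trials of hypertension treatment",
      ]
      labels = [
          ["machine learning", "medicine"],
          ["economics"],
          ["machine learning", "economics"],
          ["medicine"],
      ]

      mlb = MultiLabelBinarizer()
      Y = mlb.fit_transform(labels)

      # Train one binary classifier per subject label on title features only.
      clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression(max_iter=1000)))
      clf.fit(titles, Y)

      pred = clf.predict(["Forecasting inflation with recurrent neural networks"])
      print(mlb.inverse_transform(pred))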
  11. Sjökvist, P.: Transcription in rare books cataloging (2016) 0.07
    0.06823764 = product of:
      0.20471291 = sum of:
        0.20471291 = weight(_text_:title in 5127) [ClassicSimilarity], result of:
          0.20471291 = score(doc=5127,freq=6.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.74613994 = fieldWeight in 5127, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5127)
      0.33333334 = coord(1/3)
    
    Abstract
    The implementation of RDA poses questions regarding its application to early printed material, e.g., concerning the transcription of title information. Cataloging rules used today for early printed books often include a normalization that is misleading, both for libraries and for users. In this article, ideas concerning transcription according to RDA are discussed. These ideas focus on the dual purposes of identifying and retrieving an item. For the first purpose, I suggest a transcription of the title that closely follows the original ("take what you see"), and for the second, a completely normalized variant title.
  12. Gnoli, C.; Merli, G.; Pavan, G.; Bernuzzi, E.; Priano, M.: Freely faceted classification for a Web-based bibliographic archive : the BioAcoustic Reference Database (2010) 0.07
    0.067404196 = product of:
      0.10110629 = sum of:
        0.08442217 = weight(_text_:title in 3739) [ClassicSimilarity], result of:
          0.08442217 = score(doc=3739,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3077029 = fieldWeight in 3739, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3739)
        0.016684124 = product of:
          0.03336825 = sum of:
            0.03336825 = weight(_text_:22 in 3739) [ClassicSimilarity], result of:
              0.03336825 = score(doc=3739,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.19345059 = fieldWeight in 3739, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3739)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Integrative Level Classification (ILC) research project is experimenting with a knowledge organization system based on phenomena rather than disciplines. Each phenomenon has a constant notation, which can be combined with that of any other phenomenon in a freely faceted structure. Citation order can express the differential focality of the facets. Very specific subjects can have long classmarks, although their complexity is reduced by various devices. Freely faceted classification is being tested by indexing a corpus of about 3300 papers in the interdisciplinary domain of bioacoustics. The subjects of these papers often include phenomena from a wide variety of integrative levels (mechanical waves, animals, behaviour, vessels, fishing, law, ...) as well as information about the methods of study, as predicted in the León Manifesto. The archive is recorded in a MySQL database, and can be fed and searched through PHP Web interfaces. The indexer's work is made easier by mechanisms that suggest possible classes by matching title words with terms in the ILC schedules and that automatically synthesize the verbal caption corresponding to the classmark being edited. Users can search the archive by selecting and combining values in each facet. Search refinement should be improved, especially for the cases where no record, or too many records, match the faceted query. However, experience is being gained progressively, showing that freely faceted classification by phenomena, theories, and methods is feasible and successfully working.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
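    Code sketch
    The class-suggestion mechanism mentioned in the record above (matching title words against terms in the ILC schedules) can be sketched as follows; the schedule fragment and notations are invented placeholders, not actual ILC classmarks.
      # Hypothetical fragment of a classification schedule: notation -> verbal caption.
      schedule = {
          "mq": "waves",
          "mqs": "sound; mechanical waves",
          "ou": "birds",
          "ouc": "bird song; vocal behaviour",
          "w": "technology; instruments",
      }

      def suggest_classes(title):
          # Suggest candidate classes whose captions share words with the title.
          title_words = {w.strip(".,;:").lower() for w in title.split()}
          suggestions = []
          for notation, caption in schedule.items():
              caption_words = {w.strip(".,;:").lower() for w in caption.replace(";", " ").split()}
              overlap = title_words & caption_words
              if overlap:
                  suggestions.append((notation, caption, sorted(overlap)))
          return suggestions

      for suggestion in suggest_classes("Seasonal variation in bird song recorded with underwater instruments"):
          print(suggestion)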
  13. McCain, K.W.: Mining full-text journal articles to assess obliteration by incorporation : Herbert A. Simon's concepts of bounded rationality and satisficing in economics, management, and psychology (2015) 0.07
    0.067404196 = product of:
      0.10110629 = sum of:
        0.08442217 = weight(_text_:title in 2260) [ClassicSimilarity], result of:
          0.08442217 = score(doc=2260,freq=2.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.3077029 = fieldWeight in 2260, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2260)
        0.016684124 = product of:
          0.03336825 = sum of:
            0.03336825 = weight(_text_:22 in 2260) [ClassicSimilarity], result of:
              0.03336825 = score(doc=2260,freq=2.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.19345059 = fieldWeight in 2260, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2260)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This study explores the usefulness of full-text retrieval in assessing obliteration by incorporation (OBI) by comparing patterns of OBI and citation substitution across economics, management, and psychology for two concept catch phrases: bounded rationality and satisficing. Searches using each term are conducted in JSTOR and in selected additional full-text journal sources covering the years 1987-2011. Two measures of OBI are used: one tallying the presence or absence of references to Simon's oeuvre linked to the catch phrase (strict OBI), and one counting only papers lacking any embedded reference as evidence of obliteration (lenient OBI). By either measure, OBI existed but varied across subject area, time period, and catch phrase. Economics had the highest strict OBI (82%) and lenient OBI (43%) for bounded rationality and the highest strict OBI (64%) for satisficing; all three subject areas were essentially tied for lenient OBI at about 30%. Sixty-two percent of the articles for bounded rationality in psychology were retrieved only because the catch phrase occurred in a title in the article bibliography. OBI research can benefit from full-text searching; the main tradeoff is more detailed and nuanced evidence concerning OBI existence and trends versus increased noise in the retrieval.
    Date
    15.10.2015 19:22:55
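    Code sketch
    Under one reading of the two measures in the record above (an assumption here; the operational definitions are in the article), strict OBI counts every catch-phrase paper whose citation of Simon is not linked to the phrase, while lenient OBI counts only papers with no reference to Simon at all. A toy computation with invented counts:
      # Each paper using the catch phrase falls into exactly one of these categories.
      papers = {
          "reference to Simon linked to the catch phrase": 35,
          "reference to Simon, but not linked to the phrase": 40,
          "no reference to Simon at all": 25,
      }
      total = sum(papers.values())

      strict_obi = (papers["reference to Simon, but not linked to the phrase"]
                    + papers["no reference to Simon at all"]) / total
      lenient_obi = papers["no reference to Simon at all"] / total

      print(f"strict OBI:  {strict_obi:.0%}")   # 65%
      print(f"lenient OBI: {lenient_obi:.0%}")  # 25%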
  14. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.07
    0.06672676 = product of:
      0.10009013 = sum of:
        0.08357369 = weight(_text_:title in 1155) [ClassicSimilarity], result of:
          0.08357369 = score(doc=1155,freq=4.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.30461034 = fieldWeight in 1155, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1155)
        0.01651644 = product of:
          0.03303288 = sum of:
            0.03303288 = weight(_text_:22 in 1155) [ClassicSimilarity], result of:
              0.03303288 = score(doc=1155,freq=4.0), product of:
                0.17248978 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049257044 = queryNorm
                0.19150631 = fieldWeight in 1155, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1155)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    All the papers underwent a thorough and rigorous peer-review process. The review and selection this year was highly competitive, and only papers containing significant research results, innovative methods, or novel and best practices were accepted for publication. Only 29 of 89 submissions were accepted as full papers, representing 32.5% of the total number of submissions. Additional contributions covering noteworthy and important results in special tracks or project reports were also accepted, bringing the total to 42 accepted contributions. This year's conference included two outstanding keynote speakers. Dr. Stefan Gradmann, a professor in the arts department of KU Leuven (Belgium) and director of its university library, addressed semantic research drawing on his work with Europeana. The title of his presentation was "Towards a Semantic Research Library: Digital Humanities Research, Europeana and the Linked Data Paradigm". Dr. Michail Salampasis, associate professor at our conference host institution, the Department of Informatics of the Alexander TEI of Thessaloniki, presented new potential at the intersection of search and linked data. The title of his talk was "Rethinking the Search Experience: What Could Professional Search Systems Do Better?"
    Date
    17.12.2013 12:51:22
  15. Karpuk, S.: Cataloging seventeenth- and eighteenth-century German dissertations : guidelines and observations (2010) 0.06
    0.063675195 = product of:
      0.19102558 = sum of:
        0.19102558 = weight(_text_:title in 3555) [ClassicSimilarity], result of:
          0.19102558 = score(doc=3555,freq=4.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.6962522 = fieldWeight in 3555, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0625 = fieldNorm(doc=3555)
      0.33333334 = coord(1/3)
    
    Abstract
    The author provides historical background useful in understanding the title pages of seventeenth- and eighteenth-century German dissertations. Images of title pages are included, with details of bibliographic description, and Machine Readable Cataloging (MARC) coding, as well as links to examples of catalog records in the Yale Law Library catalog, MORRIS. This article also includes comments on Anglo-American Cataloguing Rules, Second Edition (AACR2) Rule 21.27 regarding the problem of authorship in early dissertations.
  16. O'Neill, E.T.; Kammerer, K.A.; Bennett, R.: ¬The aboutness of words (2017) 0.06
    0.05848941 = product of:
      0.17546822 = sum of:
        0.17546822 = weight(_text_:title in 3835) [ClassicSimilarity], result of:
          0.17546822 = score(doc=3835,freq=6.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.63954854 = fieldWeight in 3835, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.046875 = fieldNorm(doc=3835)
      0.33333334 = coord(1/3)
    
    Abstract
    Word aboutness is defined as the relationship between words and subjects associated with them. An aboutness coefficient is developed to estimate the strength of the aboutness relationship. Words that are randomly distributed across subjects are assumed to lack aboutness and the degree to which their usage deviates from a random pattern indicates the strength of the aboutness. To estimate aboutness, title words and their associated subjects are extracted from the titles of non-fiction English language books in the OCLC WorldCat database. The usage patterns of the title words are analyzed and used to compute aboutness coefficients for each of the common title words. Words with low aboutness coefficients (An and In) are commonly found in stop word lists, whereas words with high aboutness coefficients (Carbonate, Autism) are unambiguous and have a strong subject association. The aboutness coefficient potentially can enhance indexing, advance authority control, and improve retrieval.
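    Code sketch
    The record above defines the aboutness coefficient only informally, so the sketch below uses a generic stand-in rather than the authors' measure: a word's distribution over subjects is compared with the overall subject distribution, and larger divergence is read as stronger aboutness. Subjects and counts are invented.
      import math

      def aboutness_proxy(word_subject_counts, background_counts):
          # KL divergence of the word's subject distribution from the background one;
          # near 0 means the word is spread like a random word (low aboutness).
          total_w = sum(word_subject_counts.values())
          total_b = sum(background_counts.values())
          kl = 0.0
          for subject, count in word_subject_counts.items():
              p = count / total_w
              q = background_counts[subject] / total_b
              kl += p * math.log(p / q)
          return kl

      # Invented counts of how often a title word co-occurs with each subject heading.
      background = {"Medicine": 1000, "Music": 1000, "Chemistry": 1000, "History": 1000}
      print(aboutness_proxy({"Medicine": 5, "Music": 5, "Chemistry": 5, "History": 5}, background))  # ~0.0, like "in"
      print(aboutness_proxy({"Chemistry": 19, "Medicine": 1}, background))                           # large, like "carbonate"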
  17. Cota, R.G.; Ferreira, A.A.; Nascimento, C.; Gonçalves, M.A.; Laender, A.H.F.: ¬An unsupervised heuristic-based hierarchical method for name disambiguation in bibliographic citations (2010) 0.06
    0.056281447 = product of:
      0.16884434 = sum of:
        0.16884434 = weight(_text_:title in 3986) [ClassicSimilarity], result of:
          0.16884434 = score(doc=3986,freq=8.0), product of:
            0.27436262 = queryWeight, product of:
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.049257044 = queryNorm
            0.6154058 = fieldWeight in 3986, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.570018 = idf(docFreq=457, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3986)
      0.33333334 = coord(1/3)
    
    Abstract
    Name ambiguity in the context of bibliographic citations is a difficult problem which, despite the many efforts of the research community, still has a lot of room for improvement. In this article, we present a heuristic-based hierarchical clustering method to deal with this problem. The method successively fuses clusters of citations of similar author names based on several heuristics and similarity measures on the components of the citations (e.g., coauthor names, work title, and publication venue title). During the disambiguation task, the information about fused clusters is aggregated, providing more information for the next round of fusion. In order to demonstrate the effectiveness of our method, we ran a series of experiments on two different collections extracted from real-world digital libraries and compared it, under two metrics, with four representative methods described in the literature. We present comparisons of results using each considered attribute separately (i.e., coauthor names, work title, and publication venue title) with the author name attribute and using all attributes together. These results show that our unsupervised method, when using all attributes, performs competitively against all other methods under both metrics, losing only in one case against a supervised method, whose result was very close to ours. Moreover, such results are achieved without the burden of any training and without using any privileged information such as knowing a priori the correct number of clusters.
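    Code sketch
    A compressed sketch of the fusion idea in the record above, under simplifying assumptions: start with one cluster per citation and repeatedly fuse clusters whose author names are compatible and whose pooled title and venue words overlap enough, so that later rounds see the aggregated evidence. The compatibility test, similarity measure, and threshold are placeholders, not those of the original method.
      def tokens(citation):
          # Pool the title and venue words of a citation record.
          return set((citation["title"] + " " + citation["venue"]).lower().split())

      def compatible(a, b):
          # Same surname and same first initial (a deliberately crude name heuristic).
          return a["surname"] == b["surname"] and a["initial"] == b["initial"]

      def jaccard(x, y):
          return len(x & y) / len(x | y) if x | y else 0.0

      def fuse(citations, threshold=0.2):
          clusters = [[c] for c in citations]  # one cluster per citation to start
          merged = True
          while merged:
              merged = False
              for i in range(len(clusters)):
                  for j in range(i + 1, len(clusters)):
                      if not all(compatible(a, b) for a in clusters[i] for b in clusters[j]):
                          continue
                      pool_i = set().union(*(tokens(c) for c in clusters[i]))
                      pool_j = set().union(*(tokens(c) for c in clusters[j]))
                      if jaccard(pool_i, pool_j) >= threshold:
                          clusters[i] += clusters.pop(j)  # fuse; later rounds see the pooled evidence
                          merged = True
                          break
                  if merged:
                      break
          return clusters

      citations = [
          {"surname": "silva", "initial": "a", "title": "Mining web query logs", "venue": "WSDM"},
          {"surname": "silva", "initial": "a", "title": "Query log mining at web scale", "venue": "WSDM"},
          {"surname": "silva", "initial": "a", "title": "Protein folding simulation", "venue": "Bioinformatics"},
      ]
      for cluster in fuse(citations):
          print([c["title"] for c in cluster])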
  18. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.05
    0.052155495 = product of:
      0.15646648 = sum of:
        0.15646648 = product of:
          0.46939945 = sum of:
            0.46939945 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.46939945 = score(doc=973,freq=2.0), product of:
                0.41760176 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049257044 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Vgl.: http://creativechoice.org/doc/HansJonas.pdf
  19. Koster, L.; Heesakkers, D.: ¬The mobile library catalogue (2013) 0.05
    0.051375896 = product of:
      0.15412769 = sum of:
        0.15412769 = product of:
          0.30825537 = sum of:
            0.30825537 = weight(_text_:catalogue in 1479) [ClassicSimilarity], result of:
              0.30825537 = score(doc=1479,freq=6.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                1.2948457 = fieldWeight in 1479, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1479)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Catalogue 2.0: the future of the library catalogue. Ed. by Sally Chambers
  20. Callewaert, R.: FRBRizing your catalogue (2013) 0.05
    0.051375896 = product of:
      0.15412769 = sum of:
        0.15412769 = product of:
          0.30825537 = sum of:
            0.30825537 = weight(_text_:catalogue in 1480) [ClassicSimilarity], result of:
              0.30825537 = score(doc=1480,freq=6.0), product of:
                0.23806341 = queryWeight, product of:
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.049257044 = queryNorm
                1.2948457 = fieldWeight in 1480, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.8330836 = idf(docFreq=956, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1480)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Catalogue 2.0: the future of the library catalogue. Ed. by Sally Chambers

Languages

  • e 574
  • d 182
  • a 1
  • hu 1
  • i 1

Types

  • a 663
  • el 72
  • m 57
  • s 17
  • x 13
  • r 7
  • b 5
  • i 2
  • n 1
  • z 1
