Search (177 results, page 1 of 9)

  • Active filter: year_i:[2020 TO 2030} (Lucene range syntax: "[" marks an inclusive lower bound, "}" an exclusive upper bound)
  1. Kusber, E.: Ständige Aktualisierung von ASB und KAB : Zehn Jahre Systematik-Kooperation / Bibliotheken können jederzeit anfragen (2020) 0.10
    0.0968663 = product of:
      0.2905989 = sum of:
        0.2905989 = weight(_text_:systematik in 1779) [ClassicSimilarity], result of:
          0.2905989 = score(doc=1779,freq=2.0), product of:
            0.355158 = queryWeight, product of:
              6.1714344 = idf(docFreq=250, maxDocs=44218)
              0.057548698 = queryNorm
            0.8182243 = fieldWeight in 1779, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1714344 = idf(docFreq=250, maxDocs=44218)
              0.09375 = fieldNorm(doc=1779)
      0.33333334 = coord(1/3)
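
     The tree above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), fieldWeight = tf * idf * fieldNorm, and the final score is queryWeight * fieldWeight scaled by the coord factor (here 1/3, since one of three query clauses matched). As a minimal sketch, using only the numbers printed in the tree (queryNorm is query-dependent and taken as given, and the variable names are ours), the score of result 1 can be recomputed in Python:

     import math

     # Inputs copied from the explain tree of result 1 above.
     max_docs = 44218
     doc_freq = 250            # documents containing "systematik"
     freq = 2.0                # occurrences of the term in this field
     field_norm = 0.09375      # length normalization stored at index time
     query_norm = 0.057548698  # query-dependent; taken as given
     coord = 1 / 3             # one of three query clauses matched

     idf = 1 + math.log(max_docs / (doc_freq + 1))  # 6.1714344
     tf = math.sqrt(freq)                           # 1.4142135
     query_weight = idf * query_norm                # 0.355158
     field_weight = tf * idf * field_norm           # 0.8182243
     score = query_weight * field_weight * coord    # 0.0968663

     print(f"score = {score:.7f}")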
    
  2. Becker, H.-G.: ¬Der Katalog als virtueller Navigationsraum (2020) 0.08
    0.083888695 = product of:
      0.25166607 = sum of:
        0.25166607 = weight(_text_:systematik in 49) [ClassicSimilarity], result of:
          0.25166607 = score(doc=49,freq=6.0), product of:
            0.355158 = queryWeight, product of:
              6.1714344 = idf(docFreq=250, maxDocs=44218)
              0.057548698 = queryNorm
            0.7086031 = fieldWeight in 49, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              6.1714344 = idf(docFreq=250, maxDocs=44218)
              0.046875 = fieldNorm(doc=49)
      0.33333334 = coord(1/3)
    
    Abstract
     The catalog of the Dortmund University Library is not merely an availability space for all relevant information; beyond that, it is intended to be a navigation space for contemporary subject searching. The task was therefore to find an automated solution, with maximum coverage of the library's own holdings, that provides a recognized classification for as many subjects as possible. Using the CultureGraph data published under an open license by the Deutsche Nationalbibliothek, a navigable classification based on the Regensburger Verbundklassifikation was developed, through which both the printed and the electronically available holdings of the University Library (UB) can be accessed. Furthermore, a direct integration into the discovery system was implemented, in which the classification can be combined with other navigators and search filters. The resulting search instrument means that the UB Dortmund will in future be able to dispense with systematic shelf arrangement.
  3. Colombi, C.: Bibliothekarische Fachsystematiken am Deutschen Archäologischen Institut : 180 Jahre Wissensordnung (2023) 0.06
    0.056505345 = product of:
      0.16951603 = sum of:
        0.16951603 = weight(_text_:systematik in 999) [ClassicSimilarity], result of:
          0.16951603 = score(doc=999,freq=2.0), product of:
            0.355158 = queryWeight, product of:
              6.1714344 = idf(docFreq=250, maxDocs=44218)
              0.057548698 = queryNorm
            0.4772975 = fieldWeight in 999, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.1714344 = idf(docFreq=250, maxDocs=44218)
              0.0546875 = fieldNorm(doc=999)
      0.33333334 = coord(1/3)
    
    Abstract
     Since 1836, library titles at the Rome department of the Deutsches Archäologisches Institut have been subject indexed. The classification schemes developed for this purpose bear witness to the emergence of Classical Archaeology as a discipline and to the history of library classification. The systematic catalogs and the subject bibliographies of the Deutsches Archäologisches Institut are presented and their genesis contextualized. The differences between the individual catalogs and the bibliographies serve as a starting point for examining the strategies by which the classification was adapted to advances in research and technology. A comparison with the volume of publications also permits observations on the changes in the classification scheme.
  4. Dunn, H.; Bourcier, P.: Nomenclature for museum cataloging (2020) 0.05
    0.052502852 = product of:
      0.078754276 = sum of:
        0.050804675 = product of:
          0.15241402 = sum of:
            0.15241402 = weight(_text_:objects in 5483) [ClassicSimilarity], result of:
              0.15241402 = score(doc=5483,freq=4.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.49828792 = fieldWeight in 5483, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5483)
          0.33333334 = coord(1/3)
        0.027949603 = product of:
          0.055899207 = sum of:
            0.055899207 = weight(_text_:indexing in 5483) [ClassicSimilarity], result of:
              0.055899207 = score(doc=5483,freq=2.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.2537542 = fieldWeight in 5483, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5483)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     We present an overview of Nomenclature's history, characteristics, structure, use, management, development process, limitations, and future. Nomenclature for Museum Cataloging is a bilingual (English/French) structured and controlled list of object terms organized in a classification system to provide a basis for indexing and cataloging collections of human-made objects. It includes illustrations and bibliographic references as well as a user guide. It is used in the creation and management of object records in human history collections within museums and other organizations, and it focuses on objects relevant to North American history and culture. First published in 1978, Nomenclature is the most extensively used museum classification and controlled vocabulary for historical and ethnological collections in North America and thereby represents a de facto standard in the field. An online reference version of Nomenclature was made available in 2018, and it will be available under open license in 2020.
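
     Result 4's tree, unlike result 1's, combines two matching clauses ("objects" and "indexing"): each clause's raw weight is first scaled by its inner coord factor, the clause scores are summed, and the sum is scaled by the outer coord(2/3), since two of three top-level query clauses matched. A minimal sketch of that combination, again using only the numbers printed in the tree (names ours):

     # Clause weights copied from the explain tree of result 4 above.
     objects_weight = 0.15241402    # weight(_text_:objects)
     indexing_weight = 0.055899207  # weight(_text_:indexing)

     objects_score = objects_weight * (1 / 3)    # inner coord(1/3)
     indexing_score = indexing_weight * (1 / 2)  # inner coord(1/2)

     score = (objects_score + indexing_score) * (2 / 3)  # outer coord(2/3)
     print(f"score = {score:.9f}")  # 0.052502852, as listed above
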
  5. Golub, K.; Tyrkkö, J.; Hansson, J.; Ahlström, I.: Subject indexing in humanities : a comparison between a local university repository and an international bibliographic service (2020) 0.05
    0.04685248 = product of:
      0.07027872 = sum of:
        0.029936943 = product of:
          0.089810826 = sum of:
            0.089810826 = weight(_text_:objects in 5982) [ClassicSimilarity], result of:
              0.089810826 = score(doc=5982,freq=2.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.29361898 = fieldWeight in 5982, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5982)
          0.33333334 = coord(1/3)
        0.040341776 = product of:
          0.08068355 = sum of:
            0.08068355 = weight(_text_:indexing in 5982) [ClassicSimilarity], result of:
              0.08068355 = score(doc=5982,freq=6.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.3662626 = fieldWeight in 5982, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5982)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     As the humanities develop in the realm of increasingly more pronounced digital scholarship, it is important to provide quality subject access to a vast range of heterogeneous information objects in digital services. The study aims to paint a representative picture of the current state of affairs of the use of subject index terms in humanities journal articles, with particular reference to the well-established subject access needs of humanities researchers, with the purpose of identifying which improvements are needed in this context.
     Design/methodology/approach: The comparison of subject metadata on a sample of 649 peer-reviewed journal articles from across the humanities is conducted in a university repository, against Scopus, the former reflecting local and national policies and the latter being the most comprehensive international abstract and citation database of research output.
     Findings: The study shows that established bibliographic objectives to ensure subject access for humanities journal articles are not supported in either the world's largest commercial abstract and citation database Scopus or the local repository of a public university in Sweden. The indexing policies in the two services do not seem to address the needs of humanities scholars for highly granular subject index terms with appropriate facets; no controlled vocabularies for any humanities discipline are used whatsoever.
     Originality/value: In all, not much has changed since the 1990s, when indexing for the humanities was shown to lag behind the sciences. The community of researchers and information professionals, today working together on digital humanities projects, as well as interdisciplinary research teams, should demand that their subject access needs be fulfilled, especially in commercial services like Scopus and discovery services.
  6. Cho, H.; Disher, T.; Lee, W.-C.; Keating, S.A.; Lee, J.H.: Facet analysis of anime genres : the challenges of defining genre information for popular cultural objects (2020) 0.04
    0.042582624 = product of:
      0.06387393 = sum of:
        0.03592433 = product of:
          0.10777299 = sum of:
            0.10777299 = weight(_text_:objects in 5730) [ClassicSimilarity], result of:
              0.10777299 = score(doc=5730,freq=2.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.35234275 = fieldWeight in 5730, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5730)
          0.33333334 = coord(1/3)
        0.027949603 = product of:
          0.055899207 = sum of:
            0.055899207 = weight(_text_:indexing in 5730) [ClassicSimilarity], result of:
              0.055899207 = score(doc=5730,freq=2.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.2537542 = fieldWeight in 5730, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5730)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Anime, as a growing form of multimedia, needs a better and more thorough organization of its myriad unique terminologies. Existing studies show patrons' desire to search for and get recommendations for anime. However, due to inadequate indexing and often confusing or inaccurate usage of terms, searching and acquiring recommendations remain challenging. Our research seeks to close the gap and make discovery and recommendations more viable. In this study, we conducted a facet analysis of anime genre terms currently used in thirty-six anime-related English-language databases and websites. Using a card sorting method with an inductive approach on the 1,597 terms collected, we identified and defined nine facets and 153 foci terms that describe different genres of anime. The identified terms can be implemented in different organizational systems, including library catalogs, recommendation systems, and online databases, to improve genre definitions and search experiences.
  7. Marcondes, C.H.: Towards a vocabulary to implement culturally relevant relationships between digital collections in heritage institutions (2020) 0.04
    0.041219912 = product of:
      0.061829865 = sum of:
        0.04233723 = product of:
          0.12701169 = sum of:
            0.12701169 = weight(_text_:objects in 5757) [ClassicSimilarity], result of:
              0.12701169 = score(doc=5757,freq=4.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.41523993 = fieldWeight in 5757, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5757)
          0.33333334 = coord(1/3)
        0.019492636 = product of:
          0.03898527 = sum of:
            0.03898527 = weight(_text_:22 in 5757) [ClassicSimilarity], result of:
              0.03898527 = score(doc=5757,freq=2.0), product of:
                0.20152573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.057548698 = queryNorm
                0.19345059 = fieldWeight in 5757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5757)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     Cultural heritage institutions are publishing their digital collections over the web as LOD. This is a new step in the patrimonialization and curatorial processes developed by such institutions. Many of these collections are thematically superimposed and complementary. Frequently, objects in these collections present culturally relevant relationships, such as a book about a painting, or a draft or sketch of a famous painting, etc. LOD technology enables such heritage records to be interlinked, achieving interoperability and adding value to digital collections, thus empowering heritage institutions. An aim of this research is characterizing such culturally relevant relationships and organizing them in a vocabulary. Use cases or examples of relationships between objects suggested by curators or mentioned in the literature and in conceptual models such as FRBR/LRM, CIDOC CRM and RiC-CM were collected and used as examples or inspiration for culturally relevant relationships. The relationships identified are collated and compared to identify those with the same or similar meaning, then synthesized and normalized. A set of thirty-three culturally relevant relationships is identified and formalized as a LOD property vocabulary to be used by digital curators to interlink digital collections (see the sketch after this entry). The results presented are provisional and a starting point to be discussed, tested, and enhanced.
    Date
    4. 3.2020 14:22:41
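
     To make the idea of a LOD property vocabulary concrete: the sketch below shows how a "book about a painting" link could be expressed with Python's rdflib. The namespace, resource URIs, and property name are invented for illustration; they are not taken from the paper's thirty-three relationships.

     from rdflib import Graph, Literal, Namespace, URIRef
     from rdflib.namespace import RDFS

     EX = Namespace("http://example.org/relations/")  # hypothetical vocabulary
     g = Graph()
     g.bind("ex", EX)

     book = URIRef("http://example.org/library/book-42")        # a library record
     painting = URIRef("http://example.org/museum/painting-7")  # a museum record

     # One culturally relevant relationship, modeled as a LOD property:
     g.add((book, EX.isAbout, painting))
     g.add((EX.isAbout, RDFS.label, Literal("is about", lang="en")))

     print(g.serialize(format="turtle"))
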
  8. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.04
    0.039931707 = product of:
      0.119795114 = sum of:
        0.119795114 = sum of:
          0.06521574 = weight(_text_:indexing in 40) [ClassicSimilarity], result of:
            0.06521574 = score(doc=40,freq=2.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.29604656 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.054579377 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.054579377 = score(doc=40,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.33333334 = coord(1/3)
    
    Date
    17.11.2020 12:22:59
    Theme
    Citation indexing
  9. Siqueira, J.; Martins, D.L.: Workflow models for aggregating cultural heritage data on the web : a systematic literature review (2022) 0.04
    0.03548552 = product of:
      0.053228278 = sum of:
        0.029936943 = product of:
          0.089810826 = sum of:
            0.089810826 = weight(_text_:objects in 464) [ClassicSimilarity], result of:
              0.089810826 = score(doc=464,freq=2.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.29361898 = fieldWeight in 464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=464)
          0.33333334 = coord(1/3)
        0.023291335 = product of:
          0.04658267 = sum of:
            0.04658267 = weight(_text_:indexing in 464) [ClassicSimilarity], result of:
              0.04658267 = score(doc=464,freq=2.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.21146181 = fieldWeight in 464, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=464)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
     In recent years, different cultural institutions have made efforts to spread culture through the construction of a unique search interface that integrates their digital objects and facilitates data retrieval for lay users. However, integrating cultural data is not a trivial task; therefore, this work performs a systematic literature review on data aggregation workflows, in order to answer five questions: What are the projects? What are the planned steps? Which technologies are used? Are the steps performed manually, automatically, or semi-automatically? Which perform semantic search? The searches were carried out in three databases: Networked Digital Library of Theses and Dissertations, Scopus, and Web of Science. In Q01, 12 projects were selected. In Q02, 9 stages were identified: Harvesting, Ingestion, Mapping, Indexing, Storing, Monitoring, Enriching, Displaying, and Publishing LOD. In Q03, 19 different technologies were found. In Q04, we identified that most of the solutions are semi-automatic and, in Q05, that most of them perform a semantic search. The analysis of the workflows allowed us to identify that there is no consensus regarding the stages, their nomenclatures, and technologies, and that the discussions presented are often superficial. But it allowed us to identify the main steps for the implementation of the aggregation of cultural data.
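
     The nine stages identified in Q02 amount to a pipeline of record transformations. As a toy sketch under that reading (every stage body below is a hypothetical placeholder, not any reviewed project's code), such a workflow can be composed like this:

     from functools import reduce

     def harvesting(recs):      # fetch source records, e.g. over OAI-PMH
         return recs + [{"id": "obj-1", "title": "Sample object"}]

     def ingestion(recs):       # keep only well-formed records
         return [r for r in recs if "id" in r]

     def mapping(recs):         # map local fields onto a shared model
         return [{"id": r["id"], "dc:title": r.get("title", "")} for r in recs]

     def indexing(recs):        # build toy search tokens
         return [{**r, "_tokens": r["dc:title"].lower().split()} for r in recs]

     def storing(recs):         # persist (no-op here)
         return recs

     def monitoring(recs):      # report basic counts at a checkpoint
         print(f"{len(recs)} records in pipeline")
         return recs

     def enriching(recs):       # add context, e.g. from external vocabularies
         return [{**r, "enriched": True} for r in recs]

     def displaying(recs):      # shape records for the unified search interface
         return recs

     def publishing_lod(recs):  # expose as linked open data (no-op here)
         return recs

     stages = [harvesting, ingestion, mapping, indexing, storing,
               monitoring, enriching, displaying, publishing_lod]
     print(reduce(lambda recs, stage: stage(recs), stages, []))
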
  10. Rae, A.R.; Mork, J.G.; Demner-Fushman, D.: ¬The National Library of Medicine indexer assignment dataset : a new large-scale dataset for reviewer assignment research (2023) 0.03
    0.034954377 = product of:
      0.10486312 = sum of:
        0.10486312 = sum of:
          0.06587785 = weight(_text_:indexing in 885) [ClassicSimilarity], result of:
            0.06587785 = score(doc=885,freq=4.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.29905218 = fieldWeight in 885, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0390625 = fieldNorm(doc=885)
          0.03898527 = weight(_text_:22 in 885) [ClassicSimilarity], result of:
            0.03898527 = score(doc=885,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.19345059 = fieldWeight in 885, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=885)
      0.33333334 = coord(1/3)
    
    Abstract
    MEDLINE is the National Library of Medicine's (NLM) journal citation database. It contains over 28 million references to biomedical and life science journal articles, and a key feature of the database is that all articles are indexed with NLM Medical Subject Headings (MeSH). The library employs a team of MeSH indexers, and in recent years they have been asked to index close to 1 million articles per year in order to keep MEDLINE up to date. An important part of the MEDLINE indexing process is the assignment of articles to indexers. High quality and timely indexing is only possible when articles are assigned to indexers with suitable expertise. This article introduces the NLM indexer assignment dataset: a large dataset of 4.2 million indexer article assignments for articles indexed between 2011 and 2019. The dataset is shown to be a valuable testbed for expert matching and assignment algorithms, and indexer article assignment is also found to be useful domain-adaptive pre-training for the closely related task of reviewer assignment.
    Date
    22. 1.2023 18:49:49
  11. Asubiaro, T.V.; Onaolapo, S.: ¬A comparative study of the coverage of African journals in Web of Science, Scopus, and CrossRef (2023) 0.03
    0.034954377 = product of:
      0.10486312 = sum of:
        0.10486312 = sum of:
          0.06587785 = weight(_text_:indexing in 992) [ClassicSimilarity], result of:
            0.06587785 = score(doc=992,freq=4.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.29905218 = fieldWeight in 992, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0390625 = fieldNorm(doc=992)
          0.03898527 = weight(_text_:22 in 992) [ClassicSimilarity], result of:
            0.03898527 = score(doc=992,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.19345059 = fieldWeight in 992, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=992)
      0.33333334 = coord(1/3)
    
    Abstract
     This is the first study to evaluate the coverage of journals from Africa in Web of Science, Scopus, and CrossRef. A list of active journals published in each of the 55 African countries was compiled from Ulrich's periodicals directory and the African Journals Online (AJOL) website. Journal master lists for Web of Science, Scopus, and CrossRef were searched for the African journals. A total of 2,229 unique active African journals were identified from Ulrich (N = 2,117, 95.0%) and AJOL (N = 243, 10.9%) after removing duplicates. The volume of African journals in Web of Science and Scopus databases is 7.4% (N = 166) and 7.8% (N = 174), respectively, compared to the 45.6% (N = 1,017) covered in CrossRef. While making up only 17% of all the African journals, South African journals had the best coverage in the two most authoritative databases, accounting for 73.5% and 62.1% of all the African journals in Web of Science and Scopus, respectively. In contrast, Nigeria published 44.5% of all the African journals. The distribution of the African journals is biased in favor of Medical, Life and Health Sciences and Humanities and the Arts in the three databases. The low representation of African journals in CrossRef, a free indexing infrastructure that could be harnessed for building an African-centric research indexing database, is concerning.
    Date
    22. 6.2023 14:09:06
  12. Cheti, A.; Viti, E.: Functionality and merits of a faceted thesaurus : the case of the Nuovo soggettario (2023) 0.03
    0.034227178 = product of:
      0.10268153 = sum of:
        0.10268153 = sum of:
          0.055899207 = weight(_text_:indexing in 1181) [ClassicSimilarity], result of:
            0.055899207 = score(doc=1181,freq=2.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.2537542 = fieldWeight in 1181, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.046875 = fieldNorm(doc=1181)
          0.046782322 = weight(_text_:22 in 1181) [ClassicSimilarity], result of:
            0.046782322 = score(doc=1181,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.23214069 = fieldWeight in 1181, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1181)
      0.33333334 = coord(1/3)
    
    Abstract
     The Nuovo soggettario, the official Italian subject indexing system edited by the National Central Library of Florence, is made up of interactive components, the core of which is a general thesaurus and some rules of a conventional syntax for subject string construction. The Nuovo soggettario Thesaurus complies with ISO 25964:2011-2013, IFLA LRM, and the FAIR principles (findability, accessibility, interoperability, and reusability). Its open data are available in the Zthes, MARC21, and SKOS formats and allow for interoperability with library, archive, and museum databases. The Thesaurus's macrostructure is organized into four fundamental macro-categories, thirteen categories, and facets. The facets allow for the orderly development of hierarchies, thereby limiting polyhierarchies and promoting the grouping of homogeneous concepts. This paper addresses the main features and peculiarities which have characterized the consistent development of this categorical structure and its effects on the syntactic sphere in a predominantly pre-coordinated usage context.
    Date
    26.11.2023 18:59:22
  13. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.03
    0.03046753 = product of:
      0.09140259 = sum of:
        0.09140259 = product of:
          0.27420777 = sum of:
            0.27420777 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.27420777 = score(doc=862,freq=2.0), product of:
                0.4878985 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057548698 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
     https://arxiv.org/abs/2212.06721
  14. Hjoerland, B.: Table of contents (ToC) (2022) 0.03
    0.028522646 = product of:
      0.08556794 = sum of:
        0.08556794 = sum of:
          0.04658267 = weight(_text_:indexing in 1096) [ClassicSimilarity], result of:
            0.04658267 = score(doc=1096,freq=2.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.21146181 = fieldWeight in 1096, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1096)
          0.03898527 = weight(_text_:22 in 1096) [ClassicSimilarity], result of:
            0.03898527 = score(doc=1096,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.19345059 = fieldWeight in 1096, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1096)
      0.33333334 = coord(1/3)
    
    Abstract
     A table of contents (ToC) is a kind of document representation as well as a paratext and a kind of finding device for the document it represents. ToCs are very common in books and some other kinds of documents, but not in all kinds. This article discusses the definition and functions of ToCs, normative guidelines for their design, and the history and forms of ToCs in different kinds of documents and media. A main part of the article is about the role of ToCs in information searching, in current awareness services, and as items added to bibliographical records. The introduction and the conclusion focus on the core theoretical issues concerning ToCs: should they be document-oriented or request-oriented, neutral or policy-oriented, objective or subjective? It is concluded that because of the special functions of ToCs, the arguments for the request-oriented (policy-oriented, subjective) view are weaker than they are in relation to indexing and knowledge organization in general. Apart from the level of granularity, the evaluation of a ToC is difficult to separate from the evaluation of the structuring and naming of the elements of the structure of the document it represents.
    Date
    18.11.2023 13:47:22
  15. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    0.02538961 = product of:
      0.07616883 = sum of:
        0.07616883 = product of:
          0.22850648 = sum of:
            0.22850648 = weight(_text_:3a in 5669) [ClassicSimilarity], result of:
              0.22850648 = score(doc=5669,freq=2.0), product of:
                0.4878985 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057548698 = queryNorm
                0.46834838 = fieldWeight in 5669, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5669)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  16. Gabler, S.: Vergabe von DDC-Sachgruppen mittels eines Schlagwort-Thesaurus (2021) 0.03
    0.02538961 = product of:
      0.07616883 = sum of:
        0.07616883 = product of:
          0.22850648 = sum of:
            0.22850648 = weight(_text_:3a in 1000) [ClassicSimilarity], result of:
              0.22850648 = score(doc=1000,freq=2.0), product of:
                0.4878985 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.057548698 = queryNorm
                0.46834838 = fieldWeight in 1000, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1000)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
     Master thesis, Master of Science (Library and Information Studies) (MSc), Universität Wien. Advisor: Christoph Steiner. Cf.: https://www.researchgate.net/publication/371680244_Vergabe_von_DDC-Sachgruppen_mittels_eines_Schlagwort-Thesaurus. DOI: 10.25365/thesis.70030. Cf. also the accompanying presentation at: https://wiki.dnb.de/download/attachments/252121510/DA3%20Workshop-Gabler.pdf?version=1&modificationDate=1671093170000&api=v2.
  17. Bedford, D.: Knowledge architectures : structures and semantics (2021) 0.02
    0.022818118 = product of:
      0.068454355 = sum of:
        0.068454355 = sum of:
          0.037266135 = weight(_text_:indexing in 566) [ClassicSimilarity], result of:
            0.037266135 = score(doc=566,freq=2.0), product of:
              0.2202888 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.057548698 = queryNorm
              0.16916946 = fieldWeight in 566, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
          0.031188216 = weight(_text_:22 in 566) [ClassicSimilarity], result of:
            0.031188216 = score(doc=566,freq=2.0), product of:
              0.20152573 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.057548698 = queryNorm
              0.15476047 = fieldWeight in 566, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=566)
      0.33333334 = coord(1/3)
    
    Content
    Section 1 Context and purpose of knowledge architecture -- 1 Making the case for knowledge architecture -- 2 The landscape of knowledge assets -- 3 Knowledge architecture and design -- 4 Knowledge architecture reference model -- 5 Knowledge architecture segments -- Section 2 Designing for availability -- 6 Knowledge object modeling -- 7 Knowledge structures for encoding, formatting, and packaging -- 8 Functional architecture for identification and distinction -- 9 Functional architectures for knowledge asset disposition and destruction -- 10 Functional architecture designs for knowledge preservation and conservation -- Section 3 Designing for accessibility -- 11 Functional architectures for knowledge seeking and discovery -- 12 Functional architecture for knowledge search -- 13 Functional architecture for knowledge categorization -- 14 Functional architectures for indexing and keywording -- 15 Functional architecture for knowledge semantics -- 16 Functional architecture for knowledge abstraction and surrogation -- Section 4 Functional architectures to support knowledge consumption -- 17 Functional architecture for knowledge augmentation, derivation, and synthesis -- 18 Functional architecture to manage risk and harm -- 19 Functional architectures for knowledge authentication and provenance -- 20 Functional architectures for securing knowledge assets -- 21 Functional architectures for authorization and asset management -- Section 5 Pulling it all together - the big picture knowledge architecture -- 22 Functional architecture for knowledge metadata and metainformation -- 23 The whole knowledge architecture - pulling it all together
  18. Chou, C.; Chu, T.: ¬An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.02
    0.02173858 = product of:
      0.06521574 = sum of:
        0.06521574 = product of:
          0.13043147 = sum of:
            0.13043147 = weight(_text_:indexing in 1139) [ClassicSimilarity], result of:
              0.13043147 = score(doc=1139,freq=8.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.5920931 = fieldWeight in 1139, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1139)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
     In light of AI (artificial intelligence) and NLP (natural language processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used for machine-assisted indexing in the Project Gutenberg collection, by suggesting Library of Congress Subject Headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
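
     As a generic illustration of machine-assisted subject suggestion (a stand-in sketch, not the study's actual BERT configuration or data; the model choice and the sample headings are assumptions), a zero-shot classifier from Hugging Face can rank candidate subject headings for a text:

     from transformers import pipeline

     # NLI-based zero-shot classification as a stand-in for the study's setup.
     classifier = pipeline("zero-shot-classification",
                           model="facebook/bart-large-mnli")

     text = ("A treatise on the cultivation of apple orchards, "
             "with notes on grafting and pruning.")
     candidate_headings = ["Fruit-culture", "Apples", "Naval history"]

     result = classifier(text, candidate_labels=candidate_headings)
     for label, score in zip(result["labels"], result["scores"]):
         print(f"{label}: {score:.3f}")  # headings ranked by model confidence
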
  19. Manzoni, L.: Nuovo Soggettario and semantic indexing of cartographic resources in Italy : an exploratory study (2022) 0.02
    0.021515613 = product of:
      0.06454684 = sum of:
        0.06454684 = product of:
          0.12909368 = sum of:
            0.12909368 = weight(_text_:indexing in 1138) [ClassicSimilarity], result of:
              0.12909368 = score(doc=1138,freq=6.0), product of:
                0.2202888 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.057548698 = queryNorm
                0.5860202 = fieldWeight in 1138, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1138)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The paper focuses on the potential use of Nuovo soggettario, the semantic indexing tool adopted by the National Central Library of Florence (Biblioteca nazionale centrale di Firenze), for indexing cartographic resources. Particular attention is paid to the treatment of place names, the use of formal subjects, and the different ways of constructing subject strings for general and thematic maps.
  20. Koster, L.: Persistent identifiers for heritage objects (2020) 0.02
    0.019957963 = product of:
      0.059873886 = sum of:
        0.059873886 = product of:
          0.17962165 = sum of:
            0.17962165 = weight(_text_:objects in 5718) [ClassicSimilarity], result of:
              0.17962165 = score(doc=5718,freq=8.0), product of:
                0.30587542 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.057548698 = queryNorm
                0.58723795 = fieldWeight in 5718, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5718)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
     Persistent identifiers (PIDs) are essential for getting access to and referring to library, archive and museum (LAM) collection objects in a sustainable and unambiguous way, both internally and externally. Heritage institutions need a universal policy for the use of PIDs in order to have an efficient digital infrastructure at their disposal and to achieve optimal interoperability, leading to open data, open collections and efficient resource management. Here the discussion is limited to PIDs that institutions can assign to objects they own or administer themselves. PIDs for people, subjects etc. can be used by heritage institutions, but are generally managed by other parties. The first part of this article consists of a general theoretical description of persistent identifiers. First of all, I discuss the questions of what persistent identifiers are and what they are not, and what is needed to administer and use them. The most commonly used existing PID systems are briefly characterized. Then I discuss the types of objects PIDs can be assigned to. This section concludes with an overview of the requirements that apply if PIDs are also to be used for linked data. The second part examines current infrastructural practices, and existing PID systems and their advantages and shortcomings. Based on these practical issues and the pros and cons of existing PID systems, a list of requirements for PID systems is presented, which is used to address a number of practical considerations. This section concludes with a number of recommendations.

Languages

  • e 143
  • d 32
  • pt 1

Types

  • a 168
  • el 27
  • m 4
  • p 2
  • x 1