Search (89 results, page 1 of 5)

  • type_ss:"a"
  • type_ss:"el"
  1. Tay, A.: The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.08
    0.08146443 = product of:
      0.16292886 = sum of:
        0.16292886 = sum of:
          0.114715785 = weight(_text_:ii in 40) [ClassicSimilarity], result of:
            0.114715785 = score(doc=40,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.41776034 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.048213083 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.048213083 = score(doc=40,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.5 = coord(1/2)
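
    The explain tree above is standard Lucene ClassicSimilarity output, and its arithmetic can be checked by hand: each term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(1/2) halves the sum because only one of the query's two top-level clauses matched. A minimal sketch in plain Python, with the constants copied from the tree above, reproduces the 0.08146443 score:

    ```python
    from math import sqrt

    # Constants copied from the explain tree above; queryNorm is
    # shared by every term of the query.
    QUERY_NORM = 0.050836053

    def term_score(idf, freq, field_norm):
        """One term's ClassicSimilarity contribution: queryWeight * fieldWeight."""
        query_weight = idf * QUERY_NORM                # idf * queryNorm
        field_weight = sqrt(freq) * idf * field_norm   # tf(freq) = sqrt(freq)
        return query_weight * field_weight

    s_ii = term_score(5.4016213, 2.0, 0.0546875)  # weight(_text_:ii in 40)
    s_22 = term_score(3.5018296, 2.0, 0.0546875)  # weight(_text_:22 in 40)

    # coord(1/2): only one of the query's two top-level clauses matched.
    print(round((s_ii + s_22) * 0.5, 8))  # 0.08146443
    ```
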
    
    Abstract
    Conclusion: There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development, and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established incumbents is not easy even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
  2. Jörs, B.: Ein kleines Fach zwischen "Daten" und "Wissen" II : Anmerkungen zum (virtuellen) "16th International Symposium of Information Science" (ISI 2021, Regensburg) (2021) 0.06
    0.05818888 = product of:
      0.11637776 = sum of:
        0.11637776 = sum of:
          0.08193985 = weight(_text_:ii in 330) [ClassicSimilarity], result of:
            0.08193985 = score(doc=330,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.29840025 = fieldWeight in 330, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0390625 = fieldNorm(doc=330)
          0.034437917 = weight(_text_:22 in 330) [ClassicSimilarity], result of:
            0.034437917 = score(doc=330,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.19345059 = fieldWeight in 330, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=330)
      0.5 = coord(1/2)
    
    Abstract
    Only information ethics, information literacy, and information assessment left? Yet it is precisely this sealing-off from other disciplines that deepens the isolation of the "small field" of information science within the scientific community. Its last remaining "independent" peripheral research areas are thus those that Wolf Rauch, as keynote speaker, already named in his introductory, historical-genetic lecture on the state of information science at ISI 2021: "While academic information science (at least in Europe) hardly stands a chance of pushing back to the cutting edge in the development of systems and applications, there remain fields in which its contribution will be urgently required in the coming phase of development: information ethics, information literacy, information assessment" (Wolf Rauch: Was aus der Informationswissenschaft geworden ist; in: Thomas Schmidt; Christian Wolff (Eds.): Information between Data and Knowledge. Schriften zur Informationswissenschaft 74, Regensburg, 2021, pp. 20-22; see also the reception of Rauch's contribution by Johannes Elia Panskus, Was aus der Informationswissenschaft geworden ist. Sie ist in der Realität angekommen, in: Open Password, 17 March 2021). Is that all? Sobering.
  3. Popper, K.R.: Three worlds : the Tanner lecture on human values. Delivered at the University of Michigan, April 7, 1978 (1978) 0.05
    0.05382742 = product of:
      0.10765484 = sum of:
        0.10765484 = product of:
          0.32296452 = sum of:
            0.32296452 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.32296452 = score(doc=230,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  4. Barth, T.: Inverse Panopticon : Digitalisierung & Transhumanismus [Transhumanismus II] (2020) 0.04
    0.040969923 = product of:
      0.08193985 = sum of:
        0.08193985 = product of:
          0.1638797 = sum of:
            0.1638797 = weight(_text_:ii in 5592) [ClassicSimilarity], result of:
              0.1638797 = score(doc=5592,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.5968005 = fieldWeight in 5592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5592)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  5. Francu, V.: Does convenience trump accuracy? : the avatars of the UDC in Romania (2007) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 544) [ClassicSimilarity], result of:
              0.114715785 = score(doc=544,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=544)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper concentrates on some major issues regarding the potential of UDC and the current controversy about the use of UDC in Romania: i) the importance of hierarchical structures in controlled vocabularies, with a direct impact on improved information retrieval given by the browsing function, which enables visualizing the hierarchies in subject areas rather than just locating a particular topic; ii) the lack of popularity of the UDC as an indexing and information retrieval language among its users, be they librarians or end users of library OPACs; and iii) the situation of UDC teachers and teaching in Romanian universities.
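
    Point i) turns on a property worth making concrete: UDC notation is hierarchically expressive, so broader classes can be derived by truncating a class number. A minimal sketch in plain Python (the truncation rule is simplified and ignores facets and special signs; the example notation is illustrative):

    ```python
    def broader_classes(notation: str):
        """Yield successively broader UDC classes by dropping one
        digit at a time (simplified: ignores facets and special signs)."""
        digits = notation.replace(".", "")
        while len(digits) > 1:
            digits = digits[:-1]
            # UDC prints a dot after every third digit.
            yield ".".join(digits[i:i + 3] for i in range(0, len(digits), 3))

    # 821.135.1 (Romanian literature) -> 821.135 -> 821.13 -> ... -> 8
    for broader in broader_classes("821.135.1"):
        print(broader)
    ```
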
  6. Zimmer, D.E.: Das Unbehagen an der Autorität : Erziehung: "Respekt und Liebe schließen sich nicht aus" (II) (1996) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 4452) [ClassicSimilarity], result of:
              0.114715785 = score(doc=4452,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 4452, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4452)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Barth, T.: Digitalisierung und Lobby : Transhumanismus I (2020) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 5665) [ClassicSimilarity], result of:
              0.114715785 = score(doc=5665,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 5665, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5665)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    See the continuation: Barth, T.: Inverse Panopticon: Digitalisierung & Transhumanismus [Transhumanismus II]. [25 January 2020]. At: https://www.heise.de/tp/features/Inverse-Panopticon-Digitalisierung-Transhumanismus-4645668.html?seite=all.
  8. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part II : achieving interoperability at the record and repository levels (2006) 0.03
    0.028384795 = product of:
      0.05676959 = sum of:
        0.05676959 = product of:
          0.11353918 = sum of:
            0.11353918 = weight(_text_:ii in 1177) [ClassicSimilarity], result of:
              0.11353918 = score(doc=1177,freq=6.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.4134755 = fieldWeight in 1177, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1177)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This is the second part of an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and their applications, in order to facilitate the conversion and exchange of metadata and to enable cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level (discussed in Part I of the article), record level (discussed in Part II), and repository level (also discussed in Part II). The results of efforts to improve interoperability may be observed from different perspectives as well, including element-based and value-based approaches. As discussed in Part I of this study, the results of efforts to improve interoperability can be observed at different levels:
    1. Schema level - Efforts are focused on the elements of the schemas and are independent of any applications. The results usually appear as derived element sets or encoded schemas, crosswalks, application profiles, and element registries.
    2. Record level - Efforts are intended to integrate metadata records through the mapping of elements according to their semantic meanings. Common results include converted records and new records resulting from combining values of existing records.
    3. Repository level - With harvested or integrated records from varying sources, efforts at this level focus on mapping value strings associated with particular elements (e.g., terms associated with subject or format elements). The results enable cross-collection searching.
    In the following sections, we will continue to analyze interoperability efforts and methodologies, focusing on the record level and the repository level. It should be noted that the models to be discussed in this article are not always mutually exclusive; sometimes, within a particular project, more than one method may be used.
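
    To make the record-level model concrete: a conversion boils down to applying an element-to-element mapping and merging values that converge on one target element. A minimal sketch, assuming invented element names rather than any actual crosswalk from the study:

    ```python
    # Toy record-level crosswalk conversion; the element names are
    # invented for illustration, not taken from Chan & Zeng's study.
    CROSSWALK = {            # source element -> Dublin Core element
        "Title": "dc:title",
        "Author": "dc:creator",
        "Topic": "dc:subject",
        "DateIssued": "dc:date",
    }

    def convert_record(record):
        """Map a source record onto the target schema, merging values
        when several source elements converge on one target element."""
        target = {}
        for element, values in record.items():
            mapped = CROSSWALK.get(element)
            if mapped is None:
                continue  # no equivalent element; value is lost in conversion
            target.setdefault(mapped, []).extend(values)
        return target

    source = {"Title": ["Three worlds"], "Author": ["Popper, K.R."],
              "Topic": ["epistemology"], "DateIssued": ["1978"]}
    print(convert_record(source))
    ```
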
  9. Aerts, D.; Broekaert, J.; Sozzo, S.; Veloz, T.: Meaning-focused and quantum-inspired information retrieval (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 735) [ClassicSimilarity], result of:
              0.098327816 = score(doc=735,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In recent years, quantum-based methods have shown promise in complementing traditional procedures in information retrieval (IR) and natural language processing (NLP). Inspired by our research on the identification and application of quantum structures in cognition, more specifically our work on the representation of concepts and their combinations, we put forward a 'quantum meaning based' framework for structured query retrieval in text corpora and standardized testing corpora. This scheme for IR rests on two basic notions: (i) 'entities of meaning', e.g., concepts and their combinations, and (ii) traces of such entities of meaning, which is how documents are treated in this approach. The meaning content of these 'entities of meaning' is reconstructed by solving an 'inverse problem' in the quantum formalism, i.e., reconstructing the full states of the entities of meaning from their collapsed states identified as traces in relevant documents. The advantages with respect to traditional approaches, such as Latent Semantic Analysis (LSA), are discussed by means of concrete examples.
  10. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 1142) [ClassicSimilarity], result of:
              0.098327816 = score(doc=1142,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 1142, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1142)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We provide a brief, non-technical introduction to the text mining methodology known as "topic modeling." We summarize the theory and background of the method and discuss what kinds of things are found by topic models. Using a text corpus comprising the eight articles from the special issue of Poetics on the subject of topic models, we run a topic model on these articles, both as a way to introduce the methodology and as a way to summarize some of the ways in which social and cultural scientists are using topic models. We review some of the critiques and debates over the use of the method and, finally, we link these developments back to some of the original innovations in the field of content analysis that were pioneered by Harold D. Lasswell and colleagues during and just after World War II.
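
    For readers who want to see what "running a topic model" involves in practice, here is a minimal sketch using scikit-learn's LDA implementation on a toy corpus (the Poetics articles themselves are not reproduced here):

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "topic models uncover latent themes in text corpora",
        "cultural sociology uses content analysis of documents",
        "latent dirichlet allocation assigns words to topics",
        "content analysis was pioneered by Lasswell and colleagues",
    ]

    # Build a document-term matrix, then fit a 2-topic LDA model.
    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

    # Print the top words per topic - the usual way topics are inspected.
    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-4:][::-1]]
        print(f"topic {k}: {', '.join(top)}")
    ```
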
  11. Daquino, M.; Peroni, S.; Shotton, D.; Colavizza, G.; Ghavimi, B.; Lauscher, A.; Mayr, P.; Romanello, M.; Zumstein, P.: ¬The OpenCitations Data Model (2020) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 38) [ClassicSimilarity], result of:
              0.098327816 = score(doc=38,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 38, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=38)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Published in: The Semantic Web - ISWC 2020, 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II. See: DOI: 10.1007/978-3-030-62466-8_28.
  12. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.024351284 = product of:
      0.048702568 = sum of:
        0.048702568 = product of:
          0.097405136 = sum of:
            0.097405136 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.097405136 = score(doc=3925,freq=4.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  13. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    0.024351284 = product of:
      0.048702568 = sum of:
        0.048702568 = product of:
          0.097405136 = sum of:
            0.097405136 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.097405136 = score(doc=3582,freq=4.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  14. Dunning, A.: Do we still need search engines? (1999) 0.02
    0.024106542 = product of:
      0.048213083 = sum of:
        0.048213083 = product of:
          0.09642617 = sum of:
            0.09642617 = weight(_text_:22 in 6021) [ClassicSimilarity], result of:
              0.09642617 = score(doc=6021,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.5416616 = fieldWeight in 6021, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6021)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Ariadne. 1999, no.22
  15. DiLauro, T.; Choudhury, G.S.; Patton, M.; Warner, J.W.; Brown, E.W.: Automated name authority control and enhanced searching in the Levy collection (2001) 0.02
    0.023176087 = product of:
      0.046352174 = sum of:
        0.046352174 = product of:
          0.09270435 = sum of:
            0.09270435 = weight(_text_:ii in 1160) [ClassicSimilarity], result of:
              0.09270435 = score(doc=1160,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.33760133 = fieldWeight in 1160, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1160)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper is the second in a series in D-Lib Magazine and describes a workflow management system being developed by the Digital Knowledge Center (DKC) at the Milton S. Eisenhower Library (MSEL) of The Johns Hopkins University. Based on experience from digitizing the Lester S. Levy Collection of Sheet Music, it was apparent that large-scale digitization efforts require a significant amount of human labor that is both time-consuming and costly. Consequently, this workflow management system aims to reduce the amount of human labor and time for large-scale digitization projects. The mission of this second phase of the project ("Levy II") can be summarized as follows:
    • Reduce costs for large collection ingestion by creating a suite of open-source processes, tools, and interfaces for workflow management
    • Increase access capabilities by providing a suite of research tools
    • Demonstrate the utility of the tools and processes with a subset of the online Levy Collection
    The cornerstones of the workflow management system are optical music recognition (OMR) software and an automated name authority control system (ANAC). The OMR software generates a logical representation of the score for sound generation, music searching, and musicological research. The ANAC disambiguates names, associating each name with an individual (e.g., the composer Septimus Winner also published under the pseudonyms Alice Hawthorne and Apsley Street, among others). Complementing the workflow tools, a suite of research tools focuses on enhanced searching capabilities through the development and application of a fast, disk-based search engine for lyrics and music and the incorporation of an XML structure for metadata. The first paper (Choudhury et al. 2001) described the OMR software and musical components of Levy II. This paper focuses on the metadata and intellectual access components, which include automated name authority control and the aforementioned search engine.
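
    At its core, the name authority control described here maps variant name forms, including pseudonyms, to one established heading. A minimal sketch of that lookup, using the Septimus Winner example from the abstract (the real ANAC does considerably more disambiguation work; only the authority-file lookup is shown):

    ```python
    # Toy authority file: variant forms -> established heading.
    AUTHORITY = {
        "winner, septimus": "Winner, Septimus",
        "hawthorne, alice": "Winner, Septimus",   # pseudonym
        "street, apsley": "Winner, Septimus",     # pseudonym
    }

    def authorized_form(name):
        """Return the established heading for a name, or the name
        itself if it is not (yet) under authority control."""
        return AUTHORITY.get(name.strip().lower(), name)

    for n in ["Hawthorne, Alice", "Winner, Septimus", "Foster, Stephen"]:
        print(n, "->", authorized_form(n))
    ```
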
  16. Gil-Berrozpe, J.C.: Description, categorization, and representation of hyponymy in environmental terminology (2022) 0.02
    0.023176087 = product of:
      0.046352174 = sum of:
        0.046352174 = product of:
          0.09270435 = sum of:
            0.09270435 = weight(_text_:ii in 1004) [ClassicSimilarity], result of:
              0.09270435 = score(doc=1004,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.33760133 = fieldWeight in 1004, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1004)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Terminology has evolved from static and prescriptive theories to dynamic and cognitive approaches. Thanks to these approaches, there have been significant advances in the design and elaboration of terminological resources. This has resulted in the creation of tools such as terminological knowledge bases, which are able to show how concepts are interrelated through different semantic or conceptual relations. Of these relations, hyponymy is the most relevant to terminology work because it deals with concept categorization and term hierarchies.
    This doctoral thesis presents an enhancement of the semantic structure of EcoLexicon, a terminological knowledge base on environmental science. The aim of this research was to improve the description, categorization, and representation of hyponymy in environmental terminology. Therefore, we created HypoLexicon, a new stand-alone module for EcoLexicon in the form of a hyponymy-based terminological resource. This resource contains twelve terminological entries from four specialized domains (Biology, Chemistry, Civil Engineering, and Geology), which consist of 309 concepts and 465 terms associated with those concepts.
    This research was mainly based on the theoretical premises of Frame-based Terminology. This theory was combined with Cognitive Linguistics, for conceptual description and representation; Corpus Linguistics, for the extraction and processing of linguistic and terminological information; and Ontology, related to hyponymy and relevant for concept categorization. HypoLexicon was constructed from the following materials: (i) the EcoLexicon English Corpus; (ii) other specialized terminological resources, including EcoLexicon; (iii) Sketch Engine; and (iv) Lexonomy. This thesis explains the methodologies applied for corpus extraction and compilation, corpus analysis, the creation of conceptual hierarchies, and the design of the terminological template.
    The results of the creation of HypoLexicon are discussed by highlighting the information in the hyponymy-based terminological entries: (i) parent concept (hypernym); (ii) child concepts (hyponyms, with various hyponymy levels); (iii) terminological definitions; (iv) conceptual categories; (v) hyponymy subtypes; and (vi) hyponymic contexts. Furthermore, the features and the navigation within HypoLexicon are described from the user interface and the admin interface. In conclusion, this doctoral thesis lays the groundwork for developing a terminological resource that includes definitional, relational, ontological and contextual information about specialized hypernyms and hyponyms. All of this information on specialized knowledge is simple to follow thanks to the hierarchical structure of the terminological template used in HypoLexicon. Therefore, not only does it enhance knowledge representation, but it also facilitates its acquisition.
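
    The hyponymy levels described in the entries can be pictured as a simple hierarchy walked from hypernym to hyponyms. A minimal data-structure sketch with invented geology terms (not actual HypoLexicon entries):

    ```python
    # Minimal hyponymy hierarchy; the terms are invented for
    # illustration and are not taken from HypoLexicon itself.
    HYPONYMS = {
        "rock": ["igneous rock", "sedimentary rock", "metamorphic rock"],
        "sedimentary rock": ["limestone", "sandstone"],
    }

    def descendants(concept, level=0):
        """Walk hyponyms depth-first, yielding (level, concept) pairs."""
        yield level, concept
        for child in HYPONYMS.get(concept, []):
            yield from descendants(child, level + 1)

    # Print the hierarchy with one indent step per hyponymy level.
    for depth, term in descendants("rock"):
        print("  " * depth + term)
    ```
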
  17. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.02
    0.02066275 = product of:
      0.0413255 = sum of:
        0.0413255 = product of:
          0.082651 = sum of:
            0.082651 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.082651 = score(doc=3895,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24. 8.2005 19:20:22
  18. Jaeger, L.: Wissenschaftler versus Wissenschaft (2020) 0.02
    0.02066275 = product of:
      0.0413255 = sum of:
        0.0413255 = product of:
          0.082651 = sum of:
            0.082651 = weight(_text_:22 in 4156) [ClassicSimilarity], result of:
              0.082651 = score(doc=4156,freq=2.0), product of:
                0.1780192 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050836053 = queryNorm
                0.46428138 = fieldWeight in 4156, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 3.2020 14:08:22
  19. Chan, L.M.; Zeng, M.L.: Metadata interoperability and standardization - a study of methodology, part I : achieving interoperability at the schema level (2006) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 1176) [ClassicSimilarity], result of:
              0.08193985 = score(doc=1176,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 1176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1176)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The rapid growth of Internet resources and digital collections has been accompanied by a proliferation of metadata schemas, each of which has been designed based on the requirements of particular user communities, intended users, types of materials, subject domains, project needs, etc. Problems arise when building large digital libraries or repositories with metadata records that were prepared according to diverse schemas. This article (published in two parts) contains an analysis of the methods that have been used to achieve or improve interoperability among metadata schemas and applications, for the purposes of facilitating conversion and exchange of metadata and enabling cross-domain metadata harvesting and federated searches. From a methodological point of view, implementing interoperability may be considered at different levels of operation: schema level, record level, and repository level. Part I of the article intends to explain possible situations in which metadata schemas may be created or implemented, whether in individual projects or in integrated repositories. It also discusses approaches used at the schema level. Part II of the article will discuss metadata interoperability efforts at the record and repository levels.
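
    In contrast to the record-level conversion of Part II (sketched under entry 8 above), schema-level work produces mapping artifacts, such as crosswalks, that exist before any record is touched. A toy sketch, loosely modeled on a MODS-to-Dublin-Core crosswalk (element names simplified; not the official Library of Congress crosswalk):

    ```python
    # Toy schema-level crosswalk: the mapping itself is the artifact,
    # produced independently of any record (element names simplified).
    MODS_TO_DC = {
        "titleInfo/title": "dc:title",
        "name/namePart": "dc:creator",
        "originInfo/dateIssued": "dc:date",
        "physicalDescription/extent": None,   # assumed unmapped here
    }

    # A quick audit of the crosswalk: which elements convert losslessly?
    for src, dst in MODS_TO_DC.items():
        status = dst if dst else "UNMAPPED (lost in conversion)"
        print(f"{src:32} -> {status}")
    ```
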
  20. Hammond, T.; Hannay, T.; Lund, B.; Flack, M.: Social bookmarking tools (II) : a case study - Connotea (2005) 0.02
    0.020484962 = product of:
      0.040969923 = sum of:
        0.040969923 = product of:
          0.08193985 = sum of:
            0.08193985 = weight(_text_:ii in 1189) [ClassicSimilarity], result of:
              0.08193985 = score(doc=1189,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.29840025 = fieldWeight in 1189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1189)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Languages

  • d 46
  • e 42
  • a 1