Search (1001 results, page 2 of 51)

  • Active filter: year_i:[2010 TO 2020}
  1. Seeliger, F.: A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.08
    0.0783509 = product of:
      0.11752635 = sum of:
        0.10143948 = weight(_text_:systematic in 2547) [ClassicSimilarity], result of:
          0.10143948 = score(doc=2547,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.35721707 = fieldWeight in 2547, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.03125 = fieldNorm(doc=2547)
        0.016086869 = product of:
          0.032173738 = sum of:
            0.032173738 = weight(_text_:indexing in 2547) [ClassicSimilarity], result of:
              0.032173738 = score(doc=2547,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.16916946 = fieldWeight in 2547, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2547)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
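
The breakdown above is standard Lucene "explain" output for ClassicSimilarity (TF-IDF). As a check, the final score can be reproduced from the constants in the tree; a minimal sketch in plain Python:

```python
import math

# Reproduce the score breakdown for result 1 (doc 2547).
# All constants below are copied directly from the explain tree above.
# In ClassicSimilarity, idf = 1 + ln(maxDocs / (docFreq + 1)):
# e.g. 1 + ln(44218 / 396) = 5.715473 for "systematic".
query_norm = 0.049684696

def term_weight(freq, idf, field_norm):
    tf = math.sqrt(freq)                 # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm      # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

systematic = term_weight(4.0, 5.715473, 0.03125)   # -> 0.10143948
indexing   = term_weight(2.0, 3.8278677, 0.03125)  # -> 0.032173738

# "indexing" sits under coord(1/2); the outer sum is scaled by coord(2/3).
score = (systematic + indexing * 0.5) * (2 / 3)
print(round(score, 7))  # -> 0.0783509
```
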
    
    Abstract
    The discovery service (a search engine and service called WILBERT) used at our library at the Technical University of Applied Sciences Wildau (TUAS Wildau) comprises more than 8 million items. If we were to record all licensed publications in this tool down to the level of individual articles, including their bibliographic records and full texts, the holdings would be estimated at a hundred million documents. Many features, such as ranking, autocompletion, multi-faceted classification, and refinement options, reduce the number of hits. However, this is not enough to provide intuitive support for a systematic overview of the topics related to the documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro thesauri for MINT (STEM) topics in order to develop advanced indexing of the library stock. We use iQvoc as a vocabulary management system to create the thesauri. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose is to integrate the thesauri into WILBERT in order to offer better subject-related search. This approach especially supports first-year students by giving them the possibility to browse through a hierarchical arrangement of a subject, for instance logistics or computer science, and thereby discover how the terms are related. It also gives students an insight into established abbreviations and alternative labels. Students at TUAS Wildau were involved in the development of the software, regarding both the interface and the functionality of iQvoc. As a first step, 3,000 terms have been included in our discovery tool WILBERT.
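
The workflow described in the abstract ends in a SKOS thesaurus built by iQvoc. As a rough illustration of the resulting data (not of iQvoc itself), here is a minimal sketch using Python's rdflib; the concepts and labels are invented examples of the hierarchy-plus-altLabel pattern the abstract mentions:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

g = Graph()
EX = Namespace("http://example.org/thesaurus/")  # hypothetical namespace
g.bind("skos", SKOS)

# A broader concept and a narrower one with an established abbreviation.
g.add((EX.logistics, RDF.type, SKOS.Concept))
g.add((EX.logistics, SKOS.prefLabel, Literal("Logistics", lang="en")))

g.add((EX.scm, RDF.type, SKOS.Concept))
g.add((EX.scm, SKOS.prefLabel, Literal("Supply chain management", lang="en")))
g.add((EX.scm, SKOS.altLabel, Literal("SCM", lang="en")))  # abbreviation
g.add((EX.scm, SKOS.broader, EX.logistics))                # hierarchy

print(g.serialize(format="turtle"))
```
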
  2. Samuelsson, J.: Knowledge organization for feminism and feminist research : a discourse oriented study of systematic outlines, logical structure, semantics and the process of indexing (2010) 0.07
    0.07317951 = product of:
      0.10976927 = sum of:
        0.08966068 = weight(_text_:systematic in 3354) [ClassicSimilarity], result of:
          0.08966068 = score(doc=3354,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 3354, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3354)
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 3354) [ClassicSimilarity], result of:
              0.04021717 = score(doc=3354,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 3354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3354)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
  3. Broughton, V.: Facet analysis as a tool for modelling subject domains and terminologies (2011) 0.07
    0.07317951 = product of:
      0.10976927 = sum of:
        0.08966068 = weight(_text_:systematic in 4826) [ClassicSimilarity], result of:
          0.08966068 = score(doc=4826,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 4826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4826)
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 4826) [ClassicSimilarity], result of:
              0.04021717 = score(doc=4826,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 4826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4826)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Facet analysis is proposed as a general theory of knowledge organization, with an associated methodology that may be applied to the development of terminology tools in a variety of contexts and formats. Faceted classifications originated as a means of representing complexity in semantic content that facilitates logical organization and effective retrieval in a physical environment. This is achieved through meticulous analysis of concepts, their structural and functional status (based on fundamental categories), and their inter-relationships. These features provide an excellent basis for the general conceptual modelling of domains, and for the generation of KOS other than systematic classifications. This is demonstrated by the adoption of a faceted approach to many web search and visualization tools, and by the emergence of a facet based methodology for the construction of thesauri. Current work on the Bliss Bibliographic Classification (Second Edition) is investigating the ways in which the full complexity of faceted structures may be represented through encoded data, capable of generating intellectually and mechanically compatible forms of indexing tools from a single source. It is suggested that a number of research questions relating to the Semantic Web could be tackled through the medium of facet analysis.
  4. LaBarre, K.A.; Tilley, C.L.: The elusive tale : leveraging the study of information seeking and knowledge organization to improve access to and discovery of folktales (2012) 0.07
    0.07317951 = product of:
      0.10976927 = sum of:
        0.08966068 = weight(_text_:systematic in 48) [ClassicSimilarity], result of:
          0.08966068 = score(doc=48,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 48, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=48)
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 48) [ClassicSimilarity], result of:
              0.04021717 = score(doc=48,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 48, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=48)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The "Folktales and Facets" project proposes ways to enhance access to folktales-in written and audiovisual formats-through the systematic and rigorous development of user-focused and task-focused models of information representation. Methods used include cognitive task analysis and facet analysis to better understand the information-seeking and information-use practices of people working with folktales and the intellectual dimensions of the domain. Interviews were conducted with 9 informants, representing scholars, storytellers, and teachers who rely on folktales in their professional lives to determine common tasks across user groups. Four tasks were identified: collect, create, instruct, and study. Facet analysis was conducted on the transcripts of these interviews, and a representative set of literature that included subject indexing material and a random stratified set of document surrogates drawn from a collection of folktales, including bibliographic records, introductions, reviews, tables of contents, and bibliographies. Eight facets were identified as most salient for this group of users: agent, association, context, documentation, location, subject, time, and viewpoint. Implications include the need for systems designers to devise methods for harvesting and integrating extant contextual material into search and discovery systems, and to take into account user-desired features in the development of enhanced services for digital repositories.
  5. Oikarinen, T.; Kortelainen, T.: Challenges of diversity, consistency, and globality in indexing of local archeological artifacts (2013) 0.07
    0.07317951 = product of:
      0.10976927 = sum of:
        0.08966068 = weight(_text_:systematic in 782) [ClassicSimilarity], result of:
          0.08966068 = score(doc=782,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=782)
        0.020108584 = product of:
          0.04021717 = sum of:
            0.04021717 = weight(_text_:indexing in 782) [ClassicSimilarity], result of:
              0.04021717 = score(doc=782,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.21146181 = fieldWeight in 782, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=782)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We consider documents produced in archeological post-excavation analysis and revisit the question of archeological cataloguing, a specific case in the context of the global progress of digitalization in archeology. The catalogue of archeological artifacts from the excavation of the city of Jakobstad, Finland, was analyzed through a content analysis. Quantitative analysis was conducted using the SPSS statistical package, and the results are presented in figures and tables. The analysis was based on a qualitative definition of variables describing the archeological artifacts. The analysis shows that the catalogue of artifacts is mainly systematic, but the results also reveal non-uniformity in cataloguing. In the free description column, several categorizations were found that could be used in developing the structure of an archeological catalogue. Traditional cataloguing methods are still practiced in archeology, but these do not fulfill the requirements of future use of the data. In this case, a vocabulary and a tool for cataloguing archeological artifacts would contribute to the development of cataloguing and future access to the data. These devices should be flexible and support the uniqueness of the artifacts. Tools and vocabularies for archeological cataloguing exist and could be localized to fulfill the needs of the future digitalization of archeological data.
  6. Cui, H.: Competency evaluation of plant character ontologies against domain literature (2010) 0.07
    0.07099311 = product of:
      0.10648966 = sum of:
        0.08966068 = weight(_text_:systematic in 3466) [ClassicSimilarity], result of:
          0.08966068 = score(doc=3466,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 3466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3466)
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 3466) [ClassicSimilarity], result of:
              0.033657953 = score(doc=3466,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 3466, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3466)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Specimen identification keys are still the most commonly created tools used by systematic biologists to access biodiversity information. Creating identification keys requires analyzing and synthesizing large amounts of information from specimens and their descriptions and is a very labor-intensive and time-consuming activity. Automating the generation of identification keys from text descriptions becomes a highly attractive text mining application in the biodiversity domain. Fine-grained semantic annotation of morphological descriptions of organisms is a necessary first step in generating keys from text. Machine-readable ontologies are needed in this process because most biological characters are only implied (i.e., not stated) in descriptions. The immediate question to ask is: How well do existing ontologies support semantic annotation and automated key generation? With the intention of either selecting an existing ontology or developing a unified ontology based on existing ones, this paper evaluates the coverage, semantic consistency, and inter-ontology agreement of a biodiversity character ontology and three plant glossaries that may be turned into ontologies. The coverage and semantic consistency of the ontology/glossaries are checked against the authoritative domain literature, namely, Flora of North America and Flora of China. The evaluation results suggest that more work is needed to improve the coverage and interoperability of the ontology/glossaries. More concepts need to be added to the ontology/glossaries and careful work is needed to improve the semantic consistency. The method used in this paper to evaluate the ontology/glossaries can be used to propose new candidate concepts from the domain literature and suggest appropriate definitions.
    Date
    1. 6.2010 9:55:22
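
At its simplest, the competency evaluation described above is a term-coverage check of a glossary against the domain literature. A minimal sketch with invented stand-in terms:

```python
# What fraction of character terms used in the domain literature is covered
# by a glossary or ontology? Uncovered terms are candidate concepts to add.
def coverage(literature_terms, ontology_terms):
    lit = {t.lower() for t in literature_terms}
    onto = {t.lower() for t in ontology_terms}
    return len(lit & onto) / len(lit), sorted(lit - onto)

# Invented stand-ins for terms harvested from floras and from a glossary.
flora_terms = ["pubescent", "glabrous", "serrate", "pinnate", "stipule"]
glossary    = ["glabrous", "pinnate", "serrate", "petiole"]

ratio, missing = coverage(flora_terms, glossary)
print(f"coverage: {ratio:.0%}, missing: {missing}")
```
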
  7. Leydesdorff, L.; Bornmann, L.: How fractional counting of citations affects the impact factor : normalization in terms of differences in citation potentials among fields of science (2011) 0.07
    0.07099311 = product of:
      0.10648966 = sum of:
        0.08966068 = weight(_text_:systematic in 4186) [ClassicSimilarity], result of:
          0.08966068 = score(doc=4186,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 4186, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4186)
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 4186) [ClassicSimilarity], result of:
              0.033657953 = score(doc=4186,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 4186, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4186)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The Impact Factors (IFs) of the Institute for Scientific Information suffer from a number of drawbacks, among them the statistics (why should one use the mean and not the median?) and the incomparability among fields of science because of systematic differences in citation behavior among fields. Can these drawbacks be counteracted by fractionally counting citation weights instead of using whole numbers in the numerators? (a) Fractional citation counts are normalized in terms of the citing sources and thus would take into account differences in citation behavior among fields of science. (b) Differences in the resulting distributions can be tested statistically for their significance at different levels of aggregation. (c) Fractional counting can be generalized to any document set, including journals or groups of journals, and thus the significance of differences among both small and large sets can be tested. A list of fractionally counted IFs for 2008 is available online at http://www.leydesdorff.net/weighted_if/weighted_if.xls. The between-group variance among the 13 fields of science identified in the U.S. Science and Engineering Indicators is no longer statistically significant after this normalization. Although citation behavior differs largely between disciplines, the reflection of these differences in fractionally counted citation distributions cannot be used as a reliable instrument for the classification.
    Date
    22. 1.2011 12:51:07
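
Fractional counting, as characterized in point (a) of the abstract, weights each citation by 1/R, where R is the number of references in the citing paper. A minimal sketch contrasting whole and fractional counts over invented citing records:

```python
from collections import defaultdict

# Each tuple: (cited journal, number of references in the citing paper).
# The records are invented; a field with long reference lists (e.g. cell
# biology) contributes less per citation after normalization.
citations = [
    ("J.Informetrics", 25), ("J.Informetrics", 50),
    ("Cell", 40), ("Cell", 40), ("Cell", 80),
]

whole = defaultdict(int)
fractional = defaultdict(float)
for cited, n_refs in citations:
    whole[cited] += 1                # integer counting
    fractional[cited] += 1 / n_refs  # fractional counting

for journal in whole:
    print(journal, whole[journal], round(fractional[journal], 4))
```
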
  8. Díaz-Faes, A.A.; Bordons, M.: Acknowledgments in scientific publications : presence in Spanish science and text patterns across disciplines (2014) 0.07
    0.07099311 = product of:
      0.10648966 = sum of:
        0.08966068 = weight(_text_:systematic in 1351) [ClassicSimilarity], result of:
          0.08966068 = score(doc=1351,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 1351, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1351)
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 1351) [ClassicSimilarity], result of:
              0.033657953 = score(doc=1351,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 1351, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1351)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The acknowledgments in scientific publications are an important feature in the scholarly communication process. This research analyzes funding acknowledgment presence in scientific publications and introduces a novel approach for discovering text patterns by discipline in the acknowledgment section of papers. First, the presence of acknowledgments in 38,257 English-language papers published by Spanish researchers in 2010 is studied by subject area on the basis of the funding acknowledgment information available in the Web of Science database. Funding acknowledgments are present in two thirds of Spanish articles, with significant differences by subject area, number of authors, impact factor of journals, and, in one specific area, basic/applied nature of research. Second, the existence of specific acknowledgment patterns in English-language papers of Spanish researchers in 4 selected subject categories (cardiac and cardiovascular systems, economics, evolutionary biology, and statistics and probability) is explored through a combination of text mining and multivariate analyses. "Peer interactive communication" predominates in the more theoretical or social-oriented fields (statistics and probability, economics), whereas the recognition of technical assistance is more common in experimental research (evolutionary biology), and the mention of potential conflicts of interest emerges forcefully in the clinical field (cardiac and cardiovascular systems). The systematic inclusion of structured data about acknowledgments in journal articles and bibliographic databases would have a positive impact on the study of collaboration practices in science.
    Date
    22. 8.2014 17:06:28
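
For flavor only, a few surface patterns that typically signal a funding acknowledgment; the study itself draws on the Web of Science funding-acknowledgment field plus text mining and multivariate analyses, not a regular expression:

```python
import re

# Illustrative patterns only; real funding statements are far more varied.
FUNDING_PATTERNS = re.compile(
    r"(supported by|funded by|grant(s)? (no\.?|number)|financial support)",
    re.IGNORECASE,
)

acknowledgments = [
    "This work was supported by the Spanish Ministry of Science (grant no. ABC-123).",
    "We thank two anonymous reviewers for helpful comments.",
]
for text in acknowledgments:
    print(bool(FUNDING_PATTERNS.search(text)), "-", text[:60])
```
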
  9. Alahmari, F.; Thom, J.A.; Magee, L.: A model for ranking entity attributes using DBpedia (2014) 0.07
    0.07099311 = product of:
      0.10648966 = sum of:
        0.08966068 = weight(_text_:systematic in 1623) [ClassicSimilarity], result of:
          0.08966068 = score(doc=1623,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 1623, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1623)
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 1623) [ClassicSimilarity], result of:
              0.033657953 = score(doc=1623,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 1623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1623)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - Previous work highlights two key challenges in searching for information about individual entities (such as persons, places and organisations) over semantic data: query ambiguity and redundant attributes. The purpose of this paper is to consider these challenges and proposes the Attribute Importance Model (AIM) for clustering and ranking aggregated entity search to improve the overall users' experience of finding and navigating entities over the Web of Data. Design/methodology/approach - The proposed model describes three distinct techniques for augmenting semantic search: first, presenting entity type-based query suggestions; second, clustering aggregated attributes; and third, ranking attributes based on their importance to a given query. To evaluate the model, 36 subjects were recruited to experience entity search with and without AIM. Findings - The experimental results show that the model achieves significant improvements over the default method of semantic aggregated search provided by Sig.ma, a leading entity search and navigation tool. Originality/value - This proposal develops more informative views for aggregated entity search and exploration to enhance users' understanding of semantic data. The user study is the first to evaluate user interaction with Sig.ma's search capabilities in a systematic way.
    Date
    20. 1.2015 18:30:22
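
The abstract does not specify how the Attribute Importance Model scores attributes, so the following is only a loose sketch of the general idea, with an invented scoring rule: attributes rank higher when they recur across aggregated sources and overlap the query:

```python
from collections import Counter

def rank_attributes(observations, query_terms):
    """Rank attributes by cross-source frequency plus query overlap."""
    freq = Counter(attr for source in observations for attr in source)
    def score(attr):
        overlap = len(set(attr.lower().split("_")) & set(query_terms))
        return freq[attr] + 2 * overlap  # invented weighting
    return sorted(freq, key=score, reverse=True)

# Attribute lists as they might be aggregated from several semantic sources.
sources = [
    ["birth_place", "birth_date", "wikiPageID"],
    ["birth_place", "occupation"],
    ["birth_place", "birth_date"],
]
print(rank_attributes(sources, query_terms={"birth"}))
```
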
  10. Bodoff, D.; Raban, D.: Question types and intermediary elicitations (2016) 0.07
    0.07099311 = product of:
      0.10648966 = sum of:
        0.08966068 = weight(_text_:systematic in 2638) [ClassicSimilarity], result of:
          0.08966068 = score(doc=2638,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.31573826 = fieldWeight in 2638, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2638)
        0.016828977 = product of:
          0.033657953 = sum of:
            0.033657953 = weight(_text_:22 in 2638) [ClassicSimilarity], result of:
              0.033657953 = score(doc=2638,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.19345059 = fieldWeight in 2638, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2638)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In the context of online question-answering services, an intermediary clarifies the user's needs by eliciting additional information. This research proposes that these elicitations will depend on the type of question. In particular, this research explores the relationship between three constructs: question types, elicitations, and the fee that is paid for the answer. These relationships are explored for a few different question typologies, including a new kind of question type that we call Identity. It is found that the kinds of clarifications that intermediaries elicit depend on the type of question in systematic ways. A practical implication is that interactive question-answering services-whether human or automated-can be steered to focus attention on the kinds of clarification that are evidently most needed for that question type. Further, it is found that certain question types, as well as the number of elicitations, are associated with higher fees. This means that it may be possible to define a pricing structure for question-answering services based on objective and predictable characteristics of the question, which would help to establish a rational market for this type of information service. The newly introduced Identity question type was found to be especially reliable in predicting elicitations and fees.
    Date
    22. 1.2016 11:58:25
  11. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.06
    0.056794487 = product of:
      0.08519173 = sum of:
        0.07172854 = weight(_text_:systematic in 168) [ClassicSimilarity], result of:
          0.07172854 = score(doc=168,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.2525906 = fieldWeight in 168, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.03125 = fieldNorm(doc=168)
        0.013463181 = product of:
          0.026926363 = sum of:
            0.026926363 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
              0.026926363 = score(doc=168,freq=2.0), product of:
                0.17398734 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049684696 = queryNorm
                0.15476047 = fieldWeight in 168, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=168)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies are viewed as the silver bullet for many applications, but in open or evolving systems, different parties can adopt different ontologies. This increases heterogeneity problems rather than reducing heterogeneity. This book proposes ontology matching as a solution to the problem of semantic heterogeneity, offering researchers and practitioners a uniform framework of reference to currently available work. The techniques presented apply to database schema matching, catalog integration, XML schema matching and more. Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
    Date
    20. 6.2012 19:08:22
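
As a minimal illustration of one elementary technique from the family the book surveys (terminological matching, proposing a correspondence when entity labels are sufficiently similar; the two toy ontologies are invented):

```python
from difflib import SequenceMatcher

def label_similarity(a, b):
    """Normalized string similarity between two entity labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

onto1 = ["Author", "Publication", "Journal"]
onto2 = ["Writer", "Publications", "Periodical"]

# Propose a correspondence when the best label match clears a threshold.
for e1 in onto1:
    best = max(onto2, key=lambda e2: label_similarity(e1, e2))
    sim = label_similarity(e1, best)
    if sim >= 0.8:
        print(f"{e1} = {best}  (similarity {sim:.2f})")
```

Label matching alone misses synonym pairs such as Author/Writer, which is exactly why the book covers structural and semantic techniques as well.
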
  12. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.06
    0.055101942 = product of:
      0.16530582 = sum of:
        0.16530582 = sum of:
          0.11145309 = weight(_text_:indexing in 1149) [ClassicSimilarity], result of:
            0.11145309 = score(doc=1149,freq=6.0), product of:
              0.19018644 = queryWeight, product of:
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.049684696 = queryNorm
              0.5860202 = fieldWeight in 1149, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.8278677 = idf(docFreq=2614, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
          0.053852726 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
            0.053852726 = score(doc=1149,freq=2.0), product of:
              0.17398734 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049684696 = queryNorm
              0.30952093 = fieldWeight in 1149, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=1149)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
    Theme
    Citation indexing
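
Since the record above turns on PageRank, a textbook power-iteration sketch on an invented citation graph may be useful; this is the standard formulation, not code from the paper:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power iteration for PageRank; links maps node -> list of out-links."""
    nodes = list(links)
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(links[m]) for m in nodes if n in links[m])
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new
    return rank

# paper -> papers it cites (invented citation graph)
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for paper, r in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(paper, round(r, 3))
```
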
  13. Verwer, K.: Freiheit und Verantwortung bei Hans Jonas (2011) 0.05
    0.05260831 = product of:
      0.15782493 = sum of:
        0.15782493 = product of:
          0.4734748 = sum of:
            0.4734748 = weight(_text_:3a in 973) [ClassicSimilarity], result of:
              0.4734748 = score(doc=973,freq=2.0), product of:
                0.4212274 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.049684696 = queryNorm
                1.1240361 = fieldWeight in 973, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.09375 = fieldNorm(doc=973)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: http://creativechoice.org/doc/HansJonas.pdf
  14. Kelly, D.; Sugimoto, C.R.: ¬A systematic review of interactive information retrieval evaluation studies, 1967-2006 (2013) 0.05
    0.05176563 = product of:
      0.15529688 = sum of:
        0.15529688 = weight(_text_:systematic in 684) [ClassicSimilarity], result of:
          0.15529688 = score(doc=684,freq=6.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.54687476 = fieldWeight in 684, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=684)
      0.33333334 = coord(1/3)
    
    Abstract
    With the increasing number and diversity of search tools available, interest in the evaluation of search systems, particularly from a user perspective, has grown among researchers. More researchers are designing and evaluating interactive information retrieval (IIR) systems and beginning to innovate in evaluation methods. Maturation of a research specialty relies on the ability to replicate research, provide standards for measurement and analysis, and understand past endeavors. This article presents a historical overview of 40 years of IIR evaluation studies using the method of systematic review. A total of 2,791 journal and conference units were manually examined and 127 articles were selected for analysis in this study, based on predefined inclusion and exclusion criteria. These articles were systematically coded using features such as author, publication date, sources and references, and properties of the research method used in the articles, such as number of subjects, tasks, corpora, and measures. Results include data describing the growth of IIR studies over time, the most frequently occurring and cited authors and sources, and the most common types of corpora and measures used. An additional product of this research is a bibliography of IIR evaluation research that can be used by students, teachers, and those new to the area. To the authors' knowledge, this is the first historical, systematic characterization of the IIR evaluation literature, including the documentation of methods and measures used by researchers in this specialty.
  15. Lavranos, C.; Kostagiolas, P.; Korfiatis, N.; Papadatos, J.: Information seeking for musical creativity : a systematic literature review (2016) 0.05
    0.05176563 = product of:
      0.15529688 = sum of:
        0.15529688 = weight(_text_:systematic in 3079) [ClassicSimilarity], result of:
          0.15529688 = score(doc=3079,freq=6.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.54687476 = fieldWeight in 3079, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3079)
      0.33333334 = coord(1/3)
    
    Abstract
    This paper aims to present a systematic literature review of research in music information seeking and its application to musical creativity and creative activities and in particular composition, performance and improvisation, and listening and analysis. A seed set of 901 articles published between 1973 and 2015 was evaluated and in total 65 studies were considered for further analyses. Data extraction and synthesis was performed through content analysis using the PRISMA method. Three thematic categories were identified in regard to music information needs: (a) those related to scholarly activities, (b) musically motivated, as well as (c) those which are related to socializing and communication. In addition, 3 categories of music information sources were connected to musical creativity: (i) those that are related to Internet and media technologies, (ii) those that are related to music libraries, organizations, and music stores, and (iii) those that are related to the subjects' social settings. The paper provides a systematic review, with the aim of showcasing the effect of modern information retrieval techniques in a creative and intensive area of information-dependent activity such as music making and consumption.
  16. Theories of information, communication and knowledge : a multidisciplinary approach (2014) 0.05
    0.05122566 = product of:
      0.076838486 = sum of:
        0.06276248 = weight(_text_:systematic in 2110) [ClassicSimilarity], result of:
          0.06276248 = score(doc=2110,freq=2.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.22101676 = fieldWeight in 2110, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2110)
        0.01407601 = product of:
          0.02815202 = sum of:
            0.02815202 = weight(_text_:indexing in 2110) [ClassicSimilarity], result of:
              0.02815202 = score(doc=2110,freq=2.0), product of:
                0.19018644 = queryWeight, product of:
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.049684696 = queryNorm
                0.14802328 = fieldWeight in 2110, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.8278677 = idf(docFreq=2614, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2110)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    1. Introduction; Fidelia Ibekwe-SanJuan and Thomas Dousa
    2. Cybersemiotics: A new foundation for transdisciplinary theory of information, cognition, meaning, communication and consciousness; Søren Brier
    3. Epistemology and the Study of Social Information within the Perspective of a Unified Theory of Information; Wolfgang Hofkirchner
    4. Perception and Testimony as Data Providers; Luciano Floridi
    5. Human communication from the semiotic perspective; Winfried Nöth
    6. Mind the gap: transitions between concepts of information in varied domains; Lyn Robinson and David Bawden
    7. Information and the disciplines: A conceptual meta-analysis; Jonathan Furner
    8. Epistemological Challenges for Information Science; Ian Cornelius
    9. The nature of information science and its core concepts; Birger Hjørland
    10. Visual information construing: bistability as a revealer of mediating patterns; Sylvie Leleu-Merviel
    11. Understanding users' informational constructs via a triadic method approach: a case study; Michel Labour
    12. Documentary languages and the demarcation of information units in textual information: the case of Julius O. Kaiser's Systematic Indexing
  17. Dahlberg, I.: A systematic new lexicon of all knowledge fields based on the Information Coding Classification (2012) 0.05
    0.050719745 = product of:
      0.15215923 = sum of:
        0.15215923 = weight(_text_:systematic in 81) [ClassicSimilarity], result of:
          0.15215923 = score(doc=81,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.5358256 = fieldWeight in 81, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=81)
      0.33333334 = coord(1/3)
    
    Abstract
    A new lexicon of all knowledge fields in the German language, with the terms of the fields in English, is under preparation. The article is meant to provide an idea of its genesis and its structure. It will, of course, also contain an alphabetical arrangement of entries. The structure is provided by the Information Coding Classification (ICC), which is a theory-based, faceted universal classification system of knowledge fields. Section (1) outlines its early history (1970-77). Section (2) discusses its twelve principles regarding concepts, conceptual relationships, and notation; its 9 main object area classes arranged on integrative levels; and its systematic digital schedule with its systematizer, offering 9 subdividing aspects. It shows possible links with other systems, as well as the system's assets for interdisciplinarity and transdisciplinarity. Providing concrete examples, section (3) describes the contents of the nine levels, section (4) delineates some issues of subject group/domain construction, and section (5) clarifies the lexicon entries.
  18. Nguyen, S.-H.; Chowdhury, G.: Interpreting the knowledge map of digital library research (1990-2010) (2013) 0.05
    0.050719745 = product of:
      0.15215923 = sum of:
        0.15215923 = weight(_text_:systematic in 958) [ClassicSimilarity], result of:
          0.15215923 = score(doc=958,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.5358256 = fieldWeight in 958, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=958)
      0.33333334 = coord(1/3)
    
    Abstract
    A knowledge map of digital library (DL) research shows the semantic organization of DL research topics and also the evolution of the field. The research reported in this article aims to find the core topics and subtopics of DL research in order to build a knowledge map of the DL domain. The methodology is comprised of a four-step research process, and two knowledge organization methods (classification and thesaurus building) were used. A knowledge map covering 21 core topics and 1,015 subtopics of DL research was created and provides a systematic overview of DL research during the last two decades (1990-2010). We argue that the map can work as a knowledge platform to guide, evaluate, and improve the activities of DL research, education, and practices. Moreover, it can be transformed into a DL ontology for various applications. The research methodology can be used to map any human knowledge domain; it is a novel and scientific method for producing comprehensive and systematic knowledge maps based on literary warrant.
  19. Qiu, X.Y.; Srinivasan, P.; Hu, Y.: Supervised learning models to predict firm performance with annual reports : an empirical study (2014) 0.05
    0.050719745 = product of:
      0.15215923 = sum of:
        0.15215923 = weight(_text_:systematic in 1205) [ClassicSimilarity], result of:
          0.15215923 = score(doc=1205,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.5358256 = fieldWeight in 1205, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=1205)
      0.33333334 = coord(1/3)
    
    Abstract
    Text mining and machine learning methodologies have been applied toward knowledge discovery in several domains, such as biomedicine and business. Interestingly, in the business domain, the text mining and machine learning community has minimally explored company annual reports with their mandatory disclosures. In this study, we explore the question "How can annual reports be used to predict change in company performance from one year to the next?" from a text mining perspective. Our article contributes a systematic study of the potential of company mandatory disclosures using a computational viewpoint in the following aspects: (a) We characterize our research problem along distinct dimensions to gain a reasonably comprehensive understanding of the capacity of supervised learning methods in predicting change in company performance using annual reports, and (b) our findings from unbiased systematic experiments provide further evidence about the economic incentives faced by analysts in their stock recommendations and speculations on analysts having access to more information in producing earnings forecast.
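
A toy version of the prediction task described above, using scikit-learn; the documents and labels are invented and the setup is far smaller than the paper's:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented annual-report snippets labeled with the direction of change in
# performance the following year. Purely illustrative training data.
reports = [
    "revenue growth strong demand expanded margins record backlog",
    "impairment charges restructuring weak demand declining sales",
    "new product launches market share gains improved outlook",
    "litigation costs covenant breach liquidity pressure losses",
]
labels = ["up", "down", "up", "down"]

# Bag-of-words TF-IDF features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)
print(model.predict(["strong demand and improved margins drive growth"]))
```
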
  20. Rockmore, D.N.; Fang, C.; Foti, N.J.; Ginsburg, T.; Krakauer, D.C.: The cultural evolution of national constitutions (2018) 0.05
    0.050719745 = product of:
      0.15215923 = sum of:
        0.15215923 = weight(_text_:systematic in 4125) [ClassicSimilarity], result of:
          0.15215923 = score(doc=4125,freq=4.0), product of:
            0.28397155 = queryWeight, product of:
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.049684696 = queryNorm
            0.5358256 = fieldWeight in 4125, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.715473 = idf(docFreq=395, maxDocs=44218)
              0.046875 = fieldNorm(doc=4125)
      0.33333334 = coord(1/3)
    
    Abstract
    We explore how ideas from infectious disease and genetics can be used to uncover patterns of cultural inheritance and innovation in a corpus of 591 national constitutions spanning 1789-2008. Legal "ideas" are encoded as "topics" (words statistically linked in documents) derived from topic modeling the corpus of constitutions. Using these topics we derive a diffusion network for borrowing from ancestral constitutions back to the US Constitution of 1789 and reveal that constitutions are complex cultural recombinants. We find systematic variation in patterns of borrowing from ancestral texts and "biological"-like behavior in patterns of inheritance, with the distribution of "offspring" arising through a bounded preferential-attachment process. This process leads to a small number of highly innovative (influential) constitutions, some of which have not yet been identified as such in the current literature. Our findings thus shed new light on the critical nodes of the constitution-making network. The constitutional network structure reflects periods of intense constitution creation, and systematic patterns of variation in constitutional lifespan and temporal influence.
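
The "bounded preferential-attachment process" mentioned in the abstract can be illustrated with a bare-bones simulation (unbounded here, for brevity; all parameters invented):

```python
import random

def preferential_attachment(n_documents, seed=42):
    """Each new document borrows from a parent chosen with probability
    proportional to the parent's current weight (its offspring count + 1)."""
    rng = random.Random(seed)
    weight = [1]  # smoothing: every document starts with weight 1
    for child in range(1, n_documents):
        parent = rng.choices(range(child), weights=weight[:child])[0]
        weight[parent] += 1
        weight.append(1)
    return weight

# A few documents accumulate most of the "offspring": the heavy-tailed
# influence distribution the abstract describes.
counts = preferential_attachment(500)
print("most-borrowed-from documents:", sorted(counts, reverse=True)[:5])
```
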

Languages

  • e 802
  • d 188
  • a 1
  • f 1
  • hu 1
  • i 1

Types

  • a 876
  • el 97
  • m 64
  • s 20
  • x 15
  • r 8
  • b 5
  • i 1
  • n 1
  • z 1