Search (1775 results, page 1 of 89)

  • Active filter: year_i:[2010 TO 2020} (Lucene range syntax: the square bracket includes 2010, the curly brace excludes 2020)
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.13
    0.1332649 = sum of:
      0.104036875 = product of:
        0.4161475 = sum of:
          0.4161475 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
            0.4161475 = score(doc=1826,freq=2.0), product of:
              0.44427133 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052402776 = queryNorm
              0.93669677 = fieldWeight in 1826, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.078125 = fieldNorm(doc=1826)
        0.25 = coord(1/4)
      0.029228024 = product of:
        0.05845605 = sum of:
          0.05845605 = weight(_text_:j in 1826) [ClassicSimilarity], result of:
            0.05845605 = score(doc=1826,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.35106707 = fieldWeight in 1826, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.078125 = fieldNorm(doc=1826)
        0.5 = coord(1/2)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
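    The leaf weights in these explain trees follow Lucene's classic TF-IDF scoring (ClassicSimilarity). A minimal Python sketch, using only the numbers printed for the _text_:3a clause of document 1826 above, reproduces that clause's 0.4161475; the formulas (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm) are the standard ClassicSimilarity definitions, and the variable names are ours:

      import math

      # Values copied from the explain tree of result 1 (doc 1826, term _text_:3a).
      freq = 2.0
      idf = 8.478011            # rarity of the term (docFreq=24, maxDocs=44218)
      query_norm = 0.052402776  # constant shared by every clause of the query
      field_norm = 0.078125     # length normalization of the matched field

      tf = math.sqrt(freq)                  # 1.4142135
      query_weight = idf * query_norm       # ~0.44427133
      field_weight = tf * idf * field_norm  # ~0.93669677
      term_score = query_weight * field_weight
      print(term_score)                     # ~0.4161475; scaled by coord(1/4) = 0.25 above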
  2. Mugridge, R.L.; Edmunds, J.: Batchloading MARC bibliographic records (2012) 0.12
    0.12039761 = sum of:
      0.029779412 = product of:
        0.11911765 = sum of:
          0.11911765 = weight(_text_:authors in 2600) [ClassicSimilarity], result of:
            0.11911765 = score(doc=2600,freq=4.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.49862027 = fieldWeight in 2600, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2600)
        0.25 = coord(1/4)
      0.0906182 = sum of:
        0.040919233 = weight(_text_:j in 2600) [ClassicSimilarity], result of:
          0.040919233 = score(doc=2600,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.24574696 = fieldWeight in 2600, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2600)
        0.049698967 = weight(_text_:22 in 2600) [ClassicSimilarity], result of:
          0.049698967 = score(doc=2600,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.2708308 = fieldWeight in 2600, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2600)
    
    Abstract
    Research libraries are using batchloading to provide access to many resources that they would otherwise be unable to catalog given the staff and other resources available. To explore how such libraries are managing their batchloading activities, the authors conducted a survey of the Association for Library Collections and Technical Services Directors of Large Research Libraries Interest Group member libraries. The survey addressed staffing, budgets, scope, workflow, management, quality standards, information technology support, collaborative efforts, and assessment of batchloading activities. The authors provide an analysis of the survey results along with suggestions for process improvements and future research.
    Date
    10. 9.2000 17:38:22
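    A document's total score is simply the sum of its clause-group scores, each scaled by a coordination factor coord(matched clauses / total clauses). A minimal sketch, using only the numbers in the explain tree of result 2 (doc 2600), reproduces the 0.12039761 printed next to its title; the coord(2/2) = 1 of the second group is presumably omitted from the dump because it changes nothing:

      # Clause scores copied from the explain tree of result 2 (doc 2600).
      authors_weight = 0.11911765  # weight(_text_:authors)
      j_weight = 0.040919233       # weight(_text_:j)
      w22_weight = 0.049698967     # weight(_text_:22)

      group1 = authors_weight * (1 / 4)           # coord(1/4): 1 of 4 sub-clauses matched
      group2 = (j_weight + w22_weight) * (2 / 2)  # coord(2/2) = 1.0

      doc_score = group1 + group2
      print(round(doc_score, 8))                  # 0.12039761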
  3. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.11
    0.10548312 = sum of:
      0.02605156 = product of:
        0.10420624 = sum of:
          0.10420624 = weight(_text_:authors in 2590) [ClassicSimilarity], result of:
            0.10420624 = score(doc=2590,freq=6.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.43620193 = fieldWeight in 2590, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
        0.25 = coord(1/4)
      0.07943156 = sum of:
        0.029228024 = weight(_text_:j in 2590) [ClassicSimilarity], result of:
          0.029228024 = score(doc=2590,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.17553353 = fieldWeight in 2590, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2590)
        0.05020354 = weight(_text_:22 in 2590) [ClassicSimilarity], result of:
          0.05020354 = score(doc=2590,freq=4.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.27358043 = fieldWeight in 2590, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2590)
    
    Abstract
    Purpose - The purpose of this paper is to analyse the global scientific outputs of ontology research, an important emerging discipline that has huge potential to improve information understanding, organization, and management. Design/methodology/approach - This study collected literature published during 1900-2012 from the Web of Science database. The bibliometric analysis was performed from authorial, institutional, national, spatiotemporal, and topical aspects. Basic statistical analysis, visualization of geographic distribution, co-word analysis, and a new index were applied to the selected data. Findings - Characteristics of publication outputs suggested that ontology research has entered a stage of rapid growth, along with increased participation and collaboration. The authors identified the leading authors, institutions, nations, and articles in ontology research. Authors were mainly from North America, Europe, and East Asia. The USA took the lead, while China grew fastest. Four major categories of frequently used keywords were identified: applications in Semantic Web, applications in bioinformatics, philosophy theories, and common supporting technology. Semantic Web research played a core role, and gene ontology study was well-developed. The study focus of ontology has shifted from philosophy to information science. Originality/value - This is the first study to quantify global research patterns and trends in ontology, which might provide a potential guide for future research. The new index provides an alternative way to evaluate the multidisciplinary influence of researchers.
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
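    The tf and idf constants recurring in every tree are consistent with the classic Lucene definitions: tf = sqrt(term frequency) and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch, assuming exactly those formulas, reproduces the printed values:

      import math

      MAX_DOCS = 44218  # as printed in every idf(...) line above

      def idf(doc_freq):
          # Classic Lucene inverse document frequency.
          return 1 + math.log(MAX_DOCS / (doc_freq + 1))

      def tf(freq):
          # Classic Lucene term-frequency factor.
          return math.sqrt(freq)

      print(idf(24))    # ~8.478011   (_text_:3a)
      print(idf(1258))  # ~4.558814   (_text_:authors)
      print(idf(5010))  # ~3.1774964  (_text_:j)
      print(idf(3622))  # ~3.5018296  (_text_:22)
      print(tf(6.0))    # ~2.4494898  (freq=6 in result 3)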
  4. Yuan, X. (J.); Belkin, N.J.: Applying an information-seeking dialogue model in an interactive information retrieval system (2014) 0.09
    0.09077885 = sum of:
      0.02605156 = product of:
        0.10420624 = sum of:
          0.10420624 = weight(_text_:authors in 4544) [ClassicSimilarity], result of:
            0.10420624 = score(doc=4544,freq=6.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.43620193 = fieldWeight in 4544, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4544)
        0.25 = coord(1/4)
      0.06472729 = sum of:
        0.029228024 = weight(_text_:j in 4544) [ClassicSimilarity], result of:
          0.029228024 = score(doc=4544,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.17553353 = fieldWeight in 4544, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4544)
        0.035499264 = weight(_text_:22 in 4544) [ClassicSimilarity], result of:
          0.035499264 = score(doc=4544,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.19345059 = fieldWeight in 4544, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4544)
    
    Abstract
    Purpose - People often engage in different information-seeking strategies (ISSs) within a single information-seeking episode. A critical concern for the design of information retrieval (IR) systems is how to provide support for these different behaviors in a manner that searchers can easily understand, navigate and use as they move from one ISS to another. The purpose of this paper is to describe a dialogue structure that was implemented in an experimental IR system in order to address this concern. Design/methodology/approach - The authors conducted a user-centered experiment to evaluate the IR systems. Participants were asked to search for information on two different task types, with four different topics per task, in both the experimental system and a baseline system emulating state-of-the-art IR systems. The authors report here the results related explicitly to the use of the experimental system's dialogue structure. Findings - For one of the task types, most participants followed the search steps as predicted in the dialogue structures, and those who did so completed the task in fewer moves. For the other task type, the predicted order of moves was often not followed, but participants again used fewer moves when they did follow the predicted order. Results demonstrate that the dialogue structures the authors designed indeed support effective human information behavior patterns in a variety of ways, and that searchers can effectively use a system which changes to support different ISSs. Originality/value - This study shows that it is both possible and beneficial to design an IR system which can support multiple ISSs, and that such a system can be understood and used successfully.
    Date
    6. 4.2015 19:22:59
  5. Grudin, J.: Human-computer interaction (2011) 0.09
    0.0906182 = product of:
      0.1812364 = sum of:
        0.1812364 = sum of:
          0.081838466 = weight(_text_:j in 1601) [ClassicSimilarity], result of:
            0.081838466 = score(doc=1601,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.4914939 = fieldWeight in 1601, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.109375 = fieldNorm(doc=1601)
          0.099397935 = weight(_text_:22 in 1601) [ClassicSimilarity], result of:
            0.099397935 = score(doc=1601,freq=2.0), product of:
              0.1835056 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052402776 = queryNorm
              0.5416616 = fieldWeight in 1601, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=1601)
      0.5 = coord(1/2)
    
    Date
    27.12.2014 18:54:22
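    Result 5 shows the coordination factor at the top level: the record matches only one of the two top-level clause groups, so its sum is halved by coord(1/2). Its comparatively large fieldNorm (0.109375) also illustrates that short records are rewarded (this entry, unlike most others, shows no abstract). A minimal sketch with the numbers from its tree:

      # Clause scores copied from the explain tree of result 5 (doc 1601).
      j_score = 0.081838466
      w22_score = 0.099397935

      doc_score = (j_score + w22_score) * 0.5  # coord(1/2): 1 of 2 clause groups matched
      print(round(doc_score, 7))               # 0.0906182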
  6. Pal, S.; Mitra, M.; Kamps, J.: Evaluation effort, reliability and reusability in XML retrieval (2011) 0.09
    0.0859983 = sum of:
      0.021271009 = product of:
        0.085084036 = sum of:
          0.085084036 = weight(_text_:authors in 4197) [ClassicSimilarity], result of:
            0.085084036 = score(doc=4197,freq=4.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.35615736 = fieldWeight in 4197, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4197)
        0.25 = coord(1/4)
      0.06472729 = sum of:
        0.029228024 = weight(_text_:j in 4197) [ClassicSimilarity], result of:
          0.029228024 = score(doc=4197,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.17553353 = fieldWeight in 4197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4197)
        0.035499264 = weight(_text_:22 in 4197) [ClassicSimilarity], result of:
          0.035499264 = score(doc=4197,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.19345059 = fieldWeight in 4197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4197)
    
    Abstract
    The Initiative for the Evaluation of XML retrieval (INEX) provides a TREC-like platform for evaluating content-oriented XML retrieval systems. Since 2007, INEX has been using a set of precision-recall based metrics for its ad hoc tasks. The authors investigate the reliability and robustness of these focused retrieval measures, and of the INEX pooling method. They explore four specific questions: How reliable are the metrics when assessments are incomplete, or when query sets are small? What is the minimum pool/query-set size that can be used to reliably evaluate systems? Can the INEX collections be used to fairly evaluate "new" systems that did not participate in the pooling process? And, for a fixed amount of assessment effort, would this effort be better spent in thoroughly judging a few queries, or in judging many queries relatively superficially? The authors' findings validate properties of precision-recall-based metrics observed in document retrieval settings. Early precision measures are found to be more error-prone and less stable under incomplete judgments and small topic-set sizes. They also find that system rankings remain largely unaffected even when assessment effort is substantially (but systematically) reduced, and confirm that the INEX collections remain usable when evaluating nonparticipating systems. Finally, they observe that for a fixed amount of effort, judging shallow pools for many queries is better than judging deep pools for a smaller set of queries. However, when judging only a random sample of a pool, it is better to completely judge fewer topics than to partially judge many topics. This result confirms the effectiveness of pooling methods.
    Date
    22. 1.2011 14:20:56
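    The constant 0.052402776 that appears in every tree is the queryNorm. Under ClassicSimilarity it is 1 / sqrt(sum of the squared idf-based weights of all query terms), so it rescales every document by the same factor and has no effect on the ranking. The complete term list of the query behind this result page is not shown here, so the sketch below only illustrates the formula with the four idf values visible in the dump rather than reproducing the constant:

      import math

      def query_norm(term_idfs):
          # ClassicSimilarity: 1 / sqrt(sum of squared query-term weights), boosts ignored.
          return 1.0 / math.sqrt(sum(w * w for w in term_idfs))

      # Illustrative only -- these four terms are visible above, but the full query
      # evidently contains more clauses, so this does not equal 0.052402776.
      print(query_norm([8.478011, 4.558814, 3.5018296, 3.1774964]))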
  7. Torres-Salinas, D.; Gorraiz, J.; Robinson-Garcia, N.: The insoluble problems of books : what does Altmetric.com have to offer? (2018) 0.08
    0.08125581 = sum of:
      0.029473973 = product of:
        0.117895894 = sum of:
          0.117895894 = weight(_text_:authors in 4633) [ClassicSimilarity], result of:
            0.117895894 = score(doc=4633,freq=12.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.49350607 = fieldWeight in 4633, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=4633)
        0.25 = coord(1/4)
      0.051781833 = sum of:
        0.02338242 = weight(_text_:j in 4633) [ClassicSimilarity], result of:
          0.02338242 = score(doc=4633,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.14042683 = fieldWeight in 4633, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.03125 = fieldNorm(doc=4633)
        0.028399412 = weight(_text_:22 in 4633) [ClassicSimilarity], result of:
          0.028399412 = score(doc=4633,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.15476047 = fieldWeight in 4633, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=4633)
    
    Abstract
    Purpose The purpose of this paper is to analyze the capabilities, functionalities and appropriateness of Altmetric.com as a data source for the bibliometric analysis of books in comparison to PlumX. Design/methodology/approach The authors perform an exploratory analysis of the metrics that the Altmetric Explorer for Institutions platform offers for books. The authors use two distinct data sets of books. On the one hand, the authors analyze the Book Collection included in Altmetric.com. On the other hand, the authors use Clarivate's Master Book List to analyze Altmetric.com's capabilities to download and merge data with external databases. Finally, the authors compare the findings with those obtained in a previous study performed in PlumX. Findings Altmetric.com systematically tracks a set of data sources, linked by DOI identifiers, to retrieve metadata for books, with Google Books as its main provider. It also retrieves information from commercial publishers and from some Open Access initiatives, including those led by university libraries, such as Harvard Library. The authors find issues with linkages between records and mentions, as well as ISBN discrepancies. Furthermore, they find that automatic bots greatly affect Wikipedia mentions of books. The comparison with PlumX suggests that neither of these tools provides a complete picture of the social attention generated by books and that they are complementary rather than comparable tools. Practical implications This study targets different audiences that can benefit from the findings. First, bibliometricians and researchers who seek alternative sources for bibliometric analyses of books, with a special focus on the Social Sciences and Humanities. Second, librarians and research managers, who are the main clients to which these tools are directed. Third, Altmetric.com itself, as well as other altmetric providers, who might get a better understanding of the limitations users encounter and improve this promising tool. Originality/value This is the first study to analyze Altmetric.com's functionalities and capabilities for providing metric data for books and to compare results from this platform with those obtained via PlumX.
    Date
    20. 1.2015 18:30:22
  8. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.08
    0.07995894 = sum of:
      0.062422123 = product of:
        0.24968849 = sum of:
          0.24968849 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
            0.24968849 = score(doc=400,freq=2.0), product of:
              0.44427133 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052402776 = queryNorm
              0.56201804 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.25 = coord(1/4)
      0.017536815 = product of:
        0.03507363 = sum of:
          0.03507363 = weight(_text_:j in 400) [ClassicSimilarity], result of:
            0.03507363 = score(doc=400,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.21064025 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.5 = coord(1/2)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
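    The remaining factor, fieldNorm, rewards short fields: under ClassicSimilarity it is the field boost divided by the square root of the number of terms in the field, stored in the index as a single lossy byte, which is why only a small set of values (0.03125, 0.0390625, 0.046875, ..., 0.109375) recurs throughout these trees. A minimal sketch, assuming the default boost of 1; the term counts are illustrative, not taken from the index:

      import math

      def field_norm(num_terms, boost=1.0):
          # ClassicSimilarity length normalization before lossy byte encoding.
          return boost / math.sqrt(num_terms)

      print(field_norm(1024))  # 0.03125 -- a long field, e.g. a record with a full abstract
      print(field_norm(164))   # ~0.078  -- a much shorter record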
  9. Savoy, J.: Estimating the probability of an authorship attribution (2016) 0.08
    0.079768166 = sum of:
      0.0150408745 = product of:
        0.060163498 = sum of:
          0.060163498 = weight(_text_:authors in 2937) [ClassicSimilarity], result of:
            0.060163498 = score(doc=2937,freq=2.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.25184128 = fieldWeight in 2937, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2937)
        0.25 = coord(1/4)
      0.06472729 = sum of:
        0.029228024 = weight(_text_:j in 2937) [ClassicSimilarity], result of:
          0.029228024 = score(doc=2937,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.17553353 = fieldWeight in 2937, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2937)
        0.035499264 = weight(_text_:22 in 2937) [ClassicSimilarity], result of:
          0.035499264 = score(doc=2937,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.19345059 = fieldWeight in 2937, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2937)
    
    Abstract
    In authorship attribution, various distance-based metrics have been proposed to determine the most probable author of a disputed text. In this paradigm, a distance is computed between each author profile and the query text. These values are then employed only to rank the possible authors. In this article, we analyze their distribution and show that we can model it as a mixture of 2 Beta distributions. Based on this finding, we demonstrate how we can derive a more accurate probability that the closest author is, in fact, the real author. To evaluate this approach, we have chosen 4 authorship attribution methods (Burrows' Delta, Kullback-Leibler divergence, Labbé's intertextual distance, and the naïve Bayes). As the first test collection, we have downloaded 224 State of the Union addresses (from 1790 to 2014) delivered by 41 U.S. presidents. The second test collection is formed by the Federalist Papers. The evaluations indicate that the accuracy rate of some authorship decisions can be improved. The suggested method can signal that the proposed assignment should be interpreted as possible, without strong certainty. Being able to quantify the certainty associated with an authorship decision can be a useful component when important decisions must be taken.
    Date
    7. 5.2016 21:22:27
  10. Engels, T.C.E.; Istenic Starcic, A.; Kulczycki, E.; Pölönen, J.; Sivertsen, G.: Are book publications disappearing from scholarly communication in the social sciences and humanities? (2018) 0.08
    0.07584723 = sum of:
      0.024065398 = product of:
        0.09626159 = sum of:
          0.09626159 = weight(_text_:authors in 4631) [ClassicSimilarity], result of:
            0.09626159 = score(doc=4631,freq=8.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.40294603 = fieldWeight in 4631, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=4631)
        0.25 = coord(1/4)
      0.051781833 = sum of:
        0.02338242 = weight(_text_:j in 4631) [ClassicSimilarity], result of:
          0.02338242 = score(doc=4631,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.14042683 = fieldWeight in 4631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.03125 = fieldNorm(doc=4631)
        0.028399412 = weight(_text_:22 in 4631) [ClassicSimilarity], result of:
          0.028399412 = score(doc=4631,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.15476047 = fieldWeight in 4631, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=4631)
    
    Abstract
    Purpose The purpose of this paper is to analyze the evolution of the shares of scholarly book publications in the social sciences and humanities (SSH) in five European countries, i.e. Flanders (Belgium), Finland, Norway, Poland and Slovenia. In addition to aggregate results for the whole of the social sciences and the humanities, the authors focus on two well-established fields, namely, economics & business and history. Design/methodology/approach Comprehensive coverage databases of SSH scholarly output have been set up in Flanders (VABB-SHW), Finland (VIRTA), Norway (NSI), Poland (PBN) and Slovenia (COBISS). These systems make it possible to trace the shares of monographs and book chapters among the total volume of scholarly publications in each of these countries. Findings As expected, the shares of scholarly monographs and book chapters in the humanities and in the social sciences differ considerably between fields of science and between the five countries studied. In economics & business and in history, the results show similar field-based variations as well as country variations. Most year-to-year and overall variation is rather limited. The data presented illustrate that book publishing is not disappearing from the SSH. Research limitations/implications The results presented in this paper illustrate that the Polish scholarly evaluation system has influenced scholarly publication patterns considerably, while in the other countries the variations are manifested only slightly. The authors conclude that generalizations like "performance-based research funding systems (PRFS) are bad for book publishing" are flawed. Research evaluation systems need to take book publishing fully into account because of the crucial epistemic and social roles it serves in the SSH. Originality/value The authors present data on monographs and book chapters from five comprehensive coverage databases in Europe and analyze the data in view of the debates regarding the perceived detrimental effects of research evaluation systems on scholarly book publishing. The authors show that there is little reason to suspect a dramatic decline of scholarly book publishing in the SSH.
    Date
    20. 1.2015 18:30:22
  11. Semantic keyword-based search on structured data sources : First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers (2016) 0.08
    0.075577945 = sum of:
      0.012032699 = product of:
        0.048130795 = sum of:
          0.048130795 = weight(_text_:authors in 2753) [ClassicSimilarity], result of:
            0.048130795 = score(doc=2753,freq=2.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.20147301 = fieldWeight in 2753, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=2753)
        0.25 = coord(1/4)
      0.06354525 = sum of:
        0.02338242 = weight(_text_:j in 2753) [ClassicSimilarity], result of:
          0.02338242 = score(doc=2753,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.14042683 = fieldWeight in 2753, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.03125 = fieldNorm(doc=2753)
        0.04016283 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
          0.04016283 = score(doc=2753,freq=4.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.21886435 = fieldWeight in 2753, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2753)
    
    Abstract
    This book constitutes the thoroughly refereed post-conference proceedings of the First COST Action IC1302 International KEYSTONE Conference on semantic Keyword-based Search on Structured Data Sources, IKC 2015, held in Coimbra, Portugal, in September 2015. The 13 revised full papers, 3 revised short papers, and 2 invited papers were carefully reviewed and selected from 22 initial submissions. The paper topics cover techniques for keyword search, semantic data management, social Web and social media, information retrieval, benchmarking for search on big data.
    Content
    Contents: Professional Collaborative Information Seeking: On Traceability and Creative Sensemaking / Nürnberger, Andreas (et al.) - Recommending Web Pages Using Item-Based Collaborative Filtering Approaches / Cadegnani, Sara (et al.) - Processing Keyword Queries Under Access Limitations / Calì, Andrea (et al.) - Balanced Large Scale Knowledge Matching Using LSH Forest / Cochez, Michael (et al.) - Improving css-KNN Classification Performance by Shifts in Training Data / Draszawka, Karol (et al.) - Classification Using Various Machine Learning Methods and Combinations of Key-Phrases and Visual Features / HaCohen-Kerner, Yaakov (et al.) - Mining Workflow Repositories for Improving Fragments Reuse / Harmassi, Mariem (et al.) - AgileDBLP: A Search-Based Mobile Application for Structured Digital Libraries / Ifrim, Claudia (et al.) - Support of Part-Whole Relations in Query Answering / Kozikowski, Piotr (et al.) - Key-Phrases as Means to Estimate Birth and Death Years of Jewish Text Authors / Mughaz, Dror (et al.) - Visualization of Uncertainty in Tag Clouds / Platis, Nikos (et al.) - Multimodal Image Retrieval Based on Keywords and Low-Level Image Features / Pobar, Miran (et al.) - Toward Optimized Multimodal Concept Indexing / Rekabsaz, Navid (et al.) - Semantic URL Analytics to Support Efficient Annotation of Large Scale Web Archives / Souza, Tarcisio (et al.) - Indexing of Textual Databases Based on Lexical Resources: A Case Study for Serbian / Stankovic, Ranka (et al.) - Domain-Specific Modeling: Towards a Food and Drink Gazetteer / Tagarev, Andrey (et al.) - Analysing Entity Context in Multilingual Wikipedia to Support Entity-Centric Retrieval Applications / Zhou, Yiwei (et al.)
    Date
    1. 2.2016 18:25:22
    Editor
    Cardoso, J. et al.
  12. Zhitomirsky-Geffet, M.; Bar-Ilan, J.: Towards maximal unification of semantically diverse ontologies for controversial domains (2014) 0.07
    0.07262308 = sum of:
      0.020841248 = product of:
        0.08336499 = sum of:
          0.08336499 = weight(_text_:authors in 1634) [ClassicSimilarity], result of:
            0.08336499 = score(doc=1634,freq=6.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.34896153 = fieldWeight in 1634, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=1634)
        0.25 = coord(1/4)
      0.051781833 = sum of:
        0.02338242 = weight(_text_:j in 1634) [ClassicSimilarity], result of:
          0.02338242 = score(doc=1634,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.14042683 = fieldWeight in 1634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
        0.028399412 = weight(_text_:22 in 1634) [ClassicSimilarity], result of:
          0.028399412 = score(doc=1634,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.15476047 = fieldWeight in 1634, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=1634)
    
    Abstract
    Purpose - Ontologies are prone to wide semantic variability due to the subjective points of view of their composers. The purpose of this paper is to propose a new approach for maximal unification of diverse ontologies for controversial domains by their relations. Design/methodology/approach - Effective matching or unification of multiple ontologies for a specific domain is crucial for the success of many semantic web applications, such as semantic information retrieval and organization, document tagging, summarization and search. To this end, numerous automatic and semi-automatic techniques were proposed in the past decade that attempt to identify similar entities, mostly classes, in diverse ontologies for similar domains. Clearly, matching individual entities cannot result in full integration of ontologies' semantics without also matching their inter-relations with all other related classes (and instances). However, semantic matching of ontological relations still constitutes a major research challenge. Therefore, in this paper the authors propose a new paradigm for assessment of maximal possible matching and unification of ontological relations. To this end, several unification rules for ontological relations were devised based on ontological reference rules, and lexical and textual entailment. These rules were semi-automatically implemented to extend a given ontology with semantically matching relations from another ontology for a similar domain. Then, the ontologies were unified through these similar pairs of relations. The authors observe that these rules can also be used to reveal contradictory relations in different ontologies. Findings - To assess the feasibility of the approach, two experiments were conducted with different sets of multiple personal ontologies on controversial domains constructed by trained subjects. The results for about 50 distinct ontology pairs demonstrate a good potential of the methodology for increasing inter-ontology agreement. Furthermore, the authors show that the presented methodology can lead to a complete unification of multiple semantically heterogeneous ontologies. Research limitations/implications - This is a conceptual study that presents a new approach for semantic unification of ontologies by a devised set of rules along with the initial experimental evidence of its feasibility and effectiveness. However, this methodology has to be fully automatically implemented and tested on a larger dataset in future research. Practical implications - This result has implications for semantic search, since a richer ontology, comprising multiple aspects and viewpoints of the domain of knowledge, enhances discoverability and improves search results. Originality/value - To the best of the authors' knowledge, this is the first study to examine and assess the maximal level of semantic relation-based ontology unification.
    Date
    20. 1.2015 18:30:22
  13. Willis, C.; Greenberg, J.; White, H.: Analysis and synthesis of metadata goals for scientific data (2012) 0.07
    0.06879864 = sum of:
      0.017016808 = product of:
        0.06806723 = sum of:
          0.06806723 = weight(_text_:authors in 367) [ClassicSimilarity], result of:
            0.06806723 = score(doc=367,freq=4.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.28492588 = fieldWeight in 367, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=367)
        0.25 = coord(1/4)
      0.051781833 = sum of:
        0.02338242 = weight(_text_:j in 367) [ClassicSimilarity], result of:
          0.02338242 = score(doc=367,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.14042683 = fieldWeight in 367, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.03125 = fieldNorm(doc=367)
        0.028399412 = weight(_text_:22 in 367) [ClassicSimilarity], result of:
          0.028399412 = score(doc=367,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.15476047 = fieldWeight in 367, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=367)
    
    Abstract
    The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg's metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (>0.6), a Fisher's exact test for nonparametric data was used to determine significance (p < .05). Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have "scheme harmonization" (compatibility and interoperability with related schemes) as an objective; schemes with the objective "abstraction" (a conceptual model exists separate from the technical implementation) also have the objective "sufficiency" (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective "data publication" do not have the objective "element refinement." The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental metadata goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes.
  14. Costas, R.; Perianes-Rodríguez, A.; Ruiz-Castillo, J.: On the quest for currencies of science : field "exchange rates" for citations and Mendeley readership (2017) 0.07
    0.06879864 = sum of:
      0.017016808 = product of:
        0.06806723 = sum of:
          0.06806723 = weight(_text_:authors in 4051) [ClassicSimilarity], result of:
            0.06806723 = score(doc=4051,freq=4.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.28492588 = fieldWeight in 4051, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=4051)
        0.25 = coord(1/4)
      0.051781833 = sum of:
        0.02338242 = weight(_text_:j in 4051) [ClassicSimilarity], result of:
          0.02338242 = score(doc=4051,freq=2.0), product of:
            0.16650963 = queryWeight, product of:
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.052402776 = queryNorm
            0.14042683 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1774964 = idf(docFreq=5010, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
        0.028399412 = weight(_text_:22 in 4051) [ClassicSimilarity], result of:
          0.028399412 = score(doc=4051,freq=2.0), product of:
            0.1835056 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052402776 = queryNorm
            0.15476047 = fieldWeight in 4051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=4051)
    
    Abstract
    Purpose The introduction of "altmetrics" as new tools to analyze scientific impact within the reward system of science has challenged the hegemony of citations as the predominant source for measuring scientific impact. Mendeley readership has been identified as one of the most important altmetric sources, with several features that are similar to citations. The purpose of this paper is to perform an in-depth analysis of the differences and similarities between the distributions of Mendeley readership and citations across fields. Design/methodology/approach The authors analyze two issues by using in each case a common analytical framework for both metrics: the shape of the distributions of readership and citations, and the field normalization problem generated by differences in citation and readership practices across fields. For the first issue the authors use the characteristic scores and scales method, and for the second the measurement framework introduced in Crespo et al. (2013). Findings There are three main results. First, the citations and Mendeley readership distributions exhibit a strikingly similar degree of skewness in all fields. Second, the results on "exchange rates (ERs)" for Mendeley readership empirically support the possibility of comparing readership counts across fields, as well as the field normalization of readership distributions using ERs as normalization factors. Third, field normalization using field mean readerships as normalization factors leads to comparably good results. Originality/value These findings open up challenging new questions, particularly regarding the possibility of obtaining conflicting results from field normalized citation and Mendeley readership indicators; this suggests the need to better determine the role of the two metrics in capturing scientific recognition.
    Date
    20. 1.2015 18:30:22
  15. Shala, E.: Die Autonomie des Menschen und der Maschine : gegenwärtige Definitionen von Autonomie zwischen philosophischem Hintergrund und technologischer Umsetzbarkeit (2014) 0.07
    0.06663245 = sum of:
      0.052018438 = product of:
        0.20807375 = sum of:
          0.20807375 = weight(_text_:3a in 4388) [ClassicSimilarity], result of:
            0.20807375 = score(doc=4388,freq=2.0), product of:
              0.44427133 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052402776 = queryNorm
              0.46834838 = fieldWeight in 4388, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4388)
        0.25 = coord(1/4)
      0.014614012 = product of:
        0.029228024 = sum of:
          0.029228024 = weight(_text_:j in 4388) [ClassicSimilarity], result of:
            0.029228024 = score(doc=4388,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.17553353 = fieldWeight in 4388, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4388)
        0.5 = coord(1/2)
    
    Footnote
    Cf.: https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwizweHljdbcAhVS16QKHXcFD9QQFjABegQICRAB&url=https%3A%2F%2Fwww.researchgate.net%2Fpublication%2F271200105_Die_Autonomie_des_Menschen_und_der_Maschine_-_gegenwartige_Definitionen_von_Autonomie_zwischen_philosophischem_Hintergrund_und_technologischer_Umsetzbarkeit_Redigierte_Version_der_Magisterarbeit_Karls&usg=AOvVaw06orrdJmFF2xbCCp_hL26q.
  16. Kozak, M.; Hartley, J.: Publication fees for open access journals : different disciplines-different methods (2013) 0.07
    0.065064915 = sum of:
      0.041682497 = product of:
        0.16672999 = sum of:
          0.16672999 = weight(_text_:authors in 1140) [ClassicSimilarity], result of:
            0.16672999 = score(doc=1140,freq=6.0), product of:
              0.2388945 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052402776 = queryNorm
              0.69792306 = fieldWeight in 1140, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0625 = fieldNorm(doc=1140)
        0.25 = coord(1/4)
      0.02338242 = product of:
        0.04676484 = sum of:
          0.04676484 = weight(_text_:j in 1140) [ClassicSimilarity], result of:
            0.04676484 = score(doc=1140,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.28085366 = fieldWeight in 1140, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.0625 = fieldNorm(doc=1140)
        0.5 = coord(1/2)
    
    Abstract
    Many authors appear to think that most open access (OA) journals charge authors for their publications. This brief communication examines the basis for such beliefs and finds it wanting. Indeed, in this study of over 9,000 OA journals included in the Directory of Open Access Journals, only 28% charged authors for publishing in their journals. This figure, however, was highest in various disciplines in medicine (47%) and the sciences (43%) and lowest in the humanities (4%) and the arts (0%).
  17. Calì, A. et al.: Processing keyword queries under access limitations (2016) 0.06
    0.06472729 = product of:
      0.12945458 = sum of:
        0.12945458 = sum of:
          0.05845605 = weight(_text_:j in 4233) [ClassicSimilarity], result of:
            0.05845605 = score(doc=4233,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.35106707 = fieldWeight in 4233, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.078125 = fieldNorm(doc=4233)
          0.07099853 = weight(_text_:22 in 4233) [ClassicSimilarity], result of:
            0.07099853 = score(doc=4233,freq=2.0), product of:
              0.1835056 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052402776 = queryNorm
              0.38690117 = fieldWeight in 4233, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4233)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
    Source
    Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al
  18. Ermert, A.: Terminologie - Bedeutung, Erarbeitung, professionelle Strukturierung und Management : Der 13. Deutsche Terminologietag vom 19. bis 21. April 2012 in Heidelberg (2012) 0.06
    0.06472729 = product of:
      0.12945458 = sum of:
        0.12945458 = sum of:
          0.05845605 = weight(_text_:j in 327) [ClassicSimilarity], result of:
            0.05845605 = score(doc=327,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.35106707 = fieldWeight in 327, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.078125 = fieldNorm(doc=327)
          0.07099853 = weight(_text_:22 in 327) [ClassicSimilarity], result of:
            0.07099853 = score(doc=327,freq=2.0), product of:
              0.1835056 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052402776 = queryNorm
              0.38690117 = fieldWeight in 327, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=327)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://www.degruyter.com/view/j/iwp.2012.63.issue-3/iwp-2012-0041/iwp-2012-0041.xml?format=INT.
    Date
    22. 7.2012 19:37:29
  19. Geiß, D.: Aus der Praxis der Patentinformation : Die Entwicklung der elektronischen Medien und Dienstleistungen bei den Patentbehörden und Internetprovidern im Jahr 2012 (2013) 0.06
    0.06472729 = product of:
      0.12945458 = sum of:
        0.12945458 = sum of:
          0.05845605 = weight(_text_:j in 658) [ClassicSimilarity], result of:
            0.05845605 = score(doc=658,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.35106707 = fieldWeight in 658, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.078125 = fieldNorm(doc=658)
          0.07099853 = weight(_text_:22 in 658) [ClassicSimilarity], result of:
            0.07099853 = score(doc=658,freq=2.0), product of:
              0.1835056 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052402776 = queryNorm
              0.38690117 = fieldWeight in 658, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=658)
      0.5 = coord(1/2)
    
    Content
    Cf.: http://www.degruyter.com/view/j/iwp.2013.64.issue-1/iwp-2013-0003/iwp-2013-0003.xml?format=INT.
    Date
    22. 3.2013 16:08:18
  20. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.06
    0.06472729 = product of:
      0.12945458 = sum of:
        0.12945458 = sum of:
          0.05845605 = weight(_text_:j in 2748) [ClassicSimilarity], result of:
            0.05845605 = score(doc=2748,freq=2.0), product of:
              0.16650963 = queryWeight, product of:
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.052402776 = queryNorm
              0.35106707 = fieldWeight in 2748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.1774964 = idf(docFreq=5010, maxDocs=44218)
                0.078125 = fieldNorm(doc=2748)
          0.07099853 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
            0.07099853 = score(doc=2748,freq=2.0), product of:
              0.1835056 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052402776 = queryNorm
              0.38690117 = fieldWeight in 2748, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=2748)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
    Source
    Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al

Languages

  • e 1362
  • d 400
  • a 2
  • f 2
  • hu 1

Types

  • a 1581
  • el 162
  • m 120
  • s 40
  • x 18
  • r 7
  • b 5
  • i 2
  • ag 1
  • p 1
  • z 1
