Search (3305 results, page 1 of 166)

  • Active filter: year_i:[2010 TO 2020} (Solr range syntax: 2010 inclusive, 2020 exclusive)
  1. De Santis, R.; Fernandez de Souza, R.: Towards a synthetic approach for classifying popular songs (2014) 0.15
    0.14926156 = product of:
      0.22389233 = sum of:
        0.026404712 = weight(_text_:on in 1438) [ClassicSimilarity], result of:
          0.026404712 = score(doc=1438,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24056101 = fieldWeight in 1438, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1438)
        0.19748762 = sum of:
          0.150157 = weight(_text_:demand in 1438) [ClassicSimilarity], result of:
            0.150157 = score(doc=1438,freq=2.0), product of:
              0.31127608 = queryWeight, product of:
                6.237302 = idf(docFreq=234, maxDocs=44218)
                0.04990557 = queryNorm
              0.48239172 = fieldWeight in 1438, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.237302 = idf(docFreq=234, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1438)
          0.04733061 = weight(_text_:22 in 1438) [ClassicSimilarity], result of:
            0.04733061 = score(doc=1438,freq=2.0), product of:
              0.1747608 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04990557 = queryNorm
              0.2708308 = fieldWeight in 1438, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1438)
      0.6666667 = coord(2/3)
    
    Abstract
    This paper discusses the classification of popular songs by studying how six online systems serving different purposes (a service broadcaster, a library, a guide, an encyclopedia, a radio station and an on-demand music seller) describe and retrieve this kind of object. It investigates some aspects of faceted classification and proposes a reflection on the path towards a synthetic approach, considering the concept of complexity and its implications. These discussions are based on the "That Music" prototype - an ontology-based system built specifically for popular songs.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
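    The score breakdowns shown with each result are Lucene "explain" trees for the classic TF-IDF similarity. As a rough sketch in Python (the queryNorm and fieldNorm values are read off the tree rather than re-derived, and the clause structure is simplified), the numbers for result 1 can be reproduced like this:

        import math

        def tf(freq):
            # ClassicSimilarity term frequency: square root of the raw count
            return math.sqrt(freq)

        def idf(doc_freq, max_docs):
            # ClassicSimilarity inverse document frequency
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
            w = idf(doc_freq, max_docs)
            query_weight = w * query_norm             # e.g. ~0.109763 for "on"
            field_weight = tf(freq) * w * field_norm  # e.g. ~0.240561 for "on"
            return query_weight * field_weight

        QUERY_NORM = 0.04990557  # from the tree; 1/sqrt of the summed squared query weights
        MAX_DOCS = 44218
        FIELD_NORM = 0.0546875   # length norm of doc 1438, from the tree

        s_on     = term_score(4, 13325, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.0264047
        s_demand = term_score(2,   234, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.1501570
        s_22     = term_score(2,  3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.0473306

        coord = 2.0 / 3.0  # 2 of 3 top-level query clauses matched
        print((s_on + s_demand + s_22) * coord)      # ~0.14926156, the displayed score

    The same formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) reproduce every weight(_text_:...) line in this section.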
  2. Xu, Y.; Li, G.; Mou, L.; Lu, Y.: Learning non-taxonomic relations on demand for ontology extension (2014) 0.11
    0.10631579 = product of:
      0.15947369 = sum of:
        0.048011016 = weight(_text_:on in 2961) [ClassicSimilarity], result of:
          0.048011016 = score(doc=2961,freq=18.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.43740597 = fieldWeight in 2961, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2961)
        0.111462675 = product of:
          0.22292535 = sum of:
            0.22292535 = weight(_text_:demand in 2961) [ClassicSimilarity], result of:
              0.22292535 = score(doc=2961,freq=6.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.716166 = fieldWeight in 2961, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2961)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Learning non-taxonomic relations has become an important research topic in ontology extension. Most existing learning approaches are based on expert-crafted corpora. These approaches are normally domain-specific, and corpus acquisition is laborious and costly. Moreover, being based on static corpora, they cannot meet the personalized needs of semantic relation discovery across diverse taxonomies. In this paper, we propose a novel approach for learning non-taxonomic relations on demand. For any supplied taxonomy, it can focus on a segment of the taxonomy and dynamically collect information about the taxonomic concepts, using Wikipedia as a learning source. Based on the newly generated corpus, non-taxonomic relations are acquired in three steps: a) semantic relatedness detection; b) relation extraction between concepts; and c) relation generalization within a hierarchy. The proposed approach is evaluated on three different predefined taxonomies, and the experimental results show that it is effective in capturing non-taxonomic relations as needed and has good potential for ontology extension on demand.
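    As a purely schematic sketch of the three steps the abstract names (the corpus, concepts and heuristics below are illustrative placeholders, not the authors' method):

        # (a) relatedness detection, (b) relation extraction, (c) generalization
        corpus = {
            "jaguar": "the jaguar is a large cat that preys on capybara",
            "lion":   "the lion is a large cat that preys on antelope",
        }
        taxonomy = {"cat": ["jaguar", "lion"]}  # parent -> child concepts

        def related(c1, c2, threshold=0.3):
            # (a) crude relatedness: Jaccard overlap of the concepts' texts
            w1, w2 = set(corpus[c1].split()), set(corpus[c2].split())
            return len(w1 & w2) / len(w1 | w2) >= threshold

        def extract_relations(concept):
            # (b) toy pattern-based extraction: "X preys on Y" -> (X, preys_on, Y)
            words = corpus[concept].split()
            return [(concept, "preys_on", words[i + 2])
                    for i, w in enumerate(words[:-2])
                    if w == "preys" and words[i + 1] == "on"]

        def generalize(taxonomy, relations):
            # (c) lift a predicate to the parent when every child carries it
            lifted = []
            for parent, children in taxonomy.items():
                shared = set.intersection(
                    *({pred for (_, pred, _) in relations[c]} for c in children))
                lifted.extend((parent, pred) for pred in shared)
            return lifted

        relations = {c: extract_relations(c) for c in corpus}
        print(related("jaguar", "lion"))        # True
        print(generalize(taxonomy, relations))  # [('cat', 'preys_on')]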
  3. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.11
    0.10585217 = product of:
      0.15877825 = sum of:
        0.13210547 = product of:
          0.39631638 = sum of:
            0.39631638 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.39631638 = score(doc=1826,freq=2.0), product of:
                0.42309996 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04990557 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.026672786 = weight(_text_:on in 1826) [ClassicSimilarity], result of:
          0.026672786 = score(doc=1826,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24300331 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.6666667 = coord(2/3)
    
    Content
    Presentation given at the European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2-4, 2014 (LIS workshop).
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  4. Gstrein, S.: VuFind: Ebooks on demand Suchmaschine (2011) 0.09
    0.09278791 = product of:
      0.13918185 = sum of:
        0.027719175 = weight(_text_:on in 192) [ClassicSimilarity], result of:
          0.027719175 = score(doc=192,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.25253648 = fieldWeight in 192, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=192)
        0.111462675 = product of:
          0.22292535 = sum of:
            0.22292535 = weight(_text_:demand in 192) [ClassicSimilarity], result of:
              0.22292535 = score(doc=192,freq=6.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.716166 = fieldWeight in 192, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=192)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    eBooks on Demand (EOD) is a Europe-wide network of more than 30 libraries in 12 European countries whose goal is to digitize public-domain books on request and make them available. Any book marked with the so-called EOD button in a library's online catalogue can be ordered for digitization. The ordered book is then scanned at high resolution within a few days and, after payment, delivered as a PDF with embedded OCR text. Until now, finding a particular book meant searching the catalogue of each participating library separately. Since the end of 2010, a cross-library search engine built with the open-source software VuFind has been available at http://search.books2ebooks.eu; it currently makes 1.8 million records from 12 libraries searchable. Users of this cross-library search engine thus gain quick and straightforward access both to works that have already been digitized and to books that can be digitized on request.
    Object
    eBooks on Demand
  5. Brunetti, J.M.; García, R.: User-centered design and evaluation of overview components for semantic data exploration (2014) 0.09
    0.08529231 = product of:
      0.12793846 = sum of:
        0.015088406 = weight(_text_:on in 1626) [ClassicSimilarity], result of:
          0.015088406 = score(doc=1626,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.13746344 = fieldWeight in 1626, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=1626)
        0.11285006 = sum of:
          0.085804 = weight(_text_:demand in 1626) [ClassicSimilarity], result of:
            0.085804 = score(doc=1626,freq=2.0), product of:
              0.31127608 = queryWeight, product of:
                6.237302 = idf(docFreq=234, maxDocs=44218)
                0.04990557 = queryNorm
              0.2756524 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.237302 = idf(docFreq=234, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
          0.027046064 = weight(_text_:22 in 1626) [ClassicSimilarity], result of:
            0.027046064 = score(doc=1626,freq=2.0), product of:
              0.1747608 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04990557 = queryNorm
              0.15476047 = fieldWeight in 1626, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1626)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - The growing volume of semantic data available on the web results in the need to handle the information overload phenomenon. The potential of this amount of data is enormous, but in most cases it is very difficult for users to visualize, explore and use this data, especially for lay-users without experience with Semantic Web technologies. The paper aims to discuss these issues.
    Design/methodology/approach - The Visual Information-Seeking Mantra "Overview first, zoom and filter, then details-on-demand" proposed by Shneiderman describes how data should be presented in different stages to achieve an effective exploration. The overview is the first user task when dealing with a data set. The objective is that the user is capable of getting an idea about the overall structure of the data set. Different information architecture (IA) components supporting the overview task have been developed, so that they are automatically generated from semantic data, and evaluated with end-users.
    Findings - The chosen IA components are well known to web users, as they are present in most web pages: navigation bars, site maps and site indexes. The authors complement them with Treemaps, a visualization technique for displaying hierarchical data. These components have been developed following an iterative User-Centered Design methodology. Evaluations with end-users have shown that they get easily used to them despite the fact that they are generated automatically from structured data, without requiring knowledge about the underlying semantic technologies, and that the different overview components complement each other as they focus on different information search needs.
    Originality/value - Obtaining overviews of semantic data sets cannot easily be done with current semantic web browsers. Overviews become difficult to achieve with large heterogeneous data sets, which are typical in the Semantic Web, because traditional IA techniques do not easily scale to large data sets. There is little or no support for obtaining overview information quickly and easily at the beginning of the exploration of a new data set. This can be a serious limitation when exploring a data set for the first time, especially for lay-users. The proposal is to reuse and adapt existing IA components to provide this overview to users, and to show that they can be generated automatically from the thesauri and ontologies that structure semantic data while providing a user experience comparable to traditional web sites.
    Date
    20. 1.2015 18:30:22
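    The Treemap overview component mentioned in the abstract above can be sketched in a few lines. This assumes the third-party squarify package (a space-filling treemap layout library; the package and the example class sizes are assumptions for illustration, not anything used by the authors):

        # pip install squarify
        import squarify

        # sizes of the top-level classes of a hypothetical data set
        labels = ["Person", "Place", "Work", "Event"]
        sizes = [480.0, 250.0, 180.0, 90.0]

        # normalize the values to the drawing area, then compute the layout
        normed = squarify.normalize_sizes(sizes, 100, 100)
        rects = squarify.squarify(normed, 0, 0, 100, 100)

        # each rect is a dict with x, y, dx, dy: the overview tiles
        for label, r in zip(labels, rects):
            print(f"{label}: origin=({r['x']:.1f}, {r['y']:.1f}) "
                  f"size=({r['dx']:.1f} x {r['dy']:.1f})")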
  6. Chin, J.Y.; Bhowmick, S.S.; Jatowt, A.: On-demand recent personal tweets summarization on mobile devices (2019) 0.08
    0.08452946 = product of:
      0.12679419 = sum of:
        0.0357853 = weight(_text_:on in 5246) [ClassicSimilarity], result of:
          0.0357853 = score(doc=5246,freq=10.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.32602316 = fieldWeight in 5246, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5246)
        0.09100889 = product of:
          0.18201777 = sum of:
            0.18201777 = weight(_text_:demand in 5246) [ClassicSimilarity], result of:
              0.18201777 = score(doc=5246,freq=4.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.5847471 = fieldWeight in 5246, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5246)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Tweet summarization aims to find a group of representative tweets for a specific set of input tweets or a given topic. In recent times, there have been several research efforts toward devising a variety of techniques to summarize tweets on Twitter. However, these techniques are either not personal (that is, they do not consider only tweets in the timeline of a specific user) or are too expensive to be realized on a mobile device. Given that 80% of active Twitter users access the site on mobile devices, in this article we present a lightweight, personal, on-demand, topic modeling-based tweet summarization engine called TOTEM, designed for such devices. Specifically, TOTEM first preprocesses recent tweets in a user's timeline and exploits Latent Dirichlet Allocation-based topic modeling to assign each preprocessed tweet to a topic. It then generates a ranked list of relevant tweets, a topic label, and a topic summary for each of the topics. Our experimental study with real-world data sets demonstrates the superiority of TOTEM.
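    A minimal sketch of the LDA step the abstract describes, assuming scikit-learn (TOTEM's actual preprocessing, ranking and labelling are not reproduced here, and the sample tweets are invented):

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        tweets = [
            "new paper on tweet summarization accepted",
            "our summarization engine runs on mobile devices",
            "great pasta recipe for the weekend",
            "cooking pasta with fresh basil tonight",
        ]

        vectorizer = CountVectorizer(stop_words="english")
        X = vectorizer.fit_transform(tweets)

        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        doc_topic = lda.fit_transform(X)        # per-tweet topic distribution
        assignment = doc_topic.argmax(axis=1)   # hard topic per tweet

        # a crude topic label: the topic's three highest-weight terms
        terms = vectorizer.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            print(k, [terms[i] for i in weights.argsort()[::-1][:3]])
        print(assignment)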
  7. Brown, D.J.: Access to scientific research : challenges facing communications in STM (2016) 0.08
    0.08234612 = product of:
      0.123519175 = sum of:
        0.010669114 = weight(_text_:on in 3769) [ClassicSimilarity], result of:
          0.010669114 = score(doc=3769,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.097201325 = fieldWeight in 3769, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.03125 = fieldNorm(doc=3769)
        0.11285006 = sum of:
          0.085804 = weight(_text_:demand in 3769) [ClassicSimilarity], result of:
            0.085804 = score(doc=3769,freq=2.0), product of:
              0.31127608 = queryWeight, product of:
                6.237302 = idf(docFreq=234, maxDocs=44218)
                0.04990557 = queryNorm
              0.2756524 = fieldWeight in 3769, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.237302 = idf(docFreq=234, maxDocs=44218)
                0.03125 = fieldNorm(doc=3769)
          0.027046064 = weight(_text_:22 in 3769) [ClassicSimilarity], result of:
            0.027046064 = score(doc=3769,freq=2.0), product of:
              0.1747608 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04990557 = queryNorm
              0.15476047 = fieldWeight in 3769, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3769)
      0.6666667 = coord(2/3)
    
    Abstract
    The debate about access to scientific research raises questions about the current effectiveness of scholarly communication processes. This book explores, from an independent point of view, the current state of the STM publishing market, new publishing technologies and business models, as well as the information habits of researchers, the politics of research funders, and the demand for scientific research as a public good. The book also investigates the democratisation of science, including how the information needs of knowledge workers outside academia can be embraced in the future.
    Content
    Inhalt: Chapter 1. Background -- Chapter 2. Definitions -- Chapter 3. Aims, Objectives, and Methodology -- Chapter 4. Setting the Scene -- Chapter 5. Information Society -- Chapter 6. Drivers for Change -- Chapter 7 A Dysfunctional STM Scene? -- Chapter 8. Comments on the Dysfunctionality of STM Publishing -- Chapter 9. The Main Stakeholders -- Chapter 10. Search and Discovery -- Chapter 11. Impact of Google -- Chapter 12. Psychological Issues -- Chapter 13. Users of Research Output -- Chapter 14. Underlying Sociological Developments -- Chapter 15. Social Media and Social Networking -- Chapter 16. Forms of Article Delivery -- Chapter 17. Future Communication Trends -- Chapter 18. Academic Knowledge Workers -- Chapter 19. Unaffiliated Knowledge Workers -- Chapter 20. The Professions -- Chapter 21. Small and Medium Enterprises -- Chapter 22. Citizen Scientists -- Chapter 23. Learned Societies -- Chapter 24. Business Models -- Chapter 25. Open Access -- Chapter 26. Political Initiatives -- Chapter 27. Summary and Conclusions -- Chapter 28. Research Questions Addressed
  8. Song, L.; Tso, G.; Fu, Y.: Click behavior and link prioritization : multiple demand theory application for web improvement (2019) 0.08
    0.075761005 = product of:
      0.1136415 = sum of:
        0.02263261 = weight(_text_:on in 5322) [ClassicSimilarity], result of:
          0.02263261 = score(doc=5322,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 5322, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5322)
        0.09100889 = product of:
          0.18201777 = sum of:
            0.18201777 = weight(_text_:demand in 5322) [ClassicSimilarity], result of:
              0.18201777 = score(doc=5322,freq=4.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.5847471 = fieldWeight in 5322, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5322)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A common problem encountered in Web improvement is how to arrange the homepage links of a Website. This study analyses Web information search behavior and applies multiple demand theory to propose two models that help a visitor allocate time across multiple links. The process of searching is viewed as a formal choice problem in which the visitor attempts to choose from multiple Web links so as to maximize total utility. The proposed models are calibrated to clickstream data collected from an educational institute over a seven-and-a-half-month period. Based on the best-fitting model, a metric, utility loss, is constructed to measure the performance of each link and arrange the links accordingly. Empirical results show that the proposed metric is highly efficient for prioritizing the links on a homepage, and that the methodology can also be used to study the feasibility of introducing a new function on a Website.
  9. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.07
    0.074117765 = product of:
      0.11117664 = sum of:
        0.02653909 = weight(_text_:on in 3809) [ClassicSimilarity], result of:
          0.02653909 = score(doc=3809,freq=22.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.24178526 = fieldWeight in 3809, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.08463755 = sum of:
          0.064353004 = weight(_text_:demand in 3809) [ClassicSimilarity], result of:
            0.064353004 = score(doc=3809,freq=2.0), product of:
              0.31127608 = queryWeight, product of:
                6.237302 = idf(docFreq=234, maxDocs=44218)
                0.04990557 = queryNorm
              0.2067393 = fieldWeight in 3809, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.237302 = idf(docFreq=234, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3809)
          0.020284547 = weight(_text_:22 in 3809) [ClassicSimilarity], result of:
            0.020284547 = score(doc=3809,freq=2.0), product of:
              0.1747608 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04990557 = queryNorm
              0.116070345 = fieldWeight in 3809, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3809)
      0.6666667 = coord(2/3)
    
    Abstract
    One of the solutions to help scientists filter the most relevant publications and, thus, to stay current on developments in their fields during the transition from "little science" to "big science" was the introduction of citation indexing as a Wellsian "World Brain" (Garfield, 1964) of scientific information: It is too much to expect a research worker to spend an inordinate amount of time searching for the bibliographic descendants of antecedent papers. It would not be excessive to demand that the thorough scholar check all papers that have cited or criticized such papers, if they could be located quickly. The citation index makes this check practicable (Garfield, 1955, p. 108). In retrospect, citation indexing can be perceived as a pre-social web version of crowdsourcing, as it is based on the concept that the community of citing authors outperforms indexers in highlighting cognitive links between papers, particularly on the level of specific ideas and concepts (Garfield, 1983). Over the last 50 years, citation analysis and, more generally, bibliometric methods have developed from information retrieval tools to research evaluation metrics, where they are presumed to make scientific funding more efficient and effective (Moed, 2006). However, the dominance of bibliometric indicators in research evaluation has also led to significant goal displacement (Merton, 1957) and the oversimplification of notions of "research productivity" and "scientific quality", creating adverse effects such as salami publishing, honorary authorships, citation cartels, and misuse of indicators (Binswanger, 2015; Cronin and Sugimoto, 2014; Frey and Osterloh, 2006; Haustein and Larivière, 2015; Weingart, 2005).
    Furthermore, the rise of the web, and subsequently, the social web, has challenged the quasi-monopolistic status of the journal as the main form of scholarly communication and citation indices as the primary assessment mechanisms. Scientific communication is becoming more open, transparent, and diverse: publications are increasingly open access; manuscripts, presentations, code, and data are shared online; research ideas and results are discussed and criticized openly on blogs; and new peer review experiments, with open post-publication assessment by anonymous or non-anonymous referees, are underway. The diversification of scholarly production and assessment, paired with the increasing speed of the communication process, leads to an increased information overload (Bawden and Robinson, 2008), demanding new filters. The concept of altmetrics, short for alternative (to citation) metrics, was created out of an attempt to provide a filter (Priem et al., 2010) and to steer against the oversimplification of the measurement of scientific success solely on the basis of the number of journal articles published and citations received, by considering a wider range of research outputs and metrics (Piwowar, 2013). Although the term altmetrics was introduced in a tweet in 2010 (Priem, 2010), the idea of capturing traces - "polymorphous mentioning" (Cronin et al., 1998, p. 1320) - of scholars and their documents on the web to measure "impact" of science in a broader manner than citations was introduced years before, largely in the context of webometrics (Almind and Ingwersen, 1997; Thelwall et al., 2005):
    There will soon be a critical mass of web-based digital objects and usage statistics on which to model scholars' communication behaviors - publishing, posting, blogging, scanning, reading, downloading, glossing, linking, citing, recommending, acknowledging - and with which to track their scholarly influence and impact, broadly conceived and broadly felt (Cronin, 2005, p. 196). A decade after Cronin's prediction and five years after the coining of altmetrics, the time seems ripe to reflect upon the role of social media in scholarly communication. This Special Issue does so by providing an overview of current research on the indicators and metrics grouped under the umbrella term of altmetrics, on their relationships with traditional indicators of scientific activity, and on the uses that are made of the various social media platforms - on which these indicators are based - by scientists of various disciplines.
    Date
    20. 1.2015 18:30:22
  10. Dextre Clarke, S.G.: ¬The Information Retrieval Thesaurus (2019) 0.07
    0.07134171 = product of:
      0.107012555 = sum of:
        0.016003672 = weight(_text_:on in 5210) [ClassicSimilarity], result of:
          0.016003672 = score(doc=5210,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 5210, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=5210)
        0.09100889 = product of:
          0.18201777 = sum of:
            0.18201777 = weight(_text_:demand in 5210) [ClassicSimilarity], result of:
              0.18201777 = score(doc=5210,freq=4.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.5847471 = fieldWeight in 5210, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5210)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    In the post-war period before computers were readily available, urgent demand for scientific and industrial development stimulated research and development (R&D) that led to the birth of the information retrieval thesaurus. This article traces the early history, speciation and progressive improvement of the thesaurus to reach the state now conveyed by guidelines in international and national standards. Despite doubts about the effectiveness of the thesaurus throughout this period, and notwithstanding the dominance of Google and other search engines in the information retrieval (IR) scene today, the thesaurus still plays a complementary part in the organization of knowledge and information resources. Success today depends on interoperability, and is opening up opportunities in linked data applications. At the same time, the IR demand from workers in the knowledge society drives interest in hybrid forms of knowledge organization system (KOS) that may pool the genes of thesauri with those of ontologies and classification schemes.
  11. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.07
    0.067930594 = product of:
      0.10189588 = sum of:
        0.07926327 = product of:
          0.23778981 = sum of:
            0.23778981 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.23778981 = score(doc=400,freq=2.0), product of:
                0.42309996 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.04990557 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
        0.02263261 = weight(_text_:on in 400) [ClassicSimilarity], result of:
          0.02263261 = score(doc=400,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.20619515 = fieldWeight in 400, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=400)
      0.6666667 = coord(2/3)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values forming a group of child concepts. We call these attributes facets: classification has facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; however, the faceted relations are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e., ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations in a data science corpus, and we propose a hierarchy growth algorithm to infer the parent-child links from the three types of relationships. It resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf.
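    The conflict-resolution rule stated at the end of the abstract above (keep the hierarchy acyclic) can be sketched as follows; the confidence-ordered greedy insertion and the sample links are assumptions for illustration, not the authors' exact algorithm:

        from collections import defaultdict

        def creates_cycle(children, parent, child):
            # would the edge parent->child close a cycle? search downward from child
            stack, seen = [child], set()
            while stack:
                node = stack.pop()
                if node == parent:
                    return True
                if node not in seen:
                    seen.add(node)
                    stack.extend(children[node])
            return False

        def grow_hierarchy(candidate_links):
            # candidate_links: (parent, child, confidence) triples
            children = defaultdict(list)
            for parent, child, conf in sorted(candidate_links, key=lambda t: -t[2]):
                if not creates_cycle(children, parent, child):
                    children[parent].append(child)
            return dict(children)

        links = [("classification", "svm", 0.9),
                 ("classification", "knn", 0.8),
                 ("svm", "classification", 0.4)]   # conflicting back-link
        print(grow_hierarchy(links))  # the low-confidence back-link is rejected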
  12. British Library / FAST/Dewey Review Group: Consultation on subject indexing and classification standards applied by the British Library (2015) 0.06
    0.06424023 = product of:
      0.09636035 = sum of:
        0.032007344 = weight(_text_:on in 2810) [ClassicSimilarity], result of:
          0.032007344 = score(doc=2810,freq=8.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.29160398 = fieldWeight in 2810, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2810)
        0.064353004 = product of:
          0.12870601 = sum of:
            0.12870601 = weight(_text_:demand in 2810) [ClassicSimilarity], result of:
              0.12870601 = score(doc=2810,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.4134786 = fieldWeight in 2810, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2810)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A broad-based review of the subject and classification schemes used on British Library records began in late 2014. The review was undertaken in response to a number of drivers, including: an increasing demand on available resources, due to the rapidly expanding digital publishing arena alongside a continuing steady state in print publication patterns; and increased demands on metadata to meet changing audience expectations.
  13. Müller, V.C.: Pancomputationalism: theory or metaphor? (2014) 0.06
    0.06313416 = product of:
      0.094701245 = sum of:
        0.01886051 = weight(_text_:on in 3411) [ClassicSimilarity], result of:
          0.01886051 = score(doc=3411,freq=4.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.1718293 = fieldWeight in 3411, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3411)
        0.075840734 = product of:
          0.15168147 = sum of:
            0.15168147 = weight(_text_:demand in 3411) [ClassicSimilarity], result of:
              0.15168147 = score(doc=3411,freq=4.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.4872892 = fieldWeight in 3411, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3411)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Prelude: Some Science Fiction on the Ultimate Answer and The Ultimate Question. Many many millions of years ago a race of hyperintelligent pan-dimensional beings (whose physical manifestation in their own pan-dimensional universe is not dissimilar to our own) got so fed up with the constant bickering about the meaning of life which used to interrupt their favourite pastime of Brockian Ultra Cricket (a curious game which involved suddenly hitting people for no readily apparent reason and then running away) that they decided to sit down and solve their problems once and for all. And to this end they built themselves a stupendous super computer ... 'O Deep Thought computer', Fook said, 'the task we have designed you to perform is this. We want you to tell us ...' he paused, 'the Answer!' 'The Answer?' said Deep Thought. 'The Answer to what?' 'Life!' urged Fook. 'The Universe!' said Lunkwill. 'Everything!' they said in chorus. (At this point the whole procedure is interrupted by two representatives of the 'Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons' who demand to switch off the machine because it endangers their jobs. They demand 'rigidly defined areas of doubt and uncertainty!', and threaten: 'You'll have a national Philosopher's strike on your hands!' ...)
  14. Wong, W.; Liu, W.; Bennamoun, M.: Ontology learning from text : a look back and into the future (2010) 0.06
    0.062499635 = product of:
      0.09374945 = sum of:
        0.01867095 = weight(_text_:on in 4733) [ClassicSimilarity], result of:
          0.01867095 = score(doc=4733,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.17010231 = fieldWeight in 4733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4733)
        0.0750785 = product of:
          0.150157 = sum of:
            0.150157 = weight(_text_:demand in 4733) [ClassicSimilarity], result of:
              0.150157 = score(doc=4733,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.48239172 = fieldWeight in 4733, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4733)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the "Read/Write" Web, coupled with the increasing demand for ontologies to power the Semantic Web, has made (semi-)automatic ontology learning from text a very promising research area. This, together with the advanced state of related areas such as natural language processing, has fuelled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future.
  15. Murguia, E.I.; Sales, R. de: CNPq's knowledge area table as a knowledge and power apparatus (2012) 0.06
    0.061381456 = product of:
      0.09207218 = sum of:
        0.027719175 = weight(_text_:on in 845) [ClassicSimilarity], result of:
          0.027719175 = score(doc=845,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.25253648 = fieldWeight in 845, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=845)
        0.064353004 = product of:
          0.12870601 = sum of:
            0.12870601 = weight(_text_:demand in 845) [ClassicSimilarity], result of:
              0.12870601 = score(doc=845,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.4134786 = fieldWeight in 845, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=845)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This work is a first reflection on what we understand as knowledge organization based on politics. To do so, we resorted to Foucault's conceptions of politics, state and governance, aiming to analyze an instrument that guides knowledge organization in Brazil's research and academic fields. The current version, updated in 1984 by the CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico - National Council for Scientific and Technological Development), of the Knowledge Area Table (KAT) represents and establishes investigation in the scientific and technologic fields. We highlight that the position of Information Science in such Table was a product of national and international reflections in the 1980s on building a scientific subject. Information Science (IS) is placed higher than Information Theory, Library Science and Archival Science in the Table's hierarchy. This led to a demand to promote interdisciplinarity which, although specific to certain periods, made the discipline acceptable.
  16. Nédellec, C.; Bossy, R.; Valsamou, D.; Ranoux, M.; Golik, W.; Sourdille, P.: Information extraction from bibliography for marker-assisted selection in wheat (2014) 0.06
    0.061381456 = product of:
      0.09207218 = sum of:
        0.027719175 = weight(_text_:on in 1592) [ClassicSimilarity], result of:
          0.027719175 = score(doc=1592,freq=6.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.25253648 = fieldWeight in 1592, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=1592)
        0.064353004 = product of:
          0.12870601 = sum of:
            0.12870601 = weight(_text_:demand in 1592) [ClassicSimilarity], result of:
              0.12870601 = score(doc=1592,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.4134786 = fieldWeight in 1592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1592)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Improving most animal and plant species of agronomical interest has become an international challenge in the near future, owing to the increasing demand to feed a growing world population and the need to mitigate the depletion of industrial resources. The recent advent of genomic tools has improved the discovery of linkages between molecular markers and genes involved in the control of traits of agronomical interest, such as grain number or disease resistance. This information is mostly published in scientific papers but is rarely available in databases. Here, we present a method that aims to automatically extract this information from the scientific literature, relying on a knowledge model of the target information and on the WheatPhenotype ontology, which we developed for this purpose. The information extraction results were evaluated and integrated into the online semantic search engine AlvisIR WheatMarker.
  17. Dreusicke, M.: Produktion und Distribution für multimedialen Content in Form von Linked Data am Beispiel von PAUX (2010) 0.05
    0.053571116 = product of:
      0.08035667 = sum of:
        0.016003672 = weight(_text_:on in 4263) [ClassicSimilarity], result of:
          0.016003672 = score(doc=4263,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 4263, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=4263)
        0.064353004 = product of:
          0.12870601 = sum of:
            0.12870601 = weight(_text_:demand in 4263) [ClassicSimilarity], result of:
              0.12870601 = score(doc=4263,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.4134786 = fieldWeight in 4263, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4263)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The first part of the talk describes the principles of semantically linked microcontent in the form of linked data. This comprises: 1. microcontent in the sense of word-by-word storage of text; 2. multiple links between text components; 3. semantic weighting of content and link objects; and 4. the "linked data" concept. The second part shows the advantages this gives the participants in the digital workflow. Readers' comprehension can be accelerated by additional functions within the text, and the content can be learned actively. Authors can represent their knowledge more comprehensively by writing multi-layered texts and assigning individual pieces of information to different user groups. Publishers and other content providers can deliver the smallest content objects, which remain on a central server and are only sent on demand when a reader requests them, so that usage behaviour can be monitored precisely and value-added services, rather than content, can be monetized.
  18. Kim, S.; Cho, S.: Characteristics of Korean personal names (2013) 0.05
    0.053571116 = product of:
      0.08035667 = sum of:
        0.016003672 = weight(_text_:on in 531) [ClassicSimilarity], result of:
          0.016003672 = score(doc=531,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 531, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=531)
        0.064353004 = product of:
          0.12870601 = sum of:
            0.12870601 = weight(_text_:demand in 531) [ClassicSimilarity], result of:
              0.12870601 = score(doc=531,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.4134786 = fieldWeight in 531, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=531)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Korea, along with Asia at large, is producing more and more valuable academic materials. Furthermore, the demand for academic materials produced in non-Western societies is increasing among English-speaking users. In order to search among such material, users rely on keywords such as author names. However, Asian nations such as Korea and China have markedly different methods of writing personal names from Western naming traditions. Among these differences are name components, structure, writing customs, and distribution of surnames. These differences influence the Anglicization of Asian academic researchers' names, often leading to them being written in various fashions, unlike Western personal names. These inconsistent formats can often lead to difficulties in searching and finding academic materials for Western users unfamiliar with Korean and Asian personal names. This article presents methods for precisely understanding and categorizing Korean personal names in order to make academic materials by Korean authors easier to find for Westerners. As such, this article discusses characteristics particular to Korean personal names and furthermore analyzes how the personal names of Korean academic researchers are currently being written in English.
  19. Hu, G.; Lin, H.; Pan, W.: Conceptualizing and examining E-government service capability : a review and empirical study (2013) 0.05
    0.053571116 = product of:
      0.08035667 = sum of:
        0.016003672 = weight(_text_:on in 1118) [ClassicSimilarity], result of:
          0.016003672 = score(doc=1118,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 1118, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=1118)
        0.064353004 = product of:
          0.12870601 = sum of:
            0.12870601 = weight(_text_:demand in 1118) [ClassicSimilarity], result of:
              0.12870601 = score(doc=1118,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.4134786 = fieldWeight in 1118, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1118)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The effectiveness and efficiency of e-government (e-gov) services (EGS) are critical issues that have yet to be fully discussed. Inspired by successful practices in the areas of SERVQUAL, capability-based theories, and IT-related capability management, the efficient delivery of EGS should derive from a government's high capability to provide such services. This article aims to develop a conceptual framework for assessing e-government service capability (EGSC) and to examine it empirically using data from local governments in Mainland China. The fitness test and the case study show that the conceptual framework is suitable for analyzing China's EGSC. In particular, EGSC can be examined along three dimensions/layers: content service capability, service delivery capability, and on-demand capability. The results of the structural analysis illustrate the practical management applications of EGSC, which can facilitate the improvement of EGS.
  20. Suman, A.: From knowledge abstraction to management : using Ranganathan's faceted schema to develop conceptual frameworks for digital libraries (2014) 0.05
    0.053571116 = product of:
      0.08035667 = sum of:
        0.016003672 = weight(_text_:on in 2032) [ClassicSimilarity], result of:
          0.016003672 = score(doc=2032,freq=2.0), product of:
            0.109763056 = queryWeight, product of:
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.04990557 = queryNorm
            0.14580199 = fieldWeight in 2032, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.199415 = idf(docFreq=13325, maxDocs=44218)
              0.046875 = fieldNorm(doc=2032)
        0.064353004 = product of:
          0.12870601 = sum of:
            0.12870601 = weight(_text_:demand in 2032) [ClassicSimilarity], result of:
              0.12870601 = score(doc=2032,freq=2.0), product of:
                0.31127608 = queryWeight, product of:
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.04990557 = queryNorm
                0.4134786 = fieldWeight in 2032, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.237302 = idf(docFreq=234, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2032)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The increasing volume of information in the contemporary world creates demand for efficient knowledge management (KM) systems: a logical method of information organization that will allow proper semantic querying to identify things that match meaning in natural language. On this view, the role of an information manager goes beyond implementing a search and clustering system to the ability to map and logically present the subject domain and related cross-domains. From Knowledge Abstraction to Management answers this need by analysing ontology tools and techniques, helping the reader develop

Types

  • a 2961
  • el 275
  • m 219
  • s 77
  • x 22
  • r 13
  • b 6
  • n 3
  • i 2
  • p 1
  • z 1
