Search (4268 results, page 1 of 214)

  • Filter: year_i:[2010 TO 2020}
  1. Osińska, V.: Visual analysis of classification scheme (2010) 0.17
    0.17010799 = product of:
      0.22681065 = sum of:
        0.01594702 = weight(_text_:for in 4068) [ClassicSimilarity], result of:
          0.01594702 = score(doc=4068,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17964928 = fieldWeight in 4068, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4068)
        0.13841431 = weight(_text_:computing in 4068) [ClassicSimilarity], result of:
          0.13841431 = score(doc=4068,freq=6.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.5292687 = fieldWeight in 4068, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4068)
        0.07244933 = product of:
          0.14489865 = sum of:
            0.14489865 = weight(_text_:machinery in 4068) [ClassicSimilarity], result of:
              0.14489865 = score(doc=4068,freq=2.0), product of:
                0.35214928 = queryWeight, product of:
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.047278564 = queryNorm
                0.4114694 = fieldWeight in 4068, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4068)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
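
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output, and the document score can be recomputed from its leaves: fieldWeight = tf x idf x fieldNorm, queryWeight = idf x queryNorm, each term contributes queryWeight x fieldWeight, and the result is scaled by the coord factor. A minimal Python sketch that reproduces the 0.17010799 reported for result 1 (the helper name and structure are illustrative, not part of the search system):

      # Recombine the explain components of result 1 (ClassicSimilarity / TF-IDF).
      import math

      def term_score(freq, idf, query_norm, field_norm):
          tf = math.sqrt(freq)                      # ClassicSimilarity: tf = sqrt(termFreq)
          query_weight = idf * query_norm           # idf * queryNorm
          field_weight = tf * idf * field_norm      # tf * idf * fieldNorm
          return query_weight * field_weight

      query_norm, field_norm = 0.047278564, 0.0390625
      parts = [
          term_score(6.0, 1.8775425, query_norm, field_norm),        # "for"
          term_score(6.0, 5.5314693, query_norm, field_norm),        # "computing"
          0.5 * term_score(2.0, 7.448392, query_norm, field_norm),   # "machinery", nested coord(1/2)
      ]
      print(round(0.75 * sum(parts), 8))            # coord(3/4) * sum = 0.17010799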
    
    Abstract
    This paper proposes a novel methodology to visualize a classification scheme. It is demonstrated with the Association for Computing Machinery (ACM) Computing Classification System (CCS). The collection, derived from the ACM digital library, contains 37,543 documents classified by CCS. The assigned classes, subject descriptors, and keywords were processed into a dataset to produce a graphical representation of the documents. The general conception is based on the similarity of co-classes (themes), proportional to the number of common publications. The final number of all possible classes and subclasses in the collection was 353, and the similarity matrix of co-classes therefore had the same dimension. A spherical surface was chosen as the target information space. The locations of class and document nodes on the sphere were obtained by means of Multidimensional Scaling coordinates. By representing the surface on a plane, like a map projection, it is possible to analyze the visualization layout. The graphical patterns were organized into colour clusters. For the evaluation of the resulting visualization maps, graphics filtering was applied. The proposed method can be very useful in interdisciplinary research fields. It allows a great amount of heterogeneous information to be conveyed in a compact display, including topics, relationships among topics, frequency of occurrence, importance, and changes of these properties over time.
    Object
    Computing Classification System
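    The methodology summarized in result 1 (a co-class similarity matrix, proportional to shared publications, embedded by Multidimensional Scaling and projected onto a sphere) can be pictured in a few lines of code. The sketch below is a toy version using scikit-learn's MDS; the co-occurrence matrix and parameters are invented for illustration and are not the paper's data:

      # Toy pipeline: co-class similarity -> MDS embedding -> radial projection onto a sphere.
      import numpy as np
      from sklearn.manifold import MDS

      # cooc[i, j] = number of publications shared by classes i and j (invented counts).
      cooc = np.array([[20., 5., 1., 0.],
                       [ 5., 15., 4., 2.],
                       [ 1., 4., 18., 6.],
                       [ 0., 2., 6., 12.]])

      similarity = cooc / cooc.max()            # similarity proportional to shared publications
      dissimilarity = 1.0 - similarity          # MDS works on distances, not similarities
      np.fill_diagonal(dissimilarity, 0.0)

      layout = MDS(n_components=3, dissimilarity="precomputed",
                   random_state=0).fit_transform(dissimilarity)
      sphere = layout / np.linalg.norm(layout, axis=1, keepdims=True)  # unit-sphere positions
      print(np.round(sphere, 3))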
  2. Kopak, R.; Freund, L.; O'Brien, H.: Digital information interaction as semantic navigation (2011) 0.15
    0.1488452 = product of:
      0.19846025 = sum of:
        0.015624823 = weight(_text_:for in 14) [ClassicSimilarity], result of:
          0.015624823 = score(doc=14,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 14, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=14)
        0.095896244 = weight(_text_:computing in 14) [ClassicSimilarity], result of:
          0.095896244 = score(doc=14,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 14, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=14)
        0.08693919 = product of:
          0.17387839 = sum of:
            0.17387839 = weight(_text_:machinery in 14) [ClassicSimilarity], result of:
              0.17387839 = score(doc=14,freq=2.0), product of:
                0.35214928 = queryWeight, product of:
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.047278564 = queryNorm
                0.4937633 = fieldWeight in 14, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.046875 = fieldNorm(doc=14)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    In this chapter we focus on the research area of digital information interaction, which emphasizes searchers' direct engagement with and manipulation of information objects as they search and browse through digital information environments. This is an area of active research that has opened up in recent years as information retrieval (IR) research has expanded its focus from the mechanics of retrieval (i.e. indexing, data structures and retrieval algorithms) to include a broader 'retrieval in context' perspective that takes into account the whole system, the affective, cognitive and physical attributes of users and the environment in which searching takes place (Ingwersen and Järvelin, 2005). A number of meetings and workshops have focused on this area, including the Information Retrieval in Context (IRiX) workshops at the ACM SIGIR (Association for Computing Machinery Special Interest Group on Information Retrieval) conference (2004-5), the Information Interaction in Context (IIiX) Conference (2006-ongoing) and the Human Computer Information Retrieval (HCIR) Workshops (2007-ongoing).
    Source
    Innovations in information retrieval: perspectives for theory and practice. Eds.: A. Foster and P. Rafferty
  3. Jiang, Z.; Liu, X.; Chen, Y.: Recovering uncaptured citations in a scholarly network : a two-step citation analysis to estimate publication importance (2016) 0.13
    0.13380319 = product of:
      0.17840424 = sum of:
        0.026041372 = weight(_text_:for in 3018) [ClassicSimilarity], result of:
          0.026041372 = score(doc=3018,freq=16.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29336601 = fieldWeight in 3018, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3018)
        0.079913534 = weight(_text_:computing in 3018) [ClassicSimilarity], result of:
          0.079913534 = score(doc=3018,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 3018, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3018)
        0.07244933 = product of:
          0.14489865 = sum of:
            0.14489865 = weight(_text_:machinery in 3018) [ClassicSimilarity], result of:
              0.14489865 = score(doc=3018,freq=2.0), product of:
                0.35214928 = queryWeight, product of:
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.047278564 = queryNorm
                0.4114694 = fieldWeight in 3018, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3018)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The citation relationships between publications, which are significant for assessing the importance of scholarly components within a network, have been used for various scientific applications. Missing citation metadata in scholarly databases, however, create problems for classical citation-based ranking algorithms and challenge the performance of citation-based retrieval systems. In this research, we utilize a two-step citation analysis method to investigate the importance of publications for which citation information is partially missing. First, we calculate the importance of the author and then use his importance to estimate the publication importance for some selected articles. To evaluate this method, we designed a simulation experiment, "random citation-missing", to test the two-step citation analysis that we carried out with the Association for Computing Machinery (ACM) Digital Library (DL). In this experiment, we simulated different scenarios in a large-scale scientific digital library, from high-quality citation data to very poor-quality data. The results show that a two-step citation analysis can effectively uncover the importance of publications in different situations. More importantly, we found that the optimized impact from the importance of an author (first step) is exponentially increased when the quality of the citation data decreases. The findings from this study can further enhance citation-based publication-ranking algorithms for real-world applications.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1722-1735
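    The abstract in result 3 outlines the two steps (author importance first, then an estimate for publications whose citations are missing) without giving the underlying formulas. The sketch below is therefore only a hypothetical reading: the scoring rules, the data, and the fallback are assumptions for illustration, not the authors' model:

      # Hypothetical two-step estimate: score authors from the citations their papers
      # did capture, then estimate an uncaptured paper from its authors' scores.
      from statistics import mean

      papers = {
          "p1": {"authors": ["a1"], "citations": 30},
          "p2": {"authors": ["a1", "a2"], "citations": 10},
          "p3": {"authors": ["a2"], "citations": None},   # citation data missing
      }

      def author_importance(author):
          counts = [p["citations"] for p in papers.values()
                    if author in p["authors"] and p["citations"] is not None]
          return mean(counts) if counts else 0.0

      def publication_importance(paper_id):
          p = papers[paper_id]
          if p["citations"] is not None:
              return float(p["citations"])
          return mean(author_importance(a) for a in p["authors"])  # step 2: fall back to authors

      print(publication_importance("p3"))   # estimated from author a2's captured citations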
  4. Tsakonas, G.; Mitrelis, A.; Papachristopoulos, L.; Papatheodorou, C.: ¬An exploration of the digital library evaluation literature based on an ontological representation (2013) 0.13
    0.12808266 = product of:
      0.17077689 = sum of:
        0.01841403 = weight(_text_:for in 1048) [ClassicSimilarity], result of:
          0.01841403 = score(doc=1048,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 1048, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1048)
        0.079913534 = weight(_text_:computing in 1048) [ClassicSimilarity], result of:
          0.079913534 = score(doc=1048,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 1048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1048)
        0.07244933 = product of:
          0.14489865 = sum of:
            0.14489865 = weight(_text_:machinery in 1048) [ClassicSimilarity], result of:
              0.14489865 = score(doc=1048,freq=2.0), product of:
                0.35214928 = queryWeight, product of:
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.047278564 = queryNorm
                0.4114694 = fieldWeight in 1048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1048)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Evaluation is a vital research area in the digital library domain, demonstrating a growing literature in conference and journal articles. We explore the directions and the evolution of evaluation research for the period 2001-2011 by studying the evaluation initiatives presented at 2 main conferences of the digital library domain, namely the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers (ACM/IEEE) Joint Conference on Digital Libraries (JCDL), and the European Conference on Digital Libraries (ECDL; since 2011 renamed to the International Conference on Theory and Practice of Digital Libraries [TPDL]). The literature is annotated using a domain ontology, named DiLEO, which defines explicitly the main concepts of the digital library evaluation domain and their correlations. The ontology instances constitute a semantic network that enables the uniform and formal representation of the critical evaluation constructs in both conferences, untangles their associations, and supports the study of their evolution. We discuss interesting patterns in the evaluation practices as well as in the research foci of the 2 venues, and outline current research trends and areas for further research.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.9, S.1914-1926
  5. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.10
    0.09805338 = product of:
      0.13073784 = sum of:
        0.015624823 = weight(_text_:for in 3693) [ClassicSimilarity], result of:
          0.015624823 = score(doc=3693,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 3693, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.095896244 = weight(_text_:computing in 3693) [ClassicSimilarity], result of:
          0.095896244 = score(doc=3693,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.36668807 = fieldWeight in 3693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.019216778 = product of:
          0.038433556 = sum of:
            0.038433556 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.038433556 = score(doc=3693,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of classified documents and results presented in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up, we improved the original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  6. Buckland, M.K.: Knowledge organization and the technology of intellectual work (2014) 0.09
    0.087386265 = product of:
      0.116515025 = sum of:
        0.020587513 = weight(_text_:for in 1399) [ClassicSimilarity], result of:
          0.020587513 = score(doc=1399,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 1399, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1399)
        0.079913534 = weight(_text_:computing in 1399) [ClassicSimilarity], result of:
          0.079913534 = score(doc=1399,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 1399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1399)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 1399) [ClassicSimilarity], result of:
              0.032027967 = score(doc=1399,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 1399, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1399)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Since ancient times, intellectual work has required tools for writing, documents for reading, and bibliographies for finding, not to mention more specialized techniques and technologies. Direct personal discussion is often impractical and we depend on documents instead. Document technology evolved through writing, printing, telecommunications, copying, and computing, and facilitated an 'information flood' which motivated important knowledge organization initiatives, especially in the nineteenth century (library science, bibliography, documentation). Electronics and the Internet amplified these trends. As an example we consider an initiative to provide shared access to the working notes of editors preparing scholarly editions of historically important texts. For the future, we can project trends leading to ubiquitous recording, pervasive representations, simultaneous interaction regardless of geography, and powerful analysis and visualization of the records resulting from that ubiquitous recording. This evolving situation has implications for publishing, archival practice, and knowledge organization. The passing of time is of special interest in knowledge organization because knowing is cultural, living, and always changing. Technique and technology are also cultural ("material culture") but fixed and inanimate, as can be seen in the obsolescence of subject headings, which remain inscribed while culture moves on. The tension between the benefits of technology and the limitations imposed by fixity in a changing world provides a central tension in knowledge organization over time.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  7. Jaskolla, L.; Rugel, M.: Smart questions : steps towards an ontology of questions and answers (2014) 0.09
    0.087386265 = product of:
      0.116515025 = sum of:
        0.020587513 = weight(_text_:for in 3404) [ClassicSimilarity], result of:
          0.020587513 = score(doc=3404,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 3404, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3404)
        0.079913534 = weight(_text_:computing in 3404) [ClassicSimilarity], result of:
          0.079913534 = score(doc=3404,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 3404, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3404)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 3404) [ClassicSimilarity], result of:
              0.032027967 = score(doc=3404,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 3404, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3404)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The present essay is based on research funded by the German Ministry of Economics and Technology and carried out by the Munich School of Philosophy (Prof. Godehard Brüntrup) in cooperation with the IT company Comelio GmbH. It is concerned with setting up the philosophical framework for a systematic, hierarchical and categorical account of questions and answers in order to use this framework as an ontology for software engineers who create a tool for intelligent questionnaire design. In recent years, there has been considerable interest in programming software that enables users to create and carry out their own surveys. Considering the, to say the least, vast range of application areas these software tools try to cover, it is surprising that most of the existing tools lack a systematic approach to what questions and answers really are and in what kind of systematic hierarchical relations different types of questions stand to each other. The theoretical background to this essay is inspired by Barry Smith's theory of regional ontologies. The notion of ontology used in this essay can be defined by the following characteristics: (1) The basic notions of the ontology should be defined in a manner that excludes equivocations of any kind. They should also be presented in a way that allows for an easy translation into a semi-formal language, in order to secure easy applicability for software engineers. (2) The hierarchical structure of the ontology should be that of an arbor porphyriana.
    Date
    9. 2.2017 19:22:59
    Source
    Philosophy, computing and information science. Eds.: R. Hagengruber and U.V. Riss
  8. Town, C.; Harrison, K.: Large-scale grid computing for content-based image retrieval (2010) 0.09
    0.087319955 = product of:
      0.17463991 = sum of:
        0.01804199 = weight(_text_:for in 3947) [ClassicSimilarity], result of:
          0.01804199 = score(doc=3947,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20324993 = fieldWeight in 3947, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=3947)
        0.15659791 = weight(_text_:computing in 3947) [ClassicSimilarity], result of:
          0.15659791 = score(doc=3947,freq=12.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.5987991 = fieldWeight in 3947, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=3947)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - Content-based image retrieval (CBIR) technologies offer many advantages over purely text-based image search. However, one of the drawbacks associated with CBIR is the increased computational cost arising from tasks such as image processing, feature extraction, image classification, and object detection and recognition. Consequently, CBIR systems have suffered from a lack of scalability, which has greatly hampered their adoption for real-world public and commercial image search. At the same time, paradigms for large-scale heterogeneous distributed computing such as grid computing, cloud computing, and utility-based computing are gaining traction as a way of providing more scalable and efficient solutions to large-scale computing tasks. Design/methodology/approach - This paper presents an approach in which a large distributed processing grid has been used to apply a range of CBIR methods to a substantial number of images. By massively distributing the required computational task across thousands of grid nodes, very high throughput has been achieved at relatively low overheads. Findings - This has allowed one to analyse and index about 25 million high-resolution images thus far, while using just two servers for storage and job submission. The CBIR system was developed by Imense Ltd and is based on automated analysis and recognition of image content using a semantic ontology. It features a range of image-processing and analysis modules, including image segmentation, region classification, scene analysis, object detection, and face recognition methods. Originality/value - In the case of content-based image analysis, the primary performance criterion is the overall throughput achieved by the system in terms of the number of images that can be processed over a given time frame, irrespective of the time taken to process any given image. As such, grid processing has great potential for massively parallel content-based image retrieval and other tasks with similar performance requirements.
    Footnote
    Contribution to a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO)
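    Result 8 describes fanning image-analysis jobs out across thousands of grid nodes and merging the results into an index. The sketch below uses a local process pool as a stand-in for the grid middleware, with a dummy analysis step; the batch size, worker count, and descriptor format are illustrative assumptions:

      # Stand-in for grid-distributed CBIR indexing: workers process batches of image
      # paths independently; results are merged into one index on the submitting host.
      from concurrent.futures import ProcessPoolExecutor

      def analyse_batch(paths):
          # Placeholder for segmentation, region classification, face detection, etc.
          return {p: {"descriptor": hash(p) % 1000} for p in paths}

      def build_index(all_paths, batch_size=1000, workers=8):
          batches = [all_paths[i:i + batch_size] for i in range(0, len(all_paths), batch_size)]
          index = {}
          with ProcessPoolExecutor(max_workers=workers) as pool:
              for partial in pool.map(analyse_batch, batches):
                  index.update(partial)
          return index

      if __name__ == "__main__":
          paths = [f"img_{i:07d}.jpg" for i in range(5000)]
          print(len(build_index(paths)))     # 5000 descriptors computed in parallel batches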
  9. Zitt, M.; Lelu, A.; Bassecoulard, E.: Hybrid citation-word representations in science mapping : Portolan charts of research fields? (2011) 0.09
    0.08575615 = product of:
      0.11434154 = sum of:
        0.01841403 = weight(_text_:for in 4130) [ClassicSimilarity], result of:
          0.01841403 = score(doc=4130,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 4130, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4130)
        0.079913534 = weight(_text_:computing in 4130) [ClassicSimilarity], result of:
          0.079913534 = score(doc=4130,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 4130, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4130)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 4130) [ClassicSimilarity], result of:
              0.032027967 = score(doc=4130,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 4130, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4130)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The mapping of scientific fields, based on principles established in the seventies, has recently shown remarkable development, and applications are now booming with progress in computing efficiency. We examine here the convergence of two thematic mapping approaches, citation-based and word-based, which rely on quite different sociological backgrounds. A corpus in the nanoscience field was broken down into research themes, using the same clustering technique on the 2 networks separately. The tool for comparison is the table of intersections of the M clusters (here M=50) built on either side. A classical visual exploitation of such contingency tables is based on correspondence analysis. We investigate a rearrangement of the intersection table (block modeling), resulting in a pseudo-map. The interest of this representation for confronting the two breakdowns is discussed. The amount of convergence found is, in our view, a strong argument in favor of the reliability of bibliometric mapping. However, the outcomes are not convergent to the degree that they could be substituted for each other. Differences highlight the complementarity between approaches based on different networks. In contrast with the strong informetric posture found in recent literature, where lexical and citation markers are considered as miscible tokens, the framework proposed here does not mix the two elements at an early stage, in compliance with their contrasted logic.
    Date
    8. 1.2011 18:22:50
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.1, S.19-39
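    The comparison device in result 9 is the M x M table of intersections between the citation-based and word-based clusterings of the same corpus. A minimal sketch of building such a contingency table from two cluster labelings (the labels are toy values, not the nanoscience corpus):

      # cell [i][j] = number of documents falling in citation-cluster i and word-cluster j.
      from collections import Counter

      citation_clusters = ["C1", "C1", "C2", "C2", "C3", "C3", "C3"]
      word_clusters     = ["W1", "W1", "W1", "W2", "W2", "W3", "W3"]

      pairs = Counter(zip(citation_clusters, word_clusters))
      rows, cols = sorted(set(citation_clusters)), sorted(set(word_clusters))

      table = [[pairs[(r, c)] for c in cols] for r in rows]
      for r, row in zip(rows, table):
          print(r, row)
      # Mass concentrated in a few cells per row indicates convergence of the two
      # breakdowns; block modeling or correspondence analysis can then reorder the table.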
  10. Blobel, B.: Ontologies, knowledge representation, artificial intelligence : hype or prerequisite for international pHealth interoperability? (2011) 0.09
    0.08555527 = product of:
      0.17111054 = sum of:
        0.012889821 = weight(_text_:for in 760) [ClassicSimilarity], result of:
          0.012889821 = score(doc=760,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14520876 = fieldWeight in 760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=760)
        0.15822072 = weight(_text_:computing in 760) [ClassicSimilarity], result of:
          0.15822072 = score(doc=760,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.60500443 = fieldWeight in 760, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0546875 = fieldNorm(doc=760)
      0.5 = coord(2/4)
    
    Abstract
    Nowadays, eHealth and pHealth solutions have to meet advanced interoperability challenges. Enabling pervasive computing and even autonomic computing, pHealth system architectures cover many domains, scientifically managed by specialized disciplines using their specific ontologies. Therefore, semantic interoperability has to advance from a communication protocol to an ontology coordination challenge including semantic integration, bringing knowledge representation and artificial intelligence to the table. The resulting solutions comprehensively support multi-lingual and multi-jurisdictional environments.
  11. Bringsjord, S.; Clark, M.; Taylor, J.: Sophisticated knowledge representation and reasoning requires philosophy (2014) 0.08
    0.078850895 = product of:
      0.10513453 = sum of:
        0.009207015 = weight(_text_:for in 3403) [ClassicSimilarity], result of:
          0.009207015 = score(doc=3403,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.103720546 = fieldWeight in 3403, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3403)
        0.079913534 = weight(_text_:computing in 3403) [ClassicSimilarity], result of:
          0.079913534 = score(doc=3403,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 3403, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3403)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 3403) [ClassicSimilarity], result of:
              0.032027967 = score(doc=3403,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 3403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3403)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    What is knowledge representation and reasoning (KR&R)? Alas, a thorough account would require a book, or at least a dedicated, full-length paper, but here we shall have to make do with something simpler. Since most readers are likely to have an intuitive grasp of the essence of KR&R, our simple account should suffice. The interesting thing is that this simple account itself makes reference to some of the foundational distinctions in the field of philosophy. These distinctions also play a central role in artificial intelligence (AI) and computer science. To begin with, the first distinction in KR&R is that we identify knowledge with knowledge that such-and-such holds (possibly to a degree), rather than knowing how. If you ask an expert tennis player how he manages to serve a ball at 130 miles per hour on his first serve, and then serve a safer, topspin serve on his second should the first be out, you may well receive a confession that, if truth be told, this athlete can't really tell you. He just does it; he does something he has been doing since his youth. Yet, there is no denying that he knows how to serve. In contrast, the knowledge in KR&R must be expressible in declarative statements. For example, our tennis player knows that if his first serve lands outside the service box, it's not in play. He thus knows a proposition, conditional in form.
    Date
    9. 2.2017 19:22:14
    Source
    Philosophy, computing and information science. Eds.: R. Hagengruber and U.V. Riss
  12. Bouyssou, D.; Marchant, T.: Ranking scientists and departments in a consistent manner (2011) 0.08
    0.077377096 = product of:
      0.15475419 = sum of:
        0.019136423 = weight(_text_:for in 4751) [ClassicSimilarity], result of:
          0.019136423 = score(doc=4751,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21557912 = fieldWeight in 4751, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=4751)
        0.13561776 = weight(_text_:computing in 4751) [ClassicSimilarity], result of:
          0.13561776 = score(doc=4751,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.51857525 = fieldWeight in 4751, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=4751)
      0.5 = coord(2/4)
    
    Abstract
    The standard data that we use when computing bibliometric rankings of scientists are their publication/citation records, i.e., so many papers with 0 citation, so many with 1 citation, so many with 2 citations, etc. The standard data for bibliometric rankings of departments have the same structure. It is therefore tempting (and many authors gave in to temptation) to use the same method for computing rankings of scientists and rankings of departments. Depending on the method, this can yield quite surprising and unpleasant results. Indeed, with some methods, it may happen that the "best" department contains the "worst" scientists, and only them. This problem will not occur if the rankings satisfy a property called consistency, recently introduced in the literature. In this article, we explore the consequences of consistency and we characterize two families of consistent rankings.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.9, S.1761-1769
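    Result 12 starts from the observation that scientists and departments are described by the same kind of data, a publication/citation record, which makes it tempting to rank both with the same index. The sketch below illustrates that data structure with the familiar h-index applied to individual records and to a department's pooled record; the h-index is used here only as one example of such a method, not as the ranking analyzed in the article:

      # Records are lists of per-paper citation counts; a department's record is the
      # pooled record of its members. The same scoring routine is applied to both.
      def h_index(citations):
          ranked = sorted(citations, reverse=True)
          return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

      scientists = {
          "s1": [12, 7, 5, 1, 0],
          "s2": [3, 3, 2, 2, 2, 1],
      }
      department = [c for record in scientists.values() for c in record]

      for name, record in scientists.items():
          print(name, h_index(record))       # s1 -> 3, s2 -> 2
      print("dept", h_index(department))     # pooled record scored with the same rule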
  13. Durno, J.: Digital archaeology and/or forensics : working with floppy disks from the 1980s (2016) 0.08
    0.076688446 = product of:
      0.15337689 = sum of:
        0.02551523 = weight(_text_:for in 3196) [ClassicSimilarity], result of:
          0.02551523 = score(doc=3196,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.28743884 = fieldWeight in 3196, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=3196)
        0.12786166 = weight(_text_:computing in 3196) [ClassicSimilarity], result of:
          0.12786166 = score(doc=3196,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.48891744 = fieldWeight in 3196, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0625 = fieldNorm(doc=3196)
      0.5 = coord(2/4)
    
    Abstract
    While software originating from the domain of digital forensics has demonstrated utility for data recovery from contemporary storage media, it is not as effective for working with floppy disks from the 1980s. This paper details alternative strategies for recovering data from floppy disks employing software originating from the software preservation and retro-computing communities. Imaging hardware, storage formats and processing workflows are also discussed.
  14. Richards, L.L.: Records management in the cloud : from system design to resource ownership (2018) 0.08
    0.075717494 = product of:
      0.15143499 = sum of:
        0.013020686 = weight(_text_:for in 4041) [ClassicSimilarity], result of:
          0.013020686 = score(doc=4041,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14668301 = fieldWeight in 4041, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4041)
        0.13841431 = weight(_text_:computing in 4041) [ClassicSimilarity], result of:
          0.13841431 = score(doc=4041,freq=6.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.5292687 = fieldWeight in 4041, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4041)
      0.5 = coord(2/4)
    
    Abstract
    New technology implementations impact organizational behavior and outcomes, sometimes in unintended ways. A combination of design decisions, altered affordances, and political struggles within a state cloud computing implementation reduced levels of service among records management professionals, in spite of their strongly expressed desire to manage records with excellence. Struggles to maintain ownership and control over organizational processes and resources illustrate the power dynamics that are affected by the design of a new system implementation. By designing the system with a single goal in mind (centralization to reduce costs), strategic management failed to consider otherwise predictable outcomes of reducing the resources controlled by a group with lesser power and increasing the resources controlled by an already dominant power within the institution. These findings provide valuable insights into the considerations that cloud computing designs should take into account. They also offer an understanding of changing educational requirements for records management workers to engage more effectively across occupations in technologically changing environments and the potential risks that cloud computing provides to productivity. The research comprised an extensive literature review, a grounded theory methodological approach, and rigorous data collection and synthesis via an empirical case study.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.2, S.281-289
  15. Almeida, M.B.; Souza, R.R.; Porto, R.B.: Looking for the identity of information science in the age of big data, computing clouds and social networks (2015) 0.08
    0.07562129 = product of:
      0.15124258 = sum of:
        0.015624823 = weight(_text_:for in 3453) [ClassicSimilarity], result of:
          0.015624823 = score(doc=3453,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 3453, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=3453)
        0.13561776 = weight(_text_:computing in 3453) [ClassicSimilarity], result of:
          0.13561776 = score(doc=3453,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.51857525 = fieldWeight in 3453, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=3453)
      0.5 = coord(2/4)
    
    Abstract
    In this paper we discuss, from a critical point of view, the current Information Science landscape and some future prospects regarding contemporary information phenomena. We present thoughts about the process of thematic deflation of Information Science, through the analysis of the research objects currently under development in this field. In addition to this, we look at the process of absorption of these and other relevant objects in distinguished knowledge fields. We seek to challenge the emphasis and the volume of interdisciplinary research within the field, and present some comments about what might be the results of such processes for the future of Information Science. Subsequently, we analyze the impact on the Information Science field due to phenomena like the information boom, the consolidation of social networks as interactive spaces, and cloud computing, as well as other key elements.
  16. Ghazzawi, N.; Robichaud, B.; Drouin, P.; Sadat, F.: Automatic extraction of specialized verbal units (2018) 0.08
    0.07562129 = product of:
      0.15124258 = sum of:
        0.015624823 = weight(_text_:for in 4094) [ClassicSimilarity], result of:
          0.015624823 = score(doc=4094,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.17601961 = fieldWeight in 4094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=4094)
        0.13561776 = weight(_text_:computing in 4094) [ClassicSimilarity], result of:
          0.13561776 = score(doc=4094,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.51857525 = fieldWeight in 4094, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=4094)
      0.5 = coord(2/4)
    
    Abstract
    This paper presents a methodology for the automatic extraction of specialized Arabic, English and French verbs of the field of computing. Since nominal terms are predominant in terminology, our interest is to explore to what extent verbs can also be part of a terminological analysis. Hence, our objective is to verify how an existing extraction tool will perform when it comes to specialized verbs in a given specialized domain. Furthermore, we want to investigate any particularities that a language can represent regarding verbal terms from the automatic extraction perspective. Our choice to operate on three different languages reflects our desire to see whether the chosen tool can perform better on one language compared to the others. Moreover, given that Arabic is a morphologically rich and complex language, we consider investigating the results yielded by the extraction tool. The extractor used for our experiment is TermoStat (Drouin 2003). So far, our results show that the extraction of verbs of computing represents certain differences in terms of quality and particularities of these units in this specialized domain between the languages under question.
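    Tools of the kind used in result 16 typically score candidate terms by contrasting their frequency in the specialized corpus with a general reference corpus. The sketch below computes a simple log-ratio specificity over pre-lemmatized verbs; the corpora, smoothing, and scoring are toy assumptions, not TermoStat's actual implementation:

      # Toy corpus-comparison termhood score for verbs of computing.
      import math
      from collections import Counter

      specialized = "compile run debug compile deploy run compile parse deploy".split()
      reference   = "run walk eat run say go make run see take".split()

      spec, ref = Counter(specialized), Counter(reference)
      n_spec, n_ref = sum(spec.values()), sum(ref.values())

      def specificity(verb):
          # Log-ratio of relative frequencies, add-one smoothed for unseen verbs.
          p_spec = (spec[verb] + 1) / (n_spec + len(spec))
          p_ref = (ref[verb] + 1) / (n_ref + len(ref))
          return math.log2(p_spec / p_ref)

      ranked = sorted(spec, key=specificity, reverse=True)
      print([(v, round(specificity(v), 2)) for v in ranked])
      # "compile" and "deploy" rank high; "run" is dampened by its reference-corpus frequency.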
  17. Philosophy, computing and information science (2014) 0.08
    0.07515965 = product of:
      0.1503193 = sum of:
        0.0073656123 = weight(_text_:for in 3407) [ClassicSimilarity], result of:
          0.0073656123 = score(doc=3407,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.08297644 = fieldWeight in 3407, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=3407)
        0.14295368 = weight(_text_:computing in 3407) [ClassicSimilarity], result of:
          0.14295368 = score(doc=3407,freq=10.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.5466263 = fieldWeight in 3407, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=3407)
      0.5 = coord(2/4)
    
    Content
    Introduction: Philosophy's Relevance in Computing and Information Science - Ruth Hagengruber and Uwe V. Riss -- Part I: Philosophy of Computing and Information 1 The Fourth Revolution in our Self-Understanding - Luciano Floridi -- 2 Information Transfer as a Metaphor - Jakob Krebs -- 3 With Aristotle towards a Differentiated Concept of Information? - Uwe Voigt -- 4 The Influence of Philosophy on the Understanding of Computing and Information - Klaus Fuchs-Kittowski -- Part II: Complexity and System Theory 5 The Emergence of Self-Conscious Systems: From Symbolic AI to Embodied Robotics - Klaus Mainzer -- 6 Artificial Intelligence as a New Metaphysical Project - Aziz F. Zambak -- Part III: Ontology 7 The Relevance of Philosophical Ontology to Information and Computer Science - Barry Smith -- 8 Ontology, its Origins and its Meaning in Information Science - Jens Kohne -- 9 Smart Questions: Steps towards an Ontology of Questions and Answers - Ludwig Jaskolla and Matthias Rugel -- Part IV: Knowledge Representation 10 Sophisticated Knowledge Representation and Reasoning Requires Philosophy - Selmer Bringsjord, Micah Clark and Joshua Taylor -- 11 On Frames and Theory-Elements of Structuralism - Holger Andreas -- 12 Ontological Complexity and Human Culture - David J. Saab and Frederico Fonseca -- Part V: Action Theory 13 Knowledge and Action between Abstraction and Concretion - Uwe V. Riss -- 14 Action-Directing Construction of Reality in Product Creation Using Social Software: Employing Philosophy to Solve Real-World Problems - Kai Holzweißig and Jens Krüger -- 15 An Action-Theory-Based Treatment of Temporal Individuals - Tillmann Pross -- 16 Four Rules for Classifying Social Entities - Ludger Jansen -- Part VI: Info-Computationalism 17 Info-Computationalism and Philosophical Aspects of Research in Information Sciences - Gordana Dodig-Crnkovic -- 18 Pancomputationalism: Theory or Metaphor? - Vincent C. Müller -- Part VII: Ethics 19 The Importance of the Sources of Professional Obligations - Francis C. Dane
    Footnote
    Cf.: https://www.cambridge.org/core/books/philosophy-computing-and-information-science/EFE440F6D9884BD733C19D1BF535045B.
  18. Qu, R.; Fang, Y.; Bai, W.; Jiang, Y.: Computing semantic similarity based on novel models of semantic representation using Wikipedia (2018) 0.07
    0.07381066 = product of:
      0.14762132 = sum of:
        0.009207015 = weight(_text_:for in 5052) [ClassicSimilarity], result of:
          0.009207015 = score(doc=5052,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.103720546 = fieldWeight in 5052, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5052)
        0.13841431 = weight(_text_:computing in 5052) [ClassicSimilarity], result of:
          0.13841431 = score(doc=5052,freq=6.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.5292687 = fieldWeight in 5052, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5052)
      0.5 = coord(2/4)
    
    Abstract
    Computing Semantic Similarity (SS) between concepts is one of the most critical issues in many domains such as Natural Language Processing and Artificial Intelligence. Over the years, several SS measurement methods have been proposed by exploiting different knowledge resources. Wikipedia provides a large domain-independent encyclopedic repository and a semantic network for computing SS between concepts. Traditional feature-based measures rely on linear combinations of different properties with two main limitations, the insufficient information and the loss of semantic information. In this paper, we propose several hybrid SS measurement approaches by using the Information Content (IC) and features of concepts, which avoid the limitations introduced above. Considering integrating discrete properties into one component, we present two models of semantic representation, called CORM and CARM. Then, we compute SS based on these models and take the IC of categories as a supplement of SS measurement. The evaluation, based on several widely used benchmarks and a benchmark developed by ourselves, sustains the intuitions with respect to human judgments. In summary, our approaches are more efficient in determining SS between concepts and have a better human correlation than previous methods such as Word2Vec and NASARI.
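    Result 18 combines feature-based evidence with the Information Content (IC) of Wikipedia categories, but the CORM and CARM models themselves are not spelled out in the abstract. The sketch below is therefore only a generic hybrid of Jaccard category overlap and a Resnik-style IC term; the category counts, weighting, and normalization are invented for illustration:

      # Generic hybrid similarity: category overlap plus IC of the most informative
      # shared category (not the CORM/CARM models of the article).
      import math

      category_freq = {"software": 400, "algorithms": 120, "graph_theory": 30}
      total = sum(category_freq.values())

      def ic(category):
          return -math.log(category_freq[category] / total)    # rarer category -> higher IC

      def similarity(cats_a, cats_b, alpha=0.5):
          union = cats_a | cats_b
          jaccard = len(cats_a & cats_b) / len(union) if union else 0.0
          resnik = max((ic(c) for c in cats_a & cats_b), default=0.0)
          max_ic = max(ic(c) for c in category_freq)
          return alpha * jaccard + (1 - alpha) * resnik / max_ic   # both terms kept in [0, 1]

      print(round(similarity({"software", "algorithms"}, {"algorithms", "graph_theory"}), 3))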
  19. Semantic applications (2018) 0.07
    0.06868714 = product of:
      0.13737428 = sum of:
        0.024359472 = weight(_text_:for in 5204) [ClassicSimilarity], result of:
          0.024359472 = score(doc=5204,freq=14.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.27441877 = fieldWeight in 5204, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
        0.11301481 = weight(_text_:computing in 5204) [ClassicSimilarity], result of:
          0.11301481 = score(doc=5204,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.43214604 = fieldWeight in 5204, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5204)
      0.5 = coord(2/4)
    
    Abstract
    This book describes proven methodologies for developing semantic applications: software applications which explicitly or implicitly use the semantics (i.e., the meaning) of a domain terminology in order to improve usability, correctness, and completeness. An example is semantic search, where synonyms and related terms are used to enrich the results of a simple text-based search. Ontologies, thesauri or controlled vocabularies are the centerpiece of semantic applications. The book includes technological and architectural best practices for corporate use.
    Content
    Introduction.- Ontology Development.- Compliance using Metadata.- Variety Management for Big Data.- Text Mining in Economics.- Generation of Natural Language Texts.- Sentiment Analysis.- Building Concise Text Corpora from Web Contents.- Ontology-Based Modelling of Web Content.- Personalized Clinical Decision Support for Cancer Care.- Applications of Temporal Conceptual Semantic Systems.- Context-Aware Documentation in the Smart Factory.- Knowledge-Based Production Planning for Industry 4.0.- Information Exchange in Jurisdiction.- Supporting Automated License Clearing.- Managing cultural assets: Implementing typical cultural heritage archive's usage scenarios via Semantic Web technologies.- Semantic Applications for Process Management.- Domain-Specific Semantic Search Applications.
    LCSH
    Management of Computing and Information Systems
    Subject
    Management of Computing and Information Systems
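    The semantic-search example given in the abstract of entry 19 (enriching a plain text search with synonyms and related terms from a controlled vocabulary) comes down to query expansion. A minimal sketch with an invented thesaurus and document set:

      # Expand the query with synonyms before matching, so "car price" also finds
      # documents that only say "automobile pricing".
      thesaurus = {
          "car": {"automobile", "vehicle"},
          "price": {"cost", "pricing"},
      }
      documents = {
          1: "automobile pricing trends in europe",
          2: "history of the bicycle",
      }

      def expand(query):
          terms = set(query.lower().split())
          for term in list(terms):
              terms |= thesaurus.get(term, set())
          return terms

      def search(query):
          terms = expand(query)
          return [doc_id for doc_id, text in documents.items() if terms & set(text.split())]

      print(search("car price"))   # [1], matched only via the expanded terms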
  20. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2010) 0.07
    0.06680116 = product of:
      0.13360232 = sum of:
        0.020587513 = weight(_text_:for in 3944) [ClassicSimilarity], result of:
          0.020587513 = score(doc=3944,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 3944, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3944)
        0.11301481 = weight(_text_:computing in 3944) [ClassicSimilarity], result of:
          0.11301481 = score(doc=3944,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.43214604 = fieldWeight in 3944, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3944)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - The paper aims to develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach - Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings - The major findings showed that, given the large variety of terminology resources distributed throughout the web, the proposed middleware service is essential to integrate technically and semantically the different terminology resources in order to facilitate subject cross-browsing. A set of recommendations are also made, outlining the important approaches and features that support such a cross-browsing middleware service. Originality/value - Cross-browsing features are lacking in current library portal meta-search systems. Users are therefore deprived of this valuable retrieval provision. This research investigated the case for such a system and developed a prototype to fill this gap.
    Footnote
    Contribution to a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO)
    Object
    ACM Computing Classification
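    The middleware prototype in result 20 maps terms from different vocabularies onto a common spine (the DDC computer-science schedule) so that a term from one vocabulary can be cross-browsed to the other. A minimal sketch of such a spine-based crosswalk; the class numbers and term strings are invented examples, not the project's actual mappings:

      # Each spine class acts as a hub linking UKAT terms and ACM CCS terms.
      crosswalk = {
          "006.3":  {"UKAT": ["Artificial intelligence"],
                     "ACM CCS": ["I.2 Artificial Intelligence"]},
          "005.74": {"UKAT": ["Databases"],
                     "ACM CCS": ["H.2 Database Management"]},
      }

      def cross_browse(term, source, target):
          """Target-vocabulary terms that share a spine class with the source term."""
          hits = []
          for spine_class, mapping in crosswalk.items():
              if term in mapping.get(source, []):
                  hits.extend((spine_class, t) for t in mapping.get(target, []))
          return hits

      print(cross_browse("Databases", "UKAT", "ACM CCS"))   # [('005.74', 'H.2 Database Management')]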

Types

  • a 3876
  • el 307
  • m 238
  • s 75
  • x 33
  • r 14
  • b 8
  • n 8
  • i 3
  • ag 2
  • p 1
  • z 1