Search (2054 results, page 103 of 103)

  • type_ss:"a"
  • year_i:[2010 TO 2020}
  1. Bressan, M.; Peserico, E.: Choose the damping, choose the ranking? (2010) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 2563) [ClassicSimilarity], result of:
              0.018833783 = score(doc=2563,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 2563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2563)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    To what extent can changes in PageRank's damping factor affect node ranking? We prove that, at least on some graphs, the top k nodes assume all possible k! orderings as the damping factor varies, even if it varies within an arbitrarily small interval (e.g. [0.84999, 0.85001]). Thus, the rank of a node for a given (finite set of discrete) damping factor(s) provides very little information about the rank of that node as the damping factor varies over a continuous interval. We bypass this problem by introducing lineage analysis and proving that there is a simple condition, with a "natural" interpretation independent of PageRank, that allows one to verify "in one shot" whether a node outperforms another simultaneously for all damping factors and all damping variables (informally, time-variant damping factors). The novel notions of strong rank and weak rank of a node provide a measure of the fuzziness of the rank of that node, of the objective orderability of a graph's nodes, and of the quality of results returned by different ranking algorithms based on the random surfer model. We deploy our analytical tools on a 41M-node snapshot of the .it Web domain and on a 0.7M-node snapshot of the CiteSeer citation graph. Among other findings, we show that rank is indeed relatively stable in both graphs; that "classic" PageRank (d=0.85) marginally outperforms Weighted In-degree (d->0), mainly due to its ability to ferret out "niche" items; and that, for both the Web and CiteSeer, the ideal damping factor appears to be 0.8-0.9 to obtain those items of high importance to at least one (model of a randomly surfing) user, but only 0.5-0.6 to obtain those items important to every (model of a randomly surfing) user.
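The damping-sensitivity result above is easy to poke at with a toy experiment. The sketch below uses an invented four-node graph and plain power iteration (nothing from the paper's .it or CiteSeer datasets) and prints the PageRank ordering at several damping factors.

```python
def pagerank(adj, d, iters=100):
    """Power-iteration PageRank on a dict {node: [out-neighbors]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in nodes}
        for u in nodes:
            out = adj[u]
            if out:
                share = d * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:
                # dangling node: spread its mass uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

# Tiny invented graph: 'a' and 'b' reinforce each other; 'c' and 'd'
# form a side cycle that also feeds them.
adj = {"a": ["b"], "b": ["a"], "c": ["a", "b", "d"], "d": ["c"]}

# As d -> 0 the ranking approaches weighted in-degree; larger d rewards
# membership in recurrent cycles, so the ordering can shift with d.
for d in (0.05, 0.5, 0.85):
    r = pagerank(adj, d)
    print(d, sorted(r, key=r.get, reverse=True))
```

This only illustrates the mechanism; the paper's point is the much stronger claim that on some graphs every one of the k! orderings of the top k nodes is realized inside an arbitrarily small damping interval.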
  2. Gnoli, C.: Fundamentos ontológicos de la organización del conocimiento : la teoría de los niveles integrativos aplicada al orden de cita (2011) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 2659) [ClassicSimilarity], result of:
              0.018833783 = score(doc=2659,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 2659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2659)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    The field of knowledge organization (KO) can be described as composed of the four distinct but connected layers of theory, systems, representation, and application. This paper focuses on the relations between KO theory and KO systems. It is acknowledged that the structure of KO systems is the product of a mixture of ontological, epistemological, and pragmatic factors; however, different systems give different priorities to each factor. A more ontologically oriented approach, though not offering quick solutions for any particular group of users, will produce systems of wide and long-lasting application, as they are based on general, shareable principles. I take the case of the ontological theory of integrative levels, which has been considered a useful source for general classifications for several decades and is currently implemented in the Integrative Levels Classification system. The theory produces a sequence of main classes modelling a natural order between phenomena. This order also has interesting effects on other features of the system, like the citation order of concepts within compounds. As facet-analytical theory has shown, it is useful for citation order to follow a principle of inversion relative to the order of the same concepts in the schedules. In the light of integrative levels theory, this principle also acquires an ontological meaning: phenomena of lower level should be cited first, as most often they act as specifications of higher-level ones. This ontological principle should be complemented by consideration of the epistemological treatment of phenomena: when a lower-level phenomenon is the main theme, it can be promoted to the leading position in the compound subject heading. The integration of these principles is believed to produce optimal results in the ordering of knowledge contents.
  3. Kenter, T.; Balog, K.; Rijke, M. de: Evaluating document filtering systems over time (2015) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 2672) [ClassicSimilarity], result of:
              0.018833783 = score(doc=2672,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 2672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2672)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Document filtering is a popular task in information retrieval. A stream of documents arriving over time is filtered for documents relevant to a set of topics. The distinguishing feature of document filtering is the temporal aspect introduced by the stream of documents. Document filtering systems have, up to now, been evaluated in terms of traditional metrics like (micro- or macro-averaged) precision, recall, MAP, nDCG, F1 and utility. We argue that these metrics do not capture all relevant aspects of the systems being evaluated. In particular, they lack support for the temporal dimension of the task. We propose a time-sensitive way of measuring the performance of document filtering systems over time by employing trend estimation. In short, the performance is calculated for batches, a trend line is fitted to the results, and the estimated performance of systems at the end of the evaluation period is used to compare systems. We detail the application of our proposed trend estimation framework and examine the assumptions that need to hold for valid significance testing. Additionally, we analyze the requirements a document filtering metric has to meet and show that traditional macro-averaged true-positive-based metrics, like precision, recall and utility, fail to capture essential information when applied in a batch setting. In particular, false positives returned in a batch for topics that are absent from the ground truth in that batch go unnoticed. This is a serious flaw, as a system's over-generation might be overlooked this way. We propose a new metric, aptness, that does capture false positives. We incorporate this metric in an overall score and show that this new score does meet all requirements. To demonstrate the results of our proposed evaluation methodology, we analyze the runs submitted to the two most recent editions of a document filtering evaluation campaign.
We re-evaluate the runs submitted to the Cumulative Citation Recommendation task of the 2012 and 2013 editions of the TREC Knowledge Base Acceleration track, and show that important new insights emerge.
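The batch-and-trend idea can be sketched in a few lines. The per-batch F1 scores below are invented, and this is an illustration of the general approach, not the authors' exact framework:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def end_of_period_estimate(batch_scores):
    """Fitted score at the last batch: the value used to compare systems."""
    xs = list(range(len(batch_scores)))
    slope, intercept = fit_line(xs, batch_scores)
    return slope * xs[-1] + intercept

# Invented per-batch F1 scores for two hypothetical systems.
system_a = [0.42, 0.44, 0.43, 0.46, 0.47, 0.49, 0.48, 0.51, 0.52, 0.53]
system_b = [0.55, 0.54, 0.52, 0.51, 0.50, 0.48, 0.47, 0.46, 0.44, 0.43]

print(end_of_period_estimate(system_a), end_of_period_estimate(system_b))
```

In this toy data system_b wins on the plain average, while the end-of-period trend estimate favors the improving system_a; that is exactly the distinction a time-sensitive evaluation is meant to surface.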
  4. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 3109) [ClassicSimilarity], result of:
              0.018833783 = score(doc=3109,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 3109, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3109)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a mechanism compatible with FRSAD for global sharing and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics, and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images, instead of keywords or descriptors, to represent and organize information. Implementing imaged navigation in OPACs offers multiple advantages that derive from rethinking the OPAC anew, since the aim is to share concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice this includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. The iOPAC embodies efforts focused on the conceptual levels expected from librarians. Imaged interfaces are more intuitive, since users need no specific training for information retrieval; they offer easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology opens R&D fields for more suitable interfaces that take into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigm of the primacy of orality in information systems and paves the way to legitimising multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinary competencies in the neurosciences, linguistics and information sciences would be desirable for further investigations into the nature of cognitive processes in information organization and classification, while developing assistive KOS for individuals with communication problems such as autism and deafness.
  5. Madalli, D.P.; Chatterjee, U.; Dutta, B.: An analytical approach to building a core ontology for food (2017) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 3362) [ClassicSimilarity], result of:
              0.018833783 = score(doc=3362,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 3362, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3362)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Purpose The purpose of this paper is to demonstrate the construction of a core ontology for food. To construct the core ontology, the authors propose an approach called yet another methodology for ontology plus (YAMO+). The goal is to exhibit the construction of a core ontology for a domain, which can be further extended and converted into application ontologies. Design/methodology/approach To motivate the construction of the core ontology for food, the authors have first articulated a set of application scenarios. The idea is that the constructed core ontology can be used to build application-specific ontologies for those scenarios. As part of the developmental approach to the core ontology, the authors have proposed a methodology called YAMO+. It is designed following the theory of analytico-synthetic classification. YAMO+ is generic in nature and can be applied to build core ontologies for any domain. Findings Construction of a core ontology needs a thorough understanding of the domain and domain requirements. There are various challenges involved in constructing a core ontology, as discussed in this paper. The proposed approach has proven sturdy enough to face the challenges that the construction of a core ontology poses. It is observed that a core ontology is amenable to conversion into an application ontology. Practical implications The constructed core ontology for the food domain can be readily used for developing application ontologies related to food. The proposed methodology YAMO+ can be applied to build core ontologies for any domain. Originality/value To the best of the authors' knowledge, and based on a study of the state-of-the-art literature, the proposed approach is the first formal approach to the design of a core ontology. The constructed core ontology for food is also the first of its kind, as no such ontology is available on the web for the food domain.
  6. Lee, D.; Robinson, L.: The heart of music classification : toward a model of classifying musical medium (2018) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 4198) [ClassicSimilarity], result of:
              0.018833783 = score(doc=4198,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 4198, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4198)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Purpose The purpose of this paper is to understand the classification of musical medium, which is a critical part of music classification. It considers how musical medium is currently classified, provides a theoretical understanding of what is currently problematic, and proposes a model which rethinks the classification of medium and resolves these issues. Design/methodology/approach The analysis is drawn from existing classification schemes, additionally using musicological and knowledge organization literature where relevant. The paper culminates in the design of a model of musical medium. Findings The analysis elicits sub-facets, orders and categorizations of medium: there is a strict categorization between vocal and instrumental music, a categorization based on broad size, and important sub-facets for multiples, accompaniment and arrangement. Problematically, there is a mismatch between the definitiveness of library and information science vocal/instrumental categorization and the blurred nature of real musical works; arrangements and accompaniments are limited by other categorizations; multiple voices and groups are not accommodated. So, a model with a radically new structure is proposed which resolves these classification issues. Research limitations/implications The results could be used to further understanding of music classification generally, for Western art music and other types of music. Practical implications The resulting model could be used to improve and design new classification schemes and to improve understanding of music retrieval. Originality/value Deep theoretical analysis of music classification is rare, so this paper's approach is original. Furthermore, the paper's value lies in studying a vital area of music classification which is not currently understood, and providing explanations and solutions. The proposed model is novel in structure and concept, and its original structure could be adapted for other knotty subjects.
  7. Li, X.; Schijvenaars, B.J.A.; Rijke, M. de: Investigating queries and search failures in academic search (2017) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 5033) [ClassicSimilarity], result of:
              0.018833783 = score(doc=5033,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 5033, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5033)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Academic search concerns the retrieval and profiling of information objects in the domain of academic research. In this paper we reveal important observations about academic search queries, and provide an algorithmic solution to address a type of failure during search sessions: null queries. We start by providing a general characterization of academic search queries, by analyzing a large-scale transaction log of a leading academic search engine. Unlike previous small-scale analyses of academic search queries, we find important differences with query characteristics known from web search. For example, in academic search there is a substantially bigger proportion of entity queries and a heavier tail in the query-length distribution. We then focus on search failures and, in particular, on null queries that lead to an empty search engine result page, on null sessions that contain such null queries, and on users who are prone to issue null queries. In academic search, approximately 1 in 10 queries is a null query, and 25% of the sessions contain a null query. They appear in different types of search sessions, and prevent users from achieving their search goal. To address the high rate of null queries in academic search, we consider the task of providing query suggestions. Specifically, we focus on a highly frequent query type: non-boolean informational queries. To this end we need to overcome query sparsity and make effective use of session information. We find that using entities helps to surface more relevant query suggestions in the face of query sparsity. We also find that query suggestions should be conditioned on the type of session in which they are offered to be more effective. After casting the session classification problem as a multi-label classification problem, we generate session-conditional query suggestions based on predicted session type. We find that this session-conditional method leads to significant improvements over a generic query suggestion method. Personalization yields very little further improvement over session-conditional query suggestions.
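The null-query and null-session statistics reported above are straightforward to compute from a session-structured log. The miniature log below is invented for illustration (each session is a list of (query, result-count) pairs):

```python
# Invented miniature stand-in for a transaction log of an academic
# search engine: a result count of 0 marks a null query.
sessions = [
    [("deep learning", 120), ("deep lerning citation graph", 0)],
    [("pagerank damping", 35)],
    [("xylotomy of conifers 2017", 0), ("xylotomy conifers", 4)],
    [("trec kba stream corpus", 11)],
]

queries = [q for session in sessions for q in session]
null_query_rate = sum(1 for _, hits in queries if hits == 0) / len(queries)
null_session_rate = sum(
    1 for s in sessions if any(hits == 0 for _, hits in s)
) / len(sessions)
print(null_query_rate, null_session_rate)  # 2 of 6 queries, 2 of 4 sessions
```

The paper's figures (roughly 1 in 10 queries null, 25% of sessions containing one) come from the same kind of counting over a large real log.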
  8. Kutlu, M.; Elsayed, T.; Lease, M.: Intelligent topic selection for low-cost information retrieval evaluation : a new perspective on deep vs. shallow judging (2018) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 5092) [ClassicSimilarity], result of:
              0.018833783 = score(doc=5092,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 5092, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5092)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    While test collections provide the cornerstone for Cranfield-based evaluation of information retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling techniques to construct test collections at the scale of today's massive document collections (e.g., ClueWeb12's 700M+ Webpages). This has motivated a flurry of studies proposing more cost-effective yet reliable IR evaluation methods. In this paper, we propose a new intelligent topic selection method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our method, we integrate previously disparate lines of research on intelligent topic selection and deep vs. shallow judging (i.e., whether it is more cost-effective to collect many relevance judgments for a few topics or a few judgments for many topics). While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs. shallow judging has largely argued for shallow judging, but assuming random topic selection. We argue that for evaluating any topic selection method, ultimately one must ask whether it is actually useful to select topics, or should one simply perform shallow judging over many topics? In seeking a rigorous answer to this over-arching question, we conduct a comprehensive investigation over a set of relevant factors never previously studied together: 1) method of topic selection; 2) the effect of topic familiarity on human judging speed; and 3) how different topic generation processes (requiring varying human effort) impact (i) budget utilization and (ii) the resultant quality of judgments.
Experiments on NIST TREC Robust 2003 and Robust 2004 test collections show that not only can we reliably evaluate IR systems with fewer topics, but also that: 1) when topics are intelligently selected, deep judging is often more cost-effective than shallow judging in evaluation reliability; and 2) topic familiarity and topic generation costs greatly impact the evaluation cost vs. reliability trade-off. Our findings challenge conventional wisdom in showing that deep judging is often preferable to shallow judging when topics are selected intelligently.
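The deep-vs-shallow trade-off at the heart of the paper reduces to simple budget arithmetic: under a fixed judging budget, pool depth determines how many topics can be afforded. The numbers below are illustrative only, not the paper's:

```python
budget = 10_000              # total relevance judgments we can afford
depths = [25, 50, 100, 200]  # judgments collected per selected topic

# Deeper judging per topic means fewer topics under the same budget;
# the paper asks which allocation yields the more reliable evaluation.
for depth in depths:
    print(f"depth {depth:>3}: {budget // depth} topics judged")
```

The paper's contribution is showing that the answer depends on how the topics are chosen: with intelligent selection, spending the budget on fewer, deeper-judged topics is often the better deal.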
  9. Orso, V.; Ruotsalo, T.; Leino, J.; Gamberini, L.; Jacucci, G.: Overlaying social information : the effects on users' search and information-selection behavior (2017) 0.00
    0.0011771114 = product of:
      0.0047084456 = sum of:
        0.0047084456 = product of:
          0.018833783 = sum of:
            0.018833783 = weight(_text_:based in 5097) [ClassicSimilarity], result of:
              0.018833783 = score(doc=5097,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.13315678 = fieldWeight in 5097, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5097)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Previous research investigated how to leverage the new types of social data available on the web, e.g., tags, ratings and reviews, in recommending and personalizing information. However, previous works mainly focused on predicting ratings using collaborative filtering or quantifying personalized ranking quality in simulations. As a consequence, the effect of social information on users' interactive search and information-selection behavior remains elusive. The objective of our research is to investigate the effects of social information on users' interactive search and information-selection behavior. We present a computational method and a system implementation combining different graph overlays: social, personal and search-time user input that are visualized for the user to support interactive information search. We report on a controlled laboratory experiment, in which 24 users performed search tasks using three system variants with different graphs as overlays, composed from the largest publicly available social content and review data from Yelp: personal preferences, tags combined with personal preferences, and tags and social ratings combined with personal preferences. Data comprising search logs, questionnaires, simulations, and eye-tracking recordings show that: 1) search effectiveness is improved by using and visualizing the social rating information and the personal preference information as compared to content-based ranking; 2) the need to consult external information before selecting information is reduced by the presentation of the effects of different overlays on the search results. Search effectiveness improvements can be attributed to the use of social rating and personal preference overlays, which was also confirmed in a follow-up simulation study. With the proposed method we demonstrate that social information can be incorporated into the interactive search process by overlaying graphs representing different information sources. We show that the combination of social rating information and personal preference information improves search effectiveness and reduces the need to consult external information. Our method and findings can inform the design of interactive search systems that leverage the information available on the social web.
  10. Hypén, K.; Mäkelä, E.: An ideal model for an information system for fiction and its application : Kirjasampo and Semantic Web (2011) 0.00
    0.0010299725 = product of:
      0.00411989 = sum of:
        0.00411989 = product of:
          0.01647956 = sum of:
            0.01647956 = weight(_text_:based in 4550) [ClassicSimilarity], result of:
              0.01647956 = score(doc=4550,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11651218 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4550)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - Library Director Jarmo Saarti introduced a wide or ideal model for fiction in literature in his dissertation, published in 1999. It introduces those aspects that should be included in an information system for fiction. Such aspects include literary prose and its intertextual references to other works, the writer, readers' and critics' receptions of the work as well as a researcher's view. It is also important to note how libraries approach a literary work by means of inventory, classification and content description. The most ambiguous of the aspects relates to that context in cultural history, which the work reflects and is a part of. The paper aims to discuss these issues. Design/methodology/approach - Since the model consists of several components which are not found in present library information systems and cannot be implemented by them, a new way had to be found to produce, save, process and present fiction-related metadata. The Semantic Computing Research Group of Aalto University has developed several Semantic Web services for use in the field of culture, so cooperation with it and the use of Semantic Web tools were a natural starting point for the construction of the new service. Kirjasampo will be based on the Semantic Web RDF data model. The model enables a flexible linking of metadata derived from different sources, and it can be used to build a Semantic Web that can be approached contextually from different angles. Findings - The "semantically enriched" ideal model for fiction has hence been realised, at least to some extent: Kirjasampo supports literature-related metadata that is more varied than earlier and aims to account for different contexts within literature and connections with regard to other cultural phenomena. It also includes contemporary reviews of works and, as such, readers' receptions as well. Modern readers can share their views on works, once the user interface of the server is completed. 
    It will include several features from the Kirjasto 2.0 application, which enables the evaluation, description and recommendation of works. The service should be online by the end of Spring 2011. Research limitations/implications - The project involves novel collaboration between a public library and a computer science research unit, and utilises a novel approach to the description of fiction. Practical implications - The system encourages user participation in the description of fiction and is of practical benefit to librarians in understanding both how fiction is organised and how users interpret it. Originality/value - Upon completion, the service will be the first Finnish information system for libraries built with the tools of the Semantic Web, offering a completely new user environment and application for data produced by libraries. It also strives to create a new model for saving and producing data, available to both library professionals and readers. The aim is to save, accumulate and distribute literary knowledge, experiences and silent information.
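The RDF data model behind Kirjasampo can be illustrated with plain triples. The sketch below uses invented URIs and literals and plain Python sets rather than an RDF library; it shows why metadata from different sources can be "flexibly linked" by simple union and then queried from any angle:

```python
# Triples are (subject, predicate, object); prefixes/URIs are invented.
library_data = {
    ("ex:work/seitseman-veljesta", "dc:creator", "ex:person/aleksis-kivi"),
    ("ex:work/seitseman-veljesta", "dc:title", "Seitsemän veljestä"),
}
reader_reviews = {
    ("ex:work/seitseman-veljesta", "ex:review",
     "A founding classic of Finnish prose."),
}

# "Linking" heterogeneous sources is just a union of triple sets that
# share subject URIs -- no fixed record schema is needed.
graph = library_data | reader_reviews

# Contextual access from one angle: everything known about one work,
# regardless of which source contributed it.
about = {(p, o) for s, p, o in graph if s == "ex:work/seitseman-veljesta"}
print(len(about))  # 3
```

A production system would use a proper RDF store and vocabularies, but the linking principle (triples from library metadata, reviews, and readers merged on shared URIs) is the same.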
  11. Chen, H.; Baptista Nunes, J.M.; Ragsdell, G.; An, X.: Somatic and cultural knowledge : drivers of a habitus-driven model of tacit knowledge acquisition (2019) 0.00
    0.0010299725 = product of:
      0.00411989 = sum of:
        0.00411989 = product of:
          0.01647956 = sum of:
            0.01647956 = weight(_text_:based in 5460) [ClassicSimilarity], result of:
              0.01647956 = score(doc=5460,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.11651218 = fieldWeight in 5460, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5460)
          0.25 = coord(1/4)
      0.25 = coord(1/4)
    
    Abstract
    The purpose of this paper is to identify and explain the role of individual learning and development in acquiring tacit knowledge in the context of the inexorable and intense continuous change (technological and otherwise) that characterizes our society today, and also to investigate the software (SW) sector, which is at the core of contemporary continuous change and is a paradigm of effective and intrinsic knowledge sharing (KS). This makes the SW sector unique and different from others where KS is so hard to implement. Design/methodology/approach The study employed an inductive qualitative approach based on a multi-case study design, composed of three successful SW companies in China. These companies are representative of the fabric of the sector, namely a small- and medium-sized enterprise, a large private company and a large state-owned enterprise. The fieldwork included 44 participants who were interviewed using a semi-structured script. The interview data were coded and interpreted following the Straussian grounded theory pattern of open coding, axial coding and selective coding. The process of interviewing was stopped when theoretical saturation was achieved after a careful process of theoretical sampling.
  12. Berg, L.; Metzner, J.; Thrun, S.: Studieren im Netz - Das Ende der Uni? : Kostenloser Online-Unterricht (2012) 0.00
    
    Theme
    Computer Based Training
  13. Hawking, S.: This is the most dangerous time for our planet (2016) 0.00
    
    Content
    "As a theoretical physicist based in Cambridge, I have lived my life in an extraordinarily privileged bubble. Cambridge is an unusual town, centered around one of the world's great universities. Within that town, the scientific community which I became part of in my twenties is even more rarefied. And within that scientific community, the small group of international theoretical physicists with whom I have spent my working life might sometimes be tempted to regard themselves as the pinnacle. Add to this the celebrity that has come with my books, and the isolation imposed by my illness, and I feel as though my ivory tower is getting taller.
    So the recent apparent rejection of the elite in both America and Britain is surely aimed at me, as much as anyone. Whatever we might think about the decision by the British electorate to reject membership of the European Union, and by the American public to embrace Donald Trump as their next President, there is no doubt in the minds of commentators that this was a cry of anger by people who felt that they had been abandoned by their leaders.
    It was, everyone seems to agree, the moment that the forgotten spoke, finding their voice to reject the advice and guidance of experts and the elite everywhere.
  14. Berg, L.: Pablo will es wissen : Lernen mit Salman Khan (2012) 0.00
    
    Theme
    Computer Based Training

Languages

  • e 1893
  • d 153
  • i 2
  • a 1
  • f 1
  • sp 1

Types

  • el 112
  • b 4
  • s 1
  • x 1
