Search (36707 results, page 1836 of 1836)

  1. Blair, D.C.: The challenge of commercial document retrieval : Part I: Major issues, and a framework based on search exhaustivity, determinacy of representation and document collection size (2002) 0.00
    1.177235E-4 = product of:
      0.00235447 = sum of:
        0.00235447 = weight(_text_:in in 2580) [ClassicSimilarity], result of:
          0.00235447 = score(doc=2580,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.060115322 = fieldWeight in 2580, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2580)
      0.05 = coord(1/20)
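The explain tree above follows Lucene's ClassicSimilarity formula: a term's score is the product of its query weight (idf × queryNorm) and its field weight (tf × idf × fieldNorm), scaled by the coord factor for the fraction of query terms matched. A minimal sketch reproducing the numbers shown for this entry:

```python
import math

# Inputs copied from the explain tree above (ClassicSimilarity).
freq = 2.0
tf = math.sqrt(freq)                      # 1.4142135 = tf(freq=2.0)
idf = 1.3602545                           # idf(docFreq=30841, maxDocs=44218)
query_norm = 0.02879306
field_norm = 0.03125

query_weight = idf * query_norm           # ~0.039165888
field_weight = tf * idf * field_norm      # ~0.060115322
raw_score = query_weight * field_weight   # ~0.00235447
coord = 1 / 20                            # 1 of 20 query terms matched
score = raw_score * coord                 # ~1.177235e-4
```

The same structure (only fieldNorm and the doc id differing) repeats for every entry on this page, which is why the scores are nearly identical.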
    
    Abstract
    With the growing focus on what is collectively known as "knowledge management", a shift continues to take place in commercial information system development: a shift away from the well-understood data retrieval/database model, to the more complex and challenging development of commercial document/information retrieval models. While document retrieval has had a long and rich legacy of research, its impact on commercial applications has been modest. At the enterprise level most large organizations have little understanding of, or commitment to, high quality document access and management. Part of the reason for this is that we still do not have a good framework for understanding the major factors which affect the performance of large-scale corporate document retrieval systems. The thesis of this discussion is that document retrieval - specifically, access to intellectual content - is a complex process which is most strongly influenced by three factors: the size of the document collection; the type of search (exhaustive, existence or sample); and, the determinacy of document representation. Collectively, these factors can be used to provide a useful framework for, or taxonomy of, document retrieval, and highlight some of the fundamental issues facing the design and development of commercial document retrieval systems. This is the first of a series of three articles. Part II (D.C. Blair, The challenge of commercial document retrieval. Part II. A strategy for document searching based on identifiable document partitions, Information Processing and Management, 2001b, this issue) will discuss the implications of this framework for search strategy, and Part III (D.C. Blair, Some thoughts on the reported results of Text REtrieval Conference (TREC), Information Processing and Management, 2002, forthcoming) will consider the importance of the TREC results for our understanding of operating information retrieval systems.
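Blair's three factors lend themselves to a small taxonomy. A hypothetical sketch (the class and field names are mine, not Blair's, and the heuristic threshold values are illustrative only):

```python
from dataclasses import dataclass
from enum import Enum

class SearchType(Enum):
    EXHAUSTIVE = "exhaustive"   # retrieve all relevant documents
    EXISTENCE = "existence"     # determine whether any relevant document exists
    SAMPLE = "sample"           # retrieve some relevant documents

@dataclass
class RetrievalContext:
    collection_size: int        # number of documents in the corpus
    search_type: SearchType
    determinacy: float          # 0..1: how predictably content is represented

    def is_hard_case(self) -> bool:
        # Illustrative heuristic: exhaustive searches over large collections
        # with weakly determinate representation are where the framework
        # locates the hardest retrieval problems.
        return (self.search_type is SearchType.EXHAUSTIVE
                and self.collection_size > 1_000_000
                and self.determinacy < 0.5)
```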
  2. Tredinnick, L.: Why Intranets fail (and how to fix them) : a practical guide for information professionals (2004) 0.00
    1.177235E-4 = product of:
      0.00235447 = sum of:
        0.00235447 = weight(_text_:in in 4499) [ClassicSimilarity], result of:
          0.00235447 = score(doc=4499,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.060115322 = fieldWeight in 4499, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4499)
      0.05 = coord(1/20)
    
    Abstract
    This book is a practical guide to some of the common problems associated with Intranets, and solutions to those problems. The book takes a unique end-user perspective on the role of intranets within organisations. It explores how the needs of the end-user very often conflict with the needs of the organisation, creating a confusion of purpose that impedes the success of the intranet. It sets out clearly why intranets cannot be thought of as merely internal Internets, and require their own management strategies and approaches. The book draws on a wide range of examples and analogies from a variety of contexts to set out in a clear and concise way the issues at the heart of failing intranets. It presents step-by-step solutions with universal application. Each issue discussed is accompanied by short practical suggestions for improved intranet design and architecture.
  3. Paskin, N.: DOI: current status and outlook (1999) 0.00
    1.177235E-4 = product of:
      0.00235447 = sum of:
        0.00235447 = weight(_text_:in in 1245) [ClassicSimilarity], result of:
          0.00235447 = score(doc=1245,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.060115322 = fieldWeight in 1245, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=1245)
      0.05 = coord(1/20)
    
    Abstract
    Over the past few months the International DOI Foundation (IDF) has produced a number of discussion papers and other materials about the Digital Object Identifier (DOI) initiative. They are all available at the DOI web site, including a brief summary of the DOI origins and purpose. The aim of the present paper is to update those papers, reflecting recent progress, and to provide a summary of the current position and context of the DOI. Although much of the material presented here is the result of a consensus by the organisations forming the International DOI Foundation, some of the points discuss work in progress. The paper describes the origin of the DOI as a persistent identifier for managing copyrighted materials and its development under the non-profit International DOI Foundation into a system providing identifiers of intellectual property with a framework for open applications to be built using them. Persistent identification implementations consistent with URN specifications have up to now been hindered by lack of widespread availability of resolution mechanisms, content typology consensus, and sufficiently flexible infrastructure; DOI attempts to overcome these obstacles. Resolution of the DOI uses the Handle System®, which offers the necessary functionality for open applications. The aim of the International DOI Foundation is to promote widespread applications of the DOI, which it is doing by pioneering some early implementations and by providing an extensible framework to ensure interoperability of future DOI uses. Applications of the DOI will require an interoperable scheme of declared metadata with each DOI; the basis of the DOI metadata scheme is a minimal "kernel" of elements supplemented by additional application-specific elements, under an umbrella data model (derived from the INDECS analysis) that promotes convergence of different application metadata sets. The IDF intends to require declaration of only a minimal set of metadata, sufficient to enable unambiguous look-up of a DOI, but this must be capable of extension by others to create open applications.
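DOI resolution as described here is delegated to the Handle System, which today is reachable through the public doi.org proxy and its REST interface. A minimal sketch, assuming that interface; the example DOI and target URL in the test are illustrative, not real records:

```python
import json
from typing import Optional
from urllib.request import urlopen

# The doi.org proxy fronts the Handle System: a GET on this path returns
# the handle record for a DOI as JSON.
PROXY = "https://doi.org/api/handles/"

def resolution_url(doi: str) -> str:
    return PROXY + doi

def extract_target(handle_record: dict) -> Optional[str]:
    # A handle record carries typed values; the "URL" type holds the
    # current location of the identified object.
    for value in handle_record.get("values", []):
        if value.get("type") == "URL":
            return value["data"]["value"]
    return None

def resolve(doi: str) -> Optional[str]:
    with urlopen(resolution_url(doi)) as resp:  # live network call
        return extract_target(json.load(resp))
```

Because the location lives in the record rather than in the identifier, the DOI stays persistent while the target URL can change, which is the obstacle to URN-style persistence the paper says DOI addresses.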
  4. Chowdhury, G.: Carbon footprint of the knowledge sector : what's the future? (2010) 0.00
    1.177235E-4 = product of:
      0.00235447 = sum of:
        0.00235447 = weight(_text_:in in 4152) [ClassicSimilarity], result of:
          0.00235447 = score(doc=4152,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.060115322 = fieldWeight in 4152, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=4152)
      0.05 = coord(1/20)
    
    Abstract
    Purpose - The purpose of this paper is to produce figures showing the carbon footprint of the knowledge industry - from creation to distribution and use of knowledge, and to provide comparative figures for digital distribution and access. Design/methodology/approach - An extensive literature search and environmental scan were conducted to produce data relating to the CO2 emissions from various industries and activities such as book and journal production, photocopying activities, information technology and the internet. Other sources such as the International Energy Agency (IEA), Carbon Monitoring for Action (CARMA), Copyright Licensing Agency, UK (CLA), Copyright Agency Limited, Australia (CAL), etc., have been used to generate emission figures for production and distribution of print knowledge products versus digital distribution and access. Findings - The current practices for production and distribution of printed knowledge products generate an enormous amount of CO2. It is estimated that the book industry in the UK and USA alone produces about 1.8 million tonnes and about 11.27 million tonnes of CO2 respectively. CO2 emission for the worldwide journal publishing industry is estimated to be about 12 million tonnes. It is shown that the production and distribution costs of digital knowledge products are negligible compared to the environmental costs of production and distribution of printed knowledge products. Practical implications - Given the astounding emission figures for production and distribution of printed knowledge products, and the associated activities for access and distribution of these products, for example, emissions from photocopying activities permitted within the provisions of statutory licenses provided by agencies like CLA, CAL, etc., it is proposed that a digital distribution and access model is the way forward, and that such a system will be environmentally sustainable. Originality/value - It is expected that the findings of this study will pave the way for further research and this paper will be extremely helpful for design and development of the future knowledge distribution and access systems.
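The reported figures can be aggregated for a rough order-of-magnitude comparison. The numbers below are the paper's; the grouping and the derived total are mine:

```python
# Emission figures reported in the abstract, in million tonnes of CO2 per year.
emissions_mt = {
    "books_uk": 1.8,
    "books_usa": 11.27,
    "journals_worldwide": 12.0,
}

total_mt = sum(emissions_mt.values())   # about 25.07 million tonnes
book_share = (emissions_mt["books_uk"] + emissions_mt["books_usa"]) / total_mt
```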
  5. Bianchini, D.; Antonellis, V. De: Linked data services and semantics-enabled mashup (2012) 0.00
    1.177235E-4 = product of:
      0.00235447 = sum of:
        0.00235447 = weight(_text_:in in 435) [ClassicSimilarity], result of:
          0.00235447 = score(doc=435,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.060115322 = fieldWeight in 435, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=435)
      0.05 = coord(1/20)
    
    Abstract
    The Web of Linked Data can be seen as a global database, where resources are identified through URIs, are self-described (by means of the URI dereferencing mechanism), and are globally connected through RDF links. According to the Linked Data perspective, research attention is progressively shifting from data organization and representation to linkage and composition of the huge amount of data available on the Web. For example, at the time of this writing, the DBpedia knowledge base describes more than 3.5 million things, conceptualized through 672 million RDF triples, with 6.5 million external links into other RDF datasets. Useful applications have been provided for enabling people to browse this wealth of data, like Tabulator. Other systems have been implemented to collect, index, and provide advanced searching facilities over the Web of Linked Data, such as Watson and Sindice. Besides these applications, domain-specific systems to gather and mash up Linked Data have been proposed, like DBpedia Mobile and Revyu.com. DBpedia Mobile is a location-aware client for the semantic Web that can be used on an iPhone and other mobile devices. Based on the current GPS position of a mobile device, DBpedia Mobile renders a map indicating nearby locations from the DBpedia dataset. Starting from this map, the user can explore background information about his or her surroundings. Revyu.com is a Web site where you can review and rate anything that can be identified (through a URI) on the Web. Nevertheless, the potential advantages implicit in the Web of Linked Data are far from being fully exploited. Current applications hardly go beyond presenting together data gathered from different sources. Recently, research on the Web of Linked Data has been devoted to the study of models and languages to add functionalities to the Web of Linked Data by means of Linked Data services.
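The "global database" reading of the abstract can be made concrete with a toy graph: resources are URIs, descriptions are RDF-style triples, and objects that are themselves URIs act as links into other datasets. The triples below are simplified examples in DBpedia's style, not real DBpedia data:

```python
# (subject, predicate, object) triples; prefixed names stand in for full URIs.
triples = [
    ("http://dbpedia.org/resource/Berlin", "rdf:type", "dbo:City"),
    ("http://dbpedia.org/resource/Berlin", "dbo:country",
     "http://dbpedia.org/resource/Germany"),
    ("http://dbpedia.org/resource/Germany", "owl:sameAs",
     "http://sws.geonames.org/2921044/"),   # link into another RDF dataset
]

def describe(uri, graph):
    """All triples with subject `uri` (what dereferencing the URI returns)."""
    return [t for t in graph if t[0] == uri]

def outgoing_links(uri, graph):
    """Objects that are themselves URIs, i.e. RDF links a client can follow."""
    return [o for s, p, o in graph if s == uri and o.startswith("http")]
```

Browsers like Tabulator and mashup systems essentially iterate `describe` and `outgoing_links` across servers, which is why linkage rather than local representation becomes the research focus.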
  6. Falavarjani, S.A.M.; Jovanovic, J.; Fani, H.; Ghorbani, A.A.; Noorian, Z.; Bagheri, E.: On the causal relation between real world activities and emotional expressions of social media users (2021) 0.00
    1.177235E-4 = product of:
      0.00235447 = sum of:
        0.00235447 = weight(_text_:in in 243) [ClassicSimilarity], result of:
          0.00235447 = score(doc=243,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.060115322 = fieldWeight in 243, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=243)
      0.05 = coord(1/20)
    
    Abstract
    Social interactions through online social media have become a daily routine of many, and the number of those whose real world (offline) and online lives have become intertwined is continuously growing. As such, the interplay of individuals' online and offline activities has been the subject of numerous research studies, the majority of which explored the impact of people's online actions on their offline activities. The opposite direction of impact-the effect of real-world activities on online actions-has also received attention but to a lesser degree. To contribute to the latter form of impact, this paper reports on a quasi-experimental design study that examined the presence of causal relations between real-world activities of online social media users and their online emotional expressions. To this end, we have collected a large dataset (over 17K users) from Twitter and Foursquare, and systematically aligned user content on the two social media platforms. Users' Foursquare check-ins provided information about their offline activities, whereas the users' expressions of emotions and moods were derived from their Twitter posts. Since our study was based on a quasi-experimental design, to minimize the impact of covariates, we applied an innovative model of computing propensity scores. Our main findings can be summarized as follows: (a) users' offline activities do impact their affective expressions, both of emotions and moods, as evidenced in their online shared textual content; (b) the impact depends on the type of offline activity and if the user embarks on or abandons the activity. Our findings can be used to devise a personalized recommendation mechanism to help people better manage their online emotional expressions.
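The quasi-experimental design hinges on propensity scores: each treated user is compared against the control user most similar in propensity, so covariate differences are minimized. A minimal greedy-matching sketch; the scores are taken as given here, whereas the study estimates them with its own propensity model:

```python
def match_on_propensity(treated, control):
    """Greedy 1:1 nearest-neighbour matching.

    treated, control: lists of (user_id, propensity_score) pairs.
    Returns a list of (treated_id, matched_control_id) pairs.
    """
    pairs = []
    available = list(control)
    for uid, score in treated:
        if not available:
            break
        # Pick the control user with the closest propensity score,
        # then remove it so each control is used at most once.
        best = min(available, key=lambda c: abs(c[1] - score))
        available.remove(best)
        pairs.append((uid, best[0]))
    return pairs
```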
  7. Frey, J.; Streitmatter, D.; Götz, F.; Hellmann, S.; Arndt, N.: DBpedia Archivo (2020) 0.00
    1.0300806E-4 = product of:
      0.0020601612 = sum of:
        0.0020601612 = weight(_text_:in in 53) [ClassicSimilarity], result of:
          0.0020601612 = score(doc=53,freq=2.0), product of:
            0.039165888 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02879306 = queryNorm
            0.052600905 = fieldWeight in 53, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.02734375 = fieldNorm(doc=53)
      0.05 = coord(1/20)
    
    Content
    # How does Archivo work?
    Each week Archivo runs several discovery algorithms to scan for new ontologies. Once discovered, Archivo checks them every 8 hours. When changes are detected, Archivo downloads, rates, and archives the latest snapshot persistently on the DBpedia Databus.
    # Archivo's mission
    Archivo's mission is to improve FAIRness (findability, accessibility, interoperability, and reusability) of all available ontologies on the Semantic Web. Archivo is not a guideline; it is fully automated, machine-readable, and enforces interoperability with its star rating.
    - Ontology developers can implement against Archivo until they reach more stars. The stars and tests are designed to guarantee the interoperability and fitness of the ontology.
    - Ontology users can better find, access and re-use ontologies. Snapshots are persisted in case the original is not reachable anymore, adding a layer of reliability to the decentral web of ontologies.
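The update cycle described above (8-hour re-checks, snapshot only on change) can be sketched as follows; the function and field names are illustrative, not Archivo's actual code:

```python
from datetime import datetime, timedelta

# Archivo re-checks each discovered ontology every 8 hours.
CHECK_INTERVAL = timedelta(hours=8)

def needs_check(last_checked: datetime, now: datetime) -> bool:
    return now - last_checked >= CHECK_INTERVAL

def update(ontology: dict, fetched_hash: str) -> dict:
    """Archive a new snapshot only when the fetched content changed."""
    if ontology["hash"] != fetched_hash:
        ontology["hash"] = fetched_hash
        ontology["snapshots"] += 1   # persisted on the DBpedia Databus
    return ontology
```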
