Search (1566 results, page 2 of 79)

  • Filter: language_ss:"e"
  • Filter: year_i:[2010 TO 2020}
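The relevance value shown for each hit is a Lucene ClassicSimilarity score. As a sanity check, the reported contribution of the term "web" to the first hit (freq=8, docFreq=4597, maxDocs=44218, internal doc id 3937) can be reproduced from Lucene's classic tf-idf factors; a minimal sketch:

```python
import math

# Reported factors for the "web" term of the first hit (internal doc id 3937).
freq, doc_freq, max_docs = 8.0, 4597, 44218
query_norm, field_norm = 0.044416238, 0.0390625

tf = math.sqrt(freq)                           # 2.828427...
idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.2635105...
query_weight = idf * query_norm                # 0.14495286...
field_weight = tf * idf * field_norm           # 0.36057037...
score = query_weight * field_weight            # 0.052265707...

print(score)
```

The per-term scores are then summed and scaled by a coordination factor (e.g., coord(3/6) when three of six query terms match) to give the per-hit value.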
  1. Kaminski, R.; Schaub, T.; Wanko, P.: A tutorial on hybrid answer set solving with clingo (2017) 0.06
    
    Abstract
    Answer Set Programming (ASP) has become an established paradigm for Knowledge Representation and Reasoning, in particular, when it comes to solving knowledge-intensive combinatorial (optimization) problems. ASP's unique pairing of a simple yet rich modeling language with highly performant solving technology has led to an increasing interest in ASP in academia as well as industry. To further boost this development and make ASP fit for real-world applications it is indispensable to equip it with means for an easy integration into software environments and for adding complementary forms of reasoning. In this tutorial, we describe how both issues are addressed in the ASP system clingo. First, we outline features of clingo's application programming interface (API) that are essential for multi-shot ASP solving, a technique for dealing with continuously changing logic programs. This is illustrated by realizing two exemplary reasoning modes, namely branch-and-bound-based optimization and incremental ASP solving. We then switch to the design of the API for integrating complementary forms of reasoning and detail this in an extensive case study dealing with the integration of difference constraints. We show how the syntax of these constraints is added to the modeling language and seamlessly merged into the grounding process. We then develop in detail a corresponding theory propagator for difference constraints and present how it is integrated into clingo's solving process.
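The branch-and-bound reasoning mode mentioned in the abstract amounts to repeatedly re-solving while tightening a cost bound until no better model exists. The following is a generic Python illustration of that loop, not clingo's actual API; solve() and the item costs are hypothetical stand-ins for a solver call over a changing logic program:

```python
from itertools import combinations

# Hypothetical stand-in for one solver call: find any "model" (a subset of
# items) whose total cost is strictly below the current bound, or None if
# no such model exists (the analogue of UNSAT).
ITEMS = {"a": 7, "b": 3, "c": 5}  # invented item costs

def solve(bound):
    for r in range(len(ITEMS), 0, -1):
        for combo in combinations(ITEMS, r):
            cost = sum(ITEMS[i] for i in combo)
            if cost < bound:
                return set(combo), cost
    return None

# Branch-and-bound loop: each model found tightens the bound; when the
# solver reports no strictly better model, the last model is optimal.
best, bound = None, float("inf")
while (result := solve(bound)) is not None:
    best, bound = result

print(best, bound)  # the minimum-cost model and its cost
```

In clingo itself the same effect is achieved multi-shot: one Control object is kept alive and a new bound constraint is added before each solve call, instead of restarting from scratch.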
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  2. Vaughan, L.; Ninkov, A.: A new approach to web co-link analysis (2018) 0.06
    
    Abstract
    Numerous web co-link studies have analyzed a wide variety of websites ranging from those in the academic and business arena to those dealing with politics and governments. Such studies uncover rich information about these organizations. In recent years, however, there has been a dearth of co-link analysis, mainly due to the lack of sources from which co-link data can be collected directly. Although several commercial services such as Alexa provide inlink data, none provide co-link data. We propose a new approach to web co-link analysis that can alleviate this problem so that researchers can continue to mine the valuable information contained in co-link data. The proposed approach has two components: (a) generating co-link data from inlink data using a computer program; (b) analyzing co-link data at the site level in addition to the page level that previous co-link analyses have used. The site-level analysis has the potential of expanding co-link data sources. We tested this proposed approach by analyzing a group of websites focused on vaccination using Moz inlink data. We found that the approach is feasible, as we were able to generate co-link data from inlink data and analyze the co-link data with multidimensional scaling.
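Component (a) of the proposed approach can be sketched directly: two targets are co-linked by every source that links to both, and mapping page URLs to their hosts lifts the analysis to the site level, as in component (b). A minimal Python sketch with hypothetical inlink data (not the Moz API):

```python
from itertools import combinations
from urllib.parse import urlparse

# Hypothetical inlink data: target page -> set of source pages linking to it.
inlinks = {
    "http://cdc.gov/vax": {"http://news.com/a", "http://blog.org/x"},
    "http://who.int/vax": {"http://news.com/a", "http://blog.org/x",
                           "http://forum.net/t"},
    "http://who.int/faq": {"http://forum.net/t"},
}

def colinks(inlink_map):
    # Co-link count for a pair of targets = number of shared inlink sources.
    return {
        (a, b): len(inlink_map[a] & inlink_map[b])
        for a, b in combinations(sorted(inlink_map), 2)
    }

def site_level(inlink_map):
    # Aggregate page-level inlinks to the site level (site = URL host).
    sites = {}
    for target, sources in inlink_map.items():
        host = urlparse(target).netloc
        sites.setdefault(host, set()).update(urlparse(s).netloc for s in sources)
    return sites

page_pairs = colinks(inlinks)              # page-level co-link counts
site_pairs = colinks(site_level(inlinks))  # site-level co-link counts
print(site_pairs)
```

The site-level pass pools the inlinks of all pages of a site, which is why it yields denser co-link data than the page-level analysis alone.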
  3. Heuvel, C. van den: Multidimensional classifications : past and future conceptualizations and visualizations (2012) 0.06
    
    Abstract
    This paper maps the concepts "space" and "dimensionality" in classifications, in particular in visualizations thereof, from a historical perspective. After a historical excursion into classification theory's use of what in mathematics is known as dimensionality reduction in representations of a single universe of knowledge, its potential will be explored for information retrieval and navigation in the multiverse of the World Wide Web.
    Date
    22. 2.2013 11:31:25
  4. Semantic keyword-based search on structured data sources : First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers (2016) 0.06
    
    Abstract
    This book constitutes the thoroughly refereed post-conference proceedings of the First COST Action IC1302 International KEYSTONE Conference on semantic Keyword-based Search on Structured Data Sources, IKC 2015, held in Coimbra, Portugal, in September 2015. The 13 revised full papers, 3 revised short papers, and 2 invited papers were carefully reviewed and selected from 22 initial submissions. The paper topics cover techniques for keyword search, semantic data management, social Web and social media, information retrieval, benchmarking for search on big data.
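As a point of reference for the conference's core topic, keyword search over structured records reduces, at its simplest, to an inverted index from terms to record identifiers. A toy sketch (illustrative only; the records and fields are invented, not taken from the proceedings):

```python
from collections import defaultdict

# Invented structured records: id -> field values.
records = {
    1: {"title": "Keyword search on graphs", "venue": "IKC"},
    2: {"title": "Semantic data management", "venue": "IKC"},
    3: {"title": "Search over relational data", "venue": "VLDB"},
}

# Inverted index: term -> set of record ids whose fields contain it.
index = defaultdict(set)
for rid, fields in records.items():
    for value in fields.values():
        for term in value.lower().split():
            index[term].add(rid)

def search(query):
    # Conjunctive (AND) keyword search: ids matching every query term.
    ids = set(records)
    for term in query.lower().split():
        ids &= index.get(term, set())
    return sorted(ids)

print(search("search data"))
```

The techniques in the proceedings extend this baseline with semantics, ranking, and access limitations, but the term-to-identifier index remains the underlying data structure.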
    Content
    Contents: Professional Collaborative Information Seeking: On Traceability and Creative Sensemaking / Nürnberger, Andreas (et al.) - Recommending Web Pages Using Item-Based Collaborative Filtering Approaches / Cadegnani, Sara (et al.) - Processing Keyword Queries Under Access Limitations / Calì, Andrea (et al.) - Balanced Large Scale Knowledge Matching Using LSH Forest / Cochez, Michael (et al.) - Improving css-KNN Classification Performance by Shifts in Training Data / Draszawka, Karol (et al.) - Classification Using Various Machine Learning Methods and Combinations of Key-Phrases and Visual Features / HaCohen-Kerner, Yaakov (et al.) - Mining Workflow Repositories for Improving Fragments Reuse / Harmassi, Mariem (et al.) - AgileDBLP: A Search-Based Mobile Application for Structured Digital Libraries / Ifrim, Claudia (et al.) - Support of Part-Whole Relations in Query Answering / Kozikowski, Piotr (et al.) - Key-Phrases as Means to Estimate Birth and Death Years of Jewish Text Authors / Mughaz, Dror (et al.) - Visualization of Uncertainty in Tag Clouds / Platis, Nikos (et al.) - Multimodal Image Retrieval Based on Keywords and Low-Level Image Features / Pobar, Miran (et al.) - Toward Optimized Multimodal Concept Indexing / Rekabsaz, Navid (et al.) - Semantic URL Analytics to Support Efficient Annotation of Large Scale Web Archives / Souza, Tarcisio (et al.) - Indexing of Textual Databases Based on Lexical Resources: A Case Study for Serbian / Stankovic, Ranka (et al.) - Domain-Specific Modeling: Towards a Food and Drink Gazetteer / Tagarev, Andrey (et al.) - Analysing Entity Context in Multilingual Wikipedia to Support Entity-Centric Retrieval Applications / Zhou, Yiwei (et al.)
    Date
    1. 2.2016 18:25:22
    LCSH
    Computer science
    User interfaces (Computer systems)
    Text processing (Computer science)
    Series
    Lecture notes in computer science ; 9398
    Subject
    Computer science
    User interfaces (Computer systems)
    Text processing (Computer science)
  5. Mining text data (2012) 0.06
    
    Abstract
    Text mining applications have experienced tremendous advances because of web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field, and is an edited volume contributed by leading international researchers and practitioners focused on social networks & data mining. This book contains a wide swath in topics across social networks & data mining. Each chapter contains a comprehensive survey including the key research content on the topic, and the future directions of research in the field. There is a special focus on Text Embedded with Heterogeneous and Multimedia Data which makes the mining process much more challenging. A number of methods have been designed such as transfer learning and cross-lingual mining for such cases. Mining Text Data simplifies the content, so that advanced-level students, practitioners and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and Management Science readers focused on information security, electronic commerce, databases, data mining, machine learning, and statistics are the primary buyers for this reference book.
    LCSH
    Computer science
    Computer Communication Networks
    Subject
    Computer science
    Computer Communication Networks
  6. Das, A.K.; Mishra, S.: S R Ranganathan in Google Scholar and other citation databases (2015) 0.06
    
    Abstract
    This paper analyses the scholarly contribution of S R Ranganathan as reflected in Google Scholar Citations, Web of Science, and Scopus. It also identifies the popularity of his published works, particularly those most frequently cited by researchers and LIS curriculum designers. His three most highly cited books are Prolegomena to Library Classification, The Five Laws of Library Science, and Colon Classification. His three most highly cited journal articles are "Hidden Roots of Classification", "Subject Heading and Facet Analysis", and "Colon Classification Edition 7 (1971): A Preview". The paper further identifies the articles that cite his works extensively and have themselves received considerable citations from other researchers. The top citing journal articles are "The Need for a Faceted Classification as the Basis of All Methods of Information Retrieval", "Ranganathan and the Net: Using Facet Analysis to Search and Organise the World Wide Web", and "Grounded Classification: Grounded Theory and Faceted Classification". These citing articles indicate that Ranganathan remains highly relevant to today's researchers in interdisciplinary areas, particularly in the fields of computer applications and information systems.
  7. Next generation search engines : advanced models for information retrieval (2012) 0.06
    
    Abstract
    The main goal of this book is to transfer new research results from the fields of advanced computer sciences and information science to the design of new search engines. The readers will have a better idea of the new trends in applied research. The achievement of relevant, organized, sorted, and workable answers - to name but a few - from a search is becoming a daily need for enterprises and organizations, and, to a greater extent, for anyone. It does not consist of getting access to structural information as in standard databases; nor does it consist of searching information strictly by way of a combination of key words. It goes far beyond that. Whatever its modality, the information sought should be identified by the topics it contains, that is to say by its textual, audio, video or graphical contents. This is not a new issue. However, recent technological advances have completely changed the techniques being used. New Web technologies, the emergence of Intranet systems and the abundance of information on the Internet have created the need for efficient search and information access tools.
    Recent technological progress in computer science, Web technologies, and constantly evolving information available on the Internet has drastically changed the landscape of search and access to information. Web search has significantly evolved in recent years. In the beginning, web search engines such as Google and Yahoo! were only providing search service over text documents. Aggregated search was one of the first steps to go beyond text search, and was the beginning of a new era for information seeking and retrieval. These days, new web search engines support aggregated search over a number of verticals, and blend different types of documents (e.g., images, videos) in their search results. New search engines employ advanced techniques involving machine learning, computational linguistics and psychology, user interaction and modeling, information visualization, Web engineering, artificial intelligence, distributed systems, social networks, statistical analysis, semantic analysis, and technologies over query sessions. Documents no longer exist on their own; they are connected to other documents, they are associated with users and their position in a social network, and they can be mapped onto a variety of ontologies. Similarly, retrieval tasks have become more interactive and are solidly embedded in a user's geospatial, social, and historical context. It is conjectured that new breakthroughs in information retrieval will not come from smarter algorithms that better exploit existing information sources, but from new retrieval algorithms that can intelligently use and combine new sources of contextual metadata.
    With the rapid growth of web-based applications, such as search engines, Facebook, and Twitter, the development of effective and personalized information retrieval techniques and of user interfaces is essential. The amount of shared information and of social networks has also considerably grown, requiring metadata for new sources of information, like Wikipedia and ODP. These metadata have to provide classification information for a wide range of topics, as well as for social networking sites like Twitter, and Facebook, each of which provides additional preferences, tagging information and social contexts. Due to the explosion of social networks and other metadata sources, it is an opportune time to identify ways to exploit such metadata in IR tasks such as user modeling, query understanding, and personalization, to name a few. Although the use of traditional metadata such as html text, web page titles, and anchor text is fairly well-understood, the use of category information, user behavior data, and geographical information is just beginning to be studied. This book is intended for scientists and decision-makers who wish to gain working knowledge about search engines in order to evaluate available solutions and to dialogue with software and data providers.
    Content
    Contains the contributions: Das, A., A. Jain: Indexing the World Wide Web: the journey so far. Ke, W.: Decentralized search and the clustering paradox in large scale information networks. Roux, M.: Metadata for search engines: what can be learned from e-Sciences? Fluhr, C.: Crosslingual access to photo databases. Djioua, B., J.-P. Desclés and M. Alrahabi: Searching and mining with semantic categories. Ghorbel, H., A. Bahri and R. Bouaziz: Fuzzy ontologies building platform for Semantic Web: FOB platform. Lassalle, E., E. Lassalle: Semantic models in information retrieval. Berry, M.W., R. Esau and B. Kiefer: The use of text mining techniques in electronic discovery for legal matters. Sleem-Amer, M., I. Bigorgne, S. Brizard et al.: Intelligent semantic search engines for opinion and sentiment mining. Hoeber, O.: Human-centred Web search.
    Vert, S.: Extensions of Web browsers useful to knowledge workers. Chen, L.-C.: Next generation search engine for the result clustering technology. Biskri, I., L. Rompré: Using association rules for query reformulation. Habernal, I., M. Konopík u. O. Rohlík: Question answering. Grau, B.: Finding answers to questions, in text collections or Web, in open domain or specialty domains. Berri, J., R. Benlamri: Context-aware mobile search engine. Bouidghaghen, O., L. Tamine: Spatio-temporal based personalization for mobile search. Chaudiron, S., M. Ihadjadene: Studying Web search engines from a user perspective: key concepts and main approaches. Karaman, F.: Artificial intelligence enabled search engines (AIESE) and the implications. Lewandowski, D.: A framework for evaluating the retrieval effectiveness of search engines.
    LCSH
    User interfaces (Computer systems)
    Subject
    User interfaces (Computer systems)
  8. Crespo, J.A.; Herranz, N.; Li, Y.; Ruiz-Castillo, J.: The effect on citation inequality of differences in citation practices at the Web of Science subject category level (2014) 0.06
    
    Abstract
    This article studies the impact of differences in citation practices at the subfield, or Web of Science subject category level, using the model introduced in Crespo, Li, and Ruiz-Castillo (2013a), according to which the number of citations received by an article depends on its underlying scientific influence and the field to which it belongs. We use the same Thomson Reuters data set of about 4.4 million articles used in Crespo et al. (2013a) to analyze 22 broad fields. The main results are the following: First, when the classification system goes from 22 fields to 219 subfields the effect on citation inequality of differences in citation practices increases from ≈14% at the field level to 18% at the subfield level. Second, we estimate a set of exchange rates (ERs) over a wide [660, 978] citation quantile interval to express the citation counts of articles into the equivalent counts in the all-sciences case. In the fractional case, for example, we find that in 187 of 219 subfields the ERs are reliable in the sense that the coefficient of variation is smaller than or equal to 0.10. Third, in the fractional case the normalization of the raw data using the ERs (or subfield mean citations) as normalization factors reduces the importance of the differences in citation practices from 18% to 3.8% (3.4%) of overall citation inequality. Fourth, the results in the fractional case are essentially replicated when we adopt a multiplicative approach.
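The normalization step the abstract describes (dividing raw citation counts by a subfield factor such as the subfield mean, so that counts become comparable across citation cultures) can be sketched with toy numbers; the data below is invented for illustration:

```python
# Invented citation counts for two subfields with different citation cultures.
citations = {
    "math":   [1, 2, 3],
    "biomed": [10, 20, 30],
}

def normalize(counts_by_field):
    # Divide each raw count by its subfield mean; the mean plays the role
    # of an exchange rate between citation cultures.
    out = {}
    for field, counts in counts_by_field.items():
        mean = sum(counts) / len(counts)
        out[field] = [c / mean for c in counts]
    return out

# After normalization both subfields have mean 1, so differences in
# citation practice no longer dominate cross-field comparisons.
print(normalize(citations))
```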
    Object
    Web of Science
  9. Witten, I.H.; Bainbridge, M.; Nichols, D.M.: How to build a digital library (2010) 0.06
    
    Abstract
    "How to Build a Digital Library" is the only book that offers all the knowledge and tools needed to construct and maintain a digital library, regardless of its size or purpose. It is the perfectly self-contained resource for individuals, agencies, and institutions wishing to put this powerful tool to work in their burgeoning information treasuries. The second edition reflects new developments in the field as well as in the Greenstone Digital Library open source software. In Part I, the authors have added an entire new chapter on user groups, user support, collaborative browsing, user contributions, and so on. There is also new material on content-based queries, map-based queries, and cross-media queries. Increased emphasis is placed on multimedia by adding a 'digitizing' section to each major media type. A new chapter has also been added on 'internationalization', which addresses Unicode standards, multi-language interfaces and collections, and issues with non-European languages (Chinese, Hindi, etc.). Part II, the software tools section, has been completely rewritten to reflect the new developments in Greenstone Digital Library Software, an internationally popular open source software tool with a comprehensive graphical facility for creating and maintaining digital libraries. As with the first edition, a web site, implemented as a digital library, will accompany the book and provide access to color versions of all figures, two online appendices, a full-text sentence-level index, and an automatically generated glossary of acronyms and their definitions. In addition, demonstration digital library collections will be included to illustrate particular points in the book. To access the online content, please visit the associated website. This title outlines the history of libraries - both traditional and digital - and their impact on present practices and future directions. It is written for both technical and non-technical audiences and covers the entire spectrum of media, including text, images, audio, video, and related XML standards. It is web-enhanced with software documentation, color illustrations, a full-text index, source code, and more.
    LCSH
    Digital libraries / Collection development / Computer programs
    Subject
    Digital libraries / Collection development / Computer programs
  10. Joint, N.: Web 2.0 and the library : a transformational technology? (2010) 0.05
    0.05485157 = product of:
      0.10970314 = sum of:
        0.03853567 = weight(_text_:wide in 4202) [ClassicSimilarity], result of:
          0.03853567 = score(doc=4202,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.1958137 = fieldWeight in 4202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4202)
        0.059131898 = weight(_text_:web in 4202) [ClassicSimilarity], result of:
          0.059131898 = score(doc=4202,freq=16.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.4079388 = fieldWeight in 4202, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4202)
        0.012035574 = product of:
          0.024071148 = sum of:
            0.024071148 = weight(_text_:22 in 4202) [ClassicSimilarity], result of:
              0.024071148 = score(doc=4202,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.15476047 = fieldWeight in 4202, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4202)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    Purpose - This paper is the final one in a series which has tried to give an overview of so-called transformational areas of digital library technology. The aim has been to assess how much real transformation these applications can bring about, in terms of creating genuine user benefit and also changing everyday library practice. Design/methodology/approach - The paper provides a summary of some of the legal and ethical issues associated with web 2.0 applications in libraries, associated with a brief retrospective view of some relevant literature. Findings - Although web 2.0 innovations have had a massive impact on the larger World Wide Web, the practical impact on library service delivery has been limited to date. What probably can be termed transformational in the effect of web 2.0 developments on library and information work is their effect on some underlying principles of professional practice. Research limitations/implications - The legal and ethical challenges of incorporating web 2.0 platforms into mainstream institutional service delivery need to be subject to further research, so that the risks associated with these innovations are better understood at the strategic and policy-making level. Practical implications - This paper makes some recommendations about new principles of library and information practice which will help practitioners make better sense of these innovations in their overall information environment. Social implications - The paper puts in context some of the more problematic social impacts of web 2.0 innovations, without denying the undeniable positive contribution of social networking to the sphere of human interactivity. Originality/value - This paper raises some cautionary points about web 2.0 applications without adopting a precautionary approach of total prohibition. However, none of the suggestions or analysis in this piece should be considered to constitute legal advice. 
If such advice is required, the reader should consult appropriate legal professionals.
    Date
    22. 1.2011 17:54:04
  11. Das, S.; Roy, S.: Faceted ontological model for brain tumour study (2016) 0.05
    0.054778837 = product of:
      0.10955767 = sum of:
        0.04816959 = weight(_text_:wide in 2831) [ClassicSimilarity], result of:
          0.04816959 = score(doc=2831,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.24476713 = fieldWeight in 2831, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2831)
        0.04634362 = weight(_text_:computer in 2831) [ClassicSimilarity], result of:
          0.04634362 = score(doc=2831,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28550854 = fieldWeight in 2831, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2831)
        0.0150444675 = product of:
          0.030088935 = sum of:
            0.030088935 = weight(_text_:22 in 2831) [ClassicSimilarity], result of:
              0.030088935 = score(doc=2831,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.19345059 = fieldWeight in 2831, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2831)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    The purpose of this work is to develop an ontology-based framework for an information retrieval system that caters to users' specific queries. To create such an ontology, information was obtained from a wide range of information sources involved with brain tumour study and research. The information thus obtained was compiled and analysed to provide a standard, reliable and relevant information base to aid our proposed system. Facet-based methodology has been used for ontology formalization for quite some time. Ontology formalization involves different steps such as identification of the terminology, analysis, synthesis, standardization and ordering. A vast majority of the ontologies being developed nowadays lack flexibility, which becomes a formidable constraint when it comes to interoperability. We found that a facet-based method provides a distinct guideline for the development of a robust and flexible model concerning the domain of brain tumours. Our attempt has been to bridge library and information science with computer science, an effort that itself involved an experimental approach. The faceted approach proved enduring, as it supports properties like navigation, exploration and faceted browsing. The computer-based brain tumour ontology supports the work of researchers towards gathering information on brain tumour research and allows users across the world to intelligently access new scientific information quickly and efficiently.
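The faceted browsing the abstract describes can be illustrated with a minimal sketch. The facet names (anatomy, tumour_type) and toy records below are hypothetical examples, not taken from the ontology itself.

```python
# Hedged sketch of faceted filtering over a toy record set. Facet
# names and records are invented for illustration only.

RECORDS = [
    {"title": "Glioma imaging survey", "anatomy": "cerebrum",
     "tumour_type": "glioma"},
    {"title": "Meningioma case study", "anatomy": "meninges",
     "tumour_type": "meningioma"},
    {"title": "Pediatric glioma genetics", "anatomy": "cerebellum",
     "tumour_type": "glioma"},
]

def facet_filter(records, **facets):
    """Keep records matching every requested facet value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in facets.items())]

def facet_counts(records, facet):
    """Count values of one facet (the numbers shown beside facet links)."""
    counts = {}
    for r in records:
        counts[r[facet]] = counts.get(r[facet], 0) + 1
    return counts
```

Because each facet filters independently, users can combine facets in any order, which is what gives faceted systems their navigational flexibility.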
    Date
    12. 3.2016 13:21:22
  12. Yang, S.; Han, R.; Ding, J.; Song, Y.: ¬The distribution of Web citations (2012) 0.05
    0.0547482 = product of:
      0.16424459 = sum of:
        0.108632244 = weight(_text_:web in 2735) [ClassicSimilarity], result of:
          0.108632244 = score(doc=2735,freq=24.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.7494315 = fieldWeight in 2735, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
        0.05561234 = weight(_text_:computer in 2735) [ClassicSimilarity], result of:
          0.05561234 = score(doc=2735,freq=4.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.34261024 = fieldWeight in 2735, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
      0.33333334 = coord(2/6)
    
    Abstract
    A substantial amount of research has focused on the persistence or availability of Web citations. The present study analyzes Web citation distributions. Web citations are defined as the mentions of the URLs of Web pages (Web resources) as references in academic papers. The present paper primarily focuses on the analysis of the URLs of Web citations and uses three sets of data, namely, Set 1 from the Humanities and Social Science Index in China (CSSCI, 1998-2009), Set 2 from the publications of two international computer science societies, Communications of the ACM and IEEE Computer (1995-1999), and Set 3 from the medical science database, MEDLINE, of the National Library of Medicine (1994-2006). Web citation distributions are investigated based on Web site types, Web page types, URL frequencies, URL depths, URL lengths, and year of article publication. Results show significant differences in the Web citation distributions among the three data sets. However, when the URLs of Web citations with the same hostnames are aggregated, the distributions in the three data sets are consistent with the power law (the Lotka function).
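The aggregation step described at the end of the abstract (grouping Web citations by the hostname of the cited URL before examining the frequency distribution) can be sketched as follows; the URLs are invented examples, not items from the three data sets.

```python
# Hedged sketch: aggregate citation URLs by hostname and tally
# frequencies, the step after which the distributions were found
# consistent with the power law (Lotka function). URLs are invented.

from urllib.parse import urlparse
from collections import Counter

def hostname_frequencies(urls):
    """Count how often each hostname appears among cited URLs."""
    return Counter(urlparse(u).netloc for u in urls)

cited = [
    "http://www.acm.org/pubs/a.html",
    "http://www.acm.org/pubs/b.html",
    "http://www.nlm.nih.gov/medline/x",
    "http://www.acm.org/dl/c.pdf",
]

freq = hostname_frequencies(cited)
```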
  13. Reasoning Web : Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures (2017) 0.05
    0.05331619 = product of:
      0.15994857 = sum of:
        0.08667288 = weight(_text_:web in 3934) [ClassicSimilarity], result of:
          0.08667288 = score(doc=3934,freq=22.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.59793836 = fieldWeight in 3934, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
        0.07327569 = weight(_text_:computer in 3934) [ClassicSimilarity], result of:
          0.07327569 = score(doc=3934,freq=10.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.45142862 = fieldWeight in 3934, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3934)
      0.33333334 = coord(2/6)
    
    Abstract
    This volume contains the lecture notes of the 13th Reasoning Web Summer School, RW 2017, held in London, UK, in July 2017. In 2017, the theme of the school was "Semantic Interoperability on the Web", which encompasses subjects such as data integration, open data management, reasoning over linked data, database to ontology mapping, query answering over ontologies, hybrid reasoning with rules and ontologies, and ontology-based dynamic systems. The papers of this volume focus on these topics and also address foundational reasoning techniques used in answer set programming and ontologies.
    Content
    Neumaier, Sebastian (et al.): Data Integration for Open Data on the Web - Stamou, Giorgos (et al.): Ontological Query Answering over Semantic Data - Calì, Andrea: Ontology Querying: Datalog Strikes Back - Sequeda, Juan F.: Integrating Relational Databases with the Semantic Web: A Reflection - Rousset, Marie-Christine (et al.): Datalog Revisited for Reasoning in Linked Data - Kaminski, Roland (et al.): A Tutorial on Hybrid Answer Set Solving with clingo - Eiter, Thomas (et al.): Answer Set Programming with External Source Access - Lukasiewicz, Thomas: Uncertainty Reasoning for the Semantic Web - Calvanese, Diego (et al.): OBDA for Log Extraction in Process Mining
    LCSH
    Computer science
    Computer Science
    RSWK
    Ontologie <Wissensverarbeitung> / Semantic Web
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Subject
    Ontologie <Wissensverarbeitung> / Semantic Web
    Computer science
    Computer Science
    Theme
    Semantic Web
  14. Lukasiewicz, T.: Uncertainty reasoning for the Semantic Web (2017) 0.05
    0.05187861 = product of:
      0.15563582 = sum of:
        0.10975798 = weight(_text_:web in 3939) [ClassicSimilarity], result of:
          0.10975798 = score(doc=3939,freq=18.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.75719774 = fieldWeight in 3939, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3939)
        0.04587784 = weight(_text_:computer in 3939) [ClassicSimilarity], result of:
          0.04587784 = score(doc=3939,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 3939, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3939)
      0.33333334 = coord(2/6)
    
    Abstract
    The Semantic Web has attracted much attention, both from academia and industry. An important role in research towards the Semantic Web is played by formalisms and technologies for handling uncertainty and/or vagueness. In this paper, I first provide some motivating examples for handling uncertainty and/or vagueness in the Semantic Web. I then give an overview of some own formalisms for handling uncertainty and/or vagueness in the Semantic Web.
    Series
    Lecture Notes in Computer Science; 10370 (Information Systems and Applications, incl. Internet/Web, and HCI)
    Source
    Reasoning Web: Semantic Interoperability on the Web, 13th International Summer School 2017, London, UK, July 7-11, 2017, Tutorial Lectures. Eds.: Ianni, G. et al
    Theme
    Semantic Web
  15. Rogers, R.: Digital methods (2013) 0.05
    0.051765166 = product of:
      0.15529549 = sum of:
        0.07707134 = weight(_text_:wide in 2354) [ClassicSimilarity], result of:
          0.07707134 = score(doc=2354,freq=8.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.3916274 = fieldWeight in 2354, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
        0.078224145 = weight(_text_:web in 2354) [ClassicSimilarity], result of:
          0.078224145 = score(doc=2354,freq=28.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.5396523 = fieldWeight in 2354, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2354)
      0.33333334 = coord(2/6)
    
    Abstract
    In Digital Methods, Richard Rogers proposes a methodological outlook for social and cultural scholarly research on the Web that seeks to move Internet research beyond the study of online culture. It is not a toolkit for Internet research, or operating instructions for a software package; it deals with broader questions. How can we study social media to learn something about society rather than about social media use? How can hyperlinks reveal not just the value of a Web site but the politics of association? Rogers proposes repurposing Web-native techniques for research into cultural change and societal conditions. We can learn to reapply such "methods of the medium" as crawling and crowd sourcing, PageRank and similar algorithms, tag clouds and other visualizations; we can learn how they handle hits, likes, tags, date stamps, and other Web-native objects. By "thinking along" with devices and the objects they handle, digital research methods can follow the evolving methods of the medium. Rogers uses this new methodological outlook to examine the findings of inquiries into 9/11 search results, the recognition of climate change skeptics by climate-change-related Web sites, the events surrounding the Srebrenica massacre according to Dutch, Serbian, Bosnian, and Croatian Wikipedias, presidential candidates' social media "friends," and the censorship of the Iranian Web. With Digital Methods, Rogers introduces a new vision and method for Internet research and at the same time applies them to the Web's objects of study, from tiny particles (hyperlinks) to large masses (social media).
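One of the "methods of the medium" the abstract names, PageRank, can be sketched with plain power iteration on a toy link graph. The graph and the damping factor of 0.85 (the conventional choice) are illustrative assumptions, not material from the book.

```python
# Hedged sketch of PageRank via power iteration on a tiny invented
# link graph. links maps each node to its outbound link targets.

def pagerank(links, damping=0.85, iterations=50):
    """Return an approximate PageRank score for every node."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iterations):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank over all nodes
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
rank = pagerank(graph)
```

Here "c" ranks highest because it is linked from both "a" and "b", which is the kind of link-derived "politics of association" the book proposes repurposing for research.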
    Content
    The end of the virtual : digital methods -- The link and the politics of Web space -- The website as archived object -- Googlization and the inculpable engine -- Search as research -- National Web studies -- Social media and post-demographics -- Wikipedia as cultural reference -- After cyberspace : big data, small data.
    LCSH
    Web search engines
    World Wide Web / Research
    RSWK
    Internet / Recherche / World Wide Web 2.0
    Subject
    Internet / Recherche / World Wide Web 2.0
    Web search engines
    World Wide Web / Research
  16. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.05
    0.051763047 = product of:
      0.10352609 = sum of:
        0.036585998 = weight(_text_:web in 3283) [ClassicSimilarity], result of:
          0.036585998 = score(doc=3283,freq=2.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.25239927 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.04587784 = weight(_text_:computer in 3283) [ClassicSimilarity], result of:
          0.04587784 = score(doc=3283,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.28263903 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.021062255 = product of:
          0.04212451 = sum of:
            0.04212451 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.04212451 = score(doc=3283,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Series
    Communications in computer and information science; 672
    Theme
    Semantic Web
  17. Hypén, K.; Mäkelä, E.: ¬An ideal model for an information system for fiction and its application : Kirjasampo and Semantic Web (2011) 0.05
    0.050733075 = product of:
      0.10146615 = sum of:
        0.033718713 = weight(_text_:wide in 4550) [ClassicSimilarity], result of:
          0.033718713 = score(doc=4550,freq=2.0), product of:
            0.19679762 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.044416238 = queryNorm
            0.171337 = fieldWeight in 4550, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4550)
        0.04480851 = weight(_text_:web in 4550) [ClassicSimilarity], result of:
          0.04480851 = score(doc=4550,freq=12.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.3091247 = fieldWeight in 4550, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4550)
        0.02293892 = weight(_text_:computer in 4550) [ClassicSimilarity], result of:
          0.02293892 = score(doc=4550,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.14131951 = fieldWeight in 4550, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4550)
      0.5 = coord(3/6)
    
    Abstract
    Purpose - Library Director Jarmo Saarti introduced a wide or ideal model for fiction in literature in his dissertation, published in 1999. It introduces those aspects that should be included in an information system for fiction. Such aspects include literary prose and its intertextual references to other works, the writer, readers' and critics' receptions of the work as well as a researcher's view. It is also important to note how libraries approach a literary work by means of inventory, classification and content description. The most ambiguous of the aspects relates to that context in cultural history, which the work reflects and is a part of. The paper aims to discuss these issues. Design/methodology/approach - Since the model consists of several components which are not found in present library information systems and cannot be implemented by them, a new way had to be found to produce, save, process and present fiction-related metadata. The Semantic Computing Research Group of Aalto University has developed several Semantic Web services for use in the field of culture, so cooperation with it and the use of Semantic Web tools were a natural starting point for the construction of the new service. Kirjasampo will be based on the Semantic Web RDF data model. The model enables a flexible linking of metadata derived from different sources, and it can be used to build a Semantic Web that can be approached contextually from different angles. Findings - The "semantically enriched" ideal model for fiction has hence been realised, at least to some extent: Kirjasampo supports literature-related metadata that is more varied than earlier and aims to account for different contexts within literature and connections with regard to other cultural phenomena. It also includes contemporary reviews of works and, as such, readers' receptions as well. Modern readers can share their views on works, once the user interface of the server is completed. 
It will include several features from the Kirjasto 2.0 application, which enables the evaluation, description and recommendation of works. The service should be online by the end of Spring 2011. Research limitations/implications - The project involves novel collaboration between a public library and a computer science research unit, and utilises a novel approach to the description of fiction. Practical implications - The system encourages user participation in the description of fiction and is of practical benefit to librarians in understanding both how fiction is organised and how users interpret it. Originality/value - Upon completion, the service will be the first Finnish information system for libraries built with the tools of the Semantic Web, offering a completely new user environment and application for data produced by libraries. It also strives to create a new model for saving and producing data, available to both library professionals and readers. The aim is to save, accumulate and distribute literary knowledge, experiences and tacit knowledge.
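The RDF data model the abstract relies on can be sketched as a set of subject-predicate-object triples that can be approached "contextually from different angles". The prefixes and resource names below are hypothetical illustrations, not Kirjasampo's actual schema.

```python
# Hedged sketch of RDF-style triples and bidirectional lookup.
# Prefixes (ex:, dc:, foaf:) and resources are invented examples.

TRIPLES = {
    ("ex:work1", "dc:title", "Seitsemän veljestä"),
    ("ex:work1", "dc:creator", "ex:kivi"),
    ("ex:kivi", "foaf:name", "Aleksis Kivi"),
    ("ex:work1", "ex:reviewedIn", "ex:review1"),
}

def objects(triples, subject, predicate):
    """All objects for a given subject/predicate pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def subjects(triples, predicate, obj):
    """Inverse lookup: enter the graph from the other direction."""
    return {s for s, p, o in triples if p == predicate and o == obj}
```

Because every statement is an independent triple, metadata from different sources (catalogue records, reviews, reader contributions) can be merged by simply taking the union of the sets.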
  18. Ceri, S.; Bozzon, A.; Brambilla, M.; Della Valle, E.; Fraternali, P.; Quarteroni, S.: Web Information Retrieval (2013) 0.05
    0.050485164 = product of:
      0.10097033 = sum of:
        0.062718846 = weight(_text_:web in 1082) [ClassicSimilarity], result of:
          0.062718846 = score(doc=1082,freq=18.0), product of:
            0.14495286 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.044416238 = queryNorm
            0.43268442 = fieldWeight in 1082, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=1082)
        0.02621591 = weight(_text_:computer in 1082) [ClassicSimilarity], result of:
          0.02621591 = score(doc=1082,freq=2.0), product of:
            0.16231956 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.044416238 = queryNorm
            0.16150802 = fieldWeight in 1082, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=1082)
        0.012035574 = product of:
          0.024071148 = sum of:
            0.024071148 = weight(_text_:22 in 1082) [ClassicSimilarity], result of:
              0.024071148 = score(doc=1082,freq=2.0), product of:
                0.1555381 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044416238 = queryNorm
                0.15476047 = fieldWeight in 1082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1082)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    With the proliferation of huge amounts of (heterogeneous) data on the Web, the importance of information retrieval (IR) has grown considerably in recent years. Big players in the computer industry, such as Google, Microsoft and Yahoo!, are the primary contributors of technology for fast access to Web-based information, and search capabilities are now integrated into most information systems, ranging from business management software and customer relationship systems to social networks and mobile phone applications. Ceri and his co-authors aim to take their readers from the foundations of modern information retrieval to the most advanced challenges of Web IR. To this end, their book is divided into three parts. The first part addresses the principles of IR and provides a systematic and compact description of basic information retrieval techniques (including binary, vector space and probabilistic models, as well as natural language search processing) before focusing on their application to the Web. Part two addresses the foundational aspects of Web IR by discussing the general architecture of search engines (with a focus on the crawling and indexing processes), describing link analysis methods (specifically PageRank and HITS), addressing recommendation and diversification, and finally presenting advertising in search, the main source of revenue for search engines. The third and final part describes advanced aspects of Web search, with each chapter providing a self-contained, up-to-date survey of a current direction in Web research. Topics in this part include meta-search and multi-domain search, semantic search, search over multimedia data, and crowd search. The book is ideally suited to courses on information retrieval, as it covers all of the Web-independent foundational aspects; its presentation is self-contained and does not require prior background knowledge. It can also be used in the context of classic courses on data management, allowing the instructor to cover both structured and unstructured data in various formats. Classroom use is facilitated by a set of slides, which can be downloaded from www.search-computing.org.
    Date
    16.10.2013 19:22:44
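The link-analysis methods surveyed in part two can be illustrated with a minimal sketch of PageRank by power iteration. The link graph below is an invented toy example, not material from the book:

```python
# Minimal PageRank by power iteration over a tiny, hand-made link graph.
# links: dict mapping each page to the list of pages it links to.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling node: spread its rank uniformly over all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
```

Page "c", which receives links from both "a" and "b", ends up ranked above "b", which is linked only by "a"; the ranks always sum to 1.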
  19. Gnoli, C.; Merli, G.; Pavan, G.; Bernuzzi, E.; Priano, M.: Freely faceted classification for a Web-based bibliographic archive : the BioAcoustic Reference Database (2010) 0.05
    Abstract
    The Integrative Level Classification (ILC) research project is experimenting with a knowledge organization system based on phenomena rather than disciplines. Each phenomenon has a constant notation, which can be combined with that of any other phenomenon in a freely faceted structure. Citation order can express the differential focality of the facets. Very specific subjects can have long classmarks, although their complexity is reduced by various devices. Freely faceted classification is being tested by indexing a corpus of about 3300 papers in the interdisciplinary domain of bioacoustics. The subjects of these papers often include phenomena from a wide variety of integrative levels (mechanical waves, animals, behaviour, vessels, fishing, law, ...) as well as information about the methods of study, as predicted in the León Manifesto. The archive is recorded in a MySQL database, and can be fed and searched through PHP Web interfaces. The indexer's work is made easier by mechanisms that suggest possible classes on the basis of matching title words with terms in the ILC schedules, and that automatically synthesize the verbal caption corresponding to the classmark being edited. Users can search the archive by selecting and combining values in each facet. Search refinement should be improved, especially for cases where no record, or too many records, matches the faceted query. However, experience is being gained progressively, showing that freely faceted classification by phenomena, theories, and methods is feasible and works successfully.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Hrsg.: J. Sieglerschmidt u. H.P.Ohly
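The class-suggestion mechanism described in the abstract, matching title words against terms in the classification schedules, can be sketched roughly as follows. This is an illustrative toy, not the actual ILC software, and the schedule terms and classmarks below are invented:

```python
# Toy sketch of suggesting candidate classes by matching title words
# against caption terms in a classification schedule.
# The term -> classmark entries here are invented placeholders.

SCHEDULE = {
    "whale": "mq",
    "song": "pv",
    "vessel": "vk",
    "noise": "gw",
}

def suggest_classes(title, schedule=SCHEDULE):
    """Return classmarks whose caption terms appear as words in the title."""
    words = {w.strip(".,;:()").lower() for w in title.split()}
    return sorted(schedule[term] for term in schedule if term in words)

suggestions = suggest_classes("Effects of vessel noise on whale song")
```

A title mentioning vessels, noise, whales and song would thus surface four candidate classmarks for the indexer to confirm or discard.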
  20. Victorino, M.; Terto de Holanda, M.; Ishikawa, E.; Costa Oliveira, E.; Chhetri, S.: Transforming open data to linked open data using ontologies for information organization in big data environments of the Brazilian Government : the Brazilian database Government Open Linked Data - DBgoldbr (2018) 0.05
    Abstract
    The Brazilian Government has made a massive volume of structured, semi-structured and non-structured public data available on the web to ensure that the administration is as transparent as possible. Subsequently, providing applications with enough capability to handle this "big data environment", so that vital and decisive information is readily accessible, has become a tremendous challenge. In this environment, data processing is done via new approaches in the area of information and computer science, involving technologies and processes for collecting, representing, storing and disseminating information. Along these lines, this paper presents a conceptual model, the technical architecture and the prototype implementation of a tool, called DBgoldbr, designed to classify government public information with the help of ontologies, by transforming open data into linked open data. To achieve this objective, we used soft systems methodology to identify problems, to collect users' needs and to design solutions according to the objectives of specific groups. The DBgoldbr tool was designed to facilitate the search for open data made available by many Brazilian government institutions, so that this data can be reused to support the evaluation and monitoring of social programs, in order to support the design and management of public policies.
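The core step the abstract describes, transforming tabular open data into linked open data, amounts to mapping record fields onto RDF triples. A minimal sketch in N-Triples syntax follows; the base and vocabulary URIs are invented placeholders, not the actual DBgoldbr ontology:

```python
# Hypothetical sketch: map one tabular open-data record (field -> value)
# to RDF triples serialized in N-Triples syntax.
# The namespace URIs are placeholders, not the real DBgoldbr vocabulary.

BASE = "http://example.org/resource/"
VOCAB = "http://example.org/vocab/"

def row_to_triples(row_id, row):
    """Return one N-Triples line per field of the record."""
    subject = f"<{BASE}{row_id}>"
    triples = []
    for field, value in row.items():
        predicate = f"<{VOCAB}{field}>"
        triples.append(f'{subject} {predicate} "{value}" .')
    return triples

record = {"program": "Bolsa Familia", "year": "2018"}
triples = row_to_triples("program/1", record)
```

Once every record is expressed as triples against a shared ontology, the data can be loaded into a triple store and queried, and linked, across institutions with SPARQL.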

Types

  • a 1370
  • el 134
  • m 125
  • s 55
  • x 14
  • b 4
  • r 4
  • i 1
  • n 1
  • p 1
