Search (2240 results, page 1 of 112)

  • Filter: year_i:[2010 TO 2020}
  1. Calì, A. et al.: Processing keyword queries under access limitations (2016) 0.15
    0.1452428 = product of:
      0.21786419 = sum of:
        0.056362033 = weight(_text_:data in 4233) [ClassicSimilarity], result of:
          0.056362033 = score(doc=4233,freq=2.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.34936053 = fieldWeight in 4233, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=4233)
        0.16150215 = sum of:
          0.09237652 = weight(_text_:processing in 4233) [ClassicSimilarity], result of:
            0.09237652 = score(doc=4233,freq=2.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.4472613 = fieldWeight in 4233, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.078125 = fieldNorm(doc=4233)
          0.06912562 = weight(_text_:22 in 4233) [ClassicSimilarity], result of:
            0.06912562 = score(doc=4233,freq=2.0), product of:
              0.1786648 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051020417 = queryNorm
              0.38690117 = fieldWeight in 4233, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4233)
      0.6666667 = coord(2/3)
    
    Date
    1. 2.2016 18:25:22
    Source
    Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al
  2. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.14
    0.14256412 = sum of:
      0.081033945 = product of:
        0.24310184 = sum of:
          0.24310184 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
            0.24310184 = score(doc=400,freq=2.0), product of:
              0.43255165 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.051020417 = queryNorm
              0.56201804 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.33333334 = coord(1/3)
      0.033817217 = weight(_text_:data in 400) [ClassicSimilarity], result of:
        0.033817217 = score(doc=400,freq=2.0), product of:
          0.16132914 = queryWeight, product of:
            3.1620505 = idf(docFreq=5088, maxDocs=44218)
            0.051020417 = queryNorm
          0.2096163 = fieldWeight in 400, product of:
            1.4142135 = tf(freq=2.0), with freq of:
              2.0 = termFreq=2.0
            3.1620505 = idf(docFreq=5088, maxDocs=44218)
            0.046875 = fieldNorm(doc=400)
      0.027712956 = product of:
        0.055425912 = sum of:
          0.055425912 = weight(_text_:processing in 400) [ClassicSimilarity], result of:
            0.055425912 = score(doc=400,freq=2.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.26835677 = fieldWeight in 400, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046875 = fieldNorm(doc=400)
        0.5 = coord(1/2)
    
    Abstract
    On a scientific concept hierarchy, a parent concept may have a few attributes, each of which takes multiple values forming a group of child concepts. We call these attributes facets: classification has a few facets such as application (e.g., face recognition), model (e.g., svm, knn), and metric (e.g., precision). In this work, we aim at building faceted concept hierarchies from scientific literature. Hierarchy construction methods rely heavily on hypernym detection; the faceted relations, however, are parent-to-child links, whereas the hypernym relation is a multi-hop, i.e. ancestor-to-descendant, link with the specific facet "type-of". We use information extraction techniques to find synonyms, sibling concepts, and ancestor-descendant relations from a data science corpus. We then propose a hierarchy growth algorithm that infers parent-child links from these three types of relationships and resolves conflicts by maintaining the acyclic structure of the hierarchy.
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
    Source
    Graph-Based Methods for Natural Language Processing - proceedings of the Thirteenth Workshop (TextGraphs-13): November 4, 2019, Hong Kong : EMNLP-IJCNLP 2019. Ed.: Dmitry Ustalov
  3. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.13
    0.12761241 = product of:
      0.19141862 = sum of:
        0.13505659 = product of:
          0.40516973 = sum of:
            0.40516973 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40516973 = score(doc=1826,freq=2.0), product of:
                0.43255165 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051020417 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
        0.056362033 = weight(_text_:data in 1826) [ClassicSimilarity], result of:
          0.056362033 = score(doc=1826,freq=2.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.34936053 = fieldWeight in 1826, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=1826)
      0.6666667 = coord(2/3)
    
    Content
    Presentation given at the European Conference on Data Analysis (ECDA 2014), Bremen, Germany, July 2-4, 2014, LIS Workshop.
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  4. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.12
    0.1232426 = product of:
      0.1848639 = sum of:
        0.04782477 = weight(_text_:data in 3355) [ClassicSimilarity], result of:
          0.04782477 = score(doc=3355,freq=4.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.29644224 = fieldWeight in 3355, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.13703912 = sum of:
          0.07838408 = weight(_text_:processing in 3355) [ClassicSimilarity], result of:
            0.07838408 = score(doc=3355,freq=4.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.3795138 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
          0.05865504 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
            0.05865504 = score(doc=3355,freq=4.0), product of:
              0.1786648 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051020417 = queryNorm
              0.32829654 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
      0.6666667 = coord(2/3)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    LCSH
    Communication in science / Data processing
    Subject
    Communication in science / Data processing
  5. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.12
    0.11891532 = product of:
      0.17837298 = sum of:
        0.0976219 = weight(_text_:data in 1605) [ClassicSimilarity], result of:
          0.0976219 = score(doc=1605,freq=24.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.60511017 = fieldWeight in 1605, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.08075108 = sum of:
          0.04618826 = weight(_text_:processing in 1605) [ClassicSimilarity], result of:
            0.04618826 = score(doc=1605,freq=2.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.22363065 = fieldWeight in 1605, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
          0.03456281 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
            0.03456281 = score(doc=1605,freq=2.0), product of:
              0.1786648 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051020417 = queryNorm
              0.19345059 = fieldWeight in 1605, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1605)
      0.6666667 = coord(2/3)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
    Theme
    Data Mining
  6. Saggi, M.K.; Jain, S.: ¬A survey towards an integration of big data analytics to big insights for value-creation (2018) 0.10
    0.103665486 = product of:
      0.15549822 = sum of:
        0.12283819 = weight(_text_:data in 5053) [ClassicSimilarity], result of:
          0.12283819 = score(doc=5053,freq=38.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.7614136 = fieldWeight in 5053, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5053)
        0.032660034 = product of:
          0.06532007 = sum of:
            0.06532007 = weight(_text_:processing in 5053) [ClassicSimilarity], result of:
              0.06532007 = score(doc=5053,freq=4.0), product of:
                0.20653816 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.051020417 = queryNorm
                0.3162615 = fieldWeight in 5053, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5053)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Big Data Analytics (BDA) is increasingly becoming a trending practice that generates an enormous amount of data and provides new opportunities for relevant decision-making. The developments in Big Data Analytics provide a new paradigm and solutions for big data sources, storage, and advanced analytics. BDA provides a nuanced view of big data development and insights into how it can truly create value for firms and customers. This article presents a comprehensive, well-informed examination and realistic analysis of deploying big data analytics successfully in companies. It provides an overview of the architecture of BDA, including six components, namely: (i) data generation, (ii) data acquisition, (iii) data storage, (iv) advanced data analytics, (v) data visualization, and (vi) decision-making for value-creation. The seven V characteristics of BDA, namely Volume, Velocity, Variety, Valence, Veracity, Variability, and Value, are explored, and various big data analytics tools, techniques and technologies are described. Furthermore, the paper presents a methodical analysis of the use of Big Data Analytics in applications such as agriculture, healthcare, cyber security, and smart cities. It also highlights previous research, challenges, current status, and future directions of big data analytics for various application platforms. The overview highlights three issues: (i) concepts, characteristics and processing paradigms of Big Data Analytics; (ii) the state-of-the-art framework for decision-making in BDA that lets companies gain insight for value-creation; and (iii) the current challenges of Big Data Analytics as well as possible future directions.
    Footnote
    Contribution to a special issue: 'In (Big) Data we trust: Value creation in knowledge organizations'.
    Source
    Information processing and management. 54(2018) no.5, S.758-790
    Theme
    Data Mining
  7. Semantic keyword-based search on structured data sources : First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers (2016) 0.10
    0.10234362 = product of:
      0.15351543 = sum of:
        0.050411735 = weight(_text_:data in 2753) [ClassicSimilarity], result of:
          0.050411735 = score(doc=2753,freq=10.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.31247756 = fieldWeight in 2753, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2753)
        0.1031037 = sum of:
          0.06400034 = weight(_text_:processing in 2753) [ClassicSimilarity], result of:
            0.06400034 = score(doc=2753,freq=6.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.30987173 = fieldWeight in 2753, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.03125 = fieldNorm(doc=2753)
          0.039103355 = weight(_text_:22 in 2753) [ClassicSimilarity], result of:
            0.039103355 = score(doc=2753,freq=4.0), product of:
              0.1786648 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051020417 = queryNorm
              0.21886435 = fieldWeight in 2753, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2753)
      0.6666667 = coord(2/3)
    
    Abstract
    This book constitutes the thoroughly refereed post-conference proceedings of the First COST Action IC1302 International KEYSTONE Conference on semantic Keyword-based Search on Structured Data Sources, IKC 2015, held in Coimbra, Portugal, in September 2015. The 13 revised full papers, 3 revised short papers, and 2 invited papers were carefully reviewed and selected from 22 initial submissions. The paper topics cover techniques for keyword search, semantic data management, social Web and social media, information retrieval, benchmarking for search on big data.
    Content
    Contents: Professional Collaborative Information Seeking: On Traceability and Creative Sensemaking / Nürnberger, Andreas (et al.) - Recommending Web Pages Using Item-Based Collaborative Filtering Approaches / Cadegnani, Sara (et al.) - Processing Keyword Queries Under Access Limitations / Calì, Andrea (et al.) - Balanced Large Scale Knowledge Matching Using LSH Forest / Cochez, Michael (et al.) - Improving css-KNN Classification Performance by Shifts in Training Data / Draszawka, Karol (et al.) - Classification Using Various Machine Learning Methods and Combinations of Key-Phrases and Visual Features / HaCohen-Kerner, Yaakov (et al.) - Mining Workflow Repositories for Improving Fragments Reuse / Harmassi, Mariem (et al.) - AgileDBLP: A Search-Based Mobile Application for Structured Digital Libraries / Ifrim, Claudia (et al.) - Support of Part-Whole Relations in Query Answering / Kozikowski, Piotr (et al.) - Key-Phrases as Means to Estimate Birth and Death Years of Jewish Text Authors / Mughaz, Dror (et al.) - Visualization of Uncertainty in Tag Clouds / Platis, Nikos (et al.) - Multimodal Image Retrieval Based on Keywords and Low-Level Image Features / Pobar, Miran (et al.) - Toward Optimized Multimodal Concept Indexing / Rekabsaz, Navid (et al.) - Semantic URL Analytics to Support Efficient Annotation of Large Scale Web Archives / Souza, Tarcisio (et al.) - Indexing of Textual Databases Based on Lexical Resources: A Case Study for Serbian / Stankovic, Ranka (et al.) - Domain-Specific Modeling: Towards a Food and Drink Gazetteer / Tagarev, Andrey (et al.) - Analysing Entity Context in Multilingual Wikipedia to Support Entity-Centric Retrieval Applications / Zhou, Yiwei (et al.)
    Date
    1. 2.2016 18:25:22
    LCSH
    Text processing (Computer science)
    Subject
    Text processing (Computer science)
  8. Dow, K.E.; Hackbarth, G.; Wong, J.: Data architectures for an organizational memory information system (2013) 0.10
    0.100072 = product of:
      0.150108 = sum of:
        0.104383945 = weight(_text_:data in 963) [ClassicSimilarity], result of:
          0.104383945 = score(doc=963,freq=14.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.64702475 = fieldWeight in 963, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=963)
        0.045724045 = product of:
          0.09144809 = sum of:
            0.09144809 = weight(_text_:processing in 963) [ClassicSimilarity], result of:
              0.09144809 = score(doc=963,freq=4.0), product of:
                0.20653816 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.051020417 = queryNorm
                0.4427661 = fieldWeight in 963, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=963)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A framework is developed that supports the theoretical design of an organizational memory information system (OMIS). The framework provides guidance for managing the processing capabilities of an organization by matching knowledge location, flexibility, and processing requirements with data architecture. This framework is tested using three different sets of data attributes and data architectures from 147 business professionals who have experience in IS development. We find that trade-offs exist between the amount of knowledge embedded in the data architecture and the flexibility of data architectures. This trade-off is contingent on the characteristics of the set of tasks that the data architecture is being designed to support. Further, this match is important to consider in the design of OMIS database architecture.
  9. Petric, K.; Petric, T.; Krisper, M.; Rajkovic, V.: User profiling on a pilot digital library with the final result of a new adaptive knowledge management solution (2011) 0.10
    0.096484035 = product of:
      0.14472605 = sum of:
        0.04782477 = weight(_text_:data in 4560) [ClassicSimilarity], result of:
          0.04782477 = score(doc=4560,freq=4.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.29644224 = fieldWeight in 4560, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=4560)
        0.09690128 = sum of:
          0.055425912 = weight(_text_:processing in 4560) [ClassicSimilarity], result of:
            0.055425912 = score(doc=4560,freq=2.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.26835677 = fieldWeight in 4560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046875 = fieldNorm(doc=4560)
          0.04147537 = weight(_text_:22 in 4560) [ClassicSimilarity], result of:
            0.04147537 = score(doc=4560,freq=2.0), product of:
              0.1786648 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051020417 = queryNorm
              0.23214069 = fieldWeight in 4560, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=4560)
      0.6666667 = coord(2/3)
    
    Abstract
    In this article, several procedures (e.g., measurements, information retrieval analyses, power law, association rules, hierarchical clustering) are introduced that were applied to a pilot digital library. Information retrievals by web users from 01/01/2003 to 01/01/2006 on the internal search engine of the pilot digital library have been analyzed. With the power law method of data processing, a constant information retrieval pattern has been established that is stable over a longer period of time. After this, the data have been analyzed. On the basis of the accomplished measurements and analyses, a series of mental models of web users for global (educational) purposes have been developed (e.g., the metamodel of thought hierarchy of web users, the segmentation model of web users), and the users were profiled into four groups (adventurers, observers, applicable, and know-alls). The article concludes with the construction of a new knowledge management solution called multidimensional rank thesaurus.
    Date
    13. 7.2011 14:47:22
  10. Borgman, C.L.: Big data, little data, no data : scholarship in the networked world (2015) 0.10
    0.095516205 = product of:
      0.1432743 = sum of:
        0.11714628 = weight(_text_:data in 2785) [ClassicSimilarity], result of:
          0.11714628 = score(doc=2785,freq=54.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.7261322 = fieldWeight in 2785, product of:
              7.3484693 = tf(freq=54.0), with freq of:
                54.0 = termFreq=54.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2785)
        0.026128028 = product of:
          0.052256055 = sum of:
            0.052256055 = weight(_text_:processing in 2785) [ClassicSimilarity], result of:
              0.052256055 = score(doc=2785,freq=4.0), product of:
                0.20653816 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.051020417 = queryNorm
                0.2530092 = fieldWeight in 2785, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2785)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    "Big Data" is on the covers of Science, Nature, the Economist, and Wired magazines, on the front pages of the Wall Street Journal and the New York Times. But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines. Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six "provocations" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.
    Content
    Provocations -- What are data? -- Data scholarship -- Data diversity -- Data scholarship in the sciences -- Data scholarship in the social sciences -- Data scholarship in the humanities -- Sharing, releasing, and reusing data -- Credit, attribution, and discovery of data -- What to keep and why to keep them.
    LCSH
    Research / Data processing
    Subject
    Research / Data processing
  11. Shaw, R.; Golden, P.; Buckland, M.: Using linked library data in working research notes (2015) 0.09
    0.09141661 = product of:
      0.13712491 = sum of:
        0.09564954 = weight(_text_:data in 2555) [ClassicSimilarity], result of:
          0.09564954 = score(doc=2555,freq=4.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.5928845 = fieldWeight in 2555, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=2555)
        0.04147537 = product of:
          0.08295074 = sum of:
            0.08295074 = weight(_text_:22 in 2555) [ClassicSimilarity], result of:
              0.08295074 = score(doc=2555,freq=2.0), product of:
                0.1786648 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051020417 = queryNorm
                0.46428138 = fieldWeight in 2555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2555)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    15. 1.2016 19:22:28
    Source
    Linked data and user interaction: the road ahead. Eds.: Cervone, H.F. u. L.G. Svensson
  12. Gödert, W.; Lepsky, K.: Informationelle Kompetenz : ein humanistischer Entwurf (2019) 0.09
    0.08932869 = product of:
      0.13399303 = sum of:
        0.094539605 = product of:
          0.2836188 = sum of:
            0.2836188 = weight(_text_:3a in 5955) [ClassicSimilarity], result of:
              0.2836188 = score(doc=5955,freq=2.0), product of:
                0.43255165 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.051020417 = queryNorm
                0.65568775 = fieldWeight in 5955, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5955)
          0.33333334 = coord(1/3)
        0.03945342 = weight(_text_:data in 5955) [ClassicSimilarity], result of:
          0.03945342 = score(doc=5955,freq=2.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.24455236 = fieldWeight in 5955, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5955)
      0.6666667 = coord(2/3)
    
    Footnote
    Reviewed in: Philosophisch-ethische Rezensionen, 09.11.2019 (Jürgen Czogalla), at: https://philosophisch-ethische-rezensionen.de/rezension/Goedert1.html. In: B.I.T. online 23(2020) H.3, S.345-347 (W. Sühl-Strohmenger) [at: https://www.b-i-t-online.de/heft/2020-03-rezensionen.pdf]. In: Open Password no. 805, 14.08.2020 (H.-C. Hobohm) [at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzE0MywiOGI3NjZkZmNkZjQ1IiwwLDAsMTMxLDFd].
  13. Zhao, G.; Wu, J.; Wang, D.; Li, T.: Entity disambiguation to Wikipedia using collective ranking (2016) 0.09
    0.08714567 = product of:
      0.1307185 = sum of:
        0.033817217 = weight(_text_:data in 3266) [ClassicSimilarity], result of:
          0.033817217 = score(doc=3266,freq=2.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.2096163 = fieldWeight in 3266, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3266)
        0.09690128 = sum of:
          0.055425912 = weight(_text_:processing in 3266) [ClassicSimilarity], result of:
            0.055425912 = score(doc=3266,freq=2.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.26835677 = fieldWeight in 3266, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.046875 = fieldNorm(doc=3266)
          0.04147537 = weight(_text_:22 in 3266) [ClassicSimilarity], result of:
            0.04147537 = score(doc=3266,freq=2.0), product of:
              0.1786648 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051020417 = queryNorm
              0.23214069 = fieldWeight in 3266, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3266)
      0.6666667 = coord(2/3)
    
    Abstract
    Entity disambiguation is a fundamental task of semantic Web annotation. Entity Linking (EL) is an essential procedure in entity disambiguation, which aims to link a mention appearing in a plain text to a structured or semi-structured knowledge base, such as Wikipedia. Existing research on EL usually annotates the mentions in a text one by one and treats entities as independent of each other. However, this might not be true in many application scenarios. For example, if two mentions appear in one text, they are likely to have certain intrinsic relationships. In this paper, we first propose a novel query expansion method for candidate generation that utilizes the information of co-occurrences of mentions. We further propose a re-ranking model which can be iteratively adjusted based on the prediction in the previous round. Experiments on real-world data demonstrate the effectiveness of our proposed methods for entity disambiguation.
    Date
    24.10.2016 19:22:54
    Source
    Information processing and management. 52(2016) no.6, S.1247-1257
  14. He, L.; Nahar, V.: Reuse of scientific data in academic publications : an investigation of Dryad Digital Repository (2016) 0.09
    0.08511808 = product of:
      0.12767711 = sum of:
        0.106939435 = weight(_text_:data in 3072) [ClassicSimilarity], result of:
          0.106939435 = score(doc=3072,freq=20.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.662865 = fieldWeight in 3072, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3072)
        0.020737685 = product of:
          0.04147537 = sum of:
            0.04147537 = weight(_text_:22 in 3072) [ClassicSimilarity], result of:
              0.04147537 = score(doc=3072,freq=2.0), product of:
                0.1786648 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051020417 = queryNorm
                0.23214069 = fieldWeight in 3072, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3072)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - In recent years, a large number of data repositories have been built and used. However, the extent to which scientific data are re-used in academic publications is still unknown. The purpose of this paper is to explore the functions of re-used scientific data in scholarly publication in different fields. Design/methodology/approach - To address these questions, the authors identified 827 publications citing resources in the Dryad Digital Repository indexed by Scopus from 2010 to 2015. Findings - The results show that: the number of citations to scientific data increases sharply over the years, but mainly from data-intensive disciplines such as agricultural science, biology, environmental science and medicine; the majority of citations are from the originating articles; and researchers tend to reuse data produced by their own research groups. Research limitations/implications - Dryad data may be re-used without being formally cited. Originality/value - The conservatism in data sharing suggests that more should be done to encourage researchers to re-use others' data.
    Date
    20. 1.2015 18:30:22
  15. Cronin, B.: Thinking about data (2013) 0.08
    0.084863186 = product of:
      0.12729478 = sum of:
        0.07890684 = weight(_text_:data in 4347) [ClassicSimilarity], result of:
          0.07890684 = score(doc=4347,freq=2.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.48910472 = fieldWeight in 4347, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.109375 = fieldNorm(doc=4347)
        0.048387934 = product of:
          0.09677587 = sum of:
            0.09677587 = weight(_text_:22 in 4347) [ClassicSimilarity], result of:
              0.09677587 = score(doc=4347,freq=2.0), product of:
                0.1786648 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051020417 = queryNorm
                0.5416616 = fieldWeight in 4347, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4347)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Date
    22. 3.2013 16:18:36
  16. Boerner, K.: Atlas of science : visualizing what we know (2010) 0.08
    0.08333062 = product of:
      0.124995925 = sum of:
        0.045089625 = weight(_text_:data in 3359) [ClassicSimilarity], result of:
          0.045089625 = score(doc=3359,freq=8.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.2794884 = fieldWeight in 3359, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=3359)
        0.0799063 = sum of:
          0.052256055 = weight(_text_:processing in 3359) [ClassicSimilarity], result of:
            0.052256055 = score(doc=3359,freq=4.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.2530092 = fieldWeight in 3359, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.03125 = fieldNorm(doc=3359)
          0.027650248 = weight(_text_:22 in 3359) [ClassicSimilarity], result of:
            0.027650248 = score(doc=3359,freq=2.0), product of:
              0.1786648 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051020417 = queryNorm
              0.15476047 = fieldWeight in 3359, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3359)
      0.6666667 = coord(2/3)
    
    Abstract
    Cartographic maps have guided our explorations for centuries, allowing us to navigate the world. Science maps have the potential to guide our search for knowledge in the same way, helping us navigate, understand, and communicate the dynamic and changing structure of science and technology. Allowing us to visualize scientific results, science maps help us make sense of the avalanche of data generated by scientific research today. Atlas of Science features more than thirty full-page science maps, fifty data charts, a timeline of science-mapping milestones, and 500 color images; it serves as a sumptuous visual index to the evolution of modern science and as an introduction to "the science of science"--charting the trajectory from scientific concept to published results. Atlas of Science, based on the popular exhibit "Places & Spaces: Mapping Science," describes and displays successful mapping techniques. The heart of the book is a visual feast: Claudius Ptolemy's Cosmographia World Map from 1482; a guide to a PhD thesis that resembles a subway map; "the structure of science" as revealed in a map of citation relationships in papers published in 2002; a periodic table; a history flow visualization of the Wikipedia article on abortion; a globe showing the worldwide distribution of patents; a forecast of earthquake risk; hands-on science maps for kids; and many more. Each entry includes the story behind the map and biographies of its makers. Not even the most brilliant minds can keep up with today's deluge of scientific results. Science maps show us the landscape of what we know. Exhibition ongoing at: National Science Foundation, Washington, D.C.; The Institute for Research Information and Quality Assurance, Bonn, Germany; Storm Hall, San Diego State College.
    Date
    22. 1.2017 17:12:16
    LCSH
    Data processing
    Subject
    Data processing
  17. Jeffery, K.G.; Bailo, D.: EPOS: using metadata in geoscience (2014) 0.08
    0.08224167 = product of:
      0.1233625 = sum of:
        0.09564954 = weight(_text_:data in 1581) [ClassicSimilarity], result of:
          0.09564954 = score(doc=1581,freq=16.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.5928845 = fieldWeight in 1581, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=1581)
        0.027712956 = product of:
          0.055425912 = sum of:
            0.055425912 = weight(_text_:processing in 1581) [ClassicSimilarity], result of:
              0.055425912 = score(doc=1581,freq=2.0), product of:
                0.20653816 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.051020417 = queryNorm
                0.26835677 = fieldWeight in 1581, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1581)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    One of the key aspects of the approaching data-intensive science era is the integration of data through interoperability of systems providing data products or visualisation and processing services. Far from being simple, interoperability requires robust and scalable e-infrastructures capable of supporting it. In this work we present the case of EPOS, a project for data integration in the field of Earth Sciences. We describe the design of its e-infrastructure and show its main characteristics. One of the main elements enabling the system to integrate data, data products and services is the metadata catalog based on the CERIF metadata model. Such a model, modified to fit into the general e-infrastructure design, is part of a three-layer metadata architecture. CERIF guarantees robust handling of metadata, which in this case is the key to interoperability and to one of the features of the EPOS system: the possibility of carrying out data-intensive science by orchestrating the distributed resources made available by EPOS data providers and stakeholders.
  18. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.08
    0.08118415 = product of:
      0.12177622 = sum of:
        0.08911619 = weight(_text_:data in 2192) [ClassicSimilarity], result of:
          0.08911619 = score(doc=2192,freq=20.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.5523875 = fieldWeight in 2192, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2192)
        0.032660034 = product of:
          0.06532007 = sum of:
            0.06532007 = weight(_text_:processing in 2192) [ClassicSimilarity], result of:
              0.06532007 = score(doc=2192,freq=4.0), product of:
                0.20653816 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.051020417 = queryNorm
                0.3162615 = fieldWeight in 2192, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This book constitutes the refereed proceedings of the 8th Metadata and Semantics Research Conference, MTSR 2014, held in Karlsruhe, Germany, in November 2014. The 23 full papers and 9 short papers presented were carefully reviewed and selected from 57 submissions. The papers are organized in several sessions and tracks. They cover the following topics: metadata and linked data: tools and models; (meta) data quality assessment and curation; semantic interoperability, ontology-based data access and representation; big data and digital libraries in health, science and technology; metadata and semantics for open repositories, research information systems and data infrastructure; metadata and semantics for cultural collections and applications; semantics for agriculture, food and environment.
    Content
    Metadata and linked data.- Tools and models.- (Meta)data quality assessment and curation.- Semantic interoperability, ontology-based data access and representation.- Big data and digital libraries in health, science and technology.- Metadata and semantics for open repositories, research information systems and data infrastructure.- Metadata and semantics for cultural collections and applications.- Semantics for agriculture, food and environment.
    LCSH
    Text processing (Computer science)
    Subject
    Text processing (Computer science)
  19. Salaba, A.; Zeng, M.L.: Extending the "Explore" user task beyond subject authority data into the linked data sphere (2014) 0.08
    0.080556475 = product of:
      0.12083471 = sum of:
        0.09664074 = weight(_text_:data in 1465) [ClassicSimilarity], result of:
          0.09664074 = score(doc=1465,freq=12.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.59902847 = fieldWeight in 1465, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1465)
        0.024193967 = product of:
          0.048387934 = sum of:
            0.048387934 = weight(_text_:22 in 1465) [ClassicSimilarity], result of:
              0.048387934 = score(doc=1465,freq=2.0), product of:
                0.1786648 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051020417 = queryNorm
                0.2708308 = fieldWeight in 1465, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1465)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    "Explore" is a user task introduced in the Functional Requirements for Subject Authority Data (FRSAD) final report. Through various case scenarios, the authors discuss how structured data, presented based on Linked Data principles and using knowledge organisation systems (KOS) as the backbone, extend the explore task within and beyond subject authority data.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  20. Niu, X.; Kelly, D.: ¬The use of query suggestions during information search (2014) 0.08
    0.08040337 = product of:
      0.12060505 = sum of:
        0.039853975 = weight(_text_:data in 2702) [ClassicSimilarity], result of:
          0.039853975 = score(doc=2702,freq=4.0), product of:
            0.16132914 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.051020417 = queryNorm
            0.24703519 = fieldWeight in 2702, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2702)
        0.08075108 = sum of:
          0.04618826 = weight(_text_:processing in 2702) [ClassicSimilarity], result of:
            0.04618826 = score(doc=2702,freq=2.0), product of:
              0.20653816 = queryWeight, product of:
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.051020417 = queryNorm
              0.22363065 = fieldWeight in 2702, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.048147 = idf(docFreq=2097, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2702)
          0.03456281 = weight(_text_:22 in 2702) [ClassicSimilarity], result of:
            0.03456281 = score(doc=2702,freq=2.0), product of:
              0.1786648 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.051020417 = queryNorm
              0.19345059 = fieldWeight in 2702, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2702)
      0.6666667 = coord(2/3)
    
    Abstract
    Query suggestion is a common feature of many information search systems. While much research has been conducted about how to generate suggestions, fewer studies have been conducted about how people interact with and use suggestions. The purpose of this paper is to investigate how and when people integrate query suggestions into their searches and the outcome of this usage. The paper further investigates the relationships between search expertise, topic difficulty, and temporal segment of the search and query suggestion usage. A secondary analysis of data was conducted using data collected in a previous controlled laboratory study. In this previous study, 23 undergraduate research participants used an experimental search system with query suggestions to conduct four topic searches. Results showed that participants integrated the suggestions into their searching fairly quickly and that participants with less search expertise used more suggestions and saved more documents. Participants also used more suggestions towards the end of their searches and when searching for more difficult topics. These results show that query suggestion can provide support in situations where people have less search expertise, greater difficulty searching and at specific times during the search.
    Date
    25. 1.2016 18:43:22
    Source
    Information processing and management. 50(2014) no.1, S.218-234
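
Scoring note

The indented breakdown under each result above is Lucene "explain" output for the ClassicSimilarity (TF-IDF) model it names: for each matching query term, score = queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm with tf = sqrt(termFreq); the matching term scores are summed and multiplied by the coordination factor coord(matching clauses / total clauses), e.g. 0.21786419 * 2/3 ≈ 0.1452428 for the first result. The following minimal Python sketch (assuming the standard Lucene ClassicSimilarity formulas named in the output, not code from this search system) reproduces the figures shown for the term "data" in result 1; the queryNorm value is copied from the output above, since it depends on the full query and is not re-derived here.

    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq: float, doc_freq: int, max_docs: int,
                   query_norm: float, field_norm: float) -> float:
        # Per-term score = queryWeight * fieldWeight, where
        #   queryWeight = idf * queryNorm
        #   fieldWeight = tf * idf * fieldNorm, with tf = sqrt(freq)
        i = idf(doc_freq, max_docs)                      # 3.1620505 for "data"
        query_weight = i * query_norm                    # 0.16132914
        field_weight = math.sqrt(freq) * i * field_norm  # 0.34936053
        return query_weight * field_weight

    # Values copied from "weight(_text_:data in 4233)" in result 1 above
    s = term_score(freq=2.0, doc_freq=5088, max_docs=44218,
                   query_norm=0.051020417, field_norm=0.078125)
    print(s)  # ~0.056362, matching the explain output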

Languages

  • e 1909
  • d 307
  • f 2
  • a 1
  • hu 1
  • i 1
  • pt 1

Types

  • a 1947
  • el 238
  • m 165
  • s 65
  • x 29
  • r 14
  • b 5
  • i 2
  • p 2
  • n 1
  • z 1
