Search (530 results, page 1 of 27)

  • Filter: year_i:[2010 TO 2020}
  • Filter: language_ss:"e"
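  The two bullets above are the active facet filters in Solr/Lucene filter-query syntax: a square bracket marks an inclusive range bound and a curly brace an exclusive one, so year_i:[2010 TO 2020} keeps publication years 2010 through 2019, while language_ss:"e" restricts the list to English-language records. Below is a minimal sketch of how such a filtered page request might be issued against a Solr endpoint; the host, core name, and request handler are assumptions made for illustration, only the field names and filter values come from the chips above. A second sketch after the result list reproduces the TF-IDF arithmetic shown in the per-result score breakdowns.

    import requests  # sketch only; host, core, and handler below are assumed, not documented

    params = {
        "q": "*:*",                               # the actual search terms would go here
        "fq": ['year_i:[2010 TO 2020}',           # inclusive lower bound, exclusive upper bound
               'language_ss:"e"'],                # English-language records only
        "rows": 20,                               # 20 results per page (530 hits -> 27 pages)
        "start": 0,                               # offset 0 = page 1
        "debugQuery": "true",                     # asks Solr for the score explanations
    }
    # requests repeats the "fq" parameter once per list entry, as Solr expects
    response = requests.get("http://localhost:8983/solr/catalog/select", params=params)
    print(response.json()["response"]["numFound"])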
  1. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.07
    0.06728427 = product of:
      0.13456854 = sum of:
        0.13456854 = product of:
          0.40370563 = sum of:
            0.40370563 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.40370563 = score(doc=1826,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  2. Chianese, A.; Cantone, F.; Caropreso, M.; Moscato, V.: ARCHAEOLOGY 2.0 : Cultural E-Learning tools and distributed repositories supported by SEMANTICA, a System for Learning Object Retrieval and Adaptive Courseware Generation for e-learning environments (2010) 0.06
    0.05818888 = product of:
      0.11637776 = sum of:
        0.11637776 = sum of:
          0.08193985 = weight(_text_:ii in 3733) [ClassicSimilarity], result of:
            0.08193985 = score(doc=3733,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.29840025 = fieldWeight in 3733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3733)
          0.034437917 = weight(_text_:22 in 3733) [ClassicSimilarity], result of:
            0.034437917 = score(doc=3733,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.19345059 = fieldWeight in 3733, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3733)
      0.5 = coord(1/2)
    
    Abstract
    The focus of the present research has been the development of a Web-based framework for learning object indexing and retrieval and its application to Virtual Archaeology. The paper presents the main outcomes of an experiment carried out by an interdisciplinary group at the University of Naples Federico II. Our team comprises researchers in both ICT and the humanities, in particular in Virtual Archaeology and Cultural Heritage informatics, working to develop ICT methodological approaches specific to Virtual Archaeology. The methodological background is the progressive diffusion of Web 2.0 technologies and the attempt to analyze their impact and prospects in the Cultural Heritage field. In particular, we addressed the specific requirements of so-called Learning 2.0 and the possibility of further automating modular courseware generation in Virtual Archaeology didactics. The resulting framework, called SEMANTICA, was applied to Virtual Archaeology domain ontologies to generate a didactic course in a semi-automated way. The main results of this test and initial student feedback on the course are presented and discussed.
    Source
    Wissensspeicher in digitalen Räumen: Nachhaltigkeit - Verfügbarkeit - semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008. Ed. by J. Sieglerschmidt and H.P. Ohly
  3. Coyle, K.: FRBR, before and after : a look at our bibliographic models (2016) 0.06
    0.05818888 = product of:
      0.11637776 = sum of:
        0.11637776 = sum of:
          0.08193985 = weight(_text_:ii in 2786) [ClassicSimilarity], result of:
            0.08193985 = score(doc=2786,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.29840025 = fieldWeight in 2786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2786)
          0.034437917 = weight(_text_:22 in 2786) [ClassicSimilarity], result of:
            0.034437917 = score(doc=2786,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.19345059 = fieldWeight in 2786, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2786)
      0.5 = coord(1/2)
    
    Content
    Part I. Work, model, technology -- The work -- The model -- The technology -- Part II. FRBR and other solutions -- Introduction -- FRBR : standard for international sharing -- The entity-relation model -- What is modeled in FRBR -- Does FRBR meet FRBR's objectives? -- Some issues that arise -- Bibliographic description and the Semantic Web.
    Date
    12. 2.2016 16:22:58
  4. Genuis, S.K.; Bronstein, J.: Looking for "normal" : sense making in the context of health disruption (2017) 0.06
    0.05818888 = product of:
      0.11637776 = sum of:
        0.11637776 = sum of:
          0.08193985 = weight(_text_:ii in 3438) [ClassicSimilarity], result of:
            0.08193985 = score(doc=3438,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.29840025 = fieldWeight in 3438, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3438)
          0.034437917 = weight(_text_:22 in 3438) [ClassicSimilarity], result of:
            0.034437917 = score(doc=3438,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.19345059 = fieldWeight in 3438, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=3438)
      0.5 = coord(1/2)
    
    Abstract
    This investigation examines perceptions of normality emerging from two distinct studies of information behavior associated with life disrupting health symptoms and theorizes the search for normality in the context of sense making theory. Study I explored the experiences of women striving to make sense of symptoms associated with menopause; Study II examined posts from two online discussion groups for people with symptoms of obsessive compulsive disorder. Joint data analysis demonstrates that normality was initially perceived as the absence of illness. A breakdown in perceived normality because of disruptive symptoms created gaps and discontinuities in understanding. As participants interacted with information about the experiences of health-challenged peers, socially constructed notions of normality emerged. This was internalized as a "new normal." Findings demonstrate normality as an element of sense making that changes and develops over time, and experiential information and social contexts as central to health-related sense making. Re-establishing perceptions of normality, as experienced by health-challenged peers, was an important element of sense making. This investigation provides nuanced insight into notions of normality, extends understanding of social processes involved in sense making, and represents the first theorizing of and model development for normality within the information science and sense making literature.
    Date
    16.11.2017 13:29:22
  5. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.05
    0.046551105 = product of:
      0.09310221 = sum of:
        0.09310221 = sum of:
          0.06555188 = weight(_text_:ii in 168) [ClassicSimilarity], result of:
            0.06555188 = score(doc=168,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.2387202 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
          0.027550334 = weight(_text_:22 in 168) [ClassicSimilarity], result of:
            0.027550334 = score(doc=168,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.15476047 = fieldWeight in 168, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=168)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching, and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical, and application perspectives.
    Date
    20. 6.2012 19:08:22
  6. Murphy, M.L.: Lexical meaning (2010) 0.05
    0.046551105 = product of:
      0.09310221 = sum of:
        0.09310221 = sum of:
          0.06555188 = weight(_text_:ii in 998) [ClassicSimilarity], result of:
            0.06555188 = score(doc=998,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.2387202 = fieldWeight in 998, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.03125 = fieldNorm(doc=998)
          0.027550334 = weight(_text_:22 in 998) [ClassicSimilarity], result of:
            0.027550334 = score(doc=998,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.15476047 = fieldWeight in 998, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=998)
      0.5 = coord(1/2)
    
    Content
    Contents: Machine-generated contents note: Part I. Meaning and the Lexicon: 1. The lexicon - some preliminaries; 2. What do we mean by meaning?; 3. Components and prototypes; 4. Modern componential approaches - and some alternatives; Part II. Relations Among Words and Senses: 5. Meaning variation: polysemy, homonymy and vagueness; 6. Lexical and semantic relations; Part III. Word Classes and Semantic Types: 7. Ontological categories and word classes; 8. Nouns and countability; 9. Predication: verbs, events, and states; 10. Verbs and time; 11. Adjectives and properties.
    Date
    22. 7.2013 10:53:30
  7. Gossen, T.: Search engines for children : search user interfaces and information-seeking behaviour (2016) 0.04
    0.040732216 = product of:
      0.08146443 = sum of:
        0.08146443 = sum of:
          0.057357892 = weight(_text_:ii in 2752) [ClassicSimilarity], result of:
            0.057357892 = score(doc=2752,freq=2.0), product of:
              0.2745971 = queryWeight, product of:
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.050836053 = queryNorm
              0.20888017 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.4016213 = idf(docFreq=541, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2752)
          0.024106542 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
            0.024106542 = score(doc=2752,freq=2.0), product of:
              0.1780192 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050836053 = queryNorm
              0.1354154 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2752)
      0.5 = coord(1/2)
    
    Content
    Contents: Acknowledgments; Abstract; Zusammenfassung; Contents; List of Figures; List of Tables; List of Acronyms; Chapter 1 Introduction; 1.1 Research Questions; 1.2 Thesis Outline; Part I Fundamentals; Chapter 2 Information Retrieval for Young Users; 2.1 Basics of Information Retrieval; 2.1.1 Architecture of an IR System; 2.1.2 Relevance Ranking; 2.1.3 Search User Interfaces; 2.1.4 Targeted Search Engines; 2.2 Aspects of Child Development Relevant for Information Retrieval Tasks; 2.2.1 Human Cognitive Development; 2.2.2 Information Processing Theory; 2.2.3 Psychosocial Development; 2.3 User Studies and Evaluation; 2.3.1 Methods in User Studies; 2.3.2 Types of Evaluation; 2.3.3 Evaluation with Children; 2.4 Discussion; Chapter 3 State of the Art; 3.1 Children's Information-Seeking Behaviour; 3.1.1 Querying Behaviour; 3.1.2 Search Strategy; 3.1.3 Navigation Style; 3.1.4 User Interface; 3.1.5 Relevance Judgement; 3.2 Existing Algorithms and User Interface Concepts for Children; 3.2.1 Query; 3.2.2 Content; 3.2.3 Ranking; 3.2.4 Search Result Visualisation; 3.3 Existing Information Retrieval Systems for Children; 3.3.1 Digital Book Libraries; 3.3.2 Web Search Engines; 3.4 Summary and Discussion; Part II Studying Open Issues; Chapter 4 Usability of Existing Search Engines for Young Users; 4.1 Assessment Criteria; 4.1.1 Criteria for Matching the Motor Skills; 4.1.2 Criteria for Matching the Cognitive Skills; 4.2 Results; 4.2.1 Conformance with Motor Skills; 4.2.2 Conformance with the Cognitive Skills; 4.2.3 Presentation of Search Results; 4.2.4 Browsing versus Searching; 4.2.5 Navigational Style; 4.3 Summary and Discussion; Chapter 5 Large-scale Analysis of Children's Queries and Search Interactions; 5.1 Dataset; 5.2 Results; 5.3 Summary and Discussion; Chapter 6 Differences in Usability and Perception of Targeted Web Search Engines between Children and Adults; 6.1 Related Work; 6.2 User Study; 6.3 Study Results; 6.4 Summary and Discussion; Part III Tackling the Challenges; Chapter 7 Search User Interface Design for Children; 7.1 Conceptual Challenges and Possible Solutions; 7.2 Knowledge Journey Design; 7.3 Evaluation; 7.3.1 Study Design; 7.3.2 Study Results; 7.4 Voice-Controlled Search: Initial Study; 7.4.1 User Study; 7.5 Summary and Discussion; Chapter 8 Addressing User Diversity; 8.1 Evolving Search User Interface; 8.1.1 Mapping Function; 8.1.2 Evolving Skills; 8.1.3 Detection of User Abilities; 8.1.4 Design Concepts; 8.2 Adaptation of a Search User Interface towards User Needs; 8.2.1 Design & Implementation; 8.2.2 Search Input; 8.2.3 Result Output; 8.2.4 General Properties; 8.2.5 Configuration and Further Details; 8.3 Evaluation; 8.3.1 Study Design; 8.3.2 Study Results; 8.3.3 Preferred UI Settings; 8.3.4 User Satisfaction; 8.4 Knowledge Journey Exhibit; 8.4.1 Hardware; 8.4.2 Frontend; 8.4.3 Backend; 8.5 Summary and Discussion; Chapter 9 Supporting Visual Searchers in Processing Search Results; 9.1 Related Work
    Date
    1. 2.2016 18:25:22
  8. Burke, C.B.: America's information wars : the untold story of information systems in America's conflicts and politics from World War II to the internet age (2018) 0.04
    0.040558152 = product of:
      0.081116304 = sum of:
        0.081116304 = product of:
          0.16223261 = sum of:
            0.16223261 = weight(_text_:ii in 5953) [ClassicSimilarity], result of:
              0.16223261 = score(doc=5953,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.5908023 = fieldWeight in 5953, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5953)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This book narrates the development of science and intelligence information systems and technologies in the U.S. from World War II to today. The story ranges from the information systems and machines of the 1940s to the rise of a huge international science information industry and the Open Access and Open Culture movements of the 1990s.
  9. Zeng, Q.; Yu, M.; Yu, W.; Xiong, J.; Shi, Y.; Jiang, M.: Faceted hierarchy : a new graph type to organize scientific concepts and a construction method (2019) 0.04
    0.04037056 = product of:
      0.08074112 = sum of:
        0.08074112 = product of:
          0.24222337 = sum of:
            0.24222337 = weight(_text_:3a in 400) [ClassicSimilarity], result of:
              0.24222337 = score(doc=400,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.56201804 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=400)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Cf.: https://aclanthology.org/D19-5317.pdf
  10. Farazi, M.: Faceted lightweight ontologies : a formalization and some experiments (2010) 0.03
    0.033642136 = product of:
      0.06728427 = sum of:
        0.06728427 = product of:
          0.20185281 = sum of:
            0.20185281 = weight(_text_:3a in 4997) [ClassicSimilarity], result of:
              0.20185281 = score(doc=4997,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.46834838 = fieldWeight in 4997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4997)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    PhD dissertation at the International Doctorate School in Information and Communication Technology. Cf.: https://core.ac.uk/download/pdf/150083013.pdf
  11. Cerbo II, M.A.: Is there a future for library catalogers? (2011) 0.03
    0.03277594 = product of:
      0.06555188 = sum of:
        0.06555188 = product of:
          0.13110375 = sum of:
            0.13110375 = weight(_text_:ii in 1892) [ClassicSimilarity], result of:
              0.13110375 = score(doc=1892,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.4774404 = fieldWeight in 1892, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1892)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Zuccala, A.; Leeuwen, T.van: Book reviews in humanities research evaluations (2011) 0.03
    0.02897011 = product of:
      0.05794022 = sum of:
        0.05794022 = product of:
          0.11588044 = sum of:
            0.11588044 = weight(_text_:ii in 4771) [ClassicSimilarity], result of:
              0.11588044 = score(doc=4771,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.42200166 = fieldWeight in 4771, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4771)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Bibliometric evaluations of research outputs in the social sciences and humanities are challenging due to limitations associated with Web of Science data; however, background literature has shown that scholars are interested in stimulating improvements. We give special attention to book reviews processed by Web of Science history and literature journals, focusing on two types: Type I (i.e., reference to book only) and Type II (i.e., reference to book and other scholarly sources). Bibliometric data are collected and analyzed for a large set of reviews (1981-2009) to observe general publication patterns and patterns of citedness and co-citedness with books under review. Results show that reviews giving reference only to the book (Type I) are published more frequently while reviews referencing the book and other works (Type II) are more likely to be cited. The referencing culture of the humanities makes it difficult to understand patterns of co-citedness between books and review articles without further in-depth content analyses. Overall, citation counts to book reviews are typically low, but our data showed that they are scholarly and do play a role in the scholarly communication system. In the disciplines of history and literature, where book reviews are prominent, counting the number and type of reviews that a scholar produces throughout his/her career is a positive step forward in research evaluations. We propose a new set of journal quality indicators for the purpose of monitoring their scholarly influence.
  13. Hu, X.; Kando, N.: Task complexity and difficulty in music information retrieval (2017) 0.03
    0.02897011 = product of:
      0.05794022 = sum of:
        0.05794022 = product of:
          0.11588044 = sum of:
            0.11588044 = weight(_text_:ii in 3690) [ClassicSimilarity], result of:
              0.11588044 = score(doc=3690,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.42200166 = fieldWeight in 3690, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3690)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    There has been little research on task complexity and difficulty in music information retrieval (MIR), whereas many studies in the text retrieval domain have found that task complexity and difficulty have significant effects on user effectiveness. This study aimed to bridge the gap by exploring i) the relationship between task complexity and difficulty; ii) factors affecting task difficulty; and iii) the relationship between task difficulty, task complexity, and user search behaviors in MIR. An empirical user experiment was conducted with 51 participants and a novel MIR system. The participants searched for 6 topics across 3 complexity levels. The results revealed that i) perceived task difficulty in music search is influenced by task complexity, user background, system affordances, and task uncertainty and enjoyability; and ii) perceived task difficulty in MIR is significantly correlated with effectiveness metrics such as the number of songs found, number of clicks, and task completion time. The findings have implications for the design of music search tasks (in research) or use cases (in system development) as well as future MIR systems that can detect task difficulty based on user effectiveness metrics.
  14. Saggi, M.K.; Jain, S.: ¬A survey towards an integration of big data analytics to big insights for value-creation (2018) 0.03
    0.02897011 = product of:
      0.05794022 = sum of:
        0.05794022 = product of:
          0.11588044 = sum of:
            0.11588044 = weight(_text_:ii in 5053) [ClassicSimilarity], result of:
              0.11588044 = score(doc=5053,freq=4.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.42200166 = fieldWeight in 5053, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5053)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Big Data Analytics (BDA) is increasingly becoming a trending practice that generates an enormous amount of data and provides new opportunities for relevant decision-making. Developments in Big Data Analytics provide a new paradigm and solutions for big data sources, storage, and advanced analytics. BDA provides a nuanced view of big data development and insights into how it can truly create value for firms and customers. This article presents a comprehensive, well-informed examination and realistic analysis of deploying big data analytics successfully in companies. It provides an overview of the architecture of BDA, including six components, namely: (i) data generation, (ii) data acquisition, (iii) data storage, (iv) advanced data analytics, (v) data visualization, and (vi) decision-making for value-creation. The seven V characteristics of BDA, namely Volume, Velocity, Variety, Valence, Veracity, Variability, and Value, are explored, and the various big data analytics tools, techniques, and technologies are described. Furthermore, the article presents a methodical analysis of the usage of Big Data Analytics in applications such as agriculture, healthcare, cyber security, and smart cities. It also highlights previous research, challenges, the current status, and future directions of big data analytics for various application platforms. This overview highlights three issues, namely (i) the concepts, characteristics, and processing paradigms of Big Data Analytics; (ii) the state-of-the-art framework for decision-making in BDA that helps companies gain insight into value-creation; and (iii) the current challenges of Big Data Analytics as well as possible future directions.
  15. Chu, H.: Information representation and retrieval in the digital age (2010) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 377) [ClassicSimilarity], result of:
              0.114715785 = score(doc=377,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 377, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=377)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Information representation and retrieval : an overview -- Information representation I : basic approaches -- Information representation II : related topics -- Language in information representation and retrieval -- Retrieval techniques and query representation -- Retrieval approaches -- Information retrieval models -- Information retrieval systems -- Retrieval of information unique in content or format -- The user dimension in information representation and retrieval -- Evaluation of information representation and retrieval -- Artificial intelligence in information representation and retrieval.
  16. Munkelt, J.: Erstellung einer DNB-Retrieval-Testkollektion (2018) 0.03
    0.028678946 = product of:
      0.057357892 = sum of:
        0.057357892 = product of:
          0.114715785 = sum of:
            0.114715785 = weight(_text_:ii in 4310) [ClassicSimilarity], result of:
              0.114715785 = score(doc=4310,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.41776034 = fieldWeight in 4310, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4310)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    II, 79 p.
  17. Cai, F.; Wang, S.; Rijke, M.de: Behavior-based personalization in web search (2017) 0.03
    0.028384795 = product of:
      0.05676959 = sum of:
        0.05676959 = product of:
          0.11353918 = sum of:
            0.11353918 = weight(_text_:ii in 3527) [ClassicSimilarity], result of:
              0.11353918 = score(doc=3527,freq=6.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.4134755 = fieldWeight in 3527, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3527)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Personalized search approaches tailor search results to users' current interests, so as to help improve the likelihood of a user finding relevant documents for their query. Previous work on personalized search focuses on using the content of the user's query and of the documents clicked to model the user's preference. In this paper we focus on a different type of signal: We investigate the use of behavioral information for the purpose of search personalization. That is, we consider clicks and dwell time for reranking an initially retrieved list of documents. In particular, we (i) investigate the impact of distributions of users and queries on document reranking; (ii) estimate the relevance of a document for a query at 2 levels, at the query-level and at the word-level, to alleviate the problem of sparseness; and (iii) perform an experimental evaluation both for users seen during the training period and for users not seen during training. For the latter, we explore the use of information from similar users who have been seen during the training period. We use the dwell time on clicked documents to estimate a document's relevance to a query, and perform Bayesian probabilistic matrix factorization to generate a relevance distribution of a document over queries. Our experiments show that: (i) for personalized ranking, behavioral information helps to improve retrieval effectiveness; and (ii) given a query, merging information inferred from behavior of a particular user and from behaviors of other users with a user-dependent adaptive weight outperforms any combination with a fixed weight.
    Footnote
    A preliminary version of this paper was published in the proceedings of SIGIR '14. In this extension, we (i) extend the behavioral personalization search model introduced there to deal with queries issued by new users for whom long-term search logs are unavailable; (ii) examine the impact of sparseness on the performance of our model by considering both word-level and query-level modeling, as we find that the word-document relevance matrix is less sparse than the query-document relevance matrix; (iii) investigate the effectiveness of our behavior-based reranking model with and without assuming a uniform distribution of users as users may behave differently; (iv) include more related work and provide a detailed discussion of the experimental results.
  18. Xiong, C.: Knowledge based text representations for information retrieval (2016) 0.03
    0.02691371 = product of:
      0.05382742 = sum of:
        0.05382742 = product of:
          0.16148226 = sum of:
            0.16148226 = weight(_text_:3a in 5820) [ClassicSimilarity], result of:
              0.16148226 = score(doc=5820,freq=2.0), product of:
                0.4309886 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3746787 = fieldWeight in 5820, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5820)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Content
    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Language and Information Technologies. Cf.: https://www.cs.cmu.edu/~cx/papers/knowledge_based_text_representation.pdf
  19. Aerts, D.; Broekaert, J.; Sozzo, S.; Veloz, T.: Meaning-focused and quantum-inspired information retrieval (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 735) [ClassicSimilarity], result of:
              0.098327816 = score(doc=735,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=735)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In recent years, quantum-based methods have been promisingly integrated with the traditional procedures of information retrieval (IR) and natural language processing (NLP). Inspired by our research on the identification and application of quantum structures in cognition, more specifically our work on the representation of concepts and their combinations, we put forward a 'quantum meaning based' framework for structured query retrieval in text corpora and standardized testing corpora. This scheme for IR rests on two basic notions: (i) 'entities of meaning', e.g., concepts and their combinations, and (ii) traces of such entities of meaning, which is how documents are treated in this approach. The meaning content of these 'entities of meaning' is reconstructed by solving an 'inverse problem' in the quantum formalism, consisting of reconstructing the full states of the entities of meaning from their collapsed states identified as traces in relevant documents. The advantages with respect to traditional approaches, such as Latent Semantic Analysis (LSA), are discussed by means of concrete examples.
  20. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.02
    0.024581954 = product of:
      0.049163908 = sum of:
        0.049163908 = product of:
          0.098327816 = sum of:
            0.098327816 = weight(_text_:ii in 1142) [ClassicSimilarity], result of:
              0.098327816 = score(doc=1142,freq=2.0), product of:
                0.2745971 = queryWeight, product of:
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.050836053 = queryNorm
                0.3580803 = fieldWeight in 1142, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4016213 = idf(docFreq=541, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1142)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We provide a brief, non-technical introduction to the text mining methodology known as "topic modeling." We summarize the theory and background of the method and discuss what kinds of things are found by topic models. Using a text corpus comprising the eight articles from the special issue of Poetics on the subject of topic models, we run a topic model on these articles, both to introduce the methodology and to help summarize some of the ways in which social and cultural scientists are using topic models. We review some of the critiques and debates over the use of the method and, finally, we link these developments back to some of the original innovations in the field of content analysis that were pioneered by Harold D. Lasswell and colleagues during and just after World War II.
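
The score breakdowns attached to each result above follow Lucene's ClassicSimilarity (TF-IDF) explain format: each matching term contributes queryWeight (idf x queryNorm) multiplied by fieldWeight (tf x idf x fieldNorm), the term contributions are summed, and coord factors scale the sum by the fraction of query clauses that matched. The sketch below is a minimal reconstruction of that arithmetic for result 2 (Chianese et al., doc 3733); the tf and idf formulas are assumed from Lucene's ClassicSimilarity, while all constants are read directly off the explain output above.

    import math

    # Minimal sketch of ClassicSimilarity scoring, assuming tf = sqrt(freq) and
    # idf = ln(maxDocs / (docFreq + 1)) + 1; constants copied from doc=3733 above.

    def tf(freq: float) -> float:
        return math.sqrt(freq)

    def idf(doc_freq: int, max_docs: int) -> float:
        return math.log(max_docs / (doc_freq + 1)) + 1

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        query_weight = idf(doc_freq, max_docs) * query_norm              # e.g. 0.2745971
        field_weight = tf(freq) * idf(doc_freq, max_docs) * field_norm   # e.g. 0.29840025
        return query_weight * field_weight

    MAX_DOCS, QUERY_NORM, FIELD_NORM = 44218, 0.050836053, 0.0390625

    w_ii = term_score(2.0, 541, MAX_DOCS, QUERY_NORM, FIELD_NORM)    # ~0.08193985
    w_22 = term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)   # ~0.034437917

    # coord(1/2): only one of the two top-level query clauses matched this document
    doc_score = (w_ii + w_22) * 0.5
    print(round(doc_score, 8))   # ~0.05818888, displayed as 0.06 in the result list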

Types

  • a 473
  • m 45
  • el 19
  • s 17
  • x 5
  • b 4
  • i 1
  • r 1