Search (210 results, page 2 of 11)

  • × language_ss:"e"
  • × type_ss:"m"
  • × year_i:[2010 TO 2020}
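
The filter chips above use Solr/Lucene field-query syntax (the year range is inclusive at 2010 and exclusive at 2020), and the header reports page 2 of 11 for 210 hits. Below is a minimal sketch, assuming a Solr backend, of how such a filtered, paged request might be issued; the endpoint URL, core name, and query string are assumptions (the original query is not shown on this page), and 20 results per page is inferred from the listing:

  import requests

  # Hypothetical Solr endpoint; the host and core name "records" are assumptions.
  SOLR_SELECT = "http://localhost:8983/solr/records/select"

  params = {
      "q": "...",                    # original query string not shown in the listing
      "fq": [                        # the three active filters listed above
          'language_ss:"e"',
          'type_ss:"m"',
          "year_i:[2010 TO 2020}",   # [ = inclusive lower bound, } = exclusive upper bound
      ],
      "start": 20,                   # page 2, assuming 20 results per page (210 hits / 11 pages)
      "rows": 20,
      "wt": "json",
      "debugQuery": "true",          # asks Solr for the per-document score explanations shown below
  }

  response = requests.get(SOLR_SELECT, params=params)
  print(response.json()["response"]["numFound"])   # 210 for this search
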
  1. Gossen, T.: Search engines for children : search user interfaces and information-seeking behaviour (2016) 0.01
    0.012970729 = product of:
      0.025941458 = sum of:
        0.025941458 = sum of:
          0.004101291 = weight(_text_:a in 2752) [ClassicSimilarity], result of:
            0.004101291 = score(doc=2752,freq=6.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.07722905 = fieldWeight in 2752, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2752)
          0.021840166 = weight(_text_:22 in 2752) [ClassicSimilarity], result of:
            0.021840166 = score(doc=2752,freq=2.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.1354154 = fieldWeight in 2752, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=2752)
      0.5 = coord(1/2)
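    The explain tree above can be re-derived with a short sketch (Python; not part of the original record). Every constant is copied from the explanation; the code only spells out how Lucene's ClassicSimilarity combines them: each matching term contributes queryWeight * fieldWeight, with queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the sum is scaled by the coord factor.

      import math

      query_norm = 0.046056706

      def term_weight(freq, idf, field_norm):
          tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
          query_weight = idf * query_norm       # queryWeight = idf * queryNorm
          field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight    # weight(term) = queryWeight * fieldWeight

      w_a  = term_weight(freq=6.0, idf=1.153047,  field_norm=0.02734375)  # ~ 0.004101291 above
      w_22 = term_weight(freq=2.0, idf=3.5018296, field_norm=0.02734375)  # ~ 0.021840166 above

      coord = 0.5                               # coord(1/2): 1 of 2 query clauses matched
      score = (w_a + w_22) * coord
      print(f"{score:.9f}")                     # ~ 0.012970729 up to display rounding; shown as 0.01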
    
    Abstract
    The doctoral thesis of Tatiana Gossen formulates criteria and guidelines on how to design the user interfaces of search engines for children. In her work, the author identifies the conceptual challenges based on her own and previous user studies, and addresses the changing characteristics of the users by providing a means of adaptation. Additionally, a novel type of search result visualisation for children with cartoon-style characters is developed, taking children's preference for visual information into account.
    Content
    Contents: Acknowledgments; Abstract; Zusammenfassung; Contents; List of Figures; List of Tables; List of Acronyms; Chapter 1 Introduction; 1.1 Research Questions; 1.2 Thesis Outline; Part I Fundamentals; Chapter 2 Information Retrieval for Young Users; 2.1 Basics of Information Retrieval; 2.1.1 Architecture of an IR System; 2.1.2 Relevance Ranking; 2.1.3 Search User Interfaces; 2.1.4 Targeted Search Engines; 2.2 Aspects of Child Development Relevant for Information Retrieval Tasks; 2.2.1 Human Cognitive Development; 2.2.2 Information Processing Theory; 2.2.3 Psychosocial Development; 2.3 User Studies and Evaluation; 2.3.1 Methods in User Studies; 2.3.2 Types of Evaluation; 2.3.3 Evaluation with Children; 2.4 Discussion; Chapter 3 State of the Art; 3.1 Children's Information-Seeking Behaviour; 3.1.1 Querying Behaviour; 3.1.2 Search Strategy; 3.1.3 Navigation Style; 3.1.4 User Interface; 3.1.5 Relevance Judgement; 3.2 Existing Algorithms and User Interface Concepts for Children; 3.2.1 Query; 3.2.2 Content; 3.2.3 Ranking; 3.2.4 Search Result Visualisation; 3.3 Existing Information Retrieval Systems for Children; 3.3.1 Digital Book Libraries; 3.3.2 Web Search Engines; 3.4 Summary and Discussion; Part II Studying Open Issues; Chapter 4 Usability of Existing Search Engines for Young Users; 4.1 Assessment Criteria; 4.1.1 Criteria for Matching the Motor Skills; 4.1.2 Criteria for Matching the Cognitive Skills; 4.2 Results; 4.2.1 Conformance with Motor Skills; 4.2.2 Conformance with the Cognitive Skills; 4.2.3 Presentation of Search Results; 4.2.4 Browsing versus Searching; 4.2.5 Navigational Style; 4.3 Summary and Discussion; Chapter 5 Large-scale Analysis of Children's Queries and Search Interactions; 5.1 Dataset; 5.2 Results; 5.3 Summary and Discussion; Chapter 6 Differences in Usability and Perception of Targeted Web Search Engines between Children and Adults; 6.1 Related Work; 6.2 User Study; 6.3 Study Results; 6.4 Summary and Discussion; Part III Tackling the Challenges; Chapter 7 Search User Interface Design for Children; 7.1 Conceptual Challenges and Possible Solutions; 7.2 Knowledge Journey Design; 7.3 Evaluation; 7.3.1 Study Design; 7.3.2 Study Results; 7.4 Voice-Controlled Search: Initial Study; 7.4.1 User Study; 7.5 Summary and Discussion; Chapter 8 Addressing User Diversity; 8.1 Evolving Search User Interface; 8.1.1 Mapping Function; 8.1.2 Evolving Skills; 8.1.3 Detection of User Abilities; 8.1.4 Design Concepts; 8.2 Adaptation of a Search User Interface towards User Needs; 8.2.1 Design & Implementation; 8.2.2 Search Input; 8.2.3 Result Output; 8.2.4 General Properties; 8.2.5 Configuration and Further Details; 8.3 Evaluation; 8.3.1 Study Design; 8.3.2 Study Results; 8.3.3 Preferred UI Settings; 8.3.4 User Satisfaction; 8.4 Knowledge Journey Exhibit; 8.4.1 Hardware; 8.4.2 Frontend; 8.4.3 Backend; 8.5 Summary and Discussion; Chapter 9 Supporting Visual Searchers in Processing Search Results; 9.1 Related Work
    Date
    1. 2.2016 18:25:22
  2. Scholarly metrics under the microscope : from citation analysis to academic auditing (2015) 0.01
    0.012480095 = product of:
      0.02496019 = sum of:
        0.02496019 = product of:
          0.04992038 = sum of:
            0.04992038 = weight(_text_:22 in 4654) [ClassicSimilarity], result of:
              0.04992038 = score(doc=4654,freq=2.0), product of:
                0.16128273 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046056706 = queryNorm
                0.30952093 = fieldWeight in 4654, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4654)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2017 17:12:50
  3. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
    0.010920083 = product of:
      0.021840166 = sum of:
        0.021840166 = product of:
          0.043680333 = sum of:
            0.043680333 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.043680333 = score(doc=3283,freq=2.0), product of:
                0.16128273 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046056706 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Paradigms and conceptual systems in knowledge organization : Proceedings of the Eleventh International ISKO Conference, 23-26 February 2010 Rome, Italy (2010) 0.01
    0.009360071 = product of:
      0.018720143 = sum of:
        0.018720143 = product of:
          0.037440285 = sum of:
            0.037440285 = weight(_text_:22 in 773) [ClassicSimilarity], result of:
              0.037440285 = score(doc=773,freq=2.0), product of:
                0.16128273 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046056706 = queryNorm
                0.23214069 = fieldWeight in 773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=773)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 2.2013 12:09:34
  5. Informationswissenschaft zwischen virtueller Infrastruktur und materiellen Lebenswelten : Proceedings des 13. Internationalen Symposiums für Informationswissenschaft (ISI 2013), Potsdam, 19.-22. März 2013. (2013) 0.01
    0.009360071 = product of:
      0.018720143 = sum of:
        0.018720143 = product of:
          0.037440285 = sum of:
            0.037440285 = weight(_text_:22 in 2979) [ClassicSimilarity], result of:
              0.037440285 = score(doc=2979,freq=2.0), product of:
                0.16128273 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046056706 = queryNorm
                0.23214069 = fieldWeight in 2979, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2979)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Functional requirements for subject authority data (FRSAD) : a conceptual model (2011) 0.00
    0.0032090992 = product of:
      0.0064181983 = sum of:
        0.0064181983 = product of:
          0.012836397 = sum of:
            0.012836397 = weight(_text_:a in 2880) [ClassicSimilarity], result of:
              0.012836397 = score(doc=2880,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.24171482 = fieldWeight in 2880, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2880)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The purpose of authority control is to ensure consistency in representing a value - a name of a person, a place name, or a term or code representing a subject - in the elements used as access points in information retrieval. The primary purpose of this study is to produce a framework that will provide a clearly stated and commonly shared understanding of what the subject authority data/record/file aims to provide information about, and the expectation of what such data should achieve in terms of answering user needs.
    Editor
    Zeng, M.L., M. Zumer and A. Salaba
  7. Hidalgo, C.: Why information grows : the evolution of order, from atoms to economies (2015) 0.00
    0.0026202186 = product of:
      0.005240437 = sum of:
        0.005240437 = product of:
          0.010480874 = sum of:
            0.010480874 = weight(_text_:a in 2154) [ClassicSimilarity], result of:
              0.010480874 = score(doc=2154,freq=30.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19735932 = fieldWeight in 2154, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2154)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Why do some nations prosper while others do not? While economists often turn to measures like GDP or per-capita income to answer this question, interdisciplinary theorist Cesar Hidalgo argues that there is a better way to understand economic success. Instead of measuring the money a country makes, he proposes, we can learn more from measuring a country's ability to make complex products - in other words, the ability to turn an idea into an artifact and imagination into capital. In Why Information Grows, Hidalgo combines the seemingly disparate fields of economic development and physics to present this new rubric for economic growth. He argues that viewing development solely in terms of money and politics is too simplistic to provide a true understanding of national wealth. Rather, we should be investigating what makes some countries more capable than others. Complex products - from films to robots, apps to automobiles - are a physical distillation of an economy's knowledge, a measurable embodiment of the education, infrastructure, and capability of an economy. Economic wealth is about applying this knowledge to turn ideas into tangible products, and the more complex these products, the more economic growth a country will experience. Just look at the East Asian countries, he argues, whose rapid rise can be attributed to their ability to manufacture products at all levels of complexity. A radical new interpretation of global economics, Why Information Grows overturns traditional assumptions about wealth and development. In a world where knowledge is quite literally power, Hidalgo shows how we can create societies that are limited by nothing more than their imagination.
    Why do some nations prosper while others do not? Economists usually turn to measures such as gross domestic product or per capita income to answer this question, but interdisciplinary theorist Cesar Hidalgo argues that we can learn more by measuring a country's ability to make complex products. In Why Information Grows, Hidalgo combines the seemingly disparate fields of economic development and physics to present this new rubric for economic growth. He believes that we should investigate what makes some countries more capable than others. Complex products - from films to robots, apps to automobiles - are a physical distillation of an economy's knowledge, a measurable embodiment of its education, infrastructure, and capability. Economic wealth accrues when applications of this knowledge turn ideas into tangible products; the more complex its products, the more economic growth a country will experience. A radical new interpretation of global economics, Why Information Grows overturns traditional assumptions about the development of economies and the origins of wealth and takes a crucial step toward making economics less the dismal science and more the insightful one.
  8. Frické, M.: Logic and the organization of information (2012) 0.00
    0.0025115174 = product of:
      0.0050230348 = sum of:
        0.0050230348 = product of:
          0.0100460695 = sum of:
            0.0100460695 = weight(_text_:a in 1782) [ClassicSimilarity], result of:
              0.0100460695 = score(doc=1782,freq=36.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18917176 = fieldWeight in 1782, product of:
                  6.0 = tf(freq=36.0), with freq of:
                    36.0 = termFreq=36.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1782)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Logic and the Organization of Information closely examines the historical and contemporary methodologies used to catalogue information objects-books, ebooks, journals, articles, web pages, images, emails, podcasts and more-in the digital era. This book provides an in-depth technical background for digital librarianship, and covers a broad range of theoretical and practical topics including: classification theory, topic annotation, automatic clustering, generalized synonymy and concept indexing, distributed libraries, semantic web ontologies and Simple Knowledge Organization System (SKOS). It also analyzes the challenges facing today's information architects, and outlines a series of techniques for overcoming them. Logic and the Organization of Information is intended for practitioners and professionals working at a design level as a reference book for digital librarianship. Advanced-level students, researchers and academics studying information science, library science, digital libraries and computer science will also find this book invaluable.
    Footnote
    Review in: J. Doc. 70(2014) no.4: "Books on the organization of information and knowledge, aimed at a library/information audience, tend to fall into two clear categories. Most are practical and pragmatic, explaining the "how" as much or more than the "why". Some are theoretical, in part or in whole, showing how the practice of classification, indexing, resource description and the like relates to philosophy, logic, and other foundational bases; the books by Langridge (1992) and by Svenonius (2000) are well-known examples of this latter kind. To this category certainly belongs a recent book by Martin Frické (2012). The author takes the reader for an extended tour through a variety of aspects of information organization, including classification and taxonomy, alphabetical vocabularies and indexing, cataloguing and FRBR, and aspects of the semantic web. The emphasis throughout is on showing how practice is, or should be, underpinned by formal structures; there is a particular emphasis on first-order predicate calculus. The advantages of a greater, and more explicit, use of symbolic logic are a recurring theme of the book. There is a particularly commendable historical dimension, often omitted in texts on this subject. It cannot be said that this book is entirely an easy read, although it is well written with a helpful index, and its arguments are generally well supported by clear and relevant examples. It is thorough and detailed, but thereby seems better geared to the needs of advanced students and researchers than to the practitioners who are suggested as a main market. For graduate students in library/information science and related disciplines, in particular, this will be a valuable resource. I would place it alongside Svenonius's book as the best insight into the theoretical "why" of information organization. It has evoked a good deal of interest, including a set of essay commentaries in Journal of Information Science (Gilchrist et al., 2013). Introducing these, Alan Gilchrist rightly says that Frické deserves a salute for making explicit the fundamental relationship between the ancient discipline of logic and modern information organization. If information science is to continue to develop, and make a contribution to the organization of the information environments of the future, then this book sets the groundwork for the kind of studies which will be needed." (D. Bawden)
  9. Hassanzadeh, O.; Kementsietsidis, A.; Lim, L.; Miller, R.J.; Wang, M.: Semantic link discovery over relational data (2012) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 412) [ClassicSimilarity], result of:
              0.00994303 = score(doc=412,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 412, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=412)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    From small research groups to large organizations, there has been tremendous effort in the last few years in publishing data online so that it is widely accessible to a large community. These efforts have been successful across a number of domains and have resulted in a proliferation of online sources. In the field of biology, there were 1,330 major online molecular databases at the beginning of 2011, which is 96 more than a year earlier. In the Linking Open Data (LOD) community project at the W3C, the number of published RDF triples has grown from 500 million in May 2007 to over 28 billion triples in March 2011. Fueling this data publishing explosion are tools for translating relational and semistructured data into RDF. In this chapter, we present LinQuer, a generic and extensible framework for integrating link discovery methods over relational data.
  10. Walsh, T.: Machines that think : the future of artificial intelligence (2018) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 4479) [ClassicSimilarity], result of:
              0.00994303 = score(doc=4479,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 4479, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4479)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A scientist who has spent a career developing Artificial Intelligence takes a realistic look at the technological challenges and assesses the likely effect of AI on the future. How will Artificial Intelligence (AI) impact our lives? Toby Walsh, one of the leading AI researchers in the world, takes a critical look at the many ways in which "thinking machines" will change our world. Based on a deep understanding of the technology, Walsh describes where Artificial Intelligence is today, and where it will take us. Will automation take away most of our jobs? Is a "technological singularity" near? What is the chance that robots will take over? How do we best prepare for this future? The author concludes that, if we plan well, AI could be our greatest legacy, the last invention human beings will ever need to make.
  11. Mirizzi, R.; Ragone, A.; Noia, T. Di; Sciascio, E. Di: A recommender system for linked data (2012) 0.00
    0.0024392908 = product of:
      0.0048785815 = sum of:
        0.0048785815 = product of:
          0.009757163 = sum of:
            0.009757163 = weight(_text_:a in 436) [ClassicSimilarity], result of:
              0.009757163 = score(doc=436,freq=26.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18373153 = fieldWeight in 436, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=436)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Peter and Alice are at home; it is a calm winter night, snow is falling, and it is too cold to go outside. "Why don't we just order a pizza and watch a movie?" says Alice, wrapped in her favorite blanket. "Why not?" Peter replies. "Which movie do you wanna watch?" "Well, what about some comedy, romance-like one? Com'on Pete, look on Facebook, there is that nice application Kara suggested to me some days ago!" answers Alice. "Oh yes, MORE, here we go, tell me a movie you like a lot," says Peter, excited. "Uhm, I wanna see something like the Bridget Jones's Diary or Four Weddings and a Funeral, humour, romance, good actors..." replies his beloved, rubbing her hands. Peter is a bit concerned, as he is more into the fantasy genre, but he wants to please Alice, so he looks on MORE for movies similar to the Bridget Jones's Diary and Four Weddings and a Funeral: "Here we are my dear, MORE suggests the sequel or, if you prefer, Love Actually." "I would prefer the second." "Great! Let's rent it!" nods Peter in agreement. The scenario just presented highlights an interesting and useful feature of a modern Web application. There are tasks where users look for items similar to the ones they already know. Hence, we need systems that recommend items based on user preferences. In other words, systems should allow an easy and friendly exploration of the information/data related to a particular domain of interest. Such characteristics are well known in the literature and in common applications such as recommender systems. Nevertheless, new challenges in this field arise when the information used by these systems exploits the huge amount of interlinked data coming from the Semantic Web. In this chapter, we present MORE, a system for 'movie recommendation' in the Web of Data.
  12. Chalmers, D.J.: Constructing the world (2012) 0.00
    0.0024392908 = product of:
      0.0048785815 = sum of:
        0.0048785815 = product of:
          0.009757163 = sum of:
            0.009757163 = weight(_text_:a in 4402) [ClassicSimilarity], result of:
              0.009757163 = score(doc=4402,freq=26.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18373153 = fieldWeight in 4402, product of:
                  5.0990195 = tf(freq=26.0), with freq of:
                    26.0 = termFreq=26.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4402)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    David J. Chalmers constructs a highly ambitious and original picture of the world, from a few basic elements. He develops and extends Rudolf Carnap's attempt to do the same in Der Logische Aufbau Der Welt (1928). Carnap gave a blueprint for describing the entire world using a limited vocabulary, so that all truths about the world could be derived from that description--but his Aufbau is often seen as a noble failure. In Constructing the World, Chalmers argues that something like the Aufbau project can succeed. With the right vocabulary and the right derivation relation, we can indeed construct the world. The focal point of Chalmers's project is scrutability: roughly, the thesis that ideal reasoning from a limited class of basic truths yields all truths about the world. Chalmers first argues for the scrutability thesis and then considers how small the base can be. All this can be seen as a project in metaphysical epistemology: epistemology in service of a global picture of the world and of our conception thereof. The scrutability framework has ramifications throughout philosophy. Using it, Chalmers defends a broadly Fregean approach to meaning, argues for an internalist approach to the contents of thought, and rebuts W. V. Quine's arguments against the analytic and the a priori. He also uses scrutability to analyze the unity of science, to defend a conceptual approach to metaphysics, and to mount a structuralist response to skepticism. Based on Chalmers's 2010 John Locke lectures, Constructing the World opens up debate on central areas of philosophy including philosophy of language, consciousness, knowledge, and reality. This major work by a leading philosopher will appeal to philosophers in all areas.
  13. Wright, A.: Cataloging the world : Paul Otlet and the birth of the information age (2014) 0.00
    0.00243342 = product of:
      0.00486684 = sum of:
        0.00486684 = product of:
          0.00973368 = sum of:
            0.00973368 = weight(_text_:a in 2788) [ClassicSimilarity], result of:
              0.00973368 = score(doc=2788,freq=46.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18328933 = fieldWeight in 2788, product of:
                  6.78233 = tf(freq=46.0), with freq of:
                    46.0 = termFreq=46.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2788)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In 1934, a Belgian entrepreneur named Paul Otlet sketched out plans for a worldwide network of computers-or "electric telescopes," as he called them - that would allow people anywhere in the world to search and browse through millions of books, newspapers, photographs, films and sound recordings, all linked together in what he termed a reseau mondial: a "worldwide web." Today, Otlet and his visionary proto-Internet have been all but forgotten, thanks to a series of historical misfortunes - not least of which involved the Nazis marching into Brussels and destroying most of his life's work. In the years since Otlet's death, however, the world has witnessed the emergence of a global network that has proved him right about the possibilities - and the perils - of networked information. In Cataloging the World, Alex Wright brings to light the forgotten genius of Paul Otlet, an introverted librarian who harbored a bookworm's dream to organize all the world's information. Recognizing the limitations of traditional libraries and archives, Otlet began to imagine a radically new way of organizing information, and undertook his life's great work: a universal bibliography of all the world's published knowledge that ultimately totaled more than 12 million individual entries. That effort eventually evolved into the Mundaneum, a vast "city of knowledge" that opened its doors to the public in 1921 to widespread attention. Like many ambitious dreams, however, Otlet's eventually faltered, a victim to technological constraints and political upheaval in Europe on the eve of World War II. Wright tells not just the story of a failed entrepreneur, but the story of a powerful idea - the dream of universal knowledge - that has captivated humankind since before the great Library at Alexandria. Cataloging the World explores this story through the prism of today's digital age, considering the intellectual challenge and tantalizing vision of Otlet's digital universe that in some ways seems far more sophisticated than the Web as we know it today.
    The dream of universal knowledge hardly started with the digital age. From the archives of Sumeria to the Library of Alexandria, humanity has long wrestled with information overload and management of intellectual output. Revived during the Renaissance and picking up pace in the Enlightenment, the dream grew and by the late nineteenth century was embraced by a number of visionaries who felt that at long last it was within their grasp. Among them, Paul Otlet stands out. A librarian by training, he worked at expanding the potential of the catalogue card -- the world's first information chip. From there followed universal libraries and reading rooms, connecting his native Belgium to the world -- by means of vast collections of cards that brought together everything that had ever been put to paper. Recognizing that the rapid acceleration of technology was transforming the world's intellectual landscape, Otlet devoted himself to creating a universal bibliography of all published knowledge. Ultimately totaling more than 12 million individual entries, it would evolve into the Mundaneum, a vast "city of knowledge" that opened its doors to the public in 1921. By 1934, Otlet had drawn up plans for a network of "electric telescopes" that would allow people everywhere to search through books, newspapers, photographs, and recordings, all linked together in what he termed a réseau mondial: a worldwide web. It all seemed possible, almost until the moment when the Nazis marched into Brussels and carted it all away. In Cataloging the World, Alex Wright places Otlet in the long continuum of visionaries and pioneers who have dreamed of unifying the world's knowledge, from H.G. Wells and Melvil Dewey to Ted Nelson and Steve Jobs. And while history has passed Otlet by, Wright shows that his legacy persists in today's networked age, where Internet corporations like Google and Twitter play much the same role that Otlet envisioned for the Mundaneum -- as the gathering and distribution channels for the world's intellectual output. In this sense, Cataloging the World is more than just the story of a failed entrepreneur; it is an ongoing story of a powerful idea that has captivated humanity from time immemorial, and that continues to inspire many of us in today's digital age.
  14. Melucci, M.: Contextual search : a computational framework (2012) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 4913) [ClassicSimilarity], result of:
              0.009567685 = score(doc=4913,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 4913, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4913)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The growing availability of data in electronic form, the expansion of the World Wide Web and the accessibility of computational methods for large-scale data processing have allowed researchers in Information Retrieval (IR) to design systems which can effectively and efficiently constrain search within the boundaries given by context, thus transforming classical search into contextual search. Contextual Search: A Computational Framework introduces contextual search within a computational framework based on contextual variables, contextual factors and statistical models. It describes how statistical models can process contextual variables to infer the contextual factors underlying the current search context. It also provides background to the subject by: placing it among other surveys on relevance, interaction, context, and behaviour; providing a description of the contextual variables used for implementing the statistical models which represent and predict relevance and contextual factors; and providing an overview of the evaluation methodologies and findings relevant to this subject. Contextual Search: A Computational Framework is a highly recommended read, both for beginners who are embarking on research in this area and as a useful reference for established IR researchers.
    Content
    Table of contents: 1. Introduction; 2. Query Intent; 3. Personal Interest; 4. Document Quality; 5. Contextual Search Evaluation; 6. Conclusions; Acknowledgements; References; A. Implementations
  15. Arafat, S.; Ashoori, E.: Search foundations : toward a science of technology-mediated experience (2018) 0.00
    0.0023435948 = product of:
      0.0046871896 = sum of:
        0.0046871896 = product of:
          0.009374379 = sum of:
            0.009374379 = weight(_text_:a in 158) [ClassicSimilarity], result of:
              0.009374379 = score(doc=158,freq=24.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17652355 = fieldWeight in 158, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=158)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This book contributes to discussions within Information Retrieval and Science (IR&S) by improving our conceptual understanding of the relationship between humans and technology. A call to redirect the intellectual focus of information retrieval and science (IR&S) toward the phenomenon of technology-mediated experience. In this book, Sachi Arafat and Elham Ashoori issue a call to reorient the intellectual focus of information retrieval and science (IR&S) away from search and related processes toward the more general phenomenon of technology-mediated experience. Technology-mediated experience accounts for an increasing proportion of human lived experience; the phenomenon of mediation gets at the heart of the human-machine relationship. Framing IR&S more broadly in this way generalizes its problems and perspectives, dovetailing them with those shared across disciplines dealing with socio-technical phenomena. This reorientation of IR&S requires imagining it as a new kind of science: a science of technology-mediated experience (STME). Arafat and Ashoori not only offer detailed analysis of the foundational concepts underlying IR&S and other technical disciplines but also boldly call for a radical, systematic appropriation of the sciences and humanities to create a better understanding of the human-technology relationship. Arafat and Ashoori discuss the notion of progress in IR&S and consider ideas of progress from the history and philosophy of science. They argue that progress in IR&S requires explicit linking between technical and nontechnical aspects of discourse. They develop a network of basic questions and present a discursive framework for addressing these questions. With this book, Arafat and Ashoori provide both a manifesto for the reimagining of their field and the foundations on which a reframed IR&S would rest.
    Content
    The embedding of the foundational in the adhoc -- Notions of progress in information retrieval -- From growth to progress I : methodology for understanding progress -- From growth to progress II : the network of discourse -- Basic questions characterising foundations discourse -- Enduring nature of foundations -- Foundations as the way to the authoritative against the authoritarian : a conclusion
  16. Hart, A.: RDA made simple : a practical guide to the new cataloging rules (2014) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 2807) [ClassicSimilarity], result of:
              0.009076704 = score(doc=2807,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 2807, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2807)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Looking for a comprehensive, all-in-one guide to RDA that keeps it simple and provides exactly what you need to know? This book covers planning and training considerations, presents relevant FRBR and FRAD background, and offers practical, step-by-step cataloging advice for a variety of material formats.
    • Supplies an accessible, up-to-date guide to RDA in a single resource
    • Covers history and development of the new cataloging code, including the results of the U.S. RDA Test Coordinating Committee Report
    • Presents the latest information on RDA cataloging for multiple material formats, including print, audiovisual, and digital resources
    • Explains how RDA's concepts, structure, and vocabulary are based on FRBR (Functional Requirements for Bibliographic Records) and FRAD (Functional Requirements for Authority Data), both of which are reviewed in the book
  17. Giedion, S.: Mechanization takes command : a contribution to anonymous history (2013) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 3445) [ClassicSimilarity], result of:
              0.009076704 = score(doc=3445,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 3445, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3445)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    First published in 1948, "Mechanization Takes Command" is an examination of mechanization and its effects on everyday life. A monumental figure in the field of architectural history, Sigfried Giedion traces the evolution and resulting philosophical implications of such disparate innovations as the slaughterhouse, the Yale lock, the assembly line, tractors, ovens, and comfort as defined by advancements in furniture design. A groundbreaking text when originally published, Giedion's pioneering work remains an important contribution to architecture, philosophy, and technology studies.
    Classification
    Techn. I 2 a 1
    SBB
    Techn. I 2 a 1
  18. Bizer, C.; Heath, T.: Linked Data : evolving the web into a global data space (2011) 0.00
    0.0022438213 = product of:
      0.0044876426 = sum of:
        0.0044876426 = product of:
          0.008975285 = sum of:
            0.008975285 = weight(_text_:a in 4725) [ClassicSimilarity], result of:
              0.008975285 = score(doc=4725,freq=22.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16900843 = fieldWeight in 4725, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4725)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.
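    As a minimal illustration of the publishing pattern described above (identify a resource with an HTTP URI, describe it with RDF triples, and link it into other datasets), a generic sketch using the rdflib Python library follows; the namespace, URIs, and property choices are illustrative assumptions, not taken from the book.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/id/")        # made-up namespace for illustration

      g = Graph()
      g.bind("ex", EX)
      book = EX["linked-data-book"]                   # HTTP URI identifying the resource
      g.add((book, RDF.type, EX.Book))                # a description of the resource ...
      g.add((book, RDFS.label, Literal("Linked Data: evolving the web into a global data space")))
      g.add((book, RDFS.seeAlso,                      # ... plus an outgoing link into another dataset
             URIRef("http://dbpedia.org/resource/Linked_data")))

      print(g.serialize(format="turtle"))             # the publishable RDF description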
  19. Koltay, T.: Abstracts and abstracting : a genre and set of skills for the twenty-first century (2010) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 4125) [ClassicSimilarity], result of:
              0.00894975 = score(doc=4125,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 4125, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4125)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Despite their changing role, abstracts remain useful in the digital world. Aimed at both information professionals and researchers who work and publish in different fields, this book summarizes the most important and up-to-date theory of abstracting, as well as giving advice and examples for the practice of writing different kinds of abstracts. The book discusses the length, the functions and basic structure of abstracts. A new approach is outlined on the questions of informative and indicative abstracts. The abstractors' personality, their linguistic and non-linguistic knowledge and skills are also discussed with special attention. The process of abstracting, its steps and models, as well as the recipient's role are treated with special distinction. Abstracting is presented as an aimed (purported) understanding of the original text, its interpretation and then a special projection of the information deemed to be worthy of abstracting into a new text. Despite the relatively large number of textbooks on the topic, there is no up-to-date book on abstracting in the English language. In addition to providing a comprehensive coverage of the topic, the proposed book contains novel views - especially on informative and indicative abstracts. The discussion is based on an interdisciplinary approach, blending the methods of library and information science and linguistics. The book strives for a synthesis of theory and practice. The synthesis is based on a large and existing body of knowledge which, however, is often characterised by misleading terminology and flawed beliefs.
  20. Virgilio, R. De; Cappellari, P.; Maccioni, A.; Torlone, R.: Path-oriented keyword search query over RDF (2012) 0.00
    0.0022374375 = product of:
      0.004474875 = sum of:
        0.004474875 = product of:
          0.00894975 = sum of:
            0.00894975 = weight(_text_:a in 429) [ClassicSimilarity], result of:
              0.00894975 = score(doc=429,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1685276 = fieldWeight in 429, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=429)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We are witnessing a smooth evolution of the Web from a worldwide information space of linked documents to a global knowledge base, where resources are identified by means of uniform resource identifiers (URIs, essentially string identifiers) and are semantically described and correlated through resource description framework (RDF, a metadata data model) statements. With the size and availability of data constantly increasing (currently around 7 billion RDF triples and 150 million RDF links), a fundamental problem lies in the difficulty users face in finding and retrieving the information they are interested in. In general, to access semantic data, users need to know the organization of data and the syntax of a specific query language (e.g., SPARQL or variants thereof). Clearly, this represents an obstacle to information access for nonexpert users. For this reason, keyword search-based systems are increasingly capturing the attention of researchers. Recently, many approaches to keyword-based search over structured and semistructured data have been proposed. These approaches usually implement IR strategies on top of traditional database management systems with the goal of freeing the users from having to know data organization and query languages.
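    To make the barrier described above concrete, here is a sketch of the kind of structured query a user would otherwise have to write, using the SPARQLWrapper Python library against the public DBpedia endpoint; the resource and property are illustrative choices, the point being that the user must already know the URI, the vocabulary, and the SPARQL syntax that keyword search is meant to hide.

      from SPARQLWrapper import SPARQLWrapper, JSON

      sparql = SPARQLWrapper("https://dbpedia.org/sparql")   # public SPARQL endpoint
      sparql.setQuery("""
          PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
          SELECT ?label WHERE {
              <http://dbpedia.org/resource/Linked_data> rdfs:label ?label .
              FILTER (LANG(?label) = "en")
          } LIMIT 1
      """)
      sparql.setReturnFormat(JSON)

      results = sparql.query().convert()
      for row in results["results"]["bindings"]:
          print(row["label"]["value"])                        # the English label of the resource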

Types

  • s 56
  • el 3
  • i 2
  • b 1
  • n 1
