Search (210 results, page 1 of 11)

  • language_ss:"e"
  • type_ss:"m"
  • year_i:[2010 TO 2020}
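The three active facets are raw Solr/Lucene filter queries; note that `[2010 TO 2020}` is Solr's mixed-bracket range syntax (2010 inclusive, 2020 exclusive). A minimal sketch of how such facet constraints would be sent to a Solr-style search handler as `fq` parameters; the `q` value, row count, and parameter layout here are illustrative assumptions, not taken from this page:

```python
# Sketch only: field names come from the facet labels above; the query
# string, page size, and parameter order are assumptions for illustration.
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("fq", 'language_ss:"e"'),        # language facet
    ("fq", 'type_ss:"m"'),            # document-type facet
    ("fq", "year_i:[2010 TO 2020}"),  # 2010 <= year < 2020 ('}' = exclusive)
    ("rows", "20"),                   # 210 hits over 11 pages suggests ~20/page
]
query_string = urlencode(params)
print(query_string)
```

Each `fq` narrows the result set independently, which is why the page header reports 210 documents matching all three filters.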
  1. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.03
    0.028503895 = product of:
      0.05700779 = sum of:
        0.05700779 = sum of:
          0.0040592253 = weight(_text_:a in 3355) [ClassicSimilarity], result of:
            0.0040592253 = score(doc=3355,freq=2.0), product of:
              0.053105544 = queryWeight, product of:
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046056706 = queryNorm
              0.07643694 = fieldWeight in 3355, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.153047 = idf(docFreq=37942, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
          0.052948564 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
            0.052948564 = score(doc=3355,freq=4.0), product of:
              0.16128273 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046056706 = queryNorm
              0.32829654 = fieldWeight in 3355, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3355)
      0.5 = coord(1/2)
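The tree above is standard Lucene ClassicSimilarity (tf-idf) explain output, and its arithmetic can be checked by hand: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the coord(1/2) factor then scales the clause sum by 0.5. A small sketch reproducing the 0.028503895 total for result 1 from the numbers printed in the tree:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's ClassicSimilarity contribution:
    (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.046056706   # queryNorm from the explain tree
FIELD_NORM = 0.046875      # fieldNorm(doc=3355)

s_a  = term_score(2.0, 1.153047,  QUERY_NORM, FIELD_NORM)  # weight(_text_:a)
s_22 = term_score(4.0, 3.5018296, QUERY_NORM, FIELD_NORM)  # weight(_text_:22)
total = 0.5 * (s_a + s_22)  # coord(1/2)
print(total)                # agrees with the listed 0.028503895 to float precision
```

The same recipe, with the per-document freq, idf, and fieldNorm values, reproduces every explain tree in this listing; only the coord factor and the matched clauses vary.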
    
    Content
    One of a series of three publications influenced by the travelling exhibit Places & Spaces: Mapping Science, curated by the Cyberinfrastructure for Network Science Center at Indiana University. - Additional materials can be found at http://scimaps.org/atlas2. Expanded by: Börner, Katy: Atlas of Science: Visualizing What We Know.
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  2. Sears' list of subject headings (2018) 0.02
    
    Abstract
    The system is available both in print and online versions. It names new subject headings in areas such as science, technology, engineering and medicine (STEM). This edition introduces a total of 1,600 new headings, bringing the list to 12,000+ preferred headings meant for subject access in small and medium-sized libraries. This unprecedented increase of about 1,600 headings is mostly due to the complete incorporation of the Canadian Sears, last published independently in 2006. The review also critically examines inconsistencies in a few headings, and concludes that the new edition, in resplendent hard binding, maintains its stellar reputation as a handy list of general subject headings, both for practical application and as a teaching resource.
    Date
    21.12.2018 18:22:12
    Footnote
    Introduction and review in: Knowledge Organization 45(2018) no.8, pp.712-714, under the title: Satija, M.P.: "The 22nd edition (2018) of the Sears List of Subject Headings: A brief introduction" (DOI:10.5771/0943-7444-2018-8-712).
  3. Gödert, W.; Hubrich, J.; Nagelschmidt, M.: Semantic knowledge representation for information retrieval (2014) 0.02
    
    Abstract
    This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improving indexing languages as tools for subject queries and knowledge exploration. The book is relevant to information scientists, knowledge workers and indexers. It provides a suitable combination of theoretical foundations and practical applications.
    Date
    23. 7.2017 13:49:22
  4. Keyser, P. de: Indexing : from thesauri to the Semantic Web (2012) 0.02
    
    Abstract
    Indexing consists of both novel and more traditional techniques. Cutting-edge indexing techniques, such as automatic indexing, ontologies, and topic maps, were developed independently of older techniques such as thesauri, but it is now recognized that these older methods also hold valuable expertise. Indexing describes various traditional and novel indexing techniques, giving information professionals and students of library and information science a broad and comprehensible introduction to indexing. The title consists of twelve chapters: an introduction to subject headings and thesauri; Automatic indexing versus manual indexing; Techniques applied in automatic indexing of text material; Automatic indexing of images; The black art of indexing moving images; Automatic indexing of music; Taxonomies and ontologies; Metadata formats and indexing; Tagging; Topic maps; Indexing the web; and The Semantic Web.
    Date
    24. 8.2016 14:03:22
  5. Parrochia, D.; Neuville, D.: Towards a general theory of classifications (2013) 0.02
    
    Abstract
    This book is an essay on the epistemology of classifications. Its main purpose is not to provide an exposition of an actual mathematical theory of classifications, that is, a general theory that would apply to all kinds of them: hierarchical or non-hierarchical, ordinary or fuzzy, overlapping or non-overlapping, finite or infinite, and so on, establishing a basis for all possible divisions of the real world. For the moment, such a theory remains nothing but a dream. Instead, the authors essentially put forward a number of key questions. Their aim is rather to reveal the state of the art of this dynamic field and the philosophy one may eventually adopt to go further. To this end they present some advances made in the course of the last century, discuss a few tricky problems that remain to be solved, and show the avenues open to those who no longer wish to stay on the wrong track. Researchers and professionals interested in the epistemology and philosophy of science, library science, logic and set theory, order theory or cluster analysis will find this book a comprehensive, original and progressive introduction to the main questions in this field.
    Date
    8. 9.2016 22:04:09
  6. Semantic keyword-based search on structured data sources : First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers (2016) 0.02
    
    Abstract
    This book constitutes the thoroughly refereed post-conference proceedings of the First COST Action IC1302 International KEYSTONE Conference on semantic Keyword-based Search on Structured Data Sources, IKC 2015, held in Coimbra, Portugal, in September 2015. The 13 revised full papers, 3 revised short papers, and 2 invited papers were carefully reviewed and selected from 22 initial submissions. The paper topics cover techniques for keyword search, semantic data management, social Web and social media, information retrieval, benchmarking for search on big data.
    Content
    Contents: Professional Collaborative Information Seeking: On Traceability and Creative Sensemaking / Nürnberger, Andreas (et al.) - Recommending Web Pages Using Item-Based Collaborative Filtering Approaches / Cadegnani, Sara (et al.) - Processing Keyword Queries Under Access Limitations / Calì, Andrea (et al.) - Balanced Large Scale Knowledge Matching Using LSH Forest / Cochez, Michael (et al.) - Improving css-KNN Classification Performance by Shifts in Training Data / Draszawka, Karol (et al.) - Classification Using Various Machine Learning Methods and Combinations of Key-Phrases and Visual Features / HaCohen-Kerner, Yaakov (et al.) - Mining Workflow Repositories for Improving Fragments Reuse / Harmassi, Mariem (et al.) - AgileDBLP: A Search-Based Mobile Application for Structured Digital Libraries / Ifrim, Claudia (et al.) - Support of Part-Whole Relations in Query Answering / Kozikowski, Piotr (et al.) - Key-Phrases as Means to Estimate Birth and Death Years of Jewish Text Authors / Mughaz, Dror (et al.) - Visualization of Uncertainty in Tag Clouds / Platis, Nikos (et al.) - Multimodal Image Retrieval Based on Keywords and Low-Level Image Features / Pobar, Miran (et al.) - Toward Optimized Multimodal Concept Indexing / Rekabsaz, Navid (et al.) - Semantic URL Analytics to Support Efficient Annotation of Large Scale Web Archives / Souza, Tarcisio (et al.) - Indexing of Textual Databases Based on Lexical Resources: A Case Study for Serbian / Stankovic, Ranka (et al.) - Domain-Specific Modeling: Towards a Food and Drink Gazetteer / Tagarev, Andrey (et al.) - Analysing Entity Context in Multilingual Wikipedia to Support Entity-Centric Retrieval Applications / Zhou, Yiwei (et al.)
    Date
    1. 2.2016 18:25:22
  7. Coyle, K.: FRBR, before and after : a look at our bibliographic models (2016) 0.02
    
    Abstract
    This book looks at the ways that we define the things of the bibliographic world, and in particular how our bibliographic models reflect our technology and the assumed goals of libraries. There is, of course, a history behind this, as well as a present and a future. The first part of the book begins by looking at the concept of the 'work' in library cataloging theory, and how that concept has evolved since the mid-nineteenth century to date. Next it talks about models and technology, two areas that need to be understood before taking a long look at where we are today. It then examines the new bibliographic model called Functional Requirements for Bibliographic Records (FRBR) and the technical and social goals that the FRBR Study Group was tasked to address. The FRBR entities are analyzed in some detail. Finally, FRBR as an entity-relation model is compared to a small set of Semantic Web vocabularies that can be seen as variants of the multi-entity bibliographic model that FRBR introduced.
    Date
    12. 2.2016 16:22:58
  8. Kumbhar, R.: Library classification trends in the 21st century (2012) 0.02
    
    Abstract
    "This book would serve as a good introductory textbook for a library science student or as a reference work on the types of classification currently in use." (College and Research Libraries) - covers all aspects of library classification - the only book that reviews literature published over a decade's time span (1999-2009) - well-thought-out chapterization in tune with the LIS and classification curriculum - useful reference tool for researchers in classification - a valuable contribution to the bibliographic control of classification literature. Library Classification Trends in the 21st Century traces developments in and around library classification as reported in literature published in the first decade of the 21st century. It reviews literature published on various aspects of library classification, including modern applications of classification such as internet resource discovery, automatic book classification and text categorization; modern manifestations of classification such as taxonomies, folksonomies and ontologies; and interoperable systems enabling crosswalks. The book also features classification education and an exploration of relevant topics.
    Date
    22. 2.2013 12:23:55
  9. Challenges and opportunities for knowledge organization in the digital age : proceedings of the Fifteenth International ISKO Conference, 9-11 July 2018, Porto, Portugal / organized by: International Society for Knowledge Organization (ISKO), ISKO Spain and Portugal Chapter, University of Porto - Faculty of Arts and Humanities, Research Centre in Communication, Information and Digital Culture (CIC.digital) - Porto (2018) 0.02
    
    Abstract
    The 15th International ISKO Conference has been held in Porto (Portugal) under the topic Challenges and opportunities for KO in the digital age. ISKO has been organizing biennial international conferences since 1990, in order to promote a space for debate among Knowledge Organization (KO) scholars and practitioners all over the world. The topics under discussion in the 15th International ISKO Conference are intended to cover a wide range of issues that, in a very incisive way, constitute challenges, obstacles and questions in the field of KO, but also highlight ways and open innovative perspectives for this area in a world undergoing constant change, due to the digital revolution that unavoidably moulds our society. Accordingly, the three aggregating themes, chosen to fit the proposals for papers and posters to be submitted, are as follows: 1 - Foundations and methods for KO; 2 - Interoperability towards information access; 3 - Societal challenges in KO. In addition to these themes, the inaugural session includes a keynote speech by Prof. David Bawden of City University London, entitled Supporting truth and promoting understanding: knowledge organization and the curation of the infosphere.
    Date
    17. 1.2019 17:22:18
  10. Metadata and semantics research : 7th Research Conference, MTSR 2013 Thessaloniki, Greece, November 19-22, 2013. Proceedings (2013) 0.02
    
    Abstract
    Metadata and semantics are integral to any information system and significant to the sphere of Web data. Research focusing on metadata and semantics is crucial for advancing our understanding and knowledge of metadata; and, more profoundly for being able to effectively discover, use, archive, and repurpose information. In response to this need, researchers are actively examining methods for generating, reusing, and interchanging metadata. Integrated with these developments is research on the application of computational methods, linked data, and data analytics. A growing body of work also targets conceptual and theoretical designs providing foundational frameworks for metadata and semantic applications. There is no doubt that metadata weaves its way into nearly every aspect of our information ecosystem, and there is great motivation for advancing the current state of metadata and semantics. To this end, it is vital that scholars and practitioners convene and share their work.
    The MTSR 2013 program and the contents of these proceedings show a rich diversity of research and practices, drawing on problems from metadata and semantically focused tools and technologies, linked data, cross-language semantics, ontologies, metadata models, and semantic system and metadata standards. The general session of the conference included 18 papers covering a broad spectrum of topics, demonstrating the interdisciplinary nature of the metadata field, and was divided into three main themes: platforms for research data sets, system architecture and data management; metadata and ontology validation, evaluation, mapping and interoperability; and content management. Metadata as a research topic is maturing, and the conference also supported the following five tracks: Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures; Metadata and Semantics for Cultural Collections and Applications; Metadata and Semantics for Agriculture, Food and Environment; Big Data and Digital Libraries in Health, Science and Technology; and European and National Projects, and Project Networking. Each track had a rich selection of papers, giving broader diversity to MTSR, and enabling deeper exploration of significant topics.
    All the papers underwent a thorough and rigorous peer-review process. The review and selection this year was highly competitive and only papers containing significant research results, innovative methods, or novel and best practices were accepted for publication. Only 29 of 89 submissions were accepted as full papers, representing 32.5% of the total number of submissions. Additional contributions covering noteworthy and important results in special tracks or project reports were accepted, totaling 42 accepted contributions. This year's conference included two outstanding keynote speakers. Dr. Stefan Gradmann, a professor in the arts department of KU Leuven (Belgium) and director of the university library, addressed semantic research drawing from his work with Europeana. The title of his presentation was "Towards a Semantic Research Library: Digital Humanities Research, Europeana and the Linked Data Paradigm". Dr. Michail Salampasis, associate professor at our conference host institution, the Department of Informatics of the Alexander TEI of Thessaloniki, presented new potential at the intersection of search and linked data. The title of his talk was "Rethinking the Search Experience: What Could Professional Search Systems Do Better?"
    Date
    17.12.2013 12:51:22
  11. Knowledge organization in the 21st century : between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland (2014) 0.02
    
  12. Concepts in Context : Proceedings of the Cologne Conference on Interoperability and Semantics in Knowledge Organization July 19th - 20th, 2010 (2011) 0.02
    
    Content
    Winfried Gödert: Programmatic Issues and Introduction - Dagobert Soergel: Conceptual Foundations for Semantic Mapping and Semantic Search - Jan-Helge Jacobs, Tina Mengel, Katrin Müller: Insights and Outlooks: A Retrospective View on the CrissCross Project - Yvonne Jahns, Helga Karg: Translingual Retrieval: Moving between Vocabularies - MACS 2010 - Jessica Hubrich: Intersystem Relations: Characteristics and Functionalities - Stella G Dextre Clarke: In Pursuit of Interoperability: Can We Standardize Mapping Types? - Philipp Mayr, Philipp Schaer, Peter Mutschke: A Science Model Driven Retrieval Prototype - Claudia Effenberger, Julia Hauser: Would an Explicit Versioning of the DDC Bring Advantages for Retrieval? - Gordon Dunsire: Interoperability and Semantics in RDF Representations of FRBR, FRAD and FRSAD - Maja Zumer: FRSAD: Challenges of Modeling the Aboutness - Michael Panzer: Two Tales of a Concept: Aligning FRSAD with SKOS - Felix Boteram: Integrating Semantic Interoperability into FRSAD
    Date
    22. 2.2013 11:34:18
  13. Hackett, P.M.W.: Facet theory and the mapping sentence : evolving philosophy, use and application (2014) 0.02
    
    Abstract
    This book brings together contemporary facet theory research to propose mapping sentences as a new way of understanding complex behavior, and suggests future directions the approach may take. How do we think about the worlds we live in? The formation of categories of events and objects seems to be a fundamental orientation procedure. Facet theory and its main tool, the mapping sentence, deal with categories of behavior and experience, their interrelationship, and their unification as our worldviews. In this book Hackett reviews philosophical writing along with neuroscientific research and information from other disciplines to provide a context for facet theory and the qualitative developments in this approach. With a variety of examples, the author proposes mapping sentences as a new way of understanding and defining complex behavior.
    Content
    1 Introduction; 2 Ontological Categorisation and Mereology; Human assessment; Categories and the properties of experiential events; Mathematical, computing, artificial intelligence and library classification approaches; Sociological approaches; Psychological approaches; Personal Construct Theory; Philosophical approaches to categories; Mereology: facet theory and relationships between categories; Neuroscience and categories; Conclusions; 3 Facet Theory and Thinking about Human Behaviour; Generating knowledge in facet theory: a brief overview; What is facet theory?; Facets and facet elements; The mapping sentence; Designing a mapping sentence; Narrative; Roles that facets play; Single-facet structures: axial role and modular role; Polar role; Circumplex; Two-facet structures; Radex; Three-facet structures; Cylindrex; Analysing facet theory research; Conclusions; 4 Evolving Facet Theory Applications; The evolution of facet theory; Mapping a domain: the mapping sentence as a stand-alone approach and integrative tool; Making and understanding fine art; Defining the grid: a mapping sentence for grid images; Facet sort-technique; Facet mapping therapy: using the mapping sentence and the facet structures to explore client issues; Research program coordination; Conclusions and Future Directions; Glossary of Terms; Bibliography; Index
    Date
    17.10.2015 17:22:01
  14. Euzenat, J.; Shvaiko, P.: Ontology matching (2010) 0.02
    Abstract
    Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
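The simplest family of matching techniques is terminological: comparing entity labels as strings. The hedged sketch below aligns two toy ontologies by label similarity using the standard library's `difflib.SequenceMatcher`; the ontologies, threshold, and helper name are invented for the example and are not the authors' framework.

```python
from difflib import SequenceMatcher

# Two toy ontologies given as entity-label lists. A real matcher would also
# exploit structure and semantics (subsumption, instances), not just strings.
onto_a = ["Book", "Author", "PublishingHouse", "ISBN"]
onto_b = ["Volume", "Writer", "Publisher", "ISBN"]

def string_alignment(src, tgt, threshold=0.5):
    """Return candidate equivalence correspondences above a similarity cutoff."""
    pairs = []
    for s in src:
        for t in tgt:
            sim = SequenceMatcher(None, s.lower(), t.lower()).ratio()
            if sim >= threshold:
                pairs.append((s, t, round(sim, 2)))
    return sorted(pairs, key=lambda p: -p[2])

for s, t, sim in string_alignment(onto_a, onto_b):
    print(f"{s} = {t}  ({sim})")
```

Note how purely string-based matching finds `ISBN = ISBN` and `PublishingHouse = Publisher` but misses `Author = Writer` and `Book = Volume`, which is exactly why the book surveys semantic and structural techniques as well.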
    Date
    20. 6.2012 19:08:22
  15. Murphy, M.L.: Lexical meaning (2010) 0.02
    Abstract
    The ideal introduction for students of semantics, Lexical Meaning fills the gap left by more general semantics textbooks, providing the teacher and the student with insights into word meaning beyond the traditional overviews of lexical relations. The book explores the relationship between word meanings and syntax and semantics more generally. It provides a balanced overview of the main theoretical approaches, along with a lucid explanation of their relative strengths and weaknesses. After covering the main topics in lexical meaning, such as polysemy and sense relations, the textbook surveys the types of meanings represented by different word classes. It explains abstract concepts in clear language, using a wide range of examples, and includes linguistic puzzles in each chapter to encourage the student to practise using the concepts. 'Adopt-a-Word' exercises give students the chance to research a particular word, building a portfolio of specialist work on a single word.
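The sense relations the blurb mentions (synonymy, antonymy, hyponymy) can be sketched as a tiny relation dictionary; the entries and helper below are illustrative inventions, not the book's notation or data.

```python
# A minimal lexicon of pairwise sense relations (illustrative only).
lexicon = {
    ("big", "large"): "synonymy",
    ("big", "small"): "antonymy",
    ("poodle", "dog"): "hyponymy",   # a poodle is a kind of dog
    ("dog", "animal"): "hyponymy",
}

def hypernyms(word):
    """Follow hyponymy links upward to collect all hypernyms of a word."""
    out = []
    frontier = [word]
    while frontier:
        w = frontier.pop()
        for (a, b), rel in lexicon.items():
            if rel == "hyponymy" and a == w:
                out.append(b)
                frontier.append(b)
    return out

print(hypernyms("poodle"))   # ['dog', 'animal']
```

The transitivity of hyponymy shown here (poodle -> dog -> animal) is one of the structural properties of lexical relations a semantics course would examine.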
    Date
    22. 7.2013 10:53:30
  16. Ford, N.: Introduction to information behaviour (2015) 0.02
    Date
    22. 1.2017 16:45:48
  17. Ceri, S.; Bozzon, A.; Brambilla, M.; Della Valle, E.; Fraternali, P.; Quarteroni, S.: Web Information Retrieval (2013) 0.02
    Abstract
    With the proliferation of huge amounts of (heterogeneous) data on the Web, the importance of information retrieval (IR) has grown considerably over the last few years. Big players in the computer industry, such as Google, Microsoft and Yahoo!, are the primary contributors of technology for fast access to Web-based information; and searching capabilities are now integrated into most information systems, ranging from business management software and customer relationship systems to social networks and mobile phone applications. Ceri and his co-authors aim at taking their readers from the foundations of modern information retrieval to the most advanced challenges of Web IR. To this end, their book is divided into three parts. The first part addresses the principles of IR and provides a systematic and compact description of basic information retrieval techniques (including binary, vector space and probabilistic models as well as natural language search processing) before focusing on its application to the Web. Part two addresses the foundational aspects of Web IR by discussing the general architecture of search engines (with a focus on the crawling and indexing processes), describing link analysis methods (specifically PageRank and HITS), addressing recommendation and diversification, and finally presenting advertising in search (the main source of revenues for search engines). The third and final part describes advanced aspects of Web search, each chapter providing a self-contained, up-to-date survey on current Web research directions. Topics in this part include meta-search and multi-domain search, semantic search, search in the context of multimedia data, and crowd search. The book is ideally suited to courses on information retrieval, as it covers all Web-independent foundational aspects. Its presentation is self-contained and does not require prior background knowledge. It can also be used in the context of classic courses on data management, allowing the instructor to cover both structured and unstructured data in various formats. Its classroom use is facilitated by a set of slides, which can be downloaded from www.search-computing.org.
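The link-analysis methods named in the abstract can be illustrated with a compact power-iteration sketch of PageRank over a toy link graph; the damping factor and iteration count below are common textbook defaults, not values prescribed by this book.

```python
# Power-iteration PageRank over a toy web graph (illustrative sketch).
def pagerank(links, d=0.85, iters=50):
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:                      # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in out:
                    new[v] += d * rank[u] / len(out)
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
print(sorted(ranks, key=ranks.get, reverse=True))   # 'c' attracts the most links
```

Because every node's rank is fully redistributed each round, the ranks stay a probability distribution, which is what makes the iteration converge to the stationary importance scores.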
    Date
    16.10.2013 19:22:44
  18. Wissensspeicher in digitalen Räumen : Nachhaltigkeit, Verfügbarkeit, semantische Interoperabilität. Proceedings der 11. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Konstanz, 20. bis 22. Februar 2008 (2010) 0.01
    Content
    Inhalt: A. Grundsätzliche Fragen (aus dem Umfeld) der Wissensorganisation -- Markus Gottwald, Matthias Klemm und Jan Weyand: Warum ist es schwierig, Wissen zu managen? Ein soziologischer Deutungsversuch anhand eines Wissensmanagementprojekts in einem Großunternehmen -- H. Peter Ohly: Wissenskommunikation und -organisation. Quo vadis? -- Helmut F. Spinner: Wissenspartizipation und Wissenschaftskommunikation in drei Wissensräumen: Entwurf einer integrierten Theorie -- B. Dokumentationssprachen in der Anwendung -- Felix Boteram: Semantische Relationen in Dokumentationssprachen vom Thesaurus zum semantischen Netz -- Jessica Hubrich: Multilinguale Wissensorganisation im Zeitalter der Globalisierung: das Projekt CrissCross -- Vivien Petras: Heterogenitätsbehandlung und Terminology Mapping durch Crosskonkordanzen - eine Fallstudie -- Manfred Hauer, Uwe Leissing und Karl Rädler: Query-Expansion durch Fachthesauri: Erfahrungsbericht zu dandelon.com, Vorarlberger Parlamentsinformationssystem und vorarlberg.at
    C. Begriffsarbeit in der Wissensorganisation -- Ingetraut Dahlberg: Begriffsarbeit in der Wissensorganisation -- Claudio Gnoli, Gabriele Merli, Gianni Pavan, Elisabetta Bernuzzi, and Marco Priano: Freely faceted classification for a Web-based bibliographic archive: The BioAcoustic Reference Database -- Stefan Hauser: Terminologiearbeit im Bereich Wissensorganisation - Vergleich dreier Publikationen anhand der Darstellung des Themenkomplexes Thesaurus -- Daniel Kless: Erstellung eines allgemeinen Standards zur Wissensorganisation: Nutzen, Möglichkeiten, Herausforderungen, Wege -- D. Kommunikation und Lernen -- Gerald Beck und Simon Meissner: Strukturierung und Vermittlung von heterogenen (Nicht-)Wissensbeständen in der Risikokommunikation -- Angelo Chianese, Francesca Cantone, Mario Caropreso, and Vincenzo Moscato: ARCHAEOLOGY 2.0: Cultural E-Learning tools and distributed repositories supported by SEMANTICA, a System for Learning Object Retrieval and Adaptive Courseware Generation for e-learning environments -- Sonja Hierl, Lydia Bauer, Nadja Böller und Josef Herget: Kollaborative Konzeption von Ontologien in der Hochschullehre: Theorie, Chancen und mögliche Umsetzung -- Marc Wilhelm Küster, Christoph Ludwig, Yahya Al-Haff und Andreas Aschenbrenner: TextGrid: eScholarship und der Fortschritt der Wissenschaft durch vernetzte Angebote
  19. Kandel, E.R.: Reductionism in art and brain science : bridging the two cultures (2016) 0.01
    Abstract
    Are art and science separated by an unbridgeable divide? Can they find common ground? In this new book, neuroscientist Eric R. Kandel, whose remarkable scientific career and deep interest in art give him a unique perspective, demonstrates how science can inform the way we experience a work of art and seek to understand its meaning. Kandel illustrates how reductionism - the distillation of larger scientific or aesthetic concepts into smaller, more tractable components - has been used by scientists and artists alike to pursue their respective truths. He draws on his Nobel Prize-winning work revealing the neurobiological underpinnings of learning and memory in sea slugs to shed light on the complex workings of the mental processes of higher animals. In Reductionism in Art and Brain Science, Kandel shows how this radically reductionist approach, applied to the most complex puzzle of our time - the brain - has been employed by modern artists who distill their subjective world into color, form, and light. Kandel demonstrates through bottom-up sensory and top-down cognitive functions how science can explore the complexities of human perception and help us to perceive, appreciate, and understand great works of art. At the heart of the book is an elegant elucidation of the contribution of reductionism to the evolution of modern art and its role in a monumental shift in artistic perspective. Reductionism steered the transition from figurative art to the first explorations of abstract art reflected in the works of Turner, Monet, Kandinsky, Schoenberg, and Mondrian. Kandel explains how, in the postwar era, Pollock, de Kooning, Rothko, Louis, Turrell, and Flavin used a reductionist approach to arrive at their abstract expressionism and how Katz, Warhol, Close, and Sandback built upon the advances of the New York School to reimagine figurative and minimal art. Featuring captivating drawings of the brain alongside full-color reproductions of modern art masterpieces, this book draws out the common concerns of science and art and how they illuminate each other.
    Content
    The emergence of a reductionist school of abstract art in New York -- The Beginning of a Scientific Approach to Art -- The Biology of the Beholder's Share: Visual Perception and Bottom-Up Processing in Art -- The Biology of Learning and Memory: Top-Down Processing in Art -- A Reductionist Approach to Art. Reductionism in the Emergence of Abstract Art -- Mondrian and the Radical Reduction of the Figurative Image -- The New York School of Painters -- How the Brain Processes and Perceives Abstract Images -- From Figuration to Color Abstraction -- Color and the Brain -- A Focus on Light -- A Reductionist Influence on Figuration -- The Emerging Dialogue Between Abstract Art and Science. Why Is Reductionism Successful in Art? -- A Return to the Two Cultures
    Date
    14. 6.2019 12:22:37
  20. Sautoy, M. du: What we cannot know (2016) 0.01
    Date
    22. 6.2016 16:08:54
    Footnote
    Rez. in: Economist vom 18.06.2016 [http://www.economist.com/news/books-and-arts/21700611-circle-circle]: "Everyone by nature desires to know," wrote Aristotle more than 2,000 years ago. But are there limits to what human beings can know? This is the question that Marcus du Sautoy, the British mathematician who succeeded Richard Dawkins as the Simonyi professor for the public understanding of science at Oxford University, explores in "What We Cannot Know", his fascinating book on the limits of scientific knowledge. As Mr du Sautoy argues, this is a golden age of scientific knowledge. Remarkable achievements stretch across the sciences, from the Large Hadron Collider and the sequencing of the human genome to the proof of Fermat's Last Theorem. And the rate of progress is accelerating: the number of scientific publications has doubled every nine years since the second world war. But even bigger challenges await. Can cancer be cured? Ageing beaten? Is there a "Theory of Everything" that will include all of physics? Can we know it all? One limit to people's knowledge is practical. In theory, if you throw a die, Newton's laws of motion make it possible to predict what number will come up. But the calculations are too long to be practicable. What is more, many natural systems, such as the weather, are "chaotic" or sensitive to small changes: a tiny nudge now can lead to vastly different behaviour later. Since people cannot measure with complete accuracy, they can't forecast far into the future. The problem was memorably articulated by Edward Lorenz, an American scientist, in 1972 in a famous paper called "Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?"
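The sensitivity to small changes described in the review is easy to reproduce with the logistic map x -> r*x*(1-x) at r = 4, a standard chaotic system chosen here for illustration (the review itself does not use this example): two starting points differing by one part in a billion quickly diverge.

```python
# Iterate the logistic map from two nearly identical starting points.
def trajectory(x, r=4.0, steps=40):
    out = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)
gap = [abs(p - q) for p, q in zip(a, b)]
print(f"initial gap {gap[0]:.1e}, gap after 40 steps {gap[-1]:.3f}")
```

The gap roughly doubles each step, so the nanoscale perturbation is amplified to order one within a few dozen iterations, which is why finite measurement precision caps how far ahead such systems can be forecast.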
    Even if the future cannot be predicted, people can still hope to uncover the laws of physics. As Stephen Hawking wrote in his 1988 bestseller "A Brief History of Time", "I still believe there are grounds for cautious optimism that we may be near the end of the search for the ultimate laws of nature." But how can people know when they have got there? They have been wrong before: Lord Kelvin, a great physicist, confidently announced in 1900: "There is nothing new to be discovered in physics now." Just a few years later, physics was upended by the new theories of relativity and quantum physics. Quantum physics presents particular limits on human knowledge, as it suggests that there is a basic randomness or uncertainty in the universe. For example, electrons exist as a "wave function", smeared out across space, and do not have a definite position until you observe them (which "collapses" the wave function). At the same time there seems to be an absolute limit on how much people can know. This is quantified by Heisenberg's Uncertainty Principle, which says that there is a trade-off between knowing the position and momentum of a particle. So the more you know about where an electron is, the less you know about which way it is going. Even scientists find this weird. As Niels Bohr, a Danish physicist, said: "If quantum physics hasn't profoundly shocked you, you haven't understood it yet."
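The position-momentum trade-off the reviewer describes is usually stated as Heisenberg's inequality for the standard deviations of position and momentum:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

so a sharper localisation in position (smaller \Delta x) forces a wider spread in momentum (larger \Delta p), and vice versa; no state can make both arbitrarily small.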

Types

  • s 56
  • el 3
  • i 2
  • b 1
  • n 1
