Search (22 results, page 1 of 2)

  • classification_ss:"ST 205"
  1. Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (2007) 0.02
    0.023911756 = product of:
      0.059779387 = sum of:
        0.0034055763 = weight(_text_:a in 5135) [ClassicSimilarity], result of:
          0.0034055763 = score(doc=5135,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.06369744 = fieldWeight in 5135, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5135)
        0.056373812 = sum of:
          0.02496246 = weight(_text_:information in 5135) [ClassicSimilarity], result of:
            0.02496246 = score(doc=5135,freq=20.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.30666938 = fieldWeight in 5135, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5135)
          0.031411353 = weight(_text_:22 in 5135) [ClassicSimilarity], result of:
            0.031411353 = score(doc=5135,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.19345059 = fieldWeight in 5135, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5135)
      0.4 = coord(2/5)
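    The tree above is Lucene's ClassicSimilarity (TF-IDF) explanation for this hit. A minimal Python sketch, assuming the standard ClassicSimilarity definitions (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, leaf score = queryWeight * fieldWeight), reproduces the _text_:a clause for doc 5135 from the numbers printed above:

      import math

      # values copied from the explanation tree above
      freq       = 2.0          # termFreq of "a" in doc 5135
      idf        = 1.153047     # idf(docFreq=37942, maxDocs=44218)
      query_norm = 0.046368346  # queryNorm
      field_norm = 0.0390625    # fieldNorm(doc=5135)

      tf           = math.sqrt(freq)              # 1.4142135
      query_weight = idf * query_norm             # 0.053464882
      field_weight = tf * idf * field_norm        # 0.06369744
      score        = query_weight * field_weight  # 0.0034055763
      print(tf, query_weight, field_weight, score)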
    
    Abstract
    The scale of web site design has grown so that what was once comparable to decorating a room is now comparable to designing buildings or even cities. Designing sites so that people can find their way around is an ever-growing challenge as sites contain more and more information. In the past, Information Architecture for the World Wide Web has helped developers and designers establish consistent and usable structures for their sites and their information. This edition of the classic primer on web site design and navigation is updated with recent examples, new scenarios, and new information on best practices. Readers will learn how to present large volumes of information to visitors who need to find what they're looking for quickly. With topics that range from aesthetics to mechanics, this valuable book explains how to create interfaces that users can understand easily.
    Date
    22.03.2008 16:18:27
    LCSH
    Information storage and retrieval systems / Architecture
    RSWK
    Internet / Information / Strukturierung (BVB)
    Subject
    Internet / Information / Strukturierung (BVB)
    Information storage and retrieval systems / Architecture
  2. Croft, W.B.; Metzler, D.; Strohman, T.: Search engines : information retrieval in practice (2010) 0.01
    0.008595185 = product of:
      0.021487962 = sum of:
        0.005779455 = weight(_text_:a in 2605) [ClassicSimilarity], result of:
          0.005779455 = score(doc=2605,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 2605, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2605)
        0.015708508 = product of:
          0.031417016 = sum of:
            0.031417016 = weight(_text_:information in 2605) [ClassicSimilarity], result of:
              0.031417016 = score(doc=2605,freq=22.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.38596505 = fieldWeight in 2605, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2605)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
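    The idf values repeated throughout these explanations can be re-derived as well. A small sketch, assuming Lucene's ClassicSimilarity formula idf(t) = 1 + ln(maxDocs / (docFreq + 1)), matches the three figures shown in this result list:

      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity inverse document frequency
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      print(idf(37942, 44218))  # ~1.153047  (_text_:a)
      print(idf(20772, 44218))  # ~1.7554779 (_text_:information)
      print(idf(3622, 44218))   # ~3.5018296 (_text_:22)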
    
    Abstract
    For introductory information retrieval courses at the undergraduate and graduate level in computer science, information science, and computer engineering departments. Written by a leader in the field of information retrieval, Search Engines: Information Retrieval in Practice is designed to give undergraduate students the understanding and tools they need to evaluate, compare, and modify search engines. Coverage of the underlying IR and mathematical models reinforces key concepts. The book's numerous programming exercises make extensive use of Galago, a Java-based open-source search engine. Supplements: extensive lecture slides (in PDF and PPT format); solutions to selected end-of-chapter problems (instructors only); test collections for exercises; the Galago search engine.
    LCSH
    Information retrieval
    Information Storage and Retrieval
    RSWK
    Suchmaschine / Information Retrieval
    Subject
    Suchmaschine / Information Retrieval
    Information retrieval
    Information Storage and Retrieval
  3. Rogers, R.: Information politics on the Web (2004) 0.01
    0.006240537 = product of:
      0.015601342 = sum of:
        0.006811153 = weight(_text_:a in 442) [ClassicSimilarity], result of:
          0.006811153 = score(doc=442,freq=50.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1273949 = fieldWeight in 442, product of:
              7.071068 = tf(freq=50.0), with freq of:
                50.0 = termFreq=50.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.015625 = fieldNorm(doc=442)
        0.0087901885 = product of:
          0.017580377 = sum of:
            0.017580377 = weight(_text_:information in 442) [ClassicSimilarity], result of:
              0.017580377 = score(doc=442,freq=62.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.21597885 = fieldWeight in 442, product of:
                  7.8740077 = tf(freq=62.0), with freq of:
                    62.0 = termFreq=62.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.015625 = fieldNorm(doc=442)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
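    The outermost "product of ... coord(2/5)" lines apply the coordination factor: the summed clause scores are multiplied by coord = matchingClauses / totalClauses, so documents matching more of the five query terms are favoured. A quick check against the top-level figures of the first three results (a sketch using only the numbers printed in the trees above):

      def coord(overlap, max_overlap):
          return overlap / max_overlap

      print(0.059779387 * coord(2, 5))  # ~0.023911756 (result 1)
      print(0.021487962 * coord(2, 5))  # ~0.008595185 (result 2)
      print(0.015601342 * coord(2, 5))  # ~0.006240537 (result 3)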
    
    Abstract
    Rogers presents a profoundly different way of thinking about information in cyberspace, one that supports the political efforts of democratic activists and NGOs and takes seriously the epistemological issues at the heart of networked communications.
    Footnote
    Rez. in: JASIST 58(2007) no.4, S.608-609 (K.D. Desouza): "Richard Rogers explores the distinctiveness of the World Wide Web as a politically contested space where information searchers may encounter multiple explanations of reality. Sources of information on the Web are in constant competition with each other for attention. The attention a source receives will determine its prominence, its ability to be a provider of leading information, and its inclusion in authoritative spaces. Rogers explores the politics behind evaluating sources that are collected and housed in authoritative spaces. Information politics on the Web can be looked at in terms of front-end or back-end politics. Front-end politics is concerned with whether sources on the Web pay attention to principles of inclusivity, fairness, and scope of representation in how information is presented, while back-end politics examines the logic behind how search engines or portals select and index information. Concerning front-end politics, Rogers questions the various versions of reality one can derive from examining information on the Web, especially when issues of information inclusivity and scope of representation are toyed with. In addition, Rogers is concerned with how back-end politics are being controlled by dominant forces of the market (i.e., the more an organization is willing to pay, the greater the site's visibility and prominence in authoritative spaces), regardless of whether the information presented on the site justifies such a placement. In the book, Rogers illustrates the issues involved in back-end and front-end politics (though heavily slanted toward front-end politics) using vivid cases, all of which are derived from his own research. The main thrust is the exploration of how various "information instruments," defined as "a digital and analytical means of recording (capturing) and subsequently reading indications of states of defined information streams (p. 19)," help capture the politics of the Web. Rogers employs four specific instruments (Lay Decision Support System, Issue Barometer, Web Issue Index of Civil Society, and Election Issue Tracker), which are covered in detail in the core chapters of the book (Chapters 2-5). The book comprises six chapters, with Chapter 1 being the traditional introduction and Chapter 6 a summary of the major concepts discussed.
    Chapter 2 examines the politics of information retrieval in the context of collaborative filtering techniques. Rogers begins by discussing the underpinnings of modern search engine design by examining medieval practices of knowledge seeking, following up with a critique of collaborative filtering techniques. Rogers's major contention is that collaborative filtering rids us of user idiosyncrasies, as search query strings, preferences, and recommendations are shared among users without much care for the differences among them, both in terms of their innate characteristics and their search goals. To illustrate this critique of collaborative filtering, Rogers describes an information-searching experiment that he conducted with students at the University of Vienna and the University of Amsterdam. Students were asked to search for information on Viagra. As one can imagine, depending on a number of issues, not least which sources one extracted information from, a student would find different accounts of reality about Viagra, everything from a medical drug to a black-market drug ideal for underground trade. Rogers describes how information on the Web differed from official accounts of certain events; the information on the Web served as an alternative reality. Chapter 3 describes the Web as a dynamic debate-mapping tool, a political instrument. Rogers introduces the "Issue Barometer," an information instrument that measures the social pressure on a topic being debated by analyzing data available from the Web. Measures used by the Issue Barometer include the temperature of the issue (cold to hot), the activity level of the debate (mild to intense), and territorialization (one country to many countries). The Issue Barometer is applied to an illustrative case of the public debate surrounding food safety in the Netherlands in 2001. Chapter 4 introduces the "Web Issue Index," which provides an indication of leading societal issues discussed on the Web. The empirical research on the Web Issue Index was conducted on the Genoa G8 Summit in 1999 and the anti-globalization movement. Rogers's focus here was to examine the changing nature of prominent issues over time, i.e., how issues gained and lost attention and traction over time.
    In Chapter 5, the "Election Issue Tracker" is introduced. The Election Issue Tracker calculates currency, defined as the "frequency of mentions of the issue terms per newspaper and across newspapers," in the three major national newspapers. The Election Issue Tracker is used to study which issues resonate with the press and which do not. As one would expect, Rogers found that not all issues that are considered important or central to a political party resonate with the press. This book contains a wealth of information that can be accessed by both researchers and practitioners. Even more interesting is the fact that researchers from a wide assortment of disciplines, from political science to information science and even communication studies, will appreciate the research and insights put forth by Rogers. Concepts presented in each chapter are thoroughly described using a wide variety of cases. Although all the cases have a European, mainly Dutch, flavor, they are interesting and thought-provoking. I found the descriptions of Rogers's various information instruments to be very interesting. Researchers can gain from an examination of these instruments, as they point to an interesting method for studying activities and behaviors on the Internet. In addition, each chapter has adequate illustrations and the bibliography is comprehensive. This book will make for an ideal supplementary text for graduate courses in information science, communication and media studies, and even political science. Like all books, however, this book had its share of shortcomings. While I was able to appreciate the content of the book, and certainly commend Rogers for studying an issue of immense significance, I found the book very difficult to read and parse through. The book is laden with jargon and political statements, and even has several instances of deficient writing. The book also lacked a sense of structure, and this affected the presentation of Rogers's material. I would also have hoped to see some recommendations by Rogers on how researchers should further the ideas he has put forth. Areas of future research, methods for studying future problems, and even insights on what the future might hold for information politics were not given enough attention in the book; in my opinion, this was a major shortcoming. Overall, I commend Rogers for putting forth a very informative book on the issues of information politics on the Web. Information politics, especially when delivered over communication technologies such as the Web, is going to play a vital role in our societies for a long time to come. Debates will range from the politics of how information is searched for and displayed on the Web to how the Web is used to manipulate or politicize information to meet the agendas of various entities. Richard Rogers's book will be one of the seminal and foundational readings on the topic for any curious minds that want to explore these issues."
    LCSH
    Information technology / Political aspects
    Subject
    Information technology / Political aspects
  4. Manning, C.D.; Raghavan, P.; Schütze, H.: Introduction to information retrieval (2008) 0.01
    0.0054649855 = product of:
      0.013662464 = sum of:
        0.002724461 = weight(_text_:a in 4041) [ClassicSimilarity], result of:
          0.002724461 = score(doc=4041,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.050957955 = fieldWeight in 4041, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4041)
        0.010938003 = product of:
          0.021876005 = sum of:
            0.021876005 = weight(_text_:information in 4041) [ClassicSimilarity], result of:
              0.021876005 = score(doc=4041,freq=24.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2687516 = fieldWeight in 4041, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4041)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
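    The nesting is the same in every entry: leaf term scores are summed, inner disjunctions get their own coord factor, and the outer sum is scaled by the outer coord. Re-assembling entry 4 from its two leaf weights (a sketch, values copied from the tree above):

      leaf_a    = 0.002724461   # weight(_text_:a in 4041)
      leaf_info = 0.021876005   # weight(_text_:information in 4041)

      inner = leaf_info * (1 / 2)          # coord(1/2)  -> 0.010938003
      total = (leaf_a + inner) * (2 / 5)   # coord(2/5)  -> 0.0054649855
      print(inner, total)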
    
    Abstract
    Class-tested and coherent, this textbook teaches information retrieval, including web search, text classification, and text clustering from basic concepts. Ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students. Slides and additional exercises are available for lecturers. - This book provides what Salton and Van Rijsbergen both failed to achieve. Even more important, unlike some other books in IR, the authors appear to care about making the theory as accessible as possible to the reader, on occasion including short primers to certain topics or choosing to explain difficult concepts using simplified approaches. Its coverage [is] excellent, the quality of writing high and I was surprised how much I learned from reading it. I think the online resources are impressive.
    Content
    Contents: Boolean retrieval - The term vocabulary & postings lists - Dictionaries and tolerant retrieval - Index construction - Index compression - Scoring, term weighting & the vector space model - Computing scores in a complete search system - Evaluation in information retrieval - Relevance feedback & query expansion - XML retrieval - Probabilistic information retrieval - Language models for information retrieval - Text classification & Naive Bayes - Vector space classification - Support vector machines & machine learning on documents - Flat clustering - Hierarchical clustering - Matrix decompositions & latent semantic indexing - Web search basics - Web crawling and indexes - Link analysis. See the digital edition at: http://nlp.stanford.edu/IR-book/pdf/irbookprint.pdf.
    LCSH
    Information retrieval
    RSWK
    Dokumentverarbeitung / Information Retrieval / Abfrageverarbeitung (GBV)
    Information Retrieval / Einführung (BVB)
    Subject
    Dokumentverarbeitung / Information Retrieval / Abfrageverarbeitung (GBV)
    Information Retrieval / Einführung (BVB)
    Information retrieval
  5. Rosenfeld, L.; Morville, P.: Information architecture for the World Wide Web : designing large-scale Web sites (1998) 0.01
    0.005404096 = product of:
      0.01351024 = sum of:
        0.0067426977 = weight(_text_:a in 493) [ClassicSimilarity], result of:
          0.0067426977 = score(doc=493,freq=16.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12611452 = fieldWeight in 493, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=493)
        0.006767543 = product of:
          0.013535086 = sum of:
            0.013535086 = weight(_text_:information in 493) [ClassicSimilarity], result of:
              0.013535086 = score(doc=493,freq=12.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16628155 = fieldWeight in 493, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=493)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
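    The term-frequency component is equally simple: ClassicSimilarity uses tf = sqrt(freq), which matches every tf line in this listing (freq 2 -> 1.4142135, 16 -> 4.0, 20 -> 4.472136, 50 -> 7.071068, 62 -> 7.8740077). A one-line check over the frequencies that occur in these results:

      import math

      for freq in (2, 4, 8, 10, 12, 16, 18, 20, 22, 24, 50, 62):
          print(freq, round(math.sqrt(freq), 7))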
    
    Abstract
    Some web sites "work" and some don't. Good web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to web site design. Each web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design web sites and intranets that support growth, management, and ease of use. Special attention is given to the process behind architecting a large, complex site, and to web site hierarchy design and organization. Information Architecture for the World Wide Web is for webmasters, designers, and anyone else involved in building a web site. It's for novice web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their web pages into a cohesive site. The authors are two of the principals of Argus Associates, a web consulting firm. At Argus, they have created information architectures for web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.
    LCSH
    Information storage and retrieval systems / Architecture
    Subject
    Information storage and retrieval systems / Architecture
  6. Horch, A.; Kett, H.; Weisbecker, A.: Semantische Suchsysteme für das Internet : Architekturen und Komponenten semantischer Suchmaschinen (2013) 0.01
    0.005084014 = product of:
      0.012710035 = sum of:
        0.0048162127 = weight(_text_:a in 4063) [ClassicSimilarity], result of:
          0.0048162127 = score(doc=4063,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.090081796 = fieldWeight in 4063, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4063)
        0.007893822 = product of:
          0.015787644 = sum of:
            0.015787644 = weight(_text_:information in 4063) [ClassicSimilarity], result of:
              0.015787644 = score(doc=4063,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.19395474 = fieldWeight in 4063, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4063)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    RSWK
    Suchmaschine / Semantic Web / Information Retrieval
    Suchmaschine / Information Retrieval / Ranking / Datenstruktur / Kontextbezogenes System
    Subject
    Suchmaschine / Semantic Web / Information Retrieval
    Suchmaschine / Information Retrieval / Ranking / Datenstruktur / Kontextbezogenes System
  7. Bizer, C.; Heath, T.: Linked Data : evolving the web into a global data space (2011) 0.00
    0.004877418 = product of:
      0.012193545 = sum of:
        0.009036016 = weight(_text_:a in 4725) [ClassicSimilarity], result of:
          0.009036016 = score(doc=4725,freq=22.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.16900843 = fieldWeight in 4725, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4725)
        0.003157529 = product of:
          0.006315058 = sum of:
            0.006315058 = weight(_text_:information in 4725) [ClassicSimilarity], result of:
              0.006315058 = score(doc=4725,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0775819 = fieldWeight in 4725, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4725)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.
  8. Semantische Technologien : Grundlagen - Konzepte - Anwendungen (2012) 0.00
    0.004546009 = product of:
      0.0113650225 = sum of:
        0.005839347 = weight(_text_:a in 167) [ClassicSimilarity], result of:
          0.005839347 = score(doc=167,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10921837 = fieldWeight in 167, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=167)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 167) [ClassicSimilarity], result of:
              0.011051352 = score(doc=167,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 167, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=167)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This textbook offers a comprehensive introduction to the foundations, potential, and applications of semantic technologies. It is aimed at students of computer science and related disciplines as well as at developers who want to use semantic technologies at work or in distributed applications. With its presentation oriented toward practical examples, it also gives users and decision-makers in companies a broad overview of the benefits and possibilities of this technology. Semantic technologies enable computers not only to store and retrieve information, but to evaluate it according to its meaning, to connect it, to combine it into something new, and thus to deliver useful services flexibly and purposefully. The first part of the book introduces the techniques, languages, and representation formalisms referred to as semantic technologies. These elements make it possible to describe the knowledge contained in information formally, and thus in a machine-processable way, to represent concepts and relationships, and finally to query content, to index it, and to make it accessible in networks. The second part describes how elementary functions and comprehensive services of information and knowledge processing can be realized with semantic technologies. These include the annotation and indexing of information, searching within the resulting structures, explaining semantic relationships, and integrating individual components into complex workflows and application solutions. The third part, finally, describes a wide range of application examples from different domains, illustrating the added value, potential, and limits of semantic technologies. The systems presented range from tools for personal, individual information management and support functions for groups to new approaches in the Internet of Things and Services, including the integration of different media and applications from medicine to music.
    Content
    Inhalt: 1. Einleitung (A. Dengel, A. Bernardi) 2. Wissensrepräsentation (A. Dengel, A. Bernardi, L. van Elst) 3. Semantische Netze, Thesauri und Topic Maps (O. Rostanin, G. Weber) 4. Das Ressource Description Framework (T. Roth-Berghofer) 5. Ontologien und Ontologie-Abgleich in verteilten Informationssystemen (L. van Elst) 6. Anfragesprachen und Reasoning (M. Sintek) 7. Linked Open Data, Semantic Web Datensätze (G.A. Grimnes, O. Hartig, M. Kiesel, M. Liwicki) 8. Semantik in der Informationsextraktion (B. Adrian, B. Endres-Niggemeyer) 9. Semantische Suche (K. Schumacher, B. Forcher, T. Tran) 10. Erklärungsfähigkeit semantischer Systeme (B. Forcher, T. Roth-Berghofer, S. Agne) 11. Semantische Webservices zur Steuerung von Prooduktionsprozessen (M. Loskyll, J. Schlick, S. Hodeck, L. Ollinger, C. Maxeiner) 12. Wissensarbeit am Desktop (S. Schwarz, H. Maus, M. Kiesel, L. Sauermann) 13. Semantische Suche für medizinische Bilder (MEDICO) (M. Möller, M. Sintek) 14. Semantische Musikempfehlungen (S. Baumann, A. Passant) 15. Optimierung von Instandhaltungsprozessen durch Semantische Technologien (P. Stephan, M. Loskyll, C. Stahl, J. Schlick)
    Editor
    Dengel, A.
    Footnote
    Also available as a digital edition. On p. 5 appears the sentence: "Wissen ist Information, die in Aktion umgesetzt wird" (knowledge is information put into action).
    RSWK
    Semantic Web / Information Extraction / Suche / Wissensbasiertes System / Aufsatzsammlung
    Subject
    Semantic Web / Information Extraction / Suche / Wissensbasiertes System / Aufsatzsammlung
  9. Spink, A.; Jansen, B.J.: Web searching : public searching of the Web (2004) 0.00
    0.0024462277 = product of:
      0.0061155693 = sum of:
        0.0017027882 = weight(_text_:a in 1443) [ClassicSimilarity], result of:
          0.0017027882 = score(doc=1443,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.03184872 = fieldWeight in 1443, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1443)
        0.004412781 = product of:
          0.008825562 = sum of:
            0.008825562 = weight(_text_:information in 1443) [ClassicSimilarity], result of:
              0.008825562 = score(doc=1443,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.10842399 = fieldWeight in 1443, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1443)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Footnote
    Rez. in: Information - Wissenschaft und Praxis 56(2004) H.1, S.61-62 (D. Lewandowski): "In recent years the authors of this volume have made a name for themselves with numerous publications on the behaviour of search engine users. The book now published offers a synthesis of these scattered papers and places their results in the context of a broader research programme. To analyse usage behaviour, Spink and Jansen use search engine query logs, in which the server records information about the queries directed at it. Data that can be extracted from these files include the search queries submitted, the address of the machine from which a query was issued, and the documents selected from the result lists. The clear advantage of log file analysis is that large volumes of data can be collected without great staffing effort. The data of a large number of anonymous users can be analysed without the data collection itself influencing user behaviour. This is particularly important for search engines because, unlike most other professional information retrieval systems, they are used not only in a professional context but also (and above all) privately. Surveys and laboratory studies distort the picture of usage behaviour, because users misjudge their own query behaviour or do not want to name the topics of their queries; queries aimed at medical or pornographic content come to mind above all. Log file analysis has its own problems, however: not all the desired data are contained in the log files at all (all information about the individual user is missing), no qualitative information such as the reason for a search is captured, and for technical reasons the log files are partly incomplete. From these advantages and disadvantages the authors conclude that log files are well suited to analysing user behaviour, but that the results of studies using other methods should be taken into account in the analysis."
    RSWK
    Internet / Information Retrieval (BVB)
    Series
    Information science and knowledge management; 6
    Subject
    Internet / Information Retrieval (BVB)
  10. Suchen und Finden im Internet (2007) 0.00
    0.002118135 = product of:
      0.010590675 = sum of:
        0.010590675 = product of:
          0.02118135 = sum of:
            0.02118135 = weight(_text_:information in 484) [ClassicSimilarity], result of:
              0.02118135 = score(doc=484,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2602176 = fieldWeight in 484, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=484)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
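    Entries this far down match only one of the five query terms, which is why their scores collapse: the single "information" clause is first halved by the inner coord(1/2) and then multiplied by the outer coord(1/5). For entry 10 (a sketch, numbers from the tree above):

      clause = 0.02118135        # weight(_text_:information in 484)
      inner  = clause * (1 / 2)  # 0.010590675
      total  = inner * (1 / 5)   # 0.002118135
      print(inner, total)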
    
    Abstract
    The Internet has lastingly changed the world of information, communication, and media, and search engines play a central role in this. They form the gateway to the sea of electronically available information, give users valuable help in finding content, have meanwhile become the crystallization point for a wide range of complementary information, communication, and media services, and are poised to overturn the structures and strategies of the industries involved. At the same time, the dynamic development of search and discovery technologies for the Internet is still in full swing. Against this background the MÜNCHNER KREIS, together with leading experts from industry and academia, analysed these developments and discussed future perspectives; this book contains the results.
    LCSH
    Business Information Systems
    Information Systems Applications (incl.Internet)
    Subject
    Business Information Systems
    Information Systems Applications (incl.Internet)
  11. Hüsken, P.: Informationssuche im Semantic Web : Methoden des Information Retrieval für die Wissensrepräsentation (2006) 0.00
    0.002118135 = product of:
      0.010590675 = sum of:
        0.010590675 = product of:
          0.02118135 = sum of:
            0.02118135 = weight(_text_:information in 4332) [ClassicSimilarity], result of:
              0.02118135 = score(doc=4332,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2602176 = fieldWeight in 4332, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4332)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The Semantic Web denotes an extended World Wide Web (WWW) that models the meaning of the content it presents in new standardized languages such as RDF Schema and OWL. This thesis deals with the information retrieval aspect, i.e. it examines to what extent methods of information search can be transferred to modelled knowledge. The characteristic features of IR systems, such as vague queries and support for uncertain knowledge, are treated in the context of the Semantic Web. The focus is on searching for facts within a knowledge domain that are either modelled explicitly or can be derived implicitly by applying inference. Building on the retrieval engine PIRE developed at the University of Duisburg-Essen, the application of uncertain inference with probabilistic predicate logic (pDatalog) is implemented.
    Footnote
    Also published as: Dortmund, Univ., diploma thesis, 2006, under the title: Hüsken, Peter: Information-Retrieval im Semantic-Web.
    RSWK
    Information Retrieval / Semantic Web
    Subject
    Information Retrieval / Semantic Web
  12. Social Semantic Web : Web 2.0, was nun? (2009) 0.00
    0.001764597 = product of:
      0.0044114925 = sum of:
        0.002043346 = weight(_text_:a in 4854) [ClassicSimilarity], result of:
          0.002043346 = score(doc=4854,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.03821847 = fieldWeight in 4854, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4854)
        0.0023681468 = product of:
          0.0047362936 = sum of:
            0.0047362936 = weight(_text_:information in 4854) [ClassicSimilarity], result of:
              0.0047362936 = score(doc=4854,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.058186423 = fieldWeight in 4854, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4854)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Editor
    Blumauer, A.
    Footnote
    Cf.: http://www.springer.com/computer/database+management+%26+information+retrieval/book/978-3-540-72215-1.
  13. Hitzler, P.; Krötzsch, M.; Rudolph, S.: Foundations of Semantic Web technologies (2010) 0.00
    0.0016346768 = product of:
      0.008173384 = sum of:
        0.008173384 = weight(_text_:a in 359) [ClassicSimilarity], result of:
          0.008173384 = score(doc=359,freq=18.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 359, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=359)
      0.2 = coord(1/5)
    
    Abstract
    This text introduces the standardized knowledge representation languages for modeling ontologies operating at the core of the semantic web. It covers RDF schema, Web Ontology Language (OWL), rules, query languages, the OWL 2 revision, and the forthcoming Rule Interchange Format (RIF). A 2010 CHOICE Outstanding Academic Title. ... The nine chapters of the book guide the reader through the major foundational languages for the semantic Web and highlight the formal semantics. ... The book has very interesting supporting material and exercises, is oriented to W3C standards, and provides the necessary foundations for the semantic Web. It will be easy to follow by the computer scientist who already has a basic background on semantic Web issues; it will also be helpful for both self-study and teaching purposes. I recommend this book primarily as a complementary textbook for a graduate or undergraduate course in a computer science or a Web science academic program. --Computing Reviews, February 2010. This book is unique in several respects. It contains an in-depth treatment of all the major foundational languages for the Semantic Web and provides a full treatment of the underlying formal semantics, which is central to the Semantic Web effort. It is also the very first textbook that addresses the forthcoming W3C recommended standards OWL 2 and RIF. Furthermore, the covered topics and underlying concepts are easily accessible for the reader due to a clear separation of syntax and semantics. ... I am confident this book will be well received and play an important role in training a larger number of students who will seek to become proficient in this growing discipline.
  14. Widhalm, R.; Mück, T.: Topic maps : Semantische Suche im Internet (2002) 0.00
    0.0014120899 = product of:
      0.0070604496 = sum of:
        0.0070604496 = product of:
          0.014120899 = sum of:
            0.014120899 = weight(_text_:information in 4731) [ClassicSimilarity], result of:
              0.014120899 = score(doc=4731,freq=10.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1734784 = fieldWeight in 4731, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4731)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The book covers current developments in the subject indexing of information sources on the Internet. Topic maps - semantic models of networked information resources based on XML or HyTime - provide all the modelling constructs needed to classify documents on the Internet and to lay an associative, semantic network over them. Alongside introductions to XML, XLink, XPointer, and HyTime, usage scenarios show how this novel technology works for content management and information retrieval on the Internet. The design of a query language is sketched, as is the prototype of an intelligent search engine. The book shows how topic maps point the way to semantically driven search processes on the Internet.
    RSWK
    Internet / Information Retrieval / Semantisches Netz / HyTime
    Internet / Information Retrieval / Semantisches Netz / XML
    Subject
    Internet / Information Retrieval / Semantisches Netz / HyTime
    Internet / Information Retrieval / Semantisches Netz / XML
  15. Hübener, M.: Suchmaschinenoptimierung kompakt : anwendungsorientierte Techniken für die Praxis (2009) 0.00
    0.0013396261 = product of:
      0.0066981306 = sum of:
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 3911) [ClassicSimilarity], result of:
              0.013396261 = score(doc=3911,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 3911, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3911)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    RSWK
    Suchmaschine / Information-Retrieval-System / Optimierung
    Subject
    Suchmaschine / Information-Retrieval-System / Optimierung
  16. Web-2.0-Dienste als Ergänzung zu algorithmischen Suchmaschinen (2008) 0.00
    9.472587E-4 = product of:
      0.0047362936 = sum of:
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 4323) [ClassicSimilarity], result of:
              0.009472587 = score(doc=4323,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 4323, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4323)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Issue
    Results of the project "Einbindung von Frage-Antwort-Diensten in die Web-Suche" (integrating question-answering services into web search) at the Department Information of the Hamburg University of Applied Sciences (winter term 2007/2008).
  17. Die Googleisierung der Informationssuche : Suchmaschinen zwischen Nutzung und Regulierung (2014) 0.00
    8.9308404E-4 = product of:
      0.0044654203 = sum of:
        0.0044654203 = product of:
          0.0089308405 = sum of:
            0.0089308405 = weight(_text_:information in 1840) [ClassicSimilarity], result of:
              0.0089308405 = score(doc=1840,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.10971737 = fieldWeight in 1840, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1840)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    RSWK
    Google / Internet / Information Retrieval / Aufsatzsammlung
    Subject
    Google / Internet / Information Retrieval / Aufsatzsammlung
  18. Hassler, M.: Web analytics : Metriken auswerten, Besucherverhalten verstehen, Website optimieren ; [Metriken analysieren und interpretieren ; Besucherverhalten verstehen und auswerten ; Website-Ziele definieren, Webauftritt optimieren und den Erfolg steigern] (2009) 0.00
    7.8144856E-4 = product of:
      0.003907243 = sum of:
        0.003907243 = product of:
          0.007814486 = sum of:
            0.007814486 = weight(_text_:information in 3586) [ClassicSimilarity], result of:
              0.007814486 = score(doc=3586,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0960027 = fieldWeight in 3586, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3586)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    BK
    85.20 / Betriebliche Information und Kommunikation
    Classification
    85.20 / Betriebliche Information und Kommunikation
  19. Stöcklin, N.: Wikipedia clever nutzen : in Schule und Beruf (2010) 0.00
    5.581776E-4 = product of:
      0.0027908878 = sum of:
        0.0027908878 = product of:
          0.0055817757 = sum of:
            0.0055817757 = weight(_text_:information in 4531) [ClassicSimilarity], result of:
              0.0055817757 = score(doc=4531,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.068573356 = fieldWeight in 4531, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4531)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Footnote
    Rez. in: Mitt. VÖB 64(2011) H.1, S. 153-155 (K. Niedermair): "A few weeks ago the weekly Die Zeit ran a three-page article on Wikipedia, which had just turned ten, under the title "Das größte Werk der Menschen" (the greatest work of humankind): how could Wikipedia earn this label? Probably already on quantitative grounds: since going online on 15 January 2001 the free encyclopedia Wikipedia has grown enormously; there are platforms in most languages, with a great deal of content, most of all of course in the English-language version with more than three million articles, while the German-language version is also well represented with about one million. Wikipedia has become an important source of information in everyday life, at work, and in teaching and research, as surveys of students and researchers repeatedly show. Wikipedia also deserves credit for not being profit-oriented and for doing without advertising revenue. Wikipedia lives on the idealism of countless volunteers who, not for money but for the pleasure of the work, are jointly committed to the great goal of collecting, organizing, and providing knowledge - free of charge, for everyone, at any time and in any place. It is gratifying that this programme of a universal public encyclopedia has been able to survive in the commercialized reality of the Internet, and that success and growth on the Internet do not always have to be about money, as Google, Facebook, and the like suggest, whose founders, as is well known, have since become billionaires. In this respect Wikipedia is also a strong argument against the common claim that information is only useful if it costs something: quality assurance of information is not necessarily tied to its commercialization. Indeed, Wikipedia has by now become serious competition for the traditional, commercially oriented reference works and encyclopedias.
  20. Mythos Internet (1997) 0.00
    5.448922E-4 = product of:
      0.002724461 = sum of:
        0.002724461 = weight(_text_:a in 3175) [ClassicSimilarity], result of:
          0.002724461 = score(doc=3175,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.050957955 = fieldWeight in 3175, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=3175)
      0.2 = coord(1/5)
    
    Editor
    Münker, S.; Roesler, A.

Languages

  • d 15
  • e 7

Types

  • m 21
  • s 4
  • r 1

Subjects

Classifications