Search (7 results, page 1 of 1)

  • × classification_ss:"54.32 / Rechnerkommunikation"
  • × type_ss:"m"
  • × year_i:[2000 TO 2010}
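The three active facets above are ordinary Solr filter queries. A minimal sketch of how they would be passed as `fq` parameters (field names and values are copied verbatim from this page; the surrounding request structure is illustrative, not taken from the catalog's actual code):

```python
# The three active facet filters expressed as Solr fq (filter query) parameters.
params = {
    "q": "*:*",
    "fq": [
        'classification_ss:"54.32 / Rechnerkommunikation"',
        'type_ss:"m"',
        # Solr range syntax: [ is an inclusive bound, } an exclusive one,
        # so this filter matches 2000 <= year_i < 2010.
        "year_i:[2000 TO 2010}",
    ],
}

def year_in_range(year: int) -> bool:
    """Mirrors the semantics of the year_i:[2000 TO 2010} filter."""
    return 2000 <= year < 2010
```

The mixed brackets are worth noting: all seven hits are monographs published from 2000 up to, but not including, 2010.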
  1. Spinning the Semantic Web : bringing the World Wide Web to its full potential (2003) 0.06
    0.056126453 = product of:
      0.112252906 = sum of:
        0.03781448 = weight(_text_:l in 1981) [ClassicSimilarity], result of:
          0.03781448 = score(doc=1981,freq=4.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.2173638 = fieldWeight in 1981, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
        0.07443842 = weight(_text_:van in 1981) [ClassicSimilarity], result of:
          0.07443842 = score(doc=1981,freq=4.0), product of:
            0.24408463 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043769516 = queryNorm
            0.30496973 = fieldWeight in 1981, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1981)
      0.5 = coord(2/4)
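The explain tree above can be reproduced by hand. A minimal sketch, assuming Lucene's ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm; all constants are copied from the tree):

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Reproduce one weight(...) node of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                              # 2.0 for freq=4.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # e.g. 3.9746525 for _text_:l
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# The _text_:l node for doc 1981:
score_l = classic_term_score(4.0, 2257, 44218, 0.043769516, 0.02734375)
# The _text_:van node:
score_van = classic_term_score(4.0, 454, 44218, 0.043769516, 0.02734375)

# Final score: the sum of the matching term scores, scaled by coord(2/4) = 0.5
# because only 2 of the 4 query terms match this document.
total = 0.5 * (score_l + score_van)
```

Running this reproduces the three headline numbers of the tree (0.03781448, 0.07443842, and 0.056126453) to floating-point precision; the same formulas account for every node in the remaining six score trees.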
    
    Content
    Contents: Tim Berners-Lee: The Original Dream - Re-enter Machines - Where Are We Now? - The World Wide Web Consortium - Where Is the Web Going Next? / Dieter Fensel, James Hendler, Henry Lieberman, and Wolfgang Wahlster: Why Is There a Need for the Semantic Web and What Will It Provide? - How the Semantic Web Will Be Possible / Jeff Heflin, James Hendler, and Sean Luke: SHOE: A Blueprint for the Semantic Web / Deborah L. McGuinness, Richard Fikes, Lynn Andrea Stein, and James Hendler: DAML-ONT: An Ontology Language for the Semantic Web / Michel Klein, Jeen Broekstra, Dieter Fensel, Frank van Harmelen, and Ian Horrocks: Ontologies and Schema Languages on the Web / Borys Omelayenko, Monica Crubezy, Dieter Fensel, Richard Benjamins, Bob Wielinga, Enrico Motta, Mark Musen, and Ying Ding: UPML: The Language and Tool Support for Making the Semantic Web Alive / Deborah L. McGuinness: Ontologies Come of Age / Jeen Broekstra, Arjohn Kampman, and Frank van Harmelen: Sesame: An Architecture for Storing and Querying RDF Data and Schema Information / Rob Jasper and Mike Uschold: Enabling Task-Centered Knowledge Support through Semantic Markup / Yolanda Gil: Knowledge Mobility: Semantics for the Web as a White Knight for Knowledge-Based Systems / Sanjeev Thacker, Amit Sheth, and Shuchi Patel: Complex Relationships for the Semantic Web / Alexander Maedche, Steffen Staab, Nenad Stojanovic, Rudi Studer, and York Sure: SEmantic portAL: The SEAL Approach / Ora Lassila and Mark Adler: Semantic Gadgets: Ubiquitous Computing Meets the Semantic Web / Christopher Frye, Mike Plusch, and Henry Lieberman: Static and Dynamic Semantics of the Web / Masahiro Hori: Semantic Annotation for Web Content Adaptation / Austin Tate, Jeff Dalton, John Levine, and Alex Nixon: Task-Achieving Agents on the World Wide Web
  2. Belew, R.K.: Finding out about : a cognitive perspective on search engine technology and the WWW (2001) 0.02
    0.021268122 = product of:
      0.08507249 = sum of:
        0.08507249 = weight(_text_:van in 3346) [ClassicSimilarity], result of:
          0.08507249 = score(doc=3346,freq=4.0), product of:
            0.24408463 = queryWeight, product of:
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.043769516 = queryNorm
            0.34853685 = fieldWeight in 3346, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5765896 = idf(docFreq=454, maxDocs=44218)
              0.03125 = fieldNorm(doc=3346)
      0.25 = coord(1/4)
    
    Abstract
    The World Wide Web is rapidly filling with more text than anyone could have imagined even a short time ago, but the task of isolating relevant parts of this vast information has become just that much more daunting. Richard Belew brings a cognitive perspective to the study of information retrieval as a discipline within computer science. He introduces the idea of Finding Out About (FOA) as the process of actively seeking out information relevant to a topic of interest and describes its many facets - ranging from creating a good characterization of what the user seeks, to what documents actually mean, to methods of inferring semantic clues about each document, to the problem of evaluating whether our search engines are performing as we have intended. Finding Out About explains how to build the tools that are useful for searching collections of text and other media. In the process it takes a close look at the properties of textual documents that do not become clear until very large collections of them are brought together and shows that the construction of effective search engines requires knowledge of the statistical and mathematical properties of linguistic phenomena, as well as an appreciation for the cognitive foundation we bring to the task as language users. The unique approach of this book is its even-handed treatment of the phenomena of both numbers and words, making it accessible to a wide audience. The textbook is usable in both undergraduate and graduate classes on information retrieval, library science, and computational linguistics. The text is accompanied by a CD-ROM that contains a hypertext version of the book, including additional topics and notes not present in the printed edition. In addition, the CD contains the full text of C.J. "Keith" van Rijsbergen's famous textbook, Information Retrieval (now out of print). Many active links from Belew's to van Rijsbergen's hypertexts help to unite the material.
Several test corpora and indexing tools are provided, to support the design of your own search engine. Additional exercises using these corpora and code are available to instructors. Also supporting this book is a Web site that will include recent additions to the book, as well as links to sites of new topics and methods.
  3. Schweibenz, W.; Thissen, F.: Qualität im Web : Benutzerfreundliche Webseiten durch Usability Evaluation (2003) 0.01
    0.0126369465 = product of:
      0.050547786 = sum of:
        0.050547786 = sum of:
          0.020896958 = weight(_text_:der in 767) [ClassicSimilarity], result of:
            0.020896958 = score(doc=767,freq=6.0), product of:
              0.09777089 = queryWeight, product of:
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.043769516 = queryNorm
              0.21373394 = fieldWeight in 767, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                2.2337668 = idf(docFreq=12875, maxDocs=44218)
                0.0390625 = fieldNorm(doc=767)
          0.029650826 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
            0.029650826 = score(doc=767,freq=2.0), product of:
              0.15327339 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.043769516 = queryNorm
              0.19345059 = fieldWeight in 767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=767)
      0.25 = coord(1/4)
    
    Abstract
    For websites, as for all interactive applications from simple self-service machines to complex software, usability is of central importance. However, sensible use of information services on the World Wide Web is often made unnecessarily difficult by "cool design" that neglects central aspects of user-friendliness (usability). Usability evaluation can improve the usability of websites and, with it, their acceptance among users. The goal is to design appealing, user-friendly web offerings that allow users an effective and efficient dialogue. The book offers a practice-oriented introduction to web usability evaluation and describes how its various methods are applied.
    Classification
    AP 15860 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Formen der Kommunikation und des Kommunikationsdesigns / Kommunikationsdesign in elektronischen Medien
    Date
    22. 3.2008 14:24:08
    RVK
    AP 15860 Allgemeines / Medien- und Kommunikationswissenschaften, Kommunikationsdesign / Formen der Kommunikation und des Kommunikationsdesigns / Kommunikationsdesign in elektronischen Medien
  4. Net effects : how librarians can manage the unintended consequences of the Internet (2003) 0.00
    0.0038198393 = product of:
      0.015279357 = sum of:
        0.015279357 = weight(_text_:l in 1796) [ClassicSimilarity], result of:
          0.015279357 = score(doc=1796,freq=2.0), product of:
            0.17396861 = queryWeight, product of:
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.043769516 = queryNorm
            0.08782824 = fieldWeight in 1796, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9746525 = idf(docFreq=2257, maxDocs=44218)
              0.015625 = fieldNorm(doc=1796)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: JASIST 55(2004) no.11, S.1025-1026 (D.E. Agosto): ""Did you ever feel as though the Internet has caused you to lose control of your library?" So begins the introduction to this volume of over 50 articles, essays, library policies, and other documents from a variety of sources, most of which are library journals aimed at practitioners. Volume editor Block has a long history of library service as well as an active career as an online journalist. From 1977 to 1999 she was the Associate Director of Public Services at the St. Ambrose University library in Davenport, Iowa. She was also a Fox News Online weekly columnist from 1998 to 2000. She currently writes for and publishes the weekly ezine Exlibris, which focuses on the use of computers, the Internet, and digital databases to improve library services. Despite the promising premise of this book, the final product is largely a disappointment because of the superficial coverage of its issues. A listing of the most frequently represented sources serves to express the general level and style of the entries: nine articles are reprinted from Computers in Libraries, five from Library Journal, four from Library Journal NetConnect, four from ExLibris, four from American Libraries, three from College & Research Libraries News, two from Online, and two from The Chronicle of Higher Education. Most of the authors included contributed only one item, although Roy Tennant (manager of the California Digital Library) authored three of the pieces, and Janet L. Balas (library information systems specialist at the Monroeville Public Library in Pennsylvania) and Karen G. Schneider (coordinator of lii.org, the Librarians' Index to the Internet) each wrote two. Volume editor Block herself wrote six of the entries, most of which have been reprinted from ExLibris. Reading the volume is much like reading an issue of one of these journals - a pleasant experience that discusses issues in the field without presenting much research.
Net Effects doesn't offer much in the way of theory or research, but then again it doesn't claim to. Instead, it claims to be an "idea book" (p. 5) with practical solutions to Internet-generated library problems. While the idea is a good one, little of the material is revolutionary or surprising (or even very creative), and most of the solutions offered will already be familiar to most of the book's intended audience.
  5. Wegweiser im Netz : Qualität und Nutzung von Suchmaschinen (2004) 0.00
    0.0036194592 = product of:
      0.014477837 = sum of:
        0.014477837 = product of:
          0.028955674 = sum of:
            0.028955674 = weight(_text_:der in 2858) [ClassicSimilarity], result of:
              0.028955674 = score(doc=2858,freq=18.0), product of:
                0.09777089 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043769516 = queryNorm
                0.29615843 = fieldWeight in 2858, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2858)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Search engines are the new "gatekeepers" of the Internet. They channel our attention and have a decisive influence on which content is accessible, and how. Without them, information on the net is hard to find. Yet only few users know how search engines work and how to use them optimally. They are susceptible to manipulation ("spamming") and also unintentionally provide access to illegal content and content harmful to minors. How can search engines nevertheless live up to their responsibility as central sorters of information? A large-scale study by the Bertelsmann Stiftung puts these observations on a scientific footing. A user survey, a laboratory experiment, and a performance comparison shed light on the image, ease of use, and quality of search engines. From this analysis the authors develop a code of conduct for search engine operators, intended to guarantee access to information on the net that is as objective and transparent as possible. The book has three parts. The first, extensive part (up to page 490) investigates quality and usage, after an introduction to the search engine problem and its context: following a market analysis of German-language search services, selected ones are subjected to a performance comparison. The chapter on problem analysis is devoted to the risks for children and adolescents. How successfully spamming attempts can influence search results is presented next. A detailed chapter is devoted to the knowledge and attitudes of users of search services; frequency of use, search processes, and search strategies have been examined in detail. The results of the laboratory experiments provide concrete insights, comprehensibly described over more than 100 pages. Chapter 6 explains the methods applied in detail. The appended glossary could be more extensive.
The second part appeals to the social responsibility of the German search service operators by drafting a code of conduct for search engines. The third part addresses the developments in the search engine landscape that have resulted from company takeovers and Google's monopoly position.
  6. Block, C.H.: ¬Das Intranet : die neue Informationsverarbeitung (2004) 0.00
    0.0029859017 = product of:
      0.011943607 = sum of:
        0.011943607 = product of:
          0.023887213 = sum of:
            0.023887213 = weight(_text_:der in 2396) [ClassicSimilarity], result of:
              0.023887213 = score(doc=2396,freq=4.0), product of:
                0.09777089 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043769516 = queryNorm
                0.24431825 = fieldWeight in 2396, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2396)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Wechselwirkung 26(2004) Nr.128, S.110: "This book shows the many possibilities that deploying an intranet offers a company - from a corporate information system to use in the form of an extranet as a basis for workflow and knowledge management. It becomes clear that an intranet is of enormous strategic importance for the information processing of the entire company. In addition to the foundations an intranet deployment requires, the author Carl Hans Block covers in particular the practical procedure, both in building the intranet and in its ongoing operation."
  7. Widhalm, R.; Mück, T.: Topic maps : Semantische Suche im Internet (2002) 0.00
    0.0017062295 = product of:
      0.006824918 = sum of:
        0.006824918 = product of:
          0.013649836 = sum of:
            0.013649836 = weight(_text_:der in 4731) [ClassicSimilarity], result of:
              0.013649836 = score(doc=4731,freq=4.0), product of:
                0.09777089 = queryWeight, product of:
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.043769516 = queryNorm
                0.13961042 = fieldWeight in 4731, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.2337668 = idf(docFreq=12875, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4731)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The work covers current developments in the subject indexing of information sources on the Internet. Topic maps - semantic models of networked information resources built on XML or HyTime - offer all the modeling constructs needed to classify documents on the Internet and to lay an associative, semantic network over them. Besides introductions to XML, XLink, XPointer, and HyTime, deployment scenarios show how this novel technology works for content management and information retrieval on the Internet. The design of a query language is sketched, as is the prototype of an intelligent search engine. The book shows how topic maps point the way to semantically driven search processes on the Internet.
