Search (7 results, page 1 of 1)

  • classification_ss:"025.04 / dc22"
  1. O'Connor, B.C.; Kearns, J.; Anderson, R.L.: Doing things with information : beyond indexing and abstracting (2008) 0.03
    0.026754394 = sum of:
      0.024335213 = product of:
        0.09734085 = sum of:
          0.09734085 = weight(_text_:authors in 4297) [ClassicSimilarity], result of:
            0.09734085 = score(doc=4297,freq=8.0), product of:
              0.24157293 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052990302 = queryNorm
              0.40294603 = fieldWeight in 4297, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.03125 = fieldNorm(doc=4297)
        0.25 = coord(1/4)
      0.0024191819 = product of:
        0.0048383637 = sum of:
          0.0048383637 = weight(_text_:e in 4297) [ClassicSimilarity], result of:
            0.0048383637 = score(doc=4297,freq=2.0), product of:
              0.07616667 = queryWeight, product of:
                1.43737 = idf(docFreq=28552, maxDocs=44218)
                0.052990302 = queryNorm
              0.063523374 = fieldWeight in 4297, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.43737 = idf(docFreq=28552, maxDocs=44218)
                0.03125 = fieldNorm(doc=4297)
        0.5 = coord(1/2)
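
    A note on the score breakdown above: it is standard Lucene "explain" output for the ClassicSimilarity (TF-IDF) model. The following minimal Python sketch, written for this listing and not part of the original page, recomputes the first summand of entry 1 from the values shown (doc 4297, term "authors"); the variable names are ours, and the formulas assumed are Lucene's classic tf = sqrt(termFreq) and idf = 1 + ln(maxDocs / (docFreq + 1)).

      import math

      tf = math.sqrt(8.0)                      # 2.828427 = sqrt(termFreq)
      idf = 1 + math.log(44218 / (1258 + 1))   # 4.558814 = 1 + ln(maxDocs / (docFreq + 1))
      query_norm = 0.052990302                 # queryNorm, taken directly from the listing
      field_norm = 0.03125                     # fieldNorm (stored length normalization)

      query_weight = idf * query_norm          # 0.24157293 = queryWeight
      field_weight = tf * idf * field_norm     # 0.40294603 = fieldWeight in 4297
      weight = query_weight * field_weight     # 0.09734085 = weight(_text_:authors in 4297)
      print(weight * 0.25)                     # coord(1/4) applied -> 0.024335213

    The same arithmetic, using freq=2.0 and the idf of the far more common term "e", reproduces the second summand (0.0024191819); the document's total score of 0.026754394 is the sum of the two.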
    
    Footnote
    The authors state that this book emerged from a proposal to do a second edition of Explorations in Indexing and Abstracting (O'Connor 1996); much of its content is the result of the authors' reaction to the reviews of that first edition and their realization of "the necessity to address some more fundamental questions". Review in: KO 38(2011) no.1, p.62-64 (L.F. Spiteri): "This book provides a good overview of the relationship between the document and the user; in this regard, it reinforces the importance of the client-centred approach to the design of document representation systems. In the final chapter, the authors state: "We have offered examples of new ways to think about messages in all sorts of media and how they might be discovered, analyzed, synthesized, and generated. We brought together philosophical, scientific, and engineering notions into a fundamental model for just how we might understand doing this with information" (p. 225). The authors have certainly succeeded in highlighting the complex processes, nature, and implications of document representation systems, although, as has been seen, the novelty of some of their discussions and suggestions is sometimes limited. With further explanation, the FOC model may serve as a useful way to understand how to build document representation systems to better meet user needs."; cf.: http://www.ergon-verlag.de/isko_ko/downloads/ko_38_2011_1e.pdf.
    Language
    e
  2. Antoniou, G.; Harmelen, F. van: ¬A semantic Web primer (2004) 0.02
    0.017347785 = sum of:
      0.015209509 = product of:
        0.060838036 = sum of:
          0.060838036 = weight(_text_:authors in 468) [ClassicSimilarity], result of:
            0.060838036 = score(doc=468,freq=8.0), product of:
              0.24157293 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052990302 = queryNorm
              0.25184128 = fieldWeight in 468, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=468)
        0.25 = coord(1/4)
      0.002138275 = product of:
        0.00427655 = sum of:
          0.00427655 = weight(_text_:e in 468) [ClassicSimilarity], result of:
            0.00427655 = score(doc=468,freq=4.0), product of:
              0.07616667 = queryWeight, product of:
                1.43737 = idf(docFreq=28552, maxDocs=44218)
                0.052990302 = queryNorm
              0.056147262 = fieldWeight in 468, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.43737 = idf(docFreq=28552, maxDocs=44218)
                0.01953125 = fieldNorm(doc=468)
        0.5 = coord(1/2)
    
    Footnote
    Review in: JASIST 57(2006) no.8, p.1132-1133 (H. Che): "The World Wide Web has been the main source of an important shift in the way people communicate with each other, get information, and conduct business. However, most of the current Web content is only suitable for human consumption. The main obstacle to providing better quality of service is that the meaning of Web content is not machine-accessible. The "Semantic Web" is envisioned by Tim Berners-Lee as a logical extension to the current Web that enables explicit representations of term meaning. It aims to bring the Web to its full potential via the exploration of these machine-processable metadata. To fulfill this, it provides some metalanguages like RDF, OWL, DAML+OIL, and SHOE for expressing knowledge that has clear, unambiguous meanings. The first steps in weaving the Semantic Web into the current Web are successfully underway. In the forthcoming years, these efforts still remain highly focused in the research and development community. In the next phase, the Semantic Web will respond more intelligently to user queries. The first chapter gets started with an excellent introduction to the Semantic Web vision. At first, today's Web is introduced, and problems with some current applications like search engines are also covered. Subsequently, knowledge management, business-to-consumer electronic commerce, business-to-business electronic commerce, and personal agents are used as examples to show the potential requirements for the Semantic Web. Next comes the brief description of the underpinning technologies, including metadata, ontology, logic, and agent. The differences between the Semantic Web and Artificial Intelligence are also discussed in a later subsection. In section 1.4, the famous "layer-cake" diagram is given to show a layered view of the Semantic Web. From chapter 2, the book starts addressing some of the most important technologies for constructing the Semantic Web. In chapter 2, the authors discuss XML and its related technologies such as namespaces, XPath, and XSLT. XML is a simple, very flexible text format which is often used for the exchange of a wide variety of data on the Web and elsewhere. The W3C has defined various languages on top of XML, such as RDF. Although this chapter is very well planned and written, many details are not included because of the extensiveness of the XML technologies. Many other books on XML provide more comprehensive coverage.
    The next chapter introduces the Resource Description Framework (RDF) and RDF Schema (RDFS). Unlike XML, RDF provides a foundation for expressing the semantics of data: it is a standard data model for machine-processable semantics. RDF Schema offers a number of modeling primitives for organizing RDF vocabularies in typed hierarchies. In addition to RDF and RDFS, a query language for RDF, i.e. RQL, is introduced. This chapter and the next chapter are two of the most important chapters in the book. Chapter 4 presents another language called Web Ontology Language (OWL). Because RDFS is quite primitive as a modeling language for the Web, more powerful languages are needed. A richer language, DAML+OIL, is thus proposed as a joint endeavor of the United States and Europe. OWL takes DAML+OIL as the starting point, and aims to be the standardized and broadly accepted ontology language. At the beginning of the chapter, the nontrivial relation with RDF/RDFS is discussed. Then the authors describe the various language elements of OWL in some detail. Moreover, Appendix A contains an abstract OWL syntax, which compresses OWL and makes OWL much easier to read. Chapter 5 covers both monotonic and nonmonotonic rules. Whereas the previous chapters mainly concentrate on specializations of knowledge representation, this chapter depicts the foundation of knowledge representation and inference. Two examples are also given to explain monotonic and non-monotonic rules, respectively. To get the most out of the chapter, readers had better gain a thorough understanding of predicate logic first. Chapter 6 presents several realistic application scenarios to which the Semantic Web technology can be applied, including horizontal information products at Elsevier, data integration at Audi, skill finding at Swiss Life, a think tank portal at EnerSearch, e-learning, Web services, multimedia collection indexing, online procurement, and device interoperability. These case studies give us some real feelings about the Semantic Web.
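    To make the RDF/RDFS primitives mentioned in the review more concrete, here is a minimal, self-contained sketch using the Python rdflib library (not discussed in the book itself); the ex: namespace, class names, and resource names are invented purely for illustration.

      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/")   # hypothetical namespace for this example
      g = Graph()
      g.bind("ex", EX)

      # RDFS: a small typed hierarchy (ex:Primer is declared a subclass of ex:Book)
      g.add((EX.Book, RDF.type, RDFS.Class))
      g.add((EX.Primer, RDFS.subClassOf, EX.Book))

      # RDF: triples describing one concrete resource
      g.add((EX.semanticWebPrimer, RDF.type, EX.Primer))
      g.add((EX.semanticWebPrimer, RDFS.label, Literal("A Semantic Web primer")))

      # Serialize the graph as Turtle (returns a string in rdflib >= 6)
      print(g.serialize(format="turtle"))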
    The chapter on ontology engineering describes the development of ontology-based systems for the Web using manual and semiautomatic methods. Ontology is a concept similar to taxonomy. As stated in the introduction, ontology engineering deals with some of the methodological issues that arise when building ontologies, in particular, constructing ontologies manually, reusing existing ontologies, and using semiautomatic methods. A medium-scale project is included at the end of the chapter. Overall, the book is a nice introduction to the key components of the Semantic Web. The reading is quite pleasant, in part due to the concise layout that allows just enough content per page to facilitate readers' comprehension. Furthermore, the book provides a large number of examples, code snippets, exercises, and annotated online materials. Thus, it is very suitable for use as a textbook for undergraduates and low-grade graduates, as the authors say in the preface. However, I believe that not only students but also professionals in both academia and industry will benefit from the book. The authors also built an accompanying Web site for the book at http://www.semanticwebprimer.org. On the main page, there are eight tabs, one for each of the eight chapters. For each tab, the following sections are included: overview, example, presentations, problems and quizzes, errata, and links. These contents will greatly facilitate readers: for example, readers can open the listed links to further their readings. The vacancy of the errata sections also proves the quality of the book."
    Language
    e
  3. TREC: experiment and evaluation in information retrieval (2005) 0.01
    0.009116743 = sum of:
      0.0076047545 = product of:
        0.030419018 = sum of:
          0.030419018 = weight(_text_:authors in 636) [ClassicSimilarity], result of:
            0.030419018 = score(doc=636,freq=2.0), product of:
              0.24157293 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.052990302 = queryNorm
              0.12592064 = fieldWeight in 636, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.01953125 = fieldNorm(doc=636)
        0.25 = coord(1/4)
      0.0015119887 = product of:
        0.0030239774 = sum of:
          0.0030239774 = weight(_text_:e in 636) [ClassicSimilarity], result of:
            0.0030239774 = score(doc=636,freq=2.0), product of:
              0.07616667 = queryWeight, product of:
                1.43737 = idf(docFreq=28552, maxDocs=44218)
                0.052990302 = queryNorm
              0.03970211 = fieldWeight in 636, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                1.43737 = idf(docFreq=28552, maxDocs=44218)
                0.01953125 = fieldNorm(doc=636)
        0.5 = coord(1/2)
    
    Footnote
    ... TREC: Experiment and Evaluation in Information Retrieval is a reliable and comprehensive review of the TREC program and has been adopted by NIST as the official history of TREC (see http://trec.nist.gov). We were favorably surprised by the book. Well structured and written, chapters are self-contained and the existence of references to specialized and more detailed publications is continuous, which makes it easier to expand into the different aspects analyzed in the text. This book succeeds in compiling TREC evolution from its inception in 1992 to 2003 in an adequate and manageable volume. Thanks to the impressive effort performed by the authors and their experience in the field, it can satiate the interests of a great variety of readers. While expert researchers in the IR field and IR-related industrial companies can use it as a reference manual, it seems especially useful for students and non-expert readers willing to approach this research area. Like NIST, we would recommend this reading to anyone who may be interested in textual information retrieval."
    Language
    e
  4. Stuckenschmidt, H.; Harmelen, F. van: Information sharing on the semantic web (2005) 0.00
    0.0026188414 = product of:
      0.0052376827 = sum of:
        0.0052376827 = product of:
          0.010475365 = sum of:
            0.010475365 = weight(_text_:e in 2789) [ClassicSimilarity], result of:
              0.010475365 = score(doc=2789,freq=6.0), product of:
                0.07616667 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.052990302 = queryNorm
                0.13753214 = fieldWeight in 2789, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2789)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Classification
    TVU (E)
    GHBS
    TVU (E)
    Language
    e
  5. Sherman, C.: Google power : Unleash the full potential of Google (2005) 0.00
    0.0018143863 = product of:
      0.0036287727 = sum of:
        0.0036287727 = product of:
          0.0072575454 = sum of:
            0.0072575454 = weight(_text_:e in 3185) [ClassicSimilarity], result of:
              0.0072575454 = score(doc=3185,freq=2.0), product of:
                0.07616667 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.052990302 = queryNorm
                0.09528506 = fieldWeight in 3185, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3185)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
  6. Spink, A.; Jansen, B.J.: Web searching : public searching of the Web (2004) 0.00
    0.0016904548 = product of:
      0.0033809096 = sum of:
        0.0033809096 = product of:
          0.006761819 = sum of:
            0.006761819 = weight(_text_:e in 1443) [ClassicSimilarity], result of:
              0.006761819 = score(doc=1443,freq=10.0), product of:
                0.07616667 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.052990302 = queryNorm
                0.08877662 = fieldWeight in 1443, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1443)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    The authors were given access to large data sets by the commercial search engines AltaVista, Excite and All the Web. The files analysed each comprised all queries submitted to the respective search engine on a particular day. The data were collected between 1997 and 2002; however, data from every search engine are not available for every year, so some of the observed differences in user behaviour can probably be attributed to the different user groups of the individual search engines. In one case the user groups are even separated explicitly by search engine, so that the behaviour of the European users of All the Web is compared with that of the US-American users. The log files are analysed at several levels: the search terms entered, the complete queries, the search sessions, and the number of result pages viewed are all examined. With regard to the search terms, it is particularly interesting that the range of information needs has widened considerably over the years. While 20 percent of all search terms entered are used regularly, ten percent occurred only a single time. The thematic interests of search engine users have also shifted in recent years. Whereas in the early years many queries came from the two subject areas of sex and technology, these are now declining, while queries in the area of e-commerce are increasing. Non-English terms as well as numbers and acronyms have also continued to increase. The popularity of search terms is also seasonal and is influenced by current news. At the level of the queries, the frequently documented fact that queries in Web search engines are extremely short is confirmed once again. Depending on the search engine, the average query contains between 2.3 and 2.9 terms, which agrees with other studies on the subject. Query length has risen slightly in recent years, but larger jumps towards longer queries are not to be expected. The same applies to the use of operators: they appear in only about every tenth query, with phrase search being used most frequently. That search engine users must still largely be regarded as beginners is also evident from the fact that they actually view only three or four documents from the result list per query.
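    The levels of log-file analysis described above (terms, complete queries, sessions, result pages viewed) can be illustrated with a minimal Python sketch; the toy query log and the field layout below are invented for illustration and are not taken from the data sets analysed in the book.

      from statistics import mean

      # Toy query log: (session_id, query string, result pages viewed); invented data.
      log = [
          (1, "semantic web primer", 1),
          (1, "semantic web tutorial", 2),
          (2, "shopping", 1),
          (3, '"information retrieval" evaluation', 1),
      ]

      queries = [q for _, q, _ in log]
      terms_per_query = [len(q.split()) for q in queries]
      phrase_share = sum('"' in q for q in queries) / len(queries)  # share using phrase search
      sessions = {sid for sid, _, _ in log}

      print(f"average query length: {mean(terms_per_query):.1f} terms")
      print(f"queries using phrase search: {phrase_share:.0%}")
      print(f"sessions: {len(sessions)}, result pages viewed on average: "
            f"{mean(p for _, _, p in log):.1f}")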
    With regard to information needs, a further peculiarity arises from the fact that search engines are not used for just one type of query. One "speciality" of search engines is answering navigational queries, for example for a company's homepage. Here no set of documents or factual information is wanted; rather, a navigational aid is required. Such queries continue to increase. The examination of search sessions yields results on how queries for a given information need are formulated and reworked. The sessions overwhelmingly last less than 15 minutes (including viewing the documents!), during which about five documents are looked at. The number of result pages viewed has decreased over time; this could be because the search engines have gradually succeeded in answering queries better, so that usable results are more often found on the first result page. Overall, this again confirms the picture of the not very advanced search engine user who, after entering an unspecific query, expects fast and good results. The second part of the book is devoted to some of the topics popular among search engine users and analyses user behaviour in such searches, examining the search terms and queries entered. The areas are e-commerce, medical topics, sex, and multimedia. Queries from the e-commerce area are generally longer than general queries. They are modified less often, and fewer documents are viewed per query. Some generic expressions such as "shopping" are used very frequently. The proportion of e-commerce queries is high, and the authors see a need to create or improve special search functions for finding company homepages and products. Only between three and nine percent of the queries relate to medical topics, and the proportion of such queries is tending to decline. The proportion of queries for sexual content, at between three and just under 17 percent, is also likely lower than generally assumed.
    Language
    e
  7. Aberer, K. et al.: ¬The Semantic Web : 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 : proceedings (2007) 0.00
    0.0012095909 = product of:
      0.0024191819 = sum of:
        0.0024191819 = product of:
          0.0048383637 = sum of:
            0.0048383637 = weight(_text_:e in 2477) [ClassicSimilarity], result of:
              0.0048383637 = score(doc=2477,freq=2.0), product of:
                0.07616667 = queryWeight, product of:
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.052990302 = queryNorm
                0.063523374 = fieldWeight in 2477, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.43737 = idf(docFreq=28552, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2477)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Language
    e
