Search (2034 results, page 2 of 102)

  • language_ss:"e"
  • year_i:[2000 TO 2010}
  1. Tudhope, D.: Knowledge Organization System Services : brief review of NKOS activities and possibility of KOS registries (2007) 0.01
    0.009925528 = product of:
      0.04466488 = sum of:
        0.020761002 = product of:
          0.041522004 = sum of:
            0.041522004 = weight(_text_:web in 100) [ClassicSimilarity], result of:
              0.041522004 = score(doc=100,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.43268442 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
        0.023903877 = product of:
          0.047807753 = sum of:
            0.047807753 = weight(_text_:22 in 100) [ClassicSimilarity], result of:
              0.047807753 = score(doc=100,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.46428138 = fieldWeight in 100, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=100)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    22. 9.2007 15:41:14
    Theme
    Semantic Web
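     The indented score breakdowns attached to each result are Lucene "explain" output for the ClassicSimilarity (TF-IDF) model. As a sanity check, the short Python sketch below reproduces the arithmetic for result 1 (doc=100) from the constants printed above; only the combination logic (tf, idf, norms, coord) is reimplemented, and the result matches the 0.01 shown, up to rounding.

       import math

       # Minimal sketch of Lucene ClassicSimilarity scoring, using the constants
       # from the explain tree for result 1 (doc=100) above.
       def term_score(freq, idf, query_norm, field_norm):
           tf = math.sqrt(freq)                     # 1.4142135 for freq=2.0
           query_weight = idf * query_norm          # idf(t) * queryNorm
           field_weight = tf * idf * field_norm     # tf * idf(t) * fieldNorm(doc)
           return query_weight * field_weight

       QUERY_NORM = 0.02940506
       FIELD_NORM = 0.09375                         # fieldNorm(doc=100)

       web = 0.5 * term_score(2.0, 3.2635105, QUERY_NORM, FIELD_NORM)   # coord(1/2)
       t22 = 0.5 * term_score(2.0, 3.5018296, QUERY_NORM, FIELD_NORM)   # coord(1/2)

       doc_score = (web + t22) * (2.0 / 9.0)        # coord(2/9) over the query clauses
       print(doc_score)                             # ~0.0099255, the 0.01 shown above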
  2. Lavoie, B.F.; O'Neill, E.T.: How "World Wide" Is the Web? : Trends in the Internationalization of Web Sites (2001) 0.01
    0.009863772 = product of:
      0.044386975 = sum of:
        0.024467077 = product of:
          0.048934154 = sum of:
            0.048934154 = weight(_text_:web in 1066) [ClassicSimilarity], result of:
              0.048934154 = score(doc=1066,freq=4.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.5099235 = fieldWeight in 1066, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1066)
          0.5 = coord(1/2)
        0.019919898 = product of:
          0.039839797 = sum of:
            0.039839797 = weight(_text_:22 in 1066) [ClassicSimilarity], result of:
              0.039839797 = score(doc=1066,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.38690117 = fieldWeight in 1066, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1066)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Date
    7.10.2002 9:22:14
  3. Panzer, M.: Cool URIs for the DDC : towards Web-scale accessibility of a large classification system (2008) 0.01
    0.009692724 = product of:
      0.043617256 = sum of:
        0.027681338 = product of:
          0.055362675 = sum of:
            0.055362675 = weight(_text_:web in 2629) [ClassicSimilarity], result of:
              0.055362675 = score(doc=2629,freq=8.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.5769126 = fieldWeight in 2629, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2629)
          0.5 = coord(1/2)
        0.015935918 = product of:
          0.031871837 = sum of:
            0.031871837 = weight(_text_:22 in 2629) [ClassicSimilarity], result of:
              0.031871837 = score(doc=2629,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.30952093 = fieldWeight in 2629, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2629)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
     The report discusses metadata strategies employed and problems encountered during the first step of transforming the DDC into a Web information resource. It focuses on the process of URI design with regard to W3C recommendations and Semantic Web paradigms. Special emphasis is placed on the usefulness of the URIs for RESTful Web services.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
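     To make the URI-design discussion above concrete, here is a small hedged sketch of resolving a classification URI with HTTP content negotiation, the basic mechanism behind RESTful "cool" URIs. The URI and media type below are invented for illustration and are not the DDC's actual namespace.

       import urllib.request

       # Hypothetical classification URI; content negotiation asks for an RDF
       # representation instead of the HTML view of the same class.
       uri = "http://example.org/class/025.431"
       req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
       with urllib.request.urlopen(req) as resp:
           print(resp.status, resp.headers.get("Content-Type"))
           print(resp.read(200))                    # first bytes of the RDF description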
  4. Russell, B.M.; Spillane, J.L.: Using the Web for name authority work (2001) 0.01
    0.009690818 = product of:
      0.043608684 = sum of:
        0.029664757 = product of:
          0.059329513 = sum of:
            0.059329513 = weight(_text_:web in 167) [ClassicSimilarity], result of:
              0.059329513 = score(doc=167,freq=12.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.6182494 = fieldWeight in 167, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=167)
          0.5 = coord(1/2)
        0.013943928 = product of:
          0.027887857 = sum of:
            0.027887857 = weight(_text_:22 in 167) [ClassicSimilarity], result of:
              0.027887857 = score(doc=167,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.2708308 = fieldWeight in 167, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=167)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    While many catalogers are using the Web to find the information they need to perform authority work quickly and accurately, the full potential of the Web to assist catalogers in name authority work has yet to be realized. The ever-growing nature of the Web means that available information for creating personal name, corporate name, and other types of headings will increase. In this article, we examine ways in which simple and effective Web searching can save catalogers time and money in the process of authority work. In addition, questions involving evaluating authority information found on the Web are explored.
    Date
    10. 9.2000 17:38:22
  5. Hsu, C.-N.; Chang, C.-H.; Hsieh, C.-H.; Lu, J.-J.; Chang, C.-C.: Reconfigurable Web wrapper agents for biological information integration (2005) 0.01
    0.008872417 = product of:
      0.039925877 = sum of:
        0.029965928 = product of:
          0.059931856 = sum of:
            0.059931856 = weight(_text_:web in 5263) [ClassicSimilarity], result of:
              0.059931856 = score(doc=5263,freq=24.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.6245262 = fieldWeight in 5263, product of:
                  4.8989797 = tf(freq=24.0), with freq of:
                    24.0 = termFreq=24.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5263)
          0.5 = coord(1/2)
        0.009959949 = product of:
          0.019919898 = sum of:
            0.019919898 = weight(_text_:22 in 5263) [ClassicSimilarity], result of:
              0.019919898 = score(doc=5263,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.19345059 = fieldWeight in 5263, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5263)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    A variety of biological data is transferred and exchanged in overwhelming volumes on the World Wide Web. How to rapidly capture, utilize, and integrate the information on the Internet to discover valuable biological knowledge is one of the most critical issues in bioinformatics. Many information integration systems have been proposed for integrating biological data. These systems usually rely on an intermediate software layer called wrappers to access connected information sources. Wrapper construction for Web data sources is often specially hand coded to accommodate the differences between each Web site. However, programming a Web wrapper requires substantial programming skill, and is time-consuming and hard to maintain. In this article we provide a solution for rapidly building software agents that can serve as Web wrappers for biological information integration. We define an XML-based language called Web Navigation Description Language (WNDL), to model a Web-browsing session. A WNDL script describes how to locate the data, extract the data, and combine the data. By executing different WNDL scripts, we can automate virtually all types of Web-browsing sessions. We also describe IEPAD (Information Extraction Based on Pattern Discovery), a data extractor based on pattern discovery techniques. IEPAD allows our software agents to automatically discover the extraction rules to extract the contents of a structurally formatted Web page. With a programming-by-example authoring tool, a user can generate a complete Web wrapper agent by browsing the target Web sites. We built a variety of biological applications to demonstrate the feasibility of our approach.
    Date
    22. 7.2006 14:36:42
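     As a rough illustration of the wrapper idea in the abstract above - a declarative description of what to fetch and what to extract, executed by a generic engine - here is a tiny Python sketch. It is not WNDL or IEPAD; the URL and extraction pattern are invented, and a real wrapper would also handle sessions, pagination and automatic pattern discovery.

       import re
       import urllib.request

       # A toy "wrapper script": navigation and extraction are data, not code.
       script = {
           "url": "http://example.org/genes?id=BRCA1",              # hypothetical source
           "pattern": r'<td class="accession">([A-Z0-9_.]+)</td>',  # hypothetical rule
       }

       def run_wrapper(s):
           html = urllib.request.urlopen(s["url"]).read().decode("utf-8", "replace")
           return re.findall(s["pattern"], html)                    # extracted field values

       # print(run_wrapper(script))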
  6. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.01
    0.008868592 = product of:
      0.039908662 = sum of:
        0.023972742 = product of:
          0.047945485 = sum of:
            0.047945485 = weight(_text_:web in 3376) [ClassicSimilarity], result of:
              0.047945485 = score(doc=3376,freq=6.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.49962097 = fieldWeight in 3376, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
        0.015935918 = product of:
          0.031871837 = sum of:
            0.031871837 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
              0.031871837 = score(doc=3376,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.30952093 = fieldWeight in 3376, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3376)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
     This chapter presents ontologies and their role in the creation of the Semantic Web. Ontologies hold special interest because they are very closely related to the way we understand the world. They provide common understanding, the very first step towards successful communication. In the following sections, we will present ontologies and how they are created and used. We will also describe available tools for specifying and working with ontologies.
    Date
    31. 7.2010 16:58:22
    Theme
    Semantic Web
  7. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.01
    0.0087591475 = product of:
      0.039416164 = sum of:
        0.027464228 = product of:
          0.054928456 = sum of:
            0.054928456 = weight(_text_:web in 2556) [ClassicSimilarity], result of:
              0.054928456 = score(doc=2556,freq=14.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.57238775 = fieldWeight in 2556, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2556)
          0.5 = coord(1/2)
        0.011951938 = product of:
          0.023903877 = sum of:
            0.023903877 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
              0.023903877 = score(doc=2556,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.23214069 = fieldWeight in 2556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2556)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Theme
    Semantic Web
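     A very small sketch of the faceted idea described above: controlled-vocabulary terms attached to a record as independent descriptive facets rather than as one hierarchy, so retrieval can filter on any facet. Field names and facet values here are invented for illustration.

       # Toy faceted records; "subject" terms stand in for controlled vocabulary.
       records = [
           {"title": "Re-inventing subject access for the semantic web",
            "facets": {"subject": ["Semantic Web", "Subject cataloguing"],
                       "resource_type": ["Journal article"]}},
           {"title": "Cool URIs for the DDC",
            "facets": {"subject": ["Classification", "Semantic Web"],
                       "resource_type": ["Conference paper"]}},
       ]

       def filter_by_facet(recs, facet, value):
           return [r["title"] for r in recs if value in r["facets"].get(facet, [])]

       print(filter_by_facet(records, "subject", "Semantic Web"))   # both titles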
  8. Mundle, K.; Huie, H.; Bangalore, N.S.: ARL Library Catalog Department Web sites : an evaluative study (2006) 0.01
    0.00858892 = product of:
      0.03865014 = sum of:
        0.028690193 = product of:
          0.057380386 = sum of:
            0.057380386 = weight(_text_:web in 771) [ClassicSimilarity], result of:
              0.057380386 = score(doc=771,freq=22.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.59793836 = fieldWeight in 771, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=771)
          0.5 = coord(1/2)
        0.009959949 = product of:
          0.019919898 = sum of:
            0.019919898 = weight(_text_:22 in 771) [ClassicSimilarity], result of:
              0.019919898 = score(doc=771,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.19345059 = fieldWeight in 771, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=771)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    User-friendly and content-rich Web sites are indispensable for any knowledge-based organization. Web site evaluation studies point to ways to improve the efficiency and usability of Web sites. Library catalog or technical services department Web sites have proliferated in the past few years, but there is no systematic and accepted method that evaluates the performance of these Web sites. An earlier study by Mundle, Zhao, and Bangalore evaluated catalog department Web sites within the consortium of the Committee on Institutional Cooperation (CIC) libraries, proposed a model to assess these Web sites, and recommended desirable features for them. The present study was undertaken to test the model further and to assess the recommended features. The study evaluated the catalog department Web sites of Association of Research Libraries members. It validated the model proposed, and confirmed the use of the performance index (PI) as an objective measure to assess the usability or workability of a catalog department Web site. The model advocates using a PI of 1.5 as the benchmark for catalog department Web site evaluation by employing the study tool and scoring method suggested in this paper.
    Date
    10. 9.2000 17:38:22
  9. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.01
    0.008481134 = product of:
      0.0381651 = sum of:
        0.02422117 = product of:
          0.04844234 = sum of:
            0.04844234 = weight(_text_:web in 88) [ClassicSimilarity], result of:
              0.04844234 = score(doc=88,freq=8.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.50479853 = fieldWeight in 88, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.5 = coord(1/2)
        0.013943928 = product of:
          0.027887857 = sum of:
            0.027887857 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
              0.027887857 = score(doc=88,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.2708308 = fieldWeight in 88, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=88)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  10. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.01
    0.008481134 = product of:
      0.0381651 = sum of:
        0.02422117 = product of:
          0.04844234 = sum of:
            0.04844234 = weight(_text_:web in 2640) [ClassicSimilarity], result of:
              0.04844234 = score(doc=2640,freq=8.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.50479853 = fieldWeight in 2640, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
        0.013943928 = product of:
          0.027887857 = sum of:
            0.027887857 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
              0.027887857 = score(doc=2640,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.2708308 = fieldWeight in 2640, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2640)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
     Library catalogues contain an enormous amount of structured, high-quality data; however, this data is generally not made available to Semantic Web applications. In this paper we describe the tools and techniques used to make the Swedish Union Catalogue (LIBRIS) part of the Semantic Web and Linked Data. The focus is on links to and between resources and on the mechanisms used to make data available, rather than on perfect description of the individual resources. We also present a method of creating links between records of the same work.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Theme
    Semantic Web
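     The abstract above emphasises links between records of the same work. Below is a minimal rdflib sketch of that pattern; the namespace, URIs and the instanceOf property are invented placeholders, not LIBRIS's actual vocabulary.

       from rdflib import Graph, Namespace, URIRef

       BIB = Namespace("http://example.org/vocab/")           # hypothetical vocabulary
       g = Graph()
       g.bind("bib", BIB)

       work = URIRef("http://example.org/work/42")
       for rec in ("http://example.org/record/1", "http://example.org/record/2"):
           g.add((URIRef(rec), BIB.instanceOf, work))          # two records, one work

       print(g.serialize(format="turtle"))                     # Linked Data view of the links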
  11. Agosto, D.E.: Bounded rationality and satisficing in young people's Web-based decision making (2002) 0.01
    0.008306416 = product of:
      0.03737887 = sum of:
        0.025426934 = product of:
          0.050853867 = sum of:
            0.050853867 = weight(_text_:web in 177) [ClassicSimilarity], result of:
              0.050853867 = score(doc=177,freq=12.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.5299281 = fieldWeight in 177, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=177)
          0.5 = coord(1/2)
        0.011951938 = product of:
          0.023903877 = sum of:
            0.023903877 = weight(_text_:22 in 177) [ClassicSimilarity], result of:
              0.023903877 = score(doc=177,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.23214069 = fieldWeight in 177, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=177)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
     This study investigated Simon's behavioral decision-making theories of bounded rationality and satisficing in relation to young people's decision making on the World Wide Web, and considered the role of personal preferences in Web-based decisions. It employed a qualitative research methodology involving group interviews with 22 adolescent females. Data analysis took the form of iterative pattern coding using QSR NUD*IST Vivo qualitative data analysis software. Data analysis revealed that the study participants did operate within the limits of bounded rationality. These limits took the form of time constraints, information overload, and physical constraints. Data analysis also uncovered two major satisficing behaviors: reduction and termination. Personal preference was found to play a major role in Web site evaluation in the areas of graphic/multimedia and subject content preferences. This study has related implications for Web site designers and for adult intermediaries who work with young people and the Web.
  12. Peters, I.: Folksonomies : indexing and retrieval in Web 2.0 (2009) 0.01
    0.00829682 = product of:
      0.07467138 = sum of:
        0.07467138 = sum of:
          0.033902578 = weight(_text_:web in 4203) [ClassicSimilarity], result of:
            0.033902578 = score(doc=4203,freq=12.0), product of:
              0.09596372 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.02940506 = queryNorm
              0.35328537 = fieldWeight in 4203, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=4203)
          0.040768802 = weight(_text_:seite in 4203) [ClassicSimilarity], result of:
            0.040768802 = score(doc=4203,freq=2.0), product of:
              0.16469958 = queryWeight, product of:
                5.601063 = idf(docFreq=443, maxDocs=44218)
                0.02940506 = queryNorm
              0.24753433 = fieldWeight in 4203, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.601063 = idf(docFreq=443, maxDocs=44218)
                0.03125 = fieldNorm(doc=4203)
      0.11111111 = coord(1/9)
    
    Abstract
     Collaborative information services in Web 2.0 are used by Internet users not only to produce digital information resources, but also to describe their content with their own keywords, so-called tags. Unlike with library catalogues, users do not have to observe any rules when doing so. The set of user-generated tags within a collaborative information service is called a folksonomy. Folksonomies serve users for re-finding their own resources and for searching for resources created by others. The book deals with collaborative information services, and with folksonomies as a method of knowledge representation and as a tool of information retrieval.
    Footnote
     Also published as: Düsseldorf, Univ., Diss., 2009, under the title: Peters, Isabella: Folksonomies in Wissensrepräsentation und Information Retrieval. Review in: IWP - Information Wissenschaft & Praxis, 61(2010) no.8, p.469-470 (U. Spree): "... Having read through 418 pages of text, the reviewer remains undecided how to judge the conspicuous use of long quotations (on average three quotations longer than four lines of small print per page), especially since the quotations often have a purely illustrative character, or Isabella Peters quotes once more what she has already expressed in her own words. Redundancy and a longer reading time are balanced here against the possibility for the reader to gain a direct impression of the language and style of the cited literature. Clearly unattractive is the practice of ending a thought or an argument with a quotation (e.g. p. 170). In the German original this produces the 'Denglish' texts typical of German academic theses. For everyone interested in knowledge representation, information retrieval and collaborative information services, "Folksonomies : Indexing and Retrieval in Web 2.0" is, despite the small flaws mentioned, strongly recommended for reading and purchase - and, thanks to its almost encyclopaedic character, also suitable as a reference work. Finally, on one point I agree without reservation with de Gruyter's product information: a "foundational work on folksonomies"."
    Object
    Web 2.0
    RSWK
    World Wide Web 2.0
    Subject
    World Wide Web 2.0
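     A minimal sketch of a folksonomy as characterised in the abstract above: uncontrolled, user-assigned tags collected into an inverted index that supports re-finding one's own resources and searching those of others. The taggings below are invented examples.

       from collections import defaultdict

       taggings = [                                   # (user, tag, resource)
           ("anna", "folksonomy", "http://example.org/doc/1"),
           ("anna", "web2.0",     "http://example.org/doc/1"),
           ("ben",  "tagging",    "http://example.org/doc/1"),
           ("ben",  "folksonomy", "http://example.org/doc/2"),
       ]

       tag_index = defaultdict(set)
       for user, tag, resource in taggings:
           tag_index[tag].add(resource)               # no rules: any tag is accepted

       print(sorted(tag_index["folksonomy"]))         # retrieval by tag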
  13. Herrera-Viedma, E.; Pasi, G.; Lopez-Herrera, A.G.; Porcel, C.: Evaluating the information quality of Web sites : a methodology based on fuzzy computing with words (2006) 0.01
    0.008292217 = product of:
      0.037314974 = sum of:
        0.027355025 = product of:
          0.05471005 = sum of:
            0.05471005 = weight(_text_:web in 5286) [ClassicSimilarity], result of:
              0.05471005 = score(doc=5286,freq=20.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.5701118 = fieldWeight in 5286, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5286)
          0.5 = coord(1/2)
        0.009959949 = product of:
          0.019919898 = sum of:
            0.019919898 = weight(_text_:22 in 5286) [ClassicSimilarity], result of:
              0.019919898 = score(doc=5286,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.19345059 = fieldWeight in 5286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5286)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    An evaluation methodology based on fuzzy computing with words aimed at measuring the information quality of Web sites containing documents is presented. This methodology is qualitative and user oriented because it generates linguistic recommendations on the information quality of the content-based Web sites based on users' perceptions. It is composed of two main components, an evaluation scheme to analyze the information quality of Web sites and a measurement method to generate the linguistic recommendations. The evaluation scheme is based on both technical criteria related to the Web site structure and criteria related to the content of information on the Web sites. It is user driven because the chosen criteria are easily understandable by the users, in such a way that Web visitors can assess them by means of linguistic evaluation judgments. The measurement method is user centered because it generates linguistic recommendations of the Web sites based on the visitors' linguistic evaluation judgments. To combine the linguistic evaluation judgments we introduce two new majority guided linguistic aggregation operators, the Majority guided Linguistic Induced Ordered Weighted Averaging (MLIOWA) and weighted MLIOWA operators, which generate the linguistic recommendations according to the majority of the evaluation judgments provided by different visitors. The use of this methodology could improve tasks such as information filtering and evaluation on the World Wide Web.
    Date
    22. 7.2006 17:05:46
    Footnote
     Contribution to a Special Topic Section on Soft Approaches to Information Retrieval and Information Access on the Web
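     To make the aggregation step above a little more tangible, here is a deliberately simplified sketch using a plain OWA (ordered weighted averaging) operator over linguistic judgements mapped to a 0-6 scale. The paper's MLIOWA operators are majority-guided and more elaborate; the labels, weights and judgements below are invented.

       LABELS = ["none", "very low", "low", "medium", "high", "very high", "perfect"]

       def owa(scores, weights):
           ordered = sorted(scores, reverse=True)     # OWA reorders the arguments first
           return sum(w * s for w, s in zip(weights, ordered))

       judgements = ["high", "medium", "high", "very high"]   # four visitors' ratings
       scores = [LABELS.index(j) for j in judgements]
       weights = [0.4, 0.3, 0.2, 0.1]                          # must sum to 1

       print(LABELS[round(owa(scores, weights))])              # aggregated recommendation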
  14. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of google unique Web citations (2007) 0.01
    0.008292217 = product of:
      0.037314974 = sum of:
        0.027355025 = product of:
          0.05471005 = sum of:
            0.05471005 = weight(_text_:web in 586) [ClassicSimilarity], result of:
              0.05471005 = score(doc=586,freq=20.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.5701118 = fieldWeight in 586, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.5 = coord(1/2)
        0.009959949 = product of:
          0.019919898 = sum of:
            0.019919898 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
              0.019919898 = score(doc=586,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.19345059 = fieldWeight in 586, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=586)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new Web types, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
  15. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.01
    0.008264816 = product of:
      0.07438335 = sum of:
        0.07438335 = sum of:
          0.031141505 = weight(_text_:web in 6) [ClassicSimilarity], result of:
            0.031141505 = score(doc=6,freq=18.0), product of:
              0.09596372 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.02940506 = queryNorm
              0.32451332 = fieldWeight in 6, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0234375 = fieldNorm(doc=6)
          0.043241847 = weight(_text_:seite in 6) [ClassicSimilarity], result of:
            0.043241847 = score(doc=6,freq=4.0), product of:
              0.16469958 = queryWeight, product of:
                5.601063 = idf(docFreq=443, maxDocs=44218)
                0.02940506 = queryNorm
              0.26254982 = fieldWeight in 6, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.601063 = idf(docFreq=443, maxDocs=44218)
                0.0234375 = fieldNorm(doc=6)
      0.11111111 = coord(1/9)
    
    Abstract
    Why doesn't your home page appear on the first page of search results, even when you query your own name? How do other Web pages always appear at the top? What creates these powerful rankings? And how? The first book ever about the science of Web page rankings, "Google's PageRank and Beyond" supplies the answers to these and other questions and more. The book serves two very different audiences: the curious science reader and the technical computational reader. The chapters build in mathematical sophistication, so that the first five are accessible to the general academic reader. While other chapters are much more mathematical in nature, each one contains something for both audiences. For example, the authors include entertaining asides such as how search engines make money and how the Great Firewall of China influences research. The book includes an extensive background chapter designed to help readers learn more about the mathematics of search engines, and it contains several MATLAB codes and links to sample Web data sets. The philosophy throughout is to encourage readers to experiment with the ideas and algorithms in the text. Any business seriously interested in improving its rankings in the major search engines can benefit from the clear examples, sample code, and list of resources provided. It includes: many illustrative examples and entertaining asides; MATLAB code; accessible and informal style; and complete and self-contained section for mathematics review.
    Content
    Inhalt: Chapter 1. Introduction to Web Search Engines: 1.1 A Short History of Information Retrieval - 1.2 An Overview of Traditional Information Retrieval - 1.3 Web Information Retrieval Chapter 2. Crawling, Indexing, and Query Processing: 2.1 Crawling - 2.2 The Content Index - 2.3 Query Processing Chapter 3. Ranking Webpages by Popularity: 3.1 The Scene in 1998 - 3.2 Two Theses - 3.3 Query-Independence Chapter 4. The Mathematics of Google's PageRank: 4.1 The Original Summation Formula for PageRank - 4.2 Matrix Representation of the Summation Equations - 4.3 Problems with the Iterative Process - 4.4 A Little Markov Chain Theory - 4.5 Early Adjustments to the Basic Model - 4.6 Computation of the PageRank Vector - 4.7 Theorem and Proof for Spectrum of the Google Matrix Chapter 5. Parameters in the PageRank Model: 5.1 The a Factor - 5.2 The Hyperlink Matrix H - 5.3 The Teleportation Matrix E Chapter 6. The Sensitivity of PageRank; 6.1 Sensitivity with respect to alpha - 6.2 Sensitivity with respect to H - 6.3 Sensitivity with respect to vT - 6.4 Other Analyses of Sensitivity - 6.5 Sensitivity Theorems and Proofs Chapter 7. The PageRank Problem as a Linear System: 7.1 Properties of (I - alphaS) - 7.2 Properties of (I - alphaH) - 7.3 Proof of the PageRank Sparse Linear System Chapter 8. Issues in Large-Scale Implementation of PageRank: 8.1 Storage Issues - 8.2 Convergence Criterion - 8.3 Accuracy - 8.4 Dangling Nodes - 8.5 Back Button Modeling
    Chapter 9. Accelerating the Computation of PageRank: 9.1 An Adaptive Power Method - 9.2 Extrapolation - 9.3 Aggregation - 9.4 Other Numerical Methods Chapter 10. Updating the PageRank Vector: 10.1 The Two Updating Problems and their History - 10.2 Restarting the Power Method - 10.3 Approximate Updating Using Approximate Aggregation - 10.4 Exact Aggregation - 10.5 Exact vs. Approximate Aggregation - 10.6 Updating with Iterative Aggregation - 10.7 Determining the Partition - 10.8 Conclusions Chapter 11. The HITS Method for Ranking Webpages: 11.1 The HITS Algorithm - 11.2 HITS Implementation - 11.3 HITS Convergence - 11.4 HITS Example - 11.5 Strengths and Weaknesses of HITS - 11.6 HITS's Relationship to Bibliometrics - 11.7 Query-Independent HITS - 11.8 Accelerating HITS - 11.9 HITS Sensitivity Chapter 12. Other Link Methods for Ranking Webpages: 12.1 SALSA - 12.2 Hybrid Ranking Methods - 12.3 Rankings based on Traffic Flow Chapter 13. The Future of Web Information Retrieval: 13.1 Spam - 13.2 Personalization - 13.3 Clustering - 13.4 Intelligent Agents - 13.5 Trends and Time-Sensitive Search - 13.6 Privacy and Censorship - 13.7 Library Classification Schemes - 13.8 Data Fusion Chapter 14. Resources for Web Information Retrieval: 14.1 Resources for Getting Started - 14.2 Resources for Serious Study Chapter 15. The Mathematics Guide: 15.1 Linear Algebra - 15.2 Perron-Frobenius Theory - 15.3 Markov Chains - 15.4 Perron Complementation - 15.5 Stochastic Complementation - 15.6 Censoring - 15.7 Aggregation - 15.8 Disaggregation
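     The chapter overview above (especially Chapter 4) centres on the PageRank summation/matrix formulation and its computation by the power method. A minimal power-iteration sketch on a toy four-page graph follows; it assumes a damping factor of 0.85 and omits the dangling-node fix discussed in Chapter 8.4, since every toy page has outlinks.

       import numpy as np

       links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}    # toy graph: page -> outlinks
       n, alpha = 4, 0.85

       H = np.zeros((n, n))
       for i, outs in links.items():
           for j in outs:
               H[i, j] = 1.0 / len(outs)              # row-stochastic hyperlink matrix

       pi = np.full(n, 1.0 / n)                       # start from the uniform vector
       for _ in range(100):
           pi = alpha * (pi @ H) + (1 - alpha) / n    # power method with teleportation

       print(pi.round(3))                             # PageRank vector, sums to 1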
    RSWK
    Google / Web-Seite / Rangstatistik (HEBIS)
    Subject
    Google / Web-Seite / Rangstatistik (HEBIS)
  16. Tennant, R.: A bibliographic metadata infrastructure for the twenty-first century (2004) 0.01
    0.00808388 = product of:
      0.03637746 = sum of:
        0.013840669 = product of:
          0.027681338 = sum of:
            0.027681338 = weight(_text_:web in 2845) [ClassicSimilarity], result of:
              0.027681338 = score(doc=2845,freq=2.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.2884563 = fieldWeight in 2845, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2845)
          0.5 = coord(1/2)
        0.022536792 = product of:
          0.045073584 = sum of:
            0.045073584 = weight(_text_:22 in 2845) [ClassicSimilarity], result of:
              0.045073584 = score(doc=2845,freq=4.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.4377287 = fieldWeight in 2845, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2845)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  17. Yang, C.C.; Liu, N.: Web site topic-hierarchy generation based on link structure (2009) 0.01
    0.0079802675 = product of:
      0.035911202 = sum of:
        0.025951253 = product of:
          0.051902507 = sum of:
            0.051902507 = weight(_text_:web in 2738) [ClassicSimilarity], result of:
              0.051902507 = score(doc=2738,freq=18.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.5408555 = fieldWeight in 2738, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2738)
          0.5 = coord(1/2)
        0.009959949 = product of:
          0.019919898 = sum of:
            0.019919898 = weight(_text_:22 in 2738) [ClassicSimilarity], result of:
              0.019919898 = score(doc=2738,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.19345059 = fieldWeight in 2738, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2738)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    Navigating through hyperlinks within a Web site to look for information from one of its Web pages without the support of a site map can be inefficient and ineffective. Although the content of a Web site is usually organized with an inherent structure like a topic hierarchy, which is a directed tree rooted at a Web site's homepage whose vertices and edges correspond to Web pages and hyperlinks, such a topic hierarchy is not always available to the user. In this work, we studied the problem of automatic generation of Web sites' topic hierarchies. We modeled a Web site's link structure as a weighted directed graph and proposed methods for estimating edge weights based on eight types of features and three learning algorithms, namely decision trees, naïve Bayes classifiers, and logistic regression. Three graph algorithms, namely breadth-first search, shortest-path search, and directed minimum-spanning tree, were adapted to generate the topic hierarchy based on the graph model. We have tested the model and algorithms on real Web sites. It is found that the directed minimum-spanning tree algorithm with the decision tree as the weight learning algorithm achieves the highest performance with an average accuracy of 91.9%.
    Date
    22. 3.2009 12:51:47
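     As a rough illustration of the abstract above, the sketch below applies the simplest of the three hierarchy-generation strategies mentioned: a breadth-first-search spanning tree rooted at the homepage, treated as an (unweighted) topic hierarchy. The site's link structure is invented; the paper additionally learns edge weights and uses shortest-path and directed minimum-spanning-tree variants.

       from collections import deque

       links = {                                      # hypothetical page -> hyperlinks
           "/": ["/products", "/about"],
           "/products": ["/products/a", "/products/b", "/about"],
           "/about": ["/contact"],
           "/products/a": [], "/products/b": [], "/contact": [],
       }

       def bfs_hierarchy(root, graph):
           parent, queue, seen = {}, deque([root]), {root}
           while queue:
               page = queue.popleft()
               for target in graph.get(page, []):
                   if target not in seen:             # first discovery fixes the tree edge
                       seen.add(target)
                       parent[target] = page
                       queue.append(target)
           return parent                              # child page -> parent topic page

       print(bfs_hierarchy("/", links))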
  18. Rösch, H.: Entwicklungsstand und Qualitätsmanagement digitaler Auskunft in Bibliotheken (2007) 0.01
    0.007936741 = product of:
      0.035715334 = sum of:
        0.009786831 = product of:
          0.019573662 = sum of:
            0.019573662 = weight(_text_:web in 400) [ClassicSimilarity], result of:
              0.019573662 = score(doc=400,freq=4.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.2039694 = fieldWeight in 400, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=400)
          0.5 = coord(1/2)
        0.025928505 = product of:
          0.05185701 = sum of:
            0.05185701 = weight(_text_:bewertung in 400) [ClassicSimilarity], result of:
              0.05185701 = score(doc=400,freq=2.0), product of:
                0.18575147 = queryWeight, product of:
                  6.31699 = idf(docFreq=216, maxDocs=44218)
                  0.02940506 = queryNorm
                0.27917415 = fieldWeight in 400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.31699 = idf(docFreq=216, maxDocs=44218)
                  0.03125 = fieldNorm(doc=400)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
     The article first addresses the current significance of digital reference in the information society. This is followed by an overview of the state of development this relatively new service has reached so far, explaining the advantages and disadvantages of its technical and organizational variants. Finally, the focus turns to quality criteria for evaluating and improving digital reference in practice.
    Content
    "Die Ursprünge digitaler Auskunft reichen zurück in die 1980er Jahre. Aus bescheidenen Anfängen hat sich inzwischen eine bibliothekarische Standarddienstleistung entwickelt. Mit dem digitalen Umbruch stellten die Bibliotheken zunächst ihre Kataloge im Web für die Recherche bereit und boten FAQs zur Beantwortung von Standardfragen an. Um den vollen Umfang bibliothekarischer Dienstleistungen im Internet präsentieren zu können, bedurfte es darüber hinaus der Entwicklung eines Äquivalents für die klassische Auskunft im WWW. Die Entwicklung von digitaler Auskunft drängte sich aber nicht nur aus diesem Grund auf; das Web veränderte (und verändert) zudem die Informationskultur der Kunden; diese erwarten schnelleren und einfacheren Service. Alles soll so unmittelbar und so unkompliziert recherchierbar sein, wie man es von Google, Yahoo und anderen gewohnt ist. Außerdem hat die bibliothekarische Auskunft mit "Yahoo Clever" oder "Lycos IQ" kommerzielle Konkurrenten erhalten. Digitale Auskunft musste also als Antwort auf die Herausforderungen der kommerziellen Konkurrenz und der veränderten Benutzergewohnheiten schnell entwickelt werden. Denn nur so konnte und kann rechtzeitig unter Beweis gestellt werden, dass Bibliotheken für viele Auskunftsfälle gegenüber Suchmaschinen und Webkatalogen einen ungeheueren Vorteil besitzen: Die klassische und damit auch die digitale Auskunft zielt nicht darauf, die Fragen zu beantworten, die Benutzer stellen, sondern (idealerweise) darauf, ihnen die Informationen zu verschaffen, die sie tatsächlich benötigen.
  19. Drabenstott, K.M.: Web search strategies (2000) 0.01
    0.007922065 = product of:
      0.035649296 = sum of:
        0.027681338 = product of:
          0.055362675 = sum of:
            0.055362675 = weight(_text_:web in 1188) [ClassicSimilarity], result of:
              0.055362675 = score(doc=1188,freq=32.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.5769126 = fieldWeight in 1188, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1188)
          0.5 = coord(1/2)
        0.007967959 = product of:
          0.015935918 = sum of:
            0.015935918 = weight(_text_:22 in 1188) [ClassicSimilarity], result of:
              0.015935918 = score(doc=1188,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.15476047 = fieldWeight in 1188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1188)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
     Surfing the World Wide Web used to be cool, dude, real cool. But things have gotten hot - so hot that finding something useful on the Web is no longer cool. It is suffocating Web searchers in the smoke and debris of mountain-sized lists of hits, decisions about which search engines they should use, whether they will get lost in the dizzying maze of a subject directory, use the right syntax for the search engine at hand, enter keywords that are likely to retrieve hits on the topics they have in mind, or enlist a browser that has sufficient functionality to display the most promising hits. When it comes to Web searching, in a few short years we have gone from the cool image of surfing the Web into the frying pan of searching the Web. We can turn down the heat by rethinking what Web searchers are doing and introduce some order into the chaos. Web search strategies that are tool-based - oriented to specific Web searching tools such as search engines, subject directories, and meta search engines - have been widely promoted, and these strategies are just not working. It is time to dissect what Web searching tools expect from searchers and adjust our search strategies to these new tools. This discussion offers Web searchers help in the form of search strategies that are based on strategies that librarians have been using for a long time to search commercial information retrieval systems like Dialog, NEXIS, Wilsonline, FirstSearch, and Data-Star.
    Content
    "Web searching is different from searching commercial IR systems. We can learn from search strategies recommended for searching IR systems, but most won't be effective for Web searching. Web searchers need strate gies that let search engines do the job they were designed to do. This article presents six new Web searching strategies that do just that."
    Date
    22. 9.1997 19:16:05
  20. Daconta, M.C.; Oberst, L.J.; Smith, K.T.: The Semantic Web : A guide to the future of XML, Web services and knowledge management (2003) 0.01
    0.007922065 = product of:
      0.035649296 = sum of:
        0.027681338 = product of:
          0.055362675 = sum of:
            0.055362675 = weight(_text_:web in 320) [ClassicSimilarity], result of:
              0.055362675 = score(doc=320,freq=32.0), product of:
                0.09596372 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02940506 = queryNorm
                0.5769126 = fieldWeight in 320, product of:
                  5.656854 = tf(freq=32.0), with freq of:
                    32.0 = termFreq=32.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=320)
          0.5 = coord(1/2)
        0.007967959 = product of:
          0.015935918 = sum of:
            0.015935918 = weight(_text_:22 in 320) [ClassicSimilarity], result of:
              0.015935918 = score(doc=320,freq=2.0), product of:
                0.10297151 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02940506 = queryNorm
                0.15476047 = fieldWeight in 320, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=320)
          0.5 = coord(1/2)
      0.22222222 = coord(2/9)
    
    Abstract
    "The Semantic Web is an extension of the current Web in which information is given well defined meaning, better enabling computers and people to work in cooperation." - Tim Berners Lee, "Scientific American", May 2001. This authoritative guide shows how the "Semantic Web" works technically and how businesses can utilize it to gain a competitive advantage. It explains what taxonomies and ontologies are as well as their importance in constructing the Semantic Web. The companion web site includes further updates as the framework develops and links to related sites.
    Date
    22. 5.2007 10:37:38
    Footnote
     Review (Amazon): "In the preface the authors describe the book as a strategic guide for executives and developers who want to get an overview of the Semantic Web and the vision behind it. The book lives up to this claim completely. The first two chapters describe the vision and the possibilities opened up by using the techniques described in the following chapters. By means of many practical scenarios (some of which, in my estimation, still lie some way in the future, but which nicely bring the grand vision of the whole to life), the authors very quickly manage to get the reader enthusiastic about the technology and wanting to know more about it. The subsequent chapters describe the techniques at the various semantic levels, from XML as the basis for everything else, through Web services, RDF, taxonomies and ontologies. The authors succeed in explaining the techniques so briefly and concisely that afterwards the reader at least has a picture of the techniques themselves and of their complex interplay. I would also recommend the book to developers, since it offers a very good introduction to many still very new techniques, with many references to further literature. All in all a very successful book which, despite its relatively small size, manages to convey a good overview of this complex topic."
    LCSH
    Semantic Web
    Web site development
    RSWK
    Semantic Web
    Subject
    Semantic Web
    Semantic Web
    Web site development
    Theme
    Semantic Web

Types

  • a 1650
  • m 218
  • el 208
  • s 94
  • b 27
  • n 11
  • i 10
  • x 10
  • r 9
  • p 1
