Search (3319 results, page 2 of 166)

  • year_i:[2000 TO 2010}
  1. Russell, B.M.; Spillane, J.L.: Using the Web for name authority work (2001) 0.08
    0.07726393 = product of:
      0.15452786 = sum of:
        0.15452786 = sum of:
          0.10511739 = weight(_text_:web in 167) [ClassicSimilarity], result of:
            0.10511739 = score(doc=167,freq=12.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.6182494 = fieldWeight in 167, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=167)
          0.049410466 = weight(_text_:22 in 167) [ClassicSimilarity], result of:
            0.049410466 = score(doc=167,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 167, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=167)
      0.5 = coord(1/2)
    
    Abstract
    While many catalogers are using the Web to find the information they need to perform authority work quickly and accurately, the full potential of the Web to assist catalogers in name authority work has yet to be realized. The ever-growing nature of the Web means that available information for creating personal name, corporate name, and other types of headings will increase. In this article, we examine ways in which simple and effective Web searching can save catalogers time and money in the process of authority work. In addition, questions involving evaluating authority information found on the Web are explored.
    Date
    10. 9.2000 17:38:22
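    The indented breakdown under each hit is Lucene's ClassicSimilarity explain output (tf, idf, queryNorm, fieldNorm, coord). As a quick check, the figures for entry 1 can be reproduced with a minimal Python sketch; the idf formula 1 + ln(maxDocs/(docFreq+1)) is the classic Lucene default and an assumption here, the function name is made up, and the constants are copied from the listing above.

      import math

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          """One term's contribution, mirroring the ClassicSimilarity explain lines."""
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # reproduces 3.2635105 / 3.5018296
          query_weight = idf * query_norm                     # "queryWeight"
          field_weight = math.sqrt(freq) * idf * field_norm   # tf * idf * fieldNorm = "fieldWeight"
          return query_weight * field_weight

      query_norm, field_norm = 0.052098576, 0.0546875
      web = term_score(12, 4597, 44218, query_norm, field_norm)   # ~0.1051, the "web" weight
      t22 = term_score(2, 3622, 44218, query_norm, field_norm)    # ~0.0494, the "22" weight
      print(0.5 * (web + t22))   # coord(1/2) * sum ~ 0.0773, the 0.07726393 listed for entry 1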
  2. Schweibenz, W.; Thissen, F.: Qualität im Web : Benutzerfreundliche Webseiten durch Usability Evaluation (2003) 0.08
    0.077005595 = product of:
      0.15401119 = sum of:
        0.15401119 = sum of:
          0.11871799 = weight(_text_:web in 767) [ClassicSimilarity], result of:
            0.11871799 = score(doc=767,freq=30.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.69824153 = fieldWeight in 767, product of:
                5.477226 = tf(freq=30.0), with freq of:
                  30.0 = termFreq=30.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=767)
          0.03529319 = weight(_text_:22 in 767) [ClassicSimilarity], result of:
            0.03529319 = score(doc=767,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=767)
      0.5 = coord(1/2)
    
    Abstract
    For Web sites, as for all interactive applications from simple kiosk systems to complex software, usability is of central importance. Yet sensible use of information offerings on the World Wide Web is often made unnecessarily difficult by "cool design" that neglects core aspects of usability. Usability evaluation can improve the usability of Web sites and thus their acceptance among users. The goal is to design appealing, user-friendly Web offerings that allow users an effective and efficient dialogue. The book offers a practice-oriented introduction to Web usability evaluation and describes how its various methods are applied.
    Classification
    ST 252 Informatik / Monographien / Software und -entwicklung / Web-Programmierung, allgemein
    Content
    Introduction.- Fundamentals of Web design.- Usability and usability engineering.- Usability engineering and the Web.- Methodological questions of usability evaluation.- Expert-oriented methods.- User-oriented methods.- Search-engine-oriented methods.- Bibliography.- Glossary.- Index.- Checklists.
    Date
    22. 3.2008 14:24:08
    RSWK
    Web-Seite / Gestaltung / Benutzerorientierung / Benutzerfreundlichkeit
    World Wide Web / Web Site / Gebrauchswert / Kundenorientierung / Kommunikationsdesign (GBV)
    Web-Seite / Qualität (BVB)
    RVK
    ST 252 Informatik / Monographien / Software und -entwicklung / Web-Programmierung, allgemein
    Subject
    Web-Seite / Gestaltung / Benutzerorientierung / Benutzerfreundlichkeit
    World Wide Web / Web Site / Gebrauchswert / Kundenorientierung / Kommunikationsdesign (GBV)
    Web-Seite / Qualität (BVB)
  3. Stojanovic, N.: Ontology-based Information Retrieval : methods and tools for cooperative query answering (2005) 0.07
    0.0725041 = sum of:
      0.055164233 = product of:
        0.1654927 = sum of:
          0.1654927 = weight(_text_:3a in 701) [ClassicSimilarity], result of:
            0.1654927 = score(doc=701,freq=2.0), product of:
              0.4416923 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.052098576 = queryNorm
              0.3746787 = fieldWeight in 701, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.33333334 = coord(1/3)
      0.017339872 = product of:
        0.034679744 = sum of:
          0.034679744 = weight(_text_:web in 701) [ClassicSimilarity], result of:
            0.034679744 = score(doc=701,freq=4.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.2039694 = fieldWeight in 701, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=701)
        0.5 = coord(1/2)
    
    Abstract
    With the explosion of possibilities for ubiquitous content production, the information overload problem has reached a level of complexity that can no longer be managed by traditional modelling approaches. Because of their purely syntactic nature, traditional information retrieval approaches have not succeeded in treating content itself (i.e. its meaning, and not its representation). This makes the results of a retrieval process of very limited use for the user's task at hand. In the last ten years ontologies have evolved from an interesting conceptualisation paradigm into a very promising (semantic) modelling technology, especially in the context of the Semantic Web. From the information retrieval point of view, ontologies enable a machine-understandable form of content description, such that the retrieval process can be driven by the meaning of the content. However, the highly ambiguous nature of the retrieval process, in which a user, unfamiliar with the underlying repository and/or query syntax, merely approximates his information need in a query, makes it necessary to involve the user more actively in the retrieval process in order to close the gap between the meaning of the content and the meaning of the user's query (i.e. his information need). This thesis lays the foundation for such an ontology-based interactive retrieval process, in which the retrieval system interacts with the user in order to interpret the meaning of his query conceptually, while the underlying domain ontology drives the conceptualisation process. In that way the retrieval process evolves from a query evaluation process into a highly interactive cooperation between the user and the retrieval system, in which the system tries to anticipate the user's information need and to deliver the relevant content proactively. Moreover, the notion of content relevance for a user's query evolves from a content-dependent artefact into a multidimensional, context-dependent structure strongly influenced by the user's preferences. This cooperation process is realized as the so-called Librarian Agent Query Refinement Process. In order to clarify the impact of an ontology on the retrieval process (regarding its complexity and quality), a set of methods and tools for different levels of content and query formalisation is developed, ranging from pure ontology-based inferencing to keyword-based querying in which semantics automatically emerges from the results. Our evaluation studies have shown that the ability to conceptualize a user's information need in the right manner and to interpret the retrieval results accordingly is a key issue in realizing much more meaningful information retrieval systems.
    Content
    Vgl.: http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/1627.
    Theme
    Semantic Web
  4. Hsu, C.-N.; Chang, C.-H.; Hsieh, C.-H.; Lu, J.-J.; Chang, C.-C.: Reconfigurable Web wrapper agents for biological information integration (2005) 0.07
    0.0707389 = product of:
      0.1414778 = sum of:
        0.1414778 = sum of:
          0.106184594 = weight(_text_:web in 5263) [ClassicSimilarity], result of:
            0.106184594 = score(doc=5263,freq=24.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.6245262 = fieldWeight in 5263, product of:
                4.8989797 = tf(freq=24.0), with freq of:
                  24.0 = termFreq=24.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5263)
          0.03529319 = weight(_text_:22 in 5263) [ClassicSimilarity], result of:
            0.03529319 = score(doc=5263,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 5263, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5263)
      0.5 = coord(1/2)
    
    Abstract
    A variety of biological data is transferred and exchanged in overwhelming volumes on the World Wide Web. How to rapidly capture, utilize, and integrate the information on the Internet to discover valuable biological knowledge is one of the most critical issues in bioinformatics. Many information integration systems have been proposed for integrating biological data. These systems usually rely on an intermediate software layer called wrappers to access connected information sources. Wrapper construction for Web data sources is often specially hand coded to accommodate the differences between each Web site. However, programming a Web wrapper requires substantial programming skill, and is time-consuming and hard to maintain. In this article we provide a solution for rapidly building software agents that can serve as Web wrappers for biological information integration. We define an XML-based language called Web Navigation Description Language (WNDL), to model a Web-browsing session. A WNDL script describes how to locate the data, extract the data, and combine the data. By executing different WNDL scripts, we can automate virtually all types of Web-browsing sessions. We also describe IEPAD (Information Extraction Based on Pattern Discovery), a data extractor based on pattern discovery techniques. IEPAD allows our software agents to automatically discover the extraction rules to extract the contents of a structurally formatted Web page. With a programming-by-example authoring tool, a user can generate a complete Web wrapper agent by browsing the target Web sites. We built a variety of biological applications to demonstrate the feasibility of our approach.
    Date
    22. 7.2006 14:36:42
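    The abstract above treats hand-coded wrappers as the baseline that WNDL and IEPAD improve on. Below is a minimal sketch of such a hand-written wrapper; the URL handling, the record layout, and the field names are invented for illustration, and IEPAD's actual contribution (discovering the extraction pattern automatically from repeated structure) is exactly what this sketch leaves out.

      import re
      import urllib.request

      # One hand-written extraction rule for a hypothetical page layout. The point of
      # IEPAD is to discover patterns like ROW automatically instead of coding them.
      ROW = re.compile(r"<tr><td>(?P<gene>[^<]+)</td><td>(?P<organism>[^<]+)</td></tr>")

      def wrap(url: str) -> list[dict]:
          """Fetch a page and return each matching table row as a record."""
          html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
          return [m.groupdict() for m in ROW.finditer(html)]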
  5. Synak, M.; Dabrowski, M.; Kruk, S.R.: Semantic Web and ontologies (2009) 0.07
    0.070708394 = product of:
      0.14141679 = sum of:
        0.14141679 = sum of:
          0.08494768 = weight(_text_:web in 3376) [ClassicSimilarity], result of:
            0.08494768 = score(doc=3376,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.49962097 = fieldWeight in 3376, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0625 = fieldNorm(doc=3376)
          0.056469105 = weight(_text_:22 in 3376) [ClassicSimilarity], result of:
            0.056469105 = score(doc=3376,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.30952093 = fieldWeight in 3376, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=3376)
      0.5 = coord(1/2)
    
    Abstract
    This chapter presents ontologies and their role in the creation of the Semantic Web. Ontologies hold special interest because they are very closely related to the way we understand the world. They provide common understanding, the very first step to successful communication. In the following sections, we will present ontologies and how they are created and used. We will describe available tools for specifying and working with ontologies.
    Date
    31. 7.2010 16:58:22
    Theme
    Semantic Web
  6. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.07
    0.06983581 = product of:
      0.13967162 = sum of:
        0.13967162 = sum of:
          0.0973198 = weight(_text_:web in 2556) [ClassicSimilarity], result of:
            0.0973198 = score(doc=2556,freq=14.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.57238775 = fieldWeight in 2556, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
          0.042351827 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
            0.042351827 = score(doc=2556,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.23214069 = fieldWeight in 2556, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=2556)
      0.5 = coord(1/2)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Theme
    Semantic Web
  7. Mundle, K.; Huie, H.; Bangalore, N.S.: ARL Library Catalog Department Web sites : an evaluative study (2006) 0.07
    0.06847861 = product of:
      0.13695721 = sum of:
        0.13695721 = sum of:
          0.10166402 = weight(_text_:web in 771) [ClassicSimilarity], result of:
            0.10166402 = score(doc=771,freq=22.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.59793836 = fieldWeight in 771, product of:
                4.690416 = tf(freq=22.0), with freq of:
                  22.0 = termFreq=22.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=771)
          0.03529319 = weight(_text_:22 in 771) [ClassicSimilarity], result of:
            0.03529319 = score(doc=771,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 771, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=771)
      0.5 = coord(1/2)
    
    Abstract
    User-friendly and content-rich Web sites are indispensable for any knowledge-based organization. Web site evaluation studies point to ways to improve the efficiency and usability of Web sites. Library catalog or technical services department Web sites have proliferated in the past few years, but there is no systematic and accepted method that evaluates the performance of these Web sites. An earlier study by Mundle, Zhao, and Bangalore evaluated catalog department Web sites within the consortium of the Committee on Institutional Cooperation (CIC) libraries, proposed a model to assess these Web sites, and recommended desirable features for them. The present study was undertaken to test the model further and to assess the recommended features. The study evaluated the catalog department Web sites of Association of Research Libraries members. It validated the model proposed, and confirmed the use of the performance index (PI) as an objective measure to assess the usability or workability of a catalog department Web site. The model advocates using a PI of 1.5 as the benchmark for catalog department Web site evaluation by employing the study tool and scoring method suggested in this paper.
    Date
    10. 9.2000 17:38:22
  8. Ferris, A.M.: If you buy it, will they use it? : a case study on the use of Classification web (2006) 0.07
    0.06761923 = product of:
      0.13523845 = sum of:
        0.13523845 = sum of:
          0.08582799 = weight(_text_:web in 88) [ClassicSimilarity], result of:
            0.08582799 = score(doc=88,freq=8.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.50479853 = fieldWeight in 88, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=88)
          0.049410466 = weight(_text_:22 in 88) [ClassicSimilarity], result of:
            0.049410466 = score(doc=88,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 88, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=88)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents a study conducted at the University of Colorado at Boulder (CU-Boulder) to assess the extent to which its catalogers were using Classification Web (Class Web), the subscription-based, online cataloging documentation resource provided by the Library of Congress. In addition, this paper will explore assumptions made by management regarding CU-Boulder catalogers' use of the product, possible reasons for the lower-than-expected use, and recommendations for promoting a more efficient and cost-effective use of Class Web at other institutions similar to CU-Boulder.
    Date
    10. 9.2000 17:38:22
  9. Malmsten, M.: Making a library catalogue part of the Semantic Web (2008) 0.07
    0.06761923 = product of:
      0.13523845 = sum of:
        0.13523845 = sum of:
          0.08582799 = weight(_text_:web in 2640) [ClassicSimilarity], result of:
            0.08582799 = score(doc=2640,freq=8.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.50479853 = fieldWeight in 2640, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2640)
          0.049410466 = weight(_text_:22 in 2640) [ClassicSimilarity], result of:
            0.049410466 = score(doc=2640,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 2640, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2640)
      0.5 = coord(1/2)
    
    Abstract
    Library catalogues contain an enormous amount of structured, high-quality data, however, this data is generally not made available to semantic web applications. In this paper we describe the tools and techniques used to make the Swedish Union Catalogue (LIBRIS) part of the Semantic Web and Linked Data. The focus is on links to and between resources and the mechanisms used to make data available, rather than perfect description of the individual resources. We also present a method of creating links between records of the same work.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
    Theme
    Semantic Web
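    As a rough illustration of the Linked Data pattern the abstract describes (publishing records and linking them to external resources rather than re-describing everything locally), here is a minimal rdflib sketch; the namespace, record identifier, class URI, and vocabulary choices are invented and do not reflect LIBRIS's actual URI scheme.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DC, OWL, RDF

      BIB = Namespace("http://libris.example/bib/")   # hypothetical namespace

      g = Graph()
      record = BIB["1234567"]                          # made-up record identifier
      g.add((record, RDF.type, URIRef("http://purl.org/ontology/bibo/Book")))
      g.add((record, DC.title, Literal("Exempelboken", lang="sv")))
      # Link to an external description of the same work instead of duplicating it.
      g.add((record, OWL.sameAs, URIRef("http://example.org/work/42")))
      print(g.serialize(format="turtle"))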
  10. Agosto, D.E.: Bounded rationality and satisficing in young people's Web-based decision making (2002) 0.07
    0.06622623 = product of:
      0.13245246 = sum of:
        0.13245246 = sum of:
          0.09010062 = weight(_text_:web in 177) [ClassicSimilarity], result of:
            0.09010062 = score(doc=177,freq=12.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.5299281 = fieldWeight in 177, product of:
                3.4641016 = tf(freq=12.0), with freq of:
                  12.0 = termFreq=12.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.046875 = fieldNorm(doc=177)
          0.042351827 = weight(_text_:22 in 177) [ClassicSimilarity], result of:
            0.042351827 = score(doc=177,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.23214069 = fieldWeight in 177, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=177)
      0.5 = coord(1/2)
    
    Abstract
    This study investigated Simon's behavioral decisionmaking theories of bounded rationality and satisficing in relation to young people's decision making in the World Wide Web, and considered the role of personal preferences in Web-based decisions. It employed a qualitative research methodology involving group interviews with 22 adolescent females. Data analysis took the form of iterative pattern coding using QSR NUD*IST Vivo qualitative data analysis software. Data analysis revealed that the study participants did operate within the limits of bounded rationality. These limits took the form of time constraints, information overload, and physical constraints. Data analysis also uncovered two major satisficing behaviors-reduction and termination. Personal preference was found to play a major role in Web site evaluation in the areas of graphic/multimedia and subject content preferences. This study has related implications for Web site designers and for adult intermediaries who work with young people and the Web
  11. Herrera-Viedma, E.; Pasi, G.; Lopez-Herrera, A.G.; Porcel, C.: Evaluating the information quality of Web sites : a methodology based on fuzzy computing with words (2006) 0.07
    0.06611301 = product of:
      0.13222602 = sum of:
        0.13222602 = sum of:
          0.09693283 = weight(_text_:web in 5286) [ClassicSimilarity], result of:
            0.09693283 = score(doc=5286,freq=20.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.5701118 = fieldWeight in 5286, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5286)
          0.03529319 = weight(_text_:22 in 5286) [ClassicSimilarity], result of:
            0.03529319 = score(doc=5286,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 5286, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5286)
      0.5 = coord(1/2)
    
    Abstract
    An evaluation methodology based on fuzzy computing with words aimed at measuring the information quality of Web sites containing documents is presented. This methodology is qualitative and user oriented because it generates linguistic recommendations on the information quality of the content-based Web sites based on users' perceptions. It is composed of two main components, an evaluation scheme to analyze the information quality of Web sites and a measurement method to generate the linguistic recommendations. The evaluation scheme is based on both technical criteria related to the Web site structure and criteria related to the content of information on the Web sites. It is user driven because the chosen criteria are easily understandable by the users, in such a way that Web visitors can assess them by means of linguistic evaluation judgments. The measurement method is user centered because it generates linguistic recommendations of the Web sites based on the visitors' linguistic evaluation judgments. To combine the linguistic evaluation judgments we introduce two new majority guided linguistic aggregation operators, the Majority guided Linguistic Induced Ordered Weighted Averaging (MLIOWA) and weighted MLIOWA operators, which generate the linguistic recommendations according to the majority of the evaluation judgments provided by different visitors. The use of this methodology could improve tasks such as information filtering and evaluation on the World Wide Web.
    Date
    22. 7.2006 17:05:46
    Footnote
    Contribution in a Special Topic Section on Soft Approaches to Information Retrieval and Information Access on the Web
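    The MLIOWA operators mentioned in the abstract are majority-guided refinements of ordered weighted averaging (OWA). The sketch below shows only the plain OWA step over linguistic judgment indices, with an invented five-label scale and made-up weights, to indicate the kind of aggregation involved; it is not the paper's operator.

      LABELS = ["none", "low", "medium", "high", "perfect"]   # illustrative 5-label scale

      def owa(indices, weights):
          """Order the judgments (largest first), take the weighted sum, map back to a label."""
          ordered = sorted(indices, reverse=True)
          return LABELS[round(sum(w * b for w, b in zip(weights, ordered)))]

      # Four visitors rate one site's content quality on the linguistic scale.
      judgments = [LABELS.index(l) for l in ["high", "medium", "high", "low"]]
      print(owa(judgments, [0.1, 0.4, 0.4, 0.1]))   # -> "medium" on this toy data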
  12. Kousha, K.; Thelwall, M.: How is science cited on the Web? : a classification of google unique Web citations (2007) 0.07
    0.06611301 = product of:
      0.13222602 = sum of:
        0.13222602 = sum of:
          0.09693283 = weight(_text_:web in 586) [ClassicSimilarity], result of:
            0.09693283 = score(doc=586,freq=20.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.5701118 = fieldWeight in 586, product of:
                4.472136 = tf(freq=20.0), with freq of:
                  20.0 = termFreq=20.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=586)
          0.03529319 = weight(_text_:22 in 586) [ClassicSimilarity], result of:
            0.03529319 = score(doc=586,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 586, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=586)
      0.5 = coord(1/2)
    
    Abstract
    Although the analysis of citations in the scholarly literature is now an established and relatively well understood part of information science, not enough is known about citations that can be found on the Web. In particular, are there new Web types, and if so, are these trivial or potentially useful for studying or evaluating research communication? We sought evidence based upon a sample of 1,577 Web citations of the URLs or titles of research articles in 64 open-access journals from biology, physics, chemistry, and computing. Only 25% represented intellectual impact, from references of Web documents (23%) and other informal scholarly sources (2%). Many of the Web/URL citations were created for general or subject-specific navigation (45%) or for self-publicity (22%). Additional analyses revealed significant disciplinary differences in the types of Google unique Web/URL citations as well as some characteristics of scientific open-access publishing on the Web. We conclude that the Web provides access to a new and different type of citation information, one that may therefore enable us to measure different aspects of research, and the research process in particular; but to obtain good information, the different types should be separated.
  13. Eggeling, T.; Kroschel, A.: Alles finden im Web (2000) 0.07
    0.06594604 = product of:
      0.13189209 = sum of:
        0.13189209 = sum of:
          0.061305705 = weight(_text_:web in 4884) [ClassicSimilarity], result of:
            0.061305705 = score(doc=4884,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.36057037 = fieldWeight in 4884, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.078125 = fieldNorm(doc=4884)
          0.07058638 = weight(_text_:22 in 4884) [ClassicSimilarity], result of:
            0.07058638 = score(doc=4884,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.38690117 = fieldWeight in 4884, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=4884)
      0.5 = coord(1/2)
    
    Date
    9. 7.2000 14:06:22
  14. Miedtke, E.: Ileks@oeb : Content - Manpower - Vision (2001) 0.07
    0.06594604 = product of:
      0.13189209 = sum of:
        0.13189209 = sum of:
          0.061305705 = weight(_text_:web in 5796) [ClassicSimilarity], result of:
            0.061305705 = score(doc=5796,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.36057037 = fieldWeight in 5796, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.078125 = fieldNorm(doc=5796)
          0.07058638 = weight(_text_:22 in 5796) [ClassicSimilarity], result of:
            0.07058638 = score(doc=5796,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.38690117 = fieldWeight in 5796, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.078125 = fieldNorm(doc=5796)
      0.5 = coord(1/2)
    
    Abstract
    Ileks has been running in production for a good year now, and new partners have been won as well. The goal is clear: public libraries are to build a shared Web portal that offers vetted, edited Internet resources on everyday questions. So far, however, too few subject areas are being covered; further partners willing to cooperate are therefore wanted.
    Date
    5. 5.2001 9:22:47
  15. Degkwitz, A.: Bologna, University 2.0 : Akademisches Leben als Web-Version? (2008) 0.07
    0.065283254 = product of:
      0.13056651 = sum of:
        0.13056651 = sum of:
          0.060689554 = weight(_text_:web in 1423) [ClassicSimilarity], result of:
            0.060689554 = score(doc=1423,freq=4.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.35694647 = fieldWeight in 1423, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1423)
          0.069876954 = weight(_text_:22 in 1423) [ClassicSimilarity], result of:
            0.069876954 = score(doc=1423,freq=4.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.38301262 = fieldWeight in 1423, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1423)
      0.5 = coord(1/2)
    
    Abstract
    The changes set in motion by the Bologna Process have considerable consequences for the further development of German universities and higher-education institutions. This also affects the information infrastructure, for which the Bologna Process creates new requirements and challenges. The economisation and technologisation of business and support processes are major driving forces here, and they threaten to crowd out in particular the social dimension of academic life and the values bound up with it: even in the age of Bologna, the 'university' is not a Web version! Libraries, media and computing centres, and administrations are therefore well advised to develop, on the platform of information and communication technologies and with the inclusion of business management methods, a 'vision of quality' in the tradition of the European university.
    Date
    22. 2.2008 13:28:00
    Source
    Zeitschrift für Bibliothekswesen und Bibliographie. 55(2008) H.1, S.18-22
  16. Tennant, R.: ¬A bibliographic metadata infrastructure for the twenty-first century (2004) 0.06
    0.06445197 = product of:
      0.12890394 = sum of:
        0.12890394 = sum of:
          0.049044564 = weight(_text_:web in 2845) [ClassicSimilarity], result of:
            0.049044564 = score(doc=2845,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.2884563 = fieldWeight in 2845, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0625 = fieldNorm(doc=2845)
          0.079859376 = weight(_text_:22 in 2845) [ClassicSimilarity], result of:
            0.079859376 = score(doc=2845,freq=4.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.4377287 = fieldWeight in 2845, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2845)
      0.5 = coord(1/2)
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  17. Yang, C.C.; Liu, N.: Web site topic-hierarchy generation based on link structure (2009) 0.06
    0.06362587 = product of:
      0.12725174 = sum of:
        0.12725174 = sum of:
          0.09195855 = weight(_text_:web in 2738) [ClassicSimilarity], result of:
            0.09195855 = score(doc=2738,freq=18.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.5408555 = fieldWeight in 2738, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2738)
          0.03529319 = weight(_text_:22 in 2738) [ClassicSimilarity], result of:
            0.03529319 = score(doc=2738,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 2738, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2738)
      0.5 = coord(1/2)
    
    Abstract
    Navigating through hyperlinks within a Web site to look for information from one of its Web pages without the support of a site map can be inefficient and ineffective. Although the content of a Web site is usually organized with an inherent structure like a topic hierarchy, which is a directed tree rooted at a Web site's homepage whose vertices and edges correspond to Web pages and hyperlinks, such a topic hierarchy is not always available to the user. In this work, we studied the problem of automatic generation of Web sites' topic hierarchies. We modeled a Web site's link structure as a weighted directed graph and proposed methods for estimating edge weights based on eight types of features and three learning algorithms, namely decision trees, naïve Bayes classifiers, and logistic regression. Three graph algorithms, namely breadth-first search, shortest-path search, and directed minimum-spanning tree, were adapted to generate the topic hierarchy based on the graph model. We have tested the model and algorithms on real Web sites. It is found that the directed minimum-spanning tree algorithm with the decision tree as the weight learning algorithm achieves the highest performance with an average accuracy of 91.9%.
    Date
    22. 3.2009 12:51:47
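    The abstract names three graph algorithms for deriving a topic hierarchy from a site's link graph. A minimal sketch with networkx is shown below; the pages, links, and edge weights are made up, and the weight-learning step (decision trees, naïve Bayes, logistic regression) is skipped entirely.

      import networkx as nx

      # Toy link graph: nodes are pages, weighted edges are hyperlinks.
      G = nx.DiGraph()
      G.add_weighted_edges_from([
          ("home", "about", 0.2), ("home", "products", 0.1),
          ("products", "widgets", 0.3), ("about", "widgets", 0.9),
      ])

      bfs_hierarchy = nx.bfs_tree(G, "home")                 # breadth-first spanning tree
      mst_hierarchy = nx.minimum_spanning_arborescence(G)    # directed minimum-spanning tree
      print(sorted(bfs_hierarchy.edges()))
      print(sorted(mst_hierarchy.edges()))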
  18. Drabenstott, K.M.: Web search strategies (2000) 0.06
    0.06316184 = product of:
      0.12632369 = sum of:
        0.12632369 = sum of:
          0.09808913 = weight(_text_:web in 1188) [ClassicSimilarity], result of:
            0.09808913 = score(doc=1188,freq=32.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.5769126 = fieldWeight in 1188, product of:
                5.656854 = tf(freq=32.0), with freq of:
                  32.0 = termFreq=32.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=1188)
          0.028234553 = weight(_text_:22 in 1188) [ClassicSimilarity], result of:
            0.028234553 = score(doc=1188,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.15476047 = fieldWeight in 1188, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1188)
      0.5 = coord(1/2)
    
    Abstract
    Surfing the World Wide Web used to be cool, dude, real cool. But things have gotten hot - so hot that finding something useful on the Web is no longer cool. It is suffocating Web searchers in the smoke and debris of mountain-sized lists of hits, decisions about which search engines they should use, whether they will get lost in the dizzying maze of a subject directory, use the right syntax for the search engine at hand, enter keywords that are likely to retrieve hits on the topics they have in mind, or enlist a browser that has sufficient functionality to display the most promising hits. When it comes to Web searching, in a few short years we have gone from the cool image of surfing the Web into the frying pan of searching the Web. We can turn down the heat by rethinking what Web searchers are doing and introduce some order into the chaos. Web search strategies that are tool-based - oriented to specific Web searching tools such as search engines, subject directories, and meta search engines - have been widely promoted, and these strategies are just not working. It is time to dissect what Web searching tools expect from searchers and adjust our search strategies to these new tools. This discussion offers Web searchers help in the form of search strategies that are based on strategies that librarians have been using for a long time to search commercial information retrieval systems like Dialog, NEXIS, Wilsonline, FirstSearch, and Data-Star.
    Content
    "Web searching is different from searching commercial IR systems. We can learn from search strategies recommended for searching IR systems, but most won't be effective for Web searching. Web searchers need strate gies that let search engines do the job they were designed to do. This article presents six new Web searching strategies that do just that."
    Date
    22. 9.1997 19:16:05
  19. Daconta, M.C.; Oberst, L.J.; Smith, K.T.: ¬The Semantic Web : A guide to the future of XML, Web services and knowledge management (2003) 0.06
    0.06316184 = product of:
      0.12632369 = sum of:
        0.12632369 = sum of:
          0.09808913 = weight(_text_:web in 320) [ClassicSimilarity], result of:
            0.09808913 = score(doc=320,freq=32.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.5769126 = fieldWeight in 320, product of:
                5.656854 = tf(freq=32.0), with freq of:
                  32.0 = termFreq=32.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=320)
          0.028234553 = weight(_text_:22 in 320) [ClassicSimilarity], result of:
            0.028234553 = score(doc=320,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.15476047 = fieldWeight in 320, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=320)
      0.5 = coord(1/2)
    
    Abstract
    "The Semantic Web is an extension of the current Web in which information is given well defined meaning, better enabling computers and people to work in cooperation." - Tim Berners Lee, "Scientific American", May 2001. This authoritative guide shows how the "Semantic Web" works technically and how businesses can utilize it to gain a competitive advantage. It explains what taxonomies and ontologies are as well as their importance in constructing the Semantic Web. The companion web site includes further updates as the framework develops and links to related sites.
    Date
    22. 5.2007 10:37:38
    Footnote
    Amazon review: "In the preface the authors describe the book as a strategic guide for managers and developers who want an overview of the Semantic Web and the vision behind it. The book lives up to this claim completely. The first two chapters describe the vision and the possibilities opened up by the techniques presented in the subsequent chapters. Using many practical scenarios (some of which, in my view, still lie some way in the future, but which nicely bring the overall vision to life), the authors very quickly get the reader excited about the technology and eager to learn more. The following chapters cover the techniques at the various semantic layers, from XML as the basis for everything else, through Web services, RDF, taxonomies and ontologies. The authors manage to explain these techniques so briefly and concisely that the reader comes away with at least a picture of the techniques themselves and of their complex interplay. I would also recommend the book to developers, since it offers a very good introduction to many still rather new techniques, with many pointers to further literature. All in all a very successful book which, despite its relatively small size, manages to convey a good overview of this complex subject."
    LCSH
    Semantic Web
    Web site development
    RSWK
    Semantic Web
    Subject
    Semantic Web
    Semantic Web
    Web site development
    Theme
    Semantic Web
  20. Doyle, B.: ¬The classification and evaluation of Content Management Systems (2003) 0.06
    0.0629143 = product of:
      0.1258286 = sum of:
        0.1258286 = sum of:
          0.06935949 = weight(_text_:web in 2871) [ClassicSimilarity], result of:
            0.06935949 = score(doc=2871,freq=4.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.4079388 = fieldWeight in 2871, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0625 = fieldNorm(doc=2871)
          0.056469105 = weight(_text_:22 in 2871) [ClassicSimilarity], result of:
            0.056469105 = score(doc=2871,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.30952093 = fieldWeight in 2871, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=2871)
      0.5 = coord(1/2)
    
    Abstract
    This is a report on how Doyle and others made a faceted classification scheme for content management systems and made it browsable on the web (see CMS Review in Example Web Sites, below). They discuss why they did it, how, their use of OPML and XFML, how they did research to find terms and categories, and they also include their taxonomy. It is interesting to see facets used in a business environment.
    Date
    30. 7.2004 12:22:52
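    As a toy illustration of what faceted browsing over such a scheme looks like in practice, here is a minimal sketch; the facet names, values, and systems are invented and are not taken from Doyle's CMS Review taxonomy.

      # Invented entries and facets, purely for illustration.
      SYSTEMS = [
          {"name": "CMS-A", "licence": "open source", "platform": "PHP", "audience": "enterprise"},
          {"name": "CMS-B", "licence": "commercial", "platform": "Java", "audience": "enterprise"},
          {"name": "CMS-C", "licence": "open source", "platform": "Python", "audience": "small team"},
      ]

      def browse(entries, **facets):
          """Return entries matching every selected facet value (faceted narrowing)."""
          return [e for e in entries if all(e.get(f) == v for f, v in facets.items())]

      print([e["name"] for e in browse(SYSTEMS, licence="open source", audience="enterprise")])
      # -> ['CMS-A']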

Types

  • a 2668
  • m 387
  • el 271
  • s 137
  • x 53
  • b 29
  • i 23
  • r 15
  • n 13
  • p 1
