Search (5872 results, page 1 of 294)

  • year_i:[2000 TO 2010}
  1. Brenes, D.F.: Classification schemes (2006) 0.40
    0.39612716 = product of:
      0.5281696 = sum of:
        0.022096837 = weight(_text_:for in 5187) [ClassicSimilarity], result of:
          0.022096837 = score(doc=5187,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.24892932 = fieldWeight in 5187, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.09375 = fieldNorm(doc=5187)
        0.33219436 = weight(_text_:computing in 5187) [ClassicSimilarity], result of:
          0.33219436 = score(doc=5187,freq=6.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            1.2702448 = fieldWeight in 5187, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.09375 = fieldNorm(doc=5187)
        0.17387839 = product of:
          0.34775677 = sum of:
            0.34775677 = weight(_text_:machinery in 5187) [ClassicSimilarity], result of:
              0.34775677 = score(doc=5187,freq=2.0), product of:
                0.35214928 = queryWeight, product of:
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.047278564 = queryNorm
                0.9875266 = fieldWeight in 5187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5187)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     The article reviews the ACM Computing Classification System Web site from the Association for Computing Machinery, available at http://www.acm.org/class/.
    Object
    ACM Computing Classification Systems
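     The indented tree under each hit is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring: per term, tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, queryWeight = idf * queryNorm, and coord() scales a score by the fraction of query clauses matched. A minimal Python sketch, assuming those standard ClassicSimilarity formulas and reusing the statistics printed for hit 1, reproduces the displayed score:

       import math

       def idf(doc_freq, max_docs):
           # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
           return 1.0 + math.log(max_docs / (doc_freq + 1))

       def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
           tf = math.sqrt(freq)                    # tf(freq)
           t_idf = idf(doc_freq, max_docs)         # idf(docFreq, maxDocs)
           query_weight = t_idf * query_norm       # queryWeight
           field_weight = tf * t_idf * field_norm  # fieldWeight
           return query_weight * field_weight      # weight(_text_:term in doc)

       QUERY_NORM, MAX_DOCS, FIELD_NORM = 0.047278564, 44218, 0.09375

       s_for = term_score(2.0, 18385, MAX_DOCS, QUERY_NORM, FIELD_NORM)
       s_computing = term_score(6.0, 475, MAX_DOCS, QUERY_NORM, FIELD_NORM)
       s_machinery = 0.5 * term_score(2.0, 69, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # coord(1/2)

       print(0.75 * (s_for + s_computing + s_machinery))  # coord(3/4) -> 0.39612716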
  2. 16th International World Wide Web Conference, WWW 2007 : May 8 - 12, 2007, Banff, Alberta, Canada (2007) 0.34
    0.33929676 = product of:
      0.45239568 = sum of:
        0.025779642 = weight(_text_:for in 6104) [ClassicSimilarity], result of:
          0.025779642 = score(doc=6104,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 6104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.109375 = fieldNorm(doc=6104)
        0.22375791 = weight(_text_:computing in 6104) [ClassicSimilarity], result of:
          0.22375791 = score(doc=6104,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.85560554 = fieldWeight in 6104, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.109375 = fieldNorm(doc=6104)
        0.20285812 = product of:
          0.40571624 = sum of:
            0.40571624 = weight(_text_:machinery in 6104) [ClassicSimilarity], result of:
              0.40571624 = score(doc=6104,freq=2.0), product of:
                0.35214928 = queryWeight, product of:
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.047278564 = queryNorm
                1.1521144 = fieldWeight in 6104, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6104)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Editor
    Association for Computing Machinery
  3. Walters, S.; Rajashekar, T.B.: Mapping of two schemes of classification for software classification (2005) 0.20
    0.19806358 = product of:
      0.2640848 = sum of:
        0.0110484185 = weight(_text_:for in 5724) [ClassicSimilarity], result of:
          0.0110484185 = score(doc=5724,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.12446466 = fieldWeight in 5724, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=5724)
        0.16609718 = weight(_text_:computing in 5724) [ClassicSimilarity], result of:
          0.16609718 = score(doc=5724,freq=6.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.6351224 = fieldWeight in 5724, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=5724)
        0.08693919 = product of:
          0.17387839 = sum of:
            0.17387839 = weight(_text_:machinery in 5724) [ClassicSimilarity], result of:
              0.17387839 = score(doc=5724,freq=2.0), product of:
                0.35214928 = queryWeight, product of:
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.047278564 = queryNorm
                0.4937633 = fieldWeight in 5724, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.448392 = idf(docFreq=69, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5724)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     SALIS is a repository of open source software along with metadata. It is a pilot project covering the areas of computer networks and information systems. The objective is to demonstrate the usefulness of such repositories to the Indian academic and developer community in making informed decisions when using open source software. To enable organization and retrieval of the information stored in the repository, a modified version of the CCS (Computing Classification Scheme) from the ACM (Association for Computing Machinery) was used. Since a sizeable section of the end-user community was familiar with the USPTO classification scheme, a need was felt to classify the software by the USPTO scheme as well. Instead of classifying by two schemes, it was decided to build a mapping, or concordance, between the two schemes so that the classification process could be simplified. The approach used to derive a concordance between two diverse classification schemes is described.
    Object
    Computing Classification Scheme
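     The abstract above does not publish the concordance itself, so the following is only a hypothetical sketch of the idea: classify each package once under the ACM CCS and derive its USPTO classes through a hand-built mapping table. All class codes below are invented for illustration, not taken from the paper.

       CCS_TO_USPTO = {
           "H.3.3": ["707/706", "707/728"],  # Information Search and Retrieval (codes illustrative)
           "C.2.1": ["370/254"],             # Network Architecture and Design (codes illustrative)
       }

       def uspto_classes(ccs_codes):
           # One classification pass (CCS) yields both notations via the concordance.
           return sorted({u for c in ccs_codes for u in CCS_TO_USPTO.get(c, [])})

       print(uspto_classes(["H.3.3", "C.2.1"]))  # ['370/254', '707/706', '707/728']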
  4. MacFarlane, A.; Robertson, S.E.; McCann, J.A.: Parallel computing for passage retrieval (2004) 0.17
    0.17397097 = product of:
      0.23196128 = sum of:
        0.02551523 = weight(_text_:for in 5108) [ClassicSimilarity], result of:
          0.02551523 = score(doc=5108,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.28743884 = fieldWeight in 5108, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=5108)
        0.18082368 = weight(_text_:computing in 5108) [ClassicSimilarity], result of:
          0.18082368 = score(doc=5108,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.69143367 = fieldWeight in 5108, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0625 = fieldNorm(doc=5108)
        0.025622372 = product of:
          0.051244743 = sum of:
            0.051244743 = weight(_text_:22 in 5108) [ClassicSimilarity], result of:
              0.051244743 = score(doc=5108,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.30952093 = fieldWeight in 5108, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5108)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     In this paper, methods for both speeding up passage processing and examining more passages using parallel computers are explored. The number of passages processed is varied in order to examine the effect on retrieval effectiveness and efficiency. The particular algorithm applied has previously been used to good effect in Okapi experiments at TREC. This algorithm and the mechanism for applying parallel computing to speed up processing are described.
    Date
    20. 1.2007 18:30:22
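     As an illustration of the approach described in the abstract above (not the authors' code), passages can be scored in worker processes and the best one kept. The real experiments used the Okapi passage algorithm; the toy score_passage below is only a stand-in.

       from multiprocessing import Pool

       QUERY = {"parallel", "computing"}

       def score_passage(passage):
           # Toy stand-in for the Okapi passage score: query-term density.
           words = passage.lower().split()
           return sum(words.count(q) for q in QUERY) / (len(words) or 1)

       def best_passage(passages):
           with Pool() as pool:               # one worker per CPU core
               scores = pool.map(score_passage, passages)
           return max(zip(scores, passages))  # highest-scoring passage wins

       if __name__ == "__main__":
           print(best_passage(["parallel computing speeds up passage retrieval",
                               "an unrelated passage about library portals"]))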
  5. Herrera-Viedma, E.; Pasi, G.: Soft approaches to information retrieval and information access on the Web : an introduction to the special topic section (2006) 0.15
    0.1460354 = product of:
      0.19471388 = sum of:
        0.012757615 = weight(_text_:for in 5285) [ClassicSimilarity], result of:
          0.012757615 = score(doc=5285,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14371942 = fieldWeight in 5285, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=5285)
        0.16914508 = weight(_text_:computing in 5285) [ClassicSimilarity], result of:
          0.16914508 = score(doc=5285,freq=14.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.646777 = fieldWeight in 5285, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=5285)
        0.012811186 = product of:
          0.025622372 = sum of:
            0.025622372 = weight(_text_:22 in 5285) [ClassicSimilarity], result of:
              0.025622372 = score(doc=5285,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.15476047 = fieldWeight in 5285, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5285)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     The World Wide Web is a popular and interactive medium used to collect, disseminate, and access an increasingly huge amount of information, which constitutes the mainstay of the so-called information and knowledge society. Because of its spectacular growth, related to both Web resources (pages, sites, and services) and the number of users, the Web is nowadays the main information repository and provides some automatic systems for locating, accessing, and retrieving information. However, an open and crucial question remains: how to provide fast and effective retrieval of the information relevant to specific users' needs. This is a very hard and complex task, since it is pervaded with subjectivity, vagueness, and uncertainty. The expression soft computing refers to techniques and methodologies that work synergistically with the aim of providing flexible information processing tolerant of imprecision, vagueness, partial truth, and approximation. So, soft computing represents a good candidate for designing effective systems for information access and retrieval on the Web. One of the most representative tools of soft computing is fuzzy set theory. This special topic section collects research articles witnessing some recent advances in improving the processes of information access and retrieval on the Web by using soft computing tools, and in particular, by using fuzzy sets and/or integrating them with other soft computing tools. In this introductory article, we first review the problem of Web retrieval and the concept of soft computing technology. We then briefly introduce the articles in this section and conclude by highlighting some future research directions that could benefit from the use of soft computing technologies.
    Date
    22. 7.2006 16:59:33
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.4, S.511-514
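     A minimal sketch of the fuzzy-set idea the introduction above highlights: documents receive graded rather than binary membership in the set of relevant documents, and Boolean AND/OR become min/max over membership degrees. The membership values below are invented for illustration.

       # Degree to which each document is "about" each query term (invented values).
       MEMBERSHIP = {
           "doc1": {"fuzzy": 0.9, "retrieval": 0.4},
           "doc2": {"fuzzy": 0.2, "retrieval": 0.8},
       }

       def fuzzy_and(doc, t1, t2):
           return min(MEMBERSHIP[doc][t1], MEMBERSHIP[doc][t2])  # min t-norm

       def fuzzy_or(doc, t1, t2):
           return max(MEMBERSHIP[doc][t1], MEMBERSHIP[doc][t2])  # max t-conorm

       ranking = sorted(MEMBERSHIP, key=lambda d: fuzzy_and(d, "fuzzy", "retrieval"),
                        reverse=True)
       print(ranking)  # ['doc1', 'doc2']: min(0.9, 0.4) = 0.4 beats min(0.2, 0.8) = 0.2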
  6. Gao, Q.: Visual knowledge representation for three-dimensional computing vision (2000) 0.12
    0.12476878 = product of:
      0.24953756 = sum of:
        0.025779642 = weight(_text_:for in 4673) [ClassicSimilarity], result of:
          0.025779642 = score(doc=4673,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.29041752 = fieldWeight in 4673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.109375 = fieldNorm(doc=4673)
        0.22375791 = weight(_text_:computing in 4673) [ClassicSimilarity], result of:
          0.22375791 = score(doc=4673,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.85560554 = fieldWeight in 4673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.109375 = fieldNorm(doc=4673)
      0.5 = coord(2/4)
    
  7. Soft computing in information retrieval : techniques and applications (2000) 0.12
    0.11809707 = product of:
      0.23619413 = sum of:
        0.014731225 = weight(_text_:for in 4947) [ClassicSimilarity], result of:
          0.014731225 = score(doc=4947,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.16595288 = fieldWeight in 4947, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0625 = fieldNorm(doc=4947)
        0.2214629 = weight(_text_:computing in 4947) [ClassicSimilarity], result of:
          0.2214629 = score(doc=4947,freq=6.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.8468299 = fieldWeight in 4947, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0625 = fieldNorm(doc=4947)
      0.5 = coord(2/4)
    
    Abstract
     Presented are a number of advanced models for the representation and retrieval of information, originating from the application of soft computing techniques to information retrieval. The book is a collection of articles from some of the most outstanding and well-known researchers in the area of information retrieval.
    Series
    Studies in fuzziness and soft computing; vol.50
  8. Hendry, D.G.: Workspaces for search (2006) 0.12
    0.11746826 = product of:
      0.15662435 = sum of:
        0.022325827 = weight(_text_:for in 5297) [ClassicSimilarity], result of:
          0.022325827 = score(doc=5297,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.25150898 = fieldWeight in 5297, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5297)
        0.111878954 = weight(_text_:computing in 5297) [ClassicSimilarity], result of:
          0.111878954 = score(doc=5297,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.42780277 = fieldWeight in 5297, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5297)
        0.022419576 = product of:
          0.04483915 = sum of:
            0.04483915 = weight(_text_:22 in 5297) [ClassicSimilarity], result of:
              0.04483915 = score(doc=5297,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.2708308 = fieldWeight in 5297, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5297)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Progress in search interfaces requires vigorous inquiry into how search features can be embedded into application environments such as those for decision-making, personal information collecting, and designing. Progress can be made by focusing on mid-level descriptions of how search components can draw upon and update workspace content and structure. The immediate goal is to advance our understanding of how to shape and exploit context in search. The long-term goal is to develop an interdisciplinary design resource that enables stakeholders in the computing, social, and information sciences to more richly impact each others' work.
    Date
    22. 7.2006 18:01:11
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.6, S.800-802
  9. Olsen, K.A.: ¬The Internet, the Web, and eBusiness : formalizing applications for the real world (2005) 0.12
    0.11578794 = product of:
      0.15438391 = sum of:
        0.019487578 = weight(_text_:for in 149) [ClassicSimilarity], result of:
          0.019487578 = score(doc=149,freq=56.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21953502 = fieldWeight in 149, product of:
              7.483315 = tf(freq=56.0), with freq of:
                56.0 = termFreq=56.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.12380152 = weight(_text_:computing in 149) [ClassicSimilarity], result of:
          0.12380152 = score(doc=149,freq=30.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.47339228 = fieldWeight in 149, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.015625 = fieldNorm(doc=149)
        0.011094813 = product of:
          0.022189626 = sum of:
            0.022189626 = weight(_text_:22 in 149) [ClassicSimilarity], result of:
              0.022189626 = score(doc=149,freq=6.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.1340265 = fieldWeight in 149, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=149)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Classification
    004.678 22
    DDC
    004.678 22
    Footnote
     Review in: JASIST 57(2006) no.14, S.1979-1980 (J.G. Williams): "The Introduction and Part I of this book present the world of computing with a historical and philosophical overview of computers, computer applications, networks, the World Wide Web, and eBusiness, based on the notion that the real world places constraints on the application of these technologies and that, without a formalized approach, the benefits of these technologies cannot be realized. The concepts of real-world constraints and the need for formalization are used as the cornerstones of a building-block approach to helping the reader understand computing, networking, the World Wide Web, and the applications that use these technologies, as well as all the possibilities that these technologies hold for the future. The author's building-block approach to understanding computing, networking and application building makes the book useful for science, business, and engineering students taking an introductory computing course and for social science students who want to understand more about the social impact of computers, the Internet, and Web technology. It is useful as well for managers and designers of Web and ebusiness applications, and for the general public who are interested in understanding how these technologies may impact their lives, their jobs, and the social context in which they live and work. The book does assume some experience with PCs and the Internet and familiarity with their terminology, but it is not intended for computer science students, although they could benefit from the philosophical basis and the diverse viewpoints presented. The author uses numerous analogies from domains outside the area of computing to illustrate concepts and points of view that make the content understandable as well as interesting to individuals without any in-depth knowledge of computing, networking, software engineering, system design, ebusiness, and Web design. These analogies include interesting real-world events ranging from the beginning of railroads, to Henry Ford's mass-produced automobile, to the European Space Agency's loss of the 7 billion dollar Ariane rocket, to travel agency booking, to medical systems, to banking, to expanding democracy. The book weighs the possibilities offered by the Internet and the Web by presenting numerous examples and analyzing the pros and cons of these technologies for each example. The author shows, in an interesting manner, how the new economy based on the Internet and the Web affects society and business life on a worldwide basis now, how it will affect the future, and how society can take advantage of the opportunities that the Internet and the Web offer.
     The book is organized into six sections or parts with several chapters within each part. Part 1 does a good job of building an understanding of some of the historical aspects of computing and of why formalization is important for building computer-based applications. A distinction is made between formalized and unformalized data, processes, and procedures, which the author cleverly uses to show how the level of formalization of data, processes, and procedures determines the functionality of computer applications. Part 1 also discusses the types of data that can be represented in symbolic form, which is crucial to using computer and networking technology in a virtual environment. This part also discusses the technical and cultural constraints upon computing, networking, and Web technologies with many interesting examples. The cultural constraints discussed range from copyright to privacy issues. Part 1 is critical to understanding the author's point of view and discussions in other sections of the book. The discussion on machine intelligence and natural language processing is particularly well done. Part 2 discusses the fundamental concepts and standards of the Internet and Web. Part 3 introduces the need for formalization to construct ebusiness applications in the business-to-consumer category (B2C). There are many good and interesting examples of these B2C applications, and the associated analyses of them use the concepts introduced in Parts 1 and 2 of the book. Part 4 examines the formalization of business-to-business (B2B) applications and discusses the standards that are needed to transmit data with a high level of formalization. Part 5 is a rather fascinating discussion of future possibilities, and Part 6 presents a concise summary and conclusion. The book covers a wide array of subjects in the computing, networking, and Web areas, and although all of them are presented in an interesting style, some subjects may be more relevant and useful to individuals depending on their background or academic discipline. Part 1 is relevant to all potential readers no matter what their background or academic discipline, but Part 2 is a little more technical; most people with an information technology or computer science background will not find much new here, with the exception of the chapters on "Dynamic Web Pages" and "Embedded Scripts." Other readers will find this section informative and useful for understanding other parts of the book. Part 3 does not offer individuals with a background in computing, networking, or information science much in addition to what they should already know, but the chapters on "Searching" and "Web Presence" may be useful because they present some interesting notions about using the Web. Part 3 gives an overview of B2C applications and is where the author provides examples of the difference between services that are completely symbolic and services that have both a symbolic portion and a physical portion. Part 4 of the book discusses B2B technology once again with many good examples. The chapter on "XML" in Part 4 is not appropriate for readers without a technical background. Part 5 is a teacher's dream because it offers a number of situations that can be used for classroom discussions or case studies independent of background or academic discipline.
     Each chapter provides suggestions for exercises and discussions, which makes the book useful as a textbook. The suggestions in the exercise and discussion section at the end of each chapter are simply delightful to read and provide a basis for some lively discussion and fun exercises by students. These exercises appear to be well thought out and are intended to highlight the content of the chapter. The notes at the end of chapters provide valuable data that help the reader to understand a topic or a reference to an entity that the reader may not know. Chapter 1 on "formalism," chapter 2 on "symbolic data," chapter 3 on "constraints on technology," and chapter 4 on "cultural constraints" are extremely well presented, and every reader needs to read these chapters because they lay the foundation for most of the chapters that follow. The analogies, examples, and points of view presented make for some really interesting reading and lively debate and discussion. These chapters comprise Part 1 of the book and not only provide a foundation for the rest of the book but could be used alone as the basis of a social science course on computing, networking, and the Web. Chapters 5 and 6 on Internet protocols and the development of Web protocols may be more detailed and filled with more acronyms than the average person wants to deal with, but the content is presented with analogies and examples that make it easier to digest. Chapter 7 will capture most readers' attention because it discusses how e-mail works and many of the issues with e-mail, which a majority of people in developed countries have dealt with. Chapter 8 is also one that most people will be interested in reading because it shows how Internet browsers work and the many issues, such as security, associated with these software entities. Chapter 9 discusses the what, why, and how of the World Wide Web, which is a lead-in to chapter 10 on "Searching the Web" and chapter 11 on "Organizing the Web-Portals," two chapters that even technically oriented people should read since they provide information that most people outside of information and library science are not likely to know.
     Chapter 12 on "Web Presence" is a useful discussion of what it means to have a Web site that is indexed by a spider from a major Web search engine. Chapter 13 on "Mobile Computing" is very well done and gives the reader a solid basis of what is involved with mobile computing without overwhelming them with technical details. Chapter 14 discusses the difference between pull technologies and push technologies on the Web in a manner that is understandable to almost anyone who has ever used the Web. Chapters 15, 16, and 17 are for the technically stout of heart; they cover "Dynamic Web Pages," "Embedded Scripts," and "Peer-to-Peer Computing." These three chapters will tend to dampen the spirits of anyone who does not come from a technical background. Chapter 18 on "Symbolic Services-Information Providers" and chapter 19 on "OnLine Symbolic Services-Case Studies" are ideal for class discussion and student assignments, as is chapter 20, "Online Retail Shopping-Physical Items." Chapter 21 presents a number of case studies on the "Technical Constraints" discussed in chapter 3, and chapter 22 presents case studies on the "Cultural Constraints" discussed in chapter 4. These case studies are not only presented in an interesting manner but also focus on situations that most Web users have encountered but never really given much thought to. Chapter 24, "A Better Model?", discusses a combined "formalized/unformalized" model that might make Web applications such as banking and booking travel work better than the current models. This chapter will cause readers to think about the role of formalization and the unformalized processes that are involved in any application. Chapters 24, 25, 26, and 27, which discuss the role of "Data Exchange," "Formalized Data Exchange," "Electronic Data Interchange-EDI," and "XML" in business-to-business applications on the Web, may stress the limits of the nontechnically oriented reader even though the material is presented in a very understandable manner. Chapters 28, 29, 30, and 31 discuss Web services, the automated value chain, electronic market places, and outsourcing, which are of high interest to business students, businessmen, and designers of Web applications and can be skimmed by others who want to understand ebusiness but are not interested in the details. In Part 5, chapters 32, 33, and 34 on "Interfacing with the Web of the Future," "A Disruptive Technology," "Virtual Businesses," and "Semantic Web" were, for me, as someone who teaches courses in IT and develops ebusiness applications, the most interesting chapters in the book because they provided some useful insights into what is likely to happen in the future. The summary in Part 6 of the book is quite well done, and I wish I had read it before I started reading the other parts of the book.
     The book is quite large, with over 400 pages, and covers a myriad of topics, which is probably more than any one course could cover, but an instructor could pick and choose the chapters most appropriate to the course content. The book could be used for multiple courses by selecting the relevant topics. I enjoyed the first-person, rather down-to-earth writing style and the number of examples and analogies that the author presented. I believe most people could relate to the examples and situations presented by the author. As a teacher in Information Technology, I find the discussion questions at the end of the chapters and the case studies to be a valuable resource, as are the end-of-chapter notes. I highly recommend this book for an introductory course that combines computing, networking, the Web, and ebusiness for Business and Social Science students, as well as for an introductory course for students in Information Science, Library Science, and Computer Science. Likewise, I believe IT managers and Web page designers could benefit from selected chapters in the book."
  10. Si, L.E.; O'Brien, A.; Probets, S.: Integration of distributed terminology resources to facilitate subject cross-browsing for library portal systems (2009) 0.11
    0.11058211 = product of:
      0.14744282 = sum of:
        0.01841403 = weight(_text_:for in 3628) [ClassicSimilarity], result of:
          0.01841403 = score(doc=3628,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 3628, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.11301481 = weight(_text_:computing in 3628) [ClassicSimilarity], result of:
          0.11301481 = score(doc=3628,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.43214604 = fieldWeight in 3628, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3628)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 3628) [ClassicSimilarity], result of:
              0.032027967 = score(doc=3628,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 3628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3628)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Purpose: To develop a prototype middleware framework between different terminology resources in order to provide a subject cross-browsing service for library portal systems. Design/methodology/approach: Nine terminology experts were interviewed to collect appropriate knowledge to support the development of a theoretical framework for the research. Based on this, a simplified software-based prototype system was constructed incorporating the knowledge acquired. The prototype involved mappings between the computer science schedule of the Dewey Decimal Classification (which acted as a spine) and two controlled vocabularies, UKAT and the ACM Computing Classification. Subsequently, six further experts in the field were invited to evaluate the prototype system and provide feedback to improve the framework. Findings: The major findings showed that, given the large variety of terminology resources distributed on the Web, the proposed middleware service is essential for integrating the different terminology resources technically and semantically in order to facilitate subject cross-browsing. A set of recommendations is also made, outlining the important approaches and features that support such a cross-browsing middleware service.
    Content
     This paper is a pre-print version presented at the ISKO UK 2009 conference, 22-23 June, prior to peer review and editing. For the published proceedings, see the special issue of the Aslib Proceedings journal.
    Object
    ACM Computing Classification
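     A hypothetical sketch of the spine arrangement described in the abstract above: each vocabulary is mapped to DDC classes, and a cross-browsing request walks from one vocabulary through the shared DDC spine to the other. The mappings below are invented for illustration.

       UKAT_TO_DDC = {"Information retrieval": {"025.04"}}
       ACM_TO_DDC = {"H.3.3 Information Search and Retrieval": {"025.04"}}

       def cross_browse(ukat_term):
           # ACM CCS headings sharing at least one DDC spine class with the UKAT term.
           spine = UKAT_TO_DDC.get(ukat_term, set())
           return [acm for acm, ddc in ACM_TO_DDC.items() if ddc & spine]

       print(cross_browse("Information retrieval"))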
  11. Shaw, R.; Buckland, M.: Open identification and linking of the four Ws (2008) 0.11
    0.10974465 = product of:
      0.1463262 = sum of:
        0.023237456 = weight(_text_:for in 2665) [ClassicSimilarity], result of:
          0.023237456 = score(doc=2665,freq=26.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.26177883 = fieldWeight in 2665, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
        0.111878954 = weight(_text_:computing in 2665) [ClassicSimilarity], result of:
          0.111878954 = score(doc=2665,freq=8.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.42780277 = fieldWeight in 2665, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2665)
        0.011209788 = product of:
          0.022419576 = sum of:
            0.022419576 = weight(_text_:22 in 2665) [ClassicSimilarity], result of:
              0.022419576 = score(doc=2665,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.1354154 = fieldWeight in 2665, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2665)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     Platforms for social computing connect users via shared references to people with whom they have relationships, events attended, places lived in or traveled to, and topics such as favorite books or movies. Since free text is insufficient for expressing such references precisely and unambiguously, many social computing platforms coin identifiers for topics, places, events, and people and provide interfaces for finding and selecting these identifiers from controlled lists. Using these interfaces, users collaboratively construct a web of links among entities. This model needn't be limited to social networking sites. Understanding an item in a digital library or museum requires context: information about the topics, places, events, and people to which the item is related. Students, journalists and investigators traditionally discover this kind of context by asking "the four Ws": what, where, when and who. The DCMI Kernel Metadata Community has recognized the four Ws as fundamental elements of descriptions (Kunze & Turner, 2007). Making better use of metadata to answer these questions via links to appropriate contextual resources has been our focus in a series of research projects over the past few years. Currently we are building a system for enabling readers of any text to relate any topic, place, event or person mentioned in the text to the best explanatory resources available. This system is being developed with two different corpora: a diverse variety of biographical texts characterized by very rich and dense mentions of people, events, places and activities, and a large collection of newly-scanned books, journals and manuscripts relating to Irish culture and history. Like a social computing platform, our system consists of tools for referring to topics, places, events or people, disambiguating these references by linking them to unique identifiers, and using the disambiguated references to provide useful information in context and to link to related resources. Yet current social computing platforms, while usually amenable to importing and exporting data, tend to mint proprietary identifiers and expect links to be traversed using their own interfaces. We take a different approach, using identifiers from both established and emerging naming authorities, representing relationships using standardized metadata vocabularies, and publishing those representations using standard protocols so that links can be stored and traversed anywhere. Central to our strategy is to move from appearances in a text to naming authorities to the construction of links for searching or querying trusted resources. Using identifiers from naming authorities, rather than literal values (as in the DCMI Kernel) or keys from a proprietary database, makes it more likely that links constructed using our system will continue to be useful in the future. WorldCat Identities URIs (http://worldcat.org/identities/) linked to Library of Congress and Deutsche Nationalbibliothek authority files for persons and organizations, and Geonames (http://geonames.org/) URIs for places, are stable identifiers attached to a wealth of useful metadata. Yet no naming authority can be totally comprehensive, so our system can be extended to use new sources of identifiers as needed. For example, we are experimenting with using Freebase (http://freebase.com/) URIs to identify historical events, for which no established naming authority currently exists.
     Stable identifiers (URIs), standardized hyperlinked data formats (XML), and uniform publishing protocols (HTTP) are key ingredients of the web's open architecture. Our system provides an example of how this open architecture can be exploited to build flexible and useful tools for connecting resources via shared references to topics, places, events, and people.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
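     To illustrate the approach in the abstract above, a mention can be represented as "four Ws" links to naming authorities rather than as free text. The identifiers below follow the URI patterns of the services the authors name but are illustrative, not records from the paper.

       # who/where use WorldCat Identities and Geonames URI patterns (illustrative IDs);
       # "what" would take an event identifier, e.g. from Freebase, once minted.
       mention = {
           "who":   "http://worldcat.org/identities/lccn-n79-21164",
           "where": "http://geonames.org/2964574",
           "when":  "1916-04-24",  # plain ISO date; no naming authority needed
       }

       def traversable_links(record):
           # Only resolvable URIs can be stored and traversed anywhere.
           return [v for v in record.values() if v.startswith("http")]

       print(traversable_links(mention))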
  12. Innovations and advanced techniques in systems, computing sciences and software engineering (2008) 0.10
    0.104384035 = product of:
      0.20876807 = sum of:
        0.013020686 = weight(_text_:for in 4319) [ClassicSimilarity], result of:
          0.013020686 = score(doc=4319,freq=4.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.14668301 = fieldWeight in 4319, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
        0.19574739 = weight(_text_:computing in 4319) [ClassicSimilarity], result of:
          0.19574739 = score(doc=4319,freq=12.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.7484989 = fieldWeight in 4319, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4319)
      0.5 = coord(2/4)
    
    Abstract
     Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. The volume includes selected papers from the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007), which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
    Content
     Contents: Image and Pattern Recognition: Compression, Image processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures. Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and tools. Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications. New trends in computing: Computers for People of Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, Natural Language Processing, Neural Networks, and Online Decision Support System
  13. Herrera-Viedma, E.; Pasi, G.; Lopez-Herrera, A.G.; Porcel; C.: Evaluating the information quality of Web sites : a methodology based on fuzzy computing with words (2006) 0.10
    0.103676856 = product of:
      0.1382358 = sum of:
        0.009207015 = weight(_text_:for in 5286) [ClassicSimilarity], result of:
          0.009207015 = score(doc=5286,freq=2.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.103720546 = fieldWeight in 5286, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5286)
        0.11301481 = weight(_text_:computing in 5286) [ClassicSimilarity], result of:
          0.11301481 = score(doc=5286,freq=4.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.43214604 = fieldWeight in 5286, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5286)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 5286) [ClassicSimilarity], result of:
              0.032027967 = score(doc=5286,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 5286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5286)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
     An evaluation methodology based on fuzzy computing with words aimed at measuring the information quality of Web sites containing documents is presented. This methodology is qualitative and user oriented because it generates linguistic recommendations on the information quality of the content-based Web sites based on users' perceptions. It is composed of two main components, an evaluation scheme to analyze the information quality of Web sites and a measurement method to generate the linguistic recommendations. The evaluation scheme is based on both technical criteria related to the Web site structure and criteria related to the content of information on the Web sites. It is user driven because the chosen criteria are easily understandable by the users, in such a way that Web visitors can assess them by means of linguistic evaluation judgments. The measurement method is user centered because it generates linguistic recommendations of the Web sites based on the visitors' linguistic evaluation judgments. To combine the linguistic evaluation judgments, we introduce two new majority-guided linguistic aggregation operators, the Majority guided Linguistic Induced Ordered Weighted Averaging (MLIOWA) and weighted MLIOWA operators, which generate the linguistic recommendations according to the majority of the evaluation judgments provided by different visitors. The use of this methodology could improve tasks such as information filtering and evaluation on the World Wide Web.
    Date
    22. 7.2006 17:05:46
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.4, S.538-549
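     A simplified sketch of the ordered-weighted-averaging step at the core of such operators; the paper's MLIOWA and weighted MLIOWA additionally derive the weights from the majority of the judgments and use induced ordering. The five-term scale and the weights below are invented.

       SCALE = ["very low", "low", "medium", "high", "very high"]  # indices 0..4

       def owa(judgments, weights):
           # Sort judgments from best to worst, weight them positionally,
           # then round the aggregate back onto the linguistic scale.
           idx = sorted((SCALE.index(j) for j in judgments), reverse=True)
           value = sum(w * x for w, x in zip(weights, idx))
           return SCALE[round(value)]

       # Three visitors judge a site's quality; the middle weight favours the majority.
       print(owa(["high", "high", "low"], [0.2, 0.6, 0.2]))  # -> 'high'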
  14. Fogg, B.J.: Persuasive technology : using computers to change what we think and do (2003) 0.10
    0.09562096 = product of:
      0.19124192 = sum of:
        0.022096835 = weight(_text_:for in 1877) [ClassicSimilarity], result of:
          0.022096835 = score(doc=1877,freq=18.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2489293 = fieldWeight in 1877, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.03125 = fieldNorm(doc=1877)
        0.16914508 = weight(_text_:computing in 1877) [ClassicSimilarity], result of:
          0.16914508 = score(doc=1877,freq=14.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.646777 = fieldWeight in 1877, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.03125 = fieldNorm(doc=1877)
      0.5 = coord(2/4)
    
    Footnote
     Review in: JASIS 54(2003) no.12, S.1168-1170 (A.D. Petrou): "Computers as persuasive technology, or Captology, is the topic of the ten chapters in B.J. Fogg's book. As the author states, the main focus of Captology is not on computer-mediated communication (CMC), but rather on human-computer interaction (HCI). Furthermore, according to the author, "captology focuses on the design, research, and analysis of interactive computing products created for the purpose of changing people's attitudes or behaviors. It describes the areas where technology and persuasion overlap" (p. 5). Each of the book's chapters presents theories, arguments, and examples to convince readers of the large and growing part that computing products play in persuading people to change their behaviors for the better in a variety of areas. Currently, some of the areas for which B.J. Fogg considers computing products persuasive or influential in motivating individuals to change their behaviors include quitting smoking, practicing safer sex, eating healthier, staying in shape, improving study habits, and helping doctors develop richer empathy for the pain experienced by their patients. In the wrong hands, however, B.J. Fogg warns, the computer's power to persuade can be enlisted to support unethical social ends and to serve corporate interests that deliver no real benefits to consumers. While Captology's concerns about the ethical side of computing products as persuasive tools are summarized in a chapter on ethics, they are also incorporated as short reminders throughout the book's ten chapters. A strength of the book, however, is that the author does not take it for granted that readers will agree with him on the persuasive power of computers. In addition to the technical and social theories he articulates, B.J. Fogg presents empirical evidence from his own research and also provides many examples of computing products designed to persuade people to change their behaviors. Computers can be designed to be highly interactive and to include many modalities for persuasion to match different situations and human personalities, such as submissive or dominant. Furthermore, computers may allow for anonymity in use and can be ubiquitous. ... Yet there is no denying the effectiveness of the arguments and empirical data put forth by B.J. Fogg about Captology's power to explain how a merging of technology with techniques of persuasion can help change human behavior for the better. The widespread influence of computing products and the need to ethically manage such influence over human behavior should command our attention as users and researchers, and most importantly as designers and producers of computing products."
  15. MacFarlane, A.; McCann, J.A.; Robertson, S.E.: Parallel methods for the generation of partitioned inverted files (2005) 0.09
    0.092616804 = product of:
      0.18523361 = sum of:
        0.019136423 = weight(_text_:for in 651) [ClassicSimilarity], result of:
          0.019136423 = score(doc=651,freq=6.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.21557912 = fieldWeight in 651, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.046875 = fieldNorm(doc=651)
        0.16609718 = weight(_text_:computing in 651) [ClassicSimilarity], result of:
          0.16609718 = score(doc=651,freq=6.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.6351224 = fieldWeight in 651, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.046875 = fieldNorm(doc=651)
      0.5 = coord(2/4)
    
    Abstract
     Purpose - The generation of inverted indexes is one of the most computationally intensive activities for information retrieval systems: indexing large multi-gigabyte text databases can take many hours or even days to complete. We examine the generation of partitioned inverted files in order to speed up the process of indexing. Two types of index partitions are investigated: TermId and DocId. Design/methodology/approach - We use standard parallel computing measures, such as speedup and efficiency, to examine the computing results and also the space costs of our trial indexing experiments. Findings - The results from runs on both partitioning methods are compared and contrasted, concluding that DocId is the more efficient method. Practical implications - The practical implications are that the DocId partitioning method would in most circumstances be used for distributing inverted file data in a parallel computer, particularly if indexing speed is the primary consideration. Originality/value - The paper is of value to database administrators who manage large-scale text collections, and who need to use parallel computing to implement their text retrieval services.
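     A toy sketch of the two partitioning methods compared above: TermId gives each node the complete posting lists for a subset of terms, while DocId gives each node the postings of a subset of documents; speedup and efficiency are then measured as shown. The index below is invented.

       INVERTED = {"parallel": [1, 4, 7], "index": [2, 4], "file": [1, 2, 7]}

       def termid_partition(index, nodes):
           parts = [{} for _ in range(nodes)]
           for i, (term, postings) in enumerate(sorted(index.items())):
               parts[i % nodes][term] = postings  # whole posting list per node
           return parts

       def docid_partition(index, nodes):
           parts = [{} for _ in range(nodes)]
           for term, postings in index.items():
               for doc in postings:
                   parts[doc % nodes].setdefault(term, []).append(doc)
           return parts

       print(termid_partition(INVERTED, 2))
       print(docid_partition(INVERTED, 2))

       speedup = lambda t1, tp: t1 / tp              # T(1 node) / T(p nodes)
       efficiency = lambda t1, tp, p: t1 / (tp * p)  # speedup per node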
  16. Women and information technology : research on underrepresentation (2006) 0.09
    0.09196392 = product of:
      0.18392783 = sum of:
        0.0110484185 = weight(_text_:for in 592) [ClassicSimilarity], result of:
          0.0110484185 = score(doc=592,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.12446466 = fieldWeight in 592, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0234375 = fieldNorm(doc=592)
        0.17287941 = weight(_text_:computing in 592) [ClassicSimilarity], result of:
          0.17287941 = score(doc=592,freq=26.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.66105634 = fieldWeight in 592, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0234375 = fieldNorm(doc=592)
      0.5 = coord(2/4)
    
    Abstract
    Experts investigate the reasons for low female participation in computing and suggest strategies for moving toward parity through studies of middle and high school girls, female students in postsecondary computer science programs, and women in the information technology workforce. Computing remains a heavily male-dominated field even after 25 years of extensive efforts to promote female participation. The contributors to "Women and Information Technology" look at reasons for the persistent gender imbalance in computing and explore some strategies intended to reverse the downward trend. The studies included are rigorous social science investigations; they rely on empirical evidence - not rhetoric, hunches, folk wisdom, or off-the-cuff speculation about supposed innate differences between men and women. Taking advantage of the recent surge in research in this area, the editors present the latest findings of both qualitative and quantitative studies. Each section begins with an overview of the literature on current research in the field, followed by individual studies. The first section investigates the relationship between gender and information technology among preteens and adolescents, with each study considering what could lead girls' interest in computing to diverge from boys'; the second section, on higher education, includes a nationwide study of computing programs and a cross-national comparison of computing education; the final section, on pathways into the IT workforce, considers both traditional and non-traditional paths to computing careers.
    Footnote
    Rez. in: JASIST 58(2007) no.11, S.1704 (D.E. Agosto): "Student participation in computer science (CS) has dropped significantly over the past few years in the United States. As the Computing Research Association (Vegso, 2006) recently noted, "After five years of decline, the number of new CS majors in fall 2005 was half of what it was in fall 2000 (15,958 vs. 7,952)." Many computing educators and working professionals worry that this reduced level of participation might result in slowed technological innovation in future years. Adding to the problem is especially low female participation in the computer-related disciplines. For example, Cohoon (2003) showed that the percentage of high school girls indicating intent to study CS in college dropped steadily from 1991 to 2001, from a high of 37% to a low of 20%. The National Science Foundation's most recent report on Women, Minorities, and Persons with Disabilities in Science and Engineering (National Science Foundation, 2004) indicates that while females obtained 57% of all bachelor's degrees in 2001, they obtained just 28% of computer-related undergraduate degrees. These low percentages of female participation are reflected in the computing workforce as well. Women and Information Technology: Research on Underrepresentation provides an overview of research projects and research trends relating to gender and computing. The book takes a proactive general stance; the ultimate goal of publishing the research included in the volume is to lead to significant gains in female representation in the study and practice of the computing-related fields. ... The volume as a whole does not offer a clear-cut solution to the problem of female underrepresentation, but a number of the chapters do indicate that recruitment and retention must be dealt with jointly, as each is dependent on the other. Another recurring theme is the importance of role models from early on in girls' lives, in the form of both female faculty and female computing professionals. Still another recurring theme is the importance of female mentoring before and during the college years, including both informal peer mentoring and formal faculty mentoring. Taken as a whole, this is a successful work that is probably most useful as a background reference tool. As such, it should assist students and scholars interested in continuing this undeniably important area of research."
  17. Dirks, L.: eResearch, semantic computing and the cloud : towards a smart cyberinfrastructure for eResearch (2009) 0.09
    0.09118978 = product of:
      0.18237956 = sum of:
        0.022552488 = weight(_text_:for in 2815) [ClassicSimilarity], result of:
          0.022552488 = score(doc=2815,freq=12.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2540624 = fieldWeight in 2815, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2815)
        0.15982707 = weight(_text_:computing in 2815) [ClassicSimilarity], result of:
          0.15982707 = score(doc=2815,freq=8.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.6111468 = fieldWeight in 2815, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2815)
      0.5 = coord(2/4)
    
    Abstract
    In the future, frontier research in many fields will increasingly require the collaboration of globally distributed groups of researchers needing access to distributed computing, data resources and support for remote access to expensive, multi-national specialized facilities such as telescopes and accelerators or specialist data archives. There is also a general belief that an important road to innovation will be provided by multi-disciplinary and collaborative research - from bio-informatics and earth systems science to social science and archaeology. There will also be an explosion in the amount of research data collected in the next decade - hundreds of terabytes will be common in many fields. These future research requirements constitute the 'eResearch' agenda. Powerful software services will be widely deployed on top of the academic research networks to form the necessary 'Cyberinfrastructure' to provide a collaborative research environment for the global academic community. The difficulties in combining data and information from distributed sources, the multi-disciplinary nature of research and collaboration, and the need to present researchers with tooling that enables them to express what they want to do rather than how to do it all highlight the need for an ecosystem of Semantic Computing technologies. Such technologies will further facilitate information sharing and discovery, will enable reasoning over information, and will allow us to start thinking about knowledge and how it can be handled by computers. This talk will review the elements of this vision and explain the need for semantic-oriented computing by exploring eResearch projects that have successfully applied relevant technologies. It will also suggest that a software + service model, with scientific services delivered from the cloud, will become an increasingly accepted model for research.
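    As a toy illustration of what "reasoning over information" can mean (an invented example, not from the talk), the Python sketch below stores a few subject-predicate-object triples and saturates a transitive isA relation - the kind of inference that semantic technologies are meant to automate at cyberinfrastructure scale.

        triples = {
            ("Telescope", "isA", "Instrument"),
            ("Instrument", "isA", "Resource"),
            ("vlt-archive", "describes", "Telescope"),
        }

        def infer_isa(triples):
            """Saturate the transitive 'isA' relation to a fixed point."""
            facts = set(triples)
            changed = True
            while changed:
                changed = False
                isa = [(s, o) for (s, p, o) in facts if p == "isA"]
                for s1, o1 in isa:
                    for s2, o2 in isa:
                        if o1 == s2 and (s1, "isA", o2) not in facts:
                            facts.add((s1, "isA", o2))
                            changed = True
            return facts

        # The derived fact was never asserted, only inferred:
        print(("Telescope", "isA", "Resource") in infer_isa(triples))  # True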
  18. Chan, H.C.; Teo, H.H.; Zeng, X.H.: ¬An evaluation of novice end-user computing performance : data modeling, query writing, and comprehension (2005) 0.09
    0.09020729 = product of:
      0.18041459 = sum of:
        0.020587513 = weight(_text_:for in 3563) [ClassicSimilarity], result of:
          0.020587513 = score(doc=3563,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 3563, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3563)
        0.15982707 = weight(_text_:computing in 3563) [ClassicSimilarity], result of:
          0.15982707 = score(doc=3563,freq=8.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.6111468 = fieldWeight in 3563, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3563)
      0.5 = coord(2/4)
    
    Abstract
    End-user computing has become a well-established aspect of enterprise database systems today. End-user computing performance depends on the user-database interface, in which the data model and query language are major components. We examined three prominent data models - the relational model, the Extended-Entity-Relationship (EER) model, and the Object-Oriented (OO) model - and their query languages in a rigorous and systematic experiment to evaluate their effects on novice end-user computing performance in the context of database design and data manipulation. In addition, relationships among the performances for different tasks (modeling, query writing, query comprehension) were postulated with the use of a cognitive model of the query process, and then tested in the experiment. Structural Equation Modeling (SEM) techniques were used to examine the multiple causal relationships simultaneously. The findings indicate that the EER and OO models overwhelmingly outperformed the relational model in terms of accuracy for both database design and data manipulation. The associations between tasks suggest that data modeling techniques would enhance query writing correctness, and query writing ability would contribute to query comprehension. This study provides a better and more thorough understanding of the inter-relationships among these data modeling and task factors. Our findings have significant implications for novice end-user training and development.
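    The contrast between the models can be seen in miniature below: the same retrieval task expressed once against a relational schema (a join over foreign keys) and once against an object-oriented model (navigation over object references). The Department/Employee schema is hypothetical and not from the study.

        # Relational version: the novice must know the foreign-key join.
        relational_query = """
        SELECT e.name
        FROM employee e JOIN department d ON e.dept_id = d.id
        WHERE d.city = 'Singapore';
        """

        class Employee:
            def __init__(self, name):
                self.name = name

        class Department:
            def __init__(self, city, employees):
                self.city = city
                self.employees = employees  # direct object references, no join

        departments = [Department("Singapore", [Employee("Tan"), Employee("Lim")])]

        # OO version: the relationship is navigated, not reconstructed.
        names = [e.name for d in departments if d.city == "Singapore"
                 for e in d.employees]
        print(names)  # ['Tan', 'Lim']

    That the OO version needs no explicit join is one plausible reading of why novices in the study were more accurate with the EER and OO models than with the relational model.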
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.8, S.843-853
  19. Reed, G.M.; Sanders, J.W.: ¬The principle of distribution (2008) 0.09
    0.087386265 = product of:
      0.116515025 = sum of:
        0.020587513 = weight(_text_:for in 1868) [ClassicSimilarity], result of:
          0.020587513 = score(doc=1868,freq=10.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.2319262 = fieldWeight in 1868, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1868)
        0.079913534 = weight(_text_:computing in 1868) [ClassicSimilarity], result of:
          0.079913534 = score(doc=1868,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 1868, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1868)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 1868) [ClassicSimilarity], result of:
              0.032027967 = score(doc=1868,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 1868, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1868)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This article introduces a normative principle for the behavior of contemporary computing and communication systems and considers some of its consequences. The principle, named the principle of distribution, says that in a distributed multi-agent system, control resides as much as possible with the individuals constituting the system rather than in centralized agents; and when that is unfeasible or becomes inappropriate due to environmental changes, control evolves upwards from the individuals to an appropriate intermediate level rather than being imposed from above. The setting for the work is the dynamically changing global space resulting from ubiquitous communication. Accordingly, the article begins by determining the characteristics of the distributed multi-agent space it spans. It then fleshes out the principle of distribution, with examples from daily life as well as from Computer Science. The case is made for the principle of distribution to work at various levels of abstraction of system behavior: to inform the high-level discussion that ought to precede the more low-level concerns of technology, protocols, and standardization, but also to facilitate those lower levels. Of the more substantial applications given here of the principle of distribution, a technical example concerns the design of secure ad hoc networks of mobile devices, achievable without any form of centralized authentication or identification but in a solely distributed manner. Here, the context is how the principle can be used to provide new and provably secure protocols for genuinely ubiquitous communication. A second, more managerial example concerns the distributed production and management of open-source software, and a third investigates some pertinent questions involving the dynamic restructuring of control in distributed systems, important in times of disaster or malevolence.
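    A toy sketch of the principle (one reading of it, not the authors' formalism): agents resolve contention directly when a local rule suffices, and control moves up only one level, to an intermediate coordinator, when it does not. The agents, priorities, and tie-break rule are all invented.

        def resolve_locally(claimants):
            """Claimants negotiate directly: a unique highest priority wins."""
            top = max(c["priority"] for c in claimants)
            winners = [c for c in claimants if c["priority"] == top]
            return winners[0]["id"] if len(winners) == 1 else None

        def resolve(claimants, coordinator):
            winner = resolve_locally(claimants)
            if winner is not None:
                return ("local", winner)  # control stays with the individuals
            # Escalate one level up - to an intermediate coordinator,
            # not to a global authority.
            return ("escalated", coordinator(claimants))

        agents = [{"id": "a", "priority": 3}, {"id": "b", "priority": 3}]
        tie_break = lambda cs: sorted(c["id"] for c in cs)[0]
        print(resolve(agents, tie_break))  # ('escalated', 'a')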
    Date
    1. 6.2008 12:22:41
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.7, S.1134-1142
  20. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.09
    0.08575615 = product of:
      0.11434154 = sum of:
        0.01841403 = weight(_text_:for in 6959) [ClassicSimilarity], result of:
          0.01841403 = score(doc=6959,freq=8.0), product of:
            0.08876751 = queryWeight, product of:
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.047278564 = queryNorm
            0.20744109 = fieldWeight in 6959, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.8775425 = idf(docFreq=18385, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.079913534 = weight(_text_:computing in 6959) [ClassicSimilarity], result of:
          0.079913534 = score(doc=6959,freq=2.0), product of:
            0.26151994 = queryWeight, product of:
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.047278564 = queryNorm
            0.3055734 = fieldWeight in 6959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.5314693 = idf(docFreq=475, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.016013984 = product of:
          0.032027967 = sum of:
            0.032027967 = weight(_text_:22 in 6959) [ClassicSimilarity], result of:
              0.032027967 = score(doc=6959,freq=2.0), product of:
                0.16556148 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047278564 = queryNorm
                0.19345059 = fieldWeight in 6959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6959)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result-set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
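    A minimal sketch of the client-side result-set manipulation argued for above (the records are invented): once structured records, rather than display-only HTML, reach the desktop, re-sorting needs no round trip to the server.

        records = [
            {"author": "Smith, J.", "title": "Distributed IR", "year": 1998},
            {"author": "Chen, H.", "title": "Object brokers", "year": 2000},
            {"author": "Kuhn, T.", "title": "Metadata views", "year": 1996},
        ]

        # Because each field is an addressable entity (unlike in display-only
        # HTML), the client can re-sort, filter, or regroup locally:
        by_year = sorted(records, key=lambda r: r["year"], reverse=True)
        by_author = sorted(records, key=lambda r: r["author"])

        for r in by_year:
            print(r["year"], r["author"], "-", r["title"])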
    Date
    22. 9.1997 19:16:05

Types

  • a 5022
  • m 517
  • el 414
  • s 189
  • b 35
  • x 30
  • r 24
  • i 20
  • n 18
  • p 8
