Search (901 results, page 1 of 46)

  • theme_ss:"Internet"
  1. Pu, H.-T.; Chuang, S.-L.; Yang, C.: Subject categorization of query terms for exploring Web users' search interests (2002) 0.14
    0.14214307 = product of:
      0.1895241 = sum of:
        0.005885557 = product of:
          0.023542227 = sum of:
            0.023542227 = weight(_text_:based in 587) [ClassicSimilarity], result of:
              0.023542227 = score(doc=587,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.16644597 = fieldWeight in 587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=587)
          0.25 = coord(1/4)
        0.056460675 = weight(_text_:term in 587) [ClassicSimilarity], result of:
          0.056460675 = score(doc=587,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.25776416 = fieldWeight in 587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=587)
        0.12717786 = weight(_text_:frequency in 587) [ClassicSimilarity], result of:
          0.12717786 = score(doc=587,freq=4.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.46005818 = fieldWeight in 587, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0390625 = fieldNorm(doc=587)
      0.75 = coord(3/4)
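The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch of how each leaf score is computed from the quantities shown (the function name is mine):

```python
from math import sqrt

# Lucene ClassicSimilarity leaf score, as shown in the explain tree:
#   score       = queryWeight * fieldWeight
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(tf) * idf * fieldNorm
def classic_leaf_score(tf, idf, query_norm, field_norm):
    query_weight = idf * query_norm
    field_weight = sqrt(tf) * idf * field_norm
    return query_weight * field_weight

# Values taken from the "based" leaf of result 1 (doc 587):
score = classic_leaf_score(tf=2.0, idf=3.0129938,
                           query_norm=0.04694356, field_norm=0.0390625)
print(round(score, 9))  # ≈ 0.023542227, matching the explain output
```

The per-leaf scores are then summed and scaled by the coord() factor (matching clauses / total clauses) to give the document score.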
    
    Abstract
    Subject content analysis of Web query terms is essential to understand Web searching interests. Such analysis includes exploring search topics and observing changes in their frequency distributions with time. To provide a basis for in-depth analysis of users' search interests on a larger scale, this article presents a query categorization approach to automatically classifying Web query terms into broad subject categories. Because a query is short in length and simple in structure, its intended subject(s) of search are difficult to judge. Our approach, therefore, combines the search processes of real-world search engines to obtain highly ranked Web documents based on each unknown query term. These documents are used to extract cooccurring terms and to create a feature set. An effective ranking function has also been developed to find the most appropriate categories. Three search engine logs in Taiwan were collected and tested. They contained over 5 million queries from different periods of time. The achieved performance is quite encouraging compared with that of human categorization. The experimental results demonstrate that the approach is efficient in dealing with large numbers of queries and adaptable to the dynamic Web environment. Through good integration of human and machine efforts, the frequency distributions of subject categories in response to changes in users' search interests can be systematically observed in real time. The approach has also shown potential for use in various information retrieval applications, and provides a basis for further Web searching studies.
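A toy sketch of the categorization pipeline the abstract describes: represent an unknown query by terms co-occurring in its top-ranked documents, then rank subject categories against that feature set. The category profiles, documents, and ranking function here are invented for illustration, not the authors' implementation:

```python
from collections import Counter

# Hypothetical category profiles (term sets per broad subject category)
CATEGORY_TERMS = {
    "computers": {"software", "web", "server", "linux"},
    "travel": {"hotel", "flight", "tour", "beach"},
}

def categorize(top_doc_texts, k=1):
    # Feature set: terms co-occurring in the highly ranked documents
    features = Counter(w for doc in top_doc_texts for w in doc.lower().split())
    # Toy ranking function: summed feature frequency of each category's terms
    scores = {cat: sum(features[t] for t in terms)
              for cat, terms in CATEGORY_TERMS.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

docs = ["Linux web server software guide", "install web server on linux"]
print(categorize(docs))  # ['computers']
```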
  2. Dillon, M.; Jul, E.: Cataloging Internet resources : the convergence of libraries and Internet resources (1996) 0.11
    0.106715195 = product of:
      0.14228693 = sum of:
        0.00823978 = product of:
          0.03295912 = sum of:
            0.03295912 = weight(_text_:based in 6737) [ClassicSimilarity], result of:
              0.03295912 = score(doc=6737,freq=2.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23302436 = fieldWeight in 6737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6737)
          0.25 = coord(1/4)
        0.11178643 = weight(_text_:term in 6737) [ClassicSimilarity], result of:
          0.11178643 = score(doc=6737,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.510347 = fieldWeight in 6737, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6737)
        0.022260714 = product of:
          0.04452143 = sum of:
            0.04452143 = weight(_text_:22 in 6737) [ClassicSimilarity], result of:
              0.04452143 = score(doc=6737,freq=2.0), product of:
                0.16438834 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04694356 = queryNorm
                0.2708308 = fieldWeight in 6737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6737)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Reviews issues related to the cataloguing of Internet resources and considers short-term and long-term directions for cataloguing and the general provision of library services for remotely accessible, electronic information resources. Discusses the strengths and weaknesses of using a library catalogue model to improve access to Internet resources. Based on experience gained through 2 OCLC Internet cataloguing projects, recommends continued application of library cataloguing standards and methods for Internet resources, with the expectation that catalogues, cataloguing and libraries in general will continue to evolve. Points to problems inherent in the MARC field 856.
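Since the abstract singles out MARC field 856 (Electronic Location and Access), a minimal illustrative rendering of such a field may help; the helper name and sample data are hypothetical, while the subfield codes ($u = URI, $z = public note) follow MARC 21:

```python
# Render a MARC 856 field as a human-readable display line.
# ind1 = "4" (HTTP access method), ind2 = "0" (the resource itself).
def format_856(uri, note=None, ind1="4", ind2="0"):
    subfields = f"$u {uri}"
    if note:
        subfields += f" $z {note}"
    return f"856 {ind1}{ind2} {subfields}"

print(format_856("http://www.oclc.org/", note="Publisher's site"))
# 856 40 $u http://www.oclc.org/ $z Publisher's site
```

Because the $u subfield embeds a literal URL in the record, every moved or retired resource leaves a stale link behind, which is one of the maintenance problems the authors point to.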
    Series
    Cataloging and classification quarterly; vol.22, nos.3/4
  3. Srinivasan, P.; Ruiz, M.E.; Lam, W.: ¬An investigation of indexing on the WWW (1996) 0.10
    0.102472305 = product of:
      0.20494461 = sum of:
        0.079044946 = weight(_text_:term in 7424) [ClassicSimilarity], result of:
          0.079044946 = score(doc=7424,freq=2.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.36086982 = fieldWeight in 7424, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7424)
        0.12589967 = weight(_text_:frequency in 7424) [ClassicSimilarity], result of:
          0.12589967 = score(doc=7424,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.45543438 = fieldWeight in 7424, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0546875 = fieldNorm(doc=7424)
      0.5 = coord(2/4)
    
    Abstract
    Proposes a model that assists in understanding indexing on the WWW. It specifies key features of indexing strategies that are currently being used. Presents an experiment assessing the validity of Inverse Document Frequency (IDF) as a term weighting strategy for WWW documents. The experiment indicates that IDF scores are not stable in the heterogeneous and dynamic context of the WWW. Recommends further investigation to clarify the effectiveness of alternative indexing strategies for the WWW.
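A toy illustration of the instability at issue: an IDF weight for the same term shifts as a dynamic collection grows and the term's document frequency changes. This uses the textbook idf = log(N/df) variant (Lucene's formula differs slightly); the numbers are invented:

```python
from math import log

def idf(n_docs, doc_freq):
    # Classic inverse document frequency: log(N / df)
    return log(n_docs / doc_freq)

before = idf(1_000, 10)      # term appears in 10 of 1,000 pages
after = idf(10_000, 2_000)   # the collection grew and the term became common
print(before > after)        # the term's weight has collapsed
```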
  4. Davis, P.M.; Cohen, S.A.: ¬The effect of the Web on undergraduate citation behavior 1996-1999 (2001) 0.10
    0.10186547 = product of:
      0.20373094 = sum of:
        0.09581695 = weight(_text_:term in 5768) [ClassicSimilarity], result of:
          0.09581695 = score(doc=5768,freq=4.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.4374403 = fieldWeight in 5768, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.046875 = fieldNorm(doc=5768)
        0.107914 = weight(_text_:frequency in 5768) [ClassicSimilarity], result of:
          0.107914 = score(doc=5768,freq=2.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.39037234 = fieldWeight in 5768, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.046875 = fieldNorm(doc=5768)
      0.5 = coord(2/4)
    
    Abstract
    A citation analysis of undergraduate term papers in microeconomics revealed a significant decrease in the frequency of scholarly resources cited between 1996 and 1999. Book citations decreased from 30% to 19%, newspaper citations increased from 7% to 19%, and Web citations increased from 9% to 21%. Web citations checked in 2000 revealed that only 18% of URLs cited in 1996 led to the correct Internet document. For 1999 bibliographies, only 55% of URLs led to the correct document. The authors recommend (1) setting stricter guidelines for acceptable citations in course assignments; (2) creating and maintaining scholarly portals for authoritative Web sites with a commitment to long-term access; and (3) continuing to instruct students how to critically evaluate resources
  5. Chang, C.-H.; Hsu, C.-C.: Integrating query expansion and conceptual relevance feedback for personalized Web information retrieval (1998) 0.08
    0.08473002 = product of:
      0.16946004 = sum of:
        0.014271717 = product of:
          0.057086866 = sum of:
            0.057086866 = weight(_text_:based in 1319) [ClassicSimilarity], result of:
              0.057086866 = score(doc=1319,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.40361002 = fieldWeight in 1319, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1319)
          0.25 = coord(1/4)
        0.15518832 = sum of:
          0.11066689 = weight(_text_:assessment in 1319) [ClassicSimilarity], result of:
            0.11066689 = score(doc=1319,freq=2.0), product of:
              0.25917634 = queryWeight, product of:
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.04694356 = queryNorm
              0.4269946 = fieldWeight in 1319, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.52102 = idf(docFreq=480, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
          0.04452143 = weight(_text_:22 in 1319) [ClassicSimilarity], result of:
            0.04452143 = score(doc=1319,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.2708308 = fieldWeight in 1319, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1319)
      0.5 = coord(2/4)
    
    Abstract
    Keyword-based querying has been an immediate and efficient way to specify and retrieve the information a user requires. However, conventional document ranking, based on an automatic assessment of document relevance to the query, may not be the best approach when little information is given. Proposes integrating 2 existing techniques, query expansion and relevance feedback, to achieve a concept-based information search for the Web.
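The abstract does not specify how the two techniques are combined; as a generic illustration of relevance-feedback-driven query expansion, here is a Rocchio-style sketch (weights, vectors, and data are arbitrary, and this is not the authors' algorithm):

```python
# Queries and documents are term -> weight dicts; alpha/beta are the
# conventional Rocchio coefficients, chosen arbitrarily here.
def rocchio(query, relevant_docs, alpha=1.0, beta=0.75):
    expanded = {t: alpha * w for t, w in query.items()}
    for doc in relevant_docs:
        for t, w in doc.items():
            # Pull the query vector toward the centroid of relevant docs
            expanded[t] = expanded.get(t, 0.0) + beta * w / len(relevant_docs)
    return expanded

q = {"jaguar": 1.0}
feedback = [{"jaguar": 0.5, "car": 0.8}, {"car": 0.6, "engine": 0.4}]
new_q = rocchio(q, feedback)
print(sorted(new_q, key=new_q.get, reverse=True))  # ['jaguar', 'car', 'engine']
```

The feedback terms ("car", "engine") enter the expanded query with lower weight than the original keyword, which is the usual way expansion and feedback reinforce each other.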
    Date
    1. 8.1996 22:08:06
  6. Paltoglou, G.: Sentiment-based event detection in Twitter (2016) 0.08
    0.08297726 = product of:
      0.16595452 = sum of:
        0.010194084 = product of:
          0.040776335 = sum of:
            0.040776335 = weight(_text_:based in 3010) [ClassicSimilarity], result of:
              0.040776335 = score(doc=3010,freq=6.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.28829288 = fieldWeight in 3010, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3010)
          0.25 = coord(1/4)
        0.15576044 = weight(_text_:frequency in 3010) [ClassicSimilarity], result of:
          0.15576044 = score(doc=3010,freq=6.0), product of:
            0.27643865 = queryWeight, product of:
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.04694356 = queryNorm
            0.5634539 = fieldWeight in 3010, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.888745 = idf(docFreq=332, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3010)
      0.5 = coord(2/4)
    
    Abstract
    The main focus of this article is to examine whether sentiment analysis can be successfully used for "event detection," that is, detecting significant events that occur in the world. Most solutions to this problem are typically based on increases or spikes in frequency of terms in social media. In our case, we explore whether sudden changes in the positivity or negativity that keywords are typically associated with can be exploited for this purpose. A data set that contains several million Twitter messages over a 1-month time span is presented, and experimental results demonstrate that sentiment analysis can be successfully utilized for this purpose. Further experiments study the sensitivity of both frequency- or sentiment-based solutions to a number of parameters. Concretely, we show that the number of tweets that are used for event detection is an important factor, while the number of days used to extract token frequency or sentiment averages is not. Lastly, we present results focusing on detecting local events and conclude that all approaches are dependent on the level of coverage that such events receive in social media.
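A toy sketch of the frequency-spike baseline mentioned above: flag a day as a candidate "event" when its token count exceeds the series mean by some number of standard deviations. The threshold and data are invented, and the same test could equally be applied to daily sentiment averages instead of counts:

```python
import statistics

def spike_days(daily_counts, z_threshold=2.0):
    # Flag indices whose z-score against the whole series exceeds the threshold
    mean = statistics.mean(daily_counts)
    stdev = statistics.pstdev(daily_counts)
    return [i for i, c in enumerate(daily_counts)
            if stdev and (c - mean) / stdev > z_threshold]

counts = [10, 12, 9, 11, 10, 95, 12]  # day 5 spikes
print(spike_days(counts))  # [5]
```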
  7. Huang, C.-K.; Chien, L.-F.; Oyang, Y.-J.: Relevant term suggestion in interactive Web search based on contextual information in query session logs (2003) 0.07
    0.07331164 = product of:
      0.14662328 = sum of:
        0.008323434 = product of:
          0.033293735 = sum of:
            0.033293735 = weight(_text_:based in 1612) [ClassicSimilarity], result of:
              0.033293735 = score(doc=1612,freq=4.0), product of:
                0.14144066 = queryWeight, product of:
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.04694356 = queryNorm
                0.23539014 = fieldWeight in 1612, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0129938 = idf(docFreq=5906, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1612)
          0.25 = coord(1/4)
        0.13829985 = weight(_text_:term in 1612) [ClassicSimilarity], result of:
          0.13829985 = score(doc=1612,freq=12.0), product of:
            0.21904005 = queryWeight, product of:
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.04694356 = queryNorm
            0.6313907 = fieldWeight in 1612, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              4.66603 = idf(docFreq=1130, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1612)
      0.5 = coord(2/4)
    
    Abstract
    This paper proposes an effective term suggestion approach to interactive Web search. Conventional approaches to making term suggestions involve extracting co-occurring keyterms from highly ranked retrieved documents. Such approaches must deal with term extraction difficulties and interference from irrelevant documents, and, more importantly, have difficulty extracting terms that are conceptually related but do not frequently co-occur in documents. In this paper, we present a new, effective log-based approach to relevant term extraction and term suggestion. Using this approach, the relevant terms suggested for a user query are those that co-occur in similar query sessions from search engine logs, rather than in the retrieved documents. In addition, the suggested terms in each interactive search step can be organized according to their relevance to the entire query session, rather than to the most recent single query as in conventional approaches. The proposed approach was tested using a proxy server log containing about two million query transactions submitted to search engines in Taiwan. The obtained experimental results show that the proposed approach can provide organized and highly relevant terms, and can exploit the contextual information in a user's query session to make more effective suggestions.
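A toy sketch of the core idea: count co-occurrence of queries within the *same query session* rather than within retrieved documents, then suggest the most frequent co-occurring queries. The session data and helper names are invented for illustration:

```python
from collections import defaultdict

def build_cooccurrence(sessions):
    # cooc[q][other] = number of sessions in which q and other co-occur
    cooc = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for q in session:
            for other in session:
                if other != q:
                    cooc[q][other] += 1
    return cooc

sessions = [
    ["mp3", "mp3 player", "winamp"],
    ["mp3", "winamp"],
    ["flight", "hotel"],
]
cooc = build_cooccurrence(sessions)
print(sorted(cooc["mp3"], key=cooc["mp3"].get, reverse=True))
# ['winamp', 'mp3 player']
```

Note that "winamp" would rarely co-occur with "mp3" inside a single document, yet session logs surface it readily, which is exactly the advantage the abstract claims for log-based suggestion.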
  8. Bruce, H.: ¬The user's view of the Internet (2002) 0.07
    0.06993598 = sum of:
      0.003058225 = product of:
        0.0122329 = sum of:
          0.0122329 = weight(_text_:based in 4344) [ClassicSimilarity], result of:
            0.0122329 = score(doc=4344,freq=6.0), product of:
              0.14144066 = queryWeight, product of:
                3.0129938 = idf(docFreq=5906, maxDocs=44218)
                0.04694356 = queryNorm
              0.08648786 = fieldWeight in 4344, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.0129938 = idf(docFreq=5906, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
        0.25 = coord(1/4)
      0.023954237 = weight(_text_:term in 4344) [ClassicSimilarity], result of:
        0.023954237 = score(doc=4344,freq=4.0), product of:
          0.21904005 = queryWeight, product of:
            4.66603 = idf(docFreq=1130, maxDocs=44218)
            0.04694356 = queryNorm
          0.10936008 = fieldWeight in 4344, product of:
            2.0 = tf(freq=4.0), with freq of:
              4.0 = termFreq=4.0
            4.66603 = idf(docFreq=1130, maxDocs=44218)
            0.01171875 = fieldNorm(doc=4344)
      0.03815336 = weight(_text_:frequency in 4344) [ClassicSimilarity], result of:
        0.03815336 = score(doc=4344,freq=4.0), product of:
          0.27643865 = queryWeight, product of:
            5.888745 = idf(docFreq=332, maxDocs=44218)
            0.04694356 = queryNorm
          0.13801746 = fieldWeight in 4344, product of:
            2.0 = tf(freq=4.0), with freq of:
              4.0 = termFreq=4.0
            5.888745 = idf(docFreq=332, maxDocs=44218)
            0.01171875 = fieldNorm(doc=4344)
      0.0047701527 = product of:
        0.0095403055 = sum of:
          0.0095403055 = weight(_text_:22 in 4344) [ClassicSimilarity], result of:
            0.0095403055 = score(doc=4344,freq=2.0), product of:
              0.16438834 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04694356 = queryNorm
              0.058035173 = fieldWeight in 4344, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.01171875 = fieldNorm(doc=4344)
        0.5 = coord(1/2)
    
    Footnote
    Rez. in: JASIST. 54(2003) no.9, S.906-908 (E.G. Ackermann): "In this book Harry Bruce provides a construct or view of "how and why people are using the Internet," which can be used "to inform the design of new services and to augment our usings of the Internet" (pp. viii-ix; see also pp. 183-184). In the process, he develops an analytical tool that I term the Metatheory of Circulating Usings, and provides an impressive distillation of a vast quantity of research data from previous studies. The book's perspective is explicitly user-centered, as is its theoretical bent. The book is organized into a preface, acknowledgments, and five chapters (Chapter 1, "The Internet Story;" Chapter 2, "Technology and People;" Chapter 3, "A Focus on Usings;" Chapter 4, "Users of the Internet;" Chapter 5, "The User's View of the Internet"), followed by an extensive bibliography and short index. Any notes are found at the end of the relevant chapter. The book is illustrated with figures and tables, which are clearly presented and labeled. The text is clearly written in a conversational style, relatively jargon-free, and contains no quantification. The intellectual structure follows that of the book for the most part, with some exceptions. The definitions of several key concepts or terms are scattered throughout the book, often appearing long after extensive earlier use. For example, "stakeholders," used repeatedly from p. viii onward, remains undefined until late in the book (pp. 175-176). The study's method is presented in Chapter 3 (p. 34), relatively late in the book. Its metatheoretical basis is developed in two widely separated places (Chapter 3, pp. 56-61, and Chapter 5, pp. 157-159) for no apparent reason. The goal or purpose of presenting the data in Chapter 4 is explained after its presentation (p. 129) rather than earlier with the limits of the data (p. 69).
Although none of these problems is crippling to the book, they do introduce an element of unevenness into the flow of the narrative that can confuse the reader and unnecessarily obscure the author's intent. Bruce provides the contextual background of the book in Chapter 1 (The Internet Story) in the form of a brief history of the Internet followed by a brief delineation of the early popular views of the Internet as an information superstructure. His recapitulation of the development of the Internet from its origins as ARPANET in 1957 to 1995 touches on the highlights of this familiar story that will not be retold here. The early popular views or characterizations of the Internet as an "information society" or "information superhighway" revolved primarily around its function as an information infrastructure (p. 13). These views shared three main components (technology, political values, and implied information values) as well as a set of common assumptions. The technology aspect focused on the Internet as a "common ground on which digital information products and services achieve interoperability" (p. 14). The political values provided a "vision of universal access to distributed information resources and the benefits that this will bring to the lives of individual people and to society in general" (p. 14). The implied communication and information values portrayed the Internet as a "medium for human creativity and innovation" (p. 14). These popular views also assumed that "good decisions arise from good information," that "good democracy is based on making information available to all sectors of society," and that "wisdom is the by-product of effective use of information" (p. 15). Therefore, because the Internet is an information infrastructure, it must be "good and using the Internet will benefit individuals and society in general" (p. 15).
    Chapter 2 (Technology and People) focuses on several theories of technological acceptance and diffusion. Unfortunately, Bruce's presentation is somewhat confusing as he moves from one theory to the next, never quite connecting them into a logical sequence or coherent whole. Two theories are of particular interest to Bruce: the Theory of Diffusion of Innovations and the Theory of Planned Behavior. The Theory of Diffusion of Innovations is an "information-centric view of technology acceptance" in which technology adopters are placed in the information flows of society from which they learn about innovations and "drive innovation adoption decisions" (p. 20). The Theory of Planned Behavior maintains that the "performance of a behavior is a joint function of intentions and perceived behavioral control" (i.e., how much control a person thinks they have) (pp. 22-23). Bruce combines these two theories to form the basis for the Technology Acceptance Model. This model posits that "an individual's acceptance of information technology is based on beliefs, attitudes, intentions, and behaviors" (p. 24). Through all these theories and models echoes a recurring theme: "individual perceptions of the innovation or technology are critical" in terms of both its characteristics and its use (pp. 24-25). From these, in turn, Bruce derives a predictive theory of the role personal perceptions play in technology adoption: Personal Innovativeness of Information Technology Adoption (PIITA). Personal innovativeness is defined as "the willingness of an individual to try out any new information technology" (p. 26). In general, the PIITA theory predicts that information technology will be adopted by individuals who have a greater exposure to mass media, rely less on the evaluation of information technology by others, exhibit a greater ability to cope with uncertainty and take risks, and require a less positive perception of an information technology prior to its adoption.
Chapter 3 (A Focus on Usings) introduces the User-Centered Paradigm (UCP). The UCP is characteristic of the shift of emphasis from technology to users as the driving force behind technology and research agendas for Internet development [for a dissenting view, see Andrew Dillon's (2003) challenge to the utility of user-centeredness for design guidance]. It entails the "broad acceptance of the user-oriented perspective across a range of disciplines and professional fields," such as business, education, cognitive engineering, and information science (p. 34).
    The UCP's effect on business practices is focused mainly in the management and marketing areas. Marketing experienced a shift from "product-oriented operations," with its focus on "selling the products' features" and customer contact only at the point of sale, toward more service-centered business practice ("customer demand orientation") and the development of one-to-one customer relationships (pp. 35-36). For management, the adoption of the UCP caused a shift from "mechanistic, bureaucratic, top-down organizational structures" to "flatter, inclusive, and participative" ones (p. 37). In education, practice shifted from the teacher-centered model, where the "teacher is responsible for and makes all the decisions related to the learning environment," to a learner-centered model, where the student is "responsible for his or her own learning" and the teacher focuses on "matching learning events to the individual skills, aptitudes, and interests of the individual learner" (pp. 38-39). Cognitive engineering saw the rise of "user-centered design" and human factors that were concerned with applying "scientific knowledge of humans to the design of man-machine interface systems" (p. 44). The UCP had a great effect on information science in the "design of information systems" (p. 47). Prior to the UCP's explicit proposal by Brenda Dervin and M. Nilan in 1986, systems design was dominated by the "physical or system-oriented paradigm" (p. 48). The physical paradigm held a positivistic and materialistic view of technology and (passive) human interaction, as exemplified by the 1953 Cranfield tests of information retrieval mechanisms. Instead, the UCP focuses on "users rather than systems" by making the perceptions of individual information users the "centerpiece consideration for information service and system design" (pp. 47-48).
Bruce briefly touches on the various schools of thought within the user-oriented paradigm, such as the cognitive/self studies approach with its emphasis on an individual's knowledge structures or model of the world [e.g., Belkin (1990)], the cognitive/context studies approach that focuses on "context in explaining variations in information behavior" [e.g., Savolainen (1995) and Dervin's (1999) sensemaking], and the social constructionism/discourse analytic theory with its focus on language, not mental/knowledge constructs, as the primary shaper of the world as a system of intersubjective meanings [e.g., Talja 1996] (pp. 53-54). Drawing from the rich tradition of user-oriented research, Bruce attempts to gain a metatheoretical understanding of the Internet as a phenomenon by combining Dervin's (1996) "micromoments of human usings" with the French philosopher Bruno Latour's (1999) "conception of circulating reference" to form what I term the Metatheory of Circulating Usings (pp. ix, 56, 60). According to Bruce, Latour's concept is designed to bridge "the gap between mind and object" by engaging in a "succession of finely grained transformations that construct and transfer truth about the object" through a chain of "microtranslations" from "matter to form," thereby connecting mind and object (p. 56). The connection works as long as the chain remains unbroken. The nature of this chain of "information producing translations" is such that as one moves away from the object, one experiences a "reduction" of the object's "locality, particularity, materiality, multiplicity and continuity," while simultaneously gaining the "amplification" of its "compatibility, standardization, text, calculation, circulation, and relative universality" (p. 57).
    Bruce points out that Dervin is also concerned about how "we look at the world" in terms of "information needs and seeking" (p. 60). She maintains that information scientists traditionally view information seeking and needs in terms of "contexts, users, and systems." Dervin questions whether or not, from a user's point of view, these three "points of interest" even exist. Rather, it is the "micromoments of human usings" [emphasis original], and the "world viewings, seekings, and valuings" that comprise them, that are real (p. 60). Using his metatheory, Bruce represents the Internet, the "object" of study, as a "chain of transformations made up of the micromoments of human usings" (p. 60). The Internet then is a "composite of usings" that, through research and study, is continuously reduced in complexity while its "essence" and "explanation" are amplified (p. 60). Bruce plans to use the Metatheory of Circulating Usings as an analytical "lens" to "tease out a characterization of the micromoments of Internet usings" from previous research on the Internet, thereby exposing "the user's view of the Internet" (pp. 60-61). In Chapter 4 (Users of the Internet), Bruce presents the research data for the study. He begins with an explanation of the limits of the data and, to a certain extent, the study itself. The perspective is that of the Internet user, with a focus on use, not nonuse, thereby excluding issues such as the digital divide and universal service. The research is limited to Internet users "in modern economies around the world" (p. 60). The data is a synthesis of research from many disciplines, but mainly from those "associated with the information field" with its traditional focus on users, systems, and context rather than usings (p. 70). Bruce then presents an extensive summary of the research results from a massive literature review of available Internet studies.
He examines the research for each study group in order of the amount of data available, starting with the most studied group, professional users ("academics, librarians, and teachers"), followed by "the younger generation" ("college students, youths, and young adults"), users of e-government information and e-business services, and ending with the general public (the least studied group) (p. 70). Bruce does a masterful job of condensing and summarizing a vast amount of research data in 49 pages. Although there is too much to recapitulate here, one can get a sense of the results by looking at the areas of data examined for one of the study groups: academic Internet users. There is data on their frequency of use, reasons for nonuse, length of use, specific types of use (e.g., research, teaching, administration), use of discussion lists, use of e-journals, use of Web browsers and search engines, how academics learn to use Web tools and services (mainly by self-instruction), factors affecting use, and information-seeking habits. Bruce's goal in presenting all this research data is to provide "the foundation for constructs of the Internet that can inform stakeholders who will play a role in determining how the Internet will develop" (p. 129). These constructs are presented in Chapter 5.
    Bruce begins Chapter 5 (The Users' View of the Internet) by pointing out that the Internet not only exists as a physical entity of hardware, software, and networked connectivity, but also as a mental representation or knowledge structure constructed by users based on their usings. These knowledge structures or constructs "allow people to interpret and make sense of things" by functioning as a link between the new, unknown thing and known thing(s) (p. 158). The knowledge structures or using constructs continually evolve as people use the Internet over time, and represent the user's view of the Internet. To capture the users' view of the Internet from the research literature, Bruce uses his Metatheory of Circulating Usings. He recapitulates the theory, casting it more closely to the study of Internet use than previously. Here the reduction component provides a more detailed "understanding of the individual users involved in the micromoment of Internet using" while simultaneously the amplification component increases our understanding of the "generalized construct of the Internet" (p. 158). From this point on, Bruce presents a relatively detailed users' view of the Internet. He starts by examining Internet usings, which are composed of three parts: using space, using literacies, and Internet space. According to Bruce, using space is a using horizon likened to a "sphere of influence," comfortable and intimate, in which an individual interacts with the Internet successfully (p. 164). It is a "composite of individual (professional nonwork) constructs of Internet utility" (p. 165). Using literacies are the groups of skills or tools that an individual must acquire for successful interaction with the Internet. These literacies serve to link the using space with the Internet space. They are usually self-taught and form individual standards of successful or satisfactory usings that can be (and often are) at odds with the standards of the information profession.
Internet space is, according to Bruce, a user construct that perceives the Internet as a physical, tangible place separate from using space. Bruce concludes that the user's view of the Internet yields six "principles" (p. 173): "Internet using is proof of concept" and occurs in contexts; using space is created through using frequency; individuals use literacies to explore and utilize Internet space; Internet space "does not require proof of concept, and is often influenced by the perceptions and usings of others"; and "the user's view of the Internet is upbeat and optimistic" (pp. 173-175). He ends with a section describing who the Internet stakeholders are. Bruce defines them as Internet hardware/software developers, professional users practicing their profession in both familiar and transformational ways, and individuals using the Internet "for the tasks and pleasures of everyday life" (p. 176).
  9. Lucas, W.; Topi, H.: Form and function : the impact of query term and operator usage on Web search results (2002) 0.07
    Abstract
Conventional wisdom holds that queries to information retrieval systems will yield more relevant results if they contain multiple topic-related terms and use Boolean and phrase operators to enhance interpretation. Although studies have shown that the users of Web-based search engines typically enter short, term-based queries and rarely use search operators, little information exists concerning the effects of term and operator usage on the relevancy of search results. In this study, search engine users formulated queries on eight search topics. Each query was submitted to the user-specified search engine, and relevancy ratings for the retrieved pages were assigned. Expert-formulated queries were also submitted and provided a basis for comparing relevancy ratings across search engines. Data analysis based on our research model of the term and operator factors affecting relevancy was then conducted. The results show that the difference in the number of terms between expert and nonexpert searches, the percentage of matching terms between those searches, and the erroneous use of nonsupported operators in nonexpert searches explain most of the variation in the relevancy of search results. These findings highlight the need for designing search engine interfaces that provide greater support in the areas of term selection and operator usage.
  10. Evans, H.K.; Ovalle, J.; Green, S.: Rockin' robins : do congresswomen rule the roost in the Twittersphere? (2016) 0.06
    Abstract
    Recent work by Evans, Cordova, and Sipole (2014) reveals that in the two months leading up to the 2012 election, female House candidates used the social media site Twitter more often than male candidates. Not only did female candidates tweet more often, but they also spent more time attacking their opponents and discussing important issues in American politics. In this article, we examine whether the female winners of those races acted differently than the male winners in the 2012 election, and whether they differed in their tweeting-style during two months in the summer of 2013. Using a hand-coded content analysis of every tweet from each member in the U.S. House of Representatives in June and July of 2013, we show that women differ from their male colleagues in their frequency and type of tweeting, and note some key differences between the period during the election and the period after. This article suggests that context greatly affects representatives' Twitter-style.
    Date
    22. 1.2016 11:51:19
  11. Dalip, D.H.; Gonçalves, M.A.; Cristo, M.; Calado, P.: ¬A general multiview framework for assessing the quality of collaboratively created content on web 2.0 (2017) 0.06
    Abstract
    User-generated content is one of the most interesting phenomena of current published media, as users are now able not only to consume, but also to produce content in a much faster and easier manner. However, such freedom also carries concerns about content quality. In this work, we propose an automatic framework to assess the quality of collaboratively generated content. Quality is addressed as a multidimensional concept, modeled as a combination of independent assessments, each regarding different quality dimensions. Accordingly, we adopt a machine-learning (ML)-based multiview approach to assess content quality. We perform a thorough analysis of our framework on two different domains: Questions and Answer Forums and Collaborative Encyclopedias. This allowed us to better understand when and how the proposed multiview approach is able to provide accurate quality assessments. Our main contributions are: (a) a general ML multiview framework that takes advantage of different views of quality indicators; (b) the improvement (up to 30%) in quality assessment over the best state-of-the-art baseline methods; (c) a thorough feature and view analysis regarding impact, informativeness, and correlation, based on two distinct domains.
    Date
    16.11.2017 13:04:22
  12. Weibel, S.: ¬An architecture for scholarly publishing on the World Wide Web (1995) 0.06
    Abstract
OCLC distributes several scholarly journals under its Electronic Journals Online programme, acting, in effect, as an 'electronic printer' for scholarly publishers. It is prototyping a WWW-accessible version of these journals. Describes the problems encountered, details some of the short-term solutions, and highlights changes to existing standards that will enhance the use of the WWW for scholarly electronic publishing.
    Date
    23. 7.1996 10:22:20
  13. Chung, Y.-M.; Noh, Y.-H.: Developing a specialized directory system by automatically classifying Web documents (2003) 0.05
    Abstract
This study developed a specialized directory system using an automatic classification technique. Economics was selected as the subject field for the classification experiments with Web documents. The classification scheme of the directory follows the DDC, and subject terms representing each class number or subject category were selected from the DDC table to construct a representative term dictionary. In collecting and classifying the Web documents, various strategies were tested in order to find the optimal thresholds. In the classification experiments, Web documents in economics were classified into a total of 757 hierarchical subject categories built from the DDC scheme. The first and second experiments using the representative term dictionary resulted in relatively high precision ratios of 77 and 60%, respectively. The third experiment employing a machine learning-based k-nearest neighbours (kNN) classifier in a closed experimental setting achieved a precision ratio of 96%. This implies that it is possible to enhance the classification performance by applying a hybrid method combining a dictionary-based technique and a kNN classifier.
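As a rough illustration of the dictionary-based step this abstract describes, a document can be scored against per-category lists of representative terms and assigned to every category that clears a threshold. The categories, terms, and threshold below are invented placeholders, not the authors' actual DDC term dictionary or tuned thresholds:

```python
def classify(doc_text, term_dict, threshold=2):
    """Dictionary-based categorization sketch.

    term_dict: category -> set of representative terms.
    Returns (category, hits) pairs for every category whose
    representative terms occur at least `threshold` times in the
    document, sorted by descending hit count.
    """
    tokens = doc_text.lower().split()
    results = []
    for category, terms in term_dict.items():
        hits = sum(1 for t in tokens if t in terms)  # count term matches
        if hits >= threshold:
            results.append((category, hits))
    return sorted(results, key=lambda pair: -pair[1])
```

A hybrid system like the one described would then pass low-confidence documents on to a kNN classifier instead of relying on the dictionary alone.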
  14. Ross, N.C.M.; Wolfram, D.: End user searching on the Internet : an analysis of term pair topics submitted to the Excite search engine (2000) 0.05
    Abstract
    Queries submitted to the Excite search engine were analyzed for subject content based on the cooccurrence of terms within multiterm queries. More than 1000 of the most frequently cooccurring term pairs were categorized into one or more of 30 developed subject areas. Subject area frequencies and their cooccurrences with one another were tallied and analyzed using hierarchical cluster analysis and multidimensional scaling. The cluster analyses revealed several anticipated and a few unanticipated groupings of subjects, resulting in several well-defined high-level clusters of broad subject areas. Multidimensional scaling of subject cooccurrences revealed similar relationships among the different subject categories. Applications that arise from a better understanding of the topics users search and their relationships are discussed
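The co-occurrence counting at the heart of such a term-pair analysis can be sketched in a few lines; this is a minimal illustration only (the subject categorization, clustering, and scaling steps are omitted, and the sample queries in the test are invented):

```python
from collections import Counter
from itertools import combinations

def cooccurring_pairs(queries, top_n=10):
    """Count unordered term pairs co-occurring within multiterm queries.

    Each query contributes every unordered pair of its distinct terms;
    the most frequent pairs across the log are returned.
    """
    pairs = Counter()
    for q in queries:
        terms = sorted(set(q.lower().split()))  # dedupe, canonical order
        pairs.update(combinations(terms, 2))    # all unordered pairs
    return pairs.most_common(top_n)
```

In a study like the one abstracted, the resulting high-frequency pairs would then be hand-assigned to subject areas before clustering.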
  15. Bane, A.F.; Milheim, W.D.: Internet insights : how academics are using the Internet (1995) 0.05
    Abstract
Reports on a survey of academic use of the Internet. It was sent to 231 randomly chosen discussion groups from a list of Scholarly Electronic Conferences on the Internet. 1,536 surveys were completed and returned. Survey questions included general questions about computer expertise, frequency of electronic mail utilization, access to various telnet sites, frequency of connections to a variety of Internet sources, access to several file transfer protocol locations, use of navigational aids, and open-ended questions covering the importance and use of the Internet to the survey respondents. Reports the results and reactions to the survey.
  16. Kavcic-Colic, A.: Archiving the Web : some legal aspects (2003) 0.05
    Abstract
    Technological developments have changed the concepts of publication, reproduction and distribution. However, legislation, and in particular the Legal Deposit Law has not adjusted to these changes - it is very restrictive in the sense of protecting the rights of authors of electronic publications. National libraries and national archival institutions, being aware of their important role in preserving the written and spoken cultural heritage, try to find different legal ways to live up to these responsibilities. This paper presents some legal aspects of archiving Web pages, examines the harvesting of Web pages, provision of public access to pages, and their long-term preservation.
    Date
    10.12.2005 11:22:13
  17. Bertot, J.C.; McClure, C.R.: Developing assessment techniques for statewide electronic networks (1996) 0.05
    Abstract
    Reports on a study assessing statewide electronic network initiatives using the Maryland Sailor network as a case study. Aims to develop assessment techniques and indicators for the evaluation of statewide electronic networks. Defines key components of the statewide networked environment. Develops and operationalizes performance measures for networked information technologies and services provided through statewide networks. Explores several methods of evaluating statewide electronic networks. Identifies and discusses key issues and preliminary findings that affect the successful evaluation of statewide networked services
    Date
    7.11.1998 20:27:22
  18. Lee, L.-H.; Chen, H.-H.: Mining search intents for collaborative cyberporn filtering (2012) 0.05
    Abstract
    This article presents a search-intent-based method to generate pornographic blacklists for collaborative cyberporn filtering. A novel porn-detection framework that can find newly appearing pornographic web pages by mining search query logs is proposed. First, suspected queries are identified along with their clicked URLs by an automatically constructed lexicon. Then, a candidate URL is determined if the number of clicks satisfies majority voting rules. Finally, a candidate whose URL contains at least one categorical keyword will be included in a blacklist. Several experiments are conducted on an MSN search porn dataset to demonstrate the effectiveness of our method. The resulting blacklist generated by our search-intent-based method achieves high precision (0.701) while maintaining a favorably low false-positive rate (0.086). The experiments of a real-life filtering simulation reveal that our proposed method with its accumulative update strategy can achieve 44.15% of a macro-averaging blocking rate, when the update frequency is set to 1 day. In addition, the overblocking rates are less than 9% with time change due to the strong advantages of our search-intent-based method. This user-behavior-oriented method can be easily applied to search engines for incorporating only implicit collective intelligence from query logs without other efforts. In practice, it is complementary to intelligent content analysis for keeping up with the changing trails of objectionable websites from users' perspectives.
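The majority-voting step described in this abstract might be sketched as follows. The lexicon, categorical keyword list, vote threshold, and URLs are invented placeholders, not the authors' automatically constructed resources:

```python
from collections import Counter

# Illustrative stand-ins; the paper builds its lexicon automatically.
SUSPECT_LEXICON = {"porn", "xxx", "adult"}
CATEGORY_KEYWORDS = ("porn", "xxx", "adult")

def build_blacklist(query_log, min_votes=3):
    """Blacklist-generation sketch over (query, clicked_url) pairs.

    1. Keep clicks whose query contains a lexicon term (suspected query).
    2. Majority voting: keep URLs clicked at least `min_votes` times
       from suspected queries.
    3. Keep only candidates whose URL contains a categorical keyword.
    """
    votes = Counter()
    for query, url in query_log:
        if any(t in query.lower().split() for t in SUSPECT_LEXICON):
            votes[url] += 1
    return {url for url, n in votes.items()
            if n >= min_votes
            and any(kw in url.lower() for kw in CATEGORY_KEYWORDS)}
```

The accumulative update strategy mentioned in the abstract would correspond to re-running such a step over each new day's query log and merging the results.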
  19. Gaines, B.R.; Chen, L.-J.; Shaw, M.L.G.: Modeling the human factors of scholarly communities supported through the Internet and World Wide Web (1997) 0.05
    Abstract
Provides a framework for analysing the utility, usability and likeability of net and web services and illustrates its application to significant aspects of supporting scholarly communities. The utility of the net and web are measured in terms of the growth of usage, and the different services involved are distinguished in terms of their specific utilities. A layered protocol model is used to model discourse through the net and is extended to encompass interaction in communities. An operational criterion for distinguishing different communities is defined in terms of the types of awareness that resource providers and users have of one another. Develops a temporal model of discourse that enables the spectrum of services ranging from real-time discourse to long-term publication to be analyzed in a unified framework. The dimensions of awareness and time are used to characterise and compare the full range of net services and model their unification through the next generation of web browsers.
    Date
    17. 7.1998 22:22:58
  20. Metzger, M.J.: Making sense of credibility on the Web : Models for evaluating online information and recommendations for future research (2007) 0.05
    Abstract
    This article summarizes much of what is known from the communication and information literacy fields about the skills that Internet users need to assess the credibility of online information. The article reviews current recommendations for credibility assessment, empirical research on how users determine the credibility of Internet information, and describes several cognitive models of online information evaluation. Based on the literature review and critique of existing models of credibility assessment, recommendations for future online credibility education and practice are provided to assist users in locating reliable information online. The article concludes by offering ideas for research and theory development on this topic in an effort to advance knowledge in the area of credibility assessment of Internet-based information.
