Search (269 results, page 1 of 14)

  • theme_ss:"Suchmaschinen"
  1. Li, L.; Shang, Y.; Zhang, W.: Improvement of HITS-based algorithms on Web documents (2002) 0.38
    0.37651956 = product of:
      0.6589092 = sum of:
        0.062032532 = product of:
          0.18609759 = sum of:
            0.18609759 = weight(_text_:3a in 2514) [ClassicSimilarity], result of:
              0.18609759 = score(doc=2514,freq=2.0), product of:
                0.3311239 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.03905679 = queryNorm
                0.56201804 = fieldWeight in 2514, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2514)
          0.33333334 = coord(1/3)
        0.26318175 = weight(_text_:2f in 2514) [ClassicSimilarity], result of:
          0.26318175 = score(doc=2514,freq=4.0), product of:
            0.3311239 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03905679 = queryNorm
            0.7948135 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
        0.07051322 = weight(_text_:based in 2514) [ClassicSimilarity], result of:
          0.07051322 = score(doc=2514,freq=18.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.59920543 = fieldWeight in 2514, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
        0.26318175 = weight(_text_:2f in 2514) [ClassicSimilarity], result of:
          0.26318175 = score(doc=2514,freq=4.0), product of:
            0.3311239 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03905679 = queryNorm
            0.7948135 = fieldWeight in 2514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=2514)
      0.5714286 = coord(4/7)
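     The score breakdowns in this listing are Lucene explain() output for the ClassicSimilarity (TF-IDF) ranking the catalogue uses; the odd terms "3a" and "2f" are presumably percent-escape fragments (%3A, %2F) of an indexed URL. As a minimal sketch, the snippet below reproduces the contribution of the "_text_:3a" term from the constants shown in the tree, assuming the standard ClassicSimilarity formulas idf = 1 + ln(maxDocs/(docFreq+1)) and tf = sqrt(freq).

```python
from math import log, sqrt

# Constants copied from the explain tree for term "_text_:3a" in doc 2514
maxDocs, docFreq, freq = 44218, 24, 2.0
queryNorm, fieldNorm = 0.03905679, 0.046875

idf = 1 + log(maxDocs / (docFreq + 1))   # 8.478011
tf = sqrt(freq)                          # 1.4142135
queryWeight = idf * queryNorm            # 0.3311239
fieldWeight = tf * idf * fieldNorm       # 0.5620180
print(queryWeight * fieldWeight)         # ~0.18609759, the raw term score before coord(1/3)
```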
    
    Abstract
     In this paper, we present two ways to improve the precision of HITS-based algorithms on Web documents. First, by analyzing the limitations of current HITS-based algorithms, we propose a new weighted HITS-based method that assigns appropriate weights to in-links of root documents. Then, we combine content analysis with HITS-based algorithms and study the effects of four representative relevance scoring methods, VSM, Okapi, TLS, and CDR, using a set of broad topic queries. Our experimental results show that our weighted HITS-based method performs significantly better than Bharat's improved HITS algorithm. When we combine our weighted HITS-based method or Bharat's HITS algorithm with any of the four relevance scoring methods, the combined methods are only marginally better than our weighted HITS-based method. Among the four relevance scoring methods, there is no significant quality difference when they are combined with a HITS-based algorithm.
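     The abstract does not spell out the proposed in-link weighting, so the sketch below shows only the standard HITS update the paper builds on: hub and authority scores computed by alternating matrix-vector products and normalisation. The adjacency matrix and iteration count are illustrative assumptions; the weighted variant would replace the uniform in-links of root documents with weighted entries.

```python
import numpy as np

def hits(adj, iters=50):
    """Plain HITS; adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    hubs, auths = np.ones(n), np.ones(n)
    for _ in range(iters):
        auths = adj.T @ hubs               # authority: hub mass of in-linking pages
        hubs = adj @ auths                 # hub: authority mass of linked-to pages
        auths /= np.linalg.norm(auths)     # normalise to keep scores bounded
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Toy root/base graph: page 2 is pointed to by pages 0 and 1
adj = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 0]], dtype=float)
print(hits(adj))
```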
    Content
     Cf.: http://delab.csd.auth.gr/~dimitris/courses/ir_spring06/page_rank_computing/p527-li.pdf. See also: http://www2002.org/CDROM/refereed/643/.
  2. Ortiz-Cordova, A.; Jansen, B.J.: Classifying web search queries to identify high revenue generating customers (2012) 0.04
    0.036867417 = product of:
      0.2580719 = sum of:
        0.2580719 = weight(_text_:businesses in 279) [ClassicSimilarity], result of:
          0.2580719 = score(doc=279,freq=6.0), product of:
            0.29628533 = queryWeight, product of:
              7.5860133 = idf(docFreq=60, maxDocs=44218)
              0.03905679 = queryNorm
            0.87102485 = fieldWeight in 279, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              7.5860133 = idf(docFreq=60, maxDocs=44218)
              0.046875 = fieldNorm(doc=279)
      0.14285715 = coord(1/7)
    
    Abstract
    Traffic from search engines is important for most online businesses, with the majority of visitors to many websites being referred by search engines. Therefore, an understanding of this search engine traffic is critical to the success of these websites. Understanding search engine traffic means understanding the underlying intent of the query terms and the corresponding user behaviors of searchers submitting keywords. In this research, using 712,643 query keywords from a popular Spanish music website relying on contextual advertising as its business model, we use a k-means clustering algorithm to categorize the referral keywords with similar characteristics of onsite customer behavior, including attributes such as clickthrough rate and revenue. We identified 6 clusters of consumer keywords. Clusters range from a large number of users who are low impact to a small number of high impact users. We demonstrate how online businesses can leverage this segmentation clustering approach to provide a more tailored consumer experience. Implications are that businesses can effectively segment customers to develop better business models to increase advertising conversion rates.
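     As a rough illustration of the clustering step described above, the sketch below groups synthetic per-keyword behaviour features (clickthrough rate and revenue per visit, standing in for the attributes named in the abstract) into six clusters with k-means. The data, feature choice, and scaling are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for per-keyword onsite behaviour: clickthrough rate, revenue per visit
X = np.column_stack([rng.beta(2, 20, 5000), rng.gamma(1.5, 0.4, 5000)])
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))     # put features on a common scale first
for k in range(6):                         # cluster size and mean behaviour per cluster
    print(k, (labels == k).sum(), X[labels == k].mean(axis=0).round(3))
```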
  3. Auletta, K.: Googled : the end of the world as we know it (2009) 0.03
    0.03285758 = product of:
      0.115001515 = sum of:
        0.015669605 = weight(_text_:based in 1991) [ClassicSimilarity], result of:
          0.015669605 = score(doc=1991,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.13315678 = fieldWeight in 1991, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=1991)
        0.09933191 = weight(_text_:businesses in 1991) [ClassicSimilarity], result of:
          0.09933191 = score(doc=1991,freq=2.0), product of:
            0.29628533 = queryWeight, product of:
              7.5860133 = idf(docFreq=60, maxDocs=44218)
              0.03905679 = queryNorm
            0.3352576 = fieldWeight in 1991, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.5860133 = idf(docFreq=60, maxDocs=44218)
              0.03125 = fieldNorm(doc=1991)
      0.2857143 = coord(2/7)
    
    Abstract
     There are companies that create waves and those that ride or are drowned by them. This is a ride on the Google wave, and the fullest account of how it formed and crashed into traditional media businesses. With unprecedented access to Google's founders and executives, as well as to those in media who are struggling to keep their heads above water, Ken Auletta reveals how the industry is being disrupted and redefined. On one level Auletta uses Google as a stand-in for the digital revolution as a whole - and goes inside Google's closed-door meetings, introducing Google's notoriously private founders, Larry Page and Sergey Brin, as well as those who work with - and against - them. In "Googled", the reader discovers the 'secret sauce' of the company's success and why the worlds of 'new' and 'old' media often communicate as if residents of different planets. It may send chills down traditionalists' spines, but it's a crucial roadmap to the future of media business: the Google story may well be the canary in the coal mine. "Googled" is candid, objective and authoritative - based on extensive research including in-house at Google HQ. Crucially, it's not just a history or reportage: it's forward-looking. This book is ahead of the curve, unlike other Google books, which tend to be near-histories, somewhat starstruck, now out of date, or which fail to look at the full synthesis of business and technology.
  4. Lewandowski, D.: Evaluating the retrieval effectiveness of web search engines using a representative query sample (2015) 0.03
    0.03016981 = product of:
      0.10559433 = sum of:
        0.023504408 = weight(_text_:based in 2157) [ClassicSimilarity], result of:
          0.023504408 = score(doc=2157,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.19973516 = fieldWeight in 2157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2157)
        0.08208992 = weight(_text_:great in 2157) [ClassicSimilarity], result of:
          0.08208992 = score(doc=2157,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.37327147 = fieldWeight in 2157, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2157)
      0.2857143 = coord(2/7)
    
    Abstract
    Search engine retrieval effectiveness studies are usually small scale, using only limited query samples. Furthermore, queries are selected by the researchers. We address these issues by taking a random representative sample of 1,000 informational and 1,000 navigational queries from a major German search engine and comparing Google's and Bing's results based on this sample. Jurors were found through crowdsourcing, and data were collected using specialized software, the Relevance Assessment Tool (RAT). We found that although Google outperforms Bing in both query types, the difference in the performance for informational queries was rather low. However, for navigational queries, Google found the correct answer in 95.3% of cases, whereas Bing only found the correct answer 76.6% of the time. We conclude that search engine performance on navigational queries is of great importance, because users in this case can clearly identify queries that have returned correct results. So, performance on this query type may contribute to explaining user satisfaction with search engines.
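     A back-of-the-envelope check of the navigational-query gap reported above, assuming all 1,000 navigational queries were judged for each engine (a two-proportion z statistic computed for illustration; not a calculation from the paper):

```python
from math import sqrt

google_ok, bing_ok, n = 953, 766, 1000         # 95.3% vs. 76.6% correct answers
p1, p2 = google_ok / n, bing_ok / n
p = (google_ok + bing_ok) / (2 * n)            # pooled proportion
z = (p1 - p2) / sqrt(p * (1 - p) * (2 / n))    # two-proportion z statistic
print(f"Google {p1:.1%}, Bing {p2:.1%}, z = {z:.1f}")
```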
  5. Vise, D.A.; Malseed, M.: ¬The Google story (2005) 0.03
    0.027478807 = product of:
      0.09617582 = sum of:
        0.08691542 = weight(_text_:businesses in 5937) [ClassicSimilarity], result of:
          0.08691542 = score(doc=5937,freq=2.0), product of:
            0.29628533 = queryWeight, product of:
              7.5860133 = idf(docFreq=60, maxDocs=44218)
              0.03905679 = queryNorm
            0.2933504 = fieldWeight in 5937, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.5860133 = idf(docFreq=60, maxDocs=44218)
              0.02734375 = fieldNorm(doc=5937)
        0.009260397 = product of:
          0.018520795 = sum of:
            0.018520795 = weight(_text_:22 in 5937) [ClassicSimilarity], result of:
              0.018520795 = score(doc=5937,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.1354154 = fieldWeight in 5937, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=5937)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Social phenomena happen, and the historians follow. So it goes with Google, the latest star shooting through the universe of trend-setting businesses. This company has even entered our popular lexicon: as many note, "Google" has moved beyond noun to verb, becoming an action which most tech-savvy citizens at the turn of the twenty-first century recognize and in fact do, on a daily basis. It's this wide societal impact that fascinated authors David Vise and Mark Malseed, who came to the book with well-established reputations in investigative reporting. Vise authored the bestselling The Bureau and the Mole, and Malseed contributed significantly to two Bob Woodward books, Bush at War and Plan of Attack. The kind of voluminous research and behind-the-scenes insight in which both writers specialize, and on which their earlier books rested, comes through in The Google Story. The strength of the book comes from its command of many small details, and its focus on the human side of the Google story, as opposed to the merely academic one. Some may prefer a dryer, more analytic approach to Google's impact on the Internet, like The Search or books that tilt more heavily towards bits and bytes on the spectrum between technology and business, like The Singularity is Near. Those wanting to understand the motivations and personal growth of founders Larry Page and Sergey Brin and CEO Eric Schmidt, however, will enjoy this book. Vise and Malseed interviewed over 150 people, including numerous Google employees, Wall Street analysts, Stanford professors, venture capitalists, even Larry Page's Cub Scout leader, and their comprehensiveness shows. As the narrative unfolds, readers learn how Google grew out of the intellectually fertile and not particularly directed friendship between Page and Brin; how the founders attempted to peddle early versions of their search technology to different Silicon Valley firms for $1 million; how Larry and Sergey celebrated their first investor's check with breakfast at Burger King; how the pair initially housed their company in a Palo Alto office, then eventually moved to a futuristic campus dubbed the "Googleplex"; how the company found its financial footing through keyword-targeted Web ads; how various products like Google News, Froogle, and others were cooked up by an inventive staff; how Brin and Page proved their mettle as tough businessmen through negotiations with AOL Europe and their controversial IPO process, among other instances; and how the company's vision for itself continues to grow, such as geographic expansion to China and cooperation with Craig Venter on the Human Genome Project. Like the company it profiles, The Google Story is a bit of a wild ride, and fun, too. Its first appendix lists 23 "tips" which readers can use to get more utility out of Google. The second contains the intelligence test which Google Research offers to prospective job applicants, and shows the sometimes zany methods of this most unusual business. Through it all, Vise and Malseed synthesize a variety of fascinating anecdotes and speculation about Google, and readers seeking a first draft of the history of the company will enjoy an easy read.
    Date
    3. 5.1997 8:44:22
  6. Bradley, P.: ¬The great search-engine con-trick (1999) 0.02
    0.023454266 = product of:
      0.16417985 = sum of:
        0.16417985 = weight(_text_:great in 3853) [ClassicSimilarity], result of:
          0.16417985 = score(doc=3853,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.74654293 = fieldWeight in 3853, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.09375 = fieldNorm(doc=3853)
      0.14285715 = coord(1/7)
    
  7. Moghaddam, A.I.; Parirokh, M.: ¬A comparative study on overlapping of search results in metasearch engines and their common underlying search engines (2006) 0.02
    0.021967653 = product of:
      0.07688678 = sum of:
        0.022160169 = weight(_text_:based in 4741) [ClassicSimilarity], result of:
          0.022160169 = score(doc=4741,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.18831211 = fieldWeight in 4741, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=4741)
        0.05472661 = weight(_text_:great in 4741) [ClassicSimilarity], result of:
          0.05472661 = score(doc=4741,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.24884763 = fieldWeight in 4741, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=4741)
      0.2857143 = coord(2/7)
    
    Abstract
     Purpose - In the age of the information explosion, effective access to the most relevant resources available on the internet is one of the chief concerns for users. Familiarity with types of search tools is required. One of the search tools designed to solve this problem for internet users is the metasearch engine (MSE). The purpose of this paper is to assess how far this search tool is truly effective in solving users' problems of Internet access. Design/methodology/approach - This research examines MSEs in terms of recall ratio in retrieving documents indexed and ranked highly (1-10) within their common underlying search engines (SEs). Five general MSEs in English, which are free of charge, were utilized in this research. In order to calculate the recall ratio of MSEs, five well-known MSEs which have four common underlying SEs were chosen. Then, selected keywords were searched in each SE and MSE. Two lists were prepared: one list was based on the first ten results recalled by the SE, and the other was based on the first 40 results recalled by the MSE. These lists were compared with each other. An equation was utilized in this process. Findings - The findings indicate that MSEs are more likely to find the same documents which are common in their underlying search engines. Research limitations/implications - This paper offers a rigorous quantitative method for comparative evaluation of MSEs. Practical implications - Furthermore, MSEs which have a successful recall ratio are identified, which is a finding of great practical relevance to library and information practitioners helping users exploit the Internet to best effect. Originality/value - This paper provides clear descriptive evidence for the underlying retrieval patterns of important search tools which are commonly used by internet users today.
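     The exact equation is not given in the abstract; one plausible reading of the comparison it describes (how many of an underlying engine's top-10 results reappear in the metasearch engine's top-40 list) can be sketched as follows, with hypothetical result lists:

```python
def recall_ratio(se_top10, mse_top40):
    """Share of one underlying engine's top-10 URLs found in the MSE's top-40 list."""
    return len(set(se_top10) & set(mse_top40)) / len(se_top10)

se_top10 = [f"u{i}" for i in range(10)]                          # hypothetical SE results
mse_top40 = ["u1", "u4", "u7"] + [f"x{i}" for i in range(37)]    # hypothetical MSE results
print(recall_ratio(se_top10, mse_top40))                         # 0.3
```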
  8. Couvering, E. van: ¬The economy of navigation : search engines, search optimisation and search results (2007) 0.02
    0.017737841 = product of:
      0.12416488 = sum of:
        0.12416488 = weight(_text_:businesses in 379) [ClassicSimilarity], result of:
          0.12416488 = score(doc=379,freq=2.0), product of:
            0.29628533 = queryWeight, product of:
              7.5860133 = idf(docFreq=60, maxDocs=44218)
              0.03905679 = queryNorm
            0.41907197 = fieldWeight in 379, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              7.5860133 = idf(docFreq=60, maxDocs=44218)
              0.0390625 = fieldNorm(doc=379)
      0.14285715 = coord(1/7)
    
    Abstract
     The political economy of communication focuses critically on what structural issues in mass media - ownership, labour practices, professional ethics, and so on - mean for products of those mass media and thus for society more generally. In the case of new media, recent political economic studies have looked at the technical infrastructure of the Internet and also at Internet usage. However, political economic studies of internet content are only beginning. Recent studies on the phenomenology of the Web, that is, the way the Web is experienced from an individual user's perspective, highlight the centrality of the search engine to most users' experiences of the Web, particularly when they venture beyond familiar Web sites. Search engines are therefore an obvious place to begin the analysis of Web content. An important assumption of this chapter is that internet search engines are media businesses and that the tools developed in media studies can be profitably brought to bear on them. This focus on search engine as industry comes from the critical tradition of the political economy of communications in rejecting the notion that the market alone should be the arbiter of the structure of the media industry, as might be appropriate for other types of products.
  9. Kurzke, C.; Galle, M.; Bathelt, M.: WebAssistant : a user profile specific information retrieval assistant (1998) 0.02
    0.01637174 = product of:
      0.05730109 = sum of:
        0.038780294 = weight(_text_:based in 3559) [ClassicSimilarity], result of:
          0.038780294 = score(doc=3559,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.3295462 = fieldWeight in 3559, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3559)
        0.018520795 = product of:
          0.03704159 = sum of:
            0.03704159 = weight(_text_:22 in 3559) [ClassicSimilarity], result of:
              0.03704159 = score(doc=3559,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.2708308 = fieldWeight in 3559, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3559)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
     Describes the concept of a proxy-based information classification and filtering utility named WebAssistant. On behalf of users, a private view of the WWW is generated based on a previously determined profile. This profile is created by monitoring user and group activities when browsing WWW pages. Additional features are integrated to allow easy interoperability among workgroups with similar project interests, to maintain personal and common hotlists with automatic modification checks, and to provide a sophisticated search engine front-end.
    Date
    1. 8.1996 22:08:06
  10. Koch, T.: Quality-controlled subject gateways : definitions, typologies, empirical overview (2000) 0.02
    0.01637174 = product of:
      0.05730109 = sum of:
        0.038780294 = weight(_text_:based in 631) [ClassicSimilarity], result of:
          0.038780294 = score(doc=631,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.3295462 = fieldWeight in 631, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=631)
        0.018520795 = product of:
          0.03704159 = sum of:
            0.03704159 = weight(_text_:22 in 631) [ClassicSimilarity], result of:
              0.03704159 = score(doc=631,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.2708308 = fieldWeight in 631, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=631)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    'Quality-controlled subject gateways' are Internet services which apply a rich set of quality measures to support systematic resource discovery. Considerable manual effort is used to secure a selection of resources which meet quality criteria and to display a rich description of these resources with standards-based metadata. Regular checking and updating ensure good collection management. A main goal is to provide a high quality of subject access through indexing resources using controlled vocabularies and by offering a deep classification structure for advanced searching and browsing. This article provides an initial empirical overview of existing services of this kind, their approaches and technologies, based on proposed working definitions and typologies of subject gateways
    Date
    22. 6.2002 19:37:55
  11. Garcés, P.J.; Olivas, J.A.; Romero, F.P.: Concept-matching IR systems versus word-matching information retrieval systems : considering fuzzy interrelations for indexing Web pages (2006) 0.02
    0.016293434 = product of:
      0.05702702 = sum of:
        0.04379788 = weight(_text_:based in 5288) [ClassicSimilarity], result of:
          0.04379788 = score(doc=5288,freq=10.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.37218451 = fieldWeight in 5288, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5288)
        0.013229139 = product of:
          0.026458278 = sum of:
            0.026458278 = weight(_text_:22 in 5288) [ClassicSimilarity], result of:
              0.026458278 = score(doc=5288,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.19345059 = fieldWeight in 5288, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5288)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This article presents a semantic-based Web retrieval system that is capable of retrieving the Web pages that are conceptually related to the implicit concepts of the query. The concept of concept is managed from a fuzzy point of view by means of semantic areas. In this context, the proposed system improves most search engines that are based on matching words. The key of the system is to use a new version of the Fuzzy Interrelations and Synonymy-Based Concept Representation Model (FIS-CRM) to extract and represent the concepts contained in both the Web pages and the user query. This model, which was integrated into other tools such as the Fuzzy Interrelations and Synonymy based Searcher (FISS) metasearcher and the fz-mail system, considers the fuzzy synonymy and the fuzzy generality interrelations as a means of representing word interrelations (stored in a fuzzy synonymy dictionary and ontologies). The new version of the model, which is based on the study of the cooccurrences of synonyms, integrates a soft method for disambiguating word senses. This method also considers the context of the word to be disambiguated and the thematic ontologies and sets of synonyms stored in the dictionary.
    Date
    22. 7.2006 17:14:12
  12. Bates, M.E.: Quick answers to odd questions (2004) 0.02
    0.015084905 = product of:
      0.052797165 = sum of:
        0.011752204 = weight(_text_:based in 3071) [ClassicSimilarity], result of:
          0.011752204 = score(doc=3071,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.09986758 = fieldWeight in 3071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3071)
        0.04104496 = weight(_text_:great in 3071) [ClassicSimilarity], result of:
          0.04104496 = score(doc=3071,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.18663573 = fieldWeight in 3071, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3071)
      0.2857143 = coord(2/7)
    
    Content
    "One of the things I enjoyed the most when I was a reference librarian was the wide range of questions my clients sent my way. What was the original title of the first Godzilla movie? (Gojira, released in 1954) Who said 'I'm as pure as the driven slush'? (Tallulah Bankhead) What percentage of adults have gone to a jazz performance in the last year? (11%) I have found that librarians, speech writers and journalists have one thing in common - we all need to find information on all kinds of topics, and we usually need the answers right now. The following are a few of my favorite sites for finding answers to those there-must-be-an-answer-out-there questions. - For the electronic equivalent to the "ready reference" shelf of resources that most librarians keep hidden behind their desks, check out RefDesk . It is particularly good for answering factual questions - Where do I get the new Windows XP Service Pack? Where is the 386 area code? How do I contact my member of Congress? - Another resource for lots of those quick-fact questions is InfoPlease, the publishers of the Information Please almanac .- Right now, it's full of Olympics data, but it also has links to facts and factoids that you would look up in an almanac, atlas, or encyclopedia. - If you want numbers, start with the Statistical Abstract of the US. This source, produced by the U.S. Census Bureau, gives you everything from the divorce rate by state to airline cost indexes going back to 1980. It is many librarians' secret weapon for pulling numbers together quickly. - My favorite question is "how does that work?" Haven't you ever wondered how they get that Olympic torch to continue to burn while it is being carried by runners from one city to the next? Or how solar sails manage to propel a spacecraft? For answers, check out the appropriately-named How Stuff Works. - For questions about movies, my first resource is the Internet Movie Database. It is easy to search, is such a popular site that mistakes are corrected quickly, and is a fun place to catch trailers of both upcoming movies and those dating back to the 30s. - When I need to figure out who said what, I still tend to rely on the print sources such as Bartlett's Familiar Quotations . No, the current edition is not available on the web, but - and this is the librarian in me - I really appreciate the fact that I not only get the attribution but I also see the source of the quote. There are far too many quotes being attributed to a celebrity, but with no indication of the publication in which the quote appeared. Take, for example, the much-cited quote of Margaret Meade, "Never doubt that a small group of thoughtful committed people can change the world; indeed, it's the only thing that ever has!" Then see the page on the Institute for Intercultural Studies site, founded by Meade, and read its statement that it has never been able to verify this alleged quote from Meade. While there are lots of web-based sources of quotes (see QuotationsPage.com and Bartleby, for example), unless the site provides the original source for the quotation, I wouldn't rely on the citation. Of course, if you have a hunch as to the source of a quote, and it was published prior to 1923, head over to Project Gutenberg , which includes the full text of over 12,000 books that are in the public domain. When I needed to confirm a quotation of the Red Queen in "Through the Looking Glass", this is where I started. 
- And if you are stumped as to where to go to find information, instead of Googling it, try the Librarians' Index to the Internet. While it is somewhat US-centric, it is a great directory of web resources."
  13. Langville, A.N.; Meyer, C.D.: Google's PageRank and beyond : the science of search engine rankings (2006) 0.02
    0.015084905 = product of:
      0.052797165 = sum of:
        0.011752204 = weight(_text_:based in 6) [ClassicSimilarity], result of:
          0.011752204 = score(doc=6,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.09986758 = fieldWeight in 6, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
        0.04104496 = weight(_text_:great in 6) [ClassicSimilarity], result of:
          0.04104496 = score(doc=6,freq=2.0), product of:
            0.21992016 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03905679 = queryNorm
            0.18663573 = fieldWeight in 6, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=6)
      0.2857143 = coord(2/7)
    
    Abstract
    Why doesn't your home page appear on the first page of search results, even when you query your own name? How do other Web pages always appear at the top? What creates these powerful rankings? And how? The first book ever about the science of Web page rankings, "Google's PageRank and Beyond" supplies the answers to these and other questions and more. The book serves two very different audiences: the curious science reader and the technical computational reader. The chapters build in mathematical sophistication, so that the first five are accessible to the general academic reader. While other chapters are much more mathematical in nature, each one contains something for both audiences. For example, the authors include entertaining asides such as how search engines make money and how the Great Firewall of China influences research. The book includes an extensive background chapter designed to help readers learn more about the mathematics of search engines, and it contains several MATLAB codes and links to sample Web data sets. The philosophy throughout is to encourage readers to experiment with the ideas and algorithms in the text. Any business seriously interested in improving its rankings in the major search engines can benefit from the clear examples, sample code, and list of resources provided. It includes: many illustrative examples and entertaining asides; MATLAB code; accessible and informal style; and complete and self-contained section for mathematics review.
    Content
    Chapter 9. Accelerating the Computation of PageRank: 9.1 An Adaptive Power Method - 9.2 Extrapolation - 9.3 Aggregation - 9.4 Other Numerical Methods Chapter 10. Updating the PageRank Vector: 10.1 The Two Updating Problems and their History - 10.2 Restarting the Power Method - 10.3 Approximate Updating Using Approximate Aggregation - 10.4 Exact Aggregation - 10.5 Exact vs. Approximate Aggregation - 10.6 Updating with Iterative Aggregation - 10.7 Determining the Partition - 10.8 Conclusions Chapter 11. The HITS Method for Ranking Webpages: 11.1 The HITS Algorithm - 11.2 HITS Implementation - 11.3 HITS Convergence - 11.4 HITS Example - 11.5 Strengths and Weaknesses of HITS - 11.6 HITS's Relationship to Bibliometrics - 11.7 Query-Independent HITS - 11.8 Accelerating HITS - 11.9 HITS Sensitivity Chapter 12. Other Link Methods for Ranking Webpages: 12.1 SALSA - 12.2 Hybrid Ranking Methods - 12.3 Rankings based on Traffic Flow Chapter 13. The Future of Web Information Retrieval: 13.1 Spam - 13.2 Personalization - 13.3 Clustering - 13.4 Intelligent Agents - 13.5 Trends and Time-Sensitive Search - 13.6 Privacy and Censorship - 13.7 Library Classification Schemes - 13.8 Data Fusion Chapter 14. Resources for Web Information Retrieval: 14.1 Resources for Getting Started - 14.2 Resources for Serious Study Chapter 15. The Mathematics Guide: 15.1 Linear Algebra - 15.2 Perron-Frobenius Theory - 15.3 Markov Chains - 15.4 Perron Complementation - 15.5 Stochastic Complementation - 15.6 Censoring - 15.7 Aggregation - 15.8 Disaggregation
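     Chapters 9-10 above revolve around the power method for computing and updating the PageRank vector. A minimal sketch of that basic iteration follows; the damping factor, dangling-page handling, and toy graph are illustrative assumptions, not the book's MATLAB code.

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Power-method PageRank; adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    H = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)   # dangling rows -> uniform
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = d * (r @ H) + (1 - d) / n                          # damped power step
    return r / r.sum()

adj = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
print(pagerank(adj).round(3))
```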
  14. Dempsey, L.: ¬The subject gateway : experiences and issues based on the emergence of the Resource Discovery Network (2000) 0.02
    0.015001667 = product of:
      0.052505832 = sum of:
        0.03133921 = weight(_text_:based in 628) [ClassicSimilarity], result of:
          0.03133921 = score(doc=628,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.26631355 = fieldWeight in 628, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0625 = fieldNorm(doc=628)
        0.021166623 = product of:
          0.042333245 = sum of:
            0.042333245 = weight(_text_:22 in 628) [ClassicSimilarity], result of:
              0.042333245 = score(doc=628,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.30952093 = fieldWeight in 628, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=628)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    22. 6.2002 19:36:13
  15. Bensman, S.J.: Eugene Garfield, Francis Narin, and PageRank : the theoretical bases of the Google search engine (2013) 0.02
    0.015001667 = product of:
      0.052505832 = sum of:
        0.03133921 = weight(_text_:based in 1149) [ClassicSimilarity], result of:
          0.03133921 = score(doc=1149,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.26631355 = fieldWeight in 1149, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0625 = fieldNorm(doc=1149)
        0.021166623 = product of:
          0.042333245 = sum of:
            0.042333245 = weight(_text_:22 in 1149) [ClassicSimilarity], result of:
              0.042333245 = score(doc=1149,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.30952093 = fieldWeight in 1149, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1149)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
    Date
    17.12.2013 11:02:22
  16. Carrière, S.J.; Kazman, R.: Webquery : searching and visualising the Web through connectivity (1997) 0.01
    0.01403292 = product of:
      0.04911522 = sum of:
        0.03324025 = weight(_text_:based in 2674) [ClassicSimilarity], result of:
          0.03324025 = score(doc=2674,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.28246817 = fieldWeight in 2674, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=2674)
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 2674) [ClassicSimilarity], result of:
              0.031749934 = score(doc=2674,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 2674, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2674)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
     The WebQuery system offers a powerful new method for searching the Web based on connectivity and content. Examines links among the nodes returned in a keyword-based query. Ranks the nodes, giving the highest rank to the most highly connected nodes. By doing so, finds hot spots on the Web that contain information germane to a user's query. WebQuery not only ranks and filters the results of a Web query; it also extends the result set beyond what the search engine retrieves, by finding interesting sites that are highly connected to those sites returned by the original query. Even with WebQuery filtering and ranking query results, the result set can be enormous. Explores techniques for visualizing the returned information and discusses the criteria for using each of the techniques.
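     A toy version of the connectivity ranking described above, re-ordering a result set by how often each page is linked from the other results; the URLs and link set are hypothetical, and WebQuery's actual scoring also draws on content.

```python
from collections import Counter

def rank_by_connectivity(results, links):
    """Order result pages by in-link count within the result set."""
    in_deg = Counter(dst for src, dst in links
                     if src in results and dst in results)
    return sorted(results, key=lambda u: in_deg[u], reverse=True)

results = ["a", "b", "c", "d"]
links = [("a", "b"), ("c", "b"), ("d", "b"), ("a", "c")]
print(rank_by_connectivity(results, links))   # ['b', 'c', 'a', 'd']
```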
    Date
    1. 8.1996 22:08:06
  17. Large, A.; Beheshti, J.; Rahman, T.: Design criteria for children's Web portals : the users speak out (2002) 0.01
    0.01403292 = product of:
      0.04911522 = sum of:
        0.03324025 = weight(_text_:based in 197) [ClassicSimilarity], result of:
          0.03324025 = score(doc=197,freq=4.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.28246817 = fieldWeight in 197, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.046875 = fieldNorm(doc=197)
        0.015874967 = product of:
          0.031749934 = sum of:
            0.031749934 = weight(_text_:22 in 197) [ClassicSimilarity], result of:
              0.031749934 = score(doc=197,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.23214069 = fieldWeight in 197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=197)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Four focus groups were held with young Web users (10 to 13 years of age) to explore design criteria for Web portals. The focus group participants commented upon four existing portals designed with young users in mind: Ask Jeeves for Kids, KidsClick, Lycos Zone, and Yahooligans! This article reports their first impressions on using these portals, their likes and dislikes, and their suggestions for improvements. Design criteria for children's Web portals are elaborated based upon these comments under four headings: portal goals, visual design, information architecture, and personalization. An ideal portal should cater for both educational and entertainment needs, use attractive screen designs based especially on effective use of color, graphics, and animation, provide both keyword search facilities and browsable subject categories, and allow individual user personalization in areas such as color and graphics
    Date
    2. 6.2005 10:34:22
  18. Su, L.T.: ¬A comprehensive and systematic model of user evaluation of Web search engines : II. An evaluation by undergraduates (2003) 0.01
    0.01347281 = product of:
      0.047154833 = sum of:
        0.033925693 = weight(_text_:based in 2117) [ClassicSimilarity], result of:
          0.033925693 = score(doc=2117,freq=6.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.28829288 = fieldWeight in 2117, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2117)
        0.013229139 = product of:
          0.026458278 = sum of:
            0.026458278 = weight(_text_:22 in 2117) [ClassicSimilarity], result of:
              0.026458278 = score(doc=2117,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.19345059 = fieldWeight in 2117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2117)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
     This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non-performance (user-related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction and reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post-search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines, and also significant engine-by-discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and a significant engine-by-discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.
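     The abstract does not define its PR1 and relative-recall measures, but the usual readings (precision as the judged-relevant share of what one engine returned; relative recall against the pooled relevant items found by all engines together) can be sketched as follows with hypothetical judgements:

```python
def precision(retrieved, relevant):
    """Fraction of an engine's retrieved items judged relevant."""
    return len(set(retrieved) & set(relevant)) / len(retrieved)

def relative_recall(retrieved, relevant, pooled_relevant):
    """Relevant items this engine found, relative to the pool from all engines."""
    return len(set(retrieved) & set(relevant)) / len(pooled_relevant)

hits = ["d1", "d2", "d3", "d4", "d5"]       # one engine's results for a topic
relevant = {"d1", "d3", "d9"}               # the searcher's relevance judgements
pool = {"d1", "d3", "d7", "d9"}             # relevant items found by all four engines
print(precision(hits, relevant), relative_recall(hits, relevant, pool))   # 0.4 0.5
```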
    Date
    24. 1.2004 18:27:22
  19. Loia, V.; Pedrycz, W.; Senatore, S.; Sessa, M.I.: Web navigation support by means of proximity-driven assistant agents (2006) 0.01
    0.01347281 = product of:
      0.047154833 = sum of:
        0.033925693 = weight(_text_:based in 5283) [ClassicSimilarity], result of:
          0.033925693 = score(doc=5283,freq=6.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.28829288 = fieldWeight in 5283, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5283)
        0.013229139 = product of:
          0.026458278 = sum of:
            0.026458278 = weight(_text_:22 in 5283) [ClassicSimilarity], result of:
              0.026458278 = score(doc=5283,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.19345059 = fieldWeight in 5283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5283)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    The explosive growth of the Web and the consequent exigency of the Web personalization domain have gained a key position in the direction of customization of the Web information to the needs of specific users, taking advantage of the knowledge acquired from the analysis of the user's navigational behavior (usage data) in correlation with other information collected in the Web context, namely, structure, content, and user profile data. This work presents an agent-based framework designed to help a user in achieving personalized navigation, by recommending related documents according to the user's responses in similar-pages searching mode. Our agent-based approach is grounded in the integration of different techniques and methodologies into a unique platform featuring user profiling, fuzzy multisets, proximity-oriented fuzzy clustering, and knowledge-based discovery technologies. Each of these methodologies serves to solve one facet of the general problem (discovering documents relevant to the user by searching the Web) and is treated by specialized agents that ultimately achieve the final functionality through cooperation and task distribution.
    Date
    22. 7.2006 16:59:13
  20. Heery, R.: Information gateways : collaboration and content (2000) 0.01
    0.013126459 = product of:
      0.045942605 = sum of:
        0.02742181 = weight(_text_:based in 4866) [ClassicSimilarity], result of:
          0.02742181 = score(doc=4866,freq=2.0), product of:
            0.11767787 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03905679 = queryNorm
            0.23302436 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.018520795 = product of:
          0.03704159 = sum of:
            0.03704159 = weight(_text_:22 in 4866) [ClassicSimilarity], result of:
              0.03704159 = score(doc=4866,freq=2.0), product of:
                0.13677022 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03905679 = queryNorm
                0.2708308 = fieldWeight in 4866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4866)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
     Information subject gateways provide targeted discovery services for their users, giving access to Web resources selected according to quality and subject coverage criteria. Information gateways recognise that they must collaborate on a wide range of issues relating to content to ensure continued success. This report is informed by discussion of content activities at the 1999 IMesh Workshop. The author considers the implications for subject-based gateways of co-operation regarding coverage policy, creation of metadata, and provision of searching and browsing across services. Other possibilities for co-operation include working more closely with information providers, and disclosure of information in joint metadata registries.
    Date
    22. 6.2002 19:38:54

Languages

  • e 185
  • d 82
  • f 1
  • nl 1

Types

  • a 243
  • el 17
  • m 14
  • x 3
  • p 2
  • r 1
  • s 1